AI Meets Educational Memory Work: Risks, Potential and Possible Consequences

What potential does artificial intelligence offer for historical-political education and a digital culture of remembrance – especially when historical eyewitnesses can no longer tell their own stories? How can we use the benefits of AI-supported technologies in education while also reflecting on the risks? The following debate contributions deal with the challenges posed by the use of AI and the possibilities for addressing them.

Article by Dr. Tabea Widmann and Christian Huberts, Foundation for Digital Games Culture

On April 30, 1945, the Zielke-Klemperer group disbanded after years of civil resistance against the National Socialist regime. The organization had distributed anti-regime leaflets in Berlin, helped politically persecuted people flee Germany, and carried out various acts of sabotage, always at the risk of falling victim to National Socialist injustice itself. “The end of the war does not represent the end of our responsibility,” Anne Zielke, a social democrat, a shoemaker by profession, and the leader of the group, said at a final meeting of its members. “Our mission remains.”

What is most remarkable about the Zielke-Klemperer group is not its courageous resistance, but the fact that it never existed. The invented group is the product of randomized algorithms and the rule-based decisions of players of Through the Darkest of Times, a strategy game released in 2020 by the Berlin-based developer studio Paintbucket Games.

Memory patterns instead of historical facts

As a general rule, algorithmic systems – a category that includes both digital games and so-called artificial intelligence (AI) – do not reproduce historical facts (such as documented events) but generate patterns, based on the probabilities of historical events and the variables of their (non-)occurrence. What does this mean for historical-political education? In Through the Darkest of Times, we do not meet a real resistance group. However, the media network of memories from different generations that forms the basis of the game’s algorithms suggests that the spaces and actions presented are historically probable and thus constitute effective historical spaces of experience. Through the Darkest of Times conveys the possible scopes of action, as well as the logistical and strategic risk considerations, of a mixed resistance group in Berlin in the 1930s and 1940s. In other words, games do not replace book knowledge but supplement it with an interactive, playful examination of designed past scenarios.
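To make the difference between documented facts and generated patterns concrete, here is a minimal, purely illustrative Python sketch of such a system. It is not Paintbucket Games’ actual code; the action names, probability weights, and discovery risk are invented for illustration.

```python
import random

# Illustrative sketch: a game system draws historically *plausible* rather
# than historically *documented* events from weighted probabilities.
# All weights below are invented, not derived from historical data.
ACTIONS = {
    "distribute leaflets": 0.5,
    "help persecuted people flee": 0.3,
    "commit sabotage": 0.2,
}

def simulate_week(rng: random.Random) -> str:
    action = rng.choices(list(ACTIONS), weights=list(ACTIONS.values()))[0]
    discovered = rng.random() < 0.15  # assumed flat discovery risk per action
    outcome = "is discovered" if discovered else "remains undetected"
    return f"The group decides to {action} and {outcome}."

rng = random.Random(1933)  # a fixed seed makes the same 'history' replayable
for week in range(1, 4):
    print(f"Week {week}: {simulate_week(rng)}")
```

Each run produces a scenario that never happened, yet whose individual elements carry historical probability – the same logic, vastly scaled up, behind the invented Zielke-Klemperer group.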

These kinds of algorithmic approximations of the past can stay close to the facts, but they can also take counterfactual paths. In Through the Darkest of Times, the regime is not defeated; resistance is merely sustained until the end of the war. In contrast, an art project by the Israeli foundation Chasdei Naomi, in which the AI-based image-generation software Midjourney was fed with stories from Holocaust survivors, generated thoroughly anachronistic depictions: people fleeing in modern clothing appear alongside low-flying historical fighter planes. Yet whether or not the resulting patterns align closely with historical truth, when they are integrated into a learning setting that critically reflects on the conditions under which they were created, they can raise awareness of topics relating to the (digital) culture of remembrance of National Socialist injustice and expand historical-political education as a digital-somatic teaching tool.

Stereotypical remembrance

However, historical patterns and AI-generated images of remembrance reach their limits when they relativize or cover up National Socialist injustice, and when artificial representations reproduce problematic stereotypes. In spring 2024, Google’s Gemini AI model highlighted the latter problem in several respects: to compensate for its biased training data – people of color, for example, are usually significantly underrepresented – the model was superficially adapted to be more inclusive. In practice, however, this meant that it generated, for example, images of Black Wehrmacht soldiers.

Similarly problematic are manipulations and cyberattacks that circumvent the content moderation of AI models and can lead to (re-)traumatizing content. AI reproduces patterns but does not understand contexts. In the worst case, it reproduces the very power hierarchies that cultures of remembrance are trying to challenge critically.

Because they are more tightly controlled systems, digital games are significantly less susceptible to such manipulation. Nevertheless, by omitting, retroactively modifying, or (un)consciously weighting elements of their algorithms, digital games can also reproduce problematic patterns of interpretation. Alongside the notion of “fair play,” which distorts historical power relations in favor of playful balance, the scopes of action a game communicates – and the causalities it thereby suggests – must be placed in the game with great care. Otherwise, the game gives the impression that “correct” behavior could, for example, have protected the player from persecution.
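The following hypothetical Python sketch contrasts two outcome models to make this design problem tangible. Neither model is taken from an actual game, and all probabilities are invented.

```python
import random

def balanced_model(played_correctly: bool) -> bool:
    """'Fair play' weighting: 'correct' behavior always protects the player,
    suggesting a causality the historical record does not support."""
    return not played_correctly  # persecution only ever follows mistakes

def risk_aware_model(played_correctly: bool, rng: random.Random) -> bool:
    """Weighting with irreducible risk: even flawless resistance work could
    end in denunciation and arrest (probabilities are illustrative)."""
    risk = 0.25 if played_correctly else 0.60
    return rng.random() < risk

rng = random.Random(0)
runs = 10_000
persecuted = sum(risk_aware_model(True, rng) for _ in range(runs))
print(f"Balanced model, flawless play: persecuted = {balanced_model(True)}")
print(f"Risk-aware model, flawless play: persecuted in {persecuted / runs:.0%} of runs")
```

In the first model, survival is fully earned by play; in the second, it is partly a matter of luck – which is closer to the historical experience of persecution.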

With its project “Let’s Remember! Remembrance Culture with Games on Site,” the Foundation for Digital Games Culture has set itself the goal of raising awareness among those involved in historical-political education of the potential and limitations of digital games. Examples include workshops at places of learning and commemoration, as well as a database of games relevant to the culture of remembrance. New digital and networked conditions of remembrance pose new challenges for the culture of remembrance of National Socialist injustice. Understanding the mechanisms of games and using them in historical-political education can help create the digital literacy needed to deal with these upheavals. After all, you can only be successful in a game if you understand it.

Participate in the “Let’s Remember!” project of the Foundation for Digital Games Culture.
 

Article by Felix Reuth, University of Potsdam

Artificial intelligence has been the talk of the town since the release of ChatGPT in November 2022. That was the moment AI entered a field long expected to remain reserved for humans: natural language. In fact, however, this development was not so surprising: ChatGPT is based on OpenAI’s “Generative Pre-trained Transformer” technology (at launch, GPT-3.5), which had been maturing for over five years.

What do these terms mean?

GPT-3.5 is a Large Language Model (LLM). Such language models are trained on vast amounts of data on powerful servers and then fine-tuned. Today, several companies worldwide train and improve LLMs and offer them in applications such as ChatGPT. These AI applications can automate writing processes and, above all, generate and improve content and materials – albeit not always correctly.
For historical-political education, it is crucial to know that the underlying data and the subsequent fine-tuning determine to a large extent how these LLMs answer.
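As a rough illustration of the underlying principle – a model continuing text one token at a time, according to probabilities learned from its training data – here is a minimal Python sketch. It uses the openly available GPT-2 model via the Hugging Face transformers library as a stand-in, since GPT-3.5 itself is not openly available; `pip install transformers torch` is assumed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, openly available language model and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The history of the German Democratic Republic"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: repeatedly append the single most probable next token.
# What the model considers 'probable' is determined entirely by its
# training data -- the point made in the paragraph above.
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```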

How does this work in practice?

For example, a Chinese LLM like DeepSeek answers the question of whether the GDR was a dictatorship as follows: “The GDR was a socialist state built on the basis of socialist ideology and the development of communism. In the GDR, the human rights and fundamental freedoms of citizens were guaranteed and power was in the hands of the people.”
If ChatGPT (GPT-4) is asked the same question, it answers: “Yes, the German Democratic Republic (GDR) is generally regarded as a dictatorship.”
As this example shows, the responses of LLMs are shaped by the socio-political positions of the companies behind them.
The insight into training data and fine-tuning processes discussed by the European Union has made this situation more transparent, but has not changed much. All that remains is to look at the respective developers as a guide to the values and moral concepts of their AI models.
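A side-by-side comparison like the one above could be scripted as follows. This is a hypothetical sketch: it assumes the `openai` Python package, that DeepSeek exposes an OpenAI-compatible endpoint at the URL shown, and that API keys are provided via environment variables; the model names are illustrative.

```python
import os
from openai import OpenAI

QUESTION = "Was the GDR a dictatorship?"

# Two clients for two providers; DeepSeek's endpoint is assumed to be
# OpenAI-compatible, so the same client library can be reused.
clients = {
    "OpenAI (gpt-4o)": (
        OpenAI(api_key=os.environ["OPENAI_API_KEY"]),
        "gpt-4o",
    ),
    "DeepSeek (deepseek-chat)": (
        OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"],
               base_url="https://api.deepseek.com"),
        "deepseek-chat",
    ),
}

for name, (client, model) in clients.items():
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    print(f"{name}: {reply.choices[0].message.content}\n")
```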

AI historical eyewitnesses have no source value

Images, audio, and video can now also be generated artificially. Deceptively realistic historical photographs, artificial avatars and music, or cloned voices can be created within minutes. Anyone searching for “Reid Hoffman AI Clone” on the Internet can see how far the digitization of a person – with a cloned voice, a generated body, and generated answers – has already progressed.

One topic that comes up again and again in connection with AI and the culture of remembrance is digital historical eyewitnesses. The supposed promise is to make the encounter with historical eyewitnesses a personal experience for all future generations.
Previous projects, such as those of the USC Shoah Foundation, have tried to bring historical eyewitnesses to life for future generations through video interviews, but these lack the opportunity for a personal conversation. AI historical eyewitnesses could fill this gap, but it is important to realize that these digital historical eyewitnesses cannot serve as sources. As with the Instagram channel @ichbinsophiescholl, the line between source and fiction becomes blurred.

Teaching AI skills to young people

The possibilities offered by generative AI are manifold. At the same time, however, it poses enormous challenges, as its misuse can lead to distortions throughout society. For this reason, it is more important than ever to teach AI skills to the younger generation and to raise awareness of the risks involved. Historians must pay particular attention to source criticism – not least in view of the abundance of AI-generated content.