
The goal of the VITraDes project is to make the use of AI in journalism more transparent and in this way contribute to trustworthy public communication. To that end, the project team is developing practical guidelines for labeling AI-generated content and producing educational resources for media professionals. The interdisciplinary initiative brings together experts from law, journalism and ethics. Within the project, the researchers will analyze existing legal frameworks and self-regulatory commitments in journalism and conduct workshops with journalists and other relevant stakeholders. Building on this, they will develop and evaluate educational programs.
In addition, the research team will examine how such self-regulatory approaches by the media can also be adopted by online platforms. The objective is to develop a standard that is not only legally sound but also workable in journalistic practice. The project's results should not only support the media sector but also form a basis for media self-regulation and for regulatory approaches at the national level. Ultimately, this should also strengthen trust in journalism and protect the public from false information.
Comprehensive expertise and results from other projects
Participating in the project on behalf of the Institute of Journalism (IJ) are Prof. Christina Elmer and research assistant Lisa-Marie Eckardt, who are contributing their extensive knowledge of the use of AI applications in journalism as well as results from research projects on the issue of information manipulation. “Above all, this includes two EU-funded projects: In the ‘German-Austrian Digital Media Observatory,’ we are responsible for coordinating the German-language hub of the European fact-checking network EDMO, developing media literacy programs and examining the measures that platforms take against disinformation. In the ADAC.iO project, we are focusing on campaigns by foreign actors and analyzing their strategies and dissemination mechanisms,” says Professor Christina Elmer. “In parallel, we can draw on findings from an algorithmic accountability project in cooperation with the Department of Statistics and from interdisciplinary courses with journalism, data and computer science students,” adds Lisa-Marie Eckardt. IJ projects such as “AI Media Physician” (Prof. Holger Wormer) and the study “Journalism and Democracy” (Prof. Michael Steinbrecher) provide additional valuable insights.
The VITraDes project is led by Prof. Jessica Heesen from the International Center for Ethics in the Sciences and Humanities (IZEW) at the University of Tübingen. Prof. Ruth Janal from the University of Bayreuth is another project partner. In addition, several organizations are supporting the project as associated partners, including the public broadcaster Bayerischer Rundfunk (BR), CORRECTIV’s fact-checking community “Faktenforum”, the AI Hub of the broadcaster Südwestdeutscher Rundfunk, Studio 47 and the Deutscher Journalisten-Verband (DJV), an organization representing the interests of journalists in Germany.
About the funding program
The VITraDes project is part of the new funding program “Recognizing, Understanding and Countering Disinformation” of the Federal Ministry of Research, Technology and Space. The program will initially fund eleven research projects through 2029 that aim to strengthen societal and technological resilience against digital disinformation. The initiative is part of “Digital. Secure. Sovereign.”, the Federal Government’s research framework program on IT security.
Contacts for inquiries: