Artificial intelligence (AI) is capable of taking over routine tasks in journalism, but it must not replace the essential role of journalists. While it can be a useful support tool, human oversight and editorial control are always necessary.
This was the key message at the presentation of the study "Advantages and Risks of Using Artificial Intelligence in the Media Sector," organized by the Media Self-Regulation Council.
The Executive Secretary of the Media Self-Regulation Council, Ranko Vujović, emphasized that AI is scarcely discussed in Montenegro, especially regarding its application in media, even though globally, as he noted, there is a sort of race to dominate the AI market.
“To illustrate the scale of this phenomenon, just over a month ago, the ChatGPT service, one of the most well-known AI tools, experienced an outage. At that time, actual user numbers were revealed: globally, ChatGPT is used by about 400 million people weekly. For comparison, the social network X, still regarded as one of the most popular platforms, has around 600 million users monthly,” said Vujović.
He pointed out that these figures clearly demonstrate the reach and spread of AI worldwide, compared to the limited knowledge and attention this topic receives in Montenegro.
“In this study, we tried to address various aspects of AI. We started by focusing on European regulation. The EU only adopted its Artificial Intelligence Act last year. We also analyzed Council of Europe documents on this topic, as well as self-regulatory codes from various European bodies; only about ten such codes exist so far, but they are already starting to include AI-related principles in their frameworks,” Vujović explained.
He added that around 20 interviews were conducted for the study with experts, editors, and journalists, aiming to formulate recommendations for Montenegrin media regarding the use of AI tools.
“AI can be a huge help in various spheres—from business to social development. However, it also brings enormous potential for misuse, regardless of the field of application,” warned Vujović.
He shared a personal example illustrating the speed and power of these new tools.
“I had dinner with an expert working professionally with AI. During dinner, he gave his ‘assistant’—an AI-based program—the task of writing a 150-page book on a scientific topic. The program completed it in 15–20 minutes. A 150-page book was done. Imagine what that means for Montenegro, where we already face issues with plagiarized academic papers, degrees, and doctorates. Imagine the potential misuse by those willing to exploit these technologies,” said Vujović.
Because of all this, he believes it is necessary to introduce the AI topic into public discourse, not just because of its potential benefits but also due to the high risk of abuse.
Aneta Spaić, Professor of Media Law at the Faculty of Law, University of Montenegro, addressed key aspects of European and international regulations regarding the application of AI, particularly in the media context.
She emphasized that the emergence, use, and abuse of AI in the EU single market created the need for a unified legislative framework to ensure its ethical and responsible application, especially in highly sensitive areas.
“The adoption of the AI Act in July 2024 was preceded by numerous strategic documents, such as the Coordinated Plan on AI, the European Commission's Declaration, and the Ethics Guidelines for Trustworthy AI,” Spaić recalled.
She explained that European regulation is based on three key principles: establishing a coherent set of legislative measures, setting clear responsibilities, and revising safety-related legislation.
She especially emphasized that the use of AI must not conflict with the General Data Protection Regulation (GDPR), nor with the consumer protection and algorithmic transparency requirements defined in the Digital Services Act and the Digital Markets Act (2022).
Regarding the AI Act itself, Spaić said it classifies AI applications into four risk categories: unacceptable, high, limited, and minimal.
“Unacceptable risk includes subliminal techniques that influence human consciousness and behavior. High-risk systems involve biometric technologies, infrastructure, the judiciary, education, and employment, and are subject to special oversight and sanctions. The third category covers systems prone to manipulation, such as algorithms in communications, while the fourth involves minimal risk, such as gaming or spam filters,” Spaić explained.
She stressed that although media are not explicitly mentioned in the AI Act, they must adhere to its principles.
“Automated systems and algorithms used in media must be transparent, clearly labeled, and subject to editorial responsibility. Editorial teams must be held accountable for the consequences of using AI, including algorithmic bias, which can conflict with media values like pluralism, objectivity, and truth,” said Spaić.
She particularly stressed the need to regulate accountability for personal data breaches that may occur through AI use in newsrooms, emphasizing that “there is no exemption from responsibility, not even when violations result from automated systems.”
Ilija Jovićević, Ombudsman for Dan, said the creation of the document on AI use in journalism was the result of extensive dialogue with experts at the national, regional, and international levels, as well as with journalists and editors.
He stated that the aim was to examine all aspects of this powerful yet still unpredictable technology, especially in the context of journalism.
“The insights, experiences, and suggestions we received helped us better understand the risks and patterns we often hear about in AI discussions. Based on those interviews—which were much more extensive than we can present here—we followed a functional approach and summarized them in this document,” said Jovićević.
According to him, all interviewees agreed that AI has significantly changed how information is gathered and organized and has accelerated and simplified many journalistic tasks.
“There are many advantages, from enriching and connecting content and preparing it more easily for multiple platforms to structuring information. The greatest benefit lies in efficiency and time-saving, which is crucial in journalism,” said Jovićević.
He listed the most common uses of AI: transcription, translation, creating illustrations, infographics, and headlines, and organizing and accessing content.
Jovićević believes everything depends on journalists' digital and technological literacy, available resources, and newsroom readiness to integrate these tools into daily work.
He warned of the risks of disinformation, manipulative content, and the so-called “homogenization” of content—i.e., its uniformity and loss of authenticity.
“This can be avoided through a professional approach. Newsrooms that aim for analytical, high-quality content can use AI in a way that prevents homogenization and preserves journalistic integrity,” Jovićević stated.
The key recommendation, he said, is to clearly distinguish between synthetic and authentic content.
“If the content is generated by AI, it must be disclosed. Some media even label authentic content to avoid confusion. A middle-ground solution exists—label only content where AI intervened in its essence, while restyled or reformatted content need not be specifically labeled,” said Jovićević.
He emphasized that while AI can replace certain routine, "pedestrian" tasks like data automation, it cannot and must not replace journalists in their essential roles.
“AI can help, but there must always be human oversight and editorial control. As one expert put it—just as typists once existed and no longer do, today a journalist has an assistant, artificial intelligence, that helps save time and improve their work,” said Jovićević.
Paula Petričević, Ombudsperson for Vijesti and Monitor, stressed the need for open discussions about the price we pay for the rapid and pervasive integration of AI.
“Everything has become more accessible, faster, and easier—research, learning, content creation. But what are the consequences for us as individuals, for our communities, and globally?” Petričević asked.
She listed several risks—from reduced cognitive abilities to the homogenization and impoverishment of the media landscape.
“One interviewee mentioned the ‘laziness’ of journalists. I would say that applies to all AI tool users—professors, students, designers, translators, lawyers. It leads to the atrophy of human skills and a loss of autonomy,” said Petričević.
Besides individual effects, she mentioned systemic consequences: job losses, obsolete professions, declining content quality, and threats to democratic processes.
“All of this brings us to the question of redefining the role of media in contemporary society. I’m reminded of a question someone asked in the 20th century: ‘Why philosophy?’ Today, we more and more often ask—why journalism?” said Petričević.
She recalled that if journalism is defined as a mechanism that ensures citizens have access to accurate, high-quality information, then it cannot and must not be replaced by AI.
“AI tools cannot and must not replace the professional work and human judgment of journalists,” emphasized Petričević.
She particularly addressed the challenges posed by AI in terms of generating disinformation, deepfakes, and manipulations, stating that greater attention must be paid to the so-called ethical grey zone.
“In this zone, it's difficult to answer two key questions: which tasks can be delegated to AI, and when are we obligated to inform the public that content was created or edited with AI support?” said Petričević.
Based on the positions set out in the study, she said, guidelines have been proposed for the future Code of Journalists of Montenegro.
“The first guideline is transparency—clearly labeling content fully generated by AI, including translations without human proofreading. In cases of technical assistance (e.g., transcription), labeling is not mandatory,” said Petričević.
She stressed that content must not be published without human oversight and editorial accountability, regardless of the AI tool’s level of involvement.
“AI-generated content must undergo fact-checking, given the risk of so-called hallucinations—i.e., fabrication of facts,” said Petričević, noting that more detailed guidelines are included in the study.
Milica Nikolić, a UNESCO representative, recalled that UNESCO has been working with numerous partners on these issues since 2015.
“Through various initiatives and projects, many studies have been produced, and this publication perfectly continues that tradition. Its practical, everyday value is significant—for journalists and institutions shaping media policy alike,” said Nikolić.
She expressed hope that this study will serve as a guide to improve the media sector, which is currently facing numerous complex challenges.
“It is clear that relationships between institutions and media must be redefined—not only through regulation but through a comprehensive approach that recognizes public interest as a fundamental part of journalism,” concluded Nikolić.
The event was organized as part of a UNESCO project funded by the European Union: "Building Trust in Media in South-East Europe: Supporting Journalism as a Public Good," implemented by self-regulatory bodies from Albania, Bosnia and Herzegovina, Montenegro, Kosovo*, North Macedonia, Serbia, and Turkey.