Technology

Publications are responsible for AI errors: Press Council

The Press Council of South Africa (PCSA) has said that publications bear responsibility for the factual accuracy of any information published using generative artificial intelligence tools.

This is contained in a set of guidelines published by the council that outline the role of AI in the modern newsroom.

“Member publications retain editorial responsibility for everything that is published, no matter which tools are used in production. To ensure compliance, any AI-generated material must be checked by human eyes and hands,” said the Press Council in a statement.

This principle of accountability is one of eight in the guidance notes, which aim to answer questions such as: who is responsible for AI-generated misinformation? Should publications alert their audiences whenever and wherever they have used AI tools? And can an AI infringe on a publication’s intellectual property rights?

Other guiding principles are:

  • Accuracy: Generative AI is prone to inventing facts (a phenomenon known as “hallucination”), so journalists should carefully check facts in AI-generated text. AI tools have also made it easier to generate misinformation, meaning claims circulating on social media and elsewhere need to be checked even more carefully than before.
  • Bias: Algorithms reflect and amplify race, gender and other biases that emerge in published material. Media organisations should keep a keen lookout for bias when using AI tools and correct it where it appears.
  • Transparency: News organisations should offer their audiences maximum transparency about their use of AI tools. A comprehensive statement of the organisation’s policy and use of specific tools should be easily available to audiences and kept current. If tools have been used in the generation of particular items, this should be indicated clearly.
  • Targeting: AI tools used to tailor content to audience preferences should be used in a way that guards against the creation of “filter bubbles”.
  • Organisational considerations: AI tools may relieve journalists of some routine tasks. Media organisations should not use AI innovations simply to cut costs. Any savings should be reinvested in quality journalism. Staff should be given training in the use of AI, to enable them to adapt to new technological requirements.
  • Privacy: Personal data may be used in the development of AI systems, and member publications should take care that relevant rights and legislation (like the Protection of Personal Information Act) are not infringed.
  • Intellectual property: The training sets employed by generative AI use large amounts of data without acknowledging the intellectual property rights of the originators. This includes text published by news media. Though solutions to the problem are not yet clear, journalists and media organisations need to be aware of the issue, both with respect to their own intellectual property and their use of AI tools that may not have fully recognised the rights of others.

The Press Council noted the polarising effect that AI has had on journalism as a profession, with some practitioners enthusiastically embracing the technology and others fearing that it will erase jobs and spread misinformation.

“Though it is too early to foresee the impact with any certainty, it is important for news organisations, newsroom leaders and journalists to be thoughtful when they deploy new AI tools, and to consider them in the light of the ethical principles that support audience trust in journalism,” it said.


The council cautioned, however, that the guidelines do not supersede the Press Code, which remains the “authoritative document” whose rules still apply.  — (c) 2024 NewsCentral Media

  • TechCentral is a member of the Press Council and adheres to its code of ethics; it is the largest technology publication in South Africa to do so
