Media and Artificial Intelligence: Which skills will be needed in newsrooms?
[This is a translation of the French original piece published in August 2018 on this account]
The development of artificial intelligence raises fears of job losses in every sector, including newsrooms. But we rarely ask which new jobs will be created and which new skills will become necessary. Here are some ideas:
1 / To teach the algorithms
To write texts or create videos automatically, artificial intelligence algorithms have to learn, to be fed, and the data and rules we choose to give them determine how they behave afterwards. They need a person who guides, supervises, corrects and improves them, and who also understands how an algorithm works.
In addition, the selection of the corpus that serves as the basis for learning (texts, images, videos...) is not neutral: artificial intelligence reproduces racist and gender stereotypes and biases. Teaching the algorithms that will be used in automated writing is therefore an important issue. This learning is about language, but not only: it should also be possible to teach them empathy, compassion or sarcasm to refine their ability to write and analyze texts. Finally, the editorial line must be built in, so that the algorithm follows and respects it just as journalists do.
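The point that "the corpus is not neutral" can be made concrete with a minimal, purely hypothetical sketch: a toy classifier whose "knowledge" is nothing more than word counts learned from whatever labeled texts it is fed. All labels and texts below are invented for illustration; a real newsroom system would be far more sophisticated, but the dependence on the training corpus is the same.

```python
from collections import Counter

def train(corpus):
    """Learn word counts per label from (text, label) pairs.
    The resulting 'model' is nothing but these counts: whatever
    the corpus over- or under-represents, the model reproduces."""
    counts = {}
    for text, label in corpus:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(model, text):
    """Pick the label whose training vocabulary overlaps the text most."""
    words = text.lower().split()
    return max(model, key=lambda label: sum(model[label][w] for w in words))

# Hypothetical toy corpus: two labels, two example sentences.
corpus = [
    ("the striker scored a great goal", "sports"),
    ("parliament voted on the new budget", "politics"),
]
model = train(corpus)
print(classify(model, "a late goal won the match"))  # prints "sports"
```

Swap in a different corpus and the same code yields different answers: the choices made when assembling the training data, not the code, decide what the system "sees", which is exactly why that selection work needs a human who understands it.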
2 / To guarantee the ethics of algorithms and the editorial line
Guessing whether a text was written by an algorithm or a journalist is not always easy, and will probably become less and less so. As in every other field where they are used, the algorithms used by the media must be transparent. Media outlets should explain how automated content is generated, from what sources, how it is supervised, modified and amended, and by whom, avoiding the "black box" effect.
As BuzzFeed data journalist Peter Aldhous explains, "if you cannot explain how your algorithm works, there is a fundamental problem of transparency". This also applies to automated personalisation. In a broader sense, transparency also covers the origin of the content used for investigations, especially in data journalism, where media outlets have already widely adopted the good practice of sourcing the raw datasets they use and explaining their working methods.
Finally, it is necessary to be able to identify possible errors and biases, acknowledge them, trace their origin and correct them. This transparency is one of the foundations of readers' and internet users' trust.
3 / To manage the role of algorithms in the newsroom and the relationship between AI and humans
To ensure this transparency, it is also imperative to explain to bosses and colleagues how the algorithm works, so as not to become the resident "geek". Newsrooms often face cultural resistance to technology that hinders the in-house deployment of innovations: difficulty convincing people of a project's value, fear generated by a new tool, and so on.
Training journalists and teams on AI and algorithms (without teaching them to code) would build an understanding of the issues as well as stronger involvement and collaboration. It would also prompt more internal questioning of the results produced by robots and the processes used to obtain them, and thus a collective enrichment of journalistic practices. It is also a way to examine the interaction between "robots/algorithms/AI" and humans, and the role of each.
If knowing how an algorithm works is reassuring, it would also be good to state clear rules about who does what (which types of texts will be written automatically, for example), for what purpose (to generate a volume of repetitive texts impossible for journalists to write, but useful for SEO and for readers?), and with what supervision (who sets up the algorithm, who follows the results, corrects them, exploits them, who is responsible for them...). Erin Wright, a moderator at the NYT, says in one of the paper's videos: "Working with an artificial intelligence, I feel like I'm in Star Trek where my colleague is a robot and we get on well together."
Even if algorithms do not take journalists' place in the newsroom, they will relieve them of more and more repetitive tasks and will cover more territories and/or subjects, more quickly. But algorithms are limited to the data they find or are given, and contextualization is still necessary. It remains to define what kind of help algorithms will bring to newsrooms, how, and with what transparency and control. This is just the beginning…