Automation and artificial intelligence (AI) are changing how we do everything, from manufacturing cars to diagnosing disease. A 2018 report from the McKinsey Global Institute, which analyzed the impact of automation on 25 core workplace skills across five sectors, suggests that virtually every industry is affected, and these are still the early days.
While it is natural to worry about robots replacing humans, AI is far more limited than Hollywood would have us believe. In most cases, automation assists humans, streamlining tasks and removing repetitive, lower-priority work. At the same time, a high-tech economy calls for a high-tech workforce: while some manufacturing jobs may be lost, many more programming jobs will be gained.
While it is relatively easy to envision robots in a factory, do they have a role in more creative fields, like journalism? Yes—to a certain extent. Here’s how.
One might worry that setting AI loose in the newsroom would undermine quality in a field that relies so heavily on truth and impartiality. Fret not: according to a report from TechEmergence, well-deployed automation minimizes the grunt work in the day-to-day news cycle, freeing up journalists to cover other issues more thoroughly.
Automation can also reduce errors by catching grammatical slips, flagging minor data discrepancies, and even identifying full-blown false news stories. While AI can manage digital press releases and scour the newswire for new information, humans are far more adept at finding original stories, conducting interviews, and explaining the nuances of particular storylines.
In other words, robots will not be given press passes any time soon. Automation in journalism is primarily working behind the scenes to make the fourth pillar of democracy better and more efficient. Here are a few of the ways newsrooms are using AI.
Journalists used to spend a good deal of time monitoring newswires, social media, press releases, and community calendars. Now, automated applications can manage the load more quickly than humans can. Using machine-learning algorithms to aggregate and synthesize information—from news headlines to tweets—AI can now identify new and relevant stories, though it is up to humans to review these insights.
These days, some of the most influential news surfaces on social media first as people post and share photos, video, and first-person accounts of an event. Reuters designed News Tracer to analyze the thousands of tweets published each day and assign each topic a “newsworthiness” rating using a carefully refined algorithm. The AI then dons its virtual journalist hat by analyzing the credibility of the source and associated facts by cross-checking data, digging into user profiles, and even analyzing the structure of the tweets themselves. Items that pass the sniff test then move to Reuters journalists, who verify them independently and act upon them, if worthy.
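To make the idea concrete, here is a deliberately toy sketch of how a tweet-scoring pipeline might combine keyword signals, source credibility, and spread into a single rating. The keyword list, weights, and thresholds are all hypothetical illustrations, not Reuters' actual News Tracer algorithm, which is far more sophisticated.

```python
from dataclasses import dataclass


@dataclass
class Tweet:
    text: str
    author_followers: int
    author_verified: bool
    retweets: int


# Hypothetical keyword list for illustration only.
NEWSWORTHY_KEYWORDS = {"breaking", "explosion", "earthquake", "election"}


def newsworthiness(tweet: Tweet) -> float:
    """Score a tweet from 0 to 1 by blending three toy signals:
    keyword hits, author credibility, and how widely it has spread."""
    words = {w.strip("#.,!?:").lower() for w in tweet.text.split()}
    keyword_score = min(len(words & NEWSWORTHY_KEYWORDS) / 2, 1.0)
    # Verified accounts and large followings raise credibility (capped at 1.0).
    credibility = 0.5 if tweet.author_verified else 0.2
    credibility += min(tweet.author_followers / 1_000_000, 0.5)
    spread = min(tweet.retweets / 10_000, 1.0)
    # Weighted blend; the weights are arbitrary for this sketch.
    return round(0.5 * keyword_score + 0.3 * credibility + 0.2 * spread, 3)
```

A high score would only queue the item for human review, mirroring the article's point that journalists, not the algorithm, make the final call.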
Another Reuters tool, Lynx Insight, is a digital data scientist and copywriting assistant in one. Released in 2018, Lynx sifts through data from newswires, databases, and other digital sources to spot patterns that might pique human interest, such as changes in the stock market or emerging sports statistics.
According to Reuters, News Tracer and Lynx Insight have helped the company break dozens of major news stories before the competition. The head start gives journalists on the ground an edge in capturing and reporting eyewitness accounts, media footage, and expert feedback.
News does not happen in a vacuum. Most stories build upon others or require some degree of background information for perspective. Pre-digital age journalists were typically specialists assigned to a single beat, such as local sports or politics. While they could dig through storage rooms and microfiche to find and verify past stories, they relied heavily upon their knowledge and experiences to pull a story together and give it context.
We now live in a world where we can access a seemingly infinite amount of information online, which can make a journalist’s job both easier and harder. The information they need is out there, but it is often buried in more data than any human could dig through. AI can scour digital records and databases, identify keywords, and make connections with other potentially relevant information. Journalists are then able to use these connections to inform and verify their work, giving media consumers a more complete and accurate picture.
In 2015, the Washington Post rolled out a new tool called the Knowledge Map. When a WaPo reporter submits a story online, the Knowledge Map identifies names, keywords, and other relevant items and checks them against a database of previously published stories and other credible resources. The tool then automatically generates links in the body of the article. Readers who want to know more about a topic can follow the link for more background. According to Sam Han, the newspaper’s engineering director for data science, the Knowledge Map provides “highly personalized and contextual data” for editorial content.
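The core mechanic described above can be sketched in a few lines: match known entities in submitted copy against an archive and wrap each first mention in a background link. The archive entries and URLs below are invented for illustration; the Post's actual system presumably works against a full document database, not a hard-coded dictionary.

```python
import re

# Tiny in-memory "archive" of past coverage; a real system would
# query a database of previously published stories.
ARCHIVE = {
    "City Council": "/archive/2014/city-council-budget",
    "Riverside Bridge": "/archive/2013/riverside-bridge-repairs",
}


def add_background_links(article_html: str) -> str:
    """Wrap the first mention of each known entity in a link to past coverage."""
    for name, url in ARCHIVE.items():
        pattern = re.compile(re.escape(name))
        # count=1 links only the first mention, a common editorial convention.
        article_html = pattern.sub(
            f'<a href="{url}">{name}</a>', article_html, count=1
        )
    return article_html
```

Real entity linking also has to disambiguate names (which "Riverside Bridge"?), which is where the machine learning comes in; the sketch only shows the linking step.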
One could say that the AI-directed tasks covered thus far complement human-generated copy, mostly by aggregating and synthesizing information. However, as AI becomes more sophisticated, it is capable of much more.
Advances in natural language generation (NLG) allow some programs to independently draft or summarize news articles using algorithms that help the machine to produce coherent copy, common conventions and all. While the tech is not perfect and AI-written articles still benefit from human review, these programs remove some of the lower priority grunt work so that journalists can spend more time on higher-value tasks. At the very least, they produce an information-rich initial draft upon which more talented and discerning writers can build.
Wordsmith and Quill are two well-known NLG platforms. Quill analyzes and interprets raw data, identifies additional contextually relevant information, and produces an article using journalistic tone, style, and language.
Wordsmith, which functions in a similar way, has been a favorite of Yahoo!, and the Press Association is working on an NLG tool called Reporters and Data and Robots (RADAR). Newsrooms tend to have strict editorial standards, so AI-written news is often limited to short and to-the-point summaries. The Washington Post’s robot reporter Heliograf, for instance, wrote over 500 short news snippets in 2017.
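At its simplest, this kind of data-to-text generation turns structured records into formulaic sentences. The sketch below shows the idea with a hypothetical box-score recap; production NLG systems like those named above layer statistical models and much richer templates on top of this basic pattern.

```python
def game_snippet(home: str, away: str, home_score: int, away_score: int) -> str:
    """Turn a structured box score into a one-sentence game recap."""
    if home_score == away_score:
        return f"{home} and {away} played to a {home_score}-{away_score} draw."
    winner, loser = (home, away) if home_score > away_score else (away, home)
    hi, lo = max(home_score, away_score), min(home_score, away_score)
    # Vary the verb by margin so the copy reads less mechanically.
    margin = "edged" if hi - lo <= 3 else "defeated"
    return f"{winner} {margin} {loser} {hi}-{lo}."
```

Because the output is fully determined by the input data, snippets like these are easy for editors to spot-check, which is one reason short summaries are where newsrooms deploy NLG first.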
Automation technology is progressing so quickly that it can be difficult to predict what journalism will look like in ten years. Nonetheless, it is safe to say that machines are unlikely to usurp journalists anytime soon—if ever.
According to the World Economic Forum, AI in the newsroom can tackle a lot of basic tasks, but persistent challenges limit its reach. In a nutshell: journalists are creative, intuitive, and discerning. Algorithms, on the other hand, lack self-awareness, struggle with unstructured data, and cannot currently make judgment calls regarding libel or copyright infringement.
Ultimately, it is up to humans to ensure that news is accurate, unbiased, coherent, and compelling. That said, the role of journalists is changing. Today’s publications need tech-savvy reporters who know how to use a range of programs, can discern digital source credibility, and understand the ins and outs of social media. Tomorrow’s journalism students should expect higher-tech courses, from digital publishing to basic coding.