It is important to note that the AI tools available to the public are updated frequently; our specific takeaways reflect the state of AI tools in May 2024. We are hopeful that many of these takeaways will remain relevant for journalists as the tools continue to develop, and we hope readers will take what we learned while using the AI tools and apply it to their own experimentation with AI.
<aside>
🧰 General Takeaways
</aside>
- AI tools do not remove the need for human reporting and fact-checking. While they can provide a helpful starting place for drafting, they cannot, on their own, create stories we deem worthy of publication.
- All of the AI tools produce stories in a more editorial, academic style than journalists typically use.
- To mitigate this and get the most effective output, it is important to prompt the AI tools accordingly. For example, prompting the tool to write an objective story and/or to remove editorialization is often necessary to generate a story that upholds journalistic best practices.
- AI tools are trained on out-of-date information, which hurts their ability to write current stories: the context they add is often stale or outright incorrect.
- AI tool outputs often reveal biases from their training data that demand attention from the human writer. For example, when writing about a school board meeting, the output’s style was more academic than that of a story for the Arts & Culture section, which proved to be more embellished and editorialized.
- AI tools generally do not get quotes right, even when fed exact text. Instead, they will paraphrase a quote but still use quotation marks. Ensure that all quotes are triple-checked for accuracy.
- You can add in the prompt: “Ensure that any direct quotes used are the exact quote wording and not paraphrased/rewritten.” However, this is not a foolproof method; see the sketch below for one way to back the prompt up with an automated check.
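For writers comfortable with light scripting, paraphrased quotes can also be caught programmatically. The following is a minimal illustrative sketch, not part of our project’s workflow: it assumes the `openai` Python package with an API key in the environment, a hypothetical `meeting_transcript.txt` source file, and the GPT-4o model we tested. It bundles the objectivity and exact-quote instructions into one prompt, then flags any quoted passage in the draft that does not appear verbatim in the source.

```python
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical source material the story should be drawn from.
source_text = open("meeting_transcript.txt").read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Write an objective news story with no editorialization. "
                "Ensure that any direct quotes used are the exact quote "
                "wording and not paraphrased/rewritten."
            ),
        },
        {"role": "user", "content": source_text},
    ],
)
draft = response.choices[0].message.content

# Models often emit curly quotation marks; normalize before matching.
draft = draft.replace("\u201c", '"').replace("\u201d", '"')

# Flag any quoted passage that is not verbatim in the source. This catches
# the common failure where the model paraphrases but keeps the quotation
# marks; flagged quotes still need a human fact-check.
for quote in re.findall(r'"([^"]+)"', draft):
    if quote not in source_text:
        print(f'CHECK MANUALLY (not verbatim in source): "{quote}"')
```

Even when nothing is flagged, this check only confirms the words appear somewhere in the source; it cannot confirm a quote is attributed to the right speaker, so the human triple-check still applies.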
<aside>
🔎 Takeaways from specific AI tools used to create articles.
</aside>
<aside>
📌 CHATGPT
</aside>
- Outputs were consistently the most editorialized and used the most embellished language.
- ChatGPT 3.5 was virtually unusable; ChatGPT 4 and ChatGPT 4o were more effective.
    - ChatGPT 4o came out after we had finished producing stories, but we still experimented with it, and it produced the best stories we have gotten from GPT to date.
- Most promising for aggregating data; however, it did hallucinate information (generate completely false information).
- Image generation:
    - Uses DALL·E technology with ChatGPT 4 and ChatGPT 4o.
    - Cannot consistently produce accurate lettering.
    - Images produced are consistent with the content of the story, just a bit fantastical.
<aside>
📌 GEMINI
</aside>
- Only tool currently able to interact with video links, which was helpful for summarizing and pulling quotes from YouTube and other video sources. However, it cannot create articles based on those summaries.
- Sometimes fact-checks itself, citing where in the input it found particular information.
- Consistently masters the LQTQ (lead, quote, transition, quote) format and writes strong, relevant leads.
- Maintains objectivity with a journalistic voice suitable for Bloomberg or sports news.
- Incorporates diverse viewpoints through quotes.
- Shows some awareness of story context and significance.
- Struggles to seamlessly integrate new information into existing drafts.
<aside>
📌 CLAUDE
</aside>
- Outputs from Claude had the most traditional news story tone.
- Effectively uses the LQTQ format.
- On occasion, proactively seeks out additional information, showcasing journalistic instincts. However, watch out for misinformation when Claude adds its own information.
- May understand broader story context and related events.
<aside>
📌 EDITING TOOLS
</aside>
- Grammarly Premium
- Hemingway Editor
    - More helpful for simplifying wording than for providing journalistic edits, but sometimes simplifies too much.