
How WIRED Will Use Generative AI Tools

Mar 2023



Like pretty much everyone else in the past few months, journalists have been trying out generative AI tools like ChatGPT to see whether they can help us do our jobs better. AI software can't call sources and wheedle information out of them, but it can produce half-decent transcripts of those calls, and new generative AI tools can condense hundreds of pages of those transcripts into a summary.

Writing stories is another matter, though. A few publications have tried--sometimes with disastrous results. It turns out current AI tools are very good at churning out convincing (if formulaic) copy riddled with falsehoods.

This is WIRED, so we want to be on the front lines of new technology, but also to be ethical and appropriately circumspect. Here, then, are some ground rules on how we are using the current set of generative AI tools. We recognize that AI will develop, and so we may modify our perspective over time; we'll acknowledge any changes in this post. We welcome feedback in the comments.

Text Generators (e.g. LaMDA, ChatGPT)

We do not publish stories with text generated by AI, except when the fact that it's AI-generated is the whole point of the story. (In such cases we'll disclose the use and flag any errors.) This applies not just to whole stories but also to snippets--for example, ordering up a few sentences of boilerplate on how Crispr works or what quantum computing is. It also applies to editorial text on other platforms, such as email newsletters. (If we use it for non-editorial purposes like marketing emails, which are already automated, we will disclose that.)

This is for obvious reasons: The current AI tools are prone to both errors and bias, and often produce dull, unoriginal writing. In addition, we think someone who writes for a living needs to constantly be thinking about the best way to express complex ideas in their own words. Finally, an AI tool may inadvertently plagiarize someone else's words. If a writer uses it to create text for publication without a disclosure, we'll treat that as tantamount to plagiarism.

We do not publish text edited by AI either. While using AI to, say, shrink an existing 1,200-word story to 900 words might seem less problematic than writing a story from scratch, we think it still has pitfalls. Aside from the risk that the AI tool will introduce factual errors or changes in meaning, editing is also a matter of judgment about what is most relevant, original, or entertaining about the piece. This judgment depends on understanding both the subject and the readership, neither of which AI can do.

We may try using AI to suggest headlines or text for short social media posts. We currently generate lots of suggestions manually, and an editor has to approve the final choices for accuracy. Using an AI tool to speed up idea generation won't change this process substantively.

We may try using AI to generate story ideas. An AI might help the process of brainstorming with a prompt like "Suggest stories about the impact of genetic testing on privacy," or "Provide a list of cities where predictive policing has been controversial." This may save some time and we will keep exploring how this can be useful. But some limited testing we've done has shown that it can also produce false leads or boring ideas. In any case, the real work, which only humans can do, is in evaluating which ones are worth pursuing. Where possible, for any AI tool we use, we will acknowledge the sources it used to generate information.

We may experiment with using AI as a research or analytical tool. The current generation of AI chatbots that Google and Microsoft are adding to their search engines answers questions by extracting information from large amounts of text and summarizing it. A reporter might use these tools just like a regular search engine, or to summarize or trawl through documents or their own interview notes. But they will still have to go back to the original notes, documents, or recordings to check quotes and references. In this sense, using an AI bot is like using Google Search or Wikipedia: It might give you initial pointers, but you must follow the links back to the original sources.

In practice, though, AI will make mistakes and miss things that a human would find relevant--perhaps so much so that it doesn't save any time. Even if these tools do prove useful, we won't want our reporters to rely on them any more than we'd let them rely on the limited information on Wikipedia. We'll continue to insist on the same standards of research and original reporting as always. We also know that there are many professionally published research databases out there that come with lawful and highly accurate text- and data-mining tools, so we will constantly evaluate whether those meet our needs.

Image Generators (e.g. Dall-E, Midjourney, Stable Diffusion)

We do not publish AI-generated images or video. AI-generated art is already all over the internet, but artists and image libraries are suing the image generators for violating copyright by using their work as training data. In some countries, there are laws prohibiting such use. At least until the legal issues are settled, we won't publish such art, even if it's made by a working artist we've commissioned and paid. As with text, we will make exceptions when the fact that AI was used is the point of the story, and will disclose it, as well as get permission. (For instance, we asked some artists who had designed covers for WIRED to create AI-enabled variations on their own work in order to illustrate an essay on the potential of generative AI.)

We specifically do not use AI-generated images instead of stock photography. Selling images to stock archives is how many working photographers make ends meet. At least until generative AI companies develop a way to compensate the creators their tools rely on, we won't use their images this way.

We or the artists we commission may use AI tools to spark ideas. This is the visual equivalent of brainstorming--type in a prompt and see what comes up. But if an artist uses this technique to come up with concepts, we will still require them to create original images using their normal process, and not merely reproduce what the AI suggested.
