
09/24/2024

A Case Study on Developing an AI Editorial Policy

AI should never be responsible for primary content development

Like any new development in technology, generative artificial intelligence (AI) has its supporters and its detractors. There are valid points on both sides: AI can be used to streamline workflows and increase efficiency, but it can also expose your organization to misinformation and ethical issues. That's why it's important to be proactive. If your association management company generates content in any way, whether for publications, websites, or social media, it's wise to put a policy in place detailing acceptable uses of generative AI tools.

What Is Generative AI?

Generative AI is essentially the use of computer algorithms to generate or modify content in a way that mimics human creation. Generative AI can be used to develop text, images, video, and audio. Large language models (LLMs), such as ChatGPT and Claude, are a type of generative AI trained on massive amounts of text; during training they learn statistical patterns in that text, which they then use to generate new text that follows those patterns.
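To make "learning patterns and generating text that follows them" concrete, here is a minimal sketch of a toy bigram model in Python. This is not how ChatGPT or Claude actually work; real LLMs use neural networks trained on vastly larger corpora. But the underlying principle is the same: learn which words tend to follow which, then sample accordingly. The corpus string and function names below are purely illustrative.

import random
from collections import defaultdict

def train_bigram_model(text):
    """Record which words follow which in the training text.
    A toy stand-in for the pattern-learning an LLM does at scale."""
    model = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=10):
    """Generate new text by repeatedly sampling a plausible next word."""
    word = start
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # no observed continuation; stop generating
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Illustrative corpus: the model learns, e.g., that "the" is often
# followed by "board" or "policy", and generates text accordingly.
corpus = "the board approved the policy and the board reviewed the policy"
model = train_bigram_model(corpus)
print(generate(model, "the"))

The output is new text the model has never seen verbatim, assembled from patterns in its training data, which is also why such tools can produce fluent but unverified claims.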

Do Your Research

As the editor and publications specialist at Capital Association Management (CAM), I was tasked with developing our AI editorial policy. Because AI is such a hot topic of conversation, especially among creators and editors, I found many resources to help me with this process. I have included them in the resource list below.

Read the complete article from ASAE's Center for Association Leadership.
