Here’s how to use AI chatbot capabilities as an inclusive communicator
As communicators, we must be mindful of the words we use and consider the diverse backgrounds of our audiences. It is easy to introduce our own biases into our writing without realizing it, which can alienate readers and keep our message from fully landing.
When we take steps to recognize these potential biases, we begin to open ourselves up to more inclusive writing and a broader audience.
By now, you may have heard of the AI language model that has captivated people’s attention: ChatGPT. ChatGPT is an AI chatbot that interacts in a conversational way. As described by the company that created it, OpenAI, “the dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises and reject inappropriate requests.”
ChatGPT and other AI chatbots can be great tools for unlocking knowledge and maximizing efficiency for the average user. A Wharton professor recently showed how he used ChatGPT to conduct market research, create a positioning document, write an email campaign, develop a website, design a logo and “hero shot” graphic, build a social media campaign for multiple platforms, and script and produce a video, all in just 30 minutes. He admitted that it would have taken him and a team days to complete the same tasks.
The ease and speed with which this tool created a 1,700-word email campaign may give us communicators some pause, but it is essential to understand that AI chatbots rely not only on the prompts we give them, but also on the data and input they are trained on. Just like the classic saying “you are what you eat,” we must be conscious of what we are feeding this technology.

6 AI chatbot concerns and how to address them with inclusive solutions
When using ChatGPT or any AI chat technology, it is important to be conscientious about the inclusivity of your inputs and the outputs they prompt. So, I asked ChatGPT to list six concerns users should keep in mind when employing an AI chat technology like itself.
Here’s what it had to say:
1. Bias: AI models can unintentionally perpetuate biases that exist in the data they are trained on. This can result in language that reinforces stereotypes or marginalizes certain groups.
- Solution: Check for any biased language or assumptions in the output and make any necessary adjustments or edits.
2. Representation: AI models may not accurately represent diverse communities and experiences due to a lack of diverse data used in training.
- Solution: Ensure that the input data is diverse and inclusive of different perspectives and experiences to improve the accuracy and representation of the output.
3. Accessibility: Not everyone may be able to use or interact with an AI language model, particularly those with disabilities.
- Solution: Ensure that the content created using an AI language model is accessible to all users, including those with visual or hearing impairments.
4. Privacy: There may be concerns around the collection and use of personal data when using AI models.
- Solution: Ensure that all data collected and used is done so ethically and transparently, and follow best practices for data privacy and security.
5. Transparency: AI-generated content may not always be distinguishable from human-generated content, which can lead to confusion or mistrust.
- Solution: Clearly label any content generated by AI to ensure transparency and build trust with users.
6. Ethics: There are ethical considerations to keep in mind when using AI language models, particularly around issues such as job displacement and the potential for misuse.
- Solution: Consider the ethical implications of using AI language models and ensure that their use aligns with the values and goals of your organization.
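If you want to put the first solution into practice at scale, rather than checking each draft by hand in the chat window, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording and example draft are illustrative assumptions, not a prescription; the same request works just as well typed directly into ChatGPT.

```python
# Minimal sketch of an automated inclusivity check, assuming the OpenAI
# Python SDK (v1.x) is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "Every salesman should bring his A-game to the all-hands meeting."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name; use whichever model your plan offers
    messages=[
        {
            "role": "system",
            "content": (
                "You are an inclusive-language reviewer. Flag wording in the "
                "user's draft that is biased, stereotyping or non-inclusive, "
                "and suggest neutral alternatives."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

# The model's review comes back as plain text for a human editor to act on.
print(response.choices[0].message.content)
```

Keeping the inclusivity instruction in the system message means every draft gets the same review lens, but the output is still a suggestion: a human editor makes the final call, which is exactly the check-and-adjust step listed above.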

When we make the effort to be aware, we move forward
Unconscious bias can affect not only our actions, but also our relationships and the way we communicate. A tool like dei.ai helps us identify the blind spots that reveal themselves in our writing, from word choice to phrasing to grammar.
Think Grammarly, but with a DEI background.
Bonus: this extension can be added right to Chrome.
By being mindful of these concerns and taking appropriate actions to address them, you can help ensure that the content created using an AI language model like ChatGPT is inclusive, accessible and ethical.
Have an internal communications or change effort that needs to win the hearts and minds of every employee in order to succeed? Book a call with our experts.
About the Author

Wil Taylor, Creative Associate
Wil brings 10+ years of work in the creative field to Pivot, with expertise in visual, graphic and web design. A Milwaukee native, Wil came to Minnesota to study graphic design at the University of Minnesota. He specializes in branding, covering complete identity needs including logo and web design, marketing materials, social media development and brand voice. A strong believer in the power of positive thinking in the workplace, Wil regularly develops internal wellness campaigns to assist employees with effective mental health techniques.