
Why Subject Matter Experts Are More Important Than Ever in the Age of AI

May 28, 2024

A Muck Rack survey of 1,000+ journalists on the state of journalism in 2024 found that journalists consider subject matter experts (82%) and researchers (77%) to be the most credible sources.

We lean on subject matter experts, expert opinion, and researchers to guide us through the complexities of the world. With their deep knowledge of even the most niche topics, experts can bring to light the exact information we need at the moment we are looking for it.

But the further development of artificial intelligence (AI) may bring a slight shift in where and how we get our information. Though it is a powerful tool, AI is not a replacement for human expertise.

The Push & Pull Between AI & Reliable Information

The ongoing case between The New York Times and OpenAI suggests competing interests. The Times seeks to protect its copyrighted work, which OpenAI (*allegedly*) relied on to train its chatbot, raising questions about intellectual property rights. Yet the recent deals between OpenAI and News Corp, and between OpenAI and The Associated Press, suggest another path. On one hand, relying on news outlets as a source of information can improve the quality of output from AI engines. On the other hand, it raises concerns around content ownership, jobs in journalism, and an existential question – to work with AI or against it?

The use of AI in our daily lives is likely to increase. Recent updates to OpenAI’s ChatGPT and Anthropic’s Claude, for example, make it easier to depend on these technologies. Yet AI remains insufficient in many capacities, and recent issues with hallucinations and bias further expose its imperfections, raising questions about its reliability and ethics.

In this blog post, I highlight some of the ways that subject matter experts, researchers, academics, and thought leaders are a key source of reliable information in the age of AI, and how they can help improve AI’s functions.

Diverse Voices

How a Lack of Diversity Can Impact AI

Meta recently announced the creation of a new AI advisory council. The council’s main function is to offer “insights and recommendations on technological advancements, innovation, and strategic growth opportunities.” One could commend Meta for creating this council at a time when so many questions and issues surround AI and its role in our daily lives. Yet, the council is entirely made up of white men.

There is plenty of research that shows how AI is deficient. TechCrunch reported on how AI systems are already perpetuating and amplifying racial discrimination in employment, housing, and the criminal justice system (read: disproportionately impacting Black people). Trained primarily on English, AI was found to be biased against non-native English writers. Not to mention the horrible ways it can deepfake human bodies, which then get shared and amplified on social media.

I wonder how the white men on the Meta AI Council will discuss these crucial points, never having experienced any of them.

Course Correction?

Lest we forget the great Gemini diversity debacle of early 2024. Gemini, Google’s AI chatbot, was creating ‘overly diversified’ historical images, disregarding historical fact. I am not sure if this was an overcorrection, but it is a case in point that AI has the potential for bias and the inability to fully grasp ethical and moral nuances without some sort of (!!!diverse!!!) human intervention.

“Diversity fuels innovation, and AI is no exception. When women join the AI conversation, they bring a richness of perspective that shapes technology for the better. We can’t afford to miss out on our unique voices – it’s a collaborative social experiment, and women’s contributions are essential,” says Jennifer Madrid, Marketing Strategist & Creator of She Vibes High. “By bringing their collaborative spirit and creative problem-solving to the table, women experts play a pivotal role in building a more inclusive and impactful future with AI.”

AI has much to learn in terms of diverse voices and experiences. Yet, companies like Meta on the cusp of AI development and expansion aren’t including those voices. It is here where subject matter experts can make a difference in how AI evolves.

Researchers Can Help Inform and Develop AI

The Power of Lived Experiences

One of the major negotiation points during the 2023 Hollywood writers’ strike was the implementation of AI in creative writing. Writers demanded guarantees that production houses would limit the use of AI. 

Prof. Ben Zhao told TechCrunch that AI advancements can be used as an excuse for corporations to devalue human labor, potentially making it easier to minimize and undermine the role of creative writers. “It’s to the advantage of the studios and bigger corporations to basically over-claim ChatGPT’s abilities,” Zhao was quoted saying in the TechCrunch article. “This is a human enterprise that involves working with other people, and that simply cannot be done by an AI.”

It is still too soon to tell how many jobs may be replaced by AI, though many people are already lighting their hair on fire over it. But I shouldn’t be so glib. I’ve seen The Matrix. I believe any organization that replaces human writers/content creators/knowledge producers with AI will do so at its peril.

Knowledge Creation

For subject matter experts and researchers, the value of lived experience alongside research is incredibly important – especially in fields like ethics and the human impact of AI, as we see with the formation of Meta’s AI council. Decades of study and exploration in specialized fields inform research, theory, and practice. Subject matter experts can connect disparate ideas and advance human knowledge. Sure, AI can read all the published works on a topic and, depending on how you prompt it, interpret them. But the output would not be able to speak to the authentic human experience. AI cannot move knowledge forward; it only interprets what has already been created.

Related Post: How AI Can Help You Communicate Your Research to the Public

Quality Control

For AI

There is a real need for quality control within the AI infrastructure. If you’ve used any AI platforms, you probably have first-hand experience with this. 

Once, I asked Gemini to write a post for me with research and citations to support its points. It wrote the blog post, citations and all. Yet, on second glance, every single citation was made up. Clearly, there is still a need for fact-checking and human oversight in content creation.

“At BEE we see the need for AI-Human hybrid systems to be prioritized as AI alone is subject to hallucinations and does a relatively bad job at noticing nuances in context and instead favors just ‘sounding’ smart than actually being smart,” said Marco Hansell, Founder of Bee Brand AI. “As a system that is meant to mimic human intelligence we need human co-pilots to make sure the end result is not only accurate but effective for the goal we are accomplishing.” 

You can also see the need for quality control in real time with the rollout of AI Overviews in Google Search. Google recently added AI Overviews, a Search feature meant to “help you find your answer quicker” by removing the step of reviewing multiple websites yourself.

“Sometimes you want a quick answer, but you don’t have time to piece together all the information you need. Search will do the work for you with AI Overviews,” outlined Liz Reid, VP and Head of Google Search, in a press release announcing the launch. And similar to the AI-generated image debacle with Gemini in early 2024, AI Overviews is experiencing some serious (and hilarious) misinformation issues. Shocking, I know.

For Subject Matter Experts

But quality control should extend to subject matter experts and researchers too. The peer review process (an imperfect system) was built for this. Quality control also means doing your research and understanding an expert’s background – and why they may or may not be the right person to speak on a particular topic.

Researchers, scholars, and experts are also responsible for sharing why they are subject matter experts. Qualifying yourself is part of the job. This is no time to close off access to expertise out of fear or ego. It is also about building trust with the public. And building trust means openly talking about your biases. We are all biased, no matter how you try to slice the pie. But openly discussing these biases and how they may shape your perspective ultimately builds trust with your audience.

Related post: Bridging the Gap: Unleashing the Power of Research Communication Through Reflexivity

The Rise of the ‘Intellectual Influencers’

I argued before that scholars and academics are content creators, and now I am here to tell you that they are also influencers. The actual influencers must be clutching their pearls.

Business Insider is forecasting the imminent takeover of the space by ‘intellectual influencers’ who are here to ‘edutain’ you. 

“An influencer is somebody that influences you because of their personality and personal taste and personal preferences, whereas a thought leader influences you because of their expertise,” according to a content creator and CEO of Viral Marketing Stars quoted in the Business Insider article. “Thought leaders,” it seems, is a concept they have only just discovered.

Related Post: Academics as Content Curators

The article highlights how social media users seek more high-level information, looking to learn from subject matter experts rather than people cosplaying as experts. Nothing against the influencers and the influencer industrial complex. Get it how you live it. But after years of social media exposure, humans are seeking more authentic, expert-driven content.


“The human touch of lived experience, knowledge, and critical thinking, combined with the appropriate use of AI, will be the gold standard for the future information landscape,” says Kala Philo, Tech Writer and Editor at a female-led newsletter for busy leaders about Web3, blockchain, and emerging tech. “Regulatory standards for AI use are coming, and blockchain technology will be crucial for transparency and protecting human authors’ IP. Although people are understandably nervous about the rise of GenAI, the truth is, there has never been a better time for subject matter experts to blog.”

Subject matter experts are needed, sought after, and valued due to their expertise. AI offers us many options, assists, and reframes, but there is no replacing the value of the lived experience, subject matter expertise, and high-level research. Take solace in your position as a researcher and subject matter expert. Embrace quality control, and do your part in creating knowledge for the future and impacting the world.


Alicia Cintron, PhD

Research Public Communications Trainer & Coach + traveling Scholarpreneur
