Artificial intelligence is learning to understand people in surprising new ways

New research shows AI can analyze personality traits from written text—and even explain how it makes its decisions.


AI now detects personality traits from text and explains its reasoning, advancing psychology and ethical tech. (CREDIT: CC BY-SA 4.0)

A growing body of research shows that AI can detect key parts of your personality just from the words you use. It doesn’t need long interviews or tests. Instead, it looks at your writing—social media posts, essays, or even everyday messages—and picks up signals about who you are.

Researchers are now taking this a step further. They’ve developed methods to peek inside AI’s “mind” to understand how it makes those judgments. This new level of explainability could change how we assess personality, making it easier, faster, and more accurate across many areas of life—from therapy to education and even job hiring.

How AI Sees Personality in Words

A team of scientists from the University of Barcelona recently ran an in-depth study using cutting-edge AI models to explore personality detection. They tested models trained on large sets of written texts, comparing two well-known personality systems: the Big Five and the Myers-Briggs Type Indicator (MBTI).

From left to right, experts Daniel Ortiz, David Saeteros and David Gallardo, at the University of Barcelona. (CREDIT: University of Barcelona)

The Big Five model breaks personality into five traits: openness, conscientiousness, extraversion, agreeableness, and emotional stability. MBTI, often used in online quizzes and corporate settings, sorts people along four dichotomies: introversion vs. extraversion, sensing vs. intuition, thinking vs. feeling, and judging vs. perceiving.

The study used two leading AI models, BERT and RoBERTa, both of which specialize in processing natural language. To train them, the researchers used essays and social media posts from people whose personality traits had already been measured with questionnaires.
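
To make that setup concrete, here is a rough sketch of what fine-tuning a BERT-style model on trait-labeled text can look like in Python. The checkpoint name is a real public one, but the sample text, labels, and five-output design are illustrative assumptions, not the team's actual code.

```python
# A hypothetical sketch of fine-tuning a BERT-style model to predict Big Five
# trait labels. The checkpoint is a real public one; the text, labels, and
# training step are illustrative, not the study's actual code.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=5,  # one output per Big Five trait
    problem_type="multi_label_classification",
)

texts = ["I love meeting new people and trying new things."]
# Questionnaire-derived trait labels (illustrative values only)
labels = torch.tensor([[1.0, 0.0, 1.0, 1.0, 0.0]])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)
outputs.loss.backward()  # one gradient step; a real run loops over data with an optimizer
```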

Once the AI could detect patterns, the team turned to a method called integrated gradients. This tool reveals which words or phrases had the most impact on the model’s predictions. The goal was to ensure that the AI wasn’t just guessing based on random patterns or biases in the data.

According to the scientists, integrated gradients allow researchers to “open the black box” of AI. This means they can tell why the system makes a specific decision. For example, the word “hate” might seem negative, but in a phrase like “I hate to see others suffer,” it may actually show empathy. Without context, a model might misinterpret that.
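
For readers curious what that looks like in practice, the sketch below applies integrated gradients to a BERT-style classifier using the Captum library. The model, the trait index, and the example sentence are assumptions chosen for illustration; the study's own pipeline may differ.

```python
# A minimal sketch of token-level attribution with integrated gradients,
# using the Captum library. The model, trait index, and sentence are
# illustrative assumptions, not the study's exact pipeline.
import torch
from captum.attr import LayerIntegratedGradients
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=5)
model.eval()

def trait_score(input_ids, attention_mask):
    # Logit for one trait (index 0 here, e.g. openness) per example
    return model(input_ids=input_ids, attention_mask=attention_mask).logits[:, 0]

enc = tokenizer("I hate to see others suffer.", return_tensors="pt")
baseline = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)

lig = LayerIntegratedGradients(trait_score, model.bert.embeddings)
attributions = lig.attribute(
    inputs=enc["input_ids"],
    baselines=baseline,
    additional_forward_args=(enc["attention_mask"],),
)

# Sum over the embedding dimension to get one attribution score per token
scores = attributions.sum(dim=-1).squeeze(0)
for token, score in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), scores):
    print(f"{token:>10s}  {score:+.4f}")
```

A per-token printout like this is what lets researchers check whether a word such as "hate" is pushing the prediction in a direction that makes psychological sense in context.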



"This methodology has allowed us to visualize and quantify the importance of various linguistic elements in the model’s predictions," the researchers said.

AI Models BERT and RoBERTa

BERT (Bidirectional Encoder Representations from Transformers) and RoBERTa (Robustly Optimized BERT Pretraining Approach) are both transformer-based language models developed for natural language understanding, but they differ in their training strategies, data usage, and certain architectural choices.

BERT, introduced by Google in 2018, was groundbreaking because it used bidirectional context in pretraining, meaning it considered both left and right context simultaneously to predict masked words. It was trained on the BookCorpus and English Wikipedia using two pretraining objectives: Masked Language Modeling (MLM) and Next Sentence Prediction (NSP).

The MLM task randomly masks some tokens and predicts them based on context, while NSP trains the model to determine whether one sentence logically follows another. This combination helped BERT excel in a variety of NLP tasks, such as question answering, sentence classification, and named entity recognition.
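
A quick, hedged example: the Hugging Face fill-mask pipeline shows BERT's masked-word prediction in action. The sentence is illustrative and not drawn from the study's data.

```python
# A quick look at BERT's masked-word prediction, using the Hugging Face
# fill-mask pipeline (the sentence is illustrative, not from the study).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill_mask("I [MASK] to see others suffer."):
    print(prediction["token_str"], round(prediction["score"], 3))
```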

A schematic depiction of the BERT model and its training process. (CREDIT: Cameron R. Wolfe / Substack)

RoBERTa, released by Facebook AI in 2019, kept the core BERT architecture but modified the pretraining methodology to improve performance. It removed the NSP task entirely, as experiments showed that it wasn’t necessary for strong downstream results.

RoBERTa was trained on a much larger dataset — including BookCorpus, English Wikipedia, CC-News, OpenWebText, and Stories — and used significantly more training steps with larger batch sizes. Additionally, it dynamically changed the masking pattern during training rather than keeping it fixed, which helped the model learn more robust language representations.
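
As a rough illustration of dynamic masking (not RoBERTa's original training code), the snippet below uses a Hugging Face data collator that picks new masked positions every time a batch is built, so the same sentence is masked differently across epochs.

```python
# A rough illustration of dynamic masking: the collator chooses new masked
# positions each time a batch is built. Names and text here are illustrative.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

encoded = tokenizer("Dynamic masking picks new tokens on every pass.", return_tensors="pt")
for epoch in range(3):
    batch = collator([{"input_ids": encoded["input_ids"][0]}])
    print(tokenizer.decode(batch["input_ids"][0]))  # masked positions differ from pass to pass
```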

In essence, BERT established the bidirectional transformer framework, while RoBERTa refined it through better optimization choices, more data, and altered training objectives. The result is that RoBERTa generally outperforms BERT on benchmark NLP tasks, not because of a radically different architecture, but due to more aggressive and well-tuned pretraining strategies. BERT remains historically significant as the foundation, while RoBERTa represents a direct, empirically improved successor.

Why the Big Five Beats MBTI

The study revealed that the Big Five model worked better with AI tools than MBTI. The researchers found that MBTI led the models to focus more on surface-level clues rather than deep, consistent patterns in language.

Occurrences for each letter of the MBTI in the Personality Café dataset. (CREDIT: David Saeteros, et al.)

“Despite being widely used in computer science and some applied fields of psychology, the MBTI model has serious limitations,” they explained. “Our results indicate that the models tend to rely more on artefacts than on real patterns.”

In contrast, the Big Five showed more reliable links between language and personality. AI predictions based on this model were more stable and matched known psychological patterns. This makes the Big Five a stronger choice for future research and practical use.

A New Tool for Psychology and Beyond

Automatic personality detection has wide applications. In psychology, it opens the door to more natural ways of understanding people. It could replace or support traditional personality tests. By studying language, therapists and researchers might spot changes in mood or personality over time. This could help in early diagnosis or track a patient’s progress during treatment.

“With these methods, psychologists will identify linguistic patterns associated with different personality traits that, with traditional methods, might go unnoticed,” the team explained. Beyond the clinic, AI-based personality analysis could help in hiring, customizing education, or building smarter digital assistants. It could even shape how social scientists study populations, making it easier to examine huge sets of written data.

In hiring, for instance, employers might use writing samples to learn if someone fits a certain work style. In education, teachers could better tailor learning based on student personality. AI could even help digital assistants, like chatbots or virtual tutors, respond more naturally by adjusting their behavior based on user traits.

Bar plot for the geometric mean positive attribution scores for Agreeableness. (CREDIT: David Saeteros, et al.)

The team emphasized the need for ethics and transparency in all uses. “It is important to stress that all such applications should be based on scientifically sound models and incorporate the explainability techniques we have explored, to ensure ethical and transparent use,” they added.

The Future: Combining AI and Traditional Tests

Although this technology is powerful, researchers don’t expect it to replace standard personality tests anytime soon. Instead, they see it working alongside traditional tools to give a richer, more complete view of someone’s personality.

“We see an evolution towards a multimodal approach,” they said. “Traditional assessments are combined with natural language analysis, digital behavior and other data sources to get a more complete picture.” This mix of methods could make personality research more accurate and useful. For example, digital behavior, such as online activity or voice tone, might be added to written text. The team is also exploring tools like Whisper.ai, which can turn spoken words into text, for future analysis.
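
As a hedged sketch of that last idea, the open-source Whisper package can turn an audio recording into text that could then be fed to the personality models; the file name below is a placeholder.

```python
# A minimal sketch of speech-to-text with the open-source Whisper package;
# the audio file name is a placeholder.
import whisper

model = whisper.load_model("base")
result = model.transcribe("interview_sample.mp3")
print(result["text"])  # the transcript could then be analyzed for personality cues
```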

AI models are especially helpful in places where people don’t want to take long tests, or where there’s a lot of writing to review. That makes them useful in real-life settings where time or access is limited.

The researchers plan to test their findings across different types of writing, platforms, languages, and cultures. They want to see if these language-personality links hold true for people in other countries or who speak other languages.

The researchers also aim to study other mental and emotional traits, not just personality. They are working with professionals in therapy and human resources to apply these tools in the real world. This helps make sure the AI has a useful, fair, and positive impact. The goal is not just better science, but better tools that work for people from all walks of life.

Research findings are available online in the journal PLOS One.

Note: The article above was provided by The Brighter Side of News.




Mac Oliveau
Science & Technology Writer | AI and Robotics Reporter

Mac Oliveau is a Los Angeles–based science and technology journalist for The Brighter Side of News, an online publication focused on uplifting, transformative stories from around the globe. Passionate about spotlighting groundbreaking discoveries and innovations, Mac covers a broad spectrum of topics—from medical breakthroughs and artificial intelligence to green tech and archeology. With a talent for making complex science clear and compelling, they connect readers to the advancements shaping a brighter, more hopeful future.