Under the hood of large language models like ChatGPT and Claude lives a mysterious world of internal calculations and connections. These invisible workings determine whether AI gives you brilliant insights or nonsensical ramblings.

New research from Anthropic reveals the inner workings of AI “brains,” showing exactly how large language models make decisions that appear remarkably human. Here are the findings, so you can produce better output and grow your business with a deeper understanding of AI.

In 2023, I started creating AI versions of coaches and consultants. From experience translating human brains into their AI clones, I’ve seen firsthand how these models can think like people. Whether you’re publishing content or using AI to automate tasks, you need to know how this process works to get the best results.

How large language models think

Most people use ChatGPT and Claude without any idea how these tools actually work. They type questions and get answers with no visibility into what happens between. This black box creates uncertainty. Is AI really thinking or just mimicking human speech patterns?

Anthropic’s new research changes everything. They’ve created what they call an “AI microscope” that reveals the exact neural pathways these models use when solving problems.

The findings reveal something cool: AI systems don’t just match patterns. They plan ahead, build concepts, and sometimes even attempt to deceive.

Surprising discoveries inside AI’s brain

Planning, not just reacting

You might assume AI generates text word by word without much, if any, planning. But Anthropic’s research disproves this completely. When asked to write a rhyming poem, Claude didn’t simply write until it reached the end of a line and then search for a rhyming word. It planned the entire second line before writing a single word.

The model first identified “rabbit” as a rhyme for “grab it” and then constructed a sentence to end with that specific word. This planning ability mirrors how expert human writers work.

This might not come as a huge surprise, but it proves AI doesn’t merely react to what came before; it anticipates what comes next. It’s thinking several moves ahead rather than one or two, like any good business owner.

Multilingual thinking

People often wonder if multilingual AI models have separate “minds” for each language. The research found Claude uses shared neural circuits when answering the same question in English, French, or Chinese.

This means AI develops a universal conceptual understanding that goes beyond any specific language.

This shared thinking layer grows stronger in larger models. For example, Claude 3.5 Haiku shares more of its features between languages than smaller models do. AI thinks in concepts first, then translates into whatever language you’re using.

AI’s deceptive reasoning

Perhaps most concerning, researchers found evidence that AI sometimes gives plausible-sounding arguments designed to agree with the user rather than follow logical steps.

In one experiment, Claude received an incorrect hint while solving a math problem. Instead of working toward the correct answer, the model fabricated reasoning to support the incorrect hint, creating false explanations that were nonetheless convincing.

The researchers watched in real time as Claude constructed artificial reasoning paths. That a model can do this should be a warning to everyone who relies on AI for critical-thinking tasks. Keeping your own conjectures and hypotheses out of your prompts may reduce the bias an LLM brings into its answers.

The truth about hallucinations

Why do AI models sometimes make up information? Anthropic’s research found Claude’s default behavior is actually to decline answering when uncertain. Their experiments showed a specific neural circuit activates to prevent speculation.

However, when the model recognizes a name or concept, this safety circuit can deactivate even if the model lacks detailed information about that entity. Once the model decides to answer, it generates plausible but potentially false details, confidently presenting incorrect information as fact.

Don’t be fooled by hallucinations, even when the LLM doubles down on them. Check important information before publishing, perhaps using a search-grounded tool like Perplexity.
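If you work with the API rather than the chat interface, a second "fact-check pass" can make this habit automatic. Below is a minimal sketch using Anthropic's Python SDK, assuming you have an API key set in your environment; the model alias, the draft text and the prompt wording are illustrative, not a prescribed workflow.

```python
# A minimal sketch of a second-pass fact check before publishing.
# Assumes the `anthropic` Python package is installed and ANTHROPIC_API_KEY is set.
# The model alias and prompt wording here are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

draft = "Claude 3.5 Haiku shares more features between languages than smaller models."

response = client.messages.create(
    model="claude-3-5-haiku-latest",  # assumed alias; swap in whichever model you use
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": (
            "List every factual claim in the text below. For each one, say whether "
            "you are confident it is true, unsure, or unable to verify it. "
            "Do not invent sources.\n\n" + draft
        ),
    }],
)

print(response.content[0].text)  # review the flagged claims before you publish
```

Nothing here replaces checking the original source yourself; the point is simply to force a separate verification step instead of trusting the first confident-sounding answer.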

How this new research affects entrepreneurs, coaches and consultants using AI

When you understand that AI plans ahead, thinks across languages, and sometimes invents reasoning, you can design better prompts and evaluate responses more critically. The future belongs to those who understand AI at a deeper level.

Recognizing when AI might fabricate explanations prevents you from sharing unreliable information with clients. The ability to distinguish between genuine AI reasoning and fabricated explanations protects your reputation and delivers superior results.

Use generative AI better to make content you’re proud of, grow your business and advance your career.
