ChatGPT hallucinates. So do we humans.

“AI hallucination is a phenomenon wherein a large language model (LLM) perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.”

— IBM

And just as with other humans, we have to figure out whether we trust the output we're getting.

How do we do that with other humans?

We apply our own trust calibration factor. We ask ourselves questions like:

“Do I trust this person?”
“How much do I trust this person?”
“Do I trust this person’s character/integrity/expertise/credentials?”
“Why should I trust this person?”

But trust is rarely a toggle switch of either “trust” or “do not trust.” It’s usually more of a spectrum. Unfortunately, we sometimes have to take action as if it were a toggle switch, even when our confidence doesn’t feel that binary.

“I will (not) get the vaccine.”
“I will (not) have an abortion.”

When it comes to AI, we need to do the same thing: apply a trust calibration factor. The good news is that researchers are starting to apply some very smart logic to this problem.

It’s called “Thermometer,” and it helps keep an AI’s confidence in its answers in line with how often those answers are actually right, which in turn helps us avoid being overconfident about them.
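
For the curious, Thermometer builds on a classic calibration trick called temperature scaling: divide the model’s raw scores by a fitted “temperature” so that, say, 90% confidence really does mean right about 90% of the time (Thermometer’s contribution is learning to predict that temperature for new tasks). Below is a minimal sketch of the underlying temperature-scaling idea, not Thermometer itself; the data is made up and the function names are mine.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Turn raw model scores into probabilities, softened by a temperature."""
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def negative_log_likelihood(logits, labels, temperature):
    """Average NLL of the true answers under temperature-scaled probabilities."""
    probs = softmax(logits, temperature)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, candidates=np.linspace(0.5, 5.0, 91)):
    """Grid-search the temperature that best fits a small labeled validation set."""
    nlls = [negative_log_likelihood(logits, labels, t) for t in candidates]
    return candidates[int(np.argmin(nlls))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy validation set: 500 questions, 4 answer choices, deliberately overconfident scores.
    labels = rng.integers(0, 4, size=500)
    logits = rng.normal(0, 1, size=(500, 4))
    logits[np.arange(500), labels] += 2.0   # correct answer usually scores highest
    logits *= 3.0                           # exaggerated scores -> overconfidence

    t = fit_temperature(logits, labels)
    accuracy = (softmax(logits).argmax(axis=1) == labels).mean()
    print(f"fitted temperature: {t:.2f}")
    print(f"accuracy: {accuracy:.2f}")
    print(f"mean confidence before: {softmax(logits).max(axis=1).mean():.2f}")
    print(f"mean confidence after:  {softmax(logits, t).max(axis=1).mean():.2f}")
```

On this toy data the fitted temperature should come out well above 1, pulling the model’s average confidence back down toward its actual accuracy, which is the whole point: stated confidence you can actually trust.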

The fundamental problem, though, is the same one we humans face with each other: do we trust ourselves to calibrate our trust properly?
