I saw this today and thought I’d share it.

LLMs [Large Language Models — ChatGPT, Bard, etc], at their most basic level, operate by figuring out the statistical probabilities of which words are most likely to come after which. They don’t “understand” or “know” anything. They’re just converting words to numbers and solving equations. 

Adam Rogers, Business Insider

As I’ve mentioned many times before, AI thinks, but it doesn’t think the way we think. Much like we can fly, but not the way a bird does (i.e., on our own, we can’t really fly at all). AI performs an approximation of thinking. At the end of the day, it’s a statistical math model.
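
To make that concrete, here’s a minimal sketch of what “figuring out which words are most likely to come after which” means. It’s a toy bigram count model, nothing like a real LLM’s architecture, but it’s the same underlying idea: turn text into counts and probabilities, then predict the next word from the numbers, with no understanding anywhere in the loop.

```python
# Toy illustration of "which words are most likely to come after which":
# count word pairs in a tiny corpus and turn the counts into probabilities.
# Real LLMs are vastly more sophisticated, but the principle is the same:
# the next word is chosen from statistics, not from understanding.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probabilities(word):
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Given “the”, the model says “cat” is the most likely next word, purely because it showed up most often in the data. Scale that idea up by billions of parameters and you get something that looks a lot like thinking.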

Maybe, just maybe, the core of our meat engine is the same kind of thing. The essence of you is nothing more than biochemical computing. Some think that. I don’t. 

I think a fundamental difference exists and will always exist. 

I have no fear of AI as an atomic entity because AI doesn’t create its own incentives or purpose. It gets its purpose from an external source. It’s not self-sufficient in that way. 

I do fear, however, how some external sources will incentivize AI. 
