Some very smart people (Tyler Cowen, Francois Chollet, the Arc Prize Foundation, etc.) are very high (or low, depending on one’s view of AGI) on o3. If not yet technically AGI, it’s definitely a breakthrough that gets close in practice.

I’ve started using it myself. I’m currently using it for a technical blog article, an eBook on hiring, documenting code, writing a little code, customer proposals, a corporate merger proposal, sales and marketing channels and content, and generating and exploring ideas to make 4TLAS work.

I don’t yet have my own opinion, but I wrote this a couple of months ago:

How will we know?

First, we had the Turing Test. Now we have others that test specifically for consciousness and cognition along particular axes: the Perturbational Complexity Index (PCI), Unlimited Associative Learning (UAL), the AI Consciousness Test (ACT), the False Belief Task, the Mirror Test, the Deception Test, and the Pain Response Test.

I think we’re missing the point here. Any objective test we devise comes with a set of rules, and rules, once they are known, can be gamed. Especially by an AI.

Here’s how I’ll know that an AI has become sentient:

When it walks, figuratively or concretely, up to the table, sits down, and says, “Here’s what I wanna do…”

AGI and AI sentience aren’t exactly the same thing, although they’re close.

But I still stand by that.
