Apple just published an interesting paper.
The TL;DR:
Current Large Language Models (LLMs) and Large Reasoning Models (LRMs) still fall apart as problem complexity increases. And throwing more resources at them (compute, tokens) or giving them better instructions doesn't help. They simply hit a wall.
What does that tell us?
That AI still doesn’t think. Not the way we do. It can mimic, compute, pattern match, and summarize. But reason, connect, and leap? That’s still ours to do.
That your brain is still your best asset. For now, no model outthinks a thoughtful human, especially when problems get complicated. Critical thinking is still job security. Humanness is still job security.
Will this change? Will the models get more human-like?
Maybe, but maybe not. AI will get better, but what "better" means is not yet understood.
For now and the foreseeable future, you still matter. Your brain still matters. Your humanness still matters.