Here’s some gold from Tyler Cowen regarding current AI and why it’s already good at economics:
“…Good chains of reasoning in economics are not too long and complicated. …The length of these effective reasoning chains is well within the abilities of the top LLMs today.
Plenty of good economics requires a synthesis of theoretical and empirical considerations. LLMs are especially good at synthesis.
In economic arguments and explanations, there are very often multiple factors. LLMs are very good at listing multiple factors; sometimes they are “too good” at it, “aargh! not another list, bitte…”
…
A lot of core economics ideas are “hard to see from scratch,” but “easy to grasp once you see them.” This too plays to the strength of the models as strong digesters of content.”
I see the same reasoning as to why current AI is very good at coding modules and self-contained smallish applications but not very good (yet?) at multi-domain problems. For example, an AI can easily generate an iOS app that sends a text message, but it cannot generate the 5G stack for the phones and the infrastructure over which that text message will flow, even though the constituent parts of the entire system/network are well known and spec’d.
Coding in the module/smallish-application domain is bounded and well understood, and the chains of reasoning are “not too long and complicated.”
For unbounded problems, truly creative solutions, and long chains of reasoning, you’re still the pilot.
AI is but a copilot.