Moravec’s paradox is the observation that many tasks that are easy for humans are very difficult for AI systems, while many tasks that are challenging for humans are relatively easy for AI.
We also call this the “easy-hard problem for AI.”
Here are some tasks that are (generally) easy for humans but hard for AI:
- Recognizing objects in complex real-world scenes (spotting a squirrel in the woods)
- Understanding natural language in context (sarcasm, idioms, etc.)
- Transferring knowledge to new tasks (learning the guitar after learning the piano)
- Reasoning about abstract concepts (justice, empathy, etc.)
- Making common-sense inferences (if you walk in carrying a wet umbrella, it’s probably raining outside)
Here are some tasks that are easy for AI but (generally) hard for humans:
- Playing strategy games (chess, Go, etc.)
- Performing complex calculations (numerical weather forecasting)
- Analyzing large datasets (genomic data)
- Building predictive models (protein folding)
- Recognizing patterns in specialized images (MRI analysis)
The good news for us humans is that we can use AI to augment our capabilities for the hard (for us) tasks. You can and should be doing that already.
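As one small illustration of that kind of augmentation, the sketch below uses a pretrained image classifier from PyTorch/torchvision to label a photo, the sort of narrow pattern-recognition task machines handle well. The file name `photo.jpg` is a placeholder, and this is a minimal sketch under the assumption that `torch`, `torchvision`, and `Pillow` are installed, not a production pipeline.

```python
import torch
from PIL import Image
from torchvision import models

# Load a pretrained ImageNet classifier (weights download on first use).
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

# Preprocess the image exactly the way the pretrained weights expect.
preprocess = weights.transforms()
image = Image.open("photo.jpg").convert("RGB")  # placeholder path
batch = preprocess(image).unsqueeze(0)          # add a batch dimension

# Run the model and report the five most likely labels.
with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top5 = probs.topk(5)
for score, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{weights.meta['categories'][idx]}: {score:.3f}")
```

Swap in a domain-specific model (or fine-tune one) and the same few lines will sort through images most of us would struggle to classify by eye.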
Will AI someday use humans to augment its capabilities for the hard (for it) tasks?