More creatives are signing open letters demanding a stop to training LLMs on their content without proper licensing (i.e., financial compensation).
Fundamentally, this is a legal copyright issue (i.e., money). Not a “stop the AI from taking over!” issue.
Licensing issues (i.e., bucks) aside, would you want an AI version of yourself? (That is, a chatbot version of you.)
Well, what could it do for you?
Picture a 10-minute morning meeting with your AI-you each day. Of course, it’s been trained on all of the previous days of you, so it knows your history, how to sound like you, your preferences, and how you generally respond to the requests, challenges, and tasks of the day. But the purpose of the morning meeting is to understand the today-you. It asks you about how you’re feeling, things on your mind, what happened yesterday, and stuff that is important (as far as you know).
Presumably, this AI-you could stand in for you in particular aspects of your daily life that don’t require you physically. Let’s also assume that AI-you really is a good version of you.
Maybe, in some ways, a better version of you, because it’s you at your best, most consistent self. Rarely, if ever, would you review what AI-you did and think, “that’s not how I would do it” or “I’m not happy with that.”
But I don’t think this comes down to pragmatism.
I think it’s simply, “Would you want an AI Version of you?”