Thank you for this, David - I’ve been experiencing some emotional pain over the advancements of AI and their impact on the creative sector. But honestly, thank you for showing me a way to challenge and engage in the conversation rather than leaving the social media spaces. Keep up your great work!
DZ - Help me understand, since I'm starting from no knowledge. You listed 2 limitations above about moral questions. Are these YOUR observations or are these limits as stated by the developers?
It seems most likely to me that the developers put guardrails on the AI so that it can’t directly aid and abet criminal activity (which isn’t really a moral stance, it’s just logistical). But beyond that, I think it’s likely true that this AI isn’t capable of complex moral reasoning about the implications of its own existence in historical context, etc.
I’m certain that AIs exist, or will soon exist, that are capable of that kind of moral reasoning, and that’s going to be more interesting and more problematic. And it will always be the case that the humans who design the AI are culpable to some degree. We know Musk has heavily invested in this product and is monitoring its impact so he can take credit when it feels most advantageous.