I think about this all the time, too. For me (and I'll write about this soon), the real difference that makes a difference is that the model has no point of view. It doesn't have experiences and emotional responses to those experiences, so it can't possibly have insights into things that humans are interested in. But it can fake it. The tough part is that people can fake it, too. They usually do. And that's what makes the GPT output so convincing.
True! But somehow when people fake it, it hits different, doesn't it? As in, a person can lie to you (and people lie all the time), and that sucks, but at least it's a person that's doing the lying. So we still care, it still moves us -- both on the front end, if we believe the lie, and then again when we realize we've been lied to. Both are meaningful emotional experiences. With AI, you might be moved by the lie, but when you realize it's just a machine, you just want to get away from the thing. It fails to move you.
One of the things I mean about people "faking it" is that there is a collection of shorthand phrases people tend to repeat, like a politician trained on talking points, only they do it unconsciously. Usually this is about politics, philosophy, or religion. Sometimes this leads to contradictory or even incoherent views, because people don't always think about what they are even saying. Like the crowd in Life of Brian all saying in unison, "Yes! We are all different! We are all individuals!" But I call it Fake Thinking when someone says the thing but doesn't understand what they are saying, or can't define the thing they are attacking. I think it's analogous to AI because AI doesn't understand anything it's saying. It repeats words that someone who does understand it might say.
That's a really good comparison. It would probably be a very productive line of research to figure out in what ways we behave like LLMs (or other forms of AI), in contrast to the behaviors that we can't get AIs to replicate. My tentative thesis is that if you can get an AI to do it, it probably isn't fundamentally valuable.
As I’m working on a faux finished room for the first time in a couple of decades, this post is quite welcome, even though I realize you’re referencing written art. 😉
WAY not over ChatGPT
Have fun with your robot.