A story that just dropped... I think yesterday? Anyway, this week... a lawyer here in the States had ChatGPT do his research for him, brought that legal argument to the courtroom, and lo and behold, all the legal cases/precedents the AI referenced were entirely made up. That said, this is more an indictment of the humans using AI - not understanding the current limitations and not caring to find out - than an indictment of AI itself.
That said, I'm no champion of AI. I'm certain we're going to abuse it and harm real people in ways that are tragically, entirely avoidable, and I agree that it's much further away from being truly useful or impressive than most seem to think. Most think it's only a year or two or three away, but look at self-driving cars: they're still a very long ways off from working as well as we were sold, even though the true-believers thought they'd have taken over the roads by now. But there's something ineffable about the flexibility of the human brain that AI has a very long way to go to replicate, if it ever will.
GREAT case in point: driverless cars are fine when driving around other AI-operated cars, but they fail when trying to predict what human drivers will do. We can say that "people aren't logical", but that isn't true: "defensive driving" is essentially people learning the logic behind the "illogical". The possibilities aren't random, or truly "illogical"; the logic is simply a hodgepodge of so many possibilities and iterations, with different motivations and precedents behind them. You can learn it, which means it's learnable - but AI isn't that kind of thinking thing yet. The question is whether it ever can be.