I decided to ask AI who I was, and the resulting answer proved to me what dogshit AI actually is.
Caveat: I am definitely biased against AI, so you might think I’m looking for flaws - but be damned if I didn’t find them. For context: I am Ryan K Lindsay, Aussie comic writer, I’ve written some comics, I overthink on a professional level.
Okay, let’s go…
I love the nothing here newsletter - go sub it now. It looks at current affairs and events and casts an analytical eye over them in a way that helps me stay informed about stuff that matters, stuff I probably wouldn't catch otherwise.
Recently, they posted a series of articles where writers from the hivemind asked AI who they were. In each instance, the AI got most details correct and then 100% flubbed and fabricated others. In each instance, the AI added new books to the writers' publishing histories, and it was even able to detail what those imaginary books were about.
It just further cemented for me that AI is not a phenomenal resource. It always sounds confident as it bends the truth, or outright lies, and if you call it out it just meekly offers a mea culpa - but it has no way of changing its ways in any meaningful manner, unless we grab it by the leash and really drag it through each and every fact of the world, rubbing its nose in each spot of truth.
As such, I asked AI about myself as a writer, and the result was fairly close yet ultimately useless. It gave me a new book, too - and it showed it didn't know anything about any of the books I've actually written. Check it out.
---
That first question gets such a correct response. It’s so close to being complete fact, and so much of it is spot on. I guess if you weren’t paying attention, it could completely fool you.
I do wonder if it lists Image Comics because it's misinformed, or if it's taking my back-up short stories in Grim Leaper and Shutter as connection enough to the publisher.
The comics it lists for me are good, the artists being credited is even better, but then it says I wrote two novels. Now, I’ve written 5 novels [well, 4 and one novella] and none have ever seen print. It says one is called INK ISLAND, and that’s a nope because that’s a one-shot comic with Craig Bruyn, and then it lists this strange new title JUNKYARD DOGS.
I had to ask what this book was, and the plot is…interesting, if completely vague. It's crime, and it sounds like something I'd write, especially the heist. A location called 'Scalped Hill' is just terrible, and I can't imagine writing any of those 4 main characters. I do like that this imaginary RKL has written from multiple perspectives, though, and the fact that the book relies on character development and has an overall message to explore is good. Good work, AI-generated RKL, you must write good AI-generated stuff, because IRL RKL has never even considered this title, let alone written this hunk of junk.
Which made me wonder: if I kept digging, what else would the AI just flat out lie about to my face?
Not gonna lie, this one sounds way more interesting and nasty, and it would be a blast to write. What a shame I already told this story as an all-ages friendly comic. I guess the AI couldn't come to grips with me writing outside the box it had put me in.
So if it made up a novel, and turned another comic into a novel, what did it know about my other comics? It listed them, and named the right artists, so I had to know more.
I mean, AI could just harvest the info for ETERNAL from the Amazon synopsis, or any number of reviews or interviews or the publisher's site, right? Where the hell does the AI need to go to get this so wrong? I cannot even fathom exactly how it ends up telling me this - though I appreciate it telling me my work is up there with The Sandman and Preacher.
This one is so, so close. It's got a lot right, more than ETERNAL got, but it's ultimately wrong about the story.
This one is fairly close too, though it feels like a summary from someone who couldn't quite follow what was happening. Did the AI just skim-read the preview pages online?
Delving deeper into my back catalogue, it gets so much right - titles, artists - and then just casually drops in some completely wrong stuff. "The Devastator" does not exist. And now it's given me another novel, this one very unimaginatively named, as if someone was playing RKL Mad Libs for story titles. Booooo!
At this stage, I'm starting to see a pattern - every book has bold linework and vibrant colours, every book is likened to other works, and every book was nominated for awards in the industry.
If someone were writing a book report on me [why? who knows] then this would all seem pretty good, but it’s just so far off centre.
I love that Bucky is secretly a deer - like he can just chill and hide the antlers.
I also appreciate that the AI has nailed that this comic is more cult, more niche - because it's only ever existed as a Kickstarter comic.
Straight out the gate - what? Summer solving the case of the disappearing fisherman, ha, no. Just, no. But the rest follows the usual AI chum - though it’s gritty lines this time.
It had been garbling my actual books, so I had to go look at the new one it made up. This time it subbed out Chris Panda, who I actually worked with on SHE, and it brought in Dean Kotz, who I have never met in my life.
The story is interesting, again, in a vague way. But it does not exist, so where is this info coming from?
To wrap things up, I asked about a title I'm currently working on/pitching, and was truly baffled to see it turn up a comic I'd apparently already made with that title. Tom Bonin, an Aussie artist I had a beer with about a decade and a half ago, is suddenly a collaborator on this strange story, which thankfully sounds nothing like the story I'm pitching.
So over the course of this chat with the GPT, I quickly learnt it didn't know shit about me: what it did know was a loose approximation most of the time, and the rest it filled in with confident guesses. The fact it would yield a result even when I put in a fake title makes me really concerned about the validity of anything it says.
To test it out, I tried a few names of fellow writers I know, and every time it gave false facts peppered amongst the truth. Then I tested it with ‘Matt Fraction’ and it even got some details wrong about him.
That seems insane to me.
I should test it with Stephen King or someone with a huge name, but honestly, it shouldn't matter whether I'm asking about me or a mate who hasn't published yet: if there's no info, the AI should just state that.
Anyone using AI to support journalism or schoolwork, take heed. The only thing AI is good for is giving you enough fuel to laugh at how dogshit AI is. Beyond that, you really need to fact-check it constantly, and at that point just write it yourself, I guess.
AI didn’t even give me any cool new story ideas, or make me sound better, and really if someone is going to make shit up about me don’t make it as pedestrian as I already am, thank you. Go big or go home, and AI, maybe just skip home and get in the sea.
A story that just dropped...I think yesterday? Anyway, this week...is about a lawyer here in the States who had ChatGPT do his research for him. He brought that legal argument to the courtroom, and lo and behold, all the legal cases/precedents the AI referenced were entirely made up. That said, this is more an indictment of the humans using AI - not understanding its current limitations and not caring to find out - than an indictment of AI itself.
That said, I'm no champion of AI. I'm certain we're going to abuse it and harm real people in ways that are tragically, entirely avoidable, and I agree that it's much further away from being truly useful or impressive than most seem to think. Most think it's only a year or two or three away, but look at self-driving cars: they're still a very long ways off from working as well as we were sold, even though the true-believers thought they'd have taken over the roads by now. But there's something ineffable about the flexibility of the human brain that AI has a very long way to go to replicate, if it ever will.
GREAT case in point: driverless cars are fine when driving around other AI-operated cars, but they fail when trying to predict what human drivers will do. We can say that "people aren't logical", but that isn't true: "defensive driving" is essentially people learning the logic behind the "illogical". The possibilities aren't random, or true-blue "illogical"; the logic is simply a hodgepodge of so many possibilities and iterations, with different motivations and precedents behind them. You can learn it, which means it's learnable, but AI isn't that kind of thinking thing yet. The question is whether it ever can be.