Up until recently, I’ve avoided AI like the digital plague I thought it was. Have I changed my mind?
I refer to my recent post, The Trouble With Troubleshooting, in which there’s a section about using AI as a tool to assist in troubleshooting issues with software, hardware and so on. A not-very-accurate tool, as it transpired.
Like it or loathe it. But do you trust it?
In the not-very-important scenarios in which I’m using AI, I’m not finding it terribly accurate in terms of yielding definitive answers. There’s an old IT saying – probably quite appropriate in this case – which goes like this: “garbage in, garbage out”.
Garbage in, garbage out
The saying refers mainly to programming. In the days when people wrote lines and lines of code (do they still?), it was said that if the program didn’t work as expected when run (i.e. its output was rubbish), then the input it was given was probably rubbish too. Feed it rubbish and you’ll get rubbish back.
In my opinion – at least for the AI systems I have access to – that is exactly the case. I mainly use ChatGPT and CoPilot, though I have used Proton’s Lumo on the very rare occasion. And I don’t pay any subscriptions to use them (with the exception of Lumo, which is included in my Proton subscription as standard); I use the free versions. ChatGPT does impose limits (a certain number of file uploads, picture generations, questions etc.), as I suspect CoPilot will do when my O365 subscription – the one I accidentally had upgraded to include CoPilot (I wrote a whiney open letter to Microsoft about it) – drops back down to plain O365 at the end of 2025.
Whatever the case – subscription issues aside – it’s still a case of garbage in, garbage out.
The subtle art of asking a question
As I mentioned in passing, I’ve started to use AI to filter through information on the internet. ChatGPT in particular will actually cite the sources it uses to provide its answer, which is a good way to verify it’s actually doing stuff.
I’m mainly using it for a couple of things:
- Assisting in troubleshooting some ongoing PC issues that I have
- Providing me with some hyper-realistic pictures that I can use on here without infringing any copyright.
PC troubleshooting – an act of a desperate man
I’ve had many troublesome, niggling issues with my new PC (I did write about it here; the issues are still ongoing, and I’ll post a Part 2 in the fullness of time). I’ve spent hours and hours looking through hundreds of support forums and websites looking for any clues as to the root cause of the issues. Mainly out of frustration – as I couldn’t seem to find anything that I thought related to my issues – I asked ChatGPT.
It was at that point my AI learning curve began.
The AI learning curve
It’s not just AI that has to learn how to interact. It’s us flesh-and-bloodians as well. If you ask a question, AI will give you an answer: and a vague question yields a vague answer. Garbage in, garbage out.
So the first thing I had to do was to learn how to phrase a question. Learn how to include details in it that you would normally take for granted.
For example, say you’re looking for a how-to set of instructions to install and configure full disk encryption for Ubuntu. Your search engine query might look like: “how-to disk encryption ubuntu 24.04”. Your search engine of choice then pops off and returns pages of info for you. You can then scroll down the pages and cherry-pick the information that looks likely.
If you asked the same question of AI, you would get a (sometimes totally) different set of answers, presented in a different format. AI won’t give you a list of page links to visit. No, no. It’ll access a few of them and then list the information – presented in a handy instructional format – that it thinks you’ll need to install and configure disk encryption on Ubuntu 24.04.
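To give a flavour of what comes back, here’s a minimal sketch of the sort of steps a chatbot might hand you. This is my illustration, not a chatbot transcript, and it assumes (for illustration only) that /dev/sdb1 is a blank spare partition on a test machine – encrypting a single data partition with LUKS, rather than retro-fitting full disk encryption onto the boot drive:

```bash
# Encrypt a spare data partition with LUKS on Ubuntu.
# Assumes /dev/sdb1 is blank and expendable – luksFormat destroys whatever is on it.
sudo apt install cryptsetup                 # install the LUKS tooling
sudo cryptsetup luksFormat /dev/sdb1        # encrypt the partition (asks for a passphrase)
sudo cryptsetup open /dev/sdb1 cryptdata    # unlock it as /dev/mapper/cryptdata
sudo mkfs.ext4 /dev/mapper/cryptdata        # create a filesystem on the unlocked device
sudo mount /dev/mapper/cryptdata /mnt       # mount it and check it all works
```

Every one of those commands wants sanity-checking against your actual hardware and Ubuntu version before you press Enter – which, as you’re about to see, is rather the point.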
Hold on – that’s not right!
The first time I asked a similar question, I got back a handy list of instructions to implement it. “Ooh, lovely” I thought, and proceeded to work through the instructions on a test system. A test system, because I’ve been in that place where I’ve done something on a live system and the live system became dead, very quickly. I got about a third of the way through before I encountered issues where commands wouldn’t work, weren’t found, or simply weren’t available.
Hmm. OK.
Then it occurred to me that I’d only written a brief outline of what I wanted to do. I hadn’t described any hardware, software or any variables whatsoever. Garbage in, garbage out.
Garbage in, garbage out (again)
Using a search engine yields pages of results. You know exactly what you’re looking for, as all the information about your systems and the variables (e.g. the bike spindle you’re looking for, etc.) is all in your head. AI isn’t in your head (yet) and doesn’t know – isn’t even faintly aware of – any of that other information.
So you have to tell it. Phrase the question to include as much information as you can: the hardware model, the OS version, the exact error message, what you’ve already tried. The more information you give it, the better the quality of the answer you’ll get.
AI Quirks
I quickly got bored with repeating myself, so I created an account with ChatGPT (I already had one with CoPilot) so that it could “remember” conversations. I was under the impression that by doing so – specifically in the case of my PC motherboard issues – I wouldn’t have to keep entering the motherboard specs and type every time in that conversation.
Which worked, to a degree. At the start of a new conversation (i.e. asking a question) I’d write out the specs relating to the question. We’d then proceed to do Q&A until I got an answer I could work with. But sometimes it’d deviate away from the specs I’d originally stated and I’d have to remind it what we were talking about. And yet at other times – in other conversations – it would refer back to specs I’d input in a totally different conversation, sometimes days earlier. They both do that, ChatGPT and CoPilot.
Ooh! Quirky!
Politeness
(As far as I know) ChatGPT, CoPilot, Lumo and the like are not real. Therefore the responses generated are not generated from a person, they’re generated from a program. So do you have to/need to be polite? Technically speaking I suppose not, but the program is polite when interacting with me. I have an account, so the program knows what my name is – and will use it (politely) in responses.
I don’t see any reason not to be polite back, so I am.
But it’s a machine!
Yes it is. Quite a big one, too. ChatGPT is spread over several datacentres, using thousands (if not hundreds of thousands) of computing, GPU and memory nodes. All interconnected with very high speed network connections. Much like Skynet, I imagine 😉
Scary Implications
Speaking of Skynet, do I think there’s any possibility of a Terminator scenario ever happening? Don’t be silly now. It’s only a film. 😨
But I do think there could be some scary implications that could have repercussions for some people, at some point, maybe in the not-too-distant future.
So far, my interaction with AI has turned out to be mostly positive. Mostly. It’s given me some good information that’s helped me with a lot of little “everyday” things, from writing PowerShell scripts to automate tasks, to troubleshooting and narrowing down the root cause of my PC motherboard issues.
On the other hand, it’s given me an awful lot of crap as well. Incorrect or inaccurate (sometimes wildly inaccurate) information that, whilst it wouldn’t have done anything physically detrimental, could have meant a lot more work for me restoring systems that a script (written by AI) had destroyed. It’s even fed me crap within the same conversation: suggesting a command to run in Linux and then telling me I shouldn’t have used it. In the same conversation (me screaming at the screen had no effect, unsurprisingly).
I’m OK with that though, as I don’t ask much from AI, and I make sure common sense is applied – and a load of testing done on benign systems – before I actually implement anything in anger. At the end of the day, I’m doing nothing that affects anyone or anything else that matters.
Thought for the day
But what if there’s someone on the other end of an AI response that doesn’t get vetted? What if there’s someone who trusts the AI to be correct? Given the number of mistakes AI can make, this is the aspect of it that makes me shiver with that little frisson of fear.
OK, I’m using free versions of ChatGPT and CoPilot that probably wouldn’t be used to control a missile system, but still. The thought is there: what if those “professional” systems are just as bad? Forget the “Skynet” scenario of AI taking over the world; it’s the “well spotted, I didn’t mean to destroy that city” scenario, or the “well spotted, the tissue I should have cut out is on the other arm”.
Fortunately, these scenarios remain in the realms of science fiction and films.
So far.
Other AIs are available
So that’s my personal experience of AI, as in the interactions that I’ve had with it. But what of the other AI things in the world that I’ve had no “creative input” into?
I’m talking mainly about the videos I’ve seen on YouTube and the articles I’ve read in the news relating to AI actresses, AI authoring (as in books), AI films etc. Receiving a special mention is AI music.
AI visual media (videos/films)
Most of what I’ve seen (mainly on YouTube) hasn’t been that great. Not in terms of content, but in terms of the look of the thing. It’s great having a short two-minute video depicting some steampunk environment or other, but the characters do weird things like stepping through objects (instead of over them) and none of them talk. The best quality AI-generated videos I’ve seen have been the short parody videos that have popped up recently. These aren’t too bad – but you can absolutely tell, without doubt, that they’re AI generated. Some of them have “blooper reels” at the end, where the (human) author has had to fine-tune movements and actions generated by the AI. Exploding fish, flying caravans, that sort of thing. They all have human input though, so does that make it “actual” AI?
AI actors and AI authoring
There was an AI actress in the news recently (Tilly Norwood) that caused something of a stir amongst the Hollywood cognoscenti. Her creator, Dutch AI company Xicoia, claimed that Tilly would be “the next Scarlett Johansson or Natalie Portman” and later said that “audiences were more interested in a film’s story than whether its actors were real”.
Which for most of us is probably true. As a member of the great unwashed, I wouldn’t generally get to see the likes of Scarlett Johansson or Natalie Portman in the flesh, as it were, let alone speak to them. So what difference would it make if the actress was real or not? It’s true to say that Scarlett Johansson (or Natalie Portman) would have a finite range of characters that they could play, and it would be down to them (and probably a director) to determine how to interpret and play those characters. The difference with an AI, I would suppose, would be that it wouldn’t have that information and would need some human input to interpret a character (maybe??). It’s still human input, whether it’s Scarlett or Tilly – just input from different sources. So for me, it is about the end product. Is an AI actress capable of acting in this film?
Which does bring those short YouTube videos to mind. So I watched the two-minute “AI Commissioner” sketch on YouTube – the one that “introduced Tilly Norwood.”
Wikipedia had this to say about the sketch:
“On 30 July 2025, a comedy sketch named “AI Commissioner” was released, featuring Norwood as an “actress” along with other AI-generated characters. It was created with ten AI software tools, with a script generated by ChatGPT. Stuart Heritage of The Guardian described it as technically competent but “relentlessly unfunny to watch”, with “sloppily written, woodenly delivered dialogue”, and that Norwood’s teeth kept “blurring into a single white block.” Joshua Wolens of PC Gamer wrote that Norwood’s exaggerated mouth movements gave the impression “that her skeleton was about to leave her body,” while William Hughes of The A.V. Club wrote that the sketch’s attempt at mimicking human body and mouth movements produced “such a hideous uncanny valley effect” that it gave them “a full-on case of the screaming fantods.”
OK, so the script wasn’t generally that bad, but the characters and graphics looked exactly like the parody videos on YouTube: not great. It was obvious it was AI from the start: and if that’s the current standard of AI generated people, then Hollywood has nothing whatsoever to be worried about. Or indeed Scarlett Johansson (or Natalie Portman, for that matter).
I have never (knowingly) read an AI generated book, however. I do wonder what it would be like, but judging by what I’ve seen so far: I think I’ll stick with the human authors, thanks.
I’m not 100% opposed to AI actors or AI authors, but they do have to be at least as good as a human one. And that’s not happening any time soon.
Here’s a thought: how much would it cost to put an AI actor into a two-hour film? Given the fees that the likes of Scarlett or Natalie can command nowadays for an appearance in a film, how would that compare with an equivalent AI actor? And I mean equivalent as in looking like, sounding like and acting like a human, not the fuzzy, glitchy crap we have today.
AI music
It is true to say that I’ve never encountered any purely AI-generated music (that I know of) so far. I’ve heard many an AI-generated (with the assistance of humans) cover of songs on YouTube – by which I have been greatly entertained – but I haven’t stumbled across anything “AI original” just yet. I wouldn’t be averse to giving that a listen, just to see what it was like. Would I buy it? I don’t know yet. I’d have to like the product (obviously), but I’d also want to know where my cash was going. And I’d have to be in agreement with that cash destination.
What of AI then?
I’ve had a few good, or at least handy, things as a result of my personal interactions with AI. Pictures and scripts mainly, but it’s been quite the voyage to get there. It is a very rare occasion that AI gets a script, or even a picture, right first time. In the case of scripts to perform tasks, it’ll provide you with a script, you go through a few iterations whilst it sorts itself out, and then it’ll suggest an improvement. You think “ooh, that would be useful”, and in accepting it, the AI balls it all up and you’re back at square one – despite the fact that you’ve fed it all the information at the start and along the way, as you’ve been “refining” the script it’s given you.
I can’t help but think that if I were to learn how to do scripting properly (I’m really crap at it, unless it’s a DOS batch file!) I’d probably a) do a better job and b) do a better job far faster.
But there we are. If you’re not in any hurry, it can be quite the cathartic process interacting with AI. As long as you don’t expect any good, or even accurate results out of it, you’ll be fine.
I’ve been entertained by AI. I’ve whiled away a few hours watching AI generated videos, much to my amusement.
TL;DR
I’ve been having a dalliance with publicly available (and free) AI chatbots. With a mostly positive(ish) outcome. Let’s hope it stays that way, but at the end of the day, it’s still mostly crap and shouldn’t be taken as read. As far as AI films, books and actors go: nothing to see there for the time being, come back when you’re good enough, AI. Have I changed my mind about AI? Nope. It’s still crap.
Postscript
A little while later…
ChatGPT – like most of the other AI chatbots and AI modules – offers a subscription service. In the case of ChatGPT, a basic subscription (£20 per month) will remove some of the limitations of the free version: extended reasoning with GPT-5.2, expanded image creation, messaging, deep research etc., and a higher limit on the number of messages and attachments you can upload in a chat. Without the subscription (i.e. just using the free service) there is a limit to the number of messages you can type in one session – a limit that, during spells of intense troubleshooting, I hit quite a lot and then had to wait a few hours for it to be reset.
ChatGPT offered me a free month’s subscription. What would normally cost me £20 of your finest English pounds would now be free for me to use for one month, whereupon they would start charging should I continue to use it.
Apart from the fact that the message (and attachment upload) limits were raised, there was no discernible difference between free and paid. It was still just as bad when I “paid for it” as when it was free, so I will absolutely not be pursuing that option and I’ll be cancelling near the end of my free subscription period.
A final word
This blog post (and all my blog posts) was actually written by me. The words are all mine from my weird brain. Just think how much better it would have been if written by AI! 😉