Thursday, January 4, 2024

Artificial Intelligence - Jobs and Misleads

See also https://jim-quinn4.blogspot.com/ (my CV)

Artificial Intelligence is usually defined as either:
systems that think and act like humans, or systems that think and act rationally.

The difference is that humans are emotional: in England we support the England football team because we are English, although rationally we should probably support a more successful team.

However, how do we actually use the words Artificial Intelligence? Surely we mean systems that think, not systems that merely collect and rearrange data like ChatGPT. Common parlance says ChatGPT produces thoughtful output and is AI, but actually it is not, for it is software comprehensively programmed to link words together into something that can be understood. There is a problem with the definition of AI, because we can programme software to do many jobs, but not to think for itself and be creative as humans can – that is years away.

Robots have for many years, decades even, been able to talk to us by stringing words together in response to a comment (all pre-programmed into software). I might say "My name is Jim" and the system responds, in writing or speech, with "Hullo Jim, my name is Eliza". This might be termed AI in its very simplest form, for the system gives you a meaningful response, but it is not AI at all, for it is not thinking for itself. It is simple software.
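
To illustrate what I mean by pre-programmed, here is a minimal sketch in Python. The patterns and replies are my own invention for illustration, not the real Eliza program:

```python
import re

# A couple of hand-written patterns: the "robot" only matches text and fills in a template.
# These rules are invented for illustration; the real ELIZA used a much longer script.
RULES = [
    (re.compile(r"my name is (\w+)", re.IGNORECASE), "Hullo {0}, my name is Eliza."),
    (re.compile(r"i feel (\w+)", re.IGNORECASE), "Why do you feel {0}?"),
]

def reply(comment: str) -> str:
    """Return a canned response by matching the comment against pre-programmed rules."""
    for pattern, template in RULES:
        match = pattern.search(comment)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."  # fallback when nothing matches

print(reply("My name is Jim"))  # -> Hullo Jim, my name is Eliza.
```

Every response comes from a rule a human wrote in advance; nothing in the program is thinking.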

ChatGPT (version 3 or 4) looks rather like a Google search engine – ask it a question and it gives you an answer – but it does not think about that answer. It is a large language model: it has been trained on enormous amounts of text from the internet, and it generates its reply by repeatedly predicting the most statistically likely next word, stringing words together from the patterns it has learned rather than from any understanding. Search is something we humans do too, but we select which result seems reasonable by thinking; ChatGPT can make mistakes because fake news and plain errors are mixed into the text it learned from, and it can also invent plausible-sounding statements of its own. Luckily fake news is not so common, but it may become so, especially if ChatGPT produces lots of output that later systems learn from in turn. The problem with ChatGPT is that it creates text that looks good but is nowhere near as good as what a human content researcher or strategic thinker could produce. Businesses that use ChatGPT should recognise that its text could be over-simple, could be fake, or could be poorly thought through. Business should be aware, and employ good people, not go for cheap, possibly woolly results.
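
Here is a toy sketch of what "stringing words together from learned patterns" means. This is a crude word-pair table, nothing like the scale of a real language model, and the training text is invented:

```python
import random
from collections import defaultdict

# Made-up "training" text standing in for the internet-scale text a real model learns from.
corpus = "the cat sat on the mat the dog sat on the rug the cat chased the dog".split()

# Count which word tends to follow which: a crude stand-in for learned statistical patterns.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 8) -> str:
    """String words together by repeatedly picking a likely next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug the cat chased"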

Thinking is more than just collecting data together, so ChatGPT is not AI, even though it has that label.

How do we create things? That is surely closer to the real definition of AI. As a Professional Engineer, I have creative ideas about how to design something, but I test those ideas after making the thing, so I have to prove that my "news" is not fake. A creative idea is also defined by the Patent system: it has to be a new idea, it has to involve an inventive step, and it has to be capable of being manufactured. By searching the list of existing patents, I can gather ideas to help me form an inventive step, and confirm that it is new because it is not already listed. I have yet to see any software create such a new idea, other than a random assortment of untested lines on paper, for real AI does not yet exist.


Thinking means more than just accepting a data search – double or triple checking is necessary, looking at various data sources and rationally working out what is most likely. A good understanding of STEM (Science, Technology, Engineering, Maths) helps to identify fake news. I had to search Google three times to find that the human brain only develops into its two halves at about 30-35 weeks, just a month before normal birth. How do you trust what you see from ChatGPT? We already have a problem online identifying what is correct, as that Google search illustrates.
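
To put the cross-checking idea in the simplest possible terms, here is a sketch that takes the answer most of the sources agree on. The sources and answers are invented for illustration, and real checking would also weigh how trustworthy each source is:

```python
from collections import Counter

# Invented answers from three independent sources to the same factual question.
answers = {
    "source_a": "30-35 weeks",
    "source_b": "30-35 weeks",
    "source_c": "12 weeks",
}

counts = Counter(answers.values())
most_likely, votes = counts.most_common(1)[0]
print(f"Most likely answer: {most_likely} (agreed by {votes} of {len(answers)} sources)")
```

Even this crude vote does more checking than simply accepting the first answer that comes back.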


AI brings heightened risks: in surveillance (facial recognition software is essentially a data search engine, comparing a photo taken in the crowd against photos in a database, and it often misidentifies black people), in discrimination against minorities, and in a lack of accountability when things go wrong. We have to challenge these things.
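
To show what that comparison amounts to, here is a rough sketch, assuming some face-recognition model has already turned each photo into a list of numbers. The numbers and the threshold are invented for illustration:

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face feature vectors (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented example vectors: one from a watch-list database, one from a crowd camera.
database_face = np.array([0.2, 0.8, 0.1, 0.5])
crowd_face    = np.array([0.3, 0.7, 0.2, 0.5])

THRESHOLD = 0.9  # an arbitrary cut-off; real systems must choose one, and that choice
                 # drives the false-match rate, which is known to be worse for some groups

if similarity(database_face, crowd_face) > THRESHOLD:
    print("Flagged as a match")  # the system asserts identity from a single number
else:
    print("No match")
```

The decision is just a number crossing a threshold; there is no thinking, and no accountability, built into it.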


Bioethics can be described as a combination of two subjects – Biology and Philosophy. Philosophy shows the ropes of a morally good life in society for any human being, and ethics sticks to the notions of right and wrong, good and bad. In the field of medicine and healthcare, professionals face complex and disputed issues. I think the principles of bioethics are: Beneficence (doing good), Non-maleficence (doing no harm), Autonomy (giving the patient the freedom to choose freely, where they are able), and Justice (ensuring fairness). It seems to me that current versions of AI like ChatGPT could do harm – they can generate fake news without caution, they could make documents too simple, they could fail to prioritise their messages, they could turn all our jobs into low-paid ones, and they could actually destroy jobs.


Current versions of AI might also save lives – there is the argument that autonomous cars could save lives – but most of us feel that taking our hands off the steering wheel is a big NO-NO without a good demonstration of reliability; a recognised safety certificate is helpful! Tornado aircrew use software in their Terrain Following Radar, but they were trained to accept its safety and reliability, and in the Iraq War Tornado flew at high speed at 50 ft altitude at night! Their Safety Certificate was well founded, produced by Professional Engineers, precisely so that the aircraft could avoid detection by flying too low for enemy radar to pick them out and know where they were.


Software Engineers will be bound to explore new concepts, for it is intellectually stimulating, and there is a need for helpful software in needle-in-a-haystack situations (like Radiologists searching for that almost invisible tumour that needs photo-enhancing software). Indeed the Radiologist will tire during the day trying to find tumours, so good software would help them to view more patients if it speeded up their search – but this software would not replace thinking, for a Radiologist's job is more than just searching.
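
Here is a minimal sketch of the kind of help I mean, assuming the scan is already loaded as a grid of brightness values. The numbers and the cut-off are invented; real radiology software is far more sophisticated and must be clinically validated:

```python
import numpy as np

# Invented 2D "scan": brightness values, with one faint bright patch hidden in it.
scan = np.random.default_rng(0).integers(40, 80, size=(8, 8))
scan[2:4, 5:7] = 120  # the "almost invisible" region we want to draw the eye to

# Simple enhancement: stretch the contrast and flag pixels well above typical brightness.
stretched = (scan - scan.min()) / (scan.max() - scan.min())  # rescale to 0..1
suspicious = stretched > 0.8                                  # crude cut-off for review

rows, cols = np.where(suspicious)
print(f"{len(rows)} pixels flagged for the radiologist to look at first")
```

The software only points at candidates faster; deciding what a flagged patch actually means is still the Radiologist's job.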


I am happy with Google searches, for they show huge lists of potential data which I can explore and think about, but ChatGPT potentially stops us thinking for ourselves by presenting an apparently finished argument, and that really worries me, for then it is difficult to spot misleading errors. We respect a qualified human Professional, but there is NO qualified Professional behind ChatGPT output. I think we should hang a notice around ChatGPT and its ilk which says "This output may mislead", just as we have notices on cigarette packets that say "this might kill", so many years after unscrupulous businesses claimed that cigarettes do not cause cancer. And I would extend that argument to declare that ANY software which appears to produce a conclusive argument should have a "This output may mislead" label stuck to it. Thus, facial recognition software should have such a label too. The new UK Department for Science, Innovation and Technology is in the process of forming an AI group that could consider regulations such as I suggest.


What is democracy, other than the freedom to think and to communicate between ourselves? Why should we allow software to support autocracy, where the autocrat is the AI itself?


We cannot all be trained as Software Engineers. Supermarket cashiers have long been used to automatic pricing totals, and they can still be cheerful with the customer. There are many jobs in between that software may help or change.


We need to be cheerful in life: Let us enjoy it and take care with current versions of AI.

