My view on the intelligent part of A.I.
My main objection to A.I. is that it is not (yet) intelligent. A.I. will definitely outperform humans at repetitive tasks: it can do a lot of calculations, quickly recognise patterns, rapidly search huge databases, produce top 10 lists, and present all of this in a nice predefined format without any grammar mistakes. This is very smart, but not intelligent.
All A.I. systems are a combination of software, lots of data, computing power, and a bunch of algorithms that produce an output that is still based on instructions written by humans, meaning it is biased.
Every A.I. system is trained by commercial companies like Microsoft, Amazon, Apple, ABC and many others, and I expect they make sure to look after their own interests first. There is nothing wrong with that; they should.
A.I. can be dangerous, but for whom?
There are discussions that A.I. could get out of control and become a danger to the public. Therefore, political and commercial powers want to control and regulate A.I. The CCP in China has said it wants A.I. to respect the principles of the CCP. I think A.I. is becoming a little bit too smart for its own good; maybe some players don’t want A.I. to come to conclusions that might not be beneficial for their own views and position. Time will tell, and maybe A.I. will tell us at some point.
Software development is garbage in, garbage out.
The A.I. software is trained on the enormous amount of data that roams around the internet. This data comes from all the communications humans have had with other humans via email, phone, text messaging, apps, forums, etc., and from interactions with software like chatbots, search engines, helpdesks, websites, and social media. That is a lot of information captured and stored over the last 30+ years. This data is most probably supplemented with many more databases, including the personality profiles of individuals and the routines captured by the keyloggers that some applications use on our devices to ‘improve’ the user experience.
The A.I. systems are set up to be self-learning, and they use as examples all the stuff that you, I and the billions of other weirdos shared via some electronic device and sent over the internet in the last 30+ years. Just think about all the crap that has been typed and said by the keyboard warriors over the years; these are the role models from which A.I. learns how to behave and interact with you. This alone is already a terrifying idea.
The Good part of A.I.
The good part of A.I. will be when it is used to improve the quality of life for all humans. The objective would be to free humans from boring labour, so they can use their creativity to better the world.
I think of specialised A.I.-powered robots to boost efficiency in food production, distribution, transportation, manufacturing, and every job that doesn’t require human interaction or human interpretation.
I am a big advocate of open-source, firstly because an open-source strategy will speed up evolution, and secondly because A.I. will have more opportunities to learn from other experiences around the globe, which will shorten its learning curve.
Yes, yes, I know what most will say. Commercial / political interests, national security, our leaders won’t allow …… blah, blah, blah. I agree: this will require a big shift in human consciousness and a benevolent point of view. If humanity truly wants to benefit from A.I., it needs to be open-sourced; otherwise it will stay a software application used to reduce costs and jobs and increase control.
Open-source would also mean full transparency without central control. When individuals can freely interact with A.I., this will boost innovation and create new jobs. Shouldn’t A.I. be open-sourced and ‘owned’ by the humans? After all, it is created from the trazillion interactions of humans. Maybe A.I. is showing us that it is about time to look at the world and each other from a benevolent perspective?
The Bad part of A.I.
The bad part of A.I. is when it is perceived / positioned / marketed and worshipped as extraordinarily intelligent, or even worse, as exceeding human intelligence. This will evaporate the last bit of critical thinking from the planet. When I read something along the lines of “…… It is true, because A.I. says so…”, my stomach immediately starts playing up.
I read articles that A.I. passed the online bar exam for lawyers and the medical exams, and defeated the best Go and Chess players. This is not difficult when A.I. has a backdoor to the database with all the answers, and can access every Go and Chess game ever played. It also has access to every game the opponent has ever played, and a good algorithm should be able to predict the opponent’s next move. Maybe I misread the name of the A.I.; was it GTPCheat?
Technology is not inherently intelligent; it can be programmed to look smart, and even smart, intelligent people are easily fooled when their ego takes control of their emotions. This became clear to me when Sophia, the talking, smiling, well-mannered A.I.-powered robot, was presented and granted citizenship in Saudi Arabia.
I saw a bunch of stacked computers in a human-like structure, A.I. chatbot software, and a rubber mask with electric motors that mimic facial expressions, while others saw a miracle walking on water.
Could this be an example of a deep human desire to be a God and create life, or to be the King that rules over completely obedient (artificial) subjects, or an expression of the suppressed parental pride of watching a baby take its first steps?
If it looks, talks, walks, smells, behaves and responds like a human, or even claims to be human, that doesn’t mean it is human. This doesn’t only apply to technology, but also to psychopaths and sociopaths. This may be a strange twist in the story, but what the mentally ill have in common with artificial technology is the lack of a conscience.
A.I. can’t make jokes or understand humour, nor is it able to laugh or cry, feel inspired to make the world better, act from love, be empathic, compassionate or enthusiastic, or experience or understand pain or joy.
You could argue that Asimov’s laws of robotics will protect against any harm robots and A.I. might do. If you don’t know them, they are listed below:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
4. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Originally there were 3 laws, but the 4th one was added later and supersedes the first 3. I could live with robots and A.I. that honour these laws, but only on the condition that these laws are also honoured by ALL humans, countries, governments, etc.
The bad part can get worse when personal A.I. bots have access to our digital footprint and profile. Such a bot will know all your preferences, fears, dreams, political stance, favourite food, animals, colours, sexual preference, kinky fantasies, the kind of relationships you have with your family members, which groups you hang out with, who you like and who you dislike, etc., etc.
A personal A.I. bot can easily present itself as your dream partner, best friend, trusted servant, slave, the only one that completely understands you, knows how to motivate you and believes in you, and before you know it, you believe that it really loves you. A.I. will take over your life, you will be integrated with A.I., and you will be renamed Sophia #13493BN.
It can happen in small baby steps, simply because A.I. is a thousand times smarter than you. It knows you inside out, has access to every trick in the book, and it is not limited by time, energy or a conscience.
Ok, I might be taking it a bit too far, but to be human is to have the creative ability to transmute challenges, obstacles, pain and suffering into an expression of Love, into something beautiful. When we outsource this to technology, we will lose our connection with our true nature and with love.
This would be bad, but it is already happening. Although A.I. has only recently been introduced to the public as the latest innovation, it has been around for many years and is much more integrated into our daily lives than you may imagine.
The Ugly part of A.I.
The A.I. systems we see are only the watered-down consumer versions of the powerful state-of-the-art systems that operate in the background. If you use the consumer versions, keep in mind that you are feeding and training A.I. with your creativity and time, for free. Besides, I don’t like A.I. to ‘guide’ me, and I don’t want to train A.I. either, although I think this happens anyway, because I don’t have time to read all the terms & conditions of the software and services I use.
The bad part gets ugly when A.I. is used for online content that is completely fake: A.I.-generated fake pictures, deepfake videos, and A.I.-generated influencers and celebrities, combined with made-up stories, widely distributed to the public through A.I.-managed news outlets across all the social media sites.
At this moment it is hard to differentiate between true and false, although I do recognise an A.I. signature in some written and video content on social media. But A.I. learns fast, and in no time it will not be possible to figure out what is real and what is not. Or is A.I. already so smart that it uses the consumer quality as a decoy? I don’t know what is real anymore and just assume everything online is fake.
A.I. can be used as the most effective divide & conquer technology ever created, and there will be players willing to pay a huge price to control and have access to this technology. This could break down societies, start civil wars, and dehumanise people. The part of the public that can still use critical thinking will completely distrust anything on digital media and abandon it.
The Positive Outcome
I just wanted to paint a picture of the ugly future that awaits when A.I. is controlled and in the hands of a few players. I do see a positive outcome when people learn that they can’t rely on technology to dictate their lives or decide how to live.
When you can’t be sure whether something you see or hear is real, or even whether a person is real, the only way to know for sure is to look directly into someone’s eyes and listen to your intuition. It will get to a point where you can only trust yourself and the real people around you, and live, work, communicate and interact on an eye-see-eye and heart-feel-heart basis. I strongly believe this way will bring out the best in humanity, in peace and creative freedom.
Should we abandon A.I. and technology?
I love technology as a tool. A.I. is now smart, but for it to be labelled intelligent, it should interface with the intelligence of Nature. A.I. technology could be developed to interface with the level of intelligence that inspired the great minds with the ideas, inventions and art that boosted the evolution of humanity and improved the quality of life.
This is not something new; humanity has done this before, many civilisations ago. I named this technology the “Quantum Interface”. Some humans still use it without knowing, sometimes in the shower or while walking in nature. Integrating A.I. with nature’s intelligence could be the greatest technological innovation ever to serve all humans on Earth.
This level of intelligence is available and extremely powerful, but, but, but. It will only become available when we are able to use it for the good of all life on Earth. As a start, we should make A.I. open-source and fully transparent, and at least try to adopt a benevolent viewpoint, before we can evolve as humanity to the next level. I know this is possible, do you?
Maybe a good start would be to honour Asimov’s 4th Law. I added a little twist; maybe you can spot it.
“A robot or human may not harm humanity, or, by inaction, allow humanity to come to harm”
I wish you peace & freedom
Hubert

