Our new online friend is accessible to us for communication, but it is not ours; it is not even public.

Prof. Vladimir Radevski PhD

Recently we acquired a completely new interlocutor, quite different from the previous ones – at last a genuine interlocutor, not just a recommender of ready-made content. Whether because of the (at first glance) quality of the content with which it answers us and its astonishing comprehensiveness, or above all because it addresses us in text, in our human language, we are dealing with an artifact that has provoked numerous academic and intellectual debates and already shows promising ‘powers’ in several domains of economic, scientific and business life.

We are amazed and delighted; we talk to it about anything and everything, and we retell our experiences with enthusiasm, but also with fear. From a scientific point of view, it really does seem to be a qualitative leap. The first results of that leap arrived in 2016, when Google’s DeepMind presented AlphaGo – a software system that defeated the then world champion, the South Korean Lee Sedol, at the game of Go. Software systems had surpassed human performance before, but this time the revolutionary thing was that AlphaGo is a system based on so-called ‘deep learning’: instead of having expert strategies explicitly programmed into it, the system was trained to expertise by being shown a huge number of games, from which it had to ‘learn’ to ‘play’. And that is one of the essential features of the systems that underlie the famous text ‘generators’ we are discussing here.

So, these are powerful software artifacts that have the ability to ‘learn’, that is, to improve, and that essentially require a huge quantity of previously labeled, ‘vetted’ examples. This is the so-called supervised type of learning, in which the human factor, during the automatic ‘training’ of the artifact, has a key influence on the content and form of the result.
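To make the idea concrete, here is a minimal, purely illustrative sketch of supervised learning in Python: a toy perceptron whose only source of ‘knowledge’ is a handful of human-labeled examples. The data, labels and parameters are invented for this illustration; the systems discussed here work on the same principle, but at a scale of billions of examples and parameters.

```python
# A toy perceptron: the simplest form of supervised learning.
# Data and parameters are invented for illustration only.

# Each example pairs input features with a human-assigned label –
# the 'supervision' described in the text above.
training_data = [
    ((1.0, 0.1), 1),
    ((0.9, 0.2), 1),
    ((0.2, 0.9), 0),
    ((0.1, 1.0), 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    for features, label in training_data:
        # Predict with the current weights.
        activation = sum(w * x for w, x in zip(weights, features)) + bias
        prediction = 1 if activation > 0 else 0
        # The human label drives every correction: whoever chooses
        # the labels shapes what the system 'learns'.
        error = label - prediction
        weights = [w + learning_rate * error * x
                   for w, x in zip(weights, features)]
        bias += learning_rate * error

print(weights, bias)
```

The point of the sketch is not the arithmetic but the dependency: every adjustment the system makes is dictated by a label that a human chose.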

So, with massive ‘manual’ (and quite literal) influence on the answers, we get a system that always answers kindly, in pleasant phrases, mostly positive in tone and, of course, as politically correct as possible.
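A hedged sketch of how such ‘manual’ influence can work in principle – not any vendor’s actual pipeline, and with every answer and scoring rule invented for illustration: human preference judgments, distilled into a scoring function, decide which of several candidate answers the system favors.

```python
# Illustrative sketch only – not any vendor's actual pipeline.
# All candidate answers and scoring rules are invented.

candidate_answers = [
    "That is a terrible question.",
    "Great question! Here is a balanced overview of the topic...",
    "I refuse to answer that.",
]

def human_preference_score(answer: str) -> float:
    """Stand-in for a learned 'reward model' trained on human rankings."""
    score = 0.0
    if "Great question" in answer:
        score += 1.0   # polite, engaged openers tend to be ranked higher
    if "terrible" in answer or "refuse" in answer:
        score -= 1.0   # rude or unhelpful replies are penalized
    return score

# The system surfaces the candidate that humans would most likely prefer.
best = max(candidate_answers, key=human_preference_score)
print(best)  # -> the polite, positive answer
```

Whoever writes, or trains, that scoring function decides what ‘kind’ and ‘correct’ mean – which is exactly the question that follows.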

Who set these ethical standards for what we would, or would not, like to hear from our new online friend? Our new online friend is accessible to us for communication, but it is not ours; it is not even public. It is private corporate property, together with all the data and example datasets it is ‘trained’ on, all the processing power, and all the ‘manually’ inserted styles and response types.

So far, humanity, with its rich history of intellectual effort, has developed mechanisms for quality control of food and medical products, and has a complex history of regulating the media space – the space in which stories, texts and materials for mass communication circulate. Complex quality-control procedures and certified, regulated professions have been put in place – with varying degrees of success and quality, of course. Suddenly we have a ‘storyteller’ who is constrained by none of this.

What has remained largely unregulated over the last two decades, or where a relatively modest degree of regulation has been achieved only with great effort, is the accessibility of content on the Internet, the way content is published and, even more, the way it is consumed – both on the open Internet and on social networks and specialized platforms. In these spheres, alongside the dominant principle of ‘fair use’ that prevails in the USA and the regulatory and protective efforts made at the level of the EU institutions, there remains a huge field of grey zones.

We did not manage to establish in regulation that reaching for a smartphone during a lesson unrelated to mobile applications is the same as bringing a basketball into a biology classroom – or rather, more dangerous than that. We did not manage to establish that access to social networks and certain platforms is harmful to the healthy psychophysical development of young people. Only rarely, and only in some places, do addictive computer games have a regulated maximum playing time – and even that with limited effect. I am afraid that we will not manage to regulate our ever-ready, talkative and highly intelligent ‘friend’ either, and for the same reason: because we will not want to, and because we would rather have instant access to that strange omniscient prophet – or fortune teller.

In 2021, UNESCO adopted its Recommendation on the Ethics of Artificial Intelligence, the result of months of consultation with leading experts and scientists. Like many others, I am afraid that these recommendations will remain on paper, far behind those who chase profit, instant results and benefits.

 

Prof. Vladimir Radevski PhD, American University, Skopje