Glossary of the exhibition SHIFT

This exhibition uses terms that play an important role in AI research. This glossary provides easy-to-understand explanations of the individual terms.

Algorithms are instructions, rather like a recipe. They form the basis of → artificial intelligence (AI). An algorithm contains step-by-step instructions for solving a specific problem.
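The recipe idea can be made concrete with a short sketch in Python (the function name and the example list are invented for illustration): a small algorithm whose steps, taken one after another, solve the specific problem of finding the largest number in a list.

```python
def largest(numbers):
    """An algorithm: step-by-step instructions for finding the largest number."""
    # Step 1: take the first number as the current best guess.
    best = numbers[0]
    # Step 2: compare every remaining number with the current best.
    for n in numbers[1:]:
        # Step 3: if a number is larger, it becomes the new best.
        if n > best:
            best = n
    # Step 4: once every number has been checked, the best one is the answer.
    return best

print(largest([3, 7, 2, 9, 4]))  # → 9
```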

Bias (partiality, prejudice): In the context of → artificial intelligence, bias describes a distortion that results from incomplete, incorrect, or prejudiced data used in → machine learning. Thus, for example, a photo booth trained on a biased data set might fail to recognize a face with dark skin.

Biotechnology and bioinformatics are areas of research that combine methods from informatics and the engineering sciences with knowledge from chemistry, biology, and medicine. In recent years, advances in genetic research in particular have been achieved through the use of computers and → artificial intelligence. Starting from a DNA sample of body cells, computer-aided DNA phenotyping can also be used to draw conclusions about the physical appearance and biogeographic origins of an individual, although such conclusions are reliable only to a very limited degree. Examples of this can be found in medicine (→ nanobots), where predictions play a role in the development of medicines.

A chatbot is an application of → artificial intelligence (AI) in which users ask questions that the system then answers. This can take place either via text or via audio input. The best known example of this is the program ChatGPT; the technology is also often used in support chats on commercial websites to handle common questions and problems. The simulation of a voice by AI is already in use in numerous areas today, for example in virtual assistants like Siri and Alexa. Programs like Lyrebird, which were trained using speech samples from thousands of people, can imitate individual voices convincingly and independently form new sentences that the person concerned never uttered (→ deep fakes).

Deep fakes are media content that has been altered, distorted, or generated from scratch using the technology of → artificial intelligence, and they are very difficult to distinguish from reality. Media manipulation as such is not a new phenomenon, but methods of → machine learning make it possible to create very convincing forgeries of photos, videos, or voices largely automatically.

Dual-use describes the ethical challenge that in principle every technology can be used for both good and bad purposes. Ethicists point out that the negative effects can be scaled in particular by means of learning-capable systems, because they can affect large numbers of people very rapidly. This means that the developers of these systems bear a greater degree of responsibility.

The term digital humanism covers the discussion of the opportunities and perspectives of the interaction between humans and technology, beyond any apocalyptic scenarios. Digital humanism retains the concept of the singularity of humankind and its abilities and uses digital technologies to expand these rather than to limit them. The history of the development of → artificial intelligence stretches back over more than sixty years and raises countless questions: To what end can we make use of AI? Will the systems one day be more intelligent than we are? In future, will we increasingly be accompanied by “intelligent” humanoid (that is, human-like) machines? And can a world with AI perhaps even become more humane (→ posthumanism)?

In 1956 the computer scientist John McCarthy introduced the term artificial intelligence (AI) at a conference in New Hampshire, USA. The participants examined how the structures and activities of the human brain could be emulated mathematically. AI is therefore often described as an “imitative intelligence.” It is commonly regarded as a key technology of the 21st century. Well-known examples of its application today include digital assistants, autonomous vehicles, and intelligent robotic systems in manufacturing. AI is also already in use in medicine and climate research. In many cases AI operates more or less invisibly in the background, for example in recommendation systems in online shops, facial recognition, and the control of drones.

Artificial neural networks are complex mathematical functions originally modelled on the structures of the human brain. These functions have free parameters (= tuning knobs), which can be optimized (= adjusted) in order to solve a particular problem (→ machine learning). Although artificial neural networks running on computers can process large amounts of data more quickly than the human brain, the human brain remains considerably more flexible, more capable of learning, and above all more energy-efficient.

Machine learning (ML) is a sub-area of → artificial intelligence. Its algorithms aim to recognize correlations or regularities in large amounts of data (big data). There are different methods of machine learning. In supervised learning, researchers use data with known answers, from which an algorithm learns patterns and then applies what it has learned to unknown data. In unsupervised learning, the algorithm starts only with data (for example, pieces of music), in which it searches independently for patterns. For some years now, deep learning has proved to be a forward-looking approach on the way to an AI that can act independently. Here, machines equipped with a multi-layer, deep → artificial neural network are trained in such a way that they can carry out complicated tasks more flexibly. In this way the machines are even able to make decisions within a certain framework, to question these decisions, and if necessary to modify them. However, the process by which they do so generally cannot be followed by users, or even by scientists and programmers (the black-box problem).
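Supervised learning can be shown in miniature with a Python sketch (the data and the function name are invented for illustration): from question-and-answer pairs where the answer is always three times the question, a single free parameter is nudged step by step until the program has "learned" the rule, which it can then apply to numbers it has never seen.

```python
def train(data, steps=1000, lr=0.01):
    """Supervised learning in miniature: fit one parameter w so that w * x ≈ y."""
    w = 0.0  # the free parameter, initially a blind guess
    for _ in range(steps):
        for x, y in data:
            error = w * x - y   # how wrong is the current guess?
            w -= lr * error * x  # nudge w to reduce the error (gradient descent)
    return w

# Training data with known answers: y is always 3 times x.
data = [(1, 3), (2, 6), (3, 9)]
w = train(data)
print(round(w, 2))   # → 3.0  (the learned rule)
print(round(w * 10))  # → 30  (applied to unseen data)
```

Real machine learning optimizes millions of such parameters at once, but the principle of repeatedly reducing the error on known examples is the same.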

One application in → biotechnology is nanobots: microrobots that can travel through the human bloodstream and fight a disease directly at its source. Initial experiments are also being carried out with biological microrobots made from the cells of frog embryos, which are able to reproduce themselves (xenobots).

Non-fungible tokens (NFTs) are digital proofs of ownership of mostly intangible property. Unlike units of a cryptocurrency, each NFT exists only once. Ownership is proved unequivocally by means of blockchain technology. A blockchain is a continuously extendable list of digital data sets stored in individual blocks, which are linked one behind the other in a chain by means of encryption (cryptographic) processes. Examples of NFTs include digital artworks, objects in computer games, digital entry tickets, and domain names for websites. However, they can also be digital proofs of ownership of objects that actually exist, such as paintings or other individual items.
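The chaining of blocks can be sketched in a few lines of Python using the standard hashlib library (the block contents are invented for illustration): each block's cryptographic fingerprint depends on the fingerprint of the block before it, so a forgery anywhere in the chain becomes detectable.

```python
import hashlib

def block_hash(data, previous_hash):
    """Each block's fingerprint depends on its data AND on the previous block."""
    return hashlib.sha256((previous_hash + data).encode()).hexdigest()

# Build a tiny chain of three blocks.
chain = []
prev = "0" * 64  # conventional starting value for the first ("genesis") block
for data in ["block 1", "block 2", "block 3"]:
    prev = block_hash(data, prev)
    chain.append((data, prev))

# Tampering with an earlier block changes its fingerprint, which then no longer
# matches the value baked into the next block, so the forgery is detectable.
tampered = block_hash("block X", "0" * 64)
print(tampered == chain[0][1])  # → False
```

This hash-linking is what makes the record of ownership behind an NFT so difficult to alter after the fact.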

Posthumanism or transhumanism are philosophical lines of thought in which humans relinquish their supposed supremacy in the world in favor of other organisms or machines. Thus it is possible to imagine not only new combinations of humans and machines (cyborgs), but also new forms of interaction and community between AI-controlled machines, humans, and other forms of existence (→ digital humanism and → biotechnology).

Robotics is an interdisciplinary science that includes sub-areas of information technology, electrical engineering, and mechanical engineering, and that focuses on the design, control, manufacture, and operation of robots. The term was coined in 1921 by the Czech writer Karel Čapek, derived from the Czech word “robota” (drudgery, forced labor). In common parlance the term usually refers to hardware robots and mechanical appliances. Pure software robots, also known as “bots,” consist only of programmed code (→ chatbot). Today, AI-capable robots are able not only to imitate the facial expressions, gestures, and appearance of humans, but can also speak and learn. In the 1970s the Japanese roboticist Masahiro Mori described the contradictory emotional effect of artificially created figures as the “uncanny valley”: on the one hand, machines appear increasingly likeable the more they remind us of living beings; on the other hand, they trigger feelings of discomfort when they become too similar to humans.

In 1950 the mathematician and logician Alan Turing developed the so-called Turing test in order to determine whether a computer has powers of reasoning equivalent to those of humans. He himself originally called this test the “imitation game.” In the test, a human puts questions to another human and to a machine via a keyboard and computer screen. If at the end the questioner cannot tell which of the two is a machine, the machine has passed the Turing test, and powers of reasoning similar to those of humans are ascribed to it. A practical application is the Captcha (Completely Automated Public Turing test to tell Computers and Humans Apart), a fully automatic Turing test by means of which humans can be distinguished from so-called bots.