Artifact
Interactive Installation, 2019
If art is to be defined as the creativity involved in expressing emotions, visions and ideas in an effective and aesthetically appealing manner, can a machine with the ability to comprehend emotions reflect them back in a work of art that possesses aesthetic value? Can machines empathize with the emotional expressions of other people, or even other machines?
Shaped by these questions, “artifact” is an experiment in human-to-machine and machine-to-machine interaction, in which different types of artificial neural networks are used in an effort to comprehend emotions and reflect them in the form of digital paintings.
Human expressions that are detected with the aid of a camera are analyzed by the machine. The machine produces a painting that reflects the perceived emotion by using a neural network that has been trained mainly on oil paintings. Then, a different machine observes the painting produced by the first one and creates its own painting based on its prediction of what emotion the original painting reflects.
This process, which begins with the comprehension of human emotions, is followed by a hallucinatory and perpetual tour through the hidden layers of the neural networks, corresponding to the differing probabilities that the two machines derive from the same emotion.
Technical Description:
Artifact is a real-time, interactive artificial intelligence application that creates unique oil-painting visuals by detecting visitors’ basic emotional states (fear, peace, happiness, misery) from their facial expressions. The work is grounded in the question of whether an AI application can create an artwork by recognising emotions, and it aims to move between emotional states by producing visuals that correspond to the detected facial expression.

Artifact’s dataset was prepared from oil-painting images selected from the Behance Artistic Media Dataset and the WikiArt Emotions Dataset and labelled according to the emotional reactions they evoke. The dataset is classified into four basic emotions, taking J. A. Russell’s two-dimensional emotion model as a reference. Visuals are generated by an artificial neural network called SNcGAN (Spectral Normalization + Conditional GAN) trained on this custom dataset. With this machine learning system, “artifact” recognises different emotions and generates new oil-painting visuals according to them.

In the interactive installation, faces are detected in the camera feed with a face recognition algorithm built on OpenCV. A CNN-type neural network trained on the FER2013 dataset then detects the emotional state of each facial expression in real time. These emotion values are passed to the generator network of the GAN model, which produces new oil paintings in real time while switching between its learned emotional structures. Finally, the low-resolution generated visuals are upscaled with a neural network called SRGAN (Super-Resolution Generative Adversarial Network) and screened in real time.
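As a rough illustration of the four-emotion classification above, the sketch below maps a valence/arousal pair from Russell’s two-dimensional model onto the four basic emotions the installation names. The thresholds and the quadrant-to-emotion assignment are assumptions made for illustration, not the installation’s actual labelling code.

    # A minimal sketch, assuming emotions are assigned by the quadrant of
    # Russell's valence/arousal plane. Thresholds and the quadrant-to-emotion
    # mapping are illustrative assumptions.
    def russell_quadrant(valence: float, arousal: float) -> int:
        """Map a (valence, arousal) pair in [-1, 1]^2 to one of four emotions:
        0 = happiness (+v, +a), 1 = fear (-v, +a),
        2 = misery (-v, -a), 3 = peace (+v, -a)."""
        if arousal >= 0:
            return 0 if valence >= 0 else 1
        return 3 if valence >= 0 else 2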
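The following Python sketch outlines how such a real-time loop could be wired together: OpenCV face detection, a FER2013-style CNN for emotion recognition, a conditional generator, and an SRGAN upscaler. All model file names, the generator’s call signature, and the FER-to-emotion mapping are illustrative assumptions rather than the installation’s actual code.

    # A minimal sketch of the realtime loop described above, assuming
    # PyTorch models exported with TorchScript. Every file name, the
    # generator's call signature, and the FER-to-emotion mapping are
    # illustrative assumptions, not the installation's actual code.
    from typing import Optional

    import cv2
    import numpy as np
    import torch

    # Face detection with one of OpenCV's stock Haar cascades (an
    # assumption; the piece may use a different detector).
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    # Hypothetical pretrained networks: a FER2013-style emotion CNN, the
    # SNcGAN generator and an SRGAN upscaler (placeholder paths).
    emotion_cnn = torch.jit.load("emotion_cnn.pt").eval()
    generator = torch.jit.load("sncgan_generator.pt").eval()
    srgan = torch.jit.load("srgan.pt").eval()

    # Assumed collapse of FER2013's seven classes (angry, disgust, fear,
    # happy, sad, surprise, neutral) onto the four basic emotions:
    # 0 = happiness, 1 = fear, 2 = misery, 3 = peace.
    FER_TO_BASIC = {0: 1, 1: 1, 2: 1, 3: 0, 4: 2, 5: 0, 6: 3}

    Z_DIM = 128  # assumed latent size of the conditional generator

    @torch.no_grad()
    def detect_emotion(frame: np.ndarray) -> Optional[int]:
        """Return a basic-emotion index for the largest face, if any."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            return None
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48))  # FER2013 size
        t = torch.from_numpy(crop).float().div(255).view(1, 1, 48, 48)
        return FER_TO_BASIC[int(emotion_cnn(t).argmax())]

    @torch.no_grad()
    def paint(emotion: int) -> np.ndarray:
        """Generate a low-res painting for the emotion, then upscale it."""
        z = torch.randn(1, Z_DIM)
        label = torch.tensor([emotion])
        low_res = generator(z, label)   # assumed cGAN signature, [-1, 1]
        high_res = srgan(low_res)       # SRGAN upscaling
        img = (high_res.squeeze().permute(1, 2, 0).numpy() + 1) / 2
        return (img * 255).astype(np.uint8)

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        emotion = detect_emotion(frame)
        if emotion is not None:
            painting = cv2.cvtColor(paint(emotion), cv2.COLOR_RGB2BGR)
            cv2.imshow("artifact", painting)
        if cv2.waitKey(1) == 27:  # Esc quits
            break
    cap.release()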
The other two machines analyse this generated image with a different machine learning model that classifies images according to the emotional state they represent, and each separately recognises the emotional state of the image produced by the machine that interacts with the human. Each then generates a new image corresponding to the emotional state it has recognised and displays it on the screen.
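A sketch of this machine-to-machine stage, under the same assumptions as above: a separate classifier predicts the emotion of the first machine’s painting, and a second generator paints from that prediction. The model files and call signatures are again illustrative placeholders.

    # Minimal sketch of one responding machine, assuming the same
    # TorchScript conventions as above; file names are placeholders.
    import torch

    painting_classifier = torch.jit.load("painting_emotion_cnn.pt").eval()
    generator_b = torch.jit.load("sncgan_generator_b.pt").eval()

    @torch.no_grad()
    def respond(painting: torch.Tensor, z_dim: int = 128) -> torch.Tensor:
        """Classify the incoming painting's emotion, then paint a reply."""
        predicted = painting_classifier(painting.unsqueeze(0)).argmax(dim=1)
        z = torch.randn(1, z_dim)
        return generator_b(z, predicted)  # new painting in that emotion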