Don't take a cat for a hare... or should you?

Science fiction writer Philip K. Dick was not far off when he asked whether androids could dream of electric sheep. Today, machines are able to imagine and create the faces of new people, indistinguishable from real ones. This application of artificial intelligence (AI) has produced unprecedented developments in certain fields of mathematical modeling, even intersecting with an apparently distant branch of mathematics: game theory. Let's review how we got to this point.

At the dawn of AI, the founders of the discipline racked their brains over how a machine could learn to discriminate, in a broad sense: to tell a fraudulent transaction from a legitimate one, a junk email from an ordinary one, or a cat from a hare, among many other examples.

The mathematical models underlying this exciting task have been called, in a display of originality, discriminative models. After years of effort we have seen that machines do indeed learn to distinguish, even telling the face of a phone's owner from that of any intruder in order to decide whether to unlock the device. But AI researchers do not seem satisfied with this achievement... Do the machines really understand what they are doing? In the spirit of the physicist Richard Feynman's dictum, "What I cannot create, I do not understand", research has turned to teaching machines to create: to create spam emails, legitimate emails, pictures of hares, of cats, and even of people. And as the astute reader will have guessed, the mathematical models used in this field are called generative models.

But one thing is clear: if a machine is able to learn to generate a picture of a cat or a hare, that is, if it has "understood" the anatomy of these mammals, it will also be able to distinguish a cat from a hare. So we are not mistaken in thinking that the task of generating is intrinsically more complex than that of discriminating. One of the great challenges of generative models lies in the difficulty of quantifying the error: it is hard to measure how well the machine is generating. In the task of discriminating, quantifying the error a machine makes when telling cats from hares is as simple as asking it to label millions of photos as "cat" or "hare" and counting how many times it is wrong. But what happens when the machine generates images of cats? How do you measure the quality of a generated cat? Is a pretty cat more of a cat than an ugly one? At what point does a generated cat stop being a cat? Quantifying the error in this situation is not only subjective but genuinely complex.
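The "label and count" recipe for measuring a discriminative model's error can be made concrete in a few lines. This is a toy sketch, not any particular model's evaluation code: the labels and predictions below are invented for illustration, and the error rate is simply the fraction of mismatches.

```python
# Toy illustration of quantifying a discriminative model's error:
# compare the model's cat/hare predictions against the true labels
# and count the mistakes. All values here are made up for the example.

true_labels = ["cat", "hare", "cat", "cat", "hare", "hare", "cat", "hare"]
predictions = ["cat", "hare", "hare", "cat", "hare", "cat", "cat", "hare"]

# A mistake is any position where prediction and true label disagree.
errors = sum(t != p for t, p in zip(true_labels, predictions))
error_rate = errors / len(true_labels)

print(f"{errors} mistakes out of {len(true_labels)}: error rate {error_rate:.0%}")
# With these invented labels: 2 mistakes out of 8, an error rate of 25%.
```

For a generative model, by contrast, there is no such ground-truth label to compare against, which is exactly the difficulty the article describes.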

Ian Goodfellow, a researcher at the University of Montreal, resolved this conundrum in an ingenious way: since there is no clear criterion for measuring how close the generated hares are to real ones, let another machine take on that task. And who better to judge the quality of generated images than a discriminative model that has learned precisely to tell real images from generated ones? The idea is as follows: we pit two machines against each other, a generative one that produces images and a discriminative one that learns to distinguish real images from generated images. The hope is that once the generated images are "good" enough (this usually happens after a few hours of computation in which the generator learns from the discriminator's feedback), the generative machine can deceive the discriminator. In this way, the ability to fool the discriminator becomes a measure of how good the generated images are. Mathematically, the conflict between generator and discriminator is approached through game theory: each of the two players has its own objective to pursue simultaneously, until a point is reached where neither can improve unilaterally. Once again, the synergy between (apparently) distant areas of mathematics is on display.
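For the mathematically curious reader, the two-player game described above is written in Goodfellow's original formulation as a minimax problem over a single value function, where $p_{\text{data}}$ denotes the distribution of real images and $p_z$ the noise distribution fed to the generator $G$:

```latex
\min_G \max_D V(D, G) =
\mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator $D$ tries to maximize $V$ by assigning high probability to real images and low probability to generated ones; the generator $G$ tries to minimize it by producing images that $D$ classifies as real. The equilibrium of this game is exactly the point, mentioned above, where neither player can improve unilaterally.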

If a machine is able to learn to generate a picture of a cat or a hare, that is, if it has "understood" the anatomy of these mammals, it will also be able to distinguish a cat from a hare

Goodfellow's idea is known as the generative adversarial network (GAN). Since their appearance in 2014, GANs have made a deep impression on AI researchers. Much of the interest they have attracted is due to their ability to generate realistic images. However, nothing prevents the same idea from being applied to other kinds of data. The underlying mathematical structure is general enough to be used in fields as diverse as the simulation of events in particle colliders, the composition of musical pieces, or the generation of text. An example of the realism achieved is an algorithm from the company NVIDIA that can generate photos of faces with specified attributes, such as eye or hair color.

Of course, the great advances emerging in this field bring with them serious controversy over their possible malicious applications. At a time when the notorious fake news phenomenon has gained great political importance, automating the creation of false news is just around the corner. A near-monopoly on this development is concentrated in the hands of large companies competing to lead the revolution. Perhaps for the sake of a better-informed society?