Fake it till you make it: on art, learning, economy and strange bedrooms

Your company has just moved into a new office. You and your team realised that it could do with some decorating and a touch of personalisation. What do you do? Hire a decorator? Buy art online? Nah. At BrightMinded we decided to take matters into our own hands the best way we know how! Here’s a non-technical account of something we tried.

30 samples of art produced by the WGAN

Art with GAN

Our mission, should we choose to accept it, is to create digital art without an artist… well, without a human artist.

We could exploit the intrinsic beauty of certain mathematical processes (e.g. fractals) but that would be too easy.

One of our (many) philosophies at BrightMinded is to take the path of greatest learning whenever possible, so what better way to investigate one of the latest techniques in Artificial Intelligence than actually applying it?

Basics

In 2014 I. Goodfellow et al. published a paper titled Generative Adversarial Networks (GANs), introducing a process that takes induction by the ears and shakes it like an Etch A Sketch (although in this case the images appear rather than disappear).

Suppose you wanted to play keyboard solos like Chick Corea. You could search for a formula that gave you the conditional probability of every action he is likely to take during a solo (not recommended!). Or you could just become very good at imitating him.

The former is the approach taken by traditional deep learning models where the goal is for the model to be able to approximate the conditional probability of an output given some input. This is not always easy or useful. GAN models instead learn to generate samples (hence the “generative” in their name) in a manner that imitates as closely as possible (in terms of probability distributions) the underlying process under consideration.


Figure 1: GAN game: the Generator uses random noise to generate fake data, say images, that are then fed to the Discriminator, which in turn tries to spot the discrepancy between real images and fake images. The Generator uses this discrepancy to become better at faking it, whilst the Discriminator gets more familiar with examples of real images from the same source.

A game

But how do we tell if the Generator (typically a neural network) is doing a good job at imitating? Simple! It is pitted against a Discriminator (hence the “adversarial” in GAN), which is typically another neural network. Its sole job is to become an expert at spotting whether its input is likely to be the real thing or an imitation. The Generator “wins” if the Discriminator can no longer effectively tell the difference between the real thing and the Generator’s fakes (Figure 1).
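If you prefer to see the game in code, here is a minimal sketch of it in PyTorch, with a toy one-dimensional Gaussian standing in for the “real” process. The network sizes, learning rates and data below are made up for illustration and are not what we actually ran.

import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8

# The Generator turns random noise into a fake "data" point.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# The Discriminator outputs the probability that its input is real.
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = 0.5 * torch.randn(64, 1) + 3.0          # toy "real" process: a shifted Gaussian
    fake = generator(torch.randn(64, latent_dim))  # the Generator's current fakes

    # Discriminator turn: label real samples 1 and fakes 0.
    loss_d = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator turn: try to make the Discriminator label its fakes as real.
    loss_g = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

Everything that follows is essentially about what measure of discrepancy to put inside that loop.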

Conclusion 1: GANs are a perfect fit for generating our office art!


Figure 2: Suppose you need to dig some soil up in order to build a castle. Optimal Transport would be the problem of minimising the cost of moving and reshaping all the soil mass from the dig site to its destination.

Art with WGAN

Not all GANs are made the same. What sets them apart is how you measure the discrepancy between the real thing and a fake produced by the Generator; in other words, you need a notion of similarity that is appropriate for this context.

Back in the USSR

Being an economist in the communist Soviet Union is like being a streaker at a football match: you are exposed and people will chase you! After all, you are spending your time thinking about optimising profit in the country that invented anti-capitalism. Not a great career move by any means, but this is exactly what Leonid Kantorovich did between 1934 and 1986. He got away with it thanks to his brilliance, which earned him a Nobel Prize in 1975.

One thing he did was to show that a solution to the Optimal Transport problem always exists when the problem is expressed in terms of linear programming.

WGANs

Optimal Transport pertains to the problem of moving a mass of “stuff” (e.g. soil, taxis, etc.) from one location to another when there is a cost involved (see Figure 2) that you want to keep to a minimum.
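To make that concrete, here is a tiny made-up version of the problem (three piles of soil, two building sites) solved as a linear program with SciPy, in the spirit of Kantorovich’s formulation. All the numbers are invented for illustration.

# Toy Optimal Transport: move three piles of soil to two building sites
# at minimum total cost, posed as a linear program. Numbers are made up.
import numpy as np
from scipy.optimize import linprog

supply = np.array([0.5, 0.3, 0.2])      # mass available at each dig site
demand = np.array([0.6, 0.4])           # mass required at each destination
cost = np.array([[1.0, 3.0],            # cost of moving one unit of mass
                 [2.0, 1.0],            # from dig site i to destination j
                 [4.0, 2.0]])

n, m = cost.shape
# Decision variables: the transport plan T[i, j], flattened row by row.
# Constraints: each row of T sums to the supply, each column to the demand.
A_eq, b_eq = [], []
for i in range(n):                       # row sums = supply
    row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(m):                       # column sums = demand
    col = np.zeros(n * m); col[j::m] = 1
    A_eq.append(col); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
print("optimal transport cost:", res.fun)
print("transport plan:\n", res.x.reshape(n, m))

The resulting plan says how much mass to move from each pile to each site, and the optimal cost is the quantity we care about next.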

It turns out that the optimal expected cost of moving and reshaping masses (i.e. probability distributions), as specified by Kantorovich, has the same properties as a measure of distance.

The Kantorovich–Rubinstein distance was born: a clever measure of similarity between probability distributions, although it almost always shows up under a different name, the Wasserstein distance, after the German spelling of the name of the Russian fellow (Leonid Vaseršteĭn) who worked with it in 1969.
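For the mathematically curious, the form of this distance that WGANs actually exploit is its dual, which (roughly in the notation of the WGAN paper) reads

W(\mathbb{P}_r, \mathbb{P}_g) = \sup_{\|f\|_L \le 1} \; \mathbb{E}_{x \sim \mathbb{P}_r}[f(x)] - \mathbb{E}_{x \sim \mathbb{P}_g}[f(x)]

that is, the largest gap you can achieve between the average score a well-behaved (1-Lipschitz) function f gives to real samples and the average score it gives to fakes. In a WGAN the Discriminator, rebranded the “critic”, is trained to approximate that function f.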

Conclusion 2: Our art will be generated by a GAN that uses Wasserstein distance to measure the similarity between the distribution of the real thing and the fake, also known as a WGAN.

Bedrooms

WGANs were introduced by M. Arjovsky et al. in their 2017 paper, Wasserstein GAN. One of the experiments Arjovsky and co. carried out was to generate fake bedrooms (yes, bedrooms!) based on the LSUN-bedrooms dataset. Essentially, the Discriminator keeps looking at images of bedrooms to learn the pixel distribution that “makes a bedroom”, whilst the Generator spits out fake bedroom images with the goal of getting the distribution of its pixels as close as possible to that of real bedroom images.
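For the technically inclined, the change from the vanilla loop sketched earlier is surprisingly small. Below is a hedged sketch of the training loop described in the paper: the learning rate, clipping value and number of critic steps are the paper’s defaults, while everything else (the tiny fully connected networks, the random stand-in for the image loader) is made up for illustration rather than what the experiments, or we, actually used.

# Sketch of the WGAN training loop (Arjovsky et al., 2017):
# the critic estimates the Wasserstein distance between real and fake,
# and its weights are clipped to keep it (roughly) Lipschitz.
import torch
import torch.nn as nn

latent_dim, data_dim, clip, n_critic = 64, 128, 0.01, 5

generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, data_dim))
critic = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(),
                       nn.Linear(256, 1))      # no sigmoid: a score, not a probability

opt_g = torch.optim.RMSprop(generator.parameters(), lr=5e-5)
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

def real_batch(batch_size=64):
    # Stand-in for a batch of real (flattened) bedroom images.
    return torch.randn(batch_size, data_dim)

for step in range(10000):
    # Train the critic several times per generator update.
    for _ in range(n_critic):
        real = real_batch()
        fake = generator(torch.randn(real.size(0), latent_dim)).detach()
        loss_c = critic(fake).mean() - critic(real).mean()   # negative Wasserstein estimate
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
        for p in critic.parameters():                        # weight clipping
            p.data.clamp_(-clip, clip)

    # Train the generator to push its samples towards higher critic scores.
    fake = generator(torch.randn(64, latent_dim))
    loss_g = -critic(fake).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

Compared with the earlier sketch, the differences are the score instead of a probability, the weight clipping, the extra critic steps per generator step and the switch to RMSprop.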

Conclusion 3: We will reproduce the results from this paper in order to get better acquainted with WGANs.

Art

We plugged an NVIDIA graphics card into an old computer and, with the limited resources at our disposal, ran our version of a WGAN.

The first fakes produced by the Generator look straight out of the head-up display from the Predator movies (Figure 3).

However, lo and behold, after a few hours we can admire some of the weirdest attempts at bedrooms you could ever have dreamt of (Figure 4).

In fact, it almost looks like they were painted by Salvador Dalí himself!

What’s more, look at the fantastic output in Figure 5 from a run using “defective” WGAN model parameters.

Epilogue

We set out to create original art for our office and we would argue that’s exactly what we achieved with our attempt to reproduce the results from the WGAN paper – and we did it all while expanding our mathematical and computational arsenal!

Feel free to contact us, whether you have an office to decorate or you just want to argue about our creation’s status as art.


Figure 3: First ever output from the Generator


Figure 4: The Generator’s output after a few hours


Figure 5: The output from a different run showcasing some interesting vertical artefacts

Meanwhile our mission has been accomplished and this message will self-destruct in 3…2…1…