
Three steps make these videos over 100 years old look freshly recorded

2020-06-01T20:45:35.055Z


We explain the process, in which three previously trained algorithms intervene.


You may have seen videos filmed more than a century ago circulating on social networks in recent weeks, but with extraordinary quality and even in color. A walk through New York in 1911, Paris in the late 19th century, or the arrival of a train at the village of La Ciotat (France) in 1896, filmed by the Lumière brothers, inventors of the cinematograph, whose first public screening marks its 125th anniversary in 2020: these are some of the examples created by the Russian programmer Denis Shiryaev, who has collected many more on his YouTube account. At Verne, we explain how Shiryaev, with the help of artificial intelligence, has given these fragments of analog film that present-day look.

We will use this video of the arrival of the train at La Ciotat, by the Lumière brothers, to illustrate the process, which is divided into three parts:

1. Improve resolution

Remember that these videos you now watch on your phone or computer were filmed analogically, with a cinematograph, on celluloid film. To work with them, it is first necessary to digitize them, that is, convert them into digital information: pixels. Many national museums and archives have already done this work for us and make the files available for download on their websites. As Shiryaev explains in his videos, he used this recording from another YouTube account, but the original file can be found, for example, on the website of the Museum of Modern Art in New York.

The first thing the programmer did was improve the quality of the video, raising the resolution at which it had been digitized from 720 pixels to 4K (up to 2,160 pixels of vertical resolution). How? We say he did it, but in reality he did not do it himself. This process, like the two we will see later, is carried out by a sophisticated artificial intelligence system that does not require direct human supervision (what is known as deep learning), although it does require prior training.

As explained in this article by the technology consultancy Smart Panel, deep learning is a branch of artificial intelligence capable of creating algorithms that emulate human learning in order to acquire certain knowledge. That is, they are trained to perform specific tasks.

In this video from the YouTube channel Dot CSV, they explain how an algorithm is trained to increase the resolution of a video through two actions: perceiving an image and generating another image related to that perception. Faced with a blurry image of an object that still contains certain clues about what it shows (its color, for example), our brain is able to predict what it really is. In the video they give the example of an apparently yellow blob that, with high probability, our brain will identify as a lemon or a tennis ball, thanks to the store of similar images we keep in our memory. For the algorithm to react the same way, we must feed it a large catalog of images; that is what its training consists of. Once trained, the algorithm is ready to perceive an image, identify the elements in it, and generate more detail (information in pixels), improving its quality.

To upscale the resolution of the Lumière brothers' video, Shiryaev used the Gigapixel algorithm, developed by Topaz Labs, according to the author himself.
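Gigapixel's trained model is proprietary, so we cannot reproduce it here, but a minimal numpy sketch of the *non-learned* baseline (bilinear interpolation) makes clear what a trained network adds: interpolation only averages the pixels that already exist, while a trained super-resolution model predicts new detail from everything it has seen during training. The function name and the tiny 4×4 "frame" below are illustrative, not part of any real tool.

```python
import numpy as np

def upscale_bilinear(img: np.ndarray, factor: int) -> np.ndarray:
    """Upscale a 2-D grayscale image by bilinear interpolation.

    Non-learned baseline: every new pixel is a weighted average of
    existing neighbours, so no real detail is created. A trained
    super-resolution network replaces this averaging with detail
    predicted from its training images.
    """
    h, w = img.shape
    # Coordinates of each output pixel mapped back onto the input grid.
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]   # vertical blend weights
    wx = (xs - x0)[None, :]   # horizontal blend weights
    # Blend the four surrounding input pixels for every output pixel.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

frame = np.arange(16, dtype=float).reshape(4, 4)  # stand-in for a low-res frame
big = upscale_bilinear(frame, 3)                  # 3x more pixels, same detail
print(big.shape)  # (12, 12)
```

The corners of the upscaled image match the original exactly; everything in between is a smooth blend, which is precisely the soft, detail-free look that learned upscalers like Gigapixel are trained to avoid.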

2. More frames per second, like on mobiles

In the original Lumière recording there are small jumps that make it less fluid. To understand why, you have to go back to their time and the beginnings of cinema. A sequence of still images (frames) projected at a certain speed produces the sensation of movement. For the human eye, that threshold is around 10 or 12 frames per second (FPS), though at this speed small jumps are still visible. Cinema as we know it is shot at 24 FPS, although the Lumière brothers' videos did not reach that figure. With the emergence of digital cinema, formats of 30 FPS and higher began to be tested.

Shiryaev tries to eliminate the skipping by increasing the frame rate. How? Again, he uses a deep learning algorithm, released under the name DAIN. In this case, it has been trained to insert frames into the gaps in the original 16 FPS sequence and raise it to 60 FPS, producing the fluid feel of the videos we record with our phones.
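DAIN estimates motion and depth to place objects at plausible intermediate positions; the simplest non-learned stand-in, sketched below in numpy, is a cross-fade that blends neighbouring frames instead. Blending makes moving objects ghost rather than move, which is exactly why learned interpolation looks so much better. The frames here are single pixels and `n_new=3` only roughly matches the 16-to-60 FPS jump (real tools synthesise frames at arbitrary timestamps); all names are illustrative.

```python
import numpy as np

def crossfade_interpolate(frames: list[np.ndarray], n_new: int) -> list[np.ndarray]:
    """Insert n_new blended frames between each consecutive pair.

    Naive baseline: each inserted frame is a weighted average of its
    two neighbours, so a moving object fades between positions instead
    of actually moving. DAIN instead estimates motion and depth to
    synthesise frames with objects at intermediate positions.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for i in range(1, n_new + 1):
            t = i / (n_new + 1)          # fractional position between a and b
            out.append((1 - t) * a + t * b)
    out.append(frames[-1])
    return out

# Two one-pixel "frames": brightness jumps abruptly from 0 to 60.
clip = [np.array([0.0]), np.array([60.0])]
smooth = crossfade_interpolate(clip, n_new=3)
print([float(f[0]) for f in smooth])  # [0.0, 15.0, 30.0, 45.0, 60.0]
```

The abrupt jump becomes a gradual ramp, which is the "fluidity" the article describes, achieved here by the crudest possible means.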

3. From black and white to color

Shiryaev now has the video of the train arriving at La Ciotat in 4K resolution and at 60 frames per second, but one last step remains to make that 1896 video look like it was shot yesterday: color.

At this point, we can already guess what the Russian programmer uses to color a video that was originally shot on black-and-white film and therefore recorded no color at all. Indeed, an intelligent algorithm called DeOldify is in charge of this task.

Its creator, the American computer engineer Jason Antic, explains to Verne by email how this algorithm, built from generative adversarial networks (GANs), works. “To create these color images, two neural networks are used: a generator and a critic. The generator knows how to recognize things in images, so it can look at a black-and-white image and figure out what color most of the things in it should be. If it doesn't know, it does its best to choose a color that makes sense. The critic's mission is to figure out whether the image with those colors is real. The generator network is constantly trying to trick the critic into believing the images it generates are real.” It is this adversarial dynamic that drives the two artificial neural networks to keep improving the image until they produce one that the critic accepts as real. DeOldify works for both video and still images.
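The two objectives Antic describes can be written down in a few lines. The numpy sketch below is purely schematic: a toy "generator" maps grayscale pixels to RGB triples, a toy "critic" gives a realness score, and we compute the standard adversarial losses without doing any training. Every function, variable, and the "real" colour rule are hypothetical stand-ins, not DeOldify's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(gray: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Toy 'colorizer': maps each grayscale value to an (R, G, B) triple."""
    return gray[:, None] * w  # w holds the generator's colour guesses

def critic(color: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Toy critic: sigmoid score in (0, 1), the probability the colours are real."""
    return 1.0 / (1.0 + np.exp(-color @ v))

gray = rng.uniform(size=8)                          # a batch of grayscale pixels
real = gray[:, None] * np.array([1.0, 0.8, 0.6])    # 'real' warm colours
w = rng.normal(size=3)                              # generator parameters (untrained)
v = rng.normal(size=3)                              # critic parameters (untrained)

fake = generator(gray, w)
# Critic objective: score real colours high and generated colours low.
critic_loss = -np.mean(np.log(critic(real, v)) + np.log(1 - critic(fake, v)))
# Generator objective: make the critic score its output as real.
gen_loss = -np.mean(np.log(critic(fake, v)))
print(critic_loss > 0 and gen_loss > 0)  # True: both sides have room to improve
```

Training would alternate gradient steps on these two losses: lowering `critic_loss` makes the critic a better detective, and lowering `gen_loss` makes the generator's colours harder to detect, which is the adversarial back-and-forth the quote describes.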

And voilà: in the video we can now see the blue tones of the station clerk's jacket, the cream of a passenger's skirt, or another woman's burgundy plaid shawl.

As Shiryaev details in the descriptions of his YouTube videos, on top of these three processes he adds an increase in image definition (improving sharpness and resolution) with the program After Effects and, in some cases, noise reduction (removing the grain of old films). Of course, working with these algorithms requires a computer with powerful graphics memory.

If you are left wanting more, here are other examples of old videos upscaled to 4K and colorized with these artificial intelligence techniques:


Source: elparis
