Using a training data set of approximately 52,000 written recipes, along with images showing the completed foods, the researchers were able to devise a system that can read a recipe and then generate a picture showing what the end result is likely to look like.
The neural network responsible for the feat generates its images using a two-stage process. First, the text of the recipe is converted into a vector of numbers in a process called text embedding. This numerical representation attempts to capture the meaning of the text by mapping semantically similar pieces of text to close vectors in the embedding space. After this is done, a separate network maps the text vectors and images to align them.
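The idea behind text embedding can be sketched with a toy example. The snippet below is not the researchers' model; it stands in for a learned embedding by averaging pseudo-random per-word vectors (seeded from a word hash), so recipes that share words land near each other in the vector space:

```python
import zlib
import numpy as np

def embed_text(text, dim=64):
    """Toy text embedding: mean of per-word pseudo-random vectors,
    each seeded by a hash of the word. A stand-in for a learned
    embedding model, for illustration only."""
    vecs = []
    for word in text.lower().split():
        rng = np.random.default_rng(zlib.crc32(word.encode()))
        vecs.append(rng.standard_normal(dim))
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

def cosine(a, b):
    """Cosine similarity of two unit vectors."""
    return float(a @ b)

r1 = embed_text("boil pasta in salted water")
r2 = embed_text("cook pasta in boiling salted water")
r3 = embed_text("bake chocolate cake with frosting")

# Recipes with overlapping wording map to nearby vectors:
print(cosine(r1, r2) > cosine(r1, r3))  # True
```

A real system would use embeddings learned from data rather than hashed random vectors, but the geometric intuition, that similar text maps to close vectors, is the same.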
In the second stage, the team uses a Generative Adversarial Network (GAN) which both generates new images and evaluates them. This is the process that resulted in the A.I.-created painting which sold at Christie’s auction last year. By having the GAN attempt to fool itself into thinking a generated image is a real photo, the pictures the system comes up with look increasingly realistic.
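The adversarial "fooling" dynamic comes down to two opposing loss functions: the discriminator is rewarded for telling real photos from generated ones, while the generator is rewarded when its fakes are scored as real. A minimal sketch of the standard GAN losses (a generic formulation, not the team's exact training code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_loss(real_logits, fake_logits):
    """D wants real images scored near 1 and generated ones near 0."""
    eps = 1e-12
    return float(-np.mean(np.log(sigmoid(real_logits) + eps))
                 - np.mean(np.log(1.0 - sigmoid(fake_logits) + eps)))

def generator_loss(fake_logits):
    """G wants D to score its fakes as real -- the 'fooling' step."""
    eps = 1e-12
    return float(-np.mean(np.log(sigmoid(fake_logits) + eps)))

# As training progresses, D's scores for the fakes rise and G's loss
# falls -- i.e., the generated images look increasingly realistic:
early = generator_loss(np.array([-3.0, -2.5]))  # fakes easily spotted
late = generator_loss(np.array([1.5, 2.0]))     # fakes pass as real
print(early > late)  # True
```

Training alternates between minimizing these two losses, which is what drives the generated images toward photorealism.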
“[One] challenge we faced was the fact that the quality of the images in the dataset we used was low,” Bar El continued. “This is reflected by lots of blurred images with bad lighting conditions.” The system also turned out to be better at generating more formless foods (pasta, rice, soups, and salads) than foods with a distinctive shape, such as hamburgers.
While the results may not be quite good enough for sharing on Instagram, it’s nonetheless an impressive example of machine learning. Pair it with IBM’s recipe-generating Chef Watson, and it would be more dazzling still.
A.I. can generate pictures of a finished meal based on just a written recipe [Digital Trends]