New AI Technology Turns All Kinds of Text Descriptions Into Images From Scratch
Here’s what “a baby daikon radish in a tutu walking a dog” looks like.
Technology company OpenAI has introduced DALL·E (a portmanteau of artist Salvador Dalí and Pixar’s WALL-E) and CLIP (Contrastive Language–Image Pre-training), two new artificial intelligence (AI) models. DALL·E generates images from scratch based on a text description, while CLIP learns visual concepts from natural-language supervision and is used to rank DALL·E’s outputs by how well they match the prompt. Trained on a dataset of text–image pairs, the system can render even the most bizarre anthropomorphized objects, including a “baby daikon radish in a tutu walking a dog.”
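The core idea behind CLIP’s text–image matching can be illustrated with a toy sketch: text and images are mapped into a shared embedding space, and candidate images are ranked by cosine similarity to the text’s embedding. Everything below is illustrative only; the vectors are made-up stand-ins (real CLIP embeddings have hundreds of dimensions produced by neural encoders), and the filenames are hypothetical.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: dot product over norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embedding for the prompt "a baby daikon radish in a tutu".
text_embedding = [0.9, 0.1, 0.2]

# Hypothetical embeddings for two candidate images.
image_embeddings = {
    "radish_in_tutu.png": [0.8, 0.2, 0.1],
    "capybara_field.png": [0.1, 0.9, 0.3],
}

# Rank candidates by similarity to the prompt, as CLIP does when
# reranking DALL·E's generated samples.
best = max(
    image_embeddings,
    key=lambda name: cosine_similarity(text_embedding, image_embeddings[name]),
)
print(best)  # → radish_in_tutu.png
```

In the real system the embeddings come from jointly trained text and image encoders, optimized so that matching text–image pairs score higher than mismatched ones; the ranking step itself is just this kind of similarity comparison.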
“We find that DALL·E is able to create plausible images for a great variety of sentences that explore the compositional structure of language,” OpenAI explains on its official blog. The model can render multiple objects at once, apply three-dimensional perspective, and depict both the internal and external structure of objects such as a walnut. The company showcases an array of examples, ranging from “an armchair in the shape of an avocado” to “an extreme close-up view of a capybara sitting in a field.”
Looking ahead, OpenAI plans to “analyze how models like DALL·E relate to societal issues like economic impact on certain work processes and professions, the potential for bias in the model outputs, and longer-term ethical challenges.” You can head over to the official website to see the results for yourself.