The concept of text-to-image is simple: a user submits a text prompt such as "Yoda drinks a beer," and the neural networks use the text to create a new image based on the associations in the language. When the concept was introduced, the results looked like a mash of uncanny-valley nightmares and JPEG compression. As you can see on the left, the new models allow far more stylistic illustration of the language, determined not only by text as metadata but also by the context in which that language may appear. The many improvements in both the text models and the image models have created a whole new way to think about image-making and how we might write prompts to describe it.

There's a whole world of ideas to explore here, but I have been interested in using these generative models to help us see data visualization designs from a new perspective. My interest in this topic began while using Midjourney, and while I think that platform's results are better for dataviz purposes, I just gained access to DALL-E 2, so exploring the differences between the two apps is equally interesting. I have a Twitter thread here that discusses my initial comparisons between the two platforms.

You begin the creative process by giving the model a text prompt. From there you are presented with six images that you can then download or use to create more variations. The process is iterative, and selecting which image to create variations from results in a sort of story.
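To make that prompt-in, images-out loop concrete, here is a minimal sketch using the OpenAI Python SDK's Images endpoint. The prompt, image count, output size, and file names are illustrative assumptions rather than details from this post, and Midjourney has no comparable public API, so this only approximates the workflow described above.

```python
# Minimal sketch of a text-to-image request, assuming the legacy OpenAI
# Python SDK (pre-1.0) and an API key available in OPENAI_API_KEY.
# The prompt, count, and size below are placeholders, not values from the post.
import openai

response = openai.Image.create(
    prompt="Yoda drinks a beer",  # the example prompt from the post
    n=6,                          # request several candidates to choose from
    size="1024x1024",
)

# Each entry in response["data"] carries a URL for one candidate image.
for i, item in enumerate(response["data"]):
    print(f"candidate {i}: {item['url']}")

# Iterating works the same way: after downloading a favorite candidate
# (saved here as a hypothetical candidate.png), request variations of it.
with open("candidate.png", "rb") as f:
    variations = openai.Image.create_variation(image=f, n=4, size="1024x1024")
```

Repeating that last step on whichever variation looks most promising is what produces the "sort of story" mentioned above: each round of choices nudges the imagery in a new direction.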