DALL-E
Watermark present on DALL-E images
An image generated by DALL-E 2, from the prompt "Teddy bears working on new AI research underwater with 1990s technology"
| Developer(s) | OpenAI |
|---|---|
| Initial release | 5 January 2021 |
| Stable release | DALL-E 3 / 10 August 2023 |
| Platform | Cloud computing platforms |
| Successor | GPT Image 1 |
| Type | Text-to-image model |
| License | Proprietary service |
DALL-E, DALL-E 2, and DALL-E 3 (stylised DALL·E) are text-to-image models developed by OpenAI using deep learning methodologies to generate digital images from natural language descriptions known as prompts.
The first version of DALL-E was announced in January 2021. Its successor, DALL-E 2, was released the following year. DALL-E 3 was released natively into ChatGPT for ChatGPT Plus and ChatGPT Enterprise customers in October 2023,[1] with availability via OpenAI's API[2] and "Labs" platform following in early November.[3] Microsoft implemented the model in Bing's Image Creator tool and plans to implement it in its Designer app.[4] Microsoft Copilot's image generation, provided through Bing's Image Creator tool, also runs on DALL-E 3.[5] In March 2025, DALL-E 3 was replaced in ChatGPT by GPT Image 1's native image-generation capabilities.[6]
History and background
DALL-E was revealed by OpenAI in a blog post on 5 January 2021, and uses a version of GPT-3[7] modified to generate images.
On 6 April 2022, OpenAI announced DALL-E 2, a successor designed to generate more realistic images at higher resolutions that "can combine concepts, attributes, and styles".[8] On 20 July 2022, DALL-E 2 entered a beta phase, with invitations sent to 1 million waitlisted individuals;[9] users could generate a certain number of images for free every month and could purchase more.[10] Access had previously been restricted to pre-selected users for a research preview due to concerns about ethics and safety.[11][12] On 28 September 2022, DALL-E 2 was opened to everyone and the waitlist requirement was removed.[13] In early November 2022, OpenAI released DALL-E 2 as an API, allowing developers to integrate the model into their own applications, and Microsoft unveiled its implementation of DALL-E 2 in its Designer app and in the Image Creator tool included in Bing and Microsoft Edge.[15] The API operates on a cost-per-image basis, with prices varying depending on image resolution; volume discounts are available to companies working with OpenAI's enterprise team.[16] In September 2023, OpenAI announced its latest image model, DALL-E 3, capable of understanding "significantly more nuance and detail" than previous iterations.[14]
The software's name is a portmanteau of the names of Pixar's animated robot character WALL-E and the Spanish surrealist artist Salvador Dalí.[17][7]
In February 2024, OpenAI began adding watermarks to DALL-E generated images, containing metadata in the C2PA (Coalition for Content Provenance and Authenticity) standard promoted by the Content Authenticity Initiative.[18]
Technology
The first generative pre-trained transformer (GPT) model was initially developed by OpenAI in 2018,[19] using a Transformer architecture. The first iteration, GPT-1,[20] was scaled up to produce GPT-2 in 2019;[21] in 2020, it was scaled up again to produce GPT-3, with 175 billion parameters.[22][7][23]
DALL-E
DALL-E has three components: a discrete VAE, an autoregressive decoder-only Transformer (12 billion parameters) similar to GPT-3, and a CLIP pair of image encoder and text encoder.[24]
The discrete VAE can convert an image to a sequence of tokens, and conversely, convert a sequence of tokens back to an image. This is necessary as the Transformer does not directly process image data.[24]
The input to the Transformer model is a sequence of tokens: the tokenised image caption followed by the tokenised image. The caption is in English, tokenised by byte pair encoding (vocabulary size 16384), and can be up to 256 tokens long. Each 256×256 RGB image is compressed by the discrete VAE into a 32×32 grid of image tokens (vocabulary size 8192), with each token corresponding to an 8×8-pixel region.[24]
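The arrangement above can be made concrete with a short sketch. The tokeniser and discrete-VAE encoder below are hypothetical stand-ins (hash-based and averaging-based, respectively), not OpenAI's released components; only the sequence layout (256 caption tokens followed by 1,024 image tokens) follows the description above.

```python
import numpy as np

TEXT_VOCAB = 16384      # BPE vocabulary size for captions
TEXT_MAX_TOKENS = 256   # captions are capped at 256 tokens
IMAGE_VOCAB = 8192      # discrete VAE codebook size
IMAGE_GRID = 32         # a 256x256 image becomes a 32x32 grid of image tokens

def encode_caption(caption: str) -> np.ndarray:
    """Hypothetical stand-in for BPE tokenisation: hashes words to token ids."""
    ids = [hash(word) % TEXT_VOCAB for word in caption.lower().split()]
    ids = ids[:TEXT_MAX_TOKENS]
    return np.pad(np.array(ids, dtype=np.int64), (0, TEXT_MAX_TOKENS - len(ids)))

def encode_image(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the discrete VAE encoder: one codebook id per grid cell."""
    assert image.shape == (256, 256, 3)
    cells = image.reshape(IMAGE_GRID, 8, IMAGE_GRID, 8, 3).mean(axis=(1, 3, 4))
    return cells.astype(np.int64).ravel() % IMAGE_VOCAB   # 32*32 = 1024 image tokens

caption_tokens = encode_caption("an armchair in the shape of an avocado")
image_tokens = encode_image(np.zeros((256, 256, 3)))
sequence = np.concatenate([caption_tokens, image_tokens])  # caption first, then image
print(sequence.shape)  # (1280,) = 256 text tokens + 1024 image tokens
```

During training the Transformer predicts the next token of such sequences; during generation, image tokens are sampled one at a time after the caption tokens and then decoded back to pixels by the discrete VAE.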
DALL-E was developed and announced to the public in conjunction with CLIP (Contrastive Language-Image Pre-training).[25] CLIP is a separate model based on contrastive learning that was trained on 400 million pairs of images with text captions scraped from the Internet. Its role is to "understand and rank" DALL-E's output by predicting which caption from a list of 32,768 captions randomly selected from the dataset (of which one was the correct answer) is most appropriate for an image.[26]
A trained CLIP pair is used to filter a larger initial list of images generated by DALL-E to select the image that is closest to the text prompt.[24]
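As a rough illustration of this reranking step, the sketch below scores candidate images against a prompt using the publicly released CLIP checkpoint available through Hugging Face and keeps the best match. The model name, the use of PIL images, and the local file names are assumptions; DALL-E's internal filtering pipeline is not public.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Publicly released CLIP checkpoint, used here as a stand-in for the
# encoder pair deployed alongside DALL-E.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def pick_best(prompt: str, candidates: list[Image.Image]) -> Image.Image:
    """Return the candidate image whose CLIP score for the prompt is highest."""
    inputs = processor(text=[prompt], images=candidates,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        scores = model(**inputs).logits_per_image.squeeze(1)  # one score per image
    return candidates[int(scores.argmax())]

# Example: rerank a handful of locally saved candidate images (illustrative paths).
candidates = [Image.open(f"candidate_{i}.png") for i in range(4)]
best = pick_best("an armchair in the shape of an avocado", candidates)
```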
DALL-E 2
DALL-E 2 uses 3.5 billion parameters, a smaller number than its predecessor.[24] Instead of an autoregressive Transformer, DALL-E 2 uses a diffusion model conditioned on CLIP image embeddings, which, during inference, are generated from CLIP text embeddings by a prior model.[24] A related diffusion-based approach underlies Stable Diffusion, which was released a few months later.
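The two-stage inference pipeline can be sketched as follows. All three components below are toy stand-ins (neither OpenAI's prior nor its diffusion decoder has been released); the sketch only illustrates the data flow from text embedding, through the prior, to an image embedding that conditions the diffusion decoder.

```python
import numpy as np

EMBED_DIM = 768  # assumed CLIP embedding width

def clip_text_encoder(prompt: str) -> np.ndarray:
    """Toy stand-in for the CLIP text encoder (deterministic pseudo-embedding)."""
    rng = np.random.default_rng(abs(hash(prompt)) % 2**32)
    return rng.normal(size=EMBED_DIM)

def prior(text_embedding: np.ndarray) -> np.ndarray:
    """Toy stand-in for the prior: maps a CLIP text embedding to a CLIP image embedding."""
    return text_embedding + 0.1 * np.random.default_rng(0).normal(size=EMBED_DIM)

def diffusion_decoder(image_embedding: np.ndarray, steps: int = 50) -> np.ndarray:
    """Toy stand-in for the diffusion decoder: iteratively refines noise into pixels."""
    rng = np.random.default_rng(1)
    x = rng.normal(size=(64, 64, 3))                 # start from pure noise
    for _ in range(steps):
        x = 0.9 * x + 0.1 * image_embedding[:3]      # placeholder conditioning step
    return x

image = diffusion_decoder(prior(clip_text_encoder("a corgi playing a trumpet")))
print(image.shape)  # (64, 64, 3)
```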
DALL-E 3
While a technical report was written for DALL-E 3, it does not include training or implementation details of the model, focusing instead on the improved prompt-following capabilities developed for DALL-E 3.[27]
Capabilities
DALL-E can produce images for a wide variety of arbitrary descriptions from various viewpoints[28] with only rare failures.[17] Mark Riedl, an associate professor at the Georgia Tech School of Interactive Computing, found that DALL-E could blend concepts (described as a key element of human creativity).[29][30]
Its visual reasoning ability is sufficient to solve Raven's Matrices (visual tests often administered to humans to measure intelligence).[31][32]

DALL-E 3 follows complex prompts with more accuracy and detail than its predecessors, and is able to generate more coherent and accurate text.[33][14] DALL-E 3 is integrated into ChatGPT Plus.[14]
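For readers accessing the model programmatically rather than through ChatGPT, a minimal request through the OpenAI Python SDK's image endpoint looks roughly like the sketch below; the prompt is only an example, and parameter names reflect the SDK at the time of writing and may change.

```python
from openai import OpenAI  # pip install openai; requires an OPENAI_API_KEY

client = OpenAI()

# Request a single 1024x1024 image from DALL-E 3.
response = client.images.generate(
    model="dall-e-3",
    prompt="Teddy bears working on new AI research underwater with 1990s technology",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # URL of the generated image
```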
Image modification
Given an existing image, DALL-E 2 and DALL-E 3 can produce "variations" of the image as individual outputs based on the original, as well as edit the image to modify or expand upon it. The "inpainting" and "outpainting" abilities of these models use context from an image to fill in missing areas using a medium consistent with the original, following a given prompt.
For example, this can be used to insert a new subject into an image, or expand an image beyond its original borders.[34] According to OpenAI, "Outpainting takes into account the image’s existing visual elements — including shadows, reflections, and textures — to maintain the context of the original image."[35]
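As a rough example of requesting such an edit programmatically, OpenAI's hosted image-edit endpoint accepts an image plus a mask whose transparent pixels mark the region to repaint. The sketch below assumes the DALL-E 2 edit endpoint in the Python SDK; the file names and prompt are illustrative.

```python
from openai import OpenAI  # pip install openai; requires an OPENAI_API_KEY

client = OpenAI()

# Inpainting sketch: transparent areas in mask.png mark where the model
# should fill in new content, guided by the prompt.
with open("room.png", "rb") as image, open("mask.png", "rb") as mask:
    response = client.images.edit(
        model="dall-e-2",
        image=image,
        mask=mask,
        prompt="a flamingo pool float in the corner of the room",
        n=1,
        size="1024x1024",
    )

print(response.data[0].url)  # URL of the edited image
```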
Technical limitations
DALL-E 2's language understanding has limits. It is sometimes unable to distinguish "A yellow book and a red vase" from "A red book and a yellow vase" or "A panda making latte art" from "Latte art of a panda".[36] It generates images of an astronaut riding a horse when presented with the prompt "a horse riding an astronaut".[37] It also fails to generate the correct images in a variety of circumstances: requests involving more than three objects, negation, numbers, or connected sentences may result in mistakes, and object features may appear on the wrong object.[28] The model also struggles to generate text, ambigrams, and other forms of typography, often producing dream-like gibberish, and has a limited capacity to handle scientific content such as astronomical or medical imagery.[38]

Ethical concerns
DALL-E 2's reliance on public datasets influences its results and leads to algorithmic bias in some cases, such as generating higher numbers of men than women for requests that do not mention gender.[38] DALL-E 2's training data was filtered to remove violent and sexual imagery, but this was found to increase bias in some cases, such as reducing the frequency with which women were generated.[39] OpenAI hypothesised that this may be because women were more likely to be sexualised in the training data, causing the filter to influence results.[39] In September 2022, OpenAI confirmed to The Verge that DALL-E invisibly inserts phrases into user prompts to address bias in results; for instance, "black man" and "Asian woman" are inserted into prompts that do not specify gender or race.[40] OpenAI claims to address the potential for "racy content" (nudity or sexual content) in DALL-E 3 through input/output filters, blocklists, ChatGPT refusals, and model-level interventions.[41] However, DALL-E 3 continues to disproportionately represent people as White, female, and youthful. Users can partially remedy this through more specific prompts.
A concern about DALL-E 2 and similar image generation models is that they could be used to propagate deepfakes and other forms of misinformation.[42][43] As an attempt to mitigate this, the software rejects prompts involving public figures and uploads containing human faces.[44] Prompts containing potentially objectionable content are blocked, and uploaded images are analysed to detect offensive material.[45] A disadvantage of prompt-based filtering is that it is easy to bypass using alternative phrases that result in a similar output. For example, the word "blood" is filtered, but "ketchup" and "red liquid" are not.[46][45]
Another concern about DALL-E 2 and similar models is that they could cause technological unemployment for artists, photographers, and graphic designers due to their accuracy and popularity.[47][48] DALL-E 3 is designed to block users from generating art in the style of currently living artists.[14] While OpenAI states that images produced using these models do not require permission to reprint, sell, or merchandise,[49] legal concerns have been raised regarding who owns those images.[50][51]
In 2023 Microsoft pitched the United States Department of Defense to use DALL-E models to train battlefield management systems.[52] In January 2024 OpenAI removed its blanket ban on military and warfare use from its usage policies.[53]
Reception

Most coverage of DALL-E focuses on a small subset of "surreal"[25] or "quirky"[29] outputs. DALL-E's output for "an illustration of a baby daikon radish in a tutu walking a dog" was mentioned in pieces from Input,[54] NBC,[55] Nature,[56] and other publications.[7][57][58] Its output for "an armchair in the shape of an avocado" was also widely covered.[25][30]
ExtremeTech stated "you can ask DALL-E for a picture of a phone or vacuum cleaner from a specified period of time, and it understands how those objects have changed".[59] Engadget also noted its unusual capacity for "understanding how telephones and other objects change over time".[60]
According to MIT Technology Review, one of OpenAI's objectives was to "give language models a better grasp of the everyday concepts that humans use to make sense of things".[25]
Wall Street investors have received DALL-E 2 positively, with some firms suggesting it could represent a turning point for a future multi-trillion-dollar industry. By mid-2019, OpenAI had already received over $1 billion in funding from Microsoft and Khosla Ventures,[61][62][63] and in January 2023, following the launch of DALL-E 2 and ChatGPT, it received an additional $10 billion in funding from Microsoft.[64]
Japan's anime community has reacted negatively to DALL-E 2 and similar models.[65][66][67] Artists typically present two arguments against the software. The first is that AI art is not art because it is not created by a human with intent: "The juxtaposition of AI-generated images with their own work is degrading and undermines the time and skill that goes into their art. AI-driven image generation tools have been heavily criticized by artists because they are trained on human-made art scraped from the web."[9] The second concerns copyright law and the data that text-to-image models are trained on. OpenAI has not released information about which dataset(s) were used to train DALL-E 2, prompting concern from some that the work of artists has been used for training without permission. Copyright law surrounding these topics remains unsettled.[10]
After integrating DALL-E 3 into Bing Chat and ChatGPT, Microsoft and OpenAI faced criticism for excessive content filtering, with critics saying DALL-E had been "lobotomized."[68] The flagging of images generated by prompts such as "man breaks server rack with sledgehammer" was cited as evidence. Over the first days of its launch, filtering was reportedly increased to the point where images generated by some of Bing's own suggested prompts were being blocked.[68][69] TechRadar argued that leaning too heavily on the side of caution could limit DALL-E's value as a creative tool.[69]
Open-source implementations
Since OpenAI has not released source code for any of the three models, there have been several attempts to create open-source models offering similar capabilities.[70][71] Craiyon (formerly DALL-E Mini until a name change was requested by OpenAI in June 2022), released in 2022 on Hugging Face's Spaces platform, is an AI model based on the original DALL-E that was trained on unfiltered data from the Internet. It attracted substantial media attention after its release in mid-2022 due to its capacity for producing humorous imagery.[72][73][74] Another open-source text-to-image model is Stable Diffusion by Stability AI.[75]
See also
- Artificial intelligence art
- DeepDream
- GPT Image 1
- Runway
- Imagen
- Midjourney
- Stable Diffusion
- Prompt engineering
References
- ↑ David, Emilia (2023-09-20). "OpenAI releases third version of DALL-E" (in en-US). https://www.theverge.com/2023/9/20/23881241/openai-dalle-third-version-generative-ai.
- ↑ "OpenAI Platform" (in en). https://platform.openai.com/.
- ↑ Niles, Raymond (2023-11-10). "DALL-E 3 API" (in en). https://help.openai.com/en/articles/8555480-dall-e-3-api.
- ↑ Mehdi, Yusuf (2023-09-21). "Announcing Microsoft Copilot, your everyday AI companion" (in en-US). https://blogs.microsoft.com/blog/2023/09/21/announcing-microsoft-copilot-your-everyday-ai-companion/.
- ↑ "AI art improvements with DALL-E 3". https://www.microsoft.com/en-us/microsoft-copilot/for-individuals/do-more-with-ai/ai-art-and-creativity/image-creator-improvements-dall-e-3?form=MA13KP.
- ↑ Zeff, Maxwell; Wiggers, Kyle (2025-03-25). "ChatGPT's image-generation feature gets an upgrade" (in en-US). https://techcrunch.com/2025/03/25/chatgpts-image-generation-feature-gets-an-upgrade/.
- ↑ 7.0 7.1 7.2 7.3 Johnson, Khari (5 January 2021). "OpenAI debuts DALL-E for generating images from text". VentureBeat. https://venturebeat.com/2021/01/05/openai-debuts-dall-e-for-generating-images-from-text/.
- ↑ "DALL·E 2" (in en-US). https://openai.com/dall-e-2/.
- ↑ 9.0 9.1 "DALL·E Now Available in Beta" (in en). 2022-07-20. https://openai.com/blog/dall-e-now-available-in-beta/.
- ↑ 10.0 10.1 Allyn, Bobby (2022-07-20). "Surreal or too real? Breathtaking AI tool DALL-E takes its images to a bigger stage" (in en). NPR. https://www.npr.org/2022/07/20/1112331013/dall-e-ai-art-beta-test.
- ↑ "DALL·E Waitlist" (in en). https://labs.openai.com/.
- ↑ "From Trump Nevermind babies to deep fakes: DALL-E and the ethics of AI art" (in en). 2022-06-18. https://www.theguardian.com/technology/2022/jun/19/from-trump-nevermind-babies-to-deep-fakes-dall-e-and-the-ethics-of-ai-art.
- ↑ "DALL·E Now Available Without Waitlist" (in en). 2022-09-28. https://openai.com/blog/dall-e-now-available-without-waitlist/.
- ↑ 14.0 14.1 14.2 14.3 "DALL·E 3" (in en-US). https://openai.com/dall-e-3/.
- ↑ "DALL·E API Now Available in Public Beta" (in en). 2022-11-03. https://openai.com/blog/dall-e-api-now-available-in-public-beta.
- ↑ Wiggers, Kyle (2022-11-03). "Now anyone can build apps that use DALL-E 2 to generate images". TechCrunch. https://techcrunch.com/2022/11/03/now-anyone-can-build-apps-that-use-dall-e-2-to-generate-images.
- ↑ 17.0 17.1 Coldewey, Devin (5 January 2021). "OpenAI's DALL-E creates plausible images of literally anything you ask it to". https://techcrunch.com/2021/01/05/openais-dall-e-creates-plausible-images-of-literally-anything-you-ask-it-to/.
- ↑ Growcoot, Matt (2024-02-08). "AI Images Generated on DALL-E Now Contain the Content Authenticity Tag" (in en). https://petapixel.com/2024/02/08/ai-images-generated-on-dall-e-now-contain-the-content-authenticity-tag/.
- ↑ Radford, Alec; Narasimhan, Karthik; Salimans, Tim; Sutskever, Ilya (11 June 2018). "Improving Language Understanding by Generative Pre-Training". OpenAI. pp. 12. https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf.
- ↑ "GPT-1 to GPT-4: Each of OpenAI's GPT Models Explained and Compared". 11 April 2023. https://www.makeuseof.com/gpt-models-explained-and-compared/.
- ↑ Radford, Alec; Wu, Jeffrey; Child, Rewon et al. (14 February 2019). Language models are unsupervised multitask learners. 1. https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf. Retrieved 19 December 2020.
- ↑ Brown, Tom B.; Mann, Benjamin; Ryder, Nick; et al. (July 22, 2020). "Language Models are Few-Shot Learners". arXiv:2005.14165 [cs.CL].
- ↑ Ramesh, Aditya; Pavlov, Mikhail; Goh, Gabriel; et al. (24 February 2021). "Zero-Shot Text-to-Image Generation". arXiv:2102.12092 [cs.LG].
- ↑ 24.0 24.1 24.2 24.3 24.4 24.5 Ramesh, Aditya; Dhariwal, Prafulla; Nichol, Alex; Chu, Casey; Chen, Mark (2022-04-12). "Hierarchical Text-Conditional Image Generation with CLIP Latents". arXiv:2204.06125 [cs.CV].
- ↑ 25.0 25.1 25.2 25.3 Heaven, Will Douglas (5 January 2021). "This avocado armchair could be the future of AI". MIT Technology Review. https://www.technologyreview.com/2021/01/05/1015754/avocado-armchair-future-ai-openai-deep-learning-nlp-gpt3-computer-vision-common-sense/.
- ↑ Radford, Alec; Kim, Jong Wook; Hallacy, Chris; Ramesh, Aditya; Goh, Gabriel; Agarwal, Sandhini; Sastry, Girish; Askell, Amanda et al. (2021-07-01). "Learning Transferable Visual Models From Natural Language Supervision" (in en). Proceedings of the 38th International Conference on Machine Learning. PMLR. pp. 8748–8763. https://proceedings.mlr.press/v139/radford21a.
- ↑ Shi, Zhan; Zhou, Xu; Qiu, Xipeng; Zhu, Xiaodan (2020). "Improving Image Captioning with Better Use of Captions". arXiv:2006.11807 [cs.CV].
- ↑ 28.0 28.1 Marcus, Gary; Davis, Ernest; Aaronson, Scott (2022-05-02). "A very preliminary analysis of DALL-E 2". arXiv:2204.13807 [cs.CV].
- ↑ 29.0 29.1 Shead, Sam (8 January 2021). "Why everyone is talking about an image generator released by an Elon Musk-backed A.I. lab". CNBC. https://www.cnbc.com/2021/01/08/openai-shows-off-dall-e-image-generator-after-gpt-3.html.
- ↑ 30.0 30.1 Wakefield, Jane (6 January 2021). "AI draws dog-walking baby radish in a tutu". British Broadcasting Corporation. https://www.bbc.com/news/technology-55559463.
- ↑ Markowitz, Dale (10 January 2021). "Here's how OpenAI's magical DALL-E image generator works". TheNextWeb. https://thenextweb.com/neural/2021/01/10/heres-how-openais-magical-dall-e-generates-images-from-text-syndication/.
- ↑ "DALL·E: Creating Images from Text" (in en). 2021-01-05. https://openai.com/blog/dall-e/.
- ↑ Edwards, Benj (2023-09-20). "OpenAI's new AI image generator pushes the limits in detail and prompt fidelity" (in en-us). https://arstechnica.com/information-technology/2023/09/openai-announces-dall-e-3-a-next-gen-ai-image-generator-based-on-chatgpt/.
- ↑ Coldewey, Devin (2022-04-06). "New OpenAI tool draws anything, bigger and better than ever" (in en-US). https://techcrunch.com/2022/04/06/openais-new-dall-e-model-draws-anything-but-bigger-better-and-faster-than-before/.
- ↑ "DALL·E: Introducing Outpainting" (in en). 2022-08-31. https://openai.com/blog/dall-e-introducing-outpainting/.
- ↑ Saharia, Chitwan; Chan, William; Saxena, Saurabh; et al. (2022-05-23). "Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding". arXiv:2205.11487 [cs.CV].
- ↑ Marcus, Gary (2022-05-28). "Horse rides astronaut". https://garymarcus.substack.com/p/horse-rides-astronaut.
- ↑ 38.0 38.1 Strickland, Eliza (2022-07-14). "DALL-E 2's Failures Are the Most Interesting Thing About It" (in en). https://spectrum.ieee.org/openai-dall-e-2.
- ↑ 39.0 39.1 "DALL·E 2 Pre-Training Mitigations" (in en). 2022-06-28. https://openai.com/blog/dall-e-2-pre-training-mitigations/.
- ↑ James Vincent (September 29, 2022). "OpenAI's image generator DALL-E is available for anyone to use immediately". https://www.theverge.com/2022/9/28/23376328/ai-art-image-generator-dall-e-access-waitlist-scrapped.
- ↑ OpenAI (October 3, 2023). DALL-E 3 System Card. https://cdn.openai.com/papers/DALL_E_3_System_Card.pdf.
- ↑ Taylor, Josh (18 June 2022). "From Trump Nevermind babies to deep fakes: DALL-E and the ethics of AI art". https://www.theguardian.com/technology/2022/jun/19/from-trump-nevermind-babies-to-deep-fakes-dall-e-and-the-ethics-of-ai-art. Retrieved 2 August 2022.
- ↑ Knight, Will (13 July 2022). "When AI Makes Art, Humans Supply the Creative Spark". Wired. https://www.wired.com/story/when-ai-makes-art/. Retrieved 2 August 2022.
- ↑ Rose, Janus (24 June 2022). "DALL-E Is Now Generating Realistic Faces of Fake People". Vice. https://www.vice.com/en/article/dall-e-is-now-generating-realistic-faces-of-fake-people/.
- ↑ 45.0 45.1 OpenAI (19 June 2022). "DALL·E 2 Preview – Risks and Limitations". https://github.com/openai/dalle-2-preview/blob/main/system-card.md. Retrieved 2 August 2022.
- ↑ Lane, Laura (1 July 2022). "DALL-E, Make Me Another Picasso, Please". The New Yorker. https://www.newyorker.com/magazine/2022/07/11/dall-e-make-me-another-picasso-please. Retrieved 2 August 2022.
- ↑ Goldman, Sharon (26 July 2022). "OpenAI: Will DALL-E 2 kill creative careers?". https://venturebeat.com/business/openai-will-dall-e-2-kill-creative-careers/.
- ↑ Blain, Loz (29 July 2022). "DALL-E 2: A dream tool and an existential threat to visual artists". https://newatlas.com/computers/dall-e-2-ai-art/.
- ↑ "DALL·E 3" (in en-US). https://openai.com/index/dall-e-3/.
- ↑ centerforartlaw (2022-11-21). "Art-istic or Art-ificial? Ownership and copyright concerns in AI-generated artwork - Center for Art Law" (in en-US). https://itsartlaw.org/2022/11/21/artistic-or-artificial-ai/.
- ↑ "Generative AI Has an Intellectual Property Problem" (in en). Harvard Business Review. 2023-04-07. ISSN 0017-8012. https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem.
- ↑ Biddle, Sam (10 April 2024). "Microsoft Pitched OpenAI's DALL-E as Battlefield Tool for U.S. Military". The Intercept. https://theintercept.com/2024/04/10/microsoft-openai-dalle-ai-military-use/.
- ↑ Biddle, Sam (12 January 2024). "OpenAI Quietly Deletes Ban on Using ChatGPT for "Military and Warfare"". The Intercept. https://theintercept.com/2024/01/12/open-ai-military-ban-chatgpt/.
- ↑ Kasana, Mehreen (7 January 2021). "This AI turns text into surreal, suggestion-driven art". Input. https://www.inputmag.com/tech/dalle-takes-your-text-turns-it-into-surreal-captivating-art.
- ↑ Ehrenkranz, Melanie (27 January 2021). "Here's DALL-E: An algorithm learned to draw anything you tell it". NBC News. https://www.nbcnews.com/tech/innovation/here-s-dall-e-algorithm-learned-draw-anything-you-tell-n1255834.
- ↑ Stove, Emma (5 February 2021). "Tardigrade circus and a tree of life — January's best science images". Nature. https://www.nature.com/immersive/d41586-021-00095-y/index.html.
- ↑ Knight, Will (26 January 2021). "This AI Could Go From 'Art' to Steering a Self-Driving Car". Wired. https://www.wired.com/story/ai-go-art-steering-self-driving-car/. Retrieved 2 March 2021.
- ↑ Metz, Rachel (2 February 2021). "A radish in a tutu walking a dog? This AI can draw it really well". CNN. https://www.cnn.com/2021/01/08/tech/artificial-intelligence-openai-images-from-text/index.html.
- ↑ Whitwam, Ryan (6 January 2021). "OpenAI's 'DALL-E' Generates Images From Text Descriptions". ExtremeTech. https://www.extremetech.com/extreme/318881-openais-dall-e-generates-images-from-text-descriptions.
- ↑ Dent, Steve (6 January 2021). "OpenAI's DALL-E app generates images from just a description". Engadget. https://www.engadget.com/dall-e-ai-gpt-make-image-from-any-description-135535140.html.
- ↑ Leswing, Kif (8 October 2022). "Why Silicon Valley is so excited about awkward drawings done by artificial intelligence" (in en). https://www.cnbc.com/2022/10/08/generative-ai-silicon-valleys-next-trillion-dollar-companies.html.
- ↑ Etherington, Darrell (2019-07-22). "Microsoft invests $1 billion in OpenAI in new multiyear partnership" (in en-US). https://techcrunch.com/2019/07/22/microsoft-invests-1-billion-in-openai-in-new-multiyear-partnership/.
- ↑ "OpenAI's first VC backer weighs in on generative A.I." (in en). https://fortune.com/2023/02/02/openais-first-vc-backer-khosla-ventures-weighs-in-on-the-future-of-generative-a-i/.
- ↑ Metz, Cade; Weise, Karen (2023-01-23). "Microsoft to Invest $10 Billion in OpenAI, the Creator of ChatGPT" (in en-US). The New York Times. ISSN 0362-4331. https://www.nytimes.com/2023/01/23/business/microsoft-chatgpt-artificial-intelligence.html.
- ↑ "AI-generated art sparks furious backlash from Japan's anime community" (in en-US). 2022-10-27. https://restofworld.org/2022/ai-backlash-anime-artists/.
- ↑ Roose, Kevin (2022-09-02). "An A.I.-Generated Picture Won an Art Prize. Artists Aren't Happy." (in en-US). The New York Times. ISSN 0362-4331. https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html.
- ↑ Daws, Ryan (2022-12-15). "ArtStation backlash increases following AI art protest response" (in en-GB). https://www.artificialintelligence-news.com/2022/12/15/artstation-backlash-increases-ai-art-protest-response/.
- ↑ 68.0 68.1 Corden, Jez (8 October 2023). "Bing Dall-E 3 image creation was great for a few days, but now Microsoft has predictably lobotomized it". https://www.windowscentral.com/software-apps/bing/bing-dall-e-3-image-creation-was-great-for-a-few-days-but-now-microsoft-has-predictably-lobotomized-it.
- ↑ 69.0 69.1 Allan, Darren (9 October 2023). "Microsoft reins in Bing AI's Image Creator – and the results don't make much sense". https://www.techradar.com/computing/artificial-intelligence/microsoft-reins-in-bing-ais-image-creator-and-the-results-dont-make-much-sense.
- ↑ Sahar Mor, Stripe (16 April 2022). "How DALL-E 2 could solve major computer vision challenges". https://venturebeat.com/2022/04/16/how-dall-e-2-could-solve-major-computer-vision-challenges/.
- ↑ "jina-ai/dalle-flow". Jina AI. 2022-06-17. https://github.com/jina-ai/dalle-flow.
- ↑ Carson, Erin (14 June 2022). "Everything to Know About Dall-E Mini, the Mind-Bending AI Art Creator". https://www.cnet.com/culture/everything-to-know-about-dall-e-mini-the-mind-bending-ai-art-creator/.
- ↑ Schroeder, Audra (9 June 2022). "AI program DALL-E mini prompts some truly cursed images". https://www.dailydot.com/unclick/dall-e-mini-memes/.
- ↑ Diaz, Ana (15 June 2022). "People are using DALL-E mini to make meme abominations — like pug Pikachu". https://www.polygon.com/23167596/memes-dall-e-mini-image-generator-ai-explained.
- ↑ "Stability-AI/stablediffusion". Stability AI. 2025-04-07. https://github.com/Stability-AI/stablediffusion?tab=readme-ov-file. Retrieved 2025-04-08.
External links
- Ramesh, Aditya; Pavlov, Mikhail; Goh, Gabriel; Gray, Scott; Voss, Chelsea; Radford, Alec; Chen, Mark; Sutskever, Ilya (2021-02-26). "Zero-Shot Text-to-Image Generation". arXiv:2102.12092 [cs.CV]. The original report on DALL-E.
- DALL-E 3 System Card
- DALL-E 3 paper by OpenAI
- DALL-E 2 website
- Craiyon website
