Text-to-image personalization
Text-to-image personalization is a task in deep learning for computer graphics that augments pre-trained text-to-image generative models. In this task, a generative model trained on large-scale data (usually a foundation model) is adapted so that it can generate images of novel, user-provided concepts.[1][2] These concepts are typically unseen during training and may represent specific objects (such as the user's pet) or more abstract categories (such as a new artistic style[3] or object relations[4]).
Text-to-image personalization methods typically bind the novel (personal) concept to new words in the vocabulary of the model. These words can then be used in future prompts to invoke the concept for subject-driven generation,[5] inpainting, and style transfer,[6] and even to correct biases in the model. To do so, these methods either optimize word embeddings, fine-tune the generative model itself, or combine both approaches.
Technology
Text-to-image personalization was first proposed in August 2022 by two concurrent works, Textual Inversion[7] and DreamBooth.[8]
In both cases, a user provides a few images (typically 3–5) of a concept, such as their own dog, together with a coarse descriptor of the concept's class (such as the word "dog"). The model then learns to represent the subject through a reconstruction-based objective, in which prompts referring to the subject are expected to reconstruct images from the training set.
In Textual Inversion, personalized concepts are introduced into the text-to-image model by adding new words to its vocabulary. Typical text-to-image models represent words (and sometimes parts of words) as tokens, i.e. indices into a predefined dictionary. During generation, an input prompt is converted into such tokens, and each token is mapped to a ‘word embedding’: a continuous vector representation learned for that token as part of the model's training. Textual Inversion optimizes a new word-embedding vector to represent the novel concept. This new embedding is assigned to a user-chosen string and is invoked whenever the user's prompt contains that string.[7]
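The sketch below illustrates this mechanism in PyTorch under simplified assumptions: the vocabulary size, embedding dimension, placeholder token id and prompt ids are illustrative placeholders rather than values taken from any particular model, and the frozen text encoder and diffusion loss that would consume the resulting embeddings are only indicated in comments.

```python
# Minimal sketch of the Textual Inversion embedding mechanism (not an actual
# Stable Diffusion implementation). Only the new vector is trainable; the
# existing vocabulary stays frozen.
import torch
import torch.nn as nn

vocab_size, embed_dim = 49408, 768            # assumed CLIP-like sizes
embedding_table = nn.Embedding(vocab_size, embed_dim)
embedding_table.weight.requires_grad_(False)  # freeze existing word embeddings

placeholder_id = vocab_size                   # id reserved for the new token, e.g. "<my-dog>"
new_embedding = nn.Parameter(torch.randn(embed_dim) * 0.01)

def embed_prompt(token_ids: torch.Tensor) -> torch.Tensor:
    """Look up word embeddings, substituting the learned vector for the placeholder."""
    safe_ids = token_ids.clamp(max=vocab_size - 1)
    mask = (token_ids == placeholder_id).unsqueeze(-1)
    return torch.where(mask, new_embedding, embedding_table(safe_ids))

# Example: ids for a prompt such as "a photo of <my-dog>" (ids here are made up)
prompt_ids = torch.tensor([320, 1125, 539, placeholder_id])
prompt_embeds = embed_prompt(prompt_ids)      # shape (4, 768)

# Training would pass prompt_embeds through the frozen text encoder and
# denoiser and minimise the usual diffusion reconstruction loss, with an
# optimizer that updates only `new_embedding`:
optimizer = torch.optim.AdamW([new_embedding], lr=5e-3)
```

Because only the single new vector receives gradients, the rest of the model is left untouched.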
In DreamBooth, rather than optimizing a new word vector, the full generative model itself is fine-tuned. The user first selects an existing token, typically one which rarely appears in prompts. The subject itself is then represented by a string containing this token, followed by a coarse descriptor of the subject's class. A prompt describing the subject will then take the form: "A photo of <token> <class>" (e.g. "a photo of sks cat" when learning to represent a specific cat). The text-to-image model is then tuned so that prompts of this form will generate images of the subject.[8]
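As a rough, self-contained illustration (not DreamBooth's actual implementation), the toy loop below fine-tunes every parameter of a stand-in denoiser on images of a subject described as "a photo of sks cat". The tiny network, the hash-based text features, the single noising step and the random "images" are all placeholder simplifications, and DreamBooth's class-prior preservation loss is omitted.

```python
# Toy illustration of DreamBooth-style tuning: the whole model is trainable,
# and the subject is referred to by a rare token plus its coarse class.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDenoiser(nn.Module):
    def __init__(self, img_dim=64, text_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(img_dim + text_dim, 128),
                                 nn.ReLU(),
                                 nn.Linear(128, img_dim))

    def forward(self, noisy_images, text_features):
        return self.net(torch.cat([noisy_images, text_features], dim=-1))

def toy_text_features(prompt: str, dim: int = 16) -> torch.Tensor:
    """Deterministic stand-in for a real text encoder."""
    gen = torch.Generator().manual_seed(abs(hash(prompt)) % (2**31))
    return torch.randn(dim, generator=gen)

denoiser = ToyDenoiser()
optimizer = torch.optim.AdamW(denoiser.parameters(), lr=5e-6)  # all weights train

subject_prompt = "a photo of sks cat"        # rare token + coarse class descriptor
subject_images = torch.randn(4, 64)          # stand-in for the user's 3-5 photos

for step in range(100):
    noise = torch.randn_like(subject_images)
    noisy = subject_images + noise           # a real diffusion model uses a noise schedule
    cond = toy_text_features(subject_prompt).expand(len(subject_images), -1)
    loss = F.mse_loss(denoiser(noisy, cond), noise)  # predict the added noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```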
Textual Inversion
The key idea in Textual Inversion is to add a new term to the vocabulary of the diffusion model that corresponds to the new (personalized) concept. Textual Inversion optimizes the vector embedding of this new term so that using it in an input text prompt generates images similar to the given example images of the concept. The resulting representation is extremely lightweight per concept: a single embedding vector of roughly a thousand parameters, yet it can encode detailed visual properties of the concept.
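As a back-of-the-envelope example, assuming the learned representation is a single 768-dimensional vector stored as 32-bit floats (the token-embedding width of Stable Diffusion's text encoder; other models differ), the per-concept payload is only a few kilobytes:

```python
# Rough size of one learned concept embedding, assuming a 768-dimensional
# vector stored as 32-bit floats (dimensions vary by model).
embed_dim = 768
bytes_per_float = 4
size_kib = embed_dim * bytes_per_float / 1024
print(f"{size_kib:.1f} KiB per concept")   # -> 3.0 KiB
```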
Extensions
Several approaches have been proposed to refine and improve on the original methods. These include the following:
- Low-Rank Adaptation (LoRA) - an adapter-based technique for efficient fine-tuning of models.[9] In the case of text-to-image models, LoRA is typically used to modify the cross-attention layers of a diffusion model[10] (a minimal sketch is shown after this list).
- Perfusion - a low-rank update method that also locks the key activations in the diffusion model's cross-attention layers to the concept's coarse class.[11]
- Extended Textual Inversion - a technique that learns an individual word embedding for each layer in the diffusion model's denoising network.[12]
- Encoder-based methods, which use a separate neural network to quickly personalize a model.[13][14]
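The following sketch shows the core LoRA construction for a single linear layer of the kind used for attention projections; the layer width, rank and scaling are illustrative choices rather than values from any particular text-to-image implementation.

```python
# Minimal sketch of a LoRA adapter around a frozen linear layer (e.g. a
# cross-attention projection). Only the small down/up matrices are trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)            # original weights stay frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)   # A
        self.up = nn.Linear(rank, base.out_features, bias=False)    # B
        nn.init.normal_(self.down.weight, std=1.0 / rank)
        nn.init.zeros_(self.up.weight)         # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.up(self.down(x))

# Wrap a stand-in attention projection and train only the adapter weights.
proj = LoRALinear(nn.Linear(768, 768))
trainable = [p for p in proj.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

Because the up-projection is initialized to zero, the wrapped layer initially behaves exactly like the frozen original, and only the small adapter matrices need to be stored or shared per concept.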
Challenges and limitations
Text-to-image personalization methods must contend with several challenges. At their core is the goal of achieving high fidelity to the personal concept while maintaining strong alignment between novel prompts containing the subject and the generated images (a property typically referred to as ‘editability’).
Another challenge is memory consumption. Initial implementations of personalization methods required more than 20 gigabytes of GPU memory, and some more recent approaches have reported requirements of more than 40 gigabytes.[13] However, optimizations such as FlashAttention[15] have since reduced these requirements considerably.
Approaches that tune the entire generative model may also produce checkpoints that are several gigabytes in size, making it difficult to share or store many personalized models. Embedding-based approaches require only a few kilobytes, but typically struggle to preserve the subject's identity while maintaining editability. More recent approaches propose hybrid objectives that optimize both an embedding and a subset of network weights; these can reduce storage requirements to as little as 100 kilobytes while achieving quality comparable to full tuning methods.[11]
Finally, optimization can be lengthy, requiring several minutes of tuning for each novel concept. Encoder-based and quick-tuning methods aim to reduce this to seconds or less.[16]
References
1. Murphy, Brendan Paul (2022-10-12). "AI image generation is advancing at astronomical speeds. Can we still tell if a picture is fake?". http://theconversation.com/ai-image-generation-is-advancing-at-astronomical-speeds-can-we-still-tell-if-a-picture-is-fake-191674.
2. "「好きなキャラに近い絵をAIが量産」――ある概念を"単語"に圧縮し入力テキストに使える技術" ["AI mass-produces images resembling your favorite character": a technique that compresses a concept into a "word" usable in input text] (in Japanese). https://www.itmedia.co.jp/news/articles/2208/30/news058.html.
3. Baio, Andy (2022-11-01). "Invasive Diffusion: How one unwilling illustrator found herself turned into an AI model". https://waxy.org/2022/11/invasive-diffusion-how-one-unwilling-illustrator-found-herself-turned-into-an-ai-model/.
4. Huang, Ziqi; Wu, Tianxing; Jiang, Yuming; Chan, Kelvin C. K.; Liu, Ziwei (2023). "ReVersion: Diffusion-Based Relation Inversion from Images". arXiv:2303.13495 [cs.CV].
5. Ongweso Jr., Edward (2022-10-14). "People Are Now Making Fake Selfies With AI". https://www.vice.com/en/article/88qdpa/people-are-now-making-fake-selfies-with-ai.
6. James, Dave (2022-12-27). "I thrashed the RTX 4090 for 8 hours straight training Stable Diffusion to paint like my uncle Hermann". PC Gamer. https://www.pcgamer.com/nvidia-rtx-4090-stable-diffusion-training-aharon-kahana/.
7. Gal, Rinon; Alaluf, Yuval; Atzmon, Yuval; Patashnik, Or; Bermano, Amit Haim; Chechik, Gal; Cohen-Or, Daniel (2022-09-29). "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion". https://openreview.net/forum?id=NAQvF08TcyG.
8. Ruiz, Nataniel; Li, Yuanzhen; Jampani, Varun; Pritch, Yael; Rubinstein, Michael; Aberman, Kfir (2023). "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation". Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 22500–22510. https://openaccess.thecvf.com/content/CVPR2023/html/Ruiz_DreamBooth_Fine_Tuning_Text-to-Image_Diffusion_Models_for_Subject-Driven_Generation_CVPR_2023_paper.html.
9. Singh, Niharika (2023-02-18). "HuggingFace Publishes LoRA Scripts For Efficient Stable Diffusion Fine-Tuning". https://www.marktechpost.com/2023/02/18/huggingface-publishes-lora-scripts-for-efficient-stable-diffusion-fine-tuning/.
10. Hu, Edward J.; Shen, Yelong; Wallis, Phillip; Allen-Zhu, Zeyuan; Li, Yuanzhi; Wang, Shean; Wang, Lu; Chen, Weizhu (2021-10-06). "LoRA: Low-Rank Adaptation of Large Language Models". https://openreview.net/forum?id=nZeVKeeFYf9.
11. Tewel, Yoad; Gal, Rinon; Chechik, Gal; Atzmon, Yuval (2023-07-23). "Key-Locked Rank One Editing for Text-to-Image Personalization". SIGGRAPH '23 Conference Proceedings. New York, NY, USA: Association for Computing Machinery. pp. 1–11. doi:10.1145/3588432.3591506. ISBN 979-8-4007-0159-7.
12. Lorenzi, Daniele (2023-07-22). "Meet P+: A Rich Embeddings Space for Extended Textual Inversion in Text-to-Image Generation". https://www.marktechpost.com/2023/07/22/meet-p-a-rich-embeddings-space-for-extended-textual-inversion-in-text-to-image-generation/.
13. Gal, Rinon; Arar, Moab; Atzmon, Yuval; Bermano, Amit H.; Chechik, Gal; Cohen-Or, Daniel (2023-07-26). "Encoder-based Domain Tuning for Fast Personalization of Text-to-Image Models". ACM Transactions on Graphics 42 (4): 150:1–150:13. doi:10.1145/3592133. ISSN 0730-0301.
14. Wei, Yuxiang; Zhang, Yabo; Ji, Zhilong; Bai, Jinfeng; Zhang, Lei; Zuo, Wangmeng (2023). "ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation". arXiv:2302.13848 [cs.CV].
15. Dao, Tri; Fu, Daniel Y.; Ermon, Stefano; Rudra, Atri; Ré, Christopher (2022). "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness". arXiv:2205.14135 [cs.LG].
16. Shi, Jing; Xiong, Wei; Lin, Zhe; Jung, Hyun Joon (2023). "InstantBooth: Personalized Text-to-Image Generation without Test-Time Finetuning". arXiv:2304.03411 [cs.CV].