73 points | by lcorinst | 6 hours ago
From https://arxiv.org/html/2409.11340v1
> Unlike popular diffusion models, OmniGen features a very concise structure, comprising only two main components: a VAE and a transformer model, without any additional encoders.
> OmniGen supports arbitrarily interleaved text and image inputs as conditions to guide image generation, rather than text-only or image-only conditions.
> Additionally, we incorporate several classic computer vision tasks such as human pose estimation, edge detection, and image deblurring, thereby extending the model’s capability boundaries and enhancing its proficiency in complex image generation tasks.
This enables prompts for edits like "|image_1| Put a smile face on the note." or "The canny edge of the generated picture should look like: |image_1|" (a rough usage sketch follows the excerpts below).
> To train a robust unified model, we construct the first large-scale unified image generation dataset X2I, which unifies various tasks into one format.
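For anyone curious what such an interleaved prompt looks like in practice, here is a minimal sketch based on the interface in the OmniGen GitHub repo (OmniGenPipeline, input_images, and the <img><|image_1|></img> placeholder). Treat the exact class, argument, and checkpoint names as assumptions and check the repo before relying on them.

```python
# Minimal sketch of an interleaved text+image edit, assuming the
# OmniGenPipeline interface from the OmniGen repo. Checkpoint name,
# placeholder syntax, and guidance values are assumptions to verify.
from OmniGen import OmniGenPipeline

pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1")

# The input image is referenced inline in the prompt via a placeholder
# token and supplied separately through input_images (no extra image encoder).
images = pipe(
    prompt="Put a smiley face on the note in <img><|image_1|></img>.",
    input_images=["./note.png"],   # hypothetical local file
    height=1024,
    width=1024,
    guidance_scale=2.5,            # text guidance
    img_guidance_scale=1.6,        # image-condition guidance
    seed=0,
)
images[0].save("note_with_smiley.png")
```

The canny-edge style prompt above should fit the same call shape, with the edge map passed as |image_1|.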
Not exactly. They mention starting from Stable Diffusion XL's VAE and Phi-3's transformer.
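To make "an SDXL VAE plus a Phi-3 transformer" concrete, here's a rough sketch that pulls those two pretrained pieces with diffusers/transformers. This is not OmniGen's actual assembly code; the model IDs and shapes are assumptions for illustration.

```python
# Sketch: load the two pretrained components the paper reportedly starts from.
# Not OmniGen's code; model IDs are assumptions. Phi-3-mini is ~3.8B params,
# so the second load needs a fair amount of RAM.
import torch
from diffusers import AutoencoderKL
from transformers import AutoModelForCausalLM

# SDXL's VAE: maps a 3x512x512 image into a 4x64x64 latent and back.
vae = AutoencoderKL.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="vae"
)

# Phi-3's transformer: the language-model backbone OmniGen initializes from.
backbone = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True
)

# Round-trip a dummy image through the VAE to see the latent shape.
with torch.no_grad():
    x = torch.randn(1, 3, 512, 512)
    latents = vae.encode(x).latent_dist.sample() * vae.config.scaling_factor
print(latents.shape)  # torch.Size([1, 4, 64, 64])
```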
Looks like these LLMs can really be used for anything
Check out: Transparent Image Layer Diffusion using Latent Transparency

Or, if you need solid regions that overlap and mask out other regions, then generate objects over a chroma-keyable flat background.
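If anyone wants the masking half of that chroma-key approach, a rough numpy/PIL sketch (key color, tolerance, and filenames are assumptions, not anything OmniGen ships):

```python
# Sketch: turn an object generated over a flat key color into an RGBA cutout.
# Key color, tolerance, and filenames are assumptions; tune for your outputs.
import numpy as np
from PIL import Image

def chroma_key_to_rgba(path, key=(0, 255, 0), tol=60):
    rgb = np.asarray(Image.open(path).convert("RGB")).astype(np.int16)
    # L1 distance from the key color; pixels close to the key become transparent.
    dist = np.abs(rgb - np.asarray(key, dtype=np.int16)).sum(axis=-1)
    alpha = np.where(dist < tol, 0, 255).astype(np.uint8)
    return Image.fromarray(np.dstack([rgb.astype(np.uint8), alpha]), mode="RGBA")

chroma_key_to_rgba("object_on_green.png").save("object_cutout.png")
```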