Training the model

AI imagery tools like Midjourney, Stable Diffusion, and DALL-E 2 are remarkably good at creating images of just about anything you can dream up, but they have their own algorithmic, random-noise way of getting there. So while you can get interesting results, it can be hard to get a specific result. To get anything that actually looked like our friendly SEO Mozbot, we needed to train a Stable Diffusion model of our own as a starting point.
There are a lot of ways to go about this, some of which get pretty technical, and a number of others that use app interfaces to make the process easier for someone with a little less technical expertise. We chose to start with Astria, a solution that allows you to customize (they call it tuning) a model of your own. A lot of users train it on their own likeness to make cool avatars (like the popular Lensa app), but we threw a bunch of variations of Roger in there, had him party with the AI model, and watched what kind of shenanigans they got up to.

A Rogues Gallery of Rogers

These tools generate images from a text prompt, so our first prompt asked whether the model could output a version of Roger in a fun and colorful 3D style. Not bad for first results! It was clear this generation drew heavily on photos of a Roger toy held in a hand, as well as a photo of our life-size Roger mascot at one of our MozCon events (hence the people in the background of some of the images).