As long as your AI doesn’t somehow infringe on its training data, you’re allowed to use whatever you want, just like reviewers, analysts, and indexers do.
They’re trained on technical material too.
Titanic vs. Bismarck. Who would win?
It’s only decipherable to people who have kept up with the last two years of IMG Gen jargon.
The animation stuff you mentioned exists today:
https://www.youtube.com/watch?v=Gt1yNJ180Cs
We need Standard Fruit.
i cri evrytiem
I found this guide on how to make an inpainting model out of any model, though it’s pretty out of date.
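For context, the technique usually described for this is an "add difference" checkpoint merge: start from the official inpainting model and add the difference between your custom model and the base model it was trained from. A minimal sketch of the weight math, assuming all three checkpoints share an architecture (the layer names here are illustrative, not real checkpoint keys):

```python
import numpy as np

def add_difference_merge(inpaint_base, custom, base, multiplier=1.0):
    """Add-difference merge: inpaint_base + multiplier * (custom - base).

    Each argument is a dict of layer name -> weight array. Layers that
    exist only in the inpainting model (its extra input channels) are
    copied through unchanged.
    """
    merged = {}
    for name, w in inpaint_base.items():
        if name in custom and name in base:
            merged[name] = w + multiplier * (custom[name] - base[name])
        else:
            # e.g. the inpainting UNet's extra mask/latent input channels
            merged[name] = w
    return merged

# Toy example: one shared layer plus one inpainting-only layer
inpaint = {"unet.layer0": np.ones((2, 2)), "unet.mask_in": np.full((2, 2), 5.0)}
custom  = {"unet.layer0": np.full((2, 2), 3.0)}
base    = {"unet.layer0": np.full((2, 2), 2.0)}

out = add_difference_merge(inpaint, custom, base)
print(out["unet.layer0"])   # 1 + (3 - 2) = 2 everywhere
print(out["unet.mask_in"])  # untouched: 5 everywhere
```

The idea is that (custom - base) isolates your fine-tune's changes, which then get applied on top of the inpainting weights.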
Yeah, this seems like the last confirmation we didn’t really need.
What are his feelings on open source? That’s my question.
If you’re using the same UI and metadata, you should be able to reproduce images with only slight differences and then upscale them with hires fix or something else.
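For anyone unfamiliar: UIs like AUTOMATIC1111 embed the generation parameters (prompt, seed, sampler, steps, etc.) as a text chunk in the output PNG, which is what makes near-exact reproduction possible. A rough sketch of reading that chunk back with Pillow, assuming the A1111 convention of storing everything under the "parameters" key (other UIs use different keys):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def read_generation_metadata(path):
    """Return the embedded generation parameters string, or None if absent."""
    with Image.open(path) as img:
        # A1111-style UIs store settings in a PNG text chunk named "parameters"
        return img.text.get("parameters")

# Demo: write a PNG with embedded parameters, then read them back
meta = PngInfo()
meta.add_text("parameters", "a castle, Steps: 20, Seed: 1234")
Image.new("RGB", (8, 8)).save("demo.png", pnginfo=meta)

print(read_generation_metadata("demo.png"))
```

Paste the recovered string back into the same UI (or its PNG Info tab) with the same model and you should land within a rounding error of the original image.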
They tried to make video game rentals illegal in the US. They’ve always been a shitty, anti-consumer company.
That’s kind of unbelievable given what they say it can do.
They said they would be open sourcing it.
That was really cool.
Those might just be LoRA merged models, not full fine-tuning. From what I heard, fine-tuning doesn’t work because the models are distilled. You’d have to find a way to undistill them to train them.
Last I heard, LoRAs cause catastrophic forgetting in the model, and full fine-tuning doesn’t really work.
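For reference, a "LoRA-merged" checkpoint just bakes the low-rank update into the base weights: W' = W + (alpha/r)·B·A, where B and A are the trained low-rank factors. A toy sketch of that merge (the shapes and alpha/r scaling follow the standard LoRA formulation; nothing here is Flux-specific):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank, alpha = 6, 4, 2, 4.0

W = rng.normal(size=(d_out, d_in))   # frozen base weight
B = rng.normal(size=(d_out, rank))   # trained "up" factor
A = rng.normal(size=(rank, d_in))    # trained "down" factor

# Merging folds the adapter into the base weight permanently --
# part of why stacking merges can degrade a model.
W_merged = W + (alpha / rank) * (B @ A)

x = rng.normal(size=(d_in,))
# The merged weight gives base output + scaled LoRA path output
print(np.allclose(W_merged @ x, W @ x + (alpha / rank) * (B @ (A @ x))))
```

So a merge changes no architecture at all, which is why it still works on a distilled model even when a real fine-tune doesn't.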
You can never learn anything with these clickbait headlines.
I don’t think so. They’re going to have to do a lot better than a tutorial to win people back. That said, the two Flux models being distilled, which makes them close to impossible to fine-tune, sucks too.
Isn’t this just the pay raise the Japanese company is forcing on everyone? They’re pretty late. A bunch of other companies announced their raises earlier this year, so doing this in October comes off as scummy.