-
Thanks for opening another discussion about this. I will look into it; I'm still deciding whether I should test and fix this or just publish an updated guide with the newer techniques, including an example like this one. What's probably happening is that the subject is too small compared to the background you're generating, so the model doesn't know what to do; in your examples the perspective and the shadows are also wrong. If you can provide an image (isolated subject) I can test with it; otherwise I'll have to find one, which might not reproduce your exact problem.
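For reference, a minimal sketch of the kind of setup being discussed, assuming the diffusers `StableDiffusionInpaintPipeline` and PIL (the model ID, filenames, sizes, and prompts below are placeholders, not the code from the guide). It scales the isolated subject so it occupies most of the canvas before building the inpainting mask from the alpha channel, which is one way to avoid the "subject too small compared to the generated background" issue described above:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load the isolated subject (RGBA, transparent background). Placeholder filename.
subject = Image.open("subject.png").convert("RGBA")

# Scale the subject so it fills most of the 512x512 canvas; a subject that is
# tiny relative to the area being generated gives the model little to anchor on.
canvas_size = 512
target_h = int(canvas_size * 0.75)
scale = target_h / subject.height
subject = subject.resize((int(subject.width * scale), target_h), Image.LANCZOS)

# Paste the subject near the bottom centre of an otherwise empty canvas.
canvas = Image.new("RGBA", (canvas_size, canvas_size), (0, 0, 0, 0))
canvas.paste(subject, ((canvas_size - subject.width) // 2,
                       canvas_size - subject.height), subject)

# Mask: white where the background should be generated (transparent pixels),
# black where the subject must be preserved.
alpha = canvas.split()[-1]
mask = alpha.point(lambda a: 0 if a > 0 else 255)

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

result = pipe(
    prompt="a car parked on a coastal road at sunset, photorealistic",
    negative_prompt="blurry, distorted, low quality",
    image=canvas.convert("RGB"),
    mask_image=mask,
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
result.save("output.png")
```

The mask convention here is the diffusers one (white = repaint, black = keep), so the transparent area becomes the region the model fills in around the subject.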
-
Hi.
I've been struggling with this problem for some time, so I decided to revert to the original code in this blog post to see where I was going wrong. First, I ran the code as-is, with no changes, to produce the car, and I got this:

Then I changed only the filename and the prompts to see if I could do the same inpainting with humans. Here is what the code looks like now:
And this is the output:

The input image also has a transparent background, like the input image in the car example.
Where am I going wrong?
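One quick check worth running here (a sketch, assuming PIL; `person.png` stands in for the actual filename) is to confirm that the image really carries an alpha channel and to see what fraction of the frame the subject occupies, since subject scale is the main suspect raised in the reply above:

```python
from PIL import Image

img = Image.open("person.png")   # placeholder for the actual input file
print(img.mode, img.size)        # expect RGBA (or another mode with transparency)

if img.mode != "RGBA":
    img = img.convert("RGBA")

# Fraction of pixels that belong to the subject (non-transparent).
alpha = img.split()[-1]
opaque = sum(1 for a in alpha.getdata() if a > 0)
print(f"subject covers {opaque / (img.width * img.height):.1%} of the frame")
```

If the subject only covers a small fraction of the frame, resizing and repositioning it on the canvas, as in the sketch after the first reply, is likely the first thing to try.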