Previous posts: https://programming.dev/post/3974121 and https://programming.dev/post/3974080

Original survey link: https://forms.gle/7Bu3Tyi5fufmY8Vc8

Thanks for all the answers! Here are the results of the survey, in case you were wondering how you did.

Edit: People working in CS or a related field have a 9.59 avg score, while people who aren’t have a 9.61 avg.

People who have used AI image generators before got a 9.70 avg score, while people who haven’t got a 9.39 avg.

Edit 2: The data has changed slightly! Over 1,000 people have submitted results since this image was posted; check the dataset for live results. Be aware that many people saw the image and comments before submitting, so they’ve been spoiled on some answers, which may be pushing the recent average higher: https://docs.google.com/spreadsheets/d/1MkuZG2MiGj-77PGkuCAM3Btb1_Lb4TFEx8tTZKiOoYI

  • taiyang@lemmy.world · 1 year ago

    I feel like without the usual bad-hands tell, it’s pretty much impossible to tell. There are a few other “tells,” but your image selection is particularly avoidant of them. Having generated plenty of images myself, cherry-picking the best is par for the course.

    That said, a bad AI sample is at least pretty easy to spot. Even up against a bad human artist, a person with 8 fingers is an easy AI giveaway. And for every one good image we get from SD, I toss out at least a few dozen.