DPChallenge Forums >> General Discussion >> AI is fun!!!
Showing posts 26 - 50 of 58
04/18/2023 10:16:57 AM · #26
Originally posted by tanguera:

Originally posted by MargaretNet:

Originally posted by tanguera:

...But the possibility for abuse is massive, especially with defamation, putting people in situations that never happened, for example...

Skilled photographers could do this before, even in the days of film. It was called photo-montage.


Yes, photographers did it. People. Not machines or computers.

It was still unethical, no matter how achieved.

And robots build cars these days and we are fine with that - the output is what needs to be judged, not the technique.

Message edited by author 2023-04-18 10:18:01.
04/18/2023 10:22:33 AM · #27
This stuff is dangerous. Just one recent example ... Trump, the Pope, etc.

PetaPixel article (where this image collage was pulled from).



The AI tool that was used for the above was taken down by its creator - or at least the free version was.
In this case, the people portrayed were known celebrities, but what are the limits? Who else could be fooled? In what way?
04/18/2023 10:23:20 AM · #28
Originally posted by MargaretNet:

... And robots build cars these days and we are fine with that - the output is what needs to be judged, not the technique.

Guns don't kill people - people do?

Message edited by author 2023-04-18 10:23:43.
04/18/2023 10:28:21 AM · #29
Originally posted by glad2badad:

Originally posted by MargaretNet:

... And robots build cars these days and we are fine with that - the output is what needs to be judged, not the technique.

Guns don't kill people - people do?

I think this comparison makes no sense for many reasons.
04/18/2023 10:47:59 AM · #30
Originally posted by MargaretNet:

Originally posted by tanguera:

Originally posted by MargaretNet:

Originally posted by tanguera:

...But the possibility for abuse is massive, especially with defamation, putting people in situations that never happened, for example...

Skilled photographers could do this before, even in the days of film. It was called photo-montage.


Yes, photographers did it. People. Not machines or computers.

It was still unethical, no matter how achieved.

And robots build cars these days and we are fine with that - the output is what needs to be judged, not the technique.


Originally posted by MargaretNet:

It was still unethical, no matter how achieved.

So you're acknowledging that this is unethical...?

And the ends justify the means?

Like I said, you're on the winning side of this because it's not going to go away.
04/18/2023 04:06:55 PM · #31
Originally posted by tanguera:

Name me anything that didn't suck in its infancy.

That's fair, but I'm in charge of a code base with just under three million lines of code.
I'd love it if AI could refactor it and write new features; I just don't see that ever happening.
04/18/2023 04:23:34 PM · #32
Originally posted by Venser:

Originally posted by tanguera:

Name me anything that didn't suck in its infancy.

That's fair, but I'm in charge of a code base with just under three million lines of code.
I'd love it if AI could refactor it and write new features; I just don't see that ever happening.


If you've noticed, it's only the creative/fun stuff that AI is taking over. Technology was supposed to make life easier - I have yet to see any AI taking over accounting, or any other routine-type "work".
04/18/2023 04:46:07 PM · #33
Originally posted by glad2badad:

This stuff is dangerous. Just one recent example ... Trump, the Pope, etc.

PetaPixel article (where this image collage was pulled from).



The AI tool that was used for the above was taken down by its creator - or at least the free version was.
In this case, the people portrayed were known celebrities, but what are the limits? Who else could be fooled? In what way?


I agree... AI like this is not "free speech" or "free art". It is simply deception.

I'd hate for someone to AI my reputation into the trash... even if I'd not done anything wrong.

I'm thankful that I'm not famous. The potential harm to famous people is astronomical.

It's defamation of character.

Punishable by law.
04/18/2023 05:03:29 PM · #34
Originally posted by Lydia:

Originally posted by glad2badad:

This stuff is dangerous. Just one recent example ... Trump, the Pope, etc.

PetaPixel article (where this image collage was pulled from).



The AI tool that was used for the above was taken down by its creator - or at least the free version was.
In this case, the people portrayed were known celebrities, but what are the limits? Who else could be fooled? In what way?


I agree... AI like this is not "free speech" or "free art". It is simply deception.

I'd hate for someone to AI my reputation into the trash... even if I'd not done anything wrong.

I'm thankful that I'm not famous. The potential harm to famous people is astronomical.

It's defamation of character.

Punishable by law.


Sadly, technology has outpaced our laws. Again.
04/18/2023 05:20:15 PM · #35
In the USA "famous people" are legally available subjects for review, criticism, and parody/satire, regardless of whether Generative-AI or more traditional methods are used. Presenting fakes as real images is a different matter, and may be technically easier with G-AI but is no more or less legal.

"Not famous" people have more rights to sue for defamation, libel, or slander ...

FWIW Fox News just threw in the towel in the suit by Dominion Voting Systems, so offering up fake news (telling lies) does (sometimes) have consequences.
04/20/2023 10:07:16 PM · #36
Interesting analysis: https://youtu.be/gQfQiXP9yZA
04/20/2023 11:18:10 PM · #37
Originally posted by Art Roflmao:

Interesting analysis: https://youtu.be/gQfQiXP9yZA


I learned a couple things, thanks. This is my favorite analysis:

https://www.youtube.com/watch?v=ro130m-f_yk
04/21/2023 03:50:12 AM · #38
Artificial intelligence is already implemented in many areas of our daily lives.
You cannot stay in your caves, guys.
We use AI (ChatGPT) all the time here at school.
04/21/2023 08:58:20 AM · #39
Originally posted by Art Roflmao:

Interesting analysis: https://youtu.be/gQfQiXP9yZA

Thanks, I enjoyed it. I liked the reporting aspect of this video over the shouting and finger-waving of the second one.
04/21/2023 04:49:32 PM · #40
Originally posted by MargaretNet:

Originally posted by Art Roflmao:

Interesting analysis: https://youtu.be/gQfQiXP9yZA

Thanks, I enjoyed it. I liked the reporting aspect of this video over the shouting and finger-waving of the second one.


yeah, it sure *sounded* more accurate and objective
04/22/2023 12:55:38 AM · #41
Originally posted by posthumous:

I learned a couple things, thanks. This is my favorite analysis:
https://www.youtube.com/watch?v=ro130m-f_yk

That guy is just insufferable to watch/listen to and he just sounds like the guy on the horse that ranted about how those newfangled automobiles ain't all that. More importantly, he is just plain wrong about so many things. It would take me an hour to pick apart his flawed conclusions and his barely surface level understanding of AI.

I'm not a zealot for AI, but I see lots of positives in the very near future and they will undoubtedly outweigh the occasional glitches, but that won't stop detractors from flailing their arms in emotional hysterics, claiming the path is too dangerous to pursue - as if that was even an option. As far as the ethics relating to the art that AI produces and how it does it, I'm ambivalent for now - I understand the arguments, but I also understand the counterarguments. I have no real skin in the game and even if I did, I also understand the inevitability that some, if not all, of my skills will eventually become obsolete.
04/22/2023 08:23:43 AM · #42
Originally posted by posthumous:

Originally posted by MargaretNet:

Originally posted by Art Roflmao:

Interesting analysis: https://youtu.be/gQfQiXP9yZA

Thanks, I enjoyed it. I liked the reporting aspect of this video over the shouting and finger-waving of the second one.


yeah, it sure *sounded* more accurate and objective

To be honest I don't know; I stopped watching after about a minute (to be clear - the shouting one, I could not stand the guy).

Message edited by author 2023-04-22 08:34:38.
04/22/2023 08:33:03 AM · #43
I've been thinking about copyright on AI image generation. If the requirement is direct human involvement, then maybe the series of prompts used in the specific software should be what gets copyrighted, because that's where the human creativity comes in. Since I haven't used it yet, I don't know whether the series of prompts used to generate the image that won the Sony award would always produce the same or a very similar image, or whether there could be significant variation. What is your experience, Georges?
04/22/2023 10:29:30 AM · #44
Originally posted by Art Roflmao:


That guy is just insufferable to watch/listen to and he just sounds like the guy on the horse that ranted about how those newfangled automobiles ain't all that.


yeah, his schtick is "Adam ruins everything" ... not for everyone!

Originally posted by Art Roflmao:

More importantly, he is just plain wrong about so many things. It would take me an hour to pick apart his flawed conclusions and his barely surface level understanding of AI.


I'm curious about what you'd pick apart. I have my own understanding of AI. Also, I feel the same way about ColdFusion's video. Flawed and surface level. Just because it's boring and flat doesn't make it smarter.

Originally posted by Art Roflmao:

I'm not a zealot for AI, but I see lots of positives in the very near future and they will undoubtedly outweigh the occasional glitches, but that won't stop detractors from flailing their arms in emotional hysterics, claiming the path is too dangerous to pursue - as if that was even an option.


ay, there's the rub. There is no option, as you say. It's coming, whatever it is. His main point, in my opinion, is that it is being given tasks it simply can't do, at least not yet, like drive a car or answer your questions accurately. AI still has the same problem it has always had. It doesn't understand concepts or context. It literally doesn't know wrong from right. As a materialist, I fully believe it's possible to create AI that is smarter than we are, but the current way it's being programmed is not even close. People (admittedly, dumb people) are going to ask Bing life or death questions, and they are going to choose death. There will be a body count. Acceptable collateral damage? Maybe for Microsoft or Google.

Originally posted by Art Roflmao:

As far as the ethics relating to the art that AI produces and how it does it, I'm ambivalent for now - I understand the arguments, but I also understand the counterarguments. I have no real skin in the game and even if I did, I also understand the inevitability that some, if not all, of my skills will eventually become obsolete.

I feel similarly. I think copyright is weird to begin with. Humanity has progressed because one person's ideas are used by the next person. We have seriously slowed down progress because of copyright and patents. Have you ever heard what John Lee Hooker did with In The Mood? He would surely be sued if he did that today. It's not just AI, it's the age of easy copying that we've been in for 40 years. It's a tough question how to protect intellectual property. Most "protection" these days is done by big corporations with money for lawyers.
04/22/2023 11:43:00 AM · #45
Originally posted by posthumous:

His main point, in my opinion, is that it is being given tasks it simply can't do, at least not yet, like drive a car or answer your questions accurately. AI still has the same problem it has always had. It doesn't understand concepts or context.

Driving a car should be easy. The rules are rigid.
Yet when Tesla tried out their AI driving in Toronto, the streetcar tracks running down the middle of the road FUBAR'ed it. That won't be the first or only instance. What happens when the car is traveling faster than 40 km/h?

Will it get better? Yes.
But it won't be able to handle new or unusual circumstances as well as a human could.
04/22/2023 12:04:53 PM · #46
Originally posted by Venser:

Originally posted by posthumous:

His main point, in my opinion, is that it is being given tasks it simply can't do, at least not yet, like drive a car or answer your questions accurately. AI still has the same problem it has always had. It doesn't understand concepts or context.

Driving a car should be easy. The rules are rigid.
Yet when Tesla tried out their AI driving in Toronto, the streetcar tracks running down the middle of the road FUBAR'ed it. That won't be the first or only instance. What happens when the car is traveling faster than 40 km/h?

Will it get better? Yes.
But it won't be able to handle new or unusual circumstances as well as a human could.

All your examples talk about AI in general - can we get back to AI in art? The issues with driving a car come down to common sense, which is the most difficult skill for a computer to master. How much does common sense matter in art?
04/22/2023 12:48:54 PM · #47
Originally posted by MargaretNet:

I've been thinking about copyright on AI image generation. If the requirement is direct human involvement, then maybe the series of prompts used in the specific software should be what gets copyrighted, because that's where the human creativity comes in. Since I haven't used it yet, I don't know whether the series of prompts used to generate the image that won the Sony award would always produce the same or a very similar image, or whether there could be significant variation. What is your experience, Georges?


The same prompts will generate different images. Once you get something that is closest to what you want, you can regenerate from that one image and get different variations (depending on the settings you choose - how similar or different you want it).
Very often I slightly change the prompts and get totally different results. AI can easily win any photo contest, but in my opinion AI does not belong in that category. I join AI contests on Discord. I can use my own pictures to generate something from them, and set a weight for how much is taken from MY photo. It will not copy any pixels from the photo, however; it takes information from that image.
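
Midjourney doesn't expose its internals, but the same two ideas - a random seed that makes a result repeatable, and a strength/weight setting for how much of your own photo is kept - are visible in the open-source Stable Diffusion tools. A minimal img2img sketch with the Hugging Face diffusers library (only an analogy for what I described above, not how Midjourney itself works; the model name and settings are just examples):

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load an open Stable Diffusion checkpoint (needs a GPU and the downloaded weights).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

my_photo = Image.open("my_photo.jpg").convert("RGB").resize((768, 512))

# A fixed seed means the same prompt + settings reproduce the same image.
generator = torch.Generator("cuda").manual_seed(42)

result = pipe(
    prompt="misty mountain lake at sunrise, 35mm film look",
    image=my_photo,        # start from MY photo
    strength=0.6,          # 0 = keep my photo, 1 = ignore it; the "weight" I mentioned
    guidance_scale=7.5,    # how strongly to follow the prompt
    generator=generator,
).images[0]
result.save("variation.png")

Change the seed (or leave it random) and the same prompt gives you a different image - which is why the prompts alone don't pin down one specific picture.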
04/22/2023 01:01:32 PM · #48
Originally posted by GeorgesBogaert:

Originally posted by MargaretNet:

I've been thinking about copyright on AI image generation. If the requirement is direct human involvement, then maybe the series of prompts used in the specific software should be what gets copyrighted, because that's where the human creativity comes in. Since I haven't used it yet, I don't know whether the series of prompts used to generate the image that won the Sony award would always produce the same or a very similar image, or whether there could be significant variation. What is your experience, Georges?


The same prompts will generate different images. Once you get something that is closest to what you want, you can regenerate from that one image and get different variations (depending on the settings you choose - how similar or different you want it).
Very often I slightly change the prompts and get totally different results. AI can easily win any photo contest, but in my opinion AI does not belong in that category. I join AI contests on Discord. I can use my own pictures to generate something from them, and set a weight for how much is taken from MY photo. It will not copy any pixels from the photo, however; it takes information from that image.

OK, so one can't really copyright AI prompts in Midjourney as it is now.

As I understand it, you start with your own image and then see where Midjourney will take you?

Message edited by author 2023-04-22 13:03:38.
04/22/2023 02:59:37 PM · #49
Originally posted by MargaretNet:

Originally posted by GeorgesBogaert:

Originally posted by MargaretNet:

I've been thinking about copyright on AI image generation. If the requirement is direct human involvement, then maybe the series of prompts used in the specific software should be what gets copyrighted, because that's where the human creativity comes in. Since I haven't used it yet, I don't know whether the series of prompts used to generate the image that won the Sony award would always produce the same or a very similar image, or whether there could be significant variation. What is your experience, Georges?


The same prompts will generate different images. Once you get something that is closest to what you want, you can regenerate from that one image and get different variations (depending on the settings you choose - how similar or different you want it).
Very often I slightly change the prompts and get totally different results. AI can easily win any photo contest, but in my opinion AI does not belong in that category. I join AI contests on Discord. I can use my own pictures to generate something from them, and set a weight for how much is taken from MY photo. It will not copy any pixels from the photo, however; it takes information from that image.

OK, so one can't really copyright AI prompts in Midjourney as it is now.

As I understand it, you start with your own image and then see where Midjourney will take you?


Midjourney serves as a tool to visualize your thoughts and ideas, but it requires specific instructions to generate unique results. You do not necessarily have to begin with your own image, as you can combine multiple images and provide prompts to guide its output. For example, you can request Midjourney to create an image with specific camera settings such as aperture or lighting techniques.

Because I see it as an important skill for the future job market, I have been encouraging my daughter, who is studying art, to learn how to use Midjourney. With the rise of AI-related professions, it is evident that many traditional jobs will become obsolete in the coming years.
04/22/2023 06:06:25 PM · #50
Originally posted by posthumous:

Originally posted by Art Roflmao:

More importantly, he is just plain wrong about so many things. It would take me an hour to pick apart his flawed conclusions and his barely surface level understanding of AI.

I'm curious about what you'd pick apart. I have my own understanding of AI. Also, I feel the same way about ColdFusion's video. Flawed and surface level. Just because it's boring and flat doesn't make it smarter.

I've been watching ColdFusion for years and he's always had excellent analysis and insights on everything including (off the top of my head) crypto, social media, stem cell engineering, robotics, business scandals, military tech, digital photography, and a huge range of related topics. He's been covering AI for probably a decade. I don't find him boring, nor particularly exciting - but that's part of what I like about him - he presents relatively objectively.

Adam's comments:
"AI simply predicts the next word..." - shallow, simplistic understanding of the large language model AI.
"10 people died from autonomous driving attempts" (paraphrasing) - in that timespan, tens of thousands died from human driver mistakes or irrational behavior in a car. I constantly cringe at the idea that progress has to be 100% risk free when the alternative (status quo) is far more injurious and detrimental.
I could go through it again and find more, but suffice it to say that I'm not impressed with him.
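
For reference, here is roughly what "predicts the next word" boils down to in practice - a minimal sampling loop with the open-source Hugging Face transformers library and GPT-2 (just a stand-in for illustration; it's obviously not what ChatGPT runs on):

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer.encode("The best camera is the one", return_tensors="pt")

for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits                     # a score for every token in the vocabulary
    probs = torch.softmax(logits[0, -1, :] / 0.8, dim=-1)    # temperature 0.8
    next_id = torch.multinomial(probs, num_samples=1)        # sample one token
    input_ids = torch.cat([input_ids, next_id.unsqueeze(0)], dim=1)

print(tokenizer.decode(input_ids[0]))

The "prediction" is a probability distribution over the entire vocabulary, learned from a huge amount of text and conditioned on everything that came before - reducing that to "it just guesses the next word" is exactly where his analysis stays at the surface.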

Originally posted by posthumous:

Originally posted by Art Roflmao:

I'm not a zealot for AI, but I see lots of positives in the very near future and they will undoubtedly outweigh the occasional glitches, but that won't stop detractors from flailing their arms in emotional hysterics, claiming the path is too dangerous to pursue - as if that was even an option.

ay, there's the rub. There is no option, as you say. It's coming, whatever it is. His main point, in my opinion, is that it is being given tasks it simply can't do, at least not yet, like drive a car or answer your questions accurately. AI still has the same problem it has always had. It doesn't understand concepts or context. It literally doesn't know wrong from right. As a materialist, I fully believe it's possible to create AI that is smarter than we are, but the current way it's being programmed is not even close. People (admittedly, dumb people) are going to ask Bing life or death questions, and they are going to choose death. There will be a body count. Acceptable collateral damage? Maybe for Microsoft or Google.

I disagree. It CAN drive a car, just not with an acceptable level of collateral damage - though possibly with far less damage than the sum of human-driver risks. The challenge is going to be in the transition, where we have both AI and human drivers on the road. Once it is all automated, there would be far fewer deaths and injuries. ...That said, we would still complain and write to Congress to abolish the tech because [insert emotional pleas here]. Also, having said all that, I am NOT a fan of losing the freedom to drive, so I am hoping this does not happen, but I can see the writing on the wall.

To say that AI doesn't understand concepts or context isn't true either. It can learn context. You can train it, and it can learn from millions of sources very quickly. There definitely are valid concerns about its source material and its "biases", but I don't know anyone who takes everything ChatGPT says as gospel truth, and if they do, it is probably in no less measure than people who use "just Google it" to reference their "trusted source" of factual information. "Dumb" people already make life-or-death decisions based on search results. And regarding that body count: for every death by blind trust of AI, there may be 10 lives saved by getting answers from an AI doctor. There are AI models being trained specifically on biology and medicine, and the ColdFusion video mentioned an example where one made a correct diagnosis of a rare condition that previously only the top doctor in that field was able to make (or something along those lines). I'll be happy to see doctors and lawyers replaced by AI, at least for those who cannot afford them in our current system.

For things that have no ethical or political basis - like coding, for example - I can tell you that ChatGPT has already saved me a TON of time. I used to have to Google or go to StackOverflow to get answers, then sift through them, test them, troubleshoot, and adapt them. I've had ChatGPT write PHP functions and Javascript code, and also explain the code and how to properly implement it, in less than a minute. Not always perfect, but I can go back and, without having to re-explain what I want, just say what the error was and it corrects the code - again, in seconds. I've gotten recipes, ideas to entertain my grandson, solutions to household repairs, and much more since I dove into it a couple of months ago. So, OK, I guess I am becoming a zealot. LOL At least for ChatGPT. The AI art thing is another story.
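
That error-report loop is even scriptable. A rough sketch with the openai Python package, using the ChatCompletion interface as it exists at the time of writing (the prompt and the error text are just made-up placeholders):

import openai

openai.api_key = "sk-..."  # your own API key

# First request: ask for some code.
messages = [{"role": "user",
             "content": "Write a PHP function that turns a post title into a URL slug."}]
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
code = reply.choices[0].message.content
print(code)

# If the code errors out, keep the conversation going and just report the error -
# the full message history is sent with each call, so the original request never
# needs to be re-explained.
messages.append({"role": "assistant", "content": code})
messages.append({"role": "user",
                 "content": "That throws 'Call to undefined function transliterator_transliterate()'. Please fix it."})
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(reply.choices[0].message.content)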

I've played with Midjourney a lot and find it very frustrating when you have a vision of what you want and you are trying to steer it to produce that vision using prompts that sometimes produce wildly varying results from the same wording. I've watched dozens of videos on prompting techniques that made me think producing what you want is as tedious as coding. People have had to figure out methods through trial & error, and then suddenly an update to the AI changes everything they learned. It definitely produces amazing images, but I can agree that they are ultimately derivative works - albeit hard to tell who/what they may be derived from, and no more "copying" than artists copying the style of other artists. I have also used NightCafe, which implements various models like DALL-E 2, Stable Diffusion, and others, in addition to selectable styles, but it also struggles to produce predictable results - at least with my lack of patience for proper prompting.

Originally posted by posthumous:

Originally posted by Art Roflmao:

As far as the ethics relating to the art that AI produces and how it does it, I'm ambivalent for now - I understand the arguments, but I also understand the counterarguments. I have no real skin in the game and even if I did, I also understand the inevitability that some, if not all, of my skills will eventually become obsolete.

I feel similarly. I think copyright is weird to begin with. Humanity has progressed because one person's ideas are used by the next person. We have seriously slowed down progress because of copyright and patents. Have you ever heard what John Lee Hooker did with In The Mood? He would surely be sued if he did that today. It's not just AI, it's the age of easy copying that we've been in for 40 years. It's a tough question how to protect intellectual property. Most "protection" these days is done by big corporations with money for lawyers.

100% agree with that.

Interestingly, AI is cognizant of many of these issues we are discussing...
Originally posted by Ken to ChatGPT:

Can you explain how AI will benefit humanity by replacing human functions over time?

Originally posted by ChatGPT:

AI has the potential to benefit humanity in a number of ways by replacing some human functions over time. Here are some examples:

Increased efficiency: AI can perform repetitive and mundane tasks quickly and accurately, without getting tired or bored. This can free up human workers to focus on more creative and strategic work, leading to increased productivity and efficiency.

Improved safety: AI can perform dangerous or hazardous tasks, such as working in hazardous environments or performing surgery, with greater precision and safety than humans. This can reduce the risk of injury or death for human workers and patients.

Enhanced decision-making: AI can analyze large amounts of data and provide insights that would be difficult for humans to uncover on their own. This can lead to better-informed decision-making and more effective problem-solving.

Cost savings: AI can perform many functions more efficiently and at a lower cost than humans, leading to cost savings for businesses and individuals.

However, it's important to note that AI is not a panacea, and there are also potential risks and challenges associated with its adoption. For example, AI may exacerbate inequality if it leads to job displacement or if its benefits are not distributed fairly. Additionally, there are concerns about the ethical implications of AI, such as bias, privacy, and accountability. As such, it's important to approach the adoption of AI thoughtfully and carefully, with an eye towards maximizing its benefits while minimizing its risks.