Valve quietly not publishing games that contain AI generated content if the submitters can’t prove they own the rights to the assets the AI was trained on
Who is the “unfettered capitalist” in this case? The artist whose artwork was used as training data without permission? Valve? It’s a nice soundbite, but I’m not sure how you are applying it in this case.
Are you suggesting that an artist retains the right to prevent their art from being used to train someone on art? No artist has ever created anything in a vacuum. This whole line of reasoning is ridiculous, imo.
No, that’s fine - just as we understand it.
Your stance against unfettered capitalism is that - if I make some art and someone puts it online, some multibillion dollar games house should be able to grab it and use it in their game for free.
I can feel the capitalists quaking in their boots already. I’m sure the Reddit admins agree with you.
Is that what you think we’re talking about, directly copying artwork? There’s already laws for that, regardless of who or what creates the art. What is concerning people is that AI can be trained on other people’s art and then told to create new art. It’s not a copy, it’s a new thing, but it used old stuff to come up with the new stuff. (humans do this too)
I don’t even know what this means.
Edit: I don’t know if this needs to be said but I am not the original person you replied to.
What I was trying to talk about is what the commenter meant by “unfettered capitalism has not, and will not work except for those already at the top.” - it wasn’t clear how it related to this story - but we seem to have gone off at tangent.
Yes. It is entirely dependent on the old stuff. We have laws for that too, in terms of licences for derivative works.
My guess is that they saw the phrase “let the market decide” and took that to mean “unfettered capitalism”. But yeah, sorry about the tangent I’ve dragged you into, haha.
but they’re not derivative works, at least not in how I understand the term. They’re entirely new works.
Not someone, an AI. It takes years to train a person, but much less to train an AI, and if that content is sold it’s more akin to someone selling tracings of someone else’s work.
This isn’t “being influenced” by someone else’s work here, it’s directly used to generate new content.
I am old enough to remember when “X, but on the internet” was considered a new and novel thing-- turns out that it isn’t. “X, but with AI” is no different than X. Training a person and training an AI do not need different laws.
Most people, and so what? You think an artist gets different rights depending on how fast someone can learn their style?
Only if it’s an exact copy, which would already be covered by current laws. This would be more like when people create art in the style of other art. Like, for a made up example, if someone drew the stranger things characters in the style of the Simpsons.
What does this even mean?
Edit: Sorry about all those typos!
Sounds like you might not know enough about how AI generation actually works to have this conversation, especially if your response to the nuances around the difference between human generated and AI generated content is just “so what?”
I can’t help but notice you didn’t answer the question. My question was more like “How is this different than when a human learns to make art?” A human’s influences are directly used to generate new content too, are they not?
No, AI has no creativity. Everything generated by AI is a probabilistic interpretation of inputs and training data. It’s purely mathematics, there’s no emotion or actual thought put into it. But frankly, whether or not AI generated content is “new content” is a philosophical debate that doesn’t matter while AI has the potential to displace more jobs and create more wealth inequality than ever before, and I don’t necessarily mean in the “robots took my job” sense. Generative AI will push productivity to all-time highs by an order of magnitude while wages will not increase at the same rate, enabling a faster rate of wealth transfer to corporations and the top percentage of shareholders.
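The “probabilistic interpretation of training data” point can be illustrated with a deliberately tiny toy: a character-level Markov chain. This is nowhere near how modern generative models actually work (those learn continuous representations rather than storing raw continuations), but it makes the core idea concrete: every character the generator emits is literally sampled from what followed that same context somewhere in the training text. The corpus, function names, and parameters below are all made up for the example.

```python
import random
from collections import defaultdict

def train(corpus, order=2):
    """For each `order`-character context, record every character
    that followed it in the training text."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        context = corpus[i:i + order]
        model[context].append(corpus[i + order])
    return model

def generate(model, seed, length=20):
    """Extend the seed one character at a time by sampling from the
    recorded continuations of the current context."""
    order = len(seed)
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break  # context never seen in training: the model is stuck
        out += random.choice(choices)
    return out

corpus = "the cat sat on the mat and the cat ran"
model = train(corpus, order=2)
print(generate(model, "th"))
```

Note what this toy cannot do: given a context absent from its training data, it produces nothing at all, which is the mechanical version of the “sunflower with no training data” point raised later in the thread.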
I didn’t answer your question because it was vague and shows a lack of understanding of both how AI generates content and the future problems AI presents as long as it’s controlled by the wealthy and corporations.
It clearly does matter if valve is rejecting games because their art was generated by an AI.
You think generative AI will be more advantageous to big corporations, versus smaller operations? How does that track?
You have no idea what my skillset is, and I am passingly familiar with the concepts of machine learning. But my question, as I already noted, was more like “why do you think this phrase doesn’t also apply to humans?”. Which I already clarified, and you still haven’t answered.
If a person is in the art/media-for-hire business, they’re going to be in a rough spot in the very near future because a computer program will likely replace them. Just like self-driving cars-- the technology doesn’t have to be perfect, it just has to be better than humans. For cars, we’re a little ways away from that; for art, that time is arguably right now.
Yeah, so if you’ve actually read my comments instead of skimming for bits you can pick out and pedantically question you’d know that my point is that this is the problem. It’s not like these people can say “oh my life as an artist is over, I’ll just walk down the street and get another job that pays a living wage”. Without accessible alternative wealth sources, this has the potential to severely displace skilled individuals, and not just artists.
If you don’t see that as a problem then this conversation isn’t worth having and your views are unimportant.
You noticed that, did you? If I ask a small child to draw a picture of a sunflower - and they have never seen a picture of a sunflower, but they are sitting in a field of sunflowers - is it your contention that they would be unable, because they’ve never seen a picture?
Because I think the small child will manage it. And the AI with no training data won’t.
But yes, to answer your broader question, I think it is reasonable to have legislation around automated or large-scale processes that go beyond anything an individual can do. Which is why there is regulation around robocalling, sending spam, and photocopying and selling books.
I am not sure why you’re starting another thread with me, but I don’t think the distinction you’re making between a live view of a flower and a picture of a flower makes sense.
I don’t want to get too bogged down in the details of your analogy. (It’s really bad.) But in either case, you have to explain what a flower is when you request a picture of a flower. If you ask a child who doesn’t speak English to draw you a picture of a “sunflower”, they won’t be able to do so even if they’re sitting in a field of sunflowers.
You make a good point regarding the legislation of the output of an automated process, but we were talking about the input: whether the AI needs to be trained only on works used with permission. That is certainly not how the law works now, and I argue that it makes no sense to implement such a law.