Oh boy, more slop
You Should Know
YSK - for all the things that can make your life easier!
The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:
Rules
Rule 1- All posts must begin with YSK.
All posts must begin with YSK. If you're a Mastodon user, then include YSK after @youshouldknow. This is a community to share tips and tricks that will help you improve your life.
Rule 2- Your post body text must include the reason "Why" YSK:
**In your post's text body, you must include the reason "Why" YSK: it's helpful for readability, and informs readers about the importance of the content.**
Rule 3- Do not seek mental, medical, or professional help here.
Do not seek mental, medical, or professional help here. Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.
Rule 4- No self promotion or upvote-farming of any kind.
That's it.
Rule 5- No baiting or sealioning or promoting an agenda.
Posts and comments which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.
Rule 6- Regarding non-YSK posts.
Provided it is about the community itself, you may post non-YSK posts using the [META] tag on your post title.
Rule 7- You can't harass or disturb other members.
If you harass or discriminate against any individual member, you will be removed.
If you are a member, sympathizer, or supporter of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you were provably vocal about your hate, then you will be banned on sight.
For further explanation, clarification and feedback about this rule, you may follow this link.
Rule 8- All comments should try to stay relevant to their parent content.
Rule 9- Reposts from other platforms are not allowed.
Let everyone have their own content.
Rule 10- The majority of bots aren't allowed to participate here.
Unless included in our Whitelist for Bots, your bot will not be allowed to participate in this community. To have your bot whitelisted, please contact the moderators for a short review.
Rule 11- Posts must actually be true: Disinformation, trolling, and being misleading will not be tolerated. Repeated or egregious attempts will earn you a ban. This also applies to filing reports: if you continually file false reports, YOU WILL BE BANNED! We can see who reports what, and shenanigans will not be tolerated.
If you file a report, include what specific rule is being violated and how.
Partnered Communities:
You can view our partnered communities list by following this link. To partner with our community and be included, you are free to message the moderators or comment on a pinned post.
Community Moderation
For inquiries about becoming a moderator of this community, you may comment on the current pinned post, or simply shoot a message to the current moderators.
Credits
Our icon (masterpiece) was made by @clen15!
No, that's a skunk.
What a fucking waste of resources
Good to know, I'll be sure to block this bot on all of my Fediverse accounts. It's sad to see more GenAI nonsense.
@aihorde@lemmy.dbzer0.com draw for me the db0 admin as a power hungry reddit mod
I put this in the post body, but it was further in, and I think some people may not have read that far: this community, !YouShouldKnow@lemmy.world, says it bans most bots, so I don't think that bot will operate here. I linked to a test post on another community, !test@sh.itjust.works, where I know it works because I just tested it there, in case anyone wants to give it a try.
https://lemmyverse.link/sh.itjust.works/post/40008132
or
https://sh.itjust.works/post/40008132, if your client can't understand the above.
Thanks
Dw you don't gotta tell me, I blocked that shitbot months ago.
What is threadiverse? Lemmy, piefed, mbin?
Yeah, just one word to refer to all the compatible Fediverse Reddit-alikes.
Sublinks will be in there too, if they get the ball rolling.
Humbly requesting an explanation for the polarized takes on this. How are we so specifically split 50/50, rejoicing in the utility and cursing the 'slop' at the same time? Someone sway me to a side? I'm addicted to reserving judgement.
on the one hand, this is an ai horde-based bot. the ai horde is just a bunch of users who are letting you run models on their personal machines, which means this is not "big ai" and doesn't use up massive amounts of resources. it's basically the "best" way of running stable diffusion at small to medium scale.
on the other, this is still using "mainstream" models like flux, which has been trained on copyrighted works without consent and used shitloads of energy to train. unfortunately models trained on only freely available data just can't compete.
lemmy is majority anti-ai, but db0 is a big pro-local-ai hub. i don't think they're pro-big-ai. so what we're getting here is a clash between people who feel like any use of ai is immoral due to the inherent infringement and the energy cost, and people who feel like copyright is a broken system anyway and are trying to tackle the energy thing themselves.
it's a pretty thorny issue with both sides making valid points, and depending on your background you may very well hold all the viewpoints of both sides at the same time.
Both sides having valid points is almost always the case with issues of any complexity. I'm very curious why there isn't a sweeping trump card that ultimately deems one side significantly more ethical than the other.
Great analysis tho--very thankful for the excellent breakdown, unless you used AI to do it, or that AI ultimately isn't justifying the means adequately. No, actually, I'm thankful regardless, but I'm still internally conflicted by the unknown.
no matter your stance on the morality of language models, it's just plain rude to use a machine to generate text meant for people. i would never do that. if i didn't take the time to write it, why would you take the time to read it?
I think there may be two exceptions to that rule.
- Accessibility. People who may have issues writing long, coherent text due to the need to use some different input method (think of tetraplegic people, for instance). LLM-generated text could be of great aid there.
- Translation. I do hate forced translation. But it's true that for some people it may be needed. And I think LLM translation models have already surpassed other forms of automatic software translation.
There are always exceptions/outliers to any rule; it's basically playing devil's advocate to bring them up. I never care for it. It's like a conversation about someone murdering someone for funsies, and saying "but there are cases where people should be murdered, like the Joker from Batman."
It just doesn't apply to generative AI. What do they need, images explaining the text? lol
But these are neither problems of the technology, nor of it being hosted. It's an issue of the person using it, the situation, and the person receiving it, as well as all their values.
Not sure why people are directing their hate against the tools instead of the actual politics, and the governments not taking the current problems and potential future ones seriously. Technology and progress are never the problem.
the problem with entirely separating the two is that progress and technology can be made with an ideology in mind.
the current wave of language model development is spearheaded by what basically amounts to a cult of tech-priests, going all-in on reaching AGI as fast as possible because they're fully bought into roko's basilisk. if your product, built to collect and present information in context, is created by people who want that information to cater to their world view, do you really think that the result is going to be an unbiased view of the world? sure, the blueprint for how to make an llm or diffusion model is (probably) unbiased, but when you combine it with data?
as an example, did you know that all the big diffusion models (stable, flux, illustrious etc) use the same version of CLIP, the part responsible for mapping text to features? and that the CLIP part is tailored for and trained on medical information? how might that affect the output? sure you can train your own CLIP, but will you? will anyone?
I see what you mean, but is there any evidence that the models are biased in a way that affirms the world view of the owners? If I understood you correctly? I couldn't find any.
I'm as sceptical of the capitalist fuckwits as you seem to be, but their power seems to me to be more political/capitalist through the idea of AGI, than through the models themselves. Something that simple sequestration could solve. But that's on the government and the voters.
I'm not sure about the point you are trying to make with CLIP. It's not a topic I'm familiar with, but also seems to be more a problem of the usage and people that the technology itself. Naïve usage by people who want to follow trends/the cheapest option/just something that works in any capacity.
For me, the issue lies first in the overhyped marketing, which is par for the course for basically anything, unfortunately, as well as the fact that suddenly copyright infringement is fine if you make enough money off of it and lick powerful boots. If it was completely open for everyone, it would be a completely different story IMO.
Also, I do not think that the models were created with the goal of pushing a certain narrative. They lucked into it being popular, completely unexpectedly, and only then the vultures started seeing the opportunity. So we will see how it evolves in that regard, but I don't think this is what we're seeing currently.
sorry, i had to think for a while about this one.
I see what you mean, but is there any evidence that the models are biased in a way that affirms the world view of the owners? If I understood you correctly? I couldn’t find any.
so, this is an interesting point. we know they are biased because we've done fairness reviews, and we know that that bias is in line with the bias of silicon valley as a whole. whether that means the bias is a) intentional, b) coincidentally aligned or c) completely random is impossible to tell. and, frankly, not the interesting part. we know there is bias, and we know it aligns.
whether or not the e/acc people at openai actually share the worldview they espouse is also impossible to tell. it could also be just marketing.
I’m as sceptical of the capitalist fuckwits as you seem to be, but their power seems to me to be more political/capitalist through the idea of AGI, than through the models themselves.
as long as the product is sold as it is today, i believe it reinforces that power.
Something that simple sequestration could solve. But that’s on the government and the voters.
ESL moment... i don't really understand what you mean by sequestration here. like, limit who is allowed to use it? i feel like that power lies with the individual user, even though regulation definitely can help.
For me, the issue lies first in the overhyped marketing, which is par for the course for basically anything, unfortunately, as well as the fact that suddenly copyright infringement is fine if you make enough money off of it and lick powerful boots. If it was completely open for everyone, it would be a completely different story IMO.
agreed, which is why i as an "abolish copyright law" person am so annoyed to find myself siding with the industry in the cases ongoing against the ai companies. then again, we have "open weight" models that can still be used for the same thing, because the main problem was never copyright itself but the system it exists within.
Also, I do not think that the models were created with the goal of pushing a certain narrative. They lucked into it being popular, completely unexpectedly, and only then the vultures started seeing the opportunity. So we will see how it evolves in that regard, but I don’t think this is what we’re seeing currently.
the purpose of a system is what it does. some people with a certain ideology made a thing capable of "expressing itself", and by virtue of the thing being made by those people it expresses itself in a similar way. whether it is intentional or not doesn't really factor into it, because as long as the people selling it do not see a problem with it it will continue to express itself in that fashion. this connects back to my first point; we know the models have built in bias, and whether the bias was put there deliberately or is in there as a consequence of ingesting biased data (for which the same rule holds) doesn't matter. it's bias all the way down, and not intentionally working against that bias means the status quo will be reinforced.
Awesome! Cheers db0
I didn't know about this, and I tested it on mastodon and it's awesome! Thanks db0!
with allies like these who even needs Elon Muskies? Grok-ifying the fediverse is sure to lead to a tech utopia /s
This isn't useful for me specifically, but thanks for sharing, as I'm sure many people will find it useful!
This is gold and I’m so thankful for this knowledge.
I wish I could limit the response to just one picture so I could use this as a meme generator in response to user comments.
The reason many AI image generators do multiple images is as a simple way to trade compute cycles for quality. The idea is that you generate a couple and pick the best, using your human knowledge of what you intend.
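The trade-off described above (spend n generations' worth of compute, let a judgment step keep the best candidate) can be sketched in a few lines of Python. Note that `generate_image` and `score` here are hypothetical placeholders, not part of any real AI Horde API; in practice the backend is a diffusion model and the "scorer" is usually just the human picking their favorite.

```python
import random

def generate_image(prompt: str, seed: int) -> str:
    # Placeholder: a real backend would return image data for this
    # prompt/seed pair instead of a descriptive string.
    return f"image(prompt={prompt!r}, seed={seed})"

def score(image: str) -> float:
    # Placeholder quality metric; seeding from the input keeps the
    # example deterministic and reproducible.
    random.seed(image)
    return random.random()

def best_of_n(prompt: str, n: int = 4) -> str:
    """Generate n candidates with different seeds, keep the top one."""
    candidates = [generate_image(prompt, seed) for seed in range(n)]
    return max(candidates, key=score)

print(best_of_n("a woodcut engraving of a skunk"))
```

Each extra candidate costs one more full generation, which is why bots often default to a small n like four.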
You could generate it in one place, copy the URL of the best image, and embed it in your response. That's what I did when I pasted the links to the skunk engraving images in my post; the images were generated elsewhere. I just pasted all four rather than only one, to show what the response looks like.
The syntax for an inline image on the Threadiverse's Markdown variant is:
`![alt text](image URL)`
I assume that either that syntax or a similar one will work on Mastodon, but I don't know Mastodon's syntax, as I don't use it.
I could arrange that somehow as being able to reply with memegens was part of the original idea. But you might still end up with a bad 1shot.
I actually think the bad one shot is kind of part of the joke.
For instance: Someone posts a petty revenge story about getting back at their ex boyfriend for cleaning out their bank account.
I comment on that post with my request to aihorde along the lines of “a drag queen lifting a champagne glass as a toast to how delightfully petty someone is being”.
AIHorde will then generate the image.
Regardless of how good or bad the gen is, my original intent will come across, because the OP can still see that my AIHorde prompt was intended to compliment OP. The bonus is that if AIHorde comes up with an awesome output, it will be hilarious. If the output is terrible, also hilarious. If the output is so-so, the original intent of a compliment was still delivered.
At least, that’s my thinking.
As an example: Slack had this kind of functionality for sending GIFs a while back. If you had the Giphy integration, you'd just type "/gif {topic}" and the integration would select a random GIF returned from searching your topic. This GIF would be posted in the chat without you having a chance to review it first. Sometimes the GIF returned was an irrelevant result, but everyone brushed it off because they knew how random the integration could be. Other times, it returned the perfect GIF, and the potential randomness made a good result even more satisfying.
This is super cool. Thanks for sharing!
@aihorde@lemmy.dbzer0.com draw for me Linus Torvalds playing with a steam deck.