hedgehog

joined 2 years ago
[–] hedgehog@ttrpg.network 1 points 5 days ago

You don't have to finish the file to share it though, that's a major part of bittorrent. Each peer shares parts of the files that they've partially downloaded already. So Meta didn't need to finish and share the whole file to have technically shared some parts of copyrighted works. Unless they just had uploading completely disabled,

The argument was not that it didn’t matter if a user didn’t download the entirety of a work from Meta, but that it didn’t matter whether a user downloaded anything from Meta, regardless of whether Meta was a peer or seed at the time.

Theoretically, Meta could have disabled uploading but not blocked their client from signaling that they could upload. This would, according to that argument, still count as reproducing the works, under the logic that signaling availability is the same as “making it available.”

but they still "reproduced" those works by vectorizing them into an LLM. If Gemini can reproduce a copyrighted work "from memory" then that still counts.

That’s irrelevant to the plaintiff’s argument. And beyond that, it would need to be proven on its own merits. This argument about torrenting wouldn’t be relevant if LLAMA were obviously a derivative creation that wasn’t subject to fair use protections.

It’s also irrelevant if Gemini can reproduce a work, as Meta did not create Gemini.

Does any Llama model reproduce the entirety of The Bedwetter by Sarah Silverman if you provide the first paragraph? Does it even get the first chapter? I highly doubt it.

By the same logic, almost any computer on the internet is guilty of copyright infringement. Proxy servers, VPNs, basically any compute that routed those packets temporarily had (or still has for caches, logs, etc) copies of that protected data.

There have been lawsuits against both ISPs and VPNs in recent years for being complicit in copyright infringement, but that’s a bit different. Generally speaking, there are laws, like the DMCA, that specifically limit the liability of network providers and network services, so long as they respect things like takedown notices.

[–] hedgehog@ttrpg.network 14 points 3 weeks ago

Why should we know this?

Not watching that video, for a few reasons: ten seconds in they hadn’t said anything of substance; their first claim was incorrect (Amazon does not prohibit the use of gen AI in books, nor does it require that its use be disclosed to the public, no matter how much you might wish it did); and there was nothing of substance in the description, which in cases like this generally means the video will be largely devoid of substance, too.

What books is the Math Sorcerer selling? Are they the ones on Amazon linked from their page? Are they selling all of those or just promoting most of them?

Why do we think they were generated with AI?

When you say “generated with AI,” what do you mean?

  • Generated entirely with AI, without even editing? Then why do they have so many 5 star reviews?
  • Generated with AI and then heavily edited?
  • Written partly by hand with some pieces written by unedited GenAI?
  • Written partly by hand with some pieces written by edited GenAI?
  • AI was used for ideation?
  • AI was used during editing? E.g., Grammarly?
  • GenAI was used during editing? E.g., “ChatGPT, review this chapter and give me any feedback. If sections need rewriting, go ahead and take a first pass.”
  • AI might have been used, but we don’t know for sure, and the issue is that some passages just “read like AI?”

And what’s the result? Are the books misleading in some way? That’s the most legitimate actual concern I can think of (I’m sure the people screaming that AI isn’t fair use would disagree, but if that’s the concern, settle it in court).

[–] hedgehog@ttrpg.network 4 points 3 weeks ago (1 children)

Look up “LLM quantization.” The idea is that each parameter is a number; by default they use 16 bits of precision, but if you scale them down to smaller sizes, you use less space and have less precision, though you still have the same parameters. There’s not much quality loss going from 16 bits to 8, but it gets more noticeable as you go lower. (That said, there are ternary models being trained from scratch that use 1.58 bits per parameter and are allegedly just as good as fp16 models of the same parameter count.)

If you’re using a 4-bit quantization, you need roughly half the parameter count (in billions) as gigabytes of VRAM. Q4_K_M is better than Q4, but also a bit larger. Ollama generally defaults to Q4_K_M. If you can handle a higher quantization, Q6_K is generally best. If you can’t quite fit it, Q5_K_M is generally better than any other option, followed by Q5_K_S.

For example, Llama3.3 70B, which has 70.6 billion parameters, has the following sizes for some of its quantizations:

  • q4_K_M (the default): 43 GB
  • fp16: 141 GB
  • q8: 75 GB
  • q6_K: 58 GB
  • q5_K_M: 50 GB
  • q4: 40 GB
  • q3_K_M: 34 GB
  • q2_K: 26 GB

This is why I run a lot of Q4_K_M 70B models on two 3090s.

Generally speaking, there’s not a perceptible quality drop going from 8-bit quantization to Q6_K (though I have heard this is less true with MoE models). Below Q6 there’s a bit of a drop going to Q5 and again to Q4, but the model’s still decent. Below 4-bit quantizations, you can generally get better results from a smaller-parameter model at a higher quantization.
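The sizes in that list track a simple back-of-the-envelope formula: size ≈ parameters × bits-per-weight ÷ 8. A minimal sketch of the math (the bits-per-weight figures for the K-quants below are approximate averages, since K-quants mix precisions; real GGUF files also carry some metadata overhead, so treat these as ballpark figures):

```python
# Rough VRAM/disk estimate for a quantized model.
# size_in_bytes ≈ parameter_count × bits_per_weight / 8

def approx_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate model size in GB (decimal) for a given quantization."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

if __name__ == "__main__":
    # Approximate effective bits per weight; K-quant values are averages.
    quants = [("fp16", 16.0), ("q8_0", 8.5), ("q6_K", 6.56),
              ("q4_K_M", 4.83), ("q2_K", 2.63)]
    for name, bits in quants:
        print(f"{name:8s} ~{approx_size_gb(70.6, bits):6.1f} GB")
```

Running this for a 70.6B-parameter model lands close to the Llama 3.3 70B sizes listed above (e.g. ~141 GB for fp16, ~43 GB for q4_K_M).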

TheBloke on Huggingface has a lot of GGUF quantization repos, and most, if not all of them, have a blurb about the different quantization types and which are recommended. When Ollama.com doesn’t have a model I want, I’m generally able to find one there.

[–] hedgehog@ttrpg.network 6 points 3 weeks ago (1 children)

I recommend a used 3090, as that has 24 GB of VRAM and generally can be found for $800ish or less (at least when I last checked, in February). It’s much cheaper than a 4090 and while admittedly more expensive than the inexpensive 24GB Nvidia Tesla card (the P40?) it also has much better performance and CUDA support.

I have dual 3090s so my performance won’t translate directly to what a single GPU would get, but it’s pretty easy to find stats on 3090 performance.

[–] hedgehog@ttrpg.network 1 points 3 weeks ago

Not directly, but indirectly. You gerrymander the district-based positions, which allow you to pass legislation enabling you to suppress enough votes to win the statewide elections, too.

[–] hedgehog@ttrpg.network 0 points 4 weeks ago (2 children)

That sounds like something someone who's never heard of gerrymandering or voter suppression would say.

[–] hedgehog@ttrpg.network 3 points 4 weeks ago (1 children)

From https://www.yalemedicine.org/news/covid-vaccines-reduce-long-covid-risk-new-study-shows

At the pandemic’s onset, approximately 10% of people who suffered COVID-19 infections went on to develop Long COVID. Now, the risk of getting Long COVID has dropped to about 3.5% among vaccinated people (primary series).

...

Then, the team conducted analyses to uncover the reasons for the observed decline in Long COVID cases from the pre-Delta to Omicron eras. About 70% of the decline was attributable to vaccination, they found.

[–] hedgehog@ttrpg.network 3 points 1 month ago (1 children)

The above post says it has support for Ollama, so I don’t think this is the case… but the instructions in the Readme do make it seem like it’s dependent on OpenAI.

[–] hedgehog@ttrpg.network 3 points 1 month ago (2 children)

Are you saying that NAT isn’t effectively a firewall or that a NAT firewall isn’t effectively a firewall?

[–] hedgehog@ttrpg.network 3 points 1 month ago (3 children)

Is there a way to use symlinks instead? I’d think it would be possible, even with Docker - it would just require the torrent directory to be mounted read-only in the same location in every Docker container that had symlinks to files on it.
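A minimal sketch of the idea, using throwaway paths rather than anyone’s real layout. The key point is that the torrent directory gets mounted read-only at the same path in every container (e.g. `docker run -v /srv/torrents:/srv/torrents:ro …`), so a symlink created on the host resolves identically inside each container:

```shell
# Throwaway directories standing in for the torrent and media trees.
base="$(mktemp -d)"
mkdir -p "$base/torrents/Some.Movie" "$base/media/movies"
touch "$base/torrents/Some.Movie/movie.mkv"

# Link the completed download into the media library without copying it.
# As long as both paths are mounted at these same locations in every
# container, the link resolves everywhere.
ln -s "$base/torrents/Some.Movie/movie.mkv" "$base/media/movies/Some Movie.mkv"
readlink "$base/media/movies/Some Movie.mkv"
```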

[–] hedgehog@ttrpg.network 1 points 1 month ago

If they do the form correctly, then it’s just an extra step for you to confirm. One flow I’ve seen that would accomplish this is:

  1. You enter your address into a form that can be auto-filled
  2. You submit the address
  3. If the address validates, the site saves the form and shows you the address in a more readable format. You can click Edit to make changes.
  4. If the address doesn’t validate, the site displays a modal asking you to confirm the address. If another address they were able to look up looks similar, it suggests you use that instead. It’s one click to continue editing, to use the suggested address, or to use what you originally entered.
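The validate-or-confirm branch in steps 3 and 4 can be sketched in a few lines. This is a toy stand-in for a real address verification service (the `KNOWN_ADDRESSES` set and matching rule are hypothetical); the point is that the user’s input is never silently replaced:

```python
# Toy address "database" standing in for a real verification service.
KNOWN_ADDRESSES = {"123 MAIN ST, SPRINGFIELD, IL 62701"}

def normalize(addr: str) -> str:
    """Uppercase and collapse whitespace for comparison."""
    return " ".join(addr.upper().split())

def handle_submission(addr: str) -> dict:
    """Return what the UI should do next: accept, or ask the user to confirm."""
    norm = normalize(addr)
    if norm in KNOWN_ADDRESSES:
        return {"action": "accept", "address": norm}
    # Not found: offer a close match if one exists (here, naively matched on
    # street line), but always let the user keep what they typed.
    street = norm.split(",")[0]
    suggestion = next((a for a in KNOWN_ADDRESSES if a.split(",")[0] == street), None)
    return {"action": "confirm", "entered": norm, "suggestion": suggestion}
```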

That said, if you’re regularly seeing the wrong address pop up it may be worth submitting a request to get your address added to the database they use. That process will differ depending on your location and the address verification service(s) used by the sites that are causing issues. If you’re in the US, a first step is to confirm that the USPS database has your address listed correctly, as their database is used by some downstream address verification services like “Melissa.” I believe that requires a visit to your local post office, but you may be able to fix it by calling your region’s USPS Address Management System office.

[–] hedgehog@ttrpg.network 1 points 1 month ago

Depending on setup this can be true with Jellyfin, too. I have a domain registered, use dynamic DNS, and have Traefik direct a subdomain to my Jellyfin server. My mobile clients are configured using that. My local clients use the local static IP.

If my internet goes down, my mobile clients can’t connect, even on the LAN.
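For anyone curious, a setup like that might look roughly like this in a docker-compose file (hypothetical domain and router name; 8096 is Jellyfin’s default HTTP port):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)"
      - "traefik.http.services.jellyfin.loadbalancer.server.port=8096"
```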
