this post was submitted on 07 Dec 2025
1012 points (98.0% liked)


Just want to clarify: this is not my Substack; I'm just sharing it because I found it insightful.

The author describes himself as a "fractional CTO" (no clue what that means, don't ask me) and advisor. His clients asked him how they could leverage AI. He decided to experience it for himself. From the author (emphasis mine):

> I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.
>
> I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.
>
> Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.

[–] MangoCats@feddit.it 1 points 1 day ago (1 children)

> If you outsource, you could at least sue them when things go wrong.

Most outsourcing consultants I have worked with aren't worth the legal fees to attempt to sue.

> Plus you can own the code if a person does it.

I'm not aware of any ownership issues with code I have developed using Claude, or any other agents. It's still mine, all the more so because I paid Claude to write it for me, at my direction.

[–] JcbAzPx@lemmy.world 1 points 1 day ago (1 children)

AI doesn't get IP protections.

[–] MangoCats@feddit.it 1 points 1 day ago (1 children)

Nobody is asking it to (except freaks trying to get news coverage).

It's like compiler output: no, I didn't write that assembly code, gcc did, but it did so based on my instructions. My instructions are copyrighted by me, and the gcc interpretation of them is a derivative work covered by my rights in the source code.
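
For instance (a trivial made-up example, not anything from a real project):

```c
/* hello.c - I typed this source; it's the only part a human authored. */
#include <stdio.h>

int main(void) {
    printf("Hello, copyright!\n");
    return 0;
}

/* Ask gcc to show the assembly it derives from my instructions:
 *
 *     gcc -S hello.c -o hello.s
 *
 * Nobody hand-wrote hello.s; it's a mechanical translation of hello.c,
 * produced at my direction, and my rights follow from the source.
 */
```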

When a painter paints a canvas, they don't record the "source code", but the final work is still theirs, not the brush maker's, the canvas maker's, or the paint maker's (though some pigment makers get a little squirrely about that...)

[–] JcbAzPx@lemmy.world 1 points 1 day ago (1 children)

> My instructions are copyrighted by me

First, how much of that is true is debatable. Second, that doesn't matter as far as the output is concerned. No one can legally own that.

[–] MangoCats@feddit.it 1 points 1 day ago (1 children)

> First, how much of that is true is debatable.

It's actually settled case law. AI does not hold copyright any more than spell-check in a word processor does. The person using the AI tool to create the work holds the copyright.

> Second, that doesn't matter as far as the output is concerned. No one can legally own that.

Idealistic notions aside, this is no different from Pixar owning the RenderMan output that is Toy Story 1 through 4.

[–] JcbAzPx@lemmy.world 0 points 16 hours ago (1 children)

You obviously didn't even glance at the case law. No one can own what AI produces. It is inherently public domain.

[–] MangoCats@feddit.it 1 points 6 hours ago

The statement that "No one can own what AI produces. It is inherently public domain" is partially true, but the situation is more nuanced, especially in the United States.

Here is a breakdown of the key points:

Human Authorship is Required: In the U.S., copyright law fundamentally requires a human author. Works generated entirely by an AI, without sufficient creative input or control from a human, are not eligible for copyright protection and thus fall into the public domain.

"Sufficient" Human Input Matters: If a human uses AI as an assistive tool but provides significant creative control, selection, arrangement, or modification to the final product, the human's contributions may be copyrightable. The U.S. Copyright Office determines the "sufficiency" of human input on a case-by-case basis.

Prompts Alone Are Generally Insufficient: Merely providing a text prompt to an AI tool, even a detailed one, typically does not qualify as sufficient human authorship to copyright the output.

International Variations: The U.S. stance is not universal. Some other jurisdictions, such as the UK and China, have legal frameworks that may allow for copyright in "computer-generated works" under certain conditions, such as designating the person who made the "necessary arrangements" as the author.

In summary, purely AI-generated content generally lacks copyright protection in the U.S. and is in the public domain. However, content where a human significantly shapes the creative expression may be copyrightable, though the AI-generated portions alone remain unprotectable.

To help you understand the practical application, I can explain the specific requirements for copyrighting a work that uses both human creativity and AI assistance. Would you like me to outline the specific criteria the U.S. Copyright Office uses to evaluate "sufficient" human authorship for a project you have in mind?

Use at your own risk; AI can make mistakes, but in this case it agrees 100% with my prior understanding.