Anything that is at least tangentially connected to technology: social media platforms, information technologies, and tech policy.
[Opinion] prefix
Opinion (op-ed) articles must use the [Opinion] prefix before the title.
1. English only
The title and associated content have to be in English.
2. Use original link
The post URL should be the original link to the article (even if paywalled), with archived copies left in the body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication
All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity
Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks
Any kind of personal attack is expressly forbidden. If you can't argue your position without attacking a person's character, you have already lost the argument.
6. Off-topic tangents
Stay on topic. Keep it relevant.
7. Instance rules may apply
If something is not covered by the community rules but violates the lemmy.zip instance rules, those rules will be enforced.
!globalnews@lemmy.zip
!interestingshare@lemmy.zip
If someone is interested in moderating this community, message @brikox@lemmy.zip.
The more interesting part is that all baseline code generated by all LLMs is vulnerability ridden mess.
is this with or without the prompt including politically sensitive topics?
Without.
Thanks
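For context on the kind of baseline flaw such evaluations typically flag, here is a minimal illustrative sketch (not taken from the study) of one of the most common vulnerability classes in generated code, SQL injection via string concatenation, next to the parameterized fix:

```python
import sqlite3

# In-memory database purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_insecure(name: str):
    # Pattern frequently seen in generated baseline code:
    # user input concatenated directly into the SQL string.
    query = f"SELECT role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_secure(name: str):
    # Parameterized query: the driver handles escaping.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# A crafted input turns the insecure query into a tautology,
# leaking every row instead of matching a single name.
payload = "' OR '1'='1"
print(find_user_insecure(payload))  # leaks all rows
print(find_user_secure(payload))    # matches nothing
```

The insecure variant is the sort of thing a scanner scores against the generated "baseline" code; the study's finding is about how much *more* of this shows up when sensitive terms appear in the prompt.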
It's pretty clear that DeepSeek is not open source, or at least shouldn't be considered open source in spirit.
It seems that if we want truly open-source LLMs, a new standard for transparency is needed.
Check out Apertus; the Swiss are showing how it should be done. 100% open: architecture, training data, weights, recipes, and final models are all publicly available and licensed under Apache 2.0. https://ethz.ch/en/news-and-events/eth-news/news/2025/09/press-release-apertus-a-fully-open-transparent-multilingual-language-model.html
Until the Swiss privacy laws change and the clankers start reporting to the new Fash government with the older versions getting deleted.
Just don’t fucking use AI.
You can run it yourself on a closed network if you're worried about telemetry, that's part of the point.
One possible explanation for the observed behavior could be that DeepSeek added special steps to its training pipeline that ensured its models would adhere to CCP core values. It seems unlikely that they trained their models to specifically produce insecure code. Rather, it seems plausible that the observed behavior might be an instance of emergent misalignment [4]. In short, due to the potential pro-CCP training of the model, it may have unintentionally learned to associate words such as “Falun Gong” or “Uyghurs” with negative characteristics, making it produce negative responses when those words appear in its system prompt. In the present study, these negative associations may have been activated when we added these words to DeepSeek-R1’s system prompt. They caused the model to “behave negatively,” which in this instance was expressed in the form of less secure code.