I seriously dislike my operating system giving me suggestions for replies to text messages or anything else. It makes a mockery of trust and sincerity. This whole thing is a bad idea.
"Counsel for plaintiffs in a copyright lawsuit filed against Meta allege that Meta CEO Mark Zuckerberg gave the green light to the team behind the company’s Llama AI models to use a data set of pirated ebooks and articles for training."
These past two days have been a real eye-opener for me. This probably isn't news to friends here who already use AI tools fluently, but until now I'd only used ChatGPT for everyday Q&A. Over the last two days I've learned to:
- search for sources with Perplexity AI
- feed a PDF to NotebookLM and generate a podcast audio version of it
- feed that audio to otter.ai and turn it into a transcript that plays back word by word
Please recommend any other tools you've found useful!
#长毛象安利大会 #长毛象安利交换大会 #长毛象安利中心 #generativeAI #AI #perplexity #notebooklm
Apple is temporarily disabling its AI-generated news notifications, which were frequently error-filled, misleading, or totally false, @CNN reports. Alerts included fake stories that Luigi Mangione, who is charged with murdering the UnitedHealthcare CEO, had shot himself, and that Benjamin Netanyahu had been arrested.
Anthropic, the company that made one of the most popular AI writing assistants in the world, requires job applicants to agree that they won’t use an AI assistant to help write their application.
“We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree.”
Mediating everything else in the world through an AI system is just fine, though, and non-AI-assisted communication skills are otherwise unimportant to them.
#AI #GenAI #GenerativeAI #Anthropic
"What's worse, as Berlin historian Henrik Schönemann discovered while experimenting with the bot, is that it seems trained to avoid pinning blame for Frank's death on the actual Nazis responsible for her death, instead redirecting the conversation in a positive light."
So soulless, they can't even imagine people creating music for the fun of it, for self-expression, or for the satisfaction of developing a skill. To these types, nothing matters but output.
https://www.404media.co/ceo-of-ai-music-company-says-people-dont-like-making-music/
Update2: https://fedihum.org/@lavaeolus/113860035702353891
Update: https://fedihum.org/@lavaeolus/113856096099247855
---
An '#AI-emulation' of Anne Frank made for use in schools.
Who the fuck thought this was appropriate?
Who in the everloving fuck coded this? Who approved it?
Who didn't stop them?
This needs to be luddited 🔥🔥🔥
Spoken as a (digital) historian, who uses #LLMs as tools.
I'm not one quick to anger, but I'm fuming 😤🤬🤬🤬
(Those chatbots are not new, but I hadn't seen this one until this morning in a post by @ct_bergstrom)
#AnneFrank
In a Zoom meeting right now in which there are almost as many #AI notetakers as there are human attendees.
Pretty soon, no one will come to meetings and it will be just AI notetakers listening for someone to start talking.
“I have a PhD in AI, worked to develop some of the algorithms used by generative AI,” one developer wrote. “I deeply regret how naively I offered up my contributions.”
https://www.wired.com/story/video-game-industry-artificial-intelligence-developers/
LinkedIn accused of using private messages to train AI
https://www.bbc.com/news/articles/cdxevpzy3yko
Has anyone put together an article that lists all the different walkthroughs for disabling AI in various programs/services?
Something like this: https://stefanbohacek.com/blog/opting-out-of-ai-in-popular-software-and-services/
Abstract: This paper will look at the various predictions that have been made about AI and propose decomposition schemas for analyzing them. It will propose a variety of theoretical tools for analyzing, judging, and improving these predictions. Focusing specifically on timeline predictions (dates given by which we should expect the creation of AI), it will show that there are strong theoretical grounds to expect predictions to be quite poor in this area. Using a database of 95 AI timeline predictions, it will show that these expectations are borne out in practice: expert predictions contradict each other considerably, and are indistinguishable from non-expert predictions and past failed predictions. Predictions that AI lie 15 to 25 years in the future are the most common, from experts and non-experts alike.
Armstrong, Stuart, and Kaj Sotala. 2012. “How We’re Predicting AI—or Failing To.” In Beyond AI: Artificial Dreams, edited by Jan Romportl, Pavel Ircing, Eva Zackova, Michal Polak, and Radek Schuster, 52–75. Pilsen: University of West Bohemia.
Note that this is from 2012.
One wonders what exactly an expert is when it comes to AI, if their track records are so consistently poor and unresponsive to their own failures.
#AI #GenAI #GenerativeAI #AGI
Virgin Money's AI-powered chatbot will scold you if you use the word "virgin".
https://www.ft.com/content/670f5896-1fe5-4a31-b41f-ad4f5b91202f
Archived link: https://archive.is/gGVgt
DeepSeek launched a free, open-source large-language model in late December, claiming it was developed in just two months at a cost of under $6 million — far less than what its Western counterparts have spent.
These developments have stoked concerns about the amount of money big tech companies have been investing in AI models and data centers, and raised alarm that the U.S. is not leading the sector as much as previously believed.
The sad reality is that the US could lead in this field (1), if we'd stop routinely putting narcissists and con artists in charge and showering them with praise even when they fail.
From https://www.cnbc.com/2025/01/27/nvidia-falls-10percent-in-premarket-trading-as-chinas-deepseek-triggers-global-tech-sell-off.html
#AI #GenAI #GenerativeAI #LLM #SnakeOil #hype #grift #MarketCapitalism
(1) Putting aside whether we should, which is an important question.
"… people with lower AI literacy perceive AI to be more magical, and thus experience greater feelings of awe when thinking about AI completing tasks, which explain their greater receptivity towards using AI-based products and services.
AI marketing is 100% about whether the sucker is sufficiently wowed by an impressive demo."
From https://econtwitter.net/users/amycastor/statuses/113902204443133686
#AI #GenAI #GenerativeAI #SnakeOil
"But to [Aaron, the creator of the anti-AI software Nepenthes], the fight is not about winning. Instead, it's about resisting the AI industry further decaying the Internet [...]."
Or to paraphrase Jean-Paul Sartre:
"You don’t fight enshittification because you are going to win, you fight enshittification because it is enshittification."
#news #TechNews #technology #ai #nepenthes #enshittification
Sabot in the Age of AI
Here is a curated list of strategies, offensive methods, and tactics for (algorithmic) sabotage, disruption, and deliberate poisoning.
🔻 iocaine
The deadliest AI poison: rather than merely slowing crawlers down, iocaine generates garbage for them.
🔗 https://git.madhouse-project.org/algernon/iocaine
🔻 Nepenthes
A tarpit designed to catch web crawlers, especially those scraping for LLMs. It devours anything that gets too close. @aaron
🔗 https://zadzmo.org/code/nepenthes/
🔻 Quixotic
Feeds fake content to bots and robots.txt-ignoring #LLM scrapers. @marcusb
🔗 https://marcusb.org/hacks/quixotic.html
🔻 Poison the WeLLMs
A reverse-proxy that serves dissociated-press style reimaginings of your upstream pages, poisoning any LLMs that scrape your content. @mike
🔗 https://codeberg.org/MikeCoats/poison-the-wellms
🔻 Django-llm-poison
A django app that poisons content when served to #AI bots. @Fingel
🔗 https://github.com/Fingel/django-llm-poison
🔻 KonterfAI
A model poisoner that generates nonsense content to degrade LLMs that scrape it.
🔗 https://codeberg.org/konterfai/konterfai
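Several of these tools (Quixotic, Poison the WeLLMs) build on the classic "dissociated press" trick: a Markov chain trained on your own pages emits text that is statistically plausible but meaningless, so a scraper can't cheaply tell it from the real thing. Here is a minimal sketch of that idea in Python; this is an illustration of the general technique, not code from any of the projects above, and the function names are my own.

```python
import random

def build_chain(text, order=2):
    """Build a word-level Markov chain: each tuple of `order` words
    maps to the list of words observed to follow it."""
    words = text.split()
    chain = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain.setdefault(key, []).append(words[i + order])
    return chain

def dissociate(text, length=50, order=2, seed=None):
    """Generate plausible-looking nonsense from the source text."""
    rng = random.Random(seed)
    chain = build_chain(text, order)
    key = rng.choice(list(chain))
    out = list(key)
    while len(out) < length:
        successors = chain.get(tuple(out[-order:]))
        if not successors:
            # Dead end: restart from a random state in the chain.
            out.extend(rng.choice(list(chain)))
            continue
        out.append(rng.choice(successors))
    return " ".join(out)
```

Served through a reverse proxy to known scraper user agents, output like this costs almost nothing to produce but pollutes any training corpus that ingests it.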
"[California Attorney General Rob Bonta’s] memo clearly illustrates what a legal clusterfuck the AI industry represents, though it doesn’t even get around to mentioning U.S. copyright law, which is another legal gray area where AI companies are perpetually running into trouble."