Has anyone put together an article that lists all the different walkthroughs for disabling AI in various programs/services?
Something like this: https://stefanbohacek.com/blog/opting-out-of-ai-in-popular-software-and-services/
Abstract: This paper will look at the various predictions that have been made about AI and propose decomposition schemas for analyzing them. It will introduce a variety of theoretical tools for analyzing, judging, and improving these predictions. Focusing specifically on timeline predictions (dates given by which we should expect the creation of AI), it will show that there are strong theoretical grounds to expect predictions to be quite poor in this area. Using a database of 95 AI timeline predictions, it will show that these expectations are borne out in practice: expert predictions contradict each other considerably, and are indistinguishable from non-expert predictions and past failed predictions. Predictions that AI lies 15 to 25 years in the future are the most common, from experts and non-experts alike.
DeepSeek launched a free, open-source large language model in late December, claiming it was developed in just two months at a cost of under $6 million, a far smaller outlay than those of its Western counterparts.
These developments have stoked concerns about the amount of money big tech companies have been investing in AI models and data centers, and raised alarm that the U.S. is not leading the sector as much as previously believed.
Resistance to the coup is the defense of the human against the digital and the democratic against the oligarchic.
Some argue that AI technology is more significant than electricity or the internet, and so it will spread fast. But there is little sign of this. Only 5-6% of American businesses said they used AI to produce goods and services in 2024, according to the country's Census Bureau.
The present perspective outlines how epistemically baseless and ethically pernicious paradigms are recycled back into the scientific literature via machine learning (ML) and explores connections between these two dimensions of failure. We hold up the renewed emergence of physiognomic methods, facilitated by ML, as a case study in the harmful repercussions of ML-laundered junk science. We summarize and analyze several such studies, with attention to the means by which unsound research lends itself to social harms. We explore some of the many factors contributing to poor practice in applied ML. In conclusion, we offer resources for research best practices to developers and practitioners.
Francis Galton, pioneering figure of the eugenics movement, believed that good research practice should consist in “gathering as many facts as possible without any theory or general principle that might prejudice a neutral and objective view of these facts” (Jackson et al., 2005). Karl Pearson, statistician and fellow purveyor of eugenicist methods, approached research with a similar ethos: “theorizing about the material basis of heredity or the precise physiological or causal significance of observational results, Pearson argues, will do nothing but damage the progress of the science” (Pence, 2011). In collaborative work with Pearson, Weldon emphasised the superiority of data-driven methods which were capable of delivering truths about nature “without introducing any theory” (Weldon, 1895).
Before the automobile industry invented the catalytic converter, the costs of reducing air pollution seemed astronomical, enough to bankrupt the entire industry. After they invented the catalytic converter, the costs were manageable. And they only invented it because they were faced with the threat of being shut down.
We were given a prompt as an invitation to participate in this newsletter: “How are you using AI in the classroom?” While we have accepted this invitation, we are engaging in the most humanistic act we can imagine—refusing the prompt.
Nothing is more valuable than a clear-headed understanding of which particular lies are most likely to succeed in the present environment, and which are just evanescent byproducts of the generally mendacious atmosphere. Dodge the decoys, save the right kind of energy to counter the real blows. Turning up the heat in lamenting the current crisis risks mistaking a mere mirage for a more substantial threat.
A California federal judge ruled Thursday that three authors suing Anthropic over copyright infringement can bring a class action lawsuit representing all U.S. writers whose work was allegedly downloaded from libraries of pirated works.
⇒ Please help me find #GenAI truth-telling sites! ⇐
In the past I've come across several websites that effectively debunk #GenerativeAI hype.
However, now that I actually need them, to help me make the case at work for strong oversight of the company's GenAI use, I can't find any of them.
It seems like no matter what search terms and search engine I use, I get garbage search results (hype, indeed!).
What are your go-to websites for debunking #AI hype?
#tech #LLM