RFK Jr. says autism database will use Medicare and Medicaid info

The National Institutes of Health will partner with the Centers for Medicare & Medicaid Services to create a database of Americans with autism, using insurance claims, medical records and smartwatch data.

NIH Director Jayanta Bhattacharya, left, and Health and Human Services Secretary Robert F. Kennedy Jr. speak before a news conference at the Health and Human Services Department on April 22. (Andrew Harnik/Getty Images)

Link to full story: https://www.npr.org/2025/05/08/nx-s1-5391310/kennedy-autism-registry-database-hhs-nih-medicare-medicaid

Something Bizarre Is Happening to People Who Use ChatGPT a Lot

Well, that’s not good.

Power Bot ‘Em

Researchers have found that ChatGPT “power users,” those who use it most often and for the longest stretches, are becoming dependent on, or even addicted to, the chatbot.

In a new joint study, researchers with OpenAI and the MIT Media Lab found that this small subset of ChatGPT users engaged in more “problematic use,” defined in the paper as “indicators of addiction… including preoccupation, withdrawal symptoms, loss of control, and mood modification.”

To get there, the MIT and OpenAI team surveyed thousands of ChatGPT users, both to glean how they felt about the chatbot and to study what kinds of “affective cues,” defined in a joint summary of the research as “aspects of interactions that indicate empathy, affection, or support,” they used when chatting with it.

Though the vast majority of people surveyed didn’t engage emotionally with ChatGPT, those who used the chatbot for longer periods of time seemed to start considering it to be a “friend.” The survey participants who chatted with ChatGPT the longest tended to be lonelier and get more stressed out over subtle changes in the model’s behavior, too.

NVIDIA’s AI Confessed That It Will Never Be Ethical

The Megatron Transformer, an AI developed by NVIDIA, shared some fascinating thoughts on why AI will never be moral and said that humanity shouldn’t use AI at all.

The Megatron Transformer is an AI developed by NVIDIA and based on earlier work by Google. It was trained on real-world data: the whole of English-language Wikipedia, 63 million English news articles from 2016 to 2019, 38 GB of Reddit data, and an immense number of Creative Commons sources. In other words, the transformer embeds more information than an ordinary person could master in a lifetime.

Recently, the Oxford Union, the famous debating society, allowed the Megatron Transformer to take part in a debate alongside Oxford University students. The motion was “This house believes that AI will never be ethical.” What the AI “had in mind” on the question is pretty curious:

“AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defense against AI.”

‘One of the worst jobs I ever had’: former Citizen employees on working for the crime app

Working for the app, which feeds users local crime information, ‘is very traumatic’ and the managers ‘don’t appear to care’

“There’s nothing that tells me that that wouldn’t happen again,” one employee said. “It’s a private security force controlled by a bunch of really rich white men who have no concept of the communities that they’re supposedly protecting because all they want is money. What could go wrong?”

Link to full story: https://www.theguardian.com/technology/2021/jun/02/citizen-app-employees-mental-health