helvede.net is one of the many independent Mastodon servers you can use to participate in the fediverse.
Welcome to Hell, the fediverse's hottest instance! We are a DK-based queerfeminist server shitposting in the 9th circle. Read our server rules!

Server stats: 169 active users

#gai

0 posts · 0 participants · 0 posts today
Replied in thread

Since Mitt Romney’s loss in the 2012 election, the Mercers have drifted ever further out of the orbit of Planet Koch, building up their own entities, including #Breitbart #News, #Cambridge #Analytica (the data startup), and the ironically named #Government #Accountability #Institute, all of them featuring #Steve #Bannon, the self-described economic nationalist, in top slots, until he joined the White House. Cambridge Analytica, on whose board Bannon served, and Breitbart, where he was the chief executive, are private companies. #GAI, which produced the campaign-season book "Clinton Cash: The Untold Story of How and Why Foreign Governments and Businesses Helped Make Bill and Hillary Rich", by Peter Schweizer, is a nonprofit organization.

Schweizer’s book, together with an accompanying movie executive-produced by #Rebekah Mercer, was designed to sully the reputation of Democratic presidential nominee Hillary Clinton. It was part of a Bannon-crafted strategy to steer the sort of smears that usually bubble up from the right-wing fever swamps directly into the mainstream media. It worked; in 2015 the New York Times and the Washington Post both made exclusive agreements with the GAI to report on advance excerpts of Clinton Cash.

#Bannon was wealthy before he met up with the Mercers, first through his work as a banker for Goldman Sachs back when it was a privately held company, and then through his own privately held ventures in movie-making and consulting. Though Bannon’s wealth, when compared to that of Betsy DeVos, makes him a pipsqueak in the Trump money universe (assets worth between $12 million and $54 million, according to the New York Times), it nonetheless derives primarily from privately held entities.

A new book, AI i skolan: Möjligheter och utmaningar i undervisningen, written by Thomas Nygren and reviewed by yours truly: kulturkapellet.dk/sagprosaanme
The book is a decent account of some of the opportunities and challenges AI presents for schools. But it never really goes deep into the discussions. The treatment feels somewhat abstract and high-level, and it stays too far removed from how teaching in schools actually takes place. #skolechat #GAI #GAEd

www.kulturkapellet.dk · AI i skolan. Möjligheter och utmaningar i undervisningen

Artificial intelligence can head in the wrong direction dangerously fast, argues Stuart Russell, according to Dagsavisen:
In nuclear power and medicine, we have technology-specific laws that demand a higher standard of safety. We do not assume that products are safe just because they have not been proven unsafe; instead, developers must prove that the products they make are safe. Russell believes we need similar legislation for AI.
dagsavisen.no/debatt/kommentar #GenAI #GAI #skolechat

Dagsavisen · Farlig fort i gal retning · By Aksel Braanen Sterri
Replied in thread

@luis_in_brief Yeah, there are enough move-fast-and-break-things stories about #gAI that even consumers have got the memo.

To be fair, I do think it's possible gAI can be used in interesting (and ethical) ways. It's just a challenge for a lot of companies that are using it solely to "save money".

[2403.00742] Dialect prejudice predicts AI decisions about people's character, employability, and criminality arxiv.org/abs/2403.00742 #Dialectbias - Unsurprisingly, the study shows prejudice or bias against wording that does not follow standard American English. #GAI #skolechat

arXiv.org · Dialect prejudice predicts AI decisions about people's character, employability, and criminality

Hundreds of millions of people now interact with language models, with uses ranging from serving as a writing aid to informing hiring decisions. Yet these language models are known to perpetuate systematic racial prejudices, making their judgments biased in problematic ways about groups like African Americans. While prior research has focused on overt racism in language models, social scientists have argued that racism with a more subtle character has developed over time. It is unknown whether this covert racism manifests in language models. Here, we demonstrate that language models embody covert racism in the form of dialect prejudice: we extend research showing that Americans hold raciolinguistic stereotypes about speakers of African American English and find that language models have the same prejudice, exhibiting covert stereotypes that are more negative than any human stereotypes about African Americans ever experimentally recorded, although closest to the ones from before the civil rights movement. By contrast, the language models' overt stereotypes about African Americans are much more positive. We demonstrate that dialect prejudice has the potential for harmful consequences by asking language models to make hypothetical decisions about people, based only on how they speak. Language models are more likely to suggest that speakers of African American English be assigned less prestigious jobs, be convicted of crimes, and be sentenced to death. Finally, we show that existing methods for alleviating racial bias in language models such as human feedback training do not mitigate the dialect prejudice, but can exacerbate the discrepancy between covert and overt stereotypes, by teaching language models to superficially conceal the racism that they maintain on a deeper level. Our findings have far-reaching implications for the fair and safe employment of language technology.

Maybe one of the best reads on #GAI: Will Douglas Heaven, "What is AI?"
"Writing about GPT-3 back in 2020, I said that the greatest trick AI ever pulled was convincing the world it exists. I still think that: We are hardwired to see intelligence in things that behave in certain ways, whether it's there or not. In the last few years, the tech industry has found reasons of its own to convince us that AI exists, too." This makes me skeptical.
technologyreview.com/2024/07/1 #skolechat

MIT Technology Review · What is AI? · By Will Douglas Heaven

⚠️ Resistance Is Futile - Maven to Assimilate All Mastodon Posts - Even Private Posts!

I just received notice of this Maven activity from a friend. I'll be deleting all of my private posts on Mastodon in case more groups are performing similar activities in the background.

Maven is being funded by OpenAI's Sam Altman 👎

wedistribute.org/2024/06/maven

#Maven #OpenAI #Fediverse #ActivityPub #Mastodon #Toots #Posts #Privacy #Invasion #AI #GAI #TrainingData #Copyright #Infringement

We Distribute · Maven Imported 1.12 Million Fediverse Posts (Updated)

"Generative artificial intelligence (#GAI) adds a new dimension to the problem of #disinformation. Freely available and largely unregulated tools make it possible for anyone to generate (...) fake content in vast quantities. (...) But there is also a positive side. Used smartly, GAI can provide a greater number of content consumers with trustworthy information"

– writes our colleague Julius Endert in this DWA post (which is part of a learning guide on tackling disinfo): akademie.dw.com/en/generative-

Deutsche Welle · Generative AI is the ultimate disinformation amplifier · By Julius Endert

Progress is being made on legal frameworks for AI safeguards. The EU has reached a political agreement on its framework and is expected to make it law early next year.

“…threatens stiff financial penalties for violations…7% of a company’s global turnover.”

“Strong and comprehensive rules from the EU ‘can set a powerful example for many governments considering regulation…’”

apnews.com/article/ai-act-euro

AP News · Europe reaches deal on world's first comprehensive AI rules · By Kelvin Chan
#AI #safeguards #laws