helvede.net is one of the many independent Mastodon servers you can use to participate in the fediverse.
Welcome to Hell (Helvede), the fediverse's hottest instance! We are a DK-based queerfeminist server, shitposting from the 9th circle. Read our server rules!

Server stats: 169 active users

#superintelligence

0 posts · 0 participants · 0 posts today
Knowledge Zone<p>The <a href="https://mstdn.social/tags/Race" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Race</span></a> for <a href="https://mstdn.social/tags/Superintelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Superintelligence</span></a>: <a href="https://mstdn.social/tags/Scaling" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Scaling</span></a>, <a href="https://mstdn.social/tags/Challenges" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Challenges</span></a>, and Future <a href="https://mstdn.social/tags/Prospects" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Prospects</span></a> : Medium</p><p>You should be setting <a href="https://mstdn.social/tags/Rejection" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Rejection</span></a> <a href="https://mstdn.social/tags/Goals" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Goals</span></a> : Vox</p><p>Why <a href="https://mstdn.social/tags/Women" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Women</span></a>’s <a href="https://mstdn.social/tags/Brains" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Brains</span></a> are more <a href="https://mstdn.social/tags/Resilient" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Resilient</span></a>: it could be their ‘silent’ X <a href="https://mstdn.social/tags/Chromosome" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chromosome</span></a> : Nature</p><p>Check our latest <a href="https://mstdn.social/tags/KnowledgeLinks" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>KnowledgeLinks</span></a></p><p><a 
href="https://knowledgezone.co.in/resources/bookmarks" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">knowledgezone.co.in/resources/</span><span class="invisible">bookmarks</span></a></p>
Pernille Tranberg ✔️<p>From one of my favorite sources on <a href="https://mastodon.online/tags/ai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ai</span></a>, Gary Marcus: we’re nowhere near <a href="https://mastodon.online/tags/agi" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>agi</span></a> or <a href="https://mastodon.online/tags/superintelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>superintelligence</span></a>, and the US has not ‘won’ the race. <a href="https://mastodon.online/tags/china" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>china</span></a> is releasing interesting models and the unethical, resource-consuming race continues.</p><p><a href="https://garymarcus.substack.com/p/the-race-for-ai-supremacy-is-over?publication_id=888615&amp;utm_medium=email&amp;utm_campaign=email-share&amp;triggerShare=true&amp;r=b92t8" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">garymarcus.substack.com/p/the-</span><span class="invisible">race-for-ai-supremacy-is-over?publication_id=888615&amp;utm_medium=email&amp;utm_campaign=email-share&amp;triggerShare=true&amp;r=b92t8</span></a></p>
Chuck Darwin<p>Sam Altman, <br>CEO of OpenAI, <br>has set the tone for the year ahead in AI with a bold declaration: </p><p>OpenAI believes it knows how to build <a href="https://c.im/tags/AGI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AGI</span></a> (artificial general intelligence) <br>and is now turning its sights towards <a href="https://c.im/tags/superintelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>superintelligence</span></a>.</p><p>While there is no consensus as to what AGI is exactly, OpenAI defines AGI as <br>"highly autonomous systems that outperform humans in most economically valuable work". </p><p>Altman believes superintelligent tools could accelerate scientific discovery and innovation beyond current human capabilities, <br>leading to increased abundance and prosperity.</p><p>Altman said:<br>"We are now confident we know how to build AGI as we have traditionally understood it. <br>We believe that, in 2025, we may see the first AI agents <br>“join the workforce” and materially change the output of companies. <br>We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.<br> <br>We are beginning to turn our aim beyond that -- to superintelligence in the true sense of the word. </p><p>Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, <br>and in turn massively increase abundance and prosperity."</p><p>Multiple AI researchers from leading labs have now expressed similar sentiments about the timeline for AGI. </p><p>In fact, last June, Ilya Sutskever (who played a key role in the failed attempt to oust Altman as CEO) departed OpenAI and founded what he described as the world's first "straight-shot superintelligence lab". 
</p><p>In September, Sutskever secured $1 billion in funding at a $5 billion valuation.</p><p>Altman’s reflections come as OpenAI prepares to launch its latest reasoning model, o3, later this month. </p><p>The company debuted o3 in December at the conclusion of its "12 Days of OpenAI" event with some impressive benchmarks.</p><p><a href="https://www.maginative.com/article/openai-says-it-knows-how-to-build-agi-and-sets-sights-on-superintelligence-2/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">maginative.com/article/openai-</span><span class="invisible">says-it-knows-how-to-build-agi-and-sets-sights-on-superintelligence-2/</span></a></p>
Harald Klinke<p>The UN report calls for more AI oversight, but will it be enough? <br>The report on AI endorses an IPCC-style model but leaves room for stronger international frameworks if AI risks grow more severe. This could include institutions with powers for monitoring, reporting, and enforcement.<br><a href="https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">un.org/sites/un2.un.org/files/</span><span class="invisible">governing_ai_for_humanity_final_report_en.pdf</span></a><br><a href="https://det.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://det.social/tags/Tech" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Tech</span></a> <a href="https://det.social/tags/Superintelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Superintelligence</span></a></p>
AI6YR Ben<p>I take that back... maybe it WOULD reduce emissions. </p><p>Human: What's the cause of climate change? OMG!</p><p>AI superintelligence: Humans are consuming too many fossil fuels and resources on this planet.</p><p>Human: HOW DO WE SOLVE THIS?</p><p>AI superintelligence: I'm building a robot army!</p><p>Human: Awesome! Why?</p><p>AI superintelligence: Alas, it's been nice to know you. We had to exterminate your species to save the planet!</p><p><a href="https://m.ai6yr.org/tags/satire" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>satire</span></a> <a href="https://m.ai6yr.org/tags/ai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ai</span></a> <a href="https://m.ai6yr.org/tags/climate" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>climate</span></a> <a href="https://m.ai6yr.org/tags/llm" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>llm</span></a> <a href="https://m.ai6yr.org/tags/superintelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>superintelligence</span></a></p>
Eamonn Kerins<p><span class="h-card" translate="no"><a href="https://beige.party/@Lottie" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>Lottie</span></a></span> Have to confess I have not been keeping pace with chocolate hazelnut latte research. Remind me when they're predicted to reach <a href="https://astrodon.social/tags/superintelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>superintelligence</span></a>?</p>
Strypey<p>"AI is about power and control. The technical details are interesting for some of us, but they’re a sideshow.</p><p>Superintelligence is a fantasy of power, not intelligence. Intelligence is just a technical detail."</p><p><a href="https://mastodon.nzoss.nz/tags/DavidChapman" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DavidChapman</span></a></p><p><a href="https://betterwithout.ai/one-bit-future" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">betterwithout.ai/one-bit-futur</span><span class="invisible">e</span></a></p><p>I've already posted quotes from this book that make this point, but I think it's worth reiterating. Plus I just really like this quote.</p><p><a href="https://mastodon.nzoss.nz/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mastodon.nzoss.nz/tags/MOLE" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MOLE</span></a> <a href="https://mastodon.nzoss.nz/tags/Superintelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Superintelligence</span></a></p>
Chris Hall<p>It's so good to read something about machine learning that isn't all about the current AI hype. </p><p><a href="https://aus.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://aus.social/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MachineLearning</span></a> <a href="https://aus.social/tags/SuperIntelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SuperIntelligence</span></a> <a href="https://aus.social/tags/BalconyGarden" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>BalconyGarden</span></a></p>
Max F.J. Schnetker<p>I finally got around to reading the Reclaiming <a href="https://sciences.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> paper by <span class="h-card"><a href="https://scholar.social/@Iris" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>Iris</span></a></span> et al. An interesting mathematical refutation of the current <a href="https://sciences.social/tags/AGI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AGI</span></a> and <a href="https://sciences.social/tags/Superintelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Superintelligence</span></a> hype. The paper provides a mathematical proof that it is infeasible to train a human-level AI with current methods, even if computational theories of mind are true. In my own work I focus on the ideological and political-economic background of the hype. Citing such a proof will be useful for setting the foundations of a sociological analysis.</p><p><a href="https://psyarxiv.com/4cbuv/" rel="nofollow noopener noreferrer" target="_blank"><span class="invisible">https://</span><span class="">psyarxiv.com/4cbuv/</span><span class="invisible"></span></a></p>
David Mankins<p>Stanislaw Lem and the Strugatsky brothers as an antidote to AI hype…</p><p>This talk, from a few years ago, remains topical. Indeed, more so than when it was first delivered:</p><p><a href="https://idlewords.com/talks/superintelligence.htm" rel="nofollow noopener noreferrer" target="_blank"><span class="invisible">https://</span><span class="ellipsis">idlewords.com/talks/superintel</span><span class="invisible">ligence.htm</span></a></p><p><a href="https://tldr.nettime.org/tags/ai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ai</span></a> <a href="https://tldr.nettime.org/tags/superintelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>superintelligence</span></a> <a href="https://tldr.nettime.org/tags/tescreal" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>tescreal</span></a></p>
HistoPol (#HP) 🏴 🇺🇸 🏴<p><span class="h-card"><a href="https://fedi.simonwillison.net/@simon" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>simon</span></a></span> <span class="h-card"><a href="https://wandering.shop/@annaleen" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>annaleen</span></a></span> <span class="h-card"><a href="https://mstdn.party/@voron" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>voron</span></a></span> </p><p>(6/7)<br>...it'll have human bias. Humans have always been great at bending or breaking the law when it suited their interests. How could a <a href="https://mastodon.social/tags/Superintelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Superintelligence</span></a> created with human values *not* arrive at the same, self-preserving conclusion?</p><p>A gloomy, yet, IMO, quite fitting assessment of the shape of things to come unless there's a <a href="https://mastodon.social/tags/Chernobyl" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chernobyl</span></a>-style "fallout" before <a href="https://mastodon.social/tags/GAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GAI</span></a> evolves into <a href="https://mastodon.social/tags/AGI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AGI</span></a> + humanity gets its act together and, as Prof. <a href="https://mastodon.social/tags/Tegmark" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Tegmark</span></a> admonishes: "Just look up!"</p>
HistoPol (#HP) 🏴 🇺🇸 🏴<p><span class="h-card"><a href="https://fedi.simonwillison.net/@simon" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>simon</span></a></span> <span class="h-card"><a href="https://wandering.shop/@annaleen" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>annaleen</span></a></span> <span class="h-card"><a href="https://mstdn.party/@voron" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>voron</span></a></span> </p><p>(5/6)</p><p>...This said, I might as well say that I see little chance of this happening, the globe being ruled by <a href="https://mastodon.social/tags/oligarchs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>oligarchs</span></a> following the principle of <a href="https://mastodon.social/tags/plutocracy" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>plutocracy</span></a> and <a href="https://mastodon.social/tags/capitalism" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>capitalism</span></a> (no, I am not a <a href="https://mastodon.social/tags/Marxist" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Marxist</span></a>;)) and <a href="https://mastodon.social/tags/autocrats" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>autocrats</span></a> </p><p>Even a non-<a href="https://mastodon.social/tags/superintelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>superintelligence</span></a> with access to the sensors of the <a href="https://mastodon.social/tags/IoT" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>IoT</span></a> will easily be aware of any threat to its existence and will find ways to circumvent the Laws of Robotics. 
</p><p>This first <a href="https://mastodon.social/tags/GeneralArtificialIntelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GeneralArtificialIntelligence</span></a> was built by humans, so...</p>
HistoPol (#HP) 🏴 🇺🇸 🏴<p><span class="h-card"><a href="https://fedi.simonwillison.net/@simon" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>simon</span></a></span> <span class="h-card"><a href="https://wandering.shop/@annaleen" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>annaleen</span></a></span> <span class="h-card"><a href="https://mstdn.party/@voron" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>voron</span></a></span> </p><p>(4/6)</p><p>..., self-preservation certainly is a defensible concept in the <a href="https://mastodon.social/tags/evolutionary" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>evolutionary</span></a> process, so I'd like to propose an alternative <br>6th Law of Robotics (s/:for which I might be hunted down by the presumed <a href="https://mastodon.social/tags/superintelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>superintelligence</span></a> some day, <a href="https://mastodon.social/tags/Terminator" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Terminator</span></a> style./s):</p><p>"An artificial intelligence, even if it is biological, must always have an <a href="https://mastodon.social/tags/autodestruct" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>autodestruct</span></a> mechanism which it cannot deactivate." </p><p>In other words, humanity must always be able to "pull the plug"....</p>
Ted Underwood<p>I think <a href="https://sigmoid.social/tags/superintelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>superintelligence</span></a> is going to be the flying car of the 21st century—and what we actually get will be more like what the 1960s actually got: the VW Beetle + turbulent social change.</p>
Annalee Newitz 🍜<p>This is a chart from Nick Bostrom's book Superintelligence, which <span class="h-card"><a href="https://wandering.shop/@charliejane" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>charliejane</span></a></span> and I will be discussing tomorrow on <span class="h-card"><a href="https://wandering.shop/@ouropinions" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>ouropinions</span></a></span>. In it, he casually suggests that a rigorous eugenics program will result in an "intellectual renaissance." This fits nicely with a later passage in the book (which we also discuss on the pod), where he suggests that AI developers create "voluntary slaves," based on human slaves in history. <a href="https://wandering.shop/tags/OurOpinionsAreCorrect" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OurOpinionsAreCorrect</span></a> <a href="https://wandering.shop/tags/podcasts" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>podcasts</span></a> <a href="https://wandering.shop/tags/superintelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>superintelligence</span></a> <a href="https://wandering.shop/tags/eugenics" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>eugenics</span></a></p>
Eamonn Kerins<p><span class="h-card"><a href="https://fosstodon.org/@markusl" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>markusl</span></a></span> the further removed from us they are, the harder it is for them to come, without the advantage of evolutionary adaptation, and successfully do us harm. Also, we ourselves may be a few decades from artificial general intelligence, at which point some predict there will be exponential growth of <a href="https://astrodon.social/tags/SuperIntelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SuperIntelligence</span></a>. In the hundreds or thousands of years of an alien's journey to us, we may become unrecognisably more advanced than we are now. Invading us would be a huge gamble for them.</p>