#agents

Energy usage of LLMs used through an individual chat interface doesn't seem worrying to me. It's no worse than the energy usage of a high-end PC, and it's bounded by how much a human can ask and read.

But deployment of LLMs as "autonomous #agents" seems like a floodgate for wasteful large-scale usage. It makes it easy to run lots of them, without time limits. And getting only summarized outputs hides how much computation was spent producing them.
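
A back-of-envelope comparison makes that asymmetry concrete. A minimal Python sketch, where the per-query energy and the query rates are illustrative assumptions, not measurements:

# Rough sanity check: human-paced chat vs. an unattended agent loop.
# All numbers are illustrative assumptions.
WH_PER_QUERY = 0.3                         # assumed energy per query, in Wh
HUMAN_QUERIES_PER_DAY = 100                # bounded by what a person can ask and read
AGENT_QUERIES_PER_DAY = 24 * 60 * 60 / 5   # one query every 5 s, around the clock

human_wh = WH_PER_QUERY * HUMAN_QUERIES_PER_DAY
agent_wh = WH_PER_QUERY * AGENT_QUERIES_PER_DAY

print(f"human-paced chat:     {human_wh:.0f} Wh/day")   # ~30 Wh/day
print(f"one autonomous agent: {agent_wh:.0f} Wh/day")   # ~5184 Wh/day

Under these assumed figures, one agent running around the clock uses roughly two orders of magnitude more energy than a human-paced session, and nothing stops an operator from running hundreds of them.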

About 10 years ago, or more, I said in a talk in a basement ping-pong bar: “if you don’t want the robots that build robots to put guns on the robots, don’t put guns in the warehouse they pick from.” #AI #Agents

Someone put guns in the warehouse.
The same applies to bank card details. If you don’t want #AI to accidentally spend all your money, don’t give it your bank card.

Convenience is a trap you spring on yourself.
Dependence is the goal.
#rebel by learning, rebel by doing things yourself.


@remixtures @ai_feed I’m a believer in agent-based systems and distributed intelligent systems. I’m amazed at how much #AI people have forgotten or failed to learn in graduate studies. #Agents date back at least to Hewitt’s “Actor” formalism in 1973, and some of the most powerful automation systems came from agent-based applications in the ‘90s. I agree that the speed of information flow is critical in such systems, because the quality of info degrades quickly with time. But there are also emergent problems from sampling too often, especially in noisy systems. There is no single approach to this, which is one reason the problem may be better solved by distributing it to agents closer to the data. Anyway, 👍🏼
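
The oversampling point can be shown with a toy Python sketch (all figures are assumed, purely illustrative): estimating a trend by differencing adjacent samples of a noisy signal gets worse, not better, as the sampling interval shrinks.

import random

SIGMA = 0.1        # assumed measurement noise (std. dev.)
TRUE_SLOPE = 1.0   # the underlying signal changes at 1 unit/s

def slope_estimate(dt):
    # Difference quotient over two noisy samples taken dt apart.
    y0 = random.gauss(0.0, SIGMA)
    y1 = TRUE_SLOPE * dt + random.gauss(0.0, SIGMA)
    return (y1 - y0) / dt

random.seed(42)
for dt in (1.0, 0.1, 0.01):
    errs = [abs(slope_estimate(dt) - TRUE_SLOPE) for _ in range(1000)]
    print(f"dt={dt:>5}: mean slope error ~ {sum(errs) / len(errs):.2f}")
# The error grows roughly 10x each time dt shrinks 10x: stale data is bad,
# but oversampling a noisy channel is bad too.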

#Leonard #Peltier, a Native American activist,
has been imprisoned for nearly 50 years in the USA
for a crime he maintains he did not commit.

There are serious and ongoing concerns about the fairness of his trial and conviction.
#Tribal #Nations, Nobel #Peace #Laureates, former #FBI #agents, and numerous others, including #James #Reynolds, the former U.S. Attorney whose office handled the prosecution,
have called for Leonard Peltier’s release.

Now 79 years old, he suffers from several chronic health ailments,
including one that is potentially fatal.

#President #Biden should grant him #clemency and release him before it is too late.

amnestyusa.org/campaigns/free-

Amnesty International USA, “Campaign: President Biden Should Free Leonard Peltier”: It’s time for Native American activist Leonard Peltier to go home before it’s too late. You can help!

"[2402.06664] LLM Agents can Autonomously Hack Websites"

arxiv.org/abs/2402.06664

"... #LLMs can now function autonomously as agents ... In this work, we show that LLM #agents can autonomously #hack websites, performing tasks as complex as blind database schema extraction and SQL injections without human feedback ..."

arXiv.org, “LLM Agents can Autonomously Hack Websites”: In recent years, large language models (LLMs) have become increasingly capable and can now interact with tools (i.e., call functions), read documents, and recursively call themselves. As a result, these LLMs can now function autonomously as agents. With the rise in capabilities of these agents, recent work has speculated on how LLM agents would affect cybersecurity. However, not much is known about the offensive capabilities of LLM agents. In this work, we show that LLM agents can autonomously hack websites, performing tasks as complex as blind database schema extraction and SQL injections without human feedback. Importantly, the agent does not need to know the vulnerability beforehand. This capability is uniquely enabled by frontier models that are highly capable of tool use and leveraging extended context. Namely, we show that GPT-4 is capable of such hacks, but existing open-source models are not. Finally, we show that GPT-4 is capable of autonomously finding vulnerabilities in websites in the wild. Our findings raise questions about the widespread deployment of LLMs.
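
For readers unfamiliar with the pattern the abstract refers to, here is a minimal, deliberately benign Python sketch of such an agent loop. The model and tools are stubs written for illustration, not the paper's code:

def stub_llm(history):
    # Stand-in policy: fetch a page once, then finish.
    # A real agent lets the LLM choose based on the accumulated history.
    if not any(step == "fetch_page" for step, _ in history):
        return {"tool": "fetch_page", "arg": "https://example.com"}
    return {"tool": "finish", "answer": "done"}

TOOLS = {
    "fetch_page": lambda url: f"<html>contents of {url}</html>",  # stub tool
}

def run_agent(goal, model, max_steps=10):
    history = [("goal", goal)]
    for _ in range(max_steps):
        action = model(history)                    # model picks the next step
        if action["tool"] == "finish":
            return action["answer"]
        result = TOOLS[action["tool"]](action["arg"])
        history.append((action["tool"], result))   # output fed back as context
    return None                                    # step budget exhausted

print(run_agent("summarize example.com", stub_llm))  # -> done

The loop is what makes the capability (and the risk) scale: the same structure works regardless of whether the tools behind it are harmless or offensive.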

Re-added/ported more agent behavior building blocks today, incl. path following and path attraction (both for open and closed geometries). In total there are now 7 freely combinable behaviors, each weighted and thus usable as a soft or hard constraint.

Here's an example of path following (here a Hilbert curve) and separation only...
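
As a rough illustration of how such weighted behaviors compose, a minimal Python sketch; the function names, weights, and radii are assumptions for illustration, not the project's actual API:

import math

def seek(pos, target):
    # Unit vector toward a path waypoint (the "path following" part).
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    d = math.hypot(dx, dy) or 1.0
    return (dx / d, dy / d)

def separation(pos, neighbors, radius=2.0):
    # Push away from nearby agents, stronger when closer.
    fx = fy = 0.0
    for nx, ny in neighbors:
        dx, dy = pos[0] - nx, pos[1] - ny
        d = math.hypot(dx, dy)
        if 0.0 < d < radius:
            fx += dx / (d * d)
            fy += dy / (d * d)
    return (fx, fy)

def combine(weighted):
    # weighted: list of (weight, steering_vector); a very large weight
    # effectively turns that behavior into a hard constraint.
    sx = sum(w * v[0] for w, v in weighted)
    sy = sum(w * v[1] for w, v in weighted)
    return (sx, sy)

pos, neighbors, waypoint = (0.0, 0.0), [(0.5, 0.2)], (10.0, 0.0)
steer = combine([
    (1.0, seek(pos, waypoint)),         # soft constraint: follow the path
    (5.0, separation(pos, neighbors)),  # heavier weight: avoid collisions
])
print(steer)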