mastodon.xyz is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Mastodon instance, open to everyone, but mainly English and French speaking.

#foundationmodels

Replied in thread

@Techmeme This is the danger of closed source

These are knowledge models, and they only output what they are fed with

And no, they won’t magically develop ‘reasoning skills’ and be able to sift through propaganda. NOT when it’s part of the training

To think otherwise means you don’t know shit about how they work

They obey statistics. Training data for #ai #foundationmodels should be subject to public #academic scrutiny. Otherwise the models are bound to fall for flooding attacks

Check out the new Helmholtz Foundation Model Initiative (#HFMI) website at hfmi.helmholtz.de/ #FoundationModels

The Helmholtz Association provides ideal conditions for developing such forward-looking applications: an abundance of data, powerful supercomputers for training the models, and extensive expertise in artificial intelligence.

Our goal is to develop Foundation Models across a wide spectrum of research fields to address the major questions of our time.

📣The Helmholtz Foundation Model Initiative (HFMI) is soliciting bids for a second round of projects. With this call, we extend our support for highest-impact pilot foundation model projects within the Helmholtz-Gemeinschaft. The call is open to all our researchers.
Apply now: helmholtz.de/en/research/curre
#ai #data #foundationmodels #supercomputing

A challenge at the interface of AI and geodata!
On 31 July 2024, the @Cyberagentur will give first insights into the new research project “HEGEMON” at its online partnering event.
An opportunity to network and gear up for this pioneering research.
More info and registration: t1p.de/m9vpi
#Cybersicherheit #FoundationModels #GenerativeAI #KI #Benchmarking #LLM #Multimodalitaet #Geodaten #GIS #Sicherheitstechnologie #Forschung #Innovation

#FoundationModels - Open for science and open to the world: A new generation of AI models, foundation models, is set to tackle and overcome a whole range of major challenges in science.

In a session at @republica on 29 May 2024 at 10 a.m., we will present four pilot projects of the Helmholtz Association and discuss with you the possibilities and limits of these AI models. #rp24 #HFMI

#ML #AI #GenerativeAI #LLMs #FoundationModels #PoliticalEconomy: "A recent innovation in the field of machine learning has been the creation of very large pre-trained models, also referred to as ‘foundation models’, that draw on much larger and broader sets of data than typical deep learning systems and can be applied to a wide variety of tasks. Underpinning text-based systems such as OpenAI's ChatGPT and image generators such as Midjourney, these models have received extraordinary amounts of public attention, in part due to their reliance on prompting as the main technique to direct and apply them. This paper thus uses prompting as an entry point into the critical study of foundation models and their implications. The paper proceeds as follows: In the first section, we introduce foundation models in more detail, outline some of the main critiques, and present our general approach. We then discuss prompting as an algorithmic technique, show how it makes foundation models programmable, and explain how it enables different audiences to use these models as (computational) platforms. In the third section, we link the material properties of the technologies under scrutiny to questions of political economy, discussing, in turn, deep user interactions, reordered cost structures, and centralization and lock-in. We conclude by arguing that foundation models and prompting further strengthen Big Tech's dominance over the field of computing and, through their broad applicability, many other economic sectors, challenging our capacities for critical appraisal and regulatory response." journals.sagepub.com/doi/full/

#GenerativeAI, #FoundationModels, #LLMs, and all of that hokey nonsense shall not appear in my #robotics roadmaps as anything other than a neat research item until it can demonstrate a feasible path to #FunctionalSafety or mathematical completeness.

I lead #Product on the largest mobile-#robotic fleet known to humankind. I will not entrust decisions that could maim or kill to a pile of nondeterministic math prone to “hallucinations” or confabulation.

What's the most efficient way of working with #FoundationModels and #LLMs? Generally I have found that I like to keep machines busy. So, something is continuously training and running validations, while I'm preparing for the next experiments and improvements.

This means that the machine isn't waiting idle for my work and my work isn't waiting idle for the machine to finish. It means that I have to mentally keep track of runs started a while ago, to make maximal use of their results, even if I am already working on the next thing which is a moving target.

It means I have to carefully design the experiments or runs so that I know what knowledge I gain from them, so that I can add that knowledge to the pile of learnings even if the actual codebase has progressed from that point already.

It also means that I have to run multiple training or validation jobs in parallel in a way that doesn’t stop me from working on something else, while keeping the information from those runs somewhere it can be retrieved later. It also means I often need to fold multiple learnings from these runs into the subsequent runs all at once, in a YOLO sort of way, instead of making slow but structured “change one thing at a time” progress.

In my experience this is the most effective way to work on these types of things.

Keep machines running, do not wait for results, but make sure the results are useful when you eventually get them.
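The fire-and-forget pattern described above can be sketched in a few lines: launch several runs in the background, keep working while they execute, and append each result to a persistent log so the learnings can be retrieved later. This is only an illustrative sketch; `run_experiment` and its configs are hypothetical stand-ins for real training or validation jobs.

```python
import json
from concurrent.futures import ThreadPoolExecutor

def run_experiment(config):
    """Stand-in for a long training or validation run.

    A real version would train a model and compute metrics;
    here we just derive a dummy metric from the config.
    """
    return {"config": config, "metric": sum(config.values())}

def launch_runs(configs, log_path):
    """Start all runs in parallel; append each result to a JSONL log."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_experiment, c) for c in configs]
        # While these run in the background, you would be preparing
        # the next experiments instead of waiting.
        with open(log_path, "a") as log:
            for fut in futures:
                log.write(json.dumps(fut.result()) + "\n")

def load_learnings(log_path):
    """Retrieve results from earlier runs, long after they finished."""
    with open(log_path) as log:
        return [json.loads(line) for line in log]
```

Appending to a JSON-lines log rather than holding results in memory is what lets the codebase move on: even if the code has changed by the time a run finishes, its outcome is still on disk to be folded into the next batch of experiments.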