mastodon.xyz is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Mastodon instance, open to everyone, but mainly English and French speaking.

#largelanguagemodel

Replied in thread

@skribe Conversely, the cost of printing, distribution, and storage puts up a barrier to spamming people on other continents with mass quantities of low value slop.

Just think through the logistics of a hostile Eurasian state sending a mass quantity of printed materials to Australia or North America.

Or, for that matter, a hostile North American state sending a mass quantity of printed materials to Europe or Asia.

You would need one of the following:

a) at least one printing press on each continent;
b) to ship the magazines by sea, but they'd be a month out of date when they arrived; or
c) to fly them overseas, which would get very expensive very quickly.

That's before you worry about things like delivery drivers (or postage), and warehouses.

These are less of an issue for books than they are for newspapers or magazines.

And if a particular newspaper or magazine is known to be reliable, written by humans, researched offline, and the articles are not available online, then there's potentially value in people buying a physical copy.

Had a very insightful conversation about the limitations on AI with a marketing copywriter.

Her comment was that actually writing marketing materials is a small part of her job.

If it was just about writing something that persuades a customer to buy a product, it would be a cakewalk.

What takes time is the stakeholder management.

It's navigating conflicting and contradictory demands of different departments.

Legal wants to say one thing, Sales something different, and another department something else entirely.

There are higher-up managers who need their egos soothed.

There are different managers with different views about what the customers want and what their needs are.

And there's a big difference in big bureaucratic organisations between writing marketing collateral, and writing something that gets signed off by everyone who needs to.

She's tried using AI for some tasks, and what that typically involves is getting multiple AI responses, and splicing them together into a cohesive whole.

Because it turns out there's a big difference in the real world between generating a statistically probable output, and having the emotional intelligence to navigate humans.
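
As a rough illustration of that splicing workflow (not her actual tooling), here is a minimal Python sketch, assuming a hypothetical generate() helper that stands in for whatever LLM service is being used:

# One prompt per stakeholder concern, because a single prompt rarely
# satisfies legal, sales, and management at once.

def generate(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real call to your LLM provider.
    return f"[model output for: {prompt[:40]}...]"

prompts = {
    "legal": "Draft product copy that stays within approved regulatory wording.",
    "sales": "Draft product copy that leads with the strongest customer benefit.",
    "management": "Draft product copy aligned with this quarter's strategy.",
}

drafts = {name: generate(p) for name, p in prompts.items()}

# The splicing step stays human: pick sentences from each draft and
# reconcile the conflicts by hand before circulating for sign-off.
spliced = "\n\n".join(drafts.values())
print(spliced)

Note that the stakeholder negotiation itself never appears in the code, which is the point of the post.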

#AI #LLM #ChatGPT

🔔 New Essay 🔔

"The Intelligent AI Coin: A Thought Experiment"

Open Access here: seanfobbe.com/posts/2025-02-21

Recent years have seen a concerning trend towards normalizing decisionmaking by Large Language Models (LLM), including in the adoption of legislation, the writing of judicial opinions and the routine administration of the rule of law. AI agents acting on behalf of human principals are supposed to lead us into a new age of productivity and convenience. The eloquence of AI-generated text and the narrative of super-human intelligence invite us to trust these systems more than we have trusted any human or algorithm ever before.

It is difficult to know whether a machine is actually intelligent because of problems with construct validity, plagiarism, reproducibility and transferability in AI benchmarks. Most people will either have to personally evaluate the usefulness of AI tools against the benchmark of their own lived experience or be forced to trust an expert.

To explain this conundrum I propose the Intelligent AI Coin Thought Experiment and discuss four objections: the restriction of agents to low-value decisions, making AI decisionmakers open source, adding a human-in-the-loop and the general limits of trust in human agents.

@histodons @politicalscience


DeepSeek Has Ripped Away AI’s Veil Of Mystique. That’s The Real Reason The Tech Bros Fear It [opinion piece]
--
theguardian.com/commentisfree/ <-- shared media article
--
[an interesting take that I think has some merit...]
"While privacy fears are justified, the main beef Silicon Valley has is that China’s chatbot is democratising the technology...
No, it was not a 'sputnik moment'..."
#DeepSeek #AI #deeplearning #China #risk #SputnikMoment #disruption #technology #largelanguagemodel #LLM #ChatGPT #Claude #chatbot #opensource

Replied in thread

@paninid I draw great optimism from a study finding that use of AI (aka LLMs) reduces people's belief in conspiracy theories. Sure, AI makes mistakes, but it's more important that AI is modeling fact-based learning, reasoning, and decision making. I literally believe that AI could be the tech to save American democracy.

mitsloan.mit.edu/ideas-made-to

MIT Sloan · MIT study: An AI chatbot can reduce belief in conspiracy theories