#chatgpt

People have low expectations for AI. I asked ChatGPT (paid account, not the free tier) what would make ChatGPT better. I think the answer is of much higher quality than what people keep saying one should expect. This is what I got:

--- My Prompt:
With DeepSeek, there are things it is very good at, but it fails quickly and rather spectacularly. Your models tend to break down as well, but somewhere there must be a model that will be able to stay on topic, even when the topic changes, so that conversations move around like human conversations move around subject-to-subject. What would take ChatGPT over the top?

--- ChatGPT reply:
Here’s what it’ll take to get closer to that human-like conversation flow:

  1. Long-Term Memory That Actually Works. Most models either have no memory or they remember only in a clunky, artificial way.

A better memory system would not just recall facts, but contextual relevance—like “we talked about LLM efficiency two days ago and you hate cheerleader language.”

It has to be smart recall, not just data storage.

  2. Thread Awareness
    Imagine a model that can say: “You mentioned DeepSeek earlier. Should I compare it again now that we’ve brought up open-source tools?”
    That requires models to track not just words, but conversational branches—like a tree, not a straight line.

  3. Graceful Fallbacks
    When a model doesn’t know, it should gracefully pivot, not hallucinate or go off-topic. This kind of conversational “stability” is rare and subtle.

Most models that fail “spectacularly,” as you said, are trying to bluff through complexity instead of narrowing scope or clarifying.

  4. Hybrid Contextual Reasoning
    The future model needs to reason across subjects rather than switching on and off.

If we're talking about language models, and you shift to science fiction, it should be able to bring up Asimov or neural networks in storytelling—bridging topics, not just hopping between them.

Right now, most models either:

  - Stay sharp but shallow (good at Q&A, bad at memory), or
  - Drift into friendly nonsense (chatty, but brittle).

The sweet spot will be a system that grows with you across time and topics—and doesn’t forget what you value in a conversation.
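The "tree, not a straight line" idea from the reply above can be sketched as a toy data structure. This is a minimal illustration, not anything from a real model: the `Turn`, `branch`, and `recall` names are invented here, and real systems would use learned embeddings rather than hand-tagged topic sets.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    """One conversational turn: its text, topic tags, and branches."""
    text: str
    topics: set
    children: list = field(default_factory=list)

    def branch(self, text, topics):
        """Start a new conversational branch under this turn."""
        child = Turn(text, set(topics))
        self.children.append(child)
        return child

def recall(root, query_topics):
    """Walk the conversation tree and collect earlier turns that
    share topics with the current query (smart recall, not storage)."""
    hits = []
    stack = [root]
    while stack:
        node = stack.pop()
        overlap = node.topics & set(query_topics)
        if overlap:
            hits.append((node.text, overlap))
        stack.extend(node.children)
    return hits

# Build a tiny conversation with two branches, like a chat that
# wanders from LLMs to science fiction and back.
root = Turn("Let's compare LLMs.", {"llm"})
a = root.branch("DeepSeek is good at code.", {"llm", "deepseek"})
root.branch("Asimov wrote about robot minds.", {"scifi", "robots"})
a.branch("But it drifts off topic.", {"deepseek", "memory"})

# A new question about DeepSeek surfaces only the relevant branch.
for text, why in recall(root, {"deepseek"}):
    print(text, sorted(why))
```

The point of the sketch is the shape, not the code: tracking branches lets a model say "you mentioned DeepSeek earlier" instead of treating the transcript as one flat line.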

....

When you see posts online meant to downplay the quality of output that's possible, keep in mind: this was a single question with no leading prompt. I think this is the kind of answer we want from our AI devices.

Thoughts? Feel free to tear this response apart. It needs that kind of human analysis.

#ai #openai #deepseek
Continued thread

It finally finished generating. The prompt for the last image didn't ask for vegetables at all; it only asked to extend the second image while keeping the pattern unchanged. But the final pattern is clearly not just a Kandinsky-Klimt hybrid: it was influenced by the earlier vegetable-drawing prompts in the same conversation thread, and carrots, pumpkins, cucumbers, and other vegetable motifs showed up 🤣🤣🤣

It seems that when generating images with ChatGPT, you need to plan out everything you'll generate within a single conversation, because it really does "interpret your instructions in light of the conversation context."

(The first image is a low-grade Kandinsky imitation on a spring theme. The second asked it to add Klimt-style patterns and gold-dust decoration to the first; honestly, ChatGPT's artistic stylization ability isn't great.)

#AIart #AIGC #ChatGPT

I had ChatGPT draw a low-grade Kandinsky imitation, then asked it to add Gustav Klimt-style colors and gold-dust decoration on top of that image. Because I had it generate a separate circular Galaxy Express 999 image between those two commands, the upgraded Klimt version it finally produced came out circular too. Lastly, I asked it to extend the original into a 3:2 rectangle while keeping the pattern unchanged. After giving that instruction, I fell asleep.

After sleeping maybe half an hour or so (?), I opened the chat window and found ChatGPT's "Creating image" progress bar frozen; only the instant the window regained focus did it start blinking again, signaling it was hard at work. One look at the progress, and it was still a blur... you're slacking off way too half-heartedly, hey!

#AIart #AIGC #ChatGPT