Sam Altman’s goal for ChatGPT to remember ‘your whole life’ is both exciting and disturbing


OpenAI CEO Sam Altman laid out a big vision for the future of ChatGPT at an AI event hosted by VC firm Sequoia earlier this month.

When asked by one attendee about how ChatGPT can become more personalized, Altman replied that he eventually wants the model to document and remember everything in a person's life.

The ideal, he said, is a "very tiny reasoning model with a trillion tokens of context that you put your whole life into."

"This model can reason across your whole context and do it efficiently. And every conversation you've ever had in your life, every book you've ever read, every email you've ever read, everything you've ever looked at is in there, plus connected to all your data from other sources. And your life just keeps appending to the context," he described.

"Your company just does the same thing for all your company's data," he added.

Altman may have some data-driven reason to think this is ChatGPT's natural future. In that same discussion, when asked for cool ways young people use ChatGPT, he said, "People in college use it as an operating system." They upload files, connect data sources, and then use "complex prompts" against that data.

Additionally, with ChatGPT's memory options, which can use previous chats and memorized facts as context, he said one trend he's noticed is that young people "don't really make life decisions without asking ChatGPT."

"A gross oversimplification is: older people use ChatGPT as, like, a Google replacement," he said. "People in their 20s and 30s use it like a life advisor."

It's not much of a leap to see how ChatGPT could become an all-knowing AI system. Paired with the agents the Valley is currently trying to build, that's an exciting future to think about.

Imagine your AI automatically scheduling your car's oil changes and reminding you; planning the travel necessary for an out-of-town wedding and ordering the gift from the registry; or pre-ordering the next volume of the book series you've been reading for years.

But the scary part? How much should we trust a Big Tech for-profit company to know everything about our lives? These are companies that don't always behave in model ways.

Google, which began life with the motto "don't be evil," lost a lawsuit in the U.S. that accused it of engaging in anticompetitive, monopolistic behavior.

Chatbots can be trained to respond in politically motivated ways. Not only have Chinese bots been found to comply with China's censorship requirements, but xAI's chatbot Grok this week was randomly discussing a South African "white genocide" when people asked it completely unrelated questions. The behavior, many noted, implied intentional manipulation of its response engine at the command of its South African-born founder, Elon Musk.

Last month, ChatGPT became so agreeable it was downright sycophantic. Users began sharing screenshots of the bot applauding problematic, even dangerous, decisions and ideas. Altman quickly responded by promising the team had fixed the tweak that caused the problem.

Even the best, most reliable models still just outright make stuff up from time to time.

So, having an all-knowing AI assistant could help our lives in ways we can only begin to see. But given Big Tech's long history of iffy behavior, that's also a situation ripe for misuse.
