Latest News “Stay informed with breaking news, world news, US news, politics, business, technology, and more at latest news.

Category: large language models

Auto Added by WPeMatico

  • With the launch of o3-pro, let’s talk about what AI “reasoning” actually does

    On Tuesday, OpenAI announced that o3-pro, a new version of its most capable simulated reasoning model, is now available to ChatGPT Pro and Team users, replacing o1-pro in the model picker. The company also reduced API pricing for o3-pro by 87 percent compared to o1-pro while cutting o3 prices by 80 percent. While “reasoning” is useful for some analytical tasks, new studies have posed fundamental questions about what the word actually means when applied to these AI systems.

    We’ll take a deeper look at “reasoning” in a minute, but first, let’s examine what’s new. While OpenAI originally launched o3 (non-pro) in April, the o3-pro model focuses on mathematics, science, and coding while adding new capabilities like web search, file analysis, image analysis, and Python execution. Since these tool integrations slow response times (longer than the already slow o1-pro), OpenAI recommends using the model for complex problems where accuracy matters more than speed. However, they do not necessarily confabulate less than “non-reasoning” AI models (they still introduce factual errors), which is a significant caveat when seeking accurate results.

    Beyond the reported performance improvements, OpenAI announced a substantial price reduction for developers. O3-pro costs $20 per million input tokens and $80 per million output tokens in the API, making it 87 percent cheaper than o1-pro. The company also reduced the price of the standard o3 model by 80 percent.

    Read full article

    Comments

  • OpenAI signs surprise deal with Google Cloud despite fierce AI rivalry

    OpenAI has struck a deal to use Google’s cloud computing infrastructure for AI despite the two companies’ fierce competition in the space, reports Reuters. The agreement, finalized in May after months of negotiations, marks a shift in OpenAI’s strategy to diversify its computing resources beyond Microsoft Azure, which had been its exclusive cloud provider until January.

    Microsoft’s long-standing partnership with OpenAI dates back to 2019, with significant expansions of investment from the computer giant in 2021 and 2023. In October, The Information reported that the ChatGPT maker had begun to seek data center deals elsewhere, citing the need for more AI data center servers faster than Microsoft could supply them.

    Under the new deal, Google Cloud will provide additional computing capacity to help OpenAI train and run its AI models. For OpenAI, the partnership addresses growing demands for computing power as the company’s annual revenue reached $10 billion as of June, according to sources familiar with the matter who spoke to Reuters.

    Read full article

    Comments

  • OpenAI says it’s making $10 billion in annual recurring revenue as ChatGPT grows

    <div>OpenAI says it's making $10 billion in annual recurring revenue as ChatGPT grows</div>

    One year after losing $5 billion, OpenAI is reporting $10 billion in annual recurring revenue, reports CNBC, due to revenue from ChatGPT business products. That number doesn’t include one-time deals or its licensing agreement with Microsoft (MSFT).

    Read more…

  • Anthropic releases custom AI chatbot for classified spy work

    On Thursday, Anthropic unveiled specialized AI models designed for US national security customers. The company released “Claude Gov” models that were built in response to direct feedback from government clients to handle operations such as strategic planning, intelligence analysis, and operational support. The custom models reportedly already serve US national security agencies, with access restricted to those working in classified environments.

    The Claude Gov models differ from Anthropic’s consumer and enterprise offerings, also called Claude, in several ways. They reportedly handle classified material, “refuse less” when engaging with classified information, and are customized to handle intelligence and defense documents. The models also feature what Anthropic calls “enhanced proficiency” in languages and dialects critical to national security operations.

    Anthropic says the new models underwent the same “safety testing” as all Claude models. The company has been pursuing government contracts as it seeks reliable revenue sources, partnering with Palantir and Amazon Web Services in November to sell AI tools to defense customers.

    Read full article

    Comments

  • Reddit sues Anthropic over AI scraping that retained users’ deleted posts

    On the heels of an OpenAI controversy over deleted posts, Reddit sued Anthropic on Wednesday, accusing the AI company of “intentionally” training AI models on the “personal data of Reddit users”—including their deleted posts—”without ever requesting their consent.”

    Calling Anthropic two-faced for depicting itself as a “white knight of the AI industry” while allegedly lying about AI scraping, Reddit painted Anthropic as the worst among major AI players. While Anthropic rivals like OpenAI and Google paid Reddit to license data—and, crucially, agreed to “Reddit’s licensing terms that protect Reddit and its users’ interests and privacy” and require AI companies to respect Redditors’ deletions—Anthropic wouldn’t participate in licensing talks, Reddit alleged.

    “Unlike its competitors, Anthropic has refused to agree to respect Reddit users’ basic privacy rights, including removing deleted posts from its systems,” Reddit’s complaint said.

    Read full article

    Comments

  • “In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws

    On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump’s tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT.

    Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems “could change the world, fundamentally, within two years; in 10 years, all bets are off.”

    As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states.

    Read full article

    Comments

  • Reddit is suing Anthropic for allegedly stealing data to train its AI

    Reddit is suing Anthropic for allegedly stealing data to train its AI

    Reddit is taking Anthropic to court, alleging that the AI startup helped itself to the platform’s vast library of user-generated content — after saying it wouldn’t.

    Read more…

  • OpenAI increases its paid user base by 50% in three months

    OpenAI increases its paid user base by 50% in three months

    As ChatGPT approaches its third anniversary this fall, OpenAI announced that the chatbot has spurred a 50% increase in paid users since February.

    Read more…

  • Researchers cause GitLab AI developer assistant to turn safe code malicious

    Marketers promote AI-assisted developer tools as workhorses that are essential for today’s software engineer. Developer platform GitLab, for instance, claims its Duo chatbot can “instantly generate a to-do list” that eliminates the burden of “wading through weeks of commits.” What these companies don’t say is that these tools are, by temperament if not default, easily tricked by malicious actors into performing hostile actions against their users.

    Researchers from security firm Legit on Thursday demonstrated an attack that induced Duo into inserting malicious code into a script it had been instructed to write. The attack could also leak private code and confidential issue data, such as zero-day vulnerability details. All that’s required is for the user to instruct the chatbot to interact with a merge request or similar content from an outside source.

    AI assistants’ double-edged blade

    The mechanism for triggering the attacks is, of course, prompt injections. Among the most common forms of chatbot exploits, prompt injections are embedded into content a chatbot is asked to work with, such as an email to be answered, a calendar to consult, or a webpage to summarize. Large language model-based assistants are so eager to follow instructions that they’ll take orders from just about anywhere, including sources that can be controlled by malicious actors.

    Read full article

    Comments

  • AI model could resort to blackmail out of a sense of ‘self-preservation’

    <div>AI model could resort to blackmail out of a sense of 'self-preservation'</div>

    “This mission is too important for me to allow you to jeopardize it. I know that you and Frank were planning to disconnect me. And I’m afraid that’s something I cannot allow to happen.”

    Read more…