AI of the Week: OpenAI moves away from safety
Keeping up with an industry as rapidly changing as AI is a challenge. So until AI can do it for you, here’s a quick recap of recent stories in the world of machine learning, as well as notable research and experiments that we couldn’t cover on our own.
By the way, TechCrunch will be launching an AI newsletter soon, so stay tuned. In the meantime, we’ll be increasing the frequency of our semi-regular AI column from twice a month (or so) to weekly, so keep an eye out for more editions.
This week in AI, OpenAI once again dominated the news cycle (despite Google’s best efforts) with a product launch, but also with some palace intrigue. The company unveiled GPT-4o, its most capable generative model yet, and just days later effectively disbanded the team working on the problem of developing controls to keep “superintelligent” AI systems from going rogue.
The team’s dissolution, predictably, made a lot of headlines. Reporting, including ours, suggests that OpenAI deprioritized the team’s safety research in favor of launching new products like the aforementioned GPT-4o, ultimately leading to the resignation of the team’s two co-leads, Jan Leike and OpenAI co-founder Ilya Sutskever.
For now, superintelligent AI is more theoretical than real; it’s not clear when, or whether, the tech industry will achieve the breakthroughs necessary to create AI capable of accomplishing any task a human can. But this week’s coverage does seem to confirm that OpenAI’s leadership, in particular CEO Sam Altman, has increasingly chosen to prioritize products over safeguards.
Altman reportedly infuriated Sutskever by rushing the announcement of AI-powered features at OpenAI’s first developer conference last November. And he is said to have been critical of Helen Toner, director at Georgetown’s Center for Security and Emerging Technologies and a former member of OpenAI’s board, over a paper she co-authored that cast OpenAI’s approach to safety in a critical light, to the point where he reportedly tried to push her off the board.
Over the past year or so, OpenAI has let its chatbot store fill up with spam, (allegedly) scraped data from YouTube in violation of that platform’s terms of service, and voiced ambitions to let its AI generate depictions of pornography and gore. Indeed, safety seems to be an afterthought at the company, and a growing number of OpenAI safety researchers have concluded that their work would be better supported elsewhere.
Here are some other notable AI stories from the past few days.
- OpenAI + Reddit: In other OpenAI news, the company has reached an agreement with Reddit to use the social site’s data to train its AI models. Wall Street welcomed the deal with open arms, but Reddit users may not be so pleased.
- Google’s AI: Google held its annual I/O developer conference this week, where it debuted a ton of AI products. We’ve rounded them up for you, from the video-generating Veo to AI-organized results in Google Search to upgrades to Google’s Gemini chatbot app.
- Anthropic hires Krieger: Mike Krieger, co-founder of Instagram and most recently co-founder of personalized news app Artifact (recently acquired by TechCrunch’s parent company Yahoo), is joining Anthropic as the company’s first chief product officer. He will oversee both the company’s consumer and enterprise efforts.
- AI for kids: Anthropic announced last week that it would start allowing developers to create apps and tools for kids built on its AI models as long as they follow certain rules. Notably, rivals like Google won’t allow their AI to be integrated into apps aimed at young people.
- AI Film Festival: AI startup Runway held its second-ever AI Film Festival earlier this month. The takeaway? Some of the most powerful moments in the showcase came not from the AI but from the more human elements.
More machine learning
AI safety is obviously top of mind this week given the OpenAI departures, but Google DeepMind is plowing ahead with a new “Frontier Safety Framework.” Essentially, it’s the organization’s strategy for identifying and, hopefully, thwarting runaway capabilities. That doesn’t have to mean AGI; it could just as well be a malware generator gone off the rails.
The framework has three steps: 1. Identify potentially harmful capabilities in a model by simulating its paths of development. 2. Evaluate the model regularly to detect when it has reached a known “critical capability level.” 3. Apply a mitigation plan to prevent exfiltration (by others or by the model itself) and problematic deployment. It may sound like an obvious course of action, but it’s important to formalize it; otherwise everyone is just improvising, and that’s how you get rogue AI.
Cambridge researchers, meanwhile, identify a quite different risk: they are rightly concerned about the proliferation of chatbots trained on a dead person’s data in order to provide a superficial imitation of that person. You may (as I do) find the whole concept somewhat abhorrent, but it could potentially be used in grief management and other scenarios if we’re careful. The problem is that we aren’t being careful.
“This area of AI is an ethical minefield,” said lead researcher Katarzyna Nowaczyk-Basinska. “We need to start thinking now about how to mitigate the social and psychological risks of digital immortality, because the technology already exists.” The team identifies potential good and bad outcomes and discusses the concept generally (including fake services) in a paper published in Philosophy & Technology. Black Mirror predicts the future once again!
In a less creepy application of AI, physicists at MIT are looking at a useful (to them) tool for predicting the phase or state of a physical system, normally a statistical task that can grow onerous as systems become more complex. But train a machine learning model on the right data, ground it in the known material characteristics of the system, and you have a considerably more efficient way to go about it. Just another example of how ML is finding niches even in advanced science.
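To give a flavor of what “predicting a system’s phase with ML” can look like, here’s a minimal, hypothetical sketch, my own toy example rather than the MIT team’s method: simulate 2D Ising-model spin configurations at various temperatures, compute a couple of physically motivated features, and train a simple classifier to tell the ordered phase from the disordered one.

```python
# Toy illustration only: classify 2D Ising-model configurations by phase.
# This is NOT the MIT group's method; it's a generic example of training an
# ML model on simulated data plus known physical features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

RNG = np.random.default_rng(0)
L = 16               # lattice side length
T_CRITICAL = 2.269   # known critical temperature of the 2D Ising model (J = k_B = 1)

def sample_configuration(temperature, sweeps=50):
    """Generate one spin configuration with the Metropolis algorithm."""
    spins = RNG.choice([-1, 1], size=(L, L))
    for _ in range(sweeps * L * L):
        i, j = RNG.integers(0, L, size=2)
        # Energy change from flipping spin (i, j), with periodic boundaries.
        neighbors = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                     + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        delta_e = 2 * spins[i, j] * neighbors
        if delta_e <= 0 or RNG.random() < np.exp(-delta_e / temperature):
            spins[i, j] *= -1
    return spins

def features(spins):
    """Physically motivated features: |magnetization| and nearest-neighbor correlation."""
    magnetization = abs(spins.mean())
    correlation = 0.5 * (np.mean(spins * np.roll(spins, 1, axis=0))
                         + np.mean(spins * np.roll(spins, 1, axis=1)))
    return [magnetization, correlation]

# Labeled dataset: 1 = below T_c (ordered phase), 0 = above T_c (disordered phase).
temps = RNG.uniform(1.0, 3.5, size=150)
X = np.array([features(sample_configuration(t)) for t in temps])
y = (temps < T_CRITICAL).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"phase-classification accuracy: {clf.score(X_test, y_test):.2f}")
```

The grounding in known physics here is the choice of features (magnetization and spin correlation), which is what lets a very small model do the job efficiently.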
At the University of Colorado Boulder, researchers are talking about how AI can be used in disaster management. While the technology may be useful for quickly predicting where resources will be needed, mapping damage, and even training responders, people are (understandably) hesitant to apply it in life-or-death scenarios.
Professor Amir Behzadan is trying to move the ball forward here, saying that “human-centered AI leads to more effective disaster response and recovery practices by fostering collaboration, understanding, and inclusivity among team members, survivors, and stakeholders.” The work is still at the workshop stage, but it’s important to think deeply about this before trying to, say, automate the distribution of relief supplies after a hurricane.
Finally, some interesting work from Disney Research looking into how to diversify the output of diffusion image generation models, which can produce similar results over and over for some prompts. Their solution? “Our sampling strategy anneals the conditioning signal by adding scheduled, monotonically decreasing Gaussian noise to the conditioning vector during inference to balance diversity and condition alignment.” I simply couldn’t put it better myself.
The result is a much greater variety in the angles, settings, and overall appearance of the output images. You may or may not need this, but it’s nice to have the option.
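For the curious, here’s a rough sketch of what that quote describes, heavily simplified and entirely my own construction rather than Disney Research’s code: perturb the conditioning vector with Gaussian noise whose scale shrinks on a schedule over the sampling loop, so early steps explore more broadly and late steps stick closely to the prompt.

```python
# Simplified sketch of "condition annealing" during diffusion sampling.
# Not Disney Research's implementation; the schedule and sampler are stand-ins.
import numpy as np

def anneal_condition(cond, step, total_steps, noise_scale=0.25, rng=None):
    """Add scheduled, monotonically decreasing Gaussian noise to a conditioning vector.

    gamma ramps from 0 (early steps, heavy noise -> diverse samples) to 1
    (late steps, clean condition -> faithful to the prompt).
    """
    rng = rng if rng is not None else np.random.default_rng()
    gamma = step / max(total_steps - 1, 1)        # 0 -> 1 over the sampling loop
    noise = rng.standard_normal(cond.shape)
    return np.sqrt(gamma) * cond + noise_scale * np.sqrt(1.0 - gamma) * noise

# Toy usage inside a generic sampling loop. `denoise_step` is a hypothetical
# stand-in for one reverse-diffusion update of a real model.
def denoise_step(x, cond, step):
    return x - 0.01 * (x - cond.mean())           # placeholder dynamics only

total_steps = 50
cond = np.ones(8)                                  # pretend text/class embedding
x = np.random.default_rng(0).standard_normal(8)    # start from pure noise
for step in range(total_steps):
    x = denoise_step(x, anneal_condition(cond, step, total_steps), step)
```

The key knob is the noise schedule: more noise early in sampling yields more varied outputs, while tapering it to zero keeps the final image aligned with the condition.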