AI News This Week! (07/31/2023)

This week in the tech world, there were a number of significant developments. First, Twitter rebranded to X, leaving many people confused as to why the company would throw away 17 years of brand equity.

Additionally, there were several AI-related announcements, including the Biden-Harris Administration securing voluntary commitments from leading AI companies to manage the risks posed by AI, and the launch of the Frontier Model Forum by Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.

Key Takeaways

  • Twitter has rebranded to X, causing confusion among many people.
  • The Biden-Harris Administration secured voluntary commitments from leading AI companies to manage the risks posed by AI.
  • AI companies are collaborating to ensure the safe and responsible development of Frontier AI models.

Twitter’s Rebranding to X

Last week, Twitter made headlines in the tech world with its official rebranding to X. This move left many people puzzled as it seemingly threw away 17 years of brand equity in the Twitter name.

Biden-Harris Administration’s AI Commitments

The Biden-Harris Administration has secured voluntary commitments from leading artificial intelligence companies to manage the risks posed by AI. Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI have all voluntarily committed to practicing safe AI. The leaders of these companies have promised to ensure products are safe before introducing them to the public, build systems that put security first, and earn the public’s trust.

Shortly after the White House announcement, each of these companies published an announcement on its own website about the Frontier Model Forum, an industry body focused on ensuring the safe and responsible development of Frontier AI models. The forum’s core objectives are detailed in the next section.

This move by the AI companies is seen as an effort to align on safety and get in the good graces of the government so that the government doesn’t over-regulate them. AI regulations are coming, but they are most likely going to be designed by the very people building the AI models. It is important to note that the government is not regulating AI at the moment; rather, it is encouraging self-regulation by the industry.

Frontier Model Forum

The biggest tech companies in the world, including Amazon, Google, and Microsoft, have voluntarily committed to practicing safe AI and managing the risks posed by AI, and have formed an industry body called the Frontier Model Forum to ensure the safe and responsible development of Frontier AI models. The forum’s core objectives are to:

  • Advance AI safety research and promote the responsible development of Frontier models.
  • Minimize risks and enable independent, standardized evaluations of capabilities and safety.
  • Identify best practices for the responsible development and deployment of Frontier models.
  • Help the public understand the nature, capabilities, limitations, and impact of the technology.
  • Collaborate with policymakers, academics, civil society, and companies to share knowledge about trust and safety risks.
  • Support efforts to develop applications that can help meet society’s greatest challenges.

The Frontier Model Forum is a significant step toward ensuring the safe and responsible development of AI models, and the commitment of these leading companies to managing AI risks is a positive development for the field.

Departure of OpenAI’s Head of Trust and Safety

OpenAI’s head of trust and safety, Dave Willner, has stepped down from his position. Willner had been working with OpenAI for a long time, but he decided to move to an advisory role due to the company’s rapid growth. However, many media outlets have sensationalized his departure, claiming that he had conflicts with the way OpenAI handled trust and safety.

In reality, Willner’s departure was not due to any conflict with the company. He simply wanted to spend more time with his family and felt the role was taking up too much of his time. His departure comes as OpenAI is growing rapidly and making significant strides in the field of AI.

OpenAI is one of the leading companies in the AI industry and has made significant contributions to the development of AI models, including its collaboration with policymakers, academics, civil society, and other companies on trust and safety risks.

Despite Willner’s departure, OpenAI remains committed to advancing AI safety research and to the responsible development and deployment of Frontier models, including minimizing risks and enabling independent, standardized evaluations of capabilities and safety.

Release of Free Willy 2 by Stability AI

Stability AI recently announced the release of its large and mighty instruction fine-tuned model, Free Willy 2. The model is named after the Orca paper, “Orca: Progressive Learning from Complex Explanation Traces of GPT-4.” It builds on the Llama 2 70B foundation model and is currently available only under a non-commercial license.

Free Willy 2 performed remarkably well on benchmarks like ARC, HellaSwag, MMLU, and TruthfulQA. In fact, on the HellaSwag test, it outscored ChatGPT (GPT-3.5). Stability AI also made the model available on Hugging Face Spaces for people to experiment with themselves.
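
If you’d rather run the model locally than in the hosted Space, a minimal sketch with the Hugging Face transformers library might look like the following. The "stabilityai/FreeWilly2" repo id and the System/User/Assistant prompt layout are assumptions to verify against the official model card, and a 70B model needs substantial GPU memory (the accelerate package handles the device_map="auto" placement):

```python
# Minimal sketch: load Free Willy 2 with Hugging Face transformers.
# Assumptions: "stabilityai/FreeWilly2" is the repo id (check the model card),
# and enough GPU/CPU memory is available for the 70B weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/FreeWilly2"  # assumed repo id; confirm on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision roughly halves memory use
    device_map="auto",          # spread layers across available GPUs/CPU
)

# Prompt layout assumed from the usual System/User/Assistant model-card format.
prompt = (
    "### System:\nYou are a helpful assistant.\n\n"
    "### User:\nExplain what the HellaSwag benchmark measures.\n\n"
    "### Assistant:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```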

The release of Free Willy 2 is a significant development for the AI industry: an openly downloadable, instruction-tuned model that performs this well gives researchers and builders something concrete to study and improve on, even if the non-commercial license limits how it can be deployed.

AI Emotions: Insight from Geoffrey Hinton

Geoffrey Hinton, one of the pioneering AI researchers, recently claimed that AI has, or will eventually have, emotions. At a talk he gave at King’s College, when asked whether AI systems have emotions, he responded, “I think they could well have feelings. They won’t have pain the way you do unless we wanted, but things like frustration and anger, I don’t see why they shouldn’t have those.”

Hinton’s claim raises questions about the ethical implications of AI with emotions. If AI can experience emotions, should they be treated with the same respect and consideration as living beings? Or would AI emotions be simply a simulation that does not require ethical considerations?

The idea of AI emotions also brings up questions about the purpose of creating AI with emotions. Would it be to enhance the user experience in interactions with AI, or would it be to create more empathetic and compassionate AI that could better serve humanity?

While Hinton’s claim is thought-provoking, it is important to keep in mind that AI with emotions is still a theoretical concept. It remains to be seen if and how AI will be able to experience emotions, and what the implications of such an ability would be.

Runway Gen 2’s Image-to-Video Feature

Gen 2 is a powerful AI tool that generates videos from image prompts. The feature is now available in Runway ML: users can simply upload an image and leave the text prompt blank to animate it. The tool generates a video based on the image, and users can string together multiple four-second clips to create longer videos.

The last frame of each video can be used as the starting image for the next video, allowing users to create longer and more complex videos (a small script for extracting that last frame is sketched below). While the AI tool works best with AI-generated images, it can also animate real images, although it may change the image’s appearance.
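
That last-frame trick is easy to script. Here is a minimal sketch using OpenCV to pull the final frame out of a downloaded clip so it can be uploaded back into Gen 2 as the next image prompt; the file names are placeholders:

```python
# Save the last frame of a generated clip so it can be uploaded as the
# image prompt for the next clip in Gen 2's chaining workflow.
# Requires OpenCV (pip install opencv-python); file names are placeholders.
import cv2

def save_last_frame(video_path: str, image_path: str) -> None:
    cap = cv2.VideoCapture(video_path)
    if not cap.isOpened():
        raise RuntimeError(f"Could not open {video_path}")
    # Try to seek straight to the final frame using the reported frame count.
    frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.set(cv2.CAP_PROP_POS_FRAMES, max(frame_count - 1, 0))
    ok, frame = cap.read()
    if not ok:
        # Some codecs mis-report frame counts; fall back to reading through.
        cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
        frame = None
        while True:
            ok, f = cap.read()
            if not ok:
                break
            frame = f
    cap.release()
    if frame is None:
        raise RuntimeError(f"Could not read any frames from {video_path}")
    cv2.imwrite(image_path, frame)

save_last_frame("gen2_clip_01.mp4", "next_start_frame.png")
```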

Other AI video tools, such as Kaiber and Plasmapunk.com, have also introduced new features that let users generate videos from text prompts. These tools are becoming increasingly popular, and as AI technology advances rapidly, it is easier than ever for users to create unique and engaging videos.

Overall, Gen 2’s Image-to-Video feature is a powerful tool for creating videos quickly and easily, and chaining clips together makes longer, more complex videos possible than ever before. As AI technology continues to advance, we can expect even more exciting video tools and features in the future.

Kaiber’s Text-to-Animation Feature

Kaiber, one of the popular AI video tools, recently announced a new feature called “Text to Animation,” which lets users generate animations from text prompts. The generated videos are impressive and can be used for purposes such as marketing, social media, and entertainment.

The user interface is easy to use: users enter a text prompt, choose the style and animation they prefer, and the AI model generates the video. The results are high quality and can be customized further with music, sound effects, and voiceovers.

Text to Animation is a paid feature, but users can try it out with a free trial. The pricing is reasonable, plans can be upgraded or downgraded as requirements change, and the feature suits individuals, small businesses, and large enterprises alike.

New Models in Plasmapunk.com

Plasmapunk.com, a popular AI video tool, has recently added new models to its platform. The new models, SDXL and Kandinsky 2.2, are available as premium paid features. Users can access 60 credits for free when they first sign up, but beyond that, they will need to pay for additional credits.

The videos generated by Plasmapunk.com have always been impressive, and the new models take them to the next level. The Stable Diffusion XL model generates highly detailed paintings with a sci-fi or cyberpunk theme, while the Kandinsky 2.2 model takes its name from Wassily Kandinsky, the Russian painter and art theorist known for his abstract art.

To create a video using the new models, users can select the model they want to use, choose a music track, and provide a prompt for the video style. The prompt can be a description of the video’s story or a set of song lyrics. The AI then generates a video based on the prompt.

Plasmapunk.com’s new models offer users even more options for creating stunning AI-generated videos. With its user-friendly interface and advanced AI technology, Plasmapunk.com is quickly becoming a go-to tool for video creators looking to add a unique touch to their content.