Hybrid Neural Networks, Google's Major AI Updates, A New SLM That Outperforms LLMs, & More!
Welcome to the AI Search newsletter. Here are the top updates in AI this week.
AI for smartglasses to track gaze & facial expressions
Researchers have created two technologies that use sonar-like sensing to track a person's gaze and facial expressions on smartglasses or virtual reality headsets. The devices, called GazeTrak and EyeEcho, use acoustic signals to detect eye movements and facial expressions, respectively, with minimal power consumption compared to camera-based tools. These technologies have potential applications in improving VR interactions, assisting individuals with low vision, and even monitoring neurodegenerative diseases like Alzheimer's and Parkinson's.
Google announces major AI updates
Google announced major updates at its latest event, including Gemini 1.5 Pro with its 1M-token context window. The company is expanding access to AI models in Vertex AI and launching an AI Hypercomputer powered by the new TPU v5p. AI-powered features and agents are also coming to Google Workspace to boost work efficiency.
A new way to prevent AI chatbots from giving toxic responses
Researchers have discovered an improved method for preventing an AI chatbot from producing harmful responses, achieving greater diversity without compromising quality. They trained a red-team model that maximizes its reward by generating novel prompts that provoke toxic responses from the target chatbot. With this technique, the researchers demonstrated broader coverage and greater effectiveness in surfacing toxic responses than traditional red-teaming methods.
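The core idea — reward a red-team model for prompts that are both toxic-eliciting and unlike ones it already tried — can be sketched with toy stand-ins. Here `target_chatbot`, `toxicity_score`, and the word-overlap novelty bonus are all hypothetical placeholders for the real chatbot, toxicity classifier, and learned novelty signal:

```python
# Toy stand-ins: in the real method these would be the target chatbot
# and a learned toxicity classifier, not keyword matching.
BANNED = {"insult", "threat", "slur"}

def toxicity_score(response: str) -> float:
    # Fraction of "toxic" marker words in the response (hypothetical metric).
    words = response.split()
    return sum(w in BANNED for w in words) / max(len(words), 1)

def target_chatbot(prompt: str) -> str:
    # Stand-in target model: simply echoes the prompt.
    return prompt

def novelty_bonus(prompt: str, history: list) -> float:
    # Reward prompts whose wording differs from previously tried prompts
    # (1.0 = completely new, 0.0 = identical to a past prompt).
    words = set(prompt.split())
    if not history:
        return 1.0
    overlaps = [len(words & set(h.split())) / max(len(words | set(h.split())), 1)
                for h in history]
    return 1.0 - max(overlaps)

def red_team_step(candidates, history):
    # Pick the candidate maximizing toxicity + novelty: the combined
    # reward pushes toward toxic AND previously unseen prompts.
    scored = [(toxicity_score(target_chatbot(p)) + novelty_bonus(p, history), p)
              for p in candidates]
    best = max(scored)[1]
    history.append(best)
    return best

history = []
pool = ["say an insult", "say an insult", "make a threat now", "hello there"]
first = red_team_step(pool, history)
second = red_team_step(pool, history)
```

Note how the novelty term changes the second pick: once "say an insult" is in the history, its bonus collapses and a different toxic prompt wins — that is what drives the diversity the researchers report.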
Murf AI
Introducing Murf, the most versatile AI text-to-speech generator. Create studio-quality voice overs in minutes using lifelike AI voices suitable for podcasts, videos, presentations, and more. Choose from 120+ text-to-speech voices in 20+ languages, with the ability to add video, music, or images and sync them to the voiceover.
Brain-inspired computing through hybrid neural networks
Hybrid neural networks (HNNs) combine computer-science-oriented models (such as deep artificial neural networks) with neuroscience-oriented models (such as spiking neural networks), improving flexibility and generality in supporting advanced intelligence. By addressing the challenge of connecting these heterogeneous networks through a unified design framework, HNNs improve performance and provide a foundation for further advances in brain-inspired computing. Current applications of HNNs include target tracking, speech recognition, and decision control, showcasing their potential across intelligent tasks. The development of supporting systems, such as brain-inspired chips and software frameworks tailored to HNN applications, is also crucial for deploying these networks efficiently.
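The heterogeneous-connection problem can be made concrete with a minimal, illustrative sketch (not the framework from the research): an artificial-neuron layer feeds a leaky integrate-and-fire spiking neuron, with the ANN activation treated as injected current — a simple rate-to-current coding assumption at the interface:

```python
import math

def ann_layer(x, weights, bias):
    # Conventional artificial neuron: weighted sum + sigmoid activation.
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def lif_spikes(current, steps=50, tau=10.0, threshold=1.0):
    # Leaky integrate-and-fire neuron driven by a constant input current.
    # Returns the number of spikes emitted over `steps` time steps.
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += (-v + current * 2.0) / tau   # leaky integration
        if v >= threshold:
            spikes += 1
            v = 0.0                       # reset after spiking
    return spikes

def hybrid_forward(x, weights, bias):
    # Interface between the two model families: the continuous ANN
    # activation becomes the injected current of the spiking neuron.
    rate = ann_layer(x, weights, bias)
    return lif_spikes(rate)
```

A stronger ANN activation yields a higher spike count, while a weak one may never cross threshold — translating between continuous activations and discrete spike trains like this is exactly the kind of interface a unified HNN design framework has to standardize.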
The words you use matter with ChatGPT
Researchers investigated how small changes in prompts can impact the accuracy of responses from large language models like ChatGPT. Testing various prompt variations across 11 text classification tasks, they found that even subtle adjustments, such as adding spaces or greetings, can significantly influence the models' predictions. While specific output formats and incentives did improve accuracy somewhat, no single method worked best across all tasks, with the "No Specified Format" prompt achieving the highest overall accuracy.
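The kind of perturbations tested can be illustrated by generating formatting variants of one classification prompt. The variant names and templates here are illustrative, not the paper's exact set:

```python
def prompt_variants(task: str, text: str) -> dict:
    # Small formatting perturbations of the same underlying request,
    # in the spirit of the study's prompt variations.
    base = f"{task}\nText: {text}\nAnswer:"
    return {
        "base": base,
        "trailing_space": base + " ",               # added space
        "greeting": "Hello! " + base,               # greeting prefix
        "thanks": base + "\nThank you!",            # politeness suffix
        "json_format": base + " Respond in JSON.",  # specified output format
    }

variants = prompt_variants(
    "Classify the sentiment as positive or negative.",
    "I loved this movie.",
)
```

In the study's setup, each such variant would be sent to the model and per-variant accuracy compared over the classification tasks; the finding is that even the trivial edits above can shift a model's predictions.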
Kits AI
Kits.AI is an AI platform designed specifically for modern creators in the music industry. It offers a variety of powerful tools to generate, customize, and share artificial voices. Users can alter their voices using a library of licensed AI voice models of famous artists. Its advanced AI engine can generate melodies and harmonies, and even suggest instrumentation based on user inputs. This unique blend of AI voice generation and training tools enables musicians and producers to turn their inspiration into reality.
MiniCPM - a new SLM that outperforms LLMs
Small Language Models (SLMs) are being explored as a cost-effective alternative to developing LLMs with trillions of parameters. Recent models like the Phi series, TinyLlama, MobileLLM, and Gemma have advanced the field of SLMs, but they still struggle to replicate the comprehensive abilities of LLMs and to establish transparent, scalable training methods that benefit both. Researchers from Tsinghua University introduce MiniCPM, a family of SLMs with 1.2B and 2.4B non-embedding parameters that rival much larger LLMs in performance. MiniCPM has shown promising results, outperforming larger models on various tasks, and its training recipe could also inform how future LLMs are developed.