Artificial intelligence has the potential to improve the analysis of medical image data. For example, deep-learning algorithms can determine the location and size of tumors. This is one result of AutoPET, an international competition in medical image analysis. The seven best AutoPET teams report on how algorithms can detect tumor lesions in positron emission tomography (PET) and computed tomography (CT).
Using this model, researchers may be able to identify antibody drugs that can target a variety of infectious diseases.
Associate Professor Matteo Bucci’s research sheds new light on an ancient process, with the aim of improving the efficiency of heat transfer in many industrial systems.
A research team introduces Automated Search for Artificial Life (ASAL), a novel framework that leverages vision-language foundation models to automate and enhance discovery in artificial life (ALife) research.
The post Automating Artificial Life Discovery: The Power of Foundation Models first appeared on Synced.
Researchers from the University of Texas at Austin and NVIDIA propose an upcycling approach, an innovative training recipe that enables the development of an 8-Expert Top-2 MoE model from Llama 3-8B with less than 1% of the compute typically required for pre-training.
The post Llama 3 Meets MoE: Pioneering Low-Cost High-Performance AI first appeared on Synced.
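The core idea of upcycling, as it is usually described, is to seed every expert of a new MoE layer with a copy of the pre-trained dense feed-forward weights and add a freshly initialized router. The sketch below illustrates that idea only; the dimensions, the single-matrix "FFN", and the routing details are simplifying assumptions, not the paper's actual recipe.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 16, 8, 2  # toy sizes, not Llama 3-8B's

# "Upcycling" (simplified): every expert starts as a copy of the
# pre-trained dense layer; the router is new and near-zero.
dense_w = rng.normal(size=(d, d))
experts = [dense_w.copy() for _ in range(n_experts)]
router = rng.normal(size=(n_experts, d)) * 0.01

def moe_forward(x):
    logits = router @ x
    top = np.argsort(logits)[-top_k:]                  # top-2 experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                               # softmax over the chosen two
    return sum(g * (experts[i] @ x) for g, i in zip(gates, top))

x = rng.normal(size=d)
y = moe_forward(x)
# Immediately after upcycling, all experts are identical and the gates
# sum to 1, so the MoE reproduces the dense layer's output exactly.
print(np.allclose(y, dense_w @ x))
```

This equivalence at initialization is what makes the recipe cheap: training starts from a model that already behaves like the dense one, and the experts only then diverge.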
Bio-inspired wind sensing using strain sensors on flexible wings could revolutionize robotic flight control strategy. Researchers have developed a method to detect wind direction with 99% accuracy using seven strain gauges on the flapping wing and a convolutional neural network model. This breakthrough, inspired by natural strain receptors in birds and insects, opens up new possibilities for improving the control and adaptability of flapping-wing aerial robots in varying wind conditions.
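The described pipeline maps seven strain-gauge time series to a wind-direction class with a convolutional neural network. The toy sketch below shows that shape of computation with a tiny 1-D CNN forward pass; the layer sizes, eight direction bins, and random untrained weights are all assumptions for illustration, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical setup: 7 strain channels, 64 samples per flapping cycle,
# classified into 8 wind-direction bins.
n_channels, n_steps, n_filters, kernel, n_directions = 7, 64, 4, 5, 8

params = {
    "w1": rng.normal(size=(n_filters, n_channels, kernel)) * 0.1,
    "b1": np.zeros(n_filters),
    "w2": rng.normal(size=(n_directions, n_filters)) * 0.1,
    "b2": np.zeros(n_directions),
}

def conv1d(x, w, b):
    """Valid 1-D convolution: x is (channels, time), w is (out, in, kernel)."""
    out_ch, _, k = w.shape
    t_out = x.shape[1] - k + 1
    y = np.empty((out_ch, t_out))
    for o in range(out_ch):
        for t in range(t_out):
            y[o, t] = np.sum(w[o] * x[:, t:t + k]) + b[o]
    return y

def predict_direction(strains, params):
    h = np.maximum(conv1d(strains, params["w1"], params["b1"]), 0)  # ReLU
    pooled = h.mean(axis=1)                       # global average pooling
    logits = params["w2"] @ pooled + params["b2"]
    return int(np.argmax(logits))                 # predicted direction bin

strains = rng.normal(size=(n_channels, n_steps))  # one simulated reading
direction = predict_direction(strains, params)
```

In practice the reported 99% accuracy comes from training such a network on labeled strain recordings; this untrained sketch only fixes the input/output contract.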
A DeepMind research team introduces JetFormer, a Transformer designed to directly model raw data. This model maximizes the likelihood of raw data without depending on any pre-trained components, and is capable of both understanding and generating text and images seamlessly.
The post DeepMind’s JetFormer: Unified Multimodal Models Without Modelling Constraints first appeared on Synced.
An NVIDIA research team proposes the normalized Transformer, which consolidates key findings in Transformer research under a unified framework, offering faster learning and reduced training steps—by factors ranging from 4 to 20 depending on sequence length.
The post NVIDIA’s nGPT: Revolutionizing Transformers with Hypersphere Representation first appeared on Synced.
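The post title's "hypersphere representation" refers to keeping vectors at unit L2 norm so they live on the surface of a hypersphere. The snippet below shows only that normalization step on toy hidden states; the dimensions and the idea that it is applied to hidden states here are illustrative assumptions, not nGPT's full scheme.

```python
import numpy as np

def to_hypersphere(x, axis=-1, eps=1e-12):
    """L2-normalize vectors so they lie on the unit hypersphere."""
    norm = np.linalg.norm(x, axis=axis, keepdims=True)
    return x / (norm + eps)

# Toy hidden states: 4 tokens with 8-dimensional representations.
h = np.random.randn(4, 8)
h_unit = to_hypersphere(h)
print(np.linalg.norm(h_unit, axis=-1))  # every row now has (near-)unit norm
```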
Even highly realistic androids can cause unease when their facial expressions lack emotional consistency. Traditionally, a 'patchwork method' has been used for facial movements, but it comes with practical limitations. A team developed a new technology using 'waveform movements' to create real-time, complex expressions without unnatural transitions. This system reflects internal states, enhancing emotional communication between robots and humans, potentially making androids feel more humanlike.
Corvus Robotics, founded by Mohammed Kabir ’21, is using drones that can navigate in GPS-denied environments to expedite inventory management.
Artificial intelligence that is as intelligent as humans may become possible thanks to psychological learning models, combined with certain types of AI.
Researchers developed a laser-based artificial neuron that fully emulates the functions, dynamics and information processing of a biological graded neuron, which could lead to new breakthroughs in advanced computing. With a processing speed a billion times faster than nature, the chip-based laser neuron could help advance AI tasks such as pattern recognition and sequence prediction.
MIT engineers developed AI frameworks to identify evidence-driven hypotheses that could advance biologically inspired materials.
An electronic stacking technique could exponentially increase the number of transistors on chips, enabling more efficient AI hardware.
A research team at Meta introduces the Large Concept Model (LCM), a novel architecture that processes input at a higher semantic level. This shift allows the LCM to achieve remarkable zero-shot generalization across languages, outperforming existing LLMs of comparable size.
The post From Token to Conceptual: Meta introduces Large Concept Models in Multilingual AI first appeared on Synced.
Our comprehensive benchmark and online leaderboard offer a much-needed measure of how accurately LLMs ground their responses in provided source material and avoid hallucinations.
With models like AlphaFold3 limited to academic research, the team built an equivalent alternative to encourage innovation more broadly.
We’re rolling out a new, state-of-the-art video model, Veo 2, and updates to Imagen 3. Plus, check out our new experiment, Whisk.