AI

Use Llama 3.1 405B to generate synthetic data for fine-tuning tasks

Today, we are excited to announce the availability of the Llama 3.1 405B model on Amazon SageMaker JumpStart, and in preview on Amazon Bedrock. The Llama 3.1 models are a collection of state-of-the-art pre-trained and instruction-tuned generative artificial intelligence (AI) models in 8B, 70B, and 405B sizes. Amazon SageMaker JumpStart is a machine learning (ML) […]

Llama 3.1 models are now available in Amazon SageMaker JumpStart

Today, we are excited to announce that the state-of-the-art Llama 3.1 collection of multilingual large language models (LLMs), which includes pre-trained and instruction-tuned generative AI models in 8B, 70B, and 405B sizes, is available through Amazon SageMaker JumpStart to deploy for inference. Llama is a publicly accessible LLM designed for developers, researchers, and businesses […]

Proton-conducting materials could enable new green energy technologies

As the name suggests, most electronic devices today work through the movement of electrons. But materials that can efficiently conduct protons — the nucleus of the hydrogen atom — could be key to a number of important technologies for combating global climate change. Most proton-conducting inorganic materials available now require undesirably high temperatures to achieve […]

Large language models don’t behave like people, even though we may expect them to

One thing that makes large language models (LLMs) so powerful is the diversity of tasks to which they can be applied. The same machine-learning model that can help a graduate student draft an email could also aid a clinician in diagnosing cancer. However, the wide applicability of these models also makes them challenging to evaluate […]
