AI

Defending against Prompt Injection with Structured Queries (StruQ) and Preference Optimization (SecAlign)

Recent advances in Large Language Models (LLMs) enable exciting LLM-integrated applications. However, as LLMs have improved, so have the attacks against them. Prompt injection attack is listed as the #1 threat by OWASP to LLM-integrated applications, where an LLM input contains a trusted prompt (instruction) and an untrusted data. The data may contain injected instructions

Defending against Prompt Injection with Structured Queries (StruQ) and Preference Optimization (SecAlign) Read More »

New method efficiently safeguards sensitive AI training data

Data privacy comes with a cost. There are security techniques that protect sensitive user data, like customer addresses, from attackers who may attempt to extract them from AI models — but they often make those models less accurate. MIT researchers recently developed a framework, based on a new privacy metric called PAC Privacy, that could

New method efficiently safeguards sensitive AI training data Read More »

Reduce ML training costs with Amazon SageMaker HyperPod

Training a frontier model is highly compute-intensive, requiring a distributed system of hundreds, or thousands, of accelerated instances running for several weeks or months to complete a single job. For example, pre-training the Llama 3 70B model with 15 trillion training tokens took 6.5 million H100 GPU hours. On 256 Amazon EC2 P5 instances (p5.48xlarge,

Reduce ML training costs with Amazon SageMaker HyperPod Read More »

Model customization, RAG, or both: A case study with Amazon Nova

As businesses and developers increasingly seek to optimize their language models for specific tasks, the decision between model customization and Retrieval Augmented Generation (RAG) becomes critical. In this post, we seek to address this growing need by offering clear, actionable guidelines and best practices on when to use each approach, helping you make informed decisions

Model customization, RAG, or both: A case study with Amazon Nova Read More »

Generate user-personalized communication with Amazon Personalize and Amazon Bedrock

Today, businesses are using AI and generative models to improve productivity in their teams and provide better experiences to their customers. Personalized outbound communication can be a powerful tool to increase user engagement and conversion. For instance, as a marketing manager for a video-on-demand company, you might want to send personalized email messages tailored to

Generate user-personalized communication with Amazon Personalize and Amazon Bedrock Read More »

Automating regulatory compliance: A multi-agent solution using Amazon Bedrock and CrewAI

Financial institutions today face an increasingly complex regulatory world that demands robust, efficient compliance mechanisms. Although organizations traditionally invest countless hours reviewing regulations such as the Anti-Money Laundering (AML) rules and the Bank Secrecy Act (BSA), modern AI solutions offer a transformative approach to this challenge. By using Amazon Bedrock Knowledge Bases alongside CrewAI—an open

Automating regulatory compliance: A multi-agent solution using Amazon Bedrock and CrewAI Read More »

Implement human-in-the-loop confirmation with Amazon Bedrock Agents

Agents are revolutionizing how businesses automate complex workflows and decision-making processes. Amazon Bedrock Agents helps you accelerate generative AI application development by orchestrating multi-step tasks. Agents use the reasoning capability of foundation models (FMs) to break down user-requested tasks into multiple steps. In addition, they use the developer-provided instruction to create an orchestration plan and

Implement human-in-the-loop confirmation with Amazon Bedrock Agents Read More »