AI

Empower your generative AI application with a comprehensive custom observability solution

Recently, we’ve been witnessing the rapid development and evolution of generative AI applications, with observability and evaluation emerging as critical aspects for developers, data scientists, and stakeholders. Observability refers to the ability to understand the internal state and behavior of a system by analyzing its outputs, logs, and metrics. Evaluation, on the other hand, involves […]


Automate Amazon Bedrock batch inference: Building a scalable and efficient pipeline

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and […]
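As the excerpt notes, Bedrock exposes these foundation models through a single API. As a minimal sketch (the helper function and model ID below are illustrative assumptions, not taken from the post), building an InvokeModel request body for an Anthropic model via the Messages format might look like:

```python
import json

def build_claude_request(prompt: str, max_tokens: int = 512) -> str:
    """Build the JSON body for invoking an Anthropic Claude model
    through the Amazon Bedrock InvokeModel API (Messages format)."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

# With AWS credentials configured, the same body is sent through the
# single Bedrock runtime API regardless of which provider's model is used:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(
#       modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
#       body=build_claude_request("Summarize this meeting transcript."),
#   )
```

Swapping providers means changing the `modelId` and the provider-specific body schema; the call itself stays the same, which is the "single API" the excerpt refers to.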


MIT Schwarzman College of Computing launches postdoctoral program to advance AI across disciplines

The MIT Stephen A. Schwarzman College of Computing has announced the launch of a new program to support postdocs conducting research at the intersection of artificial intelligence and particular disciplines. The Tayebati Postdoctoral Fellowship Program will focus on AI for addressing the most challenging problems in select scientific research areas, and on AI for music […]


Build a video insights and summarization engine using generative AI with Amazon Bedrock

Professionals in a wide variety of industries have adopted digital video conferencing tools as part of their regular meetings with suppliers, colleagues, and customers. These meetings often involve exchanging information and discussing actions that one or more parties must take after the session. The traditional way to make sure information and actions aren’t forgotten is […]


Automate document processing with Amazon Bedrock Prompt Flows (preview)

Enterprises in industries like manufacturing, finance, and healthcare are inundated with a constant flow of documents—from financial reports and contracts to patient records and supply chain documents. Historically, processing and extracting insights from these unstructured data sources has been a manual, time-consuming, and error-prone task. However, the rise of intelligent document processing (IDP), which uses […]


Governing the ML lifecycle at scale: Centralized observability with Amazon SageMaker and Amazon CloudWatch

This post is part of an ongoing series on governing the machine learning (ML) lifecycle at scale. To start from the beginning, refer to Governing the ML lifecycle at scale, Part 1: A framework for architecting ML workloads using Amazon SageMaker. A multi-account strategy is essential not only for improving governance but also for enhancing […]


Import data from Google Cloud Platform BigQuery for no-code machine learning with Amazon SageMaker Canvas

In the modern, cloud-centric business landscape, data is often scattered across numerous clouds and on-site systems. This fragmentation can complicate efforts by organizations to consolidate and analyze data for their machine learning (ML) initiatives. This post presents an architectural approach to extract data from different cloud environments, such as Google Cloud Platform (GCP) BigQuery, without […]


Customized model monitoring for near real-time batch inference with Amazon SageMaker

Real-world applications have varying inference requirements for their artificial intelligence and machine learning (AI/ML) solutions in order to optimize performance and reduce costs. Examples include financial systems processing transaction data streams, recommendation engines processing user activity data, and computer vision models processing video frames. In these scenarios, customized model monitoring for near real-time batch inference with Amazon […]
