AI

MIT students’ works redefine human-AI collaboration

Imagine a boombox that tracks your every move and suggests music to match your personal dance style. That’s the idea behind “Be the Beat,” one of several projects from MIT course 4.043/4.044 (Interaction Intelligence), taught by Marcelo Coelho in the Department of Architecture, that were presented at the 38th annual NeurIPS (Neural Information Processing Systems) […]


New training approach could help AI agents perform better in uncertain conditions

A home robot trained to perform household tasks in a factory may fail to effectively scrub the sink or take out the trash when deployed in a user’s kitchen, since this new environment differs from its training space. To avoid this, engineers often try to match the simulated training environment as closely as possible with […]

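One common way engineers narrow this kind of simulation-to-reality gap is domain randomization: perturbing the simulator's parameters on every training episode so the agent never overfits to a single fixed environment. The sketch below is a minimal, generic illustration of that idea, not the specific training approach from the article; the parameter names are hypothetical.

```python
import random

def randomize_environment(base_params, spread=0.2, rng=None):
    """Perturb each simulation parameter by up to ±spread (relative),
    so the agent trains across a range of conditions rather than one
    fixed environment."""
    rng = rng or random.Random()
    return {
        name: value * (1.0 + rng.uniform(-spread, spread))
        for name, value in base_params.items()
    }

# Hypothetical physics parameters for a kitchen-cleaning simulator.
base = {"friction": 0.6, "counter_height_m": 0.9, "sensor_noise": 0.01}

# Each training episode sees a slightly different environment.
episodes = [randomize_environment(base, spread=0.2, rng=random.Random(seed))
            for seed in range(1000)]
```

An agent trained across `episodes` is more likely to tolerate a kitchen whose friction, counter height, or sensor characteristics differ from the factory's defaults.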

Develop a RAG-based application using Amazon Aurora with Amazon Kendra

Generative AI and large language models (LLMs) are helping organizations across diverse sectors enhance customer experience at a pace that would traditionally take years. Every organization has data stored in data stores, either on premises or with cloud providers. You can embrace generative AI and enhance customer experience by converting your existing data into […]

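A Retrieval Augmented Generation (RAG) flow over indexed data follows two steps: retrieve relevant passages, then ask an LLM to answer using only those passages. Below is a minimal sketch of that pattern with Amazon Kendra for retrieval and Amazon Bedrock for generation; the index ID and model ID are placeholders you would supply, and it assumes AWS credentials are configured.

```python
def build_prompt(passages, question):
    """Assemble retrieved passages into a grounded prompt for the LLM."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

def answer(question, index_id, model_id):
    # boto3 is imported here so build_prompt stays usable without AWS installed.
    import boto3

    # Retrieve relevant passages from the Kendra index over your existing data.
    kendra = boto3.client("kendra")
    result = kendra.retrieve(IndexId=index_id, QueryText=question)
    passages = [item["Content"] for item in result["ResultItems"][:5]]

    # Generate a grounded answer with an Amazon Bedrock model.
    bedrock = boto3.client("bedrock-runtime")
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user",
                   "content": [{"text": build_prompt(passages, question)}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```

Keeping `build_prompt` pure makes the grounding step easy to unit-test independently of the AWS calls.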

Optimizing AI responsiveness: A practical guide to Amazon Bedrock latency-optimized inference

In production generative AI applications, responsiveness is just as important as the intelligence behind the model. Whether it’s customer service teams handling time-sensitive inquiries or developers needing instant code suggestions, every second of delay, known as latency, can have a significant impact. As businesses increasingly use large language models (LLMs) for these critical tasks and […]

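Measuring latency is the first step toward optimizing it. The sketch below times a Bedrock `Converse` call and summarizes samples with a nearest-rank percentile helper; the `performanceConfig={"latency": "optimized"}` argument is, to the best of my knowledge, how latency-optimized inference is requested, but treat it as an assumption to verify against the current Bedrock API reference. Model ID and prompt are placeholders.

```python
import math
import time

def p_latency(samples_ms, pct):
    """Return the given percentile (e.g. 50, 95) of latency samples in
    milliseconds, using the nearest-rank method."""
    ordered = sorted(samples_ms)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]

def timed_converse(model_id, prompt):
    # boto3 is imported here so p_latency stays usable without AWS installed.
    import boto3
    client = boto3.client("bedrock-runtime")
    start = time.perf_counter()
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        # Assumed syntax for requesting the latency-optimized model variant.
        performanceConfig={"latency": "optimized"},
    )
    elapsed_ms = (time.perf_counter() - start) * 1000
    return response["output"]["message"]["content"][0]["text"], elapsed_ms
```

Comparing p50 and p95 between standard and optimized runs, rather than single calls, gives a fairer picture of the improvement.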

Track LLM model evaluation using Amazon SageMaker managed MLflow and FMEval

Evaluating large language models (LLMs) is crucial as LLM-based systems become increasingly powerful and relevant in our society. Rigorous testing allows us to understand an LLM’s capabilities, limitations, and potential biases, and provide actionable feedback to identify and mitigate risk. Furthermore, evaluation processes are important not only for LLMs, but are becoming essential for assessing […]

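The core loop of tracked evaluation is: score model outputs against references, then log the score and run metadata to an experiment tracker so results are comparable across runs. The sketch below uses a simple exact-match score as a stand-in for the richer algorithms FMEval provides (its actual API is not reproduced here), logged with MLflow's tracking API.

```python
def exact_match_rate(predictions, references):
    """Fraction of predictions that exactly match the reference answer
    (case-insensitive) -- a simple stand-in for an FMEval-style score."""
    matches = sum(p.strip().lower() == r.strip().lower()
                  for p, r in zip(predictions, references))
    return matches / len(predictions)

def log_eval(run_name, predictions, references):
    # mlflow is imported here so the metric helper stays usable without it.
    import mlflow
    score = exact_match_rate(predictions, references)
    with mlflow.start_run(run_name=run_name):
        mlflow.log_param("num_examples", len(predictions))
        mlflow.log_metric("exact_match_rate", score)
    return score
```

Logging every evaluation run this way lets you line up scores across model versions, prompts, and datasets in the MLflow UI instead of comparing ad hoc spreadsheets.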