
Reasoning skills of large language models are often overestimated

When it comes to artificial intelligence, appearances can be deceiving. The mystery surrounding the inner workings of large language models (LLMs) stems from their vast size, complex training methods, hard-to-predict behaviors, and elusive interpretability. MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers recently peered through the proverbial magnifying glass to examine how LLMs fare


MIT ARCLab announces winners of inaugural Prize for AI Innovation in Space

Satellite density in Earth’s orbit has increased exponentially in recent years, with the lower cost of small satellites allowing governments, researchers, and private companies to launch and operate some 2,877 satellites in 2023 alone. This includes increased geostationary Earth orbit (GEO) satellite activity, which brings technologies with global-scale impact, from broadband internet to climate


Using Agents for Amazon Bedrock to interactively generate infrastructure as code

In the diverse toolkit available for deploying cloud infrastructure, Agents for Amazon Bedrock offers a practical and innovative option for teams looking to enhance their infrastructure as code (IaC) processes. Agents for Amazon Bedrock automates the prompt engineering and orchestration of user-requested tasks. After being configured, an agent builds the prompt and augments it with
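
As a rough illustration of that invocation flow, the sketch below calls an already-configured agent through the Bedrock Agents runtime API and streams back its generated response (for example, a CloudFormation or Terraform snippet). The agent ID, alias ID, and prompt are placeholders, not values from the post.

```python
import uuid
import boto3

# Runtime client for invoking a Bedrock agent that has already been configured
# (instructions, action groups, and knowledge bases are set up separately).
client = boto3.client("bedrock-agent-runtime")

def ask_agent(prompt: str, agent_id: str, agent_alias_id: str) -> str:
    """Send a natural-language IaC request to the agent and collect its streamed reply."""
    response = client.invoke_agent(
        agentId=agent_id,              # placeholder agent ID
        agentAliasId=agent_alias_id,   # placeholder alias ID
        sessionId=str(uuid.uuid4()),   # new session; reuse an ID to continue a conversation
        inputText=prompt,
    )
    # The reply arrives as an event stream of chunks; concatenate the text parts.
    parts = []
    for event in response["completion"]:
        if "chunk" in event:
            parts.append(event["chunk"]["bytes"].decode("utf-8"))
    return "".join(parts)

print(ask_agent(
    "Generate a Terraform module for a private S3 bucket with versioning enabled.",
    agent_id="AGENT_ID_PLACEHOLDER",
    agent_alias_id="AGENT_ALIAS_ID_PLACEHOLDER",
))
```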


Improve RAG accuracy with fine-tuned embedding models on Amazon SageMaker

Retrieval Augmented Generation (RAG) is a popular paradigm that provides additional knowledge to large language models (LLMs) from an external source of data that wasn’t present in their training corpus. RAG provides additional knowledge to the LLM through its input prompt space, and its architecture typically consists of the following components: Indexing: Prepare a corpus
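
To make the indexing, retrieval, and augmentation components concrete, here is a minimal sketch of the RAG pattern. It uses a public sentence-transformers model as a stand-in for the fine-tuned embedding model hosted on SageMaker; the model name, corpus, and prompt template are illustrative, not taken from the post.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-in embedding model; in the post's setup this would be a model
# fine-tuned on domain data and hosted on Amazon SageMaker.
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Indexing: embed the document corpus once and keep the vectors.
corpus = [
    "Amazon SageMaker is a managed service for building and training ML models.",
    "Retrieval Augmented Generation grounds LLM answers in external documents.",
    "Amazon Bedrock provides access to foundation models through an API.",
]
corpus_vectors = embedder.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Retrieval: return the k corpus passages most similar to the query."""
    query_vector = embedder.encode([query], normalize_embeddings=True)[0]
    scores = corpus_vectors @ query_vector  # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

# Augmentation: place the retrieved passages into the LLM's input prompt.
question = "What does RAG add to an LLM?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)
```

Fine-tuning the embedding model on domain question-passage pairs improves the retrieval step above, which is the accuracy lever the post focuses on.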


How BRIA AI used distributed training in Amazon SageMaker to train latent diffusion foundation models for commercial use

This post is co-written with Bar Fingerman from BRIA AI. It explains how BRIA AI quickly and economically trained BRIA AI 2.0, a high-resolution (1024×1024) text-to-image diffusion model, on a dataset comprising petabytes of licensed images. Amazon SageMaker training jobs and Amazon SageMaker distributed training libraries took on the undifferentiated heavy lifting associated with infrastructure
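
As a rough sketch of how a distributed SageMaker training job of this kind is launched, the snippet below uses the SageMaker Python SDK with the SageMaker distributed data parallel library. The entry point, instance types, counts, role, and hyperparameters are placeholders, not BRIA AI's actual configuration.

```python
from sagemaker.pytorch import PyTorch

# Placeholder values; not BRIA AI's actual configuration.
estimator = PyTorch(
    entry_point="train.py",              # diffusion-model training script
    source_dir="src",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    framework_version="2.1",
    py_version="py310",
    instance_type="ml.p4d.24xlarge",     # 8 x A100 GPUs per instance
    instance_count=4,                    # scale out across instances
    # SageMaker's distributed data parallel library handles gradient
    # all-reduce across GPUs and nodes.
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
    hyperparameters={"resolution": 1024, "batch_size": 8},
)

# Training data is streamed from Amazon S3 into the managed training job.
estimator.fit({"train": "s3://your-bucket/licensed-image-dataset/"})
```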


Create custom images for geospatial analysis with Amazon SageMaker Distribution in Amazon SageMaker Studio

Amazon SageMaker Studio provides a comprehensive suite of fully managed integrated development environments (IDEs) for machine learning (ML), including JupyterLab, Code Editor (based on Code-OSS), and RStudio. It supports all stages of ML development, from data preparation to deployment, and allows you to launch a preconfigured JupyterLab IDE for efficient coding within seconds. Additionally, its flexible
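
As a minimal sketch of the registration step, the snippet below takes a custom image built on a SageMaker Distribution base (with geospatial libraries added) that has already been pushed to Amazon ECR, and registers it with SageMaker so it can back JupyterLab apps in Studio. The image name, ECR URI, and role ARN are placeholders.

```python
import boto3

sm = boto3.client("sagemaker")

# Placeholder names and ARNs; the image itself would be built from a
# SageMaker Distribution base image and pushed to ECR beforehand.
image_name = "geospatial-sagemaker-distribution"
ecr_image_uri = "123456789012.dkr.ecr.us-east-1.amazonaws.com/geospatial-sm-dist:latest"
execution_role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"

# 1. Register the custom image with SageMaker.
sm.create_image(ImageName=image_name, RoleArn=execution_role)

# 2. Point an image version at the container pushed to ECR.
sm.create_image_version(ImageName=image_name, BaseImage=ecr_image_uri)

# 3. Create an app image config so the image can be used by JupyterLab apps.
sm.create_app_image_config(
    AppImageConfigName=f"{image_name}-config",
    JupyterLabAppImageConfig={},  # defaults suit a SageMaker Distribution-based image
)

# The image and config are then attached to the Studio domain (via the console
# or update_domain) so users can select the image when launching JupyterLab.
```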


Projects – KCON Your Way

Experiential: Entertainment
Client: AT&T
BACKGROUND & OBJECTIVE
AT&T wanted to showcase its strength not only as a wireless carrier but also as a TV entertainment company. Using the motto “Entertainment. Your Way”, the brand sought to leverage entertainment-focused opportunities to connect with 1.5- and 2nd-generation Asian American millennials in a more meaningful way.
STRATEGY & TACTICS
TEN strategically
