Migrating an embedded BI platform is one of those projects that sounds straightforward and absolutely is not. You're replacing the analytics interface that your customers use daily (the dashboards, the filters, the reports and schedules they've built workflows around) with something that looks different, behaves differently, and had better return the same numbers.

We migrated customers from Looker to Omni. I owned the complete architecture of the new Omni environment: configuring connections and access controls, designing the topic and model structure, and collaborating with platform engineering on the embedding implementation. Here's what that actually involved.

Why we moved

This was the second migration in a sequence. Earlier in the year, we'd migrated our data warehouse from BigQuery to ClickHouse. Looker worked with ClickHouse, but we were running into limitations: the aggregate awareness issues I've written about elsewhere, plus broader concerns about Looker's trajectory as a product and how well it served our embedded use case.

Omni offered a better fit for our embedded BI needs, and moving to it was the right long-term call. But "right long-term call" and "smooth short-term execution" are different things when customers are relying on the current system every day.

The approach

The first decision that mattered was ruling out a hard cutover. Customers weren't going to tolerate waking up one morning to a completely new analytics interface. We worked with engineering to implement feature flagging that let both Looker and Omni coexist during the transition. Customer content could be migrated in groups, and if something broke, they still had Looker accessible without affecting anyone else.
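The routing logic behind that coexistence can be sketched roughly like this. This is a minimal illustration, not our actual implementation: the flag name, types, and URLs are hypothetical, and the real integration handled far more (sessions, permissions, content mapping).

```typescript
// Hypothetical sketch: route each customer to Looker or Omni based on a
// per-customer migration flag, with rollback being a single flag flip.
type BiPlatform = "looker" | "omni";

interface Customer {
  id: string;
  omniEnabled: boolean; // the migration feature flag (illustrative name)
}

function resolvePlatform(customer: Customer): BiPlatform {
  // Rolling a customer back means unsetting their flag; nobody else
  // is affected, which turns a crisis into a queue.
  return customer.omniEnabled ? "omni" : "looker";
}

function embedUrlFor(customer: Customer): string {
  // Placeholder URLs; real embed URLs are signed and customer-scoped.
  return resolvePlatform(customer) === "omni"
    ? `https://example.omniapp.co/embed/dashboards?customer=${customer.id}`
    : `https://example.looker.com/embed/dashboards?customer=${customer.id}`;
}
```

The useful property is that the flag check happens per customer per request, so migration and rollback are both just data changes, not deploys.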

For the model migration itself, I recognized early that a manual field-by-field approach wouldn't meet our timeline. Our LookML semantic layer was extensive. I used Cursor with a multi-root workspace configuration to build AI-assisted conversion rule templates that could migrate LookML definitions to Omni's YAML format at scale. This was before Claude released its Skills feature; Cursor Rules were a precursor, defining repeatable patterns and workflows. This wasn't just search-and-replace (LookML and Omni YAML have different structures and conventions), but the AI-assisted approach using Cursor Rules let me handle the systematic parts quickly and focus my time on the judgment calls.
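To make the "systematic parts" concrete, here is a toy version of one conversion rule. The real migration was driven by Cursor Rules rather than a one-off script, and both LookML and Omni's YAML schema are far richer than this; the field shapes and output keys below are assumptions for illustration only.

```typescript
// Illustrative conversion rule: one simplified LookML dimension
// mapped to Omni-style YAML lines. Not Omni's exact schema.
interface LookmlDimension {
  name: string;
  type: string;   // e.g. "string", "number"
  sql: string;    // e.g. "${TABLE}.status"
  label?: string;
}

function dimensionToOmniYaml(dim: LookmlDimension): string {
  // Strip LookML's ${TABLE} reference; assume the Omni-side table
  // context is implicit (a simplification for this sketch).
  const sql = dim.sql.replace(/\$\{TABLE\}\./g, "");
  const lines = [`${dim.name}:`, `  sql: "${sql}"`];
  if (dim.label) lines.push(`  label: ${dim.label}`);
  return lines.join("\n");
}
```

Rules like this cover the mechanical 80 percent; the remaining fields (derived tables, liquid conditionals, aggregate definitions) were the judgment calls that needed a human.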

The cross-functional stretch

The embedding work required me to develop skills I didn't have entering the year. Our platform engineering team was building the embedding integration in TypeScript and Ruby, including session token creation, user attribute passing, and access control logic. To meaningfully review their work and identify data-layer issues early, I needed to read and understand code in languages I hadn't worked in before. I got fluent enough to spot major problems, ask the right questions, and keep the integration from becoming a game of telephone between the data and engineering teams. That was worth the discomfort.
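The session-token piece of that integration looks roughly like the following. This is a hedged sketch of the general pattern (HMAC-signing a payload of user identity plus attributes), not Omni's actual SSO contract; the function name, payload fields, and token format are all assumptions.

```typescript
import { createHmac } from "node:crypto";

// Illustrative embed signing: the attributes carried here are what
// drive row-level access control in the downstream model.
interface EmbedUser {
  externalId: string;
  attributes: Record<string, string>; // e.g. { customer_id: "42" }
}

function signEmbedPayload(user: EmbedUser, secret: string): string {
  const payload = JSON.stringify({
    externalId: user.externalId,
    attributes: user.attributes,
  });
  const signature = createHmac("sha256", secret)
    .update(payload)
    .digest("hex");
  // base64 payload + HMAC, so the BI tool can verify it was us.
  return `${Buffer.from(payload).toString("base64")}.${signature}`;
}
```

Reviewing code like this is where data-layer problems surface early: an attribute that never makes it into the payload is invisible in the application but breaks access control in the model.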

Documentation and handoff

One thing I did intentionally on this project was build documentation into the delivery rather than treat it as an afterthought. I created architecture and implementation documentation for the Omni environment, and I coached teammates on supporting updates and customizations to our Omni models going forward. This matters because migrations have a long tail, and a project this size involved a whole team. The initial migration ends, but the model evolves. If only one person understands the architecture, you haven't finished the migration. You've just created a dependency.

What I'd do the same way

The feature flagging decision changed the shape of the whole migration. When you can roll a customer back to Looker without touching anyone else, you stop managing a crisis and start managing a queue. It also gave the team the overlap and time needed to migrate customer content and properly verify it. That's a different cognitive load entirely. The engineering overhead felt like a tax at the time, but it wasn't.

The AI-assisted LookML conversion saved time, but what it actually did was force me to separate the systematic parts of the migration from the judgment calls. Any large conversion has both. Getting clear on which is which before you start tells you where your attention needs to go, and where it doesn't.

I'd get close to the engineering work earlier. Problems at the integration point between the data layer and the application layer don't look like data problems or engineering problems in isolation. They look like nothing until they blow up in production, which is how we discovered we needed an additional set of user attributes set and passed to Omni. The only way to catch them early is to be in both conversations.