Development · Aug 30, 2025 · 7 min read

Lisa Update #001: Rebranding, Team, and New Features

A look at the latest progress on Lisa AI: renaming things, finally having a team, new features in the works, and an event powered by Google

Rodrigo Zúñiga
Updated Aug 30, 2025

Lisa Update #001: Current State of the Project

Having fun on a Friday night
I was having fun on a Friday night when I remembered that it's been a few weeks since the last update about Lisa, and it's time to document some progress. Between product naming decisions, team additions, and the never-ending fight against audio processing latency, there's enough material for a somewhat technical update.

TL;DR 📖

  • Lisa One: the “central hub” no longer sounds like a Wi-Fi router; the name is Apple-inspired, just like "Lisa."
  • Lisa Dots: bye SmartLinks, now a name that actually makes sense (and sounds better).
  • Team: +2. Rubén and Alexander joined for hardware. Still missing design and an “entrepreneurial” profile.
  • Interviews: Doing 10 more, this time to figure out who NOT to build Lisa for.
  • STT/TTS on CPU: Playing with a near real-time voice pipeline without a GPU. Upcoming posts with graphs and benchmarks for the curious, plus an eventual GPU migration.
  • AI Salon Lima: Team's social debut at a Google-backed event.

Rebranding: from descriptive to memorable

We decided to rename the main components of the ecosystem. What used to be the very generic “central hub” is now called Lisa One. The former SmartLinks are now Lisa Dots. I mentioned before that the name Lisa was inspired by the Apple Lisa, and we're sticking with that line of thought… except this time, the goal isn't to repeat THE 80s failure, but to give it some redemption.
The reason is pragmatic: descriptive names like “central hub” work in technical docs but fail in communication with users and stakeholders. Apple didn't call the iPhone a “multifunctional mobile communication device” for a reason. Simplicity in naming reduces cognitive load and makes things easier to remember.
Lisa Dots keep the metaphor of distributed network nodes but with a name that suggests both connectivity (dots linked by lines) and the idea of distributed control points in physical space.

Reinforcements: new team members

After a few weeks of searching for people in hardware, we finally have solid additions that bring real value to the project.
  • Alexander Salluca: Mechatronics Engineering student (8th semester), with experience in automation and electronics.
  • Rubén Meléndez: Also a Mechatronics Engineering student (8th semester), with experience in microcontrollers and additive manufacturing.
Both were my teammates in past university projects, which means we've already been through the trial of “why doesn't this work if it did 5 minutes ago?” Fluent communication is often underrated in technically complex projects.
We're still looking for someone in brand design, maybe someone with finance experience, and probably someone with more of an “entrepreneurial” mindset. We're good at building the architecture, but I'll admit (speaking for myself, not the team) that I lack experience in actually taking a product to market, validating it from different angles, and so on.

Interviews: defining the anti-target

Right now, I'm in a second round of validation interviews as part of the program I'm in (404 Tech Found), this time with a different focus. The first round was about identifying pain points. This one is about the concept of an anti-target: specifically, who I should not be building for.
Doing these interviews, and the “assignment” itself, reminded me of The Innovator's Dilemma by Clayton Christensen, which explains exactly what I'm trying to figure out: knowing who your product is not for is just as important as knowing who it is for. I'm documenting user profiles that could technically benefit from Lisa AI but don't represent the initial target market.
Leading firms' most profitable customers generally don't want, and indeed initially can't use, products based on disruptive technologies
— Clayton M. Christensen, The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail
So far: 4 interviews completed out of the 10 planned.

STT/TTS on CPU: exploring latencies

Here comes the fun part. I've been implementing the speech-to-text and text-to-speech pipeline, and the results are… enlightening?
I managed to get STT/TTS running almost in real time entirely on CPU. Yes, CPU, not GPU. It's like running a marathon in flip-flops: possible, but questionable, especially for a project aiming to be as complex as the one I described in the first post.
I won't share detailed numbers or graphs yet, but early results are promising, even more so if we consider migrating to GPU. Latency is acceptable for casual interactions, though not great for applications needing instant response, and that's before adding the “thinking” time or the tool calling the system will constantly need.
I'm preparing a dedicated post on this, with charts and detailed benchmarks for the technically curious. I'll go into the current implementation using Whisper with some wild variations involving multithreading, model optimizations, and other CPU performance hacks.
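To make that concrete, here's a minimal sketch of what a CPU-only Whisper setup can look like, using the faster-whisper bindings (CTranslate2 backend) with int8 quantization and a pinned thread count. This is an illustration under assumptions, not our actual pipeline: the model size, thread count, and file name are placeholders.

```python
from faster_whisper import WhisperModel

# CPU-only Whisper: int8 quantization plus a fixed number of threads
# keeps memory and latency manageable without a GPU.
model = WhisperModel("base", device="cpu", compute_type="int8", cpu_threads=4)

# beam_size=1 and VAD filtering trade a little accuracy for speed,
# which matters when the target is near real time.
segments, info = model.transcribe("sample.wav", beam_size=1, vad_filter=True)

for seg in segments:
    print(f"[{seg.start:.2f}s -> {seg.end:.2f}s] {seg.text}")
```

The dedicated post will go into which of these knobs actually move the needle and how the rest of the pipeline (TTS included) fits around them.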
For now, everything is local, no cloud models. The idea is for Lisa AI to work offline in its most basic form, and then scale up with cloud capabilities for heavier tasks.
I'll also prepare a post fully dedicated to CUDA optimization once we jump to GPU. There's a whole performance universe waiting in GPU acceleration that could likely cut latencies to below 200ms.
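For reference, here's roughly how the sketch above would change after a GPU migration, plus a naive end-to-end timing. Again, a hedged illustration rather than a benchmark: the actual numbers depend entirely on hardware, model size, and audio length.

```python
import time

from faster_whisper import WhisperModel

# Same pipeline, but running on CUDA with float16 weights instead of int8 on CPU.
model = WhisperModel("base", device="cuda", compute_type="float16")

start = time.perf_counter()
segments, _ = model.transcribe("sample.wav", beam_size=1)
# transcribe() returns a lazy generator, so consume it before stopping the clock.
text = " ".join(seg.text for seg in segments)
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"{elapsed_ms:.0f} ms -> {text}")
```

The CUDA post will cover what actually changes beyond the constructor arguments: warm-up, memory transfers, and where batching helps.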

Upcoming Technical Deep-dives

Since I've mentioned it three times already, here's a teaser of the upcoming technical posts I'm working on, some of which I already have partially documented:
  • Full architecture of the audio processing pipeline
  • Implementation of the MQTT protocol for Lisa Dots
  • Comparative benchmarks of different STT/TTS models
  • Memory management optimization for local models

AI Salon Lima: mandatory networking

As a team we'll be attending AI Salon Lima (part of the Lima Ecosystem Summit 2025 by Google). It's one of those conferences where you're supposed to go to learn, but in reality it's more about networking and validating whether you're on the right track.

State of the Art

Lisa AI remains technically ambitious and operationally complex. But the critical components are working, the team is growing with competent people, and the technical challenges are the kind we actually enjoy solving.
Next steps: further optimize audio pipeline latency, migrate to GPU, keep validating the product, launch socials, a landing page, get a logo—and the list goes on.
Stay tuned, more updates coming soon.

References

  • Christensen, Clayton M. The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail.

Tags: #lisa #lisa-ai #update #rebranding #team #features #development