Ep. 251 - Lauren Golembiewski, CEO and Co-founder of Voxable, on the Future of Voice and Wearables

Lauren Golembiewski, CEO and Co-founder of Voxable, and Brian Ardinger, Inside Outside Innovation Co-Founder, talk about the future of voice and other wearables, and the challenges of designing for new technologies and applications. For more innovation resources, check out insideoutside.io.
On this week's episode of Inside Outside Innovation, we sit down with Lauren Golembiewski, CEO and Co-founder of Voxable. Lauren and I talk about the future of voice and other wearables and the challenges of designing for new technologies and applications. Let's get started.

Inside Outside Innovation is the podcast to help you rethink, reset, and remix yourself and your organization. Each week, we'll bring you the latest innovators, entrepreneurs, and pioneering businesses, as well as the tools, tactics, and trends you'll need to thrive as a new innovator.

Interview Transcript with Lauren Golembiewski, CEO and Co-founder of Voxable

Brian Ardinger: Welcome to another episode of Inside Outside Innovation. I'm your host, Brian Ardinger. And as always, we have another amazing guest. Today we have Lauren Golembiewski. She is CEO and co-founder of Voxable, an agency that designs and develops chatbots and voice interfaces, based in Austin, Texas, and Pittsburgh, Pennsylvania. So welcome, Lauren.

Lauren Golembiewski: Thanks for having me.

Brian Ardinger: Hey, I'm excited to have you on the show. You and I connected pre-pandemic. I had been reading some of your work in Harvard Business Review. You wrote a couple of articles, one entitled How Wearable AI Will Amplify Human Intelligence, and another, more recently, called Are You Ready for Tech That Connects to Your Brain?

And I thought those articles were so insightful. I want to get your insight into some of the things that you're seeing in new technologies when it comes to wearables, voice, and things like that. So maybe to kick it off, why don't you tell the audience a little bit about how you got started in this field, and tell us more about what Voxable is.

Lauren Golembiewski: Yeah, absolutely. So Voxable is a design platform for teams that want to build better voice and chat apps. We had been consulting in the voice and chat app design and development space, helping companies, large enterprise teams build their voice and chat experiences. 

And then we pivoted to creating this product because we realized that every team, no matter how much they invested in creating a great conversational experience, still had no tool available to efficiently build that experience and define it in a way that created a great user experience for their end customers. So that's what we're doing today.

And we got into the voice space just by tinkering in our own homes. I was mentioning to you before we started the show that I started the business with my husband who's a software engineer and my background is in product design. And we basically, as soon as, you know, early voice technology had become available to us, we started playing around with integrating it into our smart home devices.

And we realized that in creating our own voice experiences, that this was really going to be the next paradigm shift in human computer interaction. So, we quit our jobs and started Voxable, the consulting business, or what became the consulting business. 

And then, like I said, through those five years, we recognized that the significant problem in the industry is that there are no good design tools. Our current mission is to change that and to help teams create a better UX in the process.

Brian Ardinger: That's pretty amazing. My career started back in Internet 1.0, in the UX/UI design and research field, designing for new technologies back when it was a screen-based kind of thing. It's obviously evolved since then. And you mentioned this term conversational design. Tell us a little bit about what that means. What does it entail?

Lauren Golembiewski: Conversation design is a new term, and conversation designer is a new role that has come about on the market. These are people who focus on creating that voice or chat experience and defining what it looks like for the end user.

And so, just like today you might have a product designer or a user interface designer, the way we talk about that role in a voice application or a chat application is by calling them a conversation designer, because they're focusing on the actual substance of the conversation: writing the words that will be spoken by, for example, an Alexa skill, or sent through a chatbot in a chat application. They're dealing with that substance as opposed to, you know, the HTML and CSS that a web designer would be considering, or the iOS framework that a mobile designer would be considering. And so, conversation designers are focused on the affordances of conversational experiences, which includes synthesized speech.

It includes new conversational AI, like natural language understanding, which can take the natural words that a user says and translate them into something a machine or an application can actually do something with, so it can perform actions based on a more natural interaction.
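To make that concrete, here is a minimal, purely illustrative sketch of what natural language understanding hands to an application: a free-form utterance is mapped to an intent plus structured slots the application can act on. The intent names and patterns below are hypothetical, and real systems use trained models rather than regular expressions; nothing here reflects any specific platform's API.

```python
# Toy NLU: map a natural utterance to an intent + slots an app can act on.
# Illustrative only; production NLU uses statistical models, not regexes.
import re
from dataclasses import dataclass, field

@dataclass
class Interpretation:
    intent: str
    slots: dict = field(default_factory=dict)

INTENT_PATTERNS = {
    "PlayMusic": re.compile(r"play (?:some )?(?P<genre>\w+)(?: music)?", re.I),
    "CheckWeather": re.compile(r"(?:what'?s|how'?s) the weather(?: in (?P<city>[\w ]+))?", re.I),
}

def understand(utterance: str) -> Interpretation:
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.search(utterance)
        if match:
            slots = {k: v for k, v in match.groupdict().items() if v}
            return Interpretation(intent, slots)
    # The conversation designer writes the re-prompt for this fallback case.
    return Interpretation("Fallback")

print(understand("Play some jazz music"))          # PlayMusic, {'genre': 'jazz'}
print(understand("What's the weather in Austin?")) # CheckWeather, {'city': 'Austin'}
```

The designer's work sits on both sides of that mapping: defining which intents and slots exist, and writing the words the assistant speaks back for each one, including the fallback.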

And so, these types of affordances are what conversation designers become experts in, and then they can craft experiences that help fulfill the end user's goal, whether that's getting help with a support issue in, you know, your product. That's one big place where people are automating these types of interactions: customer support chat.

As well as, you know, people want to be able to speak to their mobile device, and they want to be able to play music and perform actions without having to use their hands, whether that's on a mobile device or on one of these smart speakers. And I think the other really big sector this is exploding in is the wearables market, because now not only do we have a smart speaker sitting in a room or a personal mobile device, but we also have a kind of always-on personal accessory that people are wearing almost throughout the entire day.

And it's like a whole other channel through which conversational interaction can happen. It can be very personal, consumer-level interactions, where I'm playing music or tracking my fitness. Or it can be at the enterprise level, where I'm trying to automate certain parts of my job and do my job more intelligently by having an assistant that can help me do that on the go, either hands-free or sight-free.

Brian Ardinger: One of the things is that the whole field is still fairly new, and it's fairly new even for the user to understand. Obviously, a lot of people use Alexa to ask, you know, what the weather's going to be. What are the core applications that you are seeing that are really having an impact in this new field?

Lauren Golembiewski: So, very similar to the early days of the mobile market, when mobile apps became really popular, gaming has established itself as a very early craze. And I think a lot of people are gravitating towards voice-only games. And then there is the whole media consumption landscape.

Podcasting is a very big interest of a lot of enterprises and consumers alike. It's a channel that a lot of people are starting to recognize the value of. And so, people are creating voice experiences in new places where audio and voice interaction can happen.

Brian Ardinger: What do you see with things like Clubhouse or the new Twitter Spaces? Do you see that as just a place to consume audio, or do you see this interaction element coming into play?

Lauren Golembiewski: I think that if those particular companies aren't already thinking about how that interaction can come into play, there are definitely other companies and startups I know of that are thinking about it. And so, regardless of whether it's Clubhouse that creates kind of a two-way type of interaction, one of the big use cases for people interacting with Clubhouse is that they're out taking walks and they're using their wearables as an interface to the Clubhouse world.

And could that browsing experience be easier if there were better voice integration into it, or better ways of interacting? Actually, one of the early Clubhouse rooms that I was in was with a lot of voice industry folks who were thinking about how Clubhouse could incorporate voice into its strategy. Regardless of whether that's right for Clubhouse, I do think that audio interaction, essentially reinventing telephone calls, is something that we're constantly trying to do.

And I do think that there will be a new way that people start to interact, a combination of not only being able to interact with my voice, but also being able to interact with a virtual assistant that embodies me as Lauren, that can automate some of my responses to people at work, friends, and family. Kind of like an autoresponder that's just much more intelligent and personalized for any individual worker or consumer.

And can I have a synthesized voice that is modeled after my own, that I've trained, that can then step in and speak my words for me in certain situations? That seems really futuristic, but I do think that we're getting closer and closer to places where that is relevant. And there are voice channels like Discord, the gaming chat application, where gaming has a whole other culture around it and people start to think of modifications.

So, what if I could buy a voice modification that then changes my voice when I'm speaking to my fans or to my fellow gamers, because I'm playing a character and that's part of my, you know, gaming interaction and lifestyle? And so, I think there are a lot of places where voice is relevant and can change the way interaction happens, beyond just incorporating it in the current apps that we're thinking about today.

Brian Ardinger: You mentioned gaming. Are there other industries that are kind of ahead of the curve when it comes to utilizing some of these technologies?

Lauren Golembiewski: I think the financial industry is definitely ahead of the curve. There are a ton of financial virtual assistants. You see it in a lot of the industries where information and search are fairly complex and there's a lot of data and information that a consumer has to wade through. You can pay someone to do that for you, but there's a whole host of people who don't have that type of budget, who aren't paying a financial advisor.

And so, there are starting to be offerings from these larger companies. Capital One has a virtual assistant. Bank of America has one; I think her name is actually Erica. Charles Schwab as well. There are a lot of financial institutions that are starting to look at how they can help their customers through some of these more automated interactions. But, you know, delivering information in a way that's more natural, consumable, and timely, and responding contextually to the situation that the user's in.

Brian Ardinger: Are those companies using the voice technologies integrated with existing telephone systems, or are they including them on their websites? How is the use case actually playing out?

Lauren Golembiewski: I think most companies are incorporating them into their own mobile apps. So, if they have a mobile banking experience, it's a layer that is part of that mobile banking experience, whether it's a message center or an actual voice, touch-to-speak type of interaction. I haven't yet seen or tried any experiences that exist via a conversational channel like Alexa or Google Assistant. I think some of those businesses are hesitant to deploy to those platforms for a couple of reasons.

First, because it's validating to these larger enterprise businesses, Amazon and Google, which may be seen as competitors, or in some way competition, to a lot of enterprises out there. On the other hand, while Amazon and Google, and Samsung, which is also a player in this realm with Bixby, have created a really great ecosystem that does a lot of things, they still haven't really nailed the end user experience in the same way that I think the iPhone did when it established the mobile application market.

And so, I think we're still waiting for these big players to create that really killer user experience that makes third-party applications really see value in deploying to these channels. So, I think that's why we're not seeing as much of, say, a banking experience on Alexa, for those two reasons.

Brian Ardinger: It definitely makes sense. I'm curious to understand: is it because the behavioral changes the consumer has to make are too much to overcome, or is it because the application developers, so to speak, are having a tough time creating something that's usable? Because like you said, it's a different format. It's a different way of thinking about how to have an interaction with a customer. It seems like the technology itself is there, but that behavioral hurdle is difficult to overcome.

Lauren Golembiewski: Yeah. I definitely think it's more the behavioral hurdle. I'm confident that the technology can, at least to some extent, support a good user experience. I will say that in a lot of ways, it's like the way the iPhone competed in the mobile market when BlackBerry and Palm Pilot were the big players: the iPhone came in and it was not competing on features, it was competing on experience.

And then when the App Store came out, similarly, Apple said, we're restricting the number of applications that can be deployed on the store, and we're going to make it harder for you, the developer, to create that. But hey, we're going to create this whole developer program.

We'll create all these standards for how we think applications should be built. And for better or for worse, people could disagree with their, you know, interaction paradigms and the way that they set things up, but that represented some of the best UX design and guidance that was available.

And so, the way we're seeing that happen on the voice side is that we're not seeing that kind of leadership play out in the same way. We don't have someone saying, hey, we're going to restrict the number of skills or, you know, actions that can be deployed to these systems.

We're going to make sure that they're really good. And then we're going to make sure that third-party developers have all of the resources that they need to get those experiences to be as good as they need to be.

And not to say that those companies aren't going in the right direction, but in terms of instantiating that behavior shift that does need to take place with consumers, I don't think it's a difficult shift. I think it's actually an easier one than perhaps learning how to work a touch screen.

So, I don't think that shift is necessarily a total blocker to the market proliferating the way it can. But I do think there needs to be a big enough push toward satisfying that end user experience to ensure that the behavior change is simple and seamless, and that not only am I interacting on these devices, but I'm also interacting with the businesses that are deploying experiences through these devices.

Brian Ardinger: You kind of have to have both. And I think one potential tipping point is that we're getting to the ubiquity of these particular devices. Everybody has an AirPod in their ear almost at all times, or, you know, they're carrying around their phone and can quickly tap on it to access Siri or whatever. So again, the technology is ubiquitous enough now. I think the next step is figuring out what are those applications that make somebody actually want to turn it on and interact with it.

One of the things that I've seen when it comes to voice, and I'd love your opinion and insight here, is that because you have to speak, it's public when you're talking to a microphone in front of other folks. Is that a behavioral challenge to overcome? Or what do you think are some of the behavioral issues that are holding people back from trying these things or using them more regularly?

Lauren Golembiewski: Yeah, absolutely. I definitely think that there are social changes that we are both experiencing and have yet to come that will affect how people interact with voice interfaces. I'm reminded of when those Bluetooth headsets became popular, and the phenomenon of the person alone on the street talking out loud, right?

We were all grappling with that. And now I think we're used to seeing it, or we're used to looking for the headphones to confirm that the person isn't speaking to us, isn't directing the conversation toward us. So, I think that's a certain social change we're getting more used to.

And I do think that the public nature of speaking out loud is a limiting factor for voice interaction. It definitely reduces the number of contexts in which you'd be willing to talk. Either I want to be in my home to talk about a sensitive matter out loud, because I don't want to be out in a public place speaking about, say, my personal financial matters.

Or I want to be able to interact in a different mode. I want voice available to me when it matters, when I do need it, when I feel like talking, or when I'm out and hands-free and have a low-stakes voice interaction that I don't care is public. But for a higher-stakes interaction, maybe I want to text instead.

And then there are also a lot of cases involving not just the private versus public nature of voice, but the synchronous versus asynchronous nature of it, where to have a voice interaction, both parties need to be active and consuming that information in real time. So if I'm listening to an interaction, or I'm listening to instructions from a voice interface, it's kind of streaming out at me, and if I don't catch it in that moment, then it's cumbersome to recall it, you know, repeat it, et cetera.

So, there are situations when audio information, along with a visual aid or a visual element to go along with the interaction, is really helpful. We're seeing people start to think not just about voice-only or graphic-only or screen-based-only, but about really multimodal interaction. Because beyond just voice and, you know, tapping on a screen or chatting in a text field, we also have all these gestures and other sensors that we're wearing, like heart rate monitors, and there are gyroscopes in all of our devices, and location information that's constantly being captured.

And so, all of these data points, sensors, and gestural types of interactions are now inputs into any type of device that we're interacting with. That really, I think, breaks us out of thinking about one particular mode of interaction at a time, and toward thinking about what the user actually needs and what they're actually using.

What is available to them, and what makes the most sense to them? They could really switch between voice, text, and gesture for any given interaction, pretty seamlessly. And that's probably what they want to do. That's what I see in the future: seamlessly moving between pretty much any input that I, as the user, want.
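A small, hypothetical sketch of that multimodal idea: voice, text, gesture, and sensor readings all arrive as interchangeable input events and resolve to the same application-level action. The event shape, modality names, and resolver below are invented purely for illustration and are not drawn from any particular device SDK.

```python
# Toy multimodal input model: different modalities, one resolved action.
from dataclasses import dataclass
from typing import Optional

@dataclass
class InputEvent:
    modality: str                    # "voice" | "text" | "gesture" | "sensor"
    payload: str                     # transcribed speech, typed text, gesture name, or reading
    location: Optional[str] = None   # optional context, e.g. from the phone's GPS

def resolve(event: InputEvent) -> str:
    """Map any input modality to the same application-level action."""
    if event.modality in ("voice", "text") and "next track" in event.payload.lower():
        return "player.skip"
    if event.modality == "gesture" and event.payload == "double_tap":
        return "player.skip"                 # same action, different input
    if event.modality == "sensor" and event.payload == "heart_rate_spike":
        return "workout.log_interval"
    return "assistant.clarify"               # fall back to asking the user

# The same action can arrive by speaking, typing, or tapping an earbud:
for e in (InputEvent("voice", "next track please"),
          InputEvent("gesture", "double_tap"),
          InputEvent("text", "Next track")):
    print(resolve(e))                        # prints "player.skip" three times
```

The point of the sketch is only the shape of the design: once every modality funnels into one event type, the user can pick whichever input fits the moment without the experience changing underneath them.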

Brian Ardinger: Yeah. Being able to capture and use all your senses whenever you need them. 

Lauren Golembiewski: Yeah, absolutely. And all those devices and sensors are getting so much smarter and more interesting. That's what I wrote about in Harvard Business Review: instead of speech recognition, we're starting to see research being done on subvocal recognition, which picks up the signals your brain sends to your mouth right before you say what you're thinking, and translates them into data that represents the words you were thinking to say. That's subvocal recognition.

AlterEgo is a device that uses sensors attached to the face to perform that subvocal recognition. So, then you can have a feedback loop that's actually silent but still based on a voice interaction paradigm. And that kind of gets around some of that public/private issue, where maybe I can have a voice interaction that's happening entirely through a closed-loop system: a sensor attached to my face and a bone-conduction headphone that's speaking the response back to me.
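As a purely speculative sketch of that closed loop (AlterEgo's actual interfaces aren't public, and every class below is invented for illustration), a silent turn would look something like: sense the subvocal input, interpret it, and render the response privately.

```python
# Illustrative closed-loop "silent voice" turn; all components are stubs.
class SubvocalSensor:
    """Stand-in for a wearable that decodes subvocal signals (stubbed here)."""
    def read_words(self) -> str:
        return "what's on my calendar"

class Assistant:
    """Stand-in for the conversational backend."""
    def respond(self, utterance: str) -> str:
        return "You have one meeting at 3 pm." if "calendar" in utterance else "Sorry, say that again?"

class BoneConductionSpeaker:
    """Stand-in for private audio output; here it just prints."""
    def speak(self, text: str) -> None:
        print(f"(heard privately) {text}")

def closed_loop(sensor: SubvocalSensor, assistant: Assistant, speaker: BoneConductionSpeaker) -> None:
    # One silent turn: sense -> understand -> respond -> private audio out.
    speaker.speak(assistant.respond(sensor.read_words()))

closed_loop(SubvocalSensor(), Assistant(), BoneConductionSpeaker())
```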

For More Information

Brian Ardinger: It's definitely a fascinating world. If someone wants to find out more or explore these new opportunities, are there particular go-to resources or things that you would recommend people take a look at?

Lauren Golembiewski: Yeah, absolutely. You can certainly search for those articles on Harvard Business Review. I also write a lot over at Voxable's blog, which you can find at voxable.io. And my Twitter account, @LaurenGolem, is a place where I often muse about the future of technology.

And then one person who I follow quite regularly, from seeing her speak at South by Southwest, is Amy Webb, who's a futurist. She releases a tech trend report every year, usually at South by Southwest. I think she did it virtually this year, but she walks through a lot of the individual trends happening in wearables, in healthcare, in artificial intelligence, and in the confluence of all of these things, and really goes deep on a lot of the fascinating things happening in the future of technology.

And so, I recommend a lot of people seek her out: Amy Webb at the Future Today Institute. You can find her reports, which are all open sourced and free, which is another amazing aspect of what she does.

Brian Ardinger: Absolutely, all good stuff. Lauren, I really do appreciate you coming on and sharing your insights on where the world's going. And I'd love to have you back on the show at some point to continue the conversation as we move forward. If people want to connect with you directly, what's the best way to do that?

Lauren Golembiewski: You can email me at lauren@voxable.io.

Brian Ardinger: Excellent. Well, Lauren, thanks again for being on Inside Outside Innovation. I look forward to the world ahead and appreciate you spending some time with us.

Lauren Golembiewski: Thanks so much, Brian.

Brian Ardinger:  That's it for another episode of Inside Outside Innovation. If you want to learn more about our team, our content, our services, check out InsideOutside.io or follow us on Twitter @theIOpodcast or @Ardinger. Until next time, go out and innovate.

FREE INNOVATION NEWSLETTER & TOOLS

Get the latest episodes of the Inside Outside Innovation podcast, in addition to thought leadership in the form of blogs, innovation resources, videos, and invitations to exclusive events. SUBSCRIBE HERE

For more innovation resources, check out IO's Innovation Article Database, Innovation Tools Database, Innovation Book Database, and Innovation Video Database.
