
Meet a Founder: Orca Founder and CEO Stephen Kemmerling


By Randall Woods and Aruna Mandulapalli


The recent explosion of AI tools will bring unforeseen benefits to society – and unexpected problems. Already some of those complexities are starting to emerge, with the cost and energy consumption of massive AI programs exceeding early expectations and causing strain.


Fortunately, some of the best minds in engineering are searching for solutions, including Boston-based founder Stephen Kemmerling. After working at several successful startups as well as on one of Facebook’s machine learning teams, he set out to found his own company, Orca, to develop a way to make AI more accessible and efficient as the technology scales. Startup Boston caught up with Stephen to learn more about his company, and the path that led him to resolving some of tech’s thorniest problems.


SB: I don’t want to bury the lede, so let’s start off by talking about your company, Orca, and how you created it.

 

Stephen Kemmerling: I've been an AI and machine learning practitioner in some form or another for pretty much my entire career, across different companies. And I’ve been seeing problems on the horizon. For me, that revolves a lot around, for lack of a better word, control of AI systems. Essentially, being able to know why your AI is acting the way it is and being able to impact those outcomes in a cost-efficient and timely way.


For a long time in classical machine learning, you just made a new model when you needed to change a model’s behavior; in other words, if your AI wasn’t doing what you wanted, you just made a new one. Making a new model is fairly straightforward in classical machine-learning scenarios because the models are smaller and usually comparatively cheap to train. That strategy starts to break down with more advanced AI systems (like LLMs) because training or tuning these systems is often extremely expensive.


SB: The issue of cost certainly is top of mind for AI companies.

 

SK: We need to address the cost explosion that we're seeing with these more advanced systems, where it takes more money, more electricity, more computing power, and more time to get the desired results. And these costs have other side effects too: Fewer organizations get to work on state-of-the-art systems because it's too expensive, and those that can afford it get slower and slower because iteration cycles are getting longer and longer. That means slower progress toward a future where AI systems are actually enormously, positively transformative to society, in ways we can't even imagine. Without getting too grandiose here, if we get this right, this can dramatically reshape how we live and work; it could cure cancer, solve climate change, figure out fusion-based electricity, and many more things.

 

This slowness in making changes to AI systems has safety implications too. If it takes three weeks to fix something that’s wrong with your system, you can do real damage. That could be minor, like a chatbot that’s a little off-kilter and creates an awkward customer interaction, or more harmful, like your household robot (when we all have them) hitting kids. I didn’t see enough people trying to work on this, and that's ultimately what led to the company.

 

SB: And in a nutshell, without getting too technical, how do you solve that problem with Orca?

 

SK: We need some ways to mitigate this – to be able to control and adjust AI systems in a way that's cheaper and faster than conventional approaches. When you look at the root cause of the problem, it very often comes down to the data that the model has been trained on. 


Our solution draws a lot of inspiration from “conventional” software – software that’s been programmed rather than trained, in contrast with machine learning or AI systems.


In conventional software, you don't put your data directly into your program; you put it into a file or into a database or something external to the program. If you have a Word document, for example, the data isn’t inside Word. It's in a file that's outside of the program. That pattern holds true for virtually all software in some form or another. It gives you the ability to change the behavior of your software without changing the software itself.  
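To make the pattern Stephen describes concrete, here is a minimal, hypothetical sketch (not Orca’s code): a conventional program keeps its data in an external file, so editing the file changes the program’s behavior without touching the program itself.

```python
import json
from pathlib import Path

# The program's logic: greet users according to externally stored settings.
def greet(name: str, settings_path: str) -> str:
    settings = json.loads(Path(settings_path).read_text())
    return f"{settings['greeting']}, {name}!"

# The data lives in a file outside the program, like a Word doc outside Word.
Path("settings.json").write_text(json.dumps({"greeting": "Hello"}))
print(greet("Ada", "settings.json"))   # Hello, Ada!

# Edit only the data file; the program itself is unchanged.
Path("settings.json").write_text(json.dumps({"greeting": "Welcome"}))
print(greet("Ada", "settings.json"))   # Welcome, Ada!
```

The program never changes; only the file does, which is exactly the separation of logic and data that trained models lack.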


This age-old computing pattern of separating the code – the logic, the reasoning ability, whatever you want to call it – from the data barely exists for AI systems. With Orca, we've built a solution that allows you to separate the reasoning abilities of the model from the data.


Once the data is external, you can look at what's actually happening, rather than it being obscured in the inscrutable matrices inside the model. And just like conventional software, you can edit that data in real time with very little computational cost and little to no machine learning knowledge. You can fix issues, update your data over time, customize behavior, make subjective adjustments and so on – ultimately giving you the ability to control your systems for less money and in shorter timeframes.
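As a toy illustration of that idea (an invented sketch, not Orca’s actual system or API): the model-like component decides only *how* to answer, while *what* it knows lives in an external store that can be inspected and edited at runtime, with no retraining.

```python
# External data store, analogous to a database or file outside the "model".
knowledge = {
    "support_email": "help@example.com",
    "return_window_days": 30,
}

def answer(question: str) -> str:
    # "Reasoning" component: decides which fact is relevant.
    # All facts come from the external store, never from hard-coded values.
    if "email" in question:
        return f"Contact us at {knowledge['support_email']}."
    if "return" in question:
        return f"Returns are accepted within {knowledge['return_window_days']} days."
    return "I don't know yet."

print(answer("What's the return policy?"))  # ... within 30 days.

# A policy change is a data edit, applied instantly -- no training run needed.
knowledge["return_window_days"] = 60
print(answer("What's the return policy?"))  # ... within 60 days.
```

Fixing a wrong answer here is a cheap, inspectable data edit rather than an expensive tuning run, which is the cost and control advantage Stephen is describing.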

 

SB: Where are you right now in the development phase? Do you have customers?

 

SK: We have a few companies that are testing it out, but we're still pretty early.  


SB: When will it become commercially viable?

 

SK: Fairly soon: late summer or fall is what we're aiming for. The AI ecosystem is evolving, and we have to keep up with that. But we're not far off from having something that's properly commercially viable.


SB: And shifting gears to early in your career, how did your work at startups lead to your creation of Orca? And what did you learn that helps you now, as you tackle these incredibly difficult problems?

 

SK: It's a complicated journey. Early-stage companies taught me what it’s like to work on a greenfield problem with a lot of uncertainty but also a lot of flexibility. What got me hooked was the problem-solving aspect of it. There's no politics and no corporate strategizing at the early stage. It all comes down to having a good product; everything else kicks in later.

 

There were some very stressful times, I'm not going to lie, but I found it all very compelling, and it helped me reach the conclusion that the best way to make a difference is to build something. I'm a builder, first and foremost.

 

SB: I hear that a lot from startups: You don't have the red tape and corporate politics to deal with. But I've also heard it can be very challenging. Long hours, and delving into a lot of areas that might not be your core expertise. So given those experiences, what were some of the mistakes you learned early on that helped you in later ventures?

 

SK: Loads of mistakes were made. We're still making mistakes in my new company all the time. The biggest lesson for me is that mistakes are okay. You have to learn not to place blame and to look for lessons instead. What went wrong and why? If we had a do-over, what would we do differently? It really is a gift to be able to learn from mistakes, and early on I didn't think that way.

 

The main goal is to make a significant difference on a time scale of years. Individual missteps can really wear you down if you don't think that way. And if you don't make mistakes, you’re probably on the wrong track and won’t realize it until it's too late. You want to know when you're wrong quickly. Embracing that to me is the most important lesson working in a startup environment. 


Want to read more investor stories as they're published? Stay in the loop by subscribing to our newsletter in the footer below. 


About the authors: 


Randall Woods is a former editor at Bloomberg News and currently is a Senior Vice President at SBS Comms, a communications agency for technology companies and startups.


Aruna Mandulapalli is a seasoned recruiter who has worked across industries and has recruited for several reputable companies in the US. She founded her own recruiting company HireSimplified with a mission to bring her expertise to scaling startups. 

