Watch on YouTube, Apple Podcasts, or Spotify.
Introduction
After having written long-form essays across a weirdly diverse range of areas of the life sciences, I am increasingly confident in my status as someone who knows a little about a lot of things. But every now and then, you meet someone who casually reveals to you an entire subfield that, up until your conversation with them, you’d never even thought about before. This happened to me when I met Sterling a few months back. We met in the elevator as we were both leaving an event, and by the time we’d reached the bottom floor, the conversation had become so interesting that we stood in the lobby for an hour as I pestered him with more and more questions.
Sterling runs a company called Iku Bio. Iku ostensibly does something quite simple: it helps biologics manufacturers figure out what to feed their cells. This is called media optimization, and it is done in an astonishingly old-fashioned way. An engineer runs a handful of experiments in a benchtop bioreactor the size of a Fiji water bottle, waits days for analytical results, and repeats, maybe three or four times before timelines force them to stop searching.
Sterling’s solution was to use printed circuit boards (PCBs)—the same green wafers inside your phone and your microwave—as the substrate for microfluidic bioreactors. Because PCBs are made via lithography, you get complexity for free. Because they’re already mass-manufactured at planetary scale, you inherit sixty years of cost optimization. And because they’re literally designed to carry electrical signals, you can embed sensors directly into the thing rather than cramming them in after the fact.
The result is a device that costs $8 per experimental lane versus $20,000 for the nearest comparable microfluidic system. And there are many, many ways to improve from here on out.
This conversation covers the full stack: what cell culture media actually is and why it’s so much more than sugar water, why biologics manufacturing has more in common with semiconductor fabs than chemistry labs, how Sterling arrived at PCBs, and at the end of the talk, why he thinks a fair bit of lab automation is “philosophically a crime.”
Timestamps
[00:00:48] Introduction
[00:01:26] What is Iku Bio?
[00:05:00] Media optimization as the biggest lever
[00:06:23] What actually is media?
[00:13:07] Fetal bovine serum and the move to synthetic media
[00:15:10] Walk me through a media optimization workflow
[00:18:49] Why biologics manufacturing is closer to semiconductors than chemistry
[00:21:50] Matching the phase three batch and generics
[00:24:12] The 200-dimensional search space
[00:37:02] Printed circuit boards as a medium for microfluidics, and the utility of lithography
[00:40:48] Anatomy of the Iku device
[00:57:09] What sensors are on the device today?
[01:01:36] How do you use the Iku device to perform media optimization?
[01:14:44] Does media optimization survive scale-up?
[01:24:32] $8/lane vs. $20,000/lane: the economic utility of Iku’s device
[01:32:05] Why PCB microfluidics didn’t exist 10 years ago
[01:39:24] Who is the customer?
[01:43:14] What is the ultimate goal of Iku?
[01:49:07] What does the validation evidence need to look like?
[01:52:14] What would you do with $100M equity-free?
[01:57:31] Lab automation is in a strange place right now
Transcript
[00:00:48] Introduction
Abhi: Today my guest is Sterling Hooten. Sterling is the founder of Iku Bio, where he is building a microfluidic bioreactor built on a printed circuit board that cultures, senses, and streams biological data in real time, claiming 10,000x higher experimental throughput at a 100x lower cost. It is one of the most niche areas of wet lab automation that I think I’ve ever discussed on this podcast, and I don’t think I would’ve ever learned about it had I not stumbled across Sterling at an event a few months back where we had a conversation that was so fascinating that I immediately wished we had filmed it. Sterling, welcome to the podcast.
Sterling: Thank you for having me. Very big fan. Really enjoy your articles.
[00:01:26] What is Iku Bio?
Abhi: Thank you. So I’ve given a brief introduction of what you’re working on at Iku, but I’m sure I oversimplified some things. I’d like to hear your own pitch for what you’re doing there and why it’s so valuable.
Sterling: So the largest problems of the 21st century — things in medicine, for climate, for material optimization — all of these are predicated on our ability to manipulate and control living matter. So advancing our understanding of biology is just so fundamental to these problems in the future, and yet the tools that we use right now to interact with biology are primitive. They’re primitive in an absolute sense, and they’re primitive in a relative sense to what we could be doing. At its core, biology is time varying, it’s parallel, and it’s sensitive. And yet the tools that we use right now — that interface destroys at least one of those properties. And in principle, advances in AI also would be an excellent connection with biology. But that interface is fundamentally broken. So lab automation right now is stuck at the Petri dish and the microtiter plate level. It’s equivalent to handwriting manuscripts in the 15th century, sometimes. And so what we’re building is a printing press for biological data. And the way that we’re doing that is we’re rethinking that interface between compute and biology, and we’re replacing traditional microfluidics with a printed circuit board that allows you to embed the fluidics — cells can live inside of it. And that allows you to communicate and control cells in a way that has not been possible before at high throughput. And the largest application that we see for that is in biologics manufacturing. Right now, biologics — it’s a half a trillion dollar industry and it’s supply limited. So every year, Samsung Biologics has to build a new $400 million facility. The reason they’re doing that is because you can only get so much out of a traditional fab plant. They’re closer to silicon fabs actually. And the largest lever that they have is in yield — so how much can you get out of these things, are they producing, and also what are the costs. 
The core of that comes down to literally how many of these dynamic cell culture experiments can you run. And that’s a process called media optimization. And it ends up that that one problem ends up being connected to this half a trillion dollar industry.
[00:05:00] Media optimization as the biggest lever
Abhi: So to paraphrase, if I wanted to increase biologics manufacturing by an order of magnitude — at least my capacity to produce like antibodies and the like — the lever that is most easily pushed on and most likely to give you the most bang for your buck is media optimization.
Sterling: It is the most bang for your buck. You are unlikely to get 10x on that. What you’re looking at is how much can I produce per unit time, and then how consistent is that. And if you can produce more per unit time, you get higher throughput for the entire facility. And then if you have more stability in the product — for biologics and for things that go in our bodies — that’s a desirable outcome.
Abhi: And so my conception of these bioreactors that are producing antibodies is you have a bunch of CHO cells maybe sitting in a very large tank. They’re sitting in a fluid of media and they’re constantly just excreting out these antibodies that are later purified. Iku comes in at the step of deciding what media to actually put into this tank. Is that fair to say?
Sterling: Correct. Yeah.
Abhi: What is — well, like I’ve never worked in a wet lab before.
[00:06:23] What actually is media?
Abhi: My conception of media is that it is sugar water that cells are generally fine with drinking up. I’ve learned that this is incorrect and I’d like to hear your take for what actually is media.
Sterling: I would say that that is a very limited view of what media is — not incorrect in that, if we were talking about media for growing yeast, sugar in water is pretty close to sufficient. But the more powerful way of thinking about media is that it is a very high dimensional control surface for what you can get cells to do, right? Cellular communication comes through things in the media, right? The media actually is the communication channel in a sense between cells. It’s also what carries nutrients into the cells. In mammalian cell culture, it’s closer to serum in blood. So it has many different types of proteins in it. It’ll have different metabolites. It’ll have salts. In defined media it’ll have buffers to keep the pH. It basically has a lot of components — and there are hundreds of them really, down to things like magnesium. And each of these are really communicating and interacting with the cells. And they also work across different time periods. So you’ll have growth media, which is when you’re building up the cells, and then there’s media when you really just want them producing these particular things. And right now, if you buy or produce media internally, it tends to be connected to a particular clone or particular cell line. And so you will optimize the media for that particular cell line, or you’ll optimize media for — if you’re growing neurons. And so every — it’s complicated enough and important enough to the results that you get that exploring it is very valuable.
Abhi: Like I know that there are a few companies that have popped up claiming to technically redesign cell lines to make them better at biologics manufacturing. Does that also demand a change in media?
Sterling: It can demand — the key thing is that the biologics that we are producing now are becoming more complicated, and that is making media optimization more difficult. So you do tend to pair the cell line with a media line, both for repeatability and ease of use, also just for commercial reasons — that’s a better business. But you can — what really happens is you tend to take a standard growth media or something off the shelf, and then you will customize it for this particular thing that you’re trying to make. Because ultimately, productivity is really the interaction of these three or four things: it’s the cell line, it’s the media, it’s the process conditions or the tank that you put it in, and then the actual compound of interest and things that you’re trying to do.
Abhi: You mentioned earlier about like media is both a way — like nutrients for the cell — but is also the substrate upon which they actually communicate with each other. That second part was surprising to me. I did not naturally conceptualize cells in a tank actually talking to each other while they’re churning out antibodies. What are they communicating exactly? Does that question make sense?
Sterling: I think it’s maybe easier to think about it in the sense of our bodies, right? Cells will send out or communicate through different hormones, right? Those will get released. There are small signaling molecules that get broadcast — those are carried through the media. Well, in the body we call it blood serum, right? But in the sense, it’s media.
Abhi: You mentioned also that you have different stages of media that you want to introduce to the cells depending on the cell’s actual life cycle. Is that also true for serum in the human body? Does the body constantly adjust its own serum to whatever the cells need?
Sterling: Yeah. I mean, that is the way that cells differentiate, in a way. You’ve got some gradient that will happen, and then that gradient — that’s basically saying you’ve got different media, and that gradient can tell cells how to orient or can tell cells how to develop. And from stem cells, triggering when — what they’re going to end up being — that’s also basically — it becomes media as you add things into the cell environment there.
Abhi: So why — what’s stopping me from just replicating human serum for mammalian cells? Is that not the best substrate to use?
Sterling: Well, the first question is, where are you gonna get it?
Abhi: Well — I guess this is a more basic question. Do we understand human serum well enough to perfectly replicate it?
Sterling: Replicate it? I don’t know. What I will say — and that gets closer to what you were talking about originally — is that’s what we’ve been doing historically. But instead of using humans, which — not that — very limited supply, or limited willing supply —
[00:13:07] Fetal bovine serum and the move to synthetic media
Sterling: we’ve been using fetal bovine serum, so from calves. There are problems with that. It is highly variable. And for all of biologics manufacturing, the goal is reduce variability. And if one of your largest inputs is variable, that’s a problem. It’s also a challenge because things like — you can’t sterilize it in the traditional way. You can filter it, but you can’t heat it up without destroying — and things like prions, which could be quite bad, you would need to prevent those coming in. So the industry has really moved much towards formulated medias. So you’re building it up from the constituent parts, and that also allows you to — it reduces variation and gives you a lot more control over how you are particularly tuning that media.
Abhi: When you say like at some point fetal bovine serum was being used —
Sterling: Still. It is still in use. It’s mainly in use in research. I think — I’m — maybe there are some biologics manufacturers who are using fetal bovine serum. I don’t know. But I think the industry has pretty much moved to —
Abhi: At this point, would you consider that the synthetic serums that are attempting to recapitulate the biochemical properties of fetal bovine serum — the synthetic stuff is better? Or is it just like it’s easier to get, so you’re okay with not perfectly recapturing fetal bovine serum?
Sterling: I think it’s better.
Abhi: Okay.
Sterling: I think it’s better, and I think it’s better in that you again get to tune it.
Abhi: And so attempting to be more concrete about —
[00:15:10] Walk me through a media optimization workflow
Abhi: what is a media optimization engineer exactly doing? Let’s say I have a plate of CHO cells. I want to produce Keytruda, so pembro. I have a bunch of cells. I have all of them willing to produce the drug. They’ve been genetically edited to do that. What’s the next step?
Sterling: So the process in general is guess and check. So you will take a cell line that you’ve edited or produced for this. Most of the time it’s just — and then you’ll take it out from the freezer. You’re gonna grow it up a little bit. And then you will probably take four or five of those because you don’t kind of know yet, right — which particular strain will do best.
Abhi: So you’re trying with multiple strains.
Sterling: You’re gonna try with multiple strains. And then you will run experiments that allow you to — first you’re gonna run in microtiter plates normally, right. And you’re going to just see where are we, which of these cell lines seems like it fits best with these. After you’ve narrowed it down, you’re going to move to something that has more control. And the reason that you’re gonna move to something that has more control is that what happens in a microtiter plate is extremely disconnected from what happens in any kind of production environment. And the core reason for that has to do with flow. So in a microtiter plate, you get a lot of capillary issues, right? It changes the — you’ve got the surface tension kind of comes up, that changes the gas exchange rates. You get evaporation. And you don’t get any of the different gradients or different little bits of shear forces — all these things that actually affect how cells grow in large reactors. So what you do is you put it into what’s called a benchtop bioreactor. And so this is a little bit bigger than a Fiji bottle in terms of what it’ll contain, and it’s got an impeller in there and it’ll spin it around. So now you’re going to grow those cells in that media for 10 days or something, right? And during that time, you’re going to also change or control the pH level that’s in there. You’re going to control the temperature. You’ll set different impeller rates, seeing what’s optimal. And you’re going to run that for — one person can maybe run 12 of those experiments, 15 of those experiments. It’s pretty laborious right now to actually set those up. It’s gonna run, and during that time, you’re gonna pull off some samples. You’ll take those to the analytics section, depending on how booked up that is — that could be three days to a week sometimes to get all of your answers there. And then you’ll do that.
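To put the workflow Sterling just walked through into rough numbers, here is a back-of-the-envelope sketch in Python. The figures (12–15 parallel benchtop reactors per engineer, roughly 10-day runs, a 3–7 day analytics queue, three or four iterations before timelines force a stop) come from the conversation; everything else, including the midpoint choices, is an illustrative assumption.

```python
# Back-of-the-envelope throughput for a traditional media-optimization
# campaign. Figures are taken from the conversation; midpoints are
# illustrative assumptions, not measured values.

REACTORS_PER_ENGINEER = 12   # benchtop bioreactors one person can run in parallel
RUN_DAYS = 10                # one culture run ("10 days or something")
ANALYTICS_DAYS = 5           # midpoint of the 3-7 day analytics turnaround
ITERATIONS = 4               # "maybe three or four runs ... and then that's it"

# Total media conditions one engineer can actually test in a campaign
conditions_tested = REACTORS_PER_ENGINEER * ITERATIONS

# Wall-clock length of the campaign, assuming runs and analytics don't overlap
campaign_days = ITERATIONS * (RUN_DAYS + ANALYTICS_DAYS)

print(conditions_tested)  # 48 conditions
print(campaign_days)      # 60 days
```

Two months of work to probe a few dozen points in the design space is the baseline the rest of the conversation is arguing against.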
[00:18:49] Why biologics manufacturing is closer to semiconductors than chemistry
Abhi: I’m sorry, what questions are you asking at that point? What are the samples meant to answer?
Sterling: So ultimately, your sample is meant to answer how much total biologic did we produce in here, at what quality, right? And then the other question there is how overall — how consistent is it? Will it be — that’s actually a large sort of hidden cost, as I said. The best way to think about biologics manufacturing is to think about it as high precision manufacturing, closer to semiconductor manufacturing. That’s really the reason why Samsung Biologics is in the position that they are — because they took what they learned in terms of process control and brought that over. The reason that Fujifilm is a large manufacturer is because they took chemical process engineering and brought it over. Now, these were not biological companies, right? They are industrial manufacturing companies. And when you think about reducing process variability, one way of looking at that is how precise is the part that comes out. But then what makes up that, right, is like how much variation can we absorb without it affecting the end product? And so if you can come up with media and process conditions that are more forgiving, you’re relaxing it a bit, right? You can still end up with something that’s very precise at the end, but oh, we didn’t actually need as much — we were more forgiving over here. And that can be important because if you lose a batch of biologics, it’s very expensive. And that can happen. And it does happen. And so the way to reduce that is through media optimization. And so to finish on this — you’ve run that set of experiments, you’ve got your readout there. And those readouts, although those are the most important, you’re also going to characterize kind of everything in there that you can, because you want to see how those are affecting that actual result. Then you will repeat this. And depending on how much time you have, maybe you will get three or four runs at that, and then that’s it. And that comes down for biologics manufacturing to the regulatory reasons.
[00:21:50] Matching the phase three batch and generics
Abhi: So how much of — would you say the optimal cell lines and the optimal media — it’s like there is a threshold of quality you want to meet and after that you’re done, versus you are trying to make this as perfect as possible? Is it kind of dependent on what drug you’re trying to produce?
Sterling: I think the goal is match what was in the phase three trials. So in the process of taking a drug to market, during your phase three trials, the batch that you produced there — that is what all of the FDA’s evaluation was based on. So they want to keep that the same. So anything that deviates from that is undesirable.
Abhi: Is this true even when the drug goes off patent and the generics manufacturers — are they trying to make it even — they’re trying to improve the process even more, or even for them, they’re trying to replicate exactly what went on with the original company?
Sterling: That is a great question. I should look into that because — no, truly, because they do have to go through — so they have a couple options. The first thing is that they will basically just license the cell line and the media from the existing pharma company, right? Pay them for that. And then that way the pharma company can still get some revenue from that. The alternative is they need to come up with their own cell line and — I think the regulations are such that there’s a way of — I think it’s like if you can prove that it’s similar enough, then it just counts as a process change.
[00:24:12] The 200-dimensional search space
Abhi: And getting back to the question of actual media optimization — the media optimization person goes to the analytical chemist. The chemist tells you all you need to know about the samples that you’ve been given. You repeat this five to six times. What are the levers of change that you have over the media?
Sterling: So media is best thought of as this control surface for affecting what the cells are doing. What are the levers in there? You can change the components, and then you can change the concentration of those components, and then you can change timing of those things. And if you start with 200 or more — let’s start with 200 components that you could put in there, and then the different concentrations that they come in, and then the timing — that already is quite a large space to explore. Then you have that interacting with the cell and the different cell lines — larger space. And then with that fixed compound that you’re looking for. So the standard things that people are going to change or tune, right, is when is a carbon source coming in, and when — as you start producing different proteins, the needs of the cell change. So if you shift into a different mode for the cell — you can signal it to shift into a different mode, starts producing these other — all of a sudden its needs change.
Abhi: Mm-hmm.
Sterling: And being able to anticipate, buffer, and meet those needs — that then has a lot to do with the output.
Abhi: How much of the optimization — like even the direction or specifics of the optimization — can be theoretically known and applied versus just always empirically determined? I guess the more specific question I’m asking is, does a media optimization engineer — are they coming to every new problem almost like tabula rasa? Whatever experience they had in the past does not apply to this new cell line with this new drug.
Sterling: So the question of how tractable of a problem this is and what’s the current state of the art — the current state of the art is that best practices live in the mind of the practitioners. And a lot of that comes down to familiarity with that cell line, familiarity with the media they already have. And most manufacturers are working in a particular kind of domain or specialty, right? And so as you’re constraining that search space, it does make it easier to operate in there. However, it is not the case that you will one-shot it coming through. And then the second thing is, it’s actually reasonably easy to get caught in a local maximum. And if the cost of running those experiments or experiments themselves are sort of precious, you’re really not going to push very far out. The lever they currently use is mainly in strain engineering. And so they’ll try to select strains that’ll have the highest performance. But once those cells that you’re using are set, it does all come down to the media for optimization. In a model sense, it does seem that it’s tractable. It does seem like there’s transfer learning. How broad that is really comes down to what experiments we’ve been able to feed into these models so far. And the answer is not very many. The largest facility that I know of for running sort of like dynamic cell culture experiments — they can run like 300.
Abhi: In parallel at any given time?
Sterling: Yeah. 300. And that’s like, the entire company is just doing that. So that’s the state of the art. And a lot of that comes back to the fact that it’s so manual.
Abhi: So the one last question I have before we move on to how Iku is fixing this — I can understand being able to easily modify concentration of the media. I understand being able to modify the timing of when you’re giving which media to the cell line. The components, the constituent components, feels a lot more complicated. Because that’s like 200 components. How much of that is like — in practice there’s 10 of them you modify at any given time, and the other 190 are pretty standard and all cell lines will need this.
Sterling: Yeah. So how much is like — what’s the core? Is there some —
Abhi: Dimensionality reduction?
Sterling: Yeah, like is there an 80/20 thing going on? Oh yeah, absolutely. Absolutely. Which, as I said, the glucose — your sugar source or carbon source, energy, the pH that you’re running at — those are, yeah, there probably are 10 that are dominating. But that’s why it’s actually so challenging — because there are 10 that are dominating, but because the system that we’re controlling is quite non-linear, it can amplify what is, in certain conditions, a small change. And my favorite example of this is that — this was in industrial manufacturing — but changing the amount, just changing the amount of magnesium at a particular point doubled the output. And it didn’t necessarily need — there was no a priori way of knowing that it would’ve been magnesium that went in there. And you can say, oh, okay, sure, that’s a lever and we should do that on each of these. But the problem is that potential exists for all of those other 190 things, right? So it’s like, sure, there are these core things that tend to dominate —
Abhi: But those 10 things could vary based on what the problem actually is.
Sterling: Yeah. Well, those core things of like — you do need to, the salts that are in there, right, and when energy comes into the system — those are definitely floor level. You have to figure those out. But then — and if you get those wrong, basically those are controlling the — where the floor is. So if you get those wrong, it kind of doesn’t matter what you do in these other areas. You’re not going to have high performance. But just because you get those right doesn’t mean that you have high performance at all. They’re just table stakes. You need to get those done.
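The magnesium anecdote — one minor component, at one particular level, doubling output — can be caricatured with a toy response function. The function and every number in it are entirely invented; the point is only that one-factor-at-a-time tuning of the dominant levers can settle on a local maximum while a "table stakes" component hides a large gain.

```python
# Entirely invented response surface: glucose is the dominant lever,
# but a narrow sweet spot in magnesium doubles the titer. A greedy
# search that only tunes glucose never finds it.

def titer(glucose, magnesium):
    # Dominant lever: quadratic peak at glucose = 5 (made-up units)
    base = max(1.0 - (glucose - 5.0) ** 2 / 25.0, 0.0)
    # Hidden sweet spot: a narrow magnesium window doubles output
    mg_boost = 2.0 if abs(magnesium - 0.8) < 0.05 else 1.0
    return base * mg_boost

# One-factor-at-a-time: tune glucose with magnesium at a "standard" level
standard_mg = 0.5
best_glucose = max(range(0, 11), key=lambda g: titer(g, standard_mg))
local_best = titer(best_glucose, standard_mg)

# Sweeping magnesium as well reveals the doubled output
global_best = max(
    titer(g, mg / 100) for g in range(0, 11) for mg in range(0, 101)
)

print(local_best)   # 1.0 -- looks optimal if you never touch magnesium
print(global_best)  # 2.0 -- the "doubled output" hiding in one component
```

With 190 other components each potentially harboring a window like this, the asymmetry between the cost of an experiment and the size of the space is the whole problem.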
Abhi: That makes sense. And so we mentioned this engineer who’s trying to produce Keytruda.
Sterling: Sure.
Abhi: They’re evidently working, at the very beginning, in a Fiji-bottle-sized bioreactor.
Sterling: Yep.
Abhi: Doing these rounds of iteration, trying to get to something good. What is Iku’s proposal for a better way to do it?
Sterling: Our proposal is to rethink what it is that you’re trying to do when you run that experiment. So that Fiji bottle device gets used for two purposes, one of which is you want to grow cells and you want to grow them to feed a seed train. So you’re growing them, or you need that quantity of those cells. That’s one. And the second is that you need information and you need to be able to control the environment that the cells are in over time in order to get it. And so for this first set of things where you’re trying to grow a lot of cells or grow them up — great, perfect use for it. If you’re trying to extract the most amount of information and trying to control the cells, it’s a very limited way of doing it. Before starting on any of this, I’d actually seen some of these benchtop reactors and I asked them — if the thesis is that it gets better when you go smaller, why did you stop at the Fiji bottle? And the answer was, well, if we go any smaller, our sensors won’t fit. And that’s because they’re using off-the-shelf sensors. And if you ever see a photo of these things, it’s a hodgepodge of different things that have been kind of crammed in there. And that literally is — doing sensor design is its own field. And you need to design not just one type of sensor. You need to design many different types of sensors. And there’s also not that much of a benefit going from a Fiji bottle to half a Fiji bottle in size because of the manual labor and all these things. So our solution is to think about what’s actually the best platform for building sensors, and then can you put cells inside of it? And my last company was a robotics company. Any of the humanoids now that you see going on — I’m highly skeptical of the economics on these things — but any of the humanoids that you see, the core technology that enables them to move and interact with the environment — that was what we built. And that is a sensor problem. And it’s a sensor in a high-noise environment. 
And that is abstractly quite close to what we’re doing in biology, right? So the idea is, if you have a good place for building and placing sensors of different types around, now you’ve reduced the problem. And so, easy place to build sensors — now you just have to figure out how to grow cells inside of it and keep them alive. And if you pick a mass-manufacturable technique for doing that, it also solves some of the scaling problems. Because the challenge with controllable systems right now is that they still literally require somebody to come over, unhook everything, set it up. You can use disposables to take that down a bit. But it also takes — when you go larger, it takes more media. It’s more expensive to run it. It’s less repeatable. None of it makes sense except that it’s a difficult engineering problem.
Abhi: In a practical sense — I can buy that this form factor was chosen purely because our sensors aren’t small enough to fit in something smaller. What is the form factor that you guys have?
[00:37:02] Printed circuit boards as a medium for microfluidics, and the utility of lithography
Sterling: So the core differentiator is that we are reusing printed circuit boards, which are ubiquitous. They are in your phone, in your microwave. And we put microfluidic channels inside of them. And by doing that, it allows you to then have cells live inside. They can pass through, they can live inside there. And it turns out that making microfluidics previously that integrate those types of sensors is extremely awkward. And so you either don’t do it, or if you do do it, it’s still hand-finished. And so the big differentiator is everything comes straight from the fabricator ready to go. And this is a theme that has happened before. So in silicon photonics, which is where you take existing silicon fabs and you say, hey, can we use this in a new way? And not just to do integrated circuits, but can we now do things with light in it? Or in your iPhone, it has a light detector. That was a new way of using that. And the core there is that the process that’s used is called lithography, which is where you’ll take a mask, kind of like a snowflake, you project light down through that or something, and that causes certain things to react and certain things not. And lithography is a really powerful manufacturing technique because you get complexity for free. What that means is, normally if you’re doing traditional subtractive manufacturing, as your part gets more complex — you’ve got more nooks and crannies in here — it takes more time to make it, or you’ve got more tool changes, all these things. But with lithography, you pay that cost once. You pay that cost when you make your snowflake. But it actually doesn’t matter how complicated you make the snowflake for what’s down here. And so it pushes you to say, what’s the most complicated thing we can make here that has the most value? Because it literally costs the same. It doesn’t matter if it’s one line through here or some complicated maze. So that’s what semiconductors are doing. Then they apply that to photonics, right? 
LIDAR — printed circuit boards are made the same way. It’s lithography. And if you can leverage that in more complicated ways, you start both enabling capabilities that weren’t possible before, and also are riding a cost curve that’s really beneficial. So the idea is, every time that we have found as a society a new use for lithography, large industries get built off of that.
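Sterling’s "complexity for free" point can be caricatured in a few lines of Python. The cost model and all of its numbers are made up; it only illustrates the structural difference he describes: subtractive manufacturing pays per feature per part, while lithography pays a one-time mask cost after which per-part cost is independent of design complexity.

```python
# Toy cost model, all numbers invented. Subtractive machining charges
# per feature on every part; lithography charges once for the mask
# ("the snowflake"), then per-part cost ignores feature count.

def subtractive_cost(features, parts, per_feature=0.50):
    # Every nook and cranny costs machine time on every single part
    return features * per_feature * parts

def litho_cost(features, parts, mask=5000.0, per_part=2.0):
    # 'features' deliberately unused: complexity is paid once, in the mask
    return mask + per_part * parts

for features in (10, 1000):
    print(features,
          subtractive_cost(features, parts=10_000),
          litho_cost(features, parts=10_000))
```

Under these assumed numbers, going from 10 features to 1000 multiplies the subtractive bill a hundredfold while the lithographic bill doesn’t move — which is exactly the incentive Sterling describes to ask "what’s the most complicated thing we can make here that has the most value?"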
Abhi: And sorry, so where’s the lithography component coming in when you’re talking about building a new bioreactor?
Sterling: So the way that we make our chips — which you have, right?
Abhi: Yeah. Let’s — do we? Oh man. Here it comes out pretty small.
Sterling: Yeah.
[00:40:48] Anatomy of the Iku device
Abhi: I am seeing that there’s a bunch of circuits coming on from here. Walk me through the anatomy of this device.
Sterling: Sure. So the first thing is that it looks kind of cohesive, but it's actually six layers. Each layer is either carrying electrical signals or routing fluids. This particular chip has a channel that's a millimeter wide and about a hundred microns tall, roughly the width of a human hair, which is actually a great size for cells. You can flow media and cells into it, and it has all of the components that a benchtop bioreactor or a more controllable system would have. The way you make these is through lithography. For these lines and all of the features on here, there's a snowflake-like pattern that's made, and then the fabricator applies what's called a resist and an etch, which keeps the lines where you want them and etches away everything else. Then you make the next layer, and the next, and compress all of those together. So the way to think about it is as a 2.5D space: you've got a two-dimensional plane, but multiple stacked two-dimensional planes. Topologically, that allows you to do things like run a channel into the middle of a spiral and then come up, out, and around to get back out of it. It also allows you to put electrodes or different sensors in different positions relative to the fluid and the cells. That's kind of abstract, so let me give you a very concrete example. If you want a readout of the electrical signals of heart cells, cardiomyocytes, you want to read across those cells. That means you need to be able to put electrodes above and below them, or side to side. That's a primitive; it sounds very simple. And yet I will tell you that with other techniques it is a difficult thing to do. 
And so by switching to this new substrate, a whole class of problems that are traditionally quite difficult become substantially easier.
Abhi: And sorry, I don't have a great conception of where the cells go. On this green thing, are those holes where you put the cells?
Sterling: It is, it is. And I actually have a drawing I should send to you. You can put up a drawing on this screen.
Abhi: Yeah.
Sterling: Because that is also part of the problem: from the outside it literally looks the same as any printed circuit board. The second thing is, in biotech, a printed circuit board looks like alien technology. But yeah, it actually has small holes, which are the ways of getting fluids into the device. Then you can run the fluid past sensors, or, as is often easier, run the fluid past the cells and read things out on the fluid.
Abhi: And so there’s not a specific chamber here where the cells sit. They’re literally in a line formation as you run fluids through them.
Sterling: This particular chip is about a year old. In newer designs you have more of a chamber: you seed that chamber and your cells grow in it. But the powerful thing about using this technique for making microfluidics is that you can make a large number of variations, which is a difficult problem in traditional microfluidics because you would need to make new molds. A new mold is $25,000, $40,000; you need a mold maker to come in and machine it. The economics mean you then need to make a lot of parts from each mold. With printed circuit boards, it's easy to make variations and just do it. So we have a core catalog that we're building, the designs for particular applications, but with every new print run it's relatively easy to change the design to whatever the condition requires.
Abhi: Sorry, is it fair to say that typically microfluidics are not built using lithography, but you are building them with lithography?
Sterling: Microfluidics historically started with lithography. They were built using similar techniques used for semiconductors. And in most research labs, when people build microfluidics, that’s still the way it’s done.
Abhi: Okay.
Sterling: What you'll do is make a silicon mold and then cast a polymer over it. The polymer is called PDMS. Its desirable properties are that it's optically clear enough to see into, and it's gas permeable, which allows exchange of gases, so you can put it in an incubator and use it there. The downside is that you can also get evaporation. The problems are that you end up with a fragile output, and it's fairly labor-intensive. But people like it because you can do it in your own lab. The difference with us comes down to using lithography for the sensors and the fluidic channels together. And critically, in silicon fabs you need to be really careful about contaminants. If you need, for example, a gold-plated electrode, you cannot do that in a silicon fab, because you will contaminate the line; it's not allowed at all. Very bad. So with the printed circuit board as a medium, you can integrate many more types of sensor modalities than are possible with silicon. The second thing is that the reason to use silicon is that you want extremely fine features and detail; once you need something on the nanometer scale, it's kind of the only option. But our thesis is that cells themselves are more on the five-micron scale, which is a few orders of magnitude different.
Abhi: Yeah.
Sterling: And that’s actually the domain where printed circuit boards are a better place.
Abhi: So if historically people do use lithography for microfluidics, but only for the channels and not the actual electronics, what innovation allowed you to include electronics in the design of the microfluidic?
Sterling: Yeah, so let me qualify that. Microfluidics is a really broad term. For example, DNA sequencing at Illumina, right? That's using silicon for a microfluidic system, and doing the sensors in silicon. It's a really useful place for that. But it has limitations in terms of where in space you can place things. The example I gave earlier about reading across cardiomyocytes: you can't do that with silicon. There's no way to build a buried channel of the size the cells need to pass through, with electrodes above it; you just can't make it that way. So the core innovation is, first of all, conceptually thinking about printed circuit boards as a medium for making microfluidics. I'd been working with circuit boards for 10 years or something; it never occurred to me to put fluidics into them. I've been talking to people about this for three years and never met anybody who said, oh yeah, I've seen that before.
Abhi: So as of today, there’s no one combining circuit boards with microfluidics?
Sterling: Not for this application. There is for diagnostics.
Abhi: Oh, okay.
Sterling: Yeah. Professor Moschou at the University of Bath is really the pioneer of putting fluids into the circuit board straight from the fabricator. The reason that's so important, and why I keep coming back to it, is that you can do a lot of things by hand (academics are prone to this) that do not scale if you need to make hundreds of thousands or a million of something. If you're doing that, you need to pick something that is mass-manufacturable. In terms of cost and complexity, the cheapest things to mass-manufacture for microfluidics are either paper or molded parts, once you're building a lot of them. If you try to make microfluidics in a PCB in a lab, you can do all kinds of weird things; getting it compatible with the standard fabrication process is a different ask, partly because fabricators are not terribly keen on changing their processes. The second thing is that when you do it by hand, you're introducing variability from the beginning. When it's done in a fabricator, you're inheriting the hundreds of billions of dollars that have been spent cumulatively on printed circuit board development. It's been around for 60 years; entire industries are built on it already being very good. So let's just reuse that thing that's already quite good and low-variability.
Abhi: Could you give me some intuition for how the device is actually put together? So my mental conception of lithography is you’re able to create these very fine channels in the silicon via shining light through a mask. What’s the next step after that? Maybe you do this on multiple layers to have this multi-layered system of channeled —
Sterling: Yeah. For traditional silicon fabrication, it really is mask, etch, mask, etch, over and over. With printed circuit boards, each layer can be made out of different materials, which is where there's an enormous amount of flexibility; it's a much richer palette to start building out of. The foundation is what's called FR-4, a fiberglass structure (that's why boards are normally green). It comes coated with a layer of copper on top and a layer of copper on the bottom. That is the simplest circuit board you can buy: the cheapest one is just that, etched. Then they put down what's basically a protective layer so you don't scratch off the copper, and then a silkscreen if you want labeling and so on. At its core, that's the process. When you add in microfluidics, there are techniques for making the fluidic channels on one layer. Then, as you need, you can stack on another layer that also has fluidics, or route your heaters in between them, or pattern the electrodes or whatever your end sensor is on its own layer. You build it up, stack the layers together, you close it and then —
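As a rough sketch, the layer stack Sterling walks through can be written down as data. The layer names, order, and roles below are my own illustrative guesses based on his description, not Iku's actual stack-up:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str      # human-readable label
    material: str  # FR-4 fiberglass, copper, etc.
    role: str      # what the layer does in the stack

# Hypothetical six-layer build-up, top to bottom, loosely following the
# description above (real Iku designs are not public).
stackup = [
    Layer("silkscreen",    "ink",     "labeling"),
    Layer("solder mask",   "polymer", "protective coat over the copper"),
    Layer("top copper",    "copper",  "electrical"),  # patterned electrodes / heaters
    Layer("channel layer", "FR-4",    "fluidic"),     # etched microfluidic channel
    Layer("bottom copper", "copper",  "electrical"),
    Layer("core",          "FR-4",    "structure"),   # fiberglass foundation
]

electrical_layers = [l.name for l in stackup if l.role == "electrical"]
print(len(stackup), electrical_layers)  # 6 ['top copper', 'bottom copper']
```

The point of the data structure is the one Sterling makes: each layer independently carries either signals or fluids, and the vertical ordering is what lets electrodes sit above and below a channel.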
Abhi: So in V2 of this device, you have this chamber where the cells live. You have microfluidics connecting this internal chamber — maybe it’s external — to a bunch of pipes that feed in some particular axis of variation that you want to control during the media optimization process. And you also have embedded or maybe external sensors that are connected to the circuit board to have some sort of readout of what’s going on in this chamber where the cells live as the media is being applied. And what’s the output? What do you actually — what is the output of the system? I imagine one is maybe temperature, maybe another is internal humidity. What other axes are there that you can actually get straight off the sensor and straight off the device?
[00:57:09] What sensors are on the device today?
Sterling: The way to think about it is that for any kind of cell culture, there's a set of table stakes you need to cover: temperature, pH, dissolved oxygen, and, since we're flowing things through, flow rate. Those together are the core set of things our system currently reads. The next layer is the electrochemical sensors. Being able to read impedance is very useful: if you read the impedance of the media itself, you can detect some changes in how the media is evolving, and if you place the electrodes in relation to the cells, you can also correlate cell growth with impedance, based on how charges end up hitting against cell membranes at different frequencies. You can also measure conductivity, which is partly used as a reference for the impedance reading, because impedance can be interfered with in a lot of ways, so you need a reference point. And then you can do other electrochemical techniques, like cyclic voltammetry. But the readouts right now are impedance, flow, dissolved oxygen, pH, and temperature.
Abhi: Theoretically, I imagine all of these sensors already have miniaturized versions available. Is that true? Not true?
Sterling: Not the case. Not the case. Nothing our system can do at the moment is something you couldn't have done by hand or with a very custom setup. The challenge is: how do you do more than two or three of those at a time, and how do you build them economically? For example, the chip that I showed you, in any kind of reasonable quantity, is like $3 or $4, and you can actually still get it down to less than a dollar. If you're buying sensors off the shelf, the economics start killing you very quickly. The second thing is that it's a challenge to integrate those sensors. A big idea in robotics, or in engineering any kind of real system, is that interfaces and connectors are what will kill you; they're very common points of failure. So the best solution is no connectors. When you build all the sensors on the same platform, you essentially get no connectors. That's the trade-off: harder, more difficult engineering at the outset, but lower variability and better economics.
Abhi: I imagine you get dissolved oxygen, pH, and a few of these other parameters. But I imagine there are still some you're missing, in the sense of: is the protein I'm expecting to produce actually being produced?
Sterling: Yeah.
[01:01:36] How do you use the Iku device to perform media optimization?
Abhi: So it sounds like you're able to optimize to a threshold, and after that you need the analytical chemist to come back in and do their thing.
Sterling: Our goal is to make the analytical chemist a confirmation step rather than a bottleneck. The reason comes down to lessons from control theory. Any system you're trying to control (in this case, cells) moves at a certain rate, and if you want to dampen or amplify that behavior, you need to be able to read it fast enough to come in and make an intervention. Any time you take a sample and do an offline measurement, that loop is normally too long. If the loop is two or five minutes, okay, maybe you can work with that. If you need to take something to your analytical chemist, it's probably hours or days, and that information is not useful for the actual control of the culture. So what you want are real-time sensors, sensors that are truly integrated into the thing. The sensors we're using now really are just the table stakes that let us start building in these other sensors; if you don't have those core sensors, you can't even keep the cells alive, so there's no point. But having live readouts of monoclonal antibodies: that is what we're building toward in the device. It's building the optical sensors in. It's leveraging the chemical biology techniques we have right now for getting signals out of cells. All of those are compatible with our system. And that's where I think the real value gets unlocked, because there's a large difference, philosophically, between just reducing the cost of something and changing what questions become askable. The questions that become askable, and the experiments you could run, are what I think is so powerful about using this substrate. You make the core thing: can you grow cells in high throughput in this dynamic way? Okay. 
Once you have that, every new sensor system you put in gives you more lenses into it. And this comes back to why lithography is so powerful. Normally you have to make a trade-off: every sensor I put in costs me money, so I'm only going to put in the sensors I need. But if it doesn't cost us any more, or if the cost is basically trivial, then the idea is: let's just instrument it, and keep instrumenting it. Classically you would say, well, I don't really care about those features; those things don't matter. But what we're moving toward is having fewer priors and less human interpretation on the streams of data coming in. For example, impedance sensing does not give you a simple number. It gives you a complex number, and not just one: a complex number across hundreds of frequencies, and it's changing over time. So you're getting back this large readout, and if you and I try to decode it by hand, it can be difficult. We can argue about this, but machine learning is getting pretty good, arguably quite good, at handling those types of things. The way I separate these: there are what are called narrow-band sensors, and then there are broadband sensors. A narrow-band sensor is, for example, a temperature readout. You resolve that to a resistance value or degrees Celsius, and you want it to respond only to temperature and nothing else. Very easy to interpret. Same with your lactate sensor: you want something that responds only to the lactate in the media, nothing else. These narrow-band sensors are meant to reject everything else. And then there are what I'll call wider-band sensors. If you take a microscope and put it on something, that's fairly wideband, right? 
There's a lot of stuff going on in there; there's not just one answer about what's happening, and you sort of select which things are more relevant to the questions you're asking. Things like optics, impedance, some of these other electrochemical techniques, the magnetic fields in there: when you have machine learning on the other end to interpret that, it would be surprising to me if that's not useful.
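To make the broadband point concrete, here is a toy sketch of what an impedance-spectrum readout looks like as input to a model. The RC circuit and every number in it are invented for illustration; this is not Iku sensor data:

```python
import numpy as np

# Toy broadband readout: complex impedance at many frequencies, using a
# simple series RC model:  Z(f) = R + 1 / (j * 2*pi*f * C)
R, C = 1e3, 1e-9                  # hypothetical: 1 kOhm in series with 1 nF
freqs = np.logspace(3, 7, 200)    # 1 kHz .. 10 MHz, 200 sample points

Z = R + 1.0 / (1j * 2 * np.pi * freqs * C)   # one complex number per frequency

# Flatten the complex spectrum into a real-valued feature vector
# (magnitude and phase at each frequency) that an ML model could consume.
features = np.concatenate([np.abs(Z), np.angle(Z)])
print(features.shape)  # (400,)
```

A narrow-band sensor would collapse all of this to one scalar; the broadband approach keeps the whole vector (and its change over time) and lets the model decide which parts matter.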
Abhi: This is maybe a naive question, but at the end of the day, all the signal you’re able to extract from this device is gonna be some electrical property of the tiny little bioreactor you have in there. Is that correct?
Sterling: No, the big picture is that we're integrating all of these different modalities. We are integrating the optical modality. My dream here is to get Raman sensing in, multiplexing Raman sensing across the chip, having that way of looking at it, alongside the lactate and the glucose and the monoclonal antibody readouts. Whatever those domains are, having them together in one instrument is extremely powerful. So that's the goal.
Abhi: Okay. Interesting. I imagine some of these variables are immediately interpretable; there's a good value you should be reaching, and dissolved oxygen is probably one of those. For the more complicated ones where you don't know whether a value is good or bad, like glucose or some other metabolite, where does the ground truth come in? Is that where the analytical chemist comes in and gives one singular data point of what's good, and the purpose of the system is to correlate everything you put into the system and all the output variables to that ground truth? Or something else?
Sterling: I think a useful lens for this comes from a book called How to Measure Anything. Highly recommend it; this book changed my life. The idea is the expected value of perfect information: any reduction in uncertainty has some cost to it. So when we're taking a measurement, there's an economic aspect and therefore a trade-off. Knowing the temperature of this room has little value to us; it doesn't matter whether we're off by five degrees or 0.1 degrees. For semiconductor manufacturing, it matters quite a lot; you need a really tight tolerance there. If you take that lens: certainly, overall, there's a need for precision on the readouts of how much antibody we get out and its quality. But for earlier parts of the process, do you need that level of precision?
Abhi: Well, I guess at the end of the day, I imagine the whole purpose of the process is to get to antibody production. But I guess, is part of what you’re saying that there are earlier intermediate benchmarks you want to hit before you get to the antibody?
Sterling: What I'm saying is that your ultimate readouts are yield, titer, quality, and stability, pretty much in that order. Those are the things you care about. Even on yield, though, there's still variation inherent in cells. Every batch you run, even though manufacturers try to reduce variability, you're still going to get some variation. So if you take a sample and learn the titer or the yield to two decimal places, okay, great, but your process variability is 1% or 2% anyway, so knowing it to more decimal places doesn't really help you. The second part is: if every measurement has a cost in some sense, can you change your measurement system so that you get the information you need in a more economical way? Part of doing that is loosening constraints when possible. Ultimately, you're still going to run it on your benchtop and your pilot systems and characterize it there, because you do need ground truth from those. But in terms of finding the right media or conditions: do you need two decimal places of accuracy for that? Do you need all of those readouts? No.
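The point about precision beyond process variability can be checked with one line of arithmetic. Treating process and measurement error as independent, total uncertainty is the root sum of squares; the 1-2% figure comes from the conversation, while the measurement precisions are hypothetical:

```python
import math

sigma_process = 0.02  # ~2% inherent batch-to-batch variability (quoted above)

# Tightening the measurement barely moves the total uncertainty once it is
# already well below the process variability.
for sigma_measure in (0.01, 0.001, 0.00001):
    total = math.sqrt(sigma_process**2 + sigma_measure**2)
    print(f"measure to {sigma_measure:.3%} -> total {total:.4%}")
```

A 1000x improvement in measurement precision (0.1% to 0.001%) changes the total uncertainty by well under a tenth of a percent, which is Sterling's argument for not paying for extra decimal places early in the process.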
Abhi: Is a good way of thinking about this that you start with the Iku device at the very beginning, and once you're happy with what you see, you move on to the benchtop device, having narrowed your search space down to a very small number of parameters?
Sterling: Right. The process looks pretty much the same. The difference is the quality of the answer you came to and the speed with which you came to it. The second part is how many of those benchtop experiments you needed to run, because there's a difference between running them in an exploratory sense versus a validation sense. In a validation sense, you're just trying to make sure things are repeatable, so you run, say, three to five replicates. But if you're already quite confident you're at the optimal point, it doesn't make sense to do the exploratory experimentation there anymore.
[01:14:44] Does media optimization survive scale-up?
Abhi: Moving on to the next stage: you've done the Iku optimization and now it's time to move to the bigger vessels. How worried are you that moving the cells to a physically larger space forces the media optimization in a completely different direction?
Sterling: It's definitely possible, and every time you change physical shape and geometry, you do get some variation. The confidence comes from a couple of places. First, empirically, every microfluidic system that has flow integrated into it ends up correlating quite well with the larger system. The reason people hesitate is that they think of microfluidics without flow, with its recirculation effects. That's actually the key question: do you have flow in this thing or not, and how representative are that flow, those shear forces, the oxygen transfer rates, and the gradients you create of what's going on at scale? That's one part. But let's say you don't buy any of that. The easier argument is that the problem decomposes into two broad parts. There are parameters that change with scale: things like hydrostatic pressure definitely change with scale, you're not getting away from that; certain mixing times change; you can get pockets in very large reactors. But then there's a set of parameters that empirically seem to be scale-invariant, and for the most part, media optimization seems to be scale-invariant.
Abhi: Do you imagine in the ideal setting that this is a closed-loop system that just continuously tries different media optimization parameters, feeds it all into a model, it plans the next round of media optimization, and that just goes in a loop?
Sterling: Yeah. So aside from running the experiments, how do you actually interpret and decide? Clearly the entire zeitgeist right now is about replacing the control layer with AI and models. Whether you can do that on experimental design, where a model reads a bunch of papers and then decides this is the thing to build, I'm less convinced is the best way. But for these types of experiments, it certainly seems the way. It's actually key to making the whole product, because otherwise you're handed back so much information that the problem shifts to processing it. One of the lessons I've taken from talking to people who have tried things in media optimization, or tried cloud labs, is that there's a lot of hesitation around sharing cell lines. Understandable. It also comes down to information about the results from those cell lines. For example, one company running experiments externally was contractually not allowed to look at the results of some of those analyses. It's really hard to improve or build your own model if you cannot look at the results. What we're building is a federated model: customers run the device on-site, pull the model, get a new experiment design, run it, and then the model weights are updated. This is similar to how Tesla's self-driving was trained. Federated learning resolves that IP-sharing constraint. And the reason that's so powerful is that you now have a model learning from diverse experiments across different cell lines, at different sites, but on the same hardware. That's really key, because otherwise there's too much experimental variability in the data you're getting back, and you're not going to generalize well. 
The hedged bet here is that if the problem is not tractable through machine learning and models, we are still building the highest-throughput, most economical, and fastest way to get to the answer by running experiments. And if it is tractable, we're going to have the best model for running those experiments. I think the answer is actually going to be a blend of both. I do not believe that experimentation is going away, but I do think we'll get to much better answers much faster. That's the ideal: once you have that model, you can feed it in even earlier in the process, when you're doing your strain engineering. Coupling those together becomes possible once you have a model.
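The loop Sterling describes, where each site trains on private data and only weight updates travel, can be sketched as federated averaging. The toy linear model, the synthetic site data, and the learning rate are all invented for illustration; this is not Iku's training code:

```python
import numpy as np

# Schematic federated averaging: each customer site trains on its own
# private data and shares only a weight update, never the raw results.
def local_update(weights, X, y, lr=0.1):
    # One gradient step of least-squares linear regression on site data.
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])        # "ground truth" the sites share

sites = []                            # three customer sites, private data
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=50)
    sites.append((X, y))

weights = np.zeros(2)                 # shared global model
for _ in range(200):                  # federated rounds
    updates = [local_update(weights, X, y) for X, y in sites]
    weights = np.mean(updates, axis=0)   # server averages the updates

print(np.round(weights, 2))
```

The global model converges toward the shared structure even though no site's raw data ever leaves the site, which is the property that resolves the IP-sharing constraint.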
Abhi: What parameters does the model actually intake? I imagine it takes all the inputs you’ve given into the system, all the outputs you get out of the system, and maybe what the system is actually meant to produce, and the strain itself. Is that everything or are there others?
Sterling: That’s — I think that’s a complete view.
Abhi: Okay. If the belief is that you'll probably still need human experimentation to help the system along, and maybe the ML won't fix everything zero-shot, can I conceptualize this as making 10x media-optimization engineers who can iterate much faster thanks to this model system? Or do you imagine bioprocess engineering as a pretty standardized field where there are the first 10,000 things you try, and in the old world you got to try maybe 5% of that, while in the new world you try all 10,000? But ultimately it's the same set of parameters the media-optimization engineer is tuning.
Sterling: So the question is whether we're tuning a larger set of things than the engineer would?
Abhi: Yeah. Like, all the knobs that the engineer usually gets to tune — do they also get to tune in the system? Or is it a subset, or maybe even larger?
Sterling: It’s a superset.
Abhi: Superset. Okay.
Sterling: You're getting to tune far more, and it's a superset in a few different senses. The first is that bringing the economics down and making it automatic means that even if you previously had the capability to change a variable, you didn't have the time budget or capital budget to actually exploit it. The second is that it allows you to make finer interventions, with more feedback built in. The reason the real-time sensors are so important is that what you actually want is to anticipate what the cell wants before it needs it, because there's always a delay between when something is introduced into the environment and when it's taken up by the cell. Ideally, I want to see those signals before the cell needs the intervention, and to do that, you need real-time sensors picking up on them. That's a domain that's just not possible in other systems.
[01:24:32] $8/lane vs. $20,000/lane: the economic utility of Iku’s device
Abhi: I assume there are microfluidic bioreactor systems that at least exist in the literature. How much improvement do people generally see by going to these systems versus the Fiji-bottle-sized benchtop?
Sterling: Right now? I would say close to zero, and the reason is economic. One lens for looking at it is the all-in cost of getting that dynamic cell-culture data, of that one experiment. There are two components. The first is CapEx: how much did it cost to get this device in and usable, really CapEx per experimental lane. The second is OpEx: every time we run the experiment, how much does that cost? To give an example: for benchtop reactors, depending on whether you go with the gold standard or some of the derivatives, say the CapEx is between $5,000 and $20,000 per experimental lane. Then for OpEx, you've got not just the media; you need the time to grow the cells up to seed it, the human coming in and running it, and the disposable, or cleaning and sterilizing the vessel. So it ends up being around $1,500 to $2,000 per experiment. The closest microfluidic system in capability is only four lanes and costs $80,000, which still gives you a per-lane cost of $20,000, and the disposables are, I think, still around $500 to $700 each. So there's not much economic reason for it. That product is on the market because it cuts down on media utilization, and that's why I think it's not a very successful product. What we're building is different. In philosophy, there's a difference between changes in degree and changes in kind. You take a little step, and another, and it's different, but not qualitatively different. When you 10x or 100x something, all of a sudden new things get unlocked. 
And so we’re looking at a CapEx of $8 a lane, and we’re looking at an OpEx per experiment of like $20 or less, right? And so those two things together really transform what’s — and then if, as I said, you start integrating more sensor systems into it, those two parts are kind of fixed, right? The CapEx and mostly OpEx on that. But the amount of data and the amount of value that you can get out of it — that’s where I think there’s much more room to go.
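A back-of-envelope sketch of the comparison Sterling lays out, using the figures he quotes. The 100-run amortization horizon is my own illustrative assumption, not something from the conversation:

```python
# Back-of-envelope cost-per-experiment comparison using the figures quoted
# above. The number of runs over which CapEx is amortized is an illustrative
# assumption; the CapEx and OpEx figures are the ones quoted in the interview.

def cost_per_experiment(capex_per_lane, opex_per_run, runs_amortized=100):
    """All-in cost of one lane-run: amortized CapEx plus per-run OpEx."""
    return capex_per_lane / runs_amortized + opex_per_run

# Benchtop reactor: up to ~$20,000 CapEx per lane, ~$2,000 OpEx per run
benchtop = cost_per_experiment(20_000, 2_000)

# Nearest microfluidic system: $80,000 for 4 lanes -> $20,000 per lane
microfluidic = cost_per_experiment(80_000 / 4, 700)

# Iku's target: $8 CapEx per lane, ~$20 OpEx per experiment
iku = cost_per_experiment(8, 20)

print(f"benchtop:     ${benchtop:,.2f} per experiment")
print(f"microfluidic: ${microfluidic:,.2f} per experiment")
print(f"iku:          ${iku:,.2f} per experiment")
```

Under these assumptions the per-experiment cost is dominated by OpEx for the incumbent systems, which is why Sterling frames the $20-or-less OpEx target as a change in kind rather than degree.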
Abhi: Instinctively — if I understand correctly, both existing microfluidic systems and your system have lithography as the underlying manufacturing component. And yours has circuits integrated, so you can get these sensors. But if the underlying creation process is the same, why are microfluidics so much more expensive than your device?
Sterling: So that device I was just referencing is not made with lithography. It’s a molded device. But the key thing actually is that they don’t have active — there’s a big divide in microfluidics between passive and active microfluidics. So passive is like paper microfluidics or something, right? Your pregnancy test — that’s paper microfluidics. It just does one thing, doesn’t have feedback in it, doesn’t really have control and regulation. And then really separate is, can you come in here, can you sense things and change things as they’re going on? And most of the systems right now do not multiplex the control aspect across a large number of things, and the sensing part of it, and some of the actuation part of it. If you have to use molded plastic, there’s kind of no way to integrate sensors easily. It doesn’t come out of the factory with all these things in it. You still have to go and add all these things together, so then you’re adding in labor costs there, right? And all that. So even if some of the end result is, in certain capabilities, similar, the upstream manufacturing of it — because you can’t integrate everything together — really constrains your economics on it.
Abhi: And so even if the lithography-produced microfluidics device that’s potentially on the market — that alone may cost something similar to the Iku device. But all the sensors that are added on increase the cost.
Sterling: Right. Let me back up here and say that lithography as a technique does have this property where the cost doesn’t scale with how complicated you make it. The big difference is, in silicon, the base cost for making it is substantially higher than the base cost for making things in printed circuit boards. So in general — this is true of almost all forms of manufacturing, to my knowledge — as you increase precision requirements, you increase cost. And it tends to scale logarithmically, right? So if you — there are two ways that you’re using silicon and lithography, which is either you will make it as a mold — so you’re really just using the lithography as a mold, and then you’ll peel this casted thing off of it. Or people will actually use the silicon and make the channels in there. But the problem with silicon is it’s really expensive. In general, we do not make disposables out of things that are made in silicon lithography. Because to make something this size — probably $400 or something.
[01:32:05] Why PCB microfluidics didn’t exist 10 years ago
Abhi: Why — if it seems like the big innovation here is combining lithography — or doing lithography on the circuit board as opposed to doing it either in silicon or via a mold — both of which seem more expensive than the printed circuit board — was it simply a matter of realizing that you could do this on circuit boards and dramatically reduce your costs? What — why did this not exist 10 years ago?
Sterling: Right. So I think the first is that different worlds don’t talk very much, and in this case, the tool-builder world and the tool-user world are very distinct. And the second is that — to answer the question of how did I come to it — I was in my apartment in São Paulo, and I’d been really digging into biofilms. I was like, okay, so much of this is about the concentration of these things, and they’re creating these little microenvironments and all of this. And then I was really — at the time there was this concern about, are we going to have enough bioproduction capacity? And what I’d seen work before is in traditional chemical synthesis — they switched to continuous flow microreactors. So Corning Glass, that makes the glass in your iPhone, they also make chemical reactors. And the benefit of this is that you can flow things together. They react quite quickly. You can pull the heat off and things, and it’s really consistent. The reason you can’t use that in biology at the moment is because, in order to — traditional chemical synthesis, you really are pretty much just controlling flow rate. And the reactions happen really fast normally, right? You just mix them together and it’s done. But in biology, right, you need sensors in order to see what’s going on. The environment is much more tightly controlled, right? There’s more aspects to it. And cells themselves are again perturbing the environment around them. So that was the lens I was looking at — how do you bring this thing that clearly worked in chemical engineering to biology? And also thinking about these biofilms. And so I studied mathematics. I literally wrote this down as a set of axioms. I was like, what do you need? You need to be able to hold fluids apart. You need to be able to combine them together, right? You need to integrate sensors of different modalities so that you can adapt it. 
It needs to be small, both for mass transfer reasons — because as you get smaller, there’s more surface area around. And the limitation from any reactions is literally just how fast can you get things from the gas phase into the liquid phase. And that’s purely a function of surface area. Even in large reactors when they’re using bubbles, the bubbles are just creating surface area. And it’s about diffusion across that. So if you go small, you get that. You go small, you also get laminar flow, which is really, really nice because it takes problems that are normally chaotic and it linearizes them. So there’s a great experiment everybody should watch on YouTube of — you put a couple drops of dye into this gel, and the gel has a really high viscosity, and then they stir it up this way, right?
Abhi: And they go backwards.
Sterling: Yeah. And they go backwards, right? And that idea — well, why can you do that? You can do that because in a sense it’s linear, right? Whereas in a chaotic system, you’ll get to some point and now you can’t tell which path you were at before, right? So these are things. And then you need to be able to run a lot of them, both for — originally it was for throughput, but that throughput idea also translates to data parallelization. And then if you need a lot of them, you also need it to be manufacturable, right? Mass-manufacturable and needs to come down. Okay, those are the axioms. I was like, these are the things I need. And then I literally went through every manufacturing technique that I could find. I mean, truly everything, down to like, what are they doing with 3D-printed glass at the moment. And you can just knock these out for a variety of reasons. The molded polymers don’t work because you can’t integrate the sensors in them quickly. 3D printing doesn’t work at all — it doesn’t matter what the modality is, because the infrastructure isn’t already there, right? So if you need to make a bunch of disposables — which, great business, always make disposables — if you need to make a bunch of disposables, then you should pick something that you don’t need to have a lot of capital in order to scale, right? So you need an existing manufacturing industry for it. And all these came back, and then ultimately I was like, let me just reframe it. I was like, let’s just pick one of these and optimize for that. What’s the best way to build sensors? I was like, well, printed circuit boards are really good. And I was like, okay, can I then build the rest of this in here? Let me just take a common technique — can I just select some subset of this problem, optimize for that, and then force the other ones to fit into it? And I was like, yeah, okay. Sensors are good there. It’s good on manufacturing. Okay. And then after that, went to the literature. 
It was like, okay, here’s the one person who’s actually done this. Go fly to England, go work with her, and then —
Abhi: The University of Bath person.
Sterling: Yeah.
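Sterling’s mass-transfer point from a few turns back — that gas-to-liquid transfer is a function of surface area, so shrinking the reactor helps — can be sketched with the classic surface-area-to-volume scaling. The spherical geometry here is purely illustrative, my assumption rather than anything specific to Iku’s channels:

```python
# Surface-area-to-volume ratio for a sphere of radius r:
#   SA/V = (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r
# so halving the characteristic length doubles the surface area available
# per unit volume for gas-liquid diffusion. This is the generic scaling
# argument for why small reactors (and bubbles in big ones) transfer faster.
import math

def sa_to_volume_ratio(radius_m):
    surface_area = 4 * math.pi * radius_m ** 2
    volume = (4 / 3) * math.pi * radius_m ** 3
    return surface_area / volume  # simplifies to 3 / radius_m

for r in (0.1, 0.01, 0.001):  # ~tank scale down toward microchannel scale
    print(f"r = {r:6.3f} m  ->  SA/V = {sa_to_volume_ratio(r):8.1f} per meter")
```

The same ratio explains the bubble trick in large reactors: sparging doesn’t change the tank, it just injects lots of small interfaces, each with a high SA/V of its own.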
Abhi: Okay. Interesting. One person in the world has stumbled across this idea. Well, I guess if every technique seems to have its mild drawbacks and there wasn’t a single optimal one that you stumbled across, what is the drawback of going for printed circuit board?
Sterling: Okay, well, I will tell you — from a — there’s the problem that you might think, and then there’s the problem you’ll discover. The problem you would think is that it’s a kind of weird thing. You have to get people to adapt to it, or — also, you do have to design each of those sensor domains. Just because you pick a good palette to work with, you still have to do a bunch of work. You don’t — all this — those all end up actually being not that big of a deal. The harder problem is this, which is that nobody understands it.
Abhi: That’s true.
Sterling: Truly, nobody understands it.
[01:39:24] Who is the customer?
Abhi: I guess, who are you selling these to? I can imagine one customer is academic labs. Maybe — and I imagine the much bigger customer are people either preparing drugs for clinical trials or generics manufacturers. How — one, how willing are they to buy this stuff? And two, is there a customer base I’m missing?
Sterling: Yeah, so I’d say our first customer is actually the US Army.
Abhi: Oh.
Sterling: And that’s for doing something quite different from media optimization, but still within the realm of — you need to explore a larger space and current ways of doing that are insufficient. The broader answer here of who’s the customer — the customer who feels the most pain for this are the large CDMOs. I’ve spoken to people who have worked for those places. What is the thing that they talk about every year? It’s yield. That’s it. They actually don’t have — if we’re talking about degrees of freedom for them as a company, they don’t have that many, right? They don’t come up with their own products. They aren’t allowed to innovate on it once the process is set. They have extraordinary downside risk if they make a mistake. And they are in a competitive marketplace where the pharma companies are getting the bulk of the value capture, right? They ultimately own distribution. And so those features make them very desirable buyers for it. But media optimization — it both happens within pharma companies — sometimes pharma companies manufacture their own things — but also the process of running dynamic cell experiments, that dynamic cell culture, that is pervasive. That’s where I think the largest opportunity really is — all of these problems in biology, many of them ultimately just reduce to, how many dynamic cell culture experiments can you run? And so this is true for new antibiotics discovery. It’s true for doing things in organ-on-a-chip. It’s true in cancer research. If you actually just take the lens of, what are people trying to get out of this experiment? Well, they need to be able to come in, they need to be able to perturb things over time in this, and they need to be able to read out during it. Maybe that’s too big of a lens, right? Maybe there are particular areas where our system is not going to be compatible. But there’s enough of a core there.
And the justification for this empirically is that you already see it — every time that bioreactors have gotten smaller and more automated, they diffuse more into the ecosystem. It gets adopted more and people continue to want more automation, more experiments, and cheaper on it.
[01:43:14] What is the ultimate goal of Iku?
Abhi: Do you view Iku as not just a media optimization company? The hope is that whatever the final device ends up looking like, it’s useful for almost anything that’s an in vitro system where you’re trying to screen many things across it.
Sterling: Yeah. Our goal is to produce 99% of the world’s dynamic biological data. And the reason that that’s achievable is because we do not produce that much right now. And by increasing the throughput, by increasing the relevant modalities that we’re putting in and those conditions, I think that is a totally achievable thing. That’s where I started in the beginning talking about this interface between computation and biology and there being that mismatch. That interface, that layer — that’s what we want to build and that’s what we want to own.
Abhi: I’m curious — among the customers right now — maybe the military project is its own direction — for selling this to either CDMOs, pharmas, generics manufacturers — my impression is that all of these groups, like you said, don’t like variability and so they’re very hesitant to buy new technology that promises the sky and the moon. What’s the hardest part about selling to these people and how do you reassure them that things are gonna be fine?
Sterling: I certainly underestimated the importance of that aspect here. I’ve sold a lot of things in my life so far in very different domains, and I will say that not only in biotech, not only in pharma, but for biopharma manufacturing, the level of conservatism and scrutiny is extraordinarily high. So the wedge or way of getting into that distribution — there are a few examples. The first one is the kind of traditional way, which would be, who do the CDMOs look towards? The CDMOs are not going to adopt it until they’ve seen a pharma company use it. Pharma companies are not going to talk to you until you have a paper published from probably a premier lab of some sort, right? The premier lab is not going to touch anything until at least you have a white paper and some connection. In order to do that, you need to build the device. So how do you resolve this problem of getting to that end customer? The first is that there are ways of augmenting existing instruments. So the advantage of it being a standalone sensing system is that you can come in as just an add-on to something — you’re still gonna have the same economics, but now we can offer you some more data out of that same thing. And you can — that’s a lower threshold for them and it’s not involved in the actual — they can just throw that part of the data away if they don’t like it, right? If it’s not useful. So that lowers some of it.
Abhi: I guess it’s cheap enough such that it’s not a major investment to try.
Sterling: Right, right. The second is — and a big — I was just rereading Geoffrey Moore’s Crossing the Chasm, which — have you read this?
Abhi: I have not.
Sterling: Okay. I highly recommend it. It’s been on my bookshelf for eight years, 10 years. The other day I was just like, I should reread this, and — my God. And the big idea is that what counts as a market — what counts as a market is not only that people are buying something repeatedly, but critically that it’s a group of people who talk to each other and look at each other, right? And so you’ve got the academic labs who look at each other and talk to each other. And then the pharma companies look at each other and talk to each other. And then the CDMOs look at each other. But the key thing is actually the big CDMOs — they don’t talk that much. They don’t associate that much with the little CDMOs. But those are ones actually that we can sell to and get some evidence coming in there. So there’s building ancillary systems that can tack on to existing things for getting in there. And then there’s the other way, which is just — be so good they can’t ignore you, in a sense.
Abhi: I was gonna ask — I imagine the gold standard here is you show one of these CDMOs, here’s the cost and titer of expert-produced media versus the cost and titer of expert plus Iku media.
Sterling: Right. Well, actually the gold standard is not that we say it — the gold standard is that Eli Lilly says it.
Abhi: Sure. Yeah.
Sterling: Right. Because that’s their customer. And those pharma companies own those CDMOs.
[01:49:07] What does the validation evidence need to look like?
Abhi: Yeah. I guess, has any pharma done this and produced — or even you internally have done this side-by-side comparison and you have this very clean result to share to them? Or is it more like you’re still in the phase of seeing the magnitude of improvement the system gives?
Sterling: It’s more like — it will be extremely surprising if you do not get the — first of all, if you don’t get the economics. And then also, all evidence points to being able to run more and different experiments gets you to a better answer. So you can kind of work back from that.
Abhi: If you follow the trend lines, it almost necessarily has to be the case that this is better than what’s currently being used.
Sterling: Yes.
Abhi: Okay. Yeah. Has there been — in the early initial deployments of this — and like, will there be a white paper coming out in the next year of, here’s what we found by using the Iku system?
Sterling: Sure. I would say it will necessarily be more dull than that. I would separate it between two — there’s the hype marketing stuff to do, and then there’s what a CSO actually looks at, right? And from my interaction with scientists as a breed — first of all, they are a breed, and secondly, they are allergic to any hype and any kind of promotional stuff here. So what they want to see — and I don’t need to be creative here — what they need to see is your experiments running your device with some readout, and then you take the gold standard and you replicate that, and it needs to be at the same facility, right? It needs to be that. You need to show those two. And basically the graph needs to be obvious enough that it’s like, okay, I can see how these correlate and they scale. They don’t actually need to be perfect. None of them are perfect. This is true for doing scale-up from the benchtop to the pilot and all these things, right? It’s really just a series of graphs. Like, okay, this thing maps onto this, maps onto this thing. And then the next step is, actually you need that replicated at another facility. So for pharma to adopt something, it’s not even just that — you need one lab, you need it to be out of three different labs who all get — because ultimately their thing is about repeatability.
Abhi: Reducing variability. Okay. Yeah.
Sterling: Reducing variability, but then also repeatability. Yeah.
[01:52:14] What would you do with $100M equity-free?
Abhi: If someone were to hand you a hundred million dollars equity-free to push forward the mission of Iku as much as possible — one, I would be curious where you would spend the money, and two, what are the axes of improvement that still lie ahead for the future of the device?
Sterling: Yeah. I think the first thing we would do is really build this high-throughput perfusion system. I would integrate Raman sensing, and I think that’s the — I think that’s a killer app. I think if you do that, it unlocks so much. But also, if you go back through the literature, people have been talking about the value and use of having a high-throughput perfusion device for a quarter century, and that was before we had the machine learning or AI to also interpret that data. That was before the problems that we’re encountering also started getting harder to manage. So I think that’s very clearly there. Along the way there, you build organ-on-a-chip, high throughput. That’s also a constraint right now. One of the larger manufacturers — they’re moving towards it a little bit, but they still have some trade-offs as they try to move to it. Where I think it’s actually really interesting — and I hadn’t gone down this path until recently — is droplet microfluidics. So the idea of — in some sense, what we’re doing with perfusion is, okay, let’s take a benchtop bioreactor and all that control, and let’s shrink that down. The droplet microfluidics is more like, let’s just take a test tube and shrink it really small, right? And if you shrink — that’s where 10x Genomics — that’s a form of droplet microfluidics. It tends to be more of an integration of microfluidics with some chemistry, some chemical technique to help with signaling or help with the formation of particular types of droplets that allow memories that you can diffuse through and things. But I think what’s really underexplored are two things. From the customer side or from the data side, it’s higher resolution, more temporal datasets from these, right? Getting back to this idea that cells are time-varying and sensitive and highly parallel in a bunch of different ways.
The ability to shrink that experimental system down that much, explore the space, but not lose the temporal element the way that it is right now for the most part — I think that’s really, really powerful. And there are a couple of techniques people have been trying to get down to it. There’s a technique for getting it down to like seven minutes now. But it’s still — there’s still trade-offs. When I look at it, I’m like, oh, they still haven’t resolved these trade-offs. So that’s one aspect I think could be enormously valuable. And then the second thing is, the droplet microfluidics right now — they’re really focused on the formation of the droplets and these things coming through. They are not really chaining things together. And in the literature there are all of these almost like transistor parts, right? Little parts that people have built. And you can see there’s this dream of building truly lab-on-a-chip, right? And the problem is that right now, as you try to build a lab on a chip, you try to do these things — there just aren’t enough of the subsystems or steps that you can link together on there. So it’s like, you do a set of these and it’s, okay, we gotta come out of the chip, right? And then you kind of lose all of it. And so I think it’s really only in the past five years, and then with our technology for being able to actively manipulate things in there and do the feedback — I think rather than conceiving of lab automation as automating manual tasks, which has a hard upper bound on how much efficiency and capability that you will get out of it — let’s just start doing what we did in other industries, which would be, no, no, no. Okay, we have to start over and we have to build some of these things in here, but we’ve already built a lot of them. Now why don’t we actually start building that lab-on-a-chip?
[01:57:31] Lab automation is in a strange place right now
Abhi: I remember, for my lab automation article, one person remarked to me that it’s a shame that liquid handlers have become so popular, because biology happens at much smaller scales than that. So you’re making a system very large when it doesn’t need to be that large.
Sterling: It’s — okay. I don’t know if you’ve ever seen these robot arms that get a cup of coffee and then — they’ve got them in the San Francisco airport. Terrible idea. And it’s like, okay, you go over, and the machine picks up the cup and then puts it over here and does the grinder and brings it to you. What that’s doing is automating a manual task. It’s taking the way that humans have just done something and then been like, I’m just going to throw an arm or an anthropomorphic thing on top of it and then duplicate it. And the result of that is honestly not that great, right? There’s a reason that those things will continue to not take off in any sense other than novelty. And compare that to your Nespresso, which still has an interface — you still need to get your cup — but far better, right? The Nespresso, they’re like, oh, actually, let’s integrate the actual keeper and the automatic dispenser and all of these things, right? And they made it much more compact. Or your coffee vending machines — also works for this, right? Neither one of those are trying to just take the human steps and then be like, literally wherever the human is, we’ll just put this thing in here. And that’s what I’m seeing happening right now in lab automation in general. And I don’t just think it’s lazy. I do think it’s lazy. I don’t just think it’s lazy. I think it’s also close to philosophically a crime. I think it’s a crime —
Abhi: Because you think for automation to truly be useful, there needs to be a new way of interacting with the underlying systems.
Sterling: Yeah. It’s like, they’re just not really thinking through the problem.
Abhi: Well, I guess one argument is that it’s easier for these things to get adoption if you are allowing them to work in the exact same environments that humans are able to work in.
Sterling: Yeah. And I think that makes sense for things like machine tending for 3D printers or for CNC machines, right? But what’s the difference? Well, the CNC part is a hundred millimeters, right? So it necessarily has to be closer to human scale. But look at what’s happened in industrial space — the most useful places for robotics — and a heuristic you can use is, if it says “robotics” in it, it’s not really that useful. Whereas if it says what it just does, then it’s successful. So a dishwasher is a very useful robot, right? Self-driving car — very useful robot. And in the industrial space, it’s mainly around logistics and moving things, right? So the really successful ways of actually leveraging automation — first, they respect the real goal, and they respect the limits of the thing you’re trying to manipulate. So if the things you’re trying to manipulate are grocery things, one way could be — let’s take a humanoid and it goes to the grocery store and picks up things off the shelf. That’s what people do, and that’s what these humanoid companies want to do. And the alternative would be — actually, if you look at the logistics companies that do the best of it, it looks nothing like that at all, right? It’s some huge grid. It has these things running around like crazy, and all they’re doing is picking up these things and setting them down. And there is no way that a humanoid system can compete with that, right? There’s no way. And you can let the economics decide that over time. I just — this idea that it’s actually pushing the lab forward — I don’t really buy. I also do not see Eli Lilly or Johnson & Johnson putting a robotic arm near their lab bench. I think what Kao’s doing with their lab automation system — those carts — right. I think that’s at least sort of a reasonable compromise in a sense. We don’t need to go and re-engineer each of these things that already exist. If we can literally just make the interface easier. 
But then I think the real goal should be, as much as possible, if the economics fit, just think through the problem correctly. Just put it on chip as much as possible.
Abhi: That makes sense. I think those were the last questions I had. Thank you so much for coming on.
Sterling: Thank you for having me. Yeah.