Pydantic: The Python Darling That Loves Rust

Many of the most exciting Python tools seem to have something in common: they not-so-secretly rely on a lot of Rust. For this interview, I talked to Adrian Garcia Badaracco and Peter Lesty of Pydantic. Pydantic is an extremely popular type validation library for Python, but it’s also the name of the company behind the fast-growing observability platform Logfire. We talked about the Rust-accelerated Python trend, how Pydantic is using Rust to build a high-performance query engine for Logfire, what it’s like working at Pydantic, and more. To see jobs available at this and other cool Rust companies, check out our extensive Rust job board.



Drew: I use Python a lot, so I'm very familiar with the Pydantic open-source software. But Pydantic is also a company. I want to separate the open-source project from the business to help everyone understand the distinction. Let's start with the open-source software that began all of this. Can you explain what Pydantic is for people who aren't heavy Python users?

Adrian: I've been in the Pydantic Python space for a while, so I can take this one. Pydantic essentially started as runtime type checking. This was back when Python didn't have type checking, or only had a very nascent, basic type system—at one point, the original type system was in comments. Pydantic came onto the scene and let you define a class where you could put type hints or type annotations on the fields. Pydantic would then enforce at runtime that an int had to be an int. If not, it would throw an error.

Adrian: In practice, it’s used a lot like serde in Rust or Zod in TypeScript. One of the first major use cases was validating untrusted data, like JSON received in a web API. For instance, instead of writing custom checks to ensure an "age" field is a non-negative integer, you'd add a type annotation with that metadata, and Pydantic would handle the validation. Pydantic could even generate a JSON schema from that model.
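
The "age" case Adrian describes looks roughly like this with Pydantic v2's API (the model and field names here are illustrative):

```python
from pydantic import BaseModel, Field, ValidationError

class User(BaseModel):
    name: str
    age: int = Field(ge=0)  # enforced at runtime: an int, and >= 0

# Untrusted JSON (e.g. a web API body) parses into a typed object...
user = User.model_validate_json('{"name": "Ada", "age": 36}')

# ...and invalid data raises instead of silently passing through
try:
    User.model_validate_json('{"name": "Ada", "age": -1}')
except ValidationError as e:
    print(e.error_count())  # 1

# Pydantic can also emit the JSON Schema for the model
schema = User.model_json_schema()
```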

Adrian: Pydantic's popularity exploded when it was integrated with FastAPI, a very popular Python web framework. FastAPI brought async to the Python web scene, and one of its key features was using Pydantic models to declare the inputs and outputs of your API. This allowed FastAPI to automatically generate JSON schema and OpenAPI docs and handle the data validation. That's how Pydantic started, and it was initially written 100% in Python.

Adrian: That approach was slow and accumulated a lot of cruft. So, at one point, Samuel, the original author and founder of Pydantic, decided to rewrite it in Rust. I was involved in the open-source work at that time. Around the same time, Samuel was approached by investors who had previously invested in other successful open-source companies and were looking for more. They approached him, and he assembled the team—I was part of that initial team—to build a company around the open-source project.

Adrian: Building the company was an open-ended question at first. The investors didn't give us a mandate, like, "You must charge for your open-source library." It was more like, "We think you're good engineers, you have a good reputation, and you can build products people like. We're interested to see if you can turn that into a commercial product." That investment catalyzed Pydantic turning into a company. That is the relationship between Pydantic the open-source library and Pydantic the company.

Drew: That clarifies things well. When a company is in that position, there’s always the leap you alluded to: "We have talented people and great open-source software. Now, how do we build a business out of this ecosystem?" Are there services that Pydantic sells around that original open-source software? I know there's something else we'll dive into soon, but do you offer services or products around the core library?

Adrian: Not really. We don't sell anything, do any consulting, or host anything around the original open-source library. We spend our time maintaining it because we believe it's a valuable piece of the ecosystem, but we don't monetize the open-source library at all. The main connection between the open-source library and the commercial company is a shared user base. Many users who rely on the library for robust data validation and APIs also care about the other products we sell. The main product we sell commercially now is observability.

Drew: Since you mentioned you were brought in when Samuel got the opportunity to start this company, what did that look like for you? How did you actually get involved? Were you working on the open-source project already?

Adrian: Yes, I was just a somewhat random contributor to Pydantic, the open-source library. I started by using it, finding bugs, and then, as a software engineer involved in open source, I would find missing features, get ideas, and start making Pull Requests. I had some contributions—I was by no means one of the biggest contributors—but I had made some pretty key contributions and ideas that I had upstreamed into the library.

Drew: I wanted to ask that because I think a lot of people underestimate making open-source contributions as an opportunity to get your name out there and get on the radar.

Drew: Well, okay, so we've alluded to it several times now. Pydantic is working on something big, new, and exciting right now: an observability product called Logfire. I think we should spend a lot of time digging into how that's shaping up. I did a lot of research on this yesterday, but our listeners are coming in cold, so give us the groundwork: Why does Logfire need to exist? How does it differ from existing observability products?

Adrian: Well, when we started the company, we had to sit down and figure out what we could sell. We didn't want to just sell or license the library. So, the logic was a mix of two things: What products do we think there isn't a good version of yet? And, what do our existing users specifically care about? Since Pydantic was heavily used in data validation and web frameworks, the same people who care about valid incoming data also care deeply about performance and correctness once the application is deployed.

Adrian: If your application is deployed and you're getting errors because of invalid data—whether you're being sent it, creating it, or pulling it out of your database—where do you go to fix that? You need some kind of observability. Pydantic helps you catch errors at runtime. Once you've deployed, all sorts of new errors can appear, and that’s where observability helps you catch them.

Drew: That's a great connection you made along the lines of correctness. It's like you have compile-time correctness with Rust, which is a huge selling point. Then you have runtime correctness with Pydantic. And now, with Logfire, you have sort of production-time correctness.

Adrian: Yes. All the wild things that happen when you scale and have users. For example, your downstream service times out, your upstream service times out, and all kinds of things go crazy. That's when you need another level of visibility into what’s going on.

Adrian: One of the things that made Pydantic very popular was the quality of the errors the library returns. For instance, if you try to parse invalid JSON using Python's built-in json.loads, you'll get some obtuse error about "expected some bracket, but got some whatever." It's not clear why it didn't parse, or you get a huge, nonsensical traceback if you wrote the parsing code yourself. Pydantic, from the beginning, did a great job of explaining exactly what went wrong. It would tell you, "This nested field, the fifth item in the list, was expected to be this struct that had this field, but I got this string," and both things would be in the error message. So, when you get an error, you can immediately tell what happened.
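
The contrast Adrian describes can be seen in a short sketch: the stdlib parser reports only where parsing broke, while Pydantic reports the path to the exact failing field (the models below are made up for illustration):

```python
import json
from pydantic import BaseModel, ValidationError

# The built-in parser tells you a position, not what went wrong semantically
try:
    json.loads('{"items": [1, 2, }')
except json.JSONDecodeError as e:
    print(e.msg)  # "Expecting value"

class Item(BaseModel):
    id: int

class Order(BaseModel):
    items: list[Item]

# Pydantic's error names the nested field that failed and why
try:
    Order.model_validate({"items": [{"id": 1}, {"id": "not-an-int"}]})
except ValidationError as e:
    print(e.errors()[0]["loc"])  # ('items', 1, 'id')
```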

Adrian: That lack of clarity in error messages is one of the things that annoyed us about a lot of existing observability products. If you think about the old-school way of looking at a bunch of log lines, you're just left trying to figure out what happened. You don't have that context. You're looking at that isolated error in one place: "expected an integer, got a string, couldn't parse it." You think, "But where? I sent a one-megabyte JSON blob. Where is the issue?" A similar thing happens when you have a million log lines and errors all over the place but you don't know what called what to cause those errors.

Adrian: One of the things we wanted was to build an observability product based on tracing. The industry standard for tracing right now is OpenTelemetry. OpenTelemetry is for observability what those fancy Pydantic errors are for parsing JSON. Instead of looking at isolated, single points of failure, you get the context of where your errors happened, what was slow, and the entire flow of execution.

Drew: Could you explain the OpenTelemetry piece a little more? What does that look like concretely?

Peter: OpenTelemetry is essentially a standards platform. There are libraries for pretty much every major language—Go, Rust, Python, and so on. It provides an interface for performing tracing and sending those traces to a collection API, whether that's Logfire or another system. It provides a way of instrumenting or decorating your code so you can work out what's going on and insert traces and spans where you want them. It also has standard APIs—RESTful or gRPC style—which means anything using OpenTelemetry can talk to anything else. This makes it a good standard way of collecting tracing data.

Peter: When you deploy an application, you are often working in a polyglot environment with services talking to each other in different languages. Having a standard platform like OpenTelemetry allows you to connect that all together and see the entire context of a request, from the browser all the way through to the backend. It's an excellent, strong foundation for building tracing solutions.

Drew: In our preparation, you mentioned that a major project right now is building a query engine within Logfire, which is taking up a lot of your Rust development time. I want to discuss that project. You also mentioned that you're basing it on DataFusion. I looked into that and found it fascinating. Could you explain DataFusion before we talk about the full query engine?

Peter: I'd say it's like a toolkit for building your own database.

Adrian: Yeah, it's like a Lego toy. You get the toy and it comes with instructions to build a dinosaur, but you can go online and find instructions to build a car or a house with most of the same parts. DataFusion is similar. It has a command-line interface, like DuckDB, and a Python library, like Pandas, that you can use out-of-the-box to query Parquet or CSV files. But, most people use it by taking pieces of it. Some use only the SQL parsers, some use the computation part but not the data scanning, and some use only the data scanning. It's all about mixing, matching, and building your own database.

Adrian: Andrew Lamb, the main maintainer, likes to call it the LLVM of databases. It’s basically just a robust toolbox.

Drew: That makes sense. Andrew Lamb is a notable Rust programmer as well, I believe.

Drew: What I read about DataFusion was interesting. It made sense that, with everyone building or rebuilding databases, you don't need to rebuild every single component, given how much goes into a database. Why not have a shared framework so people can focus on the parts that make their project unique?

Drew: So, how does that fit into your query engine? What pieces are you reusing, what are you building, and what does that look like?

Peter: We're using DataFusion to store and query information for Logfire. Every time an OpenTelemetry trace comes in, it needs to be stored, and when we present that data to a user, we need to query it. We use DataFusion quite extensively in the Logfire backend.

Peter: We're using a lot of DataFusion's main components: the query parser, the physical scan component, and the query planning engine. We've added our own pieces to decorate that. For instance, we have our own caching layer on top of what DataFusion offers to speed up talking to cloud storage. The great thing about DataFusion is you can pull it apart and inject your own bits to increase performance. Essentially, we use DataFusion from the moment an SQL query comes in all the way through to getting the results and sending them back to the client.

Drew: I might sound a little ignorant asking this, but I guess I don't mind: I imagine DataFusion and query engines have a relational notion to them. Is the data you're storing inherently relational, or is there a mapping that occurs?

Adrian: Yes, it is relational. We essentially store metrics and traces as tables. It ends up being a kind of denormalized table, which is common in analytical systems because it's easier to manage large volumes of data.

Adrian: The "non-relational" part comes in because, for example, a Kubernetes pod is an entity that exists, but we don't have a separate Kubernetes pod table with references. Instead, if a log message or trace came from a pod, we simply copy the pod's name and ID into every row where it applies. Because we store things as Parquet files, which provide features like dictionary encodings and other compression, this denormalization isn't that bad. If you have the same Kubernetes pod name across a million rows, you might only store it in full a hundred times. This approach means we don't have to worry about maintaining foreign key reference integrity or that kind of complexity.
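
Parquet's dictionary encoding is more targeted than general-purpose compression, but a quick stdlib sketch shows why copying the same pod name into every row is cheap (the pod name below is invented):

```python
import zlib

# A denormalized column: the same pod name copied into every row
pod_name = b"ingest-pod-7f9c4d-abcde;"
column = pod_name * 1_000_000  # ~24 MB of raw bytes

# Highly repetitive data collapses to a tiny fraction of its raw size,
# which is why denormalizing entity names into each row "isn't that bad"
compressed = zlib.compress(column)
print(f"{len(column):,} bytes raw -> {len(compressed):,} bytes compressed")
```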

Drew: That makes sense. Rust clearly plays a big role in this project. How does it fit into what you're building?

Peter: DataFusion itself is written in Rust. Our query backend and a few other components are also written in Rust—that's where the Rust play comes in. A large part of Logfire is in Python, including much of the backend and a lot of the administrative tasks, like managing users and organizations.

Peter: The query engine, which needs to be extremely performant, is definitely written in Rust. We've "oxidized" a couple of other bits and pieces since I've been here, moving to use Rust in areas where it makes sense. For development velocity, it doesn't make sense to use Rust everywhere, but it's a focus for us wherever performance matters.

Drew: I feel like there's a divide in the Rust community. You have people who believe that Rust needs to be everywhere versus those who are more pragmatic. They think it's good for certain things and should be used only for those. I'm definitely on the more pragmatic side.

Peter: I'd prefer to use Rust in everything because I have a strong Rust opinion on things. However, realistically, there's a huge ecosystem in Python and JavaScript with so many great libraries that it just doesn't make sense to rewrite them in Rust. You'd be rewriting millions of lines of code to get the same functionality. It makes sense to use Rust where there is a good purpose for it, but you need to be pragmatic. It's not something you can use everywhere. Especially in a system like ours, we wouldn't have the development velocity if the whole thing was written in Rust from the ground up.

Adrian: Not to mention, compile velocity would be quite slow.

Adrian: The other thing to keep in mind, and one of the reasons we don't write everything in Rust, is hiring. It’s very easy to hire someone who knows React or popular Python frameworks like Django and FastAPI. It is significantly harder to find and hire someone familiar with Rust web frameworks, even for a simple backend role.

Drew: That makes sense. Are you guys pretty much a Python shop outside of Rust?

Adrian: Yes. In terms of lines of code, we're probably split into three roughly equal parts. About a third is TypeScript, which is everything front-end related. A third is Python, which handles a lot of our more day-to-day or less performance-critical backend bits, like user management and account creation. The final third is Rust.

Drew: That makes sense.

Drew: What are the more interesting challenges you've run into while working on the query engine?

Adrian: We've run into so many challenges that I don't even know where to start.

Drew: It seems like you guys have some scars here.

Peter: It's not so much scars, but we've definitely run into some hairy problems along the way. One of the tricky things about Rust is that there's no garbage collector, and memory seems to be managed magically for you. So, what do you do when things start running out of memory? How do you profile a Rust application, look at the heap, and figure out what's chewing up memory? We've hit this many times. Sometimes it's a mistake I've made, sometimes it's an external library chewing up memory, or sometimes we have 10 million instances of a struct that's mostly empty but still consumes a lot of memory. For me, a big challenge is memory management. It's strange to think about because the promise of Rust is that you don't need to worry about memory, but in practice, you still do. You often need to look at it even more closely.

Adrian: I think we've also run into a lot of issues that are just things like out-of-sync HTTP connection timeouts. They're so annoying! Every now and then, you get a blip of errors from some system and you ask, "Why? When did that happen?" It's like looking for Schrödinger's cat. You look in the system, everything looks fine, then you walk away, and you come back the next morning and there was a blip of errors at 3 AM.

Adrian: The answer is usually something like your keepalives are a bit off. So, you're keeping a connection open, but the server is actually dropping it. You then have to go and tune those numbers to synchronize them. It’s a bunch of fiddly stuff that constantly pops up where you least expect it. Honestly, it's not even a Rust thing. This is just a problem you encounter when building a data-intensive, high-scale system.

Drew: It sounds like—just from what each of you said—the core problems you're solving with this project are what you would expect. You need a certain level of performance and you're rubbing up against the edge of what's possible, so you're going to run into those types of problems.

Adrian: Yes, I think it's the kind of thing where if your application just makes a database request every now and then—if it does what we call "light work"—then you won't run into these issues. But, if you're constantly hammering GCS, your database, and your ingestion systems are receiving millions of requests a second, any small issue can become a big one. In some cases those things can even snowball, requiring immediate attention.

Drew: I guess I didn't ask: Is this query engine in production right now, or is this something you're still working on? Where in development is this?

Peter: It's in production. Yes. We have two main instances, one in Europe and one in the US—the US is probably the main one. We also offer a self-hosting option. So it's in production. This means we not only worry about the cloud environments, but we also have to worry about on-premise, which forces us to operate with big, thick gloves on. We have to work through the integration challenges you encounter there. How long has it been an actual product, Adrian?

Adrian: I'm trying to think. I feel like we had our first commercial, paying customers maybe about a year ago now—maybe a bit more, a year and a half. That is an eternity in startup time. It feels like forever. I don't remember a time before that.

Drew: Are any components of Logfire open source, or how does that work with that product?

Adrian: Yes, our SDKs are completely open source. Essentially, any code of ours that you run is open source. But, the rest of it—the actual platform itself—is closed source because we are a commercial company that needs to make money. We want to be able to charge people for the platform.

Adrian: We do a lot of open source work, both in the Pydantic library itself and in DataFusion. We aggressively contribute to open source wherever it makes sense, but we don't just release our entire product and say, "Hey, run it yourselves for free." It's not a good experience, and it's not good for us as a company, at least not right now, financially. If we did, we'd get a thousand different people opening issues, and it would be hard to filter out the noise or know which customers to focus on. It also wouldn't be a good experience for customers because it's a complex system to run. They would spend so much time just trying to maintain it and getting frustrated that it just wouldn't work. We've talked about maybe open-sourcing some things in the future, but I think that will only happen once the product is less in flux and the company is more established.

Peter: Yeah. And, you can use the Logfire SDK to talk to other OpenTelemetry stuff as well if you really want. You can still instrument it. But, I think it works a lot better talking to our backend.

Drew: I'm sure. So, one of the reasons I was excited to talk to you guys is because of this theme I've been tracking: Rust-Accelerated Python. That's kind of how I've been putting these interviews together. I focus on themes I think are important in the Rust community and try to pull in the voices that are part of that. I wanted to ask you guys about this because I think you have a somewhat privileged position, as the Rust-Accelerated Python thing is very real with Pydantic. What observations do you guys have about this trend?

Adrian: Yeah, like you said, this is becoming very popular. I mean, that's exactly what the open-source Pydantic library is right now: a Python library accelerated and backed by a bunch of Rust. All of the performance-sensitive stuff happens in Rust, including a JSON parser we wrote in Rust.

Adrian: You see this trend in related companies like Polars, the DataFrame library, which is all written in Rust. They're related because they exist in the same query engine space as DataFusion. To Peter's earlier point, Python is, to some extent, a glue language—that's historically what it's been used for. It turns out that if you're going to build widely distributed libraries, it's worth investing the time upfront to use a language like Rust that forces you to build things more efficiently and reliably.

Adrian: Because there is really good interop between Python and Rust via PyO3—which is actually maintained by David Hewitt, who works with us—the developer experience (DX) of writing a Rust extension, wrapping it, and publishing it is really, really good. At this point, I think it's better than any other Python extension language. So, yeah, I think that's where a lot of this comes from.

Drew: I think that's a big deal that you said the developer experience is better than any other Python extension language. I don't know that I've heard that before, but I feel like that alone could really drive the trend.

Adrian: Yeah, look, I haven't developed anything in CPython in a long time, so take that with a grain of salt. But, the Rust ecosystem in general is known for being very strong, right? The language server (LSP) tooling is good. The package manager, Cargo, is very good. It obviously has its flaws, but overall, it's one of the better languages out there in terms of developer experience and ecosystem.

Adrian: And then the packaging for interop has been done very well. I think it's quite smooth for what it is. There are templates out there, and you can very easily build up and publish and get going with a package that's Rust under the hood but exposed with a Python API.

Drew: Yeah, you mentioned David Hewitt. I should shout him out because he was the one who told me that I needed to talk to you guys. I've kind of been bouncing these ideas about Rust and Python off of him—like, who should I talk to? So yeah, I think PyO3 is a huge benefit to this ecosystem.

Peter: If you've ever tried just writing Rust libraries for other languages, it's an uphill battle. I've written a few libraries for C#, and you can do it, but you have to wrap things in FFI. You end up writing unsafe Rust code whose invariants you can't guarantee. You need to ensure marshalling happens between the two languages. It's a lot of effort, and with PyO3, most of that is abstracted away in the most zero-cost form you can get.

Peter: So, it is a great dev experience to have a library like that which integrates very nicely. Before I started with Pydantic, I was using PyO3 in my old job to wrap a lot of Rust libraries for Python, and it’s actually quite a boon to productivity that you can do that in an easy way.

Drew: I think it's a big deal. Are there any blockers that would keep the Rust and Python trend from accelerating?

Adrian: It's an interesting question. One difficult thing that comes to mind, which I think will be solved eventually but is a really hard engineering challenge that will take years, is essentially sharing data between two different Rust-backed Python extensions.

Adrian: Sharing data is kind of solvable, right? You can serialize it to something like Protobuf or JSON in Python and then deserialize it on the other side. That may be a bit expensive, but it's doable. However, what's currently not easily doable is sharing behavior. For example, I can't write a custom Rust function and hand it over.

Adrian: Although Polars does this in some "magic way" that I forget, in general, it's not possible. The DataFusion Python API is written in Rust under the hood, but you can't write a custom Rust function and pass it in. You have to essentially wrap your Rust function in Python and pass that in as a Python function. It might still be efficient enough, but you're ultimately going from Rust to Python to Python to Rust or something like that.

Peter: Definitely the Application Binary Interface (ABI) story in Rust needs a bit of work at the moment. Everything has to use the C ABI to interface with other systems, and this limitation acts as one of the biggest blockers.

Peter: This isn't just an issue between Rust and Python; having Rust work with dynamic libraries is currently a pain, which affects embedding Rust in other languages. The ABI story causes friction because at that interface barrier you have to get rid of a lot of the features that make Rust productive and revert to using normal data structures to send data through.

Drew: Let's change gears a little, because I do want to ask about Pydantic as a company, moving away from the technical topics we've been discussing. Tell me about the team. Is there anything about the Pydantic team that makes this group particularly well-suited to taking on the problems you're dealing with?

Adrian: I think we have a very strong team in general. Peter is one of the best Rust engineers—I say "one of the best" only because I've also worked with David Hewitt, who is another amazing engineer. We have some really great engineers with a good mix of experience in distributed systems and deep technical bits. If I hit an almost compiler-level issue, I can go to David Hewitt, and he'll know the arcane details because he's been in all those areas.

Adrian: We also have some great DevOps people. I built a lot of the original infrastructure, and they've taken that and made it ten times more reliable and scalable, worrying about things like topology and distributing nodes across multiple zones. I may know these things exist, but they are way beyond my ability to quickly whip up. We have a really good mix of teams and a strong presence in the open-source world.

Adrian: We are also some of the biggest contributors to DataFusion, and that allows for great collaboration with a lot of other teams—major companies use DataFusion, as does the community as a whole.

Drew: One thing I started thinking about when Peter mentioned he's in Australia is that your team has people in the U.S., Europe, and Australia. How does that work? Are you using an open-source style of work to collaborate, or are you finding ways to get on calls a lot? How are you synchronizing?

Peter: To be honest, because we have a bit of an open-source background, a lot of the standard processes we use for open source, like pull requests, have carried over. We use those processes quite well.

Peter: In terms of synchronization, a few times a year, we will meet up in person for a week to write out ideas and work through things. I think that's helpful, especially with a distributed team. The way we work isn't fundamentally different from other remote teams, but having that open-source base means that a lot of the people who join the company can hit the ground running. They are already used to that workflow, and it comes naturally to most people.

Drew: Is there anything unique about the culture or the way the company works that you think is worth mentioning?

Adrian: I think the open-source culture we just discussed is a really important part of how we do things. We don't think of our code base as something private to the company. Although our internal code isn't open source, we think of our code base as encompassing the whole range of our internal code and our dependencies.

Adrian: We are pretty much constantly finding bugs, missing features, or other issues in our dependencies, and we'll go and try to upstream that work. People don't bring it up in a meeting and say, "Hey, I found this; I think I'm going to open a PR on this library." You probably just open the pull request without telling anyone, it gets merged two weeks later, then you make a PR to update the dependency, and it's all resolved. I think that's a really big part of our culture.

Drew: That may seem small, but I think that's a big deal. A lot of engineers really care about open source, upstreaming things, and being a good member of the community. I think a lot of people will like to hear that.

Drew: From a hiring perspective, what are the things that tend to make new people really successful at Pydantic?

Adrian: I think that being self-starters and being able to navigate work on their own is key. Because of the time zones and our open-source-like culture, we don't have a lot of meetings. We also don't have product managers telling people, "You need to build a button here, and this thing here."

Adrian: There is a lot of freedom—a Spider-Man style of "with great freedom comes great responsibility." You can go and make your own PRs, and if you disappear for a couple of days to work on an upstream thing, that's totally allowed and encouraged. But ultimately, that work needs to provide value and benefit to the product and the company.

Adrian: So, it's about being able to operate in this geographically distributed, pretty flat organization. You're not going to have anyone holding your hand. Since we're a startup, there's often not a clear path for what feature or product change needs to be implemented. You somewhat have to be the engineer, the product manager, the user, and the support engineer. If you build a feature, you need to be able to help customers with it, understand how they're going to use it, and follow up on requests.

Drew: Anything you would add to that, Peter?

Peter: I sort of agree with Adrian. When I first started, there was a lot of time spent getting used to the product and getting used to DataFusion, which is a massive project. You need to be more self-reliant than you would be at another company and be able to be productive using your own means.

Peter: Because of how we work, you need to be able to unstick or unblock yourself. If you get stuck on a pull request, you don't sit there twiddling your thumbs; you find something else to do.

Peter: In a startup culture, you need to wear many hats. This means sometimes dipping into unfamiliar code or writing in languages you're not used to; for example, I've written more Python in this job than my last one.

Peter: A new starter should consider how self-reliant they are. You will often be left to your own devices and have to find your own productivity, rather than having someone micromanage you or constantly tell you what to do. It goes back to the Spider-Man thing: with great freedom comes great responsibility.

Drew: Well, cool. I guess the last question I had in mind was about the future of the company. You guys have mentioned it's a new startup several times. Adrian says he can't remember past the last 18 months. What are you guys excited about in the future, and what do you see coming that gets you excited?

Adrian: It’s interesting. When you're in the day-to-day, sometimes it can feel like we're so far away from the goal, or that we haven't hit the hockey stick growth. I think we might be an older company than something like Cursor, and they've definitely grown a lot more than we have in the past couple of years. But at the same time, we started a commercial product a year and a half ago and had no customers. Now, we have more customers than I can count.

Adrian: At one point, I could keep track of our five key customers. Now I can't anymore. Continuing to grow will be a really interesting and challenging mix of both technical and organizational hurdles.

Adrian: We're at about 20 people right now. There are things we could do with five people that we can't do anymore with 20, and there are things we can do with 20 that we won't be able to do anymore with 50. At 20 people, I know who everyone is. Once we get to 50, you might not even know everyone who works at the company. So, there's going to be a lot of change and a lot of figuring things out. I've personally never been at this stage of a startup before, so it's new to me and new to a lot of us, and figuring it out is going to be a really interesting journey.

Drew: What would you say, Peter?

Peter: Yeah, I'd echo that as well. We've built a product, and we're at an inflection point now where we have a solid base. There are a few things we want to do on the database query team, like optimizing the engine to run really fast and ensuring a really good developer experience. That's where our focus is, and it's going to be exciting to see where the product is in a year's time. When you're in the weeds, it's hard to reflect on how far we've come because you've been so involved in it. But if I look back to where I started a year ago, it's improved quite dramatically.

Peter: If we can maintain this velocity and keep adding extra features, I think it will be an even greater product in a year's time. I feel like I'm one of the biggest users of the product because I use it to troubleshoot itself—it's the snake eating its tail. This goes towards our culture of dogfooding a lot of the stuff that we do.

Adrian: Yeah, we're our own main observability platform.

Adrian: One of the things that I find fascinating is a conversation we had a couple of months ago. I think we were looking at cloud spend and trying to bring it under control. We noticed it had doubled in the past few months and were worried. Then, we looked at our usage, and our usage had gone up orders of magnitude. So, we realized we've been doing a lot of good work. If we hadn't been doing all the work we'd been doing, everything would be falling over, nothing would work, and everything would cost a thousand times more than it does.

Adrian: That's one of the interesting things about being at a startup and growing your product. If you just look at the data now, you can be overwhelmed by how much work there is to be done. But then you take a step back, look at where you were six months ago and where you are now—you've done a lot. It's cool to see that and it's exciting to see us continue to do it.

Drew: Well, I'm super excited to follow along with your guys' growth. Now that I know you guys and have had this conversation, I'll be even more invested. Is there anything else you guys wish we had the chance to discuss before we finish up?

Adrian: I think this has been a great chat.

Peter: Nothing from me.

Drew: All right. Well, thank you guys.

Adrian: Thank you.

Peter: Thanks, Drew.
