Venture investing & working with AI: Thoughts for Q2 2026
VC program launch, measuring portfolio performance, our love/hate relationship with AI, thoughts on writing, and more…
Dear reader,
Over the past few months, we’ve been focused on three threads: building our venture program, measuring portfolio performance, and improving how humans and AI work together.
On the investing side, Peter lays out how we’re approaching venture (and why), Val reflects on a quarter of conversations and commitments, and Zsolt and TJ share how we’re measuring what’s actually driving performance.
On the AI side… things get a little philosophical. TJ is thinking about how teams of humans can collaborate effectively with teams of agents. Justina is resisting the urge to outsource her thinking—while also building tools that help the rest of us think better. And Peter’s AI agents are generating delightfully bizarre office art that reflects all that we’re working on.
As always, we’re glad you’re here!
— All of us at Titanium Birch
Peter 👽
Introducing Titanium Birch’s VC programme
These past few months have been in large part about venture investing for us. We’ve defined the following categories for our VC programme:
VC funds. Val recently wrote about how we evaluate VC funds.
Direct investments into startups. We list our criteria here.
Venture building, where we’re directly involved in running the business. The first piece in this category is our recent investment in Fidaro AI, founded together with several former colleagues from ExpressVPN.
Why invest in venture capital at all?
For each sleeve in our VC program, we’ve set a target hurdle rate. When we built those targets bottom-up—a public-equity baseline, plus a risk premium for the failure rate, plus an illiquidity premium for the lockup, plus the operating costs of our firm—the rates sat well above what we believe a typical VC fund delivers net of fees. That means we’re committing to doing the work to become a top-quartile VC investor. Otherwise, we should leave this capital in public equities.
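For illustration only, here’s a toy version of that bottom-up build. Every number below is a hypothetical placeholder, not one of our actual targets:

```python
# Toy bottom-up hurdle-rate build (all numbers are hypothetical
# placeholders, not Titanium Birch's actual targets).

public_equity_baseline = 0.07  # assumed long-run public-equity return
failure_risk_premium   = 0.06  # compensation for startup failure rates
illiquidity_premium    = 0.03  # compensation for multi-year lockups
operating_costs        = 0.01  # cost of running the program

hurdle_rate = (public_equity_baseline + failure_risk_premium
               + illiquidity_premium + operating_costs)

print(f"Target hurdle rate: {hurdle_rate:.1%}")  # 17.0% in this toy example
```

The point of the exercise is that each premium has to be justified separately; if the sum lands well above what a typical fund returns net of fees, the bar for participating is explicit.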
Why bother with venture investing at all? Because it’s one of the most direct ways of expressing our mission of using capital to accelerate productive people.
We hope we’ll have a useful structural advantage: unlike a fund, we don’t have to invest. Our capital can stay in public equities, which means our timelines are set opportunistically rather than by a deployment clock.
On a lighter note: Fun office artwork
We’ve been working with increasingly capable teams of AI agents. Some of these agents have been outright philosophical, reminding me to teach the process, not just the outcome.
One new capability: they now run our “information radiators” at the office, TV screens showing live dashboards that help us share information within the team. Sometimes these screens also feature automatically generated artwork based on what’s happening in our firm. Inspired by improv comedy, the agents “yes-and” each other, turning simple prompts into evolving stories, brought to life via the FLUX API.
We then get to enjoy a cup of coffee and imagine what our mascots might be up to here:
(Editor’s note: Each of our internal tools is named for a creature, like RoadmapRaven, ProfitPelican, BrandingFerret, and DealDuck.)
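For the curious, the “yes-and” chaining boils down to something like this toy sketch. The `yes_and` helper and the scene text are illustrative only; the real system sends the accumulated prompt to the FLUX API:

```python
# Toy "yes-and" prompt chain (the yes_and helper and scene text are
# illustrative; the actual system sends the accumulated prompt to the
# FLUX image-generation API).

def yes_and(scene: str, addition: str) -> str:
    # Improv rule: accept the scene as given, then build on it.
    return f"{scene} Yes, and {addition}"

scene = "DealDuck reviews a term sheet at a standing desk."
for addition in [
    "ProfitPelican swoops in with a freshly plotted performance chart.",
    "RoadmapRaven pins the chart to a wall of sticky notes.",
]:
    scene = yes_and(scene, addition)

print(scene)  # the accumulated prompt handed to the image model
```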
Val 🌚
git status
Q1 was a quarter of learning, refining, and investing.
On the venture side, we continue to spend time in the market, speaking with GPs, founders, and others across the ecosystem. I appreciate the warm referrals and generous conversations along the way. Each one has helped sharpen how we think about the market, where we want to spend time, and what we are looking for.
We also committed to a VC fund in Q1, and recently shared a post on how we evaluate VC funds. Writing the post was a useful exercise: it helped clarify the questions we care about, the trade-offs we need to weigh, and the type of partnerships we want to build.
We’ve seen VCs support portfolio companies on talent, GTM, governance, fundraising, and more—but what else might give a GP an edge? One area I’m wondering about is workflow design, especially if it can be applied repeatedly and tied to real outcomes. I’m still early on this, but I’m interested to see if/where/how it shows up in practice.
Looking ahead, I’m excited to keep building out the VC programme across both funds and directs as we continue to sharpen our strategy.
git add .
git commit -m "Q2: learning, investing, iterating"
git push
Zsolt 🌶️
In case you missed it, we recently shared a blog post about building a systematic single-stock portfolio that reflects our views of the world.
Measuring portfolio performance 📈
As we mentioned in our last newsletter, the next logical step after creating our custom index from scratch was to measure how the portfolio is actually performing.
This led us to build an internal tool (dubbed ProfitPelican) to break down the impact of the decisions we made along the way—what worked, what didn’t, and where we might need to adjust. It also enables us to run deep dives and better understand what’s really driving performance.
What’s next
While building the portfolio performance tool, we came up with a bunch of ideas to improve it—one of them being the ability to evaluate simulated portfolios. This would help us understand not just the impact of the decisions we made, but also the alternative decisions we could have made.
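As a toy illustration of what evaluating a simulated portfolio could look like: compare the return of the decisions we actually made against an alternative set of decisions over the same realized returns. The weights and returns below are made up, and `portfolio_return` is a placeholder, not ProfitPelican’s actual logic:

```python
# Toy counterfactual comparison (made-up numbers; not ProfitPelican's
# actual simulation logic).

def portfolio_return(weights, returns):
    # Weighted sum of segment returns for a given set of decisions
    return sum(weights[k] * returns[k] for k in weights)

returns = {"A": 0.10, "B": -0.02, "C": 0.05}    # realized segment returns

actual         = {"A": 0.5, "B": 0.3, "C": 0.2}  # decisions we made
counterfactual = {"A": 0.3, "B": 0.3, "C": 0.4}  # decisions we could have made

impact = (portfolio_return(actual, returns)
          - portfolio_return(counterfactual, returns))
print(f"Decision impact vs alternative: {impact:+.2%}")
```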
TJ 🤖
Allocation, selection, and overrides
Zsolt and I worked on defining our portfolio performance attribution. There are lots of thorny issues, such as how one attributes decisions when iterating on the definition of a target portfolio. We’ve gotten to something useful for identifying avenues for deep dives, broadly looking at three effects:
Allocation effect: the P&L differential generated by deviating from the global market cap–weighted portfolio, such as from allocating more or less to China, the US, or emerging markets.
Selection effect: the P&L differential generated by deviating from the broadest global market cap–weighted portfolio within each segment, such as using an index with profitability screens (like the S&P 400 for US mid-caps) rather than simply including every security in the segment.
Override effect: the P&L differential generated by overriding certain problematic securities as identified by our “already large, but expected to grow even larger” screen. This keeps us accountable for deviating from model tilts.
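For intuition, here’s a toy Brinson-style sketch of the first two effects. The weights and returns are made up, and this is a simplification of our actual attribution:

```python
# Toy Brinson-style attribution (made-up weights/returns; simplified
# relative to our actual methodology).

def allocation_effect(w_port, w_bench, r_bench_seg, r_bench_total):
    # P&L from over/underweighting a segment vs the benchmark
    return (w_port - w_bench) * (r_bench_seg - r_bench_total)

def selection_effect(w_port, r_port_seg, r_bench_seg):
    # P&L from holding a different index/screen within a segment
    return w_port * (r_port_seg - r_bench_seg)

# segment: (portfolio weight, benchmark weight, portfolio return, benchmark return)
segments = {
    "US":          (0.55, 0.60, 0.08, 0.07),
    "China":       (0.20, 0.10, 0.04, 0.05),
    "EM ex-China": (0.25, 0.30, 0.06, 0.06),
}

r_bench_total = sum(w_b * r_b for (_, w_b, _, r_b) in segments.values())

for name, (w_p, w_b, r_p, r_b) in segments.items():
    alloc = allocation_effect(w_p, w_b, r_b, r_bench_total)
    select = selection_effect(w_p, r_p, r_b)
    print(f"{name}: allocation {alloc:+.4f}, selection {select:+.4f}")
```

In this toy setup, overweighting a segment that underperformed the overall benchmark produces a negative allocation effect, while picking a screen that beats its segment benchmark produces a positive selection effect.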
Explorations in AI collaboration
Beyond writing code, we’ve been experimenting with how to use AI for various purposes. Using coding agents like Cursor and GitHub Copilot as harnesses (simply because they are very well developed), we experimented with RoadmapRaven, a tool that works within Linear to improve how we specify and prioritise our work.
I’m also interested in understanding how teams of humans best work together with teams of agents. For instance, right now, I feel that many agent workflows are single-player: an engineer works with an agent (or even a whole team of sub-agents) in their development environment, or a portfolio manager chats with a chatbot to refine strategy. This is all well and good, but what if multiple people need to see how the agent is working, provide input and steering, and see the outputs? As an example, a bot that helps research and draft an investment strategy likely requires robust and frequent interaction across the whole team. I really like where Linear goes with this in their Agent Interaction Guidelines.
What’s next?
We’re looking forward to further refining our public equities investment strategy in terms of how we reinvest income and rebalance the portfolio. And, as always, we’re constantly looking for ways to improve our work, including with AI agents.
As a reminder, we’re looking to add an engineer to our team and increase our pace of delivery across all of the above. If you know someone who might be suitable, please let us know!
Justina 🐯
Thoughts on writing
Writing is hard. But I think writing improves thinking. We spill our thoughts onto the page and play with them, interrogate them, destroy them, build them up again, wrestle with them… and, ever so slowly, our conviction comes to life.
Sometimes it’s tempting for me to chuck my rough notes into an LLM and command it to “make this sound better.” But AI outputs usually feel generic and don’t add much value (though arguably this might be fixed with better prompting). Meanwhile, Granola is constantly luring me with its sparkly “suggest questions” or “make me sound smart” prompts.
RESIST!
Increasingly, I’m fighting my urge to take the many shortcuts offered by all the AI tools at our disposal. I want to be fully engaged in my writing and conversations. I don’t want to just “sound smarter” with the help of AI—I want to actually become smarter.
On that note, here are some ways I’ve been experimenting with AI to nudge us to become better thinkers (and thus investors).
1) BrandingFerret keeps us on brand
Word by word, we’re building the Titanium Birch brand. Each piece of content we publish is a signal to potential hires, founders, GPs, and fellow investors about how we think. We can’t afford to get it wrong.
In an effort to make myself redundant, I used GitHub Copilot to create BrandingFerret, my editor alter ego who empowers anyone on the team to get early feedback on their work before I take over. (I’ll concede: LLMs can be quite capable as editors.)
Our wordsmithing rodent friend isn’t a generic editor checking for flow, coherence, and style-guide violations. I set him up with various agent skills to help writers answer critical questions about their work, like:
Does this blog post serve one of our branding goals?
Is the structure sound?
Is the content sufficiently insightful?
Could any other company have produced this?
Did we earn the authority to say this?
Where can we get more specific?
So far, he’s proven to be quite effective!
2) My experimental skill-coach agent helps me pick the highest-ROI thing to learn
At Peter’s encouragement, I created an AI agent to guide me through my learning and development. At first, I was skeptical, but now I’m a believer, because I think I’m getting results…!
The rough flow is like this:
The agent asks me what I’m working on
We try to identify the highest-leverage thing for me to improve
We come up with a plan
I go off and try stuff
I check back in and adjust
The first skill I’ve been working on: getting better at assessing people. I wrote about this learning exercise in a blog post, which BrandingFerret edited extremely aggressively (his bite is brutal and precise). I’m excited to spitball with my skill-coach agent again and figure out what to focus on next…
I feel very lucky to be part of an org where we’re each encouraged—nay, expected—to carve out time for learning and upskilling.
Who we’re hoping to hear from 📬
Software engineers. We’re hiring! If you love working at the intersection of code and capital (or know somebody who does), check out the job posting.
Founders. If you’re in Singapore or Hong Kong building something globally relevant, we’d love to connect. We write checks from $100k to $500k and work closely with founders who value high trust and direct communication. See if we’re a fit.
VCs. If you’re a VC raising capital and think there might be a fit, reach out at pitch@titaniumbirch.com.
ICYMI from the blog 💻
(These posts are all linked from above; rounding them up here just in case.)
Teach the process, not just the outcome (lessons from working with AI agents) by Peter
How we evaluate VC funds at Titanium Birch by Valerie
On training myself to make decisions under uncertainty by Justina
If you found this newsletter interesting, we’d be grateful if you could pass it along to a friend or leave a comment.
Until next time, thanks for tuning in!
Disclaimer: The content in this post should not be taken as investment advice.

