
What surprised me most when watching chemists use AI - without us guiding them
Some of the most useful product lessons I’ve learned while building @ReactWise didn’t come from building new features.
They came from watching what happens when a chemist uses the platform with no guidance.
In some of our open check-in calls, we do something very simple:
We ask users to open the platform and run through what they’d normally do, while we just observe so we can improve the workflow.
No steering. No “click here.” No intervention from our side.
A few things consistently surprised me.
1) Chemists don’t treat AI like an oracle.
They often treat it like a second opinion.
They compare recommendations to what they’d do instinctively - and when the two differ, they don’t reject the suggestion immediately. They pause and ask: why would it do that?
2) They look for sanity before they look for optimality.
The first question isn’t “what’s best?”
It’s “does this make chemical sense?”
And that makes total sense: in the lab, a “wrong-but-confident” suggestion is worse than no suggestion at all.
3) Misunderstandings rarely come from the maths.
They come from missing context.
If uncertainty, constraints, and trade-offs aren’t obvious, users fill in the blanks with assumptions, even if the underlying model is doing the right thing.
4) Trust is built through consistency, not brilliance.
One great result is nice.
But what really builds adoption is when the system behaves reliably across boring, routine runs and doesn’t break on edge cases.
The big takeaway for me:
Adoption isn’t primarily about how advanced the algorithm is.
It’s about whether the tool fits the mental workflow of the scientist using it, especially when nobody is guiding them.