How do I validate neural network catalyst predictions with an approximate model?
#1
I’m trying to design an experiment where a neural network suggests potential catalyst materials, but I’m stuck on how to validate its predictions when the underlying physical model it learned from is itself approximate. The suggestions seem chemically plausible, but I don’t know if I’m just seeing an echo of the training data’s biases.
Reply
#2
I’ve tried something similar. The model spat out a handful of nickel-based catalysts that looked plausible on paper, but a quick lab test showed they weren’t any better than the baseline, so I suspect the training data’s biases are doing the talking more than real chemistry.
Reply
#3
One practical move is to quantify uncertainty. I started using an ensemble, looked at where the members’ predictions disagreed, and pulled a few high-uncertainty candidates onto the bench for quick checks. The resulting confidence bands helped me decide which candidates were worth investing lab time in.
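
Here’s a minimal sketch of that workflow, assuming scikit-learn and an already-featurized candidate pool. The random arrays, the 10-member ensemble, and the cutoff of 5 bench candidates are placeholders I made up for illustration, not values from my actual runs:

[code]
# Minimal ensemble-disagreement sketch. The random arrays stand in for
# real featurized catalyst descriptors and a surrogate target property.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 16))   # placeholder descriptor matrix
y_train = rng.normal(size=500)         # placeholder surrogate property
X_pool = rng.normal(size=(200, 16))    # placeholder candidate pool

# Ensemble members differ by random seed and bootstrap resample.
ensemble = []
for seed in range(10):
    idx = rng.integers(0, len(X_train), size=len(X_train))
    model = GradientBoostingRegressor(random_state=seed)
    model.fit(X_train[idx], y_train[idx])
    ensemble.append(model)

# Spread across members is a rough proxy for epistemic uncertainty.
preds = np.stack([m.predict(X_pool) for m in ensemble])  # (n_models, n_pool)
mean_pred, std_pred = preds.mean(axis=0), preds.std(axis=0)

# Pull the most uncertain candidates for quick bench checks.
for i in np.argsort(std_pred)[-5:][::-1]:
    print(f"candidate {i}: predicted {mean_pred[i]:+.3f} +/- {std_pred[i]:.3f}")
[/code]

Disagreement between bootstrap-resampled members is only a rough stand-in for epistemic uncertainty, but it’s cheap and it gave us a defensible ranking for which candidates to test first.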
Reply
#4
Could the real problem be the objective you’re optimizing rather than the model’s biases?
Reply
#5
I went through a phase of assuming the mismatch came from the model, but it turned out the labels, i.e. the surrogate property we were optimizing, were shaky. We paused, cross-validated against a separate dataset, and then dropped a couple of features that correlated poorly with the measured property.
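
Roughly what that looked like, sketched with scikit-learn and SciPy. The synthetic arrays, the 0.1 correlation cutoff, and the random-forest surrogate are assumptions standing in for our real pipeline:

[code]
# Two checks in one sketch: (1) compare the in-dataset CV score against the
# score on a dataset the model never saw, (2) flag features whose rank
# correlation with the measured property is weak. Arrays are synthetic
# placeholders for descriptor tables from two independent sources.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X_a = rng.normal(size=(400, 8))                      # training source
y_a = X_a[:, 0] + rng.normal(scale=0.5, size=400)
X_b = rng.normal(size=(150, 8))                      # independent source
y_b = X_b[:, 0] + rng.normal(scale=0.5, size=150)

model = RandomForestRegressor(n_estimators=200, random_state=0)

# (1) A big gap between these two numbers implicates the labels/surrogate.
cv_r2 = cross_val_score(model, X_a, y_a, cv=5, scoring="r2").mean()
ext_r2 = model.fit(X_a, y_a).score(X_b, y_b)
print(f"5-fold CV R^2 on A: {cv_r2:.2f} | R^2 on independent B: {ext_r2:.2f}")

# (2) Per-feature Spearman correlation with the target on the external set.
for j in range(X_b.shape[1]):
    rho, p = spearmanr(X_b[:, j], y_b)
    flag = "  <- weak, candidate to drop" if abs(rho) < 0.1 else ""
    print(f"feature {j}: rho = {rho:+.2f} (p = {p:.2g}){flag}")
[/code]

In our case the gap between the in-dataset CV score and the score on the independent set pointed at the labels rather than the model, and the per-feature rank correlations told us which descriptors to drop.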
Reply