While comparing force convergence, I realized that even for the same DFT and machine-learning-potential data, the reported force error can differ depending on how it is calculated. The implementation in the wfl package computes the norm of the per-atom force difference, whereas what I have used before is an element-wise (x1, y1, z1, x2, ...) comparison.
To make force RMSE/MAE comparable across different projects, it would be better to use a consistent force-error convention.
I wonder whether the per-atom norm is the better way of evaluating the force error.
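For RMSE with uniform weights the two conventions differ by an exact factor: the mean of the squared per-atom norms is the sum of the three per-component mean squares, so the norm-based RMSE is √3 times the element-wise RMSE. MAE has no such fixed relation. A minimal sketch on synthetic data (random force differences, not from any real dataset) to illustrate:

import numpy as np

rng = np.random.default_rng(0)
# hypothetical force differences (calc - DFT) for 100 atoms, in eV/A
d = rng.normal(scale=0.05, size=(100, 3))

rmse_component = np.sqrt(np.mean(d ** 2))                     # element-wise convention
rmse_norm = np.sqrt(np.mean(np.linalg.norm(d, axis=1) ** 2))  # per-atom-norm convention

print(rmse_norm / rmse_component)  # sqrt(3), exactly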
import numpy as np
from sklearn.metrics import mean_squared_error as rmse
from sklearn.metrics import mean_absolute_error as mae

## WFL: RMSE of the per-atom force-difference norms
all_diffs = []
for atoms in calc_in_config:
    calc_quant = np.asarray(atoms.arrays.get("calc_forces")).reshape(len(atoms), -1)
    ref_quant = np.asarray(atoms.arrays.get("DFT_forces")).reshape(len(atoms), -1)
    diff = calc_quant - ref_quant
    diff = np.linalg.norm(diff, axis=1)  # one scalar per atom: |F_calc - F_DFT|
    all_diffs.append(diff)
all_diffs = np.concatenate(all_diffs)  # works even if configs have different atom counts
weights = np.ones(len(all_diffs))
RMSE = np.sqrt(np.sum((all_diffs ** 2) * weights) / np.sum(weights))
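The wfl snippet only reports RMSE; for a like-for-like MAE comparison with the methods below, the norm-convention analogue would be something like this sketch (reusing all_diffs and weights from above). Unlike the RMSE case, this is not a constant multiple of the element-wise MAE:

# MAE under the per-atom-norm convention: weighted mean of |F_calc - F_DFT| per atom
MAE_norm = np.average(all_diffs, weights=weights)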
## method 1: element-wise comparison over all flattened force components
F_mace = np.concatenate([atoms.arrays["calc_forces"] for atoms in calc_in_config]).flatten()
F_dft = np.concatenate([atoms.arrays["DFT_forces"] for atoms in calc_in_config]).flatten()
RMSD = rmse(F_dft, F_mace, squared=False)
MAE = mae(F_dft, F_mace)
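As an aside, the squared keyword of mean_squared_error is deprecated in recent scikit-learn releases (1.4+, if I recall correctly) in favour of a dedicated function, so a version-proof variant of method 1 might be:

from sklearn.metrics import root_mean_squared_error  # scikit-learn >= 1.4

RMSD = root_mean_squared_error(F_dft, F_mace)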
## method 2: list of per-config flattened arrays; sklearn treats each config as
## one multi-output sample (only works if all configs have the same atom count)
F_mace = [atoms.arrays["calc_forces"].flatten() for atoms in calc_in_config]
F_dft = [atoms.arrays["DFT_forces"].flatten() for atoms in calc_in_config]
RMSD = rmse(F_dft, F_mace, squared=False)
MAE = mae(F_dft, F_mace)
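One subtlety with method 2: for 2D input with the default multioutput="uniform_average", scikit-learn computes the error per output column and then averages, and with squared=False the square root is taken per column before averaging. The resulting RMSE is the mean of per-column RMSEs, which in general differs from the flattened RMSE of method 1 (MAE is unaffected, since plain averaging commutes). A tiny synthetic check, unrelated to any real data:

import numpy as np
from sklearn.metrics import mean_squared_error as rmse

y_true = np.zeros((2, 2))
y_pred = np.array([[1.0, 0.0], [1.0, 2.0]])

print(rmse(y_true, y_pred, squared=False))                  # ~1.2071 (mean of per-column RMSEs)
print(rmse(y_true.ravel(), y_pred.ravel(), squared=False))  # ~1.2247 (flattened, method-1 style)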