Training becomes slower after copying ML_ABN to ML_AB to continue training

Queries about input and output files, running specific calculations, etc.



suojiang_zhang1
Jr. Member
Posts: 64
Joined: Tue Nov 19, 2019 4:15 am

Training becomes slower after copying ML_ABN to ML_AB to continue training

#1 Post by suojiang_zhang1 » Sat Mar 29, 2025 9:53 am

Dear all,
Running MLFF training on the same computer, I found that the training speed becomes slower after I copy the ML_ABN file from the first run to ML_AB and continue training.
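For context, a typical restart of on-the-fly training looks roughly like the following (file and tag names as in recent VASP 6 versions; adjust to the version you are running). This is a sketch of the workflow being discussed, not output from this thread:

```shell
# Carry the training data collected so far over to the next run by
# renaming the output file to the corresponding input name:
cp ML_ABN ML_AB

# Relevant INCAR settings for continuing on-the-fly training:
#   ML_LMLFF = .TRUE.    # enable machine-learned force fields
#   ML_MODE  = train     # continue on-the-fly training, reading ML_AB
```

With this setup, the restarted run starts from all previously collected ab-initio structures rather than from scratch.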


marie-therese.huebsch
Full Member
Posts: 237
Joined: Tue Jan 19, 2021 12:01 am

Re: Training becomes slower after copying ML_ABN to ML_AB to continue training

#2 Post by marie-therese.huebsch » Mon Mar 31, 2025 10:08 am

Hi,

Great that you do some testing. Could you clarify what exactly you are observing?

For reference, the ab-initio part should remain at the same computational cost in every MD step unless you changed some settings. During training, more and more local reference configurations are collected, and adding, e.g., the 15th local reference configuration and updating the design matrix does indeed take more effort than adding the 4th. However, it is not an option to avoid adding local reference configurations entirely, since this is precisely what improves the force field. Restarting a training calculation versus running a single training calculation for longer should not affect the computational cost significantly (apart from the overhead of writing and reading files, etc.).
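The growing cost of adding reference configurations can be illustrated with a toy operation count (this is an illustration of the general least-squares picture, not VASP's actual implementation): each new basis function adds a column to the design matrix, and updating the normal-equations matrix requires dot products of the new column against every column already present, so the incremental cost grows with the number of configurations collected.

```python
def cost_of_adding(k, n_data=1000):
    """Multiply count to update Phi^T Phi when the basis grows to size k.

    The new (k-th) column must be dotted with all k columns now present
    (including itself), each dot product running over n_data rows.
    """
    return k * n_data

# The 15th configuration is several times more expensive to add
# than the 4th, even though each is "just one more" configuration.
print(cost_of_adding(4))   # 4000 multiplies
print(cost_of_adding(15))  # 15000 multiplies
```

This is why a training run gradually slows down as the force field accumulates data, independently of whether the run was restarted or not.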

Do you have a question in connection with your observation?

Marie-Therese

