#dev 2024-08-22

2024-08-22 UTC
sp1ff, beanbrain, thegreekgeek_, thegreekgeek, bterry1 and jjuran joined the channel
#
capjamesg[d]
What is the best way to write performance benchmark tests for a Python library?
#
capjamesg[d]
I have a list of functions that I want to monitor for performance regressions.
#
capjamesg[d]
Wait, I think I have figured it out...
ttybitnik, AramZS, sp1ff, GuestZero, shoesNsocks and [fluffy] joined the channel
#
Loqi
[fluffy] has 1 karma in this channel over the last year (3 in all channels)
#
capjamesg[d]
[fluffy]++
#
capjamesg[d]
I sometimes forget about that one!
#
capjamesg[d]
I ended up adding a feature to my pytest test suite that lets me provide a CLI argument to enter benchmarking mode, where the execution time of all tests is measured. The test suite ensures that the execution time of each test is not over a certain maximum using `assert`.
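[Editor's note: the core of the approach described above can be sketched as a small timing helper. The names `assert_max_duration` and `fast_enough` are illustrative, not from capjamesg's actual test suite; a real pytest setup would likely wire this into `conftest.py` via `pytest_addoption` so the benchmark assertions only run when the CLI flag is passed.]

```python
import time

def assert_max_duration(func, max_seconds, *args, **kwargs):
    """Call func, assert it finished within max_seconds, return its result.

    Hypothetical helper illustrating the assert-on-execution-time idea.
    """
    start = time.perf_counter()
    result = func(*args, **kwargs)
    elapsed = time.perf_counter() - start
    assert elapsed <= max_seconds, (
        f"{func.__name__} took {elapsed:.3f}s, over the {max_seconds}s budget"
    )
    return result

def fast_enough():
    # Stand-in for a library function being monitored for regressions.
    return sum(range(1000))

value = assert_max_duration(fast_enough, 1.0)
```

Using `time.perf_counter()` rather than `time.time()` matters here: it is a monotonic, high-resolution clock intended for measuring short durations.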
#
capjamesg[d]
Also, pytest has a timeout feature that prevents a test from running too long, which was nice to have.
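[Editor's note: pytest's timeout behavior comes from the separate `pytest-timeout` plugin (`pip install pytest-timeout`) rather than core pytest; this test-file fragment assumes that plugin is installed. The test name and sleep duration are illustrative.]

```python
import time

import pytest

# Fail this test outright if it runs longer than 2 seconds, instead of
# letting a hung or pathologically slow call stall the whole suite.
@pytest.mark.timeout(2)
def test_completes_quickly():
    time.sleep(0.01)  # stand-in for the real work under test
```

The same limit can be applied suite-wide with `--timeout=2` on the command line or `timeout = 2` in the pytest ini file.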
barnaby, barnabywalters, to2ds, ttybitnik, [morganm] and Kaguneh joined the channel