Week 5 (30th June to 6th July)
At the start of the week, I explored how to complement time-based benchmarks with memory-based profiling using pytest-memray.
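To illustrate the idea, here is a minimal sketch of a memory-capped test using pytest-memray's `limit_memory` marker (enforced when running `pytest --memray`). The workload, the test name, and the 24 MB limit are placeholders I chose for illustration, standing in for an actual toqito/qutipy call:

```python
import numpy as np
import pytest


# pytest-memray enforces this cap when the plugin is active
# (e.g. `pytest --memray`); without it, the mark is inert.
@pytest.mark.limit_memory("24 MB")
def test_random_density_matrix_memory():
    # Placeholder workload: build a random normalized density matrix.
    dim = 512
    mat = np.random.rand(dim, dim) + 1j * np.random.rand(dim, dim)
    rho = mat @ mat.conj().T
    rho /= np.trace(rho)
    assert np.isclose(np.trace(rho).real, 1.0)
```

This complements time-based results: a function can be fast yet allocate far more than expected, and the marker turns that into a hard test failure.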
In parallel, I submitted a couple of PRs to improve benchmarking reproducibility. One of the key tasks was creating a METHODOLOGY.md file to lay down guidelines for future contributions. I also began restructuring the benchmark scripts and Makefile to make running tests across libraries easier.
My mentors were mostly out of office this week due to holidays, but we stayed in touch asynchronously.
Progress
- PR #12: Added METHODOLOGY.md to establish clear benchmarking guidelines, as discussed in this issue comment.
- Updated the Makefile for easier setup.
- Added benchmarking scripts under scripts/ for both toqito and qutipy.
- Added setup/ scripts to validate installations.
I also experimented locally with integrating snakeviz for visualization and setting up memory profiling — though it’s not yet in a state to commit.
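The local experiment roughly follows the standard cProfile-to-snakeviz workflow: dump profiling stats to a file, then open it with `snakeviz benchmark.prof` for the interactive view. The `workload` function below is a hypothetical stand-in for a real benchmark call:

```python
import cProfile
import io
import pstats


def workload(n: int = 10_000) -> int:
    # Placeholder standing in for a toqito/qutipy benchmark target.
    return sum(i * i for i in range(n))


profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Dump stats in a format snakeviz can render:
#   $ snakeviz benchmark.prof
profiler.dump_stats("benchmark.prof")

# A quick text summary is also available without snakeviz.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
summary = stream.getvalue()
```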
Issues Discovered:
While writing benchmarks, I realized there’s still ambiguity around naming conventions:
- Function names vs. benchmark names can diverge, making results harder to track.
- Temporary parameters inside @pytest.mark.parametrize (e.g., dim) need consistent naming across the suite.
This will need a cleanup pass for consistency before scaling further.