Team software engineering with AI
Testing and debugging
Use LLMs to turn exploratory testing into functional tests. They can also update test cases when the code changes.
Remember to check that the generated tests accurately describe the behaviour you're after. For example, you might want to reject an empty task ("") rather than add it to the list.
The course recommends using timeit and cProfile for performance testing.
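Roughly how that looks in practice (slow_sum is just a placeholder function, not something from the course):

```python
import cProfile
import timeit


def slow_sum(n):
    """Toy function to measure: sums the first n integers in a loop."""
    total = 0
    for i in range(n):
        total += i
    return total


# timeit: time a small snippet over many repetitions for a stable average.
elapsed = timeit.timeit("slow_sum(10_000)", globals=globals(), number=1_000)
print(f"timeit: {elapsed / 1_000:.6f}s per call")

# cProfile: per-function call counts and cumulative times for a whole run.
cProfile.run("slow_sum(1_000_000)", sort="cumulative")
```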
There was a practical exercise here that involved identifying and fixing (or handling) bugs and edge cases in some algorithms. The doubly-linked list one took a few attempts, but they were generally pretty manageable.
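The exercise code isn't reproduced here, but the classic doubly-linked list edge cases are around removing the head, the tail, or the only node, all of which need the end pointers updated. A sketch of what handling them looks like:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None


class DoublyLinkedList:
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, value):
        node = Node(value)
        if self.tail is None:            # empty list: node becomes both ends
            self.head = self.tail = node
        else:
            node.prev = self.tail
            self.tail.next = node
            self.tail = node
        return node

    def remove(self, node):
        # Edge cases: node may be the head, the tail, or both (single-element list).
        if node.prev is None:
            self.head = node.next
        else:
            node.prev.next = node.next
        if node.next is None:
            self.tail = node.prev
        else:
            node.next.prev = node.prev
        node.prev = node.next = None
```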
Documentation
This mostly just discussed best practices for documentation rather than linking things back to AI, though it did suggest using LLMs to produce standardized docstrings to feed into automated documentation tools like Sphinx.
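For instance, Sphinx's autodoc reads reStructuredText field lists by default, so a standardized docstring might look like this (the function itself is only an illustration):

```python
def moving_average(values, window):
    """Compute the simple moving average of a sequence.

    :param values: Sequence of numbers to average.
    :type values: list[float]
    :param window: Number of trailing values to include in each average.
    :type window: int
    :returns: One average per position from index ``window - 1`` onwards.
    :rtype: list[float]
    :raises ValueError: If ``window`` is not a positive integer.
    """
    if window < 1:
        raise ValueError("window must be a positive integer")
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]
```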
Dependency management
As with documentation, I don't think there was a huge amount of non-obvious stuff here.
Some things to consider asking LLMs:
What do each of the dependencies do?
Are there any known security vulnerabilities? (Obviously limited by things like training cut-offs.)
Are any of them unmaintained?
How do I resolve a specific dependency conflict?
For some of these, getting it to suggest tools like pip-audit or safety will likely work better than asking it the questions directly.
The final stage was a practical exercise with two parts: updating a script from Python 2 to Python 3, and resolving dependency conflicts between pandas and NumPy.
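The exercise's script isn't reproduced here, but the usual Python 2 to 3 changes look something like this (a made-up snippet, not the course's):

```python
# Python 2 original (illustrative):
#
#   print "Average age: %d" % (total / count)
#   for name, age in ages.iteritems():
#       print name, age

# Python 3 equivalent:
ages = {"Ada": 36, "Grace": 45}
total = sum(ages.values())
count = len(ages)

# print is now a function, / is true division (use // to keep integer
# division), and dict.iteritems() became dict.items().
print("Average age: %d" % (total // count))
for name, age in ages.items():
    print(name, age)
```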