Raoul Harris

Team software engineering with AI

Testing and debugging

Use LLMs to turn exploratory testing into functional tests. They can also update test cases when the code changes.

Remember to check that the generated tests accurately describe the behaviour that you're after. For example, you might want to reject the empty task "" rather than add it to the list.
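To make that concrete, here is a minimal sketch of the kind of test worth checking for. The add_task helper is hypothetical (the course's actual exercise code isn't reproduced here); the point is that a generated test should assert the rejection behaviour you intend, not just whatever the current code happens to do.

```python
def add_task(tasks, task):
    """Add a task to the list, rejecting empty or whitespace-only strings."""
    if not task or not task.strip():
        raise ValueError("task must be a non-empty string")
    tasks.append(task)
    return tasks


def test_rejects_empty_task():
    tasks = []
    try:
        add_task(tasks, "")
    except ValueError:
        pass  # the behaviour we actually want
    else:
        raise AssertionError("empty task should be rejected, not appended")
    assert tasks == []  # the list must be left untouched
```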

The course recommends using timeit and cProfile for performance testing.
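A quick sketch of both tools on a toy function (the function itself is just for illustration): timeit gives a stable wall-clock number for one call, while cProfile breaks the time down per function.

```python
import cProfile
import io
import pstats
import timeit


def slow_sum(n):
    """Toy workload: sum of squares in a pure-Python loop."""
    total = 0
    for i in range(n):
        total += i * i
    return total


# timeit: micro-benchmark, repeated for a stable measurement.
elapsed = timeit.timeit(lambda: slow_sum(10_000), number=100)
print(f"100 runs of slow_sum(10_000): {elapsed:.4f}s")

# cProfile: per-function breakdown of where the time goes.
profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```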

There was a practical exercise here that involved identifying and fixing or handling bugs and edge cases in some algorithms. The doubly-linked list one took a few attempts, but they were generally pretty manageable.
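The exercise code isn't reproduced here, but a common family of doubly-linked-list bugs is forgetting to update head and tail pointers when removing at either end of the list. A sketch of a removal that handles all four cases explicitly:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None


class DoublyLinkedList:
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, value):
        node = Node(value)
        if self.tail is None:          # edge case: empty list
            self.head = self.tail = node
        else:
            node.prev = self.tail
            self.tail.next = node
            self.tail = node
        return node

    def remove(self, node):
        # Handle all four cases: interior node, head, tail, and sole node.
        if node.prev is not None:
            node.prev.next = node.next
        else:
            self.head = node.next      # removing the head
        if node.next is not None:
            node.next.prev = node.prev
        else:
            self.tail = node.prev      # removing the tail
        node.prev = node.next = None

    def to_list(self):
        out, current = [], self.head
        while current is not None:
            out.append(current.value)
            current = current.next
        return out
```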

Documentation

This mostly just discussed best practices for documentation rather than linking things back to AI, though it did suggest using LLMs to produce standardized docstrings to feed into automated documentation tools like Sphinx.
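As an example of the kind of standardized docstring an LLM can produce, here is a Google-style docstring (which Sphinx renders via its napoleon extension); the function and its parameters are illustrative, not from the course.

```python
def moving_average(values, window):
    """Compute the simple moving average of a sequence.

    Args:
        values (list[float]): Input samples, oldest first.
        window (int): Number of trailing samples to average; must be >= 1.

    Returns:
        list[float]: One average per position once the window is full,
        so the result has ``len(values) - window + 1`` entries.

    Raises:
        ValueError: If ``window`` is smaller than 1.
    """
    if window < 1:
        raise ValueError("window must be >= 1")
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]
```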

Dependency management

As with documentation, I don't think there was a huge amount of non-obvious stuff here.

Some things to consider asking LLMs:

  • What does each dependency do?

  • Are there any known security vulnerabilities? (Obviously limited by things like training cut-offs.)

  • Are any of them unmaintained?

  • How can a specific dependency conflict be resolved?

For some of these, getting it to suggest tools like pip-audit or safety will likely work better than asking it the questions directly.
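Whichever route you take, the answers are only as good as the dependency list you provide. A sketch of gathering that list programmatically with the standard-library importlib.metadata, ready to paste into a prompt (the prompt wording is just an example):

```python
import importlib.metadata


def installed_packages():
    """Return {distribution name: version} for every installed package."""
    return {
        dist.metadata["Name"]: dist.version
        for dist in importlib.metadata.distributions()
    }


# Example prompt an LLM could answer, built from the real environment:
packages = installed_packages()
prompt = "What does each of these dependencies do?\n" + "\n".join(
    f"- {name}=={version}" for name, version in sorted(packages.items())
)
print(prompt)
```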

The final stage was a practical exercise with two parts: updating a script from Python 2 to Python 3, and resolving dependency conflicts between pandas and NumPy.
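The exercise script isn't reproduced here, but a 2-to-3 update typically involves a few mechanical changes, shown below on an illustrative function: print becomes a function, dict.iteritems() becomes items(), and / becomes true division (so // is needed to keep Python 2's integer semantics).

```python
# Python 2 (before):
#     print "average:", total / len(counts)
#     for key, value in counts.iteritems():
#         print key, value

# Python 3 (after):
def summarise(counts):
    """Print each count and return the floor of the average."""
    total = sum(counts.values())
    for key, value in counts.items():   # iteritems() was removed in Python 3
        print(key, value)               # print is now a function
    return total // len(counts)         # // keeps Python 2's integer division
```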

Last updated 8 months ago