Testing Strategies

Learning Objectives

By the end of this reading, you will be able to:

  • Understand different types of testing and when to use them
  • Write effective unit tests using Python's unittest and pytest
  • Implement Test-Driven Development (TDD)
  • Use mocking and stubbing to isolate test units
  • Measure and improve code coverage
  • Apply the testing pyramid principle
  • Design integration and end-to-end tests

Introduction

Testing is the process of evaluating software to find differences between expected and actual behavior. Good testing practices lead to higher quality software, fewer bugs in production, and more maintainable code.

Why Testing Matters

  1. Quality Assurance: Catch bugs before they reach users
  2. Confidence: Make changes without fear of breaking things
  3. Documentation: Tests document how code should behave
  4. Design: Writing testable code leads to better design
  5. Regression Prevention: Ensure old bugs don't resurface

Testing Principles

"""
FIRST Principles for Good Tests:

F - Fast: Tests should run quickly
I - Independent: Tests shouldn't depend on each other
R - Repeatable: Same result every time
S - Self-validating: Clear pass/fail, no manual interpretation
T - Timely: Written at the right time (ideally before production code)

Additional Principles:

- Tests should be deterministic (no randomness)
- Tests should test one thing
- Tests should be readable
- Tests should be maintainable
- Tests should provide clear failure messages
"""

Types of Testing

Testing Pyramid

"""
Testing Pyramid (from bottom to top):

            /\
           /E2E\           End-to-End Tests (Few)
          /------\         - Test complete user workflows
         /        \        - Slow, expensive to maintain
        /Integration\      - Test critical paths only
       /--------------\
      /                \   Integration Tests (Some)
     /    Unit Tests    \  - Test component interactions
    /--------------------\ - Moderate speed
                           - Test main integrations

                           Unit Tests (Many)
                           - Test individual units
                           - Fast, cheap
                           - High coverage
Rule: More tests at the bottom, fewer at the top
"""

Unit Testing

Unit tests verify that individual components work correctly in isolation.

Using unittest (Built-in)

import unittest

# Code to test
class Calculator:
    """Simple calculator for demonstration."""

    def add(self, a, b):
        """Add two numbers."""
        return a + b

    def subtract(self, a, b):
        """Subtract b from a."""
        return a - b

    def multiply(self, a, b):
        """Multiply two numbers."""
        return a * b

    def divide(self, a, b):
        """Divide a by b."""
        if b == 0:
            raise ValueError("Cannot divide by zero")
        return a / b

    def power(self, base, exponent):
        """Raise base to exponent."""
        return base ** exponent

# Test class
class TestCalculator(unittest.TestCase):
    """Test cases for Calculator class."""

    def setUp(self):
        """Set up test fixture - runs before each test method."""
        self.calc = Calculator()

    def tearDown(self):
        """Clean up after test - runs after each test method."""
        # Usually for cleaning up resources like files, db connections
        pass

    def test_add_positive_numbers(self):
        """Test adding positive numbers."""
        result = self.calc.add(5, 3)
        self.assertEqual(result, 8)

    def test_add_negative_numbers(self):
        """Test adding negative numbers."""
        result = self.calc.add(-5, -3)
        self.assertEqual(result, -8)

    def test_add_mixed_numbers(self):
        """Test adding positive and negative."""
        result = self.calc.add(5, -3)
        self.assertEqual(result, 2)

    def test_subtract(self):
        """Test subtraction."""
        result = self.calc.subtract(10, 3)
        self.assertEqual(result, 7)

    def test_multiply(self):
        """Test multiplication."""
        result = self.calc.multiply(4, 5)
        self.assertEqual(result, 20)

    def test_divide(self):
        """Test division."""
        result = self.calc.divide(10, 2)
        self.assertEqual(result, 5.0)

    def test_divide_by_zero_raises_error(self):
        """Test that dividing by zero raises ValueError."""
        with self.assertRaises(ValueError) as context:
            self.calc.divide(10, 0)
        self.assertEqual(str(context.exception), "Cannot divide by zero")

    def test_power(self):
        """Test exponentiation."""
        result = self.calc.power(2, 3)
        self.assertEqual(result, 8)

# Run tests
if __name__ == '__main__':
    unittest.main()

Common Assertions

class TestAssertions(unittest.TestCase):
    """Demonstrate common assertion methods."""

    def test_equality(self):
        """Test equality."""
        self.assertEqual(1 + 1, 2)
        self.assertNotEqual(1 + 1, 3)

    def test_boolean(self):
        """Test boolean values."""
        self.assertTrue(True)
        self.assertFalse(False)

    def test_none(self):
        """Test None values."""
        self.assertIsNone(None)
        self.assertIsNotNone("value")

    def test_membership(self):
        """Test membership."""
        self.assertIn(3, [1, 2, 3, 4])
        self.assertNotIn(5, [1, 2, 3, 4])

    def test_comparisons(self):
        """Test numeric comparisons."""
        self.assertGreater(5, 3)
        self.assertGreaterEqual(5, 5)
        self.assertLess(3, 5)
        self.assertLessEqual(3, 3)

    def test_approximate_equality(self):
        """Test floating point numbers with tolerance."""
        self.assertAlmostEqual(0.1 + 0.2, 0.3, places=7)

    def test_sequences(self):
        """Test sequence equality."""
        self.assertListEqual([1, 2, 3], [1, 2, 3])
        self.assertDictEqual({'a': 1}, {'a': 1})

    def test_exceptions(self):
        """Test that exceptions are raised."""
        with self.assertRaises(ZeroDivisionError):
            1 / 0

    def test_regex(self):
        """Test string matches regex pattern."""
        self.assertRegex("hello world", r"hello.*")

Using pytest (Third-Party)

# Install: pip install pytest

# Code to test
class BankAccount:
    """Bank account for testing."""

    def __init__(self, initial_balance=0):
        if initial_balance < 0:
            raise ValueError("Initial balance cannot be negative")
        self.balance = initial_balance
        self.transactions = []

    def deposit(self, amount):
        """Deposit money."""
        if amount <= 0:
            raise ValueError("Deposit amount must be positive")
        self.balance += amount
        self.transactions.append(f"Deposit: +${amount}")

    def withdraw(self, amount):
        """Withdraw money."""
        if amount <= 0:
            raise ValueError("Withdrawal amount must be positive")
        if amount > self.balance:
            raise ValueError("Insufficient funds")
        self.balance -= amount
        self.transactions.append(f"Withdrawal: -${amount}")

    def get_balance(self):
        """Get current balance."""
        return self.balance

# Test file: test_bank_account.py
import pytest

class TestBankAccount:
    """Test BankAccount class using pytest."""

    def test_initial_balance(self):
        """Test account creation with initial balance."""
        account = BankAccount(100)
        assert account.get_balance() == 100

    def test_default_balance(self):
        """Test account creation with default balance."""
        account = BankAccount()
        assert account.get_balance() == 0

    def test_negative_initial_balance_raises_error(self):
        """Test that negative initial balance raises error."""
        with pytest.raises(ValueError, match="Initial balance cannot be negative"):
            BankAccount(-100)

    def test_deposit(self):
        """Test depositing money."""
        account = BankAccount(100)
        account.deposit(50)
        assert account.get_balance() == 150

    def test_deposit_updates_transactions(self):
        """Test that deposit is recorded in transactions."""
        account = BankAccount()
        account.deposit(100)
        assert "Deposit: +$100" in account.transactions

    def test_withdraw(self):
        """Test withdrawing money."""
        account = BankAccount(100)
        account.withdraw(30)
        assert account.get_balance() == 70

    def test_withdraw_insufficient_funds(self):
        """Test that withdrawing more than balance raises error."""
        account = BankAccount(50)
        with pytest.raises(ValueError, match="Insufficient funds"):
            account.withdraw(100)

    @pytest.mark.parametrize("initial,deposit,expected", [
        (0, 100, 100),
        (50, 50, 100),
        (100, 0.01, 100.01),
    ])
    def test_deposit_parametrized(self, initial, deposit, expected):
        """Test deposit with multiple parameter sets."""
        account = BankAccount(initial)
        account.deposit(deposit)
        assert account.get_balance() == expected

# Fixtures for setup/teardown
@pytest.fixture
def account():
    """Create a bank account for testing."""
    return BankAccount(100)

@pytest.fixture
def empty_account():
    """Create an empty bank account."""
    return BankAccount()

def test_using_fixture(account):
    """Test using fixture."""
    account.deposit(50)
    assert account.get_balance() == 150

# Run tests: pytest test_bank_account.py

Pytest Features

"""
Pytest advantages over unittest:

1. Simpler syntax: assert instead of self.assertEqual
2. Better error messages
3. Fixtures for setup/teardown
4. Parametrized tests
5. Powerful plugins
6. Auto-discovery of tests

Pytest conventions:
- Test files: test_*.py or *_test.py
- Test functions: test_*()
- Test classes: Test*
"""

# Markers
@pytest.mark.slow
def test_slow_operation():
    """Mark test as slow."""
    pass  # Run with: pytest -m slow

@pytest.mark.skip(reason="Feature not implemented yet")
def test_future_feature():
    """Skip this test."""
    pass

import sys

@pytest.mark.skipif(sys.version_info < (3, 8), reason="Requires Python 3.8+")
def test_python_38_feature():
    """Skip on older Python versions."""
    pass

@pytest.mark.xfail(reason="Known bug #123")
def test_known_bug():
    """Expect this test to fail."""
    pass
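
Note that pytest warns about unregistered marks like `slow`; the documented way to register custom markers is the `markers` ini option. A minimal configuration file (using the marker name from the example above):

```ini
# pytest.ini
[pytest]
markers =
    slow: marks tests as slow (deselect with -m "not slow")
```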

Test-Driven Development (TDD)

TDD is a development approach where you write tests before writing production code.

TDD Cycle: Red-Green-Refactor

"""
TDD Process:

1. RED: Write a failing test
   - Define expected behavior
   - Test fails because feature doesn't exist

2. GREEN: Write minimal code to pass test
   - Just enough to make test pass
   - Don't worry about perfect code yet

3. REFACTOR: Improve code while keeping tests green
   - Clean up code
   - Remove duplication
   - Improve design

4. REPEAT: Next feature
"""

# Example: TDD for a String Utility

# Step 1: RED - Write failing test
import pytest

def test_reverse_string():
    """Test reversing a string."""
    assert reverse_string("hello") == "olleh"

# Run test - FAILS (reverse_string doesn't exist)

# Step 2: GREEN - Minimal implementation
def reverse_string(s):
    """Reverse a string."""
    return s[::-1]

# Run test - PASSES

# Step 3: Add more tests
def test_reverse_empty_string():
    """Test reversing empty string."""
    assert reverse_string("") == ""

def test_reverse_single_character():
    """Test reversing single character."""
    assert reverse_string("a") == "a"

# Step 4: Refactor if needed (implementation is already simple)

# Next feature: Palindrome checker
# RED
def test_is_palindrome_true():
    """Test palindrome detection."""
    assert is_palindrome("racecar") == True

# GREEN
def is_palindrome(s):
    """Check if string is palindrome."""
    return s == s[::-1]

# Add more tests
def test_is_palindrome_false():
    assert is_palindrome("hello") == False

def test_is_palindrome_case_insensitive():
    assert is_palindrome("Racecar") == True  # RED: fails with the current implementation

# Refactor to handle case
def is_palindrome(s):
    """Check if string is palindrome (case-insensitive)."""
    s_lower = s.lower()
    return s_lower == s_lower[::-1]

TDD Example: FizzBuzz

# Classic FizzBuzz with TDD approach

# Test 1: Numbers divisible by 3
def test_fizzbuzz_divisible_by_3():
    """Numbers divisible by 3 return 'Fizz'."""
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(6) == "Fizz"

def fizzbuzz(n):
    if n % 3 == 0:
        return "Fizz"
    return str(n)

# Test 2: Numbers divisible by 5
def test_fizzbuzz_divisible_by_5():
    """Numbers divisible by 5 return 'Buzz'."""
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(10) == "Buzz"

def fizzbuzz(n):
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Test 3: Numbers divisible by both
def test_fizzbuzz_divisible_by_both():
    """Numbers divisible by 3 and 5 return 'FizzBuzz'."""
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(30) == "FizzBuzz"

def fizzbuzz(n):
    if n % 15 == 0:  # or: n % 3 == 0 and n % 5 == 0
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Test 4: Regular numbers
def test_fizzbuzz_regular_numbers():
    """Numbers not divisible by 3 or 5 return the number."""
    assert fizzbuzz(1) == "1"
    assert fizzbuzz(7) == "7"

# Final implementation passes all tests

Mocking and Stubbing

Mocking isolates the unit under test by replacing dependencies with controlled substitutes.

When to Use Mocks

"""
Use mocks when:
- Testing code that depends on external systems (databases, APIs, files)
- Dependencies are slow or expensive
- Dependencies have side effects
- Dependencies are hard to set up in tests
- Testing error conditions

Mock vs Stub vs Fake:
- Mock: Object with pre-programmed expectations
- Stub: Object that returns pre-defined responses
- Fake: Working implementation (simplified)
"""

Using unittest.mock

from unittest.mock import Mock, MagicMock, patch
import pytest
import requests

# Code to test - depends on external API
class WeatherService:
    """Service that fetches weather data from API."""

    def __init__(self, api_key):
        self.api_key = api_key
        self.base_url = "https://api.weather.com"

    def get_temperature(self, city):
        """Get current temperature for city."""
        response = requests.get(
            f"{self.base_url}/current",
            params={'city': city, 'key': self.api_key}
        )
        if response.status_code != 200:
            raise ValueError("Failed to fetch weather data")

        data = response.json()
        return data['temperature']

# Test using mock
class TestWeatherService:
    """Test WeatherService with mocked requests."""

    @patch('requests.get')
    def test_get_temperature_success(self, mock_get):
        """Test successful temperature fetch."""
        # Configure mock response
        mock_response = Mock()
        mock_response.status_code = 200
        mock_response.json.return_value = {'temperature': 72}
        mock_get.return_value = mock_response

        # Test
        service = WeatherService("fake-api-key")
        temp = service.get_temperature("New York")

        # Assertions
        assert temp == 72
        mock_get.assert_called_once_with(
            "https://api.weather.com/current",
            params={'city': 'New York', 'key': 'fake-api-key'}
        )

    @patch('requests.get')
    def test_get_temperature_api_error(self, mock_get):
        """Test handling of API errors."""
        # Configure mock to return error
        mock_response = Mock()
        mock_response.status_code = 500
        mock_get.return_value = mock_response

        # Test
        service = WeatherService("fake-api-key")
        with pytest.raises(ValueError, match="Failed to fetch weather data"):
            service.get_temperature("New York")

Mock Objects

from unittest.mock import Mock, MagicMock, call

# Basic Mock
def test_basic_mock():
    """Demonstrate basic mock usage."""
    mock = Mock()

    # Set return value
    mock.get_data.return_value = {'key': 'value'}
    assert mock.get_data() == {'key': 'value'}

    # Verify method was called
    mock.get_data.assert_called_once()

    # Set side effect (function called when mock is called)
    mock.process.side_effect = lambda x: x * 2
    assert mock.process(5) == 10

# Mock with specifications
def test_mock_spec():
    """Mock that matches an interface."""
    class RealClass:
        def method_a(self):
            pass

        def method_b(self, x):
            pass

    # Mock with spec ensures only real methods can be called
    mock = Mock(spec=RealClass)
    mock.method_a()  # OK
    # mock.method_c()  # Would raise AttributeError

# MagicMock for magic methods
def test_magic_mock():
    """MagicMock supports magic methods."""
    mock = MagicMock()

    # Support for len, iter, contains, etc.
    mock.__len__.return_value = 3
    assert len(mock) == 3

    mock.__getitem__.return_value = 'value'
    assert mock['key'] == 'value'

# Verify call arguments
def test_call_verification():
    """Verify how mock was called."""
    mock = Mock()

    mock.method(1, 2, key='value')

    # Different ways to verify
    mock.method.assert_called_with(1, 2, key='value')
    mock.method.assert_called_once_with(1, 2, key='value')

    # Check call count
    assert mock.method.call_count == 1

    # Check all calls
    mock.method(3, 4)
    assert mock.method.call_args_list == [
        call(1, 2, key='value'),
        call(3, 4)
    ]
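
side_effect can also be an exception, or an iterable mixing values and exceptions consumed one per call; this makes error paths and retry logic easy to simulate (the mock_client below is hypothetical):

```python
from unittest.mock import Mock

def test_side_effect_raises():
    """Assigning an exception to side_effect makes the call raise it."""
    mock_client = Mock()
    mock_client.fetch.side_effect = ConnectionError("network down")
    try:
        mock_client.fetch("/data")
        raised = False
    except ConnectionError:
        raised = True
    assert raised

def test_side_effect_sequence():
    """An iterable side_effect yields one result per call: fail, then succeed."""
    mock_client = Mock()
    mock_client.fetch.side_effect = [TimeoutError("slow"), {"ok": True}]
    try:
        mock_client.fetch("/data")               # first call raises
    except TimeoutError:
        pass
    assert mock_client.fetch("/data") == {"ok": True}  # second call succeeds
```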

Patching

# Patch as decorator ('module.function' below is a placeholder import path)
@patch('module.function')
def test_with_patch_decorator(mock_function):
    """Patch using decorator."""
    mock_function.return_value = 42
    result = module.function()
    assert result == 42

# Patch as context manager
def test_with_patch_context():
    """Patch using context manager."""
    with patch('module.function') as mock_function:
        mock_function.return_value = 42
        result = module.function()
        assert result == 42

# Patch object attribute
def test_patch_object():
    """Patch object attribute."""
    obj = SomeClass()
    with patch.object(obj, 'method', return_value=42):
        assert obj.method() == 42

# Patch multiple
def test_patch_multiple():
    """Patch multiple items."""
    with patch.multiple('module',
                       func_a=Mock(return_value=1),
                       func_b=Mock(return_value=2)):
        assert module.func_a() == 1
        assert module.func_b() == 2
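
patch.dict rounds out the family: it temporarily updates a dictionary, most often os.environ, and restores it afterward (the get_api_key helper is invented for this sketch):

```python
import os
from unittest.mock import patch

def get_api_key():
    """Hypothetical helper that reads configuration from the environment."""
    return os.environ.get("API_KEY", "missing")

def test_patch_dict_environment():
    """patch.dict temporarily updates os.environ, then restores it."""
    with patch.dict(os.environ, {"API_KEY": "test-key"}):
        assert get_api_key() == "test-key"

def test_patch_dict_clear():
    """clear=True empties the dict for the duration of the patch."""
    with patch.dict(os.environ, {}, clear=True):
        assert get_api_key() == "missing"
```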

Real Example: Testing with Database

# Production code
class UserRepository:
    """Repository for user data."""

    def __init__(self, db_connection):
        self.db = db_connection

    def get_user(self, user_id):
        """Get user by ID."""
        cursor = self.db.cursor()
        cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
        row = cursor.fetchone()
        if row:
            return {'id': row[0], 'name': row[1], 'email': row[2]}
        return None

    def create_user(self, name, email):
        """Create new user."""
        cursor = self.db.cursor()
        cursor.execute(
            "INSERT INTO users (name, email) VALUES (?, ?)",
            (name, email)
        )
        self.db.commit()
        return cursor.lastrowid

# Tests with mock database
class TestUserRepository:
    """Test UserRepository with mocked database."""

    def test_get_user_found(self):
        """Test getting existing user."""
        # Create mock database
        mock_db = Mock()
        mock_cursor = Mock()
        mock_db.cursor.return_value = mock_cursor
        mock_cursor.fetchone.return_value = (1, 'John Doe', 'john@example.com')

        # Test
        repo = UserRepository(mock_db)
        user = repo.get_user(1)

        # Assertions
        assert user == {'id': 1, 'name': 'John Doe', 'email': 'john@example.com'}
        mock_cursor.execute.assert_called_once_with(
            "SELECT * FROM users WHERE id = ?", (1,)
        )

    def test_get_user_not_found(self):
        """Test getting non-existent user."""
        mock_db = Mock()
        mock_cursor = Mock()
        mock_db.cursor.return_value = mock_cursor
        mock_cursor.fetchone.return_value = None

        repo = UserRepository(mock_db)
        user = repo.get_user(999)

        assert user is None

    def test_create_user(self):
        """Test creating new user."""
        mock_db = Mock()
        mock_cursor = Mock()
        mock_cursor.lastrowid = 42
        mock_db.cursor.return_value = mock_cursor

        repo = UserRepository(mock_db)
        user_id = repo.create_user('Jane Doe', 'jane@example.com')

        assert user_id == 42
        mock_cursor.execute.assert_called_once()
        mock_db.commit.assert_called_once()

Code Coverage

Code coverage measures how much of your code is executed during tests.

Measuring Coverage

"""
Install coverage tool:
pip install coverage

# Run tests with coverage
coverage run -m pytest

# Generate report
coverage report

# Generate HTML report
coverage html

# View report
# Open htmlcov/index.html in browser

Coverage metrics:
- Line coverage: % of lines executed
- Branch coverage: % of branches (if/else) taken
- Function coverage: % of functions called

Target: 80%+ coverage, 100% for critical paths
"""

Coverage Example

# math_utils.py
def divide(a, b):
    """Divide a by b."""
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

def is_even(n):
    """Check if number is even."""
    if n % 2 == 0:
        return True
    else:
        return False

# test_math_utils.py
def test_divide_normal():
    """Test normal division."""
    assert divide(10, 2) == 5

def test_is_even_true():
    """Test even number."""
    assert is_even(4) == True

# Coverage report shows:
# divide: 50% (missing zero division test)
# is_even: 50% (missing odd number test)

# Add missing tests
def test_divide_by_zero():
    """Test division by zero."""
    with pytest.raises(ValueError):
        divide(10, 0)

def test_is_even_false():
    """Test odd number."""
    assert is_even(5) == False

# Coverage: 100%

Coverage Configuration

# .coveragerc configuration file
"""
[run]
source = src/
omit =
    */tests/*
    */test_*
    */__init__.py

[report]
exclude_lines =
    pragma: no cover
    def __repr__
    raise AssertionError
    raise NotImplementedError
    if __name__ == .__main__.:
    if TYPE_CHECKING:

[html]
directory = htmlcov
"""

# Code with coverage pragmas
def get_debug_info():
    """Get debug information."""
    if DEBUG:  # pragma: no cover
        # This branch is excluded from coverage reporting
        return detailed_debug_info()
    return basic_info()

Integration Testing

Integration tests verify that different parts of the system work together correctly.

# Example: Testing Flask API with database

from flask import Flask, jsonify, request
from flask_sqlalchemy import SQLAlchemy
import pytest

# Application code
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///test.db'
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True, nullable=False)
    email = db.Column(db.String(120), unique=True, nullable=False)

@app.route('/users', methods=['POST'])
def create_user():
    data = request.json
    user = User(username=data['username'], email=data['email'])
    db.session.add(user)
    db.session.commit()
    return jsonify({'id': user.id}), 201

@app.route('/users/<int:user_id>')
def get_user(user_id):
    user = User.query.get_or_404(user_id)
    return jsonify({'id': user.id, 'username': user.username, 'email': user.email})

# Integration tests
@pytest.fixture
def client():
    """Create test client with test database."""
    app.config['TESTING'] = True
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///:memory:'

    with app.test_client() as client:
        with app.app_context():
            db.create_all()
        yield client
        with app.app_context():
            db.drop_all()

def test_create_and_get_user(client):
    """Test creating and retrieving user (integration test)."""
    # Create user
    response = client.post('/users', json={
        'username': 'testuser',
        'email': 'test@example.com'
    })
    assert response.status_code == 201
    user_id = response.json['id']

    # Get user
    response = client.get(f'/users/{user_id}')
    assert response.status_code == 200
    assert response.json['username'] == 'testuser'
    assert response.json['email'] == 'test@example.com'

def test_get_nonexistent_user(client):
    """Test getting non-existent user returns 404."""
    response = client.get('/users/999')
    assert response.status_code == 404

Testing Best Practices

Arrange-Act-Assert (AAA) Pattern

def test_bank_withdrawal():
    """Test using AAA pattern."""

    # ARRANGE: Set up test data and conditions
    account = BankAccount(100)

    # ACT: Execute the code being tested
    account.withdraw(30)

    # ASSERT: Verify the results
    assert account.get_balance() == 70

Test Naming Conventions

"""
Good test names describe:
1. What is being tested
2. What scenario/condition
3. What is expected

Patterns:
- test_<method>_<scenario>_<expected>
- test_<feature>_when_<condition>_then_<outcome>

Examples:
"""

def test_withdraw_with_sufficient_funds_decreases_balance():
    """Clear, descriptive name."""
    pass

def test_withdraw_when_insufficient_funds_then_raises_error():
    """Behavior-driven style."""
    pass

# Avoid
def test_withdraw():  # Too generic
    pass

def test_1():  # Not descriptive
    pass

Testing Edge Cases

class TestEdgeCases:
    """Always test edge cases."""

    def test_empty_input(self):
        """Test with empty input."""
        assert process([]) == []

    def test_single_item(self):
        """Test with single item."""
        assert process([1]) == [1]

    def test_large_input(self):
        """Test with large input."""
        large_list = list(range(10000))
        result = process(large_list)
        assert len(result) == 10000

    def test_boundary_values(self):
        """Test boundary values."""
        assert is_valid_age(0) == True
        assert is_valid_age(120) == True
        assert is_valid_age(-1) == False
        assert is_valid_age(121) == False

    def test_special_characters(self):
        """Test with special characters."""
        assert sanitize("hello<script>alert('xss')</script>") == "helloalert('xss')"

Exercises

Basic Exercises

  1. Unit Tests

    • Write unit tests for a string utility class
    • Include methods: reverse, capitalize, count_words, is_palindrome
    • Achieve 100% code coverage
    • Test edge cases
  2. TDD Practice

    • Implement a shopping cart using TDD
    • Features: add_item, remove_item, get_total, apply_discount
    • Write tests first, then implementation
    • Refactor while keeping tests green
  3. Parametrized Tests

    • Create parametrized tests for a temperature converter
    • Convert between Celsius, Fahrenheit, Kelvin
    • Test multiple conversion scenarios
    • Test invalid inputs

Intermediate Exercises

  1. Mocking External Services

    • Create a weather app that uses an external API
    • Mock the API calls in tests
    • Test success and failure scenarios
    • Test rate limiting and retries
  2. Integration Tests

    • Build a simple blog API (create post, get post, list posts)
    • Write integration tests for the API
    • Use test database
    • Test complete workflows
  3. Code Coverage

    • Take an existing codebase
    • Measure current code coverage
    • Add tests to reach 80%+ coverage
    • Identify untestable code and refactor

Advanced Exercises

  1. Test Suite Organization

    • Organize tests into unit, integration, and e2e folders
    • Create pytest configuration for different test suites
    • Set up markers for slow tests
    • Create fixtures for common setup
  2. Testing Async Code

    • Write tests for async/await functions
    • Use pytest-asyncio
    • Test concurrent operations
    • Test timeout scenarios
  3. Property-Based Testing

    • Use hypothesis library for property-based testing
    • Test that reverse(reverse(s)) == s for any string
    • Test sorting properties
    • Find edge cases automatically
  4. Mutation Testing

    • Use mutpy or similar tool
    • Measure test suite quality by introducing mutations
    • Improve tests to catch more mutations
    • Achieve high mutation score

Summary

Testing is essential for software quality:

  1. Unit Tests: Test individual components in isolation (many, fast)
  2. Integration Tests: Test components working together (some, moderate)
  3. E2E Tests: Test complete workflows (few, slow)
  4. TDD: Write tests first, then implementation (Red-Green-Refactor)
  5. Mocking: Isolate units by replacing dependencies
  6. Coverage: Measure test completeness (target 80%+)
  7. Best Practices: AAA pattern, clear names, test edge cases

Testing pyramid:

  • Many unit tests (fast, cheap, focused)
  • Some integration tests (moderate speed, broader scope)
  • Few e2e tests (slow, expensive, full workflows)

Remember: Tests are code too. Keep them clean, maintainable, and readable.

Next Reading

Continue to 05-clean-code.md to learn about clean code principles including naming, functions, SOLID principles, and refactoring techniques.