---
title: Code coverage vs. test coverage in Python
published: "2023-05-04"
updated: "2026-01-27"
publisher: Honeybadger
author: Muhammed Ali
category: Python articles
tags:
  - Python
  - Testing
description: Writing tests is essential for ensuring the quality of your code. Discover the difference between code coverage and test coverage and how to use them to make your testing process more efficient and effective.
url: "https://www.honeybadger.io/blog/code-test-coverage-python/"
---

If you have been [writing tests](https://www.honeybadger.io/blog/beginners-guide-to-software-testing-in-python/) for a while, you have probably encountered code coverage and test coverage. These concepts can be difficult to differentiate because they are somewhat intertwined. In this article, you will learn what code coverage and test coverage mean and the basis of each concept.

You will also learn the key differences between code coverage and test coverage in Python, and discover tools, techniques, and best practices to improve your testing strategy. Learning about these concepts will enable you to identify the parts of your projects that have not been properly covered by test cases, which will, in turn, make your application more robust.

Generally, code coverage is relatively objective: once a line of code is executed during a test, it counts as covered. Test coverage, however, is subjective and depends on the scenarios you consider in scope. Keep reading for further explanation and examples. When you find code not covered by tests, ask yourself:

1. Is this code reachable? (If not, it most likely serves no purpose and can be removed.)
2. Is this an edge case I haven't considered?
3. Is this error handling that needs testing?
4. Is this integration code that needs mocking?

## Common misconceptions about coverage metrics

Coverage metrics are powerful tools, but several misconceptions can lead developers astray. To use coverage metrics effectively, you need to recognize these misconceptions so you don't fall for them.

### Misconception 1: 100% coverage means bug-free code

This is perhaps the most dangerous misconception. Achieving 100% coverage simply means every line of code was executed at least once during testing. It says nothing about whether the code was tested correctly or thoroughly. Take a look at this example:

```python
def calculate_discount(price, discount_percent):
    if discount_percent > 100:
        discount_percent = 100
    return price * (1 - discount_percent / 100)

# Test that achieves 100% code coverage but misses bugs
def test_discount():
    assert calculate_discount(100, 50) == 50  # Passes, 100% coverage achieved
```

This test gives 100% code coverage, but it doesn't catch the bug when `discount_percent` is negative, when `price` is negative, or when `discount_percent` is exactly 100. The code executes, but it's not properly validated.
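One way to close those gaps is to validate the inputs and then test each boundary explicitly. Here is a sketch of a hardened version, assuming we want negative prices rejected and the discount clamped to the 0–100 range (a design choice for illustration, not part of the original function):

```python
def calculate_discount(price, discount_percent):
    # Hypothetical hardened version: reject negative prices and
    # clamp the discount into [0, 100] before applying it
    if price < 0:
        raise ValueError("price must be non-negative")
    discount_percent = max(0, min(discount_percent, 100))
    return price * (1 - discount_percent / 100)

def test_discount_thorough():
    assert calculate_discount(100, 50) == 50    # normal case
    assert calculate_discount(100, 100) == 0    # exactly 100%
    assert calculate_discount(100, 150) == 0    # over-limit clamped
    assert calculate_discount(100, -10) == 100  # negative clamped
```

The coverage percentage is the same as before; what changed is that each edge of the input space now has an assertion of its own.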

### Misconception 2: Low test coverage always means poor testing

While low coverage often indicates testing gaps, there are legitimate reasons for lower coverage in certain areas. Third-party library integrations, simple getters and setters, generated code, or intentionally untested legacy code might not warrant extensive testing. The goal should be meaningful coverage of critical business logic, not arbitrary percentage targets.
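When some code legitimately doesn't warrant testing, Coverage.py lets you exclude it explicitly rather than letting it drag your percentage down. A minimal sketch using the standard `# pragma: no cover` marker (the functions here are hypothetical examples):

```python
def parse_config(raw):
    # Business logic worth testing: parse "key=value" lines into a dict
    return dict(line.split("=", 1) for line in raw.splitlines() if line)

def debug_dump(entries):  # pragma: no cover
    # Debugging helper excluded from coverage: it is only ever run
    # by hand, so testing it adds little value
    for key, value in entries.items():
        print(f"{key} -> {value}")
```

Lines marked this way are skipped by Coverage.py's report, so a deliberately untested helper no longer looks like a testing gap.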

### Misconception 3: All code needs to be tested

Not all code provides the same value when tested. Simple property accessors, configuration files, or straightforward utility functions might not need extensive test coverage. Focus your testing efforts on complex business logic, code with high bug risk, and areas that frequently change.

### Misconception 4: Code coverage tools catch all testing issues

Coverage tools only measure execution. They don't verify that your assertions are correct or comprehensive. A test can execute code and pass while still having weak or missing assertions. You need to manually review your tests to ensure they validate the right behavior.
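As a sketch of the problem, consider this hypothetical buggy function and a test that executes every line of it yet never checks the value that matters:

```python
def apply_tax(amount, rate):
    # Hypothetical buggy implementation: the rate is never applied
    return amount

def test_apply_tax_weak():
    # Executes every line of apply_tax (100% coverage), but the
    # assertion is too weak to notice that rate is ignored
    result = apply_tax(100, 0.2)
    assert result is not None  # passes despite the bug
```

An assertion like `assert apply_tax(100, 0.2) == 120` would expose the bug immediately, but a coverage report cannot tell the weak test and the strong one apart.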

## Code coverage

When writing tests, as your project gets larger, it’s almost impossible to know whether all the parts of your codebase have adequate test coverage. The same limitation occurs when you want to know the percentage of your code that isn’t covered by the test and the actual code that isn’t covered. This is where code coverage comes in. Code coverage shows you the areas of your code that aren’t covered by tests, and with such information, you can investigate and find out how to fix them.

It does so by recording which parts of the code are executed during the testing process and reporting the percentage of your code that tests have exercised. Think of your code as clay that can be molded into many forms: to reach 100% code coverage, you only need to mold a single item.

### Characteristics of code coverage

1. With code coverage, you can identify the parts of your code that are not covered by a test, which makes writing tests easier.
2. It provides a percentage of the amount of code that has been tested.
3. With code coverage, once one input to a piece of code has been exercised, the other possible inputs are neglected. Following our clay example, molding a single item is enough to get 100%; it doesn't account for the other items that could be molded from the clay.

### Code coverage in Python

In this section, you will learn how to measure code coverage for your Python code. We will start by writing some Python functions and then write unit tests for them using the `unittest` module. Then, we will get code coverage with [Coverage.py](https://coverage.readthedocs.io/en/6.5.0/). You can install Coverage.py by running the following command:

```shell
pip install coverage
```

In the following code, the function `sum_negative()` adds two numbers only if both are negative and returns `None` otherwise. The `sum_positive()` function adds two numbers only if both are positive and returns `None` otherwise.

To get started, create a Python file named "sample.py" and paste the following code:

```python
def sum_negative(num1, num2):
    if num1 < 0 and num2 < 0:
        return num1 + num2
    else:
        return None

def sum_positive(num1, num2):
    if num1 > 0 and num2 > 0:
        return num1 + num2
    else:
        return None
```

Now we can write test cases for the code above using the `unittest` module. Create a new file named “tests.py” and paste the following code, which asserts that the functions return what is expected. There is one assertion for each `return` statement.

```python
import unittest

from sample import sum_negative, sum_positive

class SumTests(unittest.TestCase):
    def test_sum(self):
        self.assertEqual(sum_negative(-5, -5), -10)
        self.assertEqual(sum_negative(5, 2), None)

    def test_sum_positive_ok(self):
        self.assertEqual(sum_positive(2, 2), 4)
        self.assertEqual(sum_positive(-5, -2), None)
```

The test cases above will give you 100% code coverage. You can check by running the following commands:

```shell
coverage run -m unittest discover
coverage report -m
```

![Code coverage vs test coverage: code coverage](https://www.honeybadger.io/images/blog/posts/code-test-coverage-python/code-coverage.png)

Although we are getting 100% code coverage here, the tests above are not well-rounded because they don’t test for other scenarios in which the code can be used.

## Popular code coverage tools for Python

Several tools are available for measuring code coverage in Python, each with its own strengths and ideal use cases. Here are some of the most used code coverage tools for Python.

### Coverage.py

Coverage.py is the most widely used and comprehensive tool for measuring code coverage in Python, and it serves as the foundation upon which many other tools are built. It offers detailed line-by-line coverage reports and branch coverage analysis, and it can generate HTML reports with highlighted source code, making it easy to visualize which parts of your code lack test coverage. It can also track coverage across multiple test runs and supports parallel execution, making it suitable for complex projects.

Coverage.py works best for standalone projects using unittest, situations where you need detailed HTML reports, projects requiring fine-grained configuration options, and multi-process applications that need comprehensive coverage tracking.

**What using Coverage.py looks like:**

```shell
# Run tests and measure coverage
coverage run -m unittest discover

# Generate a terminal report
coverage report -m

# Generate an HTML report
coverage html
```

**Configuration example (.coveragerc):**

```ini
[run]
source = myapp
omit =
    */tests/*
    */venv/*
    */__init__.py

[report]
exclude_lines =
    pragma: no cover
    def __repr__
    raise NotImplementedError
```

### pytest-cov

pytest-cov is a pytest plugin that integrates Coverage.py seamlessly with pytest. This plugin offers easy pytest integration with a simpler command-line interface compared to using Coverage.py directly. It can show coverage during test execution with immediate feedback on your testing efforts.

pytest-cov makes sense for projects already using pytest, when you want immediate coverage feedback during development, and for teams that prefer pytest's testing style and ecosystem.

**Basic usage:**

```shell
# Run tests with a coverage report
pytest --cov=myapp tests/

# Generate an HTML report
pytest --cov=myapp --cov-report=html tests/

# Show missing lines
pytest --cov=myapp --cov-report=term-missing tests/

# Fail if coverage falls below a threshold
pytest --cov=myapp --cov-fail-under=80 tests/
```

### nose2

nose2 is the successor to the nose testing framework and includes built-in coverage support through a plugin. It features a built-in coverage plugin that requires no separate installation for basic coverage functionality. It also provides good support for projects migrating from the original nose framework.

nose2 is best suited for legacy projects using nose that need to migrate to a maintained framework.

**Basic usage:**

```shell
# Run with coverage
nose2 --with-coverage

# Specify coverage for a specific package
nose2 --with-coverage --coverage myapp
```

**Configuration (unittest.cfg or .nose2.cfg):**

```ini
[coverage]
coverage = myapp
coverage-report = html
```

For most modern Python projects, pytest-cov is the best choice due to pytest's popularity and the plugin's ease of use. Use Coverage.py directly when you need advanced configuration or aren't using pytest. Consider nose2 only if you're maintaining legacy code that already uses nose.

## Test coverage

Test coverage is a measure of how thoroughly a feature of the code being tested is exercised by tests. I know this can be confusing, so I'll use an analogy first and then some code to make it clear. Returning to our clay example, full test coverage means using the clay to build everything that can possibly be built with it.

Here, the test we wrote above, which gave us 100% code coverage, would score lower under a test coverage evaluation. This is because many different things can be molded with clay, and they should all be considered when writing tests.

### Characteristics of test coverage

1. It helps improve the quality of the code being covered, because the different scenarios in which that section of code can be used are exercised.
2. It makes your test suite more robust.
3. It involves a lot of manual work, since there is no tool that measures test coverage directly. Enumerating the various ways your code can accept and return data can be tedious.
4. It is more prone to errors, since it is done manually.

### Test coverage in Python

Unlike code coverage, where four assertions were enough, test coverage requires more. Using the sample code presented in the previous section, we now have the following assertions:

```python
import unittest

from sample import sum_negative, sum_positive

class SumTests(unittest.TestCase):
    def test_sum(self):
        self.assertEqual(sum_negative(-5, -5), -10)
        self.assertEqual(sum_negative(5, 2), None)
        self.assertEqual(sum_negative(-5, 2), None)  # new
        self.assertEqual(sum_negative(5, -2), None)  # new

    def test_sum_positive_ok(self):
        self.assertEqual(sum_positive(2, 2), 4)
        self.assertEqual(sum_positive(-5, -2), None)
        self.assertEqual(sum_positive(5, -2), None)  # new
        self.assertEqual(sum_positive(-5, 2), None)  # new
        self.assertEqual(sum_positive(0, 0), None)   # new
```

## How to improve code and test coverage

Improving coverage isn't just about writing more tests—it's about writing better, more meaningful tests that catch real bugs. Here are practical techniques to enhance both code and test coverage.

### Boundary testing

Boundary testing focuses on values at the edges of acceptable ranges, where bugs commonly hide. For any function with numeric inputs or ranges, test the minimum, maximum, and values just inside and outside boundaries.

```python
def calculate_grade(score):
    if score < 0 or score > 100:
        return "Invalid"
    elif score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    elif score >= 70:
        return "C"
    elif score >= 60:
        return "D"
    else:
        return "F"

# Effective boundary tests
def test_grade_boundaries():
    # Invalid boundaries
    assert calculate_grade(-1) == "Invalid"
    assert calculate_grade(101) == "Invalid"

    # Valid boundaries
    assert calculate_grade(0) == "F"
    assert calculate_grade(100) == "A"

    # Grade boundaries
    assert calculate_grade(59) == "F"
    assert calculate_grade(60) == "D"
    assert calculate_grade(69) == "D"
    assert calculate_grade(70) == "C"
    assert calculate_grade(89) == "B"
    assert calculate_grade(90) == "A"
```

### Parameterized tests

Parameterized tests allow you to run the same test logic with different inputs, dramatically increasing test coverage without duplicating code. This is especially powerful with pytest's `@pytest.mark.parametrize` decorator.

```python
import pytest

def is_palindrome(text):
    cleaned = ''.join(c.lower() for c in text if c.isalnum())
    return cleaned == cleaned[::-1]

# Without parameterization - repetitive
def test_palindrome_basic():
    assert is_palindrome("racecar") == True
    assert is_palindrome("hello") == False
    assert is_palindrome("A man a plan a canal Panama") == True

# With parameterization - cleaner and more comprehensive
@pytest.mark.parametrize("text,expected", [
    ("racecar", True),
    ("hello", False),
    ("A man a plan a canal Panama", True),
    ("Was it a car or a cat I saw", True),
    ("", True),   # Edge case: empty string
    ("a", True),  # Edge case: single character
    ("ab", False),
    ("Madam", True),
    ("12321", True),
    ("12345", False),
])
def test_palindrome_parametrized(text, expected):
    assert is_palindrome(text) == expected
```

### Mocking external dependencies

Mocking allows you to test code that depends on external services, databases, or APIs without actually calling them. This increases test coverage for code that would otherwise be difficult to test.

```python
import requests
from unittest.mock import Mock, patch

def get_user_data(user_id):
    response = requests.get(f"https://api.example.com/users/{user_id}")
    if response.status_code == 200:
        return response.json()
    else:
        return None

# Without mocking, this test would require an actual API.
# With mocking, we can test both success and failure scenarios.
@patch('requests.get')
def test_get_user_data_success(mock_get):
    # Set up the mock response
    mock_response = Mock()
    mock_response.status_code = 200
    mock_response.json.return_value = {"id": 1, "name": "John"}
    mock_get.return_value = mock_response

    result = get_user_data(1)
    assert result == {"id": 1, "name": "John"}
    mock_get.assert_called_once_with("https://api.example.com/users/1")

@patch('requests.get')
def test_get_user_data_failure(mock_get):
    # Set up the mock for the failure scenario
    mock_response = Mock()
    mock_response.status_code = 404
    mock_get.return_value = mock_response

    result = get_user_data(999)
    assert result is None
```

With this, tests become faster and independent of external services.

### Testing exception handling

Many developers forget to test error conditions, leaving exception handling code untested. Always verify that your code handles errors correctly.

```python
import pytest

def divide_numbers(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        return "Cannot divide by zero"
    except TypeError:
        return "Invalid input types"

def test_divide_numbers():
    # Happy path
    assert divide_numbers(10, 2) == 5

    # Exception scenarios
    assert divide_numbers(10, 0) == "Cannot divide by zero"
    assert divide_numbers("10", 2) == "Invalid input types"
    assert divide_numbers(10, "2") == "Invalid input types"

# Using pytest's exception testing
def divide_strict(a, b):
    if b == 0:
        raise ValueError("Division by zero")
    return a / b

def test_divide_strict():
    assert divide_strict(10, 2) == 5
    with pytest.raises(ValueError, match="Division by zero"):
        divide_strict(10, 0)
```

### Using fixtures for complex setup

Fixtures help you create reusable test data and setup code, making it easier to write comprehensive tests for complex scenarios.

```python
import pytest

class ShoppingCart:
    def __init__(self):
        self.items = []

    def add_item(self, item, price):
        self.items.append({"item": item, "price": price})

    def total(self):
        return sum(item["price"] for item in self.items)

    def apply_discount(self, percent):
        return self.total() * (1 - percent / 100)

@pytest.fixture
def empty_cart():
    return ShoppingCart()

@pytest.fixture
def cart_with_items():
    cart = ShoppingCart()
    cart.add_item("Book", 20)
    cart.add_item("Pen", 5)
    cart.add_item("Notebook", 10)
    return cart

def test_empty_cart_total(empty_cart):
    assert empty_cart.total() == 0

def test_cart_total(cart_with_items):
    assert cart_with_items.total() == 35

def test_discount_application(cart_with_items):
    assert cart_with_items.apply_discount(10) == 31.5
    assert cart_with_items.apply_discount(20) == 28
```

## Code coverage vs test coverage: Which should you focus on?

In this article, we covered what code coverage and test coverage are and how to differentiate between the two when working on a project. One thing to know about coverage percentages is that you should not be aiming for 100% in either metric, because on its own that number doesn't tell you how well-tested your program is.

As I said earlier, code tested with the wrong logic can still reach 100% coverage. Which metric you focus on is up to you. If you want to find the parts of your code that have not been tested at all, code coverage is your best bet. If you care about your tests covering all possible scenarios, focus on test coverage. You can also take a hybrid approach and use both to get the advantages of each.

Like this article? We have plenty more where that came from. Join the [Honeybadger newsletter](https://www.honeybadger.io/newsletter/python/) to learn about more testing concepts in Python.

