All notes


from django.contrib.auth.models import User
from django.test import Client

User.objects.create_user("test1", "test1@example.com", "test1")
# <User: test1>

c = Client()
c.login(username='test1', password='test1')
# True


djangoProject docs: topics - testing.

Writing tests

The preferred way to write tests in Django is using the unittest module built in to the Python standard library.

django.test.TestCase is a subclass of unittest.TestCase that runs each test inside a transaction to provide isolation:

from django.test import TestCase
from myapp.models import Animal

class AnimalTestCase(TestCase):
    def setUp(self):
        Animal.objects.create(name="lion", sound="roar")
        Animal.objects.create(name="cat", sound="meow")

    def test_animals_can_speak(self):
        """Animals that can speak are correctly identified"""
        lion = Animal.objects.get(name="lion")
        cat = Animal.objects.get(name="cat")
        self.assertEqual(lion.speak(), 'The lion says "roar"')
        self.assertEqual(cat.speak(), 'The cat says "meow"')
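The test above assumes an Animal model with a speak() method that the notes never show. A plain-Python sketch of the assumed logic (in the real app this would be a django.db.models.Model with CharFields, not a plain class):

```python
# Plain-Python sketch of the Animal model's assumed speak() logic.
# The real thing would subclass django.db.models.Model.
class Animal:
    def __init__(self, name, sound):
        self.name = name
        self.sound = sound

    def speak(self):
        # Matches the strings the test case asserts against.
        return f'The {self.name} says "{self.sound}"'
```

With this, Animal("lion", "roar").speak() yields 'The lion says "roar"', satisfying the assertions in test_animals_can_speak.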

When you run your tests, the default behavior of the test utility is to find all the test cases (that is, subclasses of unittest.TestCase) in any file whose name begins with test, automatically build a test suite out of those test cases, and run that suite.

Where should the tests live? The default startapp template creates a tests.py file in the new application. This might be fine if you only have a few tests, but as your test suite grows you’ll likely want to restructure it into a tests package so you can split your tests into different submodules such as test_models.py, test_views.py, test_forms.py, etc.

django.test.TestCase vs unittest.TestCase

If your tests rely on database access such as creating or querying models, be sure to create your test classes as subclasses of django.test.TestCase rather than unittest.TestCase.

Using unittest.TestCase avoids the cost of running each test in a transaction and flushing the database, but if your tests interact with the database their behavior will vary based on the order that the test runner executes them. This can lead to unit tests that pass when run in isolation but fail when run in a suite.

Running tests

./manage.py test

You can specify particular tests to run by supplying any number of “test labels” to ./manage.py test.

# Run all the tests in the animals.tests module
./manage.py test animals.tests

# Run all the tests found within the 'animals' package
./manage.py test animals

# Run just one test case
./manage.py test animals.tests.AnimalTestCase

# Run just one test method
./manage.py test animals.tests.AnimalTestCase.test_animals_can_speak

# You can also provide a path to a directory to discover tests below that directory:
$ ./manage.py test animals/

# You can specify a custom filename pattern match using the -p (or --pattern) option, if your test files are named differently from the test*.py pattern:
$ ./manage.py test --pattern="tests_*.py"

Test discovery

pythonDoc: unittest discovery.

Test discovery is based on the unittest module’s built-in test discovery. All of the test files must be modules or packages (including namespace packages) importable from the top-level directory of the project, named as "test*.py".

# -s, --start-directory directory
# Directory to start discovery (. default)
# -p, --pattern pattern
# Pattern to match test files (test*.py default)
# -t, --top-level-directory directory
# Top level directory of project (defaults to start directory)

# The -s, -p, and -t options can be passed in as positional arguments in that order:
python -m unittest discover -s project_directory -p "*"
python -m unittest discover project_directory "*"


djangoDoc: initial data.

You’ll store this data in a fixtures directory inside your app.

In YAML:

- model: myapp.person
  pk: 1
  fields:
    first_name: John
    last_name: Lennon
- model: myapp.person
  pk: 2
  fields:
    first_name: Paul
    last_name: McCartney

The same fixture in JSON:

[
  {
    "model": "myapp.person",
    "pk": 1,
    "fields": {
      "first_name": "John",
      "last_name": "Lennon"
    }
  },
  {
    "model": "myapp.person",
    "pk": 2,
    "fields": {
      "first_name": "Paul",
      "last_name": "McCartney"
    }
  }
]

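Whatever the serialization format, every fixture entry carries the same three keys – the model label, a primary key, and the field values – which a quick stdlib check of the JSON form makes concrete:

```python
import json

# The JSON fixture from above, inlined for the check.
raw = """[
  {"model": "myapp.person", "pk": 1,
   "fields": {"first_name": "John", "last_name": "Lennon"}},
  {"model": "myapp.person", "pk": 2,
   "fields": {"first_name": "Paul", "last_name": "McCartney"}}
]"""

people = json.loads(raw)
# Every entry names the model, a primary key, and the field values.
for entry in people:
    assert set(entry) == {"model", "pk", "fields"}
```

In a test you would load such a file by naming it in the TestCase's fixtures attribute, e.g. fixtures = ["people.json"] (the filename here is an assumption), with the file living in the app's fixtures directory.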

If you press Ctrl-C while the tests are running, the test runner will wait for the currently running test to complete and then exit gracefully. During a graceful exit the test runner will output details of any test failures, report on how many tests were run and how many errors and failures were encountered, and destroy any test databases as usual.

Pressing Ctrl-C can be very useful if you forget to pass the --failfast option, notice that some tests are unexpectedly failing and want to get details on the failures without waiting for the full test run to complete.

If you do not want to wait for the currently running test to finish, you can press Ctrl-C a second time and the test run will halt immediately, but not gracefully. No details of the tests run before the interruption will be reported, and any test databases created by the run will not be destroyed.

Test with warnings enabled

It’s a good idea to run your tests with Python warnings enabled: python -Wall manage.py test.

The -Wall flag tells Python to display deprecation warnings. Django, like many other Python libraries, uses these warnings to flag when features are going away. It also might flag areas in your code that aren’t strictly wrong but could benefit from a better implementation.

The test database

Tests that require a database (namely, model tests) will not use your “real” (production) database. Separate, blank databases are created for the tests. Regardless of whether the tests pass or fail, the test databases are destroyed when all the tests have been executed.

You can prevent the test databases from being destroyed by using the test --keepdb option. If the database does not exist, it will first be created. Any migrations will also be applied in order to keep it up to date.

The default test database names are created by prepending test_ to the value of each NAME in DATABASES. When using SQLite, the tests will use an in-memory database by default (i.e., the database will be created in memory, bypassing the filesystem entirely!). The TEST dictionary in DATABASES offers a number of settings to configure your test database. For example, if you want to use a different database name, specify NAME in the TEST dictionary for any given database in DATABASES.
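A sketch of what that looks like in settings (the database names here are invented for the example):

```python
# settings.py (sketch): the TEST dict customizes the test database.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydatabase",
        "USER": "mydatabaseuser",
        "TEST": {
            "NAME": "mytestdatabase",  # used instead of the default "test_mydatabase"
        },
    },
}
```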

On PostgreSQL, USER will also need read access to the built-in postgres database.

Aside from using a separate database, the test runner will otherwise use all of the same database settings you have in your settings file: ENGINE, USER, HOST, etc. The test database is created by the user specified by USER, so you’ll need to make sure that the given user account has sufficient privileges to create a new database on the system.

For fine-grained control over the character encoding of your test database, use the CHARSET TEST option. If you’re using MySQL, you can also use the COLLATION option to control the particular collation used by the test database.

If using an SQLite in-memory database with Python 3.4+ and SQLite 3.7.13+, shared cache will be enabled, so you can write tests with the ability to share the database between threads.

Finding data from your production database when running tests?

If your code attempts to access the database when its modules are compiled, this will occur before the test database is set up, with potentially unexpected results. For example, if you have a database query in module-level code and a real database exists, production data could pollute your tests. It is a bad idea to have such import-time database queries in your code anyway - rewrite your code so that it doesn’t do this.

This also applies to customized implementations of ready().

Order in which tests are executed

In order to guarantee that all TestCase code starts with a clean database, the Django test runner reorders tests in the following way:

  1. All TestCase subclasses are run first.
  2. Then, all other Django-based tests (test cases based on SimpleTestCase, including TransactionTestCase) are run with no particular ordering guaranteed nor enforced among them.
  3. Then any other unittest.TestCase tests (including doctests) that may alter the database without restoring it to its original state are run.

The new ordering of tests may reveal unexpected dependencies on test case ordering. This is the case with doctests that relied on state left in the database by a given TransactionTestCase test; they must be updated to be able to run independently.

You may reverse the execution order inside groups using the test --reverse option. This can help with ensuring your tests are independent from each other.

Rollback emulation

Any initial data loaded in migrations will only be available in TestCase tests and not in TransactionTestCase tests, and additionally only on backends where transactions are supported (the most important exception being MyISAM).

Django can reload that data for you on a per-testcase basis by setting the serialized_rollback option to True in the body of the TestCase or TransactionTestCase, but note that this will slow down that test suite by approximately 3x. Third-party apps or those developing against MyISAM will need to set this; in general, however, you should be developing your own projects against a transactional database and be using TestCase for most tests, and thus not need this setting.

The initial serialization is usually very quick, but if you wish to exclude some apps from this process (and speed up test runs slightly), you may add those apps to TEST_NON_SERIALIZED_APPS.
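In settings terms that exclusion is just a list of app labels (the app chosen here is illustrative):

```python
# settings.py (sketch): apps whose data is skipped during the initial
# serialization used for rollback emulation.
TEST_NON_SERIALIZED_APPS = ["django.contrib.contenttypes"]
```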

To prevent serialized data from being loaded twice, setting serialized_rollback=True disables the post_migrate signal when flushing the test database.

Other test conditions

Regardless of the value of the DEBUG setting in your configuration file, all Django tests run with DEBUG=False. This is to ensure that the observed output of your code matches what will be seen in a production setting.

Caches are not cleared after each test, and running “manage.py test fooapp” can insert data from the tests into the cache of a live system if you run your tests in production because, unlike databases, a separate “test cache” is not used. This behavior may change in the future.

The test output

You can control the level of detail of the test run output with the verbosity option on the command line.

The return code for the test-runner script is 1 for any number of failed and erroneous tests. If all the tests pass, the return code is 0.

Speeding up the tests

Running tests in parallel

As long as your tests are properly isolated, you can run them in parallel to gain a speed up on multi-core hardware. See test --parallel.

Password hashing

The default password hasher is rather slow by design. If you’re authenticating many users in your tests, you may want to use a custom settings file and set the PASSWORD_HASHERS setting to a faster hashing algorithm: PASSWORD_HASHERS = [ 'django.contrib.auth.hashers.MD5PasswordHasher', ].

Don’t forget to also include in PASSWORD_HASHERS any hashing algorithm used in fixtures, if any.
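One common arrangement is a small test-settings module layered on the project's base settings, selected with --settings (the module name is an assumption):

```python
# test_settings.py (sketch) – run with: ./manage.py test --settings=test_settings
# It would normally begin with `from .settings import *` to inherit the
# project's base settings (module path is an assumption).

PASSWORD_HASHERS = [
    "django.contrib.auth.hashers.MD5PasswordHasher",  # fast, test-only hasher
]
```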

Testing tools

djangoProject docs: testing tools.

The test client

The test client is a Python class that acts as a dummy Web browser, allowing you to test your views and interact with your Django-powered application programmatically.

Some of the things you can do with the test client are:

- Simulate GET and POST requests on a URL and observe the response, from low-level HTTP (result headers and status codes) to page content.
- See the chain of redirects (if any) and check the URL and status code at each step.
- Test that a given request is rendered by a given Django template, with a template context that contains certain values.

Note that the test client is not intended to be a replacement for Selenium or other "in-browser" frameworks.

# In the Python interactive command line:

from django.test import Client
c = Client()
response = c.post('/login/', {'username': 'john', 'password': 'smith'})
response.status_code
# 200
response = c.get('/customer/details/')
response.content
# b'<!DOCTYPE html...'

The test client does not require the Web server to be running. That’s because it avoids the overhead of HTTP and deals directly with the Django framework. This helps make the unit tests run quickly.

The test client is not capable of retrieving Web pages that are not powered by your Django project. If you need to retrieve other Web pages, use a Python standard library module such as urllib.

When retrieving pages, remember to specify the path of the URL, not the whole domain. To resolve URLs, the test client uses whatever URLconf is pointed-to by your ROOT_URLCONF setting.

# This is correct:
c.get('/login/')

# This is incorrect:
c.get('https://www.example.com/login/')

Although the above example would work in the Python interactive interpreter, some of the test client’s functionality, notably the template-related functionality, is only available while tests are running.

The reason for this is that Django’s test runner performs a bit of black magic in order to determine which template was loaded by a given view. This black magic (essentially a patching of Django’s template system in memory) only happens during test running.

By default, the test client will disable any CSRF checks performed by your site.

If, for some reason, you want the test client to perform CSRF checks, you can create an instance of the test client that enforces CSRF checks.

from django.test import Client
csrf_client = Client(enforce_csrf_checks=True)


Client and GET

class Client(enforce_csrf_checks=False, **defaults)

get(path, data=None, follow=False, secure=False, **extra)

# Examples:
c = Client(HTTP_USER_AGENT='Mozilla/5.0')

c = Client()
c.get('/customers/details/', {'name': 'fred', 'age': 7})
# ...will be requested as /customers/details/?name=fred&age=7

# Could also be posed as:
c.get('/customers/details/?name=fred&age=7')

# If you provide a URL with both encoded GET data and a data argument, the data argument will take precedence.

c.get('/customers/details/', {'name': 'fred', 'age': 7},
      HTTP_X_REQUESTED_WITH='XMLHttpRequest')
# Will send the HTTP header HTTP_X_REQUESTED_WITH to the details view, which is a good way to test code paths that use the django.http.HttpRequest.is_ajax() method.

The values from the extra keyword arguments passed to get(), post(), etc. have precedence over the defaults passed to the class constructor.

The headers sent via **extra should follow the CGI specification. For example, emulating a different “Host” header as sent in the HTTP request from the browser to the server should be passed as HTTP_HOST.

If you set follow to True, the client will follow any redirects, and a redirect_chain attribute will be set in the response object containing tuples of the intermediate URLs and status codes.

If you had a URL /redirect_me/ that redirected to /next/, that redirected to /final/, this is what you’d see:

response = c.get('/redirect_me/', follow=True)
response.redirect_chain
# [('http://testserver/next/', 302), ('http://testserver/final/', 302)]

If you set secure to True the client will emulate an HTTPS request.


post(path, data=None, content_type=MULTIPART_CONTENT, follow=False, secure=False, **extra)

If you don’t provide a value for content_type, the values in data will be transmitted with a content type of multipart/form-data.

To submit multiple values for a field, provide them as a list or tuple. For example, three selected values for the field named choices: {'choices': ('a', 'b', 'd')}.

Submitting files is a special case. To POST a file, you need only provide the file field name as a key, and a file handle to the file you wish to upload as a value.

c = Client()
with open('wishlist.doc', 'rb') as fp:
    c.post('/customers/wishes/', {'name': 'fred', 'attachment': fp})

The name attachment here is the field name.

You may also provide any file-like object (e.g., StringIO or BytesIO) as a file handle.

If the URL you request with a POST contains encoded parameters, these parameters will be made available in the request.GET data. For example, if you were to make the request:

c.post('/login/?visitor=true', {'name': 'fred', 'passwd': 'secret'})

The view handling this request could interrogate request.POST to retrieve the username and password, and could interrogate request.GET to determine if the user was a visitor.
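What lands where can be previewed with the stdlib alone – the query string feeds request.GET while the body feeds request.POST:

```python
from urllib.parse import parse_qs, urlsplit

url = "/login/?visitor=true"
query = parse_qs(urlsplit(url).query)        # what ends up in request.GET
body = {"name": "fred", "passwd": "secret"}  # what ends up in request.POST

print(query["visitor"])  # prints ['true']
```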




login(**credentials)

c = Client()
c.login(username='fred', password='secret')
# Now you can access a view that's only available to logged-in users.

login() returns True if the credentials were accepted and login was successful.

You’ll need to remember to create user accounts before you can use this method. As we explained above, the test runner is executed using a test database, which contains no users by default.

You’ll need to create users as part of the test suite – either manually (using the Django model API) or with a test fixture.

Remember that if you want your test user to have a password, you can’t set the user’s password by setting the password attribute directly – you must use the set_password() function to store a correctly hashed password. Alternatively, you can use the create_user() helper method to create a new user with a correctly hashed password.


force_login(user, backend=None)

Use this method instead of login() when a test requires a user be logged in and the details of how a user logged in aren’t important.

Unlike login(), this method skips the authentication and verification steps: inactive users (is_active=False) are permitted to log in and the user’s credentials don’t need to be provided.

The user will have its backend attribute set to the value of the backend argument (which should be a dotted Python path string), or to settings.AUTHENTICATION_BACKENDS[0] if a value isn’t provided. The authenticate() function called by login() normally annotates the user like this.

This method is faster than login() since the expensive password hashing algorithms are bypassed.



logout()

The logout() method can be used to simulate the effect of a user logging out of your site.

After you call this method, the test client will have all the cookies and session data cleared to defaults. Subsequent requests will appear to come from an AnonymousUser.

Other HTTP verbs

head(path, data=None, follow=False, secure=False, **extra)

options(path, data='', content_type='application/octet-stream', follow=False, secure=False, **extra)

put(path, data='', content_type='application/octet-stream', follow=False, secure=False, **extra)

patch(path, data='', content_type='application/octet-stream', follow=False, secure=False, **extra)

delete(path, data='', content_type='application/octet-stream', follow=False, secure=False, **extra)

trace(path, follow=False, secure=False, **extra)
# data is not provided as a keyword parameter in order to comply with RFC 7231#section-4.3.8, which mandates that TRACE requests must not have a body.


Debug in unittest

SO: how to debug failing tests.

python -m pdb manage.py test yourapp
# (Pdb) b channel/
# Breakpoint 1 at c:\users\me\proj\channel\
# (Pdb) c