This is a guest article by Patrick Altman, a dedicated open source contributor and vice president of engineering at Eldarion.
In this post, we will examine a boilerplate layout for Django apps, designed to make it easy to develop open-source applications quickly while following generally accepted conventions.
django-stripe-payments will serve as the basis for the layout (or pattern) we will be looking at. This layout has proven effective for well over one hundred distinct open source projects released by Eldarion and Pinax.
As you read this article, bear in mind that the pattern described here may not fit your particular project. Even so, a few things are fairly typical of Python projects, and to some extent of open source projects in general, and those are what we will concentrate on.
A well-designed project layout makes it easier for a new user to explore your source code. It also pays to follow the various conventions that have become generally accepted, and a well-planned layout helps the packaging process.
Python packages, such as reusable Django apps, are laid out somewhat differently from Django projects (sites), and those differences show up in the directory structure. Using our example project, we want to highlight certain features of the layout and the files found at the project’s top level.
Files containing metadata about your project, such as LICENSE, CONTRIBUTING.md, and README.rst, as well as any scripts for running tests and packaging, belong in the root directory of your project. Also at the root should be a folder named after the Python package you are distributing; this folder holds the Python source code, and in our django-stripe-payments example it is payments. Finally, keep your documentation in a Sphinx-based project inside a folder named docs.
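For django-stripe-payments, that top level looks roughly like this (abridged; a real checkout contains more files):

```
django-stripe-payments/
    CONTRIBUTING.md
    LICENSE
    README.rst
    setup.py
    docs/          # Sphinx-based documentation project
    payments/      # the Python package itself
```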
If you want your software to be used by the widest possible audience, it is usually wisest to license it under one of the more permissive licenses, such as MIT or BSD. Wider use means greater exposure to a variety of real-world situations, which in turn increases the chance of feedback and collaboration via pull requests. Put the license text in a file named LICENSE in the root of your project.
Every project should have a file named README.rst in its root directory. Its purpose is to introduce the project concisely, define the problem it solves, and provide a quick getting-started guide.
If you name it README.rst and place it at the root of your repository, GitHub will render it on the main project page, so prospective users can quickly scan it and get a sense of how your software might help them.
Use reStructuredText rather than Markdown for the readme, so that it renders well on PyPI if you choose to publish your project there.
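A minimal README.rst skeleton along these lines (the section names are only a suggestion) might be:

```rst
django-stripe-payments
======================

A payments Django app for Stripe.

Getting Started
---------------

...installation steps and a short usage example...
```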
A CONTRIBUTING.md file documenting your coding style guide, procedures, and guidelines is helpful to anyone interested in contributing code via pull requests, and it lowers the barrier for people who want to contribute. First-time contributors may be anxious about making a mistake or deviating from convention, and the more precise this document is, the more they can verify their work without asking questions they might be too timid to ask.
Good packaging helps the distribution of your project. By writing a script called setup.py, you can take advantage of Python’s packaging facilities to build your project and publish it on the Python Package Index (PyPI).
This is a really simple script. For instance, the essential portion of the script for django-stripe-payments is as follows:
```python
# setup.py (abridged) -- `read` and `find_package_data` are helper
# functions defined elsewhere in the file
from setuptools import setup, find_packages

PACKAGE = "payments"
NAME = "django-stripe-payments"
DESCRIPTION = "a payments Django app for Stripe"
AUTHOR = "Patrick Altman"
AUTHOR_EMAIL = "firstname.lastname@example.org"
URL = "https://github.com/eldarion/django-stripe-payments"
VERSION = __import__(PACKAGE).__version__

setup(
    name=NAME,
    version=VERSION,
    description=DESCRIPTION,
    long_description=read("README.rst"),
    author=AUTHOR,
    author_email=AUTHOR_EMAIL,
    license="BSD",
    url=URL,
    packages=find_packages(exclude=["tests.*", "tests"]),
    package_data=find_package_data(PACKAGE, only_in_packages=False),
    classifiers=[
        "Development Status :: 3 - Alpha",
        "Environment :: Web Environment",
        "Intended Audience :: Developers",
        "License :: OSI Approved :: BSD License",
        "Operating System :: OS Independent",
        "Programming Language :: Python",
        "Framework :: Django",
    ],
    install_requires=[
        "django-jsonfield>=0.8",
        "stripe>=1.7.9",
        "django>=1.4",
        "pytz",
    ],
    zip_safe=False,
)
```
There are a few things going on here. Remember when we talked about formatting the README.rst file with reStructuredText?
That is because, as you can see for long_description, we use the contents of that file to fill the landing page on PyPI, and reStructuredText is the markup language used on that page. The classifiers are a collection of metadata that helps place your project in the appropriate categories on PyPI.
Last but not least, the install_requires argument ensures that the dependencies you list are either installed along with your package or already present.
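The setup.py script above calls a small read() helper (not shown) to load README.rst; a minimal sketch of such a helper, which may differ from the one actually used in django-stripe-payments, could look like this:

```python
import codecs


def read(fname):
    # Return the contents of a file (such as README.rst) so it can be
    # passed as long_description; assumes setup.py is run from the
    # project root, which is the common case.
    with codecs.open(fname, "r", encoding="utf-8") as f:
        return f.read()
```

Reading the file explicitly with a UTF-8 codec avoids surprises when the readme contains non-ASCII characters.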
If your project is not hosted on GitHub, you are missing out. Other web-based distributed version control system (DVCS) sites offer free hosting for open source projects, but none has done more for open source than GitHub.
Handling Pull Requests
Building a fantastic open source project means making it bigger than yourself, which requires growing not just the user base but also the contributor base. GitHub, and git in general, have significantly changed how that is done.
Being timely when handling pull requests is one of the keys to growing your contributor base. That does not mean you must accept every contribution, but it does mean keeping an open mind and treating requests with the same respect you would want if you were contributing to another project.
Do not simply reject requests you do not want to accept; take the time to explain why, and, if feasible, describe how they could be modified to become acceptable. If the needed improvements are minor, or if you can otherwise improve on the contribution yourself, go ahead and accept it and make the changes afterwards. There is a fine line between requesting changes and making those changes yourself.
The overarching goal here is to cultivate an environment of gratitude and friendliness. Keep in mind that your contributors are donating their time and effort to help your project grow.
Versioning, Branching, and Releases
Familiarise yourself with Semantic Versioning and follow its guidelines when cutting releases.
When making major releases, always spell out exactly which changes are backward incompatible. Things are much simpler if you keep a change log file and update it as you commit between releases. This file might be nothing more than a CHANGELOG at the root of your project, or part of your documentation in a file such as docs/changelog.rst. With it, putting together a quality set of release notes takes relatively little work.
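The Semantic Versioning rules can be sketched as a small, purely illustrative helper (not part of any published package):

```python
def bump(version, change):
    # Return the next version string under Semantic Versioning.
    # `change` is "breaking" (bump MAJOR), "feature" (bump MINOR),
    # or "bugfix" (bump PATCH).
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "breaking":
        return "%d.0.0" % (major + 1)
    if change == "feature":
        return "%d.%d.0" % (major, minor + 1)
    if change == "bugfix":
        return "%d.%d.%d" % (major, minor, patch + 1)
    raise ValueError("unknown change type: %r" % change)
```

For example, a backward-incompatible change to a 1.4.2 release would produce 2.0.0, signalling to users that they should read the release notes before upgrading.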
Keep master clean. There is never a guarantee that users won’t run the code on master rather than one of the released packages. Do your work in feature branches, and merge them only after everything has been tested and is reasonably stable.
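A typical feature-branch workflow might look like this (the branch name here is hypothetical):

```shell
# Create a feature branch, work on it, and merge it back into master
# only once the work is tested and reasonably stable.
git checkout -b feature/stripe-webhooks
# ... commit work and run the test suite on the branch ...
git checkout master
git merge --no-ff feature/stripe-webhooks
```

Using --no-ff keeps an explicit merge commit, so the history shows where each feature landed.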
No project is complete without at least some documentation. If your project is well documented, users will not need to read the source code to figure out how to use it. If you care about your users, it will show in the quality of your documentation.
With Read the Docs, you can have your documentation automatically built and hosted at no cost. It conveniently rebuilds on every change to the master branch.
To use Read the Docs, you create a Sphinx-based documentation project in the docs folder at the root of your project. This is really straightforward: the project consists of a Makefile and a conf.py file, plus a set of files written in reStructuredText. You can do this manually, by copying the Makefile and conf.py from an earlier project and adjusting the settings, or automatically with sphinx-quickstart:
```shell
$ pip install Sphinx
$ sphinx-quickstart
```
Automating Code Quality
A variety of tools can help you maintain a high level of quality across your projects. Linting, testing, and test coverage are all important for keeping quality high over the life of a project.
To begin, lint with something like pylint or pep8. Each has its own advantages and disadvantages, which this article won’t cover in detail. The main idea is that a consistent style is the first step towards a high-quality product, and, in addition to enforcing style, linters can uncover certain small bugs very quickly.
In the django-stripe-payments project, for instance, we have a script that combines two distinct lint tools, with exceptions tailored to our specific needs:
```shell
# lint.sh
pylint --rcfile=.pylintrc payments && pep8 --ignore=E501 payments
```
For examples of those exceptions, look at the .pylintrc file in the django-stripe-payments repository. Pylint tends to be somewhat aggressive and noisy about things that aren’t really problems in themselves. Only you can decide how to tune your own .pylintrc, but I strongly suggest commenting the file so you can understand later why you excluded certain rules.
Setting up a solid testing infrastructure is essential to demonstrating that your code works. Writing some of your tests first can also help you think through your API. Even if you write the tests last, the process of writing them will expose weak spots in your API design and other usability concerns that you can fix before they are reported to you.
Your testing infrastructure should include coverage.py to monitor each module’s test coverage. This tool cannot tell you whether your code is being meaningfully tested; only you can do that. It will, however, help you find code that is never executed at all, which tells you what code is definitely not being tested.
Once you have scripts for linting, testing, and coverage in your project, you can set up automation so that these tools run on every push, in one or more environments (e.g. different versions of Python, different versions of Django, or both in a test matrix).
With a Travis CI integration set up, tests and code inspections run automatically. Coveralls can be added to this configuration to record historical test coverage whenever the Travis builds run. Both offer badges you can embed in your project’s README to show the latest build status and test coverage.
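A minimal .travis.yml along these lines might look like the following sketch (the Python and Django versions, and the runtests.py script, are illustrative assumptions, not the actual configuration of django-stripe-payments):

```yaml
language: python
python:
  - "2.7"
env:
  - DJANGO=1.4
  - DJANGO=1.5
install:
  - pip install Django==$DJANGO coverage coveralls
script:
  - ./lint.sh
  - coverage run runtests.py
after_success:
  - coveralls
```

The env list gives a small test matrix over Django versions, and the after_success step reports coverage to Coveralls only when the build passes.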
Collaboration vs Cooperation
At DjangoCon 2011, David Eaves delivered a keynote that brilliantly put into words the idea that, although the definitions of collaboration and cooperation are similar, there is a nuanced difference:
To my mind, the key difference between cooperation and collaboration is that the latter calls on all parties engaged in a project to work together to find solutions to issues.
He subsequently devoted a whole article to how GitHub was the primary driver in the evolution of how open source software is developed, with a particular focus on community management. As Eaves writes in “How GitHub Saved OpenSource” (see Resources):
“In my opinion, open source projects are at their most successful when contributors are able to participate cooperatively at a low transaction cost, and when high transaction cost collaboration is kept to a minimum. The brilliance of open source lies in the fact that it does not need a group to discuss every problem and work out solutions jointly; quite the contrary, in fact.”
He goes on to discuss the benefits of forking: it lowers otherwise prohibitive costs by enabling low-cost cooperation among individuals who can move projects forward without needing permission. Forking defers the point at which collaboration is required until solutions are ready to be merged in, which makes much faster and more dynamic experimentation possible.
By following the conventions and patterns described in this article, you can shape your project along similar lines, increasing low-cost cooperation while minimising expensive collaboration throughout the writing, maintenance, and support of your project.
This article covers a lot of ground with relatively few concrete examples. The most effective way to study these topics in depth is to examine the GitHub repositories of projects that execute these patterns successfully.
Pinax provides its own boilerplate, which you can use to rapidly produce a project that follows the conventions and patterns described in this article.
Remember that even if you use our boilerplate, or another one, you will still need to find a way to accomplish these things that suits you and your project. All of this is work on top of writing the actual code, but it all helps build a community of contributors.