"Pretty" Continuous Integration for Python

This is a slightly vain question, but BuildBot's output isn't particularly nice to look at.

For example, compared to…

…and others, BuildBot looks rather archaic.

I'm currently playing with Hudson, but it is very Java-centric (although with this guide, I found it easier to set up than BuildBot, and it produced more info)

Basically: are there any continuous integration systems aimed at Python that produce lots of shiny graphs and the like?


Update: Since this was written, the Jenkins project has replaced Hudson as the community version of the package. The original authors have moved to Jenkins as well, and Jenkins is now a standard package on Ubuntu/Debian, RedHat/Fedora/CentOS, and others. The following update is still essentially correct; only the starting point for doing this with Jenkins is different.

Update: After trying a few alternatives, I think I'll stick with Hudson. Integrity was nice and simple, but quite limited. I think Buildbot is better suited to having numerous build slaves, rather than everything running on a single machine, which is how I was using it.

Setting Hudson up for a Python project was pretty simple:

  • Download Hudson from http://hudson-ci.org/
  • Run it with java -jar hudson.war
  • Open the web interface on the default address of http://localhost:8080
  • Go to Manage Hudson, Plugins, click "Update" or similar
  • Install the Git plugin (I had to set the git path in the Hudson global preferences)
  • Create a new project, enter the repository, SCM polling intervals and so on
  • Install nosetests via easy_install if it isn't already installed
  • In a build step, add nosetests --with-xunit --verbose
  • Check "Publish JUnit test result report" and set "Test report XMLs" to **/nosetests.xml
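Those steps assume there are tests for nose to discover in the repository. As a minimal sketch, a file like the hypothetical test_example.py below is all nosetests needs in order to produce the nosetests.xml report (the file and function names are made up; nose collects anything whose name starts with "test"):

```python
# test_example.py -- a minimal, hypothetical module that nose will discover.
# nose collects files and functions whose names start with "test".

def add(a, b):
    """Toy function under test (illustration only)."""
    return a + b

def test_add_integers():
    assert add(2, 3) == 5

def test_add_strings():
    assert add("foo", "bar") == "foobar"
```

Running nosetests --with-xunit against a checkout containing a file like this produces the nosetests.xml that the "Publish JUnit test result report" step picks up.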

That's all that's required. You can set up email notifications, and the plugins are worth a look. A few I'm currently using for Python projects:

  • SLOCCount plugin to count lines of code (and graph it!) - you need to install sloccount separately
  • Violations to parse the PyLint output (you can set up warning thresholds and graph the number of violations over each build)
  • Cobertura can parse the coverage.py output. nosetests can gather coverage while running your tests using nosetests --with-coverage; running coverage xml afterwards produces the **/coverage.xml the plugin reads
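For context, the JUnit-style report that the "Publish JUnit test result report" step reads is plain XML, so it's easy to inspect by hand. A sketch of doing so (the sample fragment below is hand-written for illustration, not real build output):

```python
import xml.etree.ElementTree as ET

# An illustrative, hand-written fragment in the JUnit-style format that
# nosetests --with-xunit emits and the Hudson/Jenkins plugin consumes.
SAMPLE = """\
<testsuite name="nosetests" tests="2" errors="0" failures="1" skip="0">
  <testcase classname="tests.test_example" name="test_ok" time="0.001"/>
  <testcase classname="tests.test_example" name="test_broken" time="0.002">
    <failure type="AssertionError">assert 1 == 2</failure>
  </testcase>
</testsuite>
"""

suite = ET.fromstring(SAMPLE)
total = int(suite.get("tests"))
failures = int(suite.get("failures"))
# A test case failed if it carries a <failure> child element.
failed_names = [tc.get("name") for tc in suite.findall("testcase")
                if tc.find("failure") is not None]
```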

Don't know if it would do: Bitten is made by the guys who write Trac and is integrated with Trac. Apache Gump is the CI tool used by Apache. It is written in Python.

We've had great success with TeamCity as our CI server, using nose as our test runner. The TeamCity plugin for nosetests gives you a pass/fail count and a readable display of the failed tests (which can be emailed). You can even see the details of the test failures while the build is running.

It does, of course, support running on multiple machines, and it's much simpler to set up and maintain than BuildBot.

You might want to check out nose and the xunit output plugin. You can have it run your unit tests and coverage checks with this command:

nosetests --with-xunit --with-coverage

That's helpful if you want to go the Jenkins route, or if you want to use another CI server that supports JUnit test reports.

Similarly, you can capture the output of pylint using the Violations plugin for Jenkins.
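The Violations plugin just parses text, so it helps to know what pylint's parseable output format looks like. A sketch of picking it apart (the sample lines and the regex are my own illustration, not the plugin's actual parser):

```python
import re

# pylint's "parseable" output format looks roughly like:
#   path/to/module.py:42: [C0111, some_function] Missing docstring
# The lines below are hand-written samples for illustration.
LINE_RE = re.compile(r"^(?P<path>.+?):(?P<line>\d+): \[(?P<code>[A-Z]\d+)")

sample_output = """\
mypackage/core.py:10: [C0111, helper] Missing docstring
mypackage/core.py:57: [W0612, main] Unused variable 'x'
"""

# Keep only lines that match the pattern, as dicts of path/line/code.
violations = [m.groupdict() for m in
              (LINE_RE.match(line) for line in sample_output.splitlines())
              if m]
```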

Buildbot's waterfall page can be made quite pretty.

We've used it quite a bit. It's pretty and integrates very well with Trac, but it's a pain to customize if you have any non-standard workflow. Also, there just aren't as many plugins as there are for the more popular tools. Currently we are evaluating Hudson as a replacement.

Signal is another option. You can find out more about it and watch a video there, too.

I guess this thread is quite old, but here is my take on it with Hudson:

I decided to go with pip and set up a repo (painful to get working, but a nice-looking egg basket) which Hudson auto-uploads to on successful tests. Here is my rough-and-ready script, for use with a Hudson config execute script like /var/lib/hudson/venv/main/bin/hudson_script.py -w $WORKSPACE -p my.package -v $BUILD_NUMBER; just put **/coverage.xml, pylint.txt and nosetests.xml in the config bits:

#!/var/lib/hudson/venv/main/bin/python
import os
import re
import subprocess
import logging
import optparse

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(message)s')

#venvDir = "/var/lib/hudson/venv/main/bin/"

UPLOAD_REPO = "http://ldndev01:3442"


def call_command(command, cwd, ignore_error_code=False):
    try:
        logging.info("Running: %s" % command)
        status = subprocess.call(command, cwd=cwd, shell=True)
        if not ignore_error_code and status != 0:
            raise Exception("Last command failed")

        return status

    except:
        logging.exception("Could not run command %s" % command)
        raise


def main():
    usage = "usage: %prog [options]"
    parser = optparse.OptionParser(usage)
    parser.add_option("-w", "--workspace", dest="workspace",
                      help="workspace folder for the job")
    parser.add_option("-p", "--package", dest="package",
                      help="the package name i.e., back_office.reconciler")
    parser.add_option("-v", "--build_number", dest="build_number",
                      help="the build number, which will get put at the end of the package version")
    options, args = parser.parse_args()

    if not options.workspace or not options.package:
        raise Exception("Need both args, do --help for info")

    venvDir = options.package + "_venv/"

    #find out if venv is there
    if not os.path.exists(venvDir):
        #make it
        call_command("virtualenv %s --no-site-packages" % venvDir,
                     options.workspace)

    #install the venv/make sure its there plus install the local package
    call_command("%sbin/pip install -e ./ --extra-index %s" % (venvDir, UPLOAD_REPO),
                 options.workspace)

    #make sure pylint, nose and coverage are installed
    call_command("%sbin/pip install nose pylint coverage epydoc" % venvDir,
                 options.workspace)

    #make sure we have an __init__.py
    #this shouldn't be needed if the packages are set up correctly
    #modules = options.package.split(".")
    #if len(modules) > 1:
    #    call_command("touch '%s/__init__.py'" % modules[0],
    #                 options.workspace)

    #do the nosetests
    test_status = call_command("%sbin/nosetests %s --with-xunit --with-coverage --cover-package %s --cover-erase" % (venvDir,
                                                                                                                    options.package.replace(".", "/"),
                                                                                                                    options.package),
                               options.workspace, True)
    #produce coverage report, -i to ignore weird missing file errors
    call_command("%sbin/coverage xml -i" % venvDir,
                 options.workspace)
    #move it so that the code coverage plugin can find it
    call_command("mv coverage.xml %s" % (options.package.replace(".", "/")),
                 options.workspace)
    #run pylint
    call_command("%sbin/pylint --rcfile ~/pylint.rc -f parseable %s > pylint.txt" % (venvDir,
                                                                                    options.package),
                 options.workspace, True)

    #remove old dists so we only have the newest at the end
    call_command("rm -rfv %s" % (options.workspace + "/dist"),
                 options.workspace)

    #if the build passes upload the result to the egg_basket
    if test_status == 0:
        logging.info("Success - uploading egg")
        upload_bit = "upload -r %s/upload" % UPLOAD_REPO
    else:
        logging.info("Failure - not uploading egg")
        upload_bit = ""

    #create egg
    call_command("%sbin/python setup.py egg_info --tag-build=.0.%s --tag-svn-revision --tag-date sdist %s" % (venvDir,
                                                                                                             options.build_number,
                                                                                                             upload_bit),
                 options.workspace)

    call_command("%sbin/epydoc --html --graph all %s" % (venvDir, options.package),
                 options.workspace)

    logging.info("Complete")


if __name__ == "__main__":
    main()

When it comes to deploying things, you can do something like:

pip -E /location/of/my/venv/ install my_package==X.Y.Z --extra-index http://my_repo

And then people can develop things using:

pip -E /location/of/my/venv/ install -e ./ --extra-index http://my_repo

This assumes you have a repo structure per package with a setup.py and all the dependencies set up, so you can just check out the trunk and run this stuff on it.

I hope this helps someone.

--- Update ---

I've added epydoc, which fits in really nicely with Hudson.

Note that pip doesn't support the -E flag properly these days, so you have to create your venv separately

Atlassian's Bamboo is also definitely worth checking out. The entire Atlassian suite (JIRA, Confluence, FishEye, etc.) is pretty sweet.

Another one: ShiningPanda is a hosted CI tool for Python.

If you're considering a hosted CI solution and doing open source, you should look into Travis CI as well; it has very good integration with GitHub. While it started out as a Ruby tool, they have added Python support.

I'd consider CircleCI: it has good Python support, and the output is very pretty.

Check out Rutor.com. As this article explains, it uses Docker for every build, so you can configure whatever you like in your Docker image, including Python.

Continuum's Binstar is now able to trigger builds from GitHub and can compile for Linux, OS X and Windows (32/64-bit). The neat thing is that it really allows you to closely couple distribution and continuous integration; that's crossing the t's and dotting the i's of integration. The site, workflow and tools are really polished, and AFAIK conda is the most robust and pythonic way of distributing complex Python modules, where you need to wrap and distribute C/C++/Fortran libraries as well.

A small disclaimer: I've actually had to build a solution like this for a client who wanted a way to automatically test and deploy any code on a git push, plus manage issue tickets via git notes. This also led to my work on the AIMS project.

One can easily set up a bare-node system with a build user and manage the builds through make(1), expect(1), crontab(1)/systemd.unit(5) and incrontab(1). One could even go a step further and use ansible and celery for distributed builds with a gridfs/nfs file store.
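The crontab-driven approach boils down to "poll the repo, build when HEAD moves". A hypothetical skeleton of that poller (the repo path, helper names and cron line are all made up, with the actual build delegated to make(1)):

```python
import subprocess

def current_commit(repo_dir):
    """Ask git for the commit id of HEAD (requires git and a checkout)."""
    return subprocess.check_output(
        ["git", "rev-parse", "HEAD"], cwd=repo_dir).strip()

def needs_build(last_built, head):
    """A build is due whenever HEAD has moved past the last built commit."""
    return head is not None and head != last_built

# A crontab entry such as:
#   */5 * * * * /usr/bin/python /home/build/poll_and_build.py
# would call current_commit(), compare it against a recorded value with
# needs_build(), and invoke make(1) in the checkout when a build is due.
```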

That said, I wouldn't expect anyone other than a graybeard UNIX guy or a principal-level engineer/architect to actually go that far. A build server is nothing more than a way to arbitrarily execute scripted tasks in an automated fashion, so it makes for a nice idea and a potential learning experience.