How do you build a database from source control?

There has been some discussion on the SO community wiki about whether database objects should be version controlled at all.

For my team this is a contentious point, particularly because developers and DBAs often have different goals, approaches, and concerns when evaluating the benefits and risks of automating database deployment.

I would like to hear some ideas from the SO community about what practices have been effective in the real world.

I realize that which practice is really best is somewhat subjective, but I think a good conversation about what works could be helpful to many folks.

Here are some of my teaser questions about areas of concern in this topic. These are not meant to be a definitive list - rather, a starting point to help people get an idea of what I'm looking for.

  1. Should both test and production environments be built from source control?
    • Should both be built using automation - or should production be built by copying objects from a stable, finalized test environment?
    • How do you deal with potential differences between test and production environments in deployment scripts?
    • How do you test that the deployment scripts will work as effectively against production as they do in test?
  2. What types of objects should be version controlled?
    • Just code (procedures, packages, triggers, Java, etc.)?
    • Indexes?
    • Constraints?
    • Table definitions?
    • Table change scripts? (e.g., ALTER scripts)
    • Everything?
  3. Which types of objects should not be version controlled?
    • Sequences?
    • Grants?
    • User accounts?
  4. How should database objects be organized in your SCM repository?
    • How do you deal with one-time things like conversion scripts or ALTER scripts?
    • How do you deal with retiring objects from the database?
    • Who should be responsible for promoting objects from the development to the test level?
    • How do you coordinate changes from multiple developers?
    • How do you deal with branching for database objects used by multiple systems?
  5. What exceptions, if any, can reasonably be made to this process?
    • Security concerns?
    • Data with de-identification concerns?
    • Scripts that can't be fully automated?
  6. How can you make the process resilient and enforceable?
    • Against developer error?
    • Against unexpected environmental issues?
    • For disaster recovery?
  7. How do you convince decision makers that the benefits of DB-SCM truly justify the cost?
    • Anecdotal evidence?
    • Industry research?
    • Industry best-practice recommendations?
    • Appeals to recognized authorities?
    • Cost/benefit analysis?
  8. Who should "own" database objects in this model?
    • Developers?
    • DBAs?
    • Data analysts?
    • More than one?

I treat the SQL as source-code when possible

If I can write it in standards-compliant SQL then it generally goes in a file in my source control. The file defines as much as possible, such as stored procedures and table CREATE statements.

I also include dummy data for testing in source control:

  1. proj/sql/setup_db.sql
  2. proj/sql/dummy_data.sql
  3. proj/sql/mssql_specific.sql
  4. proj/sql/mysql_specific.sql

And then I abstract out all my SQL queries so that I can build the entire project for MySQL, Oracle, MSSQL or anything else.
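
As a rough sketch of how that split can look (the tables and columns below are invented for the example), setup_db.sql holds the portable DDL, test fixtures live in their own file, and the vendor-specific files carry only what can't be expressed portably:

    -- proj/sql/setup_db.sql  (standards-compliant; runs on any vendor)
    CREATE TABLE account (
        account_id INTEGER      NOT NULL PRIMARY KEY,
        email      VARCHAR(255) NOT NULL,
        created_on DATE         NOT NULL
    );

    -- proj/sql/dummy_data.sql  (test fixtures only; never deployed to production)
    INSERT INTO account (account_id, email, created_on)
    VALUES (1, 'test@example.com', '2020-01-01');

    -- proj/sql/mysql_specific.sql  (only the bits MySQL spells differently)
    ALTER TABLE account MODIFY account_id INTEGER NOT NULL AUTO_INCREMENT;

    -- proj/sql/mssql_specific.sql would carry the SQL Server equivalent,
    -- e.g. an IDENTITY column definition instead of AUTO_INCREMENT.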

Build and test automation uses these build scripts, since they are as important as the application source, and tests everything from integrity constraints through triggers, procedures, and logging.

By asking "teaser questions" you seem to be more interested in a discussion than someone's opinion of final answers. The active (>2500 members) mailing list agileDatabases has addressed many of these questions and is, in my experience, a sophisticated and civil forum for this kind of discussion.

We have our Silverlight project with an MSSQL database in Git version control. The easiest way is to make sure you've got a slimmed-down database (content-wise), and do a complete dump from e.g. Visual Studio. Then you can run 'sqlcmd' from your build script to recreate the database on each dev machine.

For deployment this is not possible since the databases are too large: that's the main reason for having them in a database in the first place.
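
For the dev-machine case, that recreate step is conceptually just a drop-and-rebuild script that the build runs on every machine; a minimal T-SQL sketch, assuming a database name of AppDb (invented for the example):

    -- recreate_dev_db.sql: executed by the build script (e.g. via sqlcmd) on each dev machine
    IF DB_ID('AppDb') IS NOT NULL
        DROP DATABASE AppDb;
    GO
    CREATE DATABASE AppDb;
    GO
    USE AppDb;
    GO
    -- ...followed by the dumped schema and the slimmed-down content...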

I strongly believe that a DB should be part of source control and, to a large degree, part of the build process. If it is in source control then I have the same coding safeguards when writing a stored procedure in SQL as I do when writing a class in C#.

I do this by including a DB scripts directory under my source tree. This script directory doesn't necessarily have one file per object in the database - that would be a pain in the butt! I develop in my db just as I would in my code project. Then, when I am ready to check in, I do a diff between the last version of my database and the current one I am working on. I use SQL Compare for this, and it generates a script of all the changes. This script is then saved to my db_update directory with a specific naming convention, 1234_TasksCompletedInThisIteration, where the number is the next number in the set of scripts already there and the name describes what is being done in this check-in.

I do it this way because, as part of my build process, I start with a fresh database that is then built up programmatically using the scripts in this directory. I wrote a custom NAnt task that iterates through each script, executing its contents on the bare db. Obviously, if I need some data to go into the db, then I have data insert scripts too.

This has many benefits:

  1. All of my stuff is versioned.
  2. Each build is a fresh build, which means that there won't be any sneaky stuff creeping its way into my development process (such as dirty data that causes oddities in the system).
  3. When a new guy is added to the dev team, they simply need to get latest and their local dev environment is built for them on the fly.
  4. I can run test cases (I didn't call them "unit tests"!) against my database, as the state of the database is reset with each build (meaning I can test my repositories without worrying about adding test data to the db).
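
As a rough illustration of that naming convention, a generated change script checked into db_update might look like this (T-SQL flavored; the table, column, and iteration number are invented for the example):

    -- db_update/1235_AddCustomerEmailColumn.sql
    -- Produced by diffing the previous schema against the current working copy.
    ALTER TABLE Customer
        ADD Email NVARCHAR(255) NULL;

    CREATE NONCLUSTERED INDEX IX_Customer_Email
        ON Customer (Email);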

This is not for everyone.

This is not for every project. I usually work on greenfield projects, which allows me this convenience!

We use continuous integration via TeamCity. At each check-in to source control, the database and all the test data are rebuilt from scratch, then the code is built, and then the unit tests are run against the code. If you're using a code-generation tool like CodeSmith, it can also be placed into your build process to regenerate your data access layer with each build, making sure that all your layers "match up" and do not produce errors due to mismatched SP parameters or missing columns.

Each build has its own collection of SQL scripts that are stored in the $project\SQL\ directory in source control, assigned a numerical prefix and executed in order. That way, we're practicing our deployment procedure at every build.

Depending on the lookup table, most of our lookup values are also stored in scripts and run to make sure the configuration data is what we expect for, say, "reason_codes" or "country_codes". This way we can make a lookup data change in dev, test it out and then "promote" it through QA and production, instead of using a tool to modify lookup values in production, which can be dangerous for uptime.
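
A sketch of what one of those lookup scripts can look like, using T-SQL's MERGE and invented values; writing it idempotently means the same versioned script can be re-run safely in every environment:

    -- Keep country_codes in sync with the versioned configuration data.
    MERGE INTO country_codes AS target
    USING (VALUES ('US', 'United States'),
                  ('CA', 'Canada')) AS source (code, name)
        ON target.code = source.code
    WHEN MATCHED THEN
        UPDATE SET name = source.name
    WHEN NOT MATCHED THEN
        INSERT (code, name) VALUES (source.code, source.name);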

We also create a set of "rollback" scripts that undo our database changes, in case a build to production goes screwy. You can test the rollback scripts by running them, then re-running the unit tests for the build one version below yours, after its deployment scripts run.
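
For instance, a forward script and its rollback counterpart might be paired like this (the file names, table, and column are hypothetical):

    -- 0042_add_invoice_due_date.sql  (forward, run during deployment)
    ALTER TABLE invoices ADD due_date DATE NULL;

    -- 0042_add_invoice_due_date_rollback.sql  (run only if the build goes screwy)
    ALTER TABLE invoices DROP COLUMN due_date;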

Here are some answers to your questions:

  1. Should both test and production environments be built from source control? YES
    • Should both be built using automation - or should production be built by copying objects from a stable, finalized test environment?
    • Automation for both. Do NOT copy data between the environments.
    • How do you deal with potential differences between test and production environments in deployment scripts?
    • Use templates, so that you actually produce a different set of scripts for each environment (e.g. references to external systems, linked databases, etc.)
    • How do you test that the deployment scripts will work as effectively against production as they do in test?
    • You test them on a pre-production environment: test the deployment on an exact copy of the production environment (database and potentially other systems)
  2. What types of objects should be version controlled?
    • Just code (procedures, packages, triggers, java, etc)?
    • Indexes?
    • Constraints?
    • Table Definitions?
    • Table Change Scripts? (eg. ALTER scripts)
    • Everything?
    • Everything, and:
      • Do not forget static data (lookup lists etc), so you do not need to copy ANY data between environments
      • Keep only the current version of the database scripts (version controlled, of course), and
      • Store ALTER scripts: 1 BIG script (or a directory of scripts named like 001_AlterXXX.sql, so that running them in natural sort order will upgrade from version A to B - see the sketch after this list)
  3. Which types of objects shouldn't be version controlled?
    • Sequences?
    • Grants?
    • User Accounts?
    • see 2. If your users/roles (or technical user names) are different between environments, you can still script them using templates (see 1.)
  4. How should database objects be organized in your SCM repository?
    • How do you deal with one-time things like conversion scripts or ALTER scripts?
    • see 2.
    • How do you deal with retiring objects from the database?
    • deleted from DB, removed from source control trunk/tip
    • Who should be responsible for promoting objects from development to test level?
    • dev/test/release schedule
    • How do you coordinate changes from multiple developers?
    • Try NOT to create a separate database for each developer. You use source control, right? In that case, developers change the database and check in the scripts. To be completely safe, re-create the database from the scripts during the nightly build
    • How do you deal with branching for database objects used by multiple systems?
    • Tough one: try to avoid it at all costs.
  5. What exceptions, if any, can reasonably be made to this process?
    • Security issues?
    • do not store passwords for test/prod. you may allow it for dev, especially if you have automated daily/nightly DB rebuilds
    • Data with de-identification concerns?
    • Scripts that can't be fully automated?
    • document and store with the release info/ALTER script
  6. How can you make the process resilient and enforceable?
    • To developer error?
    • Test with a daily build from scratch, and compare the results to the incremental upgrade (from version A to B using ALTERs); compare both the resulting schema and the static data
    • To unexpected environmental issues?
    • use version control and backups
    • compare the PROD database schema to what you think it is, especially before deployment. SuperDuperCool DBA may have fixed a bug that was never in your ticket system :)
    • For disaster recovery?
  7. How do you convince decision makers that the benefits of DB-SCM truly justify the cost?
    • Anecdotal evidence?
    • Industry research?
    • Industry best-practice recommendations?
    • Appeals to recognized authorities?
    • Cost/Benefit analysis?
    • If developers and DBAs agree, you do not need to convince anyone, I think (unless you need money to buy software like dbGhost for MSSQL)
  8. Who should "own" database objects in this model?
    • Developers?
    • DBAs?
    • Data Analysts?
    • More than one?
    • Usually DBAs approve the model (before check-in, or afterwards as part of code review). They definitely own performance-related objects. But in general the team owns it [and the employer, of course :)]
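
To make the ALTER-script advice in point 2 concrete, here is a rough sketch of a directory of ordered scripts (file names, tables, and values are invented for the example); running them in natural sort order upgrades the schema from one version to the next, and static/lookup data is versioned right alongside the DDL:

    -- 001_AlterAddStatusCodes.sql
    CREATE TABLE status_codes (
        status_code CHAR(1)      NOT NULL PRIMARY KEY,
        description VARCHAR(100) NOT NULL
    );

    -- 002_AlterAddCustomerStatus.sql
    ALTER TABLE customer ADD status_code CHAR(1) NOT NULL DEFAULT 'A';

    -- 003_AlterSeedStatusCodes.sql  (static data, so nothing is copied between environments)
    INSERT INTO status_codes (status_code, description) VALUES ('A', 'Active');
    INSERT INTO status_codes (status_code, description) VALUES ('I', 'Inactive');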

Rather than get into ivory tower arguments, here's a solution that has worked very well for me on real-world problems.

Building a database from scratch can be summarised as managing SQL scripts.

DBdeploy is a tool that will check the current state of a database - e.g. which scripts have previously been run against it, which scripts are available to be run, and therefore which scripts need to be run.

It will then collate all the needed scripts together and run them. It then records which scripts have been run.

It's not the prettiest tool or the most complex - but with careful management it can work very well. It's open source and easily extensible. Once the running of the scripts is handled nicely, it's easy to add extra components, such as a shell script that checks out the latest scripts and runs dbdeploy against a particular instance.

See a good introduction here:

http://code.google.com/p/dbdeploy/wiki/GettingStarted
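
The bookkeeping behind this is essentially a small tracking table in the target database; conceptually something like the following sketch (the column names are illustrative, not dbdeploy's exact schema):

    -- Records which numbered change scripts have already been applied.
    CREATE TABLE changelog (
        change_number INT          NOT NULL PRIMARY KEY,
        applied_at    TIMESTAMP    NOT NULL,
        applied_by    VARCHAR(100) NOT NULL,
        description   VARCHAR(500) NOT NULL
    );

    -- The tool compares this table with the scripts on disk
    -- and generates/runs only the ones that are missing.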

You might find that Liquibase handles a lot of what you're looking for.

I basically agree with every answer given by van. For more insight, my baseline for database management is the K. Scott Allen series (a must-read, IMHO, and Jeff's opinion too, it seems).

  • Database objects can always be rebuilt from scratch by launching a single SQL file (that can itself call other SQL files): Create.sql. This can include static data insertion (lists...).
  • The SQL scripts are parameterized so that no environment-dependent and/or sensitive information is stored in plain files.
  • I use a custom batch file, Create.cmd, to launch Create.sql. Its goal is mainly to check for prerequisites (tools, environment variables...) and send parameters to the SQL script. It can also bulk-load static data from CSV files for performance reasons.
  • Typically, system user credentials would be passed as a parameter to the Create.cmd file.
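
A minimal sketch of what the parameterization can look like, assuming SQL Server's sqlcmd scripting variables (Oracle SQL*Plus substitution variables work similarly); the variable and object names are invented:

    -- Create.sql: environment-specific values are supplied by Create.cmd,
    -- never stored in the versioned file itself.
    CREATE DATABASE [$(DatabaseName)];
    GO
    USE [$(DatabaseName)];
    GO
    CREATE LOGIN [$(AppLogin)] WITH PASSWORD = '$(AppPassword)';
    GO

Create.cmd would then pass the actual values on the command line, e.g. via sqlcmd's -v switch.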

IMHO, dynamic data loading should require another step, depending on your environment. Developers will want to load their database with test, junk or no data at all, while at the other end production managers will want to load production data. I would consider storing test data in source control as well (to ease unit testing, for instance).

Once the first version of the database has been put into production, you will need not only build scripts (mainly for developers), but also upgrade scripts (based on the same principles):

  • There must be a way to retrieve the version from the database (I use a stored procedure, but a table would do as well).
  • Before releasing a new version, I create an Upgrade.sql file (that can call other ones) that allows upgrading version N-1 to version N (N being the version being released). I store this script under a folder named N-1.
  • I have a batch file that does the upgrade: Upgrade.cmd. It can retrieve the current version (CV) of the database via a simple SELECT statement, launch the Upgrade.sql script stored under the CV folder, and loop until no folder is found. This way, you can automatically upgrade from, say, N-3 to N.
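
A hedged sketch of the version bookkeeping this relies on (T-SQL flavored; the names are illustrative): each Upgrade.sql ends by bumping the recorded version, and Upgrade.cmd reads it back to decide which folder to run next.

    -- One-row table holding the schema version currently deployed.
    CREATE TABLE db_version (
        version_number INT      NOT NULL,
        upgraded_at    DATETIME NOT NULL
    );

    -- Called by Upgrade.cmd; the result selects the N-1 folder to apply.
    CREATE PROCEDURE get_db_version
    AS
    BEGIN
        SELECT TOP 1 version_number
        FROM db_version
        ORDER BY upgraded_at DESC;
    END;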

Problems with this are:

  • It is difficult to automatically compare database schemas, depending on database vendors. This can lead to incomplete upgrade scripts.
  • Every change to the production environment (usually by DBAs for performance tuning) should find its way to the source control as well. To make sure of this, it is usually possible to log every modification to the database via a trigger. This log is reset after every upgrade.
  • Ideally, though, DBA-initiated changes should be part of the release/upgrade process when possible.

As to what kinds of database objects you want to have under source control? Well, I would say as much as possible, but not more ;-) If you want to create users with passwords, give them a default password (login/login, practical for unit testing purposes), and make the password change a manual operation. This happens a lot with Oracle, where schemas are also users...
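
For instance, in Oracle (where a schema is a user) the versioned script might create the account with a throwaway default password, leaving the real password change as a manual, per-environment step; the user name here is made up:

    -- Versioned: create the schema/user with a known default password.
    CREATE USER app_schema IDENTIFIED BY app_schema;
    GRANT CONNECT, RESOURCE TO app_schema;

    -- Not versioned: run manually in each environment after deployment.
    -- ALTER USER app_schema IDENTIFIED BY <real_password>;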

Every developer should have their own local database, and use source code control to publish to the team. My solution is here: http://dbsourcetools.codeplex.com/ Have fun, - Nathan

+1 for Liquibase: LiquiBase is an open-source (LGPL), database-independent library for tracking, managing and applying database changes. It is built on a simple premise: all database changes (structure and data) are stored in an XML-based descriptive manner and checked into source control. The good point is that DML changes are stored semantically, not just as diffs, so that you can track the purpose of the changes.
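
The changelogs are typically XML, but Liquibase also accepts SQL-formatted changelogs, which give the flavor of the idea; a rough sketch (the authors, ids, and table are invented):

    --liquibase formatted sql

    --changeset bob:1
    CREATE TABLE reason_codes (
        code        VARCHAR(10)  NOT NULL PRIMARY KEY,
        description VARCHAR(200) NOT NULL
    );
    --rollback DROP TABLE reason_codes;

    --changeset bob:2
    INSERT INTO reason_codes (code, description) VALUES ('RET', 'Returned goods');
    --rollback DELETE FROM reason_codes WHERE code = 'RET';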

It could be combined with Git version control for better interaction. I'm going to configure our dev-prod environment to try it out.

You could also use Maven or Ant build systems for building production code from the scripts.

The minus is that LiquiBase doesn't integrate into widespread SQL IDEs, so you have to do basic operations yourself.

In addition to this, you could use DBUnit for DB testing - this tool allows data-generation scripts to be used for testing your production env, with cleanup afterwards.

IMHO:

  1. Store DML in files so that you can version them.
  2. Automate the schema build process from source control.
  3. For testing purposes, developers can use a local DB built from source control via the build system, and load test data with scripts or DBUnit scripts (also from source control).
  4. LiquiBase allows you to provide a "run sequence" of scripts to respect dependencies.
  5. There should be a DBA team that checks the master branch with ALL changes before production use. I mean, they review branches from other DBAs before committing into the MASTER trunk, so that master is always consistent and production-ready.

We faced all of the mentioned problems - code changes, merging, rewriting - in our production billing database. This topic is great for discovering all that stuff.