I'm working on a web application, and I need to create a branch for some major changes. The thing is, these changes require changes to the database schema, so I'd like to put the whole database under git as well.

How do I do that? Is there a specific folder that I can keep under a git repository? How do I know which one it is? How can I be sure I'm putting the right folder in?

I need to be sure, because these changes are not backward compatible; I can't afford to screw up.

The database in my case is PostgreSQL.

Edit:

Someone suggested taking backups and putting the backup file under version control instead of the database itself. To be honest, I find that really hard to swallow.

There has to be a better way.

Update:

OK, so there is no better way, but I'm still not quite convinced, so I will change the question slightly:

I'd like to put my entire database under version control. Which database engine could I use so that I can put the actual database under version control instead of its dump?

Would SQLite be git-friendly?

Since this is only the development environment, I can choose whichever database I want.

Edit2:

What I really want is not to track my development history, but to be able to switch from my "new radical changes" branch to the "current stable" branch and be able to fix some bugs/issues there, so that when I switch branches, the database automatically becomes compatible with the branch I'm currently on. I don't really care much about the actual data.


Current answer

Here is what I try to do in my projects:

Separate data, schema and default data.

The database configuration is stored in a configuration file that is not under version control (.gitignore).

The database defaults (used to set up new projects) are a simple SQL file under version control.

For the database schema, create a schema dump and keep it under version control (see the sketch below).
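A minimal sketch of that split for the PostgreSQL case in the question (file, folder and database names are only placeholders):

```bash
# Keep the live configuration out of the repository
echo "config/database.yml" >> .gitignore

# db/defaults.sql  -- versioned INSERTs for default/lookup data
# db/schema.sql    -- versioned schema-only dump, refreshed with:
pg_dump --schema-only --no-owner --no-privileges \
        -f db/schema.sql myapp_dev
```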

The most common way is to use update scripts containing SQL statements (ALTER TABLE ... or UPDATE). You also need a place in your database where you keep track of the current version of your schema.

Take a look at other large open source database projects (Piwik, or your favourite CMS system); they all use update scripts (1.sql, 2.sql, 3.sh, 4.php, 5.sql, ...).

But this is very time-consuming work: you have to create and test the update scripts, and you also need a generic update script that compares the versions and runs all the necessary update scripts.
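As a rough illustration of such a generic runner, assuming a single-row schema_version table and numbered scripts in an updates/ folder (all names are invented for the example):

```bash
#!/bin/sh
# Apply every update script newer than the version recorded in the DB.
DB=myapp_dev
current=$(psql -At -d "$DB" -c "SELECT version FROM schema_version")

for f in updates/*.sql; do
    n=$(basename "$f" .sql)          # e.g. "004" from updates/004.sql
    if [ "$n" -gt "$current" ]; then
        psql -d "$DB" -v ON_ERROR_STOP=1 -f "$f" || exit 1
        psql -d "$DB" -c "UPDATE schema_version SET version = $n"
    fi
done
```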

So in theory (and this is what I'm looking for) you could dump the database schema after every change (manually, via a cron job, or via a git hook, ideally before committing), and only write update scripts for the few very special cases.

Afterwards, your generic update script (after running the normal update scripts for the special cases) would compare the schemas (the dump and the current database) and automatically generate the necessary ALTER statements. There are already some tools that can do this, but I haven't found a good one yet.
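The "dump after every change" part is easy to automate; here is a sketch of a git pre-commit hook for PostgreSQL (the database name and path are assumptions):

```bash
#!/bin/sh
# .git/hooks/pre-commit -- refresh the schema dump before every commit
pg_dump --schema-only --no-owner -f db/schema.sql myapp_dev || exit 1
git add db/schema.sql
```

Generating the ALTER statements from two such dumps is what tools like Migra (listed further down) attempt to automate.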

Other answers

I came across this question because I have a similar problem: something approximating a DB-based directory structure that stores "files", and I need git to manage it. It is distributed across a cloud, using replication, hence its access point will be via MySQL.

The gist of the answers above seems to be to suggest alternative solutions to the problem asked, which somewhat misses the point of using Git to manage something inside a database, so I'll try to answer that question directly.

Git is a system that, in essence, stores a database of deltas (differences) which can be reassembled, in order, to reproduce a context. The normal use of git assumes that the context is a filesystem and that those deltas are diffs in that filesystem, but really all git is, is a hierarchical database of deltas (hierarchical because in most cases each delta is a commit with at least one parent, arranged in a tree).

In theory, as long as you can generate a delta, git can store it. The problem is that git normally expects the context in which it generates deltas to be a filesystem, and similarly, when you check out a point in the git hierarchy, it expects to produce a filesystem.

If you want to manage change in a database, you have two discrete problems, and I would address them separately (if I were you). The first is schema, the second is data (although in your question you state data isn't something you're concerned about). A problem I had in the past was a Dev and Prod database, where Dev could take incremental changes to the schema, and those changes had to be documented in CVS and propagated to live, along with additions to one of several 'static' tables. We did that by having a third database, called Cruise, which contained only the static data. At any point the schema from Dev and Cruise could be compared, and we had a script to take the diff of those two files and produce an SQL file containing ALTER statements to apply the changes. Similarly, any new data could be distilled to an SQL file containing INSERT commands. As long as fields and tables are only added, and never deleted, the process could automate generating the SQL statements to apply the delta.
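A rough sketch of the dump-and-diff part of that workflow for MySQL (database and table names are hypothetical; the diff-to-ALTER script itself is not reproduced here):

```bash
# Schema-only dumps of both databases
mysqldump --no-data dev_db    > dev_schema.sql
mysqldump --no-data cruise_db > cruise_schema.sql

# The textual diff shows what changed; a custom script turned this
# into ALTER statements (additions only, never deletions)
diff -u cruise_schema.sql dev_schema.sql > schema.diff

# New rows in the 'static' tables exported as INSERT statements
mysqldump --no-create-info cruise_db static_table > static_data.sql
```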

The mechanism by which git generates deltas is diff and the mechanism by which it combines 1 or more deltas with a file, is called merge. If you can come up with a method for diffing and merging from a different context, git should work, but as has been discussed you may prefer a tool that does that for you. My first thought towards solving that is this https://git-scm.com/book/en/v2/Customizing-Git-Git-Configuration#External-Merge-and-Diff-Tools which details how to replace git's internal diff and merge tool. I'll update this answer, as I come up with a better solution to the problem, but in my case I expect to only have to manage data changes, in-so-far-as a DB based filestore may change, so my solution may not be exactly what you need.
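As one concrete example, a textconv-based diff driver is enough to make a binary SQLite file readable in git diff, which also touches the "is sqlite git friendly" part of the question (the file pattern and script name are placeholders, and this only helps diffing, not merging):

```bash
# .gitattributes: route *.db files through the "sqlite3" diff driver
#   *.db diff=sqlite3

# sqlite3-textconv: a one-line wrapper script git calls with the file path
#   #!/bin/sh
#   sqlite3 "$1" .dump

# Tell git how to turn *.db files into text before diffing
git config diff.sqlite3.textconv ./sqlite3-textconv
```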

Databases that offer versioning, branching or time travel themselves:

Irmin (branching + time travel)
Flur.ee (immutable + time travel + graph query)
XTDB (formerly called 'CruxDB') (time travel + query)
TerminusDB (immutable + branching + time travel + graph query!)
DoltDB (branching + time travel + SQL query)
Quadrable (branching + remote state verification)
EdgeDB (no real time travel, but migrations derived by the compiler after schema changes)
Migra (diffing for Postgres schemas/data; auto-generates migration scripts, auto-syncs db state)
ImmuDB (immutable + time travel)

We used to run a social website on a standard LAMP configuration. We had a Live server, a Test server and a Development server, as well as the local developer machines. All of these were managed using GIT.

On each machine, we had the PHP files, but also the MySQL service, and a folder with images that users would upload. The Live server grew to have some 100K (!) recurrent users, the dump was about 2GB (!), and the image folder was some 50GB (!). By the time I left, our server was reaching the limit of its CPU, RAM, and most of all, the concurrent network connection limits (we even compiled our own version of the network card driver to max out the server, lol). We could not (nor should you assume with your website) put 2GB of data and 50GB of images in GIT.

To manage all this under GIT easily, we would ignore the binary folders (the folders containing the images) by adding those folder paths to .gitignore. We also had a folder called SQL outside the Apache documentroot path. In that SQL folder, we put the SQL files from the developers with incremental numbering (001.florianm.sql, 001.johns.sql, 002.florianm.sql, etc). These SQL files were managed by GIT as well. The first sql file would indeed contain a large chunk of the DB schema. We did not add user data to GIT (e.g. the records of the users or comments tables), but data like configs, topology or other site-specific data was maintained in the sql files (and hence by GIT). Mostly it's the developers (who know the code best) who determine what is and what is not maintained by GIT with regard to SQL schema and data.
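A minimal sketch of that layout (the folder and file names follow this example, nothing here is required by git):

```bash
# Ignore the binary upload folders (paths are this example's)
cat >> .gitignore <<'EOF'
public/images/uploads/
EOF

# Incrementally numbered SQL files live outside the docroot and are versioned
ls SQL/
#   001.florianm.sql   001.johns.sql   002.florianm.sql ...
```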

When it came to a release, the administrator logged in to the dev server, merged the live branch together with all the developers' branches that were needed into an 'update' branch, and pushed it to the test server. On the test server, he checked whether the updating process for the Live server was still valid, and in quick succession: pointed all traffic in Apache to a placeholder site, created a DB dump, pointed the working directory from 'live' to 'update', executed all new sql files into mysql, and repointed the traffic back to the correct site. When all stakeholders agreed after reviewing the test server, the administrator did the same thing from the Test server to the Live server. Afterwards, he merged the live branch on the production server into the master branch across all servers, and rebased all live branches. The developers were responsible for rebasing their own branches, but they generally knew what they were doing.

If there was a problem on the test server, e.g. the merge had too many conflicts, then the code was reverted (the working branch pointed back to 'live') and the SQL files were never executed. The moment the SQL files had been executed, this was considered a non-revertible action. If the SQL files were not working properly, the DB was restored using the dump (and the developers told off for supplying badly tested SQL files).

Today we maintain both an sql-up and an sql-down folder with matching filenames, and the developers have to test that every sql file that upgrades can be downgraded equally well. This could ultimately be executed with a bash script, but it is a good idea if human eyes keep monitoring the upgrade process.
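A sketch of such a bash helper, assuming matching file names under sql-up/ and sql-down/ as described (all names are illustrative):

```bash
#!/bin/sh
# Usage: ./migrate.sh up 002    or    ./migrate.sh down 002
direction=$1    # "up" applies sql-up/, "down" applies sql-down/
number=$2       # e.g. "002" selects 002.*.sql

for f in "sql-$direction/$number".*.sql; do
    [ -f "$f" ] || { echo "no such migration: $f" >&2; exit 1; }
    echo "Applying $f"
    mysql -u deploy -p myapp_db < "$f" || exit 1
done
```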

Not great, but manageable. I hope this gives an insight into a real, practical, relatively high-availability site. It may be a bit outdated, but it is still being followed.

I wish it were simpler. Checking in the schema as a text file is a good start for capturing the structure of the DB. For the content, however, I haven't found a cleaner, more git-friendly method than CSV files: one per table. The DB can then be edited on multiple branches and merges extremely well.
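For PostgreSQL, a sketch of that approach might look like this (table, path and database names are only examples):

```bash
# Export each table to its own CSV file; the CSVs are what get committed
for t in users comments settings; do
    psql -d myapp_dev -c "\copy $t TO 'db/csv/$t.csv' CSV HEADER"
done
```

Reloading after a branch switch is the mirror image, using \copy ... FROM on each file.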

I would recommend neXtep (link removed - the domain was taken over by an NSFW site) for version-controlling the database. It has a good set of documentation and forums explaining how to install it and what errors you may encounter. I tested it with PostgreSQL 9.1 and 9.3; I was able to get it working on 9.1, but it didn't seem to work on 9.3.