Background

I'm a first-year computer science student and I work part-time at my dad's small company. I don't have any experience developing real-world applications. I have written scripts in Python and some coursework in C, but nothing like this.

My dad has a small training company, and currently all classes are scheduled, recorded, and followed up through an external web application. There is an export/"reports" feature, but it is very generic and we need specific reports. We don't have access to the actual database to run queries on. I've been asked to set up a custom reporting system.

My idea is to create generic CSV exports every night and import them (probably with Python) into a MySQL database hosted at the office, from which I can run the specific queries we need. I have no experience with databases beyond the very basics. I've read up a bit on database creation and normal forms.
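The nightly import step might be sketched like this. This is only an illustration, not the asker's actual code: the file name, table, and columns are invented, and `sqlite3` stands in for MySQL so the snippet runs anywhere (with MySQL you would use a driver such as mysql-connector-python, but the flow is the same).

```python
import csv
import sqlite3

def import_clients(conn, csv_path):
    """Load one nightly 'clients' CSV export into the reporting database."""
    cur = conn.cursor()
    cur.execute("""
        CREATE TABLE IF NOT EXISTS clients (
            clientid     INTEGER PRIMARY KEY,
            name         TEXT NOT NULL,
            departmentid INTEGER
        )
    """)
    with open(csv_path, newline="") as f:
        rows = [(r["clientid"], r["name"], r["departmentid"])
                for r in csv.DictReader(f)]
    # INSERT OR REPLACE keyed on the primary key makes the nightly load
    # idempotent: re-importing the same export does not duplicate rows.
    cur.executemany(
        "INSERT OR REPLACE INTO clients (clientid, name, departmentid) "
        "VALUES (?, ?, ?)", rows)
    conn.commit()
    return len(rows)
```

A cron job (or Windows Task Scheduler) would call a script like this once per export file each night.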

We may have international clients soon, so I would like the database not to explode if/when that happens. We also currently have a couple of big companies as clients, with different divisions (e.g. ACME parent company, ACME healthcare division, ACME bodycare division).

The schema I came up with is the following:

From the client perspective:

- Clients is the main table
- Clients are linked to the department they work for
- Departments can be scattered around a country: HR in London, Marketing in Swansea, etc.
- Departments are linked to the division of a company
- Divisions are linked to the parent company

From the classes perspective:

- Sessions is the main table
- A teacher is linked to each session
- A statusid is given to each session, e.g. 0 - Completed, 1 - Cancelled
- Sessions are grouped into "packs" of an arbitrary size
- Each pack is assigned to a client
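The two hierarchies described above could be sketched as DDL along these lines. All column names here are invented for illustration; `sqlite3` is used so the snippet runs as-is, but the statements translate to MySQL almost verbatim (adding engine/charset options there as needed).

```python
import sqlite3

# One FK per level of each hierarchy: company -> division -> department ->
# client, and pack -> session, with teachers referenced from sessions.
SCHEMA = """
CREATE TABLE companies   (companyid    INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE divisions   (divisionid   INTEGER PRIMARY KEY, name TEXT NOT NULL,
                          companyid    INTEGER NOT NULL REFERENCES companies(companyid));
CREATE TABLE departments (departmentid INTEGER PRIMARY KEY, name TEXT NOT NULL,
                          city         TEXT,
                          divisionid   INTEGER NOT NULL REFERENCES divisions(divisionid));
CREATE TABLE clients     (clientid     INTEGER PRIMARY KEY, name TEXT NOT NULL,
                          departmentid INTEGER NOT NULL REFERENCES departments(departmentid));
CREATE TABLE teachers    (teacherid    INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE packs       (packid       INTEGER PRIMARY KEY,
                          size         INTEGER NOT NULL,
                          clientid     INTEGER NOT NULL REFERENCES clients(clientid));
CREATE TABLE sessions    (sessionid    INTEGER PRIMARY KEY,
                          packid       INTEGER NOT NULL REFERENCES packs(packid),
                          teacherid    INTEGER NOT NULL REFERENCES teachers(teacherid),
                          statusid     INTEGER NOT NULL,
                          scheduled_at TEXT NOT NULL);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```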

I "designed" (more like scribbled) the schema on a piece of paper, trying to normalize it to third normal form. I then plugged it into MySQL Workbench, which made it all pretty for me: (click here for the full-size graphic)

(source: maian.org)

Example queries I will be running

- Which clients with remaining credit are inactive (no sessions scheduled in the future)
- What is the attendance rate per client/department/division (measured by the statusid of each session)
- How many classes a teacher has given in a month
- Flag clients with low attendance rates
- Custom reports for HR departments, with attendance rates for their department's staff
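As a sanity check that the schema can answer these, here is one way the first query ("inactive clients with remaining credit") might look. The table and column names are invented stand-ins for the real schema, and "credit" is modelled here as pack sessions not yet used; `sqlite3` is used only so the example runs.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE clients  (clientid INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE packs    (packid INTEGER PRIMARY KEY, clientid INTEGER, size INTEGER);
CREATE TABLE sessions (sessionid INTEGER PRIMARY KEY, packid INTEGER, scheduled_at TEXT);

INSERT INTO clients  VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO packs    VALUES (10, 1, 5), (20, 2, 5);
-- Alice used 2 of 5 sessions, none upcoming -> inactive, with credit left.
INSERT INTO sessions VALUES (100, 10, '2010-01-01'), (101, 10, '2010-02-01');
-- Bob has a session scheduled in the future -> active.
INSERT INTO sessions VALUES (200, 20, '2999-01-01');
""")

INACTIVE_WITH_CREDIT = """
SELECT c.clientid, c.name
FROM clients c
WHERE EXISTS (                      -- some pack still has unused sessions
    SELECT 1 FROM packs p
    WHERE p.clientid = c.clientid
      AND p.size > (SELECT COUNT(*) FROM sessions s WHERE s.packid = p.packid)
)
AND NOT EXISTS (                    -- but nothing is scheduled ahead
    SELECT 1 FROM sessions s
    JOIN packs p2 ON p2.packid = s.packid
    WHERE p2.clientid = c.clientid
      AND s.scheduled_at > DATE('now')
)
"""
rows = conn.execute(INACTIVE_WITH_CREDIT).fetchall()
```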

Question(s)

- Is this overengineered, or am I headed in the right direction?
- Will the need to join several tables for most queries cause a big performance hit?
- I have added a "lastsession" column to clients, as this will probably be a common query. Is this a good idea, or should I keep the database strictly normalized?

Thanks for your time


Current answer

It's not overengineered; that's how I would approach the problem. Joins are fine and won't cause much of a performance hit (they are completely necessary unless you denormalize the database, which is not recommended!). For the statuses, see whether you can use an ENUM datatype to optimize that table.
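The ENUM suggestion refers to MySQL's column type, e.g. `statusid ENUM('completed', 'cancelled') NOT NULL` (the value names here are just taken from the question's 0/1 examples). SQLite has no ENUM, so this runnable sketch uses the closest equivalent, a CHECK constraint, to show the same idea: invalid status values are rejected at the database level.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sessions (
        sessionid INTEGER PRIMARY KEY,
        -- In MySQL: status ENUM('completed', 'cancelled') NOT NULL
        status TEXT NOT NULL CHECK (status IN ('completed', 'cancelled'))
    )
""")
conn.execute("INSERT INTO sessions VALUES (1, 'completed')")

# A value outside the allowed set is refused by the constraint.
try:
    conn.execute("INSERT INTO sessions VALUES (2, 'postponed')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```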

Other answers

I have worked in the training/school field, and I'd like to point out that there is usually an M:1 relationship between what you call a "session" (an instance of a given course) and the course itself. In other words, your catalogue offers the course ("Spanish 101" or whatever), but you might have two different instances of it during a single semester (Tue-Thu taught by Smith, Wed-Fri taught by Jones).
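That split might look like the following (table and column names invented for illustration): a catalogue `courses` table, with many session instances each pointing back at one course.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE courses  (courseid INTEGER PRIMARY KEY, title TEXT NOT NULL);
CREATE TABLE sessions (
    sessionid INTEGER PRIMARY KEY,
    courseid  INTEGER NOT NULL REFERENCES courses(courseid),  -- M:1 to courses
    teacher   TEXT NOT NULL,
    schedule  TEXT NOT NULL
);

INSERT INTO courses  VALUES (1, 'Spanish 101');
-- Two instances of the same catalogue course in one semester:
INSERT INTO sessions VALUES (10, 1, 'Smith', 'Tue/Thu'),
                            (11, 1, 'Jones', 'Wed/Fri');
""")
```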

Other than that, this looks like a good start. I'd bet you'll find that the client domain (the graph leading to "clients") is more complex than you've modelled, but don't overcomplicate things until you have some real data to guide you.

A few things come to mind:

- The tables seem geared to reporting, but not really to running the business. I would think that when a client signs up, there's essentially an order being placed for the client attending a list of sessions, and that order might be for multiple employees in one company. It would seem an "order" table would really be at the center of your system and drive your data capture and eventual reporting. (Compare the paper documents you've been using to run the business with your database design to see if there's a logical match.)
- Companies often don't have divisions. Employees sometimes change divisions/departments, maybe even mid-session. Companies sometimes add/delete/rename divisions/departments. Make sure the possible real-time changing contents of your tables don't make subsequent reporting/grouping difficult.
- With so much contact data split over so many tables, you might have to enforce very strict data-entry validation to keep your reports meaningful and inclusive. E.g., when a new client is added, make sure his company/division/department/city match the same values as his coworkers'.
- The "packs" concept isn't clear at all.
- Since you indicate it's a small business, it would be surprising if performance were an issue, considering the speed and capacity of current machines.

I want to address only the concern that joining multiple tables will cause a performance hit. Do not be afraid to normalize because you will have to do joins. Joins are normal and expected in relational databases, and they are designed to handle them well. You will need to set up PK/FK relationships (for data integrity; this is important to consider in the design), but in many databases FKs are not automatically indexed. Since they will be used in the joins, you will definitely want to start by indexing the FKs. PKs generally get an index on creation, as they have to be unique.

It is true that data warehouse design reduces the number of joins, but usually one doesn't get to the point of data warehousing until one has millions of records that need to be accessed in one report. Even then, almost all data warehouses start with a transactional database collecting the data in real time, and the data is then moved to the warehouse on a schedule (nightly or monthly or whatever the business need is). So this is a good start even if you need to design a data warehouse later to improve report performance.
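The FK-indexing advice boils down to one extra statement per foreign key column used in joins. A minimal sketch with invented names (`sqlite3` here, but `CREATE INDEX` is the same in MySQL; note that MySQL's InnoDB actually creates FK indexes for you, while many other engines do not):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE clients  (clientid INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE sessions (sessionid INTEGER PRIMARY KEY,
                       clientid  INTEGER REFERENCES clients(clientid));

-- The PK columns are indexed automatically; the FK column is not,
-- so index it explicitly before it appears in every reporting join.
CREATE INDEX idx_sessions_clientid ON sessions (clientid);
""")

# EXPLAIN QUERY PLAN lets you confirm lookups on the FK can use the index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM sessions WHERE clientid = 1").fetchall()
```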

I have to say that your design is impressive for a first-year computer science student.

Here are some more answers to your questions:

1) You're pretty much spot on for someone encountering this kind of problem for the first time. I think the advice others have given so far pretty much covers it. Good work!

2 & 3) The performance hit largely depends on having the right indexes set up and tuned for the particular queries/procedures, and, more importantly, on the volume of records. Unless you have well over a million records in your main tables, you seem to be on a mainstream enough design path that performance will not be an issue on reasonable hardware.

That said, and this relates to your question 3, with the start you have you probably shouldn't really be overly worried about performance or hyper-sensitivity to normalization orthodoxy here. This is a reporting server you are building, not a transaction based application backend, which would have a much different profile with respect to the importance of performance or normalization. A database backing a live signup and scheduling application has to be mindful of queries that take seconds to return data. Not only does a report server function have more tolerance for complex and lengthy queries, but the strategies to improve performance are much different.

For example, in a transaction based application environment your performance improvement options might include refactoring your stored procedures and table structures to the nth degree, or developing a caching strategy for small amounts of commonly requested data. In a reporting environment you can certainly do this but you can have an even greater impact on performance by introducing a snapshot mechanism where a scheduled process runs and stores pre-configured reports and your users access the snapshot data with no stress on your db tier on a per request basis.
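The snapshot mechanism described above can be as simple as materializing the expensive query into a plain table on a schedule, so per-request reports read precomputed rows instead of re-running the joins. A sketch with invented names (`sqlite3` for runnability; in MySQL this would be a scheduled job doing `CREATE TABLE ... AS SELECT ...`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sessions (sessionid INTEGER PRIMARY KEY,
                       clientid  INTEGER,
                       statusid  INTEGER);   -- 0 = completed, 1 = cancelled
INSERT INTO sessions VALUES (1, 1, 0), (2, 1, 1), (3, 2, 0);
""")

def refresh_attendance_snapshot(conn):
    """Run on a schedule (e.g. nightly via cron), not per user request."""
    conn.executescript("""
        DROP TABLE IF EXISTS attendance_snapshot;
        CREATE TABLE attendance_snapshot AS
        SELECT clientid,
               AVG(CASE WHEN statusid = 0 THEN 1.0 ELSE 0.0 END) AS attendance_rate
        FROM sessions
        GROUP BY clientid;
    """)

refresh_attendance_snapshot(conn)
# User-facing reports now hit the cheap precomputed table:
rows = conn.execute("SELECT clientid, attendance_rate "
                    "FROM attendance_snapshot ORDER BY clientid").fetchall()
```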

All of this is to say that the design principles and tricks you employ may differ depending on the role of the database you are creating. I hope that's helpful.

On a side note, it's worth mentioning that if you're already generating CSVs and want to load them into a MySQL database, LOAD DATA LOCAL INFILE is your best friend: http://dev.mysql.com/doc/refman/5.1/en/load-data.html. Also worth a look is mysqlimport, a command-line tool that is basically a nice wrapper around LOAD DATA INFILE.