Task 1: Types of Database Models Assignment
1.1 Different Types of Database Models
This assignment discusses several different types of database models, some simple and others extremely complex. Companies and individuals access databases every day: if you look up a number in the telephone book, you are reading a printed report from a database; if you make an online purchase, you are accessing a database. Databases have become ubiquitous in our lives.
There are six commonly recognized models (types) of database, each useful for different types of data or information. Depending upon your specific needs, one of these models should be appropriate for your migration from paper-based systems to a database.
The six recognized models are:
1.1.1 Relational database
The relational database model is the best-known model and the one most often implemented in today's DBMSs. It defines a database as a collection of tables (relations) that contain all the data. This module deals predominantly with the relational database model and the database systems based on it.
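The idea can be sketched in a few lines using Python's built-in sqlite3 module; the table and column names here are illustrative, not taken from the case study:

```python
import sqlite3

# A relational database is a collection of tables (relations);
# every query over a relation returns rows (tuples).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customer VALUES (1, 'Alice'), (2, 'Bob')")

rows = conn.execute("SELECT cust_id, name FROM customer ORDER BY cust_id").fetchall()
print(rows)  # [(1, 'Alice'), (2, 'Bob')]
```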
Fig: Relational Database Model
1.1.2 Flat-file database
Flat-file database tables can be kept in various kinds of applications, including HTML documents, simple word processors, or worksheets in spreadsheet applications. The tables within a flat-file database can be sorted on column values, and they serve as a solution for simple database tasks.
Fig: Flat-file Database Model
1.1.3 Hierarchical & Network database
The network model and the hierarchical model are the predecessors of the relational model. They build upon individual data sets and can express hierarchical or network-like structures of the real world.
Fig: Network Model and Hierarchical Model
1.1.4 Object-oriented database
Object-oriented models define a database as a collection of objects with features and methods. A detailed discussion of object-oriented databases follows in an advanced module.
Fig: Schematic Representation of an Object-oriented Database Model
1.1.5 Object-relational database
Object-oriented models are very powerful but also quite complex. The relatively new object-relational database model extends the widespread and simple relational model with some basic object-oriented concepts. This lets us keep working with the well-known relational model while gaining some advantages of the object-oriented model without its complexity.
Fig: Schematic Representation of the object-relational Database Model
1.2 Database model suitable for the “ULKOM BICYCLE HIRE”
The relational database model is the most commonly used today. Data management is easier through the use of referential integrity. In addition, retrieving data is quick and relatively easy through the international standard Structured Query Language (SQL). Changes made in one portion of the database are propagated throughout it by integrity constraints and relational links.
Three types of relational links (one-to-one, one-to-many, and many-to-many) are allowed between database tables (the primary storage unit of a relational database), which allows relational databases to be used for even the most complex database operations.
The relational database overcomes the limitations found in the other database models already discussed. Some of the benefits of a relational model are:
- Quick access to data
- Easily implemented data integrity
- Utilizes a standard language (SQL)
- Easy to develop and modify applications
The relational database model remains the most widely used. Although the industry will gradually migrate towards the object-relational model, standards must be determined before that model gains widespread acceptance.
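The referential integrity mentioned above can be sketched with SQLite; the tables are simplified stand-ins for the case-study schema, not the actual implementation:

```python
import sqlite3

# Referential integrity: the DBMS rejects changes that would orphan related rows.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE location (loc_id TEXT PRIMARY KEY)")
conn.execute("""CREATE TABLE bicycle (
    serial_number TEXT PRIMARY KEY,
    loc_id TEXT REFERENCES location(loc_id))""")
conn.execute("INSERT INTO location VALUES ('LOC1')")
conn.execute("INSERT INTO bicycle VALUES ('BIKE1', 'LOC1')")

# Deleting a location that still has bicycles must fail.
try:
    conn.execute("DELETE FROM location WHERE loc_id = 'LOC1'")
    deleted = True
except sqlite3.IntegrityError:
    deleted = False
print("delete rejected:", not deleted)
```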
1.3 Three levels of the database schema required in creating a database
The three-schema approach offers three types of schema with schema techniques based on formal language descriptions:
1.3.1 External schema for user views, or the View level
In the relational model, an external schema also presents data as a set of relations. An external schema specifies a view of the data in terms of the conceptual level and is tailored to the needs of a particular category of users. Hiding portions of the stored data from some users adds a level of security and simplifies those users' view of the data.
1.3.2 Conceptual schema that integrates the external schemas, or the Logical level
The conceptual schema hides the details of the physical level. In the relational model, the conceptual schema presents data as a set of tables.
1.3.3 Internal schema that defines physical storage structures
The internal (physical) schema describes the details of how data is stored: files, indexes, and so on, on the random-access disk system. It also typically describes the record layout of files and the file types (hash, B-tree, flat).
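The three levels can be illustrated in miniature: a base table plays the conceptual schema, a view plays an external schema, and an index belongs to the internal schema. This is a sketch with assumed names, using SQLite for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE employee (
    emp_id TEXT PRIMARY KEY, name TEXT, salary REAL)""")        # conceptual level
conn.execute("CREATE INDEX idx_emp_name ON employee(name)")       # internal level
conn.execute("CREATE VIEW employee_public AS "
             "SELECT emp_id, name FROM employee")                 # external level
conn.execute("INSERT INTO employee VALUES ('E1', 'Alice', 52000.0)")

# The restricted view hides the salary column from ordinary users.
cols = [d[0] for d in conn.execute("SELECT * FROM employee_public").description]
print(cols)  # ['emp_id', 'name']
```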
1.4 Design with Microsoft SQL Server 2008 Express
The MS SQL Express database engine is a version tailored for redistribution and embedding. SQL Server Express includes 10 GB of storage per database, easy backup and restore functionality, and compatibility with all editions of SQL Server and Microsoft Azure SQL Database.
Features of MS SQL:
- Build web and mobile applications for multiple data types:
Support structured and unstructured data while storing business data with native support for relational data, XML, and spatial data. Add geographical information to business applications and build location-aware applications. Increase granularity of temporal data with date and time data types.
- Easily manage SQL Server instances:
Manage SQL Server Express databases with SQL Server Management Studio Express. Connect to local SQL Server Express databases and manage objects with full Object Explorer integration. Write, execute, and test queries by using visual query plans, and access management and maintenance options.
- Basic reporting services:
Visualize data through basic Reporting Services (available with SQL Server Express with Advanced Services) and create readable reports that answer complex user questions. Enable users to share reports by using Microsoft Word and Microsoft Excel.
- Simplify and speed development with rich developer tools:
Take advantage of existing Transact-SQL skills, and incorporate technologies like Microsoft ADO.NET Entity Framework and LINQ. Develop applications faster through deep integration with Visual Studio, Visual Web Developer, and SQL Server Management Studio. Take advantage of rich table and query designers and drag-and-drop IDE support.
- Embed lightweight database into basic desktop applications:
SQL Server Express LocalDB is a lightweight deployment option for SQL Server Express with fewer prerequisites. This makes it ideal for developers who need an embedded SQL Server database for desktop applications or tools. LocalDB runs in-process with applications and not as a service.
The purpose of the relational model is to provide a declarative method for specifying data and queries: users directly state what information the database contains and what information they want from it, and let the database management system software take care of describing data structures for storing the data and retrieval procedures for answering queries.
Besides defining how the data are to be structured as discussed above, the relational model also lays down a set of rules to enforce data integrity, known as integrity constraints. It also defines how the data are to be manipulated (relational calculus). In addition, the model defines a special feature termed normalization to ensure efficient data storage.
2 Task 2 (LO2: 2.1 and D1)
2.1 Entity types and Key attributes and their types in a data model
| Relationship Type | Entity Class | Entity Class | Cardinality Ratio | Attributes |
2.2 Database Schema in the DATA MODEL
| Entity Class | Attribute | Constraints or other Description | Data Type |
|---|---|---|---|
| CUSTOMER_DETAILS | CUST_ID | Primary Key – DB sequence | VARCHAR(10) |
| EMPLOYEE_DETAILS | EMP_ID | Primary Key – DB sequence | VARCHAR(10) |
| EMPLOYEE_DETAILS | LOC_ID | Foreign Key – reference to LOCATION_DETAILS table | VARCHAR(10) |
| LOCATION_DETAILS | LOC_ID | Primary Key – DB sequence | VARCHAR(10) |
| BICYCLE_DETAILS | SERIAL_NUMBER | Primary Key – DB sequence | VARCHAR(10) |
| BICYCLE_DETAILS | LOC_ID | Foreign Key – reference to LOCATION_DETAILS table | VARCHAR(10) |
| BOOKING_DETAILS | BOOKING_ID | Primary Key – DB sequence | VARCHAR(10) |
| BOOKING_DETAILS | CUST_ID | Foreign Key – reference to CUSTOMER_DETAILS table | VARCHAR(10) |
| BOOKING_DETAILS | HIRE_LOC_ID | Foreign Key – reference to LOCATION_DETAILS table | VARCHAR(10) |
| BOOKING_DETAILS | RETURN_LOC_ID | Foreign Key – reference to LOCATION_DETAILS table | VARCHAR(10) |
| BOOKING_DETAILS | BICYCLE_ID | Foreign Key – reference to BICYCLE_DETAILS table | VARCHAR(10) |
| BOOKING_DETAILS | STATUS | Not Null – waiting (default), received, or cancelled | VARCHAR(10) |
| BOOKING_DETAILS | BOOKING_EMP_ID | Foreign Key – reference to EMPLOYEE_DETAILS table | VARCHAR(10) |
| BOOKING_DETAILS | RECEVING_EMP_ID | Foreign Key – reference to EMPLOYEE_DETAILS table | VARCHAR(10) |
| PAYMENT_DETAILS | PAY_ID | Primary Key – DB sequence | VARCHAR(10) |
| PAYMENT_DETAILS | BOOKING_ID | Foreign Key – reference to BOOKING_DETAILS table | VARCHAR(10) |
| RATE_CHART | RATE_ID | Primary Key – DB sequence | VARCHAR(2) |
2.3 Design for the “ULKOM BICYCLE HIRE” system using a Data Flow Diagram (DFD)
The flow of the process is explained below:
- A customer visiting the ULKOM BICYCLE HIRE (UBH) company must register with the company before placing an order; the details are recorded in the CUSTOMER_DETAILS table.
- All employees working for UBH are likewise registered in the EMPLOYEE_DETAILS table.
- Stock details such as the type of bicycle, the number of bicycles of each type, and their current condition are recorded in the BICYCLE_DETAILS table.
- The customer visits one of the three (3) locations in Cambridgeshire.
- One of the three (3) employees at the location attends to the customer's requirement.
- The employee verifies whether the customer is already registered with the company; if not, the employee first registers the customer and hands over the generated CUST_ID.
- The employee validates the bicycle stock in BICYCLE_DETAILS.
- If the required number of bicycles is available in stock, the employee places an order and the BOOKING_DETAILS table is updated.
- A unique booking ID is generated, and the BICYCLE_DETAILS and LOCATION_DETAILS tables are updated with the stock details.
- Once the customer has finished using the bicycle and visits one of the company's branches to return it, they must provide a detail such as the BOOKING_ID or CUSTOMER_ID so that the booking details can be retrieved.
- The employee at the branch generates the payment receipt using a stored procedure (table: PAYMENT_DETAILS) to evaluate the amount the customer must pay.
- The procedure also calculates the PENALTY and COMMISSION details.
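The hire steps above can be sketched as one transaction: check stock, create a booking, and decrement availability together. This is an illustrative Python/SQLite sketch with simplified stand-ins for the assignment's tables, not the actual implementation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bicycle_details (serial_number TEXT PRIMARY KEY,
                              loc_id TEXT, status TEXT);
CREATE TABLE booking_details (booking_id INTEGER PRIMARY KEY,
                              cust_id TEXT, serial_number TEXT,
                              status TEXT DEFAULT 'waiting');
INSERT INTO bicycle_details VALUES ('B1', 'LOC1', 'available');
""")

def place_booking(cust_id, loc_id):
    """Book any available bicycle at the location, or return None."""
    with conn:  # one transaction: commits on success, rolls back on error
        row = conn.execute(
            "SELECT serial_number FROM bicycle_details "
            "WHERE loc_id = ? AND status = 'available'", (loc_id,)).fetchone()
        if row is None:
            return None
        conn.execute("UPDATE bicycle_details SET status = 'hired' "
                     "WHERE serial_number = ?", (row[0],))
        cur = conn.execute("INSERT INTO booking_details (cust_id, serial_number) "
                           "VALUES (?, ?)", (cust_id, row[0]))
        return cur.lastrowid

first = place_booking('C1', 'LOC1')
second = place_booking('C2', 'LOC1')
print(first, second)  # 1 None
```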
An entity–relationship model (ER model) is a data model for describing the data or information aspects of a business domain or its process requirements, in an abstract way that lends itself to ultimately being implemented in a database such as a relational database. The main components of ER models are entities (things) and the relationships that can exist among them.
An ER model is typically implemented as a database. In the case of a relational database, this stores data in tables, which represent the entities. Some data fields in these tables point to indexes in other tables; such pointers represent the relationships.
3 Task 3 (LO2: 2.2, 2.3, M2 and D2)
3.1 Creating Relational database system
- Create database and tables required to handle the BICYCLE HIRE system for the company.
Queries used to create table objects:
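The original CREATE TABLE statements are not reproduced here, so the following is a hedged reconstruction of two of the Task 2 tables, shown in SQLite syntax for illustration (the assignment itself targets SQL Server 2008 Express; LOC_NAME is an assumed column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE LOCATION_DETAILS (
    LOC_ID   VARCHAR(10) PRIMARY KEY,
    LOC_NAME VARCHAR(50)              -- column name assumed for illustration
);
CREATE TABLE CUSTOMER_DETAILS (
    CUST_ID  VARCHAR(10) PRIMARY KEY,
    LOC_ID   VARCHAR(10) REFERENCES LOCATION_DETAILS(LOC_ID)
);
""")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['CUSTOMER_DETAILS', 'LOCATION_DETAILS']
```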
3.2 Purpose of Normalisation
Database normalization, or data normalization, is a technique to organize the contents of the tables for transactional databases and data warehouses. Normalization is part of successful database design; without normalization, database systems can be inaccurate, slow, and inefficient, and they might not produce the data you expect.
Following a successful SQL Server installation, you’ll have to create a database to hold the data. After you’ve created the database framework, you must organize the files in such a way that you can easily manage them. The primary tool to help organize the data is the table, which looks like a two-dimensional structure that encompasses rows and columns.
When you normalize a database, you have four goals: arranging data into logical groupings such that each group describes a small part of the whole; minimizing the amount of duplicate data stored in a database; organizing the data such that, when you modify it, you make the change in only one place; and building a database in which you can access and manipulate the data quickly and efficiently without compromising the integrity of the data in storage.
Data normalization helps you design new databases to meet these goals or to test databases to see whether they meet the goals. Sometimes database designers refer to these goals in terms such as data integrity, referential integrity, or keyed data access. Ideally, you normalize data before you create database tables. However, you can also use these techniques to test an existing database.
Data normalization is primarily important in the transactional, or online transactional processing (OLTP), database world, in which data modifications (e.g., inserts, updates, deletes) occur rapidly and randomly throughout the stored data. In contrast, a data warehouse contains a large amount of denormalized and summarized data—precalculated to avoid the performance penalty of ad hoc joins. In a data warehouse, updates happen periodically under extremely controlled circumstances. End users’ updates to data in data warehouses are uncommon.
3.3 Normalisation in database for UBH
This schema is normalized up to third normal form (3NF).
The schema has no repeating elements or groups of elements: each row is a single tuple, and each cell holds a single atomic attribute value, so the schema satisfies the atomicity requirement of 1NF.
In some tables, several attributes together form a composite primary key, yet no non-key attribute depends on only part of a key (no partial dependency) or on another non-key attribute (no transitive dependency), so the schema also satisfies 2NF and 3NF.
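The benefit of this 3NF design can be demonstrated in miniature: because each customer fact is stored exactly once, an update made in one place is reflected everywhere. A sketch with illustrative names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (cust_id TEXT PRIMARY KEY, name TEXT);
CREATE TABLE booking  (booking_id INTEGER PRIMARY KEY,
                       cust_id TEXT REFERENCES customer(cust_id));
INSERT INTO customer VALUES ('C1', 'Alice');
INSERT INTO booking (cust_id) VALUES ('C1'), ('C1');
""")

# Changing the name in ONE place is reflected in every booking via a join;
# a denormalized design would have to update the name on every booking row.
conn.execute("UPDATE customer SET name = 'Alicia' WHERE cust_id = 'C1'")
names = conn.execute("""SELECT DISTINCT c.name FROM booking b
                        JOIN customer c ON c.cust_id = b.cust_id""").fetchall()
print(names)  # [('Alicia',)]
```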
3.4 Data in the database for UBH
We used both the Graphical User Interface and SQL command to create a new Database and a new Table. Values were inserted into the new table and then retrieved.
4 Task 4 (LO3: 3.1, 3.2, 3.3)
4.1 Implementing query language to generate reports
4.1.1 Total number of bicycles in the three (3) branches
DECLARE
  V_LOC1 NUMBER; V_LOC2 NUMBER; V_LOC3 NUMBER; TOTAL NUMBER;
BEGIN
  -- The SELECT lines were truncated in the source; COUNT(*) over
  -- BICYCLE_DETAILS is assumed here.
  SELECT COUNT(*) INTO V_LOC1 FROM BICYCLE_DETAILS WHERE LOC_ID = 'LOC1';
  SELECT COUNT(*) INTO V_LOC2 FROM BICYCLE_DETAILS WHERE LOC_ID = 'LOC2';
  SELECT COUNT(*) INTO V_LOC3 FROM BICYCLE_DETAILS WHERE LOC_ID = 'LOC3';
  SELECT V_LOC1 + V_LOC2 + V_LOC3 INTO TOTAL FROM DUAL;
  DBMS_OUTPUT.PUT_LINE('BICYCLE STOCK AT CAMBRIDGESHIRE (EAST): ' || V_LOC1);
  DBMS_OUTPUT.PUT_LINE('BICYCLE STOCK AT CAMBRIDGESHIRE (WEST): ' || V_LOC2);
  DBMS_OUTPUT.PUT_LINE('BICYCLE STOCK AT CAMBRIDGESHIRE (SOUTH): ' || V_LOC3);
  DBMS_OUTPUT.PUT_LINE('TOTAL NUMBER OF BICYCLES AT ALL LOCATIONS: ' || TOTAL);
END;
4.2 A monthly output of the amount the company made from hiring out 30 bicycles
DECLARE
  TOTAL_BICYCLE_HIRE NUMBER := 30;
  TOTAL_DURATION NUMBER := 30;  -- days in the month
  TOTAL_WEEKS NUMBER; WEEKLY_RATE NUMBER; EACH_BICYCLE NUMBER; TOTAL_REVENUE NUMBER;
BEGIN
  SELECT ROUND(TOTAL_DURATION / 7, 1) INTO TOTAL_WEEKS FROM DUAL;
  SELECT WEEKLY_HIRE_RATE INTO WEEKLY_RATE FROM RATE_CHART;
  SELECT TOTAL_WEEKS * WEEKLY_RATE INTO EACH_BICYCLE FROM DUAL;
  SELECT ROUND(TOTAL_WEEKS * WEEKLY_RATE * TOTAL_BICYCLE_HIRE, 2) INTO TOTAL_REVENUE FROM DUAL;
  DBMS_OUTPUT.PUT_LINE('BICYCLE HIRED FOR: ' || TOTAL_WEEKS || ' WEEKS');
  DBMS_OUTPUT.PUT_LINE('WEEKLY HIRE RATE OF A BICYCLE HIRE: ' || WEEKLY_RATE || ' POUNDS');
  DBMS_OUTPUT.PUT_LINE('COST OF EACH BICYCLE HIRE FOR TOTAL DURATION: ' || EACH_BICYCLE || ' POUNDS');
  DBMS_OUTPUT.PUT_LINE('REVENUE GENERATED FOR THE MONTH: ' || TOTAL_REVENUE || ' POUNDS');
END;
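The arithmetic in the listing above can be sanity-checked in plain Python, assuming a weekly hire rate of 10 pounds (the real rate comes from the RATE_CHART table):

```python
# Sanity check of the monthly-revenue arithmetic.
total_bicycle_hire = 30
total_duration = 30            # days in the month
weekly_rate = 10.0             # ASSUMED rate; the real value lives in RATE_CHART

total_weeks = round(total_duration / 7, 1)                # 30 days is ~4.3 weeks
each_bicycle = round(total_weeks * weekly_rate, 2)        # cost per bicycle
total_revenue = round(total_weeks * weekly_rate * total_bicycle_hire, 2)

print(total_weeks, each_bicycle, total_revenue)  # 4.3 43.0 1290.0
```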
4.3 Total amount a customer has to pay for hiring a bicycle for 3 weeks and returning it 3 days late
DECLARE
  TOTAL_BICYCLE_HIRE NUMBER := 1;
  TOTAL_DURATION NUMBER := 3;  -- weeks
  LATE_BY NUMBER := 3;         -- days
  WEEKLY_RATE NUMBER; LATE_RETURN NUMBER; PENALTY NUMBER; TOTAL_COST NUMBER;
BEGIN
  SELECT WEEKLY_HIRE_RATE, LATE_RETURN_CHARGES
    INTO WEEKLY_RATE, LATE_RETURN
    FROM RATE_CHART;  -- source table assumed, as in 4.2
  SELECT LATE_BY * LATE_RETURN INTO PENALTY FROM DUAL;
  SELECT ROUND((TOTAL_DURATION * WEEKLY_RATE * TOTAL_BICYCLE_HIRE) + PENALTY, 2)
    INTO TOTAL_COST FROM DUAL;
  DBMS_OUTPUT.PUT_LINE('BICYCLE HIRED FOR: ' || TOTAL_DURATION || ' WEEKS');
  DBMS_OUTPUT.PUT_LINE('WEEKLY HIRE RATE: ' || WEEKLY_RATE || ' POUNDS');
  DBMS_OUTPUT.PUT_LINE('PENALTY CHARGES: ' || PENALTY || ' POUNDS');
  DBMS_OUTPUT.PUT_LINE('TOTAL COST FOR THE BICYCLE HIRE: ' || TOTAL_COST || ' POUNDS');
END;
4.4 The total amount employees may earn at the end of the month, plus commission, for hiring out 30 bicycles
DECLARE
  TOTAL_BICYCLE_HIRE NUMBER := 30;
  TOTAL_DURATION NUMBER := 30;  -- days worked in the month
  DAILY_RATE NUMBER; EACH_COMMN NUMBER; TOTAL_COMMN NUMBER; TOTAL_EARNINGS NUMBER;
BEGIN
  SELECT EMP_RATE, COMMISSION
    INTO DAILY_RATE, EACH_COMMN
    FROM RATE_CHART;  -- source table assumed, as in 4.2
  SELECT TOTAL_BICYCLE_HIRE * EACH_COMMN INTO TOTAL_COMMN FROM DUAL;
  SELECT ROUND((TOTAL_DURATION * DAILY_RATE) + TOTAL_COMMN, 1)
    INTO TOTAL_EARNINGS FROM DUAL;
  DBMS_OUTPUT.PUT_LINE('TOTAL BICYCLES HIRED: ' || TOTAL_BICYCLE_HIRE);
  DBMS_OUTPUT.PUT_LINE('COMMISSION EARNED: ' || TOTAL_COMMN || ' POUNDS');
  DBMS_OUTPUT.PUT_LINE('TOTAL EMPLOYEE MONTHLY EARNINGS: ' || TOTAL_EARNINGS || ' POUNDS');
END;
The required reports are generated using stored procedures, with the logic implemented according to the details given in the case study. The stored procedures can be executed as-is or modified, and reused to generate reports for different scenarios and input values.
5 Task 5 (LO4: 4.1, 4.2, 4.3 and M3)
In order to be able to add tests gradually and easily, we need a sound testing framework. However, setting up the framework is somewhat involved, and must be done before we can run our first test. To run automated tests, we need the ability to:
- Create a database and populate it with test data;
- Run tests and determine if they succeeded.
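Those two abilities can be sketched as a minimal harness; the table names and test data are illustrative:

```python
import sqlite3

# Ability 1: create and populate a throwaway test database.
def create_test_db():
    conn = sqlite3.connect(":memory:")  # each test run gets its own sandbox
    conn.execute("CREATE TABLE customer (cust_id TEXT, last_name TEXT)")
    conn.executemany("INSERT INTO customer VALUES (?, ?)",
                     [("C1", "Smith"), ("C2", "Jones")])
    return conn

# Ability 2: run a test and determine whether it succeeded.
def run_test(conn):
    expected = [("C2", "Jones"), ("C1", "Smith")]
    actual = conn.execute(
        "SELECT cust_id, last_name FROM customer ORDER BY last_name").fetchall()
    return actual == expected

result = run_test(create_test_db())
print(result)  # True
```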
5.1 Requirements for the test databases
First of all, every developer needs their own test database, so that developers do not interfere with one another's activities. Overall, it is simpler and cheaper to let everyone create and drop their own sandboxes as needed. We've found it most convenient for each member of our team to run SQL Server Developer Edition right on their workstation. This allows each developer to work off different Git branches, possibly with different versions of the database schema, and to use different test data if needed.
Once each developer has a sandbox database, it is time to start using it and benefiting from the time and effort invested in creating it, even if the sandbox is not yet complete. We can begin covering all new complex modules in the TEST schema with tests, as well as covering existing modules as we need to change them for whatever reason.
5.2 Adding the first tests: Clarifying the requirements
Developing tests should begin with clarifying the requirements. At the risk of stating the obvious: as we develop tests for a module, we have to make sure that it is doing the right thing.
Incomplete, incorrect, and ambiguous communication between customers and developers, resulting in incomplete or incorrect requirements, is a serious problem that frequently slows down software development. Improving communication is especially important in agile teams, where speed of development is essential.
In our example, the requirements are clearly incomplete: as we develop a module that searches for customers by first and last name, we need answers to the following questions:
- If first name is not provided, should we match on last name only?
- How do we order the result set?
- What should we do if first name is supplied as zero-length string (which is different from NULL)?
5.3 Matching actual test results against expected
At this point, let’s assume that we have clear and complete requirements in the form of a commented script, and the output of the script looks correct. However, to have a complete automated test, we need to match its output against expected results every time we run it.
Everything must be verified in all cases
When we explicitly test a stored procedure that returns result sets, we should always verify that the structure of the result sets, as well as every value in every returned row, matches the expected results. If a column name or type changes, this may break other code. Our tests are more useful if they detect all potentially breaking changes.
Some database testing frameworks allow us to choose whether or not we want to verify the structure of the result sets. There might be cases where we do not want to detect such breaking changes, but so far I have not encountered a single such case in my practice. So, to keep things simple and to avoid making the same choice over and over again, our framework does not offer the choice: it always verifies that the structure of the result sets is as expected.
When a test fails, it must provide full details
All too many automated tests fail too early, without outputting the full details of what exactly went wrong. As a result, we end up spending more time than necessary determining what exactly is broken.
If there is any discrepancy between actual and expected results, this test will surely indicate failure, which is good. The problem, however, is that after the first failure no more checks are executed, so we do not get very much information about why the test failed. Suppose, for instance, that the very first assertion failed. Is the first row missing altogether from the result set? Have the rows returned in the wrong order? Is this the only wrong value in the whole result set? We do not know, and we have to spend extra effort to understand what exactly is going on.
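One way to report every discrepancy, rather than stopping at the first failed assertion, is a comparison routine along these lines (a sketch, not the article's actual framework):

```python
# Collect ALL discrepancies between actual and expected result sets
# (column names and every row), instead of failing on the first one.
def diff_result_sets(expected_cols, expected_rows, actual_cols, actual_rows):
    problems = []
    if actual_cols != expected_cols:
        problems.append(f"columns: expected {expected_cols}, got {actual_cols}")
    if len(actual_rows) != len(expected_rows):
        problems.append(f"row count: expected {len(expected_rows)}, "
                        f"got {len(actual_rows)}")
    for i, (exp, act) in enumerate(zip(expected_rows, actual_rows)):
        if exp != act:
            problems.append(f"row {i}: expected {exp}, got {act}")
    return problems  # an empty list means the test passed

issues = diff_result_sets(
    ["cust_id", "name"], [("C1", "Alice"), ("C2", "Bob")],
    ["cust_id", "name"], [("C1", "Alice"), ("C2", "Bobby")])
print(issues)
```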
5.4 Implementing the testing framework
Choosing an existing testing tool vs. implementing our own
As Agile developers, we spend a lot of time every day working with our automated tests, so getting them right is essential; inefficiencies that could be easily tolerated in less dynamic environments get in our way all-too-often in Agile development. Such inefficiencies are like a small piece of rock in our shoe; the pain can be tolerated if we are going to walk a dog around the block, but it can cause a serious problem if we are running a marathon.
In our environment, we knew developing and maintaining unit tests was going to be a marathon, not a short jaunt round the block, so our stance was that the tools should do exactly what we want without unnecessary complications, and absolutely without any bugs.
A harness of automated tests is more useful if we spend less time creating and maintaining it, and more time using it. Over four years of testing database code, we’ve learned that we should:
- Gradually add test coverage to our existing systems
- Use our tests as means of precise communication with the customers
- Verify everything, so that no breaking change goes unnoticed
- In case of failure provide complete information on what went wrong
- Generate expected results and store them separately
As our test harness gets bigger, we should expect some growing pains. They are quite real and need to be dealt with, but they are beyond the scope of this article. Before spending time and effort on improving our test harness, we need to know that it is actually worth it, which is the whole point of this article.