CSE202 Database Management Systems Lecture #5 Prepared & Presented by Asst. Prof. Dr. Samsun M. BAARICI Learning Objectives Understand the role of information systems in organizations Recognize the DB design and implementation process Use UML diagrams as an aid to DB design specification Explore Rational Rose: a UML-based design tool Differentiate and apply automated DB design tools

2 Outline The role of information systems in organizations The DB design and implementation process Use of UML diagrams as an aid to DB design specification Rational Rose: A UML-based design tool Automated DB design tools

Practical Database Design Methodology and Use of UML Diagrams Design methodology Target database managed by some type of database management system Various design methodologies Large database Several dozen gigabytes of data and a schema with more than 30 or 40 distinct entity types The Role of Information Systems in Organizations

Organizational context for using database systems Organizations have created the position of database administrator (DBA) and database administration departments Information technology (IT) and information resource management (IRM) departments Key to successful business management The Role of Information Systems in Organizations (cont.) Database systems are integral components in computer-based information systems Personal computers and database system-like software products Utilized by users who previously belonged to the category of casual and occasional database users Personal databases gaining popularity Databases are distributed over multiple computer systems Better local control and faster local processing

The Role of Information Systems in Organizations (cont.) Data dictionary systems or information repositories Mini DBMSs Manage meta-data High-performance transaction processing systems require around-the-clock nonstop operation Performance is critical The Information System Life Cycle

Information system (IS) Resources involved in collection, management, use, and dissemination of the information resources of an organization The Information System Life Cycle Macro life cycle Feasibility analysis Requirements collection and analysis Design Implementation Validation and acceptance testing Requirements collection and analysis

The Information System Life Cycle (cont.) The database application system life cycle: micro life cycle System definition Database design Database implementation Loading or data conversion The Information System Life Cycle (cont.) Application conversion Testing and validation Operation

Monitoring and maintenance The Database Design and Implementation Process Design logical and physical structure of one or more databases Accommodate the information needs of the users in an organization for a defined set of applications Goals of database design Very hard to accomplish and measure Often begins with informal and incomplete requirements

The Database Design and Implementation Process (cont.) Main phases of the overall database design and implementation process: 1. Requirements collection and analysis 2. Conceptual database design 3. Choice of a DBMS 4. Data model mapping (also called logical database design) 5. Physical database design 6. Database system implementation and tuning The Database Design and Implementation Process (cont.)

Parallel activities Data content, structure, and constraints of the database Design of database applications Data-driven versus process-driven design Feedback loops among phases and within phases are common The Database Design and Implementation Process (cont.) Heart of the database design process Conceptual database design (Phase 2) Data model mapping (Phase 4) Physical database design (Phase 5)

Database system implementation and tuning (Phase 6) Phase 1: Requirements Collection and Analysis Activities Identify application areas and user groups Study and analyze documentation Study current operating environment Collect written responses from users Phase 1 (cont.) Requirements specification techniques Object-oriented analysis (OOA) Data flow diagrams (DFDs)

Refinement of application goals Computer-aided techniques Phase 2: Conceptual Database Design Phase 2a: Conceptual Schema Design Important to use a conceptual high-level data model Approaches to conceptual schema design Centralized (or one shot) schema design approach View integration approach Phase 2: (cont.) Strategies for schema design

Top-down strategy Bottom-up strategy Inside-out strategy Mixed strategy Schema (view) integration Identify correspondences/conflicts among schemas: Naming conflicts, type conflicts, domain (value set) conflicts, conflicts among constraints Modify views to conform to one another

Merge and restructure the views Phase 2: (cont.) Strategies for the view integration process Binary ladder integration N-ary integration Binary balanced strategy Mixed strategy Phase 2b: Transaction Design

In parallel with Phase 2a Specify transactions at a conceptual level Identify input/output and functional behavior Notation for specifying processes Phase 3: Choice of a DBMS Costs to consider Software acquisition cost Maintenance cost Hardware acquisition cost Database creation and conversion cost Personnel cost Training cost

Operating cost Consider DBMS portability among different types of hardware Phase 4: Data Model Mapping (Logical Database Design) Create a conceptual schema and external schemas In data model of selected DBMS Stages System-independent mapping Tailoring schemas to a specific DBMS

Phase 5: Physical Database Design Choose specific file storage structures and access paths for the database files Achieve good performance Criteria used to guide choice of physical database design options: Response time Space utilization Transaction throughput

Phase 6: Database System Implementation and Tuning Typically responsibility of the DBA Compose DDL Load database Convert data from earlier systems Database programs implemented by application programmers Most systems include monitoring utility to collect performance statistics Use of UML Diagrams as an Aid to Database Design Specification

Use UML as a design specification standard Unified Modeling Language (UML) approach Combines commonly accepted concepts from many object-oriented (O-O) methods and methodologies Includes use case diagrams, sequence diagrams, and statechart diagrams UML for Database Application Design Advantages of UML Resulting models can be used to design relational, object-oriented, or object-relational databases Brings traditional database modelers, analysts, and designers together with software application developers

Different Types of Diagrams in UML Structural diagrams Class diagrams and package diagrams Object diagrams Component diagrams Deployment diagrams Different Types of Diagrams in UML (cont.) Behavioral diagrams Use case diagrams Sequence diagrams Collaboration diagrams

Statechart diagrams Activity diagrams Different Types of Diagrams in UML (cont.) Modeling and Design Example: UNIVERSITY Database Rational Rose: A UML-Based Design Tool Rational Rose for database design Modeling tool used in the industry to develop information

systems Rational Rose data modeler Visual modeling tool for designing databases Provides capability to: Forward engineer a database Reverse engineer an existing implemented database into conceptual design Data Modeling Using Rational Rose Data Modeler Reverse engineering Allows the user to create a conceptual data model based on

an existing database schema specified in a DDL file Forward engineering and DDL generation Create a data model directly from scratch in Rose Generate DDL for a specific DBMS Data Modeling Using Rational Rose Data Modeler (cont.) Conceptual design in UML notation Build ER diagrams using class diagrams in Rational Rose Identifying relationships Object in a child class cannot exist without a corresponding parent object

Non-identifying relationships Specify a regular association (relationship) between two independent classes Data Modeling Using Rational Rose Data Modeler (cont.) Converting logical data model to object model and vice versa Logical data model can be converted to an object model Allows a deep understanding of relationships between

conceptual and implementation models Data Modeling Using Rational Rose Data Modeler (cont.) Synchronization between the conceptual design and the actual database Extensive domain support Create a standard set of user-defined data types Easy communication among design teams Application developer can access both the object and data models

Automated Database Design Tools Many CASE (computer-aided software engineering) tools for database design Combination of the following facilities Diagramming Model mapping Design normalization Automated Database Design Tools (cont.) Characteristics that a good design tool should possess: Easy-to-use interface

Analytical components Heuristic components Trade-off analysis Display of design results Design verification Automated Database Design Tools (cont.) Variety of products available Some use expert system technology Additional Material: Part 1 Logical DB Design

44 Chapter 7 Logical Database Design Fundamentals of Database Management Systems, 2nd ed. by Mark L. Gillenson, Ph.D. University of Memphis John Wiley & Sons, Inc.

Chapter Objectives Describe the concept of logical database design. Design relational databases by converting entity-relationship diagrams into relational tables.

Describe the data normalization process. 7-46 Chapter Objectives Perform the data normalization process. Test tables for irregularities using the data normalization process.

7-47 Logical Database Design The process of deciding how to arrange the attributes of the entities in the business environment into database structures, such as the tables of a relational database.

The goal is to create well-structured tables that properly reflect the company's business environment. 7-48 Logical Design of Relational Database Systems (1) The conversion of E-R diagrams into relational tables.

(2) The data normalization technique. (3) The use of the data normalization technique to test the tables resulting from the E-R diagram conversions. 7-49 Converting E-R Diagrams into Relational Tables

Each entity will convert to a table. Each many-to-many relationship or associative entity will convert to a table. During the conversion, certain rules must be followed to ensure that foreign keys

appear in their proper places in the tables. 7-50 Converting a Simple Entity The table simply contains the attributes that were specified in the entity box. Salesperson Number is underlined to indicate that it is

the unique identifier of the entity and the primary key of the table. 7-51 Converting Entities in Binary Relationships: One-to-One There are three options for designing tables to represent this data. 7-52

One-to-One: Option #1 The two entities are combined into one relational table. 7-53 One-to-One: Option #2

Separate tables for the SALESPERSON and OFFICE entities, with Office Number as a foreign key in the SALESPERSON table. 7-54 One-to-One: Option #3 Separate tables for the

SALESPERSON and OFFICE entities, with Salesperson Number as a foreign key in the OFFICE table. 7-55 Converting Entities in Binary Relationships: One-to-Many

The unique identifier of the entity on the one side of the one-to-many relationship is placed as a foreign key in the table representing the entity on the many side. So, the Salesperson Number attribute is placed in the CUSTOMER table as a foreign key. 7-56 Converting Entities in Binary Relationships: One-to-Many

7-57 Converting Entities in Binary Relationships: Many-to-Many E-R diagram with the many-to-many binary relationship and the equivalent diagram using an associative entity. 7-58 Converting Entities in Binary

Relationships: Many-to-Many An E-R diagram with two entities in a many-to-many relationship converts to three relational tables. Each of the two entities converts to a table with its own attributes but with no foreign keys (regarding this relationship).

In addition, there must be a third many-to-many table for the many-to-many relationship. 7-59 Converting Entities in Binary Relationships: Many-to-Many The primary key of SALE is the combination of the unique identifiers of the two entities in the many-to-many relationship. Additional attributes are

the intersection data. 7-60 Converting Entities in Unary Relationships: One-to-One With only one entity type involved and with a one-to-one relationship, the conversion requires only one table.
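The binary many-to-many conversion described above (two entity tables plus a third table with a composite primary key and intersection data) can be sketched in SQLite. The column names and sample values are illustrative, borrowing salesperson 137/Baker and product 19440/Hammer from the examples later in the chapter:

```python
import sqlite3

# Sketch of the three-table many-to-many conversion; table and column
# names are illustrative, not taken verbatim from the text.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SALESPERSON (
    SPNUM  CHAR(3) PRIMARY KEY,
    SPNAME CHAR(12));
CREATE TABLE PRODUCT (
    PRODNUM  CHAR(5) PRIMARY KEY,
    PRODNAME CHAR(12));
-- The many-to-many table: composite primary key plus intersection data.
CREATE TABLE SALE (
    SPNUM    CHAR(3),
    PRODNUM  CHAR(5),
    QUANTITY INTEGER,           -- intersection data
    PRIMARY KEY (SPNUM, PRODNUM),
    FOREIGN KEY (SPNUM)   REFERENCES SALESPERSON(SPNUM),
    FOREIGN KEY (PRODNUM) REFERENCES PRODUCT(PRODNUM));
""")
conn.execute("INSERT INTO SALESPERSON VALUES ('137', 'Baker')")
conn.execute("INSERT INTO PRODUCT VALUES ('19440', 'Hammer')")
conn.execute("INSERT INTO SALE VALUES ('137', '19440', 473)")
row = conn.execute("""
    SELECT SPNAME, PRODNAME, QUANTITY
    FROM SALESPERSON NATURAL JOIN SALE NATURAL JOIN PRODUCT""").fetchone()
print(row)  # ('Baker', 'Hammer', 473)
```

Note that neither entity table carries a foreign key for this relationship; both foreign keys live in SALE, exactly as the conversion rule requires.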

7-61 Converting Entities in Unary Relationships: One-to-Many Very similar to the one-to-one unary case. 7-62 Converting Entities in Unary Relationships: Many-to-Many

This relationship requires two tables in the conversion. The PRODUCT table has no foreign keys. 7-63 Converting Entities in Unary Relationships: Many-to-Many

A second table is created since, in the conversion of a many-to-many relationship of any degree (unary, binary, or ternary), the number of tables will be equal to the number of entity types (one, two, or three, respectively) plus one more table for the many-to-many relationship. 7-64 Converting Entities in Ternary Relationships

The primary key of the SALE table is the combination of the unique identifiers of the three entities involved, plus the Date attribute. 7-65 Designing the General Hardware Company Database 7-66

Designing the Good Reading Bookstores Database 7-67 Designing the World Music Association Database 7-68 Designing the Lucky Rent-A-Car Database

7-69 The Data Normalization Process A methodology for organizing attributes into tables so that redundancy among the nonkey attributes is eliminated.

The output of the data normalization process is a properly structured relational database. 7-70 The Data Normalization Technique Input:

all the attributes that must be incorporated into the database and a list of all the defining associations between the attributes (i.e., the functional dependencies). A defining association is a means of expressing that the value of one particular attribute is associated with a single, specific value of another attribute: if we know the value of one of these attributes, then the other attribute must have one particular value. 7-71

Functional Dependence Salesperson Number → Salesperson Name Salesperson Number is the determinant. The value of Salesperson Number determines

the value of Salesperson Name. Salesperson Name is functionally dependent on Salesperson Number. 7-72 General Hardware Environment: SALESPERSON and PRODUCT 7-73
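A functional dependency such as Salesperson Number → Salesperson Name can be checked mechanically against sample data: the dependency holds if no determinant value is ever paired with two different dependent values. This is a hedged sketch; the function name and the records are invented for illustration:

```python
# A -> B holds if each value of A is associated with exactly one value of B.
def functionally_dependent(rows, a, b):
    seen = {}
    for row in rows:
        if seen.setdefault(row[a], row[b]) != row[b]:
            return False          # same determinant, two different values
    return True

salespersons = [
    {"spnum": "137", "spname": "Baker"},
    {"spnum": "186", "spname": "Adams"},
    {"spnum": "137", "spname": "Baker"},   # a repeated pair is fine
]
ok = functionally_dependent(salespersons, "spnum", "spname")
print(ok)  # True: spnum determines spname

bad = salespersons + [{"spnum": "137", "spname": "Tyler"}]
print(functionally_dependent(bad, "spnum", "spname"))  # False: 137 -> two names
```

In practice the dependencies are stated from knowledge of the business environment rather than inferred from data, but a check like this is useful for spotting records that violate a declared dependency.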

Steps in the Data Normalization Process First Normal Form Second Normal Form Third Normal Form

7-74 The Data Normalization Process Once the attributes are arranged in third normal form, the group of tables that they comprise is a well-structured relational database with no data redundancy. A group of tables is said to be in a particular normal form

if every table in the group is in that normal form. The data normalization process is progressive. For example, if a group of tables is in second normal form, it is also in first normal form. 7-75 General Hardware Company:

Unnormalized Data Records contain multivalued attributes. 7-76 General Hardware Company: First Normal Form

The attributes under consideration have been listed in one table, and a primary key has been established. The number of records has been increased so that every attribute of every record has just one value. The multivalued attributes have been eliminated. 7-77

General Hardware Company: First Normal Form 7-78 General Hardware Company: First Normal Form First normal form is merely a starting point in the normalization process.

First normal form contains a great deal of data redundancy. Three records involve salesperson 137, so there are three places in which his name is listed as Baker, his commission percentage is listed as 10, and so on. Two records involve product 19440, and this product's name is listed twice as Hammer and its unit price is

listed twice as 17.50. 7-79 General Hardware Company: Second Normal Form No Partial Functional Dependencies Every nonkey attribute must be fully functionally dependent on the entire key of that table.

A nonkey attribute cannot depend on only part of the key. 7-80 General Hardware Company: Second Normal Form

In SALESPERSON, Salesperson Number is the sole primary key attribute. Every nonkey attribute of the table is fully defined just by Salesperson Number. Similar logic for PRODUCT and QUANTITY tables. 7-81 General Hardware Company: Second Normal Form 7-82
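The second-normal-form test above can also be sketched in code: with a composite key, a partial functional dependency exists when a nonkey attribute is already determined by a proper subset of the key. The function and attribute names below are illustrative, reusing the salesperson/product data from the running example:

```python
from itertools import combinations

# Does the attribute depend on just these left-hand-side columns?
def determined_by(rows, lhs, attr):
    seen = {}
    for r in rows:
        key = tuple(r[c] for c in lhs)
        if seen.setdefault(key, r[attr]) != r[attr]:
            return False
    return True

# Find nonkey attributes determined by a proper subset of the key.
def partial_dependencies(rows, key, nonkey):
    found = []
    for n in range(1, len(key)):
        for subset in combinations(key, n):
            for attr in nonkey:
                if determined_by(rows, subset, attr):
                    found.append((subset, attr))
    return found

# First-normal-form style rows: everything in one table, key (spnum, prodnum).
rows = [
    {"spnum": "137", "prodnum": "19440", "spname": "Baker", "qty": 473},
    {"spnum": "137", "prodnum": "24013", "spname": "Baker", "qty": 170},
    {"spnum": "186", "prodnum": "19440", "spname": "Adams", "qty": 1745},
]
print(partial_dependencies(rows, ("spnum", "prodnum"), ("spname",)))
# [(('spnum',), 'spname')] -> spname depends on spnum alone, so not 2NF
```

Finding such a dependency is the signal to split the attribute out into its own table (here, SALESPERSON keyed by spnum), which is exactly the second-normal-form decomposition.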

General Hardware Company: Third Normal Form Does not allow transitive dependencies in which one nonkey attribute is functionally dependent on another. Nonkey attributes are not allowed to define other nonkey attributes.

7-83 General Hardware Company: Third Normal Form 7-84 General Hardware Company: Third Normal Form 7-85

General Hardware Company: Third Normal Form Important points about the third normal form structure are: It is completely free of data redundancy. All foreign keys appear where needed to logically tie

together related tables. It is the same structure that would have been derived from a properly drawn entity-relationship diagram of the same business environment. 7-86 Candidate Keys as Determinants

There is one exception to the rule that in third normal form, nonkey attributes are not allowed to define other nonkey attributes. The rule does not hold if the defining nonkey attribute is a candidate key of the table. Candidate keys in a relation may define other nonkey attributes without violating third normal

form. 7-87 General Hardware Company: Functional Dependencies 7-88 General Hardware Company: First Normal Form 7-89

Good Reading Bookstores: Functional Dependencies 7-90 World Music Association: Functional Dependencies 7-91 Lucky Rent-A-Car: Functional Dependencies

7-92 Data Normalization Check The basic idea in checking the structural worthiness of relational tables created through E-R diagram conversion is to test them against the data normalization rules: Check to see if there are any partial functional dependencies.

Check to see if there are any transitive dependencies. 7-93 Creating a Table with SQL CREATE TABLE SALESPERSON (SPNUM CHAR(3) PRIMARY KEY, SPNAME CHAR(12),

COMMPERCT DECIMAL(3,0), YEARHIRE CHAR(4), OFFNUM CHAR(3)); Dropping a Table with SQL DROP TABLE SALESPERSON; 7-94 Creating a View with SQL CREATE VIEW EMPLOYEE AS SELECT SPNUM, SPNAME, YEARHIRE

FROM SALESPERSON; Dropping a View with SQL DROP VIEW EMPLOYEE; 7-95 The SQL Update, Insert, and Delete Commands UPDATE SALESPERSON SET COMMPERCT = 12 WHERE SPNUM = '204'; INSERT INTO SALESPERSON VALUES

('489', 'Quinlan', 15, '2011', '59'); DELETE FROM SALESPERSON WHERE SPNUM = '186'; 7-96 Additional Material: Part 2 Physical DB Design 97 Chapter 8 Physical Database Design

Fundamentals of Database Management Systems, 2nd ed. by Mark L. Gillenson, Ph.D. University of Memphis John Wiley & Sons, Inc. Chapter Objectives Describe the concept of physical database design.

Describe how a disk device works. Describe the principles of file organizations and access methods. 8-99 Chapter Objectives

Describe how simple linear indexes and B+-tree indexes work. Describe how hashed files work. List and describe the inputs to the physical database design process. 8-100

Chapter Objectives Perform physical database design and improve database performance using a variety of techniques ranging from adding indexes to denormalization. 8-101 Database Performance Factors Affecting Application and Database Performance

Application Factors Need for Joins Need to Calculate Totals Data Factors Large Data Volumes Database Structure Factors Lack of Direct Access Clumsy Primary Keys Data Storage Factors

Related Data Dispersed on Disk Business Environment Factors Too Many Data Access Operations Overly Liberal Data Access 8-102 Physical Database Design The process of modifying a database structure to improve the performance of

the run-time environment. We are going to modify the third normal form tables produced by the logical database design techniques to make the applications that will use them run faster. 8-103 Disk Storage

Primary (Main) Memory - where computers execute programs and process data Very fast Permits direct access Has several drawbacks relatively expensive not transportable is volatile

8-104 Disk Storage Secondary Memory - stores the vast volume of data and the programs that process them Data is loaded from secondary memory into primary memory when required for

processing. 8-105 Primary and Secondary Memory When a person needs some particular information that's not in her brain at the moment, she finds a book in the library that has the information and, by reading it, transfers the information from the book into her brain.

8-106 How Disk Storage Works Disks come in a variety of types and capacities Multi-platter, aluminum or ceramic disk units

Removable, external hard drives. Provide a direct access capability to the data. 8-107 How Disk Storage Works Several disk platters are stacked together, and mounted on a

central spindle, with some space in between them. Referred to as the disk. 8-108 How Disk Storage Works

The platters have a metallic coating that can be magnetized, and this is how the data is stored, bit-bybit. 8-109 Access Arm Mechanism The basic disk drive has one access arm mechanism with arms that

can reach in between the disks. At the end of each arm are two read/write heads. The platters spin, all together as a single unit, on the central spindle, at a high velocity. 8-110 Tracks

Concentric circles on which data is stored, serially by bit. Numbered track 0, track 1, track 2, and so on. 8-111 Cylinders

A collection of tracks, one from each recording surface, one directly above the other. Number of cylinders in a disk = number of tracks on any one of its recording surfaces. 8-112 Cylinders

The collection of each surface's track 76, one above the other, seems to take the shape of a cylinder. This collection of tracks is called cylinder 76. 8-113 Cylinders

Once we have established a cylinder, it is also necessary to number the tracks within the cylinder. Cylinder 76's tracks. 8-114 Steps in Finding and Transferring Data

Seek Time - The time it takes to move the access arm mechanism to the correct cylinder from whatever cylinder it's currently positioned over. Head Switching - Selecting the read/write head to access the required track of the cylinder.

Rotational Delay - Waiting for the desired data on the track to arrive under the read/write head as the disk is spinning. 8-115 Steps in Finding and Transferring Data Transfer Time - The time to actually move the data from the disk to primary memory once the previous 3 steps have been completed.
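The four steps add up to the total time for one random access. As a back-of-the-envelope sketch (all figures below are assumed example values, not numbers from the text):

```python
# Rough model of one random disk access: seek + head switch +
# rotational delay + transfer. Every number here is an assumption.
avg_seek_ms = 8.0                               # move arm to the cylinder
head_switch_ms = 0.0                            # electronic, effectively instant
rpm = 7200
avg_rotational_delay_ms = (60_000 / rpm) / 2    # half a rotation on average

transfer_rate_mb_s = 150                        # sustained transfer rate
block_kb = 4                                    # one block of data
transfer_ms = block_kb / 1024 / transfer_rate_mb_s * 1000

total_ms = avg_seek_ms + head_switch_ms + avg_rotational_delay_ms + transfer_ms
print(round(total_ms, 2))  # 12.19
```

The sketch makes the key point visible: the mechanical steps (seek and rotational delay) dominate, while moving the data itself takes a tiny fraction of the total, which is why physical database design works so hard to reduce the number of random accesses.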

8-116 File Organizations and Access Methods File Organization - the way that we store the data for subsequent retrieval. Access Method - The way that we retrieve

the data, based on it being stored in a particular file organization. 8-117 Achieving Direct Access An index. A hashing method - a way of storing and retrieving

records. If we know the value of a field of a record that we want to retrieve, the index or hashing method will pinpoint its location in the file and instruct the hardware mechanisms of the disk device where to find it. 8-118 The Index

The principle is the same as that governing the index in the back of a book. 8-119 The Index The items of interest are copied over into the index, but the original text is not disturbed in any

way. The items in the index are sorted. Each item in the index is associated with a pointer. 8-120

Simple Linear Index Index is ordered by Salesperson Name field. The first index record shows Adams 3 because the record of the Salesperson file with salesperson name Adams is at relative record location 3 in the Salesperson file.

8-121 Simple Linear Index An index built over the City field. An index can be built over a field with nonunique values. 8-122

Simple Linear Index An index built over the Salesperson Number field. Indexed sequential file - the file is stored on the disk in order based on a set of field values (salesperson numbers), and an index is built over that same field. 8-123

Simple Linear Index 8-124 Simple Linear Index A new index record, French 8, would have to be inserted between the index records for Dickens and Green to maintain the crucial alphabetic sequence.

Would have to move all of the index records from Green to Taylor down one record position. Not a good solution for indexing the records of a file. 8-125 B+-tree Index
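The insertion problem just described is easy to see in code: a simple linear index is a sorted list of (value, record location) pairs, and inserting one entry shifts every later entry down. A minimal sketch, with invented record locations:

```python
import bisect

# A simple linear index: (name, record_location) pairs kept sorted by name.
index = [("Adams", 3), ("Baker", 1), ("Dickens", 7),
         ("Green", 4), ("Taylor", 2)]

entry = ("French", 8)
pos = bisect.bisect(index, entry)   # binary search for the correct slot
index.insert(pos, entry)            # O(n): shifts Green and Taylor down one

print(index[3])  # ('French', 8) now sits between Dickens and Green
```

Lookup via binary search is fast, but on a large file every insertion pays this shifting cost on disk, which is the weakness that motivates the B+-tree index described next.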

The most common data indexing system in use today. Unlike simple linear indexes, B+-trees are designed to comfortably handle the insertion of new records into the file and to handle record deletion. 8-126 B+-tree Index

An arrangement of special index records in a tree. A single index record, the root, at the top, with branches leading down from it to other nodes.

8-127 B+-tree Index The lowest level nodes are called leaves. Think of it as a family tree.

8-128 B+-tree Index Each key value in the tree is associated with a pointer that is the address of either a lower level index record or a cylinder containing the salesperson records.

The index records contain salesperson number key values copied from certain of the salesperson records. 8-129 B+-tree Index 8-130 B+-tree Index Each index record, at every level of the

tree, contains space for the same number of key value/pointer pairs. Each index record is at least half full. The tree index is small and can be kept in main memory indefinitely for a frequently accessed file. 8-131

B+-tree Index This is an indexed-sequential file, because the file is stored in sequence by the salesperson numbers and the index is built over the Salesperson Number field. B+-tree indexes can also be used to index nonkey, nonunique fields.

In general, the storage unit for groups of records can be the cylinder or any other physical device subunit. 8-132 B+-tree Index Say that a new record with salesperson

number 365 must be inserted. Suppose that cylinder 5 is completely full. 8-133 B+-tree Index The collection of records

on the entire cylinder has to be split between cylinder 5 and an empty reserve cylinder, say cylinder 11. There is no key value/pointer pair representing cylinder 11 in the tree index. 8-134

B+-tree Index The index record, into which the key for the new cylinder should go, which happens to be full, is split into two index records. The now five key values and their associated pointers are divided between them.
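The split just described can be sketched as follows. Assume (for illustration only) that an index record holds at most four key/pointer pairs; inserting a fifth key forces a split, with the pairs divided between the old record and a new one. The key 365 and cylinders 5 and 11 come from the running example; the other values are invented:

```python
ORDER = 4  # assumed capacity of one index record (key/pointer pairs)

def insert_and_split(node, key, ptr):
    """Insert a pair into a node; if it overflows, split it in two."""
    node = sorted(node + [(key, ptr)])
    if len(node) <= ORDER:
        return node, None                # still fits, no split needed
    mid = len(node) // 2
    return node[:mid], node[mid:]        # divide the five pairs between two records

full = [(125, "cyl2"), (260, "cyl5"), (370, "cyl8"), (521, "cyl9")]
left, right = insert_and_split(full, 365, "cyl11")
print(left)   # [(125, 'cyl2'), (260, 'cyl5')]
print(right)  # [(365, 'cyl11'), (370, 'cyl8'), (521, 'cyl9')]
```

In a full B+-tree the split also pushes one key up into the parent index record, and a split of the root grows the tree by one level; this sketch shows only the division of the overflowing record itself.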

8-135 Indexes Can be built over any field (unique or nonunique) of a file. Can also be built on a combination of fields.

In addition to its direct access capability, an index can be used to retrieve the records of a file in logical sequence based on the indexed field. 8-136 Indexes Many separate indexes into a file can exist simultaneously. The indexes are quite independent of each other.

When a new record is inserted into a file, an existing record is deleted, or an indexed field is updated, all of the affected indexes must be updated. 8-137 Hashed Files The number of records in a file is

estimated, and enough space is reserved on a disk to hold them. Additional space is reserved for additional overflow records. 8-138 Hashed Files

To determine where to insert a particular record of the file, the record's key value is converted by a hashing routine into one of the reserved record locations on the disk. To find and retrieve the record, the same hashing routine is applied to the key value during the search. 8-139 Division-Remainder Method

Divide the key value of the record that we want to insert or retrieve by the number of record locations that we have reserved. Perform the division, discard the quotient, and use the remainder to tell us where to locate the record. 8-140

A Hashed File Storage area for 50 records plus overflow records. Collision - more than one key value hashes to the same location.

The two key values are called synonyms. 8-141 Hashed Files Hashing disallows any sequential storage based on a set of field values.

A file can only be hashed once, based on the values of a single field or a single combination of fields. If a file is hashed on one field, direct access based on another field can be achieved by building an index on the other field. 8-142

Hashed Files Many hashing routines have been developed. The goal is to minimize the number of collisions, which can slow down retrieval performance. In practice, several hashing routines are tested

on a file to determine the best fit. Even a relatively simple procedure like the division-remainder method can be fine-tuned. 8-143 Hashed Files A hashed file must occasionally be reorganized after so many collisions have

occurred that performance is degraded to an unacceptable level. A new storage area with a new number of storage locations is chosen, and the process starts all over again. 8-144 Inputs to Physical Database Design

Physical database design starts where logical database design ends. The well-structured relational tables produced by the conversion from ERDs or by the data normalization process form the starting point for physical database design. 8-145

More Inputs to Physical Database Design Inputs Into the Physical Database Design Process The Tables Produced by the Logical Database Design Process Business Environment Requirements Data Characteristics

Application Data Requirements Application Priorities Operational Requirements

Data Volume Assessment Data Volatility Application Characteristics Response Time Requirements Throughput Requirements

Data Security Concerns Backup and Recovery Concerns Hardware and Software Characteristics DBMS Characteristics Hardware Characteristics 8-146

The Tables Produced by the Logical Database Design Process Form the starting point of the physical database design process. Reflect all of the data in the business environment.

Are likely to be unacceptable from a performance point of view and must be modified in physical database design. 8-147 Business Environment Requirements Response Time Requirements

Throughput Requirements 8-148 Business Environment Requirements: Response Time Requirements Response time is the delay from the time that the Enter Key is pressed to execute a

query until the result appears on the screen. What are the response time requirements? 8-149 Business Environment Requirements: Throughput Requirements

Throughput is the measure of how many queries from simultaneous users must be satisfied in a given period of time by the application set and the database that supports it. 8-150 Data Characteristics Data Volume Assessment

How much data will be in the database? Roughly how many records is each table expected to have? Data Volatility Refers to how often stored data is updated.

8-151 Application Characteristics What is the nature of the applications that will use the data? Which applications are the most important to the company?

Which data will be accessed by each application? 8-152 Application Characteristics Application Data Requirements

Application Priorities 8-153 Application Characteristics: Data Requirements Which database tables does each application require for its processing?

Do the applications require that tables be joined? How many applications and which specific applications will share particular database tables? Are the applications that use a particular table run frequently or infrequently?

8-154 Application Characteristics: Priorities When a modification to a table proposed during physical design that is designed to help the performance of one application hinders the performance of another application, which of the two applications is the more critical to the company?

Operational Requirements: Data Security, Backup and Recovery
Data Security

Protecting data from theft or malicious destruction, and making sure that sensitive data is accessible only to those employees of the company who have a need to know.
Backup and Recovery
Ranges from being able to recover a table or a database that has been corrupted or lost due to hardware or software failure, to recovering an entire information system after a natural disaster.
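The backup-and-recovery requirement can be sketched with SQLite's online backup facility; the CUSTOMER table and its contents here are hypothetical stand-ins, and the second in-memory connection stands in for a backup file on separate media:

```python
import sqlite3

# Hypothetical CUSTOMER table in the "live" database.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE customer (custnum INTEGER PRIMARY KEY, custname TEXT)")
src.execute("INSERT INTO customer VALUES (121, 'Acme Hardware')")

# Copy every page of the source database to the backup copy.
backup = sqlite3.connect(":memory:")
src.backup(backup)

# Simulate losing the original, then recover from the backup copy.
src.close()
print(backup.execute("SELECT custname FROM customer WHERE custnum = 121").fetchone())
```

A real deployment would back up to a file on separate media on a schedule; the principle of restoring lost or corrupted data from the copy is the same.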

Hardware and Software Characteristics
DBMS Characteristics
For example, the exact nature of indexes, attribute data type options, and SQL query features, which must be known and taken into account during physical database design.
Hardware Characteristics
Processor speeds and disk data transfer rates.
Physical Database Design Techniques
Physical Design Categories and Techniques That DO NOT Change the Logical Design

Adding External Features
Adding Indexes
Adding Views
Reorganizing Stored Data
Clustering Files
Splitting a Table into Multiple Tables
Horizontal Partitioning
Vertical Partitioning
Splitting Off Large Text Attributes
Physical Database Design Techniques
Physical Design Categories and Techniques That DO Change the Logical Design
Changing Attributes in a Table
Substituting Foreign Keys
Adding Attributes to a Table
Creating New Primary Keys
Storing Derived Data
Combining Tables
Combining Tables in One-to-One Relationships
Alternatives for Repeating Groups
Denormalization
Adding New Tables

Duplicating Tables
Adding Subset Tables
Adding External Features
Doesn't change the logical design at all.

There is no introduction of data redundancy.
Adding External Features
Adding Indexes
Adding Views

Adding External Features: Adding Indexes
Which attributes or combinations of attributes should you consider indexing in order to have the greatest positive impact on the application environment?

Attributes that are likely to be prominent in direct searches:
Primary keys
Search attributes
Attributes that are likely to be major players in operations such as joins, SQL SELECT ORDER BY clauses, and SQL SELECT GROUP BY clauses.
Adding External Features: Adding Indexes
What potential problems can be caused by building too many indexes? Indexes are wonderful for direct searches, but when the data in a table is updated, the system must take the time to update the table's indexes, too.
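As a minimal sketch (hypothetical SALESPERSON table, SQLite as the DBMS), adding an index turns a scan of a search attribute into a direct lookup, at the cost of extra maintenance work on every update:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE salesperson (
    spnum INTEGER PRIMARY KEY,   -- primary keys are indexed automatically
    spname TEXT,
    commission REAL)""")
conn.execute("INSERT INTO salesperson VALUES (137, 'Baker', 0.1)")

# Index a frequently searched attribute; also helps joins, ORDER BY, GROUP BY.
conn.execute("CREATE INDEX idx_spname ON salesperson (spname)")

# The query planner now uses the index instead of scanning the table.
plan = conn.execute("EXPLAIN QUERY PLAN "
                    "SELECT * FROM salesperson WHERE spname = 'Baker'").fetchone()
print(plan)
```

The flip side, as noted above: every additional index must be maintained on each INSERT, UPDATE, and DELETE, so indexing everything slows updates down.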

General Hardware Company with Some Indexes
Adding External Features: Adding Views
Doesn't change the logical design. No data is physically duplicated. An important device in protecting the security and privacy of data.
Reorganizing Stored Data

Doesn't change the logical design. No data is physically duplicated.
Clustering Files
Houses related records together on a disk.

Reorganizing Stored Data: Clustering Files
The salesperson record for salesperson 137, Baker, is followed on the disk by the customer records for customers 0121, 0933, 1047, and 1826.

Splitting a Table Into Multiple Tables
Horizontal Partitioning
Vertical Partitioning
Splitting Off Large Text Attributes

Splitting a Table Into Multiple Tables: Horizontal Partitioning
The rows of a table are divided into groups, and the groups are stored separately on different areas of a disk or on different disks.

Useful in managing the different groups of records separately for security or backup and recovery purposes, and can improve data retrieval performance. Disadvantage: retrieval of records from more than one partition can be more complex and slower.
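A sketch of horizontal partitioning under assumed names (east/west customer partitions; in a real system each partition would live on a different disk), showing that cross-partition retrieval needs a UNION:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Two partitions holding different groups of rows of one logical table.
for part in ("customer_east", "customer_west"):
    conn.execute(f"CREATE TABLE {part} (custnum INTEGER PRIMARY KEY, custname TEXT)")
conn.execute("INSERT INTO customer_east VALUES (121, 'East Hardware')")
conn.execute("INSERT INTO customer_west VALUES (1826, 'West Hardware')")

# Queries confined to one partition are cheap; spanning both partitions
# requires a UNION, which is the slower, more complex case.
rows = conn.execute("SELECT * FROM customer_east "
                    "UNION ALL SELECT * FROM customer_west").fetchall()
print(sorted(rows))
```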

Splitting a Table Into Multiple Tables: Vertical Partitioning
The separate groups, each made up of different columns of a table, are created because different users or applications require different columns. Each partition must have a copy of the primary key.
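Sketching vertical partitioning with a hypothetical SALESPERSON table split into an HR partition and a sales partition, each carrying a copy of the primary key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Each vertical partition repeats the primary key (spnum) so the full
# record can be reassembled by a join when needed.
conn.execute("CREATE TABLE sp_hr (spnum INTEGER PRIMARY KEY, salary REAL)")
conn.execute("CREATE TABLE sp_sales (spnum INTEGER PRIMARY KEY, commission REAL)")
conn.execute("INSERT INTO sp_hr VALUES (137, 50000.0)")
conn.execute("INSERT INTO sp_sales VALUES (137, 0.1)")

# HR applications touch only sp_hr; reassembling the full record costs a join.
row = conn.execute("SELECT h.spnum, h.salary, s.commission "
                   "FROM sp_hr h JOIN sp_sales s ON h.spnum = s.spnum").fetchone()
print(row)  # (137, 50000.0, 0.1)
```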

The Salesperson table
Splitting a Table Into Multiple Tables: Splitting Off Large Text Attributes
A variation on vertical partitioning involves splitting off large text attributes into separate partitions.

Each partition must have a copy of the primary key.
Changing Attributes in a Table
Changes the logical design.

Substituting a Foreign Key
Substitute an alternate key (Salesperson Name, assuming it is a unique attribute) as a foreign key. Saves on the number of performance-slowing joins.
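A sketch of the foreign-key substitution under a hypothetical schema: CUSTOMER carries the unique Salesperson Name directly instead of the salesperson number, so queries that only need the name skip the join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# spname is assumed unique, so it can serve as the foreign key.
conn.execute("CREATE TABLE salesperson (spnum INTEGER PRIMARY KEY, spname TEXT UNIQUE)")
conn.execute("CREATE TABLE customer (custnum INTEGER PRIMARY KEY, "
             "spname TEXT REFERENCES salesperson(spname))")
conn.execute("INSERT INTO salesperson VALUES (137, 'Baker')")
conn.execute("INSERT INTO customer VALUES (121, 'Baker')")

# The salesperson's name now comes straight from CUSTOMER, join-free.
print(conn.execute("SELECT spname FROM customer WHERE custnum = 121").fetchone())
```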

Adding Attributes to a Table
Creating New Primary Keys
Storing Derived Data
Adding Attributes to a Table: Creating New Primary Keys
Changes the logical design. In a table with no single-attribute primary key, indexing a multi-attribute key would likely be clumsy and slow.

Create a new serial number attribute as the primary key for the table. The current two-attribute primary key of the CUSTOMER EMPLOYEE table can be replaced by one new attribute.
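A sketch of the new serial-number key (hypothetical CUSTOMER EMPLOYEE columns; SQLite's INTEGER PRIMARY KEY assigns the serial values automatically):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE customer_employee (
    empid   INTEGER PRIMARY KEY,  -- new single-attribute serial key
    custnum INTEGER,              -- old composite key, kept as plain data
    empnum  INTEGER,
    empname TEXT)""")
conn.execute("INSERT INTO customer_employee (custnum, empnum, empname) "
             "VALUES (121, 30441, 'Smith')")

# The DBMS assigned the first serial value; indexing empid is simple and fast.
print(conn.execute("SELECT empid FROM customer_employee").fetchone())  # (1,)
```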

Adding Attributes to a Table: Storing Derived Data
Calculate answers to certain queries once and store them in the database.
Combining Tables

If two tables are combined into one, there will be situations in which the presence of the new single table allows us to avoid joins that would have been necessary when there were two tables.
Combination of Tables in One-to-One Relationships
Alternatives for Repeating Groups

Denormalization
Combining Tables: Combination of Tables in One-to-One Relationships

Advantage: if we ever have to retrieve detailed data about a salesperson and his or her office in one query, it can now be done without a join.
Disadvantages: the tables are no longer logically as well as physically independent; retrievals of salesperson data alone or of office data alone could be slower than before; and storage of data about unoccupied offices is problematic and may require a reevaluation of which field should be the primary key.
Combining Tables: Alternatives for Repeating Groups

If repeating groups are well controlled, they can be folded into one table.
Combining Tables: Denormalization
It may be necessary to take pairs of related, third normal form tables and combine them, introducing possibly massive data redundancy. Unsatisfactory response times and throughput may mandate eliminating runtime joins.
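A denormalization sketch under assumed names: third normal form SALESPERSON and CUSTOMER tables are collapsed into one, so the salesperson data repeats for each customer but customer-plus-salesperson detail needs no runtime join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Denormalized table: salesperson data appended to each customer row.
conn.execute("CREATE TABLE customer_denorm ("
             "custnum INTEGER PRIMARY KEY, custname TEXT, "
             "spnum INTEGER, spname TEXT)")   # redundant salesperson data
conn.executemany("INSERT INTO customer_denorm VALUES (?, ?, ?, ?)",
                 [(121, 'East Hardware', 137, 'Baker'),
                  (1826, 'West Hardware', 137, 'Baker')])  # 'Baker' repeats

# Customer-plus-salesperson detail now requires no join at all.
rows = conn.execute("SELECT custname, spname FROM customer_denorm "
                    "WHERE spnum = 137 ORDER BY custnum").fetchall()
print(rows)  # [('East Hardware', 'Baker'), ('West Hardware', 'Baker')]
```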

Since a salesperson can have several customers, a particular salesperson's data will be repeated for each customer he or she has.
Adding New Tables

Duplicating Tables
Duplicate tables and have different applications access the duplicates.
Adding Subset Tables

Duplicate only those portions of a table that are most heavily accessed, and assign the subsets to different applications to ease the performance crunch.
Good Reading Bookstores: Problem

Assume that Good Reading's headquarters frequently needs to quickly find the details of a book, based on either its book number or its title, together with details about its publisher. If a join takes too long, resulting in unacceptable response times, throughput, or both, what are the possibilities in terms of physical design that can improve the situation?

Good Reading Bookstores: Solutions
The Book Number attribute and the Book Title attribute in the BOOK table can each have an index built on them to provide direct access, since the problem says that books are going to be searched for based on one of these two attributes.

The two join attributes, the Publisher Name attribute of the PUBLISHER table and the Publisher Name attribute of the BOOK table, can each have an index built on them to help speed up the join operation.
If the DBMS permits it, the two tables can be clustered, with the book records associated with a particular publisher stored near that publisher's record on the disk. The two tables can be denormalized, with the appropriate publisher data appended to each book record (and the PUBLISHER table eliminated).
Next Lecture

Object & Object-Relational DB
References
Ramez Elmasri, Shamkant Navathe; Fundamentals of Database Systems, 6th Ed., Pearson, 2014
Mark L. Gillenson; Fundamentals of Database Management Systems, 2nd Ed., John Wiley, 2012
Universität Hamburg, Fachbereich Informatik, Einführung in Datenbanksysteme, Lecture Notes, 1999
