Database Systems I: Data Warehousing

CMPT 354, Simon Fraser University, Fall 2005, Martin Ester


Introduction
Increasingly, organizations are analyzing current and historical data to identify useful patterns and to support business strategies (decision support). The emphasis is on complex, interactive, exploratory analysis of very large datasets created by integrating data from across all parts of an enterprise; the data is fairly static. Contrast such On-Line Analytic Processing (OLAP) with traditional On-Line Transaction Processing (OLTP): mostly long queries instead of short update transactions.


DBS for Decision Support
Data Warehouse: Consolidates data from many sources in one large repository. Requires loading and periodic synchronization of replicas, as well as semantic integration.

OLAP: Complex SQL queries and views. Queries based on “multidimensional” view of data and spreadsheet-style operations. Interactive and “online” (manual) analysis.

Data Mining: Automatic discovery of interesting trends and other patterns.

Data Warehousing
A Data Warehouse is a subject-oriented, integrated, time-variant, non-volatile collection of data for the purpose of decision support. It integrates data from several operational (OLTP) databases, keeps the (relevant part of the) history of the data, and views data at a more abstract level than OLTP systems (aggregating over many detail records).


Data Warehouse Architecture
[Figure: external data sources feed an EXTRACT / INTEGRATE / TRANSFORM / LOAD / REFRESH process into the data warehouse (with a metadata repository), which in turn supports OLAP and data mining.]

Data Warehousing
Integrated data spanning long time periods, often augmented with summary information. Because the data warehouse keeps the history, sizes of several gigabytes to terabytes are common. Interactive response times are expected for complex queries; on the other hand, ad-hoc updates are uncommon.


Data Warehousing Issues
Semantic Integration: When getting data from multiple sources, we must eliminate mismatches, e.g., different currencies or DB schemas.
Heterogeneous Sources: Must access data from a variety of source formats and repositories. Replication capabilities can be exploited here.
Load, Refresh, Purge: Must load data, periodically refresh it, and purge data that is too old (see the sketch below).
Metadata Management: Must keep track of source, loading time, and other information for all data in the warehouse.
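A minimal sketch of a load/refresh/purge step. The staging table Staging_Sales (with amounts in EUR and a load_date column) and the conversion rate are assumptions for illustration only; Sales and Times are the warehouse tables used later in these slides.

-- Refresh: load newly staged rows into the warehouse fact table,
-- converting currency as part of semantic integration.
INSERT INTO Sales (pid, timeid, locid, sales)
SELECT pid, timeid, locid, amount_eur * 1.35   -- assumed EUR -> CAD rate
FROM Staging_Sales
WHERE load_date = CURRENT_DATE;

-- Purge: drop fact rows that refer to years no longer kept in the warehouse.
DELETE FROM Sales
WHERE timeid IN (SELECT timeid FROM Times WHERE year < 1995);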


Multidimensional Data Model
The model consists of a collection of dimensions (independent variables) and (numeric) measures (dependent variables). Each entry (cell) aggregates the value(s) of the measure(s) over all records that fall into that cell, i.e., all records whose attribute value in each dimension corresponds to the cell's value in that dimension. Example: dimensions Product (pid), Location (locid), and Time (timeid), and measure Sales.


Multidimensional Data Model
Tabular representation:

  pid  timeid  locid  sales
  11   1       1      25
  11   2       1      8
  11   3       1      15
  12   1       1      30
  12   2       1      20
  12   3       1      50
  13   1       1      8
  13   2       1      10
  13   3       1      10
  11   1       2      35

Multidimensional representation (slice locid=1 is shown):

  sales    timeid=1  timeid=2  timeid=3
  pid=11   25        8         15
  pid=12   30        20        50
  pid=13   8         10        10

Multidimensional Data Model
For each dimension, the set of values can be organized in a concept hierarchy (subset relationship), e.g.:

  PRODUCT:  category -> pname
  TIME:     year -> quarter -> month -> date, and year -> week -> date
  LOCATION: country -> state -> city

Multidimensional Data Model
Multidimensional data can be stored physically in a (disk-resident, persistent) array; such systems are called MOLAP systems. Alternatively, it can be stored as a relation; such systems are called ROLAP systems. The main relation, which relates the dimensions to a measure, is called the fact table. Each dimension can have additional attributes and an associated dimension table. E.g., fact table Transactions(pid, locid, timeid, sales) and (one of the) dimension tables Products(pid, pname, category, price). Fact tables are much larger than dimension tables.


OLAP Queries
OLAP queries are influenced by SQL and by spreadsheets. A common operation is to aggregate a measure over one or more dimensions: find total sales; find total sales for each city, or for each state; find the top five products ranked by total sales.

Roll-up: Aggregating at different levels of a dimension hierarchy. E.g., given total sales by city, we can roll up to get sales by state (see the sketch below).
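A sketch of such aggregations in SQL, using the Sales and Locations tables of the star schema shown later in these slides:

-- Total sales over the whole fact table.
SELECT SUM(S.sales) FROM Sales S;

-- Total sales per city.
SELECT L.city, SUM(S.sales)
FROM Sales S, Locations L
WHERE S.locid = L.locid
GROUP BY L.city;

-- Roll-up: total sales per state (one level up in the Location hierarchy).
SELECT L.state, SUM(S.sales)
FROM Sales S, Locations L
WHERE S.locid = L.locid
GROUP BY L.state;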


OLAP Queries
Drill-down: The inverse of roll-up. E.g., given total sales by state, we can drill down to get total sales by city. We can also drill down on a different dimension to get total sales by product for each state.

Pivoting: Aggregation on selected dimensions. E.g., pivoting on Location and Time yields this cross-tabulation:

          WI    CA    Total
  1995    63    81    144
  1996    38    107   145
  1997    75    35    110
  Total   176   223   399

Slicing and Dicing: Equality and range selections on one or more dimensions.

Comparison with SQL Queries
The cross-tabulation obtained by pivoting can also be computed using a collection of SQL queries, e.g.:

SELECT SUM(S.sales)
FROM Sales S, Times T, Locations L
WHERE S.timeid = T.timeid AND S.locid = L.locid
GROUP BY T.year, L.state

SELECT SUM(S.sales)
FROM Sales S, Times T
WHERE S.timeid = T.timeid
GROUP BY T.year

SELECT SUM(S.sales)
FROM Sales S, Locations L
WHERE S.locid = L.locid
GROUP BY L.state

SELECT SUM(S.sales)
FROM Sales S

The Cube Operator
Generalizing the previous example, if there are d dimensions, we have 2^d possible SQL GROUP BY queries that can be generated through pivoting on a subset of the dimensions (without considering selections of specific values for certain dimensions). A data cube is a multidimensional model of a data warehouse in which the domain of each dimension is extended by the special value ALL, with the semantics of aggregating over all values of the corresponding dimension.
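As a sketch, SQL:1999 exposes this through GROUP BY CUBE (supported by, e.g., PostgreSQL and Oracle); the tables are those of the star schema shown later:

-- Computes all 2^2 = 4 groupings of (year, state) in one statement;
-- NULLs in the output play the role of the special value ALL.
SELECT T.year, L.state, SUM(S.sales) AS total_sales
FROM Sales S, Times T, Locations L
WHERE S.timeid = T.timeid AND S.locid = L.locid
GROUP BY CUBE (T.year, L.state);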


The Cube Operator
An entry of a data cube is called a cell. The number of cells of a data cube with d dimensions is

  \prod_{i=1}^{d} (|Domain_i| + 1)

(e.g., with three dimensions of sizes 10, 4 and 3 there are 11 * 5 * 4 = 220 cells; the "+1" accounts for the value ALL). Each SQL group corresponds to a data cube cell, and a single one of the 2^d different SQL GROUP BY queries can compute the measures for multiple data cube cells.

The Cube Operator
The CUBE operator computes the measures for all cells (i.e., evaluates all possible GROUP BY queries) at the same time. It can be processed much more efficiently than the set of all corresponding (independent) SQL GROUP BY queries. Observation: the results of more generalized queries (with fewer GROUP BY attributes) can be derived from more specialized queries (with more GROUP BY attributes) by aggregating over the irrelevant GROUP BY attributes.


The Cube Operator
Process the more specialized queries first and, based on their results, determine the outcomes of the more generalized queries (see the sketch below). This yields a significant reduction of I/O cost, since the intermediate results are much smaller than the original (fact) table.
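A small sketch of this reuse, assuming the finer grouping by (year, state) has been materialized in a hypothetical table YearStateSales(year, state, total_sales):

-- Derive the coarser grouping by year from the finer (year, state) result,
-- instead of scanning the fact table again.
SELECT year, SUM(total_sales)
FROM YearStateSales
GROUP BY year;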


The Cube Operator
Lattice of the GROUP BY queries of a CUBE query with respect to derivability of the results. Example ({A, B, ...} denotes a set of GROUP BY attributes; X -> Y means the result for Y is derivable from the result for X):

  {pid, locid, timeid}
  {pid, locid}   {pid, timeid}   {locid, timeid}
  {pid}   {locid}   {timeid}
  {}

Each node derives the nodes directly below it whose GROUP BY attributes are a subset of its own.

Implementation Issues
In the following, we adopt the ROLAP implementation. The fact table is normalized (redundancy-free); the dimension tables are un-normalized. Dimension tables are small, and updates/inserts/deletes are rare, so anomalies are less important than query performance. This kind of schema is very common in OLAP applications and is called a star schema; computing the join of all these relations is called a star join.


Implementation Issues
Example star schema:

  Fact table:       SALES(pid, timeid, locid, sales)
  Dimension tables: TIMES(timeid, date, week, month, quarter, year, holiday_flag)
                    PRODUCTS(pid, pname, category, price)
                    LOCATIONS(locid, city, state, country)
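A minimal DDL sketch of this star schema; the data types and key constraints are assumptions, not part of the original slides.

CREATE TABLE Times (
  timeid INTEGER PRIMARY KEY,
  date DATE, week INTEGER, month INTEGER, quarter INTEGER, year INTEGER,
  holiday_flag CHAR(1)
);

CREATE TABLE Products (
  pid INTEGER PRIMARY KEY,
  pname VARCHAR(50), category VARCHAR(30), price DECIMAL(10,2)
);

CREATE TABLE Locations (
  locid INTEGER PRIMARY KEY,
  city VARCHAR(50), state VARCHAR(30), country VARCHAR(30)
);

CREATE TABLE Sales (
  pid INTEGER REFERENCES Products,
  timeid INTEGER REFERENCES Times,
  locid INTEGER REFERENCES Locations,
  sales DECIMAL(12,2),
  PRIMARY KEY (pid, timeid, locid)   -- compound key of the fact table
);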

Bitmap Indexes
New indexing techniques: bitmap indexes, join indexes, array representations, compression, precomputation of aggregations, etc.

Example bitmap index: one bit for each possible value, one row per record.

  custid  name  sex  rating    sex bitmap (M F)   rating bitmap (1 2 3 4 5)
  112     Joe   M    3         10                 00100
  115     Ram   M    5         10                 00001
  119     Sue   F    5         01                 00001
  112     Woo   M    4         10                 00010

Bitmap Indexes
Selections can be processed using (efficient!) bit-vector operations on the bitmap indexes above.

Example 1: Find all male customers. The bit-vector for the value M (1101 over the four records above) directly identifies the qualifying rows.

Example 2: Find all male customers with a rating of 3. AND the relevant bit-vectors from the bitmap indexes for sex and rating (see the sketch below).
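A toy illustration of Example 2 using PostgreSQL bit-string values (an assumption; a real system operates on the stored bit-vectors directly):

-- AND of the bit-vector for sex = M (1101) and the bit-vector for rating = 3 (1000).
SELECT B'1101' & B'1000' AS qualifying_rows;   -- result: 1000, i.e. only the first record (Joe)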

Join Indexes
Consider the join of Sales, Products, Times, and Locations, possibly with additional selection conditions (e.g., country = "USA"). A join index can be constructed to speed up such joins (in a relatively static data warehouse); it basically materializes the result of the join. The index contains [s, p, t, l] if there are tuples with sid s in Sales, pid p in Products, timeid t in Times, and locid l in Locations that satisfy the join (and selection) conditions.
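Join indexes are not standard SQL; as a rough sketch, the same idea can be approximated with a materialized view over the joining keys (PostgreSQL syntax assumed, view name made up):

-- Materialize the fact-table keys that join with a USA location;
-- refresh it whenever the warehouse is refreshed.
CREATE MATERIALIZED VIEW usa_sales_jidx AS
SELECT S.pid, S.timeid, S.locid
FROM Sales S, Locations L
WHERE S.locid = L.locid AND L.country = 'USA';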


Join Indexes
Problem: the number of join indexes can grow rapidly. In order to efficiently support all possible selections in a data cube, one join index is needed for each subset of the set of dimensions, e.g., one join index each for [s,p,t,l], [s,p,t], [s,p,l], [s,t,l], [s,p], [s,t], [s,l].


Bitmapped Join Indexes
A variation of join indexes addresses this problem using the concept of bitmap indexes. For each attribute of each dimension table with an additional selection (e.g., country), build a bitmap index. The index contains, e.g., an entry [c, s] if a dimension-table tuple with value c in the selection column joins with a Sales tuple with sid s. Note that s denotes the compound key of the fact table, e.g., [pid, timeid, locid]. The bitmap index version is especially efficient (bitmapped join index).


Bitmapped Join Indexes
(Star schema as before: fact table SALES with dimension tables TIMES, PRODUCTS, LOCATIONS.)

Consider a query with the conditions price = 10 and country = "USA". Suppose a tuple (with sid) s in Sales joins with a tuple p with price = 10 and with a tuple l with country = "USA". There are two (bitmap) join indexes: one containing [10, s] and the other containing [USA, s]. Intersecting these indexes tells us which tuples in Sales are in the join and satisfy the given selection.
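For concreteness, this is the kind of star join (sketched here, not taken from the slides) that the two bitmapped join indexes would accelerate:

SELECT SUM(S.sales)
FROM Sales S, Products P, Locations L
WHERE S.pid = P.pid AND S.locid = L.locid
  AND P.price = 10 AND L.country = 'USA';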

Sequences in SQL
SQL-92 supports only (unordered) sets of tuples. Trend analysis is difficult to do in SQL-92, e.g.: find the % change in monthly sales; find the top 5 products by total sales; find the trailing n-day moving average of sales.

The first two queries can be expressed only with difficulty, and the third cannot be expressed at all in SQL-92 if n is a parameter of the query. The WINDOW clause in SQL:1999 allows us to formulate such queries over a table viewed as a sequence of tuples (implicitly, based on user-specified sort keys).


The WINDOW Clause
A window is an ordered group of tuples around each (reference) tuple of a table. The order within a window is determined by an attribute specified in the SQL statement, and the width of the window is also specified in the SQL statement. The tuples of the window can be aggregated using the standard (set-oriented) SQL aggregate functions (SUM, AVG, COUNT, ...). SQL:1999 also introduces some new (sequence-oriented) aggregate functions, in particular RANK, DENSE_RANK, and PERCENT_RANK (see the sketch below).

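A small sketch of the ranking functions just mentioned, using the star schema's Sales table (this particular query is not from the slides):

-- Rank the products within each location by their sales;
-- DENSE_RANK leaves no gaps in the ranking after ties, RANK does.
SELECT S.locid, S.pid, S.sales,
       RANK()       OVER (PARTITION BY S.locid ORDER BY S.sales DESC) AS sales_rank,
       DENSE_RANK() OVER (PARTITION BY S.locid ORDER BY S.sales DESC) AS dense_sales_rank
FROM Sales S;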

The WINDOW Clause

SELECT L.state, T.month, AVG(S.sales) OVER W AS movavg
FROM Sales S, Times T, Locations L
WHERE S.timeid = T.timeid AND S.locid = L.locid
WINDOW W AS (PARTITION BY L.state ORDER BY T.month
             RANGE BETWEEN INTERVAL '1' MONTH PRECEDING
                   AND INTERVAL '1' MONTH FOLLOWING);

Let the result of the FROM and WHERE clauses be "Temp". Conceptually, Temp is partitioned according to the PARTITION BY clause. This is similar to GROUP BY, but the answer has one tuple for each tuple in a partition, not one tuple per partition! Each partition is sorted according to the ORDER BY clause.


The WINDOW Clause
Continuing with the same query: for each tuple in a partition, the WINDOW clause creates a "window" of nearby (preceding or succeeding) tuples. The definition of the window width can be value-based, as in the example, using RANGE; it can also be based on the number of tuples to include in the window, using a ROWS clause. The aggregate function is then evaluated for each tuple in the partition based on the corresponding window.

Top N Queries
Sometimes we want to find only the "best" answers (e.g., in web search engines). If we want to find only the 10 (or so) cheapest cars, the DBMS should avoid computing the costs of all cars before sorting to determine the 10 cheapest. Idea: guess a cost c such that the 10 cheapest cars all cost less than c, and not too many other cars cost less than c. Then add the selection cost < c to the query and evaluate it.

Top N Queries

SELECT TOP 10 P.pid, P.pname, S.sales
FROM Sales S, Products P
WHERE S.pid = P.pid AND S.locid = 1 AND S.timeid = 3
ORDER BY S.sales DESC

SELECT P.pid, P.pname, S.sales
FROM Sales S, Products P
WHERE S.pid = P.pid AND S.locid = 1 AND S.timeid = 3 AND S.sales > c
ORDER BY S.sales DESC

The "cut-off value" c is chosen by the query optimizer.
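As a side note (not from the slides), standard SQL:2008 expresses the top-N request without a vendor-specific TOP construct, e.g. in PostgreSQL:

SELECT P.pid, P.pname, S.sales
FROM Sales S, Products P
WHERE S.pid = P.pid AND S.locid = 1 AND S.timeid = 3
ORDER BY S.sales DESC
FETCH FIRST 10 ROWS ONLY;   -- LIMIT 10 is a common equivalent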


Online Aggregation
Consider an aggregate query, e.g., finding the average sales by state. If we do not have a corresponding (materialized) data cube, processing this query from scratch can be very expensive; in general, we have to scan the entire fact table. But the user expects interactive response times, and an approximate result may be acceptable to the user.


Online Aggregation
Can we provide the user with approximate results before the exact average is computed for all states? We can show the current "running average" for each state as the computation proceeds. Even better, we can use statistical techniques and sample the tuples to aggregate instead of simply scanning the aggregated table. E.g., we can provide bounds such as "the average for Wisconsin is 2000 ± 102 with 95% probability" (see the sketch below).
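A very rough sketch of the sampling idea, assuming PostgreSQL's TABLESAMPLE clause and grouping by locid for brevity; a real online-aggregation system would also report confidence bounds and keep refining them:

-- Aggregate over roughly 1% of the fact table's pages instead of scanning it all.
SELECT locid, AVG(sales) AS approx_avg_sales
FROM Sales TABLESAMPLE SYSTEM (1)
GROUP BY locid;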


Summary
Decision support is an emerging, rapidly growing subarea of database systems. It involves the creation of large, consolidated data repositories called data warehouses. Warehouses are exploited using sophisticated analysis techniques: complex SQL queries and OLAP "multidimensional" queries (or automatic data mining methods). New techniques for database design, indexing, view maintenance, and interactive (online) querying need to be developed.
