IBM DB2 Information Integrator
Getting Started with Classic Federation Version 8.2
GC18-9155-02
Before using this information and the product it supports, be sure to read the general information under “Notices” on page 101.
This document contains proprietary information of IBM. It is provided under a license agreement and is protected by copyright law. The information contained in this publication does not include any product warranties, and any statements provided in this manual should not be interpreted as such.

You can order IBM publications online or through your local IBM representative:
- To order publications online, go to the IBM Publications Center at www.ibm.com/shop/publications/order
- To find your local IBM representative, go to the IBM Directory of Worldwide Contacts at www.ibm.com/planetwide

When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.

© Copyright International Business Machines Corporation 2003, 2004. All rights reserved. US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. © CrossAccess Corporation 1993, 2003.
Contents

Chapter 1. Introduction . . . . . . . . . 1
Product overview . . . . . . . . . . . 1
Operational components . . . . . . . . . 2
   Data server . . . . . . . . . . . . 2
   Clients . . . . . . . . . . . . . 5
   Connectors . . . . . . . . . . . . 6
   Enterprise server . . . . . . . . . . 7
Application components . . . . . . . . . 7
Administrative components . . . . . . . . 7
   Data Mapper . . . . . . . . . . . 7
   Mainframe utilities . . . . . . . . . 11

Chapter 2. Concepts . . . . . . . . . 13
Relational access to data . . . . . . . . 13
Client/server architecture . . . . . . . . 14
Data sources . . . . . . . . . . . . 15
Nonrelational data mapping . . . . . . . 15
Data server and client components . . . . . 16
   Data server components . . . . . . . 17
   Data server system exits . . . . . . . 18
   Clients . . . . . . . . . . . . . 20
Configuration methodology . . . . . . . 20
   Data server configuration (CACDSCF) . . . 21
   Query processor configuration (CACQPCF) . 21
   Administrator configuration (CACADMIN) . 21
   Methods for updating configuration members . 21

Chapter 3. Setting up Adabas . . . . . 23
Overview . . . . . . . . . . . . . 23
Setting up the Adabas environment . . . . . 23
Accessing file definition information and creating logical tables . . . 24
Loading the metadata catalog . . . . . . . 24
Validating the setup . . . . . . . . . 26
   Accessing Adabas data with SQL locally . . 26
   Accessing Adabas data with SQL remotely . 27

Chapter 4. Setting up CA-Datacom . . . 29
Overview . . . . . . . . . . . . . 29
Exporting the CA-Datacom file definition into COBOL copybook format . . . 29
Mapping the source language member to logical tables . . . 30
Loading the metadata catalog . . . . . . . 35
Validating the setup . . . . . . . . . 37
   Accessing CA-Datacom data with SQL locally . 37
   Accessing CA-Datacom data with SQL remotely . 39

Chapter 5. Setting up CA-IDMS . . . . 41
Overview . . . . . . . . . . . . . 41
Punching the schema and subschema . . . . 41
Mapping the CA-IDMS schema and subschema to logical tables . . . 42
Loading the metadata catalog . . . . . . . 46
Validating the setup . . . . . . . . . 48
   Accessing CA-IDMS data with SQL locally . 48
   Accessing CA-IDMS data with SQL remotely . 49

Chapter 6. Setting up DB2 Universal Database for z/OS . . . 51
Overview . . . . . . . . . . . . . 51
Setting up the interface to DB2 Universal Database for z/OS . . . 51
Mapping the DB2 Universal Database table definitions to logical tables . . . 52
Loading the metadata catalog . . . . . . . 52
Validating the setup . . . . . . . . . 53
   Accessing DB2 Universal Database data with SQL locally . . . 54
   Accessing DB2 Universal Database data with SQL remotely . . . 55

Chapter 7. Setting up IMS . . . . . . 57
Overview . . . . . . . . . . . . . 57
Mapping the sample IMS DBD to logical tables . 57
Loading the metadata catalog . . . . . . . 65
Validating the setup . . . . . . . . . 67
   Establishing the interface to DBCTL/DRA . 67
   Accessing IMS data with SQL locally . . . 68
   Accessing IMS data with SQL remotely . . 69

Chapter 8. Setting up Sequential . . . . 71
Overview . . . . . . . . . . . . . 71
Mapping the sample Sequential copybook to logical tables . . . 71
Loading the metadata catalog . . . . . . . 75
Validating the setup . . . . . . . . . 76
   Accessing the Sequential data with SQL locally . 76
   Accessing Sequential data with SQL remotely . 77

Chapter 9. Setting up VSAM . . . . . 79
Overview . . . . . . . . . . . . . 79
Mapping VSAM data . . . . . . . . . 79
   Mapping the sample VSAM copybook (native) . 79
   Mapping the sample VSAM copybook (CICS) . 81
Loading the metadata catalog . . . . . . . 85
Validating the setup . . . . . . . . . 86
   Accessing VSAM data through CICS . . . 86
   Accessing VSAM data with SQL locally . . 89
   Accessing VSAM data with SQL remotely . 90

Appendix. DB2 Information Integrator documentation . . . 91
Accessing DB2 Information Integrator documentation . . . 91
Documentation about replication function on z/OS . 93
Documentation about event publishing function for DB2 Universal Database on z/OS . . . 94
Documentation about event publishing function for IMS and VSAM on z/OS . . . 94
Documentation about event publishing and replication function on Linux, UNIX, and Windows . . . 95
Documentation about federated function on z/OS . 96
Documentation about federated function on Linux, UNIX, and Windows . . . 96
Documentation about enterprise search function on Linux, UNIX, and Windows . . . 98
Release notes and installation requirements . . 98

Notices . . . . . . . . . . . . . 101
Trademarks . . . . . . . . . . . . 103

Index . . . . . . . . . . . . . . 105

Contacting IBM . . . . . . . . . . 107
Product information . . . . . . . . . 107
Comments on the documentation . . . . . 107
Chapter 1. Introduction

This chapter provides an overview of IBM® DB2® Information Integrator Classic Federation for z/OS®. It contains the following sections:
- “Product overview” on page 1
- “Operational components” on page 2
- “Application components” on page 7
- “Administrative components” on page 7
Product overview DB2 Information Integrator Classic Federation for z/OS is a powerful, efficient, and easy-to-implement mainframe data integration solution. It provides Windows® and UNIX® tools and applications with direct, real-time SQL access to mainframe databases and files. Tools and applications issue JDBC or ODBC SQL statements to read and write data that is stored in VSAM and Sequential files, as well as IMS™, CA-IDMS, CA-Datacom, Adabas, and DB2 Universal Database™ for z/OS databases. The following figure demonstrates how IBM DB2 Information Integrator Classic Federation for z/OS (DB2 II Classic Federation) accesses data.
Figure 1. Accessing data with DB2 Information Integrator Classic Federation for z/OS (original graphic: users connect through DB2 Information Integrator both to nonrelational data and to a relational data warehouse/datamart)
DB2 II Classic Federation is a complete, high-powered solution that delivers:
- SQL access to relational and legacy data
- A scalable, high-performance, easy-to-use product
- A standards-based solution that introduces no new application programming interfaces (APIs)
- A modular solution that integrates easily with existing environments

DB2 II Classic Federation contains the following major components:
- Data server
- Java™ Database Connectivity (JDBC), Microsoft® Open Database Connectivity (ODBC), and Call Level Interface (CLI) clients
- Connectors
- Enterprise server
- Data Mapper and metadata utilities

These components are grouped into three functional areas:
- Operational components, which process requests for data and deliver the results to the client tool or application.
- Application-enabling components, which provide a 3GL hook into the data access and federation server.
- Administrative components, which configure components and manage system data.

This chapter provides an overview and introduction to each of these components.
Operational components

Operational components provide the processing required to connect tools and applications with data. They are responsible for:
- Accepting and validating SQL statements from a server, client, desktop tool, or desktop application
- Communicating SQL and result sets between distributed tools and applications and mainframe data sources
- Accessing the appropriate data using native file and database access aids such as indexes and keys
- Translating results into a consistent relational format regardless of source data type

The operational components include:
- Data server
- Clients (Java Database Connectivity (JDBC), Microsoft Open Database Connectivity (ODBC), and Call Level Interface (CLI))
- Connectors
- Enterprise server

These modules are discussed in the sections that follow.
Data server

The core of the operational environment is the data server. The data server processes SQL statements that are sent from tools and applications through the Java Database Connectivity (JDBC), Microsoft Open Database Connectivity (ODBC), and Call Level Interface (CLI) clients. The data server:
- Accepts SQL queries from client applications via ODBC, JDBC, or CLI clients. The data server runs in its own address space and waits for requests from clients. It typically listens for requests from clients on a TCP/IP socket. WebSphere® MQ is also supported for communication with ODBC and JDBC clients.
- Determines the types of files and databases to be accessed. The data server uses platform- and database-specific data connectors for data access. The connectors use native database I/O commands.
- Rewrites each SQL query into one or more native file or database commands. One SQL command can translate into multiple native data requests. For joins across multiple tables, the SQL may translate into multiple native data accesses to different types of files and databases. For example, a join between logical tables representing VSAM file data and IMS DB data would be performed using the VSAM connector and the IMS connector.

  The data server uses the documented API interfaces for the particular file system or DBMS to be accessed. It does not rely on internal database control block organization. Using documented APIs reduces the likelihood of returning invalid results and minimizes the potential for errors during processing. It also ensures the integrity and security of the underlying databases and files.
- Optimizes native data access calls based on the SQL statements and data source characteristics. Optimization is built in at several levels of the system to ensure that the least amount of data is read and that all database aids are utilized. Joins are optimized based on index statistics held in the metadata catalog.
- Constructs a standard relational result set. For nonrelational data stores such as IMS DB, the data server restructures the data, as it is accessed, into columns and rows, and it translates data types. This restructuring may result in normalization of the data when segments contain embedded recurring data items (OCCURS and OCCURS DEPENDING ON constructs).
- Filters the data to satisfy the SQL qualification, typically WHERE clauses.
- Sorts result sets as needed, for example, to perform ORDER BY or GROUP BY operations.
- Processes catalog queries using the metadata catalog.
- Processes INSERT, UPDATE, and DELETE commands with support for commit, rollback, and autocommit, as well as two-phase commit for IMS DB.
- Translates result sets into a consistent relational format. For nonrelational data sources, this involves restructuring data into columns and rows. For relational data sources, result sets are translated into a single relational format.
The data server also can invoke stored procedures for mainframe algorithm reuse. Stored procedures are defined to run within the data server address space. The data server also can use the APPC bridge to access programs running in other regions such as CICS®. Finally, the data server can integrate IMS DC transactions by using DB2 Information Integrator Classic Federation transaction services and the data server’s stored procedure mechanisms.

There are five types of tasks (services) that run in the data server:
- Region controller, which includes an MTO Operator Interface
- Connection handlers
- Query processors
- Logger
- Initialization services

These tasks are described in the following sections.
Region controller

The data server has multiple tasks running within it. The main task is the region controller. The region controller is responsible for starting, stopping, and monitoring the other tasks running within the data server. The region controller determines which tasks to start based on configuration parameter settings. See IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing for more information about configuration parameters. The region controller also supplies a z/OS MTO (Master Terminal Operator) interface that can be used to monitor and control a data server address space.
Connection handlers

A connection handler (CH) task is responsible for listening for connection requests from client applications and routing them to the appropriate query processor task. DB2 Information Integrator Classic Federation for z/OS contains modules for standard transport layers. These modules can be loaded by the connection handler task:
- TCP/IP
- Cross memory services
- WebSphere MQ

A local z/OS client application can connect to a data server using any of these methods. (The recommended approach is to use z/OS cross memory services.) Remote client applications (running under Windows or a UNIX platform) use TCP/IP or WebSphere MQ to communicate with a remote data server.
Figure 2. Sample communication implementation (original graphic: a UNIX CLI client and a Microsoft Windows ODBC or JDBC client connect over TCP/IP to a data server running on z/OS)
Query processor

Embedded in the data server is a query processor that acts as a relational engine. The query processor has no knowledge of the physical databases or files being referenced in a SELECT, INSERT, UPDATE, or DELETE statement. For each table referenced in an SQL statement, the query processor invokes a connector that is specific to the database or file type of the source data. The query processor treats the different databases or file systems as a single data source and can process SQL statements that access a single type of database or file system or that reference multiple types of databases or file systems.
Figure 3 on page 5 shows the operational processing flow. To process SQL data access requests, data definitions must be mapped to logical tables. This information is stored in generated metadata catalogs, which emulate DB2 Universal Database system catalogs. (The Data Mapper tool is used in conjunction with the metadata utility to perform this mapping. See “Data Mapper” on page 7 for more information.)
Figure 3. Operational processing flow (original graphic: a DB2 Information Integrator client sends SQL to the data server, which consults the metadata catalog, issues native-dialect requests to the data management system, and returns native results to the client as result rows)
Logger

A single logger task can be running within a data server. The logger reports on data server activities and is also used in error diagnosis situations.
Initialization services

Initialization services are special tasks used to prepare the data server execution environment to access relational and nonrelational data, initialize high-level language environments for use by exits, or allow the data server to use z/OS Workload Manager (WLM) services to process queries in WLM goal mode. For more information about initialization services, see IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing.
Clients

Desktop tools and applications can issue SQL data access requests to a DB2 Information Integrator Classic Federation for z/OS data server through a DB2 Information Integrator Classic Federation for z/OS ODBC, JDBC, or Call Level Interface (CLI) client.
The DB2 Information Integrator Classic Federation for z/OS ODBC, JDBC, and CLI clients provide a single interface between end-user tools, applications (Java and C), and other DB2 Information Integrator Classic Federation for z/OS operational components. High-speed performance and application integrity are provided by the 32-bit, thread-safe ODBC, JDBC, and CLI clients. A single client can access all data sources on all platforms.

The DB2 Information Integrator Classic Federation for z/OS client serves both as an ODBC, JDBC, or CLI driver and as a connection handler to other platforms. All clients can leverage the underlying TCP/IP communications backbone; ODBC and JDBC clients can also leverage the WebSphere MQ communications backbone.

Depending on the type of client that is being used, up to four software components interact to enable client data access:
- A platform-specific ODBC driver manager that loads clients on behalf of an application. This component is delivered with the operating system for all Windows platforms (for ODBC only).
- The DB2 Information Integrator Classic Federation for z/OS ODBC, JDBC, or CLI client, which processes function calls, submits SQL requests to a specific data source, and returns results to the application.
- Data source definitions that consist of the name and location of the data the user wants to access. The required data source definitions consist of a data source name and communications parameters (TCP/IP or WebSphere MQ). The data source name identifies a specific data server or enterprise server that will service data access requests.
- The DB2 Information Integrator Classic Federation for z/OS connection handler that is used to communicate with a data server or enterprise server. DB2 Information Integrator Classic Federation for z/OS supplies a connection handler that supports TCP/IP and WebSphere MQ implementations.
For more information on the DB2 Information Integrator Classic Federation for z/OS clients, see IBM DB2 Information Integrator Client Guide for Classic Federation and Classic Event Publishing.
Connectors

The data server uses platform- and database-specific data connectors for data access. For each table referenced in an SQL statement, the data server invokes a data connector specific to that database or file type. The data connectors are reentrant, so only a single copy is loaded even though multiple load requests may be issued based on the number of tables referenced in a statement and the number of concurrent users. This maximizes throughput while minimizing the operational footprint.

The connectors use native database I/O commands. DB2 II Classic Federation data connectors have been developed to use the most efficient, yet standard (supplied with the database), multi-user environments available. DB2 II Classic Federation does not rely on internal database control block organization. This ensures the integrity of the result set while leveraging the performance profile of the underlying database.
Enterprise server

The enterprise server can be used to manage a large number of concurrent users across multiple data sources. An enterprise server contains the same tasks that a data server uses, with the exception of the query processor and the initialization services.

Like a data server, the enterprise server’s connection handler is responsible for listening for client connection requests. However, when a connection request is received, the enterprise server does not forward the request to a query processor task for processing. Instead, the connection request is forwarded to a data source handler (DSH) and then to a data server for processing. The enterprise server maintains the end-to-end connection between the client application and the target data server. It is responsible for sending messages to and receiving messages from the client application and the data server.

The enterprise server is also used to perform load balancing. Using configuration parameters, the enterprise server determines the locations of the data servers that it will communicate with and whether those data servers are running on the same platform as the enterprise server. The enterprise server can automatically start a local data server if no instances are active. It can also start additional instances of a local data server when the currently active instances have reached the maximum number of concurrent users that they can service, or when the currently active instances are all busy.
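The load-balancing behavior can be sketched in a few lines. The capacity limit and server names below are hypothetical placeholders, and appending to a list stands in for starting a new data server instance; the actual behavior is governed by configuration parameters:

```python
# Sketch of the enterprise server's routing decision: send a new
# connection to a data server with spare capacity, and "start" a new
# local instance when all active instances are at their limit.
# The limit and names are hypothetical configuration values.

MAX_USERS = 2  # hypothetical per-server concurrent-user limit

def route(servers):
    """Pick a server with capacity, or start a new one."""
    for s in servers:
        if s["users"] < MAX_USERS:
            s["users"] += 1
            return s["name"]
    new = {"name": f"DS{len(servers) + 1}", "users": 1}
    servers.append(new)  # stands in for starting a new instance
    return new["name"]

servers = [{"name": "DS1", "users": 2}]  # DS1 is already full
picked = route(servers)
```

Because DS1 has reached its limit, the sketch routes the new connection to a freshly started second instance.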
Application components

Application-enabling components provide developers with a means of using the DB2 Information Integrator Classic Federation for z/OS data delivery capabilities within 3GL applications. The DB2 Information Integrator Classic Federation for z/OS clients provide standard interfaces through which C and Java programs access heterogeneous data sources with a single API. An application is written as if all data being accessed is in a single relational database. Multiple heterogeneous data sources that reside on local or remote platforms can be accessed by the application without regard to location or the type of file or database to be accessed.
Administrative components

Administrative components are tools and utilities used to perform the housekeeping and data administration required to define an installation’s environment and to define the data to be accessed by DB2 Information Integrator Classic Federation for z/OS. This section discusses the Data Mapper, which is used to create mappings between nonrelational data and logical relational tables, and the mainframe utilities, which are used to manage the metadata catalog.
Data Mapper

The Data Mapper is a Microsoft Windows-based application that automates many of the tasks required to create logical table definitions for nonrelational data structures. The objective is to view a single file or portion of a file as one or more relational tables. The mapping must be accomplished while maintaining the structural integrity of the underlying database or file.
The Data Mapper interprets existing physical data definitions that define both the content and the structure of nonrelational data. The tool is designed to minimize administrative work, using a definition-by-default approach.

The Data Mapper creates logical table definitions for nonrelational data structures by generating metadata grammar from existing nonrelational data definitions (such as COBOL copybooks, IMS DBDs, and CA-IDMS schemas/subschemas). The metadata grammar is used as input to the metadata utility to create a metadata catalog that defines how the nonrelational data structure is mapped to an equivalent logical table. The metadata catalogs are used by query processor tasks to facilitate both the access and translation of the data from the nonrelational data structure into relational result sets.

The Data Mapper import utilities create initial logical tables from COBOL copybooks. You refine these initial logical tables in a graphical environment to match site- and user-specific requirements. You can use the initial table definitions automatically created by the Data Mapper, or customize those definitions as needed.

A sample mapping of the fields within the segments of a hierarchical IMS database to two logical tables is shown in Figure 4 on page 9.
Figure 4. Sample mapping from hierarchical IMS DB to logical tables. In the original graphic, the IMS hierarchy contains the segments Customer Master, Customer Orders, Ordered Items, Customer Locations, Shipping Locations, Stocking Warehouse, Customer Receivables, Customer Invoices, and Cash Postings. Table-1 draws its columns from Customer Master (Name, Address, Phone), Customer Orders (Order-Date, Order-Amount, Order-Number), and Ordered Items (Item-Number, Item-Description). Table-2 draws its columns from Customer Master (Name), Customer Receivables (Amount Due, Outstanding), and Customer Invoices (Invoice-Number, Invoice-Date, Order-Number).
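A mapping like the one in Figure 4 can be viewed as data: each logical table is a list of (segment, field) pairs drawn from one path through the hierarchy. The sketch below uses the names from the figure purely as an illustration of the idea, not as the metadata grammar the Data Mapper actually emits:

```python
# Sketch of a Figure 4-style mapping expressed as data: each logical
# table lists (segment, column) pairs taken from one hierarchical
# path. Illustrative only -- not the Data Mapper's real output format.

TABLE_1 = [
    ("Customer Master", "Name"),
    ("Customer Master", "Address"),
    ("Customer Orders", "Order-Number"),
    ("Ordered Items", "Item-Number"),
]

def segments(table):
    """Segments the table spans, in hierarchy order (first occurrence)."""
    seen = []
    for seg, _ in table:
        if seg not in seen:
            seen.append(seg)
    return seen

path = segments(TABLE_1)
```

Listing the segments a table touches makes clear that each logical table flattens one root-to-leaf path of the hierarchy into rows and columns.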
Multiple logical tables can be created that map to a single physical file or database. For example, a site may choose to create multiple table definitions that all map to an employee VSAM file:
- One table is used by department managers who need access to information about the employees in their departments.
- Another table is used by HR managers who have access to all employee information.
- Another table is used by HR clerks who have access to information that is not considered confidential.
- Another table is used by the employees themselves, who can query information about their own benefits structure.
Customizing these table definitions to the needs of the user is not only beneficial to the end-user, but recommended. Figure 5 shows the data administration workflow with Data Mapper.
Figure 5. Data Mapper workflow (original graphic: on the desktop, the Data Mapper imports DBD source and COBOL copybooks and exports logical table definitions to the mainframe, where the metadata utility creates the metadata catalogs)
Note: The Data Mapper contains embedded FTP support to facilitate file transfer to and from the mainframe.

To create a relational model of your data using the Data Mapper, you perform the following steps:
1. Import existing descriptions of your nonrelational data into the Data Mapper. COBOL copybooks, IMS DL/I database definitions (DBDs), and CA-IDMS schemas/subschemas can all be imported into the Data Mapper. The Data Mapper creates default logical table definitions from the COBOL copybook information.
2. If these default table definitions are suitable for end users, go to the next step. If not, refine or customize the default table definitions as needed. For example, importing the record layout for the VSAM customer master file creates the default Customer_Table. Two additional tables can also be created from the original:
   - Marketing_Customer_Table, which contains only those data items required by the marketing department.
   - Service_Customer_Table, which contains only those data items required by support representatives.
3. Export the logical table definitions to the mainframe where the database or file resides. These definitions are then used as input to the metadata utility, which creates the metadata catalogs.

After completing these steps, you are ready to use the operational components with your tools and applications to access your nonrelational data.
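The "definition by default" idea in step 1 can be conveyed with a toy sketch. The copybook below is drastically simplified, and the parser is an illustration only; the Data Mapper's real import logic handles full COBOL syntax, REDEFINES, OCCURS, and more:

```python
# Toy sketch of definition-by-default: derive default column names
# from a (much simplified) COBOL copybook fragment. Elementary items
# (those with a PIC clause) become columns; group items do not.
# Hypothetical record layout; not the Data Mapper's actual parser.

COPYBOOK = """\
01  CUSTOMER-REC.
    05  CUST-ID        PIC 9(5).
    05  CUST-NAME      PIC X(30).
    05  CUST-PHONE     PIC X(12).
"""

def default_columns(copybook):
    cols = []
    for line in copybook.splitlines():
        parts = line.split()
        # Only elementary items carry a PIC clause.
        if "PIC" in parts:
            cols.append(parts[1].replace("-", "_"))
    return cols

cols = default_columns(COPYBOOK)
```

The default table would expose three columns derived from the three elementary items; refinement in the graphical environment (step 2) would then rename, drop, or split columns as the site requires.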
For step-by-step information on how to use the Data Mapper to map nonrelational data to logical tables, see the tutorial chapters of IBM DB2 Information Integrator Data Mapper Guide for Classic Federation and Classic Event Publishing.
Mainframe utilities

A variety of utilities are provided on the mainframe. The following are some of the key utilities:
1. CACCATLG, which contains sample JCL to allocate the metadata catalog.
2. CACMETAU (the metadata utility), which contains sample JCL to load the metadata catalog.
3. CACGRANT, which grants access rights to users.
Chapter 2. Concepts

This chapter describes the components that make up the core DB2 Information Integrator Classic Federation for z/OS system and key concepts as they pertain to those components. It describes the data server’s capabilities and also defines many of the concepts and terminology that DB2 Information Integrator Classic Federation for z/OS employs. Information about how other platforms interface with the DB2 Information Integrator Classic Federation for z/OS components is also included.

This chapter introduces the key concepts and components so that you have the context that is required to perform the setup steps in the chapters that follow. For more information about the topics discussed in this chapter, see IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing.

This chapter discusses the following topics:
- “Relational access to data” on page 13 explains how DB2 Information Integrator Classic Federation for z/OS makes it possible for your application to access all data as if the data were contained in a relational database.
- “Client/server architecture” on page 14 explains how a client communicates with a data server. Different communication protocols are identified. This section also contains a brief discussion about the use of the enterprise server.
- “Data sources” on page 15 provides an overview of the DB2 Information Integrator Classic Federation for z/OS concept of an ODBC-like data source definition. The term data source is used to identify the data server (and the service within that data server) that responds to your application’s SQL requests.
- “Nonrelational data mapping” on page 15 explains how DB2 Information Integrator Classic Federation for z/OS makes nonrelational databases and file systems look like a relational database. Key concepts that are discussed include how nonrelational-to-relational mapping is performed and how common nonrelational database and file system features are supported in DB2 Information Integrator Classic Federation for z/OS.
- “Data server and client components” on page 16 describes the components comprising a data server and how client applications communicate with that data server.
- “Configuration methodology” on page 20 describes how to configure a DB2 Information Integrator Classic Federation for z/OS system.
Relational access to data

One of the features that makes DB2 Information Integrator Classic Federation for z/OS such a powerful solution is that, regardless of the database or file system you want to access, your data is accessed consistently, just as if it were in a relational database. (Specifically, it makes your data appear as if it is in DB2 Universal Database.) This allows your front-end tools and applications to use the power of SQL to retrieve and manipulate data and removes the necessity and complexity of using varied access methods within them.

DB2 Information Integrator Classic Federation for z/OS uses a metadata catalog. You use the Data Mapper, a Windows-based graphical tool, to generate the input to the metadata utility, which populates the metadata catalog. Mapping your data is covered later in this chapter, as well as in IBM DB2 Information Integrator Data Mapper Guide for Classic Federation and Classic Event Publishing.

DB2 Information Integrator Classic Federation for z/OS supports the DB2 Universal Database version 4 dialect of SQL. This is also called the SQL-92 standard. DB2 Information Integrator Classic Federation for z/OS supports a fairly complete SQL implementation, including inner joins, outer joins, subselects, GROUP BY, HAVING, and scalar functions.
Client/server architecture

DB2 Information Integrator Classic Federation for z/OS uses a client/server architecture. To communicate with a data server, your applications must interface with one of the clients provided with DB2 Information Integrator Classic Federation for z/OS. ODBC and JDBC clients are provided for Windows applications. JDBC and CLI clients are provided for UNIX applications.

Before running your application, you must configure the client. Depending upon the platform, configuration is performed in one of several ways. For the ODBC client, configuration is performed using the ODBC Administrator. For JDBC and CLI clients, configuration is performed using a text file.

Regardless of the configuration method used, you must identify one or more data sources that your application will be accessing. For each data source, you must identify the communications protocol (TCP/IP or WebSphere MQ) that will be used to communicate with a DB2 Information Integrator Classic Federation for z/OS data server. TCP/IP is supported by all clients. WebSphere MQ is supported by the ODBC and JDBC clients. Additionally, for local (z/OS) applications, a cross memory connection handler service is also supported that uses data spaces.

Based on the protocol, the following addresses must be supplied during configuration:
- TCP/IP: The IP address and port number of the data server must be specified.
  – The IP address can be specified using dot notation or as a host name.
  – The port number can be specified as a number or a service name.
- WebSphere MQ (ODBC and JDBC clients only)
- Cross memory services: The name of a cross memory data space and a queue name must be specified. These are automatically created by the data server and require no definitions to any other subsystems.

If your application supports a large number of concurrent users (more than can be handled by a single data server), the enterprise server is available to manage these situations.
The enterprise server is installed between your application and the DB2 Information Integrator Classic Federation for z/OS data server and appears to the client as the data server. The enterprise server is responsible for starting additional data servers as the number of concurrent users increases. For information about the enterprise server, see IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing.
DB2 II Getting Started with Classic Federation
Data sources

In DB2 Information Integrator Classic Federation for z/OS, a data source is similar to an ODBC data source. The term data source identifies a query processor that runs in a DB2 Information Integrator Classic Federation for z/OS data server. The query processor is neutral with respect to database and file system. You can map a data source to a particular type of database or file system or to multiple databases or file systems.
You should define data sources in organizational terms, based on the data that your end users need to access, and not based on the underlying database or file system. For example, if you have a Credit department that requires access to IMS and VSAM data, you can create a single data source, CREDIT. In this example, you would define the logical tables that reference the desired IMS and VSAM credit data in a set of metadata catalogs that are accessible to the data server, with a service defined for the CREDIT data source. Logical tables are discussed in the following section.
In the credit example, when your application connects to the CREDIT data source, it has access to all the credit-related data as if the data were contained in a single relational database. This allows your application to perform heterogeneous joins across any of these tables even though the data is physically stored in IMS databases and VSAM files.
A DB2 Information Integrator Classic Federation for z/OS data server can support multiple data sources running different types of query processors. For example, in addition to your Credit application, you could have another application that needs to access accounting information. In this case, you would create the logical tables that contain the desired accounting information in the same metadata catalogs that contain your credit tables. You would then define another data source called ACCOUNTING for the data server.
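For instance, assuming the CREDIT data source exposes one logical table mapped to IMS and another mapped to VSAM (the table and column names below are illustrative assumptions), a single query can join the two:

```sql
-- Illustrative heterogeneous join; table and column names are assumptions
SELECT A.ACCT_NO, A.CUST_NAME, H.TRAN_DATE, H.TRAN_AMT
  FROM CAC.CREDIT_ACCT A,      -- logical table mapped to IMS data
       CAC.CREDIT_HIST H       -- logical table mapped to VSAM data
 WHERE A.ACCT_NO = H.ACCT_NO
```

To the application, this is an ordinary relational join; the query processor invokes the appropriate connector for each logical table.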
Nonrelational data mapping

For nonrelational data, like that in IMS and VSAM databases, you must map the nonrelational data into a relational representation that DB2 Information Integrator Classic Federation for z/OS can use. This representation is referred to as a logical table. You can map multiple logical tables to a single physical database or file system. A typical situation in which you would perform this operation is when a VSAM file contains 10 different types of records. In this case, you would define 10 different logical tables, one for each record type.
When accessing IMS data, DB2 Information Integrator Classic Federation for z/OS only accesses a single hierarchical path in the database as a logical table. If your IMS database contains multiple child segments in different hierarchical paths, then you must define a logical table for each hierarchical path that you want to access. After this is done, you can use JOINs to retrieve all of the data you need from different hierarchical paths in a single query.
Chapter 2. Concepts
Logical table definitions are stored in the metadata catalog. The logical tables, their associated column definitions, and index information are defined by metadata (USE) grammar. The index information is used to optimize access to the physical database or file system. The metadata grammar is a text file processed by the metadata utility, which updates the metadata catalog. When the metadata utility is executed, it verifies the syntax and content of the metadata grammar and verifies that the physical databases or files that are referenced exist. During this verification process, the metadata utility also collects additional physical information that is used to further optimize access to the physical database or file.
Although the metadata grammar is stored in human-readable format, and can be defined manually, this is a very tedious and error-prone process. The Data Mapper makes the definition process easy. The Data Mapper is a Windows-based tool that imports COBOL copybooks to build initial logical tables. Using the Data Mapper, you can refine these initial definitions to include only the information you need and to create multiple logical table definitions from a single physical database or file (as in the VSAM record type example). The Data Mapper generates the proper metadata grammar for each of the logical tables you have defined for a data source. The Data Mapper also contains embedded FTP support so that you can easily download the COBOL copybooks for import into the Data Mapper. Additionally, you can use the embedded FTP support to move the generated metadata grammar back to the mainframe so it can be run through the metadata utility.
In relational databases, the format and structure of the tables are strictly enforced. For example, a column is defined as one of the data types that is supported by the database. Additionally, relational databases do not support repeating items.
Instead, you define a separate table that contains multiple rows of data for each repeating data item. These restrictions are typically not enforced by a nonrelational database or file system. DB2 Information Integrator Classic Federation for z/OS supports the definition of repeating data items in the data mapping process. DB2 Information Integrator Classic Federation for z/OS logically joins the repeating and non-repeating data and returns a separate row for each instance of the repeating data.
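As a sketch, consider a record described by the following COBOL copybook. The copybook is hypothetical (it is not shipped with the product) and simply illustrates a repeating group:

```cobol
      * Hypothetical copybook; PHONE-NUMBER is a repeating data item
       01  CUSTOMER-REC.
           05  CUST-ID          PIC 9(6).
           05  CUST-NAME        PIC X(30).
           05  PHONE-NUMBER     OCCURS 3 TIMES PIC X(12).
```

If PHONE-NUMBER is mapped as a repeating data item, a single physical record containing three phone numbers is returned as three rows, each pairing CUST-ID and CUST-NAME with one occurrence of PHONE-NUMBER.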
Data server and client components

Figure 6 shows the components that make up a data server and how applications communicate with a data server using clients. It also shows how local and remote client applications interface with a data server. Each of the major components and its relationship to other system components is described in more detail in the remainder of this section.
Figure 6. DB2 Information Integrator Classic Federation for z/OS architecture. (The figure is not reproduced here.) On the Windows or UNIX side, a client application or driver manager uses a client (ODBC, JDBC, or CLI), an optional logger, and a transport layer to communicate over TCP/IP or WebSphere MQ with the mainframe data server. Within the data server, the figure shows the region controller, logger, connection handler, query processor, transport layer, connectors, and system exits, which access the target data sources and system catalogs.
Data server components

The data server consists of several components. All of these are either directly or indirectly activated by the region controller, based on configuration parameters defined in a master configuration member. These different components are referred to as services and are defined using the SERVICE INFO ENTRY (SIE) parameter.
The region controller is responsible for starting, stopping, and monitoring the different services that are running within the data server. The services are implemented as individual load modules running as separate z/OS tasks within the data server address space. Most of the services can have multiple instances, and most can support multiple users. A description of the different types of services that the region controller manages follows.
Initialization services

Initialization services are special-purpose services (tasks) that are used to initialize and terminate different types of interfaces to underlying database management systems or z/OS system components. For example, an initialization service is provided to activate the DRA interface used by the IMS DRA connector to access IMS data. An example of a z/OS system component initialization service is the Workload Manager (WLM) service.
Connection handler services

The connection handler service task is responsible for accepting connections from your applications. The connection request is routed to the requested query processor service for subsequent processing.
Query processor services

The query processor is the DB2 Information Integrator Classic Federation for z/OS relational engine that services user SQL requests. The query processor can service SELECT statements and stored procedure invocations. The query processor invokes one or more connectors to access the target database or file system that is referenced in an SQL request. The following connectors are supported:
v IMS BMP/DBB interface: Allows IMS data to be accessed through an IMS region controller. A region controller is restricted to a single PSB for the data server, limiting the number of concurrent users the query processor can handle.
v IMS DRA interface: Allows IMS data to be accessed using the IMS DRA interface. The DRA interface supports multiple PSBs and is the only way to support a large number of concurrent users. This is the recommended interface.
v Sequential interface: Allows access to sequential files or members.
v Stored procedure interface: Allows a z/OS Assembler, C, COBOL, or PL/I application program to be invoked.
v VSAM interface: Allows access to VSAM ESDS, KSDS, or RRDS files. This interface also supports the use of alternate indexes.
v CA-IDMS interface: Allows access to CA-IDMS files.
v Adabas interface: Allows access to Adabas files.
v CA-Datacom interface: Allows access to CA-Datacom files.
v DB2 interface: Allows access to DB2 Universal Database tables.
MTO interface

The MTO interface is a z/OS Master Terminal Operator interface that allows you to display and control the services and users that are being serviced by a data server. Using the MTO interface, you can also dynamically configure the data server. The MTO interface is contained within the region controller service.
Logger service

The logger service is a task that is used for system monitoring and troubleshooting. During normal operations, you will not need to be concerned with the logger service.
Data server system exits

The data server is a fully multi-threaded implementation designed to service large numbers of concurrent users. A set of system exits for security and accounting purposes is supplied with DB2 Information Integrator Classic Federation for z/OS. All system exits are written in Assembler language and are designed to run in a multi-user environment. Source code is provided for all exits so that you can customize the supplied exits to meet your site standards. Complete descriptions of how to activate the system exits and their APIs can be found in IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing. The following system exits are provided:
v SAF security exit
v SMF accounting exit
v CPU Resource Governor exit
v Workload Manager exit
v DB2 Thread Management exit
v Record Processing exit
The exits are described in the sections that follow.
SAF security exit

The SAF security exit is provided to perform checks to determine whether a user is authorized to access the Sequential or VSAM data sets that are referenced in a query. For IMS access, the SAF exit determines whether the user is authorized to use the PSBs that are used to access the IMS databases referenced in a query. Additionally, the SAF exit is used to verify that a user has authority to execute a stored procedure program. The SAF exit is activated using the SAF EXIT configuration parameter.
SMF accounting exit

The SMF exit allows you to generate SMF user records that report the CPU time and elapsed time a user was connected to a query processor service. The SMF exit is invoked immediately after the user connects to a query processor service task instance. Recording ends immediately before final disconnect processing within the query processor service task instance. An SMF record for the user is generated when the user disconnects from the data server. The SMF exit is activated using the SMF EXIT configuration parameter.
CPU Resource Governor exit

The CPU Resource Governor exit is used to restrict the amount of CPU time that a user can consume for a unit-of-work. In DB2 Information Integrator Classic Federation for z/OS, a unit-of-work is a single query or a series of queries. Typically, the unit-of-work is a single query. If your application opens multiple simultaneous cursors for the same data source, however, then the unit-of-work begins when a PREPARE is issued for the first cursor and ends when all cursors are closed.
When the CPU Resource Governor exit is activated, it is passed the available CPU time for that user. Periodically, the CPU Resource Governor exit is called to check how much CPU time has been used. After the allotted time is exceeded, the exit returns a return code that stops the query. The frequency with which the exit is called is controlled by DB2 Information Integrator Classic Federation for z/OS. The CPU Resource Governor exit is activated using the CPU GOVERNOR configuration parameter.
In addition to the CPU Resource Governor exit, DB2 Information Integrator Classic Federation for z/OS also supports resource governing based on the number of rows fetched and the number of rows staged in the result set. These governors are set using the MAX ROWS RETRIEVED and MAX ROWS EXAMINED configuration parameters. They restrict the number of records retrieved by the connectors activated to process a query and the number of possible rows that can be returned in a result set. These are somewhat coarse governors, and large amounts of CPU time can be expended before one of these limits is reached.
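As a sketch, a query processor configuration member might set the row governors as follows. The parameter names come from this section; the values and the exact member layout are illustrative assumptions for your site:

```
MAX ROWS EXAMINED = 100000
MAX ROWS RETRIEVED = 10000
```

Under these hypothetical settings, the connectors stop after roughly 100,000 records have been examined for a query, and a result set is limited to roughly 10,000 rows.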
Workload Manager exit

The Workload Manager exit allows you to place the queries that a user is running under Workload Manager (WLM) control. WLM goal mode allows the amount of resources that are consumed by a query to be controlled by the operating system based on site-defined rules. WLM compatibility mode allows you to use RMF™ monitor reports to examine the usage of resources by a query in comparison to site-defined goals. The Workload Manager exit is activated using the WLM EXIT configuration parameter.
The Workload Manager exit uses the same unit-of-work concept that the CPU Resource Governor exit uses. A unit-of-work is a single query unless your application opens multiple simultaneous cursors, in which case the unit-of-work is from first cursor open to last cursor close. Note: The Workload Manager exit uses enclave TCB support. When the Workload Manager exit is active, it joins a Workload Manager enclave when the unit-of-work starts. The enclave is left when the unit-of-work is completed. While in the unit-of-work, the query processor is under Workload Manager control, provided that WLM goal mode is active.
DB2 Thread Management exit

The DB2 Thread Management/Security exit modifies the default behavior of connecting to and disconnecting from DB2 Universal Database. In addition, this exit performs SAF calls to validate the user ID of the DB2 Information Integrator Classic Federation for z/OS client and establishes the correct primary authorization ID for the client in DB2 Universal Database.
Record Processing exit

The Record Processing exit, available for VSAM and Sequential data access, is used to modify the characteristics of a record to make it easier for DB2 Information Integrator Classic Federation for z/OS to process.
Clients

The client is responsible for loading the appropriate transport layer to establish a connection with the target data servers. When your application connects to a data source, the connection handler activates the appropriate transport layer service based on configuration parameters. The client is responsible for shipping all requests to the appropriate transport layer service for the duration of the session (until your application disconnects from the data source). DB2 Information Integrator Classic Federation for z/OS provides the following clients:
v ODBC
v JDBC
v Call Level Interface (CLI)
Configuration methodology

DB2 Information Integrator Classic Federation for z/OS configuration varies based on the type of client and data server that you use. For example, when you use the ODBC client with an application, configuration is performed with the ODBC Administrator. When you configure a CLI client without an ODBC driver manager, configuration is performed manually, using a text configuration file.
Client configuration is simple and straightforward. You must define the data sources that your application uses. You can also define additional tuning and debugging parameters. This configuration, however, is usually performed only once per new application deployment.
The DB2 Information Integrator Classic Federation for z/OS data server is designed for continuous operation. As your use of DB2 Information Integrator Classic Federation for z/OS expands, the data servers are designed such that you
can add new data sources and services without affecting existing applications. You can also perform tuning and troubleshooting without having to stop a data server.
The data server configuration files are text files that contain the various configuration parameters that define services and other operational and tuning parameters. These configuration files are stored as members in a configuration PDS. Data servers have three classes of configuration members stored in the SCACCONF data set:
v Data server configuration (CACDSCF)
v Query processor configuration (CACQPCF)
v Administrator configuration (CACADMIN)
These configuration members are described in the sections that follow.
Data server configuration (CACDSCF)

A single configuration member defines the services that run within the DB2 Information Integrator Classic Federation for z/OS data server address space. Services are defined using the SERVICE INFO ENTRY configuration parameter. This configuration member also contains other configuration parameters that affect all of the services running within a data server.
Query processor configuration (CACQPCF)

Each query processor service defined in the data server configuration member can name a service-level configuration member that contains definitions specific to that query processor.
Administrator configuration (CACADMIN)

The query processor also allows configuration parameter values to be overridden at an individual user ID level. User configuration overrides are activated using the USER CONFIG parameter, which must be specified in either the data server or the query processor configuration member. When user configuration overrides are activated and a user connects to the data server, a query processor service task is selected to service that user, and the configuration PDS is searched using that user's ID as the configuration member name. If such a member exists, the configuration definitions found in that member override the applicable definitions in the query processor configuration member.
Note: Typically, you only use the user configuration override feature when you are developing an application, tuning, or troubleshooting. User overrides should be used with caution. For normal production operations, the configuration parameters that control the query processor should be defined at the query processor configuration member level.
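For example, if user overrides are active and a user with the hypothetical ID JSMITH connects, a member named JSMITH in the configuration PDS could contain an override such as the following (the parameter is the row governor described earlier in this chapter; the value is illustrative):

```
MAX ROWS RETRIEVED = 100
```

While this member exists, queries run by JSMITH are limited to 100 returned rows, while other users keep the limits defined in the query processor configuration member.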
Methods for updating configuration members

You can update the data server configuration members manually, or dynamically using the z/OS MTO interface. When you update the data server configuration member manually, you must restart the data server for the updates to take effect. When you update query processor configuration members, the associated service must be stopped and then restarted for the updates to take effect. Manual updates
to an Administrator configuration member take effect when a user connects to the data server and a query processor is activated. An overview of how to perform dynamic configuration using the MTO operator interface is discussed in IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing.
Chapter 3. Setting up Adabas

This chapter explains how to set up IBM DB2 Information Integrator Classic Federation for z/OS to access data in Software AG Adabas. It includes the following sections:
v “Overview”
v “Setting up the Adabas environment”
v “Accessing file definition information and creating logical tables”
v “Loading the metadata catalog”
v “Validating the setup”
Overview

The following sections describe how to enable SQL access to Software AG Adabas data. They explain how to set up a sample file called Employees, which is usually created during installation of Adabas. The Employees file contains 1107 records of employee information. If the Employees sample is not available at your site, you can use one of your own Adabas files.
Although this chapter uses a sample database, you use the same general steps to enable SQL access to your own Adabas databases:
v Set up the Adabas environment
v Run the USE grammar generator (USG) to extract information about the Adabas file and create logical tables
v Load the metadata catalog with the logical tables
v Access the Adabas data with SQL
These steps are described in more detail in the following sections. For additional information about developing and deploying applications with DB2 Information Integrator Classic Federation for z/OS, see IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing.
Note: In all of the jobs that are described in this section, you must customize the JCL as appropriate for your site. For example, you may need to concatenate libraries specific to Adabas that are provided by the vendor. Templates for these libraries are included in the JCL. You must uncomment them and provide the appropriate high-level qualifiers.
Setting up the Adabas environment

Before the installation can proceed, you must link a module with the Adabas module ADAUSER, and you must create a new version of ADALNK. The SCACSAMP data set contains a member named CACADAL. Edit and submit the CACADAL job. This job creates and populates the members CACADLN and CACADLN2 in the SCACLOAD data set with the CACADABS and ADALNK modules, which are needed for Adabas access.
© Copyright IBM Corp. 2003, 2004
Accessing file definition information and creating logical tables

The SCACSAMP data set on the mainframe contains a job that extracts information on the layout of the Employees file from Predict and generates the metadata grammar (USE grammar) that describes the logical table CAC.EMPLADA. The member name is CACADAUG.
To create logical tables:
1. Edit the member CACADADD (in the SCACSAMP data set) to ensure that the Adabas environment information is correct.
2. Customize the JCL in member CACADAUG (in the SCACSAMP data set) to run in your environment. The statement SYSIN DD * contains the control information. The value of N (after the keyword SYSDIC) must be the file number of the Predict file on your Adabas system. For example:
//SYSIN DD *
OPTIONS SYSDIC 20 ...
3. Submit the job CACADAUG. The job creates a member called CACIVGAD in the SCACSAMP data set. The CACIVGAD member contains the USE grammar for the table CAC.EMPLADA.
4. If the Employees file is defined under a name different from EMPLOYEES-FILE, change it accordingly. If you are using the sample Employees file installed with Adabas version 7.1 or higher, the Predict view name should be EMPLOYEES. This view is available with Predict 4.11.
5. If you have the new sample Employees file, but the EMPLOYEES view is not available, use the EMPLOYEES-FILE view and make the following changes to the output of the CACADAUG job (that is, to the member CACIVGAD in the SCACSAMP data set):
a. Replace USE AS DECIMAL(6,0) with USE AS DATE “MMDDYYYY” in the BIRTH field.
b. Replace DECIMAL(6,0) with DECIMAL(8,0) in the LEAVE_START and LEAVE_END fields.
Loading the metadata catalog

To load the DB2 Information Integrator Classic Federation for z/OS metadata catalog with the table you created in the previous section:
1. If you have not already allocated the metadata catalog that is used by the system, do so now. Sample JCL to allocate the metadata catalog is provided in the SCACSAMP data set, in a member called CACCATLG. To allocate the metadata catalog:
a. Customize the CACCATLG JCL to run in your environment.
b. Submit the JCL.
c. After this job completes, ensure that the server procedure (CACDS) in the PROCLIB points to the newly created metadata catalogs using the CACCAT and CACINDX DD statements.
d. Ensure that the CACCAT and CACINDX DD statements are uncommented in the JCL.
2. Update the Adabas environment information in member CACADADD as necessary.
3. Load the metadata catalog using the USE grammar as input. Sample JCL to load the metadata catalog is provided in the SCACSAMP data set, in a member called CACMETAU. To load the metadata catalog:
a. Customize the CACMETAU JCL to run in your environment:
1) Ensure that the symbolic GRAMMAR is pointing to the appropriate metadata grammar member (GRAMMAR=CACIVGAD).
2) Uncomment and set the Adabas symbolic ADA, the STEPLIB DDs that point to your Adabas libraries, and the DDLPUNCH DD.
3) Ensure that the CACCAT DD and CACINDX DD refer to the catalogs created using the CACCATLG JCL.
b. Submit the JCL. After this job has run successfully, the metadata catalog is loaded with the logical tables that you created. A return code of 4 is expected: the DROP TABLE fails because the table does not exist yet.
Important: When the metadata catalog is initialized, security is set up with System Administration authority (SYSADM) granted to the user ID that installed DB2 Information Integrator Classic Federation for z/OS and ran the metadata utility. That user ID is the only user who can access the system or the catalogs. To turn off security, the new System Administrator must either grant SYSADM authorization to PUBLIC, allowing all users access and thus negating security, or grant table access authority to individual user IDs.
4. Grant the appropriate access rights to users. Sample JCL to grant access rights is contained in the SCACSAMP data set, in a member called CACGRANT. The JCL loads the catalogs with the appropriate access rights to the tables. The job reads its input from the CACGRIN member, which contains the GRANTs required to access the sample tables. If you are bringing your own tables online with DB2 Information Integrator Classic Federation for z/OS, you must add the appropriate GRANTs for the new tables.
See the information about SQL security in IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing for more information. To grant access rights:
a. Customize the CACGRANT JCL to run in your environment by performing the following steps:
1) Ensure that the symbolic GRAMMAR is pointing to the appropriate member containing the desired security commands. For example, GRAMMAR=CACGRIN.
2) Ensure that the CACCAT DD and CACINDX DD refer to the catalogs created using the CACCATLG JCL.
3) Review CACGRIN and uncomment the appropriate GRANT for your database.
b. Submit the job CACGRANT. If you plan to use tools that require access to the metadata catalog information, you must run CACGRANT using CACGRSYS as the input. After this job completes, the catalogs have been loaded with the desired security.
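As a sketch of the kind of statements a GRANT input member might contain, using the CAC.EMPLADA table defined in this chapter (the exact statements in CACGRIN may differ; this is standard SQL-style GRANT syntax, not a copy of the shipped member):

```sql
-- Illustrative GRANT statements; adjust for your tables and user IDs
GRANT SELECT ON TABLE CAC.EMPLADA TO PUBLIC
```

Granting to PUBLIC opens the table to all users; for tighter control, grant to individual user IDs instead.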
Validating the setup

This section explains how to verify that DB2 Information Integrator Classic Federation for z/OS is correctly configured to access your Adabas data.
Accessing Adabas data with SQL locally

This section explains how to access Adabas data by issuing SQL statements locally to the data server. Accessing Adabas locally with SQL can be useful if you ever need to diagnose a problem. By running the SQL locally, you eliminate potential problem areas such as the network and the client machine (for example, Windows or UNIX). It also speeds up the resolution process because all of the diagnostics occur on the mainframe.
To access Adabas data with SQL:
1. Review the SERVICE INFO ENTRY (SIE) for CACSAMP.
a. In the SCACCONF data set, look at the data server configuration file CACDSCF.
b. Within CACDSCF, find the SERVICE INFO ENTRY (SIE) that contains the word CACSAMP:
SERVICE INFO ENTRY = CACQP CACSAMP ....
c. Uncomment the configuration parameter shown above. The configuration parameter defines the data source name that is used by the client to connect to the data server. The name CACSAMP is the data source name that will be defined as an ODBC data source. To create additional data source names, simply replicate this configuration parameter and give it a different data source name. For example:
SERVICE INFO ENTRY = CACQP CACSAMP ....
SERVICE INFO ENTRY = CACQP ACCOUNTING...
In this case, there are two valid data source names that can be configured on the client side. The first is the default CACSAMP, and the second is ACCOUNTING. You can create as many of these SIEs as you need. Using different SIEs is a good way to separate different users based on their business needs. This example uses the default data source CACSAMP.
2. Modify your data server JCL (CACDS) in your PROCLIB to ensure that the Adabas symbolic ADA and DDs have been uncommented and modified for your environment. This includes the STEPLIB DD and the DDCARD DD.
3. Start the data server with the z/OS operator command s cacds. See IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing for detailed information about operator commands.
4. Access data from a local client on the mainframe by following the steps below:
a. Identify the CACSQL member, which is contained in the SCACSAMP data set. The CACSQL member contains sample SQL for the various databases supported by DB2 Information Integrator Classic Federation for z/OS. It is the input to the CACCLNT JCL, pointed to by its SYSIN DD.
b. In the CACSQL member, uncomment the SQL statements for Adabas to access the sample Adabas data.
Note: If you are testing your own Adabas tables, then you can create your own SQL in the same member (CACSQL) or create another member for your SQL statements.
c. Configure the client.
The client configuration file is used to communicate with the data server using the communication protocol defined in the data server. In the SCACCONF data set is a member called CACUSCF. Configure the DATASOURCE parameter based on the communications protocol set up in the data server, as described in IBM DB2 Information Integrator Installation Guide for Classic Federation and Classic Event Publishing.
d. Customize and submit the local client job CACCLNT. In the SCACSAMP data set, there is a member called CACCLNT. This job executes the batch client to issue SQL to the data server, using CACSQL as the input SQL.
e. View the output. The output should contain each SQL statement that was issued and the corresponding result sets.
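As a sketch, a TCP/IP DATASOURCE entry in CACUSCF might look like the following. The IP address and port number are placeholders for the values configured for your data server's connection handler; check the Installation Guide for the exact syntax:

```
DATASOURCE = CACSAMP tcp/9.30.1.1/5999
```

The data source name (CACSAMP here) must match the name defined by the SERVICE INFO ENTRY in the data server configuration.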
Accessing Adabas data with SQL remotely

Connecting remotely to a data server requires the use of one of the DB2 Information Integrator Classic Federation for z/OS clients and an application that is compliant with the ODBC, JDBC, or CLI standards. See IBM DB2 Information Integrator Client Guide for Classic Federation and Classic Event Publishing for detailed information about configuring and using the various DB2 Information Integrator Classic Federation for z/OS clients.
Chapter 4. Setting up CA-Datacom This chapter explains how to set up IBM DB2 Information Integrator Classic Federation for z/OS to access CA-Datacom data. It includes the following sections: v “Overview” on page 29 v “Exporting the CA-Datacom file definition into COBOL copybook format” on page 29 v “Mapping the source language member to logical tables” on page 30 v “Loading the metadata catalog” on page 35 v “Validating the setup” on page 37
Overview The following sections describe how to enable SQL access to CA-Datacom data. They explain how to set up a sample CA-Datacom database called CUST. The CUST sample is included in the CA-Datacom installation. It contains 116 records of customer information. If the CUST sample is not available at your site, you can use one of your own CA-Datacom databases. Although this chapter uses a sample database, you use the same general steps to enable SQL access to your own CA-Datacom databases: v Export the CA-Datacom file definition into COBOL copybook format. v Map the source language member to logical tables and export those definitions as metadata (USE) grammar. v Load the USE grammar into the metadata catalog. v Verify SQL access to the CA-Datacom data. These steps are described in more detail in the following sections. For additional information about developing and deploying applications with DB2 Information Integrator Classic Federation for z/OS, see IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing. Note: In all of the jobs that are described in this section, you must customize the JCL as appropriate for your site. For example, you may need to concatenate libraries specific to CA-Datacom that are provided by the vendor. Templates for these libraries are included in the JCL. You must uncomment them and provide the appropriate high-level qualifiers.
Exporting the CA-Datacom file definition into COBOL copybook format This section explains how to create a COBOL copybook for all fields in the selected CA-Datacom table. The resulting copybook can then be transferred to the workstation and imported into the Data Mapper. The SCACSAMP data set on the mainframe contains a member called CACDCSLG. This member contains sample JCL that you can use to run the CA-Datacom source language generation utility, which creates a COBOL copybook for all fields in the CUST table. To use the CACDCSLG JCL: 1. Customize the JCL to run in your environment:
a. Change the DC symbolic parameter to the CA-Datacom high-level qualifier. b. Change the DCOUT symbolic parameter to the copybook output high-level qualifier (HLQ). c. Change the DISKU and DISKVOL symbolic parameters to valid DASD units and volumes. d. Supply an authorized user name in the ‘-USR’ statement. e. Supply a user access code (password) in the ‘-USR’ statement. f. Supply the CA-Datacom table occurrence entity name in the ‘-utl copy,table’ statement. The name is CUST if you are using the sample Customer table. g. Supply the CA-Datacom table occurrence status/version in the ‘-utl copy,table’ statement. h. Supply a member name in the ‘-utl copy,table’ statement (for example, caccusfd). 2. Submit the CACDCSLG JCL. The job should complete with a return code 0. As delivered, the job creates a member named CACCUSFD in the COPYBOOK data set (which is specified in the CACDCSLG JCL). The newly-created member contains COBOL copybook source definitions that describe the structure of table CUST. 3. Locate the first line of data in the newly-created member: ./ ADD NAME=CACCUSFD
4. Delete the line shown in the previous step, and save the modified file. (The first line is not standard COBOL, so it must be removed to avoid syntax errors.)
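After the ./ ADD NAME= line is removed, the member contains ordinary COBOL data-description entries. The fragment below is a sketch only — the field names and lengths are inferred from the sample USE grammar shown later in this chapter, not copied from actual utility output:

```cobol
       01  CUST-RECORD.
           05  CUSTOMER-ID      PIC 9(6).
           05  LAST-NAME        PIC X(20).
           05  FIRST-NAME       PIC X(15).
           05  STREET           PIC X(30).
      *    ...remaining CUST fields...
```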
Mapping the source language member to logical tables This section explains how to map the CA-Datacom data into logical relational tables. It uses the sample CACCUSFD member that you created in the previous section. CACCUSFD contains COBOL copybook source definitions that describe the structure of table CUST. It is contained in the SCACSAMP data set on the mainframe. For more detailed information on data mapping, see IBM DB2 Information Integrator Data Mapper Guide for Classic Federation and Classic Event Publishing. To map the sample CA-Datacom copybook: 1. (Optional) Transfer the sample COBOL copybook to the workstation on which the Data Mapper is installed: a. Transfer the CACCUSFD member from the COPYBOOK data set to the workstation where the Data Mapper is installed. The following steps assume that the files are located in the Samples subdirectory of the Data Mapper installation directory. By default, the location is C:\Program Files\IBM\DB2IIClassic82\Data Mapper\Samples. b. Change the file name from CACCUSFD to customer.fd. 2. Start the Data Mapper. On the Windows Start menu, open IBM DB2 Information Integrator Classic Tools and click Data Mapper. 3. Open the sample repository Sample.mdb. a. On the Data Mapper File menu, click Open Repository. b. In the Open Repository dialog, select the Sample.mdb repository file and click Open. (If the Sample.mdb repository isn’t listed in the working
directory, browse to the Data Mapper installation directory C:\Program Files\IBM\DB2IIClassic82\Data Mapper and look in the Xadata subdirectory.) c. Click Open. The Sample.mdb repository window opens.
4. Create a new data catalog in the repository. a. On the Edit menu, click Create a New Data Catalog. The Create Data Catalog dialog box appears. b. Enter the following information in the dialog box: v Name: CUSTOMER SAMPLE - DATACOM v Type: DATACOM v Remarks: Any remarks that may be appropriate.
c. Click OK. 5. List the tables in the CUSTOMER SAMPLE - DATACOM data catalog. a. In the Sample.mdb repository window, click in the first column of the CUSTOMER SAMPLE - DATACOM row to select that data catalog.
b. On the Window menu, click List Tables. The initial list of tables is empty.
6. Create a new table in the CUSTOMER SAMPLE - DATACOM data catalog. a. On the Edit menu, click Create a New Table. b. In the Create CA-Datacom Table dialog box, enter the following table properties: v Name: CUSTDCOM v Owner: CAC v Table Name: CUST CUST is the CA-Datacom table that will be opened when the logical table CAC.CUSTDCOM is accessed with SQL. v Status/Version: PROD (to use the production version). v Access URT Name: CACDCURT. This is the name of a User Requirements Table (URT) that can be used to access CUST. If a URT does not exist, you must assemble and link edit it prior to running your installation verification. A sample URT is provided in the SCACSAMP data set, in a member named CACDCURT. CACDCURA, another SCACSAMP member, contains sample JCL to assemble and link edit member CACDCURT.
c. Use the default values for all other fields. d. Click OK. 7. Import the field (column) definitions from the COBOL copybook that you generated on the mainframe. a. In the Tables for Data Catalog CUSTOMER SAMPLE — DATACOM window, select the CUSTDCOM table by clicking in the first column of its row. b. On the File menu, click Import External File.
c. Select the copybook file that you created on the mainframe for the CUST table. If you transferred the CACCUSFD member from the mainframe to a customer.fd file on the workstation in step 1 on page 30, then simply select the customer.fd file. If you did not transfer the CACCUSFD member from the mainframe, then you can retrieve the CACCUSFD member using the Remote FTP feature: 1) Click Remote. 2) In the FTP Connect dialog box, enter the appropriate connection information for the mainframe on which the CACCUSFD member resides: v Host Address: The host address of the mainframe. This can be a host name or IP address. v Port ID: Typically 21 for FTP connections. v User ID: The user ID that is required by the FTP server on the mainframe. v User Password: The user password that is required by the FTP server on the mainframe. 3) Click Connect. Wait while the connection is completed and the data set list is built and transferred to the workstation. 4) When the data set list appears: a) Scroll the list or change the working directory to locate the source language data set that you generated on the mainframe. For this example, go to COPYBOOK.
b) Click the member name. For this example, click CACCUSFD. c) Click Transfer. d. Click OK. e. In the Import Copybook dialog box, confirm the information and then click Import.
The COBOL definitions are imported from the copybook into the table CAC.CUSTDCOM and mapped to SQL data types. The resulting table columns are shown in the Columns for DATACOM Table CUSTDCOM window.
8. Close all windows within the Data Mapper except the Sample.mdb repository window. 9. Generate USE statements for the mappings that you created. a. In the Sample.mdb repository window, select the data catalog CUSTOMER SAMPLE - DATACOM. b. On the File menu, click Generate USE Statements.
c. Enter a file name (such as custdcom.use) for the generated USE statements and click OK.
d. After the Data Mapper generates the USE statements, it offers you a chance to see the USE statements (metadata grammar) that were generated. Click Yes to open the USE statements in Microsoft Notepad. The following sample shows a portion of the completed metadata grammar:

DROP TABLE CAC.CUSTDCOM;
USE TABLE CAC.CUSTDCOM DBTYPE DATACOM
   TABLENAME "CUST"
   STATUSVERSION IS "PROD"
   ACCESS USING "CACDCURT"
(
/* COBOL Name CUSTOMER-ID */
CUSTOMER_ID SOURCE DEFINITION DATAMAP
   OFFSET 0 LENGTH 6 DATATYPE UC /* Zoned Decimal */
   USE AS CHAR(6),
/* COBOL Name LAST-NAME */
LAST_NAME SOURCE DEFINITION DATAMAP
   OFFSET 6 LENGTH 20 DATATYPE C
   USE AS CHAR(20),
/* COBOL Name FIRST-NAME */
FIRST_NAME SOURCE DEFINITION DATAMAP
   OFFSET 26 LENGTH 15 DATATYPE C
   USE AS CHAR(15),
/* COBOL Name STREET */
STREET SOURCE DEFINITION DATAMAP
   OFFSET 41 LENGTH 30 DATATYPE C
   USE AS CHAR(30),
/* COBOL Name CITY */
...
Loading the metadata catalog To load the metadata catalog with the table that you created in the previous section: 1. Transfer (via FTP) the metadata USE grammar file that you generated (custdcom.use) to the SCACSAMP data set on the mainframe. For this exercise, name the SCACSAMP data set member DCOMUSE. 2. Assemble the User Requirements Table (URT):
a. Edit the member CACDCURT in the SCACSAMP data set and make sure that the entry for the CUST table is accurate. If you are mapping a different CA-Datacom table, make an entry for that table. b. Edit the member CACDCURA to customize it for your environment, and then submit. This JCL assembles and links the URT and stores the module in the SCACLOAD data set. 3. If you have not already allocated the metadata catalog that is used by the system, do so now. Sample JCL to allocate the metadata catalog is provided in the SCACSAMP data set, in a member called CACCATLG. To allocate the metadata catalog: a. Customize the CACCATLG JCL to run in your environment. b. Ensure that the CACCAT and CACINDX DD statements are uncommented in the JCL. c. Submit the JCL. 4. Edit the SCACSAMP member CACDCID to ensure that the CA-Datacom security information is correct. 5. Load the metadata catalog using the metadata grammar as input. Sample JCL to load the metadata catalog is provided in the SCACSAMP data set, in a member called CACMETAU. To load the metadata catalog: a. Customize the CACMETAU JCL to run in your environment: 1) Uncomment and set the DC symbolic to specify the high-level qualifiers for your CA-Datacom libraries. 2) Ensure that the symbolic GRAMMAR is pointing to the appropriate metadata grammar member (GRAMMAR=DCOMUSE). 3) Uncomment the STEPLIB DD statements for your CA-Datacom libraries, and verify that the data set names are correct. 4) Uncomment the DDIDENT DD statement, and verify that the data set name is correct. The data set must contain information required for access to the CA-Datacom Datadictionary. In the SCACSAMP data set, there is a member named CACDCID that demonstrates the format and content required. The required information is USER= followed by an ENTITYOCCURRENCE-NAME (user name), and PASSWORD= followed by a QUALIFIER (password), if one has been assigned. The user name must be a CA-Datacom PRODuction status PERSON occurrence.
If you leave this field blank, CA-Datacom Datadictionary Service Facility assumes there is no authorization. 5) Ensure that the CACCAT and CACINDX DDs refer to the catalogs created using the CACCATLG JCL. 6) Make sure the User Requirements Table that you identified in the Data Mapper is available in one of the STEPLIB load libraries. If not, code the appropriate URT, assemble it, and link edit it into the SCACLOAD library. For assistance with the definition, assembly, and link edit steps, see members CACDCURT and CACDCURA in the SCACSAMP data set. b. Submit the CACMETAU JCL. After this job has been run successfully, the metadata catalog is loaded with the logical table created in the Data Mapper. A return code of 4 is expected. The DROP TABLE fails because the table does not exist yet.
Important: When the metadata catalog is initialized, security is set up with System Administration authority (SYSADM) granted to the user ID who installed DB2 Information Integrator Classic Federation for z/OS and ran the metadata utility. That user ID is the only user who can access the system or the metadata catalogs. To turn off security, the new System Administrator must either grant SYSADM authorization to PUBLIC, allowing all users access and thus negating security, or grant table access authority to individual user IDs. 6. Grant the appropriate access rights to users. Sample JCL to grant access rights is contained in the SCACSAMP data set, in a member called CACGRANT. The JCL loads the catalogs with the appropriate access rights to the tables. This job reads its input from the CACGRIN member. This member contains the GRANTs required to access the sample tables. If you are bringing your own tables online with DB2 Information Integrator Classic Federation for z/OS, you must add the appropriate GRANTs for the new tables. See the information about SQL security in IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing for more information. To grant access rights: a. Customize the CACGRANT JCL to run in your environment by performing the following steps: 1) Ensure the symbolic GRAMMAR is pointing to the appropriate member containing the desired security commands. For example, GRAMMAR=CACGRIN. 2) Ensure that the CACCAT and CACINDX DDs refer to the catalogs created using the CACCATLG JCL. 3) Review CACGRIN and uncomment the appropriate GRANT for your database. b. Submit the CACGRANT JCL. After this job completes, the catalogs have been loaded with the desired security. Note: If you plan to use tools that require access to the metadata catalog information, you must run CACGRANT using CACGRSYS as the input.
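The GRANTs loaded from CACGRIN are ordinary SQL security statements. As an illustration only — the exact privilege grammar accepted by the metadata utility is described in the administration guide, and the table name here assumes the sample mapping from this chapter — an entry might resemble:

```sql
GRANT SELECT ON TABLE CAC.CUSTDCOM TO PUBLIC;
```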
Validating the setup This section explains how to verify that DB2 Information Integrator Classic Federation for z/OS is correctly configured to access your CA-Datacom data.
Accessing CA-Datacom data with SQL locally This section explains how to access CA-Datacom data by issuing SQL statements locally to the data server. Accessing CA-Datacom locally with SQL can be useful if you ever need to diagnose a problem. By running the SQL locally, you eliminate potential problem areas such as the network and the client machine (for example, Windows or UNIX). It also speeds up the resolution process because all of the diagnostics are occurring on the mainframe. To access CA-Datacom data locally, using SQL: 1. Review the SERVICE INFO ENTRY (SIE) for CACSAMP.
a. In the SCACCONF data set, look at the data server configuration file called CACDSCF. b. Within the CACDSCF, search for the SERVICE INFO ENTRY (SIE) that contains the word CACSAMP: SERVICE INFO ENTRY = CACQP CACSAMP ....
c. Uncomment the configuration parameter shown above. The configuration parameter defines the data source name that is used by the client to connect to the data server. To create additional data source names, replicate this configuration parameter and give it a different data source name. For example: SERVICE INFO ENTRY = CACQP CACSAMP .... SERVICE INFO ENTRY = CACQP ACCOUNTING...
In this case, there are two valid data source names that can be configured on the client side. The first is the default CACSAMP, and the second is ACCOUNTING. You can create as many of these SIEs as you need. Using different SIEs is a good way to separate different users based on their business needs. This example uses the default data source CACSAMP. d. Uncomment and review the SERVICE INFO ENTRY (SIE) for DCOM. SERVICE INFO ENTRY = CACDCI DCOM 2 1 1 50 4 5M 5M 4
The last field controls the number of connections to be opened with a CA-Datacom MUF. A value of 4 is usually safe and sufficient. You can change this value if required by your site standards. 2. Edit the member CACDS in the data set SCACSAMP to ensure that the CA-Datacom symbolic parameter DC and STEPLIB DDs have been uncommented and modified for your environment. 3. Start the data server with the operator command s cacds. See IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing for detailed information about MTO operator commands. 4. Access data from a local client on the mainframe by following the steps below: a. Identify the CACSQL member, which is contained in the SCACSAMP data set. The CACSQL member contains sample SQL for the various databases supported by DB2 Information Integrator Classic Federation for z/OS. It is the input to the CACCLNT JCL pointed to by its SYSIN DD. b. In the CACSQL member, uncomment the SQL statements for CA-Datacom. Note: If you are testing your own CA-Datacom tables, you can either create your own SQL in the same member (CACSQL) or create another member for your SQL statements. c. Configure the client. The client configuration file defines how the client communicates with the data server, using the communication protocol defined in the data server configuration. In the SCACCONF data set there is a member called CACUSCF. Configure the DATASOURCE parameter based on the communications protocol set up in the data server, as described in IBM DB2 Information Integrator Installation Guide for Classic Federation and Classic Event Publishing. d. Customize and submit the local client job CACCLNT. In the SCACSAMP data set there is a member called CACCLNT. This job executes the client batch job to issue SQL to the data server using CACSQL as the input SQL. Customize the JCL to run in your environment and submit.
e. View the output. The output should contain the SQL statement that is being issued and the corresponding result sets.
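For example, if you mapped the sample CUST table as CAC.CUSTDCOM earlier in this chapter, a minimal statement to place in CACSQL might be the following sketch (column names are taken from the sample mapping; adjust for your own tables):

```sql
SELECT CUSTOMER_ID, LAST_NAME, FIRST_NAME
FROM CAC.CUSTDCOM;
```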
Accessing CA-Datacom data with SQL remotely Connecting remotely to a DB2 Information Integrator Classic Federation for z/OS data server requires the use of one of the DB2 Information Integrator Classic Federation for z/OS clients and an application that is compliant with ODBC, JDBC, or CLI standards. See IBM DB2 Information Integrator Client Guide for Classic Federation and Classic Event Publishing for detailed information about configuring and using the various DB2 Information Integrator Classic Federation for z/OS clients.
Chapter 5. Setting up CA-IDMS This chapter explains how to set up IBM DB2 Information Integrator Classic Federation for z/OS to access CA-IDMS data. It includes the following sections: v “Overview” on page 41 v “Punching the schema and subschema” on page 41 v “Mapping the CA-IDMS schema and subschema to logical tables” on page 42 v “Loading the metadata catalog” on page 46 v “Validating the setup” on page 48
Overview The following sections describe how to enable SQL access to CA-IDMS. They explain how to set up a sample CA-IDMS database called Employee Demo Database. The sample database is part of the CA-IDMS installation and is identified by a schema named EMPSCHM and subschema named EMPSS01. Although this chapter uses a sample database, you use the same general steps to enable SQL access to your own CA-IDMS databases: v On the mainframe, punch the schema and subschema. v On Windows, use the Data Mapper to create logical tables based on the CA-IDMS schema and subschema. v On the mainframe, use the metadata utility to load the logical tables into the data server’s metadata catalog. v Verify SQL access to the CA-IDMS data. These steps are described in more detail in the following sections. For additional information about developing and deploying applications with DB2 Information Integrator Classic Federation for z/OS, see IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing. Note: In all of the jobs that are described in this section, you must customize the JCL as appropriate for your site. For example, you may need to concatenate libraries specific to CA-IDMS that are provided by the vendor. Templates for these libraries are commented out in the JCL. You must uncomment them and provide the appropriate high-level qualifiers.
Punching the schema and subschema The SCACSAMP data set on the mainframe contains a member called CACIDPCH. This member contains sample JCL that you can use to punch out the schema and subschema in COBOL copybook format. To punch the schema and subschema: v Customize the JCL in CACIDPCH to run in your environment. v Submit the JCL. By default, the job creates two members in the SCACSAMP data set, called CACIDSCH and CACIDSUB. These newly-created members contain the schema and subschema for the Employee Demo Database.
Mapping the CA-IDMS schema and subschema to logical tables This section explains how to map a CA-IDMS schema and subschema into logical relational tables. It uses the sample CACIDSCH and CACIDSUB members that you created in the previous section, which are contained in the SCACSAMP data set on the mainframe. It explains how to create a mapping between the Employee Demo Database, which is described by the CACIDSCH schema and CACIDSUB subschema, and logical relational tables. For more information about data mapping, see IBM DB2 Information Integrator Data Mapper Guide for Classic Federation and Classic Event Publishing. To map the CA-IDMS schema and subschema: 1. Prepare the sample CA-IDMS schema and subschema: a. On the mainframe, ensure that the schema (CACIDSCH) and subschema (CACIDSUB) are in library members so that they can be transferred to your workstation. b. Transfer (via FTP) the CACIDSCH and CACIDSUB members from the SCACSAMP data set to the workstation where the Data Mapper is installed. The following steps assume that the files are located in the Samples subdirectory of the Data Mapper installation directory. By default, the location is C:\Program Files\IBM\DB2IIClassic82\Data Mapper\Samples. c. Rename the files CACIDSCH and CACIDSUB to follow Windows naming conventions: v cacidsch.sch v cacidsub.sub
2. Start the Data Mapper. On the Windows Start menu, open IBM DB2 Information Integrator Classic Tools and click Data Mapper. 3. Open the sample repository Sample.mdb. a. On the Data Mapper File menu, click Open Repository. b. In the Open Repository dialog, select the Sample.mdb repository file and click Open. (If the Sample.mdb repository isn’t listed in the working directory, browse to the Data Mapper installation directory C:\Program Files\IBM\DB2IIClassic82\Data Mapper and look in the Xadata subdirectory.) c. Click Open. 4. Create a new data catalog in the repository. a. On the Edit menu, click Create a New Data Catalog. The Create Data Catalog dialog box appears. b. Enter the following information: v Name: CUSTOMER SAMPLE - IDMS v Type: IDMS
c. Click OK. 5. Load the CA-IDMS schema and subschema, so that you can use them for reference when creating logical tables. a. On the File menu, click Load CA-IDMS Schema for Reference. b. In the Load CA-IDMS Schema File dialog box, select the cacidsch.sch schema file that you transferred from the mainframe and click OK.
c. After loading the schema, the Data Mapper prompts you to load a subschema. Click OK. d. In the Load CA-IDMS Schema File dialog box, select the cacidsub.sub subschema file that you transferred from the mainframe and click OK. e. The Data Mapper confirms that the load operation was successful. Click OK. 6. List the tables in the CUSTOMER SAMPLE - IDMS data catalog. a. In the Sample.mdb repository window, click in the first cell of the CUSTOMER SAMPLE - IDMS row to select that data catalog.
b. On the Window menu, click List Tables. The initial list of tables is empty.
7. Create a new table in the CUSTOMER SAMPLE - IDMS data catalog. The following steps create a logical table that includes the CA-IDMS records EMPLOYEE and DEPARTMENT as defined by the set DEPT-EMPLOYEE. a. On the Edit menu, click Create a New Table. b. In the Create CA-IDMS Table dialog box, enter the following table properties: v Name: EMPLIDMS v Owner: CAC v Database Name: EMPDEMO c. Specify the path information: 1) From the Record Name drop down list, select EMPLOYEE. 2) In the Add’l Recs section, click the Insert button. (The Insert button has a set of rows and a blue arrow pointing to the left.) 3) In the Target Record drop down list, select DEPARTMENT.
d. Use the default values for all other fields. e. Click OK. 8. Import the field (column) definitions for the records EMPLOYEE and DEPARTMENT from the currently-loaded schema. a. On the File menu, click Import External File. b. The Data Mapper asks if you want to import from the existing schema. Verify that the schema is cacidsch.sch, and click Yes. The Import - CA-IDMS Record Select dialog box appears. Notice that the EMPLOYEE and DEPARTMENT records are selected for import.
c. Click Continue. The Import Copybook dialog box appears.
d. Click Import. The COBOL definitions are imported from the loaded schema into the table CAC.EMPLIDMS, as shown in the Columns for IDMS Table
EMPLIDMS window.
9. Close all windows within the Data Mapper except the Sample.mdb repository window. 10. Generate USE statements for the mappings that you created. a. In the Sample.mdb repository window, select the data catalog CUSTOMER SAMPLE - IDMS. b. On the File menu, click Generate USE Statements. c. Enter a file name (such as idms.use) for the generated USE statements and click OK. d. After the Data Mapper generates the USE statements, it offers you a chance to see the USE statements (metadata grammar) that were generated. Click Yes to open the USE statements in Microsoft Notepad. The following sample shows a portion of the completed metadata grammar:

DROP TABLE CAC.EMPLIDMS;
USE TABLE CAC.EMPLIDMS DBTYPE IDMS EMPSCHM
   SUBSCHEMA IS EMPSS01 VERSION IS 100
   DBNAME IS EMPDEMO
   PATH IS ( EMPLOYEE, SET IS DEPT-EMPLOYEE, DEPARTMENT )
(
/* COBOL Name EMP-ID-0415 */
EMP_ID_0415 SOURCE DEFINITION ENTRY EMPLOYEE EMP-ID-0415
   USE AS CHAR(4),
/* COBOL Name EMP-FIRST-NAME-0415 */
EMP_FIRST_NAME_0415 SOURCE DEFINITION ENTRY EMPLOYEE EMP-FIRST-NAME-0415
   USE AS CHAR(10),
...
DEPT_HEAD_ID_0410 SOURCE DEFINITION ENTRY DEPARTMENT DEPT-HEAD-ID-0410
   USE AS CHAR(4)
);
Loading the metadata catalog To load the metadata catalog with the tables that you created in the previous section: 1. Transfer (via FTP) the metadata USE grammar file that you generated (idms.use) to the SCACSAMP data set on the mainframe. For this exercise, name the SCACSAMP data set member IDMSUSE. 2. If you have not already allocated the metadata catalog that is used by the system, do so now. Sample JCL to allocate the metadata catalog is provided in the SCACSAMP data set, in a member called CACCATLG. To allocate the metadata catalog: a. Customize the CACCATLG JCL to run in your environment. b. Submit the JCL.
c. After this job completes, ensure that the server procedure in the PROCLIB points to the newly-created metadata catalogs using the CACCAT and CACINDX DD statements. d. Ensure that the CACCAT and CACINDX DD statements are uncommented in the JCL. 3. Load the metadata catalog using the USE grammar as input. Sample JCL to load the metadata catalog is provided in the SCACSAMP data set, in a member called CACMETAU. To load the metadata catalog: a. Customize the CACMETAU JCL to run in your environment: 1) Ensure that the symbolic GRAMMAR is pointing to the appropriate metadata grammar member (GRAMMAR=IDMSUSE). 2) Uncomment and set the IDMS symbolic, the STEPLIB DDs that point to your CA-IDMS libraries, the DDLPUNCH DD, and the SYSCTL DD. 3) Ensure that the CACCAT DD and CACINDX DD refer to the catalogs created using the CACCATLG JCL. b. Submit the JCL. After this job has been run successfully, the metadata catalog is loaded with the logical tables created in the Data Mapper. A return code of 4 is expected. The DROP TABLE fails because the table does not exist yet. Important: When the metadata catalog is initialized, security is set up with System Administration authority (SYSADM) granted to the user ID who installed DB2 Information Integrator Classic Federation for z/OS and ran the metadata utility. That user ID is the only user who can access the system or the metadata catalogs. To turn off security, the new System Administrator must either grant SYSADM authorization to PUBLIC, allowing all users access and thus negating security, or grant table access authority to individual user IDs. 4. Grant the appropriate access rights to users. Sample JCL to grant access rights is contained in the SCACSAMP data set, in a member called CACGRANT. The JCL loads the catalogs with the appropriate access rights to the tables. The job reads its input from the CACGRIN member, which contains the GRANTs required to access the sample tables.
If you are bringing your own tables online with DB2 Information Integrator Classic Federation for z/OS, you must add the appropriate GRANTs for the new tables. See the information about SQL security in IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing for more information. To grant access rights: a. Customize the CACGRANT JCL to run in your environment by performing the following steps: 1) Ensure the symbolic GRAMMAR is pointing to the appropriate member containing the desired security commands. 2) Ensure that the CACCAT and CACINDX DDs refer to the catalogs created using the CACCATLG JCL. 3) Review CACGRIN and uncomment the appropriate GRANT for your database. b. Submit the job CACGRANT.
After this job completes, the catalogs have been loaded with the desired security. Note: If you plan to use tools that require access to the metadata catalog information, you must run CACGRANT using CACGRSYS as the input.
Validating the setup This section explains how to verify that DB2 Information Integrator Classic Federation for z/OS is correctly configured to access your CA-IDMS data.
Accessing CA-IDMS data with SQL locally This section explains how to access CA-IDMS data by issuing SQL statements locally to the data server. Accessing CA-IDMS locally with SQL can be useful if you ever need to diagnose a problem. By running the SQL locally, you eliminate potential problem areas such as the network and the client machine (for example, Windows or UNIX). It also speeds up the resolution process because all of the diagnostics are occurring on the mainframe. To access CA-IDMS data locally using SQL: 1. Review the SERVICE INFO ENTRY (SIE) for CACSAMP. a. In the SCACCONF data set, look at the data server configuration file called CACDSCF. b. Within the CACDSCF file, search for the SERVICE INFO ENTRY (SIE) that contains the word CACSAMP: SERVICE INFO ENTRY = CACQP CACSAMP ....
c. Uncomment this configuration parameter. The configuration parameter defines the data source name that is used by the client to connect to the data server. The name CACSAMP is the data source name that will be defined as an ODBC data source. To create additional data source names, replicate this configuration parameter and give it a different data source name. For example:
SERVICE INFO ENTRY = CACQP CACSAMP ....
SERVICE INFO ENTRY = CACQP ACCOUNTING...
In this case, there are two valid data source names that can be configured on the client side. The first is the default CACSAMP, and the second is ACCOUNTING. You can create as many of these SIEs as you need. Using different SIEs is a good way to separate different users based on their business needs. This example uses the default data source CACSAMP.
2. Ensure that the CA-IDMS symbolics and DDs have been uncommented and modified for your environment. These include the DDs for the STEPLIB concatenation and the SYSCTL DD statement.
3. Start the data server with the operator command s cacds. See IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing for detailed information about MTO operator commands.
4. Access data from the local client on the mainframe:
a. Identify the CACSQL member, which is contained in the SCACSAMP data set. The CACSQL member contains sample SQL for the various databases supported by DB2 Information Integrator Classic Federation for z/OS. It is the input to the CACCLNT JCL, pointed to by its SYSIN DD.
b. In the CACSQL member, uncomment the SQL statements for CA-IDMS.
DB2 II Getting Started with Classic Federation
Note: If you are testing your own CA-IDMS tables, you can create your own SQL in the same member (CACSQL) or create another member for your SQL statements.
c. Configure the client. The client configuration file is used to communicate with the data server using the communication protocol defined in the data server. In the SCACCONF data set there is a member called CACUSCF. Configure the DATASOURCE parameter based on the communications protocol set up in the data server, as described in IBM DB2 Information Integrator Installation Guide for Classic Federation and Classic Event Publishing.
d. Customize and submit the local client job CACCLNT. In the SCACSAMP data set there is a member called CACCLNT. This job executes the client batch job to issue SQL to the data server, using CACSQL as the input SQL. Customize the JCL to run in your environment and submit it.
e. View the output. The output should contain each SQL statement that is issued and the corresponding result sets.
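The batch client's handling of its SYSIN input can be pictured with a minimal sketch: it reads the member's text and issues each semicolon-terminated SQL statement in turn. This is an illustration only, and the statements shown are hypothetical stand-ins for the CACSQL samples.

```python
# Hedged sketch of SYSIN-style input processing: split the member text
# into individual semicolon-terminated SQL statements. The statements
# below are hypothetical stand-ins for the CACSQL samples.
def split_sql(member_text):
    """Return the individual SQL statements in SYSIN-style input."""
    return [s.strip() for s in member_text.split(";") if s.strip()]

cacsql = """
SELECT * FROM CAC.EMPLOYEE;
SELECT COUNT(*) FROM CAC.EMPLOYEE;
"""

for statement in split_sql(cacsql):
    print(statement)  # the real client prints each statement and its result set
```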
Accessing CA-IDMS data with SQL remotely
Connecting remotely to the DB2 Information Integrator Classic Federation for z/OS data server requires the use of one of the DB2 Information Integrator Classic Federation for z/OS clients and an application that is compliant with ODBC, JDBC, or CLI standards. See IBM DB2 Information Integrator Client Guide for Classic Federation and Classic Event Publishing for detailed information about configuring and using the various DB2 Information Integrator Classic Federation for z/OS clients.
Chapter 6. Setting up DB2 Universal Database for z/OS
This chapter explains how to set up IBM DB2 Information Integrator Classic Federation for z/OS to access data in DB2 Universal Database for z/OS (DB2 UDB for z/OS). It includes the following sections:
v “Overview” on page 51
v “Setting up the interface to DB2 Universal Database for z/OS” on page 51
v “Mapping the DB2 Universal Database table definitions to logical tables” on page 52
v “Loading the metadata catalog” on page 52
v “Validating the setup” on page 53
Overview
The following sections describe how to enable SQL access to DB2 Universal Database for z/OS (DB2 UDB). They explain how to set up a sample DB2 UDB database table called EMP, which is usually created during installation of DB2 UDB under a sample Employees database. The EMP table contains 42 records of employee information. If the Employees sample is not available at your site, you can use one of your own DB2 UDB databases. Although this chapter uses a sample database, you use the same general steps to enable SQL access to your own DB2 UDB databases:
v Set up the interface to DB2 UDB.
v Map the DB2 UDB tables to DB2 Information Integrator Classic Federation for z/OS logical tables.
v Load the metadata catalog with the logical tables.
v Verify SQL access to the DB2 UDB data.
These steps are described in more detail in the following sections. For additional information about developing and deploying applications with DB2 Information Integrator Classic Federation for z/OS, see IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing.
Note: In all of the jobs that are described in this section, you must customize the JCL as appropriate for your site. For example, you may need to concatenate libraries specific to DB2 UDB that are provided by IBM. Templates for these libraries are included in the JCL. You must uncomment them and provide the appropriate high-level qualifiers.
Setting up the interface to DB2 Universal Database for z/OS
Before you can set up access to data in DB2 Universal Database for z/OS, you must bind an application plan for use by the DB2 UDB Call Attachment Facility (CAF) service. You must have BINDADD authority to run the BIND job. To bind an application plan:
1. Edit the bind control card CACBCNTL. Edit the CACBCNTL member in the SCACSAMP data set and change the term DSN to the appropriate DB2 UDB subsystem ID for your site. If your site
requires plan names to match specific standards, you can change the plan name from CAC82PLN to a name that fits those standards.
2. Run the bind job CACBIND. Edit the bind job CACBIND to conform to your environment. This job uses the provided DBRM (MSLP2EC) to create a plan named CAC82PLN (or the name that you provided in CACBCNTL).
3. Submit the CACBIND job.
4. Provide EXECUTE authority on the plan for users of the DB2 UDB for z/OS system catalog. For example, you can use the following statement if you want to give EXECUTE authority to all users. You can run the GRANT statement using either SQL Processor Using File Input (SPUFI) in the DB2 Interactive (DB2I) tool or a DB2 UDB batch job:
GRANT EXECUTE ON PLAN CAC82PLN TO PUBLIC;
Mapping the DB2 Universal Database table definitions to logical tables
This section explains how to import DB2 Universal Database table definitions. It uses the sample CACDB2UG member, which is contained in the SCACSAMP data set on the mainframe. CACDB2UG contains an example of the syntax that is used to extract information about the layout of the sample EMP table from the DB2 Universal Database for z/OS system catalog and to create the DB2 Information Integrator Classic Federation for z/OS logical table CAC.EMPLDB2. To import the DB2 Universal Database table definitions, customize the statements in member CACDB2UG for your environment:
1. Change the value of DSN to the appropriate DB2 Universal Database subsystem ID for your site.
2. If you bound the plan with another name, change the plan name CAC82PLN accordingly.
3. Provide the fully qualified two-part DB2 Universal Database table name (such as CAC.EMPLDB2) for the sample table EMP.
Loading the metadata catalog
To load the DB2 Information Integrator Classic Federation for z/OS metadata catalog with the table that you created in the previous section:
1. If you have not already allocated the metadata catalog that is used by the system, do so now. Sample JCL to allocate the metadata catalog is provided in the SCACSAMP data set, in a member called CACCATLG. To allocate the metadata catalog:
a. Customize the CACCATLG JCL to run in your environment.
b. Submit the JCL.
c. After this job completes, ensure that the server procedure in the PROCLIB points to the newly created metadata catalogs using the CACCAT and CACINDX DD statements.
d. Ensure that the CACCAT and CACINDX DD statements are uncommented in the JCL.
2. Load the metadata catalogs. Sample JCL to load the metadata catalog is provided in the SCACSAMP data set, in a member called CACMETAU. To load the metadata catalog:
a. Customize the CACMETAU JCL to run in your environment:
1) Ensure that the symbolic GRAMMAR is pointing to the appropriate metadata grammar member (GRAMMAR=CACDB2UG).
2) Ensure that the CACCAT and CACINDX DDs refer to the catalogs created using the CACCATLG JCL.
b. Submit the CACMETAU JCL. After this job has run successfully, the metadata catalog is loaded with the logical tables. A return code of 4 is expected: the DROP TABLE fails because the table does not exist yet.
Important: When the metadata catalog is initialized, security is set up with System Administration authority (SYSADM) granted to the user ID that installed DB2 Information Integrator Classic Federation for z/OS and ran the metadata utility. That user ID is the only user who can access the system or the metadata catalogs. To turn off security, the new System Administrator must either grant SYSADM authorization to PUBLIC, allowing all users access and thus negating security, or grant table access authority to individual user IDs.
3. Grant the appropriate access rights to users. Sample JCL to grant access rights is contained in the SCACSAMP data set, in a member called CACGRANT. The JCL loads the catalogs with the appropriate access rights to the tables. The job reads its input from the CACGRIN member, which contains the GRANTs required to access the sample tables. If you are bringing your own tables online with DB2 Information Integrator Classic Federation for z/OS, you must add the appropriate GRANTs for the new tables. See the information about SQL security in IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing for more information. To grant access rights:
a. Customize the CACGRANT JCL to run in your environment by performing the following steps:
1) Ensure that the symbolic GRAMMAR is pointing to the appropriate member containing the desired security commands. For example, GRAMMAR=CACGRIN.
2) Ensure that the CACCAT and CACINDX DDs refer to the catalogs created using the CACCATLG JCL.
3) Review CACGRIN and uncomment the appropriate GRANT for your database.
b. Submit the JCL. If you plan to use tools that require access to the metadata catalog information, you must run CACGRANT using CACGRSYS as the input. After this job completes, the catalogs have been loaded with the desired security.
Validating the setup
This section explains how to verify that DB2 Information Integrator Classic Federation for z/OS is correctly configured to access your data in DB2 Universal Database for z/OS.
Accessing DB2 Universal Database data with SQL locally
This section explains how to access DB2 Universal Database data by issuing SQL statements locally to the data server. Accessing DB2 Universal Database locally with SQL can be useful if you ever need to diagnose a problem. By running the SQL locally, you eliminate potential problem areas such as the network and the client machine (for example, Windows or UNIX). It also speeds up the resolution process because all of the diagnostics occur on the mainframe. To access the DB2 Universal Database data with SQL:
1. Review the SERVICE INFO ENTRY (SIE) for CACSAMP.
a. In the SCACCONF data set, see the data server configuration file called CACDSCF.
b. Within the CACDSCF, search for the SERVICE INFO ENTRY (SIE) containing the word CACSAMP:
SERVICE INFO ENTRY = CACQP CACSAMP ....
c. Uncomment this configuration parameter. The configuration parameter defines the data source name that is used by the client to connect to the DB2 Information Integrator Classic Federation for z/OS data server. To create additional data source names, simply replicate this configuration parameter and give it a different data source name. For example:
SERVICE INFO ENTRY = CACQP CACSAMP 2 1 4 5 4 5M 5M CACQPCF
SERVICE INFO ENTRY = CACQP ACCOUNTING . . .
In this case, there are two valid data source names that can be configured on the client side. The first is the default CACSAMP, and the second is ACCOUNTING. You can create as many of these SIEs as you need. Using different SIEs is a good way to separate different users based on their business needs. This example uses the default data source CACSAMP.
d. Define a CAF service for DB2 Universal Database access. DB2 Universal Database connections are created and managed by a separate DB2 Universal Database CAF service. The following example shows the configuration of this service:
SERVICE INFO ENTRY = CACCAF DSN 2 1 5 1 4 5M 5M CAC82PLN
The service name DSN in the CAF service identifies the DB2 Universal Database subsystem that the service connects to for user queries. Change DSN to your DB2 Universal Database subsystem name. This service has a minimum task count of 1, a maximum task count of 5, and a maximum connections count of 1. Unlike a query processor task, the CAF task requires the maximum connections count to be 1 because each task can create only one connection to DB2 Universal Database. Based on the definition in the example, a maximum of 5 concurrent users can access the DB2 Universal Database subsystem DSN at a time. Based on the preceding SERVICE INFO ENTRY example, some users might not be able to connect to DB2 Universal Database, because the query processor service supports 20 concurrent users while the DB2 Universal Database CAF service supports only 5.
2. Ensure that the DB2 Universal Database symbolic and STEPLIB DD have been uncommented and modified for your environment.
3. Start the data server with the operator command s cacds. See IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing for detailed information about MTO operator commands.
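The capacity arithmetic above (tasks multiplied by connections per task) can be checked with a short sketch. This is an illustration only: the field positions are inferred from the two SERVICE INFO ENTRY examples in this section, not from a published format specification.

```python
# Illustration of the task-count arithmetic described above. Field
# positions are inferred from this section's examples (the fifth and
# sixth values after the equal sign being maximum tasks and maximum
# connections per task) -- an assumption, not a documented format.
def max_concurrent_users(sie_line):
    fields = sie_line.split("=", 1)[1].split()
    max_tasks = int(fields[4])                 # maximum task count
    max_connections_per_task = int(fields[5])  # maximum connections per task
    return max_tasks * max_connections_per_task

qp_sie  = "SERVICE INFO ENTRY = CACQP CACSAMP 2 1 4 5 4 5M 5M CACQPCF"
caf_sie = "SERVICE INFO ENTRY = CACCAF DSN 2 1 5 1 4 5M 5M CAC82PLN"

print(max_concurrent_users(qp_sie))   # 20 concurrent query processor users
print(max_concurrent_users(caf_sie))  # 5 concurrent DB2 UDB connections
```

This makes the mismatch concrete: 20 query processor users can compete for only 5 CAF connections, so some connection attempts to DB2 UDB may have to wait or fail.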
4. Access data from the local client on the mainframe:
a. Identify the CACSQL member, which is contained in the SCACSAMP data set. The CACSQL member contains sample SQL for the various databases supported by DB2 Information Integrator Classic Federation for z/OS. It is the input to the CACCLNT JCL, pointed to by its SYSIN DD.
b. In the CACSQL member, uncomment the SQL statements for DB2 Universal Database.
Note: If you are testing your own DB2 Universal Database tables, you can create your own SQL in the same member (CACSQL) or create another member for your own SQL statements.
c. Configure the client. The client configuration file is used to communicate with the data server using the communication protocol defined in the data server. In the SCACCONF data set is a member called CACUSCF. Configure the DATASOURCE parameter based on the communications protocol set up in the data server, as described in IBM DB2 Information Integrator Installation Guide for Classic Federation and Classic Event Publishing.
d. Customize and submit the local client job CACCLNT. In the SCACSAMP data set there is a member called CACCLNT. This job executes the client batch job to issue SQL to the data server, using CACSQL as the input SQL. Customize the JCL to run in your environment and submit it.
e. View the output. The output should contain each SQL statement that is issued and the corresponding result sets.
Accessing DB2 Universal Database data with SQL remotely
Connecting remotely to the DB2 Information Integrator Classic Federation for z/OS data server requires the use of one of the DB2 Information Integrator Classic Federation for z/OS clients and an application that is compliant with ODBC, JDBC, or CLI standards. See IBM DB2 Information Integrator Client Guide for Classic Federation and Classic Event Publishing for detailed information about configuring and using the various DB2 Information Integrator Classic Federation for z/OS clients.
Chapter 7. Setting up IMS
This chapter explains how to set up IBM DB2 Information Integrator Classic Federation for z/OS to access IMS data. It includes the following sections:
v “Overview” on page 57
v “Mapping the sample IMS DBD to logical tables” on page 57
v “Loading the metadata catalog” on page 65
v “Validating the setup” on page 67
Overview
The following sections describe how to enable SQL access to IMS data. They explain how to set up access to a sample IMS database called DI21PART. Your IMS system should have the DI21PART database installed by default. The SCACSAMP data set on the mainframe contains the DBD and COBOL copybooks that describe the DI21PART database. The DBD is contained in a member called CACIMPAR, and the two COBOL copybooks are in members CACIMROT (PARTROOT segment) and CACIMSTO (STOKSTAT segment). You will need these files to complete the following sections. The following sections create a mapping for the recommended data capture options for root and child segments. Although this chapter uses a sample database, you use the same general steps to enable SQL access to your own IMS databases:
v Identify the DBD and COBOL copybooks that describe the database.
v Map the DBD to logical tables and export those definitions as metadata (USE) grammar.
v Load the USE grammar into the metadata catalog.
v Verify SQL access to the IMS data.
These steps are described in more detail in the following sections. For additional information about developing and deploying applications with DB2 Information Integrator Classic Federation for z/OS, see IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing.
Note: In all of the jobs that are described in this section, you must customize the JCL as appropriate for your site. For example, you may need to concatenate libraries specific to IMS that are provided by the vendor. Templates for these libraries are included in the JCL. You must uncomment them and provide the appropriate high-level qualifiers.
Mapping the sample IMS DBD to logical tables
To map the DI21PART database:
1. Prepare the sample COBOL copybooks:
a. Transfer (via FTP) the CACIMPAR, CACIMROT, and CACIMSTO members from the SCACSAMP data set to the workstation where the Data Mapper is installed. The following steps assume that the files are located in the
Samples subdirectory of the Data Mapper installation directory. By default, the location is C:\Program Files\IBM\DB2IIClassic82\Data Mapper\Samples.
b. Rename the files to follow Windows naming conventions:
v cacimpar.dbd
v cacimrot.fd
v cacimsto.fd
2. Start the Data Mapper. On the Windows Start menu, open IBM DB2 Information Integrator Classic Tools and click Data Mapper.
3. Open the sample repository Sample.mdb.
a. On the Data Mapper File menu, click Open Repository.
b. In the Open Repository dialog, select the Sample.mdb repository file. (If the Sample.mdb repository isn't listed in the working directory, browse to the Data Mapper installation directory C:\Program Files\IBM\DB2IIClassic82\Data Mapper and look in the Xadata subdirectory.)
c. Click Open.
4. Create a new data catalog in the repository.
a. On the Edit menu, click Create a New Data Catalog. The Create Data Catalog dialog box appears.
b. Enter the following information:
v Name: Parts Catalog - IMS
v Type: IMS
v Change Capture: If you are also using DB2 Information Integrator Classic Event Publisher and you want to enable change capture for the data in this catalog, select this check box.
c. Click OK.
5. Load the IMS DL/I DBD, so that you can use it for reference when creating logical tables.
a. On the File menu, click Load DL/I DBD for Reference.
b. In the Load DBD File dialog box, select the DBD that you transferred from the mainframe (cacimpar.dbd) and click OK.
The DL/I DBD window appears.
6. List the tables in the Parts Catalog - IMS data catalog.
a. In the Sample.mdb repository window, click in the first cell of the Parts Catalog - IMS row to select that data catalog.
b. On the Window menu, click List Tables. The initial list of tables is empty.
7. Create a new table in the Parts Catalog - IMS data catalog for the PARTROOT segment.
The following steps create a logical table that includes the IMS root segment PARTROOT and the leaf segment PARTROOT as defined by the DBD.
a. On the Edit menu, click Create a New Table.
b. In the Create IMS Table dialog box, enter the following table properties:
v Name: automatically populated from the Leaf Seg field
v Owner: CAC
v Index Root: PARTROOT
v Leaf Seg: PARTROOT
PARTROOT is referred to as the leaf segment because it acts as the leaf segment as defined by this logical table.
v PSB Name: DFSSAM03
This is the PSB name that is scheduled at run time by DBCTL. You can use the PSB named DFSSAM03 for the DI21PART database.
v PSB Prefix: Enter the PCB prefix for the PSB name. For more about PCB prefixes, see IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing.
c. Click OK.
8. Import the field (column) definitions from the CACIMROT copybook that you transferred from the SCACSAMP data set into the PARTROOT table.
a. In the Tables for Data Catalog Parts Catalog - IMS window, select the PARTROOT table by clicking in the first cell of its row.
b. On the File menu, click Import External File.
c. Select the cacimrot.fd copybook file that you transferred to your workstation.
d. Click OK.
e. In the Import Copybook dialog box, confirm the information.
Make sure the PARTROOT segment in the Seg Name field is selected, because it is the segment for which you are loading the copybook.
f. Click Import. The COBOL definitions are imported from the cacimrot.fd copybook into the table CAC.PARTROOT and mapped to SQL data types. The resulting table columns are shown in the Columns for VSAM Table PARTROOT window.
You have now created a logical table mapping that matches the data access options that you specified for the PARTROOT segment. The following steps show you how to create the logical table for the STOKSTAT segment.
9. Create a new table in the Parts Catalog - IMS data catalog for the STOKSTAT segment.
a. Go to the IMS Tables for Data Catalog Parts Catalog - IMS window.
b. On the Edit menu, click Create a New Table.
c. In the Create IMS Table dialog box, enter the following table properties:
v Name: automatically populated from the Leaf Seg field
v Owner: CAC
v Index Root: PARTROOT
v Leaf Seg: STOKSTAT
STOKSTAT is referred to as the leaf segment because it acts as the leaf segment as defined by this logical table.
v PSB Name: DFSSAM03
This is the PSB name that is scheduled at run time by DBCTL. You can use the PSB named DFSSAM03 for the DI21PART database. v PSB Prefix: Enter the PCB prefix for the PSB name. For more about PCB prefixes, see IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing.
d. Click OK.
10. Import the field (column) definitions from the CACIMROT copybook that you transferred from the SCACSAMP data set into the STOKSTAT table.
a. In the Tables for Data Catalog Parts Catalog - IMS window, select the STOKSTAT table by clicking in the first cell of its row.
b. On the File menu, click Import External File.
c. Select the cacimrot.fd copybook file that you transferred to your workstation.
d. Click OK.
e. In the Import Copybook dialog box, confirm the information.
Make sure the STOKSTAT segment in the Seg Name field is selected, because it is the segment for which you are loading the copybook.
f. Click Import. The COBOL definitions are imported from the cacimrot.fd copybook into the table CAC.STOKSTAT and mapped to SQL data types. The resulting table columns are shown in the Columns for VSAM Table STOKSTAT window.
11. Import the field (column) definitions from the CACIMSTO copybook that you transferred from the SCACSAMP data set into the STOKSTAT table.
a. In the Tables for Data Catalog Parts Catalog - IMS window, select the STOKSTAT table by clicking in the first cell of its row.
b. On the File menu, click Import External File.
c. Select the cacimsto.fd copybook file that you transferred to your workstation.
d. Click OK.
e. In the Import Copybook dialog box, make sure that the following fields are set correctly:
v Append to Existing Columns is checked
v Seg Name is STOKSTAT
f. Click Import. The COBOL definitions are imported from the cacimsto.fd copybook into the table CAC.STOKSTAT and mapped to SQL data types. The resulting table columns are appended to the CACIMROT columns and appear in the Columns for VSAM Table STOKSTAT window.
You have now defined a logical table that includes a root and a child segment.
12. Close all windows within the Data Mapper except the Sample.mdb repository window.
13. Generate USE statements for the mappings that you created.
a. In the Sample.mdb repository window, select the data catalog Parts Catalog - IMS.
b. On the File menu, click Generate USE Statements.
c. Enter a file name (such as parts.use) for the generated USE statements and click OK.
d. After the Data Mapper generates the USE statements, it offers you a chance to see the USE statements (metadata grammar) that were generated. Click Yes to open the USE statements in Microsoft Notepad.
The following sample shows a portion of the completed metadata grammar:

DROP TABLE CAC.PARTROOT;
USE TABLE CAC.PARTROOT DBTYPE IMS
  DI21PART INDEXROOT PARTROOT PARTROOT SCHEDULEPSB DFSSAM03
(
  /* COBOL Name PARTCOD */
  PARTCOD SOURCE DEFINITION ENTRY PARTROOT
    DATAMAP OFFSET 0 LENGTH 2 DATATYPE C
    USE AS CHAR(2),
  /* COBOL Name PARTNO */
  PARTNO SOURCE DEFINITION ENTRY PARTROOT
    DATAMAP OFFSET 2 LENGTH 15 DATATYPE C
    USE AS CHAR(15),
  /* COBOL Name DESCRIPT */
  DESCRIPT SOURCE DEFINITION ENTRY PARTROOT
    DATAMAP OFFSET 26 LENGTH 20 DATATYPE C
    USE AS CHAR(20)
);
ALTER TABLE CAC.PARTROOT DATA CAPTURE CHANGES;
DROP TABLE CAC.STOKSTAT;
USE TABLE CAC.STOKSTAT DBTYPE IMS
  DI21PART INDEXROOT PARTROOT STOKSTAT SCHEDULEPSB DFSSAM03
(
  /* COBOL Name PARTCOD */
  PARTCOD SOURCE DEFINITION ENTRY STOKSTAT
    DATAMAP OFFSET 0 LENGTH 2 DATATYPE C
    USE AS CHAR(2),
  /* COBOL Name PARTNO */
  PARTNO SOURCE DEFINITION ENTRY STOKSTAT
    DATAMAP OFFSET 2 LENGTH 15 DATATYPE C
    USE AS CHAR(15),
  /* COBOL Name DESCRIPT */
  . . .
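To show how the Data Mapper's output relates copybook fields to grammar clauses, here is a hedged sketch that rebuilds the PARTROOT column clauses from the sample above. It is a simplification covering CHAR (DATATYPE C) fields only; the builder function is illustrative, not a product API.

```python
# Hedged sketch: rebuild the CHAR column clauses from the sample PARTROOT
# mapping above. Only DATATYPE C fields are handled; this is an
# illustration, not the Data Mapper's actual generator.
def column_clause(name, segment, offset, length):
    return (f"{name} SOURCE DEFINITION ENTRY {segment} "
            f"DATAMAP OFFSET {offset} LENGTH {length} DATATYPE C "
            f"USE AS CHAR({length})")

# (name, record offset, length) taken from the sample grammar above
partroot_fields = [("PARTCOD", 0, 2), ("PARTNO", 2, 15), ("DESCRIPT", 26, 20)]
for name, offset, length in partroot_fields:
    print(column_clause(name, "PARTROOT", offset, length))
```

Note how each clause carries both the physical location of the field in the segment (OFFSET, LENGTH) and the SQL type it is exposed as (USE AS CHAR).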
Loading the metadata catalog
To load the metadata catalog with the tables that you created in the previous section:
1. Transfer (via FTP) the metadata USE grammar file that you generated (parts.use) to the SCACSAMP data set on the mainframe. For this exercise, name the SCACSAMP data set member PARTSUSE.
2. If you have not already allocated the metadata catalog that is used by the system, do so now. Sample JCL to allocate the metadata catalog is provided in the SCACSAMP data set, in a member called CACCATLG. To allocate the metadata catalog:
a. Customize the CACCATLG JCL to run in your environment, specifying the metadata catalog data set name and data set allocation information.
b. Submit the JCL.
3. When the CACCATLG job completes:
v Make sure that the data server procedure in the PROCLIB into which you copied the CACCS member points to the newly created metadata catalog using the CACCAT and CACINDX DD statements.
v Make sure that the CACCAT and CACINDX DD statements are uncommented in the JCL.
4. Load the metadata catalog using the metadata grammar as input.
Sample JCL to load the metadata catalog is provided in the SCACSAMP data set, in a member called CACMETAU. To load the metadata catalog:
a. Customize the CACMETAU JCL to run in your environment:
1) Ensure that the symbolic GRAMMAR is pointing to the appropriate metadata grammar member (GRAMMAR=PARTSUSE).
2) Uncomment and set the IMS symbolic and the DBDLIB DD that points to your IMS DBD library.
3) Make sure that the CACCAT and CACINDX DD statements refer to the metadata catalogs created using the CACCATLG JCL.
b. Submit the CACMETAU JCL. After this job has run successfully, the metadata catalog is loaded with the logical table created in the Data Mapper. A return code of 4 is expected: the DROP TABLE fails because the table does not exist yet.
Important: When the metadata catalog is initialized, security is set up with System Administration authority (SYSADM) granted to the user ID that installed DB2 Information Integrator Classic Federation for z/OS and ran the metadata utility. That user ID is the only user who can access the system or the metadata catalogs. To turn off security, the new System Administrator must either grant SYSADM authorization to PUBLIC, allowing all users access and thus negating security, or grant table access authority to individual user IDs.
5. Grant the appropriate access rights to users. Sample JCL to grant access rights is contained in the SCACSAMP data set, in a member called CACGRANT. The JCL loads the catalogs with the appropriate access rights to the tables. This job reads its input from the CACGRIN member, which contains the GRANTs required to access the sample tables. If you are bringing your own tables online with DB2 Information Integrator Classic Federation for z/OS, you must add the appropriate GRANTs for the new tables. See the information about SQL security in IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing for more information.
To grant access rights:
a. Customize the CACGRANT JCL to run in your environment by performing the following steps:
1) Make sure that the symbolic GRAMMAR is pointing to the appropriate member containing the desired security commands.
2) Make sure that the CACCAT and CACINDX DDs refer to the metadata catalog that was created using the CACCATLG JCL.
3) Review CACGRIN and uncomment the appropriate GRANT for your database.
b. Submit the CACGRANT JCL. After this job completes, the catalogs have been loaded with the desired security.
Note: If you plan to use tools that require access to the metadata catalog information, you must run CACGRANT using CACGRSYS as the input.
Validating the setup
This section explains how to verify that DB2 Information Integrator Classic Federation for z/OS is correctly configured to access your IMS data.
Establishing the interface to DBCTL/DRA
Now that an IMS logical table is mapped, you must establish the interface to IMS using DBCTL. DBCTL is an IMS database manager subsystem that runs in its own address space. DBCTL must be configured and running before you can use the DB2 Information Integrator Classic Federation for z/OS IMS DRA interface. To IMS, the DB2 Information Integrator Classic Federation for z/OS data server looks like a CICS region. The DRA is the interface between the DB2 Information Integrator Classic Federation for z/OS data server and DBCTL. The DRA start-up/router program is supplied with the IMS product and executes within the DB2 Information Integrator Classic Federation for z/OS data server address space. For more information on DBCTL, see the IMS documentation. To enable the DRA:
1. Make the DRA start-up/router (load module DFSPRRC0) accessible to the DB2 Information Integrator Classic Federation for z/OS data server by either of the following methods:
v Copy DFSPRRC0 from the IMS.RESLIB library (built by the IMS generation process) into the SCACLOAD load library.
v Concatenate the IMS.RESLIB library to the SCACLOAD STEPLIB in the SCACSAMP data set member CACDS.
2. Make the DRA start-up table (load module DFSPZPxx) accessible to the server by including the IMS.RESLIB library in the server JCL. The following steps activate the IMS DRA initialization service.
3. Uncomment the SERVICE INFO ENTRY for the IMS DRA initialization service task found in the SCACCONF data set member CACDSCF. This SERVICE INFO ENTRY is identified by the comments in the configuration member.
4. In the Task Data field at the end of the SERVICE INFO ENTRY parameter, specify the additional information that is used to initialize the DRA interface:
v DRA Start-up Table Suffix: Modify 00 to specify the last two characters of the load module name created in DRA setup. If you are using the default DRA start-up table load module, leave it as 00.
v DRA user ID: Modify DRAUSER to specify the default DRA user ID that is used to connect to and register with DBCTL. The DRA user ID is the name by which the DB2 Information Integrator Classic Federation for z/OS data server is known to DBCTL. If the DRA user ID is six characters long and that user ID is already in use, DB2 Information Integrator Classic Federation for z/OS appends a two-digit suffix to the user ID, starting with 00, and tries to connect with the new user ID, incrementing the suffix by one until it finds an available user ID. A similar process is performed with a seven-character user ID and a single-digit suffix.
v Default PSB Name: Modify DEFPSB to specify the name of a PSB to be used when an IMS table whose metadata grammar contains no PSB name is referenced. For more information about PSB scheduling when using a DRA interface, see the information about optimization and mapping data in IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing.
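The user ID fallback just described can be sketched as follows. This is an illustrative model only, not product code: the in_use callable is a hypothetical stand-in for DBCTL rejecting a connect/register attempt because the name is already taken.

```python
def resolve_dra_user_id(base_id: str, in_use) -> str:
    """Model of the DRA user ID suffixing described above.

    base_id -- the configured DRAUSER value.
    in_use  -- hypothetical callable reporting whether DBCTL already
               has a registration under that name.
    """
    if not in_use(base_id):
        return base_id
    if len(base_id) == 6:
        suffixes = (f"{n:02d}" for n in range(100))  # 00, 01, ..., 99
    elif len(base_id) == 7:
        suffixes = (str(n) for n in range(10))       # 0, 1, ..., 9
    else:
        raise ValueError("no room for a suffix in an 8-character user ID")
    for suffix in suffixes:
        candidate = base_id + suffix
        if not in_use(candidate):
            return candidate
    raise RuntimeError("no available DRA user ID")

# Example: a six-character DRA user ID where the base name and the
# first suffixed name are both taken.
taken = {"DRAUSR", "DRAUSR00"}
print(resolve_dra_user_id("DRAUSR", lambda u: u in taken))  # DRAUSR01
```

In the real interface the availability check is the connect/register call itself; the sketch only shows the order in which candidate user IDs are tried.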
Chapter 7. Setting up IMS
The default load module, DFSPZP00, is in the IMS.RESLIB library. For an example of DFSPZP00, see IBM IMS/ESA Installation Volume 2: System Definition and Tailoring.
Accessing IMS data with SQL locally
This section explains how to access IMS data by issuing SQL statements locally to the data server. Accessing IMS data locally with SQL can be useful when you need to diagnose a problem. By running the SQL locally, you eliminate potential problem areas such as the network and the client machine (for example, Windows or UNIX). It also speeds up the resolution process because all of the diagnostics occur on the mainframe.
To access IMS data locally, using SQL:
1. Review the SERVICE INFO ENTRY (SIE) for CACSAMP.
a. In the SCACCONF data set, look at the data server configuration file called CACDSCF.
b. Within CACDSCF, search for the SERVICE INFO ENTRY (SIE) that contains the word CACSAMP:
SERVICE INFO ENTRY = CACQP CACSAMP ....
c. Uncomment the configuration parameter shown above. The configuration parameter defines the data source name that is used by the client to connect to the data server. To create additional data source names, replicate this configuration parameter and give it a different data source name. For example:
SERVICE INFO ENTRY = CACQP CACSAMP ....
SERVICE INFO ENTRY = CACQP ACCOUNTING...
In this case, there are two valid data source names that can be configured on the client side. The first is the default CACSAMP, and the second is ACCOUNTING. You can create as many of these SIEs as you need. Using different SIEs is a good way to separate different users based on their business needs. This example uses the default data source CACSAMP.
d. Uncomment the IMS symbolic and STEPLIB DDs and modify them for your environment.
2. Start the data server with the operator command s cacds. See IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing for detailed information about MTO operator commands.
3. Access data from a local client on the mainframe:
a. Identify the CACSQL member, which is contained in the SCACSAMP data set. The CACSQL member contains sample SQL for the various databases supported by DB2 Information Integrator Classic Federation for z/OS. It is the input to the CACCLNT JCL, pointed to by its SYSIN DD.
b. In the CACSQL member, uncomment the SQL statements for IMS.
Note: If you are testing your own IMS tables, you can either create your own SQL in the same member (CACSQL) or create another member for your SQL statements.
4. Configure the client. The client configuration file is used to communicate with the data server using the communication protocol defined in the data server. In the SCACCONF data set there is a member called CACUSCF. Configure the DATASOURCE parameter based on the communications protocol set up in the
data server, as described in IBM DB2 Information Integrator Installation Guide for Classic Federation and Classic Event Publishing.
5. Customize and submit the local client job CACCLNT. In the SCACSAMP data set there is a member called CACCLNT. This job executes the client batch job to issue SQL to the data server using CACSQL as the input SQL. Customize the JCL to run in your environment and submit it.
6. View the output. The output should contain the SQL statement that is being issued and the corresponding result sets.
Accessing IMS data with SQL remotely
Connecting remotely to a DB2 Information Integrator Classic Federation for z/OS data server requires one of the DB2 Information Integrator Classic Federation for z/OS clients and an application that is compliant with the ODBC, JDBC, or CLI standards. See IBM DB2 Information Integrator Client Guide for Classic Federation and Classic Event Publishing for detailed information about configuring and using the various DB2 Information Integrator Classic Federation for z/OS clients.
Chapter 8. Setting up Sequential
This chapter explains how to set up IBM DB2 Information Integrator Classic Federation for z/OS to access Sequential databases. It includes the following sections:
v “Overview” on page 71
v “Mapping the sample Sequential copybook to logical tables” on page 71
v “Loading the metadata catalog” on page 75
v “Validating the setup” on page 76
Overview
The following sections describe how to enable SQL access to Sequential data. They describe this process using a sample Sequential file that was created during the installation of the data server. The sample Sequential file contains 34 records of employee information. A sample COBOL copybook describing this file is also provided in the SCACSAMP data set.
Although this chapter uses a sample database, you use the same general steps to enable SQL access to your own Sequential databases:
v On Windows, use the Data Mapper to create logical tables based on the Sequential copybook and to export those table definitions as metadata (USE) grammar.
v On the mainframe, use the metadata utility to load the logical tables into the data server’s metadata catalog.
v Verify SQL access to the Sequential data.
These steps are described in more detail in the following sections. For additional information about developing and deploying applications with DB2 Information Integrator Classic Federation for z/OS, see IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing.
Note: Because Sequential files do not have indexes, any SQL statement issued to this data source results in a full scan of the Sequential file.
Note: In all of the jobs that are described in this section, you must customize the JCL as appropriate for your site.
Mapping the sample Sequential copybook to logical tables
This section explains how to map Sequential files into logical relational tables. This section uses the member CACEMPFD, which is contained in the SCACSAMP data set on the mainframe. The CACEMPFD member is a sample COBOL copybook that describes the employee Sequential file that was created during the installation process.
To map the sample Sequential copybook:
1. Prepare the sample COBOL copybook:
a. Transfer (via FTP) the CACEMPFD member from the SCACSAMP data set to the workstation where the Data Mapper is installed. The following steps assume that the files are located in the Samples subdirectory of the Data
Mapper installation directory. By default, the location is C:\Program Files\IBM\DB2IIClassic82\Data Mapper\Samples.
b. Change the name of the CACEMPFD file to cacemp.fd.
2. Start the Data Mapper. On the Windows Start menu, open IBM DB2 Information Integrator Classic Tools and click Data Mapper.
3. Open the sample repository Sample.mdb.
a. On the Data Mapper File menu, click Open Repository.
b. In the Open Repository dialog, select the Sample.mdb repository file and click Open. (If the Sample.mdb repository isn’t listed in the working directory, browse to the Data Mapper installation directory C:\Program Files\IBM\DB2IIClassic82\Data Mapper and look in the Xadata subdirectory.)
4. Create a new data catalog in the repository.
a. On the Edit menu, click Create a New Data Catalog. The Create Data Catalog dialog box appears:
b. Enter the following information in the dialog box:
v Name: Employee Sample - Sequential
v Type: Sequential
v Change Capture: Unchecked
v Remarks: Any remarks about this data catalog
c. Click OK.
5. List the tables in the Employee Sample - Sequential data catalog:
a. In the Sample.mdb repository window, click in the first field (the row number) of the Employee Sample - Sequential row to select that data catalog.
b. On the Window menu, click List Tables. The initial list of tables is empty.
6. Create a new table in the Employee Sample - Sequential data catalog.
a. On the Edit menu, click Create a New Table.
b. In the Create Sequential Table dialog box, enter the following table properties:
v Name: EMPLSEQ
v Owner: CAC
v Dataset: DS
v Dataset Name: The name of the Sequential data set that is opened when the logical table that you are creating is accessed with an SQL statement. For accessing the sample data, the name is of the form:
CAC.VnRnMnn.SEQ.EMPLOYEE
In the name, VnRnMnn is the version and release number of the default installation product datasets. For example: CAC.V8R2M00.SEQ.EMPLOYEE
c. Click OK.
7. Import the field (column) definitions from the cacemp.fd copybook that you transferred from the mainframe.
a. On the File menu, click Import External File.
b. Select the cacemp.fd copybook.
c. Click OK.
d. In the Import Copybook dialog box, confirm the information.
e. Click Import. The COBOL definitions are imported from the cacemp.fd copybook into the table CAC.EMPLSEQ and mapped to SQL data types. The resulting table columns are shown in the Columns for SEQUENTIAL Table EMPLSEQ window.
8. Close all windows within the Data Mapper except the Sample.mdb repository window.
9. Generate USE statements for the mappings that you created.
a. In the Sample.mdb repository window, select the data catalog Employee Sample - Sequential.
b. On the File menu, click Generate USE Statements.
c. Enter a file name (such as emplseq.use) for the generated USE statements and click OK.
d. After the Data Mapper generates the USE statements, it offers you a chance to see the USE statements (metadata grammar) that were generated. Click Yes to open the USE statements in Microsoft Notepad. The following sample shows a portion of the generated USE grammar:
DROP TABLE CAC.EMPLSEQ;
USE TABLE CAC.EMPLSEQ DBTYPE SEQUENTIAL
DS ’CAC.V8R2M00.SEQ.EMPLOYEE’
(
/* COBOL Name ENAME */
ENAME SOURCE DEFINITION DATAMAP OFFSET 0 LENGTH 20
DATATYPE C USE AS CHAR(20),
/* COBOL Name PHONE */
PHONE SOURCE DEFINITION DATAMAP OFFSET 20 LENGTH 4
DATATYPE UF USE AS INTEGER,
/* COBOL Name MAILID */
MAILID SOURCE DEFINITION DATAMAP OFFSET 25 LENGTH 6
DATATYPE C USE AS CHAR(6),
/* COBOL Name SALARY */
. . .
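Each DATAMAP clause above binds a column to a fixed byte offset and length in the Sequential record. The following sketch (illustrative only, not the data server’s access logic) decodes one record the way the grammar describes: the DATATYPE C fields as EBCDIC text and the DATATYPE UF (unsigned fullword) field as a big-endian integer. The EBCDIC code page cp037 and the fabricated record bytes are assumptions for the demo.

```python
import struct

# (name, offset, length, decoder) -- mirrors the USE grammar's DATAMAP
# clauses above; the byte at offset 24 is not mapped by the sample grammar.
LAYOUT = [
    ("ENAME",  0, 20, lambda b: b.decode("cp037").rstrip()),  # DATATYPE C  -> CHAR(20)
    ("PHONE", 20,  4, lambda b: struct.unpack(">I", b)[0]),   # DATATYPE UF -> INTEGER
    ("MAILID", 25, 6, lambda b: b.decode("cp037").rstrip()),  # DATATYPE C  -> CHAR(6)
]

def decode_record(record: bytes) -> dict:
    """Apply the fixed offset/length map to one raw record."""
    return {name: conv(record[off:off + length])
            for name, off, length, conv in LAYOUT}

# A fabricated record for illustration (EBCDIC text, big-endian fullword).
rec = ("JONES".ljust(20).encode("cp037")
       + struct.pack(">I", 5551234)
       + b"\x00"                      # unmapped filler byte at offset 24
       + "JONESA".encode("cp037"))
print(decode_record(rec))  # {'ENAME': 'JONES', 'PHONE': 5551234, 'MAILID': 'JONESA'}
```

Because a Sequential file has no index, the server evaluates every query by scanning records like this from the start of the file; the mapping above only determines how each record’s bytes become column values.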
Loading the metadata catalog
To load the metadata catalog with the table that you created in the previous section:
1. Transfer (via FTP) the metadata USE grammar file that you generated (emplseq.use) to the SCACSAMP data set on the mainframe.
2. If you have not already allocated the metadata catalog that is used by the system, do so now. Sample JCL to allocate the metadata catalog is provided in the SCACSAMP data set, in a member called CACCATLG. To allocate the metadata catalog:
a. Customize the CACCATLG JCL to run in your environment.
b. Submit the JCL.
c. After this job completes, ensure that the server procedure in the PROCLIB points to the newly-created catalogs using the CACCAT and CACINDX DD statements.
d. Ensure that the CACCAT and CACINDX DD statements are uncommented in the JCL.
3. Load the metadata catalog using the USE grammar as input. Sample JCL to load the metadata catalog is provided in the SCACSAMP data set, in a member called CACMETAU. To load the metadata catalog:
a. Customize the CACMETAU JCL to run in your environment:
1) Ensure that the symbolic GRAMMAR is pointing to the appropriate USE grammar member (GRAMMAR=EMPLSEQUSE).
2) Ensure that the CACCAT and CACINDX DDs refer to the catalogs created using the CACCATLG JCL.
b. Submit the JCL. After this job has run successfully, the metadata catalog is loaded with the logical tables created in the Data Mapper. A return code of 4 is expected: the DROP TABLE fails because the table does not exist yet.
Important: When the metadata catalog is initialized, security is set up with System Administration authority (SYSADM) granted to the user ID that installed DB2 Information Integrator Classic Federation for z/OS and ran the metadata utility. That user ID is the only user who can access the system or the metadata catalog. To turn off
security, the new System Administrator must either grant SYSADM authorization to PUBLIC, allowing all users access and thus negating security, or grant table access authority to individual user IDs.
4. Grant the appropriate access rights to users. Sample JCL to grant access rights is contained in the SCACSAMP data set, in a member called CACGRANT. The JCL loads the catalogs with the appropriate access rights to the tables. The job reads its input from the CACGRIN member, which contains the GRANTs required to access the sample tables. If you are bringing your own tables online with DB2 Information Integrator Classic Federation for z/OS, you must add the appropriate GRANTs for the new tables. See the information about SQL security in IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing for more information. To grant access rights:
a. Customize the CACGRANT JCL to run in your environment by performing the following steps:
1) Ensure that the symbolic GRAMMAR is pointing to the appropriate member containing the desired security commands.
2) Ensure that the CACCAT and CACINDX DDs refer to the catalogs created using the CACCATLG JCL.
3) Review CACGRIN and uncomment the appropriate GRANT for your database.
b. Submit the JCL. After this job completes, the catalogs have been loaded with the desired security. If you plan to use tools that require access to the metadata catalog information, you must run CACGRANT using CACGRSYS as the input.
Validating the setup
This section explains how to verify that DB2 Information Integrator Classic Federation for z/OS is correctly configured to access your Sequential data.
Accessing the Sequential data with SQL locally
This section explains how to access Sequential data by issuing SQL statements locally to the data server. Accessing Sequential data locally with SQL can be useful when you need to diagnose a problem. By running the SQL locally, you eliminate potential problem areas such as the network and the client machine (for example, Windows or UNIX). It also speeds up the resolution process because all of the diagnostics occur on the mainframe.
To access Sequential data locally using SQL:
1. Review the SERVICE INFO ENTRY (SIE) for CACSAMP.
a. In the SCACCONF data set, look at the data server configuration file called CACDSCF.
b. Within CACDSCF, search for the SERVICE INFO ENTRY (SIE) containing the word CACSAMP:
SERVICE INFO ENTRY = CACQP CACSAMP ....
c. Uncomment this configuration parameter.
The configuration parameter defines the data source name that is used by the client to connect to the data server. The name CACSAMP is the data source name defined as an ODBC data source. To create additional data
source names, replicate this configuration parameter and give it a different data source name. For example:
SERVICE INFO ENTRY = CACQP CACSAMP ....
SERVICE INFO ENTRY = CACQP ACCOUNTING...
In this case, there are two valid data source names that can be configured on the client side. The first is the default CACSAMP, and the second is ACCOUNTING. You can create as many of these SIEs as you need. Using different SIEs is a good way to separate different users based on their business needs. This example uses the default data source CACSAMP.
2. Start the data server with the operator command s cacds. See IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing for detailed information about MTO operator commands.
3. Access data from a local client on the mainframe:
a. Uncomment the SQL statements for Sequential. In the SCACSAMP data set there is a member called CACSQL. This member contains sample SQL for the various databases supported by DB2 Information Integrator Classic Federation for z/OS. This member is the input to the CACCLNT JCL, pointed to by its SYSIN DD.
Note: If you are testing your own Sequential files, you can either create your own SQL in the same member (CACSQL) or create another member for your own SQL statements.
b. Configure the client. The client configuration file is used to communicate with the data server using the communication protocol defined in the data server. In the SCACCONF data set there is a member called CACUSCF. Configure the appropriate DATASOURCE parameter based on the communications protocol set up in the data server, as described in IBM DB2 Information Integrator Installation Guide for Classic Federation and Classic Event Publishing.
c. Execute the local client. In the SCACSAMP data set there is a member called CACCLNT. This job executes the client batch job to issue SQL to the data server using CACSQL as the input SQL. Customize the JCL to run in your environment and submit it.
d. View the output. The output should contain the SQL statement that is being issued and the corresponding result sets.
Accessing Sequential data with SQL remotely
Connecting remotely to a DB2 Information Integrator Classic Federation for z/OS data server requires one of the DB2 Information Integrator Classic Federation for z/OS clients and an application that is compliant with the ODBC, JDBC, or CLI standards. See IBM DB2 Information Integrator Client Guide for Classic Federation and Classic Event Publishing for detailed information about configuring and using the various DB2 Information Integrator Classic Federation for z/OS clients.
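Any ODBC-, JDBC-, or CLI-compliant application can serve as the remote client. As a hedged sketch (not the product’s own client code), the following shows what such an ODBC application might look like in Python using the pyodbc module, assuming a client-side data source named CACSAMP (as configured in CACUSCF) and the sample table CAC.EMPLSEQ mapped earlier in this chapter. The function cannot run without a live data server and a configured ODBC driver, so it is illustration rather than a test.

```python
# Hedged illustration: pyodbc is one ODBC binding among many; the DSN
# name "CACSAMP" and table CAC.EMPLSEQ come from this chapter's examples.

QUERY = "SELECT ENAME, PHONE, MAILID FROM CAC.EMPLSEQ"

def fetch_employees(dsn="CACSAMP", uid=None, pwd=None):
    """Run the sample query through a configured ODBC data source.

    Requires the Classic Federation client and ODBC driver plus a
    reachable data server, so this is a sketch of the remote path.
    """
    import pyodbc  # imported lazily; assumed installed with the driver
    parts = [f"DSN={dsn}"]
    if uid:
        parts.append(f"UID={uid}")
    if pwd:
        parts.append(f"PWD={pwd}")
    with pyodbc.connect(";".join(parts)) as conn:
        cursor = conn.cursor()
        cursor.execute(QUERY)
        return cursor.fetchall()
```

A JDBC client would issue the same SQL through the driver shipped with the product; the connection-string keywords used here are generic ODBC ones, not product-specific settings.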
Chapter 9. Setting up VSAM
This chapter explains how to set up IBM DB2 Information Integrator Classic Federation for z/OS to access VSAM data. It explains how to use native VSAM and CICS. It includes the following sections:
v “Overview” on page 79
v “Mapping VSAM data” on page 79
v “Loading the metadata catalog” on page 85
v “Validating the setup” on page 86
Overview
The following sections describe how to enable access to VSAM data through the native VSAM interfaces, as well as through CICS. Your VSAM installation should have a sample VSAM cluster. This VSAM cluster contains 34 records of employee information. The same process described below can be used to bring your own VSAM database online.
Although this chapter uses a sample database, you use the same general steps to enable access to your own VSAM databases:
v On Windows, use the Data Mapper to create logical tables based on the VSAM copybook and to export those table definitions as metadata (USE) grammar.
v On the mainframe, use the metadata utility to load the logical tables into the data server’s metadata catalog.
v Verify access to the VSAM data.
Note: In all the jobs that follow, you must customize the JCL as appropriate for your site.
Note: All references to VSAM files throughout the DB2 Information Integrator Classic Federation for z/OS documentation also apply to IAM files. IAM (Innovation Access Method) is supplied by Innovation Data Processing and is a reliable, high-performance disk file manager that can be used in place of VSAM KSDS or ESDS data sets. Innovation Data Processing also provides an optional Alternate Index feature that is fully supported by DB2 Information Integrator Classic Federation for z/OS. The only exceptions are references to VSAM RRDS files, which currently are not supported by IAM.
Mapping VSAM data
The SCACSAMP data set on the mainframe contains a sample COBOL copybook describing the employee VSAM cluster created during the installation process. The member name is CACEMPFD. You need this sample copybook to complete the following steps.
Mapping the sample VSAM copybook (native)
To map the sample VSAM copybook for native use:
1. Transfer the sample COBOL copybook to the workstation on which the Data Mapper is installed:
a. Transfer the CACEMPFD member from the SCACSAMP data set to the workstation where the Data Mapper is installed. The following steps assume that the files are located in the Samples subdirectory of the Data Mapper installation directory. By default, the location is C:\Program Files\IBM\DB2IIClassic82\Data Mapper\Samples.
b. Change the file name from CACEMPFD to cacemp.fd.
2. Start the Data Mapper. On the Windows Start menu, open IBM DB2 Information Integrator Classic Tools and click Data Mapper.
3. Open the sample repository Sample.mdb.
a. On the Data Mapper File menu, click Open Repository.
b. In the Open Repository dialog, select the Sample.mdb repository file and click Open. (If the Sample.mdb repository isn’t listed in the working directory, browse to the Data Mapper installation directory C:\Program Files\IBM\DB2IIClassic82\Data Mapper and look in the Xadata subdirectory.)
4. Create a new data catalog in the repository.
a. On the Edit menu, click Create a New Data Catalog. The Create Data Catalog dialog box appears.
b. Enter the following information in the dialog box:
v Name: Employee Sample - VSAM
v Type: VSAM
Note: To use the change capture feature of DB2 Information Integrator Classic Event Publisher, you must be accessing VSAM through CICS.
c. Click OK.
5. List the tables in the new Employee Sample - VSAM data catalog.
a. In the Sample.mdb repository window, click in the first column of the Employee Sample - VSAM row to select that data catalog.
b. On the Window menu, click List Tables. The initial list of tables is empty.
6. Create a new table in the Employee Sample - VSAM data catalog:
a. On the Edit menu, click Create a New Table.
b. In the Create VSAM Table dialog box, enter the following table properties:
v Name: EMPLVSAM
v Owner: CAC
v Dataset: DS
v Dataset Name: The name of the VSAM cluster that is opened when the logical table that you are creating is accessed with an SQL statement.
For accessing the sample data, the name is of the form:
CAC.VnRnMnn.VSAM.EMPLOYEE
In the name, VnRnMnn is the version and release number of the default installation datasets. For example: CAC.V8R2M00.VSAM.EMPLOYEE
v Use the default values for all other fields.
c. Click OK.
7. Import the field (column) definitions from the cacemp.fd copybook, which you transferred from the mainframe.
a. On the File menu, click Import External File.
b. Select the cacemp.fd copybook.
c. Click OK.
d. In the Import Copybook dialog box, confirm the information and then click Import. The COBOL definitions are imported from the cacemp.fd copybook into the table CAC.EMPLVSAM and mapped to SQL data types. The resulting table columns are shown in the Columns for VSAM Table EMPLVSAM window.
8. Close all windows within the Data Mapper except the Sample.mdb repository window.
9. Generate USE statements for the mappings that you created.
a. In the Sample.mdb repository window, select the data catalog Employee Sample - VSAM.
b. On the File menu, click Generate USE Statements.
c. Enter a file name (such as emplvsam.use) for the generated USE statements and click OK.
d. After the Data Mapper generates the USE statements, it offers you a chance to see the USE statements (metadata grammar) that were generated. Click Yes to open the USE statements in Microsoft Notepad. The following sample shows a portion of the generated USE grammar:
DROP TABLE CAC.EMPLVSAM;
USE TABLE CAC.EMPLVSAM DBTYPE VSAM
DS ’CAC.V8R2M00.VSAM.EMPLOYEE’
(
/* COBOL Name ENAME */
ENAME SOURCE DEFINITION DATAMAP OFFSET 0 LENGTH 20
DATATYPE C USE AS CHAR(20),
/* COBOL Name PHONE */
PHONE SOURCE DEFINITION DATAMAP OFFSET 20 LENGTH 4
DATATYPE UF USE AS INTEGER,
/* COBOL Name MAILID */
MAILID SOURCE DEFINITION DATAMAP OFFSET 25 LENGTH 6
DATATYPE C USE AS CHAR(6),
/* COBOL Name SALARY */
. . .
Mapping the sample VSAM copybook (CICS)
To map the sample VSAM copybook for access through CICS:
1. Prepare the sample COBOL copybook:
a. Transfer the CACEMPFD member from the SCACSAMP data set to the workstation where the Data Mapper is installed. The following steps assume that the files are located in the Samples subdirectory of the Data Mapper installation directory. By default, the location is C:\Program Files\IBM\DB2IIClassic82\Data Mapper\Samples.
b. Change the file name from CACEMPFD to cacemp.fd.
2. Start the Data Mapper. On the Windows Start menu, open IBM DB2 Information Integrator Classic Tools and click Data Mapper.
3. Open the sample repository Sample.mdb.
a. On the Data Mapper File menu, click Open Repository.
b. In the Open Repository dialog, select the Sample.mdb repository file and click Open. (If the Sample.mdb repository isn’t listed in the working
directory, browse to the Data Mapper installation directory C:\Program Files\IBM\DB2IIClassic82\Data Mapper and look in the Xadata subdirectory.)
4. Create a new data catalog in the repository.
a. On the Edit menu, click Create a New Data Catalog. The Create Data Catalog dialog box appears:
b. Enter the following information in the dialog box:
v Name: Employee Sample - CICS VSAM
v Type: VSAM
v Change Capture: If you are also using DB2 Information Integrator Classic Event Publisher and you want to enable change capture for the data in this catalog, select this check box.
c. Click OK.
5. List the tables in the new Employee Sample - CICS VSAM data catalog.
a. In the Sample.mdb repository window, click in the first column of the Employee Sample - CICS VSAM row to select that data catalog.
b. On the Window menu, click List Tables. The initial list of tables is empty.
6. Create a new table in the Employee Sample - CICS VSAM data catalog:
a. On the Edit menu, click Create a New Table.
b. In the Create VSAM Table dialog box, enter the following table properties:
1) Name: EMPCICS
2) Owner: CAC
3) Dataset: DD/CICS
4) Dataset Name: CICEMP
5) Local Applid: CACCICS1
6) CICS Applid: APPLID
7) Logmode: MTLU62
8) Transaction ID: EXV1
9) Remote Network Name: The remote network name, if required at your location
10) Use the default values for all other fields.
c. Click OK.
7. Import the field (column) definitions from the cacemp.fd copybook.
a. On the File menu, click Import External File.
b. Select the cacemp.fd copybook.
c. Click OK.
d. In the Import Copybook dialog box, confirm the information and then click Import.
The COBOL definitions are imported from the cacemp.fd copybook into the table CAC.EMPCICS and mapped to SQL data types. The resulting table columns are shown in the Columns for VSAM Table EMPCICS window.
8. Close all windows within the Data Mapper except the Sample.mdb repository window.
9. Generate USE statements for the mappings that you created.
a. In the Sample.mdb repository window, select the data catalog Employee Sample - CICS VSAM.
b. On the File menu, click Generate USE Statements.
c. Enter a file name (such as empcics.use) for the generated USE statements and click OK.
d. After the Data Mapper generates the USE statements, it offers you a chance to see the USE statements (metadata grammar) that were generated. Click Yes to open the USE statements in Microsoft Notepad. The following sample shows a portion of the completed metadata grammar:
DROP TABLE CAC.EMPCICS;
/* Default connection parameters for CICS File Information Access. */
/* The local APPLID and Transaction ID have been suffixed with a 2 */
/* to avoid conflicts with an active Data Server. */
CONNECT TO CICS USING APPLID "CACCICS2" "APPLID"
LOGMODE "MTLU62" TRANID "EXV2"
USE TABLE CAC.EMPCICS DBTYPE VSAM DD CACEMP
CICS APPLID "CACCICS1" "APPLID"
LOGMODE "MTLU62" TRANID "EXV1"
(
/* COBOL Name ENAME */
ENAME SOURCE DEFINITION DATAMAP OFFSET 0 LENGTH 20
DATATYPE C USE AS CHAR(20),
/* COBOL Name PHONE */
PHONE SOURCE DEFINITION DATAMAP OFFSET 20 LENGTH 4
DATATYPE UF USE AS INTEGER,
/* COBOL Name MAILID */
MAILID SOURCE DEFINITION DATAMAP OFFSET 25 LENGTH 6
DATATYPE C USE AS CHAR(6),
. . .
Loading the metadata catalog
To load the metadata catalog with the table that you created in the previous section:
1. Transfer (via FTP) the metadata USE grammar file that you generated (emplvsam.use or empcics.use) to the SCACSAMP data set on the mainframe.
2. If you have not already allocated the metadata catalog that is used by the system, do so now. Sample JCL to allocate the metadata catalog is provided in the SCACSAMP data set, in a member called CACCATLG. To allocate the metadata catalog:
a. Customize the CACCATLG JCL to run in your environment.
b. Submit the JCL.
c. After this job completes, ensure that the server procedure in the PROCLIB points to the newly-created catalogs using the CACCAT and CACINDX DD statements.
d. Ensure that the CACCAT and CACINDX DD statements are uncommented in the JCL.
3. Load the metadata catalog using the USE grammar as input. Sample JCL to load the metadata catalog is provided in the SCACSAMP data set, in a member called CACMETAU. To load the metadata catalog:
a. Customize the CACMETAU JCL to run in your environment:
1) Ensure that the symbolic GRAMMAR is pointing to the appropriate USE grammar member (GRAMMAR=EMPLUSE or GRAMMAR=EMPLCICSUSE).
2) Ensure that the CACCAT and CACINDX DDs refer to the catalogs created using the CACCATLG JCL.
b. Submit the JCL. After this job has run successfully, the metadata catalog is loaded with the logical tables created in the Data Mapper. A return code of 4 is expected: the DROP TABLE fails because the table does not exist yet.
Important: When the metadata catalog is initialized, security is set up with System Administration authority (SYSADM) granted to the user ID that installed DB2 Information Integrator Classic Federation for z/OS and ran the metadata utility. That user ID is the only user who can access the system or the metadata catalogs. To turn off security, the new System Administrator must either grant SYSADM authorization to PUBLIC, allowing all users access and thus negating security, or grant table access authority to individual user IDs.
4. Grant the appropriate access rights to users. Sample JCL to grant access rights is contained in the SCACSAMP data set, in a member called CACGRANT. The JCL loads the catalogs with the appropriate access rights to the tables. The job reads its input from the CACGRIN member, which contains the GRANTs required to access the sample tables. If you are bringing your own tables online with DB2 Information Integrator Classic Federation for z/OS, you must add the appropriate GRANTs for the new tables. See the information about SQL security in IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing for more information. To grant access rights:
a. Customize the CACGRANT JCL to run in your environment by performing the following steps:
1) Ensure that the symbolic GRAMMAR is pointing to the appropriate member containing the desired security commands.
2) Ensure that the CACCAT and CACINDX DDs refer to the catalogs created using the CACCATLG JCL.
3) Review CACGRIN and uncomment the appropriate GRANT for your database.
b. Submit the JCL. After this job completes, the catalogs have been loaded with the desired security. If you plan to use tools that require access to the metadata catalog information, you must run CACGRANT using CACGRSYS as the input.
Validating the setup

This section explains how to validate that the setup steps that you performed in the previous sections were successful.
Accessing VSAM data through CICS

DB2 Information Integrator Classic Federation for z/OS provides an interface to CICS that allows VSAM data to be read, updated, inserted, or deleted through CICS. To access VSAM data through CICS, the DB2 Information Integrator Classic Federation for z/OS data server establishes a VTAM® LU 6.2 connection to CICS to initiate a transaction when the query begins, and it uses this transaction to communicate with CICS for the duration of the query. To establish this environment, VTAM and CICS definitions are required. An additional DB2 Information Integrator Classic Federation for z/OS table mapping is also required to define a table that uses CICS to access VSAM data.
VTAM resource definitions

A VTAM APPL definition is required to communicate with CICS, and a VTAM mode table must also be created.
DB2 II Getting Started with Classic Federation
Sample member CACCAPPL in the SCACSAMP data set contains sample VTAM APPL definitions. It contains two APPL definitions: one for the DB2 Information Integrator Classic Federation for z/OS data server and one for the metadata utilities. If you will be running more than one metadata utility at once, or want to balance the load and assign different files to different APPLs, you can add more APPL definitions by duplicating one of the existing ones and changing the name. The following is the sample member:

*
* SAMPLE APPL ID DEFINITIONS FOR CICS INTERFACE
*
CACCAPPL VBUILD TYPE=APPL
CACCICS1 APPL  ACBNAME=CACCICS1,APPC=YES,AUTOSES=1,
               MODETAB=CACCMODE,DLOGMOD=MTLU62,AUTH=(ACQ),
               EAS=100,PARSESS=YES,SONSCIP=YES,
               DMINWNL=0,DMINWNR=1,DSESLIM=100
CACCICS2 APPL  ACBNAME=CACCICS2,APPC=YES,AUTOSES=1,
               MODETAB=CACCMODE,DLOGMOD=MTLU62,AUTH=(ACQ),
               EAS=1,PARSESS=YES,SONSCIP=YES,
               DMINWNL=0,DMINWNR=1,DSESLIM=1
Note: The sample is set to allow 100 concurrent users. If you have additional users, adjust the EAS and DSESLIM counts accordingly.

A logon mode table entry must also be created. Member CACCMODE in the SCACSAMP data set contains the macro definitions that define it. This member must be assembled and cataloged in VTAM's VTAMLIB. The following is the member's content:

CACCMODE MODETAB
MTLU62   MODEENT LOGMODE=MTLU62,TYPE=0,
               FMPROF=X'13',TSPROF=X'07',
               PRIPROT=X'B0',SECPROT=X'B0',
               COMPROT=X'D0B1',RUSIZES=X'8989',
               PSERVIC=X'060200000000000000000300'
         MODEEND
         END
CICS resource definitions

CICS SIT, transaction, program, connection, and session entries must be added to allow the DB2 Information Integrator Classic Federation for z/OS data server to communicate with CICS. A file definition must also be added. The CICS system initialization table (DFHSIT) definition or initialization overrides must include ISC=YES to enable intercommunication programs. If this entry does not already exist, add it and cycle CICS.
The ACF/VTAM application definition for your CICS system must include PARSESS=YES on the VTAM APPL statement to enable LUTYPE6 parallel session support. For more information on the other options in the CICS APPLID, see the information about stored procedures, VTAM, and CICS in IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing. Load modules CACCICAT and CACCIVS must be copied from the DB2 Information Integrator Classic Federation for z/OS load library to the CICS user load library. IBM's Language Environment® (LE) must be installed and operational in CICS.
Sample member CACCDEF in the SCACSAMP data set contains a sample job to add the CICS transaction, program, connection, session, and file definitions required for DB2 Information Integrator Classic Federation for z/OS. Follow these steps to run the job:
1. Update the job card for your site specifications.
2. Update the STEPLIB for the correct CICS library.
3. Update the DFHCSD DD for the correct CSD file.
4. Update the DSNAME in the DEFINE FILE to the name of the sample VSAM file that was installed.
5. Update the ATTACHSEC parameter in the DEFINE CONNECTION entries to VERIFY if you want CICS to validate the user ID and password.
   Note: The connection used by the metadata utility (EXC2) should not be set to VERIFY, because this transaction only inquires on file attributes and the metadata utility does not send a user ID or password.
6. Update the MAXIMUM parameter in the DEFINE SESSION entries to increase the number of concurrent users. This value should match the DSESLIM and EAS values in the APPL definition.
Note: When adding your own files to CICS, the file operations determine which queries can be processed:
v BROWSE must be specified to allow SELECT queries, ESDS UPDATE queries, or UPDATE, INSERT, or DELETE queries against an RRDS file to process.
v READ must be specified to allow UPDATE, INSERT, or DELETE queries to process.
v UPDATE must be specified to allow UPDATE queries to process.
v ADD must be specified to allow INSERT queries to process.
v DELETE must be specified to allow DELETE queries to process.
After successful completion of the job, install the new definitions with the following CICS transaction:
CEDA INSTALL GROUP(CACVSAM)
Then add the CACVSAM group to your start-up group with the following CICS transaction:
CEDA ADD GR(CACVSAM) LIST(xxxxxxxx)
In the example above, xxxxxxxx is the name of the start-up group from your SIT table.
Accessing VSAM data with SQL locally

To access VSAM or VSAM CICS data locally using SQL:
1. Review the SERVICE INFO ENTRY (SIE) for CACSAMP. In the SCACCONF data set, see the DB2 Information Integrator Classic Federation for z/OS data server configuration file called CACDSCF. Within CACDSCF, search for the SERVICE INFO ENTRY (SIE) containing the word CACSAMP, for example:
   SERVICE INFO ENTRY = CACQP CACSAMP ....
   Uncomment this configuration parameter. It defines the data source name that the client uses to connect to the DB2 Information Integrator Classic Federation for z/OS data server. To create additional data source names, replicate this configuration parameter and give it a different data source name, for example:
   SERVICE INFO ENTRY = CACQP CACSAMP ....
   SERVICE INFO ENTRY = CACQP ACCOUNTING...
   In this case, two valid data source names can be configured on the client side: the default, CACSAMP, and ACCOUNTING. You can create as many of these SIEs as you need. Using different SIEs is a good way to separate different users based on their business needs. This example uses the default data source CACSAMP.
2. Uncomment the SIE for the VSAM service (VSAMSRV).
3. Start the DB2 Information Integrator Classic Federation for z/OS data server and access data from a local client. Start the data server. For access to VSAM through CICS, use the operator command s cacds. Otherwise, use the operator command s. See the MTO command reference in the IBM DB2 Information Integrator Administration Guide and Reference for Classic Federation and Classic Event Publishing for more information about operator commands.
   To access data from a local client on the mainframe:
   a. Uncomment the SQL statement(s) for VSAM or CICS VSAM. In the SCACSAMP data set you will also find a member called CACSQL. This member contains sample SQL for the various databases supported by DB2 Information Integrator Classic Federation for z/OS. This member is the input to the CACCLNT JCL, pointed to by its SYSIN DD, which is discussed in the next step.
      Note: If you are testing your own VSAM files, create your own SQL in the same member (CACSQL) or create another member for your own SQL statements.
   b. Configure the client. The client configuration file is used to communicate with the data server using the communication protocol defined in the data server. In the SCACCONF data set is a member called CACUSCF. Configure the DATASOURCE parameter based on the communications protocol set up in the DB2 Information Integrator Classic Federation for z/OS data server, as described in the IBM DB2 Information Integrator Installation Guide for Classic Federation and Classic Event Publishing.
   c. Run the local client.
Chapter 9. Setting up VSAM
In the SCACSAMP data set there is a member called CACCLNT. This job executes the batch client to issue SQL to the data server, using CACSQL as the input SQL. Customize the JCL to run in your environment and submit it.
   d. View the output. The output should contain the SQL statements that were issued and the corresponding result set(s). This step is designed to issue SQL statements locally to the data server, which is important if you ever need to diagnose a problem. By running the SQL locally, you eliminate potential problem areas such as the network or the client machine (for example, Microsoft Windows or UNIX). It also speeds up problem resolution because all the diagnostics occur on the mainframe.
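For a quick validation, the statement you run through CACCLNT can be as simple as a SELECT against the logical table you mapped earlier. The table name below is a placeholder only; use the logical table name generated by your Data Mapper session:

```sql
-- Hypothetical validation query; replace EMPLVSAM with the logical
-- table name defined in your Data Mapper session and loaded into
-- the metadata catalog.
SELECT * FROM EMPLVSAM;
```

If the result set comes back with the expected rows, the data server, VSAM (or CICS VSAM) service, and metadata catalog are all working together correctly.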
Accessing VSAM data with SQL remotely

Connecting remotely to a DB2 Information Integrator Classic Federation for z/OS data server requires one of the DB2 Information Integrator Classic Federation for z/OS clients and an application that is compliant with the ODBC, JDBC, or CLI standard. See IBM DB2 Information Integrator Client Guide for Classic Federation and Classic Event Publishing for detailed information about configuring and using the various DB2 Information Integrator Classic Federation for z/OS clients.
Appendix. DB2 Information Integrator documentation

This topic provides information about the documentation that is available for DB2 Information Integrator. The tables in this topic provide the official document title, form number, and location of each PDF book. To order a printed book, you must know either the official book title or the document form number. Titles, file names, and the locations of the DB2 Information Integrator release notes and installation requirements are also provided in this topic. This topic contains the following sections:
v Accessing DB2 Information Integrator documentation
v Documentation for replication function on z/OS
v Documentation for event publishing function for DB2 Universal Database on z/OS
v Documentation for event publishing function for IMS and VSAM on z/OS
v Documentation for event publishing and replication function on Linux, UNIX, and Windows
v Documentation for federated function on z/OS
v Documentation for federated function on Linux, UNIX, and Windows
v Documentation for enterprise search on Linux, UNIX, and Windows
v Release notes and installation requirements
Accessing DB2 Information Integrator documentation

All DB2 Information Integrator books and release notes are available in PDF files from the DB2 Information Integrator Support Web site at www.ibm.com/software/data/integration/db2ii/support.html. To access the latest DB2 Information Integrator product documentation from the DB2 Information Integrator Support Web site, click the Product Information link, as shown in Figure 7 on page 92.
© Copyright IBM Corp. 2003, 2004
Figure 7. Accessing the Product Information link from DB2 Information Integrator Support Web site
You can access the latest DB2 Information Integrator documentation, in all supported languages, from the Product Information link:
v DB2 Information Integrator product documentation in PDF files
v Fix pack product documentation, including release notes
v Instructions for downloading and installing the DB2 Information Center for Linux, UNIX, and Windows
v Links to the DB2 Information Center online
Scroll through the list to find the product documentation for the version of DB2 Information Integrator that you are using.
The DB2 Information Integrator Support Web site also provides support documentation, IBM Redbooks, white papers, product downloads, links to user groups, and news about DB2 Information Integrator. You can also view and print the DB2 Information Integrator PDF books from the DB2 PDF Documentation CD. To view or print the PDF documentation:
1. From the root directory of the DB2 PDF Documentation CD, open the index.htm file.
2. Click the language that you want to use.
3. Click the link for the document that you want to view.
Documentation about replication function on z/OS

Table 1. DB2 Information Integrator documentation about replication function on z/OS (name, form number, location)
v ASNCLP Program Reference for Replication and Event Publishing (N/A): DB2 Information Integrator Support Web site
v Introduction to Replication and Event Publishing (GC18-7567): DB2 Information Integrator Support Web site
v Migrating to SQL Replication (N/A): DB2 Information Integrator Support Web site
v Replication and Event Publishing Guide and Reference (SC18-7568): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Replication Installation and Customization Guide for z/OS (SC18-9127): DB2 Information Integrator Support Web site
v SQL Replication Guide and Reference (SC27-1121): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Tuning for Replication and Event Publishing Performance (N/A): DB2 Information Integrator Support Web site
v Tuning for SQL Replication Performance (N/A): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Standard Edition, Advanced Edition, and Replication for z/OS (N/A): in the DB2 Information Center, Product Overviews > Information Integration > DB2 Information Integrator overview > Problems, workarounds, and documentation updates; DB2 Information Integrator Installation launchpad; DB2 Information Integrator Support Web site; the DB2 Information Integrator product CD
Documentation about event publishing function for DB2 Universal Database on z/OS

Table 2. DB2 Information Integrator documentation about event publishing function for DB2 Universal Database on z/OS (name, form number, location)
v ASNCLP Program Reference for Replication and Event Publishing (N/A): DB2 Information Integrator Support Web site
v Introduction to Replication and Event Publishing (GC18-7567): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Replication and Event Publishing Guide and Reference (SC18-7568): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Tuning for Replication and Event Publishing Performance (N/A): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Standard Edition, Advanced Edition, and Replication for z/OS (N/A): in the DB2 Information Center, Product Overviews > Information Integration > DB2 Information Integrator overview > Problems, workarounds, and documentation updates; DB2 Information Integrator Installation launchpad; DB2 Information Integrator Support Web site; the DB2 Information Integrator product CD
Documentation about event publishing function for IMS and VSAM on z/OS

Table 3. DB2 Information Integrator documentation about event publishing function for IMS and VSAM on z/OS (name, form number, location)
v Client Guide for Classic Federation and Event Publisher for z/OS (SC18-9160): DB2 Information Integrator Support Web site
v Data Mapper Guide for Classic Federation and Event Publisher for z/OS (SC18-9163): DB2 Information Integrator Support Web site
v Getting Started with Event Publisher for z/OS (GC18-9186): DB2 Information Integrator Support Web site
v Installation Guide for Classic Federation and Event Publisher for z/OS (GC18-9301): DB2 Information Integrator Support Web site
v Operations Guide for Event Publisher for z/OS (SC18-9157): DB2 Information Integrator Support Web site
v Planning Guide for Event Publisher for z/OS (SC18-9158): DB2 Information Integrator Support Web site
v Reference for Classic Federation and Event Publisher for z/OS (SC18-9156): DB2 Information Integrator Support Web site
v System Messages for Classic Federation and Event Publisher for z/OS (SC18-9162): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Event Publisher for IMS for z/OS (N/A): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Event Publisher for VSAM for z/OS (N/A): DB2 Information Integrator Support Web site
Documentation about event publishing and replication function on Linux, UNIX, and Windows

Table 4. DB2 Information Integrator documentation about event publishing and replication function on Linux, UNIX, and Windows (name, form number, location)
v ASNCLP Program Reference for Replication and Event Publishing (N/A): DB2 Information Integrator Support Web site
v Installation Guide for Linux, UNIX, and Windows (GC18-7036): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Introduction to Replication and Event Publishing (GC18-7567): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Migrating to SQL Replication (N/A): DB2 Information Integrator Support Web site
v Replication and Event Publishing Guide and Reference (SC18-7568): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v SQL Replication Guide and Reference (SC27-1121): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Tuning for Replication and Event Publishing Performance (N/A): DB2 Information Integrator Support Web site
v Tuning for SQL Replication Performance (N/A): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Standard Edition, Advanced Edition, and Replication for z/OS (N/A): in the DB2 Information Center, Product Overviews > Information Integration > DB2 Information Integrator overview > Problems, workarounds, and documentation updates; DB2 Information Integrator Installation launchpad; DB2 Information Integrator Support Web site; the DB2 Information Integrator product CD
Documentation about federated function on z/OS

Table 5. DB2 Information Integrator documentation about federated function on z/OS (name, form number, location)
v Client Guide for Classic Federation and Event Publisher for z/OS (SC18-9160): DB2 Information Integrator Support Web site
v Data Mapper Guide for Classic Federation and Event Publisher for z/OS (SC18-9163): DB2 Information Integrator Support Web site
v Getting Started with Classic Federation for z/OS (GC18-9155): DB2 Information Integrator Support Web site
v Installation Guide for Classic Federation and Event Publisher for z/OS (GC18-9301): DB2 Information Integrator Support Web site
v Reference for Classic Federation and Event Publisher for z/OS (SC18-9156): DB2 Information Integrator Support Web site
v System Messages for Classic Federation and Event Publisher for z/OS (SC18-9162): DB2 Information Integrator Support Web site
v Transaction Services Guide for Classic Federation for z/OS (SC18-9161): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Classic Federation for z/OS (N/A): DB2 Information Integrator Support Web site
Documentation about federated function on Linux, UNIX, and Windows

Table 6. DB2 Information Integrator documentation about federated function on Linux, UNIX, and Windows (name, form number, location)
v Application Developer's Guide (SC18-7359): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v C++ API Reference for Developing Wrappers (SC18-9172): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Data Source Configuration Guide (N/A): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Federated Systems Guide (SC18-7364): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Guide to Configuring the Content Connector for VeniceBridge (N/A): DB2 Information Integrator Support Web site
v Installation Guide for Linux, UNIX, and Windows (GC18-7036): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Java API Reference for Developing Wrappers (SC18-9173): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Migration Guide (SC18-7360): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Wrapper Developer's Guide (SC18-9174): DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Standard Edition, Advanced Edition, and Replication for z/OS (N/A): in the DB2 Information Center, Product Overviews > Information Integration > DB2 Information Integrator overview > Problems, workarounds, and documentation updates; DB2 Information Integrator Installation launchpad; DB2 Information Integrator Support Web site; the DB2 Information Integrator product CD
Documentation about enterprise search function on Linux, UNIX, and Windows

Table 7. DB2 Information Integrator documentation about enterprise search function on Linux, UNIX, and Windows (name, form number, location)
v Administering Enterprise Search (SC18-9283): DB2 Information Integrator Support Web site
v Installation Guide for Enterprise Search (GC18-9282): DB2 Information Integrator Support Web site
v Programming Guide and API Reference for Enterprise Search (SC18-9284): DB2 Information Integrator Support Web site
v Release Notes for Enterprise Search (N/A): DB2 Information Integrator Support Web site
Release notes and installation requirements

Release notes provide information that is specific to the release and fix pack level for your product and include the latest corrections to the documentation for each release. Installation requirements provide information that is specific to the release of your product.

Table 8. DB2 Information Integrator Release Notes and Installation Requirements (name, file name, location)
v Installation Requirements for IBM DB2 Information Integrator Event Publishing Edition, Replication Edition, Standard Edition, Advanced Edition, Advanced Edition Unlimited, Developer Edition, and Replication for z/OS (file name: Prereqs): the DB2 Information Integrator product CD; DB2 Information Integrator Installation Launchpad
v Release Notes for IBM DB2 Information Integrator Standard Edition, Advanced Edition, and Replication for z/OS (file name: ReleaseNotes): in the DB2 Information Center, Product Overviews > Information Integration > DB2 Information Integrator overview > Problems, workarounds, and documentation updates; DB2 Information Integrator Installation launchpad; DB2 Information Integrator Support Web site; the DB2 Information Integrator product CD
v Release Notes for IBM DB2 Information Integrator Event Publisher for IMS for z/OS (N/A): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Event Publisher for VSAM for z/OS (N/A): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Classic Federation for z/OS (N/A): DB2 Information Integrator Support Web site
v Release Notes for Enterprise Search (N/A): DB2 Information Integrator Support Web site
To view the installation requirements and release notes that are on the product CD:
v On Windows operating systems, enter:
  x:\doc\%L
  where x is the Windows CD drive letter and %L is the locale of the documentation that you want to use, for example, en_US.
v On UNIX operating systems, enter:
  /cdrom/doc/%L/
  where cdrom refers to the UNIX mount point of the CD and %L is the locale of the documentation that you want to use, for example, en_US.
Notices This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in all countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY 10504-1785 U.S.A. For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country/region or send inquiries, in writing, to: IBM World Trade Asia Corporation Licensing 2-31 Roppongi 3-chome, Minato-ku Tokyo 106-0032, Japan The following paragraph does not apply to the United Kingdom or any other country/region where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. 
Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product, and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. © Copyright IBM Corp. 2003, 2004
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information that has been exchanged, should contact: IBM Corporation J46A/G4 555 Bailey Avenue San Jose, CA 95141-1003 U.S.A. Such information may be available, subject to appropriate terms and conditions, including in some cases payment of a fee. The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement, or any equivalent agreement between us. Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems, and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements, or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. All statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only. This information contains examples of data and reports used in daily business operations. 
To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious, and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs, in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM’s application programming interfaces.
Each copy or any portion of these sample programs or any derivative work must include a copyright notice as follows: © (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. © Copyright IBM Corp. _enter the year or years_. All rights reserved.
Trademarks The following terms are trademarks of International Business Machines Corporation in the United States, other countries, or both: IBM CICS DB2 DB2 Universal Database IMS Language Environment RMF VTAM WebSphere z/OS The following terms are trademarks or registered trademarks of other companies: Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Intel Inside (logos), MMX and Pentium are trademarks of Intel Corporation in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Other company, product or service names may be trademarks or service marks of others.
Index

A
Adabas 23
   accessing data 26, 27
   creating logical tables 24
   interface 18
   loading metadata catalog with metadata grammar 24
   sample database 23
   setting up the environment 23
applications, binding 51

B
binding applications 51

C
CA-Datacom 29
   accessing data with SQL 37, 39
   loading metadata catalog 35
   mapping source language member 30
   punching source language member 29
CA-Datacom/DB interface 18
CA-IDMS 41
   accessing data with SQL 48, 49
   interface 18
   loading metadata catalog 46
   mapping schema/subschema 42
   punching schema/subschema 41
   setting up 41
catalog access rights, granting 37, 53, 66
catalogs
   See also metadata catalog
   loading
      Adabas 24
      DB2 Universal Database 52
      Sequential 75
child segments, mapping 15
CICS VSAM 79
client communications 2
clients 5, 20
communications
   between server and application 16
   supported protocols 14
configuration members, updating 21
connection handler services 17
connection handlers 4
connectors 18
CPU Resource Governor exit 19
cross memory 14

D
Data Mapper 5, 7, 16
   FTP support 10
data mapping. See mapping data
data servers 2
   automatically starting 7
   configuring 21
data sources 15
data spaces 14
DB2 interface 18
   Thread Management exit 20
DB2 UDB 51
DB2 Universal Database
   accessing data 54, 55
   creating logical tables 52
   sample database 51
DBCTL subsystem 67
DELETE command 5

E
enterprise servers 7

F
FTP support, Data Mapper 10

I
IMS
   accessing data with SQL 68, 69
   BMP/DBB interface 18
   DRA interface 18
   establishing DBCTL/DRA 67
   loading metadata catalog 65
   mapping child segments 15
   mapping data 57
initialization services 17
INSERT statement 5

J
JOINs 3

L
load balancing 7
logger service 18
logical tables, creating for Adabas 24

M
mapping data 7, 15, 16
   data sources 15
   DB2 Universal Database 52
   IMS 57
   Sequential 71
metadata catalog
   loading for CA-Datacom 35
   loading for CA-IDMS 46
   loading for IMS 65
   loading with Adabas metadata grammar 24
metadata grammar 8
   loading metadata catalog 24
MTO interface 18

N
nonrelational data mapping 15, 16
   translating to relational data 8

O
ODBC client 6
operational components 2

P
punching schema/subschema 41

Q
queries, optimizing 3
query processor services 18
query processors 4, 5
   configuring 21

R
Record Processing exit 20
region controller 17
region controllers 4
result sets, converting to consistent relational form 2
retrieving child segments from IMS databases 15

S
SAF exit 19
schema mapping, CA-IDMS 42
security, SAF exit 19
Sequential 71
   accessing data with SQL 76, 77
   interface 18
   mapping data 71
services, controlling 17
SMF exit 19
SQL
   accessing Adabas data 26, 27
   accessing CA-Datacom data 37, 39
   accessing CA-IDMS data 48, 49
   accessing DB2 Universal Database data 54, 55
   accessing IMS data 68, 69
   accessing Sequential data 76, 77
   accessing VSAM data 89, 90
   rewriting 3
   validating 2
SQL-92 14
stored procedures
   interface 18
   invoking 18
subschema mapping, CA-IDMS 42
system exits 18

T
tables, creating logical for Adabas 24
TCP/IP 14

U
UPDATE statement 5
user configuration overrides 21

V
VSAM 79
   accessing data with SQL 89, 90
   interface 18

W
WebSphere MQ 14
Workload Manager exit 19
Contacting IBM

To contact IBM customer service in the United States or Canada, call 1-800-IBM-SERV (1-800-426-7378). To learn about available service options, call one of the following numbers:
v In the United States: 1-888-426-4343
v In Canada: 1-800-465-9600

To locate an IBM office in your country or region, see the IBM Directory of Worldwide Contacts on the Web at www.ibm.com/planetwide.
Product information

Information about DB2 Information Integrator is available by telephone or on the Web. If you live in the United States, you can call one of the following numbers:
v To order products or to obtain general information: 1-800-IBM-CALL (1-800-426-2255)
v To order publications: 1-800-879-2755

On the Web, go to www.ibm.com/software/data/integration/db2ii/support.html. This site contains the latest information about:
v The technical library
v Ordering books
v Client downloads
v Newsgroups
v Fix packs
v News
v Links to Web resources
Comments on the documentation

Your feedback helps IBM to provide quality information. Please send any comments that you have about this book or other DB2 Information Integrator documentation. You can use either of the following methods to provide comments:
v Send your comments using the online readers' comment form at www.ibm.com/software/data/rcf.
v Send your comments by e-mail to [email protected]. Include the name of the product, the version number of the product, and the name and part number of the book (if applicable). If you are commenting on specific text, please include the location of the text (for example, a title, a table number, or a page number).
Printed in USA