DevOps release pipeline overview

Use DevOps practices such as continuous integration and continuous delivery to quickly move application changes from development through testing to deployment on your production system. Use Pega® Platform tools and common third-party tools to implement DevOps. The release pipeline in the following diagram illustrates the best practices for using Pega Platform for DevOps. At each stage in the pipeline, a continuous loop presents the development team with feedback on testing results. This example includes the following assumptions:
  • Pega Platform manages all schema changes.
  • Jenkins is the automation server that helps to coordinate the release pipeline, and JFrog Artifactory is the application repository; however, other equivalent tools could be used for both.

Development

Pega Platform developers use Agile practices to create applications and commit the changes into branches in a shared development environment. Automated and manual testing provides rapid feedback to developers so that they can improve the application. Follow these best practices to optimize the development process:
  • Leverage multiple built-on applications to develop smaller component applications. Smaller applications move through the pipeline faster and are easier to develop, test, and maintain.
  • Create one Pega Platform instance as a source environment that acts as a single source of truth for the application. This introduces stability into the developer environment and ensures that a problem in one developer environment does not affect other environments.
  • Use Pega Platform developer tools, for example:
    • The rule compare feature allows you to see the differences between two versions of a specific rule.
    • The rule form search tool allows you to find a specific rule in your application.
  • Follow branch-based development practices:
    • Developers can work on a shared development environment or local environments.
    • Content in branches migrates from the development environments to merge into the source environment.
    • Create an archive by exporting and storing backup versions of each branch in a separate location in the application repository. If a corrupted system state requires you to restore the source environment to a previous known good application version, the branches can be down-merged to reapply the changes that were lost as part of the restore.
  • Use PegaUnit tests to ensure quality.
  • Ensure that the work on a ruleset is reviewed and that the changes are validated. Lock every complete and validated ruleset.
  • Regularly synchronize the development environments with the source environment.
For more information, see the following articles and help topics:
  • Application development
  • Development workflow in the DevOps pipeline
  • Using multiple built-on applications
  • Rule form search
  • Rule checkout and check-in
  • Rule version comparison within a rule form
  • Version control in the DevOps pipeline
  • Branching
  • Enhanced features for branches
  • Branch development
  • Merging branches
  • Using Lock and Roll to manage ruleset versions
  • Adding a branch from a repository
  • Pushing a branch to a repository
  • Creating a toggle
  • Pega Platform application testing in the DevOps pipeline
  • PegaUnit testing

Continuous integration

With continuous integration, application developers frequently check in their changes to the source environment and use an automated build process to automatically verify these changes. Continuous integration identifies issues and pinpoints them early in the cycle. Use Jenkins with the prpcServiceUtils tool and the Execute Tests service to automatically generate a potentially deployable application and export the application archive to a binary repository such as JFrog Artifactory. During continuous integration, maintain the following best practices:
  • To automatically generate a valid application, properly define the application Rule-Admin-Product rule and update the rule whenever the application changes. The prpcServiceUtils tool requires a predefined Rule-Admin-Product rule.
  • To identify issues early, run PegaUnit tests and critical integration tests before packaging the application. If any one of these tests fails, stop the release pipeline until the issue is fixed.
  • Publish the exported application archives into a repository such as JFrog Artifactory to maintain a version history of deployable applications.
For more information, see the following articles and help topics:
  • PegaUnit testing
  • Running PegaUnit test cases and test suites with the Execute Tests service
  • Application packaging
  • Setting up and packaging a release on your shared development environment
  • Using prpcServiceUtils and Jenkins for automated application deployment
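The "stop the release pipeline until the issue is fixed" practice can be automated with a small gate that inspects the xUnit report produced by the test run. A minimal sketch in Python, assuming a standard xUnit/JUnit-style report; the function name is illustrative, not a Pega or Jenkins API:

```python
import xml.etree.ElementTree as ET

def pipeline_should_continue(xunit_xml: str) -> bool:
    """Return True only if no test suite in the report has failures or errors."""
    root = ET.fromstring(xunit_xml)
    # A report may be a single <testsuite> or a <testsuites> wrapper.
    suites = [root] if root.tag == "testsuite" else root.findall(".//testsuite")
    return all(
        int(s.get("failures", 0)) == 0 and int(s.get("errors", 0)) == 0
        for s in suites
    )

# One failing PegaUnit or integration test halts the release pipeline:
report = '<testsuite tests="3" failures="1" errors="0"/>'
assert pipeline_should_continue(report) is False
```

The CI job would run this check after the Execute Tests step and fail the build (skipping packaging and export) when it returns False.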

Continuous delivery

With continuous delivery, application changes run through rigorous automated regression testing and are deployed to a staging environment for further testing, to ensure high confidence that the application is ready to deploy on the production system. Use Jenkins with the prpcServiceUtils tool to deploy the packaged application to test environments for regression testing or for other testing such as performance testing, compatibility testing, and acceptance testing. At the end of the continuous delivery stage, the application is declared ready to deploy to the production environment. Follow these best practices to ensure quality:
  • Use Docker or a similar tool to create test environments for user acceptance tests (UAT) and exploratory tests.
  • Create a wide variety of regression tests through the user interface and the service layer. Check the tests into a separate version control system such as Git.
  • If a test fails, roll back the latest import.
  • If all the tests pass, annotate the application package to indicate that it is ready to be deployed. Deployment can be done either automatically with Jenkins and JFrog Artifactory or manually.
For more information, see the following articles and help topics:
  • Performing UI and regression testing
  • Add Test ID for unique identification of UI elements during testing
  • Leveraging containers
  • Pega Platform Docker support
  • Deploying to a staging system
  • Deploying application changes to your staging or production environment
  • Using prpcServiceUtils and Jenkins for automated application deployment
  • Rolling back to a restore point

Deployment

After an application change passes all testing requirements on the staging system, use Jenkins and the prpcServiceUtils tool to migrate the change into production. Use application release guidelines to deploy with minimal downtime. For more information, see the following articles and help topics:
  • Deploying to the production system
  • Version control in the DevOps pipeline
  • Define hotfixes as dependencies for product rules
  • Deploying application changes to your staging or production environment
  • Using prpcServiceUtils and Jenkins for automated application deployment
  • Application release management in the Pega 7 Platform
  • Application release changes, types, and processes
  • Enabling changes to the production system
  • Updating access groups from the command line

Version control in the DevOps pipeline

Change the application version number each time you deploy changes to a production system. As a best practice, use semantic versioning because it offers a logical set of rules about when to increase each version number. When semantic versioning is used, the part of the version number that is incremented communicates the significance of the change. Additional information about semantic versioning is available on the web. The version number, in the format NN-NN-NN, defines the major version (first two digits), minor version (middle digits), and patch version (last digits), for example, 03-01-15. Major versions include significant features that might cause compatibility issues with earlier releases. Minor versions include enhancements or incremental updates. Patch versions include small changes such as bug fixes. Rulesets include all versions of each rule. Skimming reduces the number of rules by collecting the highest version of rules in the ruleset and copying them to a new major or minor version of that ruleset, with patch version 01. For more information about skimming, see Skim to create a higher version.
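The NN-NN-NN scheme lends itself to a small helper. A sketch in Python; the `bump` function is our illustration of the rules in the text, not a Pega utility, and resetting the minor version to 01 on a major bump is an assumption:

```python
def bump(version: str, part: str) -> str:
    """Increment one part of an NN-NN-NN ruleset version.

    Per the text: a major or minor increase (for example, after a skim)
    starts over at patch version 01; a patch increase just increments
    the last pair. Resetting minor to 01 on a major bump is assumed.
    """
    major, minor, patch = (int(p) for p in version.split("-"))
    if part == "major":
        major, minor, patch = major + 1, 1, 1
    elif part == "minor":
        minor, patch = minor + 1, 1
    elif part == "patch":
        patch += 1
    else:
        raise ValueError(f"unknown part: {part}")
    return f"{major:02d}-{minor:02d}-{patch:02d}"

assert bump("03-01-15", "patch") == "03-01-16"  # bug fix
assert bump("03-01-15", "minor") == "03-02-01"  # enhancement; patch resets to 01
```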

Best practices for development

Follow these best practices for version control in development:
  • Work in branches.
  • Consider creating a major version of your application if you upgrade your application server or database server to a major new version.
  • For small single scrum teams: Increment both the patch and the minor version during every merge. Developers merge into the next incremented patch version.
  • For multiple scrum teams: The release manager selects a development ruleset version number that includes a patch version number. Developers merge into the highest available ruleset version.

Best practices for deployment

Follow these best practices when you deploy your application to production:
  • Define target ruleset versions for production deployment.
  • Use lock and roll to password-protect versions and roll changes to higher versions. For more information, see RuleSet Stack tab.
  • Create restore points before each deployment. For more information about restore points, see Restore points.
  • Set a separate ruleset version for each deployment to production.

PegaUnit testing

You can use PegaUnit testing to automate the testing of rules. After you develop rules, you can test them and then convert the test runs to PegaUnit test cases to validate application data by comparing expected output to the actual output returned by running the rules. For example, an account executive wants to ensure that a 10% discount is applied to all VIP customers. You can create a test case that verifies that this discount is applied to all VIP customers in the database. If the test does not pass, the results indicate where the 10% discount is not applied. You can use PegaUnit rule testing on the following types of rules:
  • Activities
  • Case types
  • Data pages
  • Data transforms
  • Decision tables
  • Decision trees
  • Flows
  • Strategies
  • When rules (available beginning with Pega 7.3.1)
You can use one or more data pages, data transforms, or activities to set up the clipboard data before running the rule as part of the test case. You can also use activities to create any required test data such as work or data objects. After you run a PegaUnit test case or test suite, data pages used to set up the test environment are automatically removed. You can also apply additional data transforms or activities to remove other pages or information on the clipboard. You can also use the Execute Tests service, which is run by a continuous integration (CI) tool, to run all the PegaUnit test cases in your application to validate the quality of your code after every build is created.
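The VIP-discount example amounts to comparing expected output with actual output. A plain-Python analogy of what the test case asserts; the rule logic here is illustrative, not an actual Pega rule:

```python
def apply_discount(customer: dict, price: float) -> float:
    """Stand-in for the decision rule under test: VIP customers get 10% off."""
    return round(price * 0.9, 2) if customer.get("segment") == "VIP" else price

# The test case compares the expected output with the actual output of the rule:
expected = 90.0
actual = apply_discount({"name": "Acme", "segment": "VIP"}, 100.0)
assert actual == expected

# A non-VIP customer pays full price, so the discount assertion would not apply:
assert apply_discount({"segment": "Standard"}, 100.0) == 100.0
```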

PegaUnit test suites

You can group related PegaUnit test cases into PegaUnit test suites, which run multiple test cases in the order that you specify. For example, you can create smoke tests, which comprise test cases that you run to verify that critical application functionality is working as expected. You can create, open, and run PegaUnit test suites on either the Test Cases tab or the Test Suites tab on the Automated Testing landing page, which you open by clicking Designer Studio > Application > Automated Testing. On the Test Cases tab, you can select existing test cases and add them to a new test suite.

Test Cases tab in the Automated Testing landing page

You can open, run, and view test suite run results on the Test Suites tab. You can also create test suites that do not contain any test cases.

Test Suites tab in the Automated Testing landing page

You can add test cases to and remove test cases from the test suite in the Create Test Suite and Edit Test Suite rule forms. You can also change the order in which test cases are run by dragging and dropping them in the Test Cases section. Note that if you have multiple pages of test cases, you cannot reorder test cases among pages; you can reorder only test cases that are on the same page.

Create Test Suite form

On the Setup & Cleanup tab, you can apply one or more data pages, data transforms, or activities to set up the clipboard with values before you run the test suite. After you run a PegaUnit test suite, data pages used to set up the test environment are automatically removed. You can also apply additional data transforms or activities to remove other pages or information on the clipboard before you run more test cases or suites.

Setup & Cleanup tab

After you run a test suite, you can open the test results, which include the following information:
  • When the test was last run
  • The number of test cases that are in the test suite
  • The number of test cases in the test suite that were tested
  • The number of test cases in the test suite that passed the test
  • The number of test cases in the test suite that failed the test
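The run results above reduce to simple counts. A sketch of how a reporting script might summarize a suite run, assuming a list of per-case pass/fail booleans; the names are ours, not a Pega API:

```python
def suite_summary(case_results: list) -> dict:
    """Summarize a test suite run: case_results[i] is True if case i passed.

    Here every case in the suite is assumed to have run, so the
    in-suite and tested counts match.
    """
    passed = sum(1 for r in case_results if r)
    return {
        "cases_in_suite": len(case_results),
        "cases_tested": len(case_results),
        "passed": passed,
        "failed": len(case_results) - passed,
    }

assert suite_summary([True, True, False]) == {
    "cases_in_suite": 3, "cases_tested": 3, "passed": 2, "failed": 1,
}
```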

PegaUnit test suite run results

For more information about test suites, see PegaUnit test suites. You can also use the Execute Tests service to run a test suite from a continuous integration (CI) tool such as Jenkins, so that you can validate the quality of your application after every build run. For more information, see Running PegaUnit test cases and test suites with the Execute Tests service.

PegaUnit test cases for flows and case types

You can use PegaUnit testing to create test cases for your applications. After you develop rules, you can test them, and then convert the test runs to PegaUnit test cases to validate application data by comparing expected property values with the actual values returned by running the rule. You can create PegaUnit test cases for a number of rule types, including case types and flows. When you create a PegaUnit test case for a flow or case type, you run the flow or case type and enter data for assignments and decisions as you step through the flow or case type. You can start recording at any time, and you can stop recording at any time to create a test case with all the data that you entered up until the point that you stopped recording.

Creating a test case

You can also create multiple test cases by clicking the tab that runs the flow or case type and continuing to record. The system records the data that you enter in a data transform, which is created after you save the test form. It also displays a graphical representation of the recorded path.

Recorded path

For more information about creating PegaUnit test cases for case types and flows, see Creating a PegaUnit test case for a flow or case type. You can configure four new assertions for flows and case types:
  • Assigned to
  • Attachment exists
  • Case status
  • Case instance count

Assigned to assertions to verify that an assignment is routed to an operator ID or work queue

You can use the assigned to assertion to verify that an assignment is routed to the appropriate operator ID or work queue. For example, if you record an entire flow, and the final assignment is routed to the Admin operator ID, you can verify that it is routed to the Admin operator ID. The expected output is compared with the data that is recorded on the pyWorkPage page.

Assigned to assertion

For more information, see Assigned to assertions.

Attachment exists assertions to verify that a file or note attachment exists

You can use the attachment exists assertion to verify that the flow or case type has an attachment of type file or note (attached by using the Attach Content shape) or email (attached by using the Send Email shape). If you have multiple attachments on flows or case types, the assertion checks all the attachments that are on the flow or case type. If it finds an attachment anywhere of the specified type and name, the assertion passes. The expected output is compared with the data that is recorded on the pyWorkPage page. For example, you can verify that an attachment of type Email does not exist on the pyWorkPage page for a flow or case type.

Attachment exists assertions

For more information, see Attachment exists assertions.

Case instance count assertions to verify the number of created cases

You can use the case instance count assertion to verify the number of cases that were created when the case type or flow was run. For example, if you have a Job Applicant case type that spins off a Background check child case type, and you record the entire case type run, you can verify that the case instance count for each case type is 1.

Case instance count assertions

For more information, see Case instance count assertions.

Case status assertions to verify case status

You can use the case status assertion to verify the status of the case. The expected output is compared with the data that is recorded on the pyWorkPage page. For example, if you record an entire flow, and the final assignment in the flow has a case status of Completed, you can verify that Completed is the case status.

Case status assertion

For more information, see Case status assertions.

Running PegaUnit test cases and test suites with the Execute Tests service

When you build an application on Pega Platform™ in a continuous delivery pipeline, you can use the Execute Tests service (REST API) to validate the quality of the build by running the PegaUnit test cases that are configured for the application. A continuous integration (CI) tool, such as Jenkins, calls the service, which runs all the PegaUnit test cases or a test suite in your application and returns the results in xUnit format. The continuous integration tool interprets the results and, if the tests are not successful, you can correct errors before you deploy your application. When you use Jenkins, you can also use the Execute Tests service to run PegaUnit tests after you merge a branch on a remote system of record and start a job. For more information, see Remotely starting automation server jobs to perform branch operations and run PegaUnit tests. The service comprises the following information:
  • Service name: Rule-Test-Unit-Case pzExecuteTests
  • Service package: PegaUnit
  • End point: http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests
You can quarantine a test case by marking it as "Disabled." A disabled test case is not run by the Execute Tests service. Test case quarantines prevent noncritical tests from running if they are causing failures so that the service can continue to run.
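Quarantining via the Disabled flag is effectively a filter applied before the run. A minimal sketch, assuming test cases are represented as dicts carrying a disabled flag; the shape is ours, not a Pega data model:

```python
def runnable_cases(test_cases: list) -> list:
    """Drop quarantined (Disabled) test cases so the rest of the run proceeds."""
    return [case for case in test_cases if not case.get("disabled", False)]

cases = [
    {"name": "VerifyDiscount", "disabled": False},
    {"name": "FlakyIntegrationCheck", "disabled": True},  # quarantined
]
assert [c["name"] for c in runnable_cases(cases)] == ["VerifyDiscount"]
```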

Request parameters

The Execute Tests service takes the following request parameters, which are strings:
  • ApplicationInformation – Optional. The name and version of the application for which you want to run PegaUnit test cases. You can pass it instead of the AccessGroup parameter. If you pass only this parameter, the service runs all the test cases in the application. If you do not pass this parameter, the service runs all the test cases in the application that are associated with the default access group that is configured for your operator. Use the format ApplicationInformation=<application name>:<application version>.
  • AccessGroup – Optional. The access group that is associated with the application for which you want to run PegaUnit test cases. You can pass it instead of the ApplicationInformation parameter. If you pass this parameter, the service runs all the test cases in the application that are associated with this access group. If you do not pass this parameter, the service runs all the test cases in the application that are associated with the default access group that is configured for your operator. Use the format AccessGroup=<access group name>.
  • TestSuiteID – The pxInsName of the test suite that you want to run. You can find this value in the XML document that comprises the test suite by clicking Actions > XML on the Edit Test Suite form. You can run one test suite at a time. When you use this parameter, all the test cases in the test suite are run, but no other test cases in your application are run. This parameter is required for PegaUnit test suites. If test suites share the same name among applications: If you pass the ApplicationInformation or AccessGroup parameter with the TestSuiteID parameter, the service runs the test suite in the application that you specified. If you do not pass the ApplicationInformation parameter or the AccessGroup parameter with the TestSuiteID parameter, the system runs the test suite in the application that is associated with the default access group. Use the format TestSuiteID=<test suite pxInsName>.

  • LocationOfResults – The location where the service stores the XML file that contains the test results. This parameter is optional for test cases and test suites.
  • RunWithCoverage – Determines whether the application-level test coverage report is generated after the Execute Tests service runs all relevant test cases or the selected test suite. For more information, see Generating an application-level test coverage report. If you set the parameter to False, the application-level test coverage report is not generated. This is the default behavior. If you set the parameter to True, and application-level coverage is not running, the Execute Tests service starts application-level coverage mode, runs all unit tests, stops coverage mode, and generates the application-level coverage report. This report is displayed on the test coverage landing page in the Application level section. If you set the parameter to True, and application-level coverage is already running, the Execute Tests service returns an error.
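A caller can assemble the endpoint and these parameters as an ordinary query string. A sketch in Python; the host and parameter values are placeholders, and note that urlencode percent-encodes the ':' in the name:version pair, which the server accepts:

```python
from urllib.parse import urlencode

BASE_PATH = "/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests"

def execute_tests_url(host: str, params: dict) -> str:
    """Build the Execute Tests endpoint URL; multiple parameters are joined with '&'."""
    base = f"http://{host}{BASE_PATH}"
    return f"{base}?{urlencode(params)}" if params else base

url = execute_tests_url(
    "myserver:8080",  # placeholder hostname and port
    {"ApplicationInformation": "MyApp:01.01.01", "TestSuiteID": "MySuite"},
)
assert url.startswith("http://myserver:8080/prweb/PRRestService/PegaUnit/")
assert "&TestSuiteID=MySuite" in url
```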

Response

The service returns the test results in an XML file in xUnit format and stores them in the location that you specified in the LocationOfResults request parameter. The output is similar to the following example:

<nodes expected="/" result="/">
  <nodes xmlns:purchase="urn:acme-purchase-order" expected="/purchase:order[1]" result="/purchase-order[1]">
    <error type="Local name comparison">Expected "order" but was "purchase-order"</error>
    <error type="Namespace URI comparison">Expected "urn:acme-purchase-order" but was ""</error>
  </nodes>
</nodes>
<sysout>This text is captured by the report</sysout>
<syserr/>
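A well-formed fragment in the shape of the example above can be parsed to list the comparison errors, which is what a CI job does with the stored results file before deciding whether the build passed. A sketch; the fragment below is a simplified reconstruction:

```python
import xml.etree.ElementTree as ET

REPORT = """\
<nodes expected="/" result="/">
  <error type="Local name comparison">Expected "order" but was "purchase-order"</error>
  <error type="Namespace URI comparison">Expected "urn:acme-purchase-order" but was ""</error>
</nodes>"""

def comparison_errors(xml_text: str) -> list:
    """Return (type, message) pairs for every <error> element in a result fragment."""
    return [(e.get("type"), e.text) for e in ET.fromstring(xml_text).iter("error")]

errors = comparison_errors(REPORT)
assert len(errors) == 2
assert errors[0][0] == "Local name comparison"
```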

Configuring your default access group

When you run the Execute Tests service, you can specify the access group that is associated with the application for which you want to run all PegaUnit test cases or a test suite. If you do not specify an access group or application name and version, the service runs the PegaUnit test cases or test suite for the default access group that is configured for your Pega Platform operator ID. To configure a default access group, complete the following steps:
1. In Designer Studio, click the Operator menu, and then click Operator.
2. In the Application Access section, select your default access group.
3. Click Save.

Selecting default access group configuration

Configuring your build environment

Configure your build environment so that it can call the Execute Tests service and run all the PegaUnit test cases or a test suite in your application. Your configuration depends on the external validation engine that you use. For example, the following procedure describes how to configure the Jenkins server to call the service.
1. Open a web browser and go to the location of the Jenkins server.
2. Install the HTTP Request plug-in for Jenkins to call the service and the JUnit plug-in so that you can view reports in xUnit format.
   1. Click Manage Jenkins.
   2. Click Manage Plugins.
   3. On the Available tab, select the HTTP Request Plugin and the JUnit Plugin check boxes.
   4. Specify whether to install the plug-ins without restarting Jenkins or to download the plug-ins and install them after restarting Jenkins.
3. Configure the Pega Platform credentials for the operator who authenticates the Execute Tests service.
   1. Click Credentials, and then click System.
   2. Click the drop-down arrow next to the domain to which you want to add credentials, and click Add credentials.
   3. In the Username field, enter the operator ID that is used to authenticate the service. This operator should belong to the access group that is associated with the application for which you want to run test cases and test suites.
   4. In the Password field, enter the password.
   5. Click OK.
4. Configure the Jenkins URL that runs the service.
   1. Click Manage Jenkins, and then click Configure System.
   2. In the Jenkins Location section, in the Jenkins URL field, enter the URL of the Jenkins server.
   3. Click Apply, and then click Save.
5. Add a build step to be run after the project is built.
   1. Open an existing project or create a project.
   2. Click Configure.
   3. In the Build section, click Add build step, and select HTTP Request from the list.
   4. In the HTTP Request section, in the URL field, enter the endpoint of the service. Use one of the following formats:
      • http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests
      • http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?AccessGroup=<access group name>
      • http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?TestSuiteID=<test suite pxInsName>
      • http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?ApplicationInformation=<application name>:<application version>
      If you are using multiple parameters, separate them with the ampersand (&) character, for example: http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?ApplicationInformation=<application name>:<application version>&TestSuiteID=<test suite pxInsName>
   5. From the HTTP mode list, select POST.
   6. Click Advanced.
   7. In the Authorization section, from the Authenticate list, select the Pega Platform operator ID that authenticates the service that you configured in step 3.
   8. In the Response section, in the Output response to file field, enter the name of the XML file where Jenkins stores the output that it receives from the service. This field corresponds to the LocationOfResults request parameter.
   9. In the Post-build Actions section, from the Add post-build action list, select Publish JUnit test result report and enter **/*.xml in the Test report XMLs field. This setting publishes the results in xUnit format, which provides information about test results, such as a graph of test result trends. These results are displayed on your project page in Jenkins.

10. Click Apply, and then click Save.

Running tests and verifying results

After you configure your validation engine, run the service and verify the test results. Your test suites and test cases must be checked in so that you can run them. For example, in Jenkins, complete the following steps:
1. Open the project and click Build Now.
2. In the Build History pane, click the build that you ran.
3. On the next page, click Test Result.
4. In the All Tests section, click root. The results of all tests are displayed.
5. Optional: Expand a test result in the All Failed Tests section and view details about why the test was not successful.

Test failures

Tests can fail for the following reasons:
  • The operator does not have access to the location of the results.
  • The access group that is passed by the service either does not exist or no access group is associated with the operator ID.
  • The application name and version that are passed do not exist.
  • An application is not associated with the access group that is passed by the service.
  • No PegaUnit test cases or test suites are in the application.
  • The test suite pxInsName does not exist for the application name and version or for the access group that is passed by the service.

Running PegaUnit test cases and test suites with the Execute Tests service in Pega 7.4

When you build an application on Pega Platform™ in a continuous delivery pipeline, you can use the Execute Tests service (REST API) to validate the quality of the build by running the PegaUnit test cases that are configured for the application. A continuous integration (CI) tool, such as Jenkins, calls the service, which runs all the PegaUnit test cases or a test suite in your application and returns the results in xUnit format. The continuous integration tool can interpret the results and, if the tests are not successful, you can correct errors before you deploy your application. You can also use the Execute Tests service to run PegaUnit tests after you merge a branch on a remote system of record and start a job when you use Jenkins. For more information, see Remotely starting automation server jobs to perform branch operations and run PegaUnit tests. The service comprises the following information:
  • Service name: Rule-Test-Unit-Case pzExecuteTests
  • Service package: PegaUnit
  • End point: http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests
You can quarantine a test case by marking it as "Disabled." A disabled test case is not run by the Execute Tests service. Test case quarantines prevent noncritical tests from running if they are causing failures so that the service can continue to run.

Request parameters

The Execute Tests service takes the following request parameters, which are strings:
  • ApplicationInformation – Optional. The name and version of the application for which you want to run PegaUnit test cases. You can pass it instead of the AccessGroup parameter. If you pass only this parameter, the service runs all the test cases in the application. If you do not pass this parameter, the service runs all the test cases in the application that are associated with the default access group that is configured for your operator. Use the format ApplicationInformation=<application name>:<application version>.
  • AccessGroup – Optional. The access group that is associated with the application for which you want to run PegaUnit test cases. You can pass it instead of the ApplicationInformation parameter. If you pass this parameter, the service runs all the test cases in the application that are associated with this access group. If you do not pass this parameter, the service runs all the test cases in the application that are associated with the default access group that is configured for your operator. Use the format AccessGroup=<access group name>.
  • TestSuiteID – The pxInsName of the test suite that you want to run. You can find this value in the XML document that comprises the test suite by clicking Actions > XML on the Edit Test Suite form. You can run one test suite at a time. When you use this parameter, all the test cases in the test suite are run, but no other test cases in your application are run. This parameter is required for PegaUnit test suites. If test suites share the same name among applications: If you pass the ApplicationInformation or AccessGroup parameter with the TestSuiteID parameter, the service runs the test suite in the application that you specified. If you do not pass the ApplicationInformation parameter or the AccessGroup parameter with the TestSuiteID parameter, the system runs the test suite in the application that is associated with the default access group. Use the format TestSuiteID=<test suite pxInsName>.
  • LocationOfResults – The location where the service stores the XML file that contains the test results. This parameter is optional for test cases and test suites.
  • RunWithCoverage – Determines whether the rule coverage report is generated after the Execute Tests service runs all relevant test cases or the selected test suite. If the parameter is set to False, the rule coverage report is not generated. This is the default behavior. If the parameter is set to True, the Execute Tests service starts coverage mode, runs all tests, stops coverage mode, and generates the rule coverage report. This report is displayed on the rule coverage landing page and its results are also visible on the Application Quality landing page.

Response
The service returns the test results in an XML file in xUnit format and stores them in the location that you specified in the LocationOfResults request parameter. The output is similar to the following example:
<nodes expected="/" result="/">
  <nodes xmlns:purchase="urn:acme-purchase-order" expected="/purchase:order[1]" result="/purchase-order[1]">
    <error type="Local name comparison">Expected "order" but was "purchase-order"</error>
    <error type="Namespace URI comparison">Expected "urn:acme-purchase-order" but was ""</error>
  </nodes>
</nodes>
<sysout>This text is captured by the report</sysout>
<syserr/>
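Once the results file is written, a pipeline script can walk it with a standard XML parser. A minimal Python sketch follows; the wrapping <result> root element and the embedded sample values are assumptions added so the fragment parses as one document:

```python
import xml.etree.ElementTree as ET

# Sample fragment modeled on the service output, wrapped in a single
# root element so that it parses as one well-formed document.
RESULT = """
<result>
  <nodes expected="/" result="/">
    <nodes xmlns:purchase="urn:acme-purchase-order"
           expected="/purchase:order[1]" result="/purchase-order[1]">
      <error type="Local name comparison">Expected "order" but was "purchase-order"</error>
      <error type="Namespace URI comparison">Expected "urn:acme-purchase-order" but was ""</error>
    </nodes>
  </nodes>
  <sysout>This text is captured by the report</sysout>
  <syserr/>
</result>
"""

def comparison_errors(xml_text):
    """Return (type, message) pairs for every <error> element in the report."""
    root = ET.fromstring(xml_text)
    return [(e.get("type"), e.text) for e in root.iter("error")]
```

A developer reviewing a failed build could print these pairs to see, for each mismatch, which comparison failed and why.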


Configuring your default access group When you run the Execute Tests service, you can specify the access group that is associated with the application for which you want to run all Pega unit test cases or a test suite. If you do not specify an access group or application name and version, the service runs the Pega Unit test cases or test suite for the default access group that is configured for your Pega Platform operator ID. To configure a default access group, complete the following steps: 1. In Designer Studio, click the Operator menu, and then click Operator. 2. In the Application Access section, select your default access group.

Selecting default access group configuration
3. Click Save.

Configuring your build environment Configure your build environment so that it can call the Execute Tests service and run all the Pega unit test cases or a test suite in your application. Your configuration depends on the external validation engine that you use. For example, the following procedure describes how to configure the Jenkins server to call the service. 1. Open a web browser and navigate to the location of the Jenkins server. 2. Install the HTTP request plug-in for Jenkins to call the service and the JUnit Plugin so that you can view reports in xUnit format. 1. Click Manage Jenkins. 2. Click Manage Plugins. 3. On the Available tab, select the HTTP Request Plugin check box and the JUnit Plugin check box. 4. Specify whether to install the plug-in without restarting Jenkins or to download the plug-in and install it after restarting Jenkins. 3. Configure the Pega Platform credentials for the operator who authenticates the Execute Tests service. 1. Click Credentials, and then click System. 2. Click the drop-down arrow next to the domain to which you want to add credentials, and click Add credentials. 3. In the Username field, enter the operator ID that is used to authenticate the service. This operator should belong to the access group that is associated with the application for which you want to run test cases and test suites. 4. In the Password field, enter the password. 5. Click OK. 4. Configure the Jenkins URL that runs the service. 1. Click Manage Jenkins, and then click Configure System. 2. In the Jenkins Location section, in the Jenkins URL field, enter the URL of the Jenkins server. 3. Click Apply, and then click Save. 5. Add a build step to be run after the project is built. 1. Open an existing project or create a project. 2. Click Configure. 3. In the Build section, click Add build step, and select HTTP Request from the list. 4. In the HTTP Request section, in the URL field, enter the endpoint of the service. 
Use one of the following formats:
http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests
http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?AccessGroup=<access group name>
http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?TestSuiteID=<pxInsName>
http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?ApplicationInformation=<application name>:<application version>
If you are using multiple parameters, separate them with the ampersand (&) character, for example, http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?ApplicationInformation=<application name>:<application version>&TestSuiteID=<pxInsName> 5. From the HTTP mode list, select POST. 6. Click Advanced. 7. In the Authorization section, from the Authenticate list, select the Pega Platform operator ID that you configured in step 3 to authenticate the service. 8. In the Response section, in the Output response to file field, enter the name of the XML file where Jenkins stores the output that it receives from the service. This field corresponds to the LocationOfResults request parameter. 9. In the Post-build Actions section, from the Add post build section list, select Publish Junit test result report and enter **/*.xml in the Test Report XML field. This setting configures the results in xUnit format, which provides information about test results, such as a graph of test results trends, on your project page in Jenkins.

10. Click Apply, and then click Save.
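Outside Jenkins, the same call can be issued from any HTTP client that supports basic authentication. The following sketch uses only the Python standard library; the endpoint, operator ID, and password are placeholders, and the commented-out lines show where a live call would save the xUnit file:

```python
import base64
from urllib import request

# Placeholder endpoint; substitute your server's hostname and port.
URL = ("http://myhost:8080/prweb/PRRestService/"
       "PegaUnit/Rule-Test-Unit-Case/pzExecuteTests")

def execute_tests_request(url, operator_id, password):
    """Build an authenticated POST request for the Execute Tests service.

    The operator should belong to the access group that is associated with
    the application whose test cases you want to run.
    """
    token = base64.b64encode(f"{operator_id}:{password}".encode()).decode()
    req = request.Request(url, data=b"", method="POST")
    req.add_header("Authorization", f"Basic {token}")
    return req

# A live pipeline step would then send the request and keep the results:
# with request.urlopen(execute_tests_request(URL, "operator@org", "pwd")) as r:
#     open("TestResults.xml", "wb").write(r.read())
```

This mirrors what the HTTP Request plug-in does on your behalf: a POST to the endpoint with the operator's credentials, with the response body written to the results file.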

Running tests and verifying results
After you configure your validation engine, run the service and verify the test results. Your test suites and test cases must be checked in so that you can run them. For example, in Jenkins, complete the following steps:
1. Open the project and click Build Now.
2. In the Build History pane, click the build that was run.
3. On the next page, click Test Result.
4. In the All Tests section, click root. The results of all failed tests and all tests are displayed.
5. Optional: Expand a test result in the All Failed Tests section and view details about why the test was not successful.

Test failures Tests can fail for the following reasons: The operator does not have access to the location of the results. The access group that is passed by the service either does not exist or no access group is associated with the operator ID. The application name and version that are passed do not exist. An application is not associated with the access group that is passed by the service. No Pega unit test cases or test suites are in the application. The test suite pxInsName does not exist for the application name and version or for the access group that is passed by the service.

Running PegaUnit test cases and test suites with the Execute Tests service in Pega 7.3.1 When you build an application on Pega Platform™ in a continuous delivery pipeline, you can use the Execute Tests service (REST API) to validate the quality of the build by running Pega unit test cases that are configured for the application. A continuous integration (CI) tool, such as Jenkins, calls the service, which runs all the Pega unit test cases or a test suite in your application and returns the results in xUnit format. The continuous integration tool can interpret the results and, if the tests are not successful, you can correct errors before you deploy your application. You can also use the Execute Tests service to run Pega unit tests after you merge a branch on a remote system of record and start a job when you use Jenkins. For more information, see Remotely starting continuous integration jobs to perform branch operations and run Pega unit tests. The service comprises the following information: Service name: PegaUnit Rule-Test-Unit-Case pzExecuteTests Service package: PegaUnit End point: http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests Pega Platform does not provide a test case quarantine process for this service. Test case quarantines allow you to stop noncritical tests from running if they are causing failures so that the service can continue to run.

Request parameters
The Execute Tests service takes the following request parameters, which are strings:
ApplicationInformation – The name and version of the application for which you want to run Pega unit test cases. This parameter is optional. You can pass it instead of the AccessGroup parameter. If you pass only this parameter, the service runs all the test cases in the application. If you do not pass this parameter, the service runs all the test cases in the application that is associated with the default access group that is configured for your operator. Use the format ApplicationInformation=<application name>:<application version>.
AccessGroup – The access group associated with the application for which you want to run Pega unit test cases. This parameter is optional. You can pass it instead of the ApplicationInformation parameter. If you pass this parameter, the service runs all the test cases in the application that is associated with this access group. If you do not pass this parameter, the service runs all the test cases in the application that is associated with the default access group that is configured for your operator. Use the format AccessGroup=<access group name>.
TestSuiteID – The pxInsName of the test suite that you want to run. You can find this value in the XML document that comprises the test suite by clicking Actions > XML in the Edit Test Suite form. You can run only one test suite at a time. When you use this parameter, all the test cases in the test suite are run, but no other test cases in your application are run. This parameter is required for Pega unit test suites. If there are test suites that share the same name among applications: If you pass the ApplicationInformation or AccessGroup parameter with the TestSuiteID parameter, the service runs the test suite in the application that you specified. If you do not pass the ApplicationInformation or AccessGroup parameter with the TestSuiteID parameter, the system runs the test suite in the application that is associated with the default access group. Use the format TestSuiteID=<pxInsName>.
LocationOfResults – The location where the service stores the XML file that contains the test results. This parameter is optional for test cases and test suites.

Response
The service returns the test results in an XML file in xUnit format and stores them in the location that you specified in the LocationOfResults request parameter. The output is similar to the following example:
<nodes expected="/" result="/">
  <nodes xmlns:purchase="urn:acme-purchase-order" expected="/purchase:order[1]" result="/purchase-order[1]">
    <error type="Local name comparison">Expected "order" but was "purchase-order"</error>
    <error type="Namespace URI comparison">Expected "urn:acme-purchase-order" but was ""</error>
  </nodes>
</nodes>
<sysout>This text is captured by the report</sysout>
<syserr/>


Configuring your default access group When you run the Execute Tests service, you can specify the access group that is associated with the application for which you want to run all Pega unit test cases or a test suite. If you do not specify an access group or application name and version, the service runs the Pega unit test cases or test suite for the default access group that is configured for your Pega Platform operator ID. To configure a default access group, complete the following steps: 1. In Designer Studio, click the Operator menu, and then click Operator. 2. In the Application Access section, select your default access group.

Selecting default access group configuration

3. Click Save.

Configuring your build environment Configure your build environment so that it can call the Execute Tests service and run all the Pega unit test cases or a test suite in your application. Your configuration depends on the external validation engine that you use. For example, the following procedure describes how to configure the Jenkins server to call the service. 1. Open a web browser and navigate to the location of the Jenkins server. 2. Install the HTTP request plug-in for Jenkins to call the service and the JUnit Plugin so that you can view reports in xUnit format. 1. Click Manage Jenkins. 2. Click Manage Plugins. 3. On the Available tab, select the HTTP Request Plugin check box and the JUnit Plugin check box. 4. Specify whether to install the plug-ins without restarting Jenkins or to download the plug-ins and install them after restarting Jenkins. 3. Configure the Pega Platform credentials for the operator that authenticates the Execute Tests service. 1. Click Credentials, and then click System. 2. Click the drop-down arrow next to the domain to which you want to add credentials, and click Add credentials. 3. In the Username field, enter the operator ID that is used to authenticate the service. This operator should belong to the access group that is associated with the application for which you want to run test cases and test suites. 4. In the Password field, enter the password. 5. Click OK. 4. Configure the Jenkins URL that runs the service. 1. Click Manage Jenkins, and then click Configure System. 2. In the Jenkins Location section, in the Jenkins URL field, enter the URL of the Jenkins server. 3. Click Apply, and then click Save. 5. Add a build step to be run after the project is built by completing the following steps: 1. Open an existing project or create a new project. 2. Click Configure. 3. In the Build section, click Add build step, and select HTTP Request from the list. 4. 
In the HTTP Request section, in the URL field, enter the endpoint of the service. Use one of the following formats:
http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests
http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?AccessGroup=<access group name>
http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?TestSuiteID=<pxInsName>
http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?ApplicationInformation=<application name>:<application version>
If you are using multiple parameters, separate them with the ampersand (&) character, for example, http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?ApplicationInformation=<application name>:<application version>&TestSuiteID=<pxInsName> 5. From the HTTP mode list, select POST. 6. Click Advanced. 7. In the Authorization section, from the Authenticate list, select the Pega Platform operator ID, which you configured in step 3, that authenticates the service. 8. In the Response section, in the Output response to file field, enter the name of the XML file where Jenkins stores the output that it receives from the service. This field corresponds to the LocationOfResults request parameter. 9. In the Post-build Actions section, from the Add post build section list, select Publish Junit test result report and enter **/*.xml in the Test Report XML field. This setting configures the results in xUnit format, which provides information about test results, such as a graph of test results trends, on your project page in Jenkins. 10. Click Apply, and then click Save.

Running tests and verifying results
After you configure your validation engine, run the service and verify the test results. Your test suites and test cases must be checked in so that you can run them. For example, in Jenkins, complete the following steps:
1. Open the project and click Build Now.
2. In the Build History pane, click the build that was run.
3. On the next page, click Test Result.
4. Click root in the All Tests section. The results of all failed tests and all tests are displayed.
5. You can expand a test result in the All Failed Tests section to view details about why the test was not successful.

Test failures Tests are not successful in the following scenarios: The operator does not have access to the location of the results. The access group that is passed by the service either does not exist or no access group is associated with the operator ID. The application name and version that are passed do not exist. An application is not associated with the access group passed by the service. No Pega unit test cases or test suites are in the application. The test suite pxInsName does not exist for the application name and version or access group passed by the service.

Running PegaUnit test cases and test suites with the Execute Tests service in Pega 7.3 When you build an application on Pega Platform™ in a continuous delivery pipeline, you can use the Execute Tests service (REST API) to validate the quality of the build by running Pega unit test cases of that application. A continuous integration (CI) tool, such as Jenkins, calls the service, which runs all the Pega unit test cases or a test suite in your application and returns the results in xUnit format. The continuous integration tool can interpret the results and, if the tests are not successful, you can correct errors before you deploy your application. The service comprises the following information: Service name: PegaUnit Rule-Test-Unit-Case pzExecuteTests Service package: PegaUnit End point: http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests Pega Platform does not provide a test case quarantine process for this service. Test case quarantines allow you to stop non-critical tests from running if they are causing failures so that the service can continue to run.

Request parameters
The Execute Tests service takes the following request parameters, which are strings:
Access Group – The access group associated with the application for which you want to run Pega unit test cases. This parameter is optional for Pega unit test cases and does not apply to Pega unit test suites. If you pass this parameter, the service runs all the test cases in the application that is associated with this access group. If you do not pass this parameter, the service runs all the test cases in the application that is associated with the default access group that is configured for your operator.
TestSuiteID – The pxInsName of the test suite that you want to run. You can find this value in the XML document that comprises the test suite by clicking Actions > XML in the Edit Test Suite form. You can run only one test suite at a time. When you use this parameter, all the test cases in the test suite are run, but no other test cases in your application are run. This parameter is required for Pega unit test suites. If there are test suites that share the same name among applications: If you pass the Access Group parameter with the TestSuiteID parameter, the service runs the test suite in the application that you specified. If you do not pass the Access Group parameter with the TestSuiteID parameter, the system runs the test suite in the application that is associated with the default access group.
LocationOfResults – The location where the service stores the XML file that contains the test results. This parameter is optional for test cases and test suites.

Response
The service returns the test results in an XML file in xUnit format and stores them in the location that you specified in the LocationOfResults request parameter. The output is similar to the following example:
<nodes expected="/" result="/">
  <nodes xmlns:purchase="urn:acme-purchase-order" expected="/purchase:order[1]" result="/purchase-order[1]">
    <error type="Local name comparison">Expected "order" but was "purchase-order"</error>
    <error type="Namespace URI comparison">Expected "urn:acme-purchase-order" but was ""</error>
  </nodes>
</nodes>
<sysout>This text is captured by the report</sysout>
<syserr/>


Configuring your default access group When you run the Execute Tests service, you can specify the access group that is associated with the application for which you want to run Pega unit test cases or test suites. If you do not specify an access group, the service runs the Pega unit test cases or test suites for the default access group that is configured for your Pega Platform operator ID. To configure a default access group, complete the following steps: 1. In Designer Studio, click the Operator menu, and then click Operator. 2. In the Application Access section, select your default access group.

Selecting default access group configuration

3. Click Save.

Configuring your build environment Configure your build environment so that it can call the Execute Tests service and run all the Pega unit test cases or a test suite in your application. Your configuration depends on the external validation engine that you use. For example, the following procedure describes how to configure the Jenkins server to call the service. 1. Open a web browser and navigate to the location of the Jenkins server. 2. Install the HTTP request plug-in for Jenkins to call the service. 1. Click Manage Jenkins. 2. Click Manage Plugins. 3. On the Available tab, select the HTTP Request Plugin check box. 4. Specify whether to install the plug-in without restarting Jenkins or download the plug-in and install it after restarting Jenkins. 3. Click Manage Jenkins. 4. Click Configure System. 5. In the Jenkins Location section, in the Jenkins URL field, enter the URL of the Jenkins server. 6. In the HTTP Request section, create a record for authentication by completing the following steps: 1. Click Add next to Basic/Digest Authentication. 2. In the Key Name field, enter a name for the authentication record. 3. In the Username field, enter the operator ID that is used for authenticating the service. This operator should belong to the access group that is associated with the application that has the Pega unit test cases or test suites that you want to run. 7. Select POST from the HTTP default mode list. 8. ​Click Apply, and then click Save. 9. Add a build step to be run after the project is built by completing one of the following actions: Create a project if you have not already done so. Open an existing project. 1. Click Configure. 2. In the Build section, click Add build step and select HTTP Request from the list. 3. In the HTTP Request section, in the URL field, enter the endpoint of the service. 
Use one of the following formats: http:///prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests http:// http:///prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?TestSuiteID=

If you are using multiple parameters, separate them with the & character, for example, http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?AccessGroup=<access group name>&TestSuiteID=<pxInsName> 10. From the HTTP mode list, select POST. 11. Click Advanced. 12. In the Output response to file field, enter the name of the XML file where Jenkins stores the output that it receives from the service. This field corresponds to the LocationOfResults request parameter. 13. From the Authenticate list, select the name of the authentication record that you provided in step 6b. 14. In the Post-build Actions section, select Publish Junit test result report and enter **/*.xml in the Test Report XML field. This setting configures the results in xUnit format, which provides information about test results, such as a graph of test results trends, on your project page in Jenkins. 15. Click Save.

Running tests and verifying results After you configure your validation engine, run the service and verify the test results. For example, in Jenkins, complete the following steps: 1. Open the project and click Build Now. If you are running Pega unit test cases and did not specify an access group, all the test cases are run for the application that is associated with the default access group for your operator ID. If you are running Pega unit test cases and specified an access group, the Pega unit test cases are run for the application that is associated with the access group. 2. In the Build History pane, click the build that was run. 3. On the next page, click Test Result. 4. Click root in the All Tests section. The results of all failed tests and all tests are displayed. 5. You can expand a test result in the All Failed Tests section to view details about why the test was not successful.
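Beyond inspecting results in the Jenkins UI, a pipeline can fail the build automatically by checking the published report. The sketch below assumes the results follow the common JUnit convention of failures and errors counters on each testsuite element; other report flavors may name these differently:

```python
import xml.etree.ElementTree as ET

def build_passed(junit_xml):
    """Return True when no test in an xUnit/JUnit report failed or errored."""
    root = ET.fromstring(junit_xml)
    # The root may be a single <testsuite> or a <testsuites> wrapper.
    suites = [root] if root.tag == "testsuite" else list(root.iter("testsuite"))
    return all(int(s.get("failures", 0)) + int(s.get("errors", 0)) == 0
               for s in suites)
```

A CI step could then call sys.exit(0 if build_passed(report_text) else 1) so that a failed test stops the deployment stage.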

Test failures Tests are not successful in the following scenarios: The operator does not have access to the location of the results. The access group that is passed by the service either does not exist or no access group is associated with the operator ID. An application is not associated with the access group passed by the service. No Pega unit test cases or test suites are in the application.

Running all Pega unit test cases with the Execute Tests service in Pega 7.2.2

When you build an application on Pega Platform™ in a continuous delivery pipeline, you can use the Execute Tests service (REST API) to validate the quality of the build by running Pega unit test cases of that application. A continuous integration tool, such as Jenkins, calls the service, which runs all Pega unit test cases in your application and returns the results in xUnit format. The continuous integration tool can interpret the results and, if the tests are not successful, you can correct errors before you deploy your application. The service comprises the following information: Service name: PegaUnit Rule-Test-Unit-Case pzExecuteTests Service package: PegaUnit End point: http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests The Pega 7 Platform does not provide a test case quarantine process for this service. Test case quarantines allow you to stop non-critical tests from running if they are causing failures so that the service can continue to run.

Request parameters
The Execute Tests service takes the following optional request parameters, which are strings:
Access Group - The access group associated with the application for which you want to run automated test cases. If you pass this parameter, the service runs all the test cases in the application that is associated with this access group. If you do not pass this parameter, the service runs all the test cases in the application that is associated with the default access group that is configured for your operator. See step 9-b in the Configuring your build environment section below for an example of an access group passed into the service.
LocationOfResults - The location where the service stores the XML file that contains the test results.

Response
The service returns the test results in an XML file in xUnit format and stores them in the location that you specified in the LocationOfResults request parameter. The output is similar to the following example:
<nodes expected="/" result="/">
  <nodes xmlns:purchase="urn:acme-purchase-order" expected="/purchase:order[1]" result="/purchase-order[1]">
    <error type="Local name comparison">Expected "order" but was "purchase-order"</error>
    <error type="Namespace URI comparison">Expected "urn:acme-purchase-order" but was ""</error>
  </nodes>
</nodes>
<sysout>This text is captured by the report</sysout>
<syserr/>


Configuring your default access group When you run the Execute Tests service, you can specify the access group that is associated with the application for which you want to run all automated test cases. If you do not specify an access group, the service runs the Pega unit test cases for the default access group that is configured for your operator ID on the Pega 7 Platform. To configure a default access group, complete the following steps: 1. In Designer Studio, click the Operator menu, and then click Operator. 2. In the Application Access section, select your default access group.

3. Click Save.

Configuring your build environment Configure your build environment so that it can call the Execute Tests service and run all the Pega unit test cases in your application. Your configuration depends on the external validation engine that you use. For example, the following procedure describes how to configure Jenkins to call the service. 1. Open a web browser and navigate to the location of the Jenkins server. 2. Install the HTTP request plug-in for Jenkins to call the service. 1. Click Manage Jenkins. 2. Click Manage Plugins. 3. On the Available tab, select the HTTP Request Plugin check box. 4. Specify whether to install the plug-in without restarting Jenkins or download the plug-in and install it after restarting Jenkins. 3. Click Manage Jenkins. 4. Click Configure System. 5. In the Jenkins Location section, in the Jenkins URL field, enter the URL of the Jenkins server. 6. In the HTTP Request section, create a record for authentication by completing the following steps: 1. Click Add next to Basic/Digest Authentication. 2. In the Key Name field, enter a name for the authentication record. 3. In the Username field, enter the operator ID that is used for authenticating the service. This operator should belong to the access group that is associated with the application that has the Pega unit test cases that you want to run. 7. Select POST from the HTTP default mode list.

8. Click Apply, and then click Save. 9. Add a build step to be run after the project is built: 1. Complete one of the following actions: Create a project if you have not already done so. Open an existing project. 2. Click Configure. 3. In the Build section, click Add build step and select HTTP Request from the list. 4. In the HTTP Request section, in the URL field, enter the endpoint of the service. Use the following format: http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests 5. Optional: At the end of the URL, append the name of the access group that is associated with the application for which you want to run Pega unit test cases. Use the format ?AccessGroup=<access group name>. For example: http://myPega7PlatformHost:8080/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?AccessGroup=HRDepart:Administrators If you do not specify an access group name, the service uses the default access group for your operator ID. For more information, see Configuring your default access group. 10. From the HTTP mode list, select POST. 11. Click Advanced. 12. In the Output response to file field, enter the name of the XML file where Jenkins stores the output that it receives from the service. This field corresponds to the LocationOfResults request parameter. 13. From the Authenticate list, select the name of the authentication record that you provided in step 6b. 14. In the Post-build Actions section, select Publish Junit test result report and enter **/*.xml in the Test Report XML field. This setting configures the results in xUnit format, which provides information about test results, such as a graph of test results trends, on your project page in Jenkins. 15. Click Save.

Running tests and verifying results After you configure your validation engine, run the service and verify the test results. For example, in Jenkins, complete the following steps: 1. Open the project and click Build Now. If you did not specify an access group, all the Pega unit test cases are run for the application that is associated with the default access group for your operator ID. If you specified an access group, the automated test cases are run for the application that is associated with the access group. 2. In the Build History pane, click the build that was run. 3. On the next page, click Test Result. 4. Click root in the All Tests section. The results of all failed tests and all tests are displayed. 5. You can expand a test result in the All Failed Tests section to view details about why the test was not successful.

Test failures Tests are not successful in the following scenarios: The operator does not have access to the location of the results. The access group that is passed by the service either does not exist or no access group is associated with the operator ID. An application is not associated with the access group passed by the service. No Pega unit test cases are in the application.

Running all data page unit test cases with the Execute Tests service in Pega 7.2.1 When you build an application on Pega Platform™ in a continuous delivery pipeline, you can use the Execute Tests service (REST API) to validate the quality of the build by running Pega unit test cases of that application. An external validation engine, such as Jenkins, calls the service, which runs all data page unit test cases in your application and returns the results in xUnit format. The validation engine can interpret the results and, if the tests are not successful, you can correct errors before you deploy your application. The service comprises the following information: Service name: PegaUnit Rule-Test-Unit-Case pzExecuteTests Service package: PegaUnit End point: http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests The test case quarantine process is not available for this service.

Request parameters
The Execute Tests service takes the following optional request parameters, which are strings:
Access Group - The access group associated with the application for which you want to run data page unit tests. If you pass this parameter, the service runs all the test cases in the application that is associated with this access group. If you do not pass this parameter, the service runs all the test cases from the application that is associated with the default access group configured for your operator.
LocationOfResults - The location where the service stores the XML file that contains the test results.

Response

The service returns the test results in an XML file in xUnit format and stores them in the location that you specified in the LocationOfResults request parameter. The output is similar to the following example:

<nodes expected="/" result="/">
  <nodes xmlns:purchase="urn:acme-purchase-order" expected="/purchase:order[1]" result="/purchase-order[1]">
    <error type="Local name comparison">Expected "order" but was "purchase-order"</error>
    <error type="Namespace URI comparison">Expected "urn:acme-purchase-order" but was ""</error>
  </nodes>
  <sysout>This text is captured by the report</sysout>
  <syserr/>
</nodes>
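A validation engine interprets xUnit output mechanically, and a custom script can do the same. The sketch below parses a simplified xUnit-style document with Python's standard library; the element names, attribute names, and test case names are illustrative, so inspect your own service output for the exact shape before relying on them.

```python
import xml.etree.ElementTree as ET

# A simplified xUnit-style result document (names are hypothetical).
SAMPLE = """\
<testsuite name="PegaUnit" tests="3" failures="1">
  <testcase name="TestCustomerDataPage"/>
  <testcase name="TestOrderDataPage"/>
  <testcase name="TestPurchaseOrder">
    <failure message='Expected "order" but was "purchase-order"'/>
  </testcase>
</testsuite>
"""

def summarize(xml_text):
    """Return (total, failed, failure details) from an xUnit document."""
    suite = ET.fromstring(xml_text)
    cases = list(suite.iter("testcase"))
    failures = [
        (case.get("name"), case.find("failure").get("message"))
        for case in cases
        if case.find("failure") is not None
    ]
    return len(cases), len(failures), failures

total, failed, messages = summarize(SAMPLE)
# total == 3, failed == 1; a pipeline would mark the build unstable here.
```

A gate in the pipeline can then fail the build whenever `failed > 0`, which mirrors what the Jenkins JUnit report step does with the same file.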
Configuring your default access group

When you run the Execute Tests service, you can specify the access group that is associated with the application for which you want to run data page unit tests. If you do not specify an access group, the service runs data page unit test cases for the default access group that is configured for your operator ID on the Pega 7 Platform. To configure a default access group, complete the following steps:

1. In Designer Studio, click the Operator menu, and then click Operator.
2. In the Application Access section, select your default access group.
3. Click Save.

Configuring your build environment

Configure your build environment so that it can call the Execute Tests service and run the data page unit test cases in your application. Your configuration depends on the external validation engine that you use. For example, the following procedure describes how to configure Jenkins to call the service.

1. Open a web browser and navigate to the location of the Jenkins server.
2. Install the HTTP Request plug-in so that Jenkins can call the service:
   1. Click Manage Jenkins.
   2. Click Manage Plugins.
   3. On the Available tab, select the HTTP Request Plugin check box.
   4. Specify whether to install the plug-in without restarting Jenkins or to download the plug-in and install it after restarting Jenkins.
3. Click Manage Jenkins.
4. Click Configure System.
5. In the Jenkins Location section, in the Jenkins URL field, enter the URL of the Jenkins server.
6. In the HTTP Request section, create a record for authentication by completing the following steps:
   1. Click Add next to Basic/Digest Authentication.
   2. In the Key Name field, enter a name for the authentication record.
   3. In the Username field, enter the operator ID that is used for authenticating the service. This operator should belong to the access group that is associated with the application that has the tests that you want to run.
7. From the HTTP default mode list, select POST.
8. Click Apply, and then click Save.
9. Add a build step to be run after the project builds:
   1. Create a project if you have not already done so, or open an existing project.
   2. Click Configure.
   3. In the Build section, click Add build step and select HTTP Request from the list.
   4. In the HTTP Request section, in the URL field, enter the endpoint of the service, in the format http://<hostname>:<port>/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests.
   5. Optional: At the end of the URL, append the name of the access group that is associated with the application for which you want to run data page unit tests, in the format ?AccessGroup=<access group name>, for example: http://myPega7PlatformHost:8080/prweb/PRRestService/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests?AccessGroup=HRDepart:Administrators. If you do not specify an access group name, the service uses the default access group for your operator ID. See Configuring your default access group.
10. From the HTTP mode list, select POST.
11. Click Advanced.
12. In the Output response to file field, enter the name of the XML file in which Jenkins stores the output that it receives from the service. This field corresponds to the LocationOfResults request parameter.
13. From the Authenticate list, select the name of the authentication record that you created in step 6b.
14. In the Post-build Actions section, select Publish JUnit test result report and enter **/*.xml in the Test Report XMLs field. This setting publishes the xUnit-format results, which provide information about test results, such as a graph of test result trends, on your project page in Jenkins.
15. Click Save.
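The optional AccessGroup query string described in step 5 above can also be assembled programmatically, which is useful when one pipeline drives several applications. A minimal sketch, with a hypothetical host name and access group:

```python
from urllib.parse import urlencode

BASE = ("http://myPega7PlatformHost:8080/prweb/PRRestService"
        "/PegaUnit/Rule-Test-Unit-Case/pzExecuteTests")

def execute_tests_url(access_group=None):
    """Append the optional AccessGroup parameter to the service endpoint.

    Pega access group names take the form ApplicationName:GroupName,
    e.g. HRDepart:Administrators. When access_group is None, the service
    falls back to the operator's default access group.
    """
    if access_group is None:
        return BASE
    return BASE + "?" + urlencode({"AccessGroup": access_group})

url = execute_tests_url("HRDepart:Administrators")
```

Note that `urlencode` percent-encodes the colon in the group name (`%3A`); servers decode it back, so the encoded and literal forms are equivalent on the wire.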

Running tests and verifying results

After you configure your validation engine, run the service and verify the test results. For example, in Jenkins, complete the following steps:

1. Open the project and click Build Now. If you did not specify an access group, all data page unit tests are run for the application that is associated with the default access group for your operator ID. If you specified an access group, the data page unit tests are run for the application that is associated with that access group.
2. In the Build History pane, click the build that was run.
3. On the next page, click Test Result.
4. Click root in the All Tests section. The results of all failed tests and all tests are displayed.
5. You can expand a test result in the All Failed Tests section to view details about why the test was not successful.

Test failures

Tests are not successful in the following scenarios:

- The operator does not have access to the location of the results.
- The access group that is passed by the service either does not exist, or no access group is associated with the operator ID.
- An application is not associated with the access group that is passed by the service.
- No data page unit test cases are in the application.

Creating a data page unit test Data page unit tests are a way to validate that application data is loaded correctly. It is important to validate data page functionality when changes are made to a data page or rule form. Testing data pages ensures acceptable performance and that no existing functionality has been broken.

Starting a data page unit test

Before you begin testing, you must configure your application for automated testing. See Automated data page testing: application setup. You can access a data page in your application from the Records Explorer, the Data Explorer, or the data designer.

1. From the Records Explorer, click Data Model > Data Page.
2. Select and open one of the data pages.
3. Select Actions > Run.
4. In the Run Data page dialog box, select a Thread, set the parameters for the data, and then click Run.

Results

The Results section of the data page run dialog box displays the properties and their associated values. Ensure that the data page output is correct based on the parameters that you entered, and then click Convert to test.

1. Add a description.
2. Select the check box next to the properties to include in the test.
3. Select comparators and enter values for each property, if necessary.
4. Enter a value, in seconds, for the expected run time. If the test does not run within the specified run time, the test fails.
5. Click Create and close or Create and open to save the unit test case.

The Unit Test Case configuration page

Running a Data Page Unit Test

After a data page unit test is created, you can access and run it from several places:

- The data page unit test landing page – Select the check boxes next to the names of the unit tests that you want to run, and click Run selected.
- The Test cases tab of the data page – Open any data page, and click the Test cases tab. Select the check boxes next to the names of the unit tests that you want to run, and click Run selected.
- Directly from the test case rule itself – Select Run from the Actions menu. Click View summary in the header to view the detailed results.
- The Test cases tab of the data designer.

Results When the test case is open in Designer Studio, you can view the results in the header. Click View details to view the detailed test results, which contain a list of errors and unexpected differences. The table of unexpected differences displays the property, the comparator, the expected value, and the actual value. You can also view detailed results by clicking the result value on the landing page. Test results are also displayed in the Test cases tab of the data page details. Click the name of the test result that you want to view to open that test.

Data page unit test case assertions

Data page unit test cases help to validate that application data is loaded correctly. You configure assertions that define the expected output of the test. When the test runs, the expected result is compared with the actual results on the data page. Assertions are applied in the order that you define them. All assertions, except for run-time assertions, must pass for the test to be successful. In addition to property and expected run-time assertions, you can create ordered list, unordered list, and result count assertions.

Expected run-time assertions After you create a data page unit test case, the system generates the expected run-time assertion. The default value of the assertion is the time taken by the data page to fetch results when the test was first run. The system compares this time with the run time of any future tests. You can change this value or create additional assertions. An actual run time that is significantly longer than the expected run time can indicate an issue, such as a connectivity issue between the application and database from which you are obtaining initial results.

Expected run-time assertion
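The run-time assertion semantics described above can be sketched in a few lines. This is an illustration of the concept only, not Pega's internal implementation; the baseline value is hypothetical.

```python
def check_run_time(expected_seconds, actual_seconds):
    """Illustrative run-time assertion: flag runs that exceed the
    expected baseline. Unlike the other assertion types, a run-time
    failure is a performance signal rather than a hard test failure --
    for example, it can indicate a connectivity issue between the
    application and the database that supplies the data page."""
    return actual_seconds <= expected_seconds

# Baseline captured when the test case was first run (hypothetical):
expected = 1.5  # seconds

within_budget = check_run_time(expected, 0.9)   # True: acceptable
too_slow = check_run_time(expected, 4.2)        # False: investigate
```

Because a run-time assertion does not have to pass for the test to succeed, a pipeline would typically log this signal rather than fail the build on it.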

Property assertions You can configure property assertions to compare the expected value of one or more properties with the actual values on the data page. In the following example, you can verify that the .pxMaxRecords property, which appears only once on the data page, is equal to 500.

Property assertion
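The property assertion above compares one clipboard property against an expected value through a comparator. The sketch below models that comparison; the comparator names and the simplified dictionary "clipboard page" are illustrative, not Pega's actual comparator set.

```python
# Illustrative comparator table (names are assumptions, not Pega's exact list).
COMPARATORS = {
    "is equal to": lambda actual, expected: actual == expected,
    "is greater than": lambda actual, expected: actual > expected,
}

def assert_property(page, prop, comparator, expected):
    """Compare one property on a (simplified) clipboard page against
    its expected value using the named comparator."""
    return COMPARATORS[comparator](page[prop], expected)

# The .pxMaxRecords example from the text, as a simplified page:
data_page = {".pxMaxRecords": 500}
result = assert_property(data_page, ".pxMaxRecords", "is equal to", 500)
```
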

Ordered list assertions

You can create ordered list assertions for a page list on the data page to apply assertions to all results that are returned by the data page, so that you do not have to manually create assertions for each result in the list. In the following example, the results that are obtained when the assertion runs are compared with the .pxResults property, which contains the list of results that are obtained by running the data page. In addition, the assertion applies only to the data page entries that match the specified filter value (when the Department is Engineering).

Ordered list assertion

Unordered list assertions

You can create unordered list assertions for a page list on the data page. These assertions determine whether the expected result is anywhere in the list of results that are returned by the data page. In the following example, you can verify that the data page results contain an entry where Experience Required is equal to 6, regardless of where the Experience Required property appears in the data page.

Unordered list assertion

Result count assertions

You can configure assertions to compare the number of items that are returned in a page list, value list, or value group on the data page with the output that you expect to see on the clipboard. In the following example, the result count assertion verifies that the number of returned results is greater than 7.

Result count assertion
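The three list-oriented assertion types above can be contrasted in one sketch. The `.pxResults` entries and property names below are illustrative sample data, and the functions are conceptual stand-ins for Pega's assertion engine.

```python
def ordered_assertion(results, prop, expected_values):
    """Every entry's property must match the expected value at the
    same position -- order matters."""
    return [r.get(prop) for r in results] == expected_values

def unordered_assertion(results, prop, expected_value):
    """The expected value may appear anywhere in the result list."""
    return any(r.get(prop) == expected_value for r in results)

def result_count_assertion(results, minimum):
    """Result count must be greater than the given minimum."""
    return len(results) > minimum

# Simplified .pxResults page list (sample data):
px_results = [
    {"Department": "Engineering", "ExperienceRequired": 3},
    {"Department": "Engineering", "ExperienceRequired": 6},
]

ordered_ok = ordered_assertion(px_results, "Department",
                               ["Engineering", "Engineering"])
unordered_ok = unordered_assertion(px_results, "ExperienceRequired", 6)
count_ok = result_count_assertion(px_results, 7)  # only 2 results: fails
```
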

Pega 7.2.2 and later behavior when switching between Pega unit testing and Automated Unit Testing features

Beginning with Pega 7.2.2, you can use Pega unit testing to create test cases that validate the quality of your application by comparing the expected test output with the results that are returned by running rules. In addition, if you have the AutomatedTesting privilege, you can use Automated Unit Testing (AUT) and switch between Pega unit testing and AUT, for example, if you want to view test cases that you created in AUT. The following list describes the application behavior when you use Pega unit testing and AUT:

- When you unit test activities that are supported by both Pega unit testing and AUT, the Run Rule dialog box displays updated options for creating unit tests for Pega unit testing. However, you cannot create unit test cases for AUT by using this dialog box.
- When you use Pega unit testing, you can create, run, and view the results of Pega unit test cases on the Test Cases tab for the supported rule types.
- You can run and view the results of Pega unit test cases by clicking Designer Studio > Automated Testing > Test Cases. You can also switch to the AUT landing page by clicking Switch to old version.
- When you switch to the AUT landing page, you can create, run, and view the results of unit test cases for AUT on the Test Cases tab for activities, data transforms, and data tables, which are supported by both Pega unit testing and AUT. You can create unit test cases only by clicking the Record test case button and using the older Run Rule dialog box.
- On the Automated Unit Testing landing page, you can restore the Automated Testing landing page by clicking Switch to new version.
- When you click the Test cases tab in an activity, decision table, or decision tree, the tab displays options for creating Pega unit test cases.
- If you use the Automated Unit Testing landing page and then log out of the system, Designer Studio displays the Designer Studio > Application > Automated Unit Testing menu option instead of the Designer Studio > Application > Automated Testing menu option. To return to the Automated Testing landing page, click Switch to new version on the Automated Unit Testing landing page.

Automated unit testing of data pages

Data page unit tests are a way to validate that application data is loaded correctly. It is important to validate data page functionality when changes are made to a data page or rule form. Testing data pages ensures acceptable performance and that no existing functionality has been broken. Data page unit tests compare the expected value of one or more properties with their actual values in a data page.

The Unit Test Case landing page contains all the unit test cases for data pages within an application. The test case contains the test criteria and test results. Unit Test Case rules are created for data pages to enable the testing of a data page within an application. Before you begin testing, your application must be configured for automated testing. See Automated Data Page Testing: Application Setup.

You create data page unit tests by running the data page and then converting the run into a test. Existing data page unit tests are accessed on the Test cases tab of the Data page rule: in the Records Explorer, select Data Model > Data Page, click the name of a data page, and then, from the data page record detail, click the Test cases tab.

Landing page

The data page unit test landing page lists all the data page unit tests in an application. On the landing page, you can selectively run the automated tests that are defined for data pages and see which tests have passed or failed. You can also create new data page unit tests from the landing page.

The landing page is accessed from the Data Explorer. The Explorer panel is located on the left side of the Designer Studio screen. Click the Data icon to display the Data Explorer, and then, from the drop-down menu at the top left of the Data Explorer, select View all test cases.

The Data page test case landing page

Automated data page testing: application setup

You can test the functionality of data pages individually or in large batches by doing automated testing of data pages. Data page testing consists of running a data page, converting the run into a test, and then configuring and saving the test. Before you do automated data page testing, you must make sure that your application is configured correctly. Configuration involves these basic steps:

- Creating a development/test application on top of the current application.
- Creating a test ruleset in the new application copy.
- Verifying that the test ruleset is in the correct location in relation to other rulesets.
- Adding test cases to the test ruleset.

After the application is configured, you can begin creating data page unit test case rules and using those rules to test your application. Note that failure to configure your application correctly can create issues. Additionally, be aware that non-test rules created after the creation of a test ruleset can be saved into the test ruleset. This situation can happen when the test ruleset is in the first position of the ruleset list, as described in the following section.

Creating a development or test application on top of the current application

To create a development or test application, your application must be open in Designer Studio. If you already have a development application, use that one and do not create another application.

1. Create an application instance and set its Built on application as the current application (the application that you want to test).
2. Add any development rulesets to the application.
3. Copy the access group from the original application, and do not change anything except the application name.

Creating a test ruleset and verifying its location

Create a test ruleset that is separate from your production ruleset so that you do not have unnecessary test or test result data in your production application.

1. In the Application, click Add ruleset and enter a name for the new test ruleset.
2. On the Category tab of the ruleset details, select the Use this ruleset to store test cases check box. This setting enables data page unit test case rules to be stored in this ruleset. You can enable test automation settings in more than one ruleset.
3. Verify that the test ruleset is the last ruleset in the list. If the test ruleset is not last in the list, subsequently created rules might be saved into the test ruleset. However, unit test case rules cannot be saved into a ruleset that does not have test automation settings enabled.
4. Order the rulesets by clicking the number next to the ruleset name and dragging it to the position that you want.

Adding test cases to the test ruleset

When creating data page unit test rules, verify that you are selecting the test ruleset.

1. Check the ruleset in the Create Unit Test Case page when the test case is created. Click the Gear icon to update the ruleset value.
2. Verify that you save the test case in the correct development branch in the application context.

Automated Unit Testing: Advanced editing and validation of result pages

Summary

When using Automated Unit Testing and test cases for regression unit testing, you can specify validate rules to check the values of properties when the test case is played back. For example, if you want to check that a property value always lies in the range between $50 and $100 when a particular test case is run, you can change the property value that was saved when the test case rule was first created and set the value to a validate rule. When the test case runs, if the resulting value satisfies the validate rule, the test case status is reported as successful. If the value does not satisfy the validate rule, the test case is reported as unsuccessful.

This method is typically used for flow test cases, and usually for properties that are related to time-based values, which have different numeric values every time the flow runs. Using validation is an alternative to specifying that differences encountered during the playback of the flow test case should always be ignored for such properties. Instead, by using the validate rule, the test case run is reported as successful as long as the property value adheres to the validate rule.
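The $50–$100 example above can be sketched conceptually: during playback, a saved literal value demands an exact match, while a saved validate-rule reference instead runs the rule against whatever value the flow produced. The function names and values below are hypothetical stand-ins for a Pega validate rule, used purely for illustration.

```python
def validate_range_50_100(value):
    """Stand-in for a hypothetical validate rule that accepts values
    between $50 and $100 inclusive."""
    return 50 <= value <= 100

def check_playback_value(saved, actual):
    """During playback, a saved literal must match exactly; a saved
    validate-rule reference instead evaluates the rule on the value."""
    if callable(saved):
        return saved(actual)
    return saved == actual

# A property whose value changes on every run passes for any in-range
# value once the saved literal is replaced with the validate rule:
in_range = check_playback_value(validate_range_50_100, 72.50)    # passes
out_of_range = check_playback_value(validate_range_50_100, 120.0)  # fails
exact = check_playback_value("Open", "Open")                     # literal match
```
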

Suggested Approach

Use the test case's rule form to specify validate rules for properties in the test case. To edit the result pages of a test case:

1. From the Rules by Type explorer, select SysAdmin > Test Case, and choose a test case rule from the list.

   In V6.1, the first step is different. To open a test case rule form in V6.1:
   1. Open the Automated Unit Tests gadget by selecting > Application > Automated Unit Testing > Automated Unit Tests.
   2. Select Unit Test Cases to display the current application's test cases.
   3. Right-click the test case to open the context menu, and select Open.
   The rest of the steps in this article are the same for V6.1.

2. To begin editing the result pages of the test case, select the Results tab of the test case and click Show Result Pages. The ResultingPage Viewer window opens.

3. In the ResultingPage Viewer window, navigate to the primary page of the step you want to edit and select Action > Update Page.

4. The Update TestCase Page displays a list of all properties and their values for that step in the test case. From here you can change the value of each property or use a validate rule to test the value when the test case runs.

To edit the value of a property, highlight the value of the property and enter a new value or a validate rule. For a validate rule, use the following syntax: $ [Name of Validate rule]

5. When you have finished editing the result pages of the test case, click the Update Page button to update the new property values, and then close the ResultingPage Viewer window.
6. Save the test case rule form.

Auto-generating test cases for a decision table

Summary

In addition to creating test cases manually for a decision table, with Automated Unit Testing you can iterate through all possible variations of the table values and save those variations as test cases.

Suggested Approach

To auto-generate all possible test cases for a decision table:

1. Open the decision table rule that you want to test.
2. Click the Run toolbar icon. The Run Rule window opens.

   In V6.1, step two is different. To create test cases for a decision tree or decision table in V6.1:
   1. Go to the Test Cases tab of the opened rule.
   2. Click Record New Test Case. The Run Rule window opens.
   The rest of the steps in this article are the same for V6.1.

3. In the Test Page section, choose the auto-generate test cases option and click Generate Test Cases.

4. After you click Generate Test Cases, the New Test Case dialog box appears.

5. In the Test Case Prefix field, enter a short string to be used as the prefix for the auto-generated test cases, and an optional description.
6. Specify the appropriate RuleSet and version for the test cases and click Create.
7. After you click Create, a table listing all of the auto-generated test cases for the decision table opens. From this table, you can choose which test cases to create by selecting the check box next to each one.

8. After choosing which test cases to create, click Save Test Cases.

Related Topics About Automated Unit Testing Decision rules How to create and execute test cases for decision tree and decision table rules

Automated Testing

With Automated Unit Testing, you can simplify and increase the speed of unit testing certain aspects of your Process Commander applications. As a result, you can be more agile about determining when corrective action is necessary within your development process. The benefits of Automated Unit Testing include:

- Early and easy process testing
- Individual and group test management to reduce overhead and speed up the testing process
- Test case retention, because tests are directly associated with the rules themselves

For example, you might have an application with a number of work object types and the flows that process those work objects. Subsequent development work alters one of those flows. The intended changes made to one flow might have unintended impacts on the application's other work object types and flows. In this situation, the ability to efficiently regression unit test for unexpected results and assess the extent of the impacts is valuable. With Automated Unit Testing, you can quickly discover and address such unanticipated effects.

Using Automated Unit Testing, developers can:

- Create test cases for flows, decision tables, decision trees, activities, and Service SOAP rules
- Run a rule's test cases directly from the Test Cases tab of the rule form (Version 6)
- Auto-generate test cases for decision rules
- Run test cases manually, in groups as unit test suites from the Dashboard, or from the Test Management Framework
- Save results from test case runs, and use the saved results to compare with past runs and to report on the effects of change
- Instrument flows with flow markers, input fields, and property value differences to allow easier manual unit testing

Additional information

This article links to other Automated Unit Testing resources. The applicable articles are available for Process Commander versions V5.4, V5.5, and V6.1; availability varies by topic. Automated unit testing is unavailable in PRPC 7.1.1 – 7.1.5. Starting in 7.1.6, users can access AUT from supported versions of Internet Explorer.

Overview and configuration
- How Automated Unit Testing works
- Enabling Automated Unit Testing
- Creating a testing application and ruleset for test cases and unit test suites
- Creating a ruleset for test cases and unit test suites

Unit testing – Flow rules
- How to test flows with Automated Unit Testing
- Creating flow rule test cases
- Executing flow rule test cases
- Create and use a flow marker
- Advanced editing and validation of result pages
- How to define properties to be ignored during flow test case execution

Unit testing – Decision rules
- Create and execute test cases for decision tree and decision table rules
- Auto-generating test cases for a decision table
- Running all test cases for decision tree and decision table rules

Unit testing – Activity rules
- How to unit test activities with the Automated Testing feature

Unit testing – Service SOAP rules
- Create and execute test cases for SOAP service rules

Automated regression testing
- Creating a unit test suite from the Test Manager
- Test suite reporting

Create and execute test cases for SOAP service rules

Summary

SOAP services start processing in response to a request from an external application. What happens if the external application that makes the requests is being built and tested at the same time that you are creating your application? You can verify that the service will process data appropriately by using Automated Unit Testing and manually providing some representative data to process.

Suggested Approach

You can unit test a SOAP service rule on its own before testing it in the context of the entire application that you are building. With Automated Unit Testing, you can save the test data that you use as test case rules. Then, the next time you test that rule, you can run the test case rather than manually re-entering the test data.

Creating SOAP service test cases

1. Open the Service SOAP rule that you want to test.
2. Click the Run toolbar button. The Simulate SOAP Service Execution window opens.

   In V6.1, step two is different. To create a test case for a Service SOAP rule in V6.1:
   1. Go to the Test Cases tab of the opened rule.
   2. Click Record New Test Case. The Simulate SOAP Service Execution window opens.
   The rest of the steps for creating a test case are the same for V6.1.

3. In the Requestor Context section, choose whether to use the current requestor or to initialize a service requestor. If you select the current requestor, the service runs as you, in your own session, with your access rights and RuleSet list. If you select to initialize a service requestor, Process Commander creates a new requestor and runs as that requestor, with the access rights and RuleSet list specified in the access group for its service package.
4. In the Enter Request Data section, select whether to specify individual request values or to supply a SOAP request envelope. If you choose to specify individual request values, you must enter the values for the SOAP parameters in the SOAP Parameters Values section. If you choose to supply a SOAP request envelope, the SOAP Request Envelope section appears in the window, and you can edit the request envelope.

5. When you have selected the appropriate values for the Service SOAP rule, click Execute to test it. The Service Simulation Results window opens, displaying the overall results as well as the list of steps taken, the Response Parameter values, and the SOAP Response Envelope values.

6. When you are satisfied with the results, click Save Test Case. The New Test Case dialog box opens.

7. In the Test Case Name field, enter a short string that describes the test case.
8. Specify the RuleSet that you created for test cases and click Create.

For more information on Service SOAP rules, see Testing Services and Connectors.

Running SOAP Service Test Cases

After you create a test case for a rule, it appears in the list of Saved Test Cases in the Simulate SOAP Service Execution window for the tested rule.

In V6.1, the steps for running a test case are different. After you create test cases for a rule, they appear on the Test Cases tab for that rule. To run a test case for a Service SOAP rule in V6.1:

1. Open the rule that you want to test. 2. Go to the Test Cases tab of the opened rule. 3. Click the name of the test case. The Simulate SOAP Service Execution window opens, the system runs the test case, and displays the results. To run a test case:

1. Open the rule you want to test. 2. Click the Run toolbar icon. The Simulate SOAP Service Execution window opens. 3. Select the test case you want to run from the list.

Because the test case rule contains the initial pages that were created, loaded, or copied before the rule was run, you do not have to re-create the initial conditions before running the test case.

4. Click Run Test Case. Process Commander runs the test case and displays the results in the Service Simulation Results window. If any differences are found between the current results and the saved test case, they are displayed in the Simulate SOAP Service Execution window, along with available actions.

Starting with Version 5.5: If differences are found between the current results and the saved test case, there are additional actions that you can take after running the test case:

- Save results – Click Save Results to save the results to the test case for reviewing later.
- Overwrite the saved test case – If the new results are valid, you can click Overwrite Test Case to overwrite the test case and use the new information.
- Ignore differences – You can choose to ignore particular differences by selecting them and then clicking Save Ignores. Instead of having a property flagged as a difference every time the test case runs, you can choose to have it ignored in future runs of this test case.

Starting with Version 6.1 SP2: You can choose to ignore differences for all test cases in the application. You can also select a page to ignore all differences found on that page. You can ignore a page only for this specific test case (not across all test cases). If you select to ignore a page, all differences found on that page are ignored each time this test case runs.
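The difference-and-ignore mechanics described above can be illustrated with a small comparison sketch. The page and property names below are hypothetical, and the flat dictionaries are a stand-in for Pega's clipboard pages; the point is only how per-property and per-page ignores suppress expected differences.

```python
def diff_results(saved, current, ignored_props=(), ignored_pages=()):
    """Illustrative playback comparison: report (page, property) pairs
    whose current value differs from the saved test case, skipping
    properties saved as 'ignores' and entire ignored pages."""
    diffs = []
    for page, props in saved.items():
        if page in ignored_pages:
            continue
        for prop, expected in props.items():
            if prop in ignored_props:
                continue
            if current.get(page, {}).get(prop) != expected:
                diffs.append((page, prop))
    return diffs

saved = {"pyWorkPage": {".pyLabel": "Order", ".pxCreateDateTime": "t0"}}
current = {"pyWorkPage": {".pyLabel": "Order", ".pxCreateDateTime": "t1"}}

raw = diff_results(saved, current)
# The timestamp differs on every run, so it is saved as an "ignore":
clean = diff_results(saved, current, ignored_props=[".pxCreateDateTime"])
```

With the ignore in place the run reports no differences, which is exactly the effect of clicking Save Ignores on a time-based property.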

Create and execute test cases for decision tree and decision table rules

Summary

You can unit test an individual rule on its own before testing it in the context of the entire application that you are building. With Automated Unit Testing, you can save the test data that you use as test case rules. Then, the next time you test that rule, you can run the test case rather than manually re-entering the test data.

Suggested Approach

Creating Test Cases for Decision Trees and Decision Tables

To record and save a test case:

1. Open the decision tree or decision table rule that you want to test.
2. Click the Run toolbar icon. The Run Rule window appears.

   In V6.1, steps two and three are different. To create a test case for a decision tree or decision table in V6.1:
   1. Go to the Test Cases tab of the opened rule.
   2. Click Record New Test Case. The Run Rule window appears.
   Starting with step 4, the rest of the steps for creating a test case are the same for V6.1.

3. In the Test Page section, specify which page to use as the main page.
4. In the Result section, enter the test data and click Run Again.
5. Examine the results and determine whether the test data used generated the expected results.

6. When you are satisfied with the results, click Save Test Case. The New Test Case dialog box appears.
7. Enter the name of the test case, a short description, and the appropriate RuleSet and version; then click Create.

Running Test Cases
After you create a test case for a rule, it appears in the list of saved test cases in the Run Rule window for the tested rule. To run a test case:

1. Open the rule you want to test.
2. Click the Run button. The Run Rule window appears.
In V6.1, steps 2 - 4 for running a test case are different. After you create test cases for a rule, they appear on the Test Cases tab for that rule. To run a test case in V6.1:

1. Go to the Test Cases tab of the opened rule.
2. Click the name of the test case. The Run Rule window appears, with the name of the test case already selected. The system runs the test case and displays the results in the Result section of the window. If any differences are found, a message states that the results are unexpected.
Step 5 in this article is the same for V6.1.
3. Select the Run against a saved test case option and choose a test case from the list.

Because the test case rule contains the initial pages that were created, loaded, or copied before the rule was run, you do not have to recreate the initial conditions before running the test case.
4. Click Run Test Case. Process Commander runs the test case and displays the results in the Result section of the Run Rule window. If any differences are found, a message states that the results were unexpected.
5. Click Save Results to save the descriptions of any differences found between the current results and those stored in the test case.

In the case of unexpected results, you can examine the rule history of the rule by clicking View Rule History. Additionally, if the new results are valid, you can overwrite the test case so it uses the new information by clicking Overwrite Test Case.
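The unexpected-results check described above amounts to a property-by-property comparison between the values saved in the test case and the values produced by a fresh run. The sketch below illustrates the idea in plain Python; it is not Pega code, and the function name and page shapes are hypothetical:

```python
def compare_results(expected, actual):
    """Return {property: (expected_value, actual_value)} for each mismatch.

    Simplified, illustrative stand-in for the comparison Process Commander
    performs between a saved test case and the results of a fresh run.
    """
    diffs = {}
    for prop in expected.keys() | actual.keys():
        if expected.get(prop) != actual.get(prop):
            diffs[prop] = (expected.get(prop), actual.get(prop))
    return diffs

saved = {"Approved": "true", "Discount": "10"}   # values stored in the test case
fresh = {"Approved": "true", "Discount": "15"}   # values from the current run
diffs = compare_results(saved, fresh)
```

A non-empty result corresponds to the "results were unexpected" message; conceptually, Save Results stores these differences for review, while Overwrite Test Case replaces the saved values with the fresh ones.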

Related Topics
About Automated Unit Testing
Decision rules

Create and use a flow marker

Summary
When you have access to Automated Unit Testing, you can use flow markers to mark various points in a flow process. A flow marker saves test data and decisions that advance the flow execution to that point. Then, when you unit test the flow, you can jump directly to these specific points without having to input the same information every time. For example, if you are changing a specific area of a complex flow, you can focus your unit testing on that area by setting a flow marker and jumping to it to begin testing from there. By using a flow marker, you can unit test more rapidly by skipping all the flow steps leading up to that point.

Suggested Approach

Creating a flow marker:
1. Open the flow rule that you want to create the flow marker for.
2. Click the Run toolbar icon. The Run Rule window appears.
In V6.1, steps 2 - 4 are different. To create a flow marker in V6.1:
1. Go to the Test Cases tab of the opened rule.
2. Click Record New Test Case. The system creates a new test page and starts executing the flow, beginning with creating the work object.
3. Click Create to create the work object. Enter test data and proceed with the flow. After the work object is created, you can save flow markers at points in the process by clicking Save Flow Marker.
The rest of the steps after step 4 for creating a flow marker are the same for V6.1.
3. In the Test Page section, specify which page to use as the main page.
4. Enter initial test data for the flow rule and click Continue.
5. Once you have reached the step in the flow rule where you want to place the flow marker, click the Save Flow Marker button in the test navigation window. The New Flow Marker dialog box appears.

6. In the Flow Marker Name field, enter a short string that describes the flow marker.
7. Specify an appropriate RuleSet and version.
8. Click Create.

Using Flow Markers
In V6.1, the steps for using flow markers are different. To use a flow marker in V6.1:
1. Go to the Test Cases tab of the flow rule.
2. In the Flow Markers table, select the name of the flow marker you want to jump to.
3. The system advances the process to the flow marker's location.
To use a flow marker you have created, in the Test Page section of the Run Rule window, select which flow marker you want to jump to.

Click the Jump to Flow Marker button. By toggling the Show Flow markers for all Operator ID's check box, you can select from only the flow markers that you created or from all flow markers.

You advance directly to the step saved in the flow marker. From that step you can proceed through the flow process.

Creating a RuleSet for test cases and test suites

Summary
When using Automated Unit Testing, it is good practice to create a RuleSet specifically used to store your test case and unit test suite rules. A separate RuleSet allows you to manage your test case and test suite rules independently of your application rules.
This article describes the steps available in Process Commander versions prior to Version 6.1. To create a RuleSet to store your test case and unit test suite rules as of Version 6.1, see Creating a testing application and RuleSet for test cases and unit test suites (V6).

Suggested Approach
To create a test case RuleSet, you create a RuleSet and a RuleSet Version instance, and then add that RuleSet to the access group of the operators who will create and execute test cases.
To create a new RuleSet:
1. From the Rules by Type explorer, select SysAdmin > RuleSet.
2. Click the New icon. The New Rule Instance dialog box displays.

3. In the RuleSet Name field, enter the name for your test case RuleSet, click Create, then click the save icon on the RuleSet rule form.
4. From the Rules by Type explorer, select SysAdmin > RuleSet Version. Click the New icon. The New Rule Instance dialog box displays.
5. In the RuleSet Name field, enter the name of the RuleSet you created. In the Version field, enter the version number of the RuleSet, then click Create.
6. Optional: On the Security tab, in the Requires RuleSet and Versions section, enter one or more RuleSet versions on which this RuleSet version depends.

To add the RuleSet to the production RuleSets in the appropriate access groups:
1. From the Rules by Type explorer, select Application Definition > Application. From the list, select the application rule instance for the application that you are running your test cases and test suites against.
2. On the Definition tab, add the test case RuleSet you created to the list of available production RuleSets and click the save icon.

3. Next, you need to add the RuleSet to your access group. From the Rules by Type explorer, select Security > Access Group. Select your access group from the list.

4. On the Access tab, add the test case RuleSet you created to the list of production RuleSets and click the save icon.

Creating a test suite

Summary
This article describes the steps available in Process Commander Version 5.4. For Version 6.1 and Version 5.5 steps, see Creating unit test suites.
Part of the Automated Testing facility, test suite rules identify an access group and a collection of test cases and their RuleSets. When you run individual test cases from the Run Rule window, the test case runs in your session as part of your requestor. Test suites, however, run in the background as part of the Pega-ProCom agent. When the agent runs the test suite, it uses the access group specified in the test suite rather than the access group specified for the batch requestor type data instance or for the agent queue. You can create a test suite that includes all the test cases for a specific rule type, or you can select individual rules and specify the sequence in which to run them.

Suggested Approach
Before you begin creating test suite rules, complete the following tasks:
Determine which access group to assign to the test suite. The access group must give the agent access to the test suite rule and the test cases listed in the test suite. If you need to create a new access group, do so now.
Determine whether you need to use a When rule in your test suite and, if so, create it now. You can create a test suite that contains all the test cases for a specific rule type and then constrain that list with a When rule; the test cases for the rules identified by the When rule are included in the test suite.

Creating Test Suites
To create a test suite:
1. From the Rules by Type explorer, select SysAdmin > Test Suite.
2. From the list, click the New toolbar icon.

3. In the New Test Suite form, name the test suite. Specify the test case RuleSet and version that you created for your test cases. Click Create.

4. On the Contents tab, specify the RuleSets that hold the test cases you want to include in the unit test suite. The default RuleSet for this field is the RuleSet chosen when you created the unit test suite.
5. Enter the user ID the agent uses when running this unit test suite. The user ID gives the agent access to the RuleSet that the unit test suite belongs to, as well as the RuleSets listed in the RuleSets field. This field defaults to the Operator ID that created the unit test suite.
6. In Version 5.5, you can choose not to delete the work objects created by the test suite by clearing the Remove Test Work Objects? check box. This box is checked by default.
7. Select the test cases you want to include in this unit test suite in one of the following ways:
Specify test cases by rule type. Specify the application name and version, and optionally a When rule to further constrain the list of test cases; the test cases for the rules identified by the When rule are included in the unit test suite. This option is useful when the order in which the test cases run is not significant. If the order in which test cases run is significant, do not include entries in the Rule Types section.
Search for and then select individual test cases. Enter the name or partial name in the Test Case Name field and click Query, then select the rules you want from the resulting list. If the order in which the test cases run matters, be sure to list them in the order in which they are to run.
If you configure selections in both the Rule Types section and the Query Test Cases section, note that the test cases defined in the Rule Types section run before the test cases listed in the Query Test Cases section.
8. If you specified individual test cases, their RuleSets appear in the list next to their names. Verify that these RuleSets are included in the RuleSets for Test Cases list at the top of the form. If a RuleSet is not in the list, add it now. Otherwise, the test case rule will not run when the unit test suite runs.

9. On the History tab, enter a description in the Full Description and Usage fields. 10. Save the test suite rule.
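The selection and ordering behavior in step 7 can be summarized with a small sketch (plain Python, not Pega code; the function and case names are hypothetical): cases selected through the Rule Types section always run before individually queried cases, and only the queried cases preserve a user-defined order.

```python
def assemble_run_list(rule_type_cases, queried_cases):
    """Build the order in which a test suite runs its test cases.

    rule_type_cases: cases picked up via the Rule Types section
                     (order not significant for these).
    queried_cases:   individually selected cases, in the order listed.
    Per the behavior described above, Rule Types selections run first.
    """
    return list(rule_type_cases) + list(queried_cases)

run_order = assemble_run_list(["DiscountTable_TC1"], ["OrderFlow_TC1", "OrderFlow_TC2"])
```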

Creating a testing application and RuleSet for test cases and unit test suites (V6)

Summary
When using Automated Unit Testing to do regression unit testing, it is a good practice to create a testing application and a RuleSet for regression unit testing and for storing the test case and unit test suite rules. Having a separate application rule and RuleSet gives you the flexibility to manage the test case and unit test suite rules independently from the rules that make up your main application. For configuration steps in releases before PRPC 6.1, see Creating a RuleSet for test cases and unit test suites (V5).

Suggested Approach
During the development and unit testing project phases, it is helpful to have a testing application that is built on the main application. With this setup, you avoid having to include the test case and unit test suite RuleSet in the main application's rule, which makes it possible to migrate and deliver the main application without requiring the test case and unit test suite rules to go with it. To use this method:
1. Create a new RuleSet and RuleSet version for unit testing.
2. Create a new application rule that is built on the main application rule, and add the new RuleSet to it.
3. Provide operator access to the new application.

Step 1: Create a new RuleSet and RuleSet version for unit testing
To create a new RuleSet and version:
1. In the Rules Explorer, in the SysAdmin category, right-click RuleSet and select New.
2. Enter a name for the RuleSet that holds the rules related to unit testing of your application. For example, if your application is named OrderEntrySystem, you might name the RuleSet OrderEntrySystem-Test.
3. Tab out of the RuleSet Name field, and the system enters default information in the fields of the form:
Version: The system enters a default three-part version identifier of 01-01-01. You can modify the version identifier using another three-part identifier.
Description: The system enters a default description. You can modify this description.
Requires RuleSet and Versions: This new RuleSet version must have at least one prerequisite. The system enters the highest Pega-ProcessCommander version in the system.
4. In the Requires RuleSet and Versions section, replace the system default value with the name and version of your primary application's RuleSet and version. For example, if your primary application's RuleSet and version is OrderEntrySystem:02-03, replace the pre-filled Pega-ProcessCommander value with OrderEntrySystem:02-03. (Do not choose to update your current application with this new RuleSet.)

5. Click Create. Then, in the RuleSet form, add a description and save the new RuleSet rule and RuleSet Version rule.

Step 2: Create a new application rule that is built on the main application rule
One way to do this step is:
1. In the Rules Explorer, in the Application Definition category, right-click Application and select New.
2. Enter a name and version for this testing application. For example, if your application is named OrderEntrySystem, you might name the testing application rule OrderEntrySystem-Test. Choose a name that is unique in the system.

3. In the RuleSet field, accept the system default or select a RuleSet to associate with this application rule. (This RuleSet is used by the Export and Import tools; it is not used for rule resolution.) The drop-down menu shows the list of RuleSets that your operator ID can access.
4. Click Create.
5. In the Application rule form, select the name of the application to be tested in the Built on Application field. In the Version field, select the appropriate version.
6. Select Include Parent to ensure that RuleSets from the main application are included in the Application Explorer display.
7. In the Application RuleSets array, select the unit testing RuleSet created in Step 1, and specify the version; for example, OrderEntrySystem-Test:01-01.

8. Save the rule form.

Step 3: Provide operator access to the new application
To complete this step, you create a new access group and add it to the appropriate operator IDs:
1. In the Rules Explorer, in the Security category, right-click Access Group and select New.
2. Enter a name in the form application:description; for example, OrderEntrySystem-Test:Testers. Click Create.

3. In the Access Group rule form, select the testing application in the Application Name field, and its version in the Version field.
4. In the Work Pools field, select the main application's work pool from the list.
5. In the Roles section, specify the appropriate roles. Usually the appropriate roles are the standard roles PegaRULES:SysAdm4 and PegaRULES:AutoTest, for performing development and testing tasks in the testing application.

6. On the Settings tab, select Developer for the Default Portal Layout.
7. Save the form.
8. In the Rules Explorer, in the Organization category, select Operator ID to see all of the available operator ID instances.
9. Open your operator ID instance by selecting its name.
10. In the Access Groups section, click to add a new row and select the new access group.

11. Save the form.
12. Repeat steps 9 through 11 for each operator that needs to access the testing application.
To verify that you have access to the testing application:
1. Press F5 to refresh your portal.
2. From the Designer Studio Application menu, select Switch Application and select the name of the testing application.

The Designer Studio refreshes and displays the name of the testing application.

Creating a unit test suite from the Test Manager

Summary
This article describes the steps available in versions after Process Commander Version 5.4. To create a unit test suite in Version 5.4, see Creating a test suite rule (V5.4).
Version 6.1: The Automated Unit Testing landing page replaces the Test Manager. Unit test suites are created from the Schedule gadget of the Automated Unit Testing landing page. Other than that difference, the steps in this article apply to Version 6.1.
The Schedule tab of the Test Manager lists each execution of a unit test suite scheduled to run and all unit test suites you have access to. On this tab, you can schedule individual unit test suites or create new ones.

Suggested Approach

To create a new unit test suite in the Test Manager:
1. Click the Schedule tab to view the list of all unit test suites available in the system.
In V6.1, step 1 is different. To create a new unit test suite, open the Schedule gadget by selecting > Application > Automated Unit Testing > Schedule.

The rest of the steps for creating a unit test suite are the same for V6.1.

2. Click Create Suite... The New Unit Test Suite rule dialog displays.

3. Enter the name of the unit test suite and select the RuleSet and version. Click Create.

4. On the Contents tab, specify the RuleSets that hold the test cases you want to include in the unit test suite. The default RuleSet for this field is the RuleSet chosen when you created the unit test suite.
5. Enter the user ID the agent uses when running this unit test suite. The user ID gives the agent access to the RuleSet that the unit test suite belongs to, as well as the RuleSets listed in the RuleSets field. This field defaults to the Operator ID that created the unit test suite.
6. To keep the work objects created by the unit test suite, clear the Remove Test Work Objects? box. This box is checked by default.
7. Select the test cases you want to include in this unit test suite in one of the following ways:
Specify test cases by rule type. Specify the application name and version, and optionally a When rule to further constrain the list of test cases; the test cases for the rules identified by the When rule are included in the unit test suite. This option is useful when the order in which the test cases run is not significant. If the order in which test cases run is significant, do not include entries in the Rule Types section.
Search for and then select individual test cases. Enter the name or partial name in the Test Case Name field and click Query, then select the rules you want from the resulting list. If the order in which the test cases run matters, be sure to list them in the order in which they are to run.
If you configure selections in both the Rule Types section and the Query Test Cases section, note that the test cases defined in the Rule Types section run before the test cases listed in the Query Test Cases section.
8. If you specified individual test cases, their RuleSets appear in the list next to their names. Verify that these RuleSets are included in the RuleSets for Test Cases list at the top of the form. If a RuleSet is not in the list, add it now. Otherwise, the test case rule will not run when the unit test suite runs.

9. On the History tab, enter a description in the Full Description and Usage fields. 10. Save the unit test suite rule.

Creating flow rule test cases

This article describes the steps available in Process Commander versions prior to Version 6.1. To create test cases for flow rules as of Version 6.1, consult How to test flows with Automated Unit Testing (V6).
When Automated Unit Testing is enabled, you can use the Run Rule window to save test data as a test case for flow rules. The data saved for the test case consists of both the input values for each step and the actions the user has taken.

Suggested Approach

Creating Flow Rule Test Cases
The process of creating a test case for a flow rule is quite different from that for the other rule types:
1. Open the flow rule you want to test.
2. Click the Run toolbar icon. The Run Rule window appears.
3. In the Test Page section, specify which page to use as the main page and click the Reset Page & Run Flow button.

4. Each step from the flow rule appears in sequence in the Run Rule window. After supplying the test data for each step, click Next to move to the next step in the flow.
5. Once you have reached the point in the flow where you are done entering test case data, click the Save Test Case button. The New Test Case dialog box appears.

6. In the Test Case Name field, enter a short description of the test case.
7. Specify the RuleSet you created for test cases and click Create.

Enabling Automated Testing

This article describes the steps available in Process Commander versions prior to Version 6.1. To enable Automated Unit Testing as of Version 6.1, consult How to enable Automated Unit Testing (V6).
After you enable the Automated Unit Testing feature by assigning the AutomatedTesting privilege to access roles, you can save the test data used for certain types of rules as test case rules. The next time you test that rule, you can run the test case rather than manually re-entering the test data.

Suggested Approach

Assign the AutomatedTesting Privilege
The Save as Test Case button in the Run Rule window and the Test Manager option in the Run menu do not appear unless you have the AutomatedTesting privilege associated with your access group. Before enabling Automated Unit Testing, first determine which access roles will be able to use it. After you have chosen the access roles, complete the following steps to enable Automated Unit Testing:
1. From the Rules by Type explorer, select Security > Access of Role to Object.
2. For the role you are enabling, select the access rule that applies to the ultimate base class @baseclass.
3. Select the Privileges tab. Add the AutomatedTesting privilege to the list and set the Level to 5.

4. Save the rule.

Verify that outbound email is configured correctly
The agent activity uses an email account instance named Default to send email messages that contain test suite results. To verify that the default email account is configured correctly for your system, complete the following steps:
1. From the home page of the Developer portal, select Integration.
2. On the Integration slice, under Accelerators, select Email Accelerator and click New.
3. In the Enter Email Processing Information form, select the following options and then click Next:
Configure outbound email
Default email account
The Email Account form appears.

4. In the Email Account form, examine the values in the fields and verify that the account is configured correctly.

Configure the URL for the Link from the Results Email Messages
The email messages that contain results from a test suite include a link to a Process Commander report. For the link to work correctly, the value specified for the dynamic system setting named PublicLinkURL must be valid for your system. To verify this, complete the following steps:
1. From the Rules by Type explorer, select SysAdmin > Dynamic System Settings.
2. From the list of settings, select PublicLinkURL.
3. Specify the URL of Process Commander.

4. Save the setting.

Edit the Results Message Text
The correspondence rule CompletedTestSuite generates the email message that contains the test suite results.

To change the text of these messages, locate Data-AutoTest-Result-Suite.CompletedTestSuite. Save this rule into your application’s RuleSets and then edit it according to your requirements.

Enable the Agent Activity
The agent activity that runs test suites, RunTestSuitesFromAgent, is included in the activity list of the Pega-ProCom agent, but it is not enabled by default. To enable RunTestSuitesFromAgent, complete the following steps:
1. In the Rules by Type explorer, select SysAdmin > Agent Schedule. A list of agents appears.
2. Select the first Pega-ProCom agent schedule in the list and open it.
3. On the Schedule tab, enable the RunTestSuitesFromAgent activity.

4. Set the time interval that determines how frequently the agent activity runs. By default, it is set to run every 300 seconds (five minutes).
5. Click Save and close the form.
6. If your Process Commander system has more than one node, multiple Pega-ProCom agent instances appear in the list. Repeat steps 2 through 5 for each Pega-ProCom agent in the list.
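As a rough illustration of the 300-second default interval (plain Python, not the agent's actual implementation; the function name is hypothetical):

```python
from datetime import datetime, timedelta

INTERVAL_SECONDS = 300  # default Pega-ProCom agent interval (five minutes)

def next_wakeup(last_run, interval=INTERVAL_SECONDS):
    """Compute when the agent next checks for scheduled test suites."""
    return last_run + timedelta(seconds=interval)

start = datetime(2020, 1, 1, 9, 0, 0)
```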

Executing flow rule test cases

This article describes the steps available in Process Commander versions prior to Version 6.1. To run saved test cases for flow rules as of Version 6.1, consult How to test flows with Automated Unit Testing (V6).
The Unit Test Manager is part of the Automated Testing facility. When the Unit Test Manager is enabled, you can create and run test cases against flow rules. The process of running a test case for a flow rule is quite different from that for decision tree, decision table, or SOAP Service rules, because you must move through each step in the flow.

Suggested Approach

Running Test Cases for Flow Rules
To run a test case for a flow rule:
1. Open the flow rule you want to test.
2. Click the Run button. The Run Rule window appears.

3. Select the Run against a saved test case option and choose a test case from the drop-down list.

4. Click Run Test Case.
5. Each step of the flow appears in sequence in the harness window and test navigation window. Click the Next Flow Step button to move through each step.

The Input Values Prior To Step Screen
At each step in the flow, the test navigation window displays all input values for that step. Any user-entered value that was changed is highlighted in green in the harness window. If you make any changes to the input values, you can add them to the test case by clicking the Save New Inputs button.

You can also view a list of all values left blank in this test case by clicking the Display Blank Values link. Once you have edited the inputs of the step, click Next Flow Step or the arrow button in the top right corner to move to the Results After Step window.

Validating Inputs
In the Input Values Prior To Step form, all input values for a step can be edited in the test navigation window. If a user must manually input a value, a validation expression can be used in place of a literal value. For example, if you have a date field in your flow rule test case, you can validate that it is always a certain number of days ahead of or behind the current date by using the syntax $TODAY +/- X, where X is an integer value. Using this syntax, you can set the date field to always be five days later than the current date by entering $TODAY + 5 into the date field.
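The $TODAY arithmetic can be pictured with a short sketch (plain Python; the parsing details are an assumption for illustration, not Process Commander's internal implementation):

```python
from datetime import date, timedelta
import re

def resolve_today_expression(expr, today=None):
    """Resolve a '$TODAY', '$TODAY + X', or '$TODAY - X' expression to a date.

    Illustrative only -- not how Process Commander parses the syntax.
    """
    today = today or date.today()
    m = re.fullmatch(r"\$TODAY\s*(?:([+-])\s*(\d+))?", expr.strip())
    if not m:
        raise ValueError("not a $TODAY expression: %r" % expr)
    sign, days = m.group(1), m.group(2)
    if sign is None:
        return today
    return today + timedelta(days=int(days) if sign == "+" else -int(days))
```

So a field containing $TODAY + 5 always resolves to five days after whatever the current date is when the test case runs, which is why the comparison keeps passing on later days.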

Calling Activities in Input Fields (Version 5.5+)
Version 5.5 supports the ability to call an activity that fills in the value of an input field, using the following syntax: $()

The Results After Step window
After displaying the input values, the test navigation window displays the results screen for that step. Process Commander compares the actual results to the saved results in the test case rule, and the results appear in the test navigation window. Any clipboard differences found are highlighted in red in both the harness window and the test navigation window. You can view both the clipboard and database differences by either name or label. By expanding the clipboard and database differences lists, you can choose to ignore each difference found by clicking the check box in the Ignore column. After viewing the differences found, you can save all ignored properties by clicking the Save Ignores button. You can also view all ignored properties by selecting the Show/Hide Ignored Properties link in the test navigation window; you can then view all properties that are ignored in the current test case or all properties that are ignored across all test cases.

You can delete the current work object by checking the Delete Work Objects check box. Click the Finish button to return to the beginning of the current test case. Any changes made in the previous forms can be added permanently to the test case by clicking the Overwrite Step button.

Ignoring Differences
There are two ways to indicate that different values for a specific work object property should be ignored when a flow test case runs:
For an individual test case, run the test case and specify which differences to ignore.
For all flow test cases for a specific work class, create a model named AutoTestPropsToIgnore for the class.

Individual Test Cases
To specify which differences an individual test case should ignore, complete the following steps:
1. From the Run Rule window of the flow you want to test, select the test case.
2. Step through each form in the flow by clicking Next Flow Step. When Process Commander finds clipboard or database differences in a flow step, it displays them in the lower section of the Run Rule window.

You can choose to display the properties with differences by either their property name or their label.
3. Click expand so you can examine the list in detail.

4. Select or clear the Ignore option as appropriate for each difference, then click Save Ignores. Process Commander runs the comparison for that step again, this time ignoring the properties you specified.
5. Click contract to close the differences list.
6. Repeat steps 3 through 5 for each flow step that displays differences.
When the Automated Unit Test Manager runs a test case for a flow rule, it accesses a set of model rules named AutoTestPropsToIgnore. These model rules indicate which standard flow processing properties are to be ignored by default. The classes @baseclass, Work-, Assign-, and Assign-Worklist each contain an AutoTestPropsToIgnore model; your model can override the standard ones.

If you want Process Commander to ignore differences in additional work object properties when it runs any flow rule test case from a specific work class or class group, do the following:
1. Create a model named AutoTestPropsToIgnore for that class (or class group).
2. List the properties to ignore. Leave the value field blank for each property.
3. Select the Call Superclass Model option.
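The effect of Call Superclass Model — combining the ignored properties in your model with those in the standard models up the class hierarchy — can be sketched as follows (plain Python; the class names, property names, and data shapes are hypothetical illustrations, not Pega APIs):

```python
def props_to_ignore(class_hierarchy, ignore_models):
    """Union the AutoTestPropsToIgnore property lists found while
    walking the class hierarchy (most specific class first),
    mimicking the Call Superclass Model behavior described above.
    """
    ignored = set()
    for cls in class_hierarchy:
        ignored |= set(ignore_models.get(cls, ()))
    return ignored

models = {
    "@baseclass": ["pxUpdateDateTime"],   # standard model (illustrative property)
    "MyCo-Work-Order": ["OrderTotal"],    # your class-specific model
}
ignored = props_to_ignore(["MyCo-Work-Order", "Work-", "@baseclass"], models)
```

Differences in any property in the resulting set would simply be skipped during the comparison for every test case of that work class.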

View Flow Summary
At the end of the test case execution, you can view a summary containing the results for each step in the test case by clicking the View Flow Summary button.

The run flow summary window displays all clipboard and database differences found for each step during the playback of the test case. By expanding each step, you can view a list of all the differences found, their expected values, and their actual values. Click Save to save the flow summary results. You can also delete the work object that was created by checking the Delete Work Object check box.
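Conceptually, the summary aggregates the per-step differences recorded during playback; a minimal sketch (plain Python, with hypothetical step and property names):

```python
def flow_summary(step_results):
    """Keep only the steps where playback found differences.

    step_results maps step name -> list of
    (property, expected_value, actual_value) tuples,
    mirroring the per-step expected/actual listing described above.
    """
    return {step: diffs for step, diffs in step_results.items() if diffs}

results = {
    "EnterOrder": [],                                  # no differences found
    "Approve": [("Status", "Resolved", "Pending")],    # one mismatch
}
summary = flow_summary(results)
```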

How Automated Unit Testing works (6.1)

Summary
In Version 6.1, Automated Unit Testing is supported by the following items:
Three rule types – test case (Rule-AutoTest-Case), test suite (Rule-AutoTest-Suite), and flow marker (Rule-AutoTest-Case-FlowMarker).
Automated Unit Testing landing page – A landing page with gadgets with which you can examine the application's test cases and unit test suites, schedule unit test suites, and examine the results of testing runs.
Test suite processing – An agent activity (RunTestSuitesFromAgent) that runs unit test suites in the background.
Email processing – A standard email account instance that sends completion emails from unit test suite runs, and the CompletedTestSuite correspondence rule used for the email message.
For information about how Automated Unit Testing works in releases before 6.1, see How Automated Unit Testing works (V5).

Quick Links

Test Case Rules | Unit Test Suite Rules | Automated Unit Testing Landing Page | Unit Test Suite and Email Processing | Testing Results | Flow Markers

Test Case Rules

You can test an individual rule on its own before testing it in the context of the entire application you are building. For certain rule types, when using Automated Unit Testing, after testing the individual rule, you can save the test data as a test case rule. Then, the next time you test that rule, you can run the test case rather than manually re-entering the test data. Test case rules contain the clipboard and database pages that existed when the test case was created, the user input, and the results.

When you run a test case, Process Commander uses the saved test data when testing the rule and then compares the results to those saved in the test case. If the results do not match, you investigate the tested rule to see what changed and determine if there is a problem. Test case rules exist only in the context of the rule they test. You create test cases from the Test Cases tab of the rule form of the rule you are testing. For example, to create a test case for a decision table, open the decision table rule, go to the Test Cases tab, and click Record New Test Case.

Running individual test cases

You can run individual test cases manually from the Test Cases tab of the rule form of the rule you are testing. On the Test Cases tab, click the name of the test case to run it. The test case runs in the foreground using your Operator ID credentials. To run a set of test cases at the same time, use a unit test suite.
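The record-then-compare cycle that test case rules implement can be illustrated outside of Pega with a minimal golden-file sketch. This is a hypothetical Python illustration of the pattern only; the function and file names are invented and are not Pega APIs:

```python
import json
from pathlib import Path

def run_test_case(rule, inputs, case_file: Path):
    """Run a rule against saved inputs; compare with the recorded results."""
    actual = rule(**inputs)
    if not case_file.exists():
        # First run: record the inputs and results as the expected baseline.
        case_file.write_text(json.dumps({"inputs": inputs, "expected": actual}))
        return []
    expected = json.loads(case_file.read_text())["expected"]
    # Later runs reuse the saved test data instead of re-entering it,
    # and report any differences between recorded and current results.
    return [] if actual == expected else [{"expected": expected, "actual": actual}]
```

If the returned list is non-empty, you investigate the tested rule to see what changed, mirroring the workflow described above.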

Unit Test Suite Rules

Unit test suite rules identify:

- A set of test cases and their RuleSets
- An Operator ID used to run the test cases

You can create a unit test suite that includes all the test cases for a specific rule type, or you can select individual rules and specify the sequence in which to run them.

Running unit test suites

When you run a unit test suite immediately (using the Run Now option in the Schedule Unit Test Case window), the test cases in that unit test suite run in the foreground using your Operator ID credentials. When you schedule a unit test suite to run at a point in time, the scheduled unit test suite runs:

- Initiated by an agent (the Pega-AutoTest agent)
- In the background
- Using the Operator ID specified in the unit test suite rule (by default), or using the Operator ID specified in the schedule settings for that unit test suite run

You can schedule a unit test suite to run once, or to run according to a recurrent pattern such as weekly or monthly.

Automated Unit Testing Landing Page

In Version 6.1, the Automated Unit Testing landing page replaces the Test Manager from previous versions. The Automated Unit Testing landing page has four gadgets:

- The Automated Unit Tests gadget shows all of the application's test cases and unit test suites. From this gadget, you can view information about a particular test case or unit test suite, including which test cases belong to a test suite and any test case saved results.
- The Dashboard gadget lists the results of your ten most recent unit test suite runs. Also, if you ran all test cases for a specific rule, those results are listed in this gadget. You can drill down into the details of the run for each test case in the suite. If differences are found in a particular test case run, you can choose which differences to ignore in future runs.
- The Reports gadget displays the results from the last fifty (50) runs of a specific unit test suite.
You can drill down into the details of the run for each test case in the suite. If differences are found in a particular test case run, you can choose which differences to ignore in future runs.

- The Schedule gadget lists all the unit test suites the user has access to and all unit test suites that are currently scheduled to run. Use this tab to schedule when to run unit test suites, and also to run a unit test suite immediately.

To see the Automated Unit Testing landing page, select > Application > Automated Unit Testing.

Unit Test Suite and Email Processing

The activities that initiate runs of unit test suites and that send email correspondence are in an agent named Pega-AutoTest. The Pega-AutoTest agent activity named RunTestSuitesFromAgent initiates the runs for scheduled unit test suites. When you schedule a unit test suite to run at a future point in time, a schedule request is created and queued up. When the agent activity runs, it runs any unit test suite request whose scheduled time is due.

On multiple-node systems, more than one instance of the Pega-AutoTest agent is running. In this situation, Process Commander uses queue and unit test suite locking. While an agent instance is selecting a suite request from the queue, the entire queue is locked so other agents cannot select a request. After the agent selects a request, it releases the lock on the queue but holds a lock on the unit test suite request.

When the unit test suite completes its run, the RunTestSuitesFromAgent activity uses an email correspondence rule named CompletedTestSuite to generate an email message that contains the results of the test. The agent then uses a standard outbound email account to send this completion email to the operator who scheduled the test suite, and to any additional email addresses specified in the schedule settings for the run. The name of the standard email account is:

- Default, for Version 6.1 and Version 6.1 SP1 systems
- AutomatedUnitTesting, for Version 6.1 SP2 systems
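The queue-and-lock scheme described above (lock the whole queue only while selecting a request, then hold a per-request lock while it runs) can be sketched generically. This is a hypothetical Python illustration of the pattern, not Pega's internal code; the suite names are invented:

```python
import threading

queue_lock = threading.Lock()   # guards the whole schedule queue
request_locks = {}              # one lock per queued suite request
pending = ["nightly-suite", "weekly-suite"]

def select_request():
    # The entire queue is locked while an agent instance picks a request,
    # so agent instances on other nodes cannot grab the same one.
    with queue_lock:
        if not pending:
            return None
        request = pending.pop(0)
        lock = request_locks.setdefault(request, threading.Lock())
        # The queue lock is released on exit, but the agent keeps
        # holding the per-request lock while the suite runs.
        lock.acquire()
        return request

def run_request(request):
    try:
        pass  # run the unit test suite here
    finally:
        request_locks[request].release()
```

The two-level locking keeps the global critical section (queue selection) short while still preventing two agents from running the same suite request.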

Testing Results

When you run a test case, Process Commander displays the differences in the lower section of the Run Rule window. Flow test cases do not use the Run Rule window; their differences are displayed in another way. See How to test flows with Automated Unit Testing.

Each time a unit test suite runs, Process Commander records the results from running the suite's test cases in instances of the Data-AutoTest-Result-Case class. When a flow rule test case is included in a unit test suite, the system runs through the entire test case unless it finds a difference between the current results and those stored in the test case. If differences are found in the flow rule test case, the system stops running that test case and begins running the next test case in the unit test suite.

Flow Markers

A flow marker allows you to jump directly to a specific point in the flow process without having to input the same information every time in order to reach that point. To use a flow marker, go to the Test Cases tab of the flow rule, and in the Flow Markers table, select the name of the flow marker you want to jump to. You are brought to the step saved in the flow marker. From that step you can continue testing the flow rule normally.
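The Testing Results behavior described above for flow test cases in a suite (stop at the first difference, then move on to the next test case) amounts to a stop-on-first-difference loop. A minimal hypothetical Python sketch; the step and property names are invented for illustration:

```python
def run_flow_case(recorded_steps, replay_step, ignored=frozenset()):
    """Replay a flow test case step by step; stop at the first difference.

    recorded_steps: list of (step_name, expected_state) pairs saved in the case.
    replay_step: function producing the current state for a step name.
    ignored: property names excluded from comparison (AutoTestPropsToIgnore).
    """
    for name, expected in recorded_steps:
        actual = replay_step(name)
        diffs = {k: (expected.get(k), actual.get(k))
                 for k in set(expected) | set(actual)
                 if k not in ignored and expected.get(k) != actual.get(k)}
        if diffs:
            # A suite run stops this test case here and moves to the next one.
            return {"step": name, "differences": diffs}
    return None  # ran to the end with no differences
```

A None result models a clean playback; a dict result models the recorded difference that ends the test case's run within the suite.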

How automated testing works

Summary

Note: This article describes how Automated Unit Testing works in Process Commander versions prior to Version 6.1. For information about how Automated Unit Testing works as of Version 6.1, consult How Automated Unit Testing works (V6.1).

Part of Automated Unit Testing, the Test Manager is supported by the following items:

- Three rule types – test case (Rule-AutoTest-Case), test suite (Rule-AutoTest-Suite) and flow marker (Rule-AutoTest-Case-FlowMarker).
- Test Manager – A dashboard that you can use to schedule test suites and examine their results.
- Test suite processing – An agent activity (RunTestSuitesFromAgent) that runs test suites in the background.
- Email processing – An activity (RunTestSuitesFromAgent) that generates email messages that summarize test suite results for an operator.

Quick Links

Test Case Rules | Test Suite Rules | Test Manager | Test Suite and Email Processing | Results | Flow Markers

Suggested Approach

Test Case Rules

The Run Rule feature enables you to test an individual rule on its own before testing it in the context of the entire application you are building. When Automated Testing is enabled, you can save the test data that you used for certain types of rules as test case rules. Then, the next time you test that rule, you can run the test case rather than manually re-entering the test data. Test case rules contain the clipboard and database pages that existed when the test case was created, the user input, and the results.

When you run a test case, Process Commander uses the saved test data when testing the rule and then compares the results to those saved in the test case. If the results do not match, you investigate the tested rule to see what changed and determine if there is a problem. Test case rules exist only in the context of the rule they test. You can create test cases from the Run Rule window only.

Test Suite Rules

Test suite rules identify:

- A set of test cases and their RuleSets
- An operator ID used to run the test cases (in V5.4, the access group is used instead)

You can create a test suite that includes all the test cases for a specific rule type, or you can select individual rules and specify the sequence in which to run them.

When you run individual test cases from the Run Rule window of a rule, the test case runs in your session as your requestor ID. Test suites, however, run in the background as the batch requestor of the Pega-ProCom agent. When the agent runs the test suite, it uses the access group specified in the test suite rather than the access group specified for the Batch requestor type data instance or for the agent queue.

Test Manager

You use the Test Manager window to schedule a test suite and examine its results. The Test Manager contains three tabs:

- The Dashboard tab lists the results of all the test suites that have been scheduled during the past five days. Also, if you ran all test cases for a specific rule, those results are also listed in this window.
- The Suites tab displays the results of a specific test suite from each time it was run in the past week.
- The Schedule tab lists all the test suites the user has access to and all test suites that are currently scheduled to run. Use this tab to schedule when to run test suites.

To access the Test Manager, select Run > Test Manager.

Test Suite and Email Processing

The Pega-ProCom agent activity named RunTestSuitesFromAgent runs test suites. When you schedule a test suite, a schedule request is created and queued up. When the agent activity runs, it runs any test suite request whose scheduled time is due.

When more than one instance of the Pega-ProCom agent is running on multiple-node systems, Process Commander uses queue and test suite locking. While an agent instance is selecting a suite request from the queue, the entire queue is locked so other agents cannot select a request. After the agent selects a request, it releases the lock on the queue but holds a lock on the test suite request.

When the test suite completes its run, the RunTestSuitesFromAgent activity uses an email correspondence rule named CompletedTestSuite to generate an email message that contains the results of the test. The agent then uses the standard outbound email account named Default.Notify to send the message to the operator who scheduled the test suite.

Results

When you run a test case, Process Commander displays the differences in the lower section of the Run Rule window. Each time the agent runs a test suite, Process Commander records the results in an instance of the Data-AutoTest-Result-Case class. When a flow rule test case is included in a test suite, the agent runs through the entire test case unless it finds a difference between the current results and those stored in the test case. If the agent finds differences in the flow rule test case, it stops running that test case and begins running the next test case in the test suite.

Flow Markers

A flow marker allows you to jump directly to a specific point in the flow process without having to input the same information every time in order to reach that point. In versions prior to V6.1, to use a flow marker you have created, in the Test Page section of the Run Rule window, select which flow marker you want to jump to and click the Jump to Flow Marker button. You are brought to the step saved in the flow marker. From that step you can continue testing the flow rule normally.

Related Topics

- Webinar Archive: Test and Project Management Frameworks Overview
- About Automated Unit Testing

How to define properties to be ignored during flow test case execution

Summary

By using AutoTestPropsToIgnore model rules, you can specify in advance which property differences you want the system to ignore during test case playback. These model rules indicate which properties are to be ignored by default.

For example, every time you run a flow that creates a work object, the pyID property (the work object ID) is set to a new unique value. When the test case is recorded, the pyID value at the time of recording is saved. When you play back that test case, the assigned pyID value during the playback is different from the saved pyID value. Unless the pyID property is specified in an AutoTestPropsToIgnore model rule, the system reports it as a difference. You want to always ignore the difference in the work object ID because you know it will be different each time.

There are several standard AutoTestPropsToIgnore model rules in Process Commander. These model rules use inheritance to ignore properties. The standard ones are a starter set of properties that most users want ignored during test case playback. For example, the pyID property is specified in the standard Work-.AutoTestPropsToIgnore model rule.

Starting in V6.1 SP2, you can specify which differences to ignore across all test cases in the application from the following places:

- Within the unit test suite run report, accessed from either the Dashboard or Reports gadgets in the Automated Unit Testing landing page. If a unit test suite run finds differences, they are displayed when you view the report, and you can make your selections of which differences to ignore right there.
- Within the results window after running an individual test case.

Suggested Approach

To specify properties to ignore by default, in addition to the standard ones:

1. In the Rules by Type explorer, select Technical > Model.
2. Click the New button. The New dialog displays.

In V6.1, steps 1 and 2 are different. To specify additional default ignored properties in V6.1:

1. In the Rule Explorer, right-click Model in the Technical category.
2. Select New from the context menu. The New window opens.

The rest of the steps in this article also apply to V6.1.

3. Name the model AutoTestPropsToIgnore and then click Create.
4. On the Definition tab, enter the properties that you want ignored by default during test case playback. Leave the value fields blank.
5. Select the Call superclass model? check box. When this option is selected, the AutoTestPropsToIgnore model rule in the immediate parent class is applied before this one. This chaining of model rules provides the ability to apply any ignored properties specified in a higher class without having to re-specify them. The model in the highest class is applied first. The current model is applied last.

6. Save the model rule.
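The superclass chaining in step 5 can be pictured as a lookup that applies each model from the highest class down, unioning the ignore lists. This is a hypothetical Python sketch; the class names and the properties other than pyID are invented for illustration:

```python
# Hypothetical class hierarchy, most general class first.
IGNORE_MODELS = {
    "@baseclass": {"pxUpdateDateTime"},
    "Work-": {"pyID"},                       # e.g., the work object ID
    "MyCo-Work-Purchase": {"OrderDueDate"},  # your application's additions
}
HIERARCHY = ["@baseclass", "Work-", "MyCo-Work-Purchase"]

def ignored_properties(work_class):
    """Apply each AutoTestPropsToIgnore model from highest class to lowest."""
    ignored = set()
    for cls in HIERARCHY[: HIERARCHY.index(work_class) + 1]:
        ignored |= IGNORE_MODELS.get(cls, set())  # superclass model applied first
    return ignored
```

Because each lower-level model only adds its own entries, properties ignored in a higher class never need to be re-specified in your application's model.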

How to enable Automated Unit Testing (6.1)

Summary

Before you can take advantage of Automated Unit Testing with your applications, certain configuration steps are necessary. This article presents the steps for configuring those areas of a 6.1 system that enable you to make full use of Automated Unit Testing:

- Obtain the appropriate access.
- Enable the Pega-AutoTest agents.
- Optional: Enable email notification of unit test suite results.
- Optional: Customize the email notification message.

For configuration steps in releases before V6.1, see How to enable Automated Testing (PRPC 5).

Suggested Approach

You must obtain the appropriate access for your Operator ID and enable the Pega-AutoTest agents:

- Access to use the main features of Automated Unit Testing, such as test cases and running unit test suites, comes from having the AutomatedTesting privilege. The standard PegaRULES:AutoTest access role provides this privilege.
- Enabling the Pega-AutoTest agents is required for the automation features of running unit test suites and scheduling recurrent runs. To take full advantage of the power of automating unit regression testing, enable the Pega-AutoTest agents.

While configuring the elements for email notification of unit test suite results is optional, such notification facilitates the best practices of automated unit regression testing. If a unit test suite run results in differences, you and others can be alerted by email of the need to investigate.

Obtain the appropriate access

Add the standard PegaRULES:AutoTest access role to your Operator ID's access group. If you are unable to modify your Operator ID's access group, contact your system administrator. To add the PegaRULES:AutoTest access role to your access group:

1. Open the Access Group form.
2. Add a new line to the Roles array.
3. Select PegaRULES:AutoTest in the new line.

4. Save the form.
5. Log off and log back in to refresh your profile.

After logging back in, verify that you have the AutomatedTesting privilege by selecting > Application. Confirm that the Automated Unit Testing landing page appears in the list. If not, check with your system administrator.

Enable the Pega-AutoTest agents

You schedule runs of unit test suites using the Schedule gadget in the Automated Unit Testing landing page. When you schedule a unit test suite to run (instead of running it immediately), it runs as a background process. The Pega-AutoTest agent activities that initiate these background processes must be enabled to run. These activities are:

- Rule-AutoTest-Case.RunTestSuitesFromAgent
- Rule-AutoTest-Case.RunRecursiveTestSuites

To enable these agents:

1. In the Rules Explorer, select SysAdmin > Agent Schedule to see the list of agent schedule instances.
2. Select the Pega-AutoTest agent schedule in the list to open its rule form. If your system has more than one node, there are multiple Pega-AutoTest agent schedule instances listed. Select any one. If there are no agent schedule instances listed, this system might have all agents disabled system-wide. If you encounter this situation, check with your system administrator to ensure that the prconfig.xml file for your system does not contain the following line:

<env name="agent/enable" value="false" />

If the prconfig.xml file contains that line, then all of the agents in the system are disabled. Set the value equal to true to enable agents in your system.

3. On the Schedule tab, enable the agent activities using the Enabled? check boxes.

4. Optionally, set the time interval that determines how frequently the agent activities run. By default, it is set to run every five minutes (300 seconds).
5. Save the form.

Enable email notification of unit test suite results

You can configure the system so that when a unit test suite has unexpected results, the Pega-AutoTest agents send a completion email message with the results of the unit test suite run. The email message is sent to the email address in your Operator ID, and additionally to any email addresses specified in the schedule for that unit test suite run. This completion email message provides an alert to any issues encountered in the run.

For this email notification to work, a standard outbound email account instance needs to be configured:

- In a Version 6.1 or 6.1 SP1 system: configure the Default email account.
- In a Version 6.1 SP2 system: configure the AutomatedUnitTesting email account.

Before you begin, gather the following information:

1. The IP address or domain name of your email server.
2. The email account (email address and password) that Process Commander is to use to send these emails.
3. Whether the host is configured to use Secure Sockets Layer (SSL).

To configure the outbound email account for this purpose:

1. In the Designer Studio, select > Integration > Email > Outbound Email.
2. Verify that the values of the standard email account — either Default for a 6.1 or 6.1 SP1 system, or AutomatedUnitTesting for a 6.1 SP2 system — are appropriate to your organization's email system. For example, confirm that the Host Name field matches the domain name of your email server machine. If there are no values displayed for Email Address or Host Name, click the email account name, specify the appropriate values in the email account form, and save. For example, in the following image from a 6.1 SP2 system, click AutomatedUnitTesting to open its form and specify its values.

To test the connectivity of an account:

1. Click its name in the table on the Outbound email gadget to open its email account form.
2. In the email account form, click Test Connectivity. A window displays stating whether the test is successful.

Customize the email notification message

Automated Unit Testing uses a standard correspondence rule, CompletedTestSuite, to generate the email message that the agent sends if there are unexpected results from a unit test suite. The default email message looks similar to this one:

Some standard features of this email message are:

- The email address in the From: field of the message is the one specified in the Default outbound email instance.
- The email address in the To: field is the one specified in the Operator ID of the operator who scheduled the unit test suite to run. (If additional email addresses are specified in the schedule for the unit test suite run, they also appear in the To: field.)
- The link in the "Click here..." sentence is set to the value of the dynamic system setting named PublicLinkURL of your Process Commander system. This setting provides for direct Web access to your system. For the link in the message to work correctly, that URL value must be valid and accessible to the recipient of the email message.

To customize the contents of the standard message:

1. Using the Rules Explorer, select Process > Correspondence to see the list of correspondence rule instances. Open the Data-AutoTest-Result-Suite.CompletedTestSuite rule.
2. Using the Save As icon, save a copy of this rule into one of your application's RuleSets. Do not alter any of the other fields besides the RuleSet and Version fields. A best practice is to save it to the same RuleSet where the test case rules and unit test suite rules are saved.
3. Update the content on the Corr tab to define the contents of the email message. The sentence "Click here for the test suite results display." is defined using JSP tags, and is not displayed in the Corr tab in design mode. To view and update the JSP tags and HTML code, click to enter source mode. For information about typical JSP tags used in correspondence, see Correspondence Reference.

A best business practice for automated email messages is to include a sentence on how the recipient can communicate with the source organization in case questions or concerns arise. Depending on the nature of your business, you might also want to include your organization's confidentiality statement. Here is an example of a customized CompletedTestSuite rule:

To ensure that the link in the JSP-tagged "Click here..." sentence works correctly for the recipient, verify that the value specified for the dynamic system setting named PublicLinkURL is valid for your system:

1. Open the Resource URLs gadget by selecting > System > Settings > URLs.
2. Verify that the URL in the PublicLinkURL field is the appropriate value for your system.

How to test flows with Automated Unit Testing (PRPC 6.1)

Summary

You can use Automated Unit Testing to record and play back test cases for flow rules in 6.1. By creating and saving test cases for your flow rules, you can automate regression testing on the flows when subsequent development work occurs. For information about working with flow test cases in releases before 6.1, see How to create flow rule test cases using the Run Rule window (V5) and How to execute flow rule test cases using the Run Rule window (V5).

Suggested Approach

A typical business situation has an application with a number of work object types and the flows that process those work objects. At a point in time, the flows are working exactly the way that you want, and at this "happy state" point, you record test cases of the flows and save the test cases. Later, after subsequent development, intended changes made to one flow might have unintended impacts on the other work object types and flows. At that point, the flow can be played back against the saved test cases — either manually or automatically as part of a scheduled unit test suite — to quickly discover and address any unanticipated effects.

Note: You must have the AutomatedTesting privilege to be able to record and play back test cases. See How to enable Automated Unit Testing (PRPC 6).

Quick links

Creating and saving test cases for flow rules | Using the Invalid Test Cases report | Playing back flow test cases | Running through differences | Walking through each step | Saving results at flow steps

Creating and saving test cases for flow rules

You begin recording a test case from a starting flow; the flow's subflows and screen flows are recorded as they are called from the main flow. To record a test case for a flow rule:

1. Open the flow rule and go to the Test Cases tab. If this flow has any saved test cases, they appear on this tab.

2. Click Record New Test Case. When you click this button, the system creates a new test page and starts executing the flow, beginning with creating a work object. The type of window, the portal, and the skin rule (styles and appearance) of the work object form depend on the Run Process In settings of your operator preferences. In this example, the work object is a purchase order.

3. Click Create to create the work object and start running through the flow process. After the work object is created, the Save Test Case button is available to save the test case.
4. Enter the test data as required to advance the work object through your flow process. You can stop recording and save a test case at any point after the work object is created. Therefore, you can save separate test cases for different portions of a flow and subsequently do regression testing on the individual portions. For example, you might record and save one test case for the portion of the flow where the work object is assigned to the current operator for collecting input, and then record and save another test case (a "resume test case") for the portion of the flow where the work object is assigned to a manager for review and approval.
5. Save the test case by clicking Save Test Case. The Test Case: New window opens. Specify a name, description, and the RuleSet information for the RuleSet you are using for your test case rules, and click Create. The system saves the new test case rule.

To see the just-created test case rule, return to the Test Cases tab of the flow rule form and click Refresh.

Regression testing flow rules using test cases

After subsequent development on the flow rule or related rules occurs, that development might alter the behavior of the flow in unanticipated ways. By playing back your saved test cases and assessing the extent of any changes, you can quickly discover and address any unwanted effects.

When you suspect flow behavior might have changed — using the Invalid Test Cases report

When you suspect the flow's behavior might have changed, you can view the Invalid Test Cases report and see the list of the flow's test cases that might need to be updated, or deleted and re-recorded. When you click Invalid Test Cases on the Test Cases tab, the system collects the set of test cases that were saved before the last time the flow rule was changed. Then, if any of those test cases have results that are different than when they were originally recorded, the system reports a list of those test cases. In this way, you are alerted to which test cases to investigate.

Playing back flow test cases

To play back a flow test case:

1. Open the flow rule, go to the Test Cases tab, and click the name of the test case. Before the process playback begins, first choose whether to have the playback run until the system finds a difference or to walk through each step of the flow:

- Run until differences found: Use this mode to quickly verify whether any differences have been introduced into the flow process since the test case was recorded.
- Walk through each step of the flow: Use this mode to examine each step of the flow and have the opportunity to input new values into the test case prior to each step.

2. Click Play. The process's work object form appears in the lower half of the window as you progress through the test case. Red highlighting in the work object form reflects differences between the current state and the recorded test case.

What happens during playback:

- Running until differences are found: The system displays the step in the flow where it detects differences between the current flow's database values and Clipboard pages and those values and pages in the stored test case. Click Next Difference to continue. When no differences are found, the flow advances to the end of the recorded test case.
- Walking through each step of the flow: The system displays each flow step in sequence. Click Next Flow Step to continue.

Playback: Running the test case until differences are found

If the flow has changed prior to playing back this test case, the system likely finds differences when it compares the current results with the recorded test case. Here is an example where the change in the flow was to change the Confirmation Note on the ManagerApproval assignment from AwaitingVerification to AwaitingApproval.

You can choose to ignore a difference by selecting a radio button in the Ignore for column. For example, if you have a Date type of property that the process sets to the current date plus 5 days, playing back the test case on different days will give different resulting values for that property. Instead of having this flagged as a difference every time, you can choose to ignore that property just for the current test case, or for all saved test cases for this flow rule. Once you have determined which differences to ignore at that particular step in the flow, click Save Ignores to save your selections to the test case.

If you determine that the current state of this flow step is what you want, you can permanently overwrite this step in the test case with the current state by clicking Overwrite Step. The system saves the current database values and Clipboard pages to the test case; the display refreshes so you can continue to the next difference.

Playback: Walking through each step of the flow

As you walk through the recorded test case, the system displays for each flow step:

1. First, the input values that were recorded entering the flow step.
2. Then, after you click Next Flow Step, the result of the process after the step is taken.

Example: Input values prior to Step 1

Example: Results after Step 1

If a difference is found in a step's result, you can choose to ignore the difference, and update the test case to save the ignored difference. You can also permanently overwrite the step in the test case with the current state by clicking Overwrite Step.

Playback: Changing input values for a test case's flow step

As you walk through each step of the flow, you can change the input values that were recorded in the test case for that step by:

- Directly editing the values in the Input Values Prior to Step section
- Using the Dynamic Input Builder

At each step, to save any changed input values for that step in the recorded test case, click Save New Inputs.

Specifying input values for a test case using the Dynamic Input Builder

To specify a dynamically created input value, click the icon to use the Dynamic Input Builder to guide you in creating that value.

For example, if you want the test case to validate that a SubmitOrderDate field uses the current date, select Today in the Function field of the Dynamic Input Builder. To use an activity to set the value, select Activity in the Function field, and then select the activity and specify the appropriate values for its parameters.

Saving results at flow steps

If unexpected results are found at a flow step during test case playback, you can use the Save Results button to save the step's results for future viewing. For example, if there are multiple issues and you want to concentrate on investigating one issue at a time, you can save the results, close the test case playback, and investigate the first issue. Then, when you've completed that item, you can view the saved results and locate the next issue without having to play back the test case again. In a team environment, a developer or tester can save the unexpected results for another team member to see without the other person having to play back the test case.

On the flow rule's Test Cases tab, click View Results to see a test case's saved results:

When you click View Results, a report window shows a summary of previously saved results.

Click a row to see the details of the differences in that saved result.

Viewing a summary of results at the end of test case playback

At the end of test case playback, you can view a summary that shows the results at each flow step by clicking View Flow Summary.

This summary displays all database and clipboard differences that are found for each step during the playback. By expanding each item in the summary, you can see the found differences, the values expected in the recorded test case, and the values from this playback.
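The comparison that produces these reported differences, together with the per-test-case ignore list described earlier, can be sketched roughly as follows. This is illustrative Python only; the function and property names are invented and do not represent Pega's internal implementation.

```python
# Sketch of the playback difference check (hypothetical, not the actual
# Pega implementation). Properties in the test case's ignore set are
# skipped when comparing recorded (expected) vs. current (actual) values.

def find_differences(recorded, current, ignored):
    """Return {property: (expected, actual)} for non-ignored mismatches."""
    diffs = {}
    for prop in recorded.keys() | current.keys():
        if prop in ignored:
            continue  # e.g. a Date property set to "today + 5 days"
        if recorded.get(prop) != current.get(prop):
            diffs[prop] = (recorded.get(prop), current.get(prop))
    return diffs

recorded = {"OrderTotal": "100.00", "ShipByDate": "2007-12-03"}
current = {"OrderTotal": "100.00", "ShipByDate": "2007-12-08"}

# Without ignores, the date difference is flagged on every playback:
assert find_differences(recorded, current, set()) == {
    "ShipByDate": ("2007-12-03", "2007-12-08")}
# After saving ShipByDate to the ignore set, playback reports no differences:
assert find_differences(recorded, current, {"ShipByDate"}) == {}
```

Saving an ignore, in this sketch, simply adds the property name to the set that the comparison consults on every subsequent run.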

How to unit test activities with the Automated Testing feature

Summary

You can unit test an individual activity rule before testing it in the context of the entire application you are building. With Automated Unit Testing, you can save the test data that you use as test case rules. Then, the next time you test that rule, you can run the test case rather than manually re-entering the test data.

Suggested Approach

Testing Activities

When running an activity to save the run as a test case, it is important to have the clipboard in a known, clear state before starting the run. Otherwise, the clipboard could contain pages from an earlier run that you might not want saved as part of the test case, or which present an inaccurate picture of what the activity's steps do.

To unit test an activity and save the test as a test case rule:

1. Open the activity rule you want to test.

2. Click the Run toolbar icon. The Run Rule window appears.

In V6.1, step two is different. To create a test case for an activity in V6.1:

1. Go to the Test Cases tab of the opened rule.
2. Click Record New Test Case. The Run Rule window opens.

The rest of the steps for creating a test case are applicable to V6.1.

3. In the Test Page section, choose the test page for the activity. Specify whether you are not using a test page, creating a new test page, or using an existing test page. (The Copy existing page option is available if your clipboard has pages that you can copy.)
4. In the Enter Parameters section, enter values for the parameters that are needed for the activity to run. This section lists all of the parameters defined in the activity rule. Parameters displayed in bold text are those that have Required selected on the activity rule's Parameters tab.
5. Click Execute to test the activity rule. The system displays the results from running the activity.

6. Examine the results and determine whether the test data used generated the expected results.
7. When you are satisfied with the results from running the activity, click Save Test Case to save this run as a test case. The new rule dialog appears.

8. Enter the name of the test case, a short description of the test case, and the appropriate RuleSet and version. Then click Create.
9. Optional: You may also add this test case to your list of shortcuts. Click Add to Shortcuts. The Add to Shortcuts dialog opens. Enter the name of the shortcut and click Save.

Running Activity Test Cases

After you create a test case for an activity, it appears in the list of saved test cases in the Run Rule window for the tested rule.

In V6.1, the steps for running a test case are different. After you create test cases for a rule, they appear on the Test Cases tab for that rule. To run a test case for an activity in V6.1:

1. Open the rule that you want to test.
2. Go to the Test Cases tab of the opened rule.
3. Click the name of the test case. The Run Rule window opens, and the system runs the test case and displays the results.

To run a test case:

1. Open the activity you want to test.
2. Click the Run toolbar icon. The Run Rule window appears.
3. Select the Run against a saved test case option and choose a test case from the list.

Because the test case rule contains the initial pages that were created, loaded, or copied before the rule was run, you do not have to recreate the initial conditions before running the test case.

4. Click Run Test Case. Process Commander runs the test case and displays the results in the Result section of the Run Rule window.

If there are any differences found between the current results and the saved test case, a message states that the results were unexpected. In the case of unexpected results, if the new results are valid, you can overwrite the test case so it uses the new information by clicking Overwrite Test Case.

You can choose to ignore a particular difference by selecting the check box in the Ignore? column. For example, if you have a Date type of property that the activity sets to the current date plus 5 days, playing back the test case on different days will give different resulting values for that property. Instead of having this flagged as a difference every time, you can choose to have differences in that property ignored. Once you have determined which differences to ignore, click Save Ignores to save your selections to the test case.

Starting with Version 6.1 SP2, you have two additional options for ignoring differences in future runs:

Ignore differences on a page: In the list of found differences, you can select a page to ignore all differences found on that page. The selection applies only for this specific test case (not across all test cases). If you select to ignore a page, all differences found on that page are ignored each time this test case runs.

Ignore differences for all test cases: You can specify that a difference should be ignored for all test cases in the application.

Related Topics

About Automated Unit Testing
Building and testing activities
How to remove all user clipboard pages using an activity

Running all test cases for decision tree and decision table rules

Summary

When Automated Unit Testing is enabled, for a particular decision table or decision tree rule, you can run multiple saved test cases at once from the Run Rule window.

Suggested Approach

In this situation, the system behaves as if a unit test suite is created using the rule's saved test cases, and then that unit test suite runs. The returned results display on the Dashboard tab of the Test Manager. In V6.1, the Dashboard is a gadget on the Automated Unit Testing landing page. The Dashboard gadget displays the returned results.

The option of running multiple test cases at once from the Run Rule window is not available for flow rules.

To run multiple test cases at once:

1. From the Run Rule window, select the Run against a saved test case option.

2. Choose All Cases from the drop-down list. If auto-generated test cases exist for a decision table rule, you can run all or a subset of the auto-generated test cases at once by selecting All Autogenerated Cases in step 2.
3. In the window that opens, all of the test cases are selected to run. You can leave them all selected to run them all, or choose a subset.
4. Click Run. Process Commander runs all selected test cases and displays the results in a pie chart.

To examine the returned results:

To review the test cases that returned unexpected results, click the red section of the pie chart. A window opens with a table listing those test cases. Select a test case to view its detailed results.

To review the test cases that returned expected results, click the green section of the pie chart. A window opens with a table listing those test cases. Select a test case to view its detailed results.
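The expected/unexpected split behind the pie chart is a simple roll-up: each test case run either matches its recorded snapshot or it does not. A minimal illustrative sketch (names invented, not a Pega API):

```python
# Illustrative tally of test case runs into the two pie chart segments:
# "expected" (no differences found) and "unexpected" (differences found).
from collections import Counter

def summarize(results):
    """results: iterable of (test_case_name, differences_found: bool)."""
    tally = Counter("unexpected" if differences_found else "expected"
                    for _, differences_found in results)
    return {"expected": tally["expected"], "unexpected": tally["unexpected"]}

runs = [("Case-1", False), ("Case-2", True), ("Case-3", False)]
assert summarize(runs) == {"expected": 2, "unexpected": 1}
```

Clicking a chart segment then simply filters the run list down to the cases counted in that segment.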

Test suite reporting

Summary

To run test cases in bulk, group them into a unit test suite and then run the unit test suite. Use the Test Manager to work with unit test suites and view their results.

In Version 6.1, the Automated Unit Testing landing page replaces the Test Manager, and the landing page's Dashboard, Reports, and Schedule gadgets replace the Test Manager's tabs. Other than those differences, the information in this article applies to Version 6.1, except where noted.

Suggested Approach

To work with unit test suites, use the Test Manager. To access the Test Manager, select Run > Test Manager. In Version 6.1, the Automated Unit Testing landing page replaces the Test Manager. To access the Automated Unit Testing landing page, select > Application > Automated Unit Testing.

V6.1                                | V5.5 and V5.4 | Purpose
Automated Unit Testing landing page | Test Manager  | Work with test cases and unit test suites in the current application
Automated Unit Tests gadget         | –             | See all of the test cases and unit test suites defined in the current application
Dashboard gadget                    | Dashboard tab | View results from all unit test suites run in the past five days
Reports gadget                      | Suites tab    | View results of a specific unit test suite
Schedule gadget                     | Schedule tab  | Create and schedule unit test suites
You create unit test suites and schedule them to run using the Schedule gadget (tab). Once unit test suites have run, you view their results using the other gadgets (tabs).

Creating and running unit test suites

The Schedule tab of the Test Manager lists each execution of a unit test suite scheduled to run and all unit test suites you have access to. With this tab you can schedule individual unit test suites or create new unit test suites (see Creating unit test suites).

Because unit test suites are run by an agent activity, they cannot be run immediately. To run any unit test suite, you must schedule it to run at a specific future time. Because the agent activity usually checks the queue of scheduled unit test suites every five (5) minutes, it is a good practice to set the schedule for a one-time run more than five minutes in advance of the current time. To schedule a unit test suite:

1. Locate the unit test suite you would like to schedule in the list and click the calendar icon in the Schedule column. The Schedule Suite window opens. In V6.1 and V5.5, the window is named Schedule Unit Test Suite, and there is a Pattern section in which you can specify a recurring schedule. For example, you can choose to run the unit test suite daily or weekly.

3. In the Schedule window, click the calendar icon.
4. In the Calendar window, specify the time and date (month-year-day) for the unit test suite run. You must actually click a day in the calendar display to save the date, even if you are choosing the day that is highlighted in the calendar.
5. In the Schedule window, click Schedule to add the unit test suite to the list of currently scheduled unit test suites. In V6.1 and V5.5, click the OK button to close the Schedule Unit Test Suite window and add the unit test suite to the list of currently scheduled ones.

The agent activity checks periodically to see which unit test suites are scheduled to run. If any are scheduled to run within the agent's time interval, the agent runs them.
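The five-minute safety margin recommended above follows directly from the agent's polling interval: a one-time run scheduled less than one interval ahead may be missed by the sweep already in progress. The arithmetic can be sketched as follows (illustrative Python only; the real work is done by a Pega agent activity, not this code):

```python
# Sketch of the scheduling guidance: schedule a one-time run at least
# one full polling interval ahead of the current time, so the agent's
# next sweep is guaranteed to see it. Illustrative only.
from datetime import datetime, timedelta

AGENT_INTERVAL = timedelta(minutes=5)

def earliest_safe_schedule(now, interval=AGENT_INTERVAL):
    """Return the earliest time safely one polling interval ahead."""
    return now + interval

now = datetime(2019, 12, 1, 12, 0)
assert earliest_safe_schedule(now) == datetime(2019, 12, 1, 12, 5)
```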

Viewing unit test suite results

Dashboard

The Dashboard tab displays a chart with the total number of unit test suites run in the past five days. Below the chart is a list of every unit test suite run in the past five days. Clicking a unit test suite displays a detailed list of its results. This detailed view identifies the rule type, the application the unit test suite applies to, the rule name of the unit test suite, the test case, any differences found, and the execution date of the unit test suite.

Suites

In Version 6.1, the Reports gadget replaces the Suites tab. The Suites tab enables you to view the results of a specific unit test suite by selecting it from a drop-down list.

After you choose which unit test suite to view, a chart displays the number of expected and unexpected results. Clicking the expected results column in the chart displays a detailed list of all the expected results for that unit test suite. Clicking the unexpected results column displays a detailed list of all the unexpected results.

Clicking the Grand Total status line below the results chart displays a detailed list showing both the expected and unexpected results of the unit test suite.

On either list, if you click a test case with unexpected results, the differences found are listed at the bottom of the Test Manager window, along with information about the rule from which the test case was created. By clicking the Open Rule icon, you can view the rule that the test case was based on.

Related Topics

About Automated Unit Testing
How to enable Automated Unit Testing (V6)
How to enable Automated Unit Testing (V5)

Webinar Q&A "Expediting Rollout with Automated Testing"

During each developer Webinar, Pegasystems dedicates additional staff to answer your questions via Chat. What appears below is a capture of the relevant questions from the Expediting Rollout with Automated Testing session on 28-NOV-07. To view the webinar recording, see Webinar Archive: Expediting Rollout with Automated Testing.

AVAILABILITY & CONFIGURATION

Q-01: Is this feature available in PRPC v5.3 SP1?
A-01: The Automated Testing product is available starting with the V5.3 release. See "Automated Testing -- Running Test Cases and Test Suites" to identify the features supported. Some of the features shown during the webinar are not available in the current product and were shown as a technology preview.

Q-02: You mentioned PRPC V5.x. Do you have roadmaps for Pega's products?
A-02: Customers should go through their sales rep or professional services for product roadmaps. The Pega staff would then follow up with the appropriate product manager. Pegasystems does not publish roadmaps to the PDN or corporate Web site.

Q-03: Will there be an additional charge for this product?
A-03: Yes.

Q-04: Will this automated test product be compatible with the PRPC/Smart Dispute 4.x version?
A-04: Automated Testing is available starting with the PRPC v5.3 release. If frameworks are running on 5.3 or future releases, then the Automated Testing product will be available for those frameworks. The answer here is that it depends on what version of PRPC the framework is on.

Q-05: Does this tool only work with the PRPC UI (i.e., is it tied to the Harness), or does it work with non-UI flows as well?
A-05: The AutoTest features are tied into flow processing activities directly, not the PRPC UI directly. However, flow processing out of the box is tied to the PRPC UI. Pega has not tested these features with anything but the PRPC UI.

Q-06: How are the functional and UI testing conducted? Is there any framework model to achieve this?
A-06: We deliberately decided against UI testing and instead test the foundation data and database commits. The basic overview of what we call functional testing is that a user runs through a flow while "recording" it and saves this off as a test case. Future runs of the flow against the test case compare the current run against the saved, known-to-be-good test case version. When a flow works end to end, it has been tested for functionality.

Q-07: We are implementing a pure BRE solution. How does this help me?
A-07: Developers can test their decision rules/logic used by the BRE.

Q-08: Do we need any additional automation tool such as Quick Test Pro or Silk Test?
A-08: No additional testing products are required to run the AutoTest features.

Q-09: Do you envision this test framework being used for testing software projects that do not involve PRPC for the application solution?
A-09: The goal of these features is to test a PRPC application. That said, we support SOAP Service testing, so we do see testing other applications that are part of a PRPC solution in that way. Note, too, that this is not a framework.

Q-10: How do I enable the auto test feature?
A-10: The process of enablement is subject to change, but it will definitely be enabled by user. Note that in V5.3, PDN articles document how to enable the AutoTest features by using the @baseclass.AutoTestEnabled privilege.

Q-11: Does it work with (integrate with) other applications as well (Test Director, etc.)?
A-11: We have chosen not to limit the implementation by integrating with only a few test products. All data is eligible to be ported into any 3rd-party product; however, no specific product has been tested or is recommended.

Q-12: Does this need DB configurations to store the values for the differences while using the tool? Where are the values stored?
A-12: Results of a test case run (differences, if any) are stored in instances of Data-AutoTest-Result-Case. Any system that has the AutoTest features will come with this DB configuration set up already.

UNIT TESTING

Q-13: The presentation showed Flow, Decision Tables, and Decision Trees. Can it also run activities separately for automated test cases?
A-13: In 5.3, Flows, Decision Tables/Trees, and SOAP services are the rules that can have test cases created for them.

Q-14: Does the unit testing feature have coverage analysis over code, meaning which parts of the code are hit most, which are hit least, and, if any exception occurs, in which section of the code? This should help in making the code a lot more efficient.
A-14: No, not at this time.

Q-15: Can you use Tracer on these test case scenarios?
A-15: Absolutely. Running a record against a test case executes the same rules that running the record on its own does, and you can use Tracer or any other debugging tool exactly as you would otherwise at the same time.

Q-16: Can the same test case originally created for one version of a rule be used for all higher versions of the same rule?
A-16: Yes. Test cases are rules, and previous versions will continue to work for later-versioned rules until/unless a new, higher version is created.

Q-17: How can we execute tests for multiple input values or in a loop?
A-17: To execute many different paths of one record (with different inputs), multiple test cases are required. For Decision Tables, you could see in the webinar many test cases created when all possible paths were auto-generated. If these test cases were saved to a Rule-AutoTest-Suite, this suite would be run at one time, looping through the many test cases to test all paths of a record.

Q-18: Can you re-run the test without changing any values? This will show us what the code change differences look like.
A-18: We were unable to demonstrate this in the session, but if we had, you would see that no differences would have been found. Any differences that you might expect, such as work object ID, create time, etc., are filtered out by one or more Rule-Obj-Model records named AutoTestPropsToIgnore. We ship a version at all the major class levels (Work-, Assign-, etc.) that contains every property that we know will be different across test case executions and doesn't signify a true difference. Customers have the ability to add their own models to this chain that can either add additional properties to always ignore or remove the command causing the out-of-the-box property ignores. (Note that this is in addition to the "ignore" check boxes that you saw during the webinar, which only ignore the specified properties within that current test case.)

Q-19: The ignore checkbox doesn't carry from one step to the next?
A-19: When executing the test case, the ignore checkbox displays when we report differences found for a particular step. You will not see the option to ignore differences if no differences are found. Also, for each step we show the input values prior to step execution followed by results after the step executed. On the input values prior to step execution display, you will see the input values; you will not see a checkbox for ignoring differences.

Q-20: For interfaces (SOAP or XML over HTTP), how does this testing feature help? Can the tester capture the entire port input and output?
A-20: We capture and replicate/compare the input and output as it is sent out of/into PRPC.

Q-21: When you run the flow marker again and some change in rules affects the page, will it reflect the changes?
A-21: As flow markers are a development tool and not a testing tool, in order to successfully use flow markers to always advance to the saved point in a flow, no differences must be encountered. When a flow marker finds differences in the steps it is skipping over, it will present those differences to the user and leave the flow on the step that first encountered them.

Q-22: It was shown how you can refer back to the process flow to see where you are in the automation. Do you have to create a process flow in Pega before you can create a test case?
A-22: Test cases are currently created by storing a "known good" state from an initial, successful run of an existing process. Future enhancements involve more test-driven development features.

Q-23: Is there any way to set verification points that will be shown as passing (when the actual results match the expected results)?
A-23: Currently, every step is treated as a verification point, and if no differences are returned, the results are "as expected," which could be seen as a "pass."

Q-24: Is this only data-based testing, or can we validate GUI changes?
A-24: Yes, we deliberately do not validate GUI changes unless those changes cause the underlying data to change.

Q-25: Do the test cases work when we use generated Java properties?
A-25: Sure. There is no special requirement about how the properties are created for them to be tested. All testing is done on the clipboard or database level, so as long as the properties work, they will be just fine for AutoTesting.

Q-26: Can there be multiple flow markers in one flow, and will it stop at each?
A-26: Yes.

Q-27: How can I capture or set an expected value as an exception or a negative result?
A-27: Negative result testing would be an enhancement to the current features. Currently, setting a value to be ignored would allow a test case to pass with a difference, but not a true negative test.

Q-28: How does PRPC know to open the test page when a rule is run?
A-28: Test cases are integrated with the Run feature. So whenever you are running a rule manually, if the user has the auto test feature enabled, they are always either recording test cases (to the clipboard, which they can choose to save to the DB) or playing back a previously saved test case that you selected from the drop-down of test cases available for this rule.

Q-29: For interfaces (SOAP or XML over HTTP), how does this testing feature help? Can the tester capture the entire port input and output?
A-29: We capture and replicate/compare the input and output as it is sent out of/into PRPC.

Q-30: The presentation showed properties on pyWorkPage; what happens to properties on temp pages or pyWorkPage.XXXPage?
A-30: Any page of the clipboard that has differences (even embedded ones) works the same as pyWorkPage does.

AUTOMATED UNIT TESTING

Q-31: Does the tool have the ability to show a comparison of the results if the same test was run several times?
A-31: Yes. When the tests are run via the suite/agent (in the background), results are saved and can be compared.

Q-32: Does this feature enable us to compare 'desired result' vs. 'actual result'? Examples of results can be any action/event like (a) sending email, (b) updating commit action to database, (c) assignment of a work object, (d) exception/failure, (e) screen change.
A-32: The AutoTest features compare every clipboard page and database commit that occurs during flow execution, so yes, we are comparing events such as a) sending email, b) committing anything to the database, c) anything to do with an assignment, and d) any exception/failures. When it comes to screen changes, however, the AutoTest features deliberately work one layer below that, so existing test cases need not be changed for different style sheets/flow action layouts. Only when the new screen display changes the data (a property entry field is removed from the screen, for example) would it be noted.

Q-33: How can I activate the agent to run the test suite?
A-33: The Agent-Queue that runs the test suites is the shipped Pega-ProCom: Correspondence, SLA events, & Bulk Processing one, and the activity it executes every 5 minutes is Rule-AutoTest-Case.RunTestSuitesFromAgent. If it is not enabled on your server, that's the one you want to target.

Q-34: How can you execute a condition with multiple users, and how can you validate the dependencies on user role?
A-34: Test suites specify a user to use for running the tests. If you want to test for multiple users, it would be as simple as duplicating a Rule-AutoTest-Suite record for as many user types as you wish to test and changing the user ID field in each suite. When you compare results, you would be comparing the same tests run with different users.

Q-35: Is there any way to group a set of test cases by the name of the rule for which the test case is? For example, this set of test cases is for the CustomerStatus rule.
A-35: Yes. Test cases can be automatically sorted by the record for which they were created; it's part of their key structure.

Q-36: Can you run a test case multiple times, i.e., looping 50 times?
A-36: Yes, you could set a Suite record up to run a test case this way. Note that this is not a substitute for true application load testing because running them this way does not mimic a true environment.

Q-37: Can we run the test cases automatically when we would like to perform volume testing?
A-37: Yes. Running the test cases automatically is done by using a Rule-AutoTest-Suite record to group/run test cases automatically.

Q-38: Are the tests data-driven, or is each test case run based on static data?
A-38: The features demoed in this session require complete, working rules, and the test case essentially takes a snapshot of how that rule is working, which all subsequent runs of the test case are compared against.

Q-39: Can we schedule the execution of the test cases at a particular time of the day every day?
A-39: No, not at this time.

Q-40: In automated testing, how can I use parameterized inputs in large numbers and allow the parameterized values for each input property to be unique?
A-40: Each test case is an individual path through a record with defined inputs. In the case of date inputs, we allow variable inputs to account for situations where you'd always like to pass in an input such as "today" or a birth date of "18 years ago yesterday," but otherwise, you should be creating distinct test cases and then running them together in a test suite to run them all at once.

Q-41: Can I kick off test suite execution via ANT?
A-41: Test suite execution is just handled by an activity initiated by the PRPC agent processing functionality, so you can call into your PRPC server however you would like to execute the same activity.

Q-42: How do you do a stress test? By this I mean a load test with an incremental number of users until the system crashes. We would like to know, for example, that the current configuration has a 5-minute response time for 150 concurrent users and crashes after we have 250 concurrent users.
A-42: We use a tool called OpenSTA, which is an open source tool that behaves much like LoadRunner. It captures the HTTP traffic and then provides you a scripting language to customize the script for concurrent user execution. We use this tool and process for both nightly performance testing and scale testing internally.

Q-43: Does the tool allow for Test-Driven Development (i.e., writing the test before the rule, and then writing the rule to pass the test)?
A-43: Currently, no. A rule definition is required for us to record a "known good" to store in the test case that is being created.

Continuous integration and delivery pipelines with third-party automation servers

Use DevOps practices such as continuous integration and continuous delivery to quickly move application changes from development, through testing, and to deployment. Use Pega® Platform tools and common third-party tools to implement DevOps.

You can set up a continuous integration and delivery (CI/CD) pipeline that uses a Pega repository in which you can store and test software and a third-party automation server such as Jenkins that starts jobs and performs operations on your software. Use a CI/CD pipeline to quickly detect and resolve issues before deploying your application to a production environment. For example, you can configure an automation server with REST services to automatically merge branches after you publish them to a Pega repository. You can also configure Jenkins to create branch reviews, run PegaUnit tests, and return the status of a merge.
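As a sketch of the automation-server side of such a pipeline, a repository push hook could remotely start a Jenkins job using Jenkins's remote build trigger endpoint (POST /job/<name>/buildWithParameters). The host, job name, token, and parameter names below are placeholders, not values taken from this article.

```python
# Hypothetical sketch: remotely trigger a Jenkins job (for example, one
# that merges a pushed branch and runs PegaUnit tests) via Jenkins's
# remote build trigger. Host, job, token, and parameters are placeholders.
import urllib.parse
import urllib.request

def build_trigger_url(base_url, job, token, params):
    query = urllib.parse.urlencode({"token": token, **params})
    return f"{base_url}/job/{job}/buildWithParameters?{query}"

def trigger_jenkins_job(base_url, job, token, **params):
    """POST to Jenkins; Jenkins queues the build (real servers usually
    also require authentication headers, omitted here)."""
    req = urllib.request.Request(
        build_trigger_url(base_url, job, token, params), method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status

url = build_trigger_url("https://jenkins.example.com", "MergeBranch",
                        "secret-token", {"branchName": "MyBranch"})
assert url == ("https://jenkins.example.com/job/MergeBranch/"
               "buildWithParameters?token=secret-token&branchName=MyBranch")
```

In a real pipeline, the branch name parameter would come from the push event, and the Jenkins job would then call the Pega REST services to perform the merge.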

Using branches with Pega repositories in a continuous integration and delivery pipeline in Pega 7.4

When you work in a continuous integration and development environment, you can configure a Pega repository on a remote system of record (SOR) to store and test software. You push branches to repositories to store and test them. You can also configure a pipeline with REST services on your automation server to perform branch operations, such as detecting conflicts, merging branches, and creating branch reviews, immediately after you push a branch to the repository.

To use branches with Pega repositories, you must perform the following tasks:

1. On Pega® Platform, enable the Pega repository type. For more information, see Enabling the Pega repository type.
2. Create a repository of type Pega. For more information, see Creating a repository connection.
3. On the SOR, create a development application that is built on all the applications that will go into production. You must also create a ruleset in the development application that contains all the rules that you are using for continuous integration. For example, if you have a production application MyCoApp with rulesets MyCo:01-01 and MyCoInt:01-01, you can create a MyCoDevApp development application that is built on MyCoApp and has only one ruleset, MyCoCIDev:01-01. This ruleset contains the data transforms that are needed to set default information, such as the application into which branches will be merged.

You can use the branches REST and merge REST services in your pipeline to perform branch operations. The branches REST service provides subresources that you can use to detect conflicts, merge branches, and create branch reviews. You must configure certain settings on the SOR so that you can use the branches REST service. Complete steps 4 through 6.

4. Specify the application name and version that you want to use for conflict detection and merging:
   1. Search for the pySetApplicationDefaults data transform.
   2. Save the data transform to the ruleset in your development application that contains the continuous integration rules.
   3. In the Source field for the Param.ApplicationName parameter, enter the name of the application that you want to use for conflict detection and merging.
   4. In the Source field for the Param.ApplicationVersion parameter, enter the application version.
   5. Save the rule form.
5. Optional: Set the target ruleset version that you want to use for conflict detection and merging. If you do not perform this step, a new ruleset version is created into which rules are merged. Complete the following steps:
   1. Search for the pySetVersionDefaults data transform.
   2. Save the data transform to the ruleset in your development application that contains the continuous integration rules.
   3. In the Source field for the pyTargetRuleSetVersion parameter, enter the ruleset version into which you want to merge.
   4. Save the rule form.
6. Optional: Set passwords that are needed during merge operations. As a best practice, lock these rulesets with a password. Complete the following steps:
   1. Search for the pySetVersionPasswordDefaults data transform.
   2. Save the data transform to the ruleset in your development application that contains the continuous integration rules.
   3. Specify the passwords that are required for merging.
   4. Save the rule form.
7. Configure a continuous integration and development pipeline so that your continuous integration tool, such as Jenkins, starts a job immediately after you push a branch to the SOR. Use the branches REST and merge REST services in the pipeline to perform branch operations, such as detecting conflicts and merging branches. For more information, see the following PDN articles:
   Remotely starting automation server jobs to perform branch operations and run PegaUnit tests in Pega 7.3.1
   Implementation of a continuous integration and development pipeline with the branches REST and merges REST services

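The pipeline job described in step 7 typically checks a branch for conflicts and, only if it is clean, requests the merge. The following Python sketch illustrates those two calls. It is not Pega-supplied code: the JSON field names (conflictsCount, ID) and the injected HTTP helpers are assumptions for illustration.

```python
# Illustrative only: the HTTP layer is passed in as plain callables so the
# sketch stays client-agnostic (requests, urllib, or a test double).

def merge_branch_if_clean(base_url, branch, http_get, http_post):
    """Return the merge ID when the branch is conflict-free, else None.

    base_url -- e.g. "http://serverURL/prweb/api/v1" (placeholder)
    http_get / http_post -- callables taking a URL, returning parsed JSON
    """
    # GET /branches/{id}/conflicts reports conflicts before any merge.
    conflicts = http_get(f"{base_url}/branches/{branch}/conflicts")
    if conflicts.get("conflictsCount", 0) > 0:
        return None  # stop the job so the developer can resolve conflicts
    # POST /branches/{id}/merge queues the merge on the system of record
    # and returns a unique ID that is later used to poll the merge status.
    return http_post(f"{base_url}/branches/{branch}/merge")["ID"]
```

In a Jenkins job, a None result would mark the build as failed and trigger the notification email to the developer who pushed the branch.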
Using branches with Pega repositories in a continuous integration and delivery pipeline in Pega 7.3.1

When you work in a continuous integration and development environment, you can configure a Pega repository on a remote system of record (SOR) to store and test software. You push branches to repositories to store and test them. You can also configure a pipeline with REST services on your automation server to perform branch operations, such as detecting conflicts, merging branches, and creating branch reviews, immediately after you push a branch to the repository. To use branches with Pega repositories, perform the following tasks:

1. On Pega® Platform, enable the Pega repository type. For more information, see Enabling the Pega repository type.
2. Create a repository of type Pega and select the Ruleset versions check box. For more information, see Creating a repository connection.
3. On the SOR, create a development application that is built on all the applications that will go into production. You must also create a ruleset in the development application that contains all the rules that you are using for continuous integration. For example, if you have a production application MyCoApp with rulesets MyCo:01-01 and MyCoInt:01-01, you can create a MyCoDevApp development application that is built on MyCoApp and has only one ruleset, MyCoCIDev:01-01. This ruleset contains the data transforms that are needed to set default information, such as the application into which branches will be merged.

You can use the branches REST and merge REST services in your pipeline to perform branch operations. The branches REST service provides subresources that you can use to detect conflicts, merge branches, and create branch reviews. You must configure certain settings on the SOR so that you can use the branches REST service; complete steps 4 through 6.

4. Specify the application name and version that you want to use for conflict detection and merging:
   1. Search for the pySetApplicationDefaults data transform.
   2. Save the data transform to the ruleset in your development application that contains the continuous integration rules.
   3. In the Source field for the Param.ApplicationName parameter, enter the name of the application that you want to use for conflict detection and merging.
   4. In the Source field for the Param.ApplicationVersion parameter, enter the application version.
   5. Save the rule form.
5. Optional: Set the target ruleset version that you want to use for conflict detection and merging. If you do not perform this step, a new ruleset version is created into which rules are merged. Complete the following steps:
   1. Search for the pySetVersionDefaults data transform.
   2. Save the data transform to the ruleset in your development application that contains the continuous integration rules.
   3. In the Source field for the pyTargetRuleSetVersion parameter, enter the ruleset version into which you want to merge.
   4. Save the rule form.
6. Optional: Set passwords that are needed during merge operations. As a best practice, lock these rulesets with a password. Complete the following steps:
   1. Search for the pySetVersionPasswordDefaults data transform.
   2. Save the data transform to the ruleset in your development application that contains the continuous integration rules.
   3. Specify the passwords that are required for merging.
   4. Save the rule form.
7. Configure a continuous integration and development pipeline so that your continuous integration tool, such as Jenkins, starts a job immediately after you push a branch to the SOR. Use the branches REST and merge REST services in the pipeline to perform branch operations, such as detecting conflicts and merging branches. For more information, see the following PDN articles: Remotely starting automation server jobs to perform branch operations and run PegaUnit tests in Pega 7.3.1, and Implementation of a continuous integration and development pipeline with the branches REST and merges REST services.

Remotely starting automation server jobs to perform branch operations and run PegaUnit tests

You can start a job remotely from an automation server, such as Jenkins, and configure a continuous integration and development pipeline with the branches REST and merges REST services to merge branches when you push them from your development system to a Pega repository on a remote system of record (SOR).

In a continuous integration and delivery pipeline, repositories provide centralized storage for software that is to be tested, released, or deployed. Pega® Platform can communicate with common repository technologies and can also act as a binary repository. Pega Platform can browse, publish, or fetch artifacts that are created whenever an action creates a RAP file: for example, exporting an application, product, branch, or component into a remote system of record. By starting jobs remotely and using the automation server to detect conflicts and merge branches, your organization can deliver higher-quality software more quickly. For more information about using branches with repositories, see Using branches with Pega repositories in a continuous integration and delivery pipeline.

After you push a branch to a system of record, your automation server tool runs a job. Your pipeline can detect conflicts before a merge. If there are conflicts, the merge does not proceed. If there are no conflicts, the merge proceeds on the system of record. Your pipeline can run all PegaUnit test cases or a test suite to validate the quality of your build. After a merge is completed, you can rebase the rules on your development system to import the most recently committed rules from your system of record. For more information, see Rebasing rules to obtain latest versions. In addition, you can configure your pipeline to send emails to users, such as when a job starts or when a conflict is detected. The following figure displays an example workflow of the pipeline:

Workflow of a continuous integration pipeline on a system of record

To start jobs remotely and configure a pipeline, do the following tasks:

1. Configure your automation server.
2. Define the automation server URL.
3. Configure the pyPostPutArtifactSuccess activity.
4. Configure a continuous delivery pipeline.

The following tasks describe how to configure a pipeline and system of record by using Jenkins as the example automation server.

Configuring your automation server

Configure your automation server so that you can remotely start jobs on it. Your configuration depends on the automation server that you use. For example, the following procedure describes how to configure Jenkins.

1. Open a web browser and navigate to the location of the Jenkins server.
2. Install the Build Authorization Token Root Plugin:
   1. Click Manage Jenkins.
   2. Click Manage Plugins.
   3. On the Available tab, click the Build Authorization Token Root Plugin check box.
   4. Specify whether to install the plug-in without restarting Jenkins or to download the plug-in and install it after restarting Jenkins.
3. Configure your Jenkins job to use parameters:
   1. Open the job and click Configure.
   2. On the General tab, click the This project is parameterized check box.
   3. Click the Add Parameter drop-down list and click String Parameter.
   4. In the Name field, enter notificationSendToID, which is the operator ID of the user who started the Jenkins job. Email notifications about the job are sent to the email address that is associated with the user ID.
   5. Click the Add Parameter list, and click String Parameter.
   6. In the Name field, enter branchName.
   7. Click Save.
4. Configure the build trigger for your job:
   1. Click Configure. On the General tab, in the Build Triggers section, click the Trigger builds remotely (e.g., from scripts) check box.
   2. In the Authentication Token field, enter an authentication token, which can be any string.
   3. Click Save.
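With the token and string parameters in place, the job can be started by a plain HTTP request to the buildByToken endpoint that the plug-in exposes. A minimal sketch of building that URL follows; the server name, job name, and token values are placeholders.

```python
from urllib.parse import urlencode

def jenkins_trigger_url(server, job, token, branch_name, operator_id):
    """Build the remote-trigger URL for a parameterized Jenkins job.

    The query parameter names branchName and notificationSendToID match
    the string parameters defined in the job configuration above.
    """
    query = urlencode({
        "job": job,
        "token": token,
        "branchName": branch_name,
        "notificationSendToID": operator_id,
    })
    return f"{server}/buildByToken/buildWithParameters?{query}"
```

An HTTP GET or POST to this URL queues the job; this is the same endpoint that the system of record reaches through the JenkinsURL Dynamic System Setting and the pzTriggerJenkinsJob activity described below.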

Defining the automation server URL

Configure a Dynamic System Setting on the system of record to define your automation server URL. Your configuration depends on the automation server that you use. For example, the following procedure describes how to configure settings if you are using Jenkins.

1. Click Create > SysAdmin > Dynamic System Settings.
2. Enter a description in the Short description field.
3. In the Owning Ruleset field, enter Pega-API.
4. In the Setting Purpose field, enter JenkinsURL.
5. Click Create and open.
6. On the Settings tab, in the Value field, enter http://myJenkinsServerURL/buildByToken/buildWithParameters, where myJenkinsServerURL is the URL of your Jenkins server.
7. Click Save.

Configuring the pyPostPutArtifactSuccess activity

If you are using Jenkins, configure the pyPostPutArtifactSuccess activity on your system of record to create a job after a branch is published on the system of record. If you are using other automation servers, create and call a Connector that is supported by your continuous integration tool.

1. Click App > Settings.
2. In the search field, enter Pega-RepositoryManagement.
3. Expand Technical > Activity.
4. Click pyPostPutArtifactSuccess.
5. Save the activity to your application ruleset.
6. On the Steps tab, in the Method field, enter Call pxImportArchive.
7. Expand the arrow to the left of the Method field.
8. Click the Pass current parameter page check box to import the archive that was published to the system of record. If there are errors during import, you can exit the activity.
9. Ensure that the session authenticated by the Pega Repository Service Package has access to the ruleset that contains the pyPostPutArtifactSuccess activity. For more information about configuring authentication on service packages, see Service Package form - Completing the Context tab.
10. Define the page and its class:
    1. Click the Pages & Classes tab.
    2. In the Page name field, enter a name for the page.
    3. In the Class field, enter Pega-API-CI-AutomationServer.
11. Click the Steps tab.
12. Add a step to create the new page on the clipboard:
    1. In the Method field, press the Down Arrow key and click Property-Set.
    2. In the Step page field, enter the name of the page that you entered on the Pages & Classes tab.
13. Configure the parameters to pass to the pzTriggerJenkinsJob activity:
    1. Click Add a step.
    2. In the Method field, press the Down Arrow key and click Property-Set.
    3. Click the arrow to the left of the Method field to open the Method Parameters section.
    4. In the PropertiesName field, enter Param.Job.
    5. In the PropertiesValue field, enter the name of your project.
    6. Click the plus sign.
    7. In the PropertiesName field, enter Param.Token.
    8. In the PropertiesValue field, enter the authentication token that you provided for your project.
    9. Click the plus sign.
    10. In the PropertiesName field, enter Param.BranchName.
    11. In the PropertiesValue field, enter @whatComesBeforeFirst(Param.ArtifactName,'_').
    12. Optional: To specify a URL that is different from the JenkinsURL Dynamic System Setting that you created in Defining the automation server URL, click the Plus sign icon.
    13. In the PropertiesName field, enter Param.OverrideEndPointURL.
    14. In the PropertiesValue field, enter the endpoint URL.
    15. Optional: To send notifications to users if you are calling the activity in a context where there is no operator ID page, click the Plus sign icon.
    16. In the PropertiesName field, enter Param.OverrideNotificationSendToID.
    17. In the PropertiesValue field, enter Param.PutArtifactOperatorID.
14. Add a step to call the pzTriggerJenkinsJob activity:
    1. Click Add a step.
    2. In the Method field, enter Call pzTriggerJenkinsJob.
    3. In the Step page field, enter the name of the page.
    4. Click the arrow to the left of the Method field to expand it.
    5. Click the Pass current parameter page check box.
15. Configure other activity settings, as appropriate. For more information, see Creating an activity.
16. Save the rule form.

Example of a configured activity
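The @whatComesBeforeFirst(Param.ArtifactName,'_') parameter value above derives the branch name from the published artifact's name: everything before the first underscore. The equivalent logic, sketched in Python purely for illustration (this is not the Pega function itself, and the sample artifact name is made up):

```python
def branch_name_from_artifact(artifact_name):
    """Return the text before the first underscore in the artifact name.

    If there is no underscore, the whole name is returned unchanged
    (an assumption about edge-case behavior, not verified against Pega).
    """
    return artifact_name.split("_", 1)[0]
```

For an artifact named "MyBranch_20171005T101500", this yields "MyBranch", which the activity passes to Jenkins as the branchName parameter.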

Configuring a continuous delivery pipeline

After you configure your automation server and your remote system of record, you can configure a pipeline on your job to automate the testing and merging of rules. You can do the following actions:

- Send a notification with the job URL to the user who published the branch or started the job.
- Call the branches REST service with GET /branches/{ID}/conflicts to obtain a list of conflicts. If there are no conflicts, you can continue the job; otherwise, you can end the job and send a notification to the user to indicate that the job failed.
- Use the merge subresource for the branches REST service to merge branches.
- Call the merges REST service with GET /merges/{ID} to obtain the status of a merge.
- Use the review subresource for the branches REST service to create a branch review.
- Use the Execute Tests service to run PegaUnit test cases or test suites. For more information, see Running PegaUnit test cases and test suites with the Execute Tests service.
- Set up Jenkins to poll the merge, using the unique ID that the branches service returned when you merged the branch, until the status is no longer set to Processing. If the merge is successful, you can continue the job; otherwise, you can send a notification to the user to indicate that the job failed.
- Publish the rulesets into which the branches were merged to a repository such as JFrog Artifactory.
- Notify the user that the job is complete.

For more information about the branches REST and merges REST services, see Implementation of a continuous integration and delivery pipeline with the branches REST and merges REST services.
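Strung together, these actions form the body of the job. The following Python sketch shows one possible shape of the conflict-check, merge, and poll cycle. Everything here is illustrative: the endpoint paths follow the branches REST and merges REST services described below, while the client object, the JSON field names, and the "Merged" status string are assumptions, not a documented Pega API.

```python
import time

def run_merge_stage(client, branch, notify, poll_interval=0):
    """Drive one conflict-check / merge / poll cycle of the pipeline.

    client -- object with get(path) and post(path) methods returning
              parsed JSON dicts (in a real job, an authenticated HTTP
              wrapper around http://serverURL/prweb/api/v1)
    notify -- callable used for the email/notification steps
    """
    # Fail fast: end the job if the branch has conflicts.
    conflicts = client.get(f"/branches/{branch}/conflicts")
    if conflicts.get("conflictsCount", 0) > 0:
        notify(f"Job failed: {branch} has merge conflicts")
        return False
    # Queue the merge; the returned ID is used to poll the merge status.
    merge_id = client.post(f"/branches/{branch}/merge")["ID"]
    # Poll the merges REST service until the status leaves Processing.
    while True:
        status = client.get(f"/merges/{merge_id}").get("status")
        if status != "Processing":
            break
        time.sleep(poll_interval)
    notify(f"Merge of {branch} finished with status: {status}")
    return status == "Merged"
```

A real job would add the remaining actions around this core: running PegaUnit tests through the Execute Tests service, publishing the merged rulesets to a repository, and sending the final notification.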

Implementation of a continuous integration and development pipeline with the branches REST and merges REST services

After you have configured an automation server and system of record (SOR) so that you can remotely start jobs on the automation server, you can implement a continuous integration and development pipeline with the branches REST and merges REST services. These services detect potential conflicts before a merge, merge rules in a branch, obtain the status of the merge, and create branch reviews. By remotely starting jobs that automatically perform branch operations, your organization can deliver higher-quality software more quickly.

To access the documentation about the data model, click Resources > Pega API. For more information about response codes, see the Pega API HTTP status codes and errors help topic.

Before you begin, you must have created a repository on the SOR, configured data transforms on the SOR so that you can use the branches REST service, and configured your continuous integration environment. For more information, see the following PDN articles:

- Using branches with Pega repositories in a continuous integration and delivery pipeline
- Remotely starting automation server jobs to perform branch operations and run PegaUnit tests

Branches REST service

You can use the branches REST service to retrieve a list of conflicts before you run tests and merge branches.

Conflicts subresource

You can use the conflicts subresource to retrieve a list of conflicts before running tests, allowing the pipeline to fail more quickly so that you can correct errors faster.

Request – http://serverURL/prweb/api/v1/branches/{id}/conflicts
Parameter – ID. The name of the branch for which you want to receive conflicts. This parameter is required.
Response – The conflicts subresource returns the number of conflicts.

Merge subresource

Use the merge subresource to perform additional tests on conflicts, and then perform a merge operation.

Request – http://serverURL/prweb/api/v1/branches/{id}/merge
Parameter – ID. The name of the branch that you want to merge. This parameter is required.
Response – The merge subresource returns a unique ID after a validation event occurs.

During the merge, the status is saved to an instance of the System-Queue-Merge class. To verify the status of a merge, use the merges REST service with the ID returned in the response. You can also use the Queue Management landing page to view information about merge requests and to remove them without needing to know the response ID. Open the landing page by clicking Designer Studio > System > Operations > Queue Management. You can also set the logging level to INFO on the pzMergeServicePostActionProcessing activity to log informational messages. These messages can provide information about why exceptions occur and serve as a reference if you are working with Pegasystems Global Customer Support. For more information about logging levels, see Logging Level Settings tool.

Review subresource

Use the review subresource to create a branch review.

Request – http://serverURL/prweb/api/v1/branches/{id}/review
Parameter – ID. The name of the branch for which you want to create a review. This parameter is required.
Request body – The email account of the user creating the review and the users who are reviewing the branches. Use the following format:

{
  "author": "<your_userid>",
  "description": "<description of the review>",
  "reviewers": [
    {
      "ID": "<reviewer_userid>"
    }
  ]
}

Response – The review subresource returns the ID of the branch review.
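A pipeline step that opens a branch review can build the request body shown above with a few lines of code. The sketch below is illustrative, and the user IDs in the usage example are placeholders.

```python
import json

def review_request_body(author, description, reviewer_ids):
    """Build the JSON body for POST /branches/{id}/review.

    Field names (author, description, reviewers, ID) follow the format
    shown in the Review subresource section above.
    """
    payload = {
        "author": author,
        "description": description,
        "reviewers": [{"ID": rid} for rid in reviewer_ids],
    }
    return json.dumps(payload)
```

For example, review_request_body("author@myco.com", "Review MyBranch before merge", ["lead@myco.com"]) produces a body with one reviewer entry, ready to send as the request payload.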

Merges REST service

Use the merges REST service to obtain the status of the merge that you created by using the merge subresource.

Request – http://serverURL/prweb/api/v1/merges/{ID}
Parameter – ID. The unique identifier that you obtained by running the merge subresource of the branches REST service. This parameter is required.
Response – The merges REST service returns the status from the System-Queue-Merge instance.

Deployment Manager

Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your Pega applications from within Pega Platform™. You can create a standardized deployment process so that you can deploy predictable, high-quality releases without using third-party tools. With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging, application package generation, artifact management, and package promotion to different stages in the workflow. You can download Deployment Manager for Pega Platform from the Deployment Manager Pega Exchange page.

Deployment Manager release notes

These release notes provide information about enhancements, known issues, issues related to updating from a previous release, and issues that were resolved in each release of Deployment Manager. See the following topics for more information:

- Deployment Manager 04.03.02
- Deployment Manager 04.03.01
- Deployment Manager 04.02.01
- Deployment Manager 04.01.01
- Deployment Manager 03.04.01
- Deployment Manager 03.03.01
- Deployment Manager 03.02.01
- Deployment Manager 03.01.01
- Deployment Manager 02.01.03
- Deployment Manager 02.01.02
- Deployment Manager 01.01.03
- Deployment Manager 01.01.02

For questions or issues, send an email to [email protected].

Deployment Manager 04.03.02

See the following topics for more information: Resolved issues

Resolved issues The following issue has been resolved: Pipelines not visible on the Deployment Manager landing page On systems running Pega CRM applications, pipelines were not visible on the Deployment Manager landing page when the datapage/newgenpages dynamic system setting was set to false. This setting disabled the new clipboard implementation for optimized read-only data pages. Pipelines are now visible regardless of the dynamic system setting value.

Deployment Manager 04.03.01 See the following topics for more information: Enhancements

Enhancements The following enhancements are provided in this release: Ability to configure notifications in Deployment Manager You can now configure notifications in Deployment Manager without having to configure an email account and listener in Dev Studio. You can also choose which notifications to receive, such as whether Pega unit test tasks succeeded or failed. You can receive notifications through email, in the notification gadget, or both, and you can create custom notification channels to receive notifications through other means, such as text messages or mobile push notifications. To use notifications, you must install or upgrade to Pega Platform™ 8.1.3 on the orchestration server. Publishing application changes has been consolidated with viewing application versions in App Studio You can now publish application changes in App Studio and view information about your Deployment Manager application versions on one page. By accessing publishing features and viewing information in one place, you can use Deployment Manager with App Studio more intuitively.

Deployment Manager 04.02.01 See the following topics for more information: Enhancements

Enhancements The following enhancements are provided in this release: Ability to add and manage roles, privileges, and users Deployment Manager now provides default roles that specify privileges for super administrators and application administrators. Super administrators can add roles and specify their privileges, and both super administrators and application administrators can add users and assign them roles for specified applications. By specifying roles and privileges for Deployment Manager users, you can manage your users more effectively by controlling access to features for each type of user. New Deployment Manager portal Deployment Manager now provides a dedicated Deployment Manager portal that does not require access to the Dev Studio portal to access Deployment Manager features. The portal also provides enhancements such as a navigation panel from which you can easily access features such as reports, without having to open specific pipelines. Additionally, when you add a pipeline or modify pipeline settings, you can now open the rule forms for repositories and authentication profiles in Dev Studio from within Deployment Manager. Ability to merge branches that span multiple application layers You can now merge a branch that has rulesets that are in multiple applications if all the rulesets are in the application stack for the pipeline application. By doing so, you can, for example, merge changes that affect both a framework and an application layer. You can also merge test assets with the rules that you are testing without the test assets and rules being in the same application.

Deployment Manager 04.01.01 See the following topics for more information: Enhancements

Enhancements The following enhancements are provided in this release: Redesigned, more intuitive landing page and user interface Deployment Manager has been redesigned to have a more intuitive interface so that you can quickly access features as you interact with your pipeline. The Deployment Manager landing page now displays a snapshot of your pipeline configuration, which provides status information such as whether a deployment failed and on what stage the failure occurred. Additionally, when you click a pipeline to open it, Deployment Manager now displays important information about your pipeline such as the number of branches that are queued for merging on the development system. Manage aged updates You can now manage rules and data types, which are in an application package, that are older than the instances that are on a system. By importing aged updates, skipping the import, or manually deploying application packages on a system, you have more flexibility in determining the application contents that you want to deploy.

New testing tasks, which include running Pega scenario tests Several new test tasks have been added so that you can deliver higher-quality software by ensuring that your application meets the test criteria that you specify. On the candidate systems in your pipeline, you can now perform the following actions: Run Pega scenario tests, which are end-to-end, UI-based tests that you create within Pega Platform. Start and stop test coverage at the application level to generate a report that identifies the executable rules in your application that are covered or not covered by tests. Refresh the Application Quality dashboard with the latest information so that you can see the health of your application and identify areas that need improvement before you deploy your application. Enhancements to publishing application changes to a pipeline in App Studio You can submit application changes to a pipeline in App Studio to start a deployment in Deployment Manager. The following enhancements have been made: When you submit application changes into a pipeline, patch versions of the main application are now created. You can now add comments, which will be published with your application. You can now associate user stories and bugs with an application. You can now view information, such as who published the application and when, for the application versions that you have submitted. Run Pega unit tests on branches before merging You can now run Pega unit tests on branches before they are merged in the pipeline for either the pipeline application or an application that is associated with an access group. By validating your data against Pega unit tests, you can deploy higher-quality applications.

Deployment Manager 03.04.01 See the following topics for more information: Enhancements

Enhancements The following enhancements are provided in this release: Manage aged updates You can now manage rules and data types, which are in an application package, that are older than the instances that are on a system. By importing aged updates, skipping the import, or manually deploying application packages on a system, you have more flexibility in determining the application contents that you want to deploy. Ability to merge branches that span multiple application layers You can now merge a branch that has rulesets that are in multiple applications if all the rulesets are in the application stack for the pipeline application. By doing so, you can, for example, merge changes that affect both a framework and an application layer. You can also merge test assets with the rules that you are testing without the test assets and rules being in the same application.

Deployment Manager 03.03.01 See the following topics for more information: Enhancements Known issues

Enhancements The following enhancements are provided in this release: New Verify security checklist task You can now use the Verify security checklist task to ensure that your pipeline complies with security best practices. It is automatically added to the stage before production when you create a pipeline. Ability to diagnose pipelines You can now diagnose your pipeline to verify information such as whether the target application and product rule are on the development environment, connectivity between systems and repositories is working, and premerge settings are correctly configured. You can also view troubleshooting tips and download logs.

Known issues The following known issue exists in this release: Rollback does not work for Pega CRM applications If you are using a CRM application, you cannot roll back a deployment to a previous deployment.

Deployment Manager 03.02.01 See the following topics for more information: Enhancements

Enhancements The following enhancements are provided in this release: Simplified pipeline setup

Pipeline setup has been simplified when you install Deployment Manager and when you configure pipelines. The following enhancements have been made: Deployment Manager now provides the Pega Deployment Manager application with default operators and authentication profiles when you install it. You do not need to create authentication profiles for communication between candidate systems and the orchestration server. If you are using Pega Cloud, Deployment Manager is automatically populated with the URLs of all the systems in your pipeline so that you do not need to configure them. New Check guardrail compliance task You can now use the Check guardrail compliance task to ensure that the deployment does not proceed if the application does not comply with best practices for building applications in Pega Platform. This task is automatically added to all the stages in your pipeline. New Approve for production task Deployment Manager now provides an Approve for production task, which is automatically added to the stage before production when you create a pipeline. You can assign this task to a user who approves the application changes before the changes are deployed to production. Ability to specify the test suite ID and access group for Pega unit testing tasks For Pega unit testing tasks, you can now run all the Pega unit tests that are defined in a test suite for the application pipeline. By using a test suite ID, you can run a subset of Pega unit tests instead of all Pega unit tests for a pipeline application. You can also run all the Pega unit tests for an application that is associated with an access group so that you can run Pega unit tests for an application other than the pipeline application. Deployment Manager now supports first-time deployments Deployment Manager now supports first-time deployments, so you do not have to import your application into each Pega Platform server on your candidate systems the first time that you configure Deployment Manager.

Deployment Manager 03.01.01 See the following topics for more information: Enhancements

Enhancements The following enhancements are provided in this release: Ability to create custom repository types You can now create custom repository types and manage your artifacts with them when you use Deployment Manager. For example, you can create a Nexus repository type and use it to move your application package between candidate systems in a pipeline. By creating custom repository types, you can use a wider variety of repository types with your artifacts to extend the functionality of Deployment Manager. Use the Merge Branches wizard to submit branches into a continuous integration and delivery pipeline. You can now submit branches into a continuous integration and delivery (CI/CD) pipeline by using the Merge Branches wizard in Designer Studio. Deployment Manager can then run premerge criteria on branches on one system so that you do not need to configure additional systems for both branch development and merging. Support for Pega Cloud. Beginning with Pega 7.4, all current and new Pega Cloud customers have a free dedicated sandbox to run Deployment Manager, which provides the following features: Default repositories that store and move your application package between systems in the pipeline. Ability to view, download, and remove application packages from repositories so that you can manage your cloud storage space. Ability to deploy an existing application package. Ability to create multiple pipelines for one version of an application. For example, you can create a pipeline with only a production stage if you want to deploy a build to production separately from the rest of the pipeline. Ability to manage application package artifacts. You can now browse, download, and delete application package artifacts from the orchestration server. You do not have to log in to repositories to delete artifacts from them. Ability to move existing artifacts through pipelines.
You can move existing artifacts through your pipelines. Existing artifacts are maintained in repositories, and you can move them through progressive stages in the pipeline.

Deployment Manager 02.01.03 Enhancements

Enhancements The following enhancement is provided in this release: Improved structure and content of email notifications. Improvements have been made to the email notifications that are sent to users when an event occurs. For example, the email that is sent when a PegaUnit test task fails on a step now includes an attached log file that provides details of each failed PegaUnit test case.

Deployment Manager 02.01.02 Known issues

Known issues The following issue exists in this release: The Pega-DevOps-ReleaseManager agent points to the wrong access group. Because this agent is not associated with the correct access group, it cannot process Deployment Manager activities in the background. To resolve the issue, after you import and install Deployment Manager 02.01.02, perform the following steps on the orchestration server: 1. Update your Pega Platform application so that it is built on PegaDeploymentManager 02.01.02: 1. In the Designer Studio header, click the name of your application, and then click Definition. 2. In the Built on application section, in the Version field, press the Down Arrow key and select 02.01.02. 3. Click Save. 2. Update the agent schedule for the Pega-DevOps-ReleaseManager agent to use the PegaDeploymentManager:Administrators access group. 1. In Designer Studio, click Records > SysAdmin > Agent Schedule. 2. Click the Pega-DevOps-ReleaseManager agent. 3. Click Security. 4. In the Access Group field, press the Down Arrow key and select PegaDeploymentManager:Administrators. 5. Click Save.

Deployment Manager 01.01.03 Enhancements

Enhancements The following enhancement is provided in this release: Improved structure and content of email notifications. Improvements have been made to the email notifications that are sent to users when an event occurs. For example, the email that is sent when a PegaUnit test task fails on a step now includes an attached log file that provides details of each failed PegaUnit test case.

Deployment Manager 01.01.02 See the following topics for more information: Known issues Resolved issues

Known issues The following issue exists in this release: The Pega-DevOps-ReleaseManager agent points to the wrong access group. Because this agent is not associated with the correct access group, it cannot process Deployment Manager activities in the background. To resolve the issue, after you import and install Deployment Manager 01.01.02, perform the following steps on the orchestration server: 1. Update your Pega Platform application so that it is built on PegaDeploymentManager 01.01.02: 1. In the Designer Studio header, click the name of your application, and then click Definition. 2. In the Built on application section, in the Version field, press the Down Arrow key and select 01.01.02. 3. Click Save. 2. Update the agent schedule for the Pega-DevOps-ReleaseManager agent to use the PegaDeploymentManager:Administrators access group. 1. In Designer Studio, click Records > SysAdmin > Agent Schedule. 2. Click the Pega-DevOps-ReleaseManager agent. 3. Click Security. 4. In the Access Group field, press the Down Arrow key and select PegaDeploymentManager:Administrators. 5. Click Save.

Resolved issues The following issue was resolved in this release: Selections that were made to the Start build on merge check box were not applied when editing a pipeline. When you edit a pipeline and either select or clear the Start build on merge check box, your changes are now applied. Additionally, the check box is cleared by default.

Deployment Manager architecture and workflows Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your Pega applications from within Pega Platform™. You can create a standardized deployment process so that you can deploy predictable, high-quality releases without using third-party tools. With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging, application package generation, artifact management, and package promotion to different stages in the workflow. Deployment Manager supports artifact management on repository types such as Amazon S3, file system, Microsoft Azure, and JFrog Artifactory. It also supports running automations on Jenkins, such as external regression or performance tests, that are not supported in Pega Platform. In addition, Pega Cloud pipelines are preconfigured to use Amazon S3 repositories and are configured to use several best practices related to compliance and automated testing. Deployment Manager is installed on the orchestration server, on which release managers configure and run pipelines. With Deployment Manager, you can see the run-time view of your pipeline as it moves through the CI/CD workflow. Deployment Manager provides key

performance indicators (KPIs) and dashboards that provide performance information such as the deployment success rate, deployment frequency, and task failures. Use this information to monitor and optimize the efficiency of your DevOps process. See the following topics for more information: CI/CD pipelines Systems in the Deployment Manager CI/CD pipeline Repositories in the pipeline Pipelines in a branch-based environment Pipelines in an environment without branches

CI/CD pipelines A CI/CD pipeline models the two key stages of software delivery: continuous integration and continuous delivery. In the continuous integration stage, developers continuously validate and merge branches into a target application. In the continuous delivery stage, the target application is packaged and moved through progressive stages in the pipeline. After application changes have moved through testing cycles, including Pega unit, regression, performance, and load testing, application packages are deployed to a production system either manually or, if you want to continuously deploy changes, automatically.
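The stage progression just described can be modeled as ordered data. The following sketch is illustrative only; the stage and task names are assumptions, not Deployment Manager's actual configuration schema:

```python
# Illustrative model of a CI/CD pipeline: an ordered list of stages,
# each with the tasks that gate promotion to the next stage.
# Stage and task names are examples, not Deployment Manager's schema.
from typing import Optional

PIPELINE = [
    {"stage": "Development", "tasks": ["merge branches", "package application"]},
    {"stage": "QA", "tasks": ["Pega unit tests", "regression tests"]},
    {"stage": "Staging", "tasks": ["performance tests", "load tests"]},
    {"stage": "Production", "tasks": ["deploy"]},
]

def next_stage(current: str) -> Optional[str]:
    """Return the stage that follows `current`, or None for the last stage."""
    names = [s["stage"] for s in PIPELINE]
    i = names.index(current)
    return names[i + 1] if i + 1 < len(names) else None

print(next_stage("QA"))  # Staging
```

Modeling the pipeline as an ordered list makes the continuous delivery rule explicit: a package can only ever move to the single next stage, and deployment to production is simply the final promotion.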

Systems in the Deployment Manager CI/CD pipeline The CI/CD pipeline comprises several systems and involves interaction with various Pega Platform servers: Orchestration server – Pega Platform system on which the Deployment Manager application runs and on which release managers or application teams model and run their CI/CD pipelines. This system manages the CI/CD workflow involving candidate systems in the pipeline. Candidate systems – Pega Platform servers that manage your application's life cycle; they include the following systems: Development system – The Pega Platform server on which developers build applications and merge branches into them. The product rule that defines the application package that is promoted to other candidate systems in the pipeline is configured on this system. A distributed development environment might have multiple development systems. In this environment, developers develop applications on remote Pega Platform development systems and then merge their changes on a main development system, from which they are packaged and moved in the Deployment Manager workflow. QA and staging systems – Pega Platform servers that validate application changes by using various types of testing, such as Pega unit, regression, security, load, and performance testing. Production system – Pega Platform server on which end users access the application.

Repositories in the pipeline Deployment Manager supports Microsoft Azure, JFrog Artifactory, Amazon S3, and file system repositories for artifact management of application packages. For each run of a pipeline, Deployment Manager packages and promotes the application changes that are configured in a product rule. The application package artifact is generated on the development environment, published in the repository, and then deployed to the next stage in the pipeline. A pipeline uses development and production repositories. After a pipeline is started, the application package moves through the pipeline life cycle in the following steps:
1. The development system publishes the application package to the development repository.
2. The QA system retrieves the artifact from the development repository and performs tasks on the artifact.
3. The staging system retrieves the artifact from the development repository and publishes it to the production repository.
4. The production system deploys the artifact from the production repository.
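The four-step artifact flow can be sketched in a few lines of Python. The dictionaries stand in for real repositories (Amazon S3, JFrog Artifactory, and so on), and the package name is a made-up example:

```python
# Sketch of the artifact flow between the development and production
# repositories. The dicts stand in for real repository types; the
# package name is illustrative.

dev_repo, prod_repo = {}, {}

def publish(repo, name, artifact):
    """Store an artifact in a repository under its package name."""
    repo[name] = artifact

def retrieve(repo, name):
    """Fetch a previously published artifact from a repository."""
    return repo[name]

# 1. The development system publishes the package to the development repository.
publish(dev_repo, "MyApp_01.01.01.zip", b"...package bytes...")

# 2. The QA system retrieves the artifact and performs tasks on it.
artifact = retrieve(dev_repo, "MyApp_01.01.01.zip")

# 3. The staging system retrieves the same artifact and publishes it
#    to the production repository.
publish(prod_repo, "MyApp_01.01.01.zip", retrieve(dev_repo, "MyApp_01.01.01.zip"))

# 4. The production system deploys the artifact from the production repository.
deployable = retrieve(prod_repo, "MyApp_01.01.01.zip")
```

The key property the sketch captures is that the production system never reads from the development repository; staging acts as the gatekeeper that promotes a tested artifact into the production repository.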

Pipelines in a branch-based environment If you use branches for application development, you can configure merge criteria on the pipeline to receive feedback about branches, such as whether a branch has been reviewed or meets guardrail compliance scores. If there are no merge conflicts, and the merge criteria are met, the branch is merged; the continuous delivery pipeline is then started either manually or automatically. The workflow of tasks in a branch-based pipeline is as follows: 1. One or more developers make changes in their respective branches. 2. Merge criteria, which are configured in Deployment Manager, are evaluated when branches are merged. 3. Continuous delivery starts in one of the following ways: Automatically, after a branch successfully passes the merge criteria. If another continuous delivery workflow is in progress, branches are queued and started after the previous workflow has been completed. Manually, if you have multiple development teams and want to start pipelines on a certain schedule. 4. During a deployment run, branches are queued for merging and merged after the deployment has been completed. The following figure describes the workflow in a branch-based environment.

Workflow in a branch-based environment In a distributed, branch-based environment, you can have multiple development systems, and developers author and test the application on remote Pega Platform development systems. They then merge their changes on a main development system, from which they are packaged and moved in the Deployment Manager workflow. The following figure describes the workflow in a distributed, branch-based environment.

Workflow in a distributed, branch-based environment
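The merge gate in the branch-based workflow above can be sketched as a simple predicate. The field names and the guardrail threshold below are assumptions for illustration; the actual merge criteria are configured in Deployment Manager:

```python
# Illustrative merge-criteria check for a branch, mirroring the gate
# described above: the branch must be reviewed, free of merge conflicts,
# and at or above a guardrail compliance threshold. The field names and
# the default threshold are assumptions, not Deployment Manager settings.

def meets_merge_criteria(branch: dict, min_guardrail_score: float = 97.0) -> bool:
    """Return True only if every configured merge criterion passes."""
    return bool(
        branch["reviewed"]
        and not branch["has_conflicts"]
        and branch["guardrail_score"] >= min_guardrail_score
    )

branch = {"name": "FeatureX", "reviewed": True,
          "has_conflicts": False, "guardrail_score": 98.5}
print(meets_merge_criteria(branch))  # True
```

A branch that fails any single criterion is rejected before the continuous delivery stages ever run, which is what keeps unreviewed or non-compliant changes out of the pipeline.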

Pipelines in an environment without branches If you do not use branches for application development but instead use ruleset-based development, you configure the continuous delivery pipeline in Deployment Manager. The workflow of tasks in this pipeline is as follows: 1. Developers update rules and check them in directly to the application rulesets on the development system. 2. The product rule that contains the application rules to be packaged and moved through the systems in the pipeline is on the development system. 3. Continuous delivery is started manually on a defined schedule by using Deployment Manager. The following figure describes the workflow of a pipeline in an environment without branches.

Workflow in an environment without branches

Best practices for using branches with Deployment Manager Follow these best practices when you use branches in your Deployment Manager pipelines. The specific practices depend on whether you have a single development team or multiple development teams in a distributed environment. For more information about best practices to follow in the DevOps pipeline, see Development workflow in the DevOps pipeline.

Branch-based environment If you use branches for application development, developers work on branches and merge them on the development system, after which the continuous delivery pipeline is started automatically or manually. In general, perform the following steps: 1. In Deployment Manager, create a pipeline for the production application. If your application consists of multiple built-on applications, it is recommended that you create separate pipelines for each application. By using separate pipelines for built-on applications, you can perform targeted testing of each built-on application, and other developers can independently contribute to application development. For more information about multiple built-on applications, see Using multiple built-on applications. In Deployment Manager 4.1 (for Pega Platform™ 8.1) and Deployment Manager 3.3.1 and earlier (for Pega Platform 7.4), you must create separate pipelines and branches for each application. 2. Ensure that the production application is password-protected on all your systems in the pipeline. 1. Optional: In Designer Studio (if you are using Deployment Manager 3.4.x) or Dev Studio (if you are using Deployment Manager 4.1.x or later), switch to the production application by clicking the name of the application in the header, clicking Switch Application, and then clicking the production application. 2. In the Designer Studio or Dev Studio header, click the name of the production application, and then click Definition. 3. Click Integration & Security. 4. In the Edit Application form, click the Require password to update application checkbox. 5. Click Update password. 6. In the Update password dialog box, enter a password, reenter it to confirm it, and click Submit. 7. Save the rule form. 3. On the main development system, create a team application that is built on top of the main production application. 4. 
Include the PegaDevOpsFoundation application as a built-on application for either the team application or the production application. 1. In either the team application or production application, in the header, click the application, and then click Definition. 2. In the Edit Application form, on the Definition tab, in the Built on applications section, click Add application. 3. In the Name field, press the Down Arrow key and select PegaDevOpsFoundation. 4. In the Version field, press the Down Arrow key and select the version for the Deployment Manager version that you are using. 5. Save the rule form. 5. Create a branch of your production rulesets in the team application. For more information, see Adding branches to your application. You should create separate branches for each target pipeline. 6. Perform all development work in the branch. 7. Submit the branches that you want to validate and merge in the application pipeline by using the Merge Branches wizard. For more information, see Submitting a branch into a pipeline.
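For illustration, branch submission ultimately reaches the Pega API's /api/v1/merges service (listed in the service package table later in this document). The sketch below only builds such a request as data; the host, context root, and payload shape are assumptions, and the real schema is defined by the Pega API — in practice you submit branches with the Merge Branches wizard:

```python
# Hypothetical sketch of the request that submits branches for merging
# through the /api/v1/merges REST service. The host, the /prweb context
# root, and the body schema are assumptions for illustration only.
import json

def build_merge_request(host: str, branches) -> dict:
    """Describe (but do not send) the HTTP request to submit branches."""
    return {
        "method": "POST",
        "url": f"https://{host}/prweb/api/v1/merges",
        "body": json.dumps({"sourceBranches": branches}),  # assumed schema
    }

req = build_merge_request("dev.example.com", ["FeatureX"])
print(req["url"])  # https://dev.example.com/prweb/api/v1/merges
```

Representing the request as data keeps the sketch free of network calls; sending it with any HTTP client would additionally require the operator credentials described in the installation steps.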

Distributed branch-based environment In a distributed branch-based environment, you can have multiple development systems, and developers author and test the application on a remote development system. They then merge their changes on a main development system, from which the changes are packaged and moved in the Deployment Manager workflow. In general, perform the following steps: 1. In Deployment Manager, create a pipeline for the production application. If your application consists of multiple built-on applications, it is recommended that you create separate pipelines for each application. By using separate pipelines for built-on applications, you can perform targeted testing of each built-on application, and other developers can independently contribute to application development. For more information about multiple built-on applications, see Using multiple built-on applications.

In Deployment Manager 4.1 (for Pega Platform™ 8.1) and Deployment Manager 3.3.1 and earlier (for Pega Platform 7.4), you must create separate pipelines and branches for each application. 2. Ensure that the production application is password-protected on all your systems in the pipeline. 1. Optional: In Designer Studio (if you are using Deployment Manager 3.4.x) or Dev Studio (if you are using Deployment Manager 4.1.x or later), switch to the production application by clicking the name of the application in the header, clicking Switch Application, and then clicking the production application. 2. In the Designer Studio or Dev Studio header, click the name of the production application, and then click Definition. 3. Click Integration & Security. 4. In the Edit Application form, click the Require password to update application checkbox. 5. Click Update password. 6. In the Update password dialog box, enter a password, reenter it to confirm it, and click Submit. 7. Save the rule form. 3. On the main development system, create a team application that is built on top of the main production application. 4. Include the PegaDevOpsFoundation application as a built-on application for either the team application or the production application. 1. In either the team application or production application, in the header, click the application, and then click Definition. 2. In the Edit Application form, on the Definition tab, in the Built on applications section, click Add application. 3. In the Name field, press the Down Arrow key and select PegaDevOpsFoundation. 4. In the Version field, press the Down Arrow key and select the version for the Deployment Manager version that you are using. 5. Save the rule form. 5. Import the application package, including both the production and team applications, into the remote development system. 6. Add branches to the team application on the remote development system. For more information, see Adding branches to your application. 
If you are using multiple built-on applications, maintain separate branches for each production application. For more information, see Using multiple built-on applications. 7. Perform all development work in the branch. 8. If you are using one pipeline per application and application version: 1. On the remote development system, create a Pega repository that points to the production application on the main development system. For more information, see Creating a repository for file storage and knowledge management. 2. On the remote development system, publish the branch to the repository on the main development system to start the pipeline. For more information, see Publishing a branch to a repository. 3. If there are merge conflicts, log in to the team application on the main development system, add the branch to the application, resolve the conflict, and then merge the branch. 9. If you are using multiple pipelines per application and application version: Package the branch on the remote development system. For more information, see Packaging a branch. Export the branch. Import the branch to the main development system and add it to the team application. For more information, see Import wizard landing page. Merge branches into the production application to start the pipeline by using the Merge Branches wizard. For more information, see Submitting a branch into a pipeline.

Creating and using custom repository types in Deployment Manager In Deployment Manager 3.1.x and later, you can create custom repository types to store and move your artifacts. For example, you can create a Nexus repository and use it similarly to how you would use a Pega Platform™-supported repository type such as file system. By creating custom repository types, you can extend the functionality of Deployment Manager through the use of a wider variety of repository types with your artifacts. To create a custom repository type to use with Deployment Manager, complete the following steps: 1. Create a custom repository type. For more information, see Creating custom repository types. 2. If you are using Deployment Manager 3.3.x, 4.1.x, or 4.2.x, on each candidate system, add the ruleset that contains the custom repository type as a production ruleset to the PegaDevOpsFoundation:Administrators access group. 1. In either Designer Studio (if you are using Deployment Manager 3.1.x through 3.3.x) or Dev Studio (if you are using Deployment Manager 4.1.x or later), click Records > Security > Access Group. 2. Click PegaDevOpsFoundation:Administrators. 3. Click Advanced. 4. In the Run time configuration section, click the Production Rulesets field, press the Down Arrow key, and select the ruleset that contains the custom repository type. 5. Save the rule form. 3. Import the ruleset on which the custom repository is configured into the orchestration system and add the ruleset to the PegaDeploymentManager application stack. 1. On the orchestration system, import the ruleset by using the Import wizard. For more information, see Import wizard landing page. 2. In either the Designer Studio or Dev Studio header, in the Application field, click PegaDeploymentManager, and then click Definition. 3. On the Edit Application rule form, in the Application rulesets field, click Add ruleset. 4. 
Click the field that is displayed, press the Down Arrow key, and select the ruleset that contains the custom repository type. 5. Save the rule form.
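Conceptually, a custom repository type must supply the same operations that built-in repository types provide: publishing, retrieving, and checking for artifacts. The following Python interface is purely illustrative — the real contract is implemented as rules in the custom repository's ruleset, not as Python code:

```python
# Conceptual interface a repository type exposes to Deployment Manager.
# This is an illustrative sketch, not the actual Pega rule-based contract.
from abc import ABC, abstractmethod

class ArtifactRepository(ABC):
    @abstractmethod
    def publish(self, path: str, content: bytes) -> None: ...
    @abstractmethod
    def retrieve(self, path: str) -> bytes: ...
    @abstractmethod
    def exists(self, path: str) -> bool: ...

class InMemoryRepository(ArtifactRepository):
    """Stand-in for e.g. a Nexus-backed custom repository type."""
    def __init__(self):
        self._store = {}
    def publish(self, path, content):
        self._store[path] = content
    def retrieve(self, path):
        return self._store[path]
    def exists(self, path):
        return path in self._store

repo = InMemoryRepository()
repo.publish("artifacts/MyApp_01.zip", b"pkg")
print(repo.exists("artifacts/MyApp_01.zip"))  # True
```

Because Deployment Manager only depends on these operations, any backend that can implement them — Nexus, a database, a network share — can serve as a custom repository type.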

Deployment Manager 1.x.x and 2.x.x Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your Pega® applications from within Pega Platform. You can create a standardized deployment process so that you can deploy predictable, high-quality releases without using third-party tools. With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging, application package generation, artifact management, and package promotion to different stages in the workflow. Deployment Manager supports artifact management on repository types such as Amazon S3, file system (supported in Pega 7.3.1 and later), and JFrog Artifactory. It also supports running automations on Jenkins, such as external regression or performance tests, that are not supported in Pega Platform. Deployment Manager is installed on the orchestration server, on which release managers configure and run pipelines. With Deployment Manager, you can see the run-time view of your pipeline as it moves through the CI/CD workflow. Deployment Manager provides key performance indicators (KPIs) and dashboards that provide performance information such as the deployment success rate, deployment frequency, and task failures. Use this information to monitor and optimize the efficiency of your DevOps process. Deployment Manager version 1.x.x is supported on Pega 7.3, and Deployment Manager version 2.x.x is supported on Pega 7.3.1. You can download Deployment Manager from the Deployment Manager Pega Exchange page. For more information, see the following PDN articles:

Installing and configuring Deployment Manager 1.x.x and 2.x.x Using Deployment Manager 1.x.x and 2.x.x This document describes the features that are available in the most current releases of Deployment Manager 1.x.x and 2.x.x.

CI/CD pipelines A CI/CD pipeline models the two key stages of software delivery: continuous integration and continuous delivery. In the continuous integration stage, developers continuously validate and merge branches into a target application. In the continuous delivery stage, the target application is packaged and moved through progressive stages in the pipeline. After application changes have moved through testing cycles, including Pega unit, regression, performance, and load testing, application packages are deployed to a production system either manually or, if you want to continuously deploy changes, automatically.

Systems in the Deployment Manager CI/CD pipeline The CI/CD pipeline comprises several systems and involves interaction with various Pega Platform servers: Orchestration server – Pega Platform system on which the Deployment Manager application runs and on which release managers or application teams model and run their CI/CD pipelines. This system manages workflows on the candidate systems in the pipeline. Candidate systems – Pega Platform servers that manage your application's life cycle; they include the following systems: Development system – The Pega Platform server on which developers build applications and merge branches into them. The product rule that defines the application package that is promoted to other candidate systems in the pipeline is configured on this system. A distributed development environment might have multiple development systems. In this environment, developers develop applications on remote Pega Platform development systems and then merge their changes on a main development system, from which they are packaged and moved in the Deployment Manager workflow. QA and staging systems – Pega Platform servers that validate application changes by using various types of testing, such as Pega unit, regression, security, load, and performance testing. Production system – Pega Platform server on which end users access the application.

Repositories in the pipeline Deployment Manager supports JFrog Artifactory, Amazon S3, and file system (supported in Pega 7.3.1 and later) repositories for artifact management of application packages. For each run of a pipeline, Deployment Manager packages and promotes the application changes that are configured in a product rule. The application package artifact is generated on the development environment, published in the repository, and then deployed to the next stage in the pipeline. A pipeline uses development and production repositories. After a pipeline is started, the application package moves through the pipeline life cycle in the following steps:
1. The development system publishes the application package to the development repository.
2. The QA system retrieves the artifact from the development repository and performs tasks on the artifact.
3. The staging system retrieves the artifact from the development repository and publishes it to the production repository.
4. The production system deploys the artifact from the production repository.

Pipelines in a branch-based environment If you use branches for application development, you can configure merge criteria on the pipeline to receive feedback about branches, such as whether a branch has been reviewed or meets guardrail compliance scores. If there are no merge conflicts, and the merge criteria are met, the branch is merged. The continuous delivery pipeline is then started either manually or automatically. The workflow of tasks in a branch-based pipeline is as follows: 1. One or more developers make changes in their respective branches. 2. Merge criteria, which are configured in Deployment Manager, are evaluated when branches are merged. 3. Continuous delivery starts in one of the following ways: Automatically, after a branch successfully passes the merge criteria. If another continuous delivery workflow is in progress, branches are queued and started after the previous workflow has been completed. Manually, if you have multiple development teams and want to start pipelines on a certain schedule. 4. During a build run, branches are queued for merging and merged after the build has been completed. The following figure shows the workflow in a branch-based environment:

Workflow in a branch-based environment

In a distributed, branch-based environment, you can have multiple development systems, and developers author and test the application on remote Pega Platform development systems. They then merge their changes on a main development system, from which they are packaged and moved in the Deployment Manager workflow. The following figure shows the workflow in a distributed branch-based environment:

Workflow in a distributed branch-based environment

Pipelines in an environment without branches If you do not use branches for application development but instead use ruleset-based development, you configure the continuous delivery pipeline in Deployment Manager. The workflow of tasks in this pipeline is as follows: 1. Developers update rules and check them in directly to the application rulesets on the development system. 2. The product rule that contains the application rules to be packaged and moved through the systems in the pipeline is on the development system. 3. Continuous delivery is started manually on a defined schedule by using Deployment Manager. The following figure shows the workflow in an environment without branches:

Workflow in an environment without branches

Installing and configuring Deployment Manager 1.x.x and 2.x.x Use Deployment Manager to create continuous integration and continuous delivery (CI/CD) pipelines, which automate tasks so that you can quickly deploy high-quality software to production. See the following topics for more information: Step 1: Installing Deployment Manager Step 2: Configuring systems in the pipeline Step 3: Configuring systems for branch-based development (optional; perform in addition to Step 2 if you are using branches in a distributed or nondistributed environment) Step 4: Configuring additional settings

Installing Deployment Manager To install Deployment Manager, you must first install Pega® Platform on all systems in the pipeline, and then import certain files into Pega Platform. 1. Install Pega Platform 7.3 or Pega 7.3.1 on all systems in the CI/CD pipeline. 2. Browse to the Deployment Manager Pega Exchange page, and then download one of the following files to your local disk on each system: DeploymentManager_01.0x.0x.zip if you are installing Deployment Manager on Pega 7.3. DeploymentManager_02.0x.0x.zip if you are installing Deployment Manager on Pega 7.3.1. 3. Use the Import wizard to import files into the appropriate systems. For more information about the Import wizard, see Importing a file by using the Import wizard. 4. Do one of the following actions: If you are installing Deployment Manager on Pega 7.3, complete the following tasks: 1. On the orchestration server, import the following files: PegaDevOpsFoundation_01.x.x.zip PegaDeploymentManager_01.x.x.zip PegaDevOpsSupport_7.3.zip In addition, import the following hotfixes, which are not provided in the DeploymentManager_01.0x.0x.zip file: HFIX-38491 HFIX-36965 HFIX-36674 2. On the development, QA, staging, and production systems, and on the remote development system (if you are using branches in a distributed environment), import the following files: PegaDevOpsFoundation_01.x.x.zip PegaDevOpsSupport_7.3.zip In addition, import the HFIX-38491 hotfix, which is not provided in the DeploymentManager_01.0x.0x.zip file. If you are installing Deployment Manager on Pega 7.3.1, complete the following tasks: 1. On the development, QA, staging, and production systems, import the PegaDevOpsFoundation02xx.zip file. 2. On the orchestration server, import the following files: PegaDevOpsFoundation02xx.zip PegaDeploymentManager02xx.zip

Step 2: Configuring systems in the pipeline Complete the following steps to set up a pipeline for all supported CI/CD workflows. If you are using branches, you must configure additional settings after you perform the required steps. 1. Step 2a: Configuring the orchestration server

2. Step 2b: Configuring candidate systems 3. Step 2c: Creating repositories on the orchestration and candidate systems

Step 2a: Configuring the orchestration server The orchestration server is the system on which release managers configure and manage CI/CD pipelines. 1. Create an application that the release manager uses for creating, managing, and running pipelines, by using the New Application wizard. For more information, see Creating an application with the New Application wizard. 2. Add the PegaDeploymentManager application to your application stack. 1. In the Designer Studio header, click the name of your application, and then click Definition. 2. In the Built on application section, click Add application. 3. In the Name field, press the Down Arrow key and select PegaDeploymentManager. 4. In the Version field, press the Down Arrow key and select the version of Deployment Manager that you are using. 5. Click Save. Ensure that this application remains unlocked and has at least one unlocked ruleset. 3. Optional: If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd service packages: 1. Click Records > Integration-Resources > Service Package. 2. Click api. 3. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 4. Click Records > Integration-Resources > Service Package. 5. Click cicd. 6. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 4. Add the PegaRULES:RepositoryAdministrator, PegaRULES:PegaAPI, and PegaRULES:SecurityAdministrator roles to the Administrator access groups that were generated by the New Application wizard: 1. Click Designer Studio > Org & Security > Groups & Roles > Access Groups. 2. Click an access group to open it. 3. In the Available roles section, click Add role. 4. In the field that is displayed, press the Down Arrow key and select PegaRULES:RepositoryAdministrator. 5. Click Add role. 6. In the field that is displayed, press the Down Arrow key and select PegaRULES:PegaAPI. 7. 
Click Add role. 8. In the field that is displayed, press the Down Arrow key and select PegaRULES:SecurityAdministrator. 9. Save the Edit Access Group rule form. 5. If you are using Deployment Manager for Pega 7.3, configure the api service package to use the PegaDevOpsFoundation:Administrators access group: 1. Click Records > Integration-Resources > Service Package. 2. Click api. 3. On the Context tab, in the Service access group field, enter PegaDevOpsFoundation:Administrators. This access group is used during rule resolution to find the correct service rule at run time. 4. Click Save. 5. In the Methods section, select Rule-Service-REST from the Service type list and verify that the following methods are listed:

Class name   Method name    Ruleset                        Endpoint
v1           assignments    Pega-API:07-10-19              /api/v1/assignments
v1           authenticate   Pega-API:07-10-31              /api/v1/authenticate
v1           branches       Pega-DevOpsSupport73:01-01-01  /api/v1/branches
v1           cases          Pega-API:07-10-19              /api/v1/cases
v1           casetypes      Pega-API:07-10-19              /api/v1/casetypes
v1           data           Pega-API:07-10-19              /api/v1/data
v1           docs           Pega-API:07-10-17              /api/v1/docs
v1           merges         Pega-API:07-10-31              /api/v1/merges
v1           users          Pega-API:07-10-27              /api/v1/users

6. Create an authentication profile on the orchestration server that references an operator ID whose access group points to the target application on each candidate system. For example, if the operator that is on the candidate systems has the credentials janedoe/rules, you must create an authentication profile on the orchestration server that is also configured with the janedoe/rules credentials. For more information about configuring authentication profiles, see Authentication Profile data instances - Completing the New or Save As forms. If the operator IDs and passwords are different on the candidate systems, you must create multiple authentication profiles. 7. Configure the candidate systems in your pipeline. For more information, see Step 2b: Configuring candidate systems.
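As a quick sanity check after configuring the roles and service packages, you can call one of the api service package endpoints from the command line. The sketch below only composes and prints the request; the host, port, and password are placeholder values, not values from this guide (janedoe is the example operator used above):

```shell
# Sketch: compose a smoke-test request against the api service package.
# Host, port, and password are placeholders -- substitute your own values.
PEGA_URL="http://orchestrator.example.com:8080/prweb"
OPERATOR="janedoe"                  # operator from the authentication profile
ENDPOINT="$PEGA_URL/api/v1/docs"    # GET endpoint listed in the Methods table
# Print the command to run; a 200 response indicates that the operator,
# roles, and service package are configured correctly:
printf 'curl -u %s:<password> %s\n' "$OPERATOR" "$ENDPOINT"
```

If the call returns 401 or 403, recheck the roles in the operator's access group and the TLS/SSL settings on the api service package.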

Step 2b: Configuring candidate systems

Configure each system that is used for the development, QA, staging, and production stages in the pipeline. 1. Use the Import wizard to import your target application into each candidate system. For more information about the Import wizard, see Importing a file by using the Import wizard. Deployment Manager does not support first-time deployment, so you must import the application into each Pega Platform server the first time that you configure Deployment Manager. 2. On each candidate system, add the PegaDevOpsFoundation application to your application stack: 1. In the Designer Studio header, click the name of your application, and then click Definition. 2. In the Built on application section, click Add application. 3. In the Name field, press the Down Arrow key and select PegaDevOpsFoundation. 4. In the Version field, press the Down Arrow key and select the version of Deployment Manager that you are using. 5. Click Save. 3. On each candidate system, add the PegaRULES:RepositoryAdministrator, PegaRULES:PegaAPI, and PegaRULES:SecurityAdministrator roles to the operator that you use to access your application on each system. 1. Log in to each Pega Platform server with an operator whose default access group points to your application. This is the operator whose credentials you used for the authentication profile that points to this system from the orchestration server. 2. Click your user profile and select Access group. 3. In the Available roles section, click Add role. 4. In the field that is displayed, press the Down Arrow key and select PegaRULES:RepositoryAdministrator. 5. Click Add role. 6. In the field that is displayed, press the Down Arrow key and select PegaRULES:PegaAPI. 7. Click Add role. 8. In the field that is displayed, press the Down Arrow key and select PegaRULES:SecurityAdministrator. 9. Click Save. 4. 
On each candidate system, create an authentication profile and configure it with the operator ID and password of the release manager operator. Use the operator ID and password of the administrative operator that was generated when you created a new application on the orchestration server. All candidate systems use this authentication profile to communicate with the orchestration server about the status of the tasks in the pipeline. 1. Click Create > Security > Authentication Profile. 2. In the Name field, enter ReleaseManager. 3. Configure the authentication profile to use the release manager operator ID and password, and configure other information, as appropriate. For example, if the credentials of the release manager are rmanager/rules, configure each authentication profile on the candidate systems with the rmanager/rules credentials. For more information about creating authentication profiles, see Authentication Profile data instances - Completing the New or Save As forms. 5. Optional: If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd service packages: 1. Click Records > Integration-Resources > Service Package. 2. Click api. 3. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 4. Click Records > Integration-Resources > Service Package. 5. Click cicd. 6. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 6. On the development system, create a product rule that defines the application package that will be moved through repositories in the pipeline. For more information, see Product rules: Completing the Create, Save As, or Specialization form. Do not include in the product rule the operator that is referenced in the authentication profile that you created on the orchestration server. 7. Configure repositories through which to move artifacts in your pipeline. 
For more information, see Step 2c: Creating repositories on the orchestration and candidate systems.

Step 2c: Creating repositories on the orchestration server and candidate systems Create repositories on the orchestration server and all candidate systems to move your application between all the systems in the pipeline. For more information about creating a repository, see Creating a repository connection. When you create repositories, note the following information: The Pega repository type is not supported. Ensure that each repository has the same name on all systems. When you create JFrog Artifactory repositories, ensure that you use the Generic package type in JFrog Artifactory. Also, when you create the authentication profile for the repository in Pega Platform, you must select the Preemptive authentication check box. After you configure a pipeline, verify connectivity to the development and production repositories by clicking Test Connectivity on the Repository rule form.
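A Generic-type Artifactory repository can also be created over Artifactory's REST API instead of in its UI. The sketch below only composes and prints the request; the repository key, host, and credentials are illustrative placeholders, not values from this guide:

```shell
# Sketch: create a Generic-package-type local repository in JFrog Artifactory.
# ART_URL, REPO_KEY, and the admin credentials are placeholders.
ART_URL="http://artifactory.example.com:8081/artifactory"
REPO_KEY="pega-pipeline-dev"     # use the same repository name on all systems
CONFIG='{"rclass":"local","packageType":"generic"}'
# Print the command to run against your Artifactory instance:
printf 'curl -u admin:<password> -X PUT -H "Content-Type: application/json" -d %s %s/api/repositories/%s\n' \
  "$CONFIG" "$ART_URL" "$REPO_KEY"
```

After creating the repository, configure the matching repository record in Pega Platform and use Test Connectivity to confirm the connection.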

Step 3: Configuring systems for branch-based development After you configure the orchestration server and all your candidate systems, configure additional settings so that you can use pipelines if you are using branches in a distributed or nondistributed environment. 1. Step 3a: Configuring the development system for branch-based development 2. Step 3b: Configuring the orchestration server for branch-based development

Step 3a: Configuring the development system for branch-based development Configure the development system if you are using branches either in a nondistributed environment or in a distributed environment in which you use both a main and remote development system. 1. Perform the following steps on either the development system (in a nondistributed environment) or the main development system (in a distributed environment): 1. Create a Dynamic System Setting that defines the rulesets that are hosted on the main development system: 1. Click Records > SysAdmin > Dynamic System Settings. 2. In the Owning Ruleset field, enter Pega-ImportExport. 3. In the Setting Purpose field, enter HostedRulesetsList. 4. Click Create and open. 5. On the Settings tab, in the Value field, enter a comma-separated list of the rulesets on the remote development system. Enclose each ruleset value within double quotation marks, for example, “HRApp”. 6. Click Save. 2. Create a Dynamic System Setting to define the URL of the orchestration server, even if the orchestration server and the development system are the same system: 1. Click Create > Records > SysAdmin > Dynamic System Settings. 2. In the Owning Ruleset field, enter Pega-DevOps-ReleaseManager. 3. In the Setting Purpose field, enter RMURL. 4. Click Create and open. 5. On the Settings tab, in the Value field, enter the URL of the orchestration server in the following format: http://hostname:port/prweb/PRRestService. 6. Click Save. 2. Perform the following steps on either the development system (in a nondistributed environment) or the remote development system (in a distributed environment): 1. Use the New Application wizard to create a development application that developers will log in to. This application allows development teams to maintain a list of development branches without modifying the definition of the target application. For more information, see Creating an application with the New Application wizard. 2. Add the target application of the pipeline as a built-on application layer of the development application: 1. Log in to the application. 2. In the Designer Studio header, click the name of your application, and then click Definition. 3. In the Built-on application section, click Add application. 4. In the Name field, press the Down Arrow key and select the name of the target application. 5. In the Version field, press the Down Arrow key and select the target application version. 6. Click Save. 3. Log in to the application that you created in the previous step and create an authentication profile that uses the operator ID and password of an operator whose default access group points to the target application on the development system. For more information, see Authentication Profile data instances: Completing the New or Save As form. 4. Lock the application rulesets to prevent developers from making changes to rules after branches have been merged: 1. In the Designer Studio header, click the name of your application, and then click Definition. 2. In the Application rulesets section, click the Open icon for each ruleset that you want to lock. 3. Click Lock and Save. 5. Enable Pega repository types. For more information, see Enabling the Pega repository type. 6. Create a new Pega repository type. For more information, see Creating a repository connection. Ensure that you do the following tasks: In the Host ID field, enter the URL of the development system. Use the authentication profile that you created by clicking the Use authentication check box, pressing the Down Arrow key in the Authentication profile field, and selecting the authentication profile that you created in step 3-c. 3. Configure the orchestration server. For more information, see Step 3b: Configuring the orchestration server for branch-based development.
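Because a malformed RMURL value is a common source of pipeline communication failures, it can help to validate the value before saving the Dynamic System Setting. A small sketch (the host name and port below are placeholders):

```shell
# Sanity-check an RMURL value against the required
# http://hostname:port/prweb/PRRestService format.
RMURL="http://orchestrator.example.com:8080/prweb/PRRestService"
if echo "$RMURL" | grep -Eq '^https?://[^/]+/prweb/PRRestService$'; then
  echo "RMURL format OK"
else
  echo "RMURL format invalid" >&2
fi
```

Trailing slashes or a missing /prweb/PRRestService path fail the check, which mirrors the format that candidate systems expect when they call back to the orchestration server.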

Step 3b: Configuring the orchestration server for branch-based development Configure the orchestration server so that you can use pipelines in a branch-based environment. 1. Create a Dynamic System Setting to define the operator who can start queued builds. Queued builds are started by using the operator that you define in this Dynamic System Setting. 1. Click Create > SysAdmin > Dynamic System Settings. 2. In the Owning Ruleset field, enter Pega-DevOps-ReleaseManager. 3. In the Setting Purpose field, enter ReleaseManager. 4. Click Create and open. 5. On the Settings tab, in the Value field, enter the operator ID whose default access group points to the release manager application. 6. Click Save. 2. Save the Pega-DeploymentManager agent to your ruleset and set its access group to the release manager application access group: 1. Click Records > SysAdmin > Agents. 2. Click the Pega-DeploymentManager agent in the Pega-DeploymentManager ruleset and save it to your application ruleset. 3. Click the Security tab. 4. In the Access Group field, press the Down Arrow key and select the access group of the release manager application. 5. Click Save.

Step 4: Configuring additional settings Optionally, configure email notifications on the orchestration server to notify users when an event occurs, such as a merge failure. If you are using a Jenkins step in a pipeline, you must also configure Jenkins. See the following topics for more information: Configuring email notifications on the orchestration server Configuring Jenkins

Configuring email notifications on the orchestration server Optionally, you can configure email notifications on the orchestration server. For example, users can receive emails when pre-merge criteria are not met and the system cannot create a build. To configure the orchestration server to send emails, complete the following steps: 1. Configure an email account and listener by clicking Designer Studio > Integration > Email > Email Wizard. This email account sends notifications to users when events occur, for example, if there are merge conflicts. For detailed information, see the procedure for “Configuring an email account that receives email and creates or manages work” in Entering email information in the Email wizard. 2. From the What would you like to do? list, select Receive an email and create/manage a work object. 3. From the What is the class of your work type? list, select Pega-Pipeline-CD. 4. From the What is your starting flow name? list, select NewWork. 5. From the What is your organization? list, select the organization that is associated with the work item. 6. In the What Ruleset? field, select the ruleset that contains the generated email service rule. This ruleset applies to the work class. 7. In the What RuleSet Version? field, select the version of the ruleset for the generated email service rule. 8. Click Next to configure the email listener.

9. In the Email Account Name field, enter Pega-Pipeline-CD, which is the name of the email account that the listener references for incoming and outgoing email. 10. In the Email Listener Name field, enter the name of the email listener. Begin the name with a letter, and use only letters, numbers, the ampersand character (&), and hyphens. 11. In the Folder Name field, enter the name of the email folder that the listener monitors. Typically, this folder is INBOX. 12. In the Service Package field, enter the name of the service package to be deployed. Begin the name with a letter, and use only letters, numbers, and hyphens to form an identifier. 13. In the Service Class field, enter the service class name. 14. In the Requestor User ID field, press the Down Arrow key and select the operator ID of the release manager operator. 15. In the Requestor Password field, enter the password for the release manager operator. 16. Click Next to continue the wizard and configure the service package. For more information, see Configuring the service package in the Email wizard. 17. After you complete the wizard, enable the listener that you created in the Email wizard. For more information, see Starting a listener.

Email notifications Emails are preconfigured with information about each notification type. For example, when a build failure occurs, the email that is sent provides information such as the pipeline name and the URL of the system on which the build failure occurred. Preconfigured emails are sent in the following scenarios: Build start – When a build starts, an email is sent to the release manager and, if you are using branches, to the operator who started a build. Build failure – If any step in the build process is unsuccessful, the build pauses. An email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. Build step completion – When a step in a build process is completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. Stage completion – When a stage in a build process is completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. Build completion – When a build is successfully completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. Manual tasks requiring approval – When a manual task requires email approval from a user, an email is sent to the user, who can approve or reject the task from the email. Stopped build – When a build is stopped, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. PegaUnit testing failure – If a PegaUnit test cannot successfully run on a step in the build, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. Schema changes required – If an application package requires schema changes and you do not have the privileges to deploy them, an email is sent to the operator who started the build.

Configuring Jenkins If you are using a Jenkins task in your pipeline, configure Jenkins so that it can communicate with the orchestration server. 1. On the orchestration server, create an authentication profile that uses Jenkins credentials: 1. Click Create > Security > Authentication Profile. 2. Enter a name, and then click Create and open. 3. In the User name field, enter the user name of the Jenkins user. 4. Click Set password, enter the Jenkins password, and then click Submit. 5. Select the Preemptive authentication check box. 6. Click Save. 2. Because the Jenkins task does not support Cross-Site Request Forgery (CSRF) protection, disable it by completing the following steps: 1. In Jenkins, click Manage Jenkins. 2. Click Configure Global Security. 3. In the CSRF Protection section, clear the Prevent Cross Site Request Forgery exploits check box. 4. Click Save. 3. Install the Post build task plug-in. 4. Install the curl command on the Jenkins server. 5. Create a new freestyle project. 6. On the General tab, select the This project is parameterized check box. 7. Add the BuildID and CallBackURL parameters: 1. Click Add parameter, and then select String parameter. 2. In the String field, enter BuildID. 3. Click Add parameter, and then select String parameter. 4. In the String field, enter CallBackURL. 8. In the Build Triggers section, select the Trigger builds remotely check box. 9. In the Authentication Token field, select the token that you want to use when you start Jenkins jobs remotely. 10. In the Build Environment section, select the Use Secret text(s) or file(s) check box. 11. In the Bindings section, do the following actions: 1. Click Add, and then select User name and password (conjoined). 2. In the Variable field, enter RMCREDENTIALS. 3. In the Credentials field, click Specific credentials. 4. Click Add, and then select Jenkins. 5. 
In the Add credentials dialog box, in the Username field, enter the operator ID of the release manager operator that is configured on the orchestration server. 6. In the Password field, enter the password. 7. Click Save. 12. In the Post-Build Actions section, do one of the following actions, depending on your operating system: If Jenkins is running on Microsoft Windows, add the following post-build tasks: 1. Click Add post-build action, and then select Post build task. 2. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example, BUILD FAILURE. 3. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%". 4. Click Add another task.

5. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example, BUILD SUCCESS. 6. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%" 7. Click Save. If Jenkins is running on UNIX or Linux, add the following post-build tasks. Use the dollar sign ($) to access the environment variables instead of the percent sign (%): 1. Click Add post-build action, and then select Post build task. 2. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example, BUILD FAILURE. 3. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"$BuildID\"}" "$CallBackURL" 4. Click Add another task. 5. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example, BUILD SUCCESS. 6. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"$BuildID\"}" "$CallBackURL" 7. Click Save.
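The post-build curl commands above all send the same JSON shape back to the orchestration server. The snippet below reconstructs that payload locally with illustrative values (the job name, build number, and build ID are placeholders), which can be useful when troubleshooting the callback without running a Jenkins build:

```shell
# Sketch: build the callback payload that the Jenkins post-build task posts
# to the orchestration server. All values here are illustrative.
JOB_NAME="pega-pipeline-job"
BUILD_NUMBER="42"
BuildID="BUILD-123"
STATUS="SUCCESS"   # or FAIL, from the matching post-build task
PAYLOAD=$(printf '{"jobName":"%s","buildNumber":"%s","pyStatusValue":"%s","pyID":"%s"}' \
  "$JOB_NAME" "$BUILD_NUMBER" "$STATUS" "$BuildID")
echo "$PAYLOAD"
```

Posting this payload to the CallBackURL with the release manager credentials mimics what the post-build task does when a build finishes.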

Using Deployment Manager 1.x.x and 2.x.x Use Deployment Manager to create continuous integration and continuous delivery (CI/CD) pipelines, which automate tasks so that you can quickly deploy high-quality software to production. On the orchestration server, release managers configure CI/CD pipelines for their Pega® Platform applications from the DevOps landing page. The landing page displays all the running and queued application builds, branches that are to be merged, and reports that provide information about your DevOps environment. See the following topics for more information: Adding an application pipeline on the orchestration server Modifying stages and tasks in your pipeline Modifying application and environment details Manually starting a build Starting a build in a branch-based or distributed branch-based environment Completing or rejecting a manual step in a build Schema changes in application packages Pausing a build Performing actions on a build with errors Viewing branch status Viewing build logs Viewing build reports Viewing reports for all builds Deleting an application pipeline

Adding an application pipeline on the orchestration server When you add a pipeline, you specify both pre-merge and post-merge criteria. For example, you can specify that a branch must be peer-reviewed before it can be merged on the remote development system, and you can specify that Pega unit tests are run after a branch is merged and is in the QA stage of the pipeline. 1. In the Designer Studio footer, click DevOps. 2. Click Add application pipeline. 3. Optional: Specify tasks that must be completed before a branch can be merged in the pipeline: 1. Click Add task. 2. From the Type list, select Pega, and then specify the task that you want to perform. To specify that a branch must meet a compliance percentage before it can be merged: 1. From the Task list, select Check for guardrails. 2. In the Weighted Compliance Score field, enter the minimum required compliance percentage. 3. Click Submit. To specify that a branch must be reviewed before it can be merged: 1. From the Task list, select Branch review. 2. Click Submit. 4. Optional: To start a build when a branch is merged, select the Trigger build on merge check box. One of the following results occurs: If no build is running in the pipeline, and a branch is successfully merged, the build is started by the operator who is logged in to the orchestration server. If a build is running, and a branch is successfully merged, the build is queued for processing. The queued build is started by using the operator ID that you defined in the ReleaseManager Dynamic System Setting. 5. Optional: To skip a build life cycle stage, clear its check box. 6. Optional: In the build life cycle stages, specify the tasks to be performed during each stage of the pipeline: 1. Click Add task. 2. From the Type list, select Pega, and then specify the task that you want to perform: To run all Pega unit tests in the application, from the Task list, select Pega unit testing. To run a Jenkins job, do the following actions: 1. From the Task list, select Jenkins. 2. In the Job name field, enter the name of the Jenkins job (which is the name of the Jenkins build) that you want to run. 3. In the Token field, enter the Jenkins authentication token. 4. In the Parameters field, enter parameters, if any, to send to the Jenkins job. To add a manual step that a user must perform in the pipeline, do the following tasks: 1. From the Task list, select Manual. 2. In the Job name field, enter text that describes the action that you want the user to take. 3. In the Assigned to field, press the Down Arrow key and select the operator ID to assign the task to. 7. Click Review pipeline. The system generates tasks, which you cannot delete, that the pipeline always performs, for example, deploying the application to each stage in the pipeline. 8. Click Next. 9. If you added a Jenkins step, specify Jenkins server information in the Add application dialog box, in the Jenkins server section: 1. In the URL field, enter the URL of the Jenkins server. 2. In the Authentication profile field, press the Down Arrow key and select the authentication profile on the orchestration server that specifies the Jenkins credentials to use for Jenkins jobs. 10. In the Environments section, specify the URL for your development system and the candidate systems that are in your pipeline, and also specify merge targets. 1. Specify development system information: 1. In the Development field, enter the URL of the development system. 2. In the Authentication profile field, press the Down Arrow key and select the authentication profile that the orchestration server uses to communicate with the development system. 2. Specify the URL of the candidate systems and the authentication profiles that the orchestration server uses to communicate with candidate systems. Select the authentication profile that you configured in step 4 in the Configuring candidate systems section of Installing and configuring Deployment Manager 01.01.02 and 02.01.01. Fields are displayed only for the pipeline stages that you selected in the build life cycle on the previous page. 11. Specify merge options. 1. Do one of the following actions: To merge branches into the highest existing ruleset in the application, click Highest existing ruleset. To merge branches into a new ruleset, click New ruleset. 2. Optional: In the Password field, enter the password that locks the rulesets. 12. In the Application repository section, select the development and production repositories. 1. In the Dev repository field, press the Down Arrow key and select the repository that connects to a candidate system from the development system. The archived product rule that contains the application in your pipeline is sent from the development system to the candidate system to which this repository connects. 2. In the Production repository field, press the Down Arrow key and select the production repository. The archived product rule that contains the application is sent from a candidate system to the production system to which this repository connects. 13. Click Next. 14. Specify the application and the application contents that you want to build in your pipeline by completing the following steps: 1. In the Application field, press the Down Arrow key and select your application. 2. In the Version field, press the Down Arrow key and select the application version. 3. In the Product rule field, press the Down Arrow key and select the product rule that defines the contents of the application. 4. In the Version field, press the Down Arrow key and select the product rule version. 5. Optional: Add dependent applications. For more information, see Product rules: Listing product dependencies for Pega-supplied applications. 6. Click Add.

Modifying stages and tasks in your pipeline You can add and remove tasks from the stages in your pipeline, if no builds are running. You can also add and skip pipeline stages. However, if you add a stage that you did not originally configure, you cannot configure details for it. 1. In the Designer Studio footer, click DevOps. 2. Click the pipeline that you want to modify. 3. Click Actions > Edit pipeline. 4. Optional: Add tasks to or remove tasks from the stages in your pipeline. 5. Optional: Add or skip stages in your pipeline. 6. Click Review pipeline.

For detailed information about modifying your pipeline, see Adding an application pipeline. You can modify application and environment details, such as the product rule to use and the URLs of the systems in your pipeline. See Modifying application and environment details for more information.

Modifying application and environment details You can modify application details when no builds are running in a pipeline. 1. In the Designer Studio footer, click DevOps. 2. Click the pipeline that you want to modify. 3. Click Actions > Settings. 4. To modify the product rule and version that defines the content of your application, do the following tasks: 1. In the Application Details section, click the Edit icon. 2. Optional: Specify the product rule version, and add or remove dependent applications. 3. Click Save. 5. To modify environment details, do the following tasks: 1. In the Environment Details section, click the Edit icon. 2. Optional: Specify information such as the URLs of your pipeline systems and the authentication profiles to apply to each system. 3. Click Save. For detailed information about modifying your pipeline, see Adding an application pipeline.

Manually starting a build Start a build manually if you are not using branches and are working directly in rulesets. You can also start a build manually if you do not want builds to start automatically when branches are merged; in that case, clear the Trigger build on merge check box in the pipeline configuration. 1. In the Designer Studio footer, click DevOps. 2. Click the pipeline for which you want to start a build. 3. Click Start build.

Starting a build in a branch-based or distributed branch-based environment

If you are using branches, developers can start builds when they publish branches. For more information about publishing branches, see Publishing a branch to a repository.

Completing or rejecting a manual step in a build If a manual step is configured for a build, the build pauses when it reaches the step, and you can either complete it or reject it. For example, if a user was assigned a task and completed it, you can complete the task to continue the build. Deployment Manager also sends you an email when a manual step is in the pipeline. You can complete or reject a step either within the pipeline or through email. Deployment Manager also generates a manual step if the application package contains schema changes that the release manager must apply. For more information, see Schema changes in application packages. To complete or reject a manual step within the pipeline, complete the following steps: 1. In the Designer Studio footer, click DevOps. 2. Click a pipeline. 3. Right-click the manual step and select one of the following options: Complete task: Resolve the task so that the build continues through the pipeline. Reject task: Reject the task so that the build does not proceed. To complete or reject a manual step from within an email, click either Accept or Reject.

Schema changes in application packages

If an application package that is to be deployed on candidate systems contains schema changes, the Pega Platform orchestration server checks the candidate system to verify that you have the required privileges to deploy the schema changes. One of the following results occurs:

- If you have the appropriate privileges, the schema changes are automatically applied to the candidate system, the application package is deployed, and the pipeline continues.
- If you do not have the appropriate privileges, Deployment Manager generates an SQL file that lists the schema changes and sends it to your email address. It also creates a manual step and pauses the pipeline so that you can apply the schema changes. After you complete the step, the pipeline continues. For more information about completing a step, see Completing or rejecting a manual step.

You can also configure settings to automatically deploy schema changes so that you do not have to apply them manually when you do not have the privileges to do so. For more information, see Configuring settings to automatically deploy schema changes.

Configuring settings to automatically deploy schema changes

You can configure settings to automatically deploy schema changes that are in an application package that is to be deployed on candidate systems. Configure these settings so that you do not have to apply schema changes manually if you do not have the privileges to deploy them.

1. On the orchestration server, in Pega Platform, set the AutoDBSchemaChanges Dynamic System Setting to true to enable schema changes at the system level:
   1. In Designer Studio, search for AutoDBSchemaChanges.
   2. On the Settings tab, in the Value field, enter true.
   3. Click Save.
2. Add the SchemaImport privilege to your access role to enable schema changes at the user level. For more information, see Specifying privileges for an Access of Role to Object rule.

These settings are applied sequentially: if the AutoDBSchemaChanges Dynamic System Setting is set to false, you cannot deploy schema changes, even if you have the SchemaImport privilege.
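The two settings combine as a simple AND gate: schema changes deploy automatically only when the system-level Dynamic System Setting is true and the operator holds the SchemaImport privilege. A minimal sketch of that decision logic (the function and argument names are illustrative, not part of Deployment Manager):

```shell
# Illustrative sketch of the schema-deployment gate described above.
# AutoDBSchemaChanges is the system-level switch; the SchemaImport
# privilege is the user-level switch. Both must be satisfied.
can_deploy_schema() {
  auto_db_schema_changes="$1"   # value of the Dynamic System Setting
  has_schema_import="$2"        # does the operator hold SchemaImport?
  if [ "$auto_db_schema_changes" = "true" ] && [ "$has_schema_import" = "true" ]; then
    echo "deploy"   # schema applied automatically; pipeline continues
  else
    echo "manual"   # SQL file is emailed; a manual step pauses the pipeline
  fi
}

can_deploy_schema true true     # deploy
can_deploy_schema false true    # manual: the DSS overrides the privilege
```

Note that the system-level setting wins: with the DSS set to false, even a privileged operator falls back to the manual step.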

Pausing a build

To pause a build, click the Pause button. When you pause a build, the pipeline completes the task that it is currently running and stops the build at the next step.

Performing actions on a build with errors

If a build has errors, the pipeline stops processing it. You can take one of the following actions:

- Ignore the current step and run the next step by clicking the Start button.
- After fixing the errors, restart the build at the current step by clicking the Start button.
- Roll back to an earlier build by clicking the Roll back build button.

Viewing branch status

You can view the status of all the branches that are in your pipeline. For example, you can see whether a branch was merged in a build and when it was merged.

1. In the Designer Studio footer, click DevOps.
2. Click a pipeline.
3. Click Actions > View branches.

Viewing build logs

View logs for a build to see the completion status of operations, for example, when a build is moved to a new stage. You can change the logging level to control which events are displayed in the log. For example, you can change the logging level of your builds from INFO to DEBUG for troubleshooting purposes. For more information, see Logging Level Settings tool.

1. In the Designer Studio footer, click DevOps.
2. Click a pipeline.
3. Click the Gear icon for the build for which you want to view the log file.
4. Click View log.
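Raising the level from INFO to DEBUG simply makes lower-severity events visible. As a rough illustration of the effect when reading a downloaded log (the log lines and format are invented for the example):

```shell
# Invented sample of a build log; real log formats differ.
log='INFO  Deployment started
DEBUG Resolving repository path
INFO  Build moved to QA stage
DEBUG Artifact checksum verified'

# At the INFO level, only the INFO events are recorded:
echo "$log" | grep -c '^INFO'    # 2 events visible
# At the DEBUG level, every event is recorded:
echo "$log" | grep -c '^'        # all 4 events visible
```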

Viewing the report for a build

Reports provide information about all the builds in your pipeline. You can view the following key performance indicators (KPIs):

- Deployment Success – Percentage of deployments that are successfully deployed to production.
- Deployment Frequency – Frequency of new deployments to production.
- Deployment Speed – Average time taken for a build to reach production from when it was started.
- Build frequency – Frequency at which new builds are started.
- Failure rate – Average number of failures per build.

To view reports, do the following tasks:

1. In the Designer Studio footer, click DevOps.
2. Click a pipeline.
3. Click the Gear icon for the build for which you want to view the build report.
4. Click View report.
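For intuition, the success and failure KPIs are simple ratios over the build history. A sketch with a made-up build record (the CSV layout is invented for illustration; Deployment Manager computes these figures internally):

```shell
# Hypothetical build history: build_id,target_stage,status
builds='1,production,success
2,production,failed
3,qa,success
4,production,success'

# Deployment Success: percentage of production deployments that succeeded.
success=$(echo "$builds" | awk -F, '$2=="production" {t++; if ($3=="success") s++}
                                    END {printf "%.0f%%", 100*s/t}')
echo "Deployment Success: $success"   # 2 of 3 production builds -> 67%

# Failure rate: average number of failures per build.
failure_rate=$(echo "$builds" | awk -F, '{t++; if ($3=="failed") f++}
                                         END {printf "%.2f", f/t}')
echo "Failure rate: $failure_rate"    # 1 failure over 4 builds -> 0.25
```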

Viewing reports for all builds

Reports provide a variety of information about all the builds in your pipeline. You can view key performance information, such as the percentage of deployments that were successful or the failure rate of builds.

1. In the Designer Studio footer, click DevOps.
2. Click a pipeline.
3. Click Actions > View reports.

Deleting an application pipeline

When you delete a pipeline, its associated application packages are not removed from the repositories that the pipeline is configured to use.

1. In the Designer Studio footer, click DevOps.
2. Click the Delete icon for the pipeline that you want to delete.
3. Click Submit.

Deployment Manager 4.3.x

Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your Pega applications from within Pega Platform™. You can create a standardized deployment process so that you can deploy predictable, high-quality releases without using third-party tools. With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging, application package generation, artifact management, and package promotion to different stages in the workflow.

Deployment Manager 4.3.x is supported on Pega 8.1. You can download it for Pega Platform from the Deployment Manager Pega Exchange page. Each customer VPC on Pega Cloud has a dedicated orchestrator instance for Deployment Manager, so you do not need to install Deployment Manager to use it with your Pega Cloud application. To use notifications, you must install or upgrade to Pega 8.1.3 on the orchestration server.

For more information about the features in the latest version of Deployment Manager 4.3.x, see the following articles:

- Deployment Manager release notes
- Deployment Manager architecture and workflows
- Best practices for using branches with Deployment Manager
- Creating custom repository types for Deployment Manager
- Installing, upgrading, and configuring Deployment Manager 4.3.x
- Using Deployment Manager 4.3.x

Installing, upgrading, and configuring Deployment Manager 4.3.x

Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks and allow you to quickly deploy high-quality software to production. Each customer virtual private cloud (VPC) on Pega Cloud has a dedicated orchestrator instance for Deployment Manager, so you do not need to install Deployment Manager to use it with your Pega Cloud application. This document describes the features of the latest version of Deployment Manager 4.3.x. To use notifications, you must install or upgrade to Pega 8.1.3 on the orchestration server.

See the following topics for more information about installing and configuring Deployment Manager:

- Step 1: Installing Deployment Manager
- Step 2: Upgrading to Deployment Manager 4.3.x (optional)
- Step 3: Configuring systems in the pipeline
- Step 4: Configuring the development system for branch-based development (optional)
- Step 5: Configuring additional settings (optional)

For information about using Deployment Manager, see Using Deployment Manager 4.3.x.

Step 1: Installing Deployment Manager

Each customer virtual private cloud (VPC) on Pega Cloud has a dedicated orchestrator instance for Deployment Manager, so you do not need to install Deployment Manager to use it with your Pega Cloud application. If you are upgrading from an earlier release to Deployment Manager 4.3.x, contact Pegasystems® Global Customer Support (GCS) to request a new version.

To install Deployment Manager 4.3.x on premises, complete the following steps:

1. Install Pega 8.1 on all systems in the CI/CD pipeline.
2. Browse to the Deployment Manager Pega Exchange page, and then download the DeploymentManager04.03.0x.zip file for your version of Deployment Manager to your local disk on each system.
3. Use the Import wizard to import files into the appropriate systems. For more information about the Import wizard, see Import wizard.
4. On the orchestration server, import the following files:
   - PegaDevOpsFoundation_4.zip
   - PegaDeploymentManager_4.3.zip
5. On the development, QA, staging, and production systems, import the PegaDevOpsFoundation_4.zip file.
6. Optional: If you are using distributed development, on the remote development system, import the PegaDevOpsFoundation_4.zip file.
7. Do one of the following actions:
   - If you are upgrading to Deployment Manager 4.3.x, perform the upgrade. For more information, see Step 2: Upgrading to Deployment Manager 4.3.x.
   - If you are not upgrading, continue the installation procedure. For more information, see Step 3a: Configuring authentication profiles on the orchestration server and candidate systems.

Step 2: Upgrading to Deployment Manager 4.3.x

Before you upgrade, ensure that no deployments are running, have errors, or are paused. To upgrade to Deployment Manager 4.3.x, either on Pega Cloud or on premises, perform the following steps:

1. On each candidate system, update the PegaDevOpsFoundation application version to the version of Deployment Manager that you are using.
   1. In the Dev Studio header, click the name of your application, and then click Definition.
   2. In the Built on application section, for the PegaDevOpsFoundation application, in the Version field, press the Down Arrow key and select the version of Deployment Manager that you are using.
   3. Click Save.
2. If you are upgrading from Deployment Manager version 1.x, 2.x, 3.x, or 4.1.x and do not see the pipelines that you created in earlier releases, run the pxMigrateOldPipelinesTo42 activity:
   1. In Dev Studio, search for pxMigrateOldPipelinesTo42, and then click the activity in the dialog box that displays the results.
   2. Click Actions > Run.
   3. In the dialog box that is displayed, click Run.
3. If you are upgrading from Deployment Manager version 1.x, 2.x, 3.x, or 4.1.x, on the orchestration server, run the pxUpdateDescription activity:
   1. In Dev Studio, search for pxUpdateDescription, and then click the activity in the dialog box that displays the results.
   2. Click Actions > Run.
   3. In the dialog box that is displayed, click Run.

If you are upgrading from Deployment Manager 3.2.1 or a later release, you do not need to do the rest of the steps in this procedure or the required steps in the remainder of this document. If you are upgrading from an earlier release and have pipelines configured, complete this procedure.

4. On the orchestration server, run the pxUpdatePipeline activity:
   1. In Dev Studio, search for pxUpdatePipeline, and then click the activity in the dialog box that displays the results.
   2. Click Actions > Run.
   3. In the dialog box that is displayed, click Run.
5. Modify the current release management application so that it is built on PegaDeploymentManager:04-03-01.
   1. In the Dev Studio header, click the name of your application, and then click Definition.
   2. On the Definition tab of the Edit Application rule form, in the Built on application section, for the PegaDeploymentManager application, press the Down Arrow key and select 04.03.01.
   3. Click Save.
6. Merge rulesets to the PipelineData ruleset.
   1. Click Configure > System > Refactor > Rulesets.
   2. Click Copy/Merge RuleSet.
   3. Click the Merge Source RuleSet(s) to Target RuleSet radio button.
   4. Click the RuleSet Versions radio button.
   5. In the Available Source Ruleset(s) section, select the first open ruleset version that appears in the list, and then click the Move icon. All your current pipelines are stored in the first open ruleset. If you modified this ruleset after you created the application, select all the ruleset versions that contain pipeline data.
   6. In the Target RuleSet/Information section, in the Name field, press the Down Arrow key and select PipelineData.
   7. In the Version field, enter 01-01-01.
   8. For the Delete Source RuleSet(s) upon completion of merge? option, click No.
   9. Click Next.
   10. Click Merge to merge your pipelines to the PipelineData:01-01-01 ruleset.
   11. Click Done.
7. Log out of the orchestration server and log back in to it with the DMReleaseAdmin operator ID and the password that you specified for it.

Your pipelines are migrated to the Pega Deployment Manager application. For backup purposes, pipelines are still visible in your previous release management application; however, do not create deployments with that application, because deployments might not work correctly. You do not need to perform any of the required steps in the remainder of this document.

Step 3: Configuring systems in the pipeline

Complete the following steps to set up a pipeline for all supported CI/CD workflows. If you are using branches, you must configure additional settings after you perform the required steps.

1. Step 3a: Configuring authentication profiles on the orchestration server and candidate systems
2. Step 3b: Configuring the orchestration server
3. Step 3c: Configuring candidate systems
4. Step 3d: Creating repositories on the orchestration server and candidate systems

Step 3a: Configuring authentication profiles on the orchestration server and candidate systems

When you install Deployment Manager on all the systems in your pipeline, default applications, operator IDs, and authentication profiles that communicate between the orchestration server and candidate systems are also installed.

On the orchestration server, the following items are installed:

- The Pega Deployment Manager application.
- The DMReleaseAdmin operator ID, which release managers use to log in to the Pega Deployment Manager application. You must enable this operator ID and specify its password.
- The DMAppAdmin authentication profile. The orchestration server uses this authentication profile to communicate with candidate systems so that it can run tasks in the pipeline. You must update this authentication profile to use the password that you specified for the DMAppAdmin operator ID, which is configured on all the candidate systems.

On all the candidate systems, the following items are installed:

- The PegaDevOpsFoundation application.
- The DMAppAdmin operator ID, which points to the PegaDevOpsFoundation application. You must enable this operator ID and specify its password.
- The DMReleaseAdmin authentication profile. All candidate systems use this authentication profile to communicate with the orchestration server about the status of the tasks in the pipeline. You must update this authentication profile to use the password that you specified for the DMReleaseAdmin operator ID, which is configured on the orchestration server.

The DMReleaseAdmin and DMAppAdmin operator IDs do not have default passwords. Configure the default authentication profiles by following these steps:

1. On the orchestration server, enable the DMReleaseAdmin operator ID and specify its password.
   1. Log in to the orchestration server with [email protected]/install.
   2. In Dev Studio, click Records > Organization > Operator ID, and then click DMReleaseAdmin.
   3. On the Edit Operator ID rule form, click the Security tab.
   4. Clear the Disable Operator check box.
   5. Click Save.
   6. Click Update password.
   7. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then click Submit.
   8. Log out of the orchestration server.
2. On each candidate system, which includes the development, QA, staging, and production systems, enable the DMAppAdmin operator ID. If you want to create your own operator IDs, ensure that they point to the PegaDevOpsFoundation application.
   1. Log in to each candidate system with [email protected]/install.
   2. In Dev Studio, click Records > Organization > Operator ID, and then click DMAppAdmin.
   3. In the Explorer panel, click the operator ID initials, and then click Operator.
   4. On the Edit Operator ID rule form, click the Security tab.
   5. Clear the Disable Operator check box.
   6. Click Save.
   7. Click Update password.
   8. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then click Submit.
   9. Log out of each candidate system.
3. On each candidate system, update the DMReleaseAdmin authentication profile to use the new password. All candidate systems use this authentication profile to communicate with the orchestration server about the status of the tasks in the pipeline.
   1. Log in to each candidate system with the DMReleaseAdmin operator ID and the password that you specified.
   2. In Dev Studio, click Records > Security > Authentication Profile.
   3. Click DMReleaseAdmin.
   4. On the Edit Authentication Profile rule form, click Set password.
   5. In the Password dialog box, enter the password, and then click Submit.
   6. Save the rule form.
4. On the orchestration server, modify the DMAppAdmin authentication profile to use the new password. The orchestration server uses this authentication profile to communicate with candidate systems so that it can run tasks in the pipeline.
   1. Log in to the orchestration server with the DMAppAdmin user name and the password that you specified.
   2. In Dev Studio, click Records > Security > Authentication Profile.
   3. Click DMAppAdmin.
   4. On the Edit Authentication Profile rule form, click Set password.
   5. In the Password dialog box, enter the password, and then click Submit.
   6. Save the rule form.
5. Do one of the following actions:
   - If you are upgrading to Deployment Manager 4.3.x, resume the upgrade procedure from step 2. For more information, see Step 2: Upgrading to Deployment Manager 4.3.x.
   - If you are not upgrading, continue the installation procedure. For more information, see Step 3b: Configuring the orchestration server.

Step 3b: Configuring the orchestration server

The orchestration server is the system on which release managers configure and manage CI/CD pipelines.

1. Optional: If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd service packages.
   1. Click Records > Integration-Resources > Service Package.
   2. Click api.
   3. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.
   4. Click Records > Integration-Resources > Service Package.
   5. Click cicd.
   6. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.
2. Configure the candidate systems in your pipeline. For more information, see Step 3c: Configuring candidate systems.

Step 3c: Configuring candidate systems

Configure each system that is used for the development, QA, staging, and production stages in the pipeline.

1. On each candidate system, add the PegaDevOpsFoundation application to your application stack.
   1. In the Dev Studio header, click the name of your application, and then click Definition.
   2. In the Built on application section, click Add application.
   3. In the Name field, press the Down Arrow key and select PegaDevOpsFoundation.
   4. In the Version field, press the Down Arrow key and select the version of Deployment Manager that you are using.
   5. Click Save.
2. Optional: If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd service packages.
   1. Click Records > Integration-Resources > Service Package.
   2. Click api.
   3. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.
   4. Click Records > Integration-Resources > Service Package.
   5. Click cicd.
   6. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.
3. Optional: If you want to use a product rule other than the default product rule that is created by the New Application wizard, on the development system, create a product rule that defines the application package that will be moved through repositories in the pipeline. For more information, see Creating a product rule by using the create menu. When you use the New Application wizard, a default product rule is created that has the same name as your application.
4. Configure repositories through which to move artifacts in your pipeline. For more information, see Step 3d: Creating repositories on the orchestration server and candidate systems.

Step 3d: Creating repositories on the orchestration server and candidate systems

If you are using Deployment Manager on premises, create repositories on the orchestration server and all candidate systems to move your application between all the systems in the pipeline. You can use a supported repository type that is provided in Pega Platform™, or you can create a custom repository type. If you are using Deployment Manager on Pega Cloud, default repositories are provided; if you want to use repositories other than the ones provided, you can create your own.

For more information about creating a supported repository, see Creating a repository for file storage and knowledge management. For more information about creating a custom repository type, see Creating custom repository types for Deployment Manager.

When you create repositories, note the following information:

- The Pega repository type is not supported.
- Ensure that each repository has the same name on all systems.
- When you create JFrog Artifactory repositories, ensure that you create a Generic package type in JFrog Artifactory. Also, when you create the authentication profile for the repository on Pega Platform, you must select the Preemptive authentication check box.
- After you configure a pipeline, you can verify that the repository connects to the URL of the development and production repositories by clicking Test Connectivity on the Repository rule form.
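For a rough idea of what such a repository does with an application package, the sketch below echoes (dry-run, so it never touches the network) the HTTP transfers that a Generic JFrog Artifactory repository supports: an authenticated PUT to deploy an artifact and a GET to retrieve it. The host, repository name, and file names are placeholders; Deployment Manager performs equivalent transfers through its repository integration, not through these exact commands.

```shell
# Placeholder values -- substitute your own Artifactory host and repository.
ARTIFACTORY_URL="https://artifactory.example.com/artifactory"
REPO="pega-pipeline-artifacts"            # must have the same name on every system
ARTIFACT_PATH="MyApp/MyApp_01.01.01.zip"  # hypothetical application package

# Build the commands as strings and echo them instead of running them.
upload_cmd="curl -u \$USER:\$PASS -T MyApp_01.01.01.zip $ARTIFACTORY_URL/$REPO/$ARTIFACT_PATH"
download_cmd="curl -u \$USER:\$PASS -O $ARTIFACTORY_URL/$REPO/$ARTIFACT_PATH"
echo "$upload_cmd"
echo "$download_cmd"
```

Because every system addresses the artifact as repository-name/path, the repository name must match on the orchestration server and all candidate systems, which is the naming rule noted above.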

Step 4: Configuring the development system for branch-based development (optional)

If you are using branches in a distributed or nondistributed branch-based environment, configure the development system to create a pipeline. Complete the following steps:

1. On the development system (in a nondistributed environment) or the main development system (in a distributed environment), create a Dynamic System Setting to define the URL of the orchestration server, even if the orchestration server and the development system are the same system.
   1. Click Create > Records > SysAdmin > Dynamic System Settings.
   2. In the Owning Ruleset field, enter Pega-DevOps-Foundation.
   3. In the Setting Purpose field, enter RMURL.
   4. Click Create and open.
   5. On the Settings tab, in the Value field, enter the URL of the orchestration server in the format http://hostname:port/prweb/PRRestService.
   6. Click Save.
2. Complete the following steps on either the development system (in a nondistributed environment) or the remote development system (in a distributed environment).
   1. Use the New Application wizard to create a new development application that developers will log in to. This application allows development teams to maintain a list of development branches without modifying the definition of the target application.
   2. Add the target application of the pipeline as a built-on application layer of the development application:
      1. Log in to the application.
      2. In the Dev Studio header, click the name of your application, and then click Definition.
      3. In the Built-on application section, click Add application.
      4. In the Name field, press the Down Arrow key and select the name of the target application.
      5. In the Version field, press the Down Arrow key and select the target application version.
      6. Click Save.
   3. Lock the application rulesets to prevent developers from making changes to rules after branches have been merged:
      1. In the Dev Studio header, click the name of your application, and then click Definition.
      2. In the Application rulesets section, click the Open icon for each ruleset that you want to lock.
      3. Click Lock and Save.
   4. Optional: It is recommended that you merge branches by using the Merge Branch wizard. However, you can publish a branch to the remote development system to start a deployment. Publishing a branch when you have multiple pipelines per application is not supported.
      1. In Dev Studio, enable Pega repository types. For more information, see Enabling the Pega repository type.
      2. Create a new Pega repository type. For more information, see Creating a repository connection for file storage and knowledge management. Ensure that you do the following tasks:
         - In the Host ID field, enter the URL of the development system.
         - Ensure that the default access group of the operator that is configured for the authentication profile of this repository points to the pipeline application on the development system (in a nondistributed environment) or the main development system (in a distributed environment).
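The RMURL value must follow the http://hostname:port/prweb/PRRestService pattern exactly, which is easy to get wrong when copying a server URL. A small illustrative format check (not part of Deployment Manager; the host name is a placeholder):

```shell
# Illustrative check that an RMURL value matches the required format
# http://hostname:port/prweb/PRRestService.
valid_rmurl() {
  case "$1" in
    http://*/prweb/PRRestService|https://*/prweb/PRRestService) return 0 ;;
    *) return 1 ;;
  esac
}

# Example with a placeholder orchestration-server host:
if valid_rmurl "http://orchestrator.example.com:8080/prweb/PRRestService"; then
  echo "RMURL format ok"
fi
```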

Step 5: Configuring additional settings (optional)

As part of your pipeline, you can optionally send email notifications to users and configure Jenkins if you are using a Jenkins task. See the following topics for more information:

- Configuring email accounts on the orchestration server
- Configuring Jenkins

Configuring email accounts on the orchestration server

Deployment Manager provides the Pega-Pipeline-CD email account and the DMEmailListener email listener. If you are configuring email accounts for the first time, update your email account details in the Deployment Manager portal. For more information, see Configuring email senders and recipients.

If you are upgrading to Deployment Manager 4.3.x and using the Pega-Pipeline-CD email account for sending emails, the DMEmailListener email listener always listens to the Pega-Pipeline-CD account. If you have a different listener for the Pega-Pipeline-CD account, delete that listener by doing the following steps:

1. In Dev Studio, click Configure > Integration > Email > Email listeners.
2. On the Email: Integration page, on the Email Listeners tab, click the listener that you want to delete.
3. Click Delete.

If you are upgrading to Deployment Manager and using the Default email account, after you upgrade to Deployment Manager 4.3.x, do the following actions:

1. Update the email sender and recipient in Deployment Manager. For more information, see Configuring email senders and recipients.
2. If you have an email listener that listens to the same email address that you configured in Deployment Manager in the previous step, delete the listener to ensure that the DMEmailListener is listening to the email account that you configured.

Email notifications

Emails are preconfigured with information about each notification type. For example, when a deployment failure occurs, the email that is sent provides information such as the pipeline name and the URL of the system on which the deployment failure occurred. Preconfigured emails are sent in the following scenarios:

- Deployment start – When a deployment starts, an email is sent to the release manager and, if you are using branches, to the operator who started the deployment.
- Deployment step completion or failure – When a step either completes or fails, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. The deployment pauses if there are any errors.
- Deployment completion – When a deployment is successfully completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
- Stage completion or failure – When a stage in a deployment process either succeeds or fails, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
- Manual tasks requiring approval – When a manual task requires email approval from a user, an email is sent to the user, who can approve or reject the task from the email.
- Stopped deployment – When a deployment is stopped, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
- Pega unit testing success or failure – If you are using the Run Pega unit tests task, and the task either succeeds or fails, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
- Schema changes required – If you do not have the required schema privileges to deploy schema changes on application packages that require those changes, an email is sent to the operator who started the deployment.
- Guardrail compliance score success or failure – If you are using the Check guardrail compliance task, an email is sent to the release manager if the task either succeeds or fails.
- Approve for production – If you are using the Approve for production task, which requires approval from a user before application changes are deployed to production, an email is sent to the user. The user can reject or approve the changes.
- Verify security checklist success or failure – If you are using the Verify security checklist task, which requires that all tasks be completed in the Application Security Checklist to ensure that the pipeline complies with security best practices, an email is sent to the release manager if the task either succeeds or fails.
- Pega scenario testing success or failure – If you are using the Run Pega scenario tests task, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge, if Pega scenario testing either succeeds or fails.
- Start test coverage success or failure – If you are using the Enable test coverage task to generate a test coverage report, an email is sent to the release manager if the task either fails or succeeds.
- Verify test coverage success or failure – If you are using the Verify test coverage task, an email is sent to the release manager if the task either fails or succeeds.
- Application quality statistics refreshed – If you are using the Refresh application quality statistics task, an email is sent to the release manager when the task is run.

Configuring Jenkins If you are using a Jenkins task in your pipeline, configure Jenkins so that it can communicate with the orchestration server. 1. On the orchestration server, create an authentication profile that uses Jenkins credentials. 1. Click Create > Security > Authentication Profile. 2. Enter a name, and then click Create and open. 3. In the User name field, enter the user name of the Jenkins user. 4. Click Set password, enter the Jenkins password, and then click Submit. 5. Click the Preemptive authentication check box. 6. Click Save. 2. Because the Jenkins task does not support Cross-Site Request Forgery (CSRF), disable it by completing the following steps: 1. In Jenkins, click Manage Jenkins. 2. Click Configure Global Security. 3. In the CRSF Protection section, clear the Prevent Cross Site Request Forgery exploits check box. 4. Click Save. 3. Install the Post build task plug-in. 4. Install the curl command on the Jenkins server. 5. Create a new freestyle project. 6. On the General tab, select the This project is parameterized check box. 7. Add the BuildID and CallBackURL parameters. 1. Click Add parameter, and then select String parameter. 2. In the String field, enter BuildID. 3. Click Add parameter, and then select String parameter. 4. In the String field, enter CallBackURL. 8. In the Build Triggers section, select the Trigger builds remotely check box. 9. In the Authentication Token field, select the token that you want to use when you start Jenkins jobs remotely. 10. In the Build Environment section, select the Use Secret text(s) or file(s) check box. 11. In the Bindings section, do the following actions: 1. Click Add, and then select User name and password (conjoined). 2. In the Variable field, enter RMCREDENTIALS 3. .In the Credentials field, click Specific credentials. 4. Click Add, and then select Jenkins. 5. 
In the Add credentials dialog box, in the Username field, enter the operator ID of the release manager operator that is configured on the orchestration server. 6. In the Password field, enter the password. 7. Click Save.

12. In the Post-Build Actions section, do one of the following actions, depending on your operating system:

If Jenkins is running on Microsoft Windows, add the following post-build tasks:
1. Click Add post-build action, and then select Post build task.
2. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example, BUILD FAILURE.
3. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%"
4. Click Add another task.
5. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example, BUILD SUCCESS.
6. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%"
7. Click Save.

If Jenkins is running on UNIX or Linux, add the same post-build tasks, but use the dollar sign ($) instead of the percent sign (%) to access the environment variables:
1. Click Add post-build action, and then select Post build task.
2. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example, BUILD FAILURE.
3. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"$BuildID\"}" "$CallBackURL"
4. Click Add another task.
5. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example, BUILD SUCCESS.
6. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"$BuildID\"}" "$CallBackURL"
7. Click Save.
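The failure and success scripts differ only in the status value and in the variable syntax for the platform. As a sanity check before wiring the commands into Jenkins, the callback can be factored into a small helper script. This is an illustrative sketch, not part of the product: the function name and sample values are placeholders, but the JSON fields match the curl commands above.

```shell
# Hypothetical helper for the post-build callback: builds the JSON
# payload that the orchestration server expects and prints it. In a
# real Jenkins post-build task, the commented curl line would send it.
notify_dm() {
  status="$1"   # SUCCESS or FAIL
  payload=$(printf '{"jobName":"%s","buildNumber":"%s","pyStatusValue":"%s","pyID":"%s"}' \
    "$JOB_NAME" "$BUILD_NUMBER" "$status" "$BuildID")
  # curl --user "$RMCREDENTIALS" -H "Content-Type: application/json" \
  #      -X POST --data "$payload" "$CallBackURL"
  echo "$payload"
}

# Sample values stand in for the variables that Jenkins injects at build time.
JOB_NAME=my-pipeline BUILD_NUMBER=42 BuildID=BUILD-7
notify_dm SUCCESS
# prints {"jobName":"my-pipeline","buildNumber":"42","pyStatusValue":"SUCCESS","pyID":"BUILD-7"}
```

Printing the payload first makes it easy to confirm that the escaped quotes in the Jenkins Script field produce valid JSON before the callback URL is involved.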

Using Deployment Manager 4.3.x

Use Deployment Manager to create continuous integration and continuous delivery (CI/CD) pipelines, which automate tasks so that you can quickly deploy high-quality software to production. On the orchestration server, release managers use the DevOps landing page to configure CI/CD pipelines for their Pega Platform™ applications. The landing page displays all the running and queued application deployments, branches that are to be merged, and reports that provide information about your DevOps environment, such as key performance indicators (KPIs).

This document describes the features for the latest version of Deployment Manager 4.3.x. To use notifications, you must install or upgrade to Pega 8.1.3 on the orchestration server.

For more information about using Deployment Manager to configure and use CI/CD pipelines, see the following topics:
Accessing the Dev Studio portal
Starting Deployment Manager
Roles and users
Deployment Manager notifications
Configuring an application pipeline
Accessing systems in your pipeline
Manually starting a deployment
Starting a deployment in a branch-based environment
Starting a deployment in a distributed, branch-based environment
Publishing application changes in App Studio
Schema changes in application packages
Completing or rejecting a manual step in a deployment
Managing aged updates
Pausing a deployment
Stopping a deployment
Performing actions on a deployment with errors
Diagnosing a pipeline
Viewing merge requests
Viewing deployment logs
Viewing deployment reports
Viewing reports for all deployments
Deleting an application pipeline
Viewing, downloading, and deleting application packages in repositories

Accessing the Dev Studio portal

Deployment Manager provides a dedicated portal from which you can access features. From within Deployment Manager, when you configure pipeline details, you can open, modify, and create repositories and authentication profiles in Dev Studio if you have permission to use the Dev Studio portal. If you add the Developer portal to the PegaDeploymentManager:Administrators access group, all the users that you add in the Deployment Manager portal can also access Dev Studio. To add the Dev Studio portal to an access group of users, complete the following steps:

1. In Dev Studio, click Configure > Org & Security > Groups & Roles > Access Groups.
2. Click the access group that you want to configure.
3. In the Edit Access Group rule form, on the Definition tab, in the Available Portals field, click Add portal.
4. In the Name field, press the Down Arrow key and select Developer.
5. Save the rule form.

Starting Deployment Manager

Deployment Manager provides a dedicated portal from which you can access features. Depending on your permissions, you log in to either Deployment Manager or Dev Studio. To start Deployment Manager from Dev Studio, in the header, click Launch > Deployment Manager.

Roles and users

Deployment Manager provides two default roles, which you cannot modify or delete, that define privileges for super administrators and application administrators. Privileges for super administrators are applied across all applications, and privileges for application administrators are applied to specific applications. Super administrators can also add roles and specify the privileges to assign to them. Super administrators and application administrators can add users and assign them access to the applications that they manage.

By defining roles and users, you can manage which users can access Deployment Manager and which features they can access. For example, you can create a role that does not permit users to delete pipelines for a specific application.

For more information, see the following topics:
Using roles and privileges by creating a dynamic system setting
Adding and modifying roles
Adding users and specifying their roles
Modifying user roles and privileges
Modifying your user details and password
Deleting users

Using roles and privileges by creating a dynamic system setting

To use roles and privileges, you must first create the EnableAttributeBasedSecurity dynamic system setting.

1. In Dev Studio, click Create > SysAdmin > Dynamic System Settings.
2. In the Short Description field, enter a short description.
3. In the Owning Ruleset field, enter Pega-RulesEngine.
4. In the Setting Purpose field, enter EnableAttributeBasedSecurity.
5. Click Create and open.
6. On the Settings tab, in the Value field, enter true.
7. Click Save.

Adding and modifying roles

If you are a super administrator, you can add and modify roles.

1. In the navigation pane, click Users, and then click Roles and privileges.
2. Do one of the following actions:
   To add a role, click Add role.
   To modify a role, click Edit.
3. In the Name field, enter a name for the role.
4. Select the privileges that you want to assign to the role.
5. Click Submit.

Adding users and specifying their roles

If you are a super administrator or application administrator, you can add users to Deployment Manager and specify their roles. Only super administrators can create other super administrators or application administrators who can access one or more applications. Application administrators can create other application administrators for the applications that they manage.

1. In the navigation pane, click Users, and then click People.
2. On the People page, click Add user.
3. In the Add user dialog box, in the User field, do one of the following actions:
   Press the Down Arrow key and select the user that you want to add.
   Enter an email address.
4. Click Add.
5. From the Role list, select the role to assign to the user.
6. Optional: If you selected the App admin role or a custom role, in the Applications field, enter the name of the application that the user can access.
7. Click Send invite to send the user an email that contains the user name and a randomly generated password for logging in to Deployment Manager.

Modifying user roles and privileges

Super administrators can give other users super administrative privileges or assign them as application administrators to any application. Application administrators can assign other users as application administrators for the applications that they manage.

1. In the navigation pane, click Users, and then click People.
2. On the People page, click the user.
3. In the Roles and privileges section, modify the user role and the applications that the user can access, as appropriate.
4. Click Save.

Modifying your user details and password

You can modify your own user details, such as your first and last name, and you can change your password.

1. In the navigation pane, click Users, and then click People.
2. On the People page, click your user name.
3. In the Personal details section, modify your name, email address, and phone number, as appropriate.
4. To change your password, complete the following steps:
   1. Click Update password.
   2. In the Change operator ID dialog box, enter your new password, reenter it to confirm, and then click Submit.
5. Click Save.

Deleting users If you are a super administrator or application administrator, you can delete users for the applications that you manage.

1. In the Navigation pane, click Users, and then click People. 2. On the People page, click the Delete icon for the user that you want to delete.

Deployment Manager notifications

You can enable notifications to receive updates about the events that occur in your pipeline. For example, you can choose to receive emails about whether Pega unit tests failed or succeeded. You can receive notifications in the Deployment Manager notifications gadget, through email, or both.

By default, all notifications are enabled for users who are configured in Deployment Manager. Users who are assigned manual tasks but are not configured as users in Deployment Manager receive emails for those manual tasks. Branch authors who are not configured as Deployment Manager users receive all Deployment Manager notifications for the pipeline into which they merge branches.

See the following topics for more information:
Managing Deployment Manager notifications
Configuring email senders and recipients
Adding custom Deployment Manager notification channels

Managing Deployment Manager notifications

To enable notifications and select the notifications that you want to receive, perform the following actions:

1. In Deployment Manager, in the navigation pane, click your profile icon.
2. Click Notification preferences.
3. Select the events for which you want to receive notifications.
4. Specify how you want to receive notifications.
5. Click Submit.

Configuring email senders and recipients

To receive email notifications, first configure the email server from which emails are sent and the recipients to which notifications are sent.

1. In Deployment Manager, in the navigation pane, click Settings.
2. Click Email configuration.
3. On the Email configuration page, click the Email provider list and select the email provider. When you make a selection, some fields, such as SMTP host and Port, are automatically populated in the Server details section in the Sender and Receiver sections. You can edit the information in these fields.
4. In the Sender section, in the Identity subsection, configure the email sender identity information to use.
   1. In the Email address field, enter the email address from which the email is sent.
   2. In the Display name field, enter the display name of the sender.
   3. In the From field, enter the email address that is associated with email sent from this account.
   4. In the User ID field, enter the SMTP user ID that sends email from this host. If you do not specify a value, the system uses the value in the From field.
   5. In the Password field, enter the sender password.
   6. In the Reply to field, enter the email address to which email replies are sent.
5. In the Server details subsection, configure the email server information.
   1. In the SMTP host field, enter the SMTP host for the email server.
   2. In the Port field, enter the SMTP server port number for outgoing email connections. The default options are:
      25 (unsecured)
      587 (STARTTLS)
      465 (SMTPS)
   3. Select the Use SMTPS check box to use SSL to send email messages through this server. Do not select this option if the email server uses STARTTLS.
6. Click Test connection to verify that the sender information is configured correctly.
7. In the Receiver section, in the Identity subsection, configure the email recipient information.
   1. Select the Use sender's ID and password check box to use the sender ID and password. If you select this check box, the User ID and Password fields are populated with the information that you configured in the Identity subsection in the Sender section.
   2. In the User ID field, enter the user ID of the email recipient.
   3. In the Password field, enter the password of the email recipient.
8. In the Server details subsection, configure the email server that receives incoming email.
   1. In the Host field, enter the POP3 or IMAP mail server host name or IP address that is used to receive incoming email.
   2. In the Port field, enter the POP3 or IMAP mail server port number for email connections.
      IMAP – 143 (unsecured) or 993 (secured with SSL)
      POP3 – 110 (unsecured) or 995 (secured with SSL)
   3. From the Protocol list, select the email server protocol (IMAP or POP3).
   4. Select the Use SSL/TLS check box to use SSL to receive email messages through this server. Do not select this option if the email server uses STARTTLS.
9. Click Test connection to verify that the receiver information is configured correctly.
10. Click Save.
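The outgoing port determines the connection style: port 465 expects TLS from the first byte (the Use SMTPS check box), while ports 25 and 587 start in plain text, with 587 normally upgrading via STARTTLS. As an illustrative aid only (the host name is a placeholder, and this mapping is general SMTP convention rather than anything Deployment Manager enforces), the mapping can be sketched as a curl URL chooser:

```shell
# Map the SMTP port chosen in the Server details section to the curl URL
# scheme you would use to test it; mail.example.com is a placeholder.
smtp_url() {
  case "$1" in
    25)  echo "smtp://mail.example.com:25" ;;   # unsecured
    587) echo "smtp://mail.example.com:587" ;;  # STARTTLS: add --ssl-reqd to curl
    465) echo "smtps://mail.example.com:465" ;; # implicit TLS (SMTPS)
    *)   echo "unknown port $1" >&2; return 1 ;;
  esac
}

smtp_url 465   # prints smtps://mail.example.com:465
# A manual connectivity check could then look like:
# curl --connect-timeout 5 --user "user:pass" --ssl-reqd "$(smtp_url 587)"
```

If Test connection fails, trying the same host and port with curl in this way can tell you whether the problem is the network path or the credentials.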

Adding custom Deployment Manager notification channels

You can receive notifications through email, the Deployment Manager notifications gadget, or both. You can create custom notification channels to meet application requirements, such as sending notifications as phone text messages or as push notifications on mobile devices. Deployment Manager provides the following notifications to which you can add channels:
pyAbortDeployment
pyTaskFailure
pyTaskCompletion
pyStartDeployment
pyStageCompletion
pySchemaChange
pyDeploymentCompletion
pyAgedUpdateActionTaken
pyAgedUpdateActionRequired

To create a custom notification channel, complete the following steps:

1. On the orchestration server, in Pega Platform, create a custom notification channel. For more information, see Adding a custom notification channel.
2. Add the application ruleset, which contains the channel that you created, to the Deployment Manager application.
   1. In the Dev Studio header, click Deployment Manager, and then click Definition.
   2. On the Edit Application rule form, in the Application rulesets section, click Add ruleset.
   3. Press the Down Arrow key and select the ruleset and version that contains the custom notification channel.
   4. Save the rule form.
3. Enable the channel that you created on the appropriate notifications.
   1. Save the notification in the application ruleset that contains the channel. For example, if you want to use the Mobile channel for the pyStartDeployment notification, save the pyStartDeployment notification in the application ruleset that contains the Mobile channel.
   2. Open the notification by clicking Records > Notification, and then clicking the notification.
   3. Click the Channels tab.
   4. On the Channel configurations page, select the channel that you want to use.
   5. Save the rule form.

Configuring an application pipeline

When you add a pipeline, you specify merge criteria and configure stages and steps in the continuous delivery workflow. For example, you can specify that a branch must be peer-reviewed before it can be merged, and you can specify that Pega unit tests must be run after a branch is merged and is in the QA stage of the pipeline.

You can create multiple pipelines for one version of an application. For example, you can use multiple pipelines in the following scenarios:
To move a deployment to production separately from the rest of the pipeline, by creating a pipeline that has only a production stage or only development and production stages.
To use parallel development and hotfix life cycles for your application.

For more information, see the following topics:
Adding a pipeline on Pega Cloud
Adding a pipeline on premises
Modifying application details
Modifying URLs and authentication profiles
Modifying development and production repositories
Specifying Jenkins server information
Specifying merge options for branches
Modifying stages and tasks in the pipeline

Adding a pipeline on Pega Cloud

To add a pipeline on Pega Cloud, perform the following steps:

1. Click Pipelines.
2. Click New.
3. Specify the details of the application for which you are creating the pipeline.
   1. Optional: To change the URL of your development system, which is populated by default, in the Development environment field, press the Down Arrow key and select the URL. This is the system on which the product rule that defines the application package that moves through the repository is located.
   2. In the Application field, press the Down Arrow key and select the name of the application.
   3. In the Version field, press the Down Arrow key and select the application version.
   4. In the Access group field, press the Down Arrow key and select the access group for which pipeline tasks are run. This access group must be present on all the candidate systems and have at least the sysadmin4 role.
   5. In the Pipeline name field, enter a unique name for the pipeline.
4. Click Create. The system adds tasks, which you cannot delete, that are required to successfully run a workflow, for example, Deploy and Generate Artifact. For Pega Cloud, the system also adds mandatory tasks that must be run on the pipeline, for example, the Check guardrail compliance and Verify security checklist tasks.
5. Optional: Add tasks that you want to perform on your pipeline, such as Pega unit testing. For more information, see Modifying stages and tasks in the pipeline.

Adding a pipeline on premises

To add a pipeline on premises, complete the following steps:

1. Click Pipelines.
2. Click New.
3. Specify the details of the application for which you are creating the pipeline.
   1. In the Development environment field, enter the URL of the development system. This is the system on which the product rule that defines the application package that moves through the repository is located.
   2. In the Application field, press the Down Arrow key and select the name of the application.
   3. In the Version field, press the Down Arrow key and select the application version.
   4. In the Access group field, press the Down Arrow key and select the access group for which pipeline tasks are run. This access group must be present on all the candidate systems and have at least the sysadmin4 role.
   5. In the Pipeline name field, enter a unique name for the pipeline.
   6. In the Product rule field, enter the name of the product rule that defines the contents of the application.
   7. In the Version field, enter the product rule version.
4. Optional: If the application depends on other applications, in the Dependencies section, add those applications.
   1. Click Dependencies.
   2. Click Add.
   3. In the Application name field, press the Down Arrow key and select the application name.
   4. In the Application version field, press the Down Arrow key and select the application version.
   5. In the Repository name field, press the Down Arrow key and select the repository that contains the production-ready artifact of the dependent application. If you want the latest artifact of the dependent application to be automatically populated, ensure that the repository is configured to support file updates.
   6. In the Artifact name field, press the Down Arrow key and select the artifact.

For more information about dependent applications, see Listing product dependencies.

5. Click Next.
6. In the Environment details section, in the Stages section, specify the URL of each candidate system and the authentication profile that each system uses to communicate with the orchestration system.
   1. In the Environments field for the system, press the Down Arrow key and select the URL of the system.
   2. Optional: If you are using your own authentication profiles, in the Authentication field for the system, press the Down Arrow key and select the authentication profile to use to communicate from the orchestration server to the system. By default, the fields are populated with the DMAppAdmin authentication profile.
7. In the Artifact management section, specify the development and production repositories through which the product rule that contains the application contents moves through the pipeline.
   1. In the Development repository field, press the Down Arrow key and select the development repository.
   2. In the Production repository field, press the Down Arrow key and select the production repository.
8. Optional: In the External orchestration server section, if you are using a Jenkins step in the pipeline, specify the Jenkins details.
   1. In the URL field, enter the URL of the Jenkins server.
   2. In the Authentication profile field, press the Down Arrow key and select the authentication profile on the orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.
9. Click Next.
10. Specify whether you are using branches in your application by doing one of the following actions:
    If you are not using branches, click the No radio button.
    If you are using branches:
    1. Click the Yes radio button.
    2. Do one of the following actions:
       To merge branches into the highest existing ruleset in the application, click Highest existing ruleset.
       To merge branches into a new ruleset, click New ruleset.
    3. In the Password field, enter the password that locks the rulesets on the development system.
11. Click Next. The system adds tasks, which you cannot delete, that are required to successfully run a workflow, for example, Deploy and Generate Artifact. The system also adds other tasks that enforce best practices, such as Check guardrail compliance and Verify security checklist.
12. Optional: In the Merge criteria pane, specify tasks that must be completed before a branch can be merged in the pipeline.
    1. Click Add task.
    2. Specify the task that you want to perform.
       To specify that a branch must meet a compliance score before it can be merged:
       1. From the Task list, select Check guardrail compliance.
       2. In the Weighted compliance score field, enter the minimum required compliance score.
       3. Click Submit.
       For more information about compliance scores, see Compliance score logic.
       To specify that a branch must be reviewed before it can be merged:
       1. From the Task list, select Check review status.
       2. Click Submit.
       For more information about branch reviews, see Branch reviews.
       To run Pega unit tests on the branches for the pipeline application, or for an application that is associated with an access group, before the branch can be merged:
       1. From the Task list, select Pega unit testing.
       2. Optional: To run all the Pega unit tests for an application that is associated with an access group, in the Access Group field, enter the access group.
       3. Click Submit.
       For more information about creating Pega unit tests, see Creating Pega unit test cases.
13. Optional: To start a deployment automatically when a branch is merged, select the Trigger deployment on merge check box. Do not select this check box if you want to start deployments manually. For more information, see Manually starting a deployment.
14. Optional: Clear the check box for a deployment life cycle stage to skip that stage.
15. Optional: In the Continuous Deployment section, specify the tasks to be performed during each stage of the pipeline. See the following topics for more information:
    Adding the Pega unit test task
    Adding the Check guardrail compliance task
    Adding the Verify security checklist task
    Adding the Enable test coverage task
    Adding the Validate test coverage task
    Adding the Run Pega scenario tests task
    Adding the Refresh application quality task
    Modifying the Approve for production task
16. Optional: Clear the Production ready check box if you do not want to generate an application package that is sent to the production repository. You cannot clear this check box if you are using a production stage in the life cycle.
17. Click Finish.

Adding the Pega unit test task

To add a Pega unit test task, do the following steps:

1. Do one of the following actions:
   Click a manually added task, click the More icon, and then click either Add task above or Add task below.
   Click Add task in the stage.
2. From the Task list, select the task type:
   To run Pega unit tests for either the pipeline application or for an application that is associated with an access group, select Pega unit testing. Then, optionally, perform one of the following actions:
   To run all the Pega unit tests that are in a Pega unit suite for the pipeline application, in the Test Suite ID field, enter the pxInsName of the test suite. You can find this value in the XML document that comprises the test suite: in Pega Platform, on the Edit Test Suite form, click Actions > XML. If you do not specify a test suite, all the Pega unit tests for the pipeline application are run.
   To run all the Pega unit tests for an application that is associated with an access group, in the Access Group field, enter the access group. For more information about creating Pega unit tests, see Creating Pega unit test cases.
   To run a Jenkins job that you have configured, select Jenkins, and then specify the job details:
   1. In the Job name field, enter the name of the Jenkins job (which is the name of the Jenkins deployment) that you want to run.
   2. In the Token field, enter the Jenkins authentication token.
   3. In the Parameters field, enter any parameters to send to the Jenkins job. Separate multiple parameters with commas.
3. Click Submit.
4. Continue configuring your pipeline. For more information, see one of the following topics:
   Adding a pipeline on premises

Modifying stages and tasks in the pipeline

Adding the manual step task

To add a manual step that a user must perform in the pipeline, do the following steps:

1. Do one of the following actions:
   Click a manually added task, click the More icon, and then click either Add task above or Add task below.
   Click Add task in the stage.
2. From the Task list, select Manual.
3. In the Job name field, enter text that describes the action that you want the user to take.
4. In the Assigned to field, press the Down Arrow key and select the operator ID to assign the task to.
5. Click Submit.
6. Continue configuring your pipeline. For more information, see one of the following topics:
   Adding a pipeline on premises
   Modifying stages and tasks in the pipeline

Adding the Check guardrail compliance task

To specify that an application must meet a compliance score, do the following steps:

1. Do one of the following actions:
   Click a manually added task, click the More icon, and then click either Add task above or Add task below.
   Click Add task in the stage.
2. From the Task list, select Check guardrail compliance.
3. In the Weighted compliance score field, enter the minimum required compliance score.
4. Continue configuring your pipeline. For more information, see one of the following topics:
   Adding a pipeline on premises
   Modifying stages and tasks in the pipeline

Adding the Verify security checklist task

To specify that all the tasks in the Application Security Checklist must be performed so that the pipeline complies with security best practices, do the following steps. You must log in to the system for which this task is configured, and then mark all the tasks in the Application Security Checklist as completed for the pipeline application. For more information about completing the checklist, see Preparing your application for secure deployment.

1. Do one of the following actions:
   Click a manually added task, click the More icon, and then click either Add task above or Add task below.
   Click Add task in the stage.
2. From the Task list, select Verify security checklist.
3. Click Submit.
4. Continue configuring your pipeline. For more information, see one of the following topics:
   Adding a pipeline on premises
   Modifying stages and tasks in the pipeline

Adding the Enable test coverage task

To start a test coverage session at the application level, do the following steps. Starting and stopping test coverage generates a report that identifies the executable rules in your application that are either covered or not covered by tests.

1. Do one of the following actions:
   Click a manually added task, click the More icon, and then click either Add task above or Add task below.
   Click Add task in the stage.
2. From the Task list, select Enable test coverage.
3. Click Submit.
4. Continue configuring your pipeline. For more information, see one of the following topics:
   Adding a pipeline on premises
   Modifying stages and tasks in the pipeline

Adding the Validate test coverage task

To stop a test coverage session, do the following steps. Add this task below the Enable test coverage task on the same system; you must add it to stop the coverage session that the Enable test coverage task started. For more information about application-level coverage reports, see Generating an application-level test coverage report.

1. Do one of the following actions:
   Click a manually added task, click the More icon, and then click either Add task above or Add task below.
   Click Add task in the stage.
2. From the Task list, select Validate test coverage.
3. Click Submit.
4. Continue configuring your pipeline. For more information, see one of the following topics:
   Adding a pipeline on premises
   Modifying stages and tasks in the pipeline

Adding the Run Pega scenario tests task

To run Pega scenario tests, do the following steps. For more information about scenario tests, see Creating a scenario test.

1. Do one of the following actions:
   Click a manually added task, click the More icon, and then click either Add task above or Add task below.
   Click Add task in the stage.
2. From the Task list, select Run Pega scenario tests.
3. In the User name field, enter the user name for the Pega Platform instance on which you are running the scenario tests.
4. In the Password field, enter the Pega Platform password.
5. From the Test Service Provider field, select the browser that you are using to run the scenario tests in the pipeline.
6. In the Provider auth name field, enter the auth name that you use to log in to the test service provider.
7. In the Provider auth key field, enter the key for the test service provider.
8. Click Submit.
9. Continue configuring your pipeline. For more information, see one of the following topics:

Adding a pipeline on premises
Modifying stages and tasks in the pipeline

Adding the Refresh application quality task

To refresh the Application Quality dashboard on the candidate system, which provides information about the health of your application, do the following steps. Add this task after you have run Pega unit tasks, checked guardrail compliance, run Pega scenario tests, and started and stopped test coverage.

1. Do one of the following actions:
   Click a manually added task, click the More icon, and then click either Add task above or Add task below.
   Click Add task in the stage.
2. From the Task list, select Refresh application quality.
3. Click Submit.
4. Continue configuring your pipeline. For more information, see one of the following topics:
   Adding a pipeline on premises
   Modifying stages and tasks in the pipeline

Modifying the Approve for production task

The Approve for production task is added to the stage before production so that a user must approve application changes before they are sent to production. To modify this task, do the following steps:

1. Click the Info icon.
2. In the Job name field, enter a name for the task.
3. In the Assign to field, press the Down Arrow key and select the user who approves the application for production. An email is sent to this user, who can approve or reject application changes from within the email.
4. Click Submit.
5. Continue configuring your pipeline. For more information, see one of the following topics:
   Adding a pipeline on premises
   Modifying stages and tasks in the pipeline

Modifying application details
You can modify application details, such as the product rule that defines the content of the application that moves through the pipeline.

1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline.
2. Click Actions > Pipeline settings.
3. Click Application details.
4. Optional: In the Development environment field, enter the URL of the development system, which is the system on which the product rule that defines the application package that moves through the repository is located.
5. Optional: In the Version field, press the Down Arrow key and select the application version.
6. Optional: In the Product rule field, enter the product rule that defines the contents of the application.
7. Optional: In the Version field, enter the product rule version.
8. Optional: If the application depends on other applications, in the Dependencies section, add those applications:
   1. Click Add.
   2. In the Application name field, press the Down Arrow key and select the application name.
   3. In the Application version field, press the Down Arrow key and select the application version.
   4. In the Repository name field, press the Down Arrow key and select the repository that contains the production-ready artifact of the dependent application. If you want the latest artifact of the dependent application to be automatically populated, ensure that this repository is configured to support file updates.
   5. In the Artifact name field, press the Down Arrow key and select the artifact.

For more information about dependent applications, see Listing product dependencies.

Modifying URLs and authentication profiles
You can modify the URLs of your development and candidate systems and the authentication profiles that are used to communicate between those systems and the orchestration server.

1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline.
2. Click Actions > Pipeline settings.
3. Click Deployment stages.
4. In the Environments field for the system, press the Down Arrow key and select the URL of the system.
5. In the Authentication field for the system, press the Down Arrow key and select the authentication profile that you want to use to communicate from the orchestration server to the system.
6. Click Save.

Modifying development and production repositories
You can modify the development and production repositories through which the product rule that contains the application contents moves through the pipeline. All generated artifacts are archived in the development repository, and all production-ready artifacts are archived in the production repository. If you are using Pega Cloud, you do not need to configure repositories, but you can use repositories other than the default ones that are provided.

1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline.
2. Click Actions > Pipeline settings.
3. Click Artifact Management.
4. Do one of the following actions to select a repository:
   - If you are using Deployment Manager on premises, or on Pega Cloud with the default repositories, complete the following tasks:
     1. In the Application repository section, in the Development repository field, press the Down Arrow key and select the development repository.
     2. In the Production repository field, press the Down Arrow key and select the production repository.
   - If you are using Deployment Manager on Pega Cloud and want to use repositories other than the default repositories, complete the following tasks:
     1. In the Artifact repository section, click Yes.
     2. In the Development repository field, press the Down Arrow key and select the development repository.
     3. In the Production repository field, press the Down Arrow key and select the production repository.
5. Click Save.

Specifying Jenkins server information
If you are using a Jenkins step, specify details about the Jenkins server, such as its URL.

1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline.
2. Click Actions > Pipeline settings.
3. Click External orchestration server.
4. In the URL field, enter the URL of the Jenkins server.
5. In the Authentication profile field, press the Down Arrow key and select the authentication profile on the orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.
6. Click Save.
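Deployment Manager uses the URL and authentication profile configured above to invoke jobs through Jenkins's remote API. As an illustration only (the server URL, job name, and parameters below are hypothetical, and the exact request that Deployment Manager sends is not described in this section), a parameterized Jenkins job is typically triggered by a POST to its buildWithParameters endpoint. The following sketch constructs such an endpoint URL:

```python
# Sketch: build the buildWithParameters URL for a parameterized Jenkins job.
# The base URL, job name, and parameters are placeholders for illustration.
from urllib.parse import urlencode, quote

def jenkins_build_url(base_url: str, job_name: str, params: dict) -> str:
    """Return the URL that triggers a parameterized build of a Jenkins job."""
    query = urlencode(params)
    return f"{base_url.rstrip('/')}/job/{quote(job_name)}/buildWithParameters?{query}"

url = jenkins_build_url("https://jenkins.example.com", "deploy-myapp",
                        {"BuildID": "42", "Stage": "QA"})
print(url)
# -> https://jenkins.example.com/job/deploy-myapp/buildWithParameters?BuildID=42&Stage=QA
```

In practice, the POST is authenticated with the Jenkins user name and API token stored in the authentication profile that you select in step 5.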

Specifying merge options for branches
If you are using branches in your application, specify options for merging branches into the base application.

1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline.
2. Click Actions > Pipeline settings.
3. Click Merge policy.
4. Specify whether you are using branches in your application by doing one of the following actions:
   - If you are not using branches, click No.
   - If you are using branches, do the following actions:
     1. Click Yes.
     2. Do one of the following actions:
        - To merge branches into the highest existing ruleset in the application, click Highest existing ruleset.
        - To merge branches into a new ruleset, click New ruleset.
     3. In the Password field, enter the password that locks the rulesets on the development system.
5. Click Save.

Modifying stages and tasks in the pipeline
You can modify the stages and the tasks that are performed in each stage of the pipeline. For example, you can skip a stage or add tasks, such as Pega unit testing, to be done in the QA stage.

1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline.
2. Click Pipeline model.
3. Optional: In the Merge criteria pane, specify tasks that must be completed before a branch can be merged in the pipeline.
   1. Click Add task.
   2. Specify the task that you want to perform.
      - To run Pega unit tests on the branches for the pipeline application, or for an application that is associated with an access group, before the branch can be merged:
        1. From the Task list, select Pega unit testing.
        2. Optional: To run all the Pega unit tests for an application that is associated with an access group, in the Access Group field, enter the access group. For more information about creating Pega unit tests, see Creating Pega unit test cases.
        3. Click Submit.
      - To specify that a branch must meet a compliance score before it can be merged:
        1. From the Task list, select Check guardrail compliance.
        2. In the Weighted compliance score field, enter the minimum required compliance score.
        3. Click Submit.
        For more information about compliance scores, see Compliance score logic.
      - To specify that a branch must be reviewed before it can be merged:
        1. From the Task list, select Check review status.
        2. Click Submit.
        For more information about branch reviews, see Branch reviews.
4. Optional: To start a deployment automatically when a branch is merged, select the Trigger deployment on merge check box. Do not select this check box if you want to manually start a deployment. For more information, see Manually starting a deployment.
5. Optional: Clear a check box for a deployment life cycle stage to skip it.
6. Optional: In the Continuous Deployment section, specify the tasks to be performed during each stage of the pipeline. See the following topics for more information:
   - Adding the Pega unit task
   - Adding the Check guardrail compliance task
   - Adding the Verify security checklist task
   - Adding the Enable test coverage task
   - Adding the Validate test coverage task
   - Adding the Run Pega scenario tests task
   - Adding the Refresh application quality task
   - Modifying the Approve for production task
7. Optional: Clear the Production ready check box if you do not want to generate an application package that is sent to the production repository. You cannot clear this check box if you are using a production stage in the life cycle.
8. Click Finish.

Accessing systems in your pipeline You can open the systems in your pipeline and log in to the Pega Platform instances. 1. Optional: If the pipeline is not already open, in the Navigation pane, click Pipelines. 2. Click the pop-out arrow for the system that you want to open.

Manually starting a deployment
Start a deployment manually if you are not using branches and are working directly in rulesets. You can also start a deployment manually if you do not want deployments to start automatically when branches are merged; in that case, also clear the Trigger deployment on merge check box in the pipeline configuration.

1. Do one of the following actions:
   - If the pipeline that you want to start is open, click Start deployment.
   - Click Pipelines, and then click Start deployment for the pipeline that you want to start.
2. In the Start deployment dialog box, start a new deployment or deploy an existing application by completing one of the following actions:
   - To start a deployment and deploy a new application package, do the following steps:
     1. Click Generate new artifact.
     2. In the Deployment name field, enter the name of the deployment.
   - To deploy an application package that is in a cloud repository, do the following steps:
     1. Click Deploy an existing artifact.
     2. In the Deployment name field, enter the name of the deployment.
     3. In the Select a repository field, press the Down Arrow key and select the repository.
     4. In the Select an artifact field, press the Down Arrow key and select the application package.
3. Click Deploy.

Starting a deployment in a branch-based environment In non-distributed, branch-based environments, you can immediately start a deployment by submitting a branch into a pipeline in the Merge Branches wizard. For more information, see Submitting a branch into a pipeline. The wizard displays the merge status of branches so that you do not need to open Deployment Manager to view it.

Starting a deployment in a distributed branch-based environment If you are using Deployment Manager in a distributed, branch-based environment and using multiple pipelines per application, first export the branch to the main development system, and then merge it. 1. On the remote development system, package the branch. For more information, see Packaging a branch. 2. Export the branch. 3. On the main development system, import the branch by using the Import wizard. For more information, see Import wizard landing page. 4. On the main development system, start a deployment by using the Merge Branches wizard. For more information, see Submitting a branch into a pipeline. The wizard displays the merge status of branches so that you do not need to open Deployment Manager to view it. If you are using one pipeline per application, you can publish a branch to start the merge. For more information, see Publishing a branch to a repository.

Publishing application changes in App Studio
You can publish application changes that you make in App Studio to the pipeline. Publishing your changes creates a patch version of the application and starts a deployment. For example, you can change a life cycle, data model, or user interface elements in a screen and submit those changes to systems in the pipeline.
When you publish an application to a stage, your rules are deployed immediately to that system. To allow stakeholders to inspect and verify changes before they are deployed to a stage, configure a manual task on the previous stage. When the pipeline runs, it pauses at a manual step that is assigned to a user, which allows stakeholders to review your changes before they approve the step and resume running the pipeline. Your pipeline should have at least a quality assurance or staging stage with a manual task so that you do not deploy changes to production that have not been approved by stakeholders.
You can submit applications to a pipeline only when there is one unlocked ruleset version in each ruleset of your application.
1. In App Studio, do one of the following actions:
   - Click Turn editing on, and then, in the Navigation panel, click Settings > Versions.
   - In the App Studio header, click Publish.
   The Settings page displays the stages that are enabled in the application pipeline in Deployment Manager. The available stages are, in order, quality assurance, staging, and production. The page also displays the application versions that are on each system. The version numbers are taken from the number at the end of each application deployment name in Deployment Manager. For example, if a deployment has a name of "MyNewApp:01_01_75", the dialog box displays "v75".
2. Submit an application from development to quality assurance or staging in your pipeline by completing the following steps:
   a. Click either Publish to QA or Publish to staging.
   b. Optional: To add a comment, which is published when you submit the application, add the comment in the Publish confirmation dialog box.
   c. Optional: If Agile Workbench has been configured, associate a bug or user story with the application: in the Associated User stories/Bugs field, press the Down Arrow key and select the bug or user story.
   d. Click OK.
   Each unlocked ruleset version in your application is locked and rolled to the next highest version, and is packaged and imported into the system. The amount of time that publishing application changes takes depends on the size of your application.
   A new application is also copied from the application that is defined on the pipeline in Deployment Manager. The application patch version is updated to reflect the version of the new rulesets; for example, if the ruleset versions of the patch application are 01-01-15, the application version is updated to be 01.01.15. In addition, this application is locked and cannot be unlocked. You can use this application to test specific patch versions of your application on quality assurance or staging systems. You can also use it to roll back a deployment.
3. Optional: Make changes to your application in the unlocked rulesets, which you can publish again into the pipeline. If an application is already on the system, it is overridden by the new version that you publish.
4. Optional: If you configured a manual step, request that stakeholders review and test your changes. After they communicate to you that they have completed testing, you can publish your changes to the next stage in the pipeline.
5. Publish the application to the next stage in the pipeline by clicking the link that is displayed. The name of the link is the value of the Job name field of the manual task that is defined on the stage. If you do not have a manual task defined, the application automatically moves to the next stage.
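The naming rules described above (a deployment name such as "MyNewApp:01_01_75" is displayed as "v75", and ruleset versions such as 01-01-15 map to application version 01.01.15) can be sketched as follows. This is an illustration of the stated mapping only, not Pega source code:

```python
# Sketch: derive the displayed version and the patch application version
# from Deployment Manager names, per the rules stated in the text above.

def displayed_version(deployment_name: str) -> str:
    """'MyNewApp:01_01_75' -> 'v75' (number at the end of the deployment name)."""
    return "v" + deployment_name.rsplit("_", 1)[-1]

def application_version(ruleset_version: str) -> str:
    """'01-01-15' -> '01.01.15' (ruleset version mapped to application version)."""
    return ruleset_version.replace("-", ".")

assert displayed_version("MyNewApp:01_01_75") == "v75"
assert application_version("01-01-15") == "01.01.15"
```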

Schema changes in application packages If an application package that is to be deployed on candidate systems contains schema changes, the Pega Platform orchestration server checks the candidate system to verify that you have the required privileges to deploy the schema changes. One of the following results occurs: If you have the appropriate privileges, schema changes are automatically applied to the candidate system, the application package is deployed to the candidate system, and the pipeline continues. If you do not have the appropriate privileges, Deployment Manager generates an SQL file that lists the schema changes and sends it to your email address. It also creates a manual step, pausing the pipeline, so that you can apply the schema changes. After you complete the step, the pipeline continues. For more information about completing a step, see Completing or rejecting a manual step. You can also configure settings to automatically deploy schema changes so that you do not have to manually apply them if you do not have the required privileges. For more information, see Configuring settings to automatically deploy schema changes.

Configuring settings to automatically deploy schema changes You can configure settings to automatically deploy schema changes that are in an application package that is to be deployed on candidate systems. Configure these settings so that you do not have to apply schema changes if you do not have the privileges to deploy them. 1. On the orchestration server, in Pega Platform, set the AutoDBSchemaChanges dynamic system setting to true to enable schema changes at the system level. 1. In Dev Studio, search for AutoDBSchemaChanges. 2. In the dialog box that appears for the search results, click AutoDBSchemaChanges. 3. On the Settings tab, in the Value field, enter true. 4. Click Save. 2. Add the SchemaImport privilege to your access role to enable schema changes at the user level. For more information, see Specifying privileges for an Access or Role to Object rule. These settings are applied sequentially. If the AutoDBSchemaChanges dynamic system setting is set to false, you cannot deploy schema changes, even if you have the SchemaImport privilege.
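Because the two settings are applied sequentially, the system-level AutoDBSchemaChanges setting acts as a gate over the user-level SchemaImport privilege. The following sketch captures that decision logic as described above (the function and its inputs are illustrative, not actual Pega code):

```python
# Sketch (assumed logic): how the two settings combine when Deployment Manager
# decides whether schema changes can be deployed automatically.

def can_auto_deploy_schema(auto_db_schema_changes: bool,
                           has_schema_import_privilege: bool) -> bool:
    """The system-level DSS is checked first; the user-level privilege
    matters only when AutoDBSchemaChanges is true."""
    if not auto_db_schema_changes:
        return False  # disabled system-wide; the privilege is irrelevant
    return has_schema_import_privilege

assert can_auto_deploy_schema(True, True) is True
assert can_auto_deploy_schema(False, True) is False  # DSS overrides privilege
assert can_auto_deploy_schema(True, False) is False
```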

Completing or rejecting a manual step in a deployment If a manual step is configured on a stage, the deployment pauses when it reaches the step, and you can either complete it or reject it. For example, if a user was assigned a task and completed it, you can complete the task to continue the deployment. Deployment Manager also sends you an email when there is a manual step in the pipeline. You can complete or reject a step either within the pipeline or through email. Deployment Manager also generates a manual step if there are schema changes in the application package that the release manager must apply. For more information, see Schema changes in application packages. To complete or reject a manual step within the deployment, do the following steps: 1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline. 2. Click one of the following links: Complete: Resolve the task so that the deployment continues through the pipeline. Reject: Reject the task so that the deployment does not proceed. To complete or reject a manual step from within an email, click either Accept or Reject.

Managing aged updates
An aged update is a rule or data instance in an application package that is older than an instance that is on a system to which you want to deploy the application package. Because you can import aged updates, skip the import, or manually deploy your application changes, you have more flexibility in determining the rules that you want in your application and how you want to deploy them.
For example, you can update a dynamic system setting on a quality assurance system that has an application package containing an older instance of the dynamic system setting. Before Deployment Manager deploys the package, the system detects that the version of the dynamic system setting on the system is newer than the version in the package and creates a manual step in the pipeline.
To manage aged updates:
1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline.
2. Optional: Click View aged updates to view a list of the rules and data instances in the application package that are older than the instances that are on the system.
3. Click the More icon and select one of the following options:
   - Click Overwrite aged updates to import the older rule and data instances that are in the application package into the system, which overwrites the newer versions that are on the system.
   - Click Skip aged updates to skip the import.
   - Click Deploy manually and resume to manually deploy the package from the Import wizard on the system. Deployment Manager does not run the Deploy step on the stage.
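The detection described above amounts to a timestamp comparison between each instance in the package and its counterpart on the target system. The following sketch illustrates the idea with a hypothetical data model (the instance keys and structure are invented for illustration):

```python
# Sketch (illustrative only): an instance in the package is an "aged update"
# when the target system already holds a newer copy of the same instance.
from datetime import datetime

def find_aged_updates(package: dict, system: dict) -> list:
    """Return the keys whose package timestamp is older than the system's."""
    return [key for key, pkg_time in package.items()
            if key in system and pkg_time < system[key]]

package = {"DSS:Timeout": datetime(2019, 1, 10)}   # instance as packaged
system = {"DSS:Timeout": datetime(2019, 3, 5)}     # updated after packaging
assert find_aged_updates(package, system) == ["DSS:Timeout"]
```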

Pausing a deployment When you pause a deployment, the pipeline completes the task that it is running, and stops the deployment at the next step. To pause a deployment: 1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline. 2. Click the pipeline. 3. Click Pause.

Stopping a deployment To stop a deployment: 1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline. 2. Click the More icon, and then click Abort.

Performing actions on a deployment that has errors If a deployment has errors, the pipeline stops processing on it. You can perform actions on it, such as rolling back the deployment or skipping the step on which the error occurred. 1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline. 2. Click the More icon, and then click one of the following options: Resume from current task - Resume running the pipeline from the task. Skip current task and continue - Skip the step and continue running the pipeline. Rollback - Roll back to an earlier deployment. Abort - Stop running the pipeline.

Diagnosing a pipeline
You can diagnose your pipeline to verify that it is configured properly, for example, that the target application and product rule are in the development environment, that connectivity between systems and repositories is working, and that premerge settings are correctly configured.

1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline.
2. Click Actions > Diagnose pipeline.
3. In the Diagnose application pipeline dialog box, review the errors, if any.
4. Optional: To view troubleshooting tips about errors, hover your mouse over the Troubleshooting tips link.

If the RMURL dynamic system setting is not configured, Deployment Manager displays a message that you can disregard if you are not using branches, because you do not need to configure the dynamic system setting.

Viewing merge requests You can view the status of the merge requests for a pipeline. For example, you can see whether a branch was merged in a deployment and when it was merged. 1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline. 2. In the Development stage, click X Merges in queue to view all the branches that are in the queue or for which merge is in progress. 3. In the Merge requests ready for deployment dialog box, click View all merge requests to view all the branches that are merged into the pipeline.

Viewing deployment logs View logs for a deployment to see the completion status of operations, for example, when a deployment is moved to a new stage. You can change the logging level to control which events are displayed in the log. For example, you can change logging levels of your deployment from INFO to DEBUG for troubleshooting purposes. For more information, see Logging Level Settings tool. 1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline. 2. Perform one of the following actions: To view the log for the current deployment, click the More icon, and then click View logs. To view the log for a previous deployment, expand the Deployment History pane and click Logs for the appropriate deployment.

Viewing deployment reports Deployment reports provide information about a specific deployment. You can view information such as the number of tasks that you configured on a deployment that have been completed and when each task started and ended. 1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline. 2. Perform one of the following actions: To view the report for the current deployment, click the More icon, and then click View report. To view the report for a previous deployment, expand the Deployment History pane and click Reports for the appropriate deployment.

Viewing reports for all deployments
Reports provide a variety of information about all the deployments in your pipeline. You can view the following key performance indicators (KPIs):
- Deployment Success - Percentage of deployments that are successfully deployed to production
- Deployment Frequency - Frequency of new deployments to production
- Deployment Speed - Average time taken to deploy to production
- Start frequency - Frequency at which new deployments are triggered
- Failure rate - Average number of failures per deployment
- Merges per day - Average number of branches that are successfully merged per day
To view reports, do the following tasks:
1. Do one of the following actions:
   - If the pipeline is open, click Actions > View report.
   - If a pipeline is not open, in the Navigation pane, click Reports. Then, in the Pipeline field, press the Down Arrow key and select the name of the pipeline for which to view the report.
2. Optional: From the list that appears in the top right of the Reports page, select whether you want to view reports for all deployments, the last 20 deployments, or the last 50 deployments.
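Two of the KPIs above are simple aggregates over the deployment history. The following sketch computes them from a hypothetical list of deployment records (the field names and sample data are invented for illustration; Deployment Manager's internal data model is not shown in this section):

```python
# Sketch: Deployment Success (percentage reaching production) and Failure rate
# (average failures per deployment) over a hypothetical deployment history.
deployments = [
    {"reached_production": True,  "failures": 0},
    {"reached_production": True,  "failures": 2},
    {"reached_production": False, "failures": 1},
    {"reached_production": True,  "failures": 0},
]

deployment_success = 100.0 * sum(d["reached_production"] for d in deployments) / len(deployments)
failure_rate = sum(d["failures"] for d in deployments) / len(deployments)

assert deployment_success == 75.0  # 3 of 4 deployments reached production
assert failure_rate == 0.75        # 3 failures across 4 deployments
```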

Deleting an application pipeline When you delete a pipeline, its associated application packages are not removed from the repositories that the pipeline is configured to use. 1. In the Navigation pane, click Pipelines. 2. Click the Delete icon for the pipeline that you want to delete. 3. Click Submit.

Viewing, downloading, and deleting application packages in repositories
You can view, download, and delete application packages in repositories that are on the orchestration server. If you are using Deployment Manager on Pega Cloud, application packages that you have deployed to cloud repositories are stored on Pega Cloud. To manage your cloud storage space, you can download and permanently delete the packages.

1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline.
2. Click the pipeline for which you want to download or delete packages.
3. Click Actions > Browse artifacts.
4. Click either Development Repository or Production Repository.
5. To download an application package, click the package, and then save it to the appropriate location.
6. To delete packages, select the check boxes for the packages that you want to delete, and then click Delete.

Deployment Manager 4.2.x Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your Pega applications from within Pega Platform™. You can create a standardized deployment process so that you can deploy predictable, high-quality releases without using third-party tools. With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging; application package generation; artifact management; and package promotion, to different stages in the workflow. Deployment Manager 4.2.x is supported on Pega 8.1. You can download it for Pega Platform from the Deployment Manager Pega Exchange page. Each customer VPC on Pega Cloud has a dedicated orchestrator instance to use Deployment Manager. You do not need to install Deployment Manager to use it with your Pega Cloud application. For more information about the features in the latest version of Deployment Manager 4.2.x, see the following articles: Deployment Manager release notes Deployment Manager architecture and workflows Best practices for using branches with Deployment Manager Creating custom repository types for Deployment Manager Installing, upgrading, and configuring Deployment Manager 4.2.x Using Deployment Manager 4.2.x

Installing, upgrading, and configuring Deployment Manager 4.2.x
Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks and allow you to quickly deploy high-quality software to production. Each customer virtual private cloud (VPC) on Pega Cloud has a dedicated orchestrator instance to use Deployment Manager. You do not need to install Deployment Manager to use it with your Pega Cloud application. This document describes the features for the latest version of Deployment Manager 4.2.x. See the following topics for more information about installing and configuring Deployment Manager:

Step 1: Installing Deployment Manager
Step 2: Upgrading to Deployment Manager 4.2.x (optional)
Step 3: Configuring systems in the pipeline
Step 4: Configuring the development system for branch-based development (optional)
Step 5: Configuring additional settings

For information about using Deployment Manager, see Using Deployment Manager 4.2.x.

Step 1: Installing Deployment Manager
Each customer virtual private cloud (VPC) on Pega Cloud has a dedicated orchestrator instance to use Deployment Manager. You do not need to install Deployment Manager to use it with your Pega Cloud application. If you are upgrading from an earlier release to Deployment Manager 4.2.x, contact Pegasystems® Global Customer Support (GCS) to request a new version.
To install Deployment Manager 4.2.x on premises, complete the following steps:
1. Install Pega 8.1 on all systems in the CI/CD pipeline.
2. Browse to the Deployment Manager Pega Exchange page, and then download the DeploymentManager04.02.0x.zip file for your version of Deployment Manager to your local disk on each system.
3. Use the Import wizard to import files into the appropriate systems. For more information about the Import wizard, see Import wizard landing page.
4. On the orchestration server, import the following files:
   - PegaDevOpsFoundation_4.zip
   - PegaDeploymentManager_4.2.zip
5. On the development, QA, staging, and production systems, import the PegaDevOpsFoundation_4.zip file.
6. Optional: If you are using distributed development, on the remote development system, import the PegaDevOpsFoundation_4.zip file.
7. Do one of the following actions:
   - If you are upgrading to Deployment Manager 4.2.x, perform the upgrade. For more information, see Upgrading to Deployment Manager 4.2.x.
   - If you are not upgrading to Deployment Manager 4.2.x, continue the installation procedure. For more information, see Step 3a: Configuring authentication profiles on the orchestration server and candidate systems.

Step 2: Upgrading to Deployment Manager 4.2.x
Before you upgrade, ensure that no deployments are running, have errors, or are paused. To upgrade to Deployment Manager 4.2.x either on Pega Cloud or on premises, perform the following steps:
1. On each candidate system, update the PegaDevOpsFoundation application version to the version of Deployment Manager that you are using.
1. In the Dev Studio header, click the name of your application, and then click Definition.
2. In the Built on application section for the PegaDevOpsFoundation application, in the Version field, press the Down Arrow key and select the version of Deployment Manager that you are using.
3. Click Save.
2. If you do not see the pipelines that you created in earlier releases, run the pxMigrateOldPipelinesTo42 activity:
1. In Dev Studio, search for pxMigrateOldPipelinesTo42, and then click the activity in the dialog box that displays the results.
2. Click Actions > Run.
3. In the dialog box that is displayed, click Run.
3. On the orchestration server, run the pxUpdateDescription activity.
1. In Dev Studio, search for pxUpdateDescription, and then click the activity in the dialog box that displays the results.
2. Click Actions > Run.
3. In the dialog box that is displayed, click Run.
If you are upgrading from Deployment Manager 3.2.1 or a later release, you do not need to perform the rest of the steps in this procedure or the required steps in the remainder of this document. If you are upgrading from an earlier release and have pipelines configured, complete this procedure.
4. On the orchestration server, log in to the release management application.
5. Run the pxUpdatePipeline activity.
1. In Dev Studio, search for pxUpdatePipeline, and then click the activity in the dialog box that displays the results.
2. Click Actions > Run.
3. In the dialog box that is displayed, click Run.
6. Modify the current release management application so that it is built on PegaDeploymentManager:04-02-01.
1. In the Dev Studio header, click the name of your application, and then click Definition.
2. In the Edit Application rule form, on the Definition tab, in the Built on application section, for the PegaDeploymentManager application, press the Down Arrow key and select 04.02.01.
3. Click Save.
7. Merge rulesets to the PipelineData ruleset.
1. Click Configure > System > Refactor > Rulesets.
2. Click Copy/Merge RuleSet.
3. Click the Merge Source RuleSet(s) to Target RuleSet radio button.
4. Click the RuleSet Versions radio button.
5. In the Available Source Ruleset(s) section, select the first open ruleset version that appears in the list, and then click the Move icon. All your current pipelines are stored in the first open ruleset. If you modified this ruleset after you created the application, select all the ruleset versions that contain pipeline data.
6. In the Target RuleSet/Information section, in the Name field, press the Down Arrow key and select PipelineData.
7. In the Version field, enter 01-01-01.
8. For the Delete Source RuleSet(s) upon completion of merge? option, click No.
9. Click Next.
10. Click Merge to merge your pipelines to the PipelineData:01-01-01 ruleset.
11. Click Done.
Your pipelines are now migrated to the Pega Deployment Manager application. Log out of the orchestration server and log back in with the DMReleaseAdmin operator ID and the password that you specified for it. For backup purposes, pipelines are still visible in your previous release management application; however, do not create deployments with that application, because they might not work correctly. You do not need to perform any of the required steps in the remainder of this document.
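The specifiers used above, such as PegaDeploymentManager:04-02-01 and PipelineData:01-01-01, follow Pega's Name:Major-Minor-Patch ruleset version convention, with two digits per segment. As a rough illustration (this helper is not part of Deployment Manager), a specifier can be sanity-checked like this:

```python
import re

# Pega ruleset specifiers such as "PipelineData:01-01-01" use a
# Major-Minor-Patch version with two digits per segment.
VERSION = re.compile(r"^\d{2}-\d{2}-\d{2}$")

def split_specifier(spec):
    """Split 'Name:NN-NN-NN' into (name, version), validating the version."""
    name, _, version = spec.partition(":")
    if not VERSION.match(version):
        raise ValueError(f"bad ruleset version: {version!r}")
    return name, version

print(split_specifier("PipelineData:01-01-01"))  # ('PipelineData', '01-01-01')
```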

Step 3: Configuring systems in the pipeline
Complete the following steps to set up a pipeline for all supported CI/CD workflows. If you are using branches, you must configure additional settings after you perform the required steps.
1. Step 3a: Configuring authentication profiles on the orchestration server and candidate systems
2. Step 3b: Configuring the orchestration server
3. Step 3c: Configuring candidate systems
4. Step 3d: Creating repositories on the orchestration server and candidate systems

Step 3a: Configuring authentication profiles on the orchestration server and candidate systems
When you install Deployment Manager on all the systems in your pipeline, default applications, operator IDs, and authentication profiles that communicate between the orchestration server and candidate systems are also installed.
On the orchestration server, the following items are installed:
The Pega Deployment Manager application.
The DMReleaseAdmin operator ID, which release managers use to log in to the Pega Deployment Manager application. You must enable this operator ID and specify its password.
The DMAppAdmin authentication profile. You must update this authentication profile to use the password that you specified for the DMAppAdmin operator ID, which is configured on all the candidate systems.
On all the candidate systems, the following items are installed:
The PegaDevOpsFoundation application.
The DMAppAdmin operator ID, which points to the PegaDevOpsFoundation application. You must enable this operator ID and specify its password.
The DMReleaseAdmin authentication profile. You must update this authentication profile to use the password that you specified for the DMReleaseAdmin operator ID, which is configured on the orchestration server.
The DMReleaseAdmin and DMAppAdmin operator IDs do not have default passwords. Configure the default authentication profiles by following these steps:
1. On the orchestration server, enable the DMReleaseAdmin operator ID and specify its password.
1. Log in to the orchestration server with [email protected]/install.
2. In Dev Studio, click Records > Organization > Operator ID, and then click DMReleaseAdmin.
3. In the Explorer panel, click the operator ID initials, and then click Operator.
4. On the Edit Operator ID rule form, click the Security tab.
5. Clear the Disable Operator check box.
6. Click Save.
7. Click Update password.

8. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then click Submit.
9. Optional: Clear the Force password change on next login check box if you do not want to change the password for the DMReleaseAdmin operator ID the next time that you log in.
10. Log out of the orchestration server.
2. On each candidate system, update the DMReleaseAdmin authentication profile to use the new password. All candidate systems use this authentication profile to communicate with the orchestration server about the status of the tasks in the pipeline.
1. Log in to each candidate system with the DMReleaseAdmin user name and the password that you specified.
2. In Dev Studio, click Records > Security > Authentication Profile.
3. Click DMReleaseAdmin.
4. On the Edit Authentication Profile rule form, click Set password.
5. In the Password dialog box, enter the password, and then click Submit.
6. Save the rule form.
3. On each candidate system, which includes the development, QA, staging, and production systems, enable the DMAppAdmin operator ID. If you want to create your own operator IDs, ensure that they point to the PegaDevOpsFoundation application.
1. Log in to each candidate system with [email protected]/install.
2. In Dev Studio, click Records > Organization > Operator ID, and then click DMAppAdmin.
3. In the Explorer panel, click the operator ID initials, and then click Operator.
4. On the Edit Operator ID rule form, click the Security tab.
5. Clear the Disable Operator check box.
6. Click Save.
7. Click Update password.
8. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then click Submit.
9. Optional: Clear the Force password change on next login check box if you do not want to change the password for the DMAppAdmin operator ID the next time that you log in.
10. Log out of each candidate system.
4. On the orchestration server, modify the DMAppAdmin authentication profile to use the new password. The orchestration server uses this authentication profile to communicate with candidate systems so that it can run tasks in the pipeline.
1. Log in to the orchestration server with the DMAppAdmin user name and the password that you specified.
2. In Dev Studio, click Records > Security > Authentication Profile.
3. Click DMAppAdmin.
4. On the Edit Authentication Profile rule form, click Set password.
5. In the Password dialog box, enter the password, and then click Submit.
6. Save the rule form.
5. Do one of the following actions:
1. If you are upgrading to Deployment Manager 4.2.x, resume the upgrade procedure from step 2. For more information, see Upgrading to Deployment Manager 4.2.x.
2. If you are not upgrading, continue the installation procedure. For more information, see Step 3b: Configuring the orchestration server.
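The cross-wiring above is easy to mix up: each authentication profile lives on the opposite system from the operator ID of the same name, and the two must share a password. A small illustrative model of that pairing (the operator and profile names come from this guide; the passwords and the helper function are hypothetical):

```python
# Illustrative model of Deployment Manager's cross-wired credentials.
# Operator IDs are enabled on one side; the authentication profile with
# the same name lives on the opposite side and must carry the same password.
operators = {
    "orchestration": {"DMReleaseAdmin": "release-secret"},  # enabled here
    "candidate":     {"DMAppAdmin": "app-secret"},          # enabled here
}
auth_profiles = {
    "orchestration": {"DMAppAdmin": "app-secret"},          # calls candidates
    "candidate":     {"DMReleaseAdmin": "release-secret"},  # calls back
}

def passwords_in_sync():
    """Each profile must match the same-named operator on the other system."""
    pairs = [("orchestration", "candidate"), ("candidate", "orchestration")]
    return all(
        auth_profiles[profile_side][name] == pwd
        for profile_side, operator_side in pairs
        for name, pwd in operators[operator_side].items()
    )

print(passwords_in_sync())  # True once both passwords were updated consistently
```

If a deployment fails with authentication errors between systems, checking this pairing first is usually the quickest diagnosis.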

Step 3b: Configuring the orchestration server The orchestration server is the system on which release managers configure and manage CI/CD pipelines. 1. Optional: If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd service packages. 1. Click Records > Integration-Resources > Service Package. 2. Click api. 3. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 4. Click Records > Integration-Resources > Service Package. 5. Click cicd. 6. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 2. Configure the candidate systems in your pipeline. For more information, see Step 3c: Configuring candidate systems. Deployment Manager provides a dedicated portal, pxDeploymentManager, which is applied by default to the PegaDeploymentManager:Administrators access group.

Step 3c: Configuring candidate systems
Configure each system that is used for the development, QA, staging, and production stages in the pipeline.
1. On each candidate system, add the PegaDevOpsFoundation application to your application stack.
1. In the Dev Studio header, click the name of your application, and then click Definition.
2. In the Built on application section, click Add application.
3. In the Name field, press the Down Arrow key and select PegaDevOpsFoundation.
4. In the Version field, press the Down Arrow key and select the version of Deployment Manager that you are using.
5. Click Save.
2. Optional: If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd service packages.
1. Click Records > Integration-Resources > Service Package.
2. Click api.
3. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.
4. Click Records > Integration-Resources > Service Package.
5. Click cicd.
6. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.
3. Optional: If you want to use a product rule other than the default product rule that is created by the New Application wizard, on the development system, create a product rule that defines the application package that will be moved through repositories in the pipeline. For more information, see Creating a product rule by using the create menu. When you use the New Application wizard, a default product rule is created that has the same name as your application.
4. Configure repositories through which to move artifacts in your pipeline. For more information, see Step 3d: Creating repositories on the orchestration server and candidate systems.

Step 3d: Creating repositories on the orchestration server and candidate systems If you are using Deployment Manager on premises, create repositories on the orchestration server and all candidate systems to move your application between all the systems in the pipeline. You can use a supported repository type that is provided in Pega Platform™, or you can create a custom repository type. If you are using Deployment Manager on Pega Cloud, default repositories are provided. If you want to use repositories other than the ones provided, you can create your own.

For more information about creating a supported repository, see Creating a repository for file storage and knowledge management. For more information about creating a custom repository type, see Creating custom repository types for Deployment Manager.
When you create repositories, note the following information:
The Pega repository type is not supported.
Ensure that each repository has the same name on all systems.
When you create JFrog Artifactory repositories, ensure that you create a Generic package type in JFrog Artifactory. Also, when you create the authentication profile for the repository on Pega Platform, you must select the Preemptive authentication check box.
After you configure a pipeline, you can verify connectivity to the development and production repositories by clicking Test Connectivity on the Repository rule form.
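As a sketch of what the Generic-repository requirement implies: Artifactory generic repositories accept plain HTTP PUTs, and "preemptive authentication" means sending the Authorization header up front rather than waiting for a 401 challenge. The host, repository name, artifact path, and credentials below are all placeholders, and the request is only constructed, never sent:

```python
import base64
import urllib.request

# Hypothetical coordinates; the repository must be a Generic-type repo
# and must have the same name on every system in the pipeline.
BASE = "https://artifactory.example.com/artifactory"
REPO = "pega-pipeline-repo"
PATH = "MyApp/MyApp_01.01.01.zip"

# Generic Artifactory repos are deployed to with a plain HTTP PUT.
req = urllib.request.Request(f"{BASE}/{REPO}/{PATH}",
                             data=b"...archive bytes...", method="PUT")

# Preemptive Basic auth: the credentials ride along on the first request.
token = base64.b64encode(b"deploy-user:secret").decode()
req.add_header("Authorization", f"Basic {token}")

print(req.get_method(), req.full_url)
# PUT https://artifactory.example.com/artifactory/pega-pipeline-repo/MyApp/MyApp_01.01.01.zip
```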

Step 4: Configuring the development system for branch-based development (optional)
After you configure the orchestration server and all your candidate systems, configure additional settings so that you can use pipelines if you are using branches in a distributed or non-distributed branch-based environment. You must configure the development system to create a pipeline in a branch-based environment.
1. On the development system (in a non-distributed environment) or the main development system (in a distributed environment), create a Dynamic System Setting to define the URL of the orchestration server, even if the orchestration server and the development system are the same system.
1. Click Create > Records > SysAdmin > Dynamic System Settings.
2. In the Owning Ruleset field, enter Pega-DevOps-Foundation.
3. In the Setting Purpose field, enter RMURL.
4. Click Create and open.
5. On the Settings tab, in the Value field, enter the URL of the orchestration server. Use this format: http://hostname:port/prweb/PRRestService.
6. Click Save.
2. Complete the following steps on either the development system (in a non-distributed environment) or the remote development system (in a distributed environment).
1. Use the New Application wizard to create a new development application that developers will log in to. This application allows development teams to maintain a list of development branches without modifying the definition of the target application.
2. Add the target application of the pipeline as a built-on application layer of the development application.
1. Log in to the application.
2. In the Dev Studio header, click the name of your application, and then click Definition.
3. In the Built-on application section, click Add application.
4. In the Name field, press the Down Arrow key and select the name of the target application.
5. In the Version field, press the Down Arrow key and select the target application version.
6. Click Save.
3. Lock the application rulesets to prevent developers from making changes to rules after branches have been merged.
1. In the Dev Studio header, click the name of your application, and then click Definition.
2. In the Application rulesets section, click the Open icon for each ruleset that you want to lock.
3. Click Lock and Save.
4. Optional: It is recommended that you merge branches by using the Merge Branch wizard. However, you can publish a branch to the remote development system to start a deployment. Publishing a branch when you have multiple pipelines per application is not supported.
1. In Dev Studio, enable Pega repository types. For more information, see Enabling the Pega repository type.
2. Create a new Pega repository type. For more information, see Creating a repository connection for file storage and knowledge management. Ensure that you do the following tasks:
In the Host ID field, enter the URL of the development system.
The default access group of the operator that is configured for the authentication profile of this repository should point to the pipeline application on the development system (in a non-distributed environment) or the main development system (in a distributed environment).
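The RMURL setting created in step 1 above must point at the orchestration server's Pega REST service endpoint. A small format-check sketch under that assumption (the hostname is a placeholder, and the helper is illustrative rather than part of the product):

```python
from urllib.parse import urlparse

def is_valid_rmurl(value):
    """RMURL should look like http(s)://hostname:port/prweb/PRRestService,
    the format given in the RMURL step above."""
    parts = urlparse(value)
    return (parts.scheme in ("http", "https")
            and bool(parts.netloc)
            and parts.path.rstrip("/").endswith("/prweb/PRRestService"))

print(is_valid_rmurl("http://orch.example.com:8080/prweb/PRRestService"))  # True
print(is_valid_rmurl("http://orch.example.com:8080/prweb"))                # False
```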

Step 5: Configuring additional settings As part of your pipeline, you can optionally send email notifications to users, configure Jenkins if you are using a Jenkins task, and upgrade to the latest version of Deployment Manager if you are using a previous version. See the following topics for more information: Configuring email notifications on the orchestration server Configuring Jenkins

Configuring email notifications on the orchestration server
You can optionally configure email notifications on the orchestration server. For example, users can receive emails when pre-merge criteria are not met and the system cannot create a deployment. To configure the orchestration server to send emails, complete the following steps:
1. Use the Email wizard to configure an email account and listener by clicking Dev Studio > Integration > Email > Email Wizard. This email account sends notifications to users when events occur, for example, if there are merge conflicts. For detailed information, see the procedure for “Configuring an email account that receives email and creates or manages work” in Entering email information in the Email wizard.
2. From the What would you like to do? list, select Receive an email and create/manage a work object.
3. From the What is the class of your work type? list, select Pega-Pipeline-CD.
4. From the What is your starting flow name? list, select NewWork.
5. From the What is your organization? list, select the organization that is associated with the work item.
6. In the What Ruleset? field, select the ruleset that contains the generated email service rule. This ruleset applies to the work class.
7. In the What RuleSet Version? field, select the version of the ruleset for the generated email service rule.
8. Click Next to configure the email listener.
9. In the Email Account Name field, enter Pega-Pipeline-CD, which is the name of the email account that the listener references for incoming and outgoing email.
10. In the Email Listener Name field, enter the name of the email listener. Begin the name with a letter, and use only letters, numbers, the ampersand character (&), and hyphens.
11. In the Folder Name field, enter the name of the email folder that the listener monitors. Typically, this folder is INBOX.

12. In the Service Package field, enter the name of the service package to be deployed. Begin the name with a letter, and use only letters, numbers, and hyphens to form an identifier.
13. In the Service Class field, enter the service class name.
14. In the Requestor User ID field, press the Down Arrow key, and select the operator ID of the release manager operator.
15. In the Requestor Password field, enter the password for the release manager operator.
16. In the Requestor User ID field, enter the operator ID that the email service uses when it runs.
17. In the Password field, enter the password for the operator ID.
18. Click Next to continue the wizard and configure the service package. For more information, see Configuring the service package in the Email wizard.
19. After you complete the wizard, enable the listener that you created in the Email wizard.
Email notifications
Emails are preconfigured with information about each notification type. For example, when a deployment failure occurs, the email that is sent provides information such as the pipeline name and the URL of the system on which the deployment failure occurred. Preconfigured emails are sent in the following scenarios:
Deployment start – When a deployment starts, an email is sent to the release manager and, if you are using branches, to the operator who started the deployment.
Deployment step completion or failure – When a step either completes or fails, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. The deployment pauses if there are any errors.
Deployment completion – When a deployment is successfully completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Stage completion or failure – When a stage in a deployment process either succeeds or fails, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Manual tasks requiring approval – When a manual task requires email approval from a user, an email is sent to the user, who can approve or reject the task from the email.
Stopped deployment – When a deployment is stopped, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Pega unit testing success or failure – If you are using the Run Pega unit tests task, and the task either succeeds or fails, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Schema changes required – If you do not have the required schema privileges to deploy schema changes on application packages that require those changes, an email is sent to the operator who started the deployment.
Guardrail compliance score success or failure – If you are using the Check guardrail compliance task, an email is sent to the release manager if the task either succeeds or fails.
Approve for production – If you are using the Approve for production task, which requires approval from a user before application changes are deployed to production, an email is sent to the user. The user can reject or approve the changes.
Verify security checklist success or failure – If you are using the Verify security checklist task, which requires that all tasks be completed in the Application Security Checklist to ensure that the pipeline complies with security best practices, an email is sent to the release manager if the test either succeeds or fails.
Pega scenario testing success or failure – If you are using the Run Pega scenario tests task, and Pega scenario testing either succeeds or fails, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Start test coverage success or failure – If you are using the Enable test coverage task to generate a test coverage report, an email is sent to the release manager if the task either fails or succeeds.
Verify test coverage success or failure – If you are using the Verify test coverage task, an email is sent to the release manager if the task either fails or succeeds.
Application quality statistics refreshed – If you are using the Refresh application quality statistics task, an email is sent to the release manager when the task is run.

Configuring Jenkins
If you are using a Jenkins task in your pipeline, configure Jenkins so that it can communicate with the orchestration server.
1. On the orchestration server, create an authentication profile that uses Jenkins credentials.
1. Click Create > Security > Authentication Profile.
2. Enter a name, and then click Create and open.
3. In the User name field, enter the user name of the Jenkins user.
4. Click Set password, enter the Jenkins password, and then click Submit.
5. Select the Preemptive authentication check box.
6. Click Save.
2. Because the Jenkins task does not support Cross-Site Request Forgery (CSRF) protection, disable it by completing the following steps:
1. In Jenkins, click Manage Jenkins.
2. Click Configure Global Security.
3. In the CSRF Protection section, clear the Prevent Cross Site Request Forgery exploits check box.
4. Click Save.
3. Install the Post build task plug-in.
4. Install the curl command on the Jenkins server.
5. Create a new freestyle project.
6. On the General tab, select the This project is parameterized check box.
7. Add the BuildID and CallBackURL parameters.
1. Click Add parameter, and then select String parameter.
2. In the String field, enter BuildID.
3. Click Add parameter, and then select String parameter.
4. In the String field, enter CallBackURL.
8. In the Build Triggers section, select the Trigger builds remotely check box.
9. In the Authentication Token field, select the token that you want to use when you start Jenkins jobs remotely.
10. In the Build Environment section, select the Use Secret text(s) or file(s) check box.
11. In the Bindings section, do the following actions:
1. Click Add, and then select User name and password (conjoined).
2. In the Variable field, enter RMCREDENTIALS.
3. In the Credentials field, click Specific credentials.
4. Click Add, and then select Jenkins.
5. In the Add credentials dialog box, in the Username field, enter the operator ID of the release manager operator that is configured on the orchestration server.
6. In the Password field, enter the password.
7. Click Save.
12. In the Post-Build Actions section, do one of the following actions, depending on your operating system:
If Jenkins is running on Microsoft Windows, add the following post-build tasks:
1. Click Add post-build action, and then select Post build task.

2. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example, BUILD FAILURE.
3. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%"
4. Click Add another task.
5. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example, BUILD SUCCESS.
6. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%"
7. Click Save.
If Jenkins is running on UNIX or Linux, add the following post-build tasks. Use the dollar sign ($) instead of the percent sign (%) to access the environment variables.
1. Click Add post-build action, and then select Post build task.
2. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example, BUILD FAILURE.
3. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"$BuildID\"}" "$CallBackURL"
4. Click Add another task.
5. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example, BUILD SUCCESS.
6. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"$BuildID\"}" "$CallBackURL"
7. Click Save.
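The post-build curl commands above post a small JSON status document back to the orchestration server's callback URL. The field names below are taken verbatim from those commands; the job name, build number, and build ID values are illustrative, and the helper itself is just a sketch of the payload:

```python
import json

def callback_payload(job_name, build_number, status, build_id):
    """Build the JSON body that the post-build curl commands send
    to the CallBackURL. Field names match the commands above."""
    if status not in ("SUCCESS", "FAIL"):
        raise ValueError("status must be SUCCESS or FAIL")
    return json.dumps({
        "jobName": job_name,
        "buildNumber": build_number,
        "pyStatusValue": status,
        "pyID": build_id,
    })

print(callback_payload("my-freestyle-job", "42", "SUCCESS", "B-1001"))
```

Seeing the payload laid out this way also explains why the curl commands escape every inner double quote: the whole JSON body has to survive the shell's own quoting.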

Using Deployment Manager 4.2.x
Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks so that you can quickly deploy high-quality software to production. On the orchestration server, release managers use the DevOps landing page to configure CI/CD pipelines for their Pega Platform™ applications. The landing page displays all the running and queued application deployments, branches that are to be merged, and reports that provide information about your DevOps environment, such as key performance indicators (KPIs).
This document describes the features for the latest version of Deployment Manager 4.2.x. For more information about using Deployment Manager to configure and use CI/CD pipelines, see the following topics:
Accessing the Dev Studio portal
Starting Deployment Manager
Roles and users
Configuring an application pipeline
Accessing systems in your pipeline
Manually starting a deployment
Starting a deployment in a branch-based environment
Starting a deployment in a distributed, branch-based environment
Publishing application changes in App Studio
Schema changes in application packages
Completing or rejecting a manual step in a deployment
Managing aged updates
Pausing a deployment
Stopping a deployment
Performing actions on a deployment with errors
Diagnosing a pipeline
Viewing merge requests
Viewing deployment logs
Viewing deployment reports
Viewing reports for all deployments
Deleting an application pipeline
Viewing, downloading, and deleting application packages in repositories

Accessing the Dev Studio portal
Deployment Manager provides a dedicated portal from which you can access features. From within Deployment Manager, when you configure pipeline details, you can open, modify, and create repositories and authentication profiles in Dev Studio if you have permission to use the Dev Studio portal. If you add the Developer portal to the PegaDeploymentManager:Administrators access group, all the users that you add in the Deployment Manager portal can also access Dev Studio. To add the Dev Studio portal to the access group of the users who can configure repositories and authentication profiles, complete the following steps:
1. In Dev Studio, click Configure > Org & Security > Groups & Roles > Access Groups.
2. Click the access group that you want to configure.
3. In the Edit Access Group rule form, on the Definition tab, in the Available Portals field, click Add portal.
4. In the Name field, press the Down Arrow key and select Developer.
5. Save the rule form.

Starting Deployment Manager Deployment Manager provides a dedicated portal from which you can access features. Depending on your permissions, you log in to either Deployment Manager or Dev Studio. To start Deployment Manager from Dev Studio, in the header, click Launch > Deployment Manager.

Roles and users

Deployment Manager provides two default roles, which you cannot modify or delete, that define privileges for super administrators and application administrators. Privileges for super administrators are applied across all applications, and privileges for application administrators are applied to specific applications. Super administrators can also add roles and specify the privileges to assign to them. Super administrators and application administrators can add users and assign them access to the applications that they manage. By defining roles and users, you can manage which users can access Deployment Manager and which features they can access. For example, you can create a role that does not permit users to delete pipelines for a specific application. For more information, see the following topics: Using roles and privileges by creating a dynamic system setting Adding and modifying roles Adding users and specifying their roles Modifying user roles and privileges Modifying your user details and password Deleting users

Using roles and privileges by creating a dynamic system setting
To use roles and privileges, you must first create the EnableAttributeBasedSecurity dynamic system setting.
1. In Dev Studio, click Create > SysAdmin > Dynamic System Settings.
2. In the Short Description field, enter a short description.
3. In the Owning Ruleset field, enter Pega-RulesEngine.
4. In the Setting Purpose field, enter EnableAttributeBasedSecurity.
5. Click Create and open.
6. On the Settings tab, in the Value field, enter true.
7. Click Save.

Adding and modifying roles
If you are a super administrator, you can add and modify roles.
1. In the Navigation pane, click Users, and then click Roles and privileges.
2. Do one of the following actions:
To add a role, click Add role.
To modify a role, click Edit.
3. In the Name field, enter a name for the role.
4. Select the privileges that you want to assign to the role.
5. Click Submit.

Adding users and specifying their roles
If you are a super administrator or application administrator, you can add users to Deployment Manager and specify their roles. Only super administrators can create other super administrators or application administrators who can access one or more applications. Application administrators can create other application administrators for the applications that they manage.
1. In the Navigation pane, click Users, and then click People.
2. On the People page, click Add user.
3. In the Add user dialog box, click the User field, and then do one of the following actions:
Press the Down Arrow key and select the user that you want to add.
Enter an email address.
4. Click Add.
5. From the Role list, select the role to assign to the user.
6. Optional: If you selected the App admin role or a custom role, in the Applications field, enter the name of the application that the user can access.
7. Click Send invite to send the user an email that contains the user name and a randomly generated password for logging in to Deployment Manager.

Modifying user roles and privileges
Super administrators can give other users super administrative privileges or assign them as application administrators to any application. Application administrators can assign other users as application administrators for the applications that they manage.
1. In the Navigation pane, click Users, and then click People.
2. On the People page, click the user.
3. In the Roles and privileges section, modify the user's role and the applications that they can access, as appropriate.
4. Click Save.

Modifying your user details and password
You can modify your own user details, such as first and last name, and you can change your password.
1. In the Navigation pane, click Users, and then click People.
2. On the People page, click your user name.
3. In the Personal details section, modify your name, email address, and phone number, as appropriate.
4. To change your password:
1. Click Update password.
2. In the Change operator ID dialog box, enter your new password, reenter it to confirm it, and then click Submit.
5. Click Save.

Deleting users If you are a super administrator or application administrator, you can delete users for the applications that you manage. 1. In the Navigation pane, click Users, and then click People. 2. On the People page, click the Delete icon for the user that you want to delete.

Configuring an application pipeline

When you add a pipeline, you specify merge criteria and configure stages and steps in the continuous delivery workflow. For example, you can specify that a branch must be peer-reviewed before it can be merged, and you can specify that Pega unit tests must be run after a branch is merged and is in the QA stage of the pipeline. You can create multiple pipelines for one version of an application. For example, you can use multiple pipelines in the following scenarios: To move a deployment to production separately from the rest of the pipeline. You can then create a pipeline that has only a production stage or development and production stages. To use parallel development and hotfix life cycles for your application. For more information, see the following topics: Adding a pipeline on Pega Cloud Adding a pipeline on premises Modifying application details Modifying URLs and authentication profiles Modifying development and production repositories Specifying Jenkins server information Specifying merge options for branches Modifying stages and tasks in the pipeline

Adding a pipeline on Pega Cloud

To add a pipeline on Pega Cloud, perform the following steps:

1. Click Pipelines.
2. Click New.
3. Specify the details of the application for which you are creating the pipeline.
   1. Optional: To change the URL of your development system, which is populated by default, in the Development environment field, press the Down Arrow key and select the URL. This is the system on which the product rule that defines the application package that moves through the repository is located.
   2. In the Application field, press the Down Arrow key and select the name of the application.
   3. In the Version field, press the Down Arrow key and select the application version.
   4. In the Access group field, press the Down Arrow key and select the access group for which pipeline tasks are run. This access group must be present on all the candidate systems and have at least the sysadmin4 role.
   5. In the Pipeline name field, enter a unique name for the pipeline.
4. Click Create.
   The system adds required tasks, which you cannot delete, such as Deploy and Generate Artifact, to the pipeline. For Pega Cloud, it also adds mandatory tasks that must be run on the pipeline, such as Check guardrail compliance and Verify security checklist.
5. Optional: Add tasks that you want to perform on your pipeline, such as Pega unit testing. For more information, see Modifying stages and tasks in the pipeline.

Adding a pipeline on premises

To add a pipeline on premises, complete the following steps:

1. Click Pipelines.
2. Click New.
3. Specify the details of the application for which you are creating the pipeline.
   1. In the Development environment field, enter the URL of the development system. This is the system on which the product rule that defines the application package that moves through the repository is located.
   2. In the Application field, press the Down Arrow key and select the name of the application.
   3. In the Version field, press the Down Arrow key and select the application version.
   4. In the Access group field, press the Down Arrow key and select the access group for which pipeline tasks are run. This access group must be present on all the candidate systems and have at least the sysadmin4 role.
   5. In the Pipeline name field, enter a unique name for the pipeline.
   6. In the Product rule field, enter the name of the product rule that defines the contents of the application.
   7. In the Version field, enter the product rule version.
4. Optional: If the application depends on other applications, add those applications in the Dependencies section.
   1. Click Dependencies.
   2. Click Add.
   3. In the Application name field, press the Down Arrow key and select the application name.
   4. In the Application version field, press the Down Arrow key and select the application version.
   5. In the Repository name field, press the Down Arrow key and select the repository that contains the production-ready artifact of the dependent application. If you want the latest artifact of the dependent application to be populated automatically, ensure that this repository is configured to support file updates.
   6. In the Artifact name field, press the Down Arrow key and select the artifact.
   For more information about dependent applications, see Listing product dependencies.
5. Click Next.
6. In the Environment details section, in the Stages section, specify the URL of each candidate system and the authentication profile that each system uses to communicate with the orchestration system.
   1. In the Environments field for the system, press the Down Arrow key and select the URL of the system.
   2. Optional: If you are using your own authentication profiles, in the Authentication field for the system, press the Down Arrow key and select the authentication profile that the orchestration server uses to communicate with the system. By default, the fields are populated with the DMAppAdmin authentication profile.
7. In the Artifact management section, specify the development and production repositories through which the application package that the product rule defines moves in the pipeline.
   1. In the Development repository field, press the Down Arrow key and select the development repository.
   2. In the Production repository field, press the Down Arrow key and select the production repository.
8. Optional: If you are using a Jenkins step in the pipeline, specify the Jenkins details in the External orchestration server section.
   1. In the URL field, enter the URL of the Jenkins server.
   2. In the Authentication profile field, press the Down Arrow key and select the authentication profile on the orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.
9. Click Next.
10. Specify whether you are using branches in your application.
    - If you are not using branches, click No.
    - If you are using branches:
      1. Click Yes.
      2. To merge branches into the highest existing ruleset in the application, click Highest existing ruleset; to merge branches into a new ruleset, click New ruleset.
      3. In the Password field, enter the password that locks the rulesets on the development system.
11. Click Next.
    The system adds required tasks, which you cannot delete, such as Deploy and Generate Artifact, to the pipeline. It also adds tasks that enforce best practices, such as Check guardrail compliance and Verify security checklist.
12. Optional: In the Merge criteria pane, specify tasks that must be completed before a branch can be merged in the pipeline.
    1. Click Add task.
    2. Specify the task that you want to perform:
       - To specify that a branch must meet a compliance score before it can be merged, select Check guardrail compliance from the Task list, enter the minimum required compliance score in the Weighted compliance score field, and then click Submit. For more information about compliance scores, see Compliance score logic.
       - To specify that a branch must be reviewed before it can be merged, select Check review status from the Task list, and then click Submit. For more information about branch reviews, see Branch reviews.
       - To run Pega unit tests on the branches for the pipeline application, or for an application that is associated with an access group, before a branch can be merged, select Pega unit testing from the Task list. Optionally, to run all the Pega unit tests for an application that is associated with an access group, enter the access group in the Access Group field. Click Submit. For more information about creating Pega unit tests, see Creating Pega unit test cases.
13. Optional: To start a deployment automatically when a branch is merged, select the Trigger deployment on merge check box. Do not select this check box if you want to start deployments manually. For more information, see Manually starting a deployment.
14. Optional: Clear the check box for a deployment life cycle stage to skip it.
15. Optional: In the Continuous Deployment pane, specify the tasks to be performed during each stage of the pipeline.
    1. Do one of the following actions:
       - Click a manually added task, click the More icon, and then click either Add task above or Add task below.
       - Click Add task in the stage.
    2. From the Task list, select the task that you want to perform:
       - To run Pega unit tests either for the pipeline application or for an application that is associated with an access group, select Pega unit testing. To run all the Pega unit tests that are in a Pega unit suite for the pipeline application, in the Test Suite ID field, enter the pxInsName of the test suite; you can find this value in the XML document that comprises the test suite by clicking Actions > XML on the Edit Test Suite form in Pega Platform. If you do not specify a test suite, all the Pega unit tests for the pipeline application are run. To run all the Pega unit tests for an application that is associated with an access group, in the Access Group field, enter the access group. Click Submit. For more information, see Creating Pega unit test cases.
       - To run a Jenkins job that you have configured, select Jenkins. In the Job name field, enter the name of the Jenkins job (the name of the Jenkins deployment) that you want to run. In the Token field, enter the Jenkins authentication token. In the Parameters field, enter any parameters to send to the Jenkins job, separating multiple parameters with commas. Click Submit.
       - To add a manual step that a user must perform in the pipeline, select Manual. In the Job name field, enter text that describes the action that you want the user to take. In the Assigned to field, press the Down Arrow key and select the operator ID to assign the task to.
       - To specify that the application must meet a compliance score, select Check guardrail compliance. In the Weighted compliance score field, enter the minimum required compliance score, and then click Submit.
       - To specify that all the tasks in the Application Security Checklist must be performed so that the pipeline complies with security best practices, select Verify security checklist, and then click Submit. You must log in to the system for which this task is configured and mark all the tasks in the Application Security Checklist as completed for the pipeline application. For more information about completing the checklist, see Preparing your application for secure deployment.
       - To start a test coverage session at the application level, select Enable test coverage, and then click Submit. Starting and stopping test coverage generates a report that identifies the executable rules in your application that are either covered or not covered by tests.
       - To stop the test coverage session, select Validate test coverage, and then click Submit. Add this task below the Enable test coverage task on the same system; you must add it to stop a coverage session that the Enable test coverage task started. For more information about application-level coverage reports, see Generating an application-level test coverage report.
       - To run Pega scenario tests, select Run Pega scenario tests. In the User name and Password fields, enter the credentials for the Pega Platform instance on which you are running the tests. From the Test Service Provider list, select the browser that you are using to run the tests in the pipeline. In the Provider auth name and Provider auth key fields, enter the user name and key that you use to log in to the test service provider. Click Submit. For more information about scenario tests, see Creating a scenario test.
       - To refresh the Application Quality dashboard, which provides information about the health of your application, on the candidate system, select Refresh application quality, and then click Submit. Add this task after you have run Pega unit tests, checked guardrail compliance, run Pega scenario tests, and started and stopped test coverage.
    3. Optional: To modify the Approve for production task, which is added to the stage before production so that a user must approve application changes before they are sent to production, do the following actions:
       1. Click the Info icon.
       2. In the Job name field, enter a name for the task.
       3. In the Assign to field, press the Down Arrow key and select the user who approves the application for production. An email is sent to this user, who can approve or reject application changes from within the email.
       4. Click Submit.
16. Click Finish.

Modifying application details

You can modify application details, such as the product rule that defines the content of the application that moves through the pipeline.

1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline.
2. Click Actions > Pipeline settings.
3. Click Application details.
4. Optional: In the Development environment field, enter the URL of the development system, which is the system on which the product rule that defines the application package that moves through the repository is located.
5. Optional: In the Version field, press the Down Arrow key and select the application version.
6. Optional: In the Product rule field, enter the product rule that defines the contents of the application.
7. Optional: In the Version field, enter the product rule version.
8. Optional: If the application depends on other applications, add those applications in the Dependencies section.
   1. Click Add.
   2. In the Application name field, press the Down Arrow key and select the application name.
   3. In the Application version field, press the Down Arrow key and select the application version.
   4. In the Repository name field, press the Down Arrow key and select the repository that contains the production-ready artifact of the dependent application. If you want the latest artifact of the dependent application to be populated automatically, ensure that this repository is configured to support file updates.
   5. In the Artifact name field, press the Down Arrow key and select the artifact.

For more information about dependent applications, see Listing product dependencies.

Modifying URLs and authentication profiles

You can modify the URLs of your development and candidate systems and the authentication profiles that are used to communicate between those systems and the orchestration server.

1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline.
2. Click Actions > Pipeline settings.
3. Click Deployment stages.
4. In the Environments field for the system, press the Down Arrow key and select the URL of the system.
5. In the Authentication field for the system, press the Down Arrow key and select the authentication profile that the orchestration server uses to communicate with the system.
6. Click Save.

Modifying development and production repositories

You can modify the development and production repositories through which the application package that the product rule defines moves in the pipeline. All generated artifacts are archived in the development repository, and all production-ready artifacts are archived in the production repository. You do not need to configure repositories if you are using Pega Cloud, but you can use repositories other than the default ones that are provided.

1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline.
2. Click Actions > Pipeline settings.
3. Click Artifact Management.
4. Do one of the following actions to select the repositories:
   - If you are using Deployment Manager on premises, or on Pega Cloud with the default repositories:
     1. In the Application repository section, in the Development repository field, press the Down Arrow key and select the development repository.
     2. In the Production repository field, press the Down Arrow key and select the production repository.
   - If you are using Deployment Manager on Pega Cloud and want to use repositories other than the defaults:
     1. In the Artifact repository section, click Yes.
     2. In the Development repository field, press the Down Arrow key and select the development repository.
     3. In the Production repository field, press the Down Arrow key and select the production repository.
5. Click Save.

Specifying Jenkins server information

If you are using a Jenkins step, specify details about the Jenkins server, such as its URL.

1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline.
2. Click Actions > Pipeline settings.
3. Click External orchestration server.
4. In the URL field, enter the URL of the Jenkins server.
5. In the Authentication profile field, press the Down Arrow key and select the authentication profile on the orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.
6. Click Save.
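For context on the Job name, Token, and Parameters fields of a Jenkins task: Jenkins jobs that have the "Trigger builds remotely" option enabled can be started over HTTP with an authentication token. The sketch below builds such a trigger URL; whether Deployment Manager calls this exact endpoint is an assumption, and the server URL, job name, and token are placeholders.

```python
from urllib.parse import urlencode

def build_trigger_url(jenkins_url, job_name, token, parameters=""):
    """Build the URL for Jenkins' standard remote build trigger.

    parameters mirrors the comma-separated "name=value" list entered in
    the Parameters field of a Jenkins task.
    """
    query = {"token": token}
    for pair in filter(None, (p.strip() for p in parameters.split(","))):
        name, _, value = pair.partition("=")
        query[name] = value
    # Parameterized jobs use buildWithParameters; plain jobs use build.
    endpoint = "buildWithParameters" if len(query) > 1 else "build"
    return "{}/job/{}/{}?{}".format(
        jenkins_url.rstrip("/"), job_name, endpoint, urlencode(query))

# Hypothetical server, job, token, and parameters for illustration only.
url = build_trigger_url(
    "https://jenkins.example.com", "deploy-mynewapp",
    token="s3cret", parameters="TargetStage=QA,Version=01.01")
```

The credentials in the authentication profile would be sent with the HTTP request itself (for example, HTTP basic authentication), separately from the job token in the URL.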

Specifying merge options for branches

If you are using branches in your application, specify options for merging branches into the base application.

1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline.
2. Click Actions > Pipeline settings.
3. Click Merge policy.
4. Specify whether you are using branches in your application:
   - If you are not using branches, click No.
   - If you are using branches:
     1. Click Yes.
     2. To merge branches into the highest existing ruleset in the application, click Highest existing ruleset; to merge branches into a new ruleset, click New ruleset.
     3. In the Password field, enter the password that locks the rulesets on the development system.
5. Click Save.

Modifying stages and tasks in the pipeline

You can modify the stages and the tasks that are performed in each stage of the pipeline. For example, you can skip a stage or add tasks, such as Pega unit testing, to the QA stage.

1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline.
2. Click Pipeline model.
3. Optional: In the Merge criteria pane, specify tasks that must be completed before a branch can be merged in the pipeline.
   1. Click Add task.
   2. Specify the task that you want to perform:
      - To run Pega unit tests on the branches for the pipeline application, or for an application that is associated with an access group, before a branch can be merged, select Pega unit testing from the Task list. Optionally, to run all the Pega unit tests for an application that is associated with an access group, enter the access group in the Access Group field. Click Submit. For more information about creating Pega unit tests, see Creating Pega unit test cases.
      - To specify that a branch must meet a compliance score before it can be merged, select Check guardrail compliance from the Task list, enter the minimum required compliance score in the Weighted compliance score field, and then click Submit. For more information about compliance scores, see Compliance score logic.
      - To specify that a branch must be reviewed before it can be merged, select Check review status from the Task list, and then click Submit. For more information about branch reviews, see Branch reviews.
4. Optional: To start a deployment automatically when a branch is merged, select the Trigger deployment on merge check box. Do not select this check box if you want to start deployments manually. For more information, see Manually starting a deployment.
5. Optional: Clear the check box for a deployment life cycle stage to skip it.
6. Optional: In the Continuous Deployment pane, specify the tasks to be performed during each stage of the pipeline.
   1. Do one of the following actions:
      - Click a manually added task, click the More icon, and then click either Add task above or Add task below.
      - Click Add task in the stage.
   2. From the Task list, select the task that you want to perform:
      - To run Pega unit tests either for the pipeline application or for an application that is associated with an access group, select Pega unit testing. To run all the Pega unit tests that are in a Pega unit suite for the pipeline application, in the Test Suite ID field, enter the pxInsName of the test suite; you can find this value in the XML document that comprises the test suite by clicking Actions > XML on the Edit Test Suite form in Pega Platform. If you do not specify a test suite, all the Pega unit tests for the pipeline application are run. To run all the Pega unit tests for an application that is associated with an access group, in the Access Group field, enter the access group. Click Submit. For more information, see Creating Pega unit test cases.
      - To run a Jenkins job that you have configured, select Jenkins. In the Job name field, enter the name of the Jenkins job (the name of the Jenkins deployment) that you want to run. In the Token field, enter the Jenkins authentication token. In the Parameters field, enter any parameters to send to the Jenkins job, separating multiple parameters with commas. Click Submit.
      - To add a manual step that a user must perform in the pipeline, select Manual. In the Job name field, enter text that describes the action that you want the user to take. In the Assigned to field, press the Down Arrow key and select the operator ID to assign the task to. Click Submit.
      - To specify that the application must meet a compliance score, select Check guardrail compliance. In the Weighted compliance score field, enter the minimum required compliance score, and then click Submit. For more information about compliance scores, see Compliance score logic.
      - To specify that all the tasks in the Application Security Checklist must be performed so that the pipeline complies with security best practices, select Verify security checklist, and then click Submit. You must log in to the system for which this task is configured and mark all the tasks in the Application Security Checklist as completed for the pipeline application. For more information about completing the checklist, see Preparing your application for secure deployment.
      - To run Pega scenario tests, select Run Pega scenario tests. In the User name and Password fields, enter the credentials for the Pega Platform instance on which you are running the tests. From the Test Service Provider list, select the browser that you are using to run the tests in the pipeline. In the Provider auth name and Provider auth key fields, enter the user name and key that you use to log in to the test service provider. Click Submit. For more information about scenario tests, see Creating a scenario test.
      - To start a test coverage session at the application level, select Enable test coverage, and then click Submit. Starting and stopping test coverage generates a report that identifies the executable rules in your application that are either covered or not covered by tests.
      - To stop the test coverage session, select Validate test coverage, and then click Submit. Add this task below the Enable test coverage task on the same system; you must add it to stop a coverage session that the Enable test coverage task started. For more information about application-level coverage reports, see Generating an application-level test coverage report.
      - To refresh the Application Quality dashboard, which provides information about the health of your application, on the candidate system, select Refresh application quality, and then click Submit. Add this task after you have run Pega unit tests, checked guardrail compliance, run scenario tests, and started and stopped test coverage.
   3. Optional: To modify the Approve for production task, which is added to the stage before production so that a user must approve application changes before they are sent to production, do the following actions:
      1. Click the Info icon.
      2. In the Job name field, enter a name for the task.
      3. In the Assign to field, press the Down Arrow key and select the user who approves the application for production. An email is sent to this user, who can approve or reject application changes from within the email.
      4. Click Submit.
7. Click Finish.
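The gating tasks in a stage can be read as pass/fail checks evaluated in order, with the deployment pausing at the first failure so that the problem can be fixed. This is an illustrative Python sketch of that logic, not Deployment Manager code; the pause-on-first-failure behavior is an assumption consistent with how manual steps pause a deployment.

```python
def guardrail_gate(weighted_score, minimum_required):
    """Pass the Check guardrail compliance task when the application's
    weighted compliance score meets the configured minimum."""
    return weighted_score >= minimum_required

def run_stage(gates):
    """Evaluate (task name, passed) pairs in order; stop at the first
    failure so that the problem can be addressed before continuing."""
    for name, passed in gates:
        if not passed:
            return "paused at: " + name
    return "stage complete"

# Example stage: compliance score of 97 against a configured minimum of 95,
# followed by a security checklist that has not yet been marked complete.
result = run_stage([
    ("Check guardrail compliance", guardrail_gate(97, 95)),
    ("Verify security checklist", False),
])
```
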

Accessing systems in your pipeline

You can open the systems in your pipeline and log in to the Pega Platform instances.

1. Optional: If the pipeline is not already open, in the Navigation pane, click Pipelines.
2. Click the pop-out arrow for the system that you want to open.

Manually starting a deployment

Start a deployment manually if you are not using branches and are working directly in rulesets. You can also start a deployment manually if you do not want deployments to start automatically when branches are merged; in that case, also clear the Trigger deployment on merge check box in the pipeline configuration.

1. Do one of the following actions:
   - If the pipeline that you want to start is open, click Start deployment.
   - Click Pipelines, and then click Start deployment for the pipeline that you want to start.
2. In the Start deployment dialog box, start a new deployment or deploy an existing application by completing one of the following actions:
   - To start a deployment and deploy a new application package:
     1. Click Generate new artifact.
     2. In the Deployment name field, enter the name of the deployment.
   - To deploy an application package that is in a cloud repository:
     1. Click Deploy an existing artifact.
     2. In the Deployment name field, enter the name of the deployment.
     3. In the Select a repository field, press the Down Arrow key and select the repository.
     4. In the Select an artifact field, press the Down Arrow key and select the application package.
3. Click Deploy.

Starting a deployment in a branch-based environment In non-distributed, branch-based environments, you can immediately start a deployment by submitting a branch into a pipeline in the Merge Branches wizard. For more information, see Submitting a branch into a pipeline. The wizard displays the merge status of branches so that you do not need to open Deployment Manager to view it.

Starting a deployment in a distributed branch-based environment

If you are using Deployment Manager in a distributed, branch-based environment with multiple pipelines per application, first export the branch to the main development system, and then merge it.

1. On the remote development system, package the branch. For more information, see Packaging a branch.
2. Export the branch.
3. On the main development system, import the branch by using the Import wizard. For more information, see Import wizard landing page.
4. On the main development system, start a deployment by using the Merge Branches wizard. For more information, see Submitting a branch into a pipeline.
   The wizard displays the merge status of branches so that you do not need to open Deployment Manager to view it.

If you are using one pipeline per application, you can publish a branch to start the merge. For more information, see Publishing a branch to a repository.

Publishing application changes in App Studio

You can publish application changes that you make in App Studio to the pipeline. Publishing your changes creates a patch version of the application and starts a deployment. For example, you can change a life cycle, data model, or user interface elements in a screen and submit those changes to the systems in the pipeline.

When you publish an application to a stage, your rules are deployed immediately to that system. To allow stakeholders to inspect and verify changes before they are deployed to a stage, configure a manual task on the previous stage. When the pipeline runs, it pauses at a manual step that is assigned to a user, which allows stakeholders to review your changes before they approve the step and resume the pipeline. Your pipeline should have at least a quality assurance or staging stage with a manual task so that you do not deploy changes to production that have not been approved by stakeholders.

You can submit applications to a pipeline only when there is exactly one unlocked ruleset version in each ruleset of your application.

1. In the App Studio header, click Publish.
   The dialog box that appears displays the stages that are enabled in the application pipeline in Deployment Manager. The available stages are, in order, quality assurance, staging, and production. It also displays the application version that is on each system. The version numbers are taken from the number at the end of each application deployment name in Deployment Manager; for example, if a deployment is named "MyNewApp:01_01_75", the dialog box displays "v75". You can view application version numbers by clicking Settings > Versions in the navigation panel.
2. Submit an application from development to quality assurance or staging in your pipeline by completing the following steps:
   a. In the dialog box, click either Publish to QA or Publish to staging.
   b. Optional: To add a comment, which is published when you submit the application, enter it in the Publish confirmation dialog box.
   c. Optional: If Agile Workbench has been configured, associate a bug or user story with the application: in the Associated User stories/Bugs field, press the Down Arrow key and select the bug or user story.
   d. Click OK.
   Each unlocked ruleset version in your application is locked and rolled to the next highest version, and the application is packaged and imported into the system. The amount of time that publishing application changes takes depends on the size of your application.
   A new application is also copied from the application that is defined on the pipeline in Deployment Manager. The application patch version is updated to reflect the version of the new rulesets; for example, if the ruleset versions of the patch application are 01-01-15, the application version is updated to 01.01.15. This application is locked and cannot be unlocked. You can use it to test specific patch versions of your application on quality assurance or staging systems, and to roll back a deployment.
3. Optional: Make changes to your application in the unlocked rulesets, which you can publish again into the pipeline. If the application is already on the system, it is overridden by the new version that you publish.
4. Optional: If you configured a manual step, request that stakeholders review and test your changes. After they tell you that they have completed testing, you can publish your changes to the next stage in the pipeline.
5. Publish the application to the next stage in the pipeline by clicking the link that is displayed. The name of the link is the Job name of the manual task that is defined on the stage. If no manual task is defined, the application automatically moves to the next stage.
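The two version conventions described above follow a simple pattern: the "vNN" label in the Publish dialog comes from the trailing number of the deployment name, and the patch application version mirrors its ruleset version with periods instead of hyphens. A small Python sketch of both mappings (illustrative only; Deployment Manager's own parsing is not published):

```python
def display_version(deployment_name):
    """Derive the "vNN" label that the Publish dialog shows from a
    deployment name such as "MyNewApp:01_01_75"."""
    patch = deployment_name.rsplit("_", 1)[-1]  # trailing patch number
    return "v" + patch

def application_version(ruleset_version):
    """Map a patch ruleset version such as "01-01-15" to the matching
    application version "01.01.15"."""
    return ruleset_version.replace("-", ".")

display_version("MyNewApp:01_01_75")   # "v75"
application_version("01-01-15")        # "01.01.15"
```
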

Viewing application version information

You can view details about the application versions that were submitted into a pipeline.

1. In App Studio, click Turn editing on.
2. In the Navigation panel, click Settings > Versions.

Schema changes in application packages

If an application package that is to be deployed on candidate systems contains schema changes, the Pega Platform orchestration server checks the candidate system to verify that you have the required privileges to deploy the schema changes. One of the following results occurs:

- If you have the appropriate privileges, the schema changes are automatically applied to the candidate system, the application package is deployed, and the pipeline continues.
- If you do not have the appropriate privileges, Deployment Manager generates an SQL file that lists the schema changes and sends it to your email address. It also creates a manual step, pausing the pipeline, so that you can apply the schema changes. After you complete the step, the pipeline continues. For more information about completing a step, see Completing or rejecting a manual step.

You can also configure settings to deploy schema changes automatically so that you do not have to apply them manually when you do not have the required privileges. For more information, see Configuring settings to automatically deploy schema changes.

Configuring settings to automatically deploy schema changes You can configure settings to automatically deploy schema changes that are in an application package that is to be deployed on candidate systems. Configure these settings so that you do not have to apply schema changes if you do not have the privileges to deploy them. 1. On the orchestration server, in Pega Platform, set the AutoDBSchemaChanges dynamic system setting to true to enable schema changes at the system level. 1. In Dev Studio, search for AutoDBSchemaChanges. 2. In the dialog box that appears for the search results, click AutoDBSchemaChanges. 3. On the Settings tab, in the Value field, enter true. 4. Click Save. 2. Add the SchemaImport privilege to your access role to enable schema changes at the user level. For more information, see Specifying privileges for an Access or Role to Object rule. These settings are applied sequentially. If the AutoDBSchemaChanges dynamic system setting is set to false, you cannot deploy schema changes, even if you have the SchemaImport privilege.
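Because the two settings are applied sequentially, the dynamic system setting acts as a system-level gate and the privilege as a user-level check behind it. The following sketch illustrates that decision logic; the function is illustrative only and is not a Pega API:

```python
def can_auto_deploy_schema(auto_db_schema_changes: bool,
                           has_schema_import_privilege: bool) -> bool:
    """Return True when schema changes may be deployed automatically.

    If the AutoDBSchemaChanges dynamic system setting is false, schema
    changes cannot be deployed even when the user holds the
    SchemaImport privilege.
    """
    if not auto_db_schema_changes:
        return False
    return has_schema_import_privilege
```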

Completing or rejecting a manual step in a deployment If a manual step is configured on a stage, the deployment pauses when it reaches the step, and you can either complete it or reject it. For example, if a user was assigned a task and completed it, you can complete the task to continue the deployment. Deployment Manager also sends you an email when there is a manual step in the pipeline. You can complete or reject a step either within the pipeline or through email. Deployment Manager also generates a manual step if there are schema changes in the application package that the release manager must apply. For more information, see Schema changes in application packages. To complete or reject a manual step within the deployment, do the following steps: 1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline. 2. Click one of the following links: Complete: Resolve the task so that the deployment continues through the pipeline. Reject: Reject the task so that the deployment does not proceed. To complete or reject a manual step from within an email, click either Accept or Reject.

Managing aged updates An aged update is a rule or data instance in an application package that is older than an instance that is on a system to which you want to deploy the application package. By being able to import aged updates, skip the import, or manually deploy your application changes, you have more flexibility in determining which rules you want in your application and how you want to deploy them. For example, suppose that you update a dynamic system setting directly on a quality assurance system, and the application package to be deployed contains an older instance of that setting. Before Deployment Manager deploys the package, the system detects that the version of the dynamic system setting on the system is newer than the version in the package and creates a manual step in the pipeline. To import aged updates: 1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline. 2. Optional: Click View aged updates to view a list of the rules and data instances, which are in the application package, that are older than the instances that are on the system. 3. Click the More icon and select one of the following options: Click Overwrite aged updates to import the older rule and data instances that are in the application package into the system,

which overwrites the newer versions that are on the system. Click Skip aged updates to skip the import. Click Deploy manually and resume to manually deploy the package from the Import wizard on the system. Deployment Manager does not run the Deploy step on the stage.
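Conceptually, detecting aged updates is a timestamp comparison between each instance in the package and the matching instance on the target system. A hedged sketch of that check; the data shapes and key names here are assumptions for illustration, not Deployment Manager's actual data model:

```python
from datetime import datetime

def find_aged_updates(package_instances: dict, system_instances: dict) -> list:
    """Return the keys of package instances that are older than the
    matching instance on the target system.

    Both arguments map an instance key (for example, a rule or
    data-instance identifier) to its last-update timestamp.
    """
    aged = []
    for key, pkg_time in package_instances.items():
        sys_time = system_instances.get(key)
        # An instance is "aged" only when the system already holds a
        # newer copy of the same key.
        if sys_time is not None and pkg_time < sys_time:
            aged.append(key)
    return aged
```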

Pausing a deployment When you pause a deployment, the pipeline completes the task that it is running, and stops the deployment at the next step. To pause a deployment: 1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline. 2. Click the pipeline. 3. Click Pause.

Stopping a deployment To stop a deployment: 1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline. 2. Click the More icon, and then click Abort.

Performing actions on a deployment that has errors If a deployment has errors, the pipeline stops processing on it. You can perform actions on it, such as rolling back the deployment or skipping the step on which the error occurred. 1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline. 2. Click the More icon, and then click one of the following options: Resume from current task - Resume running the pipeline from the task. Skip current task and continue - Skip the step and continue running the pipeline. Rollback - Roll back to an earlier deployment. Abort - Stop running the pipeline.

Diagnosing a pipeline You can diagnose your pipeline to verify that it is configured properly: for example, that the target application and product rule are in the development environment, that connectivity between systems and repositories is working, and that premerge settings are correctly configured. 1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline. 2. Click Actions > Diagnose pipeline. 3. In the Diagnose application pipeline dialog box, review the errors, if any. 4. Optional: To view troubleshooting tips about errors, hover your mouse over the Troubleshooting tips link.

If the RMURL dynamic system setting is not configured, Deployment Manager displays a message that you can disregard if you are not using branches, because you do not need to configure the dynamic system setting.

Viewing merge requests You can view the status of the merge requests for a pipeline. For example, you can see whether a branch was merged in a deployment and when it was merged. 1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline. 2. In the Development stage, click X Merges in queue to view all the branches that are in the queue or for which merge is in progress. 3. In the Merge requests ready for deployment dialog box, click View all merge requests to view all the branches that are merged into the pipeline.

Viewing deployment logs View logs for a deployment to see the completion status of operations, for example, when a deployment is moved to a new stage. You can change the logging level to control which events are displayed in the log. For example, you can change logging levels of your deployment from INFO to DEBUG for troubleshooting purposes. For more information, see Logging Level Settings tool. 1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline. 2. Perform one of the following actions: To view the log for the current deployment, click the More icon, and then click View logs. To view the log for a previous deployment, expand the Deployment History pane and click Logs for the appropriate deployment.

Viewing deployment reports Deployment reports provide information about a specific deployment. You can view information such as the number of tasks that you configured on a deployment that have been completed and when each task started and ended. 1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline. 2. Perform one of the following actions: To view the report for the current deployment, click the More icon, and then click View report. To view the report for a previous deployment, expand the Deployment History pane and click Reports for the appropriate deployment.

Viewing reports for all deployments Reports provide a variety of information about all the deployments in your pipeline. You can view the following key performance indicators (KPI): Deployment Success - Percentage of deployments that are successfully deployed to production Deployment Frequency – Frequency of new deployments to production Deployment Speed - Average time taken to deploy to production

Start frequency - Frequency at which new deployments are triggered Failure rate - Average number of failures per deployment Merges per day - Average number of branches that are successfully merged per day To view reports, do the following tasks: 1. Do one of the following actions: If the pipeline is open, click Actions > View report. If a pipeline is not open, in the Navigation pane, click Reports. Next, in the Pipeline field, press the Down Arrow key and select the name of the pipeline for which to view the report. 2. Optional: From the list that appears in the top right of the Reports page, select whether you want to view reports for all deployments, the last 20 deployments, or the last 50 deployments.
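The KPIs above are aggregates over deployment records. The sketch below shows how two of them (Deployment Success and Failure rate) could be computed; the record fields are illustrative assumptions, not Deployment Manager's actual data model:

```python
def deployment_kpis(deployments: list) -> dict:
    """Compute report KPIs from a list of deployment records.

    Each record is assumed to be a dict with a 'succeeded' boolean
    and a 'failures' count (errors hit during that deployment).
    """
    total = len(deployments)
    succeeded = sum(1 for d in deployments if d["succeeded"])
    return {
        # Deployment Success: percentage of deployments that are
        # successfully deployed to production.
        "deployment_success_pct": 100.0 * succeeded / total if total else 0.0,
        # Failure rate: average number of failures per deployment.
        "failure_rate": sum(d["failures"] for d in deployments) / total if total else 0.0,
    }
```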

Deleting an application pipeline When you delete a pipeline, its associated application packages are not removed from the repositories that the pipeline is configured to use. 1. In the Navigation pane, click Pipelines. 2. Click the Delete icon for the pipeline that you want to delete. 3. Click Submit.

Viewing, downloading, and deleting application packages in repositories You can view, download, and delete application packages in repositories that are on the orchestration server. If you are using Deployment Manager on Pega Cloud, application packages that you have deployed to cloud repositories are stored on Pega Cloud. To manage your cloud storage space, you can download and permanently delete the packages. 1. Optional: If the pipeline is not open, in the Navigation pane, click Pipelines, and then click the name of the pipeline. 2. Click the pipeline for which you want to download or delete packages. 3. Click Actions > Browse artifacts. 4. Click either Development Repository or Production Repository. 5. To download an application package, click the package, and then save it to the appropriate location. 6. To delete a package, select the check boxes for the packages that you want to delete, and then click Delete.

Deployment Manager 4.1.x Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your Pega applications from within Pega Platform™. You can create a standardized deployment process so that you can deploy predictable, high-quality releases without using third-party tools. With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging, application package generation, artifact management, and package promotion to different stages in the workflow. Deployment Manager 4.1.x is supported on Pega 8.1. You can download it for Pega Platform from the Deployment Manager Pega Exchange page. Each customer virtual private cloud (VPC) on Pega Cloud has a dedicated orchestrator instance to use Deployment Manager. You do not need to install Deployment Manager to use it with your Pega Cloud application. For more information about the features in the latest version of Deployment Manager 4.1.x, see the following articles: Deployment Manager release notes Deployment Manager architecture and workflows Best practices for using branches with Deployment Manager Creating custom repository types for Deployment Manager Installing, upgrading, and configuring Deployment Manager 4.1.x Using Deployment Manager 4.1.x

Installing, upgrading, and configuring Deployment Manager 4.1.x Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks and allow you to quickly deploy high-quality software to production. Each customer virtual private cloud (VPC) on Pega Cloud has a dedicated orchestrator instance to use Deployment Manager. You do not need to install Deployment Manager to use it with your Pega Cloud application. This document describes the features for the latest version of Deployment Manager 4.1.x. See the following topics for more information about installing and configuring Deployment Manager: Step 1: Installing Deployment Manager Step 2: Upgrading to Deployment Manager 4.1.x (optional) Step 3: Configuring systems in the pipeline Step 4: Configuring the development system for branch-based development (optional) Step 5: Configuring additional settings

For information about using Deployment Manager, see Using Deployment Manager 4.1.x.

Step 1: Installing Deployment Manager Each customer virtual private cloud (VPC) on Pega Cloud has a dedicated orchestrator instance to use Deployment Manager. You do not need to install Deployment Manager to use it with your Pega Cloud application. If you are upgrading from an earlier release to Deployment Manager 4.1.x, contact Pegasystems® Global Customer Support (GCS) to request a new version. To install Deployment Manager 4.1.x on premises, complete the following steps: 1. Install Pega 8.1 on all systems in the CI/CD pipeline. 2. Browse to the Deployment Manager Pega Exchange page, and then download the DeploymentManager04.01.0x.zip file for your version of Pega Platform to your local disk on each system. 3. Use the Import wizard to import files into the appropriate systems. For more information about the Import wizard, see Importing a file by using the Import wizard.

4. On the orchestration server, import the following files: PegaDevOpsFoundation_04.01.0x.zip PegaDeploymentManager_04.01.0x.zip 5. On the development, QA, staging, and production systems, import the PegaDevOpsFoundation_04.01.0x.zip file. 6. Optional: If you are using distributed development, on the remote development system, import the PegaDevOpsFoundation_04.01.0x.zip file. 7. Do one of the following actions: 1. If you are upgrading to Deployment Manager 4.1.x, perform the upgrade. For more information, see Upgrading to Deployment Manager 4.1.x. 2. If you are not upgrading to Deployment Manager 4.1.x, continue the installation procedure. For more information, see Step 3b: Configuring the orchestration server.

Step 2: Upgrading to Deployment Manager 4.1.x Before you upgrade, ensure that no deployments are running, have errors, or are paused. To upgrade to Deployment Manager 4.1.x either on Pega Cloud or on premises, perform the following steps: 1. On each candidate system, update the PegaDevOpsFoundation application version to the version of Deployment Manager that you are using. 1. In the Dev Studio header, click the name of your application, and then click Definition. 2. In the Built on application section for the PegaDevOpsFoundation application, in the Version field, press the Down Arrow key and select the version of Deployment Manager that you are using. 3. Click Save. 2. On the orchestration server, run the pxUpdateDescription activity. 1. In Dev Studio, search for pxUpdateDescription, and then click the activity in the dialog box that displays the results. 2. Click Actions > Run. 3. In the dialog box that is displayed, click Run. If you are upgrading from Deployment Manager 3.2.1 or a later release, you do not need to do the rest of the steps in this procedure or the required steps in the remainder of this document. If you are upgrading from earlier releases and have pipelines configured, complete this procedure. 3. On the orchestration server, log in to the release management application. 4. Run the pxUpdatePipeline activity. 1. In Dev Studio, search for pxUpdatePipeline, and then click the activity in the dialog box that displays the results. 2. Click Actions > Run. 3. In the dialog box that is displayed, click Run. 5. Modify the current release management application so that it is built on PegaDeploymentManager:04-01-01. 1. In the Dev Studio header, click the name of your application, and then click Definition. 2. In the Edit Application rule form, on the Definition tab, in the Built on application section, for the PegaDeploymentManager application, press the Down Arrow key and select 04.01.01. 3. Click Save. 6. Merge rulesets to the PipelineData ruleset. 1. 
Click Configure > System > Refactor > Rulesets. 2. Click Copy/Merge RuleSet. 3. Click the Merge Source RuleSet(s) to Target RuleSet radio button. 4. Click the RuleSet Versions radio button. 5. In the Available Source Ruleset(s) section, select the first open ruleset version that appears in the list, and then click the Move icon. All your current pipelines are stored in the first open ruleset. If you modified this ruleset after you created the application, select all the ruleset versions that contain pipeline data. 6. In the Target RuleSet/Information section, in the Name field, press the Down Arrow key and select Pipeline Data. 7. In the Version field, enter 01-01-01. 8. For the Delete Source RuleSet(s) upon completion of merge? option, click No. 9. Click Next. 10. Click Merge to merge your pipelines to the PipelineData:01-01-01 ruleset. 11. Click Done. 7. Your pipelines are migrated to the Pega Deployment Manager application. Log out of the orchestration server and log back in to it with the DMReleaseAdmin operator ID and the password that you specified for it. For backup purposes, pipelines are still visible in your previous release management application. However, you should not create deployments with this application, because deployments might not work correctly. You do not need to perform any of the required steps in the remainder of this document.

Step 3: Configuring systems in the pipeline Complete the following steps to set up a pipeline for all supported CI/CD workflows. If you are using branches, you must configure additional settings after you perform the required steps. 1. Step 3a: Configuring authentication profiles on the orchestration server and candidate systems 2. Step 3b: Configuring the orchestration server 3. Step 3c: Configuring candidate systems 4. Step 3d: Creating repositories on the orchestration server and candidate systems

Step 3a: Configuring authentication profiles on the orchestration server and candidate systems When you install Deployment Manager on all the systems in your pipeline, default applications, operator IDs, and authentication profiles that communicate between the orchestration server and candidate systems are also installed. On the orchestration server, the following items are installed: The Pega Deployment Manager application. The DMReleaseAdmin operator ID, which release managers use to log in to the Pega Deployment Manager application. You must enable this operator ID and specify its password. The DMAppAdmin authentication profile. You must update this authentication profile to use the password that you specified for the DMAppAdmin operator ID, which is configured on all the candidate systems. On all the candidate systems, the following items are installed:

The PegaDevOpsFoundation application. The DMAppAdmin operator ID, which points to the PegaDevOpsFoundation application. You must enable this operator ID and specify its password. The DMReleaseAdmin authentication profile. You must update this authentication profile to use the password that you specified for the DMReleaseAdmin operator ID, which is configured on the orchestration server. The DMReleaseAdmin and DMAppAdmin operator IDs do not have default passwords. Configure the default authentication profile by following these steps: 1. On the orchestration server, enable the DMReleaseAdmin operator ID and specify its password. 1. Log in to the orchestration server with [email protected]/install. 2. In Dev Studio, click Records > Organization > Operator ID, and then click DMReleaseAdmin. 3. In the Explorer panel, click the operator ID initials, and then click Operator. 4. On the Edit Operator ID rule form, click the Security tab. 5. Clear the Disable Operator check box. 6. Click Save. 7. Click Update password. 8. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then click Submit. 9. Optional: Clear the Force password change on next login check box if you do not want to change the password for the DMReleaseAdmin operator ID the next time that you log in. 10. Log out of the orchestration server. 2. On each candidate system, update the DMReleaseAdmin authentication profile to use the new password. All candidate systems use this authentication profile to communicate with the orchestration server about the status of the tasks in the pipeline. 1. Log in to each candidate system with the DMReleaseAdmin user name and the password that you specified. 2. In Dev Studio, click Records > Security > Authentication Profile. 3. Click DMReleaseAdmin. 4. On the Edit Authentication Profile rule form, click Set password. 5. In the Password dialog box, enter the password, and then click Submit. 6. Save the rule form. 3.
On each candidate system, which includes the development, QA, staging, and production systems, enable the DMAppAdmin operator ID. If you want to create your own operator IDs, ensure that they point to the PegaDevOpsFoundation application. 1. Log in to each candidate system with [email protected]/install. 2. In Dev Studio, click Records > Organization > Operator ID, and then click DMAppAdmin. 3. In the Explorer panel, click the operator ID initials, and then click Operator. 4. On the Edit Operator ID rule form, click the Security tab. 5. Clear the Disable Operator check box. 6. Click Save. 7. Click Update password. 8. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then click Submit. 9. Optional: Clear the Force password change on next login check box if you do not want to change the password for the DMAppAdmin operator ID the next time that you log in. 10. Log out of each candidate system. 4. On the orchestration server, modify the DMAppAdmin authentication profile to use the new password. The orchestration server uses this authentication profile to communicate with candidate systems so that it can run tasks in the pipeline. 1. Log in to the orchestration server with the DMAppAdmin user name and the password that you specified. 2. In Dev Studio, click Records > Security > Authentication Profile. 3. Click DMAppAdmin. 4. On the Edit Authentication Profile rule form, click Set password. 5. In the Password dialog box, enter the password, and then click Submit. 6. Save the rule form. 5. Do one of the following actions: 1. If you are upgrading to Deployment Manager 4.1.x, resume the upgrade procedure from step 2. For more information, see Upgrading to Deployment Manager 4.1.x. 2. If you are not upgrading, continue the installation procedure. For more information, see Step 3b: Configuring the orchestration server.

Step 3b: Configuring the orchestration server The orchestration server is the system on which release managers configure and manage CI/CD pipelines. 1. Optional: If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd service packages. 1. Click Records > Integration-Resources > Service Package. 2. Click api. 3. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 4. Click Records > Integration-Resources > Service Package. 5. Click cicd. 6. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 2. Configure the candidate systems in your pipeline. For more information, see Step 3c: Configuring candidate systems.

Step 3c: Configuring candidate systems Configure each system that is used for the development, QA, staging, and production stages in the pipeline. 1. On each candidate system, add the PegaDevOpsFoundation application to your application stack. 1. In the Dev Studio header, click the name of your application, and then click Definition. 2. In the Built on application section, click Add application. 3. In the Name field, press the Down Arrow key and select PegaDevOpsFoundation. 4. In the Version field, press the Down Arrow key and select the version of Deployment Manager that you are using. 5. Click Save. 2. Optional: If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd service packages. 1. Click Records > Integration-Resources > Service Package. 2. Click api. 3. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 4. Click Records > Integration-Resources > Service Package. 5. Click cicd. 6. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 3. Optional: If you want to use a product rule other than the default product rule that is created by the New Application wizard, on the development system, create a product rule that defines the application package that will be moved through repositories in the

pipeline. For more information, see Creating a product rule by using the create menu. When you use the New Application wizard, a default product rule is created that has the same name as your application. 4. Configure repositories through which to move artifacts in your pipeline. For more information, see Step 3d: Creating repositories on the orchestration server and candidate systems.

Step 3d: Creating repositories on the orchestration server and candidate systems If you are using Deployment Manager on premises, create repositories on the orchestration server and all candidate systems to move your application between all the systems in the pipeline. You can use a supported repository type that is provided in Pega Platform™, or you can create a custom repository type. If you are using Deployment Manager on Pega Cloud, default repositories are provided. If you want to use repositories other than the ones provided, you can create your own. For more information about creating a supported repository, see Creating a repository for file storage and knowledge management. For more information about creating a custom repository type, see Creating custom repository types for Deployment Manager. The Pega repository type is not supported. Ensure that each repository has the same name on all systems. When you create JFrog Artifactory repositories, ensure that you create a Generic package type in JFrog Artifactory. Also, when you create the authentication profile for the repository on Pega Platform, you must select the Preemptive authentication check box. After you configure a pipeline, you can verify connectivity to the development and production repositories by clicking Test Connectivity on the Repository rule form.
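Because the repositories use the Generic package type, application archives are stored as plain files at paths within the repository. The helper below composes one plausible artifact URL; the path layout is an assumption for illustration, not the scheme Deployment Manager actually uses. It also shows why every system must know the repository by the same name:

```python
def artifact_url(base_url: str, repository: str,
                 application: str, version: str) -> str:
    """Build a URL for an application package in a generic repository.

    The path layout here is hypothetical; it only illustrates that a
    repository name that differs between systems would point each
    system at a different location.
    """
    return (f"{base_url.rstrip('/')}/{repository}/"
            f"{application}/{application}_{version}.zip")
```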

Step 4: Configuring the development system for branch-based development (optional) After you configure the orchestration server and all your candidate systems, configure additional settings so that you can use pipelines if you are using branches in a distributed or non-distributed branch-based environment. You must configure the development system to create a pipeline in a branch-based environment. 1. On the development system (in nondistributed environment) or the main development system (in a distributed environment), create a Dynamic System Setting to define the URL of the orchestration server, even if the orchestration server and the development system are the same system. 1. Click Create > Records > SysAdmin > Dynamic System Settings. 2. In the Owning Ruleset field, enter Pega-DevOps-Foundation. 3. In the Setting Purpose field, enter RMURL. 4. Click Create and open. 5. On the Settings tab, in the Value field, enter the URL of the orchestration server. Use this format: http://hostname:port/prweb/PRRestService. 6. Click Save. 2. Complete the following steps on either the development system (in a non-distributed environment) or the remote development system (in a distributed environment). 1. Use the New Application wizard to create a new development application that developers will log in to. This application allows development teams to maintain a list of development branches without modifying the definition of the target application. 2. Add the target application of the pipeline as a built-on application layer of the development application. 1. Log in to the application. 2. In the Dev Studio header, click the name of your application, and then click Definition. 3. In the Built-on application section, click Add application. 4. In the Name field, press the Down Arrow key and select the name of the target application. 5. In the Version field, press the Down Arrow key and select the target application version. 6. Click Save. 3. 
Lock the application rulesets to prevent developers from making changes to rules after branches have been merged. 1. In the Dev Studio header, click the name of your application, and then click Definition. 2. In the Application rulesets section, click the Open icon for each ruleset that you want to lock. 3. Click Lock and Save. 4. Optional: It is recommended that you merge branches by using the Merge Branch wizard. However, you can publish a branch to the remote development system to start a deployment. Publishing a branch when you have multiple pipelines per application is not supported. 1. In Dev Studio, enable Pega repository types. For more information, see Enabling the Pega repository type. 2. Create a new Pega repository type. For more information, see Creating a repository connection for file storage and knowledge management. Ensure that you do the following tasks: In the Host ID field, enter the URL of the development system. The default access group of the operator that is configured for the authentication profile of this repository should point to the pipeline application on the development system (in a nondistributed environment) or main development system (in a distributed environment).
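The RMURL value entered in step 1 must follow the http://hostname:port/prweb/PRRestService format. A small helper to compose such a value and roughly sanity-check it; both functions are illustrative only, not part of any Pega API:

```python
from urllib.parse import urlparse

def build_rmurl(hostname: str, port: int, scheme: str = "http") -> str:
    """Compose the orchestration server URL for the RMURL
    dynamic system setting."""
    return f"{scheme}://{hostname}:{port}/prweb/PRRestService"

def looks_like_rmurl(value: str) -> bool:
    """Rough check that a value matches the documented RMURL shape."""
    parsed = urlparse(value)
    return (parsed.scheme in ("http", "https")
            and bool(parsed.hostname)
            and parsed.path.endswith("/prweb/PRRestService"))
```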

Step 5: Configuring additional settings As part of your pipeline, you can optionally send email notifications to users, configure Jenkins if you are using a Jenkins task, and upgrade to the latest version of Deployment Manager if you are using a previous version. See the following topics for more information: Configuring email notifications on the orchestration server Configuring Jenkins

Configuring email notifications on the orchestration server You can optionally configure email notifications on the orchestration server. For example, users can receive emails when pre-merge criteria are not met and the system cannot create a deployment. To configure the orchestration server to send emails, complete the following steps: 1. Use the Email wizard to configure an email account and listener by clicking Dev Studio > Integration > Email > Email Wizard. This email account sends notifications to users when events occur, for example, if there are merge conflicts. For detailed information, see the procedure for "Configuring an email account that receives email and creates or manages work" in Entering email information in the Email wizard. 2. From the What would you like to do? list, select Receive an email and create/manage a work object. 3. From the What is the class of your work type? list, select Pega-Pipeline-CD. 4. From the What is your starting flow name? list, select NewWork. 5. From the What is your organization? list, select the organization that is associated with the work item. 6. In the What Ruleset? field, select the ruleset that contains the generated email service rule. This ruleset applies to the work class. 7. In the What RuleSet Version? field, select the version of the ruleset for the generated email service rule. 8. Click Next to configure the email listener. 9. In the Email Account Name field, enter Pega-Pipeline-CD, which is the name of the email account that the listener references for incoming and outgoing email. 10. In the Email Listener Name field, enter the name of the email listener. Begin the name with a letter, and use only letters, numbers, the ampersand character (&), and hyphens. 11. In the Folder Name field, enter the name of the email folder that the listener monitors. Typically, this folder is INBOX. 12. In the Service Package field, enter the name of the service package to be deployed. Begin the name with a letter, and use only letters, numbers, and hyphens to form an identifier. 13. In the Service Class field, enter the service class name. 14. In the Requestor User ID field, press the Down Arrow key, and select the operator ID of the release manager operator. 15. In the Requestor Password field, enter the password for the release manager operator. 16. In the Requestor User ID field, enter the operator ID that the email service uses when it runs. 17. In the Password field, enter the password for the operator ID. 18. Click Next to continue the wizard and configure the service package. For more information, see Configuring the service package in the Email wizard. 19. After you complete the wizard, enable the listener that you created in the Email wizard.

Email notifications

Emails are also preconfigured with information about each notification type. For example, when a deployment failure occurs, the email that is sent provides information, such as the pipeline name and the URL of the system on which the deployment failure occurred. Preconfigured emails are sent in the following scenarios:

- Deployment start – When a deployment starts, an email is sent to the release manager and, if you are using branches, to the operator who started the deployment.
- Deployment step completion or failure – When a step either completes or fails, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. The deployment pauses if there are any errors.
- Deployment completion – When a deployment is successfully completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
- Stage completion or failure – When a stage in a deployment process either succeeds or fails, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
- Manual tasks requiring approval – When a manual task requires email approval from a user, an email is sent to the user, who can approve or reject the task from the email.
- Stopped deployment – When a deployment is stopped, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
- Pega unit testing success or failure – If you are using the Run Pega unit tests task, and the task either succeeds or fails, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
- Schema changes required – If you do not have the required schema privileges to deploy schema changes on application packages that require those changes, an email is sent to the operator who started the deployment.
- Guardrail compliance score success or failure – If you are using the Check guardrail compliance task, an email is sent to the release manager if the task either succeeds or fails.
- Approve for production – If you are using the Approve for production task, which requires approval from a user before application changes are deployed to production, an email is sent to the user. The user can reject or approve the changes.
- Verify security checklist success or failure – If you are using the Verify security checklist task, which requires that all tasks be completed in the Application Security Checklist to ensure that the pipeline complies with security best practices, an email is sent to the release manager if the test either succeeds or fails.
- Pega scenario testing success or failure – If you are using the Run Pega scenario tests task, and Pega scenario testing either succeeds or fails, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
- Start test coverage success or failure – If you are using the Enable test coverage task to generate a test coverage report, an email is sent to the release manager if the task either fails or succeeds.
- Verify test coverage success or failure – If you are using the Verify test coverage task, an email is sent to the release manager if the task either fails or succeeds.
- Application quality statistics refreshed – If you are using the Refresh application quality statistics task, an email is sent to the release manager when the task is run.
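The recipient rules above follow a consistent pattern: most notifications go to the release manager plus, when branches are in use, the operator who started the merge, while approval and schema-change emails go to a single operator. The following sketch is illustrative only — the event labels and function are hypothetical, not Deployment Manager internals:

```python
def notification_recipients(event, release_manager, merge_operator=None,
                            approver=None, deploy_operator=None):
    """Return the email recipients for a pipeline event.

    Event labels here are made up for illustration; the product drives
    this from its own task and stage definitions.
    """
    if event == "manual_approval":
        # Approval emails go only to the assigned approver.
        return [approver]
    if event == "schema_changes_required":
        # Sent to whoever started the deployment.
        return [deploy_operator]
    recipients = [release_manager]
    if merge_operator:
        # "if you are using branches" — the merge initiator is copied.
        recipients.append(merge_operator)
    return recipients

# A deployment-completion email with branches in use reaches both parties.
who = notification_recipients("deployment_complete",
                              "rm@example.com", "dev@example.com")
```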

Configuring Jenkins

If you are using a Jenkins task in your pipeline, configure Jenkins so that it can communicate with the orchestration server.

1. On the orchestration server, create an authentication profile that uses Jenkins credentials.
   1. Click Create > Security > Authentication Profile.
   2. Enter a name, and then click Create and open.
   3. In the User name field, enter the user name of the Jenkins user.
   4. Click Set password, enter the Jenkins password, and then click Submit.
   5. Select the Preemptive authentication check box.
   6. Click Save.
2. Because the Jenkins task does not support Cross-Site Request Forgery (CSRF) protection, disable it in Jenkins by completing the following steps:
   1. In Jenkins, click Manage Jenkins.
   2. Click Configure Global Security.
   3. In the CSRF Protection section, clear the Prevent Cross Site Request Forgery exploits check box.
   4. Click Save.
3. Install the Post build task plug-in.
4. Install the curl command on the Jenkins server.
5. Create a new freestyle project.
6. On the General tab, select the This project is parameterized check box.
7. Add the BuildID and CallBackURL parameters.
   1. Click Add parameter, and then select String parameter.
   2. In the String field, enter BuildID.
   3. Click Add parameter, and then select String parameter.
   4. In the String field, enter CallBackURL.
8. In the Build Triggers section, select the Trigger builds remotely check box.

9. In the Authentication Token field, select the token that you want to use when you start Jenkins jobs remotely.
10. In the Build Environment section, select the Use Secret text(s) or file(s) check box.
11. In the Bindings section, do the following actions:
   1. Click Add, and then select User name and password (conjoined).
   2. In the Variable field, enter RMCREDENTIALS.
   3. In the Credentials field, click Specific credentials.
   4. Click Add, and then select Jenkins.
   5. In the Add credentials dialog box, in the Username field, enter the operator ID of the release manager operator that is configured on the orchestration server.
   6. In the Password field, enter the password.
   7. Click Save.
12. In the Post-Build Actions section, do one of the following actions, depending on your operating system:

   If Jenkins is running on Microsoft Windows, add the following post-build tasks:
   1. Click Add post-build action, and then select Post build task.
   2. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example, BUILD FAILURE.
   3. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%"
   4. Click Add another task.
   5. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example, BUILD SUCCESS.
   6. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%"
   7. Click Save.

   If Jenkins is running on UNIX or Linux, add the following post-build tasks. Use the dollar sign ($) instead of the percent sign (%) to access the environment variables.
   1. Click Add post-build action, and then select Post build task.
   2. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example, BUILD FAILURE.
   3. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"$BuildID\"}" "$CallBackURL"
   4. Click Add another task.
   5. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example, BUILD SUCCESS.
   6. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"$BuildID\"}" "$CallBackURL"
   7. Click Save.
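Both curl variants post the same JSON callback to the orchestration server; only the environment-variable syntax differs (%VAR% on Windows, $VAR on UNIX or Linux). As a rough sketch of the body that the post-build task sends — the job and build values below are placeholders, while the field names come from the curl commands above:

```python
import json

def build_callback_payload(job_name, build_number, status, build_id):
    """Mirror the JSON body sent by the post-build curl commands.

    `status` is "SUCCESS" or "FAIL", matching the pyStatusValue values
    used in the post-build tasks above.
    """
    return json.dumps({
        "jobName": job_name,
        "buildNumber": build_number,
        "pyStatusValue": status,
        "pyID": build_id,
    })

# Example: the body for a failed build of a hypothetical job "deploy-app".
body = build_callback_payload("deploy-app", "7", "FAIL", "BUILD-42")
```

curl handles the quoting for you; the escaped `\"` sequences in the Script field exist only so that the shell passes the braces and quotes through intact.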

Using Deployment Manager 4.1.x

Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks so that you can quickly deploy high-quality software to production. On the orchestration server, release managers use the DevOps landing page to configure CI/CD pipelines for their Pega Platform™ applications. The landing page displays all the running and queued application deployments, branches that are to be merged, and reports that provide information about your DevOps environment, such as key performance indicators (KPIs).

This document describes the features for the latest version of Deployment Manager 4.1.x. For more information about using Deployment Manager to configure and use CI/CD pipelines, see the following topics:

- Configuring an application pipeline
- Accessing systems in your pipeline
- Manually starting a deployment
- Starting a deployment in a branch-based environment
- Starting a deployment in a distributed, branch-based environment
- Publishing application changes in App Studio
- Schema changes in application packages
- Completing or rejecting a manual step in a deployment
- Managing aged updates
- Pausing a deployment
- Stopping a deployment
- Performing actions on a deployment with errors
- Diagnosing a pipeline
- Viewing merge requests
- Viewing deployment logs
- Viewing deployment reports
- Viewing reports for all deployments
- Deleting an application pipeline
- Viewing, downloading, and deleting application packages in repositories

Configuring an application pipeline

When you add a pipeline, you specify merge criteria and configure stages and steps in the continuous delivery workflow. For example, you can specify that a branch must be peer-reviewed before it can be merged, and you can specify that Pega unit tests must be run after a branch is merged and is in the QA stage of the pipeline.

You can create multiple pipelines for one version of an application. For example, you can use multiple pipelines in the following scenarios:

- To move a deployment to production separately from the rest of the pipeline. You can then create a pipeline that has only a production stage, or development and production stages.
- To use parallel development and hotfix life cycles for your application.

See the following topics for more information:

- Adding a pipeline on Pega Cloud

- Adding a pipeline on premises
- Modifying application details
- Modifying URLs and authentication profiles
- Modifying development and production repositories
- Specifying Jenkins server information
- Specifying merge options for branches
- Modifying stages and tasks in the pipeline

Adding a pipeline on Pega Cloud

To add a pipeline on Pega Cloud, perform the following steps:

1. In the Dev Studio footer, click Deployment Manager.
2. Click Add pipeline.
3. Specify the details of the application for which you are creating the pipeline.
   1. Optional: To change the URL of your development system, which is populated by default, in the Development environment field, press the Down Arrow key and select the URL. This is the system on which the product rule that defines the application package that moves through the repository is located.
   2. In the Application field, press the Down Arrow key and select the name of the application.
   3. In the Version field, press the Down Arrow key and select the application version.
   4. In the Access group field, press the Down Arrow key and select the access group for which pipeline tasks are run. This access group must be present on all the candidate systems and have at least the sysadmin4 role.
   5. In the Pipeline name field, enter the name of the pipeline. This name must be unique.
4. Click Create. The system adds tasks, which you cannot delete, that are required to successfully run a workflow, for example, Deploy and Generate Artifact. For Pega Cloud, it also adds mandatory tasks that must be run on the pipeline, for example, the Check guardrail compliance task and the Verify security checklist task.
5. Optional: Add tasks that you want to perform on your pipeline, such as Pega unit testing. For more information, see Modifying stages and tasks in the pipeline.

Adding a pipeline on premises

To add a pipeline on premises, complete the following steps:

1. In the Dev Studio footer, click Deployment Manager.
2. Click Add pipeline.
3. Specify the details of the application for which you are creating the pipeline.
   1. In the Development environment field, enter the URL of the development system. This is the system on which the product rule that defines the application package that moves through the repository is located.
   2. In the Application field, press the Down Arrow key and select the name of the application.
   3. In the Version field, press the Down Arrow key and select the application version.
   4. In the Access group field, press the Down Arrow key and select the access group for which pipeline tasks are run. This access group must be present on all the candidate systems and have at least the sysadmin4 role.
   5. In the Pipeline name field, enter the name of the pipeline. This name must be unique.
   6. In the Product rule field, enter the name of the product rule that defines the contents of the application.
   7. In the Version field, enter the product rule version.
4. Optional: If the application depends on other applications, in the Dependencies section, add those applications.
   1. Click Dependencies.
   2. Click Add.
   3. In the Application name field, press the Down Arrow key and select the application name.
   4. In the Application version field, press the Down Arrow key and select the application version.
   5. In the Repository name field, press the Down Arrow key and select the repository that contains the production-ready artifact of the dependent application. If you want the latest artifact of the dependent application to be automatically populated, ensure that this repository is configured to support file updates.
   6. In the Artifact name field, press the Down Arrow key and select the artifact.
   For more information about dependent applications, see Listing product dependencies.
5. Click Next.
6. In the Environment details section, in the Stages section, specify the URL of each candidate system and the authentication profile that each system uses to communicate with the orchestration server.
   1. In the Environments field for the system, press the Down Arrow key and select the URL of the system.
   2. Optional: If you are using your own authentication profiles, in the Authentication field for the system, press the Down Arrow key and select the authentication profile to use to communicate from the orchestration server to the system. By default, the fields are populated with the DMAppAdmin authentication profile.
7. In the Artifact management section, specify the development and production repositories through which the product rule that contains application contents moves through the pipeline.
   1. In the Development repository field, press the Down Arrow key and select the development repository.
   2. In the Production repository field, press the Down Arrow key and select the production repository.
8. Optional: In the External orchestration server section, if you are using a Jenkins step in a pipeline, specify Jenkins details.
   1. In the URL field, enter the URL of the Jenkins server.
   2. In the Authentication profile field, press the Down Arrow key and select the authentication profile on the orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.
9. Click Next.
10. Specify whether you are using branches in your application by doing one of the following actions:
   - If you are not using branches, click the No radio button.
   - If you are using branches:
     1. Click the Yes radio button.
     2. Do one of the following actions:
        - To merge branches into the highest existing ruleset in the application, click Highest existing ruleset.
        - To merge branches into a new ruleset, click New ruleset.
     3. In the Password field, enter the password that locks the rulesets on the development system.
11. Click Next. The system adds tasks, which you cannot delete, that are required to successfully run a workflow, for example, Deploy and Generate Artifact. The system also adds other tasks to enforce best practices, such as Check guardrail compliance and Verify security checklist.
12. Optional: In the Merge criteria pane, specify tasks that must be completed before a branch can be merged in the pipeline.
   1. Click Add task.
   2. Specify the task that you want to perform.
      - To specify that a branch must meet a compliance score before it can be merged:
        1. From the Task list, select Check guardrail compliance.
        2. In the Weighted compliance score field, enter the minimum required compliance score.
        3. Click Submit.
        For more information about compliance scores, see Compliance score logic.
      - To specify that a branch must be reviewed before it can be merged:
        1. From the Task list, select Check review status.
        2. Click Submit.
        For more information about branch reviews, see Branch reviews.
      - To run Pega unit tests on the branches, either for the pipeline application or for an application that is associated with an access group, before a branch can be merged:
        1. From the Task list, select Pega unit testing.
        2. Optional: To run all the Pega unit tests for an application that is associated with an access group, in the Access Group field, enter the access group.
        3. Click Submit.
        For more information about creating Pega unit tests, see Creating Pega unit test cases.
13. Optional: To start a deployment automatically when a branch is merged, select the Trigger deployment on merge check box.
14. Optional: Clear a check box for a deployment life cycle stage to skip it.
15. Optional: In the Continuous Deployment pane, specify the tasks to be performed during each stage of the pipeline.
   1. Do one of the following actions:
      - Click a manually added task, click the More icon, and then click either Add task above or Add task below.
      - Click Add task in the stage.
   2. From the Task list, select the task that you want to perform.
      - To run Pega unit tests, either for the pipeline application or for an application that is associated with an access group, select Pega unit testing:
        1. Optional: Perform one of the following actions:
           - To run all the Pega unit tests that are in a Pega unit suite for the pipeline application, in the Test Suite ID field, enter the pxInsName of the suite ID. You can find this value in the XML document that comprises the test suite by clicking Actions > XML on the Edit Test Suite form. If you do not specify a test suite, all the Pega unit tests for the pipeline application are run.
           - To run all the Pega unit tests for an application that is associated with an access group, in the Access Group field, enter the access group.
           For more information about creating Pega unit tests, see Creating Pega unit test cases.
        2. Click Submit.
      - To run a Jenkins job that you have configured, select Jenkins.
        1. In the Job name field, enter the name of the Jenkins job (which is the name of the Jenkins deployment) that you want to run.
        2. In the Token field, enter the Jenkins authentication token.
        3. In the Parameters field, enter parameters, if any, to send to the Jenkins job. Separate multiple parameters with commas.
        4. Click Submit.
      - To add a manual step that a user must perform in the pipeline, select Manual.
        1. In the Job name field, enter text that describes the action that you want the user to take.
        2. In the Assigned to field, press the Down Arrow key and select the operator ID to assign the task to.
      - To specify that the application must meet a compliance score, select Check guardrail compliance.
        1. In the Weighted compliance score field, enter the minimum required compliance score.
        2. Click Submit.
      - To specify that all the tasks in the Application Security Checklist must be performed so that the pipeline can comply with security best practices, select Verify security checklist, and then click Submit. You must log in to the system for which this task is configured, and then mark all the tasks in the Application Security checklist as completed for the pipeline application. For more information about completing the checklist, see Preparing your application for secure deployment.
      - To start a test coverage session at the application level, select Enable test coverage, and then click Submit. Starting and stopping test coverage generates a report that identifies the executable rules in your application that are either covered or not covered by tests.
      - To stop the test coverage session, select Validate test coverage, and then click Submit. Add this task below the Enable test coverage task on the same system. You must add this task to stop a test coverage session if you used the Enable test coverage task. For more information about application-level coverage reports, see Generating an application-level test coverage report.
      - To run a Pega scenario test, select Run Pega scenario tests.
        1. In the User name field, enter the user name for the Pega Platform instance on which you are running Pega scenario tests.
        2. In the Password field, enter the Pega Platform password.
        3. From the Test Service Provider list, select the browser that you are using to run the Pega scenario tests in the pipeline.
        4. In the Provider auth name field, enter the auth name that you use to log in to the test service provider.
        5. In the Provider auth key field, enter the key for the test service provider.
        6. Click Submit.
        For more information about scenario tests, see Creating a scenario test.
      - To refresh the Application Quality dashboard on the candidate system, which provides information about the health of your application, select Refresh application quality, and then click Submit. Add this task after you have run Pega unit tests, checked guardrail compliance, run Pega scenario tests, and started and stopped test coverage.
      - Optional: To modify the Approve for production task, which is added to the stage before production and which requires a user to approve application changes before they are sent to production, do the following actions:
        1. Click the Info icon.
        2. In the Job name field, enter a name for the task.
        3. In the Assign to field, press the Down Arrow key and select the user who approves the application for production. An email is sent to this user, who can approve or reject application changes from within the email.
        4. Click Submit.
16. Click Finish.
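The Check guardrail compliance task described above is essentially a threshold gate: the deployment proceeds only if the application's weighted compliance score is at least the minimum that you configure in the task. A minimal sketch of that decision, purely for illustration — the function name and the 97.0 default are assumptions, not Deployment Manager's implementation:

```python
def guardrail_gate(weighted_score: float, minimum: float = 97.0) -> bool:
    """Return True if the deployment may pass the compliance task.

    `minimum` mirrors the "Weighted compliance score" field in the
    task configuration; 97.0 is only an example value.
    """
    return weighted_score >= minimum

# A score of 98.5 passes a 97.0 gate; 95.0 does not, and the
# deployment would pause with an error for the release manager.
```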

Modifying application details

You can modify application details, such as the product rule that defines the content of the application that moves through the pipeline.

1. In the Dev Studio footer, click Deployment Manager.
2. Click the name of the pipeline.
3. Click Actions > Application details.
4. Optional: In the Development environment field, enter the URL of the development system, which is the system on which the product rule that defines the application package that moves through the repository is located.
5. Optional: In the Version field, press the Down Arrow key and select the application version.

6. Optional: In the Product rule field, enter the product rule that defines the contents of the application.
7. Optional: In the Version field, enter the product rule version.
8. Optional: If the application depends on other applications, in the Dependencies section, add those applications.
   1. Click Add.
   2. In the Application name field, press the Down Arrow key and select the application name.
   3. In the Application version field, press the Down Arrow key and select the application version.
   4. In the Repository name field, press the Down Arrow key and select the repository that contains the production-ready artifact of the dependent application. If you want the latest artifact of the dependent application to be automatically populated, ensure that this repository is configured to support file updates.
   5. In the Artifact name field, press the Down Arrow key and select the artifact.
   For more information about dependent applications, see Listing product dependencies.

Modifying URLs and authentication profiles

You can modify the URLs of your development and candidate systems and the authentication profiles that are used to communicate between those systems and the orchestration server.

1. In the Dev Studio footer, click Deployment Manager.
2. Click the name of the pipeline.
3. Click Actions > Environment details.
4. Click Stages.
5. In the Environments field for the system, press the Down Arrow key and select the URL of the system.
6. In the Authentication field for the system, press the Down Arrow key and select the authentication profile to use to communicate from the orchestration server to the system.
7. Click Save.

Modifying development and production repositories

You can modify the development and production repositories through which the product rule that contains application contents moves through the pipeline. All the generated artifacts are archived in the development repository, and all the production-ready artifacts are archived in the production repository. You do not need to configure repositories if you are using Pega Cloud, but you can use repositories other than the default ones that are provided.

1. In the Dev Studio footer, click Deployment Manager.
2. Click the pipeline.
3. Click Actions > Environment details.
4. Click Artifact Management.
5. Do one of the following actions to select a repository:
   - If you are using Deployment Manager on premises, or on Pega Cloud with default repositories, complete the following tasks:
     1. In the Application repository section, in the Development repository field, press the Down Arrow key and select the development repository.
     2. In the Production repository field, press the Down Arrow key and select the production repository.
   - If you are using Deployment Manager on Pega Cloud and want to use repositories other than the default repositories, complete the following tasks:
     1. In the Artifact repository section, click Yes.
     2. In the Development repository field, press the Down Arrow key and select the development repository.
     3. In the Production repository field, press the Down Arrow key and select the production repository.
6. Click Save.

Specifying Jenkins server information

If you are using a Jenkins step, specify details about the Jenkins server such as its URL.

1. In the Dev Studio footer, click Deployment Manager.
2. Click the name of the pipeline.
3. Click Actions > Environment details.
4. Click External orchestration server.
5. Click the Jenkins icon.
6. Click OK.
7. In the URL field, enter the URL of the Jenkins server.
8. In the Authentication profile field, press the Down Arrow key and select the authentication profile on the orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.
9. Click Save.

Specifying merge options for branches

If you are using branches in your application, specify options for merging branches into the base application.

1. In the Dev Studio footer, click Deployment Manager.
2. Click the name of the pipeline.
3. Click Actions > Merge policy.
4. Specify whether you are using branches in your application by doing one of the following actions:
   - If you are not using branches, click the No radio button.
   - If you are using branches:
     1. Click Yes.
     2. Do one of the following actions:
        - To merge branches into the highest existing ruleset in the application, click Highest existing ruleset.
        - To merge branches into a new ruleset, click New ruleset.
     3. In the Password field, enter the password that locks the rulesets on the development system.
5. Click Save.

Modifying stages and tasks in the pipeline

You can modify the stages and the tasks that are performed in each stage of the pipeline. For example, you can skip a stage, or add tasks, such as Pega unit testing, to be run in the QA stage.

1. In the Dev Studio footer, click Deployment Manager.

2. Click the name of the pipeline. 3. Click Pipeline model. 4. Optional: In the Merge criteria pane, specify tasks that must be completed before a branch can be merged in the pipeline. 1. Click Add task. 2. Specify the task that you want to perform. To run Pega unit tests on the branches for the pipeline application or for an application that is associated with an access group before it can be merged: 1. From the Task list, select Pega unit testing. 2. Optional: To run all the Pega unit tests for an application that is associated with an access group, in the Access Group field, enter the access group. For more information about creating Pega unit tests, see Creating Pega unit test cases. 3. Click Submit. To specify that a branch must meet a compliance score before it can be merged: 1. From the Task list, select Check guardrail compliance. 2. In the Weighted compliance score field, enter the minimum required compliance score. 3. Click Submit. For more information about compliance scores, see Compliance score logic. To specify that a branch must be reviewed before it can be merged: 1. From the Task list, select Check review status. 2. Click Submit. For more information about branch reviews, see Branch reviews. 5. Optional: To start a deployment automatically when a branch is merged, click the Trigger deployment on merge check box. 6. Optional: Clear a check box for a deployment life cycle stage to skip it. 7. Optional: In the Continuous Deployment section pane, specify the tasks to be performed during each stage of the pipeline. 1. Do one of the following actions: Click a manually added task, click the More icon, and then click either Add task above or Add task below to add the task above or below the existing task. Click Add task in the stage. 2. From the Task list, select the task that you want to perform. To run Pega unit tests either for the pipeline application or for an application that is associated with an access group, select Pega unit testing. 1. 
Optionally, perform one of the following actions: To run all the Pega unit tests that are in a Pega unit suite for the pipeline application, in the Test Suite ID field, enter the pxInsName of the suite ID. You can find this value in the XML document that comprises the test suite by clicking Actions > XML on the Edit Test Suite form. If you do not specify a test suite, all the Pega unit tests for the pipeline application are run. To run all the Pega unit tests for an application that is associated with an access group, in the Access Group field, enter the access group. For more information about creating Pega unit tests, see Creating Pega unit test cases. 2. Click Submit. To run a Jenkins job that you have configured, select Jenkins. 1. In the Job name field, enter the name of the Jenkins job (which is the name of the Jenkins deployment) that you want to run. 2. In the Token field, enter the Jenkins authentication token. 3. In the Parameters field, enter parameters, if any, to send to the Jenkins job. Separate multiple parameters with a comma. 4. Click Submit. To add a manual step that a user must perform in the pipeline, select Manual. 1. In the Job name field, enter text that describes the action that you want the user to take. 2. In the Assigned to field, press the Down Arrow key and select the operator ID to assign the task to. 3. Click Submit. To specify that the application must meet a compliance score, select Check guardrail compliance. 1. In the Weighted compliance score field, enter the minimum required compliance score. 2. Click Submit. For more information about compliance scores, see Compliance score logic. To specify that all the tasks in the Application Security Checklist must be performed so that the pipeline can comply with security best practices, select Verify security checklist, and then click Submit. 
You must log in to the system for which this task is configured, and then mark all the tasks in the Application Security checklist as completed for the pipeline application. For more information about completing the checklist, see Preparing your application for secure deployment. Optional: To modify the Approve for production task, which is added to the stage before production and which you use so that a user must approve application changes before they are sent to production, do the following actions: 1. Click the Info icon. 2. In the Job name field, enter a name for the task. 3. In the Assign to field, press the Down Arrow key and select the user who approves the application for production. An email is sent to this user, who can approve or reject application changes from within the email. 4. Click Submit. To run a scenario test, select Run Pega scenario tests. 1. In the User name field, enter the user name for the Pega Platform instance on which you are running Pega scenario tests. 2. In the Password field, enter the Pega Platform password. 3. From the Test Service Provider field, select the browser that you are using to run the Pega scenario tests in the pipeline. 4. In the Provider auth name field, enter the auth name that you you use to log in to the test service provider. 5. In the Provider auth key field, enter the key for the test service provider. 6. Click Submit. For more information about scenario tests, see Creating a scenario test. To start a test coverage session at the application level, select Enable test coverage, and then click Submit. Starting and stopping test coverage generates a report that identifies the executable rules in your application that are either covered or not covered by tests. To stop the test coverage session, select Validate test coverage, and then click Submit. Add this task below the Start test coverage task on the same system. You must add this task to stop a test coverage session if you used the Enable test coverage task. 
For more information about application-level coverage reports, see Generating an application-level test coverage report.
To refresh the Application Quality dashboard, which provides information about the health of your application, on the candidate system, select Refresh application quality, and then click Submit. Add this task after you have run Pega unit tests, checked guardrail compliance, run scenario tests, and started and stopped test coverage.
8. Click Finish.
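The Jenkins task above supplies a job name, an authentication token, and an optional comma-separated parameter list. Jenkins exposes a standard remote-trigger endpoint for parameterized jobs (buildWithParameters) that accepts exactly these inputs. The sketch below shows how such a trigger URL could be assembled; the host, job name, and parameter names are illustrative, not values taken from this document.

```python
from urllib.parse import urlencode

def jenkins_trigger_url(base_url, job_name, token, params=""):
    """Build a Jenkins remote-trigger URL for a parameterized job.

    `params` mirrors the task's Parameters field: a comma-separated
    list of name=value pairs.
    """
    query = {"token": token}
    for pair in filter(None, (p.strip() for p in params.split(","))):
        name, _, value = pair.partition("=")
        query[name] = value
    return (base_url.rstrip("/") + "/job/" + job_name
            + "/buildWithParameters?" + urlencode(query))

# Hypothetical server, job, and parameters for illustration only.
url = jenkins_trigger_url("https://jenkins.example.com", "nightly-deploy",
                          "s3cret", "env=qa,notify=true")
```

Posting an HTTP request to such a URL is what starts the remote build; the job must have "Trigger builds remotely" enabled with the matching token.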

Accessing systems in your pipeline You can open the systems in your pipeline and log in to the Pega Platform instances on them. 1. In the Dev Studio footer, click Deployment Manager. 2. Click the pipeline.

3. Click the pop-out arrow for the system that you want to open.

Manually starting a deployment
Start a deployment manually if you are not using branches and are working directly in rulesets. You can also start a deployment manually if you do not want deployments to start automatically when branches are merged; in that case, you must also clear the Trigger deployment on merge check box in the pipeline configuration.
1. In the Dev Studio footer, click Deployment Manager.
2. On the landing page, click Start deployment for the appropriate pipeline.
3. In the Start deployment dialog box, start a new deployment or deploy an existing application by completing one of the following actions:
To start a deployment and deploy a new application package, do the following steps: 1. Click Generate new artifact. 2. In the Deployment name field, enter the name of the deployment. 3. Click Deploy.
To deploy an application package that is on a cloud repository, do the following steps: 1. Click Deploy an existing artifact. 2. In the Deployment name field, enter the name of the deployment. 3. In the Select a repository field, press the Down Arrow key and select the repository. 4. In the Select an artifact field, press the Down Arrow key and select the application package. 5. Click Deploy.

Starting a deployment in a branch-based environment In non-distributed, branch-based environments, you can immediately start a deployment by submitting a branch into a pipeline in the Merge Branches wizard. For more information, see Submitting a branch into a pipeline. The wizard displays the merge status of branches so that you do not need to open Deployment Manager to view it.

Starting a deployment in a distributed branch-based environment If you are using Deployment Manager in a distributed, branch-based environment and using multiple pipelines per application, first export the branch to the main development system, and then merge it. 1. On the remote development system, package the branch. For more information, see Packaging a branch. 2. Export the branch. 3. On the main development system, import the branch by using the Import wizard. For more information, see Import wizard landing page. 4. On the main development system, start a deployment by using the Merge Branches wizard. For more information, see Submitting a branch into a pipeline. The wizard displays the merge status of branches so that you do not need to open Deployment Manager to view it. If you are using one pipeline per application, you can publish a branch to start the merge. For more information, see Publishing a branch to a repository.

Publishing application changes in App Studio
You can publish application changes that you make in App Studio to the pipeline. Publishing your changes creates a patch version of the application and starts a deployment. For example, you can change a life cycle, data model, or user interface elements in a screen and submit those changes to systems in the pipeline.
When you publish an application to a stage, your rules are deployed immediately to that system. To allow stakeholders to inspect and verify changes before they are deployed to a stage, configure a manual task on the previous stage. When the pipeline runs, it pauses at a manual step that is assigned to a user, which allows stakeholders to review your changes before they approve the step and resume running the pipeline. Your pipeline should have at least a quality assurance or staging stage with a manual task so that you do not deploy changes to production that have not been approved by stakeholders.
You can submit applications to a pipeline only when there is one unlocked ruleset version in each ruleset of your application.
1. In the App Studio header, click Publish. The dialog box that appears displays the stages that are enabled in the application pipeline in Deployment Manager. The available stages are, in order, quality assurance, staging, and production. It also displays the application versions that are on each system. The version numbers are taken from the number at the end of each application deployment name in Deployment Manager. For example, if a deployment has a name of "MyNewApp:01_01_75", the dialog box displays "v75". You can view application version numbers by clicking Settings > Versions in the navigation panel.
2. Submit an application from development to quality assurance or staging in your pipeline by completing the following steps:
a. In the dialog box, click either Publish to QA or Publish to staging.
b. 
Optional: To add a comment, which is published when you submit the application, enter it in the Publish confirmation dialog box.
c. Optional: If Agile Workbench has been configured, to associate a bug or user story with the application, in the Associated User stories/Bugs field, press the Down Arrow key and select the bug or user story.
d. Click OK.
Each unlocked ruleset version in your application is locked and rolled to the next highest version, and is packaged and imported into the system. The amount of time that publishing application changes takes depends on the size of your application. A new application is also copied from the application that is defined on the pipeline in Deployment Manager. The application patch version is updated to reflect the version of the new rulesets; for example, if the ruleset versions of the patch application are 01-01-15, the application version is updated to be 01.01.15. In addition, this application is locked and cannot be unlocked. You can use this application to test specific patch versions of your application on quality assurance or staging systems. You can also use it to roll back a deployment.
3. Optional: Make changes to your application in the unlocked rulesets, which you can publish again into the pipeline. If an application is already on the system, it is overridden by the new version that you publish.
4. Optional: If you configured a manual step, request that stakeholders review and test your changes. After they communicate to you that they have completed testing, you can publish your changes to the next stage in the pipeline.
5. Publish the application to the next stage in the pipeline by clicking the link that is displayed. The name of the link is taken from the Job name field of the manual task that is defined on the stage. If you do not have a manual task defined, the application automatically moves to the next stage.
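The two naming rules described above are simple transformations: the Publish dialog derives its display version from the number at the end of the deployment name, and the patch application version mirrors the locked ruleset version. A sketch of both (the function names are ours, not Deployment Manager APIs):

```python
def display_version(deployment_name):
    """'MyNewApp:01_01_75' -> 'v75' (the digits after the last underscore)."""
    return "v" + deployment_name.rsplit("_", 1)[-1]

def application_patch_version(ruleset_version):
    """'01-01-15' -> '01.01.15' (application version mirrors the rulesets)."""
    return ruleset_version.replace("-", ".")
```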

Viewing application version information You can view details about the application versions that were submitted into a pipeline. 1. In App Studio, click Turn editing on. 2. In the Navigation panel, click Settings > Versions.

Schema changes in application packages If an application package that is to be deployed on candidate systems contains schema changes, the Pega Platform orchestration server checks the candidate system to verify that you have the required privileges to deploy the schema changes. One of the following results occurs: If you have the appropriate privileges, schema changes are automatically applied to the candidate system, the application package is deployed to the candidate system, and the pipeline continues. If you do not have the appropriate privileges, Deployment Manager generates an SQL file that lists the schema changes and sends it to your email address. It also creates a manual step, pausing the pipeline, so that you can apply the schema changes. After you complete the step, the pipeline continues. For more information about completing a step, see Completing or rejecting a manual step. You can also configure settings to automatically deploy schema changes so that you do not have to manually apply them if you do not have the required privileges. For more information, see Configuring settings to automatically deploy schema changes.

Configuring settings to automatically deploy schema changes You can configure settings to automatically deploy schema changes that are in an application package that is to be deployed on candidate systems. Configure these settings so that you do not have to apply schema changes if you do not have the privileges to deploy them. 1. On the orchestration server, in Pega Platform, set the AutoDBSchemaChanges Dynamic System Setting to true to enable schema changes at the system level. 1. In Dev Studio, search for AutoDBSchemaChanges. 2. On the Settings tab, in the Value field, enter true. 3. Click Save. 2. Add the SchemaImport privilege to your access role to enable schema changes at the user level. For more information, see Specifying privileges for an Access or Role to Object rule. These settings are applied sequentially. If the AutoDBSchemaChanges Dynamic System Setting is set to false, you cannot deploy schema changes, even if you have the SchemaImport privilege.
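The two settings combine exactly as described: the system-level Dynamic System Setting is evaluated first, and the user-level SchemaImport privilege matters only when that setting is true. The resulting truth table, as a sketch:

```python
def can_auto_deploy_schema(auto_db_schema_changes, has_schema_import_privilege):
    """Whether schema changes in a package can be deployed automatically."""
    # The system-level AutoDBSchemaChanges setting gates everything: when it
    # is false, schema changes cannot be deployed even with the privilege.
    if not auto_db_schema_changes:
        return False
    # Otherwise the user-level SchemaImport privilege decides.
    return has_schema_import_privilege
```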

Completing or rejecting a manual step in a deployment If a manual step is configured on a stage, the deployment pauses when it reaches the step, and you can either complete it or reject it. For example, if a user was assigned a task and completed it, you can complete the task to continue the deployment. Deployment Manager also sends you an email when there is a manual step in the pipeline. You can complete or reject a step either within the pipeline or through email. Deployment Manager also generates a manual step if there are schema changes in the application package that the release manager must apply. For more information, see Schema changes in application packages. To complete or reject a manual step within the deployment, do the following steps: 1. In the Dev Studio footer, click Deployment Manager. 2. Click a pipeline. 3. Click one of the following links: Complete: Resolve the task so that the deployment continues through the pipeline. Reject: Reject the task so that the deployment does not proceed. To complete or reject a manual step from within an email, click either Accept or Reject.

Managing aged updates An aged update is a rule or data instance in an application package that is older than an instance that is on a system to which you want to deploy the application package. By being able to import aged updates, skip the import, or manually deploy your application changes, you now have more flexibility in determining the rules that you want in your application and how you want to deploy them. For example, you can update a Dynamic System Setting on a quality assurance system, which has an application package that contains the older instance of the Dynamic System Setting. Before Deployment Manager deploys the package, the system detects that the version of the Dynamic System Setting on the system is newer than the version in the package and creates a manual step in the pipeline. To import aged updates: 1. In the Dev Studio footer, click Deployment Manager. 2. Click the pipeline. 3. Optional: Click View aged updates to view a list of the rules and data instances, which are in the application package, that are older than the instances that are on the system. 4. Click the More icon and select one of the following options: Click Overwrite aged updates to import the older rule and data instances that are in the application package into the system, which overwrites the newer versions that are on the system. Click Skip aged updates to skip the import. Click Deploy manually and resume to manually deploy the package from the Import wizard on the system. Deployment Manager does not run the Deploy step on the stage.
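Per the definition above, an instance is "aged" when the copy inside the application package is older than the copy already present on the target system. A minimal sketch of that comparison, assuming instances can be compared by a last-updated timestamp (the instance names and dates are invented; Deployment Manager performs this check internally):

```python
from datetime import datetime

def aged_updates(package, system):
    """Return instances whose packaged copy is older than the copy
    that already exists on the target system."""
    return [name for name, packaged_at in package.items()
            if name in system and packaged_at < system[name]]

# The QA system's copy of the Dynamic System Setting was updated after the
# package was built, so the packaged copy is flagged as an aged update.
pkg = {"MyDSS": datetime(2018, 1, 10), "MyRule": datetime(2018, 3, 1)}
qa = {"MyDSS": datetime(2018, 2, 5), "MyRule": datetime(2018, 2, 1)}
```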

Pausing a deployment

When you pause a deployment, the pipeline completes the task that it is running, and stops the deployment at the next step. To pause a deployment: 1. In the Dev Studio footer, click Deployment Manager. 2. Click the pipeline. 3. Click Pause.
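The pause semantics above take effect between tasks: the running task always finishes, and the deployment stops before the next task starts. A sketch of that control loop (the task names are illustrative):

```python
def run_tasks(tasks, is_paused):
    """Run (name, task) pairs in order; a pause lets the current task
    finish and stops the run before the next task starts."""
    completed = []
    for name, task in tasks:
        task()                    # the running task always completes
        completed.append(name)
        if is_paused():           # pause is honored only between tasks
            break
    return completed

# A pause requested while "deploy" runs still lets it finish,
# and the run stops before "test" starts.
ran = run_tasks([("deploy", lambda: None), ("test", lambda: None)],
                lambda: True)
```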

Stopping a deployment To stop a deployment: 1. In the Dev Studio footer, click Deployment Manager. 2. Click the pipeline. 3. Click the More icon, and then click Abort.

Performing actions on a deployment that has errors If a deployment has errors, the pipeline stops processing on it. You can perform actions on it, such as rolling back the deployment or skipping the step on which the error occurred. 1. In the Dev Studio footer, click Deployment Manager. 2. Click a pipeline. 3. Click the More icon, and then click one of the following options: Resume from current task - Resume running the pipeline from the task. Skip current task and continue - Skip the step and continue running the pipeline. Rollback - Roll back to an earlier deployment. Abort - Stop running the pipeline.

Diagnosing a pipeline
You can diagnose your pipeline to verify that it is configured properly, for example, that the target application and product rule are in the development environment, that connectivity between systems and repositories is working, and that premerge settings are correctly configured.
1. In the Dev Studio footer, click Deployment Manager.
2. Click a pipeline.
3. Click Actions > Diagnose pipeline.
4. In the Diagnose application pipeline dialog box, review the errors, if any.
5. Optional: To view troubleshooting tips about errors, hover your mouse over the Troubleshooting tips link.

If the RMURL Dynamic System Setting is not configured, Deployment Manager displays a message that you can disregard if you are not using branches, because you do not need to configure the Dynamic System Setting.

Viewing merge requests
You can view the status of the merge requests for a pipeline. For example, you can see whether a branch was merged in a deployment and when it was merged.
1. In the Dev Studio footer, click Deployment Manager.
2. Click a pipeline.
3. In the Development stage, click X Merges in queue to view all the branches that are in the queue or for which a merge is in progress.
4. In the Merge requests ready for deployment dialog box, click View all merge requests to view all the branches that are merged into the pipeline.

Viewing deployment logs View logs for a deployment to see the completion status of operations, for example, when a deployment is moved to a new stage. You can change the logging level to control which events are displayed in the log. For example, you can change logging levels of your deployment from INFO to DEBUG for troubleshooting purposes. For more information, see Logging Level Settings tool. 1. In the Dev Studio footer, click Deployment Manager. 2. Click a pipeline. 3. Perform one of the following actions: To view the log for the current deployment, click the More icon, and then click View logs. To view the log for a previous deployment, expand the Deployment History pane and click Logs for the appropriate deployment.

Viewing deployment reports
Deployment reports provide information about a specific deployment. You can view information such as the number of tasks that you configured on a deployment that have been completed and when each task started and ended.
1. In the Dev Studio footer, click Deployment Manager.
2. Click a pipeline.
3. Perform one of the following actions:
To view the report for the current deployment, click the More icon, and then click View report.
To view the report for a previous deployment, expand the Deployment History pane and click Reports for the appropriate deployment.

Viewing reports for all deployments
Reports provide a variety of information about all the deployments in your pipeline. You can view the following key performance indicators (KPIs):
Deployment Success - Percentage of deployments that are successfully deployed to production
Deployment Frequency - Frequency of new deployments to production
Deployment Speed - Average time taken to deploy to production
Start frequency - Frequency at which new deployments are triggered

Failure rate - Average number of failures per deployment
Merges per day - Average number of branches that are successfully merged per day
To view reports, do the following tasks:
1. In the Dev Studio footer, click Deployment Manager.
2. Click a pipeline.
3. Click Actions > View report.
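As an illustration, the Deployment Success KPI listed above is the percentage of production deployments that succeeded. A sketch of that calculation over a hypothetical deployment history (the record shape is ours, not a Deployment Manager data structure):

```python
def deployment_success(deployments):
    """Percentage of deployments to production that succeeded."""
    prod = [d for d in deployments if d["target"] == "production"]
    if not prod:
        return 0.0
    return 100.0 * sum(d["succeeded"] for d in prod) / len(prod)

history = [
    {"target": "production", "succeeded": True},
    {"target": "production", "succeeded": True},
    {"target": "production", "succeeded": False},
    {"target": "staging", "succeeded": True},  # not a production deployment
]
```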

Deleting an application pipeline When you delete a pipeline, its associated application packages are not removed from the repositories that the pipeline is configured to use. 1. In the Dev Studio footer, click Deployment Manager. 2. Click the Delete icon for the pipeline that you want to delete. 3. Click Submit.

Viewing, downloading, and deleting application packages in repositories
You can view, download, and delete application packages in repositories that are on the orchestration server. If you are using Deployment Manager on Pega Cloud, application packages that you have deployed to cloud repositories are stored on Pega Cloud. To manage your cloud storage space, you can download and permanently delete the packages.
1. In the Dev Studio footer, click Deployment Manager.
2. Click the pipeline for which you want to download or delete packages.
3. Click Actions > Browse artifacts.
4. Click either Development Repository or Production Repository.
5. To download an application package, click the package, and then save it to the appropriate location.
6. To delete packages, select the check boxes for the packages that you want to delete, and then click Delete.

Deployment Manager 3.4.x
Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your Pega applications from within Pega Platform™. You can create a standardized deployment process so that you can deploy predictable, high-quality releases without using third-party tools. With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging, application package generation, artifact management, and package promotion to different stages in the workflow.
Deployment Manager 3.4.x is supported on Pega 7.4. You can download it for Pega Platform from the Deployment Manager Pega Exchange page. Each customer VPC on Pega Cloud has a dedicated orchestrator instance to use Deployment Manager; you do not need to install Deployment Manager to use it with your Pega Cloud application.
This document describes the features of the latest version of Deployment Manager 3.4.x. For more information, see the following articles:
Deployment Manager release notes
Deployment Manager architecture and workflows
Best practices for using branches with Deployment Manager
Creating custom repository types for Deployment Manager
Installing, upgrading, and configuring Deployment Manager 3.4.x
Using Deployment Manager 3.4.x

Installing, upgrading, and configuring Deployment Manager 3.4.x
Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks and allow you to quickly deploy high-quality software to production. This document describes the features for the latest version of Deployment Manager 3.4.x. See the following topics for more information about installing and configuring Deployment Manager:
Step 1: Installing Deployment Manager
Step 2: Upgrading to Deployment Manager 3.4.x
Step 3: Configuring systems in the pipeline
Step 4: Configuring the development system for branch-based development (optional)
Step 5: Configuring additional settings

For information about using Deployment Manager, see Using Deployment Manager 3.4.x.

Step 1: Installing Deployment Manager
Each customer virtual private cloud (VPC) on Pega Cloud has a dedicated orchestrator instance to use Deployment Manager. You do not need to install Deployment Manager to use it with your Pega Cloud application. If you are upgrading from an earlier release to Deployment Manager 3.4.x, contact Pegasystems® Global Customer Support (GCS) to request a new version. If you are upgrading from Deployment Manager 3.2.1, after you import files on premises or after Deployment Manager 3.4.x is deployed on Pega Cloud, finish the upgrade immediately so that your pipelines work in Deployment Manager 3.4.x.
If you are using Deployment Manager on premises, complete the following steps to install it.
1. Install Pega 7.4 on all systems in the CI/CD pipeline.
2. Browse to the Deployment Manager Pega Exchange page, and then download the DeploymentManager03.04.0x.zip file for your version of Deployment Manager to your local disk on each system.
3. Use the Import wizard to import files into the appropriate systems. For more information about the Import wizard, see Importing a file by using the Import wizard.
4. On the orchestration server, import the following files: PegaDevOpsFoundation_03.04.0x.zip and PegaDeploymentManager_03.04.0x.zip.
5. On the development, QA, staging, and production systems, import the PegaDevOpsFoundation_03.04.0x.zip file.
6. Optional: If you are using distributed development, on the remote development system, import the PegaDevOpsFoundation_03.04.0x.zip file.
7. Do one of the following actions:
1. If you are upgrading to Deployment Manager 3.4.x, perform the upgrade. For more information, see Upgrading to Deployment Manager 3.4.x.
2. If you are not upgrading Deployment Manager 3.4.x, continue the installation procedure. For more information, see Step 3b: Configuring the orchestration server.

Step 2: Upgrading to Deployment Manager 3.4.x
Before you upgrade, ensure that no deployments are running, have errors, or are paused. To upgrade to Deployment Manager 3.4.x either on Pega Cloud or on premises, perform the following steps:
1. Enable default operators and configure authentication profiles on the orchestration server and candidate systems. For more information, see Step 3a: Configuring authentication profiles on the orchestration server and candidate systems.
2. On each candidate system, add the PegaDevOpsFoundation application to your application stack.
1. In the Designer Studio header, click the name of your application, and then click Definition.
2. In the Built on application section, click Add application.
3. In the Name field, press the Down Arrow key and select PegaDevOpsFoundation.
4. In the Version field, press the Down Arrow key and select the version of Deployment Manager that you are using.
5. Click Save.
If you are upgrading from Deployment Manager 3.2.1, you do not need to do the rest of the steps in this procedure or the required steps in the remainder of this document. If you are upgrading from earlier releases and have pipelines configured, complete this procedure.
3. On the orchestration server, log in to the release management application.
4. In Designer Studio, search for pxUpdatePipeline, and then click the activity in the dialog box that displays the results.
5. Click Actions > Run.
6. In the dialog box that is displayed, click Run.
7. Modify the current release management application so that it is built on PegaDeploymentManager:03-04-01.
1. In the Designer Studio header, click the name of your application, and then click Definition.
2. In the Edit Application rule form, on the Definition tab, in the Built on application section, for the PegaDeploymentManager application, press the Down Arrow key and select 03.04.01.
3. Click Save.
8. Merge rulesets to the PipelineData ruleset.
1. Click Designer Studio > System > Refactor > Rulesets.
2. Click Copy/Merge RuleSet.
3. Click the Merge Source RuleSet(s) to Target RuleSet radio button.
4. Click the RuleSet Versions radio button.
5. In the Available Source Ruleset(s) section, select the first open ruleset version that appears in the list, and then click the Move icon. All your current pipelines are stored in the first open ruleset. If you modified this ruleset after you created the application, select all the ruleset versions that contain pipeline data.
9. In the target RuleSet/Information section, in the Name field, press the Down Arrow key and select Pipeline Data.
10. In the Version field, enter 01-01-01.
11. For the Delete Source RuleSet(s) upon completion of merge? option, click No.
12. Click Next.
13. Click Merge to merge your pipelines to the PipelineData:01-01-01 ruleset.
14. Click Done. Your pipelines are migrated to the Pega Deployment Manager application.
15. Log out of the orchestration server and log back in to it with the DMReleaseAdmin operator ID and the password that you specified for it.

For backup purposes, pipelines are still visible in your previous release management application. However, you should not create deployments with this application, because deployments might not work correctly. You do not need to perform any of the required steps in the remainder of this document.

Step 3: Configuring systems in the pipeline
Complete the following steps to set up a pipeline for all supported CI/CD workflows. If you are using branches, you must configure additional settings after you perform the required steps.
1. Step 3a: Configuring authentication profiles on the orchestration server and candidate systems
2. Step 3b: Configuring the orchestration server
3. Step 3c: Configuring candidate systems
4. Step 3d: Creating repositories on the orchestration server and candidate systems

Step 3a: Configuring authentication profiles on the orchestration server and candidate systems When you install Deployment Manager on all the systems in your pipeline, default applications, operator IDs, and authentication profiles that communicate between the orchestration server and candidate systems are also installed. On the orchestration server, the following items are installed: The Pega Deployment Manager application. The DMReleaseAdmin operator ID, which release managers use to log in to the Pega Deployment Manager application. You must enable this operator ID and specify its password. The DMAppAdmin authentication profile. You must update this authentication profile to use the password that you specified for the DMAppAdmin operator ID, which is configured on all the candidate systems. On all the candidate systems, the following items are installed: The PegaDevOpsFoundation application.

The DMAppAdmin operator ID, which points to the PegaDevOpsFoundation application. You must enable this operator ID and specify its password. The DMReleaseAdmin authentication profile. You must update this authentication profile to use the password that you specified for the DMReleaseAdmin operator ID, which is configured on the orchestration server. The DMReleaseAdmin and DMAppAdmin operator IDs do not have default passwords. Configure the default authentication profile by doing these steps: 1. On the orchestration server, enable the DMReleaseAdmin operator ID and specify its password. 1. Log in to the orchestration server with [email protected]/install. 2. In Designer Studio, click Records > Organization > Operator ID, and then click DMReleaseAdmin. 3. In the Designer Studio header, click the operator ID initials, and then click Operator. 4. On the Edit Operator ID rule form, click the Security tab. 5. Clear the Disable Operator check box. 6. Click Save. 7. Click Update password. 8. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then click Submit. 9. Optional: Clear the Force password change on next login check box if you do not want to change the password for the DMReleaseAdmin operator ID the next time that you log in. 10. Log out of the orchestration server. 2. On each candidate system, update the DMReleaseAdmin authentication profile to use the new password. All candidate systems use this authentication profile to communicate with the orchestration server about the status of the tasks in the pipeline. 1. Log in to each candidate system with the DMReleaseAdmin user name and the password that you specified. 2. Click Records > Security > Authentication Profile. 3. Click DMReleaseAdmin. 4. On the Edit Authentication Profile rule form, click Set password. 5. In the Password dialog box, enter the password, and then click Submit. 6. Save the rule form. 3. 
On each candidate system, which includes the development, QA, staging, and production systems, enable the DMAppAdmin operator ID. If you want to create your own operator IDs, ensure that they point to the PegaDevOpsFoundation application. 1. Log in to each candidate system with [email protected]/install. 2. In Designer Studio, click Records > Organization > Operator ID, and then click DMAppAdmin. 3. In the Designer Studio header, click the operator ID initials, and then click Operator. 4. On the Edit Operator ID rule form, click the Security tab. 5. Clear the Disable Operator check box. 6. Click Save. 7. Click Update password. 8. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then click Submit. 9. Optional: Clear the Force password change on next login check box if you do not want to change the password for the DMAppAdmin operator ID the next time that you log in. 10. Log out of each candidate system. 4. On the orchestration server, modify the DMAppAdmin authentication profile to use the new password. The orchestration server uses this authentication profile to communicate with candidate systems so that it can run tasks in the pipeline. 1. Log in to the orchestration server with the DMAppAdmin user name and the password that you specified. 2. Click Records > Security > Authentication Profile. 3. Click DMAppAdmin. 4. On the Edit Authentication Profile rule form, click Set password. 5. In the Password dialog box, enter the password, and then click Submit. 6. Save the rule form. 5. Do one of the following actions: 1. If you are upgrading to Deployment Manager 3.4.x, resume the upgrade procedure from step 2. For more information, see Upgrading to Deployment Manager 3.4.x. 2. If you are not upgrading, continue the installation procedure. For more information, see Step 3b: Configuring the orchestration server.

Step 3b: Configuring the orchestration server The orchestration server is the system on which release managers configure and manage CI/CD pipelines. 1. Optional: If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd service packages. 1. Click Records > Integration-Resources > Service Package. 2. Click api. 3. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 4. Click Records > Integration-Resources > Service Package. 5. Click cicd. 6. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 2. Configure the candidate systems in your pipeline. For more information, see Step 3c: Configuring candidate systems.

Step 3c: Configuring candidate systems Configure each system that is used for the development, QA, staging, or production stage in the pipeline. 1. On each candidate system, add the PegaDevOpsFoundation application to your application stack. 1. In the Designer Studio header, click the name of your application, and then click Definition. 2. In the Built on application section, click Add application. 3. In the Name field, press the Down Arrow key and select PegaDevOpsFoundation. 4. In the Version field, press the Down Arrow key and select the version of Deployment Manager that you are using. 5. Click Save. 2. Optional: If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd service packages. 1. Click Records > Integration-Resources > Service Package. 2. Click api. 3. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 4. Click Records > Integration-Resources > Service Package. 5. Click cicd. 6. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 3. Optional: If you want to use a product rule other than the default product rule that is created by the New Application wizard, on the development system, create a product rule that defines the application package that will be moved through repositories in the pipeline. For more information, see Product rules: Completing the Create, Save As, or Specialization form.

When you use the New Application wizard, a default product rule is created that has the same name as your application. 4. Configure repositories through which to move artifacts in your pipeline. For more information, see Step 3d: Creating repositories on the orchestration server and candidate systems.

Step 3d: Creating repositories on the orchestration server and candidate systems If you are using Deployment Manager on premises, create repositories on the orchestration server and all candidate systems to move your application between all the systems in the pipeline. You can use a supported repository type that is provided in Pega Platform™, or you can create a custom repository type. If you are using Deployment Manager on Pega Cloud, default repositories are provided. If you want to use repositories other than the ones provided, you can create your own. For more information about creating a supported repository type, see Creating a repository connection. For more information about creating a custom repository type, see Creating custom repository types for Deployment Manager. When you create repositories, note the following information: The Pega repository type is not supported. Ensure that each repository has the same name on all systems. When you create JFrog Artifactory repositories, ensure that you create a Generic package type in JFrog Artifactory. Also, when you create the authentication profile for the repository on Pega Platform, you must select the Preemptive authentication check box. After you configure a pipeline, you can verify that the repository connects to the URL of the development and production repositories by clicking Test Connectivity on the Repository rule form.
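Because Deployment Manager resolves repositories by name on every system, a quick pre-flight check of your configuration can catch naming mismatches before a deployment fails. A minimal sketch, assuming you can list the repository names configured on each system; the function and data shape are this example's own, not a Deployment Manager API.

```python
# Illustrative sketch: report repository names that are missing from at
# least one system, since each repository must have the same name on the
# orchestration server and all candidate systems.
def inconsistent_repositories(repos_by_system):
    """repos_by_system maps a system name to the set of repository names on it.

    Returns a dict of repository name -> list of systems missing that name.
    """
    all_names = set().union(*repos_by_system.values())
    return {
        name: [sys for sys, names in repos_by_system.items() if name not in names]
        for name in sorted(all_names)
        if any(name not in names for names in repos_by_system.values())
    }
```

An empty result means every repository name exists on every system; anything else points at the system where the repository still needs to be created.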

Step 4: Configuring the development system for branch-based development (optional) After you configure the orchestration server and all your candidate systems, configure additional settings so that you can use pipelines if you are using branches in a distributed or nondistributed branch-based environment. You must configure the development system to create a pipeline in a branch-based environment. 1. On the development system (in a nondistributed environment) or the main development system (in a distributed environment), create a Dynamic System Setting to define the URL of the orchestration server, even if the orchestration server and the development system are the same system. 1. Click Create > Records > SysAdmin > Dynamic System Settings. 2. In the Owning Ruleset field, enter Pega-DevOps-Foundation. 3. In the Setting Purpose field, enter RMURL. 4. Click Create and open. 5. On the Settings tab, in the Value field, enter the URL of the orchestration server. Use this format: http://hostname:port/prweb/PRRestService. 6. Click Save. 2. Complete the following steps on either the development system (in a nondistributed environment) or the remote development system (in a distributed environment). 1. Use the New Application wizard to create a new development application that developers will log in to. This application allows development teams to maintain a list of development branches without modifying the definition of the target application. 2. Add the target application of the pipeline as a built-on application layer of the development application. 1. Log in to the application. 2. In the Designer Studio header, click the name of your application, and then click Definition. 3. In the Built-on application section, click Add application. 4. In the Name field, press the Down Arrow key and select the name of the target application. 5. In the Version field, press the Down Arrow key and select the target application version. 6. Click Save. 3. Lock the application rulesets to prevent developers from making changes to rules after branches have been merged. 1. In the Designer Studio header, click the name of your application, and then click Definition. 2. In the Application rulesets section, click the Open icon for each ruleset that you want to lock. 3. Click Lock and Save. 4. Optional: It is recommended that you merge branches by using the Merge Branch wizard. However, you can publish a branch to the remote development system to start a deployment. Publishing a branch when you have multiple pipelines per application is not supported. 1. In Designer Studio, enable Pega repository types. For more information, see Enabling the Pega repository type. 2. Create a new Pega repository type. For more information, see Creating a repository connection. Ensure that you do the following tasks: In the Host ID field, enter the URL of the development system. The default access group of the operator that is configured for the authentication profile of this repository should point to the pipeline application on the development system (in a nondistributed environment) or the main development system (in a distributed environment).
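The RMURL dynamic system setting must follow the documented format, http://hostname:port/prweb/PRRestService. A small sketch that assembles the value; the function name is hypothetical, for illustration only.

```python
# Illustrative sketch: build the RMURL value in the required format.
def build_rmurl(hostname, port, scheme="http"):
    """Return the orchestration server URL for the RMURL dynamic system setting."""
    if not hostname or not (0 < port < 65536):
        raise ValueError("hostname and a valid port are required")
    return f"{scheme}://{hostname}:{port}/prweb/PRRestService"
```

For example, `build_rmurl("orchestrator.example.com", 8080)` yields the value to enter on the Settings tab of the dynamic system setting.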

Step 5: Configuring additional settings As part of your pipeline, you can optionally send email notifications to users or configure Jenkins if you are using a Jenkins task. See the following topics for more information: Configuring email notifications on the orchestration server Configuring Jenkins

Configuring email notifications on the orchestration server You can optionally configure email notifications on the orchestration server. For example, users can receive emails when pre-merge criteria are not met and the system cannot create a deployment. To configure the orchestration server to send emails, complete the following steps:
1. Use the Email wizard to configure an email account and listener by clicking Designer Studio > Integration > Email > Email Wizard. This email account sends notifications to users when events occur, for example, if there are merge conflicts. For detailed information, see the procedure for “Configuring an email account that receives email and creates or manages work” in Entering email information in the Email wizard.
2. From the What would you like to do? list, select Receive an email and create/manage a work object.
3. From the What is the class of your work type? list, select Pega-Pipeline-CD.
4. From the What is your starting flow name? list, select NewWork.
5. From the What is your organization? list, select the organization that is associated with the work item.
6. In the What Ruleset? field, select the ruleset that contains the generated email service rule. This ruleset applies to the work class.
7. In the What RuleSet Version? field, select the version of the ruleset for the generated email service rule.
8. Click Next to configure the email listener.
9. In the Email Account Name field, enter Pega-Pipeline-CD, which is the name of the email account that the listener references for incoming and outgoing email.
10. In the Email Listener Name field, enter the name of the email listener. Begin the name with a letter, and use only letters, numbers, the ampersand character (&), and hyphens.
11. In the Folder Name field, enter the name of the email folder that the listener monitors. Typically, this folder is INBOX.
12. In the Service Package field, enter the name of the service package to be deployed. Begin the name with a letter, and use only letters, numbers, and hyphens to form an identifier.
13. In the Service Class field, enter the service class name.
14. In the Requestor User ID field, press the Down Arrow key, and select the operator ID of the release manager operator.
15. In the Requestor Password field, enter the password for the release manager operator.
16. In the Requestor User ID field, enter the operator ID that the email service uses when it runs.
17. In the Password field, enter the password for the operator ID.
18. Click Next to continue the wizard and configure the service package. For more information, see Configuring the service package in the Email wizard.
19. After you complete the wizard, enable the listener that you created in the Email Wizard.
For more information, see Starting a listener.

Email notifications Emails are also preconfigured with information about each notification type. For example, when a deployment failure occurs, the email that is sent provides information, such as the pipeline name and URL of the system on which the deployment failure occurred. Preconfigured emails are sent in the following scenarios: Deployment start – When a deployment starts, an email is sent to the release manager and, if you are using branches, to the operator who started a deployment. Deployment step failure – If any step in the deployment process is unsuccessful, the deployment pauses. An email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. Deployment step completion – When a step in a deployment process is completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. Deployment completion – When a deployment is successfully completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. Stage completion – When a stage in a deployment process is completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. Stage failure – If a stage fails to be completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. Manual tasks requiring approval – When a manual task requires email approval from a user, an email is sent to the user, who can approve or reject the task from the email. Stopped deployment – When a deployment is stopped, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. 
Pega unit testing failure – If a Pega unit test cannot successfully run on a step in the deployment, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. Pega unit testing success – If a Pega unit test is successfully run on a step in the deployment, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. Schema changes required – If you do not have the required schema privileges to deploy the changes on application packages that require those changes, an email is sent to the operator who started the deployment. Guardrail compliance score failure – If you are using the Check guardrail compliance task, and the compliance score is less than the score that is specified in the task, an email with the score is sent to the release manager. Guardrail compliance score success – If you are using the Check guardrail compliance task, and the task is successful, an email with the score is sent to the release manager. Approve for production – If you are using the Approve for production task, which requires approval from a user before application changes are deployed to production, an email is sent to the user. The user can reject or approve the changes. Verify security checklist failure – If you are using the Verify security checklist task, which requires that all tasks be completed in the Application Security Checklist to ensure that the pipeline complies with security best practices, the release manager receives an email. Verify security checklist success – If you are using the Verify security checklist task, which requires that all tasks be completed in the Application Security Checklist to ensure that the pipeline complies with security best practices, the release manager receives an email.
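The recipient rules above can be summarized as: most deployment and stage events notify the release manager plus, in branch-based pipelines, the operator who started the merge, while approval and manual tasks notify the assigned user. A sketch of that routing follows; the event names and function are invented for this example and are not Deployment Manager rule names.

```python
# Illustrative sketch of the notification recipient rules described above.
RELEASE_MANAGER_EVENTS = {
    "deployment_start", "step_failure", "step_completion",
    "deployment_completion", "stage_completion", "stage_failure",
    "stopped_deployment", "pegaunit_failure", "pegaunit_success",
}

def recipients(event, release_manager, merge_initiator=None, assignee=None):
    """Return who is emailed for a notification event."""
    if event in RELEASE_MANAGER_EVENTS:
        # Branch-based pipelines also notify whoever started the merge.
        to = [release_manager]
        if merge_initiator:
            to.append(merge_initiator)
        return to
    if event == "schema_changes_required":
        # Sent to the operator who started the deployment.
        return [merge_initiator or release_manager]
    if event in {"manual_task", "approve_for_production"}:
        return [assignee]
    # Guardrail compliance and security checklist results go to the release manager.
    return [release_manager]
```
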

Configuring Jenkins If you are using a Jenkins task in your pipeline, configure Jenkins so that it can communicate with the orchestration server. 1. On the orchestration server, create an authentication profile that uses Jenkins credentials. 1. Click Create > Security > Authentication Profile. 2. Enter a name, and then click Create and open. 3. In the User name field, enter the user name of the Jenkins user. 4. Click Set password, enter the Jenkins password, and then click Submit. 5. Select the Preemptive authentication check box. 6. Click Save. 2. Because the Jenkins task does not support Cross-Site Request Forgery (CSRF), disable it by completing the following steps: 1. In Jenkins, click Manage Jenkins. 2. Click Configure Global Security. 3. In the CSRF Protection section, clear the Prevent Cross Site Request Forgery exploits check box. 4. Click Save. 3. Install the Post build task plug-in. 4. Install the curl command on the Jenkins server. 5. Create a new freestyle project. 6. On the General tab, select the This project is parameterized check box. 7. Add the BuildID and CallBackURL parameters. 1. Click Add parameter, and then select String parameter. 2. In the String field, enter BuildID. 3. Click Add parameter, and then select String parameter.

4. In the String field, enter CallBackURL.
8. In the Build Triggers section, select the Trigger builds remotely check box.
9. In the Authentication Token field, select the token that you want to use when you start Jenkins jobs remotely.
10. In the Build Environment section, select the Use Secret text(s) or file(s) check box.
11. In the Bindings section, do the following actions: 1. Click Add, and then select User name and password (conjoined). 2. In the Variable field, enter RMCREDENTIALS. 3. In the Credentials field, click Specific credentials. 4. Click Add, and then select Jenkins. 5. In the Add credentials dialog box, in the Username field, enter the operator ID of the release manager operator that is configured on the orchestration server. 6. In the Password field, enter the password. 7. Click Save.
12. In the Post-Build Actions section, do one of the following actions, depending on your operating system:

If Jenkins is running on Microsoft Windows, add the following post-build tasks: 1. Click Add post-build action, and then select Post build task. 2. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example, BUILD FAILURE. 3. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%" 4. Click Add another task. 5. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example, BUILD SUCCESS. 6. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%" 7. Click Save.
If Jenkins is running on UNIX or Linux, add the following post-build tasks. Use the dollar sign ($) instead of the percent sign (%) to access the environment variables. 1. Click Add post-build action, and then select Post build task. 2. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example, BUILD FAILURE. 3. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"$BuildID\"}" "$CallBackURL" 4. Click Add another task. 5. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example, BUILD SUCCESS. 6. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"$BuildID\"}" "$CallBackURL" 7. Click Save.
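The curl commands simply POST a small JSON document back to Deployment Manager with preemptive basic authentication. An equivalent sketch in Python is shown below, using the same Jenkins environment variables (JOB_NAME, BUILD_NUMBER, BuildID, CallBackURL, RMCREDENTIALS); the helper function itself is illustrative, not part of either product.

```python
# Illustrative sketch: assemble the callback request that the curl
# post-build tasks send to Deployment Manager.
import base64
import json
import os
import urllib.request

def build_callback_request(status, environ=os.environ):
    """Build a POST request reporting a build result ('SUCCESS' or 'FAIL')."""
    payload = {
        "jobName": environ.get("JOB_NAME", ""),
        "buildNumber": environ.get("BUILD_NUMBER", ""),
        "pyStatusValue": status,
        "pyID": environ.get("BuildID", ""),
    }
    credentials = environ.get("RMCREDENTIALS", "")  # "username:password"
    request = urllib.request.Request(
        environ.get("CallBackURL", ""),
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Same preemptive basic authentication that curl --user performs.
            "Authorization": "Basic "
            + base64.b64encode(credentials.encode("utf-8")).decode("ascii"),
        },
        method="POST",
    )
    return request, payload

# Sending is one call: urllib.request.urlopen(request)
```
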

Using Deployment Manager 3.4.x Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks and allow you to quickly deploy high-quality software to production. On the orchestration server, release managers use the DevOps landing page to configure CI/CD pipelines for their Pega Platform™ applications. The landing page displays all the running and queued application deployments, branches that are to be merged, and reports that provide information about your DevOps environment such as key performance indicators (KPIs). This document describes the features for the latest version of Deployment Manager 3.4.x. See the following topics for more information about using Deployment Manager to configure and use CI/CD pipelines: Configuring an application pipeline Manually starting a deployment Starting a deployment in a branch-based environment Starting a deployment in a distributed, branch-based environment Completing or rejecting a manual step in a deployment Managing aged updates Schema changes in application packages Pausing a deployment Performing actions on a deployment with errors Diagnosing a pipeline Viewing branch status Viewing deployment logs Viewing deployment reports Viewing reports for all deployments Deleting an application pipeline Viewing, downloading and deleting application packages in repositories

Configuring an application pipeline When you add a pipeline, you specify merge criteria and configure stages and steps in the continuous delivery workflow. For example, you can specify that a branch must be peer-reviewed before it can be merged, and you can specify that Pega unit tests must be run after a branch is merged and is in the QA stage of the pipeline. You can create multiple pipelines for one version of an application. For example, you can use multiple pipelines in the following scenarios: To move a deployment to production separately from the rest of the pipeline. You can then create a pipeline that has only a production stage or development and production stages. To use parallel development and hotfix life cycles for your application.

Adding a pipeline on Pega Cloud

To add a pipeline on Pega Cloud, perform the following steps: 1. In the Designer Studio footer, click Deployment Manager. 2. Click Add pipeline. 3. Specify the details of the application for which you are creating the pipeline. 1. Optional: If you want to change the URL of your development system, which is populated by default with your development system URL, in the Development environment field, press the Down Arrow key and select the URL. This is the system on which the product rule that defines the application package that moves through the repository is located. 2. In the Application field, press the Down Arrow key and select the name of the application. 3. In the Version field, press the Down Arrow key and select the application version. 4. In the Access group field, press the Down Arrow key and select the access group for which pipeline tasks are run. This access group must be present on all the candidate systems and have at least the sysadmin4 role. 5. In the Pipeline name field, enter the name of the pipeline. This name must be unique. 4. Click Create. The system adds tasks, which you cannot delete, to the pipeline that are required to successfully run a workflow, for example, Deploy and Generate Artifact. For Pega Cloud, the system also adds mandatory tasks that must be run on the pipeline, for example, the Check guardrail compliance task and Verify security checklist task. 5. Optional: Add tasks that you want to perform on your pipeline, such as Pega unit testing. For more information, see Modifying stages and tasks in the pipeline.

Adding a pipeline on premises To add a pipeline on premises, complete the following steps: 1. In the Designer Studio footer, click Deployment Manager. 2. Click Add pipeline. 3. Specify the details of the application for which you are creating the pipeline. 1. In the Development environment field, enter the URL of the development system. This is the system on which the product rule that defines the application package that moves through the repository is located. 2. In the Application field, press the Down Arrow key and select the name of the application. 3. In the Version field, press the Down Arrow key and select the application version. 4. In the Access group field, press the Down Arrow key and select the access group for which pipeline tasks are run. This access group must be present on all the candidate systems and have at least the sysadmin4 role. 5. In the Pipeline name field, enter the name of the pipeline. This name must be unique. 6. In the Product rule field, enter the name of the product rule that defines the contents of the application. 7. In the Version field, enter the product rule version. 4. Optional: If the application depends on other applications, in the Dependencies section, add those applications. 1. Click Add. 2. In the Application name field, press the Down Arrow key and select the application name. 3. In the Application version field, press the Down Arrow key and select the application version. 4. In the Repository name field, press the Down Arrow key and select the repository that contains the production-ready artifact of the dependent application. If you want the latest artifact of the dependent application to be automatically populated, ensure that the repository that contains the production-ready artifact of the dependent application is configured to support file updates. 5. In the Artifact name field, press the Down Arrow key and select the artifact. 
For more information about dependent applications, see Product rules: Listing product dependencies for Pega-supplied applications. 5. Click Next. 6. In the Environment details section, in the Stages section, specify the URL of each candidate system and the authentication profile that each system uses to communicate with the orchestration system. 1. In the Environments field for the system, press the Down Arrow key and select the URL of the system. 2. Optional: If you are using your own authentication profiles, in the Authentication field for the system, press the Down Arrow key and select the authentication profile that you want to communicate from the orchestration server to the system. By default, the fields are populated with the DMAppAdmin authentication profile. 7. In the Artifact management section, specify the development and production repositories through which the product rule that contains application contents moves through the pipeline. 1. In the Development repository field, press the Down Arrow key and select the development repository. 2. In the Production repository field, press the Down Arrow key and select the production repository. 8. Optional: In the External orchestration server section, if you are using a Jenkins step in a pipeline, specify Jenkins details. 1. In the URL field, enter the URL of the Jenkins server. 2. In the Authentication profile field, press the Down Arrow key and select the authentication profile on the orchestration server that specifies the Jenkins credentials to use for Jenkins jobs. 9. Click Next. 10. Optional: If you are using branches in your application, in the Merge policy section, specify merge options. 1. Do one of the following actions: To merge branches into the highest existing ruleset in the application, click Highest existing ruleset. To merge branches into a new ruleset, click New ruleset. 2. In the Password field, enter the password that locks the rulesets on the development system. 11. Click Next. 
The system adds tasks, which you cannot delete, to the pipeline that are required to successfully run a workflow, for example, Deploy and Generate Artifact. The system also adds other tasks to enforce best practices such as Check guardrail compliance and Verify security checklist. 1. Optional: In the Merge criteria pane, specify tasks that must be completed before a branch can be merged in the pipeline. 1. Click Add task. 2. Specify the task that you want to perform. To specify that a branch must meet a compliance score before it can be merged: 1. From the Task list, select Check guardrail compliance. 2. In the Weighted compliance score field, enter the minimum required compliance score. 3. Click Submit. To specify that a branch must be reviewed before it can be merged: 1. From the Task list, select Check review status. 2. Click Submit. 2. Optional: To start a deployment automatically when a branch is merged, click the Trigger deployment on merge check box. 3. Optional: Clear a check box for a deployment life cycle stage to skip it. 4. Optional: In the Continuous Deployment section pane, specify the tasks to be performed during each stage of the pipeline.

1. Do one of the following actions: Click a manually added task, click the More icon, and then click either Add task above or Add task below. Click Add task in the stage. 2. From the Task list, select the task that you want to perform. To run Pega unit tests either for the pipeline application or for an application that is associated with an access group, select Pega unit testing: 1. Optional: Perform one of the following actions: To run all the Pega unit tests that are in a Pega unit suite for the pipeline application, in the Test Suite ID field, enter the pxInsName of the suite ID. You can find this value in the XML document that comprises the test suite by clicking Actions > XML on the Edit Test Suite form. If you do not specify a test suite, all the Pega unit tests for the pipeline application are run. To run all the Pega unit tests for an application that is associated with an access group, in the Access Group field, enter the access group. For more information about creating Pega unit tests, see Creating PegaUnit test cases. 2. Click Submit. To run a Jenkins job that you have configured, select Jenkins. 1. In the Job name field, enter the name of the Jenkins job (which is the name of the Jenkins deployment) that you want to run. 2. In the Token field, enter the Jenkins authentication token. 3. In the Parameters field, enter parameters, if any, to send to the Jenkins job. Separate multiple parameters with a comma. 4. Click Submit. To add a manual step that a user must perform in the pipeline, select Manual. 1. In the Job name field, enter text that describes the action that you want the user to take. 2. In the Assigned to field, press the Down Arrow key and select the operator ID to assign the task to. To specify that the application must meet a compliance score, select Check guardrail compliance. 1. In the Weighted compliance score field, enter the minimum required compliance score. 2. Click Submit.
To specify that all the tasks in the Application Security Checklist must be performed so that the pipeline can comply with security best practices, select Verify security checklist, and then click Submit. You must log in to the system for which this task is configured, and then mark all the tasks in the Application Security checklist as completed for the pipeline application. For more information about completing the checklist, see Preparing your application for secure deployment. 5. Optional: To modify the Approve for production task, which is added to the stage before production and which you use so that a user must approve application changes before they are sent to production, do the following actions: 1. Click the Info icon. 2. In the Job name field, enter a name for the task. 3. In the Assign to field, press the Down Arrow key and select the user who approves the application for production. An email is sent to this user, who can approve or reject application changes from within the email. 4. Click Submit. 6. Click Finish.
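The merge criteria and guardrail tasks act as simple gates: a branch merges, or a deployment proceeds, only when every configured check passes. A minimal sketch of that gating logic, with function and parameter names invented for this example:

```python
# Illustrative sketch of the merge-criteria gates: a branch can merge only
# if its weighted guardrail compliance score meets the task's threshold
# and, when the review task is configured, the review is approved.
def branch_can_merge(compliance_score, min_score=None, review_status=None):
    """A None threshold or status means that gate is not configured."""
    if min_score is not None and compliance_score < min_score:
        return False
    if review_status is not None and review_status != "approved":
        return False
    return True
```

In this sketch, leaving a gate unset (None) models a pipeline that does not use that task, mirroring how Check guardrail compliance and Check review status are optional merge criteria.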

Modifying application details You can modify application details, such as the product rule that defines the content of the application that moves through the pipeline. 1. Click Actions > Application details. 2. Optional: In the Development environment field, enter the URL of the development system, which is the system on which the product rule that defines the application package that moves through the repository is located. 3. Optional: In the Version field, press the Down Arrow key and select the application version. 4. Optional: In the Product rule field, press the Down Arrow key and select the product rule that defines the contents of the application. 5. Optional: In the Version field, press the Down Arrow key and select the product rule version. 6. Optional: If the application depends on other applications, in the Dependencies section, add those applications. 1. Click Add. 2. In the Application name field, press the Down Arrow key and select the application name. 3. In the Application version field, press the Down Arrow key and select the application version. 4. In the Repository name field, press the Down Arrow key and select the repository that contains the production-ready artifact of the dependent application. If you want the latest artifact of the dependent application to be automatically populated, ensure that the repository that contains the production-ready artifact of the dependent application is configured to support file updates. 5. In the Artifact name field, press the Down Arrow key and select the artifact. For more information about dependent applications, see Product rules: Listing product dependencies for Pega-supplied applications. 7. Click Save.
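When a dependent application's repository supports file updates, the latest production-ready artifact is picked up automatically. That selection can be sketched as choosing the highest dotted version embedded in the artifact file name; the naming pattern (for example, "MyApp_01.02.03.zip") is an assumption made only for this example, and real repositories may name artifacts differently.

```python
# Illustrative sketch: pick the artifact with the highest dotted version
# suffix from a repository listing (naming pattern is assumed).
import re

def latest_artifact(names):
    def version_key(name):
        m = re.search(r"(\d+(?:\.\d+)*)\.zip$", name)
        return tuple(int(p) for p in m.group(1).split(".")) if m else ()
    return max(names, key=version_key)
```
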

Modifying URLs and authentication profiles You can modify the URLs of your development and candidate systems and the authentication profiles that are used to communicate between those systems and the orchestration server.
1. Click Actions > Environment details.
2. Click Stages.
3. In the Environments field for the system, press the Down Arrow key and select the URL of the system.
4. In the Authentication field for the system, press the Down Arrow key and select the authentication profile that you want to use to communicate from the orchestration server to the system.
5. Click Save.

Modifying development and production repositories You can modify the development and production repositories through which the product rule that contains application contents moves through the pipeline. All the generated artifacts are archived in the development repository, and all the production-ready artifacts are archived in the production repository. You do not need to configure repositories if you are using Pega Cloud, but you can use repositories other than the default ones that are provided.
1. Click Actions > Environment details.
2. Click Artifact Management.
3. Do one of the following actions to select a repository: If you are using Deployment Manager on premises, or on Pega Cloud with default repositories, complete the following tasks: 1. In the Application repository section, in the Development repository field, press the Down Arrow key and select the development repository. 2. In the Production repository field, press the Down Arrow key and select the production repository. If you are using Deployment Manager on Pega Cloud and want to use repositories other than the default repositories, complete the following tasks: 1. In the Artifact repository section, click Yes. 2. In the Development repository field, press the Down Arrow key and select the development repository. 3. In the Production repository field, press the Down Arrow key and select the production repository.
4. Click Save.

Specifying Jenkins server information If you are using a Jenkins step, specify details about the Jenkins server such as its URL.
1. Click Actions > Environment details.
2. Click External orchestration server.
3. Click the Jenkins icon.
4. Click OK.
5. In the URL field, enter the URL of the Jenkins server.
6. In the Authentication profile field, press the Down Arrow key and select the authentication profile on the orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.
7. Click Save.

Specifying merge options for branches If you are using branches in your application, specify options for merging branches into the base application. 1. Click Actions > Merge policy. 2. Do one of the following actions: To merge branches into a new ruleset, click New ruleset. To merge branches into the highest existing ruleset in the application, click Highest existing ruleset. 3. In the Password field, enter the password that locks the rulesets on the development system. 4. Click Save.

Modifying stages and tasks in the pipeline

You can modify the stages and the tasks that are performed in each stage of the pipeline. For example, you can skip a stage or add tasks, such as Pega unit testing, to be done in the QA stage.

1. Click Actions > Pipeline model.
2. Optional: In the Merge criteria pane, specify tasks that must be completed before a branch can be merged in the pipeline.
   1. Click Add task.
   2. Specify the task that you want to perform.
      - To specify that a branch must meet a compliance score before it can be merged:
        1. From the Task list, select Check guardrail compliance.
        2. In the Weighted compliance score field, enter the minimum required compliance score.
        3. Click Submit.
      - To specify that a branch must be reviewed before it can be merged:
        1. From the Task list, select Check review status.
        2. Click Submit.
3. Optional: To start a deployment automatically when a branch is merged, select the Trigger deployment on merge check box.
4. Optional: Clear a check box for a deployment life cycle stage to skip it.
5. Optional: In the Continuous Deployment pane, specify the tasks to be performed during each stage of the pipeline.
   1. Do one of the following actions:
      - Click a manually added task, click the More icon, and then click either Add task above or Add task below to add the task above or below the existing task.
      - Click Add task in the stage.
   2. From the Task list, select the task that you want to perform.
      - To run Pega unit tests either for the pipeline application or for an application that is associated with an access group, select Pega unit testing.
        1. Optional: Perform one of the following actions:
           - To run all the Pega unit tests that are in a Pega unit suite for the pipeline application, in the Test Suite ID field, enter the pxInsName of the suite ID. You can find this value in the XML document that comprises the test suite by clicking Actions > XML on the Edit Test Suite form. If you do not specify a test suite, all the Pega unit tests for the pipeline application are run.
           - To run all the Pega unit tests for an application that is associated with an access group, in the Access Group field, enter the access group. For more information about creating Pega unit tests, see Creating PegaUnit test cases.
        2. Click Submit.
      - To run a Jenkins job that you have configured, select Jenkins.
        1. In the Job name field, enter the name of the Jenkins job (which is the name of the Jenkins deployment) that you want to run.
        2. In the Token field, enter the Jenkins authentication token.
        3. In the Parameters field, enter parameters, if any, to send to the Jenkins job. Separate multiple parameters with a comma.
        4. Click Submit.
      - To add a manual step that a user must perform in the pipeline, select Manual.
        1. In the Job name field, enter text that describes the action that you want the user to take.
        2. In the Assigned to field, press the Down Arrow key and select the operator ID to assign the task to.
        3. Click Submit.
      - To specify that the application must meet a compliance score, select Check guardrail compliance.
        1. In the Weighted compliance score field, enter the minimum required compliance score.
        2. Click Submit.
      - To specify that all the tasks in the Application Security Checklist must be performed so that the pipeline complies with security best practices, select Verify security checklist, and then click Submit. You must log in to the system for which this task is configured and mark all the tasks in the Application Security Checklist as completed for the pipeline application. For more information about completing the checklist, see Preparing your application for secure deployment.
6. Optional: To modify the Approve for production task, which is added to the stage before production so that a user must approve application changes before they are sent to production, do the following actions:
   1. Click the Info icon.
   2. In the Job name field, enter a name for the task.
   3. In the Assign to field, press the Down Arrow key and select the user who approves the application for production. An email is sent to this user, who can approve or reject application changes from within the email.
   4. Click Submit.
7. Click Finish.
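The merge criteria above act as simple pass/fail gates: every configured task must pass before a branch can be merged. As an illustration only (the function name and values are invented, and this is not Pega's implementation), a gate combining a weighted compliance score and a review check could look like this:

```python
def branch_can_merge(compliance_score, review_approved,
                     min_compliance=None, require_review=False):
    """Return (ok, reasons) for a branch checked against its merge criteria."""
    reasons = []
    # Check guardrail compliance task: score must meet the configured minimum.
    if min_compliance is not None and compliance_score < min_compliance:
        reasons.append("guardrail compliance %.1f below %.1f"
                       % (compliance_score, min_compliance))
    # Check review status task: the branch review must be approved.
    if require_review and not review_approved:
        reasons.append("branch review not approved")
    return (not reasons, reasons)

ok, why = branch_can_merge(96.5, True, min_compliance=97.0, require_review=True)
# ok is False: the compliance score misses the configured minimum
```

When all configured criteria pass, the merge proceeds; with Trigger deployment on merge selected, a deployment would then start automatically.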

Manually starting a deployment

Start a deployment manually if you are not using branches and are working directly in rulesets. You can also start a deployment manually if you do not want deployments to start automatically when branches are merged; in that case, also clear the Trigger deployment on merge check box in the pipeline configuration.

1. Click Deployment Manager in the Designer Studio footer.
2. Click the pipeline for which you want to start a deployment.
3. Click Start deployment.
4. Start a new deployment or deploy an existing application by completing one of the following actions:
   - To start a deployment and deploy a new application package, do the following steps:
     1. Click Generate new artifact.
     2. In the Deployment name field, enter the name of the deployment.
     3. Click Deploy.
   - To deploy an application package that is on a cloud repository, do the following steps:
     1. Click Deploy an existing artifact.
     2. In the Deployment name field, enter the name of the deployment.
     3. In the Select a repository field, press the Down Arrow key and select the repository.
     4. In the Select an artifact field, press the Down Arrow key and select the application package.
     5. Click Deploy.

Starting a deployment in a branch-based environment

In non-distributed, branch-based environments, you can immediately start a deployment by submitting a branch into a pipeline in the Merge Branches wizard. For more information, see Submitting a branch into a pipeline.

Starting a deployment in a distributed branch-based environment

If you are using Deployment Manager in a distributed, branch-based environment with multiple pipelines per application, first export the branch to the main development system, and then merge it.

1. On the remote development system, package the branch. For more information, see Packaging a branch.
2. Export the branch.
3. On the main development system, import the branch by using the Import wizard. For more information, see Importing a file by using the Import wizard.
4. On the main development system, start a deployment by using the Merge Branches wizard. For more information, see Submitting a branch into a pipeline.

If you are using one pipeline per application, you can publish a branch to start the merge. For more information, see Publishing a branch to a repository.

Completing or rejecting a manual step in a deployment

If a manual step is configured on a deployment, the deployment pauses when it reaches the step, and you can either complete or reject it. For example, if a user was assigned a task and has finished it, you can complete the task to continue the deployment. Deployment Manager also sends you an email when there is a manual step in the pipeline, and you can complete or reject the step either within the pipeline or through email. Deployment Manager also generates a manual step if there are schema changes in the application package that the release manager must apply. For more information, see Schema changes in application packages.

To complete or reject a manual step within the deployment, do the following steps:
1. Click Deployment Manager in the Designer Studio footer.
2. Click a pipeline.
3. Right-click the manual step and select one of the following options:
   - Complete task: Resolve the task so that the deployment continues through the pipeline.
   - Reject task: Reject the task so that the deployment does not proceed.

To complete or reject a manual step from within an email, click either Accept or Reject.
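Conceptually, a manual step simply parks the deployment until a person resolves it one way or the other. The toy model below (not Deployment Manager code; the class and step names are invented) illustrates that pause/complete/reject behavior:

```python
class Deployment:
    """Minimal model of a pipeline deployment that pauses on a manual step."""

    def __init__(self, steps):
        self.steps = steps          # e.g. ["Deploy", "Manual", "Validate"]
        self.index = 0
        self.status = "RUNNING"

    def advance(self):
        # Run automated steps until a manual step pauses the deployment.
        while self.index < len(self.steps) and self.status == "RUNNING":
            if self.steps[self.index] == "Manual":
                self.status = "PAUSED"   # wait for a person to act
                return
            self.index += 1
        if self.status == "RUNNING":
            self.status = "DONE"

    def complete_task(self):
        # Resolving the task lets the deployment continue through the pipeline.
        if self.status == "PAUSED":
            self.status = "RUNNING"
            self.index += 1
            self.advance()

    def reject_task(self):
        # Rejecting the task stops the deployment from proceeding.
        if self.status == "PAUSED":
            self.status = "REJECTED"

d = Deployment(["Deploy", "Manual", "Validate"])
d.advance()          # runs until the manual step: status == "PAUSED"
d.complete_task()    # releases the step: status == "DONE"
```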

Managing aged updates

An aged update is a rule or data instance in an application package that is older than an instance on the system to which you want to deploy the package. Because you can import aged updates, skip the import, or manually deploy your application changes, you have more flexibility in determining which rules you want in your application and how you want to deploy them.

For example, you can update a Dynamic System Setting on a quality assurance system that has an application package containing an older instance of that setting. Before Deployment Manager deploys the package, the system detects that the version of the Dynamic System Setting on the system is newer than the version in the package and creates a manual step in the pipeline.

To manage aged updates:
1. In the Dev Studio footer, click Deployment Manager.
2. Click the pipeline.
3. Optional: Click View aged updates to view a list of the rules and data instances in the application package that are older than the instances on the system.
4. Click the More icon and select one of the following options:
   - Click Overwrite aged updates to import the older rule and data instances that are in the application package, which overwrites the newer versions on the system.
   - Click Skip aged updates to skip the import.
   - Click Deploy manually and resume to manually deploy the package from the Import wizard on the system. Deployment Manager does not run the Deploy step on the stage.
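The detection itself amounts to a timestamp comparison between each packaged instance and the copy already on the target system. A hypothetical sketch (the instance keys and timestamp representation are invented for illustration):

```python
from datetime import datetime

def find_aged_updates(package, target):
    """Return keys whose packaged instance is older than the target's copy.

    `package` and `target` map an instance key (e.g. a Dynamic System
    Setting name) to its last-update timestamp. Illustrative only.
    """
    return sorted(
        key for key, packaged_time in package.items()
        if key in target and packaged_time < target[key]
    )

package = {"MyApp:SomeSetting": datetime(2018, 3, 1)}
target = {"MyApp:SomeSetting": datetime(2018, 4, 15)}   # edited on QA later
aged = find_aged_updates(package, target)   # -> ["MyApp:SomeSetting"]
```

Any key returned by such a check would pause the pipeline with a manual step, at which point you choose to overwrite, skip, or deploy manually.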

Schema changes in application packages

If an application package that is to be deployed on candidate systems contains schema changes, the Pega Platform orchestration server checks the candidate system to verify that you have the required privileges to deploy the schema changes. One of the following results occurs:
- If you have the appropriate privileges, the schema changes are automatically applied to the candidate system, the application package is deployed, and the pipeline continues.
- If you do not have the appropriate privileges, Deployment Manager generates an SQL file that lists the schema changes and sends it to your email address. It also creates a manual step, pausing the pipeline, so that you can apply the schema changes. After you complete the step, the pipeline continues. For more information about completing a step, see Completing or rejecting a manual step.

You can also configure settings to automatically deploy schema changes so that you do not have to apply them manually if you do not have the required privileges. For more information, see Configuring settings to automatically deploy schema changes.

Configuring settings to automatically deploy schema changes

You can configure settings to automatically deploy schema changes that are in an application package that is to be deployed on candidate systems. Configure these settings so that you do not have to apply schema changes manually if you do not have the privileges to deploy them.

1. On the orchestration server, in Pega Platform, set the AutoDBSchemaChanges Dynamic System Setting to true to enable schema changes at the system level.
   1. In Designer Studio, search for AutoDBSchemaChanges.
   2. On the Settings tab, in the Value field, enter true.
   3. Click Save.
2. Add the SchemaImport privilege to your access role to enable schema changes at the user level. For more information, see Specifying privileges for an Access of Role to Object rule.

These settings are applied sequentially: if the AutoDBSchemaChanges Dynamic System Setting is set to false, you cannot deploy schema changes, even if you have the SchemaImport privilege.
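The note above describes a strict precedence between the system-level setting and the user-level privilege. A minimal sketch of that decision logic (illustrative, not Pega source code):

```python
def can_auto_deploy_schema(auto_db_schema_changes, has_schema_import_privilege):
    """Mirror the documented precedence: the AutoDBSchemaChanges setting
    gates everything; the SchemaImport privilege only matters when the
    system-level setting is true. Illustrative logic only."""
    if not auto_db_schema_changes:
        return False          # system-level setting wins; privilege is ignored
    return has_schema_import_privilege

# Even a privileged user is blocked when the Dynamic System Setting is false:
can_auto_deploy_schema(False, True)   # -> False
can_auto_deploy_schema(True, True)    # -> True
```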

Pausing a deployment

When you pause a deployment, the pipeline completes the task that it is running and stops the deployment at the next step. To pause a deployment, click Pause.

Performing actions on a deployment that has errors

If a deployment has errors, the pipeline stops processing it. You can do one of the following actions:
- Ignore the current step and run the next step by clicking Start.
- Fix the errors, and then restart the deployment at the current step by clicking Start.
- Roll back to an earlier deployment by clicking Roll back deployment.

Diagnosing a pipeline

You can diagnose your pipeline to verify that it is configured properly, for example, that the target application and product rule are in the development environment, that connectivity between systems and repositories is working, and that premerge settings are correctly configured.

1. In the Designer Studio footer, click Deployment Manager.
2. Click a pipeline.
3. Click Actions > Diagnose pipeline.
4. In the Diagnose application pipeline dialog box, review the errors, if any.
5. Optional: To view troubleshooting tips about errors, hover your mouse over the Troubleshooting tips link.

If the RMURL Dynamic System Setting is not configured, Deployment Manager displays a message that you can disregard if you are not using branches, because in that case you do not need to configure the Dynamic System Setting.
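If you do use branches, the RMURL value must point at the orchestration server's REST endpoint; the documented format is http://hostname:port/prweb/PRRestService. A small, hypothetical sanity check of that format (the host name below is invented):

```python
from urllib.parse import urlparse

def looks_like_rmurl(value):
    """Rough sanity check for an RMURL value, based on the documented
    format http://hostname:port/prweb/PRRestService. Illustrative only."""
    parsed = urlparse(value)
    return (parsed.scheme in ("http", "https")
            and bool(parsed.hostname)
            and parsed.path.endswith("/prweb/PRRestService"))

looks_like_rmurl("http://orchestrator.example.com:8080/prweb/PRRestService")  # True
looks_like_rmurl("orchestrator.example.com")                                  # False
```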

Viewing branch status

You can view the status of all the branches in your pipeline. For example, you can see whether a branch was merged in a deployment and when it was merged.

1. Click Deployment Manager in the Designer Studio footer.
2. Click a pipeline.
3. Click Actions > View branches.

Viewing deployment logs

View logs for a deployment to see the completion status of operations, for example, when a deployment is moved to a new stage. You can change the logging level to control which events are displayed in the log. For example, you can change the logging level of your deployment from INFO to DEBUG for troubleshooting purposes. For more information, see Logging Level Settings tool.

1. Click Deployment Manager in the Designer Studio footer.
2. Click a pipeline.
3. Click the Gear icon for the deployment for which you want to view the log file.
4. Click View log.

Viewing deployment reports

Deployment reports provide information about a specific deployment. You can view information such as the number of configured tasks that have been completed and when each task started and ended.

1. Click Deployment Manager in the Designer Studio footer.
2. Click a pipeline.
3. Click the Gear icon for the deployment for which you want to view the deployment report.
4. Click View report.

Viewing reports for all deployments

Reports provide a variety of information about all the deployments in your pipeline. You can view the following key performance indicators (KPIs):
- Deployment Success: percentage of deployments that are successfully deployed to production
- Deployment Frequency: frequency of new deployments to production
- Deployment Speed: average time taken to deploy to production
- Start frequency: frequency at which new deployments are triggered
- Failure rate: average number of failures per deployment
- Merges per day: average number of branches that are successfully merged per day

To view reports:
1. Click Deployment Manager in the Designer Studio footer.
2. Click a pipeline.
3. Click Actions > View reports.
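Most of these KPIs are simple aggregates over the deployment history. The following Python sketch computes two of them from a list of deployment records; the record fields are invented for illustration and do not reflect Deployment Manager's internal data model.

```python
def deployment_kpis(deployments):
    """Compute two of the report's KPIs from deployment records.

    Each record is a dict like {"succeeded": bool, "failures": int}.
    Field names are invented for this sketch.
    """
    total = len(deployments)
    if total == 0:
        return {"success_pct": 0.0, "avg_failures": 0.0}
    succeeded = sum(1 for d in deployments if d["succeeded"])
    failures = sum(d["failures"] for d in deployments)
    return {
        "success_pct": 100.0 * succeeded / total,   # Deployment Success
        "avg_failures": failures / total,           # Failure rate
    }

history = [
    {"succeeded": True, "failures": 0},
    {"succeeded": True, "failures": 1},
    {"succeeded": False, "failures": 2},
    {"succeeded": True, "failures": 0},
]
deployment_kpis(history)   # -> {"success_pct": 75.0, "avg_failures": 0.75}
```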

Deleting an application pipeline

When you delete a pipeline, its associated application packages are not removed from the repositories that the pipeline is configured to use.

1. In the Designer Studio footer, click Deployment Manager.
2. Click the Delete icon for the pipeline that you want to delete.
3. Click Submit.

Viewing, downloading, and deleting application packages in repositories

You can view, download, and delete application packages in the repositories that are configured on the orchestration server. If you are using Deployment Manager on Pega Cloud, application packages that you have deployed to cloud repositories are stored on Pega Cloud. To manage your cloud storage space, you can download and permanently delete those packages.

1. In the Designer Studio footer, click Deployment Manager.
2. Click the pipeline for which you want to download or delete packages.
3. Click either Development Repository or Production Repository.
4. Click Actions > Browse artifacts.
5. To download an application package, click the package, and then save it to the appropriate location.
6. To delete packages, select the check boxes for the packages that you want to delete, and then click Delete.

Deployment Manager 3.3.x

Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your Pega applications from within Pega Platform™. You can create a standardized deployment process so that you can deploy predictable, high-quality releases without using third-party tools. With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging, application package generation, artifact management, and package promotion to different stages in the workflow.

Deployment Manager 3.3.x is supported on Pega 7.4. You can download it for Pega Platform from the Deployment Manager Pega Exchange page. Each customer virtual private cloud (VPC) on Pega Cloud has a dedicated orchestrator instance for Deployment Manager, so you do not need to install Deployment Manager to use it with your Pega Cloud application.

This document describes the features of the latest version of Deployment Manager 3.3.x. For more information, see the following articles:
- Deployment Manager release notes
- Deployment Manager architecture and workflows
- Creating custom repository types for Deployment Manager
- Installing and configuring Deployment Manager 3.3.x
- Using Deployment Manager 3.3.x

Installing and configuring Deployment Manager 3.3.x

Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks and allow you to quickly deploy high-quality software to production. This document describes the features of the latest version of Deployment Manager 3.3.x. See the following topics for more information about installing and configuring Deployment Manager:
- Step 1: Installing Deployment Manager
- Step 2: Upgrading to Deployment Manager 3.3.x
- Step 3: Configuring systems in the pipeline
- Step 4: Configuring the development system for branch-based development (optional)
- Step 5: Configuring additional settings

For information about using Deployment Manager, see Using Deployment Manager 3.3.x.

Step 1: Installing Deployment Manager

Each customer virtual private cloud (VPC) on Pega Cloud has a dedicated orchestrator instance for Deployment Manager, so you do not need to install Deployment Manager to use it with your Pega Cloud application.

If you are upgrading from an earlier release to Deployment Manager 3.3.x, contact Pegasystems® Global Customer Support (GCS) to request a new version. If you are upgrading from Deployment Manager 3.2.1, finish the upgrade immediately after you import files on premises or after Deployment Manager 3.3.x is deployed on Pega Cloud, so that your pipelines work in Deployment Manager 3.3.x.

If you are using Deployment Manager on premises, complete the following steps to install it:

1. Install Pega 7.4 on all systems in the CI/CD pipeline.
2. Browse to the Deployment Manager Pega Exchange page, and then download the DeploymentManager03.03.0x.zip file for your version of Deployment Manager to your local disk on each system.
3. Use the Import wizard to import files into the appropriate systems. For more information about the Import wizard, see Importing a file by using the Import wizard.
4. On the orchestration server, import the following files:
   - PegaDevOpsFoundation_03.03.0x.zip
   - PegaDeploymentManager_03.03.0x.zip
5. On the development, QA, staging, and production systems, import the PegaDevOpsFoundation_03.03.0x.zip file.
6. Optional: If you are using distributed development, on the remote development system, import the PegaDevOpsFoundation_03.03.0x.zip file.
7. Do one of the following actions:
   - If you are upgrading to Deployment Manager 3.3.x, perform the upgrade. For more information, see Step 2: Upgrading to Deployment Manager 3.3.x.
   - If you are not upgrading to Deployment Manager 3.3.x, continue the installation procedure. For more information, see Step 3b: Configuring the orchestration server.

Step 2: Upgrading to Deployment Manager 3.3.x

Before you upgrade, ensure that no deployments are running, have errors, or are paused. To upgrade to Deployment Manager 3.3.x, either on Pega Cloud or on premises, perform the following steps:

1. Enable the default operators and configure authentication profiles on the orchestration server and candidate systems. For more information, see Step 3a: Configuring authentication profiles on the orchestration server and candidate systems.
2. On each candidate system, add the PegaDevOpsFoundation application to your application stack.
   1. In the Designer Studio header, click the name of your application, and then click Definition.
   2. In the Built on application section, click Add application.
   3. In the Name field, press the Down Arrow key and select PegaDevOpsFoundation.
   4. In the Version field, press the Down Arrow key and select the version of Deployment Manager that you are using.
   5. Click Save.

If you are upgrading from Deployment Manager 3.2.1, you do not need to do the rest of the steps in this procedure or the required steps in the remainder of this document. If you are upgrading from an earlier release and have pipelines configured, complete this procedure.

3. On the orchestration server, log in to the release management application.
4. In Designer Studio, search for pxUpdatePipeline, and then click the activity in the dialog box that displays the results.
5. Click Actions > Run.
6. In the dialog box that is displayed, click Run.
7. Modify the current release management application so that it is built on PegaDeploymentManager:03-03-01.
   1. In the Designer Studio header, click the name of your application, and then click Definition.
   2. On the Edit Application rule form, on the Definition tab, in the Built on application section, for the PegaDeploymentManager application, press the Down Arrow key and select 03.03.01.
   3. Click Save.
8. Merge rulesets to the PipelineData ruleset.
   1. Click Designer Studio > System > Refactor > Rulesets.
   2. Click Copy/Merge RuleSet.
   3. Click the Merge Source RuleSet(s) to Target RuleSet radio button.
   4. Click the RuleSet Versions radio button.
   5. In the Available Source Ruleset(s) section, select the first open ruleset version that appears in the list, and then click the Move icon. All your current pipelines are stored in the first open ruleset. If you modified this ruleset after you created the application, select all the ruleset versions that contain pipeline data.
9. In the Target RuleSet/Information section, in the Name field, press the Down Arrow key and select Pipeline Data.
10. In the Version field, enter 01-01-01.
11. For the Delete Source RuleSet(s) upon completion of merge? option, click No.
12. Click Next.
13. Click Merge to merge your pipelines to the PipelineData:01-01-01 ruleset.
14. Click Done. Your pipelines are migrated to the Pega Deployment Manager application.
15. Log out of the orchestration server and log back in with the DMReleaseAdmin operator ID and the password that you specified for it.

For backup purposes, pipelines are still visible in your previous release management application. However, do not create deployments with that application, because they might not work correctly. You do not need to perform any of the required steps in the remainder of this document.

Step 3: Configuring systems in the pipeline

Complete the following steps to set up a pipeline for all supported CI/CD workflows. If you are using branches, you must configure additional settings after you perform the required steps.

1. Step 3a: Configuring authentication profiles on the orchestration server and candidate systems
2. Step 3b: Configuring the orchestration server
3. Step 3c: Configuring candidate systems
4. Step 3d: Creating repositories on the orchestration server and candidate systems

Step 3a: Configuring authentication profiles on the orchestration server and candidate systems

When you install Deployment Manager on all the systems in your pipeline, default applications, operator IDs, and authentication profiles that communicate between the orchestration server and candidate systems are also installed.

On the orchestration server, the following items are installed:
- The Pega Deployment Manager application.
- The DMReleaseAdmin operator ID, which release managers use to log in to the Pega Deployment Manager application. You must enable this operator ID and specify its password.
- The DMAppAdmin authentication profile. You must update this authentication profile to use the password that you specified for the DMAppAdmin operator ID, which is configured on all the candidate systems.

On all the candidate systems, the following items are installed:
- The PegaDevOpsFoundation application.
- The DMAppAdmin operator ID, which points to the PegaDevOpsFoundation application. You must enable this operator ID and specify its password.
- The DMReleaseAdmin authentication profile. You must update this authentication profile to use the password that you specified for the DMReleaseAdmin operator ID, which is configured on the orchestration server.

The DMReleaseAdmin and DMAppAdmin operator IDs do not have default passwords. Configure the default authentication profiles by doing these steps:

1. On the orchestration server, enable the DMReleaseAdmin operator ID and specify its password.
   1. Log in to the orchestration server with [email protected]/install.
   2. In Designer Studio, click Records > Organization > Operator ID, and then click DMReleaseAdmin.
   3. In the Designer Studio header, click the operator ID initials, and then click Operator.
   4. On the Edit Operator ID rule form, click the Security tab.
   5. Clear the Disable Operator check box.
   6. Click Save.
   7. Click Update password.
   8. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then click Submit.
   9. Optional: Clear the Force password change on next login check box if you do not want to change the password for the DMReleaseAdmin operator ID the next time that you log in.
   10. Log out of the orchestration server.
2. On each candidate system, update the DMReleaseAdmin authentication profile to use the new password. All candidate systems use this authentication profile to communicate with the orchestration server about the status of the tasks in the pipeline.
   1. Log in to each candidate system with the DMReleaseAdmin user name and the password that you specified.
   2. Click Records > Security > Authentication Profile.
   3. Click DMReleaseAdmin.
   4. On the Edit Authentication Profile rule form, click Set password.
   5. In the Password dialog box, enter the password, and then click Submit.
   6. Save the rule form.
3. On each candidate system, which includes the development, QA, staging, and production systems, enable the DMAppAdmin operator ID. If you want to create your own operator IDs, ensure that they point to the PegaDevOpsFoundation application.
   1. Log in to each candidate system with [email protected]/install.
   2. In Designer Studio, click Records > Organization > Operator ID, and then click DMAppAdmin.
   3. In the Designer Studio header, click the operator ID initials, and then click Operator.
   4. On the Edit Operator ID rule form, click the Security tab.
   5. Clear the Disable Operator check box.
   6. Click Save.
   7. Click Update password.
   8. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then click Submit.
   9. Optional: Clear the Force password change on next login check box if you do not want to change the password for the DMAppAdmin operator ID the next time that you log in.
   10. Log out of each candidate system.
4. On the orchestration server, modify the DMAppAdmin authentication profile to use the new password. The orchestration server uses this authentication profile to communicate with candidate systems so that it can run tasks in the pipeline.
   1. Log in to the orchestration server with the DMAppAdmin user name and the password that you specified.
   2. Click Records > Security > Authentication Profile.
   3. Click DMAppAdmin.
   4. On the Edit Authentication Profile rule form, click Set password.
   5. In the Password dialog box, enter the password, and then click Submit.
   6. Save the rule form.
5. Do one of the following actions:
   - If you are upgrading to Deployment Manager 3.3.x, resume the upgrade procedure from step 2. For more information, see Step 2: Upgrading to Deployment Manager 3.3.x.
   - If you are not upgrading, continue the installation procedure. For more information, see Step 3b: Configuring the orchestration server.

Step 3b: Configuring the orchestration server

The orchestration server is the system on which release managers configure and manage CI/CD pipelines.

1. Optional: If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd service packages.
   1. Click Records > Integration-Resources > Service Package.
   2. Click api.
   3. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.
   4. Click Records > Integration-Resources > Service Package.
   5. Click cicd.
   6. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.
2. Configure the candidate systems in your pipeline. For more information, see Step 3c: Configuring candidate systems.

Step 3c: Configuring candidate systems

Configure each system that is used for the development, QA, staging, and production stages in the pipeline.

1. On each candidate system, add the PegaDevOpsFoundation application to your application stack.
   1. In the Designer Studio header, click the name of your application, and then click Definition.
   2. In the Built on application section, click Add application.
   3. In the Name field, press the Down Arrow key and select PegaDevOpsFoundation.
   4. In the Version field, press the Down Arrow key and select the version of Deployment Manager that you are using.
   5. Click Save.
2. Optional: If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd service packages.
   1. Click Records > Integration-Resources > Service Package.
   2. Click api.
   3. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.
   4. Click Records > Integration-Resources > Service Package.
   5. Click cicd.
   6. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.
3. Optional: If you want to use a product rule other than the default product rule that is created by the New Application wizard, on the development system, create a product rule that defines the application package that will be moved through the repositories in the pipeline. For more information, see Product rules: Completing the Create, Save As, or Specialization form. When you use the New Application wizard, a default product rule is created that has the same name as your application.
4. Configure repositories through which to move artifacts in your pipeline. For more information, see Step 3d: Creating repositories on the orchestration server and candidate systems.

Step 3d: Creating repositories on the orchestration server and candidate systems

If you are using Deployment Manager on premises, create repositories on the orchestration server and all candidate systems to move your application between all the systems in the pipeline. You can use a supported repository type that is provided in Pega Platform™, or you can create a custom repository type. If you are using Deployment Manager on Pega Cloud, default repositories are provided; if you want to use repositories other than the ones provided, you can create your own.

For more information about creating a supported repository type, see Creating a repository connection. For more information about creating a custom repository type, see Creating custom repository types for Deployment Manager.

When you create repositories, note the following information:
- The Pega repository type is not supported.
- Ensure that each repository has the same name on all systems.
- When you create JFrog Artifactory repositories, ensure that you create a Generic package type in JFrog Artifactory. Also, when you create the authentication profile for the repository on Pega Platform, you must select the Preemptive authentication check box.
- After you configure a pipeline, you can verify that the repository connects to the URL of the development and production repositories by clicking Test Connectivity on the Repository rule form.

Step 4: Configuring the development system for branch-based development (optional)
After you configure the orchestration server and all your candidate systems, configure additional settings so that you can use pipelines if you are using branches in a distributed or non-distributed branch-based environment. You must configure the development system to create a pipeline in a branch-based environment.
1. On the development system (in a non-distributed environment) or the main development system (in a distributed environment), create a Dynamic System Setting to define the URL of the orchestration server, even if the orchestration server and the development system are the same system.
   1. Click Create > Records > SysAdmin > Dynamic System Settings.
   2. In the Owning Ruleset field, enter Pega-DevOps-Foundation.
   3. In the Setting Purpose field, enter RMURL.
   4. Click Create and open.
   5. On the Settings tab, in the Value field, enter the URL of the orchestration server. Use this format: http://hostname:port/prweb/PRRestService.
   6. Click Save.
2. Complete the following steps on either the development system (in a non-distributed environment) or the remote development system (in a distributed environment).
   1. Use the New Application wizard to create a new development application that developers will log in to. This application allows development teams to maintain a list of development branches without modifying the definition of the target application.
   2. Add the target application of the pipeline as a built-on application layer of the development application.
      1. Log in to the application.
      2. In the Designer Studio header, click the name of your application, and then click Definition.
      3. In the Built-on application section, click Add application.
      4. In the Name field, press the Down Arrow key and select the name of the target application.
      5. In the Version field, press the Down Arrow key and select the target application version.
      6. Click Save.
   3. Lock the application rulesets to prevent developers from making changes to rules after branches have been merged.
      1. In the Designer Studio header, click the name of your application, and then click Definition.
      2. In the Application rulesets section, click the Open icon for each ruleset that you want to lock.
      3. Click Lock and Save.
   4. Optional: It is recommended that you merge branches by using the Merge Branch wizard. However, you can publish a branch to the remote development system to start a deployment. Publishing a branch when you have multiple pipelines per application is not supported.
      1. In Designer Studio, enable Pega repository types. For more information, see Enabling the Pega repository type.
      2. Create a new Pega repository type. For more information, see Creating a repository connection. Ensure that you do the following tasks:
         In the Host ID field, enter the URL of the development system.
         Ensure that the default access group of the operator that is configured for the authentication profile of this repository points to the pipeline application on the development system (in a non-distributed environment) or the main development system (in a distributed environment).
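A typo in the RMURL value is an easy way for branch merges to fail to reach the orchestration server, so it can be worth checking a candidate value against the documented http://hostname:port/prweb/PRRestService format before saving the Dynamic System Setting. The check below is a sketch; the hostname and port are placeholders.

```shell
# Sketch: validate a candidate RMURL value against the documented format
# http://hostname:port/prweb/PRRestService. The URL itself is a placeholder.
RMURL="http://orchestrator.example.com:8080/prweb/PRRestService"

# A simple glob match is enough to catch a missing port or a wrong servlet path.
case "$RMURL" in
  http://*:*/prweb/PRRestService) RMURL_OK="yes" ;;
  *)                              RMURL_OK="no"  ;;
esac
echo "RMURL format valid: $RMURL_OK"
```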

Step 5: Configuring additional settings
As part of your pipeline, you can optionally send email notifications to users or configure Jenkins if you are using a Jenkins task. See the following topics for more information:
Configuring email notifications on the orchestration server
Configuring Jenkins

Configuring email notifications on the orchestration server
You can optionally configure email notifications on the orchestration server. For example, users can receive emails when pre-merge criteria are not met and the system cannot create a deployment. To configure the orchestration server to send emails, complete the following steps:
1. Use the Email wizard to configure an email account and listener by clicking Designer Studio > Integration > Email > Email Wizard. This email account sends notifications to users when events occur, for example, if there are merge conflicts. For detailed information, see the procedure for "Configuring an email account that receives email and creates or manages work" in Entering email information in the Email wizard.
2. From the What would you like to do? list, select Receive an email and create/manage a work object.
3. From the What is the class of your work type? list, select Pega-Pipeline-CD.
4. From the What is your starting flow name? list, select NewWork.
5. From the What is your organization? list, select the organization that is associated with the work item.
6. In the What Ruleset? field, select the ruleset that contains the generated email service rule. This ruleset applies to the work class.
7. In the What RuleSet Version? field, select the version of the ruleset for the generated email service rule.
8. Click Next to configure the email listener.
9. In the Email Account Name field, enter Pega-Pipeline-CD, which is the name of the email account that the listener references for incoming and outgoing email.
10. In the Email Listener Name field, enter the name of the email listener. Begin the name with a letter, and use only letters, numbers, the ampersand character (&), and hyphens.
11. In the Folder Name field, enter the name of the email folder that the listener monitors. Typically, this folder is INBOX.
12. In the Service Package field, enter the name of the service package to be deployed. Begin the name with a letter, and use only letters, numbers, and hyphens to form an identifier.
13. In the Service Class field, enter the service class name.
14. In the Requestor User ID field, press the Down Arrow key and select the operator ID that the email service uses when it runs, for example, the release manager operator.
15. In the Requestor Password field, enter the password for that operator.
16. Click Next to continue the wizard and configure the service package. For more information, see Configuring the service package in the Email wizard.
17. After you complete the wizard, enable the listener that you created in the Email wizard. For more information, see Starting a listener.

Email notifications
Preconfigured emails include information about each notification type. For example, when a deployment failure occurs, the email that is sent provides information such as the pipeline name and the URL of the system on which the deployment failure occurred. Preconfigured emails are sent in the following scenarios:
Deployment start – When a deployment starts, an email is sent to the release manager and, if you are using branches, to the operator who started the deployment.
Deployment step failure – If any step in the deployment process is unsuccessful, the deployment pauses. An email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Deployment step completion – When a step in a deployment process is completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Deployment completion – When a deployment is successfully completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Stage completion – When a stage in a deployment process is completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Stage failure – If a stage fails to be completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Manual tasks requiring approval – When a manual task requires email approval from a user, an email is sent to the user, who can approve or reject the task from the email.
Stopped deployment – When a deployment is stopped, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Pega unit testing failure – If a Pega unit test cannot successfully run on a step in the deployment, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Pega unit testing success – If a Pega unit test is successfully run on a step in the deployment, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Schema changes required – If you do not have the required schema privileges to deploy the changes on application packages that require those changes, an email is sent to the operator who started the deployment.
Guardrail compliance score failure – If you are using the Check guardrail compliance task, and the compliance score is less than the score that is specified in the task, an email with the score is sent to the release manager.
Guardrail compliance score success – If you are using the Check guardrail compliance task, and the task is successful, an email with the score is sent to the release manager.
Approve for production – If you are using the Approve for production task, which requires approval from a user before application changes are deployed to production, an email is sent to the user. The user can reject or approve the changes.
Verify security checklist failure – If you are using the Verify security checklist task, which requires that all tasks be completed in the Application Security Checklist to ensure that the pipeline complies with security best practices, the release manager receives an email when the task fails.
Verify security checklist success – If you are using the Verify security checklist task, the release manager receives an email when the task succeeds.

Configuring Jenkins
If you are using a Jenkins task in your pipeline, configure Jenkins so that it can communicate with the orchestration server.
1. On the orchestration server, create an authentication profile that uses Jenkins credentials.
   1. Click Create > Security > Authentication Profile.
   2. Enter a name, and then click Create and open.
   3. In the User name field, enter the user name of the Jenkins user.
   4. Click Set password, enter the Jenkins password, and then click Submit.
   5. Select the Preemptive authentication check box.
   6. Click Save.
2. Because the Jenkins task does not support Cross-Site Request Forgery (CSRF) protection, disable it by completing the following steps:
   1. In Jenkins, click Manage Jenkins.
   2. Click Configure Global Security.
   3. In the CSRF Protection section, clear the Prevent Cross Site Request Forgery exploits check box.
   4. Click Save.
3. Install the Post build task plug-in.
4. Install the curl command on the Jenkins server.
5. Create a new freestyle project.
6. On the General tab, select the This project is parameterized check box.
7. Add the BuildID and CallBackURL parameters.
   1. Click Add parameter, and then select String parameter.
   2. In the String field, enter BuildID.
   3. Click Add parameter, and then select String parameter.
   4. In the String field, enter CallBackURL.
8. In the Build Triggers section, select the Trigger builds remotely check box.
9. In the Authentication Token field, enter the token that you want to use when you start Jenkins jobs remotely.
10. In the Build Environment section, select the Use Secret text(s) or file(s) check box.
11. In the Bindings section, do the following actions:
   1. Click Add, and then select User name and password (conjoined).
   2. In the Variable field, enter RMCREDENTIALS.
   3. In the Credentials field, click Specific credentials.
   4. Click Add, and then select Jenkins.
   5. In the Add credentials dialog box, in the Username field, enter the operator ID of the release manager operator that is configured on the orchestration server.
   6. In the Password field, enter the password.
   7. Click Save.
12. In the Post-Build Actions section, do one of the following actions, depending on your operating system:
   If Jenkins is running on Microsoft Windows, add the following post-build tasks:
   1. Click Add post-build action, and then select Post build task.
   2. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example, BUILD FAILURE.
   3. In the Script field, enter the following command:
      curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%"
   4. Click Add another task.
   5. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example, BUILD SUCCESS.
   6. In the Script field, enter the following command:
      curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%"
   7. Click Save.
   If Jenkins is running on UNIX or Linux, add the following post-build tasks. Use the dollar sign ($) instead of the percent sign (%) to access the environment variables.
   1. Click Add post-build action, and then select Post build task.
   2. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example, BUILD FAILURE.
   3. In the Script field, enter the following command:
      curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"$BuildID\"}" "$CallBackURL"
   4. Click Add another task.
   5. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example, BUILD SUCCESS.
   6. In the Script field, enter the following command:
      curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"$BuildID\"}" "$CallBackURL"
   7. Click Save.
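For context, the post-build curl calls above answer a job that is started through Jenkins' remote build trigger, which passes the BuildID and CallBackURL parameters along with the authentication token from the Build Triggers section. The sketch below only assembles and echoes such a trigger request; the Jenkins URL, job name, token, and parameter values are all placeholders, not values from this guide.

```shell
# Sketch: assemble a remote trigger for a parameterized freestyle job like the
# one configured above. All values here are placeholders for illustration.
JENKINS_URL="http://jenkins.example.com:8080"
JOB="pega-pipeline-task"
TOKEN="secret-token"
BUILD_ID="DEPLOY-42"
CALLBACK="http://orchestrator.example.com:8080/prweb/PRRestService"

# buildWithParameters is the Jenkins endpoint for triggering parameterized jobs.
TRIGGER="curl -X POST '${JENKINS_URL}/job/${JOB}/buildWithParameters?token=${TOKEN}&BuildID=${BUILD_ID}&CallBackURL=${CALLBACK}'"
echo "$TRIGGER"
```

Seeing the two directions side by side (trigger in, callback out) makes it easier to debug a pipeline whose Jenkins step never reports a status back.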

Using Deployment Manager 3.3.x
Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks and allow you to quickly deploy high-quality software to production.
On the orchestration server, release managers use the DevOps landing page to configure CI/CD pipelines for their Pega Platform™ applications. The landing page displays all the running and queued application deployments, branches that are to be merged, and reports that provide information about your DevOps environment, such as key performance indicators (KPIs).
This document describes the features of the latest version of Deployment Manager 3.3.x. See the following topics for more information about using Deployment Manager to configure and use CI/CD pipelines:
Configuring an application pipeline
Manually starting a deployment
Starting a deployment in a branch-based environment
Starting a deployment in a distributed, branch-based environment
Completing or rejecting a manual step in a deployment
Schema changes in application packages
Pausing a deployment
Performing actions on a deployment with errors
Diagnosing a pipeline
Viewing branch status
Viewing deployment logs
Viewing deployment reports
Viewing reports for all deployments
Deleting an application pipeline
Viewing, downloading, and deleting application packages in repositories

Configuring an application pipeline
When you add a pipeline, you specify merge criteria and configure stages and steps in the continuous delivery workflow. For example, you can specify that a branch must be peer-reviewed before it can be merged, and you can specify that Pega unit tests must be run after a branch is merged and is in the QA stage of the pipeline.
You can create multiple pipelines for one version of an application. For example, you can use multiple pipelines in the following scenarios:
To move a deployment to production separately from the rest of the pipeline. You can then create a pipeline that has only a production stage, or development and production stages.
To use parallel development and hotfix life cycles for your application.

Adding a pipeline on Pega Cloud
To add a pipeline on Pega Cloud, perform the following steps:
1. In the Designer Studio footer, click Deployment Manager.
2. Click Add pipeline.
3. Specify the details of the application for which you are creating the pipeline.
   1. Optional: If you want to change the URL of your development system, which is populated by default with your development system URL, in the Development environment field, press the Down Arrow key and select the URL. This is the system on which the product rule that defines the application package that moves through the repository is located.
   2. In the Application field, press the Down Arrow key and select the name of the application.
   3. In the Version field, press the Down Arrow key and select the application version.
   4. In the Access group field, press the Down Arrow key and select the access group for which pipeline tasks are run. This access group must be present on all the candidate systems and have at least the sysadmin4 role.
   5. In the Pipeline name field, enter the name of the pipeline. This name must be unique.
4. Click Create.
   The system adds tasks, which you cannot delete, that are required to successfully run a workflow, for example, Deploy and Generate Artifact. For Pega Cloud, it also adds mandatory tasks that must be run on the pipeline, for example, the Check guardrail compliance task and the Verify security checklist task.
5. Optional: Add tasks that you want to perform on your pipeline, such as Pega unit testing. For more information, see Modifying stages and tasks in the pipeline.

Adding a pipeline on premises
To add a pipeline on premises, complete the following steps:
1. In the Designer Studio footer, click Deployment Manager.
2. Click Add pipeline.
3. Specify the details of the application for which you are creating the pipeline.
   1. In the Development environment field, enter the URL of the development system. This is the system on which the product rule that defines the application package that moves through the repository is located.
   2. In the Application field, press the Down Arrow key and select the name of the application.
   3. In the Version field, press the Down Arrow key and select the application version.
   4. In the Access group field, press the Down Arrow key and select the access group for which pipeline tasks are run. This access group must be present on all the candidate systems and have at least the sysadmin4 role.
   5. In the Pipeline name field, enter the name of the pipeline. This name must be unique.
   6. In the Product rule field, enter the name of the product rule that defines the contents of the application.
   7. In the Version field, enter the product rule version.
4. Optional: If the application depends on other applications, in the Dependencies section, add those applications.
   1. Click Add.
   2. In the Application name field, press the Down Arrow key and select the application name.
   3. In the Application version field, press the Down Arrow key and select the application version.
   4. In the Repository name field, press the Down Arrow key and select the repository that contains the production-ready artifact of the dependent application. If you want the latest artifact of the dependent application to be automatically populated, ensure that this repository is configured to support file updates.
   5. In the Artifact name field, press the Down Arrow key and select the artifact.
   For more information about dependent applications, see Product rules: Listing product dependencies for Pega-supplied applications.
5. Click Next.
6. In the Environment details section, in the Stages section, specify the URL of each candidate system and the authentication profile that each system uses to communicate with the orchestration server.
   1. In the Environments field for the system, press the Down Arrow key and select the URL of the system.
   2. Optional: If you are using your own authentication profiles, in the Authentication field for the system, press the Down Arrow key and select the authentication profile that you want to use to communicate from the orchestration server to the system. By default, the fields are populated with the DMAppAdmin authentication profile.
7. In the Artifact management section, specify the development and production repositories through which the product rule that contains the application contents moves through the pipeline.
   1. In the Development repository field, press the Down Arrow key and select the development repository.
   2. In the Production repository field, press the Down Arrow key and select the production repository.
8. Optional: In the External orchestration server section, if you are using a Jenkins step in a pipeline, specify the Jenkins details.
   1. In the URL field, enter the URL of the Jenkins server.
   2. In the Authentication profile field, press the Down Arrow key and select the authentication profile on the orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.
9. Click Next.
10. Optional: If you are using branches in your application, in the Merge policy section, specify merge options.
   1. Do one of the following actions:
      To merge branches into the highest existing ruleset in the application, click Highest existing ruleset.
      To merge branches into a new ruleset, click New ruleset.
   2. In the Password field, enter the password that locks the rulesets on the development system.
11. Click Next.
The system adds tasks, which you cannot delete, that are required to successfully run a workflow, for example, Deploy and Generate Artifact. The system also adds other tasks that enforce best practices, such as Check guardrail compliance and Verify security checklist.

12. Optional: In the Merge criteria pane, specify tasks that must be completed before a branch can be merged in the pipeline.
   1. Click Add task.
   2. Specify the task that you want to perform.
      To specify that a branch must meet a compliance score before it can be merged:
      1. From the Task list, select Check guardrail compliance.
      2. In the Weighted compliance score field, enter the minimum required compliance score.
      3. Click Submit.
      To specify that a branch must be reviewed before it can be merged:
      1. From the Task list, select Check review status.
      2. Click Submit.
13. Optional: To start a deployment automatically when a branch is merged, select the Trigger deployment on merge check box.
14. Optional: Clear the check box for a deployment life cycle stage to skip it.
15. Optional: In the Continuous Deployment pane, specify the tasks to be performed during each stage of the pipeline.
   1. Do one of the following actions:
      Click a manually added task, click the More icon, and then click either Add task above or Add task below.
      Click Add task in the stage.
   2. From the Task list, select the task that you want to perform.
      To run Pega unit tests either for the pipeline application or for an application that is associated with an access group, select Pega unit testing.
      1. Optional: Perform one of the following actions:
         To run all the Pega unit tests that are in a Pega unit suite for the pipeline application, in the Test Suite ID field, enter the test suite ID. If you do not specify a test suite, all the Pega unit tests for the pipeline application are run.
         To run all the Pega unit tests for an application that is associated with an access group, in the Access Group field, enter the access group.
         For more information about creating Pega unit tests, see Creating PegaUnit test cases.
      2. Click Submit.
      To run a Jenkins job that you have configured, select Jenkins.
      1. In the Job name field, enter the name of the Jenkins job (which is the name of the Jenkins deployment) that you want to run.
      2. In the Token field, enter the Jenkins authentication token.
      3. In the Parameters field, enter parameters, if any, to send to the Jenkins job. Separate multiple parameters with a comma.
      4. Click Submit.
      To add a manual step that a user must perform in the pipeline, select Manual.
      1. In the Job name field, enter text that describes the action that you want the user to take.
      2. In the Assigned to field, press the Down Arrow key and select the operator ID to assign the task to.
      3. Click Submit.
      To specify that the application must meet a compliance score, select Check guardrail compliance.
      1. In the Weighted compliance score field, enter the minimum required compliance score.
      2. Click Submit.
      To specify that all the tasks in the Application Security Checklist must be performed so that the pipeline complies with security best practices, select Verify security checklist, and then click Submit.
      You must log in to the system for which this task is configured, and then mark all the tasks in the Application Security Checklist as completed for the pipeline application. For more information about completing the checklist, see Preparing your application for secure deployment.
16. Optional: To modify the Approve for production task, which is added to the stage before production and which requires a user to approve application changes before they are sent to production, do the following actions:
   1. Click the Info icon.
   2. In the Job name field, enter a name for the task.
   3. In the Assign to field, press the Down Arrow key and select the user who approves the application for production. An email is sent to this user, who can approve or reject application changes from within the email.
   4. Click Submit.
17. Click Finish.

Modifying application details
You can modify application details, such as the product rule that defines the content of the application that moves through the pipeline.
1. Click Actions > Application details.
2. Optional: In the Development environment field, enter the URL of the development system, which is the system on which the product rule that defines the application package that moves through the repository is located.
3. Optional: In the Version field, press the Down Arrow key and select the application version.
4. Optional: In the Product rule field, press the Down Arrow key and select the product rule that defines the contents of the application.
5. Optional: In the Version field, press the Down Arrow key and select the product rule version.
6. Optional: If the application depends on other applications, in the Dependencies section, add those applications.
   1. Click Add.
   2. In the Application name field, press the Down Arrow key and select the application name.
   3. In the Application version field, press the Down Arrow key and select the application version.
   4. In the Repository name field, press the Down Arrow key and select the repository that contains the production-ready artifact of the dependent application. If you want the latest artifact of the dependent application to be automatically populated, ensure that this repository is configured to support file updates.
   5. In the Artifact name field, press the Down Arrow key and select the artifact.
   For more information about dependent applications, see Product rules: Listing product dependencies for Pega-supplied applications.
7. Click Save.

Modifying URLs and authentication profiles
You can modify the URLs of your development and candidate systems and the authentication profiles that are used to communicate between those systems and the orchestration server.
1. Click Actions > Environment details.
2. Click Stages.
3. In the Environments field for the system, press the Down Arrow key and select the URL of the system.
4. In the Authentication field for the system, press the Down Arrow key and select the authentication profile that you want to use to communicate from the orchestration server to the system.
5. Click Save.

Modifying development and production repositories

You can modify the development and production repositories through which the product rule that contains application contents moves through the pipeline. All the generated artifacts are archived in the Development repository, and all the production-ready artifacts are archived in the Production repository.
If you are using Pega Cloud, you do not need to configure repositories, but you can use repositories other than the default ones that are provided.
1. Click Actions > Environment details.
2. Click Artifact Management.
3. Do one of the following actions to select a repository:
   If you are using Deployment Manager on premises, or on Pega Cloud with default repositories, complete the following tasks:
   1. In the Application repository section, in the Development repository field, press the Down Arrow key and select the development repository.
   2. In the Production repository field, press the Down Arrow key and select the production repository.
   If you are using Deployment Manager on Pega Cloud and want to use repositories other than the default repositories, complete the following tasks:
   1. In the Artifact repository section, click Yes.
   2. In the Development repository field, press the Down Arrow key and select the development repository.
   3. In the Production repository field, press the Down Arrow key and select the production repository.
4. Click Save.

Specifying Jenkins server information
If you are using a Jenkins step, specify details about the Jenkins server, such as its URL.
1. Click Actions > Environment details.
2. Click External orchestration server.
3. Click the Jenkins icon.
4. Click OK.
5. In the URL field, enter the URL of the Jenkins server.
6. In the Authentication profile field, press the Down Arrow key and select the authentication profile on the orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.
7. Click Save.

Specifying merge options for branches
If you are using branches in your application, specify options for merging branches into the base application.
1. Click Actions > Merge policy.
2. Do one of the following actions:
   To merge branches into a new ruleset, click New ruleset.
   To merge branches into the highest existing ruleset in the application, click Highest existing ruleset.
3. In the Password field, enter the password that locks the rulesets on the development system.
4. Click Save.

Modifying stages and tasks in the pipeline

You can modify the stages and the tasks that are performed in each stage of the pipeline. For example, you can skip a stage or add tasks such as Pega unit testing to be performed in the QA stage.

1. Click Actions > Pipeline model.
2. Optional: In the Merge criteria pane, specify tasks that must be completed before a branch can be merged in the pipeline.
   1. Click Add task.
   2. Specify the task that you want to perform:
      - To specify that a branch must meet a compliance score before it can be merged, select Check guardrail compliance from the Task list, enter the minimum required compliance score in the Weighted compliance score field, and then click Submit.
      - To specify that a branch must be reviewed before it can be merged, select Check review status from the Task list, and then click Submit.
3. Optional: To start a deployment automatically when a branch is merged, select the Trigger deployment on merge check box.
4. Optional: Clear the check box for a deployment life cycle stage to skip that stage.
5. Optional: In the Continuous Deployment pane, specify the tasks to be performed during each stage of the pipeline.
   1. Do one of the following actions:
      - Click a manually added task, click the More icon, and then click either Add task above or Add task below to add the task above or below the existing task.
      - Click Add task in the stage.
   2. From the Task list, select the task that you want to perform:
      - To run Pega unit tests either for the pipeline application or for an application that is associated with an access group, select Pega unit testing. To run all the Pega unit tests that are in a test suite for the pipeline application, enter the test suite ID in the Test Suite ID field; if you do not specify a test suite, all the Pega unit tests for the pipeline application are run. To run all the Pega unit tests for an application that is associated with an access group, enter the access group in the Access Group field. For more information about creating Pega unit tests, see Creating PegaUnit test cases. Click Submit.
      - To run a Jenkins job that you have configured, select Jenkins. In the Job name field, enter the name of the Jenkins job (which is the name of the Jenkins deployment) that you want to run. In the Token field, enter the Jenkins authentication token. In the Parameters field, enter parameters, if any, to send to the Jenkins job, separating multiple parameters with commas. Click Submit.
      - To add a manual step that a user must perform in the pipeline, select Manual. In the Job name field, enter text that describes the action that you want the user to take. In the Assigned to field, press the Down Arrow key and select the operator ID to assign the task to. Click Submit.
      - To specify that the application must meet a compliance score, select Check guardrail compliance, enter the minimum required compliance score in the Weighted compliance score field, and then click Submit.

      - To specify that all the tasks in the Application Security Checklist must be performed so that the pipeline complies with security best practices, select Verify security checklist, and then click Submit. You must log in to the system for which this task is configured, and then mark all the tasks in the Application Security Checklist as completed for the pipeline application. For more information about completing the checklist, see Preparing your application for secure deployment.
6. Optional: To modify the Approve for production task, which is added to the stage before production so that a user must approve application changes before they are sent to production, do the following actions:
   1. Click the Info icon.
   2. In the Job name field, enter a name for the task.
   3. In the Assign to field, press the Down Arrow key and select the user who approves the application for production. An email is sent to this user, who can approve or reject application changes from within the email.
   4. Click Submit.
7. Click Finish.
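The merge criteria described above act as a gate: a branch merges only when every configured task passes. The following Python sketch shows only the decision logic; the field names and the default threshold are illustrative, not the Deployment Manager API.

```python
# Illustrative sketch of how merge criteria gate a branch merge.
# The checks mirror the Check guardrail compliance and Check review status
# tasks described above; this is not the Deployment Manager API.

def can_merge(branch, min_compliance_score=97.0, require_review=True):
    """Return True only when every configured merge criterion passes."""
    # Check guardrail compliance: the weighted score must meet the minimum.
    if branch["compliance_score"] < min_compliance_score:
        return False
    # Check review status: the branch review must be resolved.
    if require_review and branch["review_status"] != "resolved":
        return False
    return True

print(can_merge({"compliance_score": 98.5, "review_status": "resolved"}))  # True
print(can_merge({"compliance_score": 90.0, "review_status": "resolved"}))  # False
```

If Trigger deployment on merge is selected, a successful gate check is also what starts the deployment.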

Manually starting a deployment

Start a deployment manually if you are not using branches and are working directly in rulesets. You can also start a deployment manually if you do not want deployments to start automatically when branches are merged; in that case, also clear the Trigger deployment on merge check box in the pipeline configuration.

1. Click Deployment Manager in the Designer Studio footer.
2. Click the pipeline for which you want to start a deployment.
3. Click Start deployment.
4. Start a new deployment or deploy an existing application package by completing one of the following actions:
   - To start a deployment and deploy a new application package:
     1. Click Generate new artifact.
     2. In the Deployment name field, enter the name of the deployment.
     3. Click Deploy.
   - To deploy an application package that is in a cloud repository:
     1. Click Deploy an existing artifact.
     2. In the Deployment name field, enter the name of the deployment.
     3. In the Select a repository field, press the Down Arrow key and select the repository.
     4. In the Select an artifact field, press the Down Arrow key and select the application package.
     5. Click Deploy.

Starting a deployment in a branch-based environment In non-distributed, branch-based environments, you can immediately start a deployment by submitting a branch into a pipeline in the Merge Branches wizard. For more information, see Submitting a branch into a pipeline.

Starting a deployment in a distributed branch-based environment

If you are using Deployment Manager in a distributed, branch-based environment and using multiple pipelines per application, first export the branch to the remote development system, and then merge it.

1. On the development system, package the branch. For more information, see Packaging a branch.
2. On the remote development system, import the branch by using the Import wizard. For more information, see Importing a file by using the Import wizard.
3. On the remote development system, start a deployment by using the Merge Branches wizard. For more information, see Submitting a branch into a pipeline.

If you are using one pipeline per application, you can publish a branch to start the merge. For more information, see Publishing a branch to a repository.

Completing or rejecting a manual step in a deployment

If a manual step is configured on a deployment, the deployment pauses when it reaches the step, and you can either complete or reject it. For example, if a user was assigned a task and completed it, you can complete the task to continue the deployment. Deployment Manager also sends you an email when there is a manual step in the pipeline. You can complete or reject a step either within the pipeline or through email. Deployment Manager also generates a manual step if there are schema changes in the application package that the release manager must apply. For more information, see Schema changes in application packages.

To complete or reject a manual step within the deployment, do the following steps:

1. Click Deployment Manager in the Designer Studio footer.
2. Click a pipeline.
3. Right-click the manual step and select one of the following options:
   - Complete task: Resolve the task so that the deployment continues through the pipeline.
   - Reject task: Reject the task so that the deployment does not proceed.

To complete or reject a manual step from within an email, click either Accept or Reject.

Schema changes in application packages

If an application package that is to be deployed on candidate systems contains schema changes, the Pega Platform orchestration server checks the candidate system to verify that you have the required privileges to deploy the schema changes. One of the following results occurs:

- If you have the appropriate privileges, the schema changes are automatically applied to the candidate system, the application package is deployed to the candidate system, and the pipeline continues.
- If you do not have the appropriate privileges, Deployment Manager generates an SQL file that lists the schema changes and sends it to your email address. It also creates a manual step, pausing the pipeline, so that you can apply the schema changes. After you complete the step, the pipeline continues. For more information about completing a step, see Completing or rejecting a manual step.

You can also configure settings to automatically deploy schema changes so that you do not have to apply them manually if you do not have the required privileges. For more information, see Configuring settings to automatically deploy schema changes.

Configuring settings to automatically deploy schema changes

You can configure settings to automatically deploy schema changes that are in an application package that is to be deployed on candidate systems. Configure these settings so that you do not have to apply schema changes manually if you do not have the privileges to deploy them.

1. On the orchestration server, in Pega Platform, set the AutoDBSchemaChanges Dynamic System Setting to true to enable schema changes at the system level.
   1. In Designer Studio, search for AutoDBSchemaChanges.
   2. On the Settings tab, in the Value field, enter true.
   3. Click Save.
2. Add the SchemaImport privilege to your access role to enable schema changes at the user level. For more information, see Specifying privileges for an Access of Role to Object rule.

These settings are applied sequentially. If the AutoDBSchemaChanges Dynamic System Setting is set to false, you cannot deploy schema changes, even if you have the SchemaImport privilege.
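Because the settings are applied sequentially, schema changes deploy automatically only when the system-level setting is true AND the operator holds the SchemaImport privilege. A minimal sketch of that two-level check (the setting and privilege names come from the text above; the function itself is illustrative):

```python
# Sketch of the two-level check described above: schema changes deploy
# automatically only when the AutoDBSchemaChanges Dynamic System Setting
# is true AND the operator's access role grants the SchemaImport privilege.

def can_auto_deploy_schema(auto_db_schema_changes: bool, privileges: set) -> bool:
    # The system-level switch is evaluated first; false blocks everyone,
    # regardless of user-level privileges.
    if not auto_db_schema_changes:
        return False
    # User-level check: the operator needs the SchemaImport privilege.
    return "SchemaImport" in privileges

print(can_auto_deploy_schema(True, {"SchemaImport"}))   # True
print(can_auto_deploy_schema(False, {"SchemaImport"}))  # False
```

When the check fails, Deployment Manager falls back to the manual step described in Schema changes in application packages.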

Pausing a deployment When you pause a deployment, the pipeline completes the task that it is running, and stops the deployment at the next step. To pause a deployment, click Pause.

Performing actions on a deployment that has errors

If a deployment has errors, the pipeline stops processing it. You can do one of the following actions:

- Ignore the current step and run the next step by clicking Start.
- Restart the deployment at the current step, after fixing the errors, by clicking Start.
- Roll back to an earlier deployment by clicking Roll back deployment.

Diagnosing a pipeline

You can diagnose your pipeline to verify that it is configured properly, for example, that the target application and product rule are in the development environment, that connectivity between systems and repositories is working, and that premerge settings are correctly configured.

1. In the Designer Studio footer, click Deployment Manager.
2. Click a pipeline.
3. Click Actions > Diagnose pipeline.
4. In the Diagnose application pipeline dialog box, review the errors, if any.
5. Optional: To view troubleshooting tips about errors, hover over the Troubleshooting tips link.

If the RMURL Dynamic System Setting is not configured, Deployment Manager displays a message that you can disregard if you are not using branches, because you do not need to configure the Dynamic System Setting.

Viewing branch status You can view the status of all the branches that are in your pipeline. For example, you can see whether a branch was merged in a deployment and when it was merged. 1. Click Deployment Manager in the Designer Studio footer. 2. Click a pipeline. 3. Click Actions > View branches.

Viewing deployment logs

View logs for a deployment to see the completion status of operations, for example, when a deployment is moved to a new stage. You can change the logging level to control which events are displayed in the log. For example, you can change the logging level of your deployment from INFO to DEBUG for troubleshooting purposes. For more information, see Logging Level Settings tool.

1. Click Deployment Manager in the Designer Studio footer.
2. Click a pipeline.
3. Click the Gear icon for the deployment for which you want to view the log file.
4. Click View log.

Viewing deployment reports

Deployment reports provide information about a specific deployment. You can view information such as the number of tasks that you configured on a deployment that have been completed and when each task started and ended.

1. Click Deployment Manager in the Designer Studio footer.
2. Click a pipeline.
3. Click the Gear icon for the deployment for which you want to view the deployment report.
4. Click View report.

Viewing reports for all deployments

Reports provide a variety of information about all the deployments in your pipeline. You can view the following key performance indicators (KPIs):

- Deployment Success - Percentage of deployments that are successfully deployed to production
- Deployment Frequency - Frequency of new deployments to production
- Deployment Speed - Average time taken to deploy to production
- Start frequency - Frequency at which new deployments are triggered
- Failure rate - Average number of failures per deployment
- Merges per day - Average number of branches that are successfully merged per day

To view reports, do the following tasks:

1. Click Deployment Manager in the Designer Studio footer.
2. Click a pipeline.
3. Click Actions > View reports.
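The KPIs above are simple aggregates over deployment records. The following sketch shows how three of them could be derived; the record fields (status, failures, minutes_to_production) are invented for illustration, since Deployment Manager computes these values for you.

```python
# Rough sketch of computing report KPIs from deployment records.
# The record fields are invented for illustration only.

def deployment_kpis(deployments):
    total = len(deployments)
    succeeded = sum(1 for d in deployments if d["status"] == "success")
    return {
        # Deployment Success: percentage successfully deployed to production.
        "success_pct": 100.0 * succeeded / total,
        # Failure rate: average number of failures per deployment.
        "failure_rate": sum(d["failures"] for d in deployments) / total,
        # Deployment Speed: average time taken to deploy to production.
        "avg_minutes": sum(d["minutes_to_production"] for d in deployments) / total,
    }

history = [
    {"status": "success", "failures": 0, "minutes_to_production": 30},
    {"status": "success", "failures": 1, "minutes_to_production": 50},
    {"status": "failed",  "failures": 2, "minutes_to_production": 70},
]
kpis = deployment_kpis(history)
print(round(kpis["success_pct"], 1))  # 66.7
print(kpis["failure_rate"])           # 1.0
```

Frequency KPIs (Deployment Frequency, Start frequency, Merges per day) would additionally divide such counts by the length of the reporting period.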

Deleting an application pipeline When you delete a pipeline, its associated application packages are not removed from the repositories that the pipeline is configured to use. 1. In the Designer Studio footer, click Deployment Manager. 2. Click the Delete icon for the pipeline that you want to delete. 3. Click Submit.

Viewing, downloading, and deleting application packages in repositories

You can view, download, and delete application packages in repositories that are on the orchestration server. If you are using Deployment Manager on Pega Cloud, application packages that you have deployed to cloud repositories are stored on Pega Cloud. To manage your cloud storage space, you can download and permanently delete the packages.

1. In the Designer Studio footer, click Deployment Manager.
2. Click the pipeline for which you want to download or delete packages.
3. Click either Development Repository or Production Repository.
4. Click Actions > Browse artifacts.
5. To download an application package, click the package, and then save it to the appropriate location.
6. To delete packages, select the check boxes for the packages that you want to delete, and then click Delete.

Deployment Manager 3.2.x

Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your Pega applications from within Pega Platform™. You can create a standardized deployment process so that you can deploy predictable, high-quality releases without using third-party tools. With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging, application package generation, artifact management, and package promotion to different stages in the workflow.

Deployment Manager 3.2.x is supported on Pega 7.4. You can download it for Pega Platform from the Deployment Manager Pega Exchange page. Each customer VPC on Pega Cloud has a dedicated orchestrator instance for Deployment Manager, so you do not need to install Deployment Manager to use it with your Pega Cloud application.

For more information about the features in the latest version of Deployment Manager 3.2.x, see the following articles:

- Deployment Manager release notes
- Deployment Manager architecture and workflows
- Creating custom repositories for Deployment Manager
- Installing and configuring Deployment Manager 3.2.x
- Using Deployment Manager 3.2.x

Installing and configuring Deployment Manager 3.2.x

Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks and allow you to quickly deploy high-quality software to production. Each customer virtual private cloud (VPC) on Pega Cloud has a dedicated orchestrator instance to use Deployment Manager. You do not need to install Deployment Manager to use it with your Pega Cloud application. This document describes the features for the latest version of Deployment Manager 3.2.x. See the following topics for more information about installing and configuring Deployment Manager:

- Step 1: Installing Deployment Manager on premises
- Step 2: Configuring systems in the pipeline
- Step 3: Configuring the development system for branch-based development (optional)
- Step 4: Configuring additional settings

For information about using Deployment Manager, see Using Deployment Manager 3.2.x.

Step 1: Installing Deployment Manager on premises

If you are using Deployment Manager on premises with Pega Platform™, complete the following steps to install it.

1. Install Pega 7.4 on all systems in the CI/CD pipeline.
2. Browse to the Deployment Manager Pega Exchange page, and then download the DeploymentManager03.02.0x.zip file for your version of Pega Platform to your local disk on each system.
3. Use the Import wizard to import files into the appropriate systems. For more information about the Import wizard, see Importing a file by using the Import wizard.
4. On the orchestration server, import the following files:
   - PegaDevOpsFoundation_03.02.0x.zip
   - PegaDeploymentManager_03.02.0x.zip
5. On the development, QA, staging, and production systems, import the PegaDevOpsFoundation_03.02.0x.zip file.
6. Optional: If you are using distributed development, on the remote development system, import the PegaDevOpsFoundation_03.02.0x.zip file.

Step 2: Configuring systems in the pipeline

Complete the following steps to set up a pipeline for all supported CI/CD workflows. If you are using branches, you must configure additional settings after you perform the required steps.

- Step 2a: Configuring authentication profiles on the orchestration server and candidate systems
- Step 2b: Configuring the orchestration server
- Step 2c: Configuring candidate systems
- Step 2d: Creating repositories on the orchestration server and candidate systems

Step 2a: Configuring authentication profiles on the orchestration server and candidate systems

When you install Deployment Manager on all the systems in your pipeline, default applications, operator IDs, and authentication profiles that communicate between the orchestration server and candidate systems are also installed.

On the orchestration server, the following items are installed:

- The Pega Deployment Manager application.
- The DMReleaseAdmin operator ID, which release managers use to log in to the Pega Deployment Manager application. You must enable this operator ID and specify its password.
- The DMAppAdmin authentication profile. You must update this authentication profile to use the password that you specified for the DMAppAdmin operator ID, which is configured on all the candidate systems.

On all the candidate systems, the following items are installed:

- The PegaDevOpsFoundation application.
- The DMAppAdmin operator ID, which points to the PegaDevOpsFoundation application. You must enable this operator ID and specify its password.
- The DMReleaseAdmin authentication profile. You must update this authentication profile to use the password that you specified for the DMReleaseAdmin operator ID, which is configured on the orchestration server.

Configure the default authentication profiles by following these steps:

1. On the orchestration server, enable the DMReleaseAdmin operator ID and specify its password.
   1. Log in to the orchestration server with the credentials [email protected]/install.
   2. In Designer Studio, click Records > Organization > Operator ID, and then click DMReleaseAdmin.
   3. In the Designer Studio header, click the operator ID initials, and then click Operator.
   4. On the Edit Operator ID rule form, click the Security tab.
   5. Clear the Disable Operator check box.
   6. Click Save.
   7. Click Update password.
   8. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then click Submit.
   9. Optional: Clear the Force password change on next login check box if you do not want to change the password for the DMReleaseAdmin operator ID the next time that you log in.
   10. Log out of the orchestration server.
2. On each candidate system, update the DMReleaseAdmin authentication profile to use the new password. All candidate systems use this authentication profile to communicate with the orchestration server about the status of the tasks in the pipeline.
   1. Log in to each candidate system with the DMReleaseAdmin user name and the password that you specified.
   2. Click Records > Security > Authentication Profile.
   3. Click DMReleaseAdmin.
   4. On the Edit Authentication Profile rule form, click Set password.
   5. In the Password dialog box, enter the password, and then click Submit.
   6. Save the rule form.
3. On each candidate system, which includes the development, QA, staging, and production systems, enable the DMAppAdmin operator ID. If you want to create your own operator IDs, ensure that they point to the PegaDevOpsFoundation application.
   1. Log in to each candidate system with [email protected]/install.
   2. In Designer Studio, click Records > Organization > Operator ID, and then click DMAppAdmin.
   3. In the Designer Studio header, click the operator ID initials, and then click Operator.
   4. On the Edit Operator ID rule form, click the Security tab.
   5. Clear the Disable Operator check box.
   6. Click Save.
   7. Click Update password.
   8. In the Change Operator ID Password dialog box, enter a password, reenter it to confirm it, and then click Submit.
   9. Optional: Clear the Force password change on next login check box if you do not want to change the password for the DMAppAdmin operator ID the next time that you log in.
   10. Log out of each candidate system.
4. On the orchestration server, modify the DMAppAdmin authentication profile to use the new password. The orchestration server uses this authentication profile to communicate with candidate systems so that it can run tasks in the pipeline.
   1. Log in to the orchestration server with the DMAppAdmin user name and the password that you specified.
   2. Click Records > Security > Authentication Profile.
   3. Click DMAppAdmin.
   4. On the Edit Authentication Profile rule form, click Set password.
   5. In the Password dialog box, enter the password, and then click Submit.
   6. Save the rule form.

Step 2b: Configuring the orchestration server

The orchestration server is the system on which release managers configure and manage CI/CD pipelines.

1. Optional: If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd service packages.
   1. Click Records > Integration-Resources > Service Package.
   2. Click api.
   3. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.
   4. Click Records > Integration-Resources > Service Package.
   5. Click cicd.
   6. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.
2. Configure the candidate systems in your pipeline. For more information, see Step 2c: Configuring candidate systems.

Step 2c: Configuring candidate systems

Configure each system that is used for the development, QA, staging, and production stages in the pipeline.

1. On each candidate system, add the PegaDevOpsFoundation application to your application stack.
   1. In the Designer Studio header, click the name of your application, and then click Definition.
   2. In the Built on application section, click Add application.
   3. In the Name field, press the Down Arrow key and select PegaDevOpsFoundation.
   4. In the Version field, press the Down Arrow key and select the version of Deployment Manager that you are using.
   5. Click Save.
2. Optional: If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd service packages.
   1. Click Records > Integration-Resources > Service Package.
   2. Click api.
   3. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.
   4. Click Records > Integration-Resources > Service Package.
   5. Click cicd.
   6. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared.
3. Optional: If you want to use a product rule other than the default product rule that is created by the New Application wizard, on the development system, create a product rule that defines the application package that will be moved through repositories in the pipeline. For more information, see Product rules: Completing the Create, Save As, or Specialization form. When you use the New Application wizard, a default product rule is created that has the same name as your application.
4. Configure repositories through which to move artifacts in your pipeline. For more information, see Step 2d: Creating repositories on the orchestration server and candidate systems.

Step 2d: Creating repositories on the orchestration server and candidate systems

If you are using Deployment Manager on premises, create repositories on the orchestration server and all candidate systems to move your application between all the systems in the pipeline. You can use a supported repository type that is provided in Pega Platform, or you can create a custom repository type. If you are using Deployment Manager on Pega Cloud, default repositories are provided; if you want to use repositories other than the ones provided, you can create your own.

For more information about creating a supported repository type, see Creating a repository connection. For more information about creating a custom repository type, see Creating custom repository types for Deployment Manager.

When you create repositories, note the following information:

- The Pega repository type is not supported.
- Ensure that each repository has the same name on all systems.
- When you create JFrog Artifactory repositories, ensure that you create a Generic package type in JFrog Artifactory. Also, when you create the authentication profile for the repository on Pega Platform, you must select the Preemptive authentication check box.
- After you configure a pipeline, you can verify that the repository connects to the URL of the development and production repositories by clicking Test Connectivity on the Repository rule form.

Step 3: Configuring the development system for branch-based development (optional)

After you configure the orchestration server and all your candidate systems, configure additional settings so that you can use pipelines if you are using branches in a distributed or non-distributed branch-based environment. You must configure the development system to create a pipeline in a branch-based environment.

1. On the development system (in a non-distributed environment) or the main development system (in a distributed environment), create a Dynamic System Setting to define the URL of the orchestration server, even if the orchestration server and the development system are the same system.
   1. Click Create > Records > SysAdmin > Dynamic System Settings.
   2. In the Owning Ruleset field, enter Pega-DevOps-Foundation.
   3. In the Setting Purpose field, enter RMURL.
   4. Click Create and open.
   5. On the Settings tab, in the Value field, enter the URL of the orchestration server in the format http://hostname:port/prweb/PRRestService.
   6. Click Save.
2. Complete the following steps on either the development system (in a non-distributed environment) or the remote development system (in a distributed environment).
   1. Use the New Application wizard to create a new development application that developers will log in to. This application allows development teams to maintain a list of development branches without modifying the definition of the target application.
   2. Add the target application of the pipeline as a built-on application layer of the development application.
      1. Log in to the application.
      2. In the Designer Studio header, click the name of your application, and then click Definition.
      3. In the Built-on application section, click Add application.
      4. In the Name field, press the Down Arrow key and select the name of the target application.
      5. In the Version field, press the Down Arrow key and select the target application version.
      6. Click Save.
   3. Lock the application rulesets to prevent developers from making changes to rules after branches have been merged.
      1. In the Designer Studio header, click the name of your application, and then click Definition.
      2. In the Application rulesets section, click the Open icon for each ruleset that you want to lock.
      3. Click Lock and Save.
   4. Optional: It is recommended that you merge branches by using the Merge Branches wizard. However, you can publish a branch to the remote development system to start a deployment. Publishing a branch when you have multiple pipelines per application is not supported.
      1. In Designer Studio, enable Pega repository types. For more information, see Enabling the Pega repository type.
      2. Create a new Pega repository type. For more information, see Creating a repository connection. Ensure that you do the following tasks:
         - In the Host ID field, enter the URL of the development system.
         - Ensure that the default access group of the operator that is configured for the authentication profile of this repository points to the pipeline application on the development system (in a non-distributed environment) or the main development system (in a distributed environment).
3. Configure the orchestration server. For more information, see Step 2b: Configuring the orchestration server.
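Because a mistyped RMURL value silently breaks branch merging, a quick sanity check of the format the text specifies (http://hostname:port/prweb/PRRestService) can help before you save the setting. The following helper is hypothetical, not part of Deployment Manager:

```python
import re

# Hypothetical pre-save sanity check for the RMURL Dynamic System Setting.
# It only verifies the http://hostname:port/prweb/PRRestService shape
# described above; it does not test connectivity to the orchestration server.

RMURL_PATTERN = re.compile(
    r"^https?://[A-Za-z0-9.\-]+(:\d+)?/prweb/PRRestService/?$"
)

def looks_like_rmurl(value: str) -> bool:
    return RMURL_PATTERN.match(value) is not None

print(looks_like_rmurl("http://orchestrator.example.com:8080/prweb/PRRestService"))  # True
print(looks_like_rmurl("http://orchestrator.example.com:8080/prweb"))                # False
```

A passing format check still leaves connectivity to verify, which is what Actions > Diagnose pipeline does for you.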

Step 4: Configuring additional settings

As part of your pipeline, you can optionally send email notifications to users, configure Jenkins if you are using a Jenkins task, and upgrade to the latest version of Deployment Manager if you are using a previous version. See the following topics for more information:

- Configuring email notifications on the orchestration server
- Configuring Jenkins
- Upgrading to Deployment Manager 3.2.x on the orchestration server

Configuring email notifications on the orchestration server

You can optionally configure email notifications on the orchestration server. For example, users can receive emails when pre-merge criteria are not met and the system cannot create a deployment. To configure the orchestration server to send emails, complete the following steps:

1. Use the Email wizard to configure an email account and listener by clicking Designer Studio > Integration > Email > Email Wizard. This email account sends notifications to users when events occur, for example, if there are merge conflicts. For detailed information, see the procedure for "Configuring an email account that receives email and creates or manages work" in Entering email information in the Email wizard.
2. From the What would you like to do? list, select Receive an email and create/manage a work object.
3. From the What is the class of your work type? list, select Pega-Pipeline-CD.
4. From the What is your starting flow name? list, select NewWork.
5. From the What is your organization? list, select the organization that is associated with the work item.
6. In the What Ruleset? field, select the ruleset that contains the generated email service rule. This ruleset applies to the work class.
7. In the What RuleSet Version? field, select the version of the ruleset for the generated email service rule.
8. Click Next to configure the email listener.
9. In the Email Account Name field, enter Pega-Pipeline-CD, which is the name of the email account that the listener references for incoming and outgoing email.
10. In the Email Listener Name field, enter the name of the email listener. Begin the name with a letter, and use only letters, numbers, the ampersand character (&), and hyphens.
11. In the Folder Name field, enter the name of the email folder that the listener monitors. Typically, this folder is INBOX.
12. In the Service Package field, enter the name of the service package to be deployed. Begin the name with a letter, and use only letters, numbers, and hyphens to form an identifier.
13. In the Service Class field, enter the service class name.
14. In the Requestor User ID field, press the Down Arrow key, and select the operator ID of the release manager operator.
15. In the Requestor Password field, enter the password for the release manager operator.
16. In the Requestor User ID field, enter the operator ID that the email service uses when it runs.
17. In the Password field, enter the password for the operator ID.
18. Click Next to continue the wizard and configure the service package. For more information, see Configuring the service package in the Email wizard.
19. After you complete the wizard, enable the listener that you created in the Email wizard. For more information, see Starting a listener.

Email notifications

Emails are preconfigured with information about each notification type. For example, when a deployment failure occurs, the email that is sent provides information such as the pipeline name and the URL of the system on which the failure occurred. Preconfigured emails are sent in the following scenarios:

Deployment start – When a deployment starts, an email is sent to the release manager and, if you are using branches, to the operator who started the deployment.
Deployment failure – If any step in the deployment process is unsuccessful, the deployment pauses, and an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Deployment step completion – When a step in a deployment process is completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Stage completion – When a stage in a deployment process is completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Deployment completion – When a deployment is successfully completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Manual tasks requiring approval – When a manual task requires email approval from a user, an email is sent to that user, who can approve or reject the task from the email.
Stopped deployment – When a deployment is stopped, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Pega unit testing failure – If a Pega unit test cannot run successfully on a step in the deployment, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge.
Schema changes required – If you do not have the required schema privileges to deploy the schema changes that an application package requires, an email is sent to the operator who started the deployment.
Guardrail compliance scores – If you are using the Check guardrail compliance task and the compliance score is less than the score that is specified in the task, an email is sent to the release manager.
Approve for production – If you are using the Approve for production task, which requires approval from a user before application changes are deployed to production, an email is sent to that user, who can approve or reject the changes.
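The recipient rules in the scenarios above can be summarized in a few lines of code. The sketch below is illustrative only: the event names, the example addresses, and the helper function are hypothetical, not Deployment Manager identifiers.

```python
# Illustrative sketch only: routes Deployment Manager notification events to
# recipients, following the scenarios listed above. The event names, the
# example addresses, and this helper are hypothetical, not Pega identifiers.

RELEASE_MANAGER = "release.manager@example.com"  # placeholder address

# Events that notify the release manager and, when branches are used, the
# operator who started the branch merge (for deployment start, the operator
# who started the deployment).
MANAGER_AND_INITIATOR_EVENTS = {
    "deployment_start",
    "deployment_failure",
    "step_completion",
    "stage_completion",
    "deployment_completion",
    "deployment_stopped",
    "pegaunit_failure",
}

def recipients(event, using_branches=False, merge_initiator=None,
               deployment_starter=None, task_assignee=None):
    """Return the email recipients for a notification event."""
    if event in MANAGER_AND_INITIATOR_EVENTS:
        to = [RELEASE_MANAGER]
        if using_branches and merge_initiator:
            to.append(merge_initiator)
        return to
    if event == "schema_changes_required":
        return [deployment_starter]    # operator who started the deployment
    if event in ("manual_task_approval", "approve_for_production"):
        return [task_assignee]         # user who approves or rejects
    if event == "guardrail_compliance":
        return [RELEASE_MANAGER]
    raise ValueError(f"unknown event: {event}")

print(recipients("deployment_failure", using_branches=True,
                 merge_initiator="dev1@example.com"))
# ['release.manager@example.com', 'dev1@example.com']
```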

Configuring Jenkins

If you are using a Jenkins task in your pipeline, configure Jenkins so that it can communicate with the orchestration server.

1. On the orchestration server, create an authentication profile that uses the Jenkins credentials.
   1. Click Create > Security > Authentication Profile.
   2. Enter a name, and then click Create and open.
   3. In the User name field, enter the user name of the Jenkins user.
   4. Click Set password, enter the Jenkins password, and then click Submit.
   5. Select the Preemptive authentication check box.
   6. Click Save.
2. Because the Jenkins task does not support Cross-Site Request Forgery (CSRF) protection, disable it in Jenkins by completing the following steps:
   1. In Jenkins, click Manage Jenkins.
   2. Click Configure Global Security.
   3. In the CSRF Protection section, clear the Prevent Cross Site Request Forgery exploits check box.

   4. Click Save.
3. Install the Post build task plug-in.
4. Install the curl command on the Jenkins server.
5. Create a new freestyle project.
6. On the General tab, select the This project is parameterized check box.
7. Add the BuildID and CallBackURL parameters.
   1. Click Add parameter, and then select String parameter.
   2. In the String field, enter BuildID.
   3. Click Add parameter, and then select String parameter.
   4. In the String field, enter CallBackURL.
8. In the Build Triggers section, select the Trigger builds remotely check box.
9. In the Authentication Token field, select the token that you want to use when you start Jenkins jobs remotely.
10. In the Build Environment section, select the Use Secret text(s) or file(s) check box.
11. In the Bindings section, do the following actions:
   1. Click Add, and then select User name and password (conjoined).
   2. In the Variable field, enter RMCREDENTIALS.
   3. In the Credentials field, click Specific credentials.
   4. Click Add, and then select Jenkins.
   5. In the Add credentials dialog box, in the Username field, enter the operator ID of the release manager operator that is configured on the orchestration server.
   6. In the Password field, enter the password.
   7. Click Save.
12. In the Post-Build Actions section, do one of the following actions, depending on your operating system:

If Jenkins is running on Microsoft Windows, add the following post-build tasks:

1. Click Add post-build action, and then select Post build task.
2. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example, BUILD FAILURE.
3. In the Script field, enter:
curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%"
4. Click Add another task.
5. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example, BUILD SUCCESS.
6. In the Script field, enter:
curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%"
7. Click Save.

If Jenkins is running on UNIX or Linux, add the following post-build tasks. Use the dollar sign ($) instead of the percent sign (%) to access the environment variables.

1. Click Add post-build action, and then select Post build task.
2. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example, BUILD FAILURE.
3. In the Script field, enter:
curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"$BuildID\"}" "$CallBackURL"
4. Click Add another task.
5. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example, BUILD SUCCESS.
6. In the Script field, enter:
curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"$BuildID\"}" "$CallBackURL"
7. Click Save.
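Both post-build scripts make the same HTTP call: a basic-authenticated POST of a small JSON status payload to the CallBackURL that Deployment Manager passed to the Jenkins job. A minimal Python sketch of that call follows; the helper names are ours, and the URL, user, and values are placeholders, but the payload fields mirror the curl commands above.

```python
import base64
import json
import urllib.request

def build_payload(job_name, build_number, status, build_id):
    """Build the JSON body that the post-build curl commands send.

    status is "SUCCESS" or "FAIL"; build_id is the BuildID parameter
    that Deployment Manager passed to the Jenkins job.
    """
    return json.dumps({
        "jobName": job_name,
        "buildNumber": str(build_number),
        "pyStatusValue": status,
        "pyID": build_id,
    }).encode("utf-8")

def notify_orchestrator(callback_url, user, password, payload):
    """POST the payload back to the orchestration server with basic auth,
    like curl --user does."""
    request = urllib.request.Request(callback_url, data=payload, method="POST")
    request.add_header("Content-Type", "application/json")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    request.add_header("Authorization", f"Basic {token}")
    # Network call; raises urllib.error.HTTPError on a non-2xx response.
    return urllib.request.urlopen(request)

# Placeholder values for illustration; in a real post-build step these come
# from the Jenkins environment (JOB_NAME, BUILD_NUMBER, BuildID, CallBackURL).
payload = build_payload("MyPipeline-build", 7, "SUCCESS", "B-42")
print(payload.decode())
```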

Upgrading to Deployment Manager 3.2.x on the orchestration server

Before you upgrade, ensure that no deployments are running, have errors, or are paused. If you are using an earlier version of Deployment Manager, upgrade to Deployment Manager 3.2.x by performing the following steps:

1. Log in to the release management application.
2. In Designer Studio, search for pxUpdatePipeline, and then click the activity in the dialog box that displays the results.
3. Click Actions > Run.
4. In the dialog box that is displayed, click Run.
5. Modify the current release management application so that it is built on PegaDeploymentManager:03-02-01.
   1. In the Designer Studio header, click the name of your application, and then click Definition.
   2. In the Edit Application rule form, on the Definition tab, in the Built on application section, for the PegaDeploymentManager application, press the Down Arrow key and select 03.02.01.
   3. Click Save.
6. Merge rulesets to the PipelineData ruleset.
   1. Click Designer Studio > System > Refactor > Rulesets.
   2. Click Copy/Merge RuleSet.
   3. Click the Merge Source RuleSet(s) to Target RuleSet radio button.
   4. Click the RuleSet Versions radio button.
   5. In the Available Source Ruleset(s) section, select the first open ruleset version that appears in the list, and then click the Move icon. All your current pipelines are stored in the first open ruleset. If you modified this ruleset after you created the application, select all the ruleset versions that contain pipeline data.
   6. In the target RuleSet/Information section, in the Name field, press the Down Arrow key and select Pipeline Data.
   7. In the Version field, enter 01-01-01.
   8. For the Delete Source RuleSet(s) upon completion of merge? option, click No.
   9. Click Next.
   10. Click Merge to merge your pipelines to the PipelineData:01-01-01 ruleset.
   11. Click Done. Your pipelines are migrated to the Pega Deployment Manager application.
7. Log out of the orchestration server and log back in to it with the DMReleaseAdmin operator ID and the password that you specified for it.

For backup purposes, pipelines are still visible in your previous release management application. However, you should not create deployments with this application, because deployments might not work correctly.

Using Deployment Manager 3.2.x

Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks and allow you to quickly deploy high-quality software to production. On the orchestration server, release managers use the DevOps landing page to configure CI/CD pipelines for their Pega® Platform applications. The landing page displays all the running and queued application deployments, branches that are to be merged, and reports that provide information about your DevOps environment, such as key performance indicators (KPIs).

This document describes the features of the latest version of Deployment Manager 3.2.x. See the following topics for more information about using Deployment Manager to configure and use CI/CD pipelines:

Configuring an application pipeline
Manually starting a deployment
Starting a deployment in a branch-based environment
Starting a deployment in a distributed, branch-based environment
Completing or rejecting a manual step in a deployment
Schema changes in application packages
Pausing a deployment
Performing actions on a deployment with errors
Viewing branch status
Viewing deployment logs
Viewing deployment reports
Viewing reports for all deployments
Deleting an application pipeline
Viewing, downloading, and deleting application packages in repositories

Configuring an application pipeline

When you add a pipeline, you specify merge criteria and configure stages and steps in the continuous delivery workflow. For example, you can specify that a branch must be peer-reviewed before it can be merged, and you can specify that Pega unit tests must be run after a branch is merged and is in the QA stage of the pipeline.

You can create multiple pipelines for one version of an application. For example, you can use multiple pipelines in the following scenarios:

To move a deployment to production separately from the rest of the pipeline. You can then create a pipeline that has only a production stage, or development and production stages.
To use parallel development and hotfix life cycles for your application.

Adding a pipeline on Pega Cloud

To add a pipeline on Pega Cloud, perform the following steps:

1. In the Designer Studio footer, click Deployment Manager.
2. Click Add pipeline.
3. Specify the details of the application for which you are creating the pipeline.
   1. Optional: If you want to change the URL of your development system, which is populated by default, in the Development environment field, press the Down Arrow key and select the URL. This is the system on which the product rule that defines the application package that moves through the repository is located.
   2. In the Application field, press the Down Arrow key and select the name of the application.
   3. In the Version field, press the Down Arrow key and select the application version.
   4. In the Access group field, press the Down Arrow key and select the access group for which pipeline tasks are run. This access group must be present on all the candidate systems and have at least the sysadmin4 role.
   5. In the Pipeline name field, enter the name of the pipeline. This name must be unique.
4. Click Create. The system adds required tasks, which you cannot delete, to the pipeline, for example, Deploy and Generate Artifact. For Pega Cloud, it also adds mandatory tasks that must be run on the pipeline, for example, the Check guardrail compliance task.
5. Optional: Add tasks that you want to perform on your pipeline, such as Pega unit testing. For more information, see Modifying stages and tasks in the pipeline.

Adding a pipeline on premises

To add a pipeline on premises, complete the following steps:

1. In the Designer Studio footer, click Deployment Manager.
2. Click Add pipeline.
3. Specify the details of the application for which you are creating the pipeline.
   1. In the Development environment field, enter the URL of the development system. This is the system on which the product rule that defines the application package that moves through the repository is located.
   2. In the Application field, press the Down Arrow key and select the name of the application.
   3. In the Version field, press the Down Arrow key and select the application version.
   4. In the Access group field, press the Down Arrow key and select the access group for which pipeline tasks are run. This access group must be present on all the candidate systems and have at least the sysadmin4 role.
   5. In the Pipeline name field, enter the name of the pipeline. This name must be unique.
   6. In the Product rule field, enter the name of the product rule that defines the contents of the application.
   7. In the Version field, enter the product rule version.
4. Optional: If the application depends on other applications, in the Dependencies section, add those applications.
   1. Click Add.
   2. In the Application name field, press the Down Arrow key and select the application name.
   3. In the Application version field, press the Down Arrow key and select the application version.
   4. In the Repository name field, press the Down Arrow key and select the repository that contains the production-ready artifact of the dependent application. If you want the latest artifact of the dependent application to be automatically populated, ensure that this repository is configured to support file updates.

   5. In the Artifact name field, press the Down Arrow key and select the artifact. For more information about dependent applications, see Product rules: Listing product dependencies for Pega-supplied applications.
5. Click Next.
6. In the Environment details section, in the Stages section, specify the URL of each candidate system and the authentication profile that each system uses to communicate with the orchestration server.
   1. In the Environments field for the system, press the Down Arrow key and select the URL of the system.
   2. Optional: If you are using your own authentication profiles, in the Authentication field for the system, press the Down Arrow key and select the authentication profile that you want to use to communicate from the orchestration server to the system. By default, the fields are populated with the DMAppAdmin authentication profile.
7. In the Artifact management section, specify the development and production repositories through which the product rule that contains the application contents moves through the pipeline.
   1. In the Development repository field, press the Down Arrow key and select the development repository.
   2. In the Production repository field, press the Down Arrow key and select the production repository.
8. Optional: If you are using a Jenkins step in a pipeline, in the External orchestration server section, specify the Jenkins details.
   1. In the URL field, enter the URL of the Jenkins server.
   2. In the Authentication profile field, press the Down Arrow key and select the authentication profile on the orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.
9. Click Next.
10. Optional: If you are using branches in your application, in the Merge policy section, specify merge options.
   1. Do one of the following actions:
      To merge branches into the highest existing ruleset in the application, click the Highest existing ruleset radio button.
      To merge branches into a new ruleset, click the New ruleset radio button.
   2. In the Password field, enter the password that locks the rulesets on the development system.
11. Click Next. The system adds required tasks, which you cannot delete, to the pipeline, for example, Deploy and Generate Artifact. The system also adds other tasks to enforce best practices, such as Check guardrail compliance.
12. Optional: In the Merge criteria pane, specify tasks that must be completed before a branch can be merged in the pipeline.
   1. Click Add task.
   2. Specify the task that you want to perform.
      To specify that a branch must meet a compliance score before it can be merged:
      1. From the Task list, select Check guardrail compliance.
      2. In the Weighted compliance score field, enter the minimum required compliance score.
      3. Click Submit.
      To specify that a branch must be reviewed before it can be merged:
      1. From the Task list, select Check review status.
      2. Click Submit.
13. Optional: To start a deployment automatically when a branch is merged, select the Trigger deployment on merge check box.
14. Optional: Clear a check box for a deployment life cycle stage to skip it.
15. Optional: In the Continuous Deployment pane, specify the tasks to be performed during each stage of the pipeline.
   1. Do one of the following actions:
      Click a manually added task, click the Selector icon, and then click either Add task above or Add task below to add the task above or below the existing task.
      Click Add task in the stage.
   2. From the Task list, select the task that you want to perform.
      To run Pega unit tests either for the pipeline application or for an application that is associated with an access group, select Pega unit testing.
      1. Optional: Complete one of the following tasks:
         To run all the Pega unit tests that are in a Pega unit suite for the pipeline application, in the Test Suite ID field, enter the test suite ID. If you do not specify a test suite, all the Pega unit tests for the pipeline application are run.
         To run all the Pega unit tests for an application that is associated with an access group, in the Access Group field, enter the access group.
         For more information about creating Pega unit tests, see Creating PegaUnit test cases.
      2. Click Submit.
      To run a Jenkins job that you have configured, select Jenkins.
      1. In the Job name field, enter the name of the Jenkins job (which is the name of the Jenkins deployment) that you want to run.
      2. In the Token field, enter the Jenkins authentication token.
      3. In the Parameters field, enter parameters, if any, to send to the Jenkins job. Separate multiple parameters with a comma.
      4. Click Submit.
      To add a manual step that a user must perform in the pipeline, select Manual.
      1. In the Job name field, enter text that describes the action that you want the user to take.
      2. In the Assigned to field, press the Down Arrow key and select the operator ID to assign the task to.
      3. Click Submit.
      To specify that the application must meet a compliance score, select Check guardrail compliance.
      1. In the Weighted compliance score field, enter the minimum required compliance score.
      2. Click Submit.
16. Optional: To modify the Approve for production task, which is added to the stage before production and which requires a user to approve application changes before they are sent to production, do the following actions:
   1. Click the info icon.
   2. In the Job name field, enter a name for the task.
   3. In the Assign to field, press the Down Arrow key and select the user who approves the application for production. An email is sent to this user, who can approve or reject application changes from within the email.
   4. Click Submit.
17. Click Finish.
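Taken together, the wizard captures a pipeline model: application details, environments, repositories, merge policy, merge criteria, and per-stage tasks. The sketch below shows that model as plain data. Every name and value is a hypothetical example for illustration, not the rule structure that Deployment Manager actually stores.

```python
# Hypothetical data sketch of the pipeline model assembled by the wizard;
# all keys, names, and URLs are illustrative, not Deployment Manager rules.
pipeline = {
    "application": {"name": "MyApp", "version": "01.01.01",
                    "product_rule": "MyAppRAP", "product_version": "01.01.01"},
    "access_group": "MyApp:Administrators",   # needs at least the sysadmin4 role
    "environments": {
        "dev": "https://dev.example.com/prweb",
        "qa": "https://qa.example.com/prweb",
        "staging": "https://stage.example.com/prweb",
        "production": "https://prod.example.com/prweb",
    },
    "repositories": {"development": "dev-repo", "production": "prod-repo"},
    "merge_policy": {"target": "highest_existing_ruleset",
                     "ruleset_password": "********"},   # placeholder
    "merge_criteria": [
        {"task": "check_guardrail_compliance", "min_score": 97},
        {"task": "check_review_status"},
    ],
    "stages": {
        "qa": [{"task": "pega_unit_testing", "test_suite_id": "MySuite"}],
        "staging": [{"task": "approve_for_production", "assign_to": "rmuser"}],
    },
    "trigger_deployment_on_merge": True,
}

def can_merge(results):
    """A branch merges only when every configured merge-criteria task passes.

    results: {task_name: bool} outcomes for the criteria above."""
    return all(results.get(c["task"], False) for c in pipeline["merge_criteria"])
```

For example, `can_merge({"check_guardrail_compliance": True})` is false because the review-status criterion has no passing result.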

Modifying application details

You can modify application details, such as the product rule that defines the content of the application that moves through the pipeline.

1. Click Actions > Application details.
2. Optional: In the Development environment field, enter the URL of the development system, which is the system on which the product rule that defines the application package that moves through the repository is located.
3. Optional: In the Version field, press the Down Arrow key and select the application version.

4. Optional: In the Product rule field, press the Down Arrow key and select the product rule that defines the contents of the application.
5. Optional: In the Version field, press the Down Arrow key and select the product rule version.
6. Optional: If the application depends on other applications, in the Dependencies section, add those applications.
   1. Click Add.
   2. In the Application name field, press the Down Arrow key and select the application name.
   3. In the Application version field, press the Down Arrow key and select the application version.
   4. In the Repository name field, press the Down Arrow key and select the repository that contains the production-ready artifact of the dependent application. If you want the latest artifact of the dependent application to be automatically populated, ensure that this repository is configured to support file updates.
   5. In the Artifact name field, press the Down Arrow key and select the artifact. For more information about dependent applications, see Product rules: Listing product dependencies for Pega-supplied applications.
7. Click Save.

Modifying URLs and authentication profiles

You can modify the URLs of your development and candidate systems and the authentication profiles that are used to communicate between those systems and the orchestration server.

1. Click Actions > Environment details.
2. Click Stages.
3. In the Environments field for the system, press the Down Arrow key and select the URL of the system.
4. In the Authentication field for the system, press the Down Arrow key and select the authentication profile that you want to use to communicate from the orchestration server to the system.
5. Click Save.

Modifying development and production repositories

You can modify the development and production repositories through which the product rule that contains the application contents moves through the pipeline. All the generated artifacts are archived in the development repository, and all the production-ready artifacts are archived in the production repository. If you are using Pega Cloud, you do not need to configure repositories, but you can use repositories other than the default ones that are provided.

1. Click Actions > Environment details.
2. Click Artifact Management.
3. Do one of the following actions to select a repository:
   If you are using Deployment Manager on premises, or on Pega Cloud with the default repositories, complete the following tasks:
   1. In the Application repository section, in the Development repository field, press the Down Arrow key and select the development repository.
   2. In the Production repository field, press the Down Arrow key and select the production repository.
   If you are using Deployment Manager on Pega Cloud and want to use repositories other than the default repositories, complete the following tasks:
   1. In the Artifact repository section, click the Yes radio button.
   2. In the Development repository field, press the Down Arrow key and select the development repository.
   3. In the Production repository field, press the Down Arrow key and select the production repository.
4. Click Save.

Specifying Jenkins server information

If you are using a Jenkins step, specify details about the Jenkins server, such as its URL.

1. Click Actions > Environment details.
2. Click External orchestration server.
3. Click the Jenkins icon.
4. Click OK.
5. In the URL field, enter the URL of the Jenkins server.
6. In the Authentication profile field, press the Down Arrow key and select the authentication profile on the orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.
7. Click Save.

Specifying merge options for branches

If you are using branches in your application, specify options for merging branches into the base application.

1. Click Actions > Merge policy.
2. Do one of the following actions:
   To merge branches into a new ruleset, click the New ruleset radio button.
   To merge branches into the highest existing ruleset in the application, click the Highest existing ruleset radio button.
3. In the Password field, enter the password that locks the rulesets on the development system.
4. Click Save.

Modifying stages and tasks in the pipeline

You can modify the stages and the tasks that are performed in each stage of the pipeline. For example, you can skip a stage or add tasks, such as Pega unit testing, to be done in the QA stage.

1. Click Actions > Pipeline model.
2. Optional: In the Merge criteria pane, specify tasks that must be completed before a branch can be merged in the pipeline.
   1. Click Add task.
   2. Specify the task that you want to perform.
      To specify that a branch must meet a compliance score before it can be merged:
      1. From the Task list, select Check guardrail compliance.
      2. In the Weighted compliance score field, enter the minimum required compliance score.
      3. Click Submit.
      To specify that a branch must be reviewed before it can be merged:
      1. From the Task list, select Check review status.
      2. Click Submit.
3. Optional: To start a deployment automatically when a branch is merged, select the Trigger deployment on merge check box.

4. Optional: Clear a check box for a deployment life cycle stage to skip it.
5. Optional: In the Continuous Deployment pane, specify the tasks to be performed during each stage of the pipeline.
   1. Do one of the following actions:
      Click a manually added task, click the selector icon, and then click either Add task above or Add task below to add the task above or below the existing task.
      Click Add task in the stage.
   2. From the Task list, select the task that you want to perform.
      To run Pega unit tests either for the pipeline application or for an application that is associated with an access group, select Pega unit testing.
      1. Optional: Complete one of the following tasks:
         To run all the Pega unit tests that are in a Pega unit suite for the pipeline application, in the Test Suite ID field, enter the test suite ID. If you do not specify a test suite, all the Pega unit tests for the pipeline application are run.
         To run all the Pega unit tests for an application that is associated with an access group, in the Access Group field, enter the access group.
         For more information about creating Pega unit tests, see Creating PegaUnit test cases.
      2. Click Submit.
      To run a Jenkins job that you have configured, select Jenkins.
      1. In the Job name field, enter the name of the Jenkins job (which is the name of the Jenkins deployment) that you want to run.
      2. In the Token field, enter the Jenkins authentication token.
      3. In the Parameters field, enter parameters, if any, to send to the Jenkins job. Separate multiple parameters with a comma.
      4. Click Submit.
      To add a manual step that a user must perform in the pipeline, select Manual.
      1. In the Job name field, enter text that describes the action that you want the user to take.
      2. In the Assigned to field, press the Down Arrow key and select the operator ID to assign the task to.
      3. Click Submit.
      To specify that the application must meet a compliance score, select Check guardrail compliance.
      1. In the Weighted compliance score field, enter the minimum required compliance score.
      2. Click Submit.
6. Optional: To modify the Approve for production task, which is added to the stage before production and which requires a user to approve application changes before they are sent to production, do the following actions:
   1. Click the info icon.
   2. In the Job name field, enter a name for the task.
   3. In the Assign to field, press the Down Arrow key and select the user who approves the application for production. An email is sent to this user, who can approve or reject application changes from within the email.
   4. Click Submit.
7. Click Finish.

Manually starting a deployment

Start a deployment manually if you are not using branches and are working directly in rulesets. You can also start a deployment manually if you do not want deployments to start automatically when branches are merged; in this case, also clear the Trigger deployment on merge check box in the pipeline configuration.

1. Click Deployment Manager in the Designer Studio footer.
2. Click the pipeline for which you want to start a deployment.
3. Click Start deployment.
4. Start a new deployment or deploy an existing application by completing one of the following actions:
   To start a deployment and deploy a new application package, do the following steps:
   1. Click the Generate new artifact radio button.
   2. In the Deployment name field, enter the name of the deployment.
   3. Click Deploy.
   To deploy an application package that is on a cloud repository, do the following steps:
   1. Click the Deploy an existing artifact radio button.
   2. In the Deployment name field, enter the name of the deployment.
   3. In the Select a repository field, press the Down Arrow key and select the repository.
   4. In the Select an artifact field, press the Down Arrow key and select the application package.
   5. Click Deploy.

Starting a deployment in a branch-based environment

In non-distributed, branch-based environments, you can immediately start a deployment by submitting a branch into a pipeline in the Merge Branches wizard. For more information, see Submitting a branch into a pipeline.

Starting a deployment in a distributed branch-based environment

If you are using Deployment Manager in a distributed, branch-based environment and are using multiple pipelines per application, first export the branch to the remote development system, and then merge it.

1. On the development system, package the branch. For more information, see Packaging a branch.
2. On the remote development system, import the branch by using the Import wizard. For more information, see Importing a file by using the Import wizard.
3. On the remote development system, start a deployment by using the Merge Branches wizard. For more information, see Submitting a branch into a pipeline.

If you are using one pipeline per application, you can publish a branch to start the merge. For more information, see Publishing a branch to a repository.

Completing or rejecting a manual step in a deployment

If a manual step is configured on a deployment, the deployment pauses when it reaches the step, and you can either complete or reject it. For example, if a user was assigned a task and completed it, you can complete the task to continue the deployment. Deployment Manager also sends you an email when there is a manual step in the pipeline. You can complete or reject a step either within the pipeline or through email. Deployment Manager also generates a manual step if there are schema changes in the application package that the release manager must apply. For more information, see Schema changes in application packages.

To complete or reject a manual step within the deployment, do the following steps:

1. Click Deployment Manager in the Designer Studio footer.
2. Click a pipeline.
3. Right-click the manual step and select one of the following options:
   Complete task: Resolve the task so that the deployment continues through the pipeline.
   Reject task: Reject the task so that the deployment does not proceed.

To complete or reject a manual step from within an email, click either Accept or Reject.

Schema changes in application packages

If an application package that is to be deployed on candidate systems contains schema changes, the Pega Platform orchestration server checks the candidate system to verify that you have the required privileges to deploy the schema changes. One of the following results occurs:

If you have the appropriate privileges, the schema changes are automatically applied to the candidate system, the application package is deployed, and the pipeline continues.
If you do not have the appropriate privileges, Deployment Manager generates an SQL file that lists the schema changes and sends it to your email address. It also creates a manual step, pausing the pipeline, so that you can apply the schema changes. After you complete the step, the pipeline continues. For more information about completing a step, see Completing or rejecting a manual step.

You can also configure settings to automatically deploy schema changes so that you do not have to apply them manually when you do not have the required privileges. For more information, see Configuring settings to automatically deploy schema changes.

Configuring settings to automatically deploy schema changes You can configure settings to automatically deploy schema changes that are in an application package that is to be deployed on candidate systems. Configure these settings so that you do not have to apply schema changes if you do not have the privileges to deploy them. 1. On the orchestration server, in Pega Platform, set the AutoDBSchemaChanges dynamic system setting to true to enable schema changes at the system level. 1. In Designer Studio, search for AutoDBSchemaChanges. 2. On the Settings tab, in the Value field, enter true. 3. Click Save. 2. Add the SchemaImport privilege to your access role to enable schema changes at the user level. For more information, see Specifying privileges for an Access or Role to Object rule. These settings are applied sequentially. If the AutoDBSchemaChanges Dynamic System Setting is set to false, you cannot deploy schema changes, even if you have the SchemaImport privilege.
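The precedence of the two settings can be expressed as a small sketch (hypothetical function, not a Pega API): the system-level Dynamic System Setting is evaluated first, and the user-level privilege matters only when that setting is true.

```python
def can_deploy_schema_changes(auto_db_schema_changes: bool,
                              has_schema_import_privilege: bool) -> bool:
    """AutoDBSchemaChanges (system level) gates everything; the
    SchemaImport privilege (user level) is checked only afterwards."""
    if not auto_db_schema_changes:
        # Setting is false: schema deployment is blocked for everyone,
        # even users who hold the SchemaImport privilege.
        return False
    return has_schema_import_privilege
```

For example, `can_deploy_schema_changes(False, True)` is false: the privilege alone is not sufficient when the Dynamic System Setting is disabled.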

Pausing a deployment When you pause a deployment, the pipeline completes the task that it is running, and stops the deployment at the next step. To pause a deployment, click the Pause button.

Performing actions on a deployment that has errors If a deployment has errors, the pipeline stops processing on it. You can do one of the following actions: Ignore the current step and run the next step by clicking the Start button. Restart the deployment at the current step, after fixing the errors, by clicking the Start button. Roll back to an earlier deployment by clicking the Roll back deployment button.

Viewing branch status You can view the status of all the branches that are in your pipeline. For example, you can see whether a branch was merged in a deployment and when it was merged. 1. Click Deployment Manager in the Designer Studio footer. 2. Click a pipeline. 3. Click Actions > View branches.

Viewing deployment logs View logs for a deployment to see the completion status of operations, for example, when a deployment is moved to a new stage. You can change the logging level to control which events are displayed in the log. For example, you can change logging levels of your deployment from INFO to DEBUG for troubleshooting purposes. For more information, see Logging Level Settings tool. 1. Click Deployment Manager in the Designer Studio footer. 2. Click a pipeline. 3. Click the Gear icon for the deployment for which you want to view the log file. 4. Click View log.

Viewing deployment reports Deployment reports provide information about a specific deployment. You can view information such as the number of tasks that you configured on a deployment that have been completed and when each task started and ended. 1. Click Deployment Manager in the Designer Studio footer. 2. Click a pipeline. 3. Click the Gear icon for the deployment for which you want to view the deployment report. 4. Click View report.

Viewing reports for all deployments Reports provide a variety of information about all the deployments in your pipeline. You can view the following key performance indicators (KPIs): Deployment Success - Percentage of deployments that are successfully deployed to production. Deployment Frequency - Frequency of deployments to production. Deployment Speed - Average time taken to deploy to production. Start frequency - Frequency at which new deployments are triggered. Failure rate - Average number of failures per deployment. Merges per day - Average number of branches that are successfully merged per day. To view reports, do the following tasks: 1. Click Deployment Manager in the Designer Studio footer. 2. Click a pipeline. 3. Click Actions > View reports.
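As an illustration of how such KPIs are derived, the following sketch computes three of them from a hypothetical deployment history (the record layout and sample values are invented for this example; they do not reflect Deployment Manager's internal data model):

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (started, finished, succeeded, failure_count)
deployments = [
    (datetime(2018, 1, 1, 9), datetime(2018, 1, 1, 10), True, 0),
    (datetime(2018, 1, 2, 9), datetime(2018, 1, 2, 12), False, 2),
    (datetime(2018, 1, 3, 9), datetime(2018, 1, 3, 11), True, 1),
]

# Deployment Success: percentage of deployments that reached production.
success_pct = 100.0 * sum(1 for d in deployments if d[2]) / len(deployments)

# Deployment Speed: average time taken to deploy to production.
avg_speed = sum(((d[1] - d[0]) for d in deployments), timedelta()) / len(deployments)

# Failure rate: average number of failures per deployment.
failure_rate = sum(d[3] for d in deployments) / len(deployments)
```

With this sample history, two of three deployments succeeded, the average deployment took two hours, and there was one failure per deployment on average.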

Deleting an application pipeline When you delete a pipeline, its associated application packages are not removed from the repositories that the pipeline is configured to use. 1. In the Designer Studio footer, click Deployment Manager. 2. Click the Delete icon for the pipeline that you want to delete. 3. Click Submit.

Viewing, downloading, and deleting application packages in repositories You can view, download, and delete application packages in repositories that are on the orchestration server. If you are using Deployment Manager on Pega Cloud, application packages that you have deployed to cloud repositories are stored on Pega Cloud. To manage your cloud storage space, you can download and permanently delete the packages. 1. In the Designer Studio footer, click Deployment Manager. 2. Click the pipeline for which you want to download or delete packages. 3. Click either Development Repository or Production Repository. 4. Click Actions > Browse artifacts. 5. To download an application package, click the package, and then save it to the appropriate location. 6. To delete a package, select the check boxes for the packages that you want to delete and click Delete.

Deployment Manager 3.1.x Use Deployment Manager to configure and run continuous integration and delivery (CI/CD) workflows for your Pega applications from within Pega Platform™. You can create a standardized deployment process so that you can deploy predictable, high-quality releases without using third-party tools. With Deployment Manager, you can fully automate your CI/CD workflows, including branch merging, application package generation, artifact management, and package promotion to different stages in the workflow. Deployment Manager 3.1.x is supported on Pega 7.4. You can download it for Pega Platform from the Deployment Manager Pega Exchange page. Each customer VPC on Pega Cloud has a dedicated orchestrator instance for Deployment Manager, so you do not need to install Deployment Manager to use it with your Pega Cloud application. For more information about the features in the latest version of Deployment Manager 3.1.x, see the following articles: Deployment Manager release notes Deployment Manager architecture and workflows Creating custom repositories for Deployment Manager Installing and configuring Deployment Manager 3.1.x Using Deployment Manager 3.1.x

Installing and configuring Deployment Manager 3.1.x Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks and allow you to quickly deploy high-quality software to production. Each customer VPC on Pega Cloud has a dedicated orchestrator instance for Deployment Manager, so you do not need to install Deployment Manager to use it with your Pega Cloud application. This document describes the features for the latest version of Deployment Manager 3.1.x. See the following topics for more information about installing and configuring Deployment Manager: Step 1: Installing Deployment Manager on premises Step 2: Configuring systems in the pipeline Step 3: Configuring systems for branch-based development (optional) Step 4: Configuring additional settings

For information on using Deployment Manager, see Using Deployment Manager 3.1.x.

Step 1: Installing Deployment Manager on premises If you are using Deployment Manager on Pega Platform™, complete the following steps to install it. 1. Install Pega 7.4 on all systems in the CI/CD pipeline. 2. Browse to the Deployment Manager Pega Exchange page, and then download DeploymentManager03.0x.0x.zip for your version of Deployment Manager to your local disk on each system. 3. Use the Import wizard to import files into the appropriate systems. For more information about the Import wizard, see Importing a file by using the Import wizard.

1. On the orchestration server, import the following files: PegaDevOpsFoundation_03.01.0x.zip PegaDeploymentManager_03.01.0x.zip 2. On the development, QA, staging, and production systems, import the PegaDevOpsFoundation_03.01.0x.zip file. 3. Optional: If you are using a distributed development, on the remote development system, import the PegaDevOpsFoundation_03.01.0x.zip file.

Step 2: Configuring systems in the pipeline You must complete the following steps to set up a pipeline for all supported CI/CD workflows. If you are using branches, you must configure additional settings after you perform the required steps. 1. Step 2a: Configuring the orchestration server 2. Step 2b: Configuring candidate systems 3. Step 2c: Creating repositories on the orchestration server and candidate systems

Step 2a: Configuring the orchestration server The orchestration server is the system on which release managers configure and manage CI/CD pipelines. 1. Create an application that the release manager uses for creating, managing, and running pipelines, by using the New Application wizard. For more information, see Creating an application. 2. Add the PegaDeploymentManager application to your application stack. 1. In the Designer Studio header, click the name of your application, and then click Definition. 2. In the Built on application section, click Add application. 3. In the Name field, press the Down Arrow key and select PegaDeploymentManager. 4. In the Version field, press the Down Arrow key and select the version of Deployment Manager that you are using. 5. Click Save. Ensure that this application remains unlocked and has at least one unlocked ruleset. 3. Optional: If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd service packages. 1. Click Records > Integration-Resources > Service Package. 2. Click api. 3. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 4. Click Records > Integration-Resources > Service Package. 5. Click cicd. 6. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 4. Add the PegaRULES:RepositoryAdministrator, PegaRULES:PegaAPI, and PegaRULES:SecurityAdministrator roles to the Administrator access groups that were generated by the New Application wizard. 1. Click Designer Studio > Org & Security > Groups & Roles > Access Groups. 2. Click an access group to open it. 3. In the Available roles section, click Add role. 4. In the field that is displayed, press the Down Arrow key and select PegaRULES:RepositoryAdministrator. 5. Click Add role. 6. In the field that is displayed, press the Down Arrow key and select PegaRULES:PegaAPI. 7. Click Add role. 8. 
In the field that is displayed, press the Down Arrow key and select PegaRULES:SecurityAdministrator. 9. Save the Edit Access Group rule form. 5. If you are a Pega Cloud customer, specify that content is stored in the Pega database: 1. Click the name of your application, and then click Definition. 2. Click Integration & security. 3. In the Content management system section, click Store in Pega database. 4. Click Save. 6. Create an authentication profile on the orchestration server that references an operator ID whose access group points to the target application on each candidate system. For example, if the operator that is on the candidate systems has the credentials janedoe/rules, you must create an authentication profile on the orchestration server that is also configured with the janedoe/rules credentials. For more information about configuring authentication profiles, see Creating an authentication profile. If the operator IDs and passwords are different on the candidate systems, you must create multiple authentication profiles. 7. Configure the candidate systems in your pipeline. For more information, see Step 2b: Configuring candidate systems.

Step 2b: Configuring candidate systems Configure each system that is used for the development, QA, staging, and production stages in the pipeline. 1. Use the Import wizard to import your target application into each candidate system. For more information about the Import wizard, see Importing a file by using the Import wizard. Deployment Manager does not support first-time deployment, so you must import the application into each Pega Platform server the first time that you configure Deployment Manager. 2. On each candidate system, add the PegaDevOpsFoundation application to your application stack. 1. In the Designer Studio header, click the name of your application, and then click Definition. 2. In the Built on application section, click Add application. 3. In the Name field, press the Down Arrow key and select PegaDevOpsFoundation. 4. In the Version field, press the Down Arrow key and select the version of Deployment Manager that you are using. 5. Click Save. 3. On each candidate system, add the PegaRULES:RepositoryAdministrator, PegaRULES:PegaAPI, and PegaRULES:SecurityAdministrator roles to the operator that you use to access your application on each system. 1. Log in to each Pega Platform server with an operator whose default access group points to your application. This is the same operator that the authentication profile on the orchestration server uses to connect to this system. 2. Click your user profile and select Access group. 3. In the Available roles section, click Add role. 4. In the field that is displayed, press the Down Arrow key and select PegaRULES:RepositoryAdministrator. 5. Click Add role.

6. In the field that is displayed, press the Down Arrow key and select PegaRULES:PegaAPI. 7. Click Add role. 8. In the field that is displayed, press the Down Arrow key and select PegaRULES:SecurityAdministrator. 9. Click Save. 4. On each candidate system, create an authentication profile and configure it with the operator ID and password of the release manager operator. Use the operator ID and password of the administrative operator that was generated when you created a new application on the orchestration server. All candidate systems use this authentication profile to communicate with the orchestration server about the status of the tasks in the pipeline. 1. Click Create > Security > Authentication Profile. 2. In the name field, enter ReleaseManager. 3. Configure the authentication profile to use the release manager operator ID and password, and configure other information, as appropriate. For example, if the credentials of the release manager are rmanager/rules, configure each authentication profile on the candidate systems with the rmanager/rules credentials. For more information about creating authentication profiles, see Creating an authentication profile. 5. Optional: If your system is not configured for HTTPS, verify that TLS/SSL settings are not enabled on the api and cicd service packages. 1. Click Records > Integration-Resources > Service Package. 2. Click api. 3. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 4. Click Records > Integration-Resources > Service Package. 5. Click cicd. 6. On the Context tab, verify that the Require TLS/SSL for REST services in this package check box is cleared. 6. On the development system, create a product rule that defines the application package that will be moved through repositories in the pipeline. For more information, see Product rules: Completing the Create, Save As, or Specialization form. 
Do not include the operator, which is referenced in the authentication profile that you created on the orchestration system, in the product rule. 7. Configure repositories through which to move artifacts in your pipeline. For more information, see Step 2c: Creating repositories on the orchestration server and candidate systems.

Step 2c: Creating repositories on the orchestration server and candidate systems If you are using Deployment Manager on premises, create repositories on the orchestration server and all candidate systems to move your application between all the systems in the pipeline. You can use a supported repository type that is provided in Pega Platform, or you can create a custom repository type. If you are using Deployment Manager on Pega Cloud, default repositories are provided. If you want to use repositories other than the ones provided, you can create your own. For more information about creating a supported repository type, see Creating a repository connection. For more information about creating a custom repository type, see Creating custom repository types for Deployment Manager. When you create repositories, note the following information: The Pega repository type is not supported. Ensure that each repository has the same name on all systems. When you create JFrog Artifactory repositories, ensure that you create a Generic package type in JFrog Artifactory. Also, when you create the authentication profile for the repository in Pega Platform, you must select the Preemptive authentication check box. After you configure a pipeline, you can verify that the repository connects to the URL of the development and production repositories by clicking Test Connectivity on the Repository rule form.

Step 3: Configuring systems for branch-based development (optional) After you configure the orchestration server and all your candidate systems, configure additional settings so that you can use pipelines if you are using branches in a distributed or non-distributed branch-based environment. 1. Step 3a: Configuring the development system for branch-based development 2. Step 3b: Configuring the orchestration server for branch-based development

Step 3a: Configuring the development system for branch-based development You must configure the development system to create a pipeline in a branch-based environment. 1. On the development system (in a non-distributed environment) or the main development system (in a distributed environment), create a Dynamic System Setting to define the URL of the orchestration server, even if the orchestration server and the development system are the same system. 1. Click Create > Records > SysAdmin > Dynamic System Settings. 2. In the Owning Ruleset field, enter Pega-DevOps-Foundation. 3. In the Setting Purpose field, enter RMURL. 4. Click Create and open. 5. On the Settings tab, in the Value field, enter the URL of the orchestration server. Use this format: http://hostname:port/prweb/PRRestService. 6. Click Save. 2. Complete the following steps on either the development system (in a non-distributed environment) or the remote development system (in a distributed environment). 1. Use the New Application wizard to create a new development application that developers will log in to. This application allows development teams to maintain a list of development branches without modifying the definition of the target application. 2. Add the target application of the pipeline as a built-on application layer of the development application. 1. Log in to the application. 2. In the Designer Studio header, click the name of your application, and then click Definition. 3. In the Built-on application section, click Add application. 4. In the Name field, press the Down Arrow key and select the name of the target application. 5. In the Version field, press the Down Arrow key and select the target application version. 6. Click Save.

3. Lock the application rulesets to prevent developers from making changes to rules after branches have been merged. 1. In the Designer Studio header, click the name of your application, and then click Definition. 2. In the Application rulesets section, click the Open icon for each ruleset that you want to lock. 3. Click Lock and Save. 4. Optional: It is recommended that you merge branches by using the Merge Branch wizard. However, you can publish a branch to the remote development system to start a build. Publishing a branch when you have multiple pipelines per application is not supported. 1. In Designer Studio, enable Pega repository types. For more information, see Enabling the Pega repository type. 2. Create a new Pega repository type. For more information, see Creating a repository connection. Ensure that you do the following tasks: In the Host ID field, enter the URL of the development system. The default access group of the operator that is configured for the authentication profile of this repository should point to the pipeline application on the development system (in a nondistributed environment) or main development system (in a distributed environment). 3. Configure the orchestration server. For more information, see Step 3b: Configuring the orchestration server for branch-based development.
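Because a wrong RMURL value silently breaks the link between the development system and the orchestration server, it can help to sanity-check the URL against the documented http://hostname:port/prweb/PRRestService format before saving the Dynamic System Setting. A minimal sketch (not a Pega tool; the host name below is a placeholder):

```python
from urllib.parse import urlparse

def looks_like_rmurl(value: str) -> bool:
    """Checks that a candidate RMURL matches the documented shape:
    http(s)://hostname:port/prweb/PRRestService"""
    parsed = urlparse(value)
    return (parsed.scheme in ("http", "https")
            and bool(parsed.hostname)
            and parsed.path.rstrip("/").endswith("/prweb/PRRestService"))
```

For example, `looks_like_rmurl("http://orchestrator.example.com:8080/prweb/PRRestService")` passes, while a value that omits the scheme or the PRRestService path does not.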

Step 3b: Configuring the orchestration server for branch-based development Configure the orchestration server so that you can use pipelines in a branch-based environment. 1. Create a Dynamic System Setting to define the operator who can start queued builds. Queued builds are started by using the operator that you define in this Dynamic System Setting. 1. Click Create > Records > SysAdmin > Dynamic System Settings. 2. In the Owning Ruleset field, enter Pega-DevOps-DeploymentManager. 3. In the Setting Purpose field, enter ReleaseManager. 4. Click Create and open. 5. On the Settings tab, in the Value field, enter the operator ID whose default access group points to the release manager application. 6. Click Save. 2. Save the Pega-DeploymentManager agent to your ruleset and set its access group to the release manager application access group. 1. Click Designer Studio > System > Operations > Agent Management. 2. Filter the Name column with Pega-DeploymentManager. 3. Click the Security tab. 4. In the Access Group field, press the Down Arrow key and select the access group of the release manager application. 5. Click Save.

Step 4: Configuring additional settings As part of your pipeline, you can optionally send email notifications to users, configure Jenkins if you are using a Jenkins task, and upgrade to the latest version of Deployment Manager if you are using a previous version. See the following topics for more information: Configuring email notifications on the orchestration server Configuring Jenkins Upgrading to Deployment Manager 3.x.x on the orchestration server

Configuring email notifications on the orchestration server You can optionally configure email notifications on the orchestration server. For example, users can receive emails when pre-merge criteria are not met and the system cannot create a build. To configure the orchestration server to send emails, complete the following steps: 1. Use the Email wizard to configure an email account and listener by clicking Designer Studio > Integration > Email > Email Wizard. This email account sends notifications to users when events occur, for example, if there are merge conflicts. For detailed information, see the procedure for “Configuring an email account that receives email and creates or manages work” in Entering email information in the Email wizard. 2. From the What would you like to do? list, select Receive an email and create/manage a work object. 3. From the What is the class of your work type? list, select Pega-Pipeline-CD. 4. From the What is your starting flow name? list, select NewWork. 5. From the What is your organization? list, select the organization that is associated with the work item. 6. In the What Ruleset? field, select the ruleset that contains the generated email service rule. This ruleset applies to the work class. 7. In the What RuleSet Version? field, select the version of the ruleset for the generated email service rule. 8. Click Next to configure the email listener. 9. In the Email Account Name field, enter Pega-Pipeline-CD, which is the name of the email account that the listener references for incoming and outgoing email. 10. In the Email Listener Name field, enter the name of the email listener. Begin the name with a letter, and use only letters, numbers, the ampersand character (&), and hyphens. 11. In the Folder Name field, enter the name of the email folder that the listener monitors. Typically, this folder is INBOX. 12. In the Service Package field, enter the name of the service package to be deployed.
Begin the name with a letter, and use only letters, numbers, and hyphens to form an identifier. 13. In the Service Class field, enter the service class name. 14. In the Requestor User ID field, press the Down Arrow Key and select the operator ID of the release manager operator. 15. In the Requestor Password field, enter the password for the release manager operator. 16. In the Requestor User ID field, enter the operator ID that the email service uses when it runs. 17. In the Password field, enter the password for the operator ID. 18. Click Next to continue the wizard and configure the service package. For more information, see Configuring the service package in the Email wizard. 19. After you complete the wizard, enable the listener that you created in the Email Wizard. For more information, see Starting a listener. Email notifications Emails are also preconfigured with information about each notification type. For example, when a build failure occurs, the email that is sent provides information, such as the pipeline name and URL of the system on which the build failure occurred. Preconfigured emails are sent in the following scenarios: Build start – When a build starts, an email is sent to the release manager and, if you are using branches, to the operator who started a build. Build failure – If any step in the build process is unsuccessful, the build pauses. An email is sent to the release manager and, if you are
using branches, to the operator who started the branch merge. Build step completion – When a step in a build process is completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. Stage completion – When a stage in a build process is completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. Build completion – When a build is successfully completed, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. Manual tasks requiring approval – When a manual task requires email approval from a user, an email is sent to the user, who can approve or reject the task from the email. Stopped build – When a build is stopped, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. Pega unit testing failure – If a Pega unit test cannot successfully run on a step in the build, an email is sent to the release manager and, if you are using branches, to the operator who started the branch merge. Schema changes required – If you do not have the required schema privileges to deploy schema changes that an application package requires, an email is sent to the operator who started the build.

Configuring Jenkins If you are using a Jenkins task in your pipeline, configure Jenkins so that it can communicate with the orchestration server. 1. On the orchestration server, create an authentication profile that uses Jenkins credentials. 1. Click Create > Security > Authentication Profile. 2. Enter a name, and then click Create and open. 3. In the User name field, enter the user name of the Jenkins user. 4. Click Set password, enter the Jenkins password, and then click Submit. 5. Select the Preemptive authentication check box. 6. Click Save. 2. Because the Jenkins task does not support Cross-Site Request Forgery (CSRF) protection, disable it by completing the following steps: 1. In Jenkins, click Manage Jenkins. 2. Click Configure Global Security. 3. In the CSRF Protection section, clear the Prevent Cross Site Request Forgery exploits check box. 4. Click Save. 3. Install the Post build task plug-in. 4. Install the curl command on the Jenkins server. 5. Create a new freestyle project. 6. On the General tab, select the This project is parameterized check box. 7. Add the BuildID and CallBackURL parameters. 1. Click Add parameter, and then select String parameter. 2. In the String field, enter BuildID. 3. Click Add parameter, and then select String parameter. 4. In the String field, enter CallBackURL. 8. In the Build Triggers section, select the Trigger builds remotely check box. 9. In the Authentication Token field, select the token that you want to use when you start Jenkins jobs remotely. 10. In the Build Environment section, select the Use Secret text(s) or file(s) check box. 11. In the Bindings section, do the following actions: 1. Click Add, and then select User name and password (conjoined). 2. In the Variable field, enter RMCREDENTIALS. 3. In the Credentials field, click Specific credentials. 4. Click Add, and then select Jenkins. 5.
In the Add credentials dialog box, in the Username field, enter the operator ID of the release manager operator that is configured on the orchestration server. 6. In the Password field, enter the password. 7. Click Save. 12. In the Post-Build Actions section, do one of the following actions, depending on your operating system: If Jenkins is running on Microsoft Windows, add the following post-build tasks: 1. Click Add post-build action, and then select Post build task. 2. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example BUILD FAILURE. 3. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%". 4. Click Add another task. 5. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example BUILD SUCCESS. 6. In the Script field, enter curl --user %RMCREDENTIALS% -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"%JOB_NAME%\",\"buildNumber\":\"%BUILD_NUMBER%\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"%BuildID%\"}" "%CallBackURL%" 7. Click Save. If Jenkins is running on UNIX or Linux, add the following post-build tasks. Use the dollar sign ($) to access the environment variables instead of the percent sign (%). 1. Click Add post-build action, and then select Post build task. 2. In the Log text field, enter a unique string for the message that is displayed in the build console output when a build fails, for example BUILD FAILURE. 3. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"FAIL\",\"pyID\":\"$BuildID\"}" "$CallBackURL" 4. Click Add another task. 5.
In the Log text field, enter a unique string for the message that is displayed in the build console output when a build is successful, for example BUILD SUCCESS. 6. In the Script field, enter curl --user $RMCREDENTIALS -H "Content-Type: application/json" -X POST --data "{\"jobName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"pyStatusValue\":\"SUCCESS\",\"pyID\":\"$BuildID\"}" "$CallBackURL" 7. Click Save.
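For reference, the JSON body that both post-build curl commands send to the CallBackURL can be sketched as follows (the helper function and sample values are illustrative; the field names jobName, buildNumber, pyStatusValue, and pyID match the curl commands above):

```python
import json

def callback_payload(job_name: str, build_number: str,
                     status: str, build_id: str) -> str:
    """Builds the JSON body that the post-build curl command posts
    back to the orchestration server's CallBackURL."""
    return json.dumps({
        "jobName": job_name,          # Jenkins $JOB_NAME
        "buildNumber": build_number,  # Jenkins $BUILD_NUMBER
        "pyStatusValue": status,      # "SUCCESS" or "FAIL"
        "pyID": build_id,             # the BuildID parameter passed by Deployment Manager
    })
```

For example, `callback_payload("my-jenkins-job", "42", "SUCCESS", "B-1001")` produces the same structure that the SUCCESS post-build task sends with curl.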

Upgrading to Deployment Manager 3.1.x on the orchestration server If you are using an earlier version of Deployment Manager, upgrade to Deployment Manager 3.1.x by running the pxUpdatePipeline activity in the Data-Pipeline-Configuration class on the orchestration server. 1. In Designer Studio, search for pxUpdatePipeline, and then click the activity in the dialog box that displays the results. 2. Click Actions > Run. 3. In the dialog box that appears, click Run.

Using Deployment Manager 3.1.x Use Deployment Manager to create continuous integration and delivery (CI/CD) pipelines, which automate tasks and allow you to quickly deploy high-quality software to production. On the orchestration server, release managers use the DevOps landing page to configure CI/CD pipelines for their Pega® Platform applications. The landing page displays all the running and queued application builds, branches that are to be merged, and reports that provide information about your DevOps environment such as key performance indicators (KPIs). This document describes the features for the latest version of Deployment Manager 3.1.x. See the following topics for more information about using Deployment Manager to configure and use CI/CD pipelines: Adding an application pipeline Modifying stages and tasks in your pipeline Modifying application and environment details Manually starting a build Starting a build in a branch-based environment Starting a build in a distributed, branch-based environment Completing or rejecting a manual step in a build Schema changes in application packages Pausing a build Performing actions on a build with errors Viewing branch status Viewing build logs Viewing build reports Viewing reports for all builds Deleting an application pipeline Viewing, downloading and deleting application packages in repositories

Adding an application pipeline

When you add a pipeline, you specify both pre-merge and post-merge criteria. For example, you can specify that a branch must be peer-reviewed before it can be merged, and that Pega unit tests are run after a branch is merged and is in the QA stage of the pipeline.

You can create multiple pipelines for one version of an application. For example, you can use multiple pipelines in the following scenarios:

- You want to deploy a build to production separately from the rest of the pipeline. You can then create a pipeline that has only a production stage, or development and production stages.
- You want to use parallel development and hotfix life cycles for your application.

To add a pipeline, perform the following steps:

1. Click Deployment Manager in the Designer Studio footer.
2. Click Add application pipeline.
3. Optional: Specify tasks that must be completed before a branch can be merged in the pipeline.
   1. Click Add task.
   2. From the Type list, select Pega, and then specify the task that you want to perform.
      - To specify that a branch must meet a compliance percentage before it can be merged:
        1. From the Task list, select Check for guardrails.
        2. In the Weighted Compliance Score field, enter the minimum required compliance percentage.
        3. Click Submit.
      - To specify that a branch must be reviewed before it can be merged:
        1. From the Task list, select Branch review.
        2. Click Submit.
   3. Optional: To start a build when a branch is merged, select the Trigger build on merge check box. One of the following results occurs:
      - If no build is running in the pipeline, and a branch is successfully merged, the build is started by the operator who is logged in to the orchestration server.
      - If a build is running, and a branch is successfully merged, the build is queued for processing. The build will be started by using the operator ID that you defined in this dynamic system setting.
4. Optional: Clear a check box for a build life cycle stage to skip it.
5. Optional: Clear the Production ready check box if you do not want to generate an application package, which is sent to the production repository. You cannot clear this check box if you are using a production stage in the life cycle.
6. Optional: In the build life cycle stages, specify the tasks to be performed during each stage of the pipeline.
   1. Click Add task.
   2. From the Type list, select Pega, and then specify the task that you want to perform.
      - To run all Pega unit tests in the application, from the Task list, select Pega unit testing. For more information about creating Pega unit tests, see Creating PegaUnit test cases.
      - To run a Jenkins job that you have configured, do the following actions:
        1. From the Task list, select Jenkins.
        2. In the Job name field, enter the name of the Jenkins job (which is the name of the Jenkins build) that you want to run.
        3. In the Token field, enter the Jenkins authentication token.
        4. In the Parameters field, enter parameters, if any, to send to the Jenkins job.
      - To add a manual step that a user must perform in the pipeline, do the following tasks:
        1. From the Task list, select Manual.
        2. In the Job name field, enter text that describes the action that you want the user to take.
        3. In the Assigned to field, press the Down Arrow key and select the operator ID to assign the task to.
7. Click Review pipeline. The system generates tasks, which you cannot delete, that the pipeline always performs, for example, for deploying the application to each stage in the pipeline.
8. Click Next.
9. Optional: If you added a Jenkins step, specify Jenkins server information in the Add application dialog box, in the Jenkins server section.
   1. In the URL field, enter the URL of the Jenkins server.
   2. In the Authentication profile field, press the Down Arrow key and select the authentication profile on the orchestration server that specifies the Jenkins credentials to use for Jenkins jobs.
10. In the Environments section, specify the URLs of the development and candidate systems that are in your pipeline, and also specify merge targets.
    1. Specify development system information:
       1. In the Development field, enter the URL of the development system.
       2. In the Authentication profile field, press the Down Arrow key and select the authentication profile to use to communicate from the orchestration server to the development system.
    2. Specify the URLs of the candidate systems and the authentication profiles that the orchestration server uses to communicate with candidate systems. Select the authentication profile that you configured in step 4 in Configuring candidate systems in Installing and configuring Deployment Manager 03.01.01. Fields are displayed only for the pipeline stages that you selected in the build life cycle on the previous page.
11. Specify options for merging branches into the base application.
    1. Do one of the following actions:
       - To merge branches into the highest existing ruleset in the application, click the Highest existing ruleset radio button.
       - To merge branches into a new ruleset, click the New ruleset radio button.
    2. Optional: In the Password field, enter the password that locks the rulesets.
12. Select the development and production repositories to which you want to submit application packages. Complete one of the following tasks:
    - If you are using Deployment Manager on premises, complete the following tasks:
      1. In the Application repository section, in the Dev repository field, press the Down Arrow key and select the repository that connects to a candidate system from the development system. The archived product rule that contains the application in your pipeline is sent from the development system to the candidate system to which this repository connects.
      2. In the Production repository field, press the Down Arrow key and select the production repository. The archived product rule that contains the application is sent from a candidate system to the production system to which this repository connects.
    - If you are using Deployment Manager on Pega Cloud and want to use repositories other than the default, complete the following tasks:
      1. In the Artifact repository section, click the Yes radio button.
      2. In the Dev repository field, press the Down Arrow key and select the development repository.
      3. In the Production repository field, press the Down Arrow key and select the production repository.
      4. Click Save.
      5. Click Next.
13. Specify the application and the application contents that you want to build in your pipeline by completing the following steps:
    1. In the Pipeline field, enter a name for the pipeline.
    2. In the Application field, press the Down Arrow key and select your application.
    3. In the Version field, press the Down Arrow key and select the application version.
    4. In the Product rule field, press the Down Arrow key and select the product rule that defines the contents of the application.
    5. In the Version field, press the Down Arrow key and select the product rule version.
    6. Optional: Add dependent applications. For more information, see Product rules: Listing product dependencies for Pega-supplied applications.
    7. Click Add.
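The choices that the wizard collects in the steps above can be thought of as one pipeline definition. The dictionary below is a hypothetical illustration of such a definition (the field names are ours, not Deployment Manager's data model), together with a small sketch of how the pre-merge tasks gate a branch:

```python
# Hypothetical illustration of the settings gathered by the pipeline wizard.
# Field names are illustrative only; Deployment Manager stores these in its
# own rule forms, not in this format.
pipeline_config = {
    "pipeline": "MyAppPipeline",
    "application": {"name": "MyApp", "version": "01.01.01",
                    "product_rule": "MyAppProduct", "product_version": "01.01.01"},
    "pre_merge_tasks": [
        {"task": "Check for guardrails", "weighted_compliance_score": 97},
        {"task": "Branch review"},
    ],
    "trigger_build_on_merge": True,
    "stages": {
        "QA": [{"task": "Pega unit testing"}],
        "Staging": [{"task": "Manual", "job_name": "Sign off release",
                     "assigned_to": "release.manager"}],
        "Production": [],
    },
    "merge": {"target": "Highest existing ruleset", "lock_password": None},
    "repositories": {"dev": "DevRepo", "production": "ProdRepo"},
}

def pre_merge_gate(compliance_score, branch_reviewed, config):
    """Check a branch against the configured pre-merge tasks."""
    for task in config["pre_merge_tasks"]:
        if (task["task"] == "Check for guardrails"
                and compliance_score < task["weighted_compliance_score"]):
            return False  # below the minimum weighted compliance score
        if task["task"] == "Branch review" and not branch_reviewed:
            return False  # branch review task not completed
    return True
```

For example, a branch with a 98% compliance score and a completed review passes the gate, while a 90% score blocks the merge.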

Modifying stages and tasks in your pipeline

You can add and remove tasks in the stages of your pipeline if no builds are running. You can also add and skip pipeline stages. However, if you add a stage that you did not originally configure, you cannot configure details for it.

1. Click Deployment Manager in the Designer Studio footer.
2. Click the pipeline that you want to modify.
3. Click Actions > Edit pipeline.
4. Optional: Add and remove tasks in the stages of your pipeline.
5. Optional: Add or skip stages in your pipeline.
6. Click Review pipeline.

For detailed information about modifying your pipeline, see Adding an application pipeline. You can also modify application and environment details, such as the product rule to use and the URLs of the systems in your pipeline. See Modifying application and environment details for more information.

Modifying application and environment details

You can modify application details when there are no builds running on a pipeline.

1. Click Deployment Manager in the Designer Studio footer.
2. Click the pipeline that you want to modify.
3. Click Actions > Settings.
4. To modify the product rule and version that define the content of your application, do the following tasks:
   1. Click the Edit icon in the Application Details section.
   2. Optional: Specify the product rule version, and add or remove dependent applications.
   3. Click Save.
5. To modify environment details, do the following tasks:
   1. Click the Edit icon in the Environment Details section.
   2. Optional: Specify information such as the URLs of your pipeline systems and the authentication profiles to apply to each system.
   3. Click Save.

For detailed information about modifying your pipeline, see Adding an application pipeline.

Manually starting a build

Start a build manually if you are not using branches and are working directly in rulesets. You can also start a build manually if you do not want builds to start automatically when branches are merged; in that case, also clear the Trigger build on merge check box in the pipeline configuration.

1. Click Deployment Manager in the Designer Studio footer.
2. Click the pipeline for which you want to start a build.
3. Click Start build.
4. Start a new build or deploy an existing application by completing one of the following actions:
   - To start a build and deploy a new application package, do the following steps:
     1. Click the Generate new artifact radio button.
     2. In the Build name field, enter the name of the build.
     3. Click Deploy.
   - To deploy an application package that is in a cloud repository, do the following steps:
     1. Click the Deploy an existing artifact radio button.
     2. In the Build name field, enter the name of the build.
     3. In the Select a repository field, press the Down Arrow key and select the repository.
     4. In the Select an artifact field, press the Down Arrow key and select the application package.
     5. Click Deploy.

Starting a build in a branch-based environment

In non-distributed, branch-based environments, you can immediately start a build by submitting a branch into a pipeline in the Merge Branches wizard. For more information, see Submitting a branch into a pipeline.

Starting a build in a distributed branch-based environment

If you are using Deployment Manager in a distributed, branch-based environment and are using multiple pipelines per application, first export the branch to the remote development system, and then merge it.

1. On the development system, package the branch. For more information, see Packaging a branch.
2. On the remote development system, import the branch by using the Import wizard. For more information, see Importing a file by using the Import wizard.
3. On the remote development system, start a build by using the Merge Branches wizard. For more information, see Submitting a branch into a pipeline.

If you are using one pipeline per application, you can publish a branch to start the merge. For more information, see Publishing a branch to a repository.

Completing or rejecting a manual step in a build

If a manual step is configured on a build, the build pauses when it reaches the step, and you can either complete or reject it. For example, if a user was assigned a task and completed it, you can complete the task to continue the build. Deployment Manager also sends you an email when there is a manual step in the pipeline. You can complete or reject a step either within the pipeline or through email.

Deployment Manager also generates a manual step if there are schema changes in the application package that the release manager must apply. For more information, see Schema changes in application packages.

To complete or reject a manual step within the pipeline, do the following steps:

1. Click Deployment Manager in the Designer Studio footer.
2. Click a pipeline.
3. Right-click the manual step and select one of the following options:
   - Complete task: Resolve the task so that the build continues through the pipeline.
   - Reject task: Reject the task so that the build does not proceed.

To complete or reject a manual step from within an email, click either Accept or Reject.

Schema changes in application packages

If an application package that is to be deployed on candidate systems contains schema changes, the Pega Platform orchestration server checks the candidate system to verify that you have the required privileges to deploy the schema changes. One of the following results occurs:

- If you have the appropriate privileges, schema changes are automatically applied to the candidate system, the application package is deployed to the candidate system, and the pipeline continues.
- If you do not have the appropriate privileges, Deployment Manager generates an SQL file that lists the schema changes and sends it to your email address. It also creates a manual step, pausing the pipeline, so that you can apply the schema changes. After you complete the step, the pipeline continues. For more information about completing a step, see Completing or rejecting a manual step.

You can also configure settings to automatically deploy schema changes so that you do not have to manually apply them if you do not have privileges to do so. For more information, see Configuring settings to automatically deploy schema changes.
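The decision that the orchestration server makes when it finds schema changes can be sketched as follows. The function and its return values are illustrative only, not part of Deployment Manager's actual API:

```python
def handle_schema_changes(has_schema_changes, has_required_privileges):
    """Sketch of the schema-change handling described above.

    Returns the action taken for a package headed to a candidate system.
    """
    if not has_schema_changes:
        return "deploy"                   # nothing to apply; pipeline continues
    if has_required_privileges:
        return "apply-schema-and-deploy"  # changes applied automatically
    # Without privileges: email an SQL file listing the changes and pause
    # the pipeline with a manual step until the changes are applied.
    return "email-sql-and-pause"
```
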

Configuring settings to automatically deploy schema changes

You can configure settings to automatically deploy schema changes that are in an application package that is to be deployed on candidate systems. Configure these settings so that you do not have to apply schema changes manually if you do not have the privileges to deploy them.

1. On the orchestration server, in Pega Platform, set the AutoDBSchemaChanges dynamic system setting to true to enable schema changes at the system level.
   1. In Designer Studio, search for AutoDBSchemaChanges.
   2. On the Settings tab, in the Value field, enter true.
   3. Click Save.
2. Add the SchemaImport privilege to your access role to enable schema changes at the user level. For more information, see Specifying privileges for an Access of Role to Object rule.

These settings are applied sequentially. If the AutoDBSchemaChanges dynamic system setting is set to false, you cannot deploy schema changes, even if you have the SchemaImport privilege.
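The sequential check described in the last paragraph reduces to a simple predicate: the system-level setting is evaluated first, and the user-level privilege matters only when that setting is true. A minimal sketch, with illustrative names:

```python
def can_auto_deploy_schema(auto_db_schema_changes, has_schema_import_privilege):
    """True only when the AutoDBSchemaChanges dynamic system setting is true
    AND the operator's access role grants the SchemaImport privilege."""
    if not auto_db_schema_changes:
        # The system-level switch wins: the privilege alone is not enough.
        return False
    return has_schema_import_privilege
```
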

Pausing a build

When you pause a build, the pipeline completes the task that it is running and stops the build at the next step. To pause a build, click the Pause button.

Performing actions on a build with errors

If a build has errors, the pipeline stops processing it. You can do one of the following actions:

- Ignore the current step and run the next step by clicking the Start button.
- Restart the build at the current step, after fixing the errors, by clicking the Start button.
- Roll back to an earlier build by clicking the Roll back build button.

Viewing branch status

You can view the status of all the branches that are in your pipeline. For example, you can see whether a branch was merged in a build and when it was merged.

1. Click Deployment Manager in the Designer Studio footer.
2. Click a pipeline.
3. Click Actions > View branches.

Viewing build logs

View logs for a build to see the completion status of operations, for example, when a build is moved to a new stage. You can change the logging level to control which events are displayed in the log. For example, you can change the logging level of your builds from INFO to DEBUG for troubleshooting purposes. For more information, see Logging Level Settings tool.

1. Click Deployment Manager in the Designer Studio footer.
2. Click a pipeline.
3. Click the Gear icon for the build for which you want to view the log file.
4. Click View log.

Viewing build reports

Build reports provide information about a specific build. You can view information such as the number of tasks that you configured on a build that have been completed and when each task started and ended.

1. Click Deployment Manager in the Designer Studio footer.
2. Click a pipeline.
3. Click the Gear icon for the build for which you want to view the build report.
4. Click View report.

Viewing reports for all builds

Reports provide a variety of information about all the builds in your pipeline. You can view the following key performance indicators (KPIs):

- Deployment Success: Percentage of deployments that are successfully deployed to production.
- Deployment Frequency: Frequency of new deployments to production.
- Deployment Speed: Average time taken for a build to reach production from when it was started.
- Build frequency: Frequency at which new builds are started.
- Failure rate: Average number of failures per build.

To view reports, do the following tasks:

1. Click Deployment Manager in the Designer Studio footer.
2. Click a pipeline.
3. Click Actions > View reports.
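The KPIs above can be computed from a history of build records. The sketch below assumes a simple list of build dictionaries (our own format, not Deployment Manager's data model), and shows the arithmetic behind three of the metrics:

```python
from datetime import datetime

# Hypothetical build history: when each build started, when (if ever) it
# reached production, and how many failures it hit along the way.
builds = [
    {"started": datetime(2018, 5, 1, 9, 0),
     "reached_production": datetime(2018, 5, 1, 11, 0), "failures": 0},
    {"started": datetime(2018, 5, 3, 9, 0),
     "reached_production": None, "failures": 2},
    {"started": datetime(2018, 5, 7, 9, 0),
     "reached_production": datetime(2018, 5, 7, 10, 0), "failures": 1},
]

deployed = [b for b in builds if b["reached_production"]]

# Deployment Success: percentage of builds that reached production.
deployment_success = 100.0 * len(deployed) / len(builds)

# Deployment Speed: average time from build start to production, in hours.
deployment_speed = sum(
    (b["reached_production"] - b["started"]).total_seconds() / 3600
    for b in deployed) / len(deployed)

# Failure rate: average number of failures per build.
failure_rate = sum(b["failures"] for b in builds) / len(builds)
```

With the sample data, two of three builds reached production (about 66.7% deployment success), taking 1.5 hours on average, with one failure per build on average.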

Deleting an application pipeline

When you delete a pipeline, its associated application packages are not removed from the repositories that the pipeline is configured to use.

1. In the Designer Studio footer, click Deployment Manager.
2. Click the Delete icon for the pipeline that you want to delete.
3. Click Submit.

Viewing, downloading, and deleting application packages in repositories

You can view, download, and delete application packages in repositories that are on the orchestration server. If you are using Deployment Manager on Pega Cloud, application packages that you have deployed to cloud repositories are stored on Pega Cloud. To manage your cloud storage space, you can download and permanently delete the packages.

1. In the Designer Studio footer, click Deployment Manager.
2. Click the pipeline for which you want to download or delete packages.
3. Click either Development Repository or Production Repository.
4. Click Actions > Browse artifacts.
5. To download an application package, click the package, and then save it to the appropriate location.
6. To delete packages, select the check boxes for the packages that you want to delete, and then click Delete.

Development workflow in the DevOps pipeline

Follow these best practices to develop or improve your application with DevOps in a shared development environment. The specific practices depend on whether you have a single development team or multiple development teams.

Single development team

Single teams typically work on a single development server and collaborate on the production application. To practice continuous integration, use a team application layer, branches, and release toggles.

1. Build a team application layer on top of the main production application. The team application layer contains branches, tests, and other development rulesets that are not intended to go into production. For more information, see Using multiple built-on applications.
2. Create a branch of your production ruleset in the team application. For more information, see Adding branches to your application.
3. Perform all development work in the branch.
4. Optional: Use release toggles to disable features that are not ready for general use. Using toggles allows you to merge branch content frequently even if some content is not final. For more information, see Release toggles.
5. Create formal review tasks for other members of the development team to review your content. For more information, see Creating a branch review.
6. Use the branch developer tools to review the content and quality of your branch. For more information, see Reviewing branches.
7. Lock the branch. For more information, see Locking a branch.
8. Frequently merge the branch from the team application layer to the production rulesets. For more information, see Merging branches.
9. Start the continuous delivery pipeline for your application. For more information, see DevOps release pipeline overview.

Multiple development teams

If you have multiple teams working on the same application, each team should have a separate development server. A central Pega® server acts as a system of record (SOR). The central SOR allows teams to integrate features into the application in a controlled manner and avoid unexpected conflicts between teams working in the same rulesets. To practice continuous integration, use a team application layer, branches, and release toggles, and use the Pega repository as the SOR.

1. Build a team application layer on top of the main production application. The team application layer contains branches, tests, and other development rulesets that are not intended to go into production. For more information, see Using multiple built-on applications.
2. Create a branch of your production ruleset in the team application. For more information, see Adding branches to your application.
3. Perform all development work in the branch.
4. Optional: Use release toggles to disable features that are not ready for general use. Using toggles allows you to merge branch content frequently even if some content is not final. For more information, see Release toggles.
5. Create formal review tasks for other members of the development team to review your content. For more information, see Creating a branch review.
6. Use the branch developer tools to review the content and quality of your branch. For more information, see Reviewing branches.
7. Lock the branch. For more information, see Locking a branch.
8. Push the branch to the Pega repository SOR. For more information about pushing branches to repositories, including the Pega SOR, see Pushing a branch to a repository.
9. Merge the branch into the Pega SOR. For more information, see Merging branches.
10. Rebase the rules on your development system to get the latest versions of rules from the Pega SOR. For more information, see Rebasing rules.
11. Start the continuous delivery pipeline for your application. For more information, see DevOps release pipeline overview.
