
Skimming is the process of collecting the highest version of each rule in a ruleset and saving those rules into a new, higher ruleset version. As a result, it applies mainly to rule-resolved rules. In addition, there are exceptions to what gets skimmed based on the availability of the rule as well as the type of skim being performed. That leads us to the two types of skims: Major and Minor. This corresponds to the ruleset version format (Major-Minor-Patch).
During a minor skim, rules are rolled into the higher minor version; during a major skim, rules are rolled into the higher major version. Skimming is triggered from the Designer Studio by clicking System > Refactor > RuleSets. Among the other ruleset utilities, locate and click the Skim a RuleSet link.
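Conceptually, a skim is a fold over the ruleset: pick the highest version of each rule, then copy those into the next major or minor version. The following Python sketch is purely illustrative — the rule names, version tuples, and version-bump logic are assumptions, not Pega engine code.

```python
# Illustrative sketch of a skim (NOT Pega engine code): collect the highest
# version of every rule and target a new, higher ruleset version.

def skim(rules, kind):
    """rules: dict mapping rule name -> list of (major, minor, patch) versions.
    Returns ({rule_name: source_version_copied}, new_target_version)."""
    target = None
    skimmed = {}
    for name, versions in rules.items():
        highest = max(versions)              # highest existing version of this rule
        skimmed[name] = highest
        target = highest if target is None else max(target, highest)
    major, minor, _ = target
    if kind == "major":                      # major skim -> next major version
        new_version = (major + 1, 1, 1)
    else:                                    # minor skim -> next minor version
        new_version = (major, minor + 1, 1)
    return skimmed, new_version

# Hypothetical ruleset contents:
rules = {"CalculatePrice": [(1, 2, 3), (1, 3, 1)],
         "ValidateOrder":  [(1, 1, 5)]}
copies, version = skim(rules, "minor")
```

In this sketch a minor skim of the two rules above copies versions (1, 3, 1) and (1, 1, 5) into a new 01-04-01 version.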

The "Skim a RuleSet" dialog offers choices from which we can select whether to perform a major or minor skim.

During skimming, the availability field also plays a key role. The following table summarizes which rules
get moved during a major and minor skim.

[Table omitted: rule availability (for example, Not Available) versus whether the rule is copied during a Major Skim or a Minor Skim.]

Skimming does not delete any rules.
Skimming copies but does not update or delete rules in the source
versions. After skimming, use the refactoring tool to delete other ruleset versions. It is also a good idea to
move only the skimmed version to production.

Skimming does not validate the copied rules, nor compile any Java. For rule types that produce compiled Java, compilation occurs when the rule is first assembled and executed.
Preparing for skimming:
1. Make sure there are no checked-out rules in the ruleset versions being skimmed.
2. Lock the ruleset versions.
3. Run the Revalidate and Save tool before the skimming operation. This is accessed by clicking System > Release > Upgrade > Validate.
Same-case parallelism is when multiple assignments, associated with the same case, are created, and each assignment exists within a "child" or "sub" process that is different from the "parent" process.

When a Split-For-Each shape is used, the same sub-process is specified for every iteration. The following options exist for when the parent flow is allowed to continue based on sub-process completion:
All
Any
Some
Iterate
With all four options it is possible to specify a When rule to decide whether or not to create the sub-process as the Page List or Group is iterated. The "All" and "Any" rejoin options are straightforward. The "Iterate" rejoin option offers an "Exit Iteration when" condition, whereas the "Some" option provides both "Exit iteration on when" and "Exit iteration on count" rejoin options. The "Iterate" option also differs in that sub-processes are not created in parallel. Instead, sub-processes are created sequentially until either the "Exit Iteration when" condition returns "true" or the last sub-process has completed.
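The sequential behavior of the "Iterate" rejoin option can be sketched as follows. This is an illustrative Python model, not Pega code; the page values, subprocess function, and exit condition are all invented.

```python
# Illustrative sketch of "Iterate" rejoin semantics: sub-processes run one
# at a time (never in parallel) until the exit condition returns true or
# the list is exhausted.

def iterate_rejoin(pages, run_subprocess, exit_when):
    completed = []
    for page in pages:
        completed.append(run_subprocess(page))
        if exit_when(page):              # "Exit Iteration when" condition
            break
    return completed

results = iterate_rejoin(
    pages=[10, 55, 20, 99],
    run_subprocess=lambda p: p * 2,
    exit_when=lambda p: p > 50,          # stop after the first page over 50
)
```

Here the third and fourth pages are never processed because the exit condition became true on the second iteration.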

When a Split-Join shape is used, each parallel Assignment is created within a sub-process that can be, and typically is, different from each of its siblings. Rejoin options again determine when the parent flow is allowed to continue based on sub-process completion.
Note that when using a Split-Join, the number of sub-processes that are created is decided at design time.

With Split-For-Each, however, the number of sub-processes that are created is based on the size of the specified Page List or Page Group.

A third type of business parallel processing is called a "Spin-off". A Spin-off flow is configured by checking the "Spin off" checkbox within the Subprocess shape. A Spin-off flow differs from the Split-For-Each and Split-Join flows in that the parent flow is not obligated to wait for the shape to complete.
Capabilities only available when using subcases
Dependency Management: A subcase can be configured to launch based on its parent's state as well as the states of one or more sibling cases. No such capability exists with same-case parallelism.
Data Propagation: The amount of data made available to a child case can be restricted. However, that propagated data can become stale since it is a copy. Within same-case parallelism, data propagation does not exist.
Ad-hoc Processing: A subcase can be used to perform ad-hoc processing at run-time. There is no concept of ad-hoc processing with regard to same-case parallelism.
Case Designer-mediated Specialization: Cases can be circumstanced using the Case Designer. Though it is possible to use circumstanced flows during same-case parallelism, the capability to circumstance flows is not supported by the Case Designer.
Class Specialization: Being a work type, subcases can utilize class specialization, unlike flows used in same-case parallelism.

Advantages of using subcases over same-case parallelism
Security: Subcases offer more control over security. The need may arise for certain work to only be allowed to be performed and viewed by persons with different or greater privileges than the originator. In rare situations the need may also exist for the individuals who are performing a certain type of work to not be allowed to know who originated it. This type of security is more difficult to control with same-case parallelism. Note that work can be "pushed" to different persons using either approach provided those persons are known to possess the requisite roles and privileges to perform the work. Also, with either approach, spun-off work can be routed to a Workbasket that enforces that each requestor has the proper Access Role.
Parallelism / Locking Strategy: A spun-off subcase can be made immune from its parent's locking strategy, for example, by overriding DetermineLockString. With either approach, Optimistic Locking can be used.
Reporting Granularity: When requirements exist to measure/monitor the efficiency/quality of work completion at a fine-grained level, yet greater than a single Assignment, the subcase approach is superior.

Async Comm: A spun-off subcase that is immune from its parent's locking strategy can call the Queue-For-Agent to invoke an Activity that invokes a Connector. A Standard Agent Batch Requestor can then attempt and re-attempt the connection in parallel to the parent case with no concern whether the parent case is locked. With the same-case approach, the Standard Agent must wait for the case lock to be released.

Advantages of using same-case parallelism over subcases
Attachment View-By-Child Capability: A parent case can view all child case attachments. However, extra logic is required to avoid duplicating Data-WorkAttach- instances should a child case need to see a parent case attachment. With same-case parallelism, every child process can view every attachment.
Reporting Simplicity: Because data is captured within the same case when using same-case parallelism, no need exists for a report to join to the subcase when reporting at the parent case level. Alternatively the subcase would need to update its parent to facilitate reporting. To some extent, the Case Designer-mediated Calculation mechanism can be used to reduce this complexity.
Policy Override: It is more complex to manage "Suspend work" when multiple cases are involved hence same-case has an advantage in this respect.
Process Simplicity: When the actions required from different users take very little time to complete, and temporary locking is a non-issue, OOTB solutions such as PartyMajorApproval can be used that are much simpler than implementing the same functionality using subcases.
Typically, we use the standard flow action pyCreateAdhocCase as a local or connector action to
create ad hoc cases.

When the action is submitted, the standard Simple Case flow (pySimpleCaseWorkFlow) instantiates
an ad hoc case called Simple Case of class group Work-Cover-SimpleCase.

The Ad Hoc Case Dashboard action references the section pyAdhocCaseActions, which we can extend.

The Add Tasks icon launches a local flow action pyCreateAdHocTasks in which users add tasks to
the ad hoc case in the pyAdHocProcessesSequential section (also extensible).

Each item is processed in sequence, starting from the task on the top row. The first deadline defaults
to one business day from when the task was entered. The second task's default deadline is one day
from the first task's default, and so on. The tasks are processed in the list order regardless of the
deadline dates the user enters.
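The cascading default deadlines described above can be sketched as below. This is a hypothetical illustration: Pega computes the first default as one business day from task entry, while this sketch uses plain calendar days for simplicity.

```python
from datetime import datetime, timedelta

# Sketch of cascading ad hoc task deadlines: the first task defaults to one
# day after entry, and each later task to one day after the previous default.
# (Pega uses business days; plain calendar days are used here for brevity.)

def default_deadlines(entered_at, task_count):
    deadlines = []
    due = entered_at
    for _ in range(task_count):
        due = due + timedelta(days=1)    # one day after the previous default
        deadlines.append(due)
    return deadlines

entered = datetime(2024, 1, 1, 9, 0)
dls = default_deadlines(entered, 3)
```

For a task entered on January 1, the three defaults land on January 2, 3, and 4 — regardless of what deadline dates the user later enters.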
A task is processed as a Complete Task assignment in the standard Complete Task flow (Data-AdHocTask.WorkTask). The More Tasks Split-For-Each shape iterates across the Tasks page until all tasks on the grid are completed.

The Ad Hoc Case Dashboard assignment remains open until the user manually resolves the ad hoc case.

Users can add top-level ad hoc cases in Pega Pulse posts by clicking Actions and selecting the Create a Task option. The option launches the pyCreateAdhocCaseFromPulse flow action. When submitted, a link to the case displays on the post.

To create a case type from an ad hoc case, the following conditions must be met:
Users must have the pyCaseInstitutionalize privilege (included in the standard pyPega-ProcessEngine:CaseDesigner role).
The current application must contain an unlocked ruleset version.
Here are some examples of updates that might cause problem flows:

Removing an Assignment shape for which there are open assignments results in orphaned assignments.
Replacing an Assignment shape with a new one with the same name may cause a problem, since flow processing relies on an internal name for each shape.
Removing or replacing other wait points in the flow, such as Subprocess and Split-For-Each shapes, may cause problems since their shape IDs are referenced in active subflows.

It is important to note that flow processing relies on parent flow information contained in assignments. The
information includes:

pxTaskName — the shape ID of the assignment shape to which it is linked
pxTaskLabel — the developer-entered text label of the assignment shape.
pyInterestPageClass — the class of the flow rule
pyFlowType — the name of the flow rule

The pzInsKey of the flow rule, which uniquely identifies the rule, is not stored on the object.

Changing or removing the related shapes or flows will likely cause a problem.

Approach 1: Revert the user's ruleset to the original, lower versions
To allow users to process existing assignments, add a new access group that points to the old
application. Then add the access group to the operator ID so that the operator can switch to the
application from the user portal.

Advantage: This is the only sure approach when changes between versions go beyond just the flow rules.
Drawback: There may be unintended consequences, where desirable fixes in the higher ruleset version aren't executed because the user's ruleset list is too low to include them.

Approach 2: Process existing assignments in parallel with the new flow
This approach keeps the placeholder shapes (Assignment, Wait, Subprocess, Split-For-Each, and so on)
that we are changing or deleting in the new flow. Reconfigure the new flow so that new cases never reach
the old shapes, but existing assignments still follow the original path.

Advantage: All cases use the same rule names across multiple versions.
Drawbacks: This approach may not be feasible given configuration changes. In addition, it may result in
cluttered Process Modeler diagrams.

Approach 3: Add tickets to control processing of existing assignments
In this approach, tickets are used in the newly modified flows to control where processing of each type of old assignment resumes.
Run a bulk processing job that finds all the outdated assignments in the system. For each assignment,
bulk processing should call Assign-.OpenAndLockWork, and then call Work-.SetTicket on the work page.

Advantage: Leaves the flow rules clean.
Drawbacks: It might be impractical if the number of assignments is large, or if there is no moment when
the background processing is guaranteed to acquire the necessary locks.

As we reconfigure and test our flows, identify and manage problem flows on the Flow Errors landing page by going to Designer Studio > Process & Rules > Processes > Flow Errors. This report lists flow errors that are routed to our worklist or work group in our current application by a getProblemFlowOperator activity. Each row identifies one flow problem. Rows may reflect a common condition or unrelated conditions from multiple applications.
Use the following features to fix problem flows:
Resume Flow — resumes flow execution beginning at the step after the step that paused.
Retry Last Step — resumes flow execution, but begins by re-executing the step that paused.
Restart Flow — starts the flow at the initial step.
Delete Orphan Assignments — deletes assignments for which the work item cannot be found.
Remember: Always test updated flow rules with existing work objects, not only newly created ones.

When an operator completes an assignment and a problem arises with the flow, the primary flow execution is paused and a standard problem flow "takes over" for service by an administrator who determines how the flow is resolved.
Pega 7 provides two standard problem flows: FlowProblem for general process configuration issues as described previously, and pzStageProblems for stage configuration issues.
Important: As a best practice, override the default workbasket or problem operator settings in the getProblemFlowOperator routing activity in our application to meet our requirements.

Problems can also arise from stage configuration changes, such as when a stage or a step within a stage is removed or relocated within the Stages & Processes diagram. When an assignment is unable to process due to a stage-related issue, the system starts the standard pzStageProblems flow. The form displays an error message and the problem flow assignment. To resolve, the problem operator selects the Actions menu and either cancels the assignment or advances it to another stage.

As a best practice, do not remove stages when updating your designs. Consider keeping the stage and its steps as they are. Use a Skip stage when conditions in the old stage's Stage Configuration dialog prevent new assignments from reaching the stage.
The "Allow Locking" checkbox on a class group definition determines whether locking is enabled for every class that belongs to the class group as well as the "work pool" class itself. This is the only way to enable locking for cases.
For any concrete class that does not belong to a class group, its own "Allow Locking" setting determines whether the system locks open instances of the class when a lock is requested. By default, "Allow Locking" is not checked for these classes.

If we create a Data Type first and do not create records for it, the underlying data class is configured by default and locking is not allowed. However, if we define a key property for the type under the Records tab, the system automatically reconfigures the data class to include the key property added and to allow locking.

Pega only issues locks on instances initially saved and committed to the database. So, prior to requesting a lock, make sure the object is not new but has been saved and committed.
The last condition is that the requestor's access role must convey the privilege needed to perform the operation for which the lock is being requested.
Once all these configurations are properly defined, the system is able to issue a lock for an object when requested. Locks are commonly requested from activities using one of three methods: Obj-Open, Obj-Open-By-Handle, and Obj-Refresh-and-Lock.
The first two methods must have their "Lock" parameter checked in order for the system to issue the lock.

Beside these activity methods, Pega also provides a standard activity Work-.WorkLock to be used to request a lock on a work item.
Pega implements locks as instances of the System-Locks class and persists them in the pr_sys_locks database table.

A lock is exclusive to one Pega Thread and operates system-wide in a multi-node system.
When a requestor obtains a lock on an object through one Pega Thread and attempts to access the same object through a different Pega Thread, the system presents the "Release Lock" button for the second Thread.

The requestor must click on this button to release the lock from the previous Thread before another lock can be issued for the new Thread.
Once issued, a lock is held until it is released. And a lock can be released in different ways.
A commit operation typically releases the lock automatically if the method used to acquire the lock specified releasing it on commit; the "ReleaseOnCommit" checkbox of the method must be checked. This field should always be enabled when opening an object with a lock, unless there is a special requirement to keep the lock even after commit.

A lock is also released by the system when the requestor who owns the lock explicitly logs out but not when the requestor terminates the session by closing the Window.
The system also automatically expires locks after a preconfigured timeout period. By default, the Data-Admin-System data instance sets the lock timeout to 30 minutes, which can be modified.

An expired lock, also called a "soft" lock, remains held by the requestor until the requestor releases it or until the requestor session ends. However, once the lock is soft, it can be acquired by another requestor who requests it.
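The hard/soft lock behavior described above can be modeled as follows. This is a conceptual Python sketch, not the engine implementation; the data structures are invented, and the timeout is expressed in seconds rather than the default 30 minutes.

```python
# Conceptual model of Pega lock expiry: after the timeout a lock becomes
# "soft" -- still held by its owner, but another requestor may take it over.

class Lock:
    def __init__(self, owner, acquired_at, timeout):
        self.owner, self.acquired_at, self.timeout = owner, acquired_at, timeout

    def is_soft(self, now):
        return now - self.acquired_at > self.timeout

def try_acquire(locks, key, requestor, now, timeout=1800):
    existing = locks.get(key)
    if existing is None or existing.owner == requestor or existing.is_soft(now):
        locks[key] = Lock(requestor, now, timeout)   # new lock, or soft-lock takeover
        return True
    return False                                     # hard lock held by someone else

locks = {}
first = try_acquire(locks, "CASE-1", "alice", now=0, timeout=60)
blocked = try_acquire(locks, "CASE-1", "bob", now=30)    # alice's lock still hard
stolen = try_acquire(locks, "CASE-1", "bob", now=120)    # lock went soft at t=60
```

Bob's first attempt fails because the lock is still hard; his second succeeds because the lock has expired into a soft lock.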

A requestor can only release locks held by its own session. In v6.3 and beyond, lock information is held in the memory of each node, rather than in the database, for improved performance. However, even in a multinode system, a requestor can force the release of locks held by any session with the same Operator ID and same user name (pyUserName) through the PublicAPI method LockManager.unlock(StringMap, boolean) which communicates through the system pulse across all nodes.
Pega also provides the "Page-Unlock" activity method to release a lock on an object.

Pega builds the lock string from a defined list of properties. For a case not associated with a class group, the class name concatenated with the properties listed in the "Keys" area of the "General" tab is used to build the lock string.

In some rare situations when a non-standard lock is needed for the class, the list of properties to use for the lock string is provided on the "Locking" tab of the class rule.

If the instance to be locked is within the scope of a class group, the system uses the standard activity Work-.DetermineLockString to determine the lock string or lock handle.

As defined by this activity, the lock string is either the pzInsKey property value of the object or its cover's pzInsKey property value.
The logic which defines the value for the lock string relies upon the setting "Do not lock the parent case" for the case type in Case Designer. It is a best practice to use this default implementation of DetermineLockString activity.
For example, if we have a work item with a cover item and it is locked, all other work items covered by the same cover item are automatically locked since they all share the same lock string which is the pzInsKey property value of their shared cover.

In fact, if the cover itself does not have a cover, then its lock string is simply its own pzInsKey value. Remember, this is the same value used as the lock string for the covered item. So locking the covered item automatically locks the cover itself, since the system cannot acquire the same lock string again as long as the first lock is properly held.
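The default cover-based lock-string logic can be sketched like this. The case structure, key values, and the do_not_lock_parent flag are illustrative stand-ins for the actual DetermineLockString implementation.

```python
# Sketch of default DetermineLockString behavior: a covered case locks on its
# cover's pzInsKey, so cover and covered items contend for the same lock.

def lock_string(case):
    cover = case.get("cover")
    if cover is not None and not case.get("do_not_lock_parent", False):
        return cover["pzInsKey"]         # lock on the parent/cover case
    return case["pzInsKey"]              # otherwise lock on the case itself

# Hypothetical cover with two covered cases:
cover = {"pzInsKey": "SAE-WORK C-100"}
child_a = {"pzInsKey": "SAE-WORK C-101", "cover": cover}
child_b = {"pzInsKey": "SAE-WORK C-102", "cover": cover}
```

Both children and the cover itself resolve to the same lock string, so locking any one of them effectively locks them all; setting the "Do not lock the parent case" flag breaks that bond.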
This behavior is quite often used in Case Management where a case may be defined with one or multiple subcases. In the parent-child relationship, it may be critical in some circumstances to prevent any update to the parent case while its subcase is being updated.
However, other circumstances may require that the locking bond between parent and child cases be broken in some way not covered by case type locking configuration. In such a case, we need to specialize the DetermineLockString activity. We specialize it for a particular class in a relevant ruleset and replace the "PropertiesValue" field for pxLockHandle with whatever we deem appropriate for our application.
In a general sense, polymorphism makes the application more flexible and easier to change. Polymorphism often leads to more reuse, which in turn leads to faster development and fewer errors. Another benefit is that, when properly used, polymorphism can improve the readability of your rules.

A reference property is a type of property that acts like a pointer to another property or page. Another way to think about it is as an alias for a property.

To make a property a reference property we need to go to the Advanced tab and simply click the checkbox.
Reference properties are most commonly used to link related pages within a work object. They can be used to link other top level pages but this requires special care as the developer is responsible for making sure the page is available on the clipboard when the reference property is referenced.

At runtime, using the Property-Ref activity method the PrimaryDriver page can be linked to the applicable driver page in the DriversOnPolicy page list property. This allows us to establish a relationship without copying any data.
The Property-Ref method is pretty simple. On the left we list the reference property and on the right the page or property we wish to map to. We are able to refer to these properties using the same syntax as if this was a regular property.
Once linked the references are maintained until the link is explicitly broken or changed using the Property-Ref method. Property references cannot be circular.

In summary, reference properties are not commonly needed. However, in more advanced data structures that require the linking of various embedded entities they can be very powerful. They can help improve runtime performance and make design time easier as well by making property references simpler and more intuitive.
SOR Pattern
The System of Record (SOR) pattern describes a situation where our case needs to access data related to a case that is stored in another system or application. In most situations the case doesn't own the referenced object but rather may display data for context or use data in rules. For example, a loan application or a credit card dispute may need to access the customer's account information and history.
Another common trait of this pattern is that the case needs to have access to the most current data. For example, if the account holder's phone number changes we want that to be reflected when the data is accessed from the case. Usually, the data loaded comes from an external data source.
Let's have a look at how we can implement this pattern in a claims application for the customer account information. We start with the D_Customer data page, which represents our customer data. The data is loaded from a SOAP connector and the customer ID is passed in as a parameter to the data page.

Snapshot Pattern
In the snapshot pattern the case does not point to a data page but instead the data from the data page is copied into the case when the data is accessed. Once the data is copied into the case the data page is not accessed on subsequent property references.

This pattern is especially useful when the data needs to reflect a specific point in time. For example, an insurance claim may want a copy of the policy data as it exists when the claim is filed. If the policy changes AFTER the claim we DON'T want it updated. This is the opposite of the SOR pattern we discussed earlier.
However, if the parameters used by the data page change, the data is copied into the case again. In our claims application we configure the policy property to copy data from data page. Since the data is stored in the case it is persisted into the database with the case, making it available for reporting.
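A minimal sketch of the snapshot pattern, assuming a dictionary-backed source and a single policy-ID parameter (both invented for illustration):

```python
# Snapshot pattern sketch: data is COPIED into the case on first access and
# reused afterwards -- unless the lookup parameters change, in which case a
# fresh copy is taken. Field names are illustrative.

class Case:
    def __init__(self):
        self.policy = None
        self._snapshot_params = None

    def get_policy(self, source, policy_id):
        if self.policy is None or self._snapshot_params != policy_id:
            self.policy = dict(source[policy_id])   # copy, not a live reference
            self._snapshot_params = policy_id
        return self.policy

source = {"P-1": {"premium": 100}}
case = Case()
first = case.get_policy(source, "P-1")
source["P-1"]["premium"] = 200                       # source changes later...
second = case.get_policy(source, "P-1")              # ...snapshot is unchanged
```

Because the copy lives on the case, the later change in the source system does not affect it — the opposite of the SOR pattern.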

Reference Pattern
The next pattern we'll look at is one of the most common and simplest patterns. We call it the reference data pattern. In this pattern we need to reference a list of data that is usually not directly connected to a given case.
This could be a list of products, or countries, or perhaps a list of valid values for a drop down. In many cases the same list can be used by other cases or even other applications. In many cases the list is used to populate UI controls.
One permutation of this pattern is where the list needs to be filtered based on the selection of a previous value. For example a list of cities may be populated based on a selected country. Let's look at the configuration of using two data pages to implement these types of cascading selects now.
The first data page with the country list is loaded via a report definition from the local data storage. Since this list can be shared by all users we can make it a node level page to improve performance. Also, since this list is not based on any input it does not require any parameters.
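The two cascading data pages can be modeled as a parameterless lookup plus a parameterized one. The country/city data and the function names (which echo data page naming) are made up for illustration:

```python
# Cascading selects sketch: a shareable, parameterless country list, plus a
# city list parameterized by the selected country.

COUNTRIES = ["France", "Japan"]
CITIES = {"France": ["Paris", "Lyon"], "Japan": ["Tokyo", "Osaka"]}

def d_country_list():
    return COUNTRIES                      # no inputs -> shareable (node-level)

def d_city_list(country):
    return CITIES.get(country, [])        # filtered by the first selection

countries = d_country_list()
cities = d_city_list("Japan")
```

Selecting a different country simply re-invokes the second lookup with a new parameter value.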

Keyed Access Pattern
The keyed access pattern is not as common as the previous patterns, but when appropriately applied it can significantly improve an application's performance and maintainability.
The primary aspect of this pattern is that one data page can be utilized as both a list and a single page. All of the data is loaded into a single list data page during the initial load and then can subsequently be accessed as a single page via an auto-populating property.
This serves as an alternative to having two separate data pages, which makes management simpler and can also improve performance. This pattern can be useful when the entire dataset we are working with can be loaded in a single service call and stored efficiently. It is also useful in cases where users may need to frequently switch back and forth between pages in the list.
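A sketch of the keyed access pattern, with an invented loader and key field: the list is fetched once, then individual pages are served from an index rather than via separate calls.

```python
# Keyed access pattern sketch: one list data page is loaded in a single call,
# and single entries are then served from that list by key.

class KeyedDataPage:
    def __init__(self, load_all):
        self._rows = load_all()                      # single initial load
        self._by_key = {r["id"]: r for r in self._rows}

    def list(self):
        return self._rows                            # list access

    def get(self, key):
        return self._by_key[key]                     # keyed access, no new call

page = KeyedDataPage(lambda: [{"id": "A", "price": 5}, {"id": "B", "price": 9}])
```

Switching back and forth between items is cheap because every `get` is an index lookup on data already in memory.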

Preload a Data Page
One of the primary benefits of data pages is that they automatically manage the loading of data, taking that responsibility away from the consumer of the data page. Sometimes a data page takes time to load, which may negatively impact the customer experience. In such situations, we may want to proactively load the data before it is actually needed.
For example, when a customer contacts a customer service representative it is highly likely that the customer's account and activity information will be needed to properly service the customer which may take time to load and aggregate since it often resides in multiple external systems.
Rather than waiting to load the data until it is actually needed, we could load it while the representative takes a few moments to verify the customer's identity and determine the intention of the call. To accomplish this from a technical perspective, an explicit call is made to the necessary data page in an activity using the method Load-DataPage, which causes the data to load.

If at any point, we require the data to finish loading before proceeding, we can use the Connect-Wait method to force the system to wait for a desired period of time before proceeding or return a fail status if it does not complete in a timely manner.

Both the Load-DataPage and the Connect-Wait methods have a PoolID parameter which makes it possible to pair a Load-DataPage method with the Connect-Wait method by matching their PoolIDs. Before using these methods, be sure to understand the performance gain to ensure it outweighs the cost of loading these pages procedurally and thus, maybe sometimes unnecessarily.

Configure Error Handling for Data Pages
Data page errors are treated like any other top-level page errors. A message on a property stops flow processing if the property is visible on the client; a page message, on the other hand, does not stop flow processing.
If the data page is referenced to auto-populate a property, then both page and property messages propagate from the data page and block flow processing from moving forward.

Use the post-load processing activity on data pages to handle errors. The ConnectionProblems flow defined on the connector never gets invoked by a data page, because data pages catch all exceptions and add page messages so that error handling can be done in the post-load activity.

First check for technical errors at the data layer and handle them if possible so that the messages can be cleared. Leave unhandled errors on the page so they can be handled at the work layer. Remember to set a message on a case property visible on the client to block the flow if auto-populate is not used.
Both rules allow us to execute activities in response to events in the system. Both rules allow for monitoring property changes that are to be part of the tracked events. And both run on the application server. This is important as Declare Triggers are sometimes confused with Database triggers which run on the database. Declare OnChange rules are sometimes confused with OnChange JavaScript events which run on the browser.
Triggers and OnChange rules differ in some significant ways as well. Triggers are associated with persistence related events, for example, when objects are saved, deleted or committed. Triggers can execute their activities asynchronously, as well as track the previous values of properties. These features are all unique to triggers.
OnChange rules on the other hand, are fired purely based on changes to the clipboard. No persistence of the object is required. This makes them especially useful in pure business rules engine applications which often cannot rely on persistence events. Finally, OnChange rules help drive a unique BPM feature, Policy Overrides. Policy Overrides allow for the dynamic and declarative override of a flow, based on changes to the data on the clipboard. This is covered in more detail in a separate lesson.
Trigger and OnChange rules both help to solve some common business requirements. For example, one of the more common requirements is property auditing; where we need to track and sometimes take action when critical properties are changed. Or perhaps users need to be notified when a property goes over a certain threshold.
Another common use case is when integrating with systems of record. We can utilize triggers to synchronize data with an external system of record. In applications with complex calculations OnChange rules can be used to execute a calculation activity when values change.
Most of the use cases we just discussed can be implemented without these rules in a more procedural way. However there are some key benefits to using declarative rules in this manner. Since these rules are declarative they are executed by Pega 7 reducing the chance that a developer forgets to call them. This is particularly helpful in creating applications that are built for change as we can define the policy and let Pega 7 enforce them at an engine level. This leads to an application that is easier to maintain and debug.

Let's take a look at how a trigger rule can be used to track a specific property. This is also known as Field Level Audit Pattern and this can be created automatically using the Field Level auditing landing page (accessed by clicking Process Management > Work Management > Field Level auditing).

The Field Level Audit gadget creates a trigger and a data transform rule. The trigger rule, named pyTrackedSecurityChanges, is created in the appropriate class.

Now let's talk about all the configurations we need to create if we are going to create a trigger rule that performs other tasks than tracking properties. In the trigger rule, we have other choices for when the trigger rule gets executed.

Let's look at the rest of these choices:
Deleted — executes the trigger whenever an instance belonging to the Applies To class, or a descendant of that class, is deleted using Obj-Delete.
Committed Save — gets executed when the saves are committed to the database.
Committed Delete — gets executed when the deletes are committed to the database.
Saved and — executes when an applicable object is saved using Obj-Save AND one of the listed properties has been modified since the last save.
Note: Since Pega 7 normally defers the committing of saved and deleted objects, these two events can occur at different times in the process.
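The deferred-commit behavior described in the note can be sketched conceptually in Python (this is an illustrative model, not Pega code — the class and method names here are invented for the example): "Saved" and "Deleted" events fire at Obj-Save/Obj-Delete time, while "Committed Save" and "Committed Delete" fire later, when the deferred work is committed.

```python
# Conceptual model (not the Pega engine): Obj-Save/Obj-Delete fire their
# trigger events immediately, while the "Committed" variants are deferred
# until commit() runs, which can be much later in the process.
class UnitOfWork:
    def __init__(self):
        self.pending = []   # deferred events awaiting commit
        self.events = []    # trigger events observed, in order

    def obj_save(self, name):
        self.events.append(("Saved", name))            # fires immediately
        self.pending.append(("Committed Save", name))  # deferred

    def obj_delete(self, name):
        self.events.append(("Deleted", name))          # fires immediately
        self.pending.append(("Committed Delete", name))

    def commit(self):
        # Deferred events only fire now, at commit time.
        self.events.extend(self.pending)
        self.pending.clear()

uow = UnitOfWork()
uow.obj_save("MyCase")
uow.obj_delete("OldCase")
uow.commit()
print(uow.events)
```

Running the example shows the two "Committed" events arriving after both immediate events, which is exactly why the note warns that the events can occur at different times in the process.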

Activities called by a trigger rule should be of type 'Trigger'. This is set in the Security tab on the activity rule.
A trigger activity can be run immediately or in the background. When run in the background, the primary page is copied to a separate child requestor and the activity runs asynchronously. While this can be useful in specific situations, it is generally not advised, as troubleshooting asynchronously run activities can be challenging.

Triggers also allow us to specify a page context. This page context allows a trigger to run for each page in an embedded page list. For example, we can specify SelectedCourses and the appropriate class (SAE-HRServices-Data-Course). As shown, the trigger activity runs for each page in the line item list. Note that while the activity is run for each page in the list, the Applies To class of the activity is still expected to be that of the trigger rule (SAE-HRServices-Work) and NOT the page context. In practice, the use of a page context on triggers is rarely implemented.

Pega 7 creates a clipboard page named pyDeclarativeContext that is available during the life of the trigger activity. This page is of type Code-Pega-DeclarativeContext and has a value list of the changed properties. In some cases it may be useful to programmatically examine this page to see which properties caused the trigger to execute.
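The idea of examining the changed-property list can be sketched as follows (a simplified Python illustration, not the actual clipboard API — the function name and shapes are invented): the trigger activity receives the list of changed properties, analogous to the value list on pyDeclarativeContext, and acts only on the ones it cares about.

```python
# Illustrative sketch (not the real pyDeclarativeContext API): a trigger
# activity building audit entries only for the properties that actually
# caused it to fire.
def audit_changes(changed_properties, tracked):
    # changed_properties mimics the value list of changed property names;
    # tracked is the set of properties we audit.
    return [f"{p} changed" for p in changed_properties if p in tracked]

entries = audit_changes(["Salary", "Title", "Phone"], {"Salary", "Title"})
print(entries)  # ['Salary changed', 'Title changed']
```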

OnChange rules execute based on a property instead of an event that occurs on the database. In an OnChange rule we can add multiple properties and when multiple properties are listed a change to any one property causes the action logic to fire. To determine which property or properties changed we can examine the pyDeclarativeContext page as previously discussed. The conditions section allows us to define a when rule as well as the action.

There are two actions allowed, Calling an activity and Suspending Work, which is also known as Policy Overrides. Policy Overrides are a unique feature of OnChange rules and allow us to declaratively alter the processing of work. This lesson does not discuss Policy Overrides in detail.

If we select the Call activity action, we can specify an activity based on whether the when rule returns true or false. If no when rule is specified the "when true" activity runs. The Security tab of this activity is set to type OnChange.
OnChange rules, unlike triggers, execute an activity based on changes to the properties on the clipboard and not the database or persistence events. These changes are tracked using standard forward chaining logic. Activities of type OnChange do not fire other forward chaining declarative rules, such as expressions, during the activity. The forward chaining rules are executed after the OnChange rule completes. This avoids any infinite loops.
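The loop-avoidance behavior just described can be modeled with a small sketch (conceptual Python, not engine code — the class here is invented for illustration): writes made by the OnChange activity itself do not re-trigger the rule; chained processing resumes only after the activity completes.

```python
# Conceptual model of why forward chaining is suspended while an OnChange
# activity runs: if the activity's own writes re-fired the rule
# immediately, it could re-trigger itself forever.
class OnChangeEngine:
    def __init__(self, watched, handler):
        self.values = {}
        self.watched = watched      # properties that fire the OnChange rule
        self.handler = handler      # the OnChange activity
        self.in_handler = False
        self.handler_runs = 0

    def set(self, prop, value):
        self.values[prop] = value
        if prop in self.watched and not self.in_handler:
            self.in_handler = True
            self.handler_runs += 1
            self.handler(self)      # writes made here do NOT re-trigger
            self.in_handler = False

# The activity itself modifies a watched property -- safely.
engine = OnChangeEngine({"Status"}, lambda e: e.set("Status", "Reviewed"))
engine.set("Status", "Changed")
print(engine.values["Status"], engine.handler_runs)  # Reviewed 1
```

Without the `in_handler` guard, the handler's own `set` call would recurse indefinitely, which is the infinite loop the real engine avoids by deferring forward chaining.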
Like triggers, OnChange rules can specify a page context so that the rule applies to all elements in a list. However, unlike triggers, when using a page context the activity called is expected to be of the page context's class, not the Applies To class of the rule itself.
A Collection is a business rule that can procedurally execute a sequence of other rules. Collections are similar to business flows, though a collection orchestrates rules that execute one after another and does not present any UI for the user to take action. The Collection rule is an extremely powerful feature that can be used to easily track all referenced rules and rule executions. Collections also allow for grouping of "like" rule executions. Any time you need to group a series of decision rules resulting in a single outcome (e.g., "Approved" or "Rejected"), consider using a Collection rule.
Collections are invoked using the "Collect" activity method.
Collection Rule Components
The Rules tab contains two parts. The left side of the screen is where you tell the Collection which rules to run and in which order they are to run. It also allows you to give a description of what is happening on each step. In this example, the rules to execute include a Function rule, a Decision Tree and a Map Value.


The Pre/Post Actions tab allows you to perform an action before and after the rules in the collection run. These actions may simply be to record the start and completion of the Collection execution. A typical use of the "Before This Collection" section is to initialize a return value from the Collection.
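The before/rules/after shape of a collection can be sketched as a small executor (an illustrative Python model, not the Pega engine — the rule functions below are invented stand-ins for a decision tree or map value):

```python
# Sketch of a collection-style executor: a "before" action, an ordered
# sequence of rules, then an "after" action, all working on a shared
# page (a plain dict here).
def run_collection(page, before, rules, after):
    before(page)
    for rule in rules:
        rule(page)      # each rule reads/writes the shared page
    after(page)
    return page

def init_result(page):      # "Before This Collection": seed the return value
    page["Result"] = "Rejected"

def credit_check(page):     # stand-in for a decision rule in the sequence
    if page["Score"] >= 700:
        page["Result"] = "Approved"

def record_outcome(page):   # "After" action: record completion
    page["Log"] = f"Collection finished: {page['Result']}"

page = run_collection({"Score": 720}, init_result, [credit_check], record_outcome)
print(page["Result"])  # Approved
```

Initializing the return value in the "before" action, as shown, mirrors the typical use of the "Before This Collection" section described above.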

Why not just use an Activity rule to do these types of evaluation? Many of the same features are available in the Activity rule. However, the Collection rule has been designed for the rule engine processing and offers these benefits:
The Collection form itself makes it easy for the business to maintain, along with the decision and declarative rules that are part of the Collection
Function aliases can be used to make the form easier to read and define.
Response actions implement common patterns without activities.
Improved performance over old collection rules and flows for rule orchestration.
Collections can also call other collections. In a strictly rules engine based environment where there is no user interaction with the case, the ability to create a network of collection rules provides a means of providing robust processing logic, all of which can be maintained by the business owner.
Please note that flows and UI rules are excellent candidates for business users to work on during the initial development phase; however, once the application is in production these rules become more challenging for the business to maintain.

Delegated rules are internally saved in the same class where favorites are saved. Each time we delegate a rule, it creates or updates an instance of System-User-MyRules. If we look closer, we see that they are marked as either Individual or Access group.

Unlike the actual delegated rules, which are saved in production rulesets, these instances are not part of or associated with a ruleset. These instances must be manually added to the product file if we are moving them from one environment to another.
If the business user has access to Designer Studio, then they need to click the Favorites explorer to see the delegated rules. This explorer provides quick access to both the rules we have saved as personal favorites and the ones that are saved as access group favorites. Business users can click the rule name and the rule form opens up allowing them to modify the rule.

If users have access to a different portal, we can use the My Rules gadget instead. This gadget is accessed in the standard Case Manager portal by clicking Options > My Rules. From there users can expand the gadget and see the delegated rules. When they attempt to open a rule, a separate window opens.

Out of the box interfaces are just one option, but if we are working with highly specialized requirements we can use the MyRules instances to provide a custom interface. Since these instances are data instances we can easily use them programmatically to build custom interfaces for our business users.

Delegated users can work on these rules in various environments, and there is no one right way to handle rule delegation. It is important to consider various factors and find the approach best suited to the application and the users who are making the changes.
1. In Development: When we delegate rules in the development environment, the rules are changed and managed by the business, but the promotion of the rules follows the normal product delivery lifecycle. This approach has the least risk but provides the least agility. It is useful when the changes can wait and are not time bound. It is a safe approach; however, it adds complexity since it introduces a dependency on migrating rules before the changes can be seen, which defeats much of the purpose of delegation.
2. In Production: On the other end of the spectrum is managing rules directly in production. This approach has the most risk but provides the most agility. It is useful when there is a small set of volatile rules that can be managed with minimal risk. We can minimize the risk by using the check-in/check-out feature, which lets users test a rule before it is used by others and ensures some kind of approval process is built in. (We will learn more about this shortly.) This approach is used in most cases because of how quickly the changes can be seen. Remember, however, that this option requires planning and risk mitigation even though it provides the business with a lot of agility.
3. In a Separate Environment: A nice compromise is to set up a separate authoring environment. Here rules can be managed by the business and tested without any risk of affecting production. Once tested, the rules can be promoted into production on a separate cycle from standard development releases. Though this approach looks ideal, it may not be practical because we need to set up a separate environment just for delegated users. This approach can be made easier by setting up a cloud instance, removing much of the cost overhead.

It is highly recommended that we use a check-in approval process, especially when users are making changes directly in the production system.
The production ruleset (which is typically used in this case) should use the check-out option so that multiple users cannot update the same rule at the same time. Check-out also helps us implement an approval process.

The production ruleset must also have the Approval Required field enabled, which is set on the Versions tab.

After making these changes in the ruleset we need to:
1. Include Work-RuleCheckin as a workpool in the access group of the users.
2. Enable Allow rule checkout for each of those Operator IDs responsible for making changes.
3. Add the ruleset named CheckInCandidates to the access group of the operators who are approving the check-ins.
4. Make sure the operators who are approving the rules get an access role with the privilege
Pega 7 ships with a standard check-in process already implemented for us. We can review it and make changes. We can also customize the flow in an application ruleset different from the production ruleset to use a different process.
When discussing expressions, the key concept to understand is change tracking. Forward chaining indicates that the expression is computed when any of the source properties change. For example, given Total = Quantity * Unit Price, Total is calculated when Quantity or Unit Price changes. However, if you request Total before either of them has a value, the expression does not get calculated. In addition to the properties used in the expression, we can also identify additional properties in the Additional Dependencies field. The order in this array is not significant, and the expression is calculated when any of these properties change.
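The forward-chaining behavior can be sketched with a toy dependency engine (a conceptual Python illustration, not Pega internals — the class and method names are invented), mirroring the Total = Quantity * Unit Price example:

```python
# Toy forward-chaining sketch: when a source property changes, any
# declared expression that depends on it is recomputed -- but only once
# ALL of its sources have values, matching the behavior described above.
class Engine:
    def __init__(self):
        self.values = {}
        self.exprs = []  # (target, sources, fn)

    def declare(self, target, sources, fn):
        self.exprs.append((target, sources, fn))

    def set(self, prop, value):
        self.values[prop] = value
        for target, sources, fn in self.exprs:
            if prop in sources and all(s in self.values for s in sources):
                self.set(target, fn(self.values))  # cascades to dependents

e = Engine()
e.declare("Total", ["Quantity", "UnitPrice"],
          lambda v: v["Quantity"] * v["UnitPrice"])
e.set("Quantity", 3)      # Total NOT computed yet: UnitPrice has no value
e.set("UnitPrice", 2.5)   # now both sources exist, so Total is computed
print(e.values["Total"])  # 7.5
```

Note that setting Quantity alone does not compute Total, just as the text says the expression is not calculated before both sources have values.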

Backward chaining, as the name suggests, computes the expression based on the target property. This can be set in one of three ways:
1. When the target property referenced does not have a value
2. When the target property is not present in the clipboard
3. Whenever the target property is referenced
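The first option, computing the target only when it is referenced and has no value, is essentially lazy evaluation. A minimal Python sketch of the idea (illustrative only, not Pega code):

```python
# Backward-chaining sketch: the target is computed only when referenced,
# and only if it has no value yet (the first option in the list above).
def get(values, prop, exprs):
    if prop not in values and prop in exprs:
        sources, fn = exprs[prop]
        # Recursively resolve each source the same way before computing.
        values[prop] = fn({s: get(values, s, exprs) for s in sources})
    return values[prop]

exprs = {"Total": (["Quantity", "UnitPrice"],
                   lambda v: v["Quantity"] * v["UnitPrice"])}
values = {"Quantity": 4, "UnitPrice": 5.0}
print(get(values, "Total", exprs))  # 20.0
```

The recursion here is the "chaining": resolving the target pulls in whatever sources it needs, which may themselves be computed the same way.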

Obviously each of these has its own use, but the system generates a warning when we use "Whenever used". This option, however, can be used in some cases, such as when the expression involves values from properties on multiple pages, or when the property is not referenced in many places in the application.

Declare expressions can be invoked procedurally by using a collection rule. When creating a collection rule, we can include a declare expression along with other rules. If the declare expression is included in a collection rule, then the declare expression rule should use the option "Invoked procedurally". Pega versions prior to 7.1.6 do not have this option.
If the declare expression used in the collection rule is chained to other expressions, then we should use the option "When applied by a rule collection". This option exists in the product to support this use case and for backward compatibility.

In addition, we can call utility functions defined as part of the application, which makes declare expressions extremely powerful. When using functions, however, unexpected behavior can arise, especially with forward chaining.
With forward chaining, expressions determine which properties the system should watch for changes. The list of properties to watch is determined when the expression is saved, by reading the rule data and looking for property references. When a function dynamically refers to a property, or the property is referenced indirectly as a string rather than as a property reference, unexpected behavior can arise. In other words, if your property reference has quotes around it, there may be an issue.
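Why a quoted reference escapes tracking can be illustrated with a simplified dependency scan (a Python sketch of the general idea, not the actual save-time parser; `@getValue` below is a hypothetical function call used only for illustration):

```python
# Sketch of why quoted property names break forward chaining: the
# dependency list is built at save time by scanning the expression text
# for property *references*; a name hidden inside a string literal is
# invisible to that scan.
import re

def extract_dependencies(expression):
    # Strip string literals first, then collect .Property references.
    no_strings = re.sub(r'"[^"]*"', "", expression)
    return set(re.findall(r"\.(\w+)", no_strings))

print(sorted(extract_dependencies(".Quantity * .UnitPrice")))
# ['Quantity', 'UnitPrice'] -- both tracked

print(extract_dependencies('@getValue(".Discount")'))
# set() -- the quoted name is never tracked, so changes to it
# will not recompute the expression
```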
Let's look at some examples of how expressions can be written and the effect it has on change tracking.

In some cases it may be difficult to pass all the property references and values in a way that works with
forward chaining so we may need to use backward chaining.
This pattern is used to ultimately calculate or determine the value of a single decision; for example, the price of a quote or the acceptance of a submission. This property is the "goal" in goal seek. The pattern uses backward chaining expressions to determine which values are missing to determine the goal. This is the "seek" part of goal seek.
The goal seek pattern is useful when we need to seek values for one of these dependent properties. For example, assume we are calculating an expression that uses Discount; if Discount does not have a value, the total price does not get calculated. For each value that is not available, the system can prompt the user for it, or we can procedurally provide a value.
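The "seek" step can be sketched as a recursive walk backward from the goal, collecting the inputs that still need values (an illustrative Python model, not the Pega implementation; the property names are examples):

```python
# Goal-seek sketch: walk the dependency chain backward from the goal and
# collect the leaf properties that are still missing, so the user can be
# prompted for exactly those values.
def missing_inputs(goal, exprs, values):
    if goal in values:
        return []
    if goal not in exprs:
        return [goal]  # a leaf input the user must supply
    missing = []
    for source in exprs[goal][0]:
        missing.extend(missing_inputs(source, exprs, values))
    return missing

exprs = {"Total": (["Price", "Discount"],
                   lambda v: v["Price"] * (1 - v["Discount"]))}
print(missing_inputs("Total", exprs, {"Price": 100}))  # ['Discount']
```

With Price already known, only Discount is reported as missing, which matches the behavior described below where the user is prompted only for properties without values.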
Pega 7 provides two standard flow actions, VerifyProperty and VerifyPropertyWithListing, as examples of goal seek. We need to add the flow action to the flow, either as a connector action or on a control such as a button.
The system is adaptive enough to prompt users only for properties that do not have values; for example, it can ask them to enter a value for Discount if Discount has no value. We can configure the label that appears using the short description field.

After defining the properties and expressions, the next step is to include the standard flow action VerifyPropertyWithListing. For demonstration purposes, we created a sample flow with just this flow action, followed by another flow action to display the result.

The standard flow action VerifyPropertyWithListing has to be modified to change the goal seek property in both the pre- and post-actions.

When we use one of the standard flow actions for the goal seek pattern, the runtime screen presented to users displays the value entered in the short description field, so it makes sense to set the short description of all of the fields involved in the calculation. In most cases the requirements dictate whether we can use the standard flow action as-is or whether additional customization is needed.
In summary, explore what's under the covers of the goal seek pattern: looking at the flow action rule itself and the activity it uses helps to further understand the rules engine. Note that the flow action has a warning because it uses a deprecated rule type.

Keep goal seek in mind the next time you find yourself manually creating a screen that feels like a questionnaire or that duplicates logic already in trees and tables. Goal seek is simple to use, so experiment with it and see if it fits your application's needs.
The ServiceLevelEvents agent is defined as a Standard agent and by default runs every 30 seconds. No attempt is made to process a System-Queue-ServiceLevel queue item until the current DateTime is greater than or equal to the value of the queue item's pyMinimumDateTimeForProcessing property. Passing that test, no attempt is made to invoke the ServiceLevelEvents agent's ProcessEvent activity unless a lock can be obtained on the queue item's associated case. If the case happens to be locked, the pxErrorList of the queue item is set similar to the example below; pxLastExecutionDateTime is also set to the time when the lock was attempted.
<pxErrorList REPEATINGTYPE="PropertyList">
  <rowdata REPEATINGINDEX="1">Cannot obtain a lock on instance PA-FW-GADMFW-WORK C-361, as Requestor H6C79350BBEDA482ACCD28F1C4AD5F1F1 already has the lock</rowdata>
</pxErrorList>
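The gating logic just described can be summarized in a short sketch (a conceptual Python model, not agent code — the dict keys mirror the property names from the text, and times are simplified to integers):

```python
# Conceptual sketch of the agent's gating behavior: a queue item is
# processed only once its minimum processing time has passed AND a lock
# on its case can be obtained; otherwise the failure is recorded on the
# item for a later retry.
def try_process(item, locked_cases, now):
    if now < item["pyMinimumDateTimeForProcessing"]:
        return "not yet due"
    if item["case"] in locked_cases:
        # Mirror the pxErrorList / pxLastExecutionDateTime behavior.
        item["pxErrorList"] = [f"Cannot obtain a lock on instance {item['case']}"]
        item["pxLastExecutionDateTime"] = now
        return "retry later"
    return "process"

item = {"case": "C-361", "pyMinimumDateTimeForProcessing": 100}
print(try_process(item, locked_cases={"C-361"}, now=150))  # retry later
```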
A System-Queue-ServiceLevel instance has a pxEvent property, the values for which are limited to "Goal", "Deadline", and "Late". Whenever an assignment is created, the Assign-.AddAssign activity is called. If a ServiceLevel rule is configured for the assignment, AddAssign creates a ServiceLevel queue item with the value of pxEvent set to "Goal".
When the ProcessEvent activity is eventually called, a check is made to determine whether the current queue item should be permanently dequeued, in other words, whether the ServiceLevel has run its course. If not, a new queue item is constructed using values from the current queue item. The value of the new queue item's pxEvent property is set using the ternary expression shown below.
.pxEvent = Primary.pxEvent == "Goal" ? "Deadline" : "Late"
At the end of the ProcessEvent activity, the ExecuteSLA activity is called. ExecuteSLA is where the assignment's ServiceLevel rule, if any, is opened and processed. An assignment stores the name of its associated ServiceLevel rule in its pxServiceLevelName property. An examination of the System-Queue-ServiceLevel class shows a number of properties whose names begin with "pxGoal" and "pxDeadline", which is by design. The main purpose of the ExecuteSLA activity is to recompute the assignment's urgency as well as to execute the list of escalation activities, if any, associated with the type of event that has transpired.
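The Goal-to-Deadline-to-Late progression driven by the ternary expression above can be traced with a tiny sketch (illustrative Python, not agent code):

```python
# Sketch of the queue-item progression: each requeue advances pxEvent
# Goal -> Deadline -> Late, mirroring the ternary expression
# .pxEvent = Primary.pxEvent == "Goal" ? "Deadline" : "Late"
def next_event(current):
    return "Deadline" if current == "Goal" else "Late"

events = ["Goal"]
while events[-1] != "Late":
    events.append(next_event(events[-1]))
print(events)  # ['Goal', 'Deadline', 'Late']
```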

Work SLAs
The OverallSLA flow is a special flow that runs in parallel to any case that configures the ".pySLAName"
property when created. The Case Designer supports this value being set within the "Case Details" tab.
The value of "pySLAName" is the name of the ServiceLevel rule to be invoked by the single workbasket
assignment within the OverallSLA flow. The name of this workbasket is expected to be "default@" +
lowercase(org name). The value of the case's ".pySLAName" property is transferred to the OverallSLA
Flow workbasket assignment's ".pxServiceLevelName" property.
It is also possible to define ".pySLAGoal" and ".pySLADeadline" DateTime properties on each case. For these properties to take effect, the ServiceLevel rule represented by ".pySLAName" must be configured to use those properties as the Goal and Deadline times, respectively.
There may be some confusion about two of the options used to configure an SLA's "Assignment Ready"
value, namely "Dynamically defined on a Property" and "Timed delay", particularly the latter.
Setting the SLA start time in the future does not prevent the assignment from being completed or, in the
case of the overall SLA, prevent the case from being worked on. The effect of using these options can be
observed when assignments are processed back-to-back. Even though the user may have permission to
perform the second assignment, if the start time for SLA is in the future, a Confirm harness is displayed.
That user however, can still perform that assignment should they choose -- there is no penalty for
completing work early. The "Assignment Ready" SLA value is set into Assign-.pyActionTime.
The values for a case's pySLAGoalExecute and pySLADeadlineExecute DateTime properties are strictly
set by overall SLA workbasket assignment ServiceLevels. These values are used by various case-wide
"Quality" and "Processes" reports.
"Quality" and "Processes" reports report against case tables. The difference between the two is that
"Quality" reports report against resolved cases (see:
whereas "Processes" reports report against
unresolved cases (see:

As explained in Help, the System record defines the duration of a lock timeout, the default value being 30
minutes. If a case is idle for 30 minutes, someone or something else can open it and "steal" the lock.

That "something else" can be the ServiceLevelEvents agent. Help does say "However, even after a lock
is marked as soft, the lock holder retains the lock and can save and commit the updated instance. What
this means is that the lock holder retains the lock provided someone or something else has not "stolen"
the lock. Suppose the case is parked at an assignment with a ServiceLevel rule that defines an escalation
such as "Advance Flow" or "Transfer"?

The result of the escalation action could very well be that the case is no longer owned by the now-former lock owner.
Another concern with SLAs and locking is that the ServiceLevelEvents agent cannot act on a work object that is currently locked. Suppose a user forgets to close a case or log out of the application, and instead has either left the browser in its current state or closed it. Suppose also that this scenario takes place just prior to the Deadline time for that case, where the Deadline specifies one or more escalation actions.
The result would be a 30-minute delay before the ServiceLevelEvents agent can perform those escalation
actions. Whether a 30-minute delay is significant depends on the business use case. Suppose the goal
and deadline times are very short; a customer is waiting for a response.

One approach to dealing with this situation is to define a shorter locking timeout for time-critical cases
using the Case Designer "Detail" tab's Locking strategy screen.

Note, however, that the above setting affects every assignment within the case.
In extreme situations, it makes sense not to rely solely on a ServiceLevel rule directly associated with a sub-hour, time-critical case assignment. A possible alternative is to spin off a subcase prior to the time-critical assignment. Either case propagation could be used to communicate the values of the parent case's Goal and Deadline properties, or the child case could refer to those properties directly using "pyWorkCover". The child case's locking strategy would be defined such that it is not locked when the parent case is locked, for example by overriding the DetermineLockString activity.
If the parent case completes prior to its deadline, it could implement a strategy similar to the UpdateCoveredTasks Declare Trigger, which sets a ticket against open subcases. If, however, the subcase reaches the deadline first, it could send one or more notifications that the assignment is running late. Other escalation actions are possible but may run the risk of losing uncommitted, browser-entered data.
Pega 7 comes with several auto-generated controls that can be used as-is without any modifications. Most of these controls come with parameters that help in using them in varied scenarios.
There are broadly three different control modes as shown here in the control rule definition.
1. Editable/Read-Only - use to present values in both editable and read-only modes.
2. Read-Only - use to present values in read-only mode
3. Action - use when the user has to click for an action to occur

Let's take a look at the Options field for the Button, which is the most commonly used control. The button has a label, and a tooltip that appears when you hover over the button. We can also disable the button based on a condition, or use a privilege to show the button based on the user's role.
Buttons also offer styling options - the format can use one of the various formats defined in the skin rule. By default, three style formats are supported for buttons and additional formats can be defined in the skin rule.

The image source field in the button control can reference a binary file rule (if Image is selected), a property that holds the image path (if Property is selected), or an icon class (if Icon is selected).
The options vary by the action control that is selected. So for example, a Signature shows only a tooltip and can be configured to disable the control based on a condition or can conditionally be shown or hidden based on a privilege.

Let's now talk about Read-Only modes. There are two types of controls available in Read-Only mode: Text and Hidden. If Text is selected, it presents other formatting choices, including the option to choose a style format defined in the skin rule.

The type determines the format options that can be configured. For instance, in the case of Date, the system provides options to determine the format in which the value is presented. This is extremely powerful because we can use a single control to display dates in different formats by configuring the format in the field where the control is used.

If we are using a True/False property, we can customize the label of true and false as shown here. Or we can even use an image to indicate true or false.

Text Input
This is the most commonly used control in user screens, enabling users to enter values in a text box. It offers various choices, including selecting a style from the skin rule, specifying the size in parameters, as well as the minimum and maximum values that a user can enter.
The format area helps customize how the value is presented on the screen. The choices on the left apply in editable mode and the choices on the right apply in read-only mode. For illustration purposes, we have selected Number, which displays additional configuration options specific to numbers such as decimal places, rounding and so on.

Date is another commonly used input control. It presents two different ways users can pick dates: either from a calendar or from dropdown lists for the year, month and day.

There is a subcategory of editable controls that display a list of values for selection. These may take the form of a radio button, select box or autocomplete. List-based controls require the selection of the source of the list, and this is handled by a parameter defined in the control.
Radio button choices can appear either vertically or horizontally simply by configuring the Orientation parameter.
Again, the choices vary with the control type; in the case of the dropdown we see additional choices such as what to display as the option, placeholder text, and when to load the choices so that the screen can render more quickly. However, all controls provide similar formatting options wherein we can use the skin rule to format the presentation. All auto-generated controls come with standard formats and allow us to define custom styling formats as well.

Defining a New Control
Quite a few auto-generated controls come with the product and can be used as-is or customized using parameters and the skin rule. In some cases we might find that we are making the same set of changes to the parameters repeatedly. How do we address those changes? We create custom auto-generated controls. However, we need to make sure we create a new auto-generated control only when it really makes sense. We do not need to create a custom control for a style change, because we define the style in the skin rule and reference that format in the control.
Similarly, we do not need to create custom controls if the change is not significant. One example where we could create a custom control is when a similar set of actions is carried out repeatedly, say a custom submit button that performs a post value and then refreshes a section.
Earlier we learned the definition of the Text Input control. The Pega product development team has created additional controls that merely extend Text Input. For example, the Text Input control is saved as the Number control simply by setting the type to Number.

Similarly, the Currency control is defined as a new control just by selecting Currency in the Symbol field, which then displays additional fields for configuring the currency.

We can define a new auto-generated control by first selecting the control mode and then the UI element. After selecting the UI element, the options and formats change based on the UI element chosen.

Specialty components provide the ability for developers to build and integrate third-party components such as jQuery, Flex, Flash, and JavaScript into the UI rules of a Pega application.
Specialty components can receive data from, and send data back to, the data elements used in our Pega application. Pega publishes APIs that can be used to set and get values. Specialty components support JSON (JavaScript Object Notation) data exchange.
Specialty components support both single-level and embedded properties. If a page list is used as a parameter, then all the associated properties are also available in the specialty component.
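The JSON exchange of a page, including an embedded page list, can be illustrated with a small round trip (a hedged Python sketch — the property names here are invented examples, and this is not a real Pega API):

```python
# Sketch of the JSON data exchange a specialty component relies on:
# a clipboard-style page (a dict, including a page list) is serialized
# to JSON for the component, and values come back the same way.
import json

page = {
    "CourseName": "Onboarding",
    "SelectedCourses": [            # a page list: embedded pages
        {"Code": "C101", "Fee": 250},
        {"Code": "C205", "Fee": 400},
    ],
}
payload = json.dumps(page)          # sent to the component
roundtrip = json.loads(payload)     # values set back from the component
print(roundtrip["SelectedCourses"][0]["Code"])  # C101
```

The embedded page list survives the round trip intact, which reflects the point above that a page-list parameter makes all of its associated properties available to the component.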
A specialty component is saved in a section rule, which makes it reusable; it also supports rule specialization and rule resolution and behaves like any other section rule.
The specialty component approach is recommended since it offers the ability to leverage other UI rule capabilities. The section rule that is marked as a specialty component can be included in other auto-generated rules. Specialty components minimize the need to build custom UI on top of our application to support such features.
A portal rule belongs to the user interface category and, unlike other rules, it does not apply to any class. When defining a portal rule, we enter only the name and the ruleset to which it is associated. Portals primarily use two different UI rules: harness rules for the content and skin rules for styling. Portals can reference the same skin used in the application rule. It is important that we do not change the fields in the Configuration section despite them appearing as dropdowns.

Like portals, skin rules do not have an Applies To class, but we need to pick the Applies To class for the harness being used. In most cases it is recommended that we use Data-Portal as the class name, since all the sections we see in the Case Worker and Case Manager portals are defined in the Data-Portal class, and defining the harness in the same class lets us reuse those sections.
Customizing portals can be done in one of the following three ways:
1. Modify the sections that are used in Case Manager or Case Worker portal.
2. Modify the harness used in Case Manager or Case Worker portal to include new sections or screen layouts.
3. Create a new portal rule and a new harness rule. The harness can reference both new and existing sections.
The skin rule that is used in the portal rule must be customized to ensure that the portals are responsive when opened in a mobile device.

Harness rules used in portals are a little different in their construction from harnesses used in work forms, such as the perform and confirm harnesses. Portal harnesses use screen layouts to group sections. When defining a new harness, the first step is to pick a screen layout from the seven supported layouts. The screen layouts can be customized in the skin rule in terms of background color, width, alignment, and so on.

When constructing new portals, Live UI comes in handy to identify the sections that can be reused as-is. For instance, we can use pyRecents for displaying recent items instead of recreating a new section that does the same thing. Similarly, the header contains sections displaying the logo, a link for the operator menu, and a text box for searching items, all of which can be added as-is.

It is a very common requirement to embed reports in manager portals. We do not need any special sections for this. In the section, we add a chart control and then configure the control using its properties panel. The key things to change here are the chart type (which decides the type of chart to display) and the Type field in Data Source (this can be a data page or a report definition).

Menus are another type of control used mostly on sections referenced in portals. Menus display a list of items in a menu format, giving the user the option to navigate through them. The menu control references another UI rule type, the navigation rule, which is used to display a hierarchical relationship.

Navigation rules, unlike other UI rules, present a different interface. We can add an item and configure actions that perform a specific action when clicked. A simple menu just displays one item below another, with no hierarchical relationship.

Dynamic Container is an auto-generated way of presenting the worklist and cases at runtime. Dynamic Containers support presenting content in one of these three ways:
1. Single Document
2. Multiple Documents
3. Dynamic layouts
When creating a custom portal, the dynamic container is added in the section referenced in the center panel, because that is used as the work area by users. The dynamic container is part of the layout group in the designer canvas; after adding it we need to pick the rule it references. It can directly reference a section or a harness. The mode field is used to select Single or Multi Document.

What is the difference between Single and Multi Document? With Single Document, there is only one active document (for example, a case). When another case is created, it closes the existing document (removes the case from the clipboard) and opens the new case. Single Document is useful when users work on one case at a time or access the work list to open cases. It also keeps the application from loading too much data, so performance is better. To add a single document, we just add the dynamic container outside the existing layout in the section and then reference a section or harness.
Multi Document mode allows opening multiple documents at the same time. It is added the same way as a single document. The system allows a maximum of 16 simultaneous documents; the default is 8.

Both modes can reference a section or a harness for content. Typically we use sections when creating simpler layouts and harnesses when we want to build the whole screen. The default case worker portal uses the work list section, which displays the work list, while the default case manager portal uses a harness, which displays the whole dashboard.

The third way of adding dynamic containers is placing one directly on a layout. This is normally used when we are using it in one of the side panels or the header panel and we want to refresh a small part of it rather than the whole panel.
In some cases dynamic containers are added in a tabbed layout so that multiple documents are opened as new tabs.
We can also add the dynamic container inside a dynamic or a column layout. When we add a dynamic container directly inside a layout it disables the default view and also renders the dynamic container only in iFrames. We can select a section in the Purpose field and we can also use Multi Document mode.
How do we implement this in Pega? For sidebars (left and right), responsiveness must be set so that the columns do not appear but instead get rolled into a navigation menu when accessed on a mobile device. When the response breakpoint is reached (in this case, when the screen width shrinks to below 768 pixels), the sidebar rolls over to the header as an icon. We can customize this setting, changing both the width at which it rolls over and the icon used to access the pane.

A float can cause an element to be pushed to the left or right, allowing other elements to wrap around it. If
we want UI elements to, say, hug to the right, we would float right. That way regardless of the width of
the screen, the element always stays hugged to the right. While designing the sections that are used in
the header and footer, we need to make sure that we use floats so that the inner layouts automatically
align to the left, center and right respectively. As an example, setting floats ensures things such as the
logo image appearing in the leftmost corner and the logoff link appearing in the rightmost corner
irrespective of the device on which it's being accessed.

Configuring Responsiveness in Layouts
Dynamic and Column Layouts:
Responsiveness is configured in the skin rule by using a response breakpoint. In the layout configuration we pick the screen width at which the layout is formatted to a different layout. So, for instance, a two-column Inline becomes stacked at 480 pixels, which means any device at that screen width renders the layout in a stacked format.

When using column layouts, the column that renders primary fields appears at the top while the other
columns roll down below it. This can be done in the skin rule as shown in the screenshot.

Layout Groups:
Layout groups allow us to group content in different ways by using different presentation options such as tabs, accordions, menu items, or items stacked one below the other. The great thing about layout groups is that they can be set up to change from one presentation format to another based on the responsive breakpoints.

Configuring Responsiveness in Grids
Grids are one layout where we can configure responsiveness both in the skin and on the grid itself. We use the field's properties panel to set the Importance flag to 'Primary', 'Secondary', or 'Other'. The skin rule offers the ability to remove fields marked as Other at a specific breakpoint. When the grid is transformed into a list, the Primary field appears as the header while the Secondary fields appear below it.

Pega 7 supports accessing the mobile application in offline mode, which means the application can be rendered when the device is not connected to the internet. Offline mode is useful for presenting data and also for collecting data. After data is collected, the device needs to be synchronized online to complete the work.
For applications to work offline, two things need to be done.
1. When the device is online, data required for offline access must be cached in memory for later use.
2. Similarly, when the device is online, data collected while offline must be synchronized back to the server.
Offline access is enabled at the case level, so we need to use the Case type details screen to enable offline mode for a specific case.
Offline applications also require using optimistic locking. This can be done in the same screen.

Offline capabilities allow the following functionality:
Users can log in to and log out of the application
Create new cases that are offline enabled
View cases and harnesses
Update cases that are offline enabled
View Worklist and process Assignments
Some limited dynamic UI is supported

Applications require extensive testing to ensure that the requirements are met. We highly recommend
that applications get tested periodically. How do we test applications in mobile devices? There are quite a
few ways we can do this.
1. Use the mobile device: The best choice is to test on the actual device, because the other modes can simulate the experience only to some extent. We list the other choices since it is not always possible to test on all possible mobile devices.
2. Developer mode in Safari: After enabling the Develop menu in the Advanced preferences of the
browser, use the User Agent to choose the specific mobile device. This works well if we are
testing on Apple devices such as the iPad and iPhone.

3. Using the Android Emulator: We can simulate the Android browser by downloading the Android SDK and then extracting it to access the Android emulator. More information can be found in the Android SDK documentation.
4. Using Chrome User Agent: Launch Chrome with the desired user agent from the command prompt. In the command prompt enter something similar to the following: C:\Users\<username>\AppData\Local\Google\Chrome\Application\chrome.exe --user-agent="<useragent>", where <username> is the Windows user name and <useragent> is the string that contains the device, the build, and the platform version.
5. Remote Tracer: Using tracer, you can trace all requestors logged in to a node. This helps in tracing a user session logged on a mobile device.
6. Using other third-party tools: The following are a few free mobile site UI testing tools to help in testing mobile applications.
Mobile Phone Emulator: A popular mobile phone emulator, this tool allows you to test your site
across a large number of mobile devices.
MobiReady: a testing tool that evaluates how well optimized your site is for mobile devices, taking into account mobile best practices and industry standards. You'll get a score from 1-5 and a full site analysis.

Screenfly: Screenfly lets you view your website on a variety of devices. Just pop in your URL.
iPad Peek: As its name implies, this tool lets you see how your site appears on the iPad or iPhone.
The Pega 7 system by default uses the locales that are set in the environment settings of the client machine. These settings are configured during installation of the OS and can be changed if required. On a Windows machine, they can be modified using the Control Panel.

Making changes in the control panel impacts all programs running on the machine. The browser settings
are another place where we can override the locale. Updating the browser affects all websites opened
using that browser. For example, in the Internet Explorer browser, we can choose Internet Options and
set language preferences.

Overriding locale settings in Designer Studio
We can also override these locale settings for a specific user session in the Designer Studio by selecting
User Interface > Localization Tools > Locale Settings. The Locale Settings dialog provides various
options. Let's take a look at each of them. Here is an example of the dialog when we first open it.

We can click the Settings button to view the current setting. The values are referenced from the browser
and machine settings.

Let's use the Demo feature to see how the locale settings affect the presentation.
In the Locale Settings dialog, we change the locale and time zone to Chinese and Asia/Shanghai, respectively, and add the ISO currency abbreviation for the Chinese yuan. We then click Update to save the locale for the current session.
We click Demo to see the effects of the update. Note that the locale and currency settings are in Chinese and Chinese yuan, respectively.

Note that the text describing day and year is in the Chinese language, and the currency values are in Chinese yuan. We can select other locales in the Locale field and click Compute to see the effects in the Selected locale area.
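Outside Pega, the same locale-sensitive rendering can be illustrated with JavaScript's standard Intl API. This is not a Pega API; it is just a sketch of how one value formats differently per locale, here US dollars versus Chinese yuan:

```javascript
// Standard JavaScript Intl API, illustrating locale-sensitive
// currency rendering analogous to the Locale Settings Demo in Pega.
const amount = 1234.5;

const us = new Intl.NumberFormat("en-US", { style: "currency", currency: "USD" });
const cn = new Intl.NumberFormat("zh-CN", { style: "currency", currency: "CNY" });

console.log(us.format(amount)); // $1,234.50
console.log(cn.format(amount)); // e.g. ¥1,234.50 (symbol may vary by ICU data)
```

The value itself never changes; only its presentation does, which is exactly the behavior the Demo button shows when we switch locale and currency settings.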

Note: Compute is run on the local workstation. If you are trying this on your local machine, make sure that your machine is capable of running a Java applet. Otherwise, you will not be able to see this work.
When the user logs out and logs back in, the system reverts to the original default settings.
In cases where we want users to be able to switch locales during their sessions without having to log off,
we can use the engine API named setLocaleName to change locales programmatically. For more
information, consult the Pega API document by selecting the APIs > Engine menu item in the Designer
Studio header.

Localization is the process of adapting internationalized software for a specific region or language by
adding locale-specific components and translating text. Pega automates the process by supplying a
Localization wizard we can use to translate the text strings in labels, captions, instructions, and so on,
that appear in the Pega user interface. Pega also ships language packs for some languages, which translate many of the strings and reduce the effort needed to localize applications. As a best practice,
use the wizard to get the strings from the application even if a language pack is used. The output file can
be downloaded and we can overwrite the changes if required.
The localization wizard can be started in either of two modes:
Create a translation package.
Upload the completed translated package to a language-specific ruleset.
Creating a Translation Package
Select Designer Studio > User Interface > Localization Tools > Translate to New Language. The wizard
consists of four steps.
1. Select Languages —Select the application ruleset that we need to translate. The options list all
the possible languages. We are not restricted only to the language packs shipped in the product.
In our example, we select Spanish. When we complete this step, the system creates a work
object using the prefix pxL.
2. Select Rulesets —Select an unlocked ruleset version in which to save the rules. We can also
choose to include all PegaRULES application rulesets and export them for custom translation.
If we wish to acquire a language pack that Pega 7 ships (such as Spanish), we can exit the
wizard at this point and import the pack. We would then reopen the translation in the Translation
in Progress mode. If we will not acquire a language pack or if the languages do not have
language packs we can indicate that we want to include the Pega fields in our language package
for custom translation as shown here:
3. Enable Records —Start the process of enabling and validating the selected records. When the
process completes, we see rules in the Errors list that require manual updates. To fix a rule, we
click the link in the Rule Name column to open it.

4. Export Translations — The system generates the translation package. If a Pega language pack
is installed, the system creates translations for the strings using the pack. The system creates an
XML file for all the text strings that require translation. We can download this as a zip file and then
open it in an Excel spreadsheet to see the fields.
The team can now work on updating the translations for all these strings in the spreadsheet.
Uploading the Translation Package to Our Application
When the translation process is complete, we are ready to upload the contents of the package to the
application. We select Designer Studio > User Interface > Localization Tools > Translations in Progress.
In the first panel, we select the wizard work item for the package we created. We then complete three steps.
1. Import Rulesets — We select a new default ruleset created by the system to store the Spanish
translated strings. So that the translations can be shared across applications we would select an
organization ruleset.
We add this ruleset to the access groups that require Spanish translations. The system
automatically displays the Spanish text when the user interface is rendered.
2. Upload Translation Pack— We import the zip file that contains the translations.

3. View Import Results — The system imports the translations and creates the ruleset and the
associated rules required to save the translated strings.

Localizing the application is relatively easy if we develop the application using best practices. If the
requirements indicate that localization must be supported, we must be aware of the following guidelines.
To get the wizard to pick the rules, we must make sure that the localization flag is enabled in all
harnesses and sections.

Some rules such as correspondence, paragraph, and work parties contain text that must be
manually created, translated and stored in the corresponding ruleset.
We must define field value rules for all the labels and other text strings. In the following example,
we would clear the Use property default checkbox and enter a field value.
To understand how the localization wizard uses values in a field value rule, let's look at an
example. In addition to the Apply To class, the field value is identified using the field name and
the field value. The field name indicates how the value is used in the user interface. For example,
pyCaption means that the field value is used as a label on a section.

The localized text in the To area contains the actual text that is saved as part of the rule. When
we translate the application to Spanish, the localized label stores the translated field value for
Hire Date in the Spanish ruleset.
Standard Field Values and Names
Pega includes standard field values using field names as shown below. The Localization wizard picks all
rules with these field names; the language packs automatically translate them by default.
pyCaption — this is associated with all the labels that are used in the sections
pyInstructions — this is the instruction text that is entered in the assignment shape of the flow
pyStatusLabel — used for status values such as New, Resolved-Completed, and so on. If we create any
custom statuses then we should define the field value rule for that work status as well.
pyLabel — used as short descriptions for case types and flows that appear in the user interface
pyMessageLabel — this is associated with the error messages that can be added in the activities or in the
validation rules.
pyButtonLabel — this is associated with labels that convey a command used in click-action controls such
as buttons or links. Examples include Save, Cancel, Submit, Next, and OK.
pyToolTip — this is associated with text used in a ToolTip. For example, "Case ID."
pyActionPrompt — this is an explanation in sentence form that is presented as a ToolTip. For example,
"Find a case by its ID."
Pega supports developing accessibility-friendly applications for users who rely on assistive devices to access the application. Pega follows guidelines established by Section 508 of the United States Rehabilitation Act, the Web Accessibility Initiative, and the Disability Discrimination Act.
Pega includes a special ruleset named PegaWAI in the installation bundle. The ruleset renders the application in accessibility-friendly mode without any extra coding effort. We need to import the PegaWAI ruleset and add it to the application rule's production ruleset list. We also add the ruleset to the access groups of users needing accessibility features and controls. On the access group's Advanced tab, we enter the ruleset in the production ruleset list and select the Enable accessibility add-on checkbox.

When users log in they see the screen formatted for accessibility, and the labels can be read by JAWS or
other screen readers. Many controls and labels are formatted differently. For example, the date control
displays as a dropdown box instead of as a calendar.

Some images used in field labels such as the required icon are rendered as text descriptions.
Viewing the Accessibility Report
Pega includes a report to test the accessibility compatibility of the application by going to Designer Studio
> Application > Tools > Accessibility Report. We can run the report only after adding the PegaWAI ruleset
to our access group. The report groups all rules under harnesses and flow actions. Drilling down further, we can see all other referenced rules and controls. The report indicates the compatibility percentage for each element. Note
the TabbedScreenFlow7 harness, which has a score of 0%.

By default, accessibility is enabled for all auto-generated rules. In this example, we can click the element
on the list to open the rule and reset the Accessibility checkbox to Yes.

In general, we should test other rules and enable this flag.
We can hover the mouse pointer over other elements to see why they are not accessible. In some cases, a priority appears indicating the nature of the accessibility violation and an alternative solution.
Tips for Building Accessible Applications
Pega makes the application accessibility-friendly without requiring any special code. The compatibility
report is useful in finding the recommendations for some violations. However, we need to design the
application according to specific guidelines to make this work seamlessly. Here are some examples.
We should use relative and not absolute units in the markup language. We should avoid using px or px-fixed layouts; it is a best practice to use dynamic layouts.
We should avoid hand-coded markup and inline styles. Instead, we should use style sheets.

Events based on mouse actions should be avoided because this impacts most of the AJAX calls.
Typically this involves mouse actions like onClick on a checkbox, Hover, automatic calculation of
values, and so on.
We should avoid icons, replacing them with buttons, and then add an ampersand "&" before the caption to create a shortcut. For a Save button we enter &SAVE as the caption, and the shortcut key will be ALT-S.
We can create shortcut keys for most common actions such as Submit, Save, and Cancel, and
for the flow actions if there are multiple flow actions in the same assignment.
Typically, we access a Pega application on the Web using composite portals, which include the case
manager, case worker, or Designer Studio. The Pega user interface displays in the entire browser
window. The portal rules do not require integration with an external system.

IAC, on the other hand, allows us to embed the Pega application as a mash-up: a composite application embedded as a gadget in an existing Web application. A mash-up is a term used for Web applications that merge data and content from multiple sources and present them in a common user interface.

IAC is designed to work with all standard Pega harnesses and sections. UI customization is not necessary. In other words, an existing Pega application can be embedded as a mash-up without modification.
The example below shows a gadget in the marked area that presents an auto insurance quote screen
flow within the company Web page. The process was built using standard Pega UI elements.

IAC leverages all the different authentication and authorization methods supported in Pega. Standard IAC
authentication activities in PegaRULES ruleset can be customized to integrate with the third party
authentication implemented in the external Web application. IAC is designed to work seamlessly in any
Java or .NET Web application.

There are three main components in IAC: the Pega Gadget Manager, the Pega Instance, and the Pega Composite Gateway. Let's look at each of them now.
Pega Gadget Manager — A JavaScript file, PegaInternetApplicationComposer.js, which contains the scripts to launch the Pega application in a Web application as a gadget.
We need to perform two tasks when setting up the Gadget Manager:
1. Add configuration properties such as URL of the servlet, system ID of the Pega server, name of
the application, and so on.
2. Create an HTML DIV element for each Pega gadget; the gadget should indicate the Pega action, action attributes, and the Gadget Manager attributes.

We use the Gateway Configuration Console for both these tasks, as described on the next page.
Pega instance — The Pega server containing the application handles the requests coming from the IAC gadgets.
Pega Composite Gateway — A servlet that is used to proxy the communication between the IAC Gadget Manager and the Pega instance. This servlet (prgateway.war) is included in the Pega installation bundle. The system administrator must deploy prgateway in the Web server.

The gateway servlet and the Web application containing the Gadget Manager must be co-located in the
same domain. The Pega application is deployed on another server. If the Pega instance exists in a
different domain then we can only access it using a gateway servlet.

After we've deployed the Pega Composite Gateway servlet, we can launch the Gateway Configuration
Console, which is used to configure connection parameters enabling IAC gadgets to communicate
directly with Pega applications.
The Console is included as part of the standard Pega-IAC ruleset, which we must add to our application.
Before using the Console, edit the prconfig.xml on the Pega instance to include this entry:
<env name="Authentication/RedirectGuests" value="false" />

Configuring the host
To begin using the Configuration Console, we first specify connection settings for the system that hosts
the application we want to access on the Host Configuration page. The Console generates property
settings and gadget attributes that are specific to a host. We click Add to configure a new host. In this
example, we use localhost because the Gateway is installed in the same instance where Pega exists (this
is typically not the case in an actual implementation).
When creating summary type reports we need to choose an aggregate column. The aggregate column applies a function such as count, average, max, or min to a specific property (column). Summary reports help in providing a less detailed view by grouping related items.
Assume we want a report of all the active cases in the system. A list-based report would have several hundred rows, and it is hard to interpret anything from a list that long. A summary report might be a better choice.
Summary reports provide additional clarity: on the same report we can group by the operator name column, so that the results are grouped by operator and show the count of active cases for each operator. In a summary report, we select the column on which we want to summarize.
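The grouping a summary report performs can be sketched in plain JavaScript. The case data below is made up for illustration; only the "group by operator, count the cases" aggregation is the point:

```javascript
// Plain-JavaScript sketch of a summary report's aggregation:
// group active cases by operator and count them.
const cases = [
  { caseId: "C-1", operator: "anna", status: "Open" },
  { caseId: "C-2", operator: "anna", status: "Open" },
  { caseId: "C-3", operator: "bob",  status: "Open" },
];

// The operator column is the group-by column; count is the aggregate.
const countByOperator = cases.reduce((acc, c) => {
  acc[c.operator] = (acc[c.operator] || 0) + 1;
  return acc;
}, {});

console.log(countByOperator); // { anna: 2, bob: 1 }
```

Three detail rows collapse into two summary rows, which is exactly why a summary report is easier to interpret than a long list report.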

For these two columns to appear as headers, we use the first field in the grouping category in the Report
Viewer tab.
When we review the report output, we can see that it displays a maximum of three results per group; this is done by setting the rank in the Query tab of the report editor as shown below.
We chose pxCreateDateTime to pick the three recently created cases based on create date. Here we
display the Top Ranked for each group to display three recent cases for each group.

A pivot table is a program tool that allows us to reorganize and summarize selected columns and rows of
data in a spreadsheet or database table to obtain a desired report. Report definitions can be used for
creating pivot tables.
When reports are grouped using two properties, pivoting them arranges these properties in rows and columns, thereby creating a pivot table. Let's see how to create one.

This report is informative but harder to read; putting it in a pivot table improves readability. Open the report in the report editor, then right-click on the column and select Display Values across columns.
The report now uses the create date property across the row and the work status across columns.

While creating a pivot table, we need to decide which property to display across columns. We should select a property that has a finite number of values, as we wouldn't want the report to grow so wide that it requires horizontal scrolling. This report is a good design because the work statuses will only
be a handful of values, whereas the create date time continues to grow over time.
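The pivot itself can be sketched in plain JavaScript. The grouped rows below are illustrative; the transformation turns one property's values (work status) into columns keyed by the other property (create date):

```javascript
// Sketch of pivoting grouped report results:
// rows = create date, columns = work status, cells = counts.
const rows = [
  { date: "2015-11-16", status: "Open",     count: 4 },
  { date: "2015-11-16", status: "Resolved", count: 2 },
  { date: "2015-11-17", status: "Open",     count: 1 },
];

const pivot = {};
for (const r of rows) {
  pivot[r.date] = pivot[r.date] || {};   // one row per date
  pivot[r.date][r.status] = r.count;     // status values become columns
}

console.log(pivot);
// { "2015-11-16": { Open: 4, Resolved: 2 }, "2015-11-17": { Open: 1 } }
```

Because the set of statuses is small and fixed, the column count stays bounded even as new dates keep adding rows, which mirrors the design guidance above.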

Sub reports join two different classes using specific join conditions and the results from the second report
can be used to filter the first report or can display as part of the first report.
Sub reports can be used to calculate aggregate values, such as counts, maximums, and
minimums. Let's distinguish when we would use a sub report versus a summary type report definition, so it is clear which to use when aggregate calculations are required.
This illustrates a best practice. If it is necessary to have an aggregate calculation for a flat list of distinct
records, including multiple properties for each record, then we should use a list report with a sub report.
However, if the aggregate calculation is needed for a group, then we should use a Summary Report.
Let's look at other examples where we need sub reports. Sub reports and Class joins are closely related.
When should we use a simple join to associate two classes versus a sub report? Joins are useful when we associate two different classes and present columns that belong to the second class.
For example, we use a predefined association between Assign-Worklist and the work class to display
columns belonging to both these classes.
Assume we want columns that we joined from other tables to display aggregate calculations, for example
we want to display a report that shows a list of all purchase requests, and includes columns for the time
when the request is created, the operator who created the request and case ID, and all these columns
come from the purchase request work class. Now we need to include the aggregate calculations on
subcases of the purchase request: the date the first subcase was resolved, the date the last subcase was
resolved, and the subcase count. This requires us to use a sub report.
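What the sub report computes can be sketched in plain JavaScript: per-parent aggregates over subcases (count, first and last resolved date) joined back to the list of purchase requests. The data below is made up for illustration:

```javascript
// Sketch of a list report with a sub report: each purchase request
// row gains aggregate columns computed from its subcases.
const requests = [{ id: "PR-1" }, { id: "PR-2" }];
const subcases = [
  { parent: "PR-1", resolved: "2015-11-01" },
  { parent: "PR-1", resolved: "2015-11-05" },
  { parent: "PR-2", resolved: "2015-11-03" },
];

const report = requests.map(req => {
  const subs = subcases.filter(s => s.parent === req.id); // join condition
  const dates = subs.map(s => s.resolved).sort();          // ISO dates sort lexically
  return {
    id: req.id,
    subcaseCount: subs.length,                // aggregate: count
    firstResolved: dates[0],                  // aggregate: min
    lastResolved: dates[dates.length - 1],    // aggregate: max
  };
});

console.log(report);
```

Each output row stays a distinct purchase request, with the aggregates computed over the joined class, which is the case where a list report plus sub report beats a summary report.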
Sub reports are added to the main report using the Data Access tab, by including the sub report name, Applies To class, and a prefix.

Then use the configuration dialog to set:
1. Where the sub report result is used.
2. Filter conditions to join the report
3. Join conditions to configure inner, outer, left or right join

To use the columns from our sub report we need to enter the prefix we specified in the Data Access tab.
Another use case for sub reports is to use the results of a report as a filter for the calling report. Joins can
be used in some of these cases but sub reports are useful when we want to exclude the choices based
on the results of a sub report. For example, we want to know the list of all operators who have no
assignments past goal.

A trend report presents counts of cases or other data elements at a series of points along a continuum,
normally a time line. The X-axis of a trend report displays data points on that continuum representing
weeks, months, or quarters in a year, or some other meaningful increment. One column of the data
supporting the trend report displays one or more Single Value properties of a DateTime type. The time-unit data requires us to use an SQL function (refer to the Atlas - Standard SQL function rules to get the complete list).
When using a line chart, the time-unit data must be in the second column to present the correct data. Let's look at this with an example. Here we are trending the report based on date, and we used the pxDate

When we take a closer look, there are no cases created on 11/18, 11/19, or 11/20, but looking at the report
someone might conclude there was one case created on 11/18 as well. Let's switch the order in the report
definition to make the Date column the second column and the count column the first column. Now the
report looks like the screenshot below.

This is applicable only to line charts. If we are using a bar, column, or spark chart, the report generates
similar output in both cases.
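The underlying issue is that dates with zero cases never appear as data points, so a line chart simply connects the points on either side of the gap. The sketch below shows the gap days vanishing from grouped counts, and zero-filling as one general remedy (this is a different technique from the column-swap described above, which is specific to how Pega renders line charts); the dates are hypothetical:

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical case-creation dates; note the gap on 11/18 through 11/20.
created = [date(2015, 11, 16), date(2015, 11, 17), date(2015, 11, 17),
           date(2015, 11, 21)]
counts = Counter(created)

# A line chart drawn from only the dates present connects 11/17 directly
# to 11/21, which a viewer can misread as activity on the missing days.
print(sorted(counts))  # the gap days never appear as data points

# Filling the gaps with explicit zero counts removes the ambiguity.
day, end = min(created), max(created)
filled = {}
while day <= end:
    filled[day] = counts.get(day, 0)
    day += timedelta(days=1)
print(filled[date(2015, 11, 18)])  # 0 (no cases created that day)
```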

A summary-type report definition can be configured to display its results in the form of a chart. The
chart editor allows us to add a chart if the report definition includes at least one aggregation column. All
non-aggregate columns are referenced as group-by columns. The Chart tab indicates where to add the
aggregate and the non-aggregate columns. After adding them, it is necessary to use the Options icon to
configure the data displayed. Combo charts require two aggregate columns.
Charts can be previewed only in the report editor which makes it logical to edit them in the report editor
instead of directly in the report definition. There is an exception to this rule. If we are using map reports
then we cannot edit them in the report editor.
The bubble chart is a special category that provides a 3-D view of three numeric columns. The
third dimension is represented by the size of the bubble for each data point, while the other two columns
are plotted on the horizontal and vertical axes, respectively.
Gauge reports can be very effective in displaying burn rate, throughput, and other measurements of
current state. There are ten different ways of displaying these charts, including five Angular gauge styles.
Pie charts are extremely useful for reports that show the percentages of various categories, such as
how many new purchase requests are placed by each department (engineering, marketing, HR, and
so on).
Selecting between Bar and Column reports depends on whether we want the results to be presented in
horizontal or vertical bars.
Funnel and pyramid are useful to display the numbers across various stages, such as the status of a
claim case, how many are in Open, Pending-UnderwriterDecision, Pending-WaitingforDocuments,
Pending-ManagerApproval, Resolved-Rejected, Resolved-Approved and so on.
Maps require us to add the map using the settings landing page. There is a wide variety of maps
available, which can be selected by searching with the autocomplete in the map type field.

Reports created by managers in a production environment should be associated with a production
ruleset. These reports are not considered part of the application itself, but are in the production layer that
sits on top of the application layer.
To configure a reporting ruleset, we specify it in two locations:
On the application rule Definition tab.
On the access group rule Advanced tab.

Note: Make sure that the ruleset has one unlocked version, does not use check-out, and has the
necessary application rulesets as prerequisites.

We can use settings in a report definition to customize the report presentation and behavior, and control
which users can or cannot run a particular report.
The Default Starter Report
When a manager adds a report in the Case Manager portal and then selects the case type and report
type, the system copies a standard report (either pyDefaultReport or pyDefaultSummaryReport) to the
specified case type. Managers can then use this copy as the starting point for a new report by saving it
under a new name.

We can copy the standard reports into the appropriate work class and application ruleset to customize the
settings for reports created in production.
Report Presentation and Behavior
The report definition provides a wide variety of settings for customizing our reports. Here are a few
examples.
We can change the default thresholds on the Data Access tab. For example, we can increase the
maximum number of rows from 500 to 1000.

We can also choose whether to use non-optimized properties.

On the Report Viewer tab we can customize user interactions such as what buttons to display.

We can also choose to present records in multiple pages and define how many rows to include in each
page.

Report Privileges
We can associate privileges with a report to control who can run it. We create a privilege rule and update
the access role to object rule to convey the privilege to a select group of users. We specify the privilege
on the Data Access tab.

By default, there are no entries in the privilege list — any user who has the corresponding ruleset in the
ruleset stack can run the report. If we add privileges, operators must be granted at least one of them
in order to run the report.
Updating Report Resource Limits at the Application Level
We can update default resource settings at the application level. These are the same settings we saw on
a report definition Data Access tab, which apply to that report only. To see the current limits, go to
Designer Studio > Reporting > Settings > Settings.

We may want to change these values. For example, we may want to set a higher limit on the number of
rows to retrieve if the query is likely to produce many results and we want to include all of them. We use
the following system setting rules to change the limits in our application. Remember to log off and log
back on for the settings to take effect.
Running Reports
Maximum number of rows to retrieve — pyMaxRecords
Maximum elapsed time for database query — pyTimeoutQueryInterval
Exporting Data from Reports
Maximum rows to retrieve — pyExportMaxRecords
Maximum elapsed time for database query — pyExportQueryTimeout

A report may work well in development but when migrated to a production environment it might not return
the results we expect.

Or it might cause an error condition.
These problems are typically the result of incorrect mappings from a report to the table in the database. A
report is applied to a class, and the class is part of a class group, for which there is a corresponding Pega
database table instance, which has a pointer to the actual database table.

Migration problems often originate from this database table instance. As shown in the example above, the
table specified (pc_SAE_HRServices_Work) might never have been created in the production
environment, or the database table instance itself might not exist in production.
Remember that database tables are data instances that have an associated ruleset setting that is used to
facilitate export and import.
However, this setting is not required and so it is possible that it was missing or incorrect. As a result,
these records might be left behind when installing a new system.

The system gathers statistics for every run of a report in an instance of the class Log-ReportStatistics,
which represents one or more executions of a particular report in a given hour. Each log entry is
populated once per hour.
Report statistics are enabled by default. We can disable them using the dynamic system setting for
reporting/enable statistics and changing the setting from "true" to "false."
There are four standard reports that provide visibility into this statistical data. We use these reports to see
what reports are being used:
pyDefaultReport — gives us a detailed view of report usage data
pyDefaultSummaryReport — gives us an aggregated view of the data
pyReportErrorSummary — shows us what reports have produced errors
pyReportUsageByUser — shows us who has run what reports.
Let's look at the pyDefaultReport shown below. In addition to the Applies To class, report name, and user
for each report, there is a Run DateTime column, which contains timestamps indicating when the system
created a log entry. (Note that the timestamp does not represent when the report was run, as each
instance can represent multiple runs.) The timestamp could be up to an hour after the reports were run.

Also included are key metrics for the report executions such as run, error, and row counts, and the total
elapsed time.
We can copy the standard reports and make changes to them to suit our custom requirements.

If our reports are performing sub-optimally, we can examine factors in the report definitions by asking
these questions:
Are all of the properties in the report optimized? If there is a property value that must be extracted
from a BLOB column rather than from an exposed column, this can lead to added query time. For
guidance on whether to optimize specific properties, see When to use — and when not to use —
unoptimized properties in reports.
To help us locate unoptimized properties, check the Display unoptimized properties in data
explorer checkbox on the Data Access tab. Unoptimized properties appear as selection options
in the Report Editor's Data Explorer, in the Calculation Builder, and when defining report filters.
Use the Property Optimization tool to expose top-level single value properties as database
columns. For more information, see How to expose a property as a database column with the
Property Optimization tool.
Are there any outer joins (class joins, association rules, or sub-reports) in the query? Selecting
"Include all rows" on the Data Access tab can be costly. It causes the system to use an outer join
for the report in which all instances of one of the classes are included in the report even if they
have no matching instances in the other class. Select "Only include matching rows" if possible.
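The cost and result difference between the two options can be sketched with a small in-memory example; the table and column names are invented stand-ins for two joined classes, not real Pega schemas:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE requests (req_id TEXT PRIMARY KEY);
CREATE TABLE subcases (req_id TEXT, status TEXT);
INSERT INTO requests VALUES ('PR-1'), ('PR-2'), ('PR-3');
INSERT INTO subcases VALUES ('PR-1', 'Resolved'), ('PR-2', 'Open');
""")

# "Include all rows" = outer join: every request appears, even PR-3,
# which has no subcases (its subcase columns come back NULL).
outer = conn.execute("""
SELECT r.req_id, s.status FROM requests r
LEFT OUTER JOIN subcases s ON s.req_id = r.req_id
""").fetchall()

# "Only include matching rows" = inner join: PR-3 is dropped, and the
# database can usually satisfy this with a cheaper execution plan.
inner = conn.execute("""
SELECT r.req_id, s.status FROM requests r
INNER JOIN subcases s ON s.req_id = r.req_id
""").fetchall()

print(len(outer), len(inner))  # 3 2
```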

Are many rows retrieved and has paging been turned off? One of the main purposes of the
paging feature is to prevent excessive amounts of data being pulled onto the server. Turn paging
on by selecting Enable paging on the Report Viewer tab. For more information, see When and
how to configure paging in reports.
Sometimes, the best way to troubleshoot a Pega report is to analyze the query outside the Pega system.
Use the clipboard viewer to locate the pyReportContentPage page, and get the value of the
pxSQLStatementPost property.
Analysis of the results could indicate, for example, that database statistics must be gathered, or that
additional indices are necessary.
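One concrete form this analysis takes is asking the database for its execution plan. The toy example below uses SQLite as a stand-in database (production Pega systems typically run on Oracle, PostgreSQL, or similar, each with its own EXPLAIN syntax), and a made-up query in place of the real statement extracted from pxSQLStatementPost:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE work (case_id TEXT, status TEXT)")

def plan(sql):
    # Ask the database how it intends to execute the query.
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(str(r[-1]) for r in rows)

query = "SELECT case_id FROM work WHERE status = 'Open'"
before = plan(query)  # the plan reports a full-table scan
conn.execute("CREATE INDEX ix_status ON work (status)")
after = plan(query)   # the plan now reports a search using ix_status
print(before)
print(after)
```

Seeing a full scan on a large table in the plan is exactly the kind of evidence that "additional indices are necessary."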

Let's consider the two most fundamental choices we make when creating a report:
Case type — the class of instances to be reported
Report type — list or summary
There is a close logical connection between these settings. In the following example, work status data is
used in two distinct classes and report types.
For instance, assume we want to see a list of all cases in the application, their work status, and the stage
they are in. Because we want to see application-wide data, we would create a simple list report in the
work pool rather than in a case type.

On the other hand, consider creating a summary report for a specific case type. For example, we want to
create a purchase request report showing the total purchase request amounts by cost center and by
work status.
Because we are interested only in purchase requests, we create a report definition in the purchase
request case type. We need to create a column containing the purchase request amounts, which will be
aggregated. We enter the declare expression .TotalPRCost, which gives us the total purchase order
cost of all the line items in a purchase request. Cost center and total PR cost data are in the purchase
request class, so there is no need to configure an association between classes.
We summarize the values in the .TotalPRCost column as shown here:

Optimizing properties
Most case types that serve as a source for a Pega 7 report have data stored in an extendable BLOB
column. The BLOB can have both scalar properties and arrays like page lists and page groups. While this
data can be read directly from the BLOB for a report, the property optimization wizard lets us expose
scalar properties as dedicated database columns. Select a property in the Application Explorer and use
the right-click menu to open it.

The Property Optimization wizard can also be used to expose properties in page lists and page groups;
this creates declare index rules. There are some properties that might not be reported on directly, but
instead used as a source for a calculation. Such calculations are done using a SQL function alias rule.
See How to create a Declare Index rule for an embedded property with the Property Optimization tool.
Including data in related classes or reports
The Data Access tab on the report definition rule provides association rule, class join, and sub-report
features that allow us to include data in related classes or reports.
Association rules let us define a relationship between two classes based on matching values in
pairs of properties. We can then automatically add a join to a report that displays properties from
both classes referenced in the association. For more information, see When and how to create
an association rule to support reporting.
Class joins enable reporting on data from multiple classes or tables. For each class join, we
define one or more logical conditions that specify how to match or join corresponding instances in
different classes.
Sub-reports allow results from any report definition (including a report definition defined on a
different Apply To class) to be referenced and included in a different report definition. For more
information, see When and how to use sub-reports in Report Definition reports.

We use a centralized data warehouse if there is a requirement to report on data from Pega 7 in concert
with data in other systems. BIX is a Pega 7 feature set that extracts Pega 7 data to an external data
warehouse. This capability provides a consolidated repository for all enterprise data. It also allows us to
use third-party reporting tools that are designed specifically to work with enterprise-wide data.
BIX itself is a set of rules and rule types installed by including the BIX ruleset in the application stack. We
set up BIX by configuring extract rules, a rule type included as part of the BIX installation.
BIX can extract Pega 7 data to XML or CSV (comma separated value) files. It can also extract data
directly to tables in a database schema.
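The essence of a CSV extract is flattening selected scalar properties of each case into one row per case. A minimal sketch of that idea follows; the case data and field names are invented, and real BIX extract rules define the actual property-to-column mapping:

```python
import csv
import io

# Hypothetical cases with a few scalar properties chosen for extraction.
cases = [
    {"pyID": "PR-1", "Status": "Open", "Total": 250.0},
    {"pyID": "PR-2", "Status": "Resolved", "Total": 980.5},
]

# One row per case, one column per extracted property.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["pyID", "Status", "Total"])
writer.writeheader()
writer.writerows(cases)
print(buf.getvalue())
```

A warehouse-side loader or third-party reporting tool can then consume the file without knowing anything about Pega's internal BLOB storage.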

Like work, rules are instances of internal classes. This fundamental technical concept makes rule
reporting possible.
For example, cases for managing applications for open positions at SAE Candidate Corporation are
instances of the class SAE-HRService-Work-Candidate. Property rules such as .TotalAmount are
instances of the class Rule-Obj-Property.
Do not confuse the class of a rule, which dictates the rule type, with the Apply To class of the rule. For
example, all property rules are instances of the same rule class. Most rules also have distinct Apply To
classes, which indicate the implementation location and are keys used in rule resolution.
Remember that reports are created in the class of the items being reported. If a report is in the Candidate
class, the report returns candidate cases. Likewise, if the report is in Rule-Obj-Property as shown here we
get a list of properties when we generate the report.

Using the Data-Rule-Summary class as the basis for rule reports
When we create rule reports, we usually want to span rule types and join the rule classes. To achieve
this, we use the Data-Rule-Summary class, which contains an instance of every rule type in the system.
This class contains the following properties that are essential for rule reports:
pzInsKey — key of the rule instance. This property is useful for joining other classes as we might
do with Rule History reports.
pyClass — the class of the rule type. For example, Rule-Obj-Property is for property rules.
pyClassName — the Apply To class of the rule. This could be, for example, the work class of a
purchase request application.
pyRuleSet and pyRuleSetVersion — the ruleset and ruleset version.
pyRuleName — a unique identifier used by the system; for example, CountOfLineItems.
pyLabel — a short description of the rule. For example, the label for the property named
CountOfLineItems could be Number of Items to Purchase.
pyRuleAvailable — the availability of a rule.
pxUpdateDateTime, pxUpdateOperator, and pxUpdateOpName — details of the most recent update.
We can join other classes with the rule classes in order to provide more granular information. Some of the
classes that are most commonly joined with the rule classes are:
History-Rule instances represent rule updates.
Data-Rule-Locking instances represent locks, or checkouts, of each rule.
Index-Warning for rule warnings, like the performance warning that's given if a non-optimized
property is used in a report.
Index-CustomFields for custom fields defined for rules - a feature specifically designed to aid rule
reporting and is described later in this lesson.

Standard rule reports are packaged with Pega 7. Many of them are used on the Designer Studio landing
pages. Let's look at some examples on the Application Development landing page by selecting Designer
Studio > Application > Development.
The Recent Actions report shows what updates have been made most recently. Because this shows
rule history events, the underlying report for this landing page is in the History-Rule class.
The Checked Out Rules report is in the Data-Rule-Locking class, which stores the rule locks
that are created when a rule is checked out. The report includes the pzInsKey of the locked rule
and pxInstanceLockedBy, which points to the operator who has locked the rule.
The Weekly Rule Updates report, shows how many rules, by ruleset, have been updated in the
last 8 weeks. This report only looks at the most recent update timestamp. As such, if a rule has
been updated more than once, only the most recent update is counted. The Data-Rule-Summary
class is queried. Historical data is not being mined here — History-Rule is not part of this query.
The Developer Activity report, on the other hand, does look at History-Rule class historical data.
These are check-in counts, not just rule counts.
Let's view the standard rule warnings report below by selecting Designer Studio > Application >
Guardrails > Warning Details. We can use the expand row arrow to display the specifics about each
warning.
The content in this landing page grid is supplied by the pxRuleWarningsInApp report definition in the
Data-Rule-Summary class.
The report contains properties such as pxWarningSeverity and pxWarningType, which are in the
Index-Warning class — an index of warnings embedded in all the rules in the system. The properties are
available because of a class join shown here:
We click the Edit conditions button and see that the pxReferencingRuleInsKey property in the
Index-Warning class points to the pzInsKey of the corresponding rule.

Custom fields applied to a rule provide a flexible way to supplement our rules with metadata, which can
be used as a source for a report. To find rules using these custom fields, we use the Find by Custom
Field search tool, available from Designer Studio > Process & Rules > Tools > Find Rules By Custom
Field.
Note: Custom fields are not available for Data- objects.
Creating a custom field
In this example, the process begins by creating a purchase request, followed by a confirmation of the
request before it is reviewed by the approver.
Think of Create and Confirm as two sub-steps in the request creation step of the process. We want to
represent this concept by using a custom field on rules related to the step. We can then generate reports
that show all rules for request creation.
We want to add a custom field to the Review flow action because it is related to request creation. We
open the flow action rule and add a custom field in the Custom Fields area on the History tab. We define
the custom field property ProcessStep (no spaces) and value Request Creation as shown here:
When we submit the update, the system creates the ProcessStep property in the Index-CustomFields
class. The new field appears in the Custom Fields area as shown here:
Adding custom field properties to reports
In the Data-Rule-Summary class we create a report definition that provides generic rule information such
as class, rule type, rule name, and description and includes our new ProcessStep custom property.
Before populating the fields on the Query tab, we go to the Data Access tab and join this Summary class
to the Index-CustomFields class using a prefix of CustomFields. The join filter condition is .pzInsKey is
equal to CustomFields.pxReferencingRuleInsKey.
Returning to the Query tab, we add the report properties including our ProcessStep custom property in a
column and as a report filter condition.
Optimizing the custom field properties
Note that there are two warnings indicating that the ProcessStep property has not been optimized and
may cause poor performance. Although the system creates properties when we define the custom fields,
they are not exposed as columns.
We usually optimize a property by selecting it in the Application Explorer, right-clicking, and selecting
Optimize for reporting. However, this is not allowed for classes in Pega rulesets, such as the
Index-CustomFields class.
For properties in Pega rulesets, we use the Modify Schema wizard to create the database columns and
expose the properties. Go to Designer Studio > System > Database > Modify Schema. Select
pr_index_customfields on the Select a Table window, and click the property count link in the View Table
window to view the table properties.

Select the property of interest to be exposed and click Create Selected Columns. When we resave the
report, the warnings are eliminated.
Note: Alternatively, a database administrator can create database columns outside Pega 7.
Pega supports integration with message-oriented middleware, or M.O.M., based on the JMS (Java
Message Service) and WebSphere MQ standards for both connectors and services. JMS is a part of the
Java EE platform. WebSphere MQ is an IBM-developed public standard. Let's begin by looking at
how JMS works and then how MQ relates to JMS and when to use each one.
The sender of a message is called a producer. Producers create messages and send them to a
destination on the M.O.M. The recipient application is called the message consumer. Consumers retrieve
messages from the destination.
Multiple producers and consumers can communicate using the same destination. How this is handled
depends on the message delivery model we use. There are two basic message delivery models:
point-to-point and publish-and-subscribe. We'll take a look at point-to-point first.
Point-to-Point Model
In point-to-point messaging, the producer is also referred to as a sender, the destination is called a
queue, and the consumer is considered a receiver.
As we said, multiple producers can send messages to a queue, and multiple consumers can retrieve
messages from a queue, but the main distinguishing characteristic of point-to-point messaging is that a
message can only be retrieved once. Once a receiver retrieves the message, it is removed from the
queue. Messages are always delivered in the order they are sent.
The strategy for determining which receiver gets which messages depends on the receiving application,
which can use the message headers and properties to determine how to handle the messages.
Publish-and-Subscribe Model
The publish-and-subscribe model is different; we use it when we want to allow messages to be
delivered to any interested receiver. This enables us to deliver the same message multiple times to
different receivers.
In the publish-and-subscribe model, the producer is also referred to as a publisher, the destination is
called a topic, and the consumer is considered a subscriber. Consumers subscribe to the topic, and
receive all messages published to that topic.
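The behavioral difference between the two models can be sketched in a few lines. This is a conceptual illustration only, not the JMS API itself; the class and method names are invented:

```python
from collections import deque

class PQueue:
    """Point-to-point: each message is consumed exactly once."""
    def __init__(self):
        self.msgs = deque()
    def send(self, m):
        self.msgs.append(m)
    def receive(self):
        # Retrieving removes the message from the queue.
        return self.msgs.popleft() if self.msgs else None

class Topic:
    """Publish-and-subscribe: every subscriber sees every message."""
    def __init__(self):
        self.subs = []
    def subscribe(self, inbox):
        self.subs.append(inbox)
    def publish(self, m):
        for inbox in self.subs:
            inbox.append(m)

q = PQueue()
q.send("pay vendor 42")
print(q.receive(), q.receive())  # first receiver gets it; second gets None

t = Topic()
a, b = [], []
t.subscribe(a)
t.subscribe(b)
t.publish("rates updated")
print(a, b)  # both subscribers see the same message
```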
The Anatomy of a JMS Message
Let's take a look at the anatomy of a JMS message:
Header - Contains general information about who the message is to, when it was sent and so on. These
values are defined by the JMS specification, and are usually set by the JMS server.
Body - This is the data that the sender wants to send to the recipient, and is sometimes called the
payload. Both sender and receiver need to know what kind of message it is, and how to decipher it.
Properties - These are key-value pairs that the recipient can use to help filter or process the messages.
These properties are defined by the application. As with the message body, both sender and receiver
need to know what properties should be set.
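The three parts described above can be pictured as plain data, with a consumer-side filter on the properties (the selector idea). The destination, property names, and payload here are all invented for the sketch:

```python
# A JMS message sketched as plain data: header values are typically set
# by the JMS provider, properties by the application, and the body is
# the payload both sides have agreed on.
message = {
    "header": {"JMSDestination": "PaymentQueue", "JMSTimestamp": 1450000000},
    "properties": {"region": "EMEA", "priority": "high"},
    "body": "<payment vendor='42' amount='250.00'/>",
}

# Consumers often filter on properties before processing a message.
def accepts(msg, region):
    return msg["properties"].get("region") == region

print(accepts(message, "EMEA"), accepts(message, "APAC"))  # True False
```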

JMS allows applications to exchange information without each having to understand the structure of the
other. But the two applications do need to have a common set of expectations. Before we can send
messages to a consumer, we need to agree on:
The delivery model - point-to-point using a queue, or publish-and-subscribe using a topic.
The JNDI name of the destination queue or topic.
The message type - text, map, bytes, stream, or a serialized Java object.
The expected message content - header, body, and properties.
Response - whether a response is returned and, if so, its type and format.
This agreement is essentially a contract between the consumer and producer, and must be established
by the system architects before we can use message-based integration.
JMS is a standardized API that is up to providers to implement. J2EE application servers like JBoss,
WebSphere and WebLogic provide JMS implementations, as do numerous other third party providers.
Message-Based Integration
Customers using the IBM WebSphere application server have two choices for message-based
integration: JMS and MQ. What's the difference?
WebSphere MQ is a proprietary message-oriented middleware service from IBM. JMS, on the other hand,
is a standard, not a specific implementation: each JMS provider can implement JMS its own way,
as long as the implementation adheres to the JMS specification. IBM's JMS implementation is built on
MQ. Therefore, in WebSphere, JMS is actually a wrapper for MQ.
In Pega we can use MQ directly, or the JMS implementation wrapper. For applications that are deployed
as J2EE Enterprise Applications, there are several advantages to using JMS rather than MQ directly:
JMS offers support for transactions using the Java Transaction API (JTA)
MQ requires a Java thread to constantly "poll" for incoming messages; JMS has better
performance because it can take advantage of J2EE queue monitoring
Applications that use a non-proprietary standard like JMS are inherently more portable.
So when might we want to use MQ? We usually want to use MQ when we need to integrate with a
non-J2EE application that uses MQ.

Configuring an application to send a JMS message involves creating four types of components:
A JNDI service instance to describe how to reach the JNDI server
A producer model instance which describes the producer
A Connect JMS rule, which actually handles sending the message
A connector activity to call the JMS Connector
And depending on how we choose to handle the message content, we may need mappers to create the
message content, and parsers to handle the reply.
We will examine each of these components using an example case in which we are integrating with a
payment provider. After completing a purchase order, we send the ID of the vendor and the amount to
pay, and our payment provider takes care of the rest.
First, we need to ensure that we have access to the JMS services. If our application is deployed on a
Java Enterprise application server like JBoss, WebSphere or WebLogic, then by default we have access
to the JMS services provided by the application server. If we are deploying on a web application server
like Tomcat that doesn't support enterprise applications, we'll need to configure a third party JMS
JNDI Service instance
Then we need to create or identify the messaging destinations we will use to communicate with other
applications. The destinations are defined using JNDI. JNDI stands for Java Naming and Directory
Interface and is a standard Java mechanism to allow Java-based applications to access objects by name.
In order to access the JMS destinations we need to create a JNDI Server data instance, which is
available under the Integration-Resources category in the Records Explorer.
The JNDI server is configured at the application server so all we need to do here is enter the values
supplied by the application server administrator.
We can also view the objects that are named by this JNDI server by clicking Browse Tree. JNDI does not
only handle JMS destinations; it manages all named objects in a Java application.
Before deploying a connector or service, we need to make sure it is working correctly. There are a number
of approaches to testing and debugging JMS connectors and services. A useful tool provided is the JMS
Explorer. The JMS Explorer supports only queues, not topics, and can display only text-based message
data.

The JMS Producer Model, also available under the Integration-Resources category in the Records
Explorer, holds messaging settings for the JMS connector rules.
Persistence determines whether the message is stored in the database or just in memory, which in turn
affects the application's reliability. If the message is stored in memory and the system goes down before
delivery, the message is lost. On the other hand, storing the message in the database adds overhead.
Priority is only relevant if we are sending to a destination that uses priority; how priority is handled is
up to the JMS provider and the consuming application.
Expiration indicates how many milliseconds until the message expires. The default is 0, which means the
message never expires. It stays on the queue indefinitely until retrieved by a consumer. We could set a higher
value if we wanted to make sure the message would only live for a limited amount of time.
Domain lets us indicate whether we are using a point-to-point model or a publish-and-subscribe model.

Connect JMS rules are found in the Integration-Connectors category in the Records Explorer. The
Service tab contains properties that reflect the agreement between our application, which is the message
producer, and the service application we are connecting with, the consumer.
The service-related properties describe how our connector communicates with the "service", which in
JMS means the consumer.
Resource Name Resolution specifies how the queue or topic is found. Use Direct JNDI Lookup to select a
JNDI Server. Select Resource Reference to use Global Resource Settings to specify the name of the
JNDI Server.
The Connection Factory, and the username and password are values that have been provided to us since
they are configured at the application server level.

Destination name is the full name of the queue or topic to which we send the message. As with the JNDI
server, this is provided by the system administrator.
Responses are not supported for the publish-and-subscribe model. For the point-to-point model, if a
response is expected, the sender includes the name of the response destination queue in the
JMSReplyTo property of the message. Leave destination and the Response tab empty if a response is
not expected.
If a response is expected, the producer application stops and waits, and the connection stays open until
the message is delivered and a response is received from the consumer. In this case the response queue
is usually a dynamic destination, meaning the JMS server creates a temporary queue that exists just
while the connection is open. Once the response has been received, the connection is closed and the
dynamic queue goes away. Alternatively, we can specify a static destination name.
The error handling section includes properties related to handling of errors that occur during message
processing. The status value and status message fields are string properties that hold the result of the
message attempt. The standard pyStatusValue and pyStatusMessage properties in the base class can
be used.
In the case of an error, control is passed to the error handler flow specified. In our case, we are using the
standard ConnectionProblem flow.

Once you've got your JMS Rule set up, use the Test Connectivity button to make sure the settings are correct.
The Request tab specifies what data will be in the messages that are sent by this producer. Remember
that a JMS message consists of three parts: header, properties and data.
Most of the header values are filled in automatically by the connector when the message is constructed,
like the destination, or by the JMS provider, like the message ID. There are two header properties we can set ourselves:
JMSType - used to distinguish between different types of messages, for example, notifications versus requests.
JMSCorrelationID - used in asynchronous messaging to associate a response with the original request.
Message properties are name-value pairs that are defined by the applications that are communicating.
These are often used by a selector when multiple consumers receive messages from the same queue to
determine which messages they should accept, or how they should be handled.
Message data is the content or payload of the message itself. The Data Type to use here depends on
what data type we set on the Service tab. In our case, we specified Text, so we need to pass a single text value.
In this example, we use an XML stream rule to create an XML string from a purchase order case on the
clipboard in the format expected by our payment provider.
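As a rough illustration, the payload such a stream rule produces is just a text string. The sketch below builds a comparable XML string in plain Java; the PurchaseOrder element and field names are hypothetical, not taken from the lesson.

```java
// Sketch: building the kind of XML text payload an XML stream rule might
// produce for a purchase order case. The element names (PurchaseOrder,
// OrderID, Amount) are hypothetical examples, not Pega standards.
class PurchaseOrderXml {
    static String toXml(String orderId, double amount) {
        return "<PurchaseOrder>"
             + "<OrderID>" + orderId + "</OrderID>"
             + "<Amount>" + amount + "</Amount>"
             + "</PurchaseOrder>";
    }

    public static void main(String[] args) {
        // The resulting string would be set as the text body of the JMS message.
        System.out.println(toXml("PR-17", 250.0));
    }
}
```

The connector then sends this string as the Text payload agreed on with the consumer.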
The Response tab is applicable only to services from which we expect a synchronous response. On the Service tab, we must tell the connector what sort of data to expect in the response, whether it arrives on the selected dynamic destination or on the static destination name that we provided.

The data settings are similar to those on the Request tab, except instead of describing the data we are
sending, we are telling the connector how to handle the data it receives. The consumer and producer
must agree on the format of the response. In our case, we are expecting a single value in the response,
with the key "status", which we map to a clipboard property .pyStatusValue.
String responses can be copied as-is to the clipboard, or can be parsed using a parse rule or function.
Connector Activity
We need an activity to trigger the connector to send the message. The key step calls the Connect-JMS method, passing the name of the Connect-JMS rule that sends the message.
In our example, we pass the RequestPayment JMS connector rule. When this step is reached, the
connector rule creates a message object as we configured it to, establishes a connection with the JMS
server, sends the message to the queue or publishes it to the topic we've told it to, and if a response is
expected, waits for the response.

Remember that a JMS message interaction includes an application that sends the message - a producer;
a named destination for the message; and a consumer that gets the message and processes it. Now, let's
learn how to configure and use a JMS Service to consume messages from a queue or topic.
A JMS consumer is implemented using a JMS listener. The listener is configured to wait on a particular
queue or topic for a JMS message (request) to arrive.
When a request message arrives, the listener dispatches it to the correct service. It's possible for the
listener to dynamically choose which service to use based on message properties, but in this lesson we
are only going to cover the case where a listener is configured with a single service.
The JMS service maps the data and properties of the incoming message to a clipboard page, and then
invokes an activity to process the request. The activity does whatever processing is needed to handle the
message, such as creating or modifying a case, storing or retrieving data from a database, or executing a
flow action.
Optionally, the activity can set return data and the service creates a JMS response based on that data
and passes it to the listener, which sends it to a configured response queue.
A JMS consumer includes the following components:
A JMS Service Package
A Service JMS rule
A JMS or JMS MDB listener
Any necessary data mapping components for mapping message data to and from the clipboard
An activity to do whatever processing is required for incoming messages
And a JNDI server data instance to locate the destinations by name
We can use the Service wizard (DesignerStudio > Integration > Services > Service Wizard) to create a
service. Let's have a look at the JMS consumer records.
There are two types of listeners for JMS depending on how our application is deployed. If the application
is deployed as an enterprise application or EAR, Pega takes advantage of a J2EE feature implemented
by the application server called Message Driven Beans or MDBs. MDBs are configured at the application
server level to receive messages from a queue and pass those messages to the appropriate enterprise
java bean, in this case Pega.
Pega can only be deployed as an enterprise application on J2EE compliant servers like JBoss,
WebSphere or WebLogic. It cannot be deployed on Tomcat. If Pega is deployed as an EAR, create a
JMS MDB Listener rule.
If our application is deployed as a web application or WAR, the application server doesn't provide MDB
capability, so instead we need to create a JMS Listener, which runs within Pega.
Pega can be deployed as a web application on any supported application server, including Tomcat.
However, remember that Tomcat doesn't provide JMS services itself. We need a third party JMS provider
to use JMS on Tomcat.
Service JMS Rule
Most of the Service JMS rule settings are the same as for other service rules so we won't get into those
here but rather point out the JMS specific ones. The Request tab describes how we want to map the data
in the incoming JMS message.
The Message Header section describes how to map the standard JMS properties that are set by the JMS provider.

The Message Properties section is very similar except that the properties are application specific. We can
add a row for any properties we are expecting the message producer to set that we care about, specifying
the data type, the property name and description, and how and where to map the value of the property.
The Response tab does what the Request tab does but in reverse. It describes how to map data to send
a response to the message producer. In our example we are using a publish-and-subscribe model, so we
don't send a response.
JMS Listener for WAR Deployment
Remember that the listener is responsible for receiving messages from a topic or queue and passing
them to the service rule. It might also respond to messages if that's part of the integration.
A JMS listener runs in its own Java thread, started when our application starts. The listener works by attempting to retrieve a message from the queue. If there's no message waiting, the thread blocks until one is available.
When a message arrives, the thread retrieves it and passes it to the service, and then goes back to
waiting. Periodically it wakes up to see if it's received a signal telling it to stop. If not, it returns to waiting
for a message. Having a long-running thread like this is not supported by J2EE, which is why this model
only works in WAR deployments.
The JMS listener can be found in the Integration-Resources category in the Records explorer.

Depending on what type of service this is and how frequently we expect it to be used, the listener might have a significant impact on our application's performance, so we will need to work with our system administrator to decide how to configure startup and the number of threads.
Usually a JMS Service rule is specified in the Service Method field. It is possible to leave service class
and method empty and have the message producer pass them as message properties, but this feature is
rarely used and is not covered in this lesson.
The wait interval is how often the listener checks to see if it's received a shut-down notice.
The Send error messages option determines what to do if an error occurs for which the JMS Service didn't generate a response. If that happens and this option is selected, the listener sends an empty message. This option only applies if responses are used in this integration.

The first thing we need to do is indicate which messaging model we are using: point-to-point or publish-and-subscribe.
The next section describes how the listener connects to the topic or queue it is waiting on.
Acknowledgement is a feature JMS provides to support reliable, in other words guaranteed message
delivery. A message is not considered delivered until it has been acknowledged by the recipient.
We could choose On Message Receipt, which means we acknowledge before calling the activity which
processes this message, or After Message Processing, which means we will wait until the message has
been processed by the service activity, in which case we send no acknowledgement if there was an error
during processing. If guaranteed delivery is necessary, JMS attempts to re-deliver the message.
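The difference between the two acknowledgement modes can be sketched in plain Java. This only simulates the redelivery logic described above; it is not real JMS API usage, and the names are illustrative.

```java
// Sketch: how the two acknowledgement modes affect redelivery when the
// service activity fails. Plain Java illustrating the logic only.
class AckModes {
    enum Mode { ON_RECEIPT, AFTER_PROCESSING }

    /** Would the message be redelivered after the service activity fails? */
    static boolean redeliveredOnFailure(Mode mode) {
        // On Message Receipt: the acknowledgement is sent before the activity
        // runs, so the provider considers the message delivered even if
        // processing later fails.
        boolean acknowledged = (mode == Mode.ON_RECEIPT);
        // An unacknowledged message is eligible for redelivery when
        // guaranteed delivery is required.
        return !acknowledged;
    }

    public static void main(String[] args) {
        System.out.println(redeliveredOnFailure(Mode.ON_RECEIPT));       // false
        System.out.println(redeliveredOnFailure(Mode.AFTER_PROCESSING)); // true
    }
}
```

With After Message Processing, a failed activity leaves the message unacknowledged, so the provider can attempt redelivery.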
In the publish-and-subscribe model, usually a consumer that subscribes to a topic only receives
messages published after it starts listening. The publisher determines how long a message lasts before it
expires, so when our listener starts, there might be messages already published and still present. If we
want to be able to receive those messages, we check durable subscriber. Note that only some JMS
providers support this feature.
Check No Local Messages to filter out messages we sent ourselves. This allows us to use the same queue for messages and replies; otherwise, we'd receive our own responses as new messages.

The Request Destination Name is the queue or topic the producer is sending messages to. This is the
destination where we will wait for a message.
Message Selector is a feature that allows the listener to filter out messages based on the JMS header or
properties. Our inventory management system publishes inventory notices on behalf of a number of
suppliers, and sets a property in the message called SupplierID. We want to receive messages only from
suppliers we use, and ignore the rest. If we entered a message selector string "SupplierID=3 OR
SupplierID=8" we would only receive messages relating to those suppliers.
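As a sketch, the effect of that selector can be simulated in plain Java. A real selector is evaluated by the JMS provider against the message properties; this only illustrates the filtering.

```java
import java.util.Map;

// Sketch: the effect of the selector "SupplierID=3 OR SupplierID=8",
// simulated over plain property maps instead of real JMS messages.
class SelectorDemo {
    static boolean matches(Map<String, Integer> messageProperties) {
        Integer id = messageProperties.get("SupplierID");
        return id != null && (id == 3 || id == 8);
    }

    public static void main(String[] args) {
        System.out.println(matches(Map.of("SupplierID", 3))); // accepted
        System.out.println(matches(Map.of("SupplierID", 5))); // ignored
    }
}
```

Messages from any other supplier are simply never delivered to the listener.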
In the Response section, the Preference menu lets us indicate how responses, if any, will be sent for
messages this service receives. For Preference we can choose Message, meaning responses should be
sent to whatever destination is specified in the message's JMSReplyTo property. Use this option if our
producer uses dynamic destinations, which are created as needed for each message sent. If we select
Listener the JMSReplyTo property in the messages is ignored and all responses are sent to the queue we
specify in the Response Destination Name field. Here No Reply is selected because the publish-and-subscribe model doesn't support responses.

Now let's look at how to create a JMS MDB listener. Message-driven beans work somewhat differently
than the blocking thread model used by plain JMS listeners because they are running in the context of a container managed by the application server. The container monitors the queue or topic, and invokes the
MDB when a message arrives.
This provides better performance and a more flexible architecture, but requires configuration at the
application server level, which non-MDB listeners do not.
Much of the configuration of a JMS MDB listener is the same as a non-MDB JMS listener, so we'll just
highlight the differences.
On the Listener Properties tab, everything is the same apart from a helpful reminder, and the fields related to the listener Java thread are not there. These fields aren't part of the MDB-based listener because it doesn't run in its own thread.
On the JMS Properties tab, the differences reflect the fact that when using an MDB listener, the application server handles connecting to the request destination and retrieving the messages, so settings related to identifying and connecting to the incoming destination are configured at the application server level and do not appear here.
However, the listener is still responsible for connecting to the destination for response messages we send. So for
point-to-point integrations that require synchronous responses, we still need to identify the response
destination and connection factory.
The key difference is the Deployment Descriptor. Although we can configure the listener, it won't run until
we deploy it to the application server.
Details of how this is done vary between application servers, but it usually comes down to editing XML files called deployment descriptors. Where possible, Pega attempts to generate a full deployment descriptor file that we can use to replace the one we have. On some systems this isn't possible, in which case it generates XML fragments to insert into the existing deployment descriptor file. Click on the link to view the XML fragment.
Each connector running in parallel makes a copy of the step page and returns the results to its copy. Because of this, each connector needs a separate step page even if the connectors share the same applies-to class. If the same step page were to be used, the connector finishing last would overwrite the results from the connector that finished first.
The request parameters are set in steps 3 and 4.
In steps 5 and 6 the Connect-SOAP method is invoked with the RunInParallel parameter selected.
The Connect-Wait method in step 7 joins the current requestor session with the child requestors that were created. If the WaitSeconds parameter is -1, the current requestor waits indefinitely until the child requestors complete. A positive integer waits for the maximum number of seconds entered.
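The join semantics of Connect-Wait can be sketched with plain Java threads standing in for child requestors. This is an illustration of the wait logic only, not Pega internals; the names are illustrative.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: Connect-Wait join semantics. WaitSeconds = -1 waits indefinitely
// for the child requestors; a positive value waits at most that many seconds.
class ConnectWaitDemo {
    /** Returns true if all children completed within the wait interval. */
    static boolean connectWait(ExecutorService children, int waitSeconds)
            throws InterruptedException {
        children.shutdown(); // no new child requestors will be started
        if (waitSeconds == -1) {
            // wait indefinitely until the child requestors complete
            return children.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS);
        }
        return children.awaitTermination(waitSeconds, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> "response A"); // stands in for a parallel connector call
        pool.submit(() -> "response B");
        System.out.println(connectWait(pool, -1)); // true once both finish
    }
}
```

A positive WaitSeconds that elapses before the children finish corresponds to the timeout case, where the join returns without all results present.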

It is not possible to catch exceptions or check the step status using the StepStatusFail when rule in the connector step transition for connectors that run in parallel, since the result is not communicated from the child to the parent. This also means that the error handler flow specified in the connector is not invoked.
Instead, make sure the Error Handling section on the connector rule specifies properties for the status values ensuring that the values are set on the connector page in case an error occurs.

When awakened, we need to examine the pyStatusValue on the connector pages for errors.
In step 8 we copy the response data to the case using a data transform.

In this scenario, two connectors were called in parallel. However, it would be possible to call the connector and perform any other type of task in parallel. In this flow a connector is run in parallel. The flow continues allowing the operator to capture data while the connector is being executed. Later the Connect-Wait is called and the parent and child requestors are joined.

Named pages are not passed to child requestors, so do not use named pages in the data mapping on the Request and Response tabs. Because connector simulations rely on named pages, simulation does not work for connectors configured to run in parallel.

The parameter page is passed to the child requestor so it can be used in the data mapping for the request. However, the parameter page is not passed back to the parent requestor so it cannot be used in the response.

In addition to being executed synchronously and in parallel the SOAP, REST, SAP and HTTP connectors can also be executed in queue mode. Select queueing in the Processing Options section on the connector record's Service tab to configure queuing.
When queueing is used, each request is queued then processed later in the background by an agent. The next time that the agent associated with the queue runs, it will attempt to execute the request. The queueing characteristics are defined in the connector's Request Processor.

In addition to specifying the intention and request processor in the Processing Options section within the connector rule, the execution mode needs to be set to Queue in the calling activity when invoking the connector. If Run is selected, the connector executes synchronously.
The queuing of the connection request is a database operation, which means that a Commit step is required after the Connect-* step in the activity, or in the parent activity that calls this activity.

The name of the queue, how many times the associated agent should attempt to execute the request if it fails, and whether or not the results should be stored in the queue are configured within the connector's Connect Request Processor.

The queue class defines the type of queue that is used. It is possible to use the standard class System-Queue-ExecutionRequest-Connect-Default, or to use a custom concrete class derived from System-Queue-ExecutionRequest-Connect-Default.
When more than one queue is specified, when rules are required to determine which queue to use. The conditions are tested top to bottom, so always leave the when field in the bottom row empty so that row serves as the default queue.
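The top-to-bottom row evaluation can be sketched in plain Java; the queue names and conditions here are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

// Sketch: how request processor rows choose a queue. Conditions are tested
// top to bottom; the last row has no condition and acts as the default.
class QueueRouting {
    static String selectQueue(Map<Predicate<Integer>, String> rows, int amount) {
        for (Map.Entry<Predicate<Integer>, String> row : rows.entrySet()) {
            if (row.getKey().test(amount)) {
                return row.getValue();
            }
        }
        throw new IllegalStateException("no default row configured");
    }

    public static void main(String[] args) {
        // LinkedHashMap preserves the top-to-bottom row order.
        Map<Predicate<Integer>, String> rows = new LinkedHashMap<>();
        rows.put(a -> a > 10000, "HighValueQueue"); // hypothetical when rule
        rows.put(a -> true, "DefaultQueue");        // empty when field: default
        System.out.println(selectQueue(rows, 20000)); // HighValueQueue
        System.out.println(selectQueue(rows, 50));    // DefaultQueue
    }
}
```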
The Dequeuing Options tab contains instructions for the agent on how to handle requests stored in the queues.

The maximum number of execution attempts specifies how many times the agent should attempt to process the request. Typically, we want to set this value to more than 1 so the agent can try again if the request fails the first time. We have the option to keep the item in queue after all execution attempts have failed or after successful execution.
There is a standard agent called ProcessConnectQueue in the Pega-IntSvcs ruleset. The ProcessConnectQueue agent is configured to process items in the default connector queue class.
For each custom queue that has been defined, an agent needs to be created and configured to process items from it. Specify the custom queue class in the class field. The agent must be set to Advanced mode. Specify the standard activity ProcessQueue to process items in the queue.
The queue item instances can be viewed in the SMA by selecting Agent Management > System Queue Management.

The queue item first gets the status "Scheduled". If the request executes successfully, the status of the queued item is set to "Success". If the request fails, the number of failed attempts is incremented and the request status is either set to "Scheduled" or, if the maximum number of attempts has been reached, "Broken-Process". If the status was set to "Scheduled," the request is re-queued for the agent.
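The status transitions after a failed attempt can be sketched as follows; this is a plain-Java illustration whose names mirror the statuses above, with maxAttempts corresponding to the maximum number of execution attempts in the request processor.

```java
// Sketch: queue item status after a failed execution attempt.
class QueueItemStatus {
    static String statusAfterFailure(int failedAttempts, int maxAttempts) {
        if (failedAttempts >= maxAttempts) {
            return "Broken-Process"; // no more retries
        }
        return "Scheduled"; // re-queued for the agent's next run
    }

    public static void main(String[] args) {
        int maxAttempts = 3;
        System.out.println(statusAfterFailure(1, maxAttempts)); // Scheduled
        System.out.println(statusAfterFailure(3, maxAttempts)); // Broken-Process
    }
}
```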
If the Connect Request Processor was not configured to keep items in the queue they are deleted after they are processed. For example, in our configuration items were not kept in the queue after successful execution, which means that there are no items in status Success.

Asynchronous services can be an alternative for long-running services where the response is not required immediately, or where the request can be reattempted if the service itself has to wait for some event to occur.
The following describes how requests are processed when a service is configured to process requests asynchronously.
An external application sends a request to a service named CreatePurchaseRequest. CreatePurchaseRequest uses the agent queue functionality to create a queue item for the request.
The name of the queue, how many times the associated agent should attempt to execute the request if it fails, and whether the results should be stored in the queue is defined in its service request processor.
The service queues the request with the status "Immediate" then spawns a batch requestor to run the request. The service returns the queue item ID to the calling application and ends the service call.
If the request is executed successfully, the service page is populated with the results and included in the queued item, and the status of the queued item is set to "Success".
If the request fails, the number of failed attempts is incremented and the request status is either set to "Scheduled" or, if maximum number of attempts has been reached, "Broken-Process". If the status is set to "Scheduled", the request is queued for the agent associated with the queue. The next time the agent runs, it attempts to run this service.
To retrieve a queued response the external application sends the queue ID to a service called GetPurchaseRequestResults as will be discussed.
To configure a service such as CreatePurchaseRequest to run asynchronously, we need to configure the Processing Options on the service rule's Service tab. There are two options for queued execution. One-way Operation places the request on the queue and then quits. This option should only be used if the service either does not return a response or the response can be ignored entirely. Execute Asynchronously is the second option, which we will continue to discuss.

For queued services a Service Request Processor needs to be specified.
Let's have a look at the Service Request Processor in more detail.
The queue class defines the queue used. It is possible to use the standard class System-Queue-ExecutionRequest-Service-Default, or to use a custom concrete class derived from System-Queue-ExecutionRequest-Service.
If we specify multiple queues, we must use a when condition to determine which queue to use. The conditions are tested top to bottom, so always leave the when field in the bottom row empty so that row serves as the default queue.
The Dequeuing Options tab contains instructions to the agent about how to handle requests stored in the queues.

The maximum number of execution attempts specifies how many times the agent should attempt to process the request. Typically, we want to set this value to more than 1 so the agent can try again if the request fails.
We have the option to keep the item in queue after all execution attempts have failed or after successful execution. This should be selected so the external application can call back and retrieve the results.
The service page is stored as part of the queued item. Therefore, the service activity should write the result data to the service page so that it is available to map the response.
The standard agent called ProcessServiceQueue in the Pega-IntSvcs ruleset is configured to process items in the default service queue class.
For each custom queue that has been defined an agent needs to be created and configured to process items from it. Specify the custom queue class in the class field. The agent must be set to Advanced mode. Specify the standard activity ProcessQueue to process items in the queue.
Configure the data mapping for the pxQueueItemID parameter on the Response tab for a SOAP, REST, SAP, HTTP, JMS, or MQ service rule as shown below or on the Parameters tab for an EJB or Java service.
Note: If the play button on the service rule form is used to test the service, it executes synchronously. The service only executes asynchronously for external requests.

In addition to the service performing the actual asynchronous processing, a service to get the results of the asynchronous processing is also needed. In this case we have created a SOAP service called GetPurchaseRequestResults for that purpose.
The page class is the same as for the first service. The standard service activity @baseclass.GetExecutionRequest is used to retrieve the service request data stored in the queue item for the request. This service must be configured to execute synchronously.
Configure the ItemID parameter on the Request tab for SOAP, JMS, MQ, and HTTP rules or the Parameter tab for EJB and Java service rules.
When the service creates its response, all data from the retrieved service request is on the clipboard and available to be mapped. If the external application needs the queue execution status (whether the request ran successfully) it is possible to configure a data mapping for the pxExecutionStatus parameter.

It is also possible to configure a synchronous service to queue requests that fail for additional attempts. For example, an external application sends a request to a service that updates a case. If the case is locked an error is returned.
In this case we can configure the service to queue the request for additional attempts based on a condition. The service returns the ID of the queued item to the calling application as part of an error message. Note that the service runs synchronously if it does not fail or if none of the conditions are true.
The calling application must be configured to respond appropriately to either case. Thus, if it receives the information it requested from the first service it can continue with its processing but if it receives an error with a queue ID, it must call back to retrieve the results.
This approach is useful if the error causing the service to fail is temporary and the response is not required immediately.
To configure this, set the Processing options to specify that it executes synchronously and the Service Request Processor to use.
Select the Queue When option on the Faults tab. Specify the when rule in the When Key field. If the when rule evaluates to true, the system returns an error with the specified data and the request is queued in the same way an asynchronous request would be according to the details in the specified Service Request Processor. If false, the next row in the list is evaluated. If all when condition rules return false, the normal response data is returned.
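The row-by-row Queue When evaluation can be sketched in plain Java; the when rule name and the returned strings here are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: evaluation of the Queue When rows on the Faults tab. Rows are
// tested top to bottom; the first when rule that evaluates to true queues
// the request and returns an error, otherwise the normal response is sent.
class QueueWhenDemo {
    // Each entry: when rule name -> its evaluated result for this request.
    static String handleRequest(LinkedHashMap<String, Boolean> evaluatedRows) {
        for (Map.Entry<String, Boolean> row : evaluatedRows.entrySet()) {
            if (row.getValue()) {
                // Request queued per the Service Request Processor;
                // the error carries the queue item ID back to the caller.
                return "Error: request queued (" + row.getKey() + ")";
            }
        }
        return "Normal response";
    }

    public static void main(String[] args) {
        LinkedHashMap<String, Boolean> rows = new LinkedHashMap<>();
        rows.put("CaseIsLocked", true); // hypothetical when rule result
        System.out.println(handleRequest(rows));
        rows.put("CaseIsLocked", false);
        System.out.println(handleRequest(rows));
    }
}
```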
When using SOAP, if a when condition evaluates to true, a SOAP fault is generated. The string in the Map From Key field is used as the Fault String and the value of the pxQueueItemID parameter is used as the Fault Detail.
The configuration is slightly different depending on the service type. Use Help to get the details for your specific service type.
The Integration Email landing page helps us get an overview of email accounts and listeners available on the system. Select DesignerStudio > Integration > Email to open the landing page. Let's review each of the tabs.
The Inbound tab displays a list of all email listeners on the system. Click on a row to view, add, or edit inbound email details.
The Outbound tab displays a list of all email server connections on the system. For outbound email the system looks for an email account with a name that matches the classgroup of the work object. It uses the standard email account named Default if no match is found.
The Email Accounts tab displays a list of all email accounts on the system.
The Email Listeners tab displays a list of all email listeners on the system.

In this section we will learn how we can use the Email Wizard (DesignerStudio > Integration > Email > Email Wizard) to configure our system to process incoming emails.
We can use the Email wizard either to configure an email account only, or to create an email service that lets us create and manage work as well as configure an email account.
If we only want to send email, then all we need to do is configure an email account. Instead, let's say we want to create an inbound email service that creates a purchase request for a purchasing application.
The create work checkbox needs to be checked if new work objects are to be created by the email service. It also needs to be checked if email-based processing, such as approvals or other actions on a work object, is to be performed using an email reply.
The organization is used for newly created work objects. Select the ruleset version in which you want to save the created rules.
Next we need to configure the Email listener. Select an existing email account or enter the name of a new email account to be created.
Specify the name for the Email listener and the folder the listener is going to monitor. If an existing email listener is specified, that instance is overwritten.
Specify the service package to use or enter a name of one to be created. Next specify the Service Class.
Select the operator ID and password that the service is to use while it runs. Select Disable Startup to deactivate the listener.
In the next screen we need to configure the service package.
The Processing Mode should be set to stateless if the services in this package can be executed by any requestor in a pool of requestors without regard to processing that the requestor performed earlier; otherwise, select stateful.
Specify the access group for the service package. If Requires Authentication is selected, the service expects an Operator ID and password in the arriving request.
Next we need to configure the email account. The email account details are pre-populated if we selected an existing email account.
In the last screen we can review the inbound email configuration. Click Next to complete the Email Wizard.
The summary below shows the created records. Let's have a look at the records created.
The wizard created an email service rule named CreatepyStartCase in the work type class. The Primary Page is set to the work type class and is named ServicePage. The Service activity is the standard activity called pyCreateAndManageWorkFromEmail, with the starting flow and organization parameters as specified in the wizard.
The Request tab defines the Message data mapping. The Email wizard maps the Message Header data to the page called pyInboundEmail on the work item. The Delivery Status Notification data is not mapped; we'll have a look at that later. The Message Data is mapped to the pyBody property on the pyInboundEmail page. It is possible to use parse rules to parse the email body.

The Response tab defines the response that is sent back when the service processing is complete. If the email was successfully processed, a "Thank you" email is sent to the sender. If an error occurs during processing, an email describing the issue is sent to the sender.
The email account record holds the data required to access the email account and contains the data entered in the wizard.
The email listener data instance contains information needed to route inbound email messages to an email service rule. It identifies the listener, the email account name, the name of the mail folder to monitor, the message format of the incoming messages, which email service rule to route messages to, and so on.
Note that the listener needs to be manually started after it has been created.

We just saw that the wizard created an email service with the standard activity Work-.pyCreateAndManageWorkFromEmail. The Work-.pyCreateAndManageWorkFromEmail activity both creates and manages work from the incoming emails. An email related to an existing case contains an identifier linking it to the case. We will look at this in detail in the next section. We can assume a case needs to be created if an identifier is not present.
When an email without an identifier is received a case of the primary page class is created by the pyCreateAndManageWorkFromEmail activity using the starter flow and organization specified as parameters. The email data is mapped to the page called pyInboundEmail on the work item as defined on the Request tab.
Let's try it out. Before testing the service with an email it is a good idea to check if everything looks good using the Run option in the Actions menu.
If the test was successful we can go ahead and test it with an email. We can provide a subject and a body and even an attachment.
A response was returned with the work object ID (PR-17) of the newly created case. The confirmation response is defined in the HTML rule Work-.EmailResponse and can easily be customized.
Let's have a look at the case created on the clipboard. We can see the email data is available in the pyInboundEmail page. Email attachments are added to the attachments list for the case.
If the email is not processed as expected, try monitoring the inbox to make sure email messages arrive and are deleted. Make sure that the Keep Messages on Server option is cleared so the messages get deleted. If messages are not deleted, the email account might not be configured correctly or the listener might not be running. Check the log for any errors related to the email service. Use Trace Open Rule on the service rule to trace an incoming email and see how it is processed.

Emails can be used for work processing in situations where a decision is required, such as a manager review. We can configure an email to be sent to the manager, allowing her to either approve or reject the request directly from the email rather than having to log in to the application.
This functionality requires an email service as configured by the Email wizard. For example, using the standard service activity Work-.pyCreateAndManageWorkFromEmail with request parameters mapped accordingly. If you are using a custom service activity in your email service rule, make sure that this activity is called and the request parameters mapped.
Email work processing requires that the ProcessEmailRequest agent in the Pega-ProcessEngine ruleset is enabled; by default it is disabled.
We need to configure the parameters on the Notification section on the Review assignment to enable an email to be sent and automatically processed when returned.
The following notification options are available.


Send an email to the specified party.
Send a single email message to each party in the case.
Send an email to the assignee. If the case is assigned to a workbasket, an email is sent to the first operator listed in the contacts list.
Send an email to the assignee. If the case is assigned to a workbasket, an email is sent to all operators listed in the contacts list.


In each case the email subject and correspondence name need to be specified. Notify, NotifyAll, and NotifyParty allow us to send emails to parties rather than to named operators on the system.
The standard correspondence Work-.pyEmailApprovalActions is typically used as a template when creating a custom correspondence rule containing sufficient information for the manager to make a decision. The pzEmailActions tag causes the flow actions to appear in the outgoing email and must be available in any custom correspondence rule used. Here we just use the standard rule for demonstration purposes.
The outgoing email is attached to the case. The email contains two links, one for each flow action. The manager approves or rejects by clicking one of the flow action links in the email and sends the reply.
If the case is locked when the approve email is received, the sender is notified with an email telling them to retry at a later time.
It is the identifier in the email subject that links the email response to the appropriate case and flow action. Depending on the Email Listener settings, it might take a while before the approval email is processed. The incoming approval email is also attached to the case.
Support Email Conversations
When an email is sent from a Pega 7 application and the recipient replies, that email reply message can be attached to the original case.
Pega 7 creates a unique ID in the Message-ID field in the email header of outbound emails. This ID is used to route any reply messages to the appropriate case.
There is a standard Service Email rule called CreateEmailFlow that attaches a response message to the appropriate case. It uses the standard ProcessInboundEmail service activity. We need to configure an email listener to use CreateEmailFlow.
If we want to create and manage work as well as support email conversations in parallel, separate listeners are required; alternatively, we need to combine the Work-.pyCreateAndManageWorkFromEmail and Work-.ProcessInboundEmail activities in one service.
In addition to attaching the email to the case, the email is also forwarded to the party who sent the initial email.

Email messages sent from an application can bounce for many reasons. The recipient's email address might have changed or been entered incorrectly, or the recipient's mailbox might be full. In such cases the outbound email triggers Delivery Status Notification (DSN) messages.
Additionally, outbound email messages can trigger Auto-Reply responses from recipients who are, for example, travelling or on vacation. If the email listener finds the string "AutoReply" anywhere in the message, it sets the DSN status code to 2.2.0 and maps the Auto-Reply text as the message body.
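The auto-reply check described above amounts to a simple substring test. The sketch below is an illustration only; the class and method names are assumptions, not part of the actual listener implementation.

```java
// Illustrative sketch of the auto-reply detection described above:
// if the string "AutoReply" appears anywhere in the message, the
// listener treats it as an auto-reply and uses DSN status code 2.2.0.
public class AutoReplyCheck {
    // Returns the DSN status code to apply, or an empty string if the
    // message is not an auto-reply (hypothetical helper, for illustration).
    public static String classify(String rawMessage) {
        return rawMessage.contains("AutoReply") ? "2.2.0" : "";
    }

    public static void main(String[] args) {
        System.out.println(classify("AutoReply: I am out of the office"));
        System.out.println(classify("Re: your purchase request"));
    }
}
```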
DSN messages are ignored unless Process Delivery Notifications is selected on the Process tab of the email listener. If choosing to process DSN messages, it is a good idea to define and implement a business process that performs error handling for them. For example, we might want to configure the email service to handle emails that were addressed incorrectly differently from those that triggered an AutoReply message.
When an email listener is enabled to process DSN messages, the DSN data is available and can be mapped in the Service Email rule. The standard page Work-.pyInboundEmail contains properties that can be used to map the DSN information.
Pega 7 puts the case ID, correspondence ID, and subject into the message's Thread-Topic message header. If the message triggers a DSN, the Thread-Topic value is still intact. It is possible to map the values from the Thread-Topic header using a utility named parseThreadTopicHeader.
The parseThreadTopicHeader utility processes the string in the Thread-Topic header and maps the values of the Thread-Topic header to the following properties:
.pyInboundEmail.pyThreadTopicWorkID
.pyInboundEmail.pyThreadTopicAttachID
.pyInboundEmail.pyThreadTopicSubject
The information in the DSN and Thread-Topic fields can be used to create a business process for investigating bounced messages. For example, if the mailbox is full, the case can be routed back to determine an alternative way to contact the recipient.
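The exact string layout of the Thread-Topic header is not reproduced here, so the sketch below assumes a simple delimited form ("workID;attachID;subject") purely for illustration; the real parseThreadTopicHeader utility may use a different format. It shows the kind of mapping the utility performs onto the three pyThreadTopic* properties.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: split an assumed "workID;attachID;subject"
// Thread-Topic string into the three pyThreadTopic* property values.
// The delimiter and field order are assumptions for illustration only.
public class ThreadTopicParser {
    public static Map<String, String> parse(String threadTopic) {
        Map<String, String> result = new HashMap<>();
        // Limit the split to 3 parts so the subject may itself contain ';'
        String[] parts = threadTopic.split(";", 3);
        result.put("pyThreadTopicWorkID",   parts.length > 0 ? parts[0].trim() : "");
        result.put("pyThreadTopicAttachID", parts.length > 1 ? parts[1].trim() : "");
        result.put("pyThreadTopicSubject",  parts.length > 2 ? parts[2].trim() : "");
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> m = parse("PR-21; ATTACH-7; Your purchase request");
        System.out.println(m.get("pyThreadTopicWorkID")); // PR-21
    }
}
```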
If your email service creates new cases, we recommend either disabling DSN handling or implementing a business process that handles DSN messages, since DSN messages might otherwise cause looping, resulting in an endless stream of new cases being created.
The REST Connector wizard simplifies the process of integrating with REST services. The wizard walks us through the process of gathering the details of the service. The wizard then creates the records needed to interact with the external REST service.
From the Designer Studio menu select Integration > Connectors > Create REST Integration to start the wizard. We are prompted to enter a URL for the REST service. If we have a URL used previously to obtain a response from this service we can paste it in this field. In this particular example we want to integrate with a service that returns airport status and delay information in a JSON format.
The wizard analyses the URL and suggests the elements that may represent parameters. Each resource path and query string element in the URL is listed individually.
Resource path elements are assumed to be static by default. For resource path elements that are not static, where the value is treated as part of the request by the remote service, select Is Parameter as shown below. The system generates a property as part of the request data model and at run time substitutes that property's value for that part of the URL. The Endpoint URL at the top encloses the name of each parameter set at run time in parentheses. In this case we specified AirportCode as a parameter.
Query string parameters are always considered part of the request. A property is created for each query string parameter.
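Conceptually, the run-time substitution works like a simple template replacement on the endpoint URL, with each parameter name enclosed in parentheses. The sketch below is illustrative only; the endpoint URL and helper method are assumptions, not wizard-generated code.

```java
// Illustrative sketch of substituting a run-time parameter value into
// an endpoint URL template of the form .../(ParamName)/... as produced
// by the REST Connector wizard. URL and names here are assumptions.
public class EndpointTemplate {
    public static String substitute(String template, String param, String value) {
        // Replace the "(ParamName)" placeholder with the property's value
        return template.replace("(" + param + ")", value);
    }

    public static void main(String[] args) {
        String url = substitute(
            "http://services.example.com/airport/(AirportCode)/status",
            "AirportCode", "BOS");
        System.out.println(url);
    }
}
```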
If the service requires authentication, click the Edit Authentication link (below) to configure it. It is possible to configure a single authentication profile for the service or different profiles for each selected method.
Typically GET or POST is used for obtaining data from a REST service, but PUT and DELETE are supported as well.
Use the Configure button to adjust the query string parameters recognized by each method. If we do not adjust the query string, all methods use the same parameters by default.
If we have a file that contains a typical response for the GET method, or, in the case of POST and PUT, a typical request, we can upload that file here. This is used to generate the request/response data classes and properties as well as the XML parse and stream rules to map the data. We are expecting a JSON response rather than XML so it is not applicable for us.
The Test button allows us to verify that the service is accessible and returns the expected response. Provide any parameters necessary and adjust the authentication details if required.
The response message can be viewed in either Tree or Raw format for JSON and XML. If we are testing the POST or PUT method we can use the tree view to configure the body of the request.
In step 4 we need to provide the integration class, connector name, and ruleset.
We also have the option to create a data layer. Selecting this creates a data class and data page.
Clicking Create generates the integration.
We can use the Undo Generation button to remove the records that were generated. Select Designer Studio > Application > Tools > All Wizards to see a list of all wizards. Use this page to complete a wizard in progress or undo the generation of a completed wizard.
Let's have a look at the generated records.
Classes and properties are created to hold the request and response parameters.
Below the base class, MyCo-Int-Airport, is the AirportStatus class. This class holds the request and response properties for the method in the service, the connector, and, if mapping is required, the mapping rules. We can see that the request and response are represented as classes. The query string is also a separate class.
Let's have a look at the Connect REST rule. The Service tab was populated with the information entered in the wizard.
On the Methods tab only GET is configured. The request query string parameters are mapped to a property.
The same is true for the response message data.
Configure a REST Service
In this example we want to create a REST service that returns the details of a purchase request, with a given ID, in XML format. Similar to other service types the following records need to be configured:
REST Service
Service Activity
XML Stream for the response
Service Package
Let's start by having a look at the REST service record. The primary page class is the purchase request case type. Here we have defined the ID input parameter as a resource path parameter.
The Methods tab allows us to specify the service activity and request/response mapping for each method type. This service uses the GET method only. We haven't specified any header or query string parameters. Alternatively, we could have specified the purchase request ID as a query string parameter as opposed to a resource parameter.
The service activity loads the purchase request data into the service page.
The response is mapped using an XML stream rule.
The XML stream rule assembles the purchase request data for the message.
Finally, let's have a look at the Service Package. We can see that our GetPurchaseRequest REST service is listed.
Test the service using the Run option in the Actions menu to verify that it works as expected.
The REST service is now available to be called by external systems. It is accessible under the following URL:
http://servername:portnumber/contextroot/PRRestService/packageName/className/methodName/resourcePath
The package, class, and method names are the three-part key of the service rule. The resource path is as specified in the service rule.
If the service package is configured to require authentication, the request must include the username and password of an Operator ID. The external system can send these either in the header, or appended to the URL query string as name/value pairs for the parameters UserIdentifier and Password. The password must be base64-encoded.
Here is an example of a URL for our REST service:
http://Pega7:8080/prweb/PRRestService/ADVPurchasingWorkPurchaseRequest/PurchaseRequest/GetPurchaseRequest/PR-21?UserIdentifier=Service@ADV&Password=cnVsZXM=
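The base64 step can be reproduced with standard Java. The sketch below assembles the example URL above; the plain-text password "rules" (which encodes to the cnVsZXM= value shown in the URL) and the client-side helper are illustrative assumptions.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of assembling the REST service URL with query-string credentials.
// The host, context root, and three-part service key come from the example
// above; the plain-text password "rules" is an assumption that encodes to
// the cnVsZXM= value seen in the example URL.
public class RestServiceUrl {
    public static String encodePassword(String password) {
        return Base64.getEncoder()
                     .encodeToString(password.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        String base = "http://Pega7:8080/prweb/PRRestService";
        String key  = "/ADVPurchasingWorkPurchaseRequest/PurchaseRequest/GetPurchaseRequest";
        String url  = base + key + "/PR-21"
                    + "?UserIdentifier=Service@ADV"
                    + "&Password=" + encodePassword("rules");
        System.out.println(url);
    }
}
```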
To participate in two-phase commits, Pega 7 needs to be deployed as an enterprise application using an EAR file. In addition to deploying Pega 7 as an EAR, we also need to make sure that the application server uses a JDBC driver with XA (Extended Architecture) support. XA support allows database drivers to handle transactions against different data stores, such as databases, application servers, and message queues, in an atomic fashion. XA uses a two-phase commit to ensure that all resources either commit or roll back the transaction consistently.
Let's look at a couple of examples where distributed transactions make sense:
Data is saved in more than one database. For example, a case created in Pega 7 saves data to an external database (not part of the Pega 7 schema) while case history and assignment information is saved to the Pega 7 database. If the update to the external database fails, then the history and assignment records must be rolled back.
Pega 7 uses multiple resources, JMS and database, in a single transaction. In the example above, Pega 7 saves the history and assignment information in the Pega database and sends the request to update the external system using a JMS queue.
Even when Pega 7 is deployed as an EAR application, there are a few situations in which interactions cannot use a two-phase commit. We cannot use a two-phase commit when:
1. Database-write operations use SQL queries instead of Obj-Methods.
2. A connector uses a protocol, such as SOAP or HTTP, which does not provide transactional support.
3. Multiple connector calls execute in parallel. Multiple parallel connectors cannot participate in a two-phase commit since running in parallel creates child requestors.

There are two types of transactions based on which resource is responsible for starting and committing the transaction:
Bean-Managed Transaction (BMT)
Container-Managed Transaction (CMT)
For BMT, Pega 7 directly uses the JTA (Java Transaction API) to begin the transaction and commit or roll back changes. Pega 7 is responsible for the transactions; this is the default state of a Pega 7 system when deployed as an enterprise application (EAR).
For CMT, the application server, also called the container, is responsible for the transaction and Pega 7 participates as one resource.
Which scenarios use BMT and which ones use CMT? When Pega 7 is accessed by users through a web browser, it must use BMT. Pega 7 service requests can use either BMT or CMT.
Transaction Boundaries
The boundaries of a BMT are different from those of a CMT. To illustrate the boundaries of a BMT in Pega 7, let's consider an assignment which requires a database write to two different databases, called DB1 and DB2.
A transaction starts when the commit for the assignment is triggered, meaning when Pega 7 starts writing the deferred write operations to the database. A transaction ends when the commit or rollback succeeds. Both write operations must succeed for either to succeed; if one fails the other also fails. One thing to note in this BMT example is that the database / JTA transactions occur within the scope of the assignment. A subsequent assignment or any other subsequent business operation can then rely on the results of the database write operations.
Now let's explore what happens in the case of CMT. Let's look at another example where an EJB service rule is configured to participate in CMT, with Pega 7 deployed and configured accordingly.

The service processing requires two write operations to two different databases, as in the previous example. In this case, when the EJB service rule has finished marshaling its response, the write operations are submitted to the database resource manager. The operations are committed if all parts of the distributed transaction succeed. If Pega 7 throws a runtime exception or any other participant in the distributed transaction has a failure, all operations are rolled back. The transaction starts and ends outside of Pega 7. This means that no business operation that occurs after the EJB response is sent can rely on the results of the database write operations from the same transaction.
If using CMT, a Pega 7 application must be designed accordingly. In the above example, if the service were to create a new case, the case remains transient until the container commits it. Any subsequent actions that need to be performed on the case must be performed in a separate transaction once the case is actually created. Data operations such as the Commit method and the WriteNow parameter of the Obj-Save and Obj-Delete methods are simply ignored.

Pega 7 services using EJB, JAVA, or JMS protocols can participate in both BMT and CMT. Pega 7 services use BMT by default and we need to make additional configuration changes to use CMT.
JAVA Services:
Service Java rules are packaged as JAR files using a service package data instance. This JAR file is then added to the classpath of the external system so it can access the Pega 7 Java service.
The external application that uses this jar file can manage the transaction in which the JAVA services participate. To do this, update the file in the JAR file and add environment entries to it for each of the service methods.
For example, to set the transaction attribute for a service method named createPurchaseRequest to required, the environment entry will look like this:
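The exact entry is not reproduced in the text; as a rough sketch, such an environment entry might look like the following, using the standard ejb-jar.xml env-entry elements. Treat the entry type and value shown here as assumptions; only the method name createPurchaseRequest comes from the example.

```xml
<!-- Hypothetical sketch of an environment entry setting the transaction
     attribute for the createPurchaseRequest service method to Required.
     Element structure follows the standard ejb-jar.xml env-entry format;
     the type and value are assumptions. -->
<env-entry>
    <env-entry-name>createPurchaseRequest</env-entry-name>
    <env-entry-type>java.lang.String</env-entry-type>
    <env-entry-value>Required</env-entry-value>
</env-entry>
```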
The transaction attribute specified for the service method tells the PRService EJB which of its invokeService methods to use when routing a request to the Java service rule. In this case, the PRService EJB uses its invokeServiceTxnReq method to invoke the createPurchaseRequest service rule.
EJB Services:
Similar to Service Java rules, a Service EJB is also deployed as a JAR file, which serves as a proxy JAR for accessing the service rules from the external system. The external system accesses the service rules as though they were business methods of a session bean (EJB). We need to modify the generated ejb-jar.xml file so that the container in which the proxy is deployed manages the transactions of the EJB services. In the ejb-jar.xml file we need to configure transaction attributes appropriately for the methods (service rules) that the proxy represents.

1. First, we must change the transaction type from "Bean" to "Container" in the enterprise-beans section.
2. Then we add an environment entry for each of the service methods; in this case we added one for StartPurchaseRequest. The value of env-entry-value tells the PRService EJB which of the invokeService methods to use when routing a request to that EJB service rule.
3. Lastly, we add the assembly-descriptor section with the container-transaction entry for each of the service methods. The value specified for the env-entry-name and the method-name value in the assembly-descriptor section must exactly match the service method name which is the third key of the service rule.
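Putting the three steps together, a hedged sketch of the relevant ejb-jar.xml fragments might look like this. Only StartPurchaseRequest comes from the example; the ejb-name and transaction attribute value are placeholders following the standard ejb-jar.xml deployment-descriptor format.

```xml
<!-- Hedged sketch of the three edits described above. -->
<enterprise-beans>
  <session>
    <ejb-name>PRServiceProxy</ejb-name>             <!-- assumed proxy bean name -->
    <transaction-type>Container</transaction-type>  <!-- step 1: Bean changed to Container -->
    <env-entry>                                     <!-- step 2: one entry per service method -->
      <env-entry-name>StartPurchaseRequest</env-entry-name>
      <env-entry-type>java.lang.String</env-entry-type>
      <env-entry-value>Required</env-entry-value>
    </env-entry>
  </session>
</enterprise-beans>
<assembly-descriptor>                               <!-- step 3: container-transaction entry -->
  <container-transaction>
    <method>
      <ejb-name>PRServiceProxy</ejb-name>
      <method-name>StartPurchaseRequest</method-name>  <!-- must match the service method name -->
    </method>
    <trans-attribute>Required</trans-attribute>
  </container-transaction>
</assembly-descriptor>
```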

When Pega 7 is deployed as an enterprise application, JMS MDB-based listeners are available to be deployed as application components and can be managed by the application server. The MDB listener routes messages to the JMS service rule.
We need to create a new JMS MDB listener data instance in the Integration-Resources category. This instance determines whether messages are redelivered if something goes wrong while they are being delivered or processed. To enable redelivery, select the container-managed transaction field on the Listener Properties tab.
When the container-managed transaction field is selected, the MDB starts a transaction when it gets a message from the queue and the service processing participates in that transaction. If there are problems either with the message delivery or with the service processing, the message is delivered again so the service can attempt to process it again.
When the container-managed transaction field is cleared, message delivery and service processing occur outside of a transaction. If the service processing fails, the message is not redelivered.

Pega 7 connectors can participate in a distributed transaction. However, when the connector is accessed in a user session (the most common use case), it can only be managed by BMT. CMT can only be applied when connectors are launched by Pega 7 services such as Service EJB or Service JMS.
When a connector runs in a user's requestor context, transactions are bean-managed. In this case, Pega 7 starts the transaction when the commit is triggered and ends it when the commit or rollback succeeds. Therefore, when the connector is invoked, there is no transaction open for it to participate in. Any processing and database writes that occur within the external system occur separately from processing in Pega 7 and therefore cannot transactionally depend on a commit operation from the application.
On the other hand, if the service processing for an EJB, Java, or JMS service invokes a connector, the connector runs in the service's requestor context. Therefore, if the service participates in a container-managed transaction, the connector also participates in that same container-managed transaction.
When using a JMS connector in a CMT context, remember that the JMS message is not actually delivered to the JMS service provider until the container managing the transaction issues the commit; that is, the message is not delivered until the container-managed transaction completes. Therefore, a JMS connector waiting for a response while running in a container-managed transaction would hang the session at runtime until Pega 7 times out and ends the server interaction.
In contrast to traditional Java applications, PRPC-based applications are not themselves deployed as enterprise archives (EAR) or web archives (WAR). While PRPC requires deploying an EAR or WAR, that archive has minimal functionality and does not contain any business-specific logic or features.
An application developed on a PRPC platform consists of a set of process flows, UI components, decisions, declaratives, service levels, activities, etc. - collectively known as "rules". At design time, these rules are created using rule forms and are stored in the PegaRULES database. When a rule (flow, UI, service, etc.) is first called at runtime, the PRPC engine executes a sophisticated 'rule resolution' algorithm to find the most appropriate rule version based on rule name, class inheritance, ruleset, application version and rule circumstance and then loads the appropriate rule definition.
Note: Refer to the glossary for additional information about rule circumstance.
Many of the PRPC rule types are executed at run time as Java classes and methods. PRPC "assembles" executable Java as needed by generating and compiling Java, then storing the class files in the database table named pr_assembledclasses. When the same rule is accessed again, it is read from this database table until the rule changes.
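The assemble-once behavior can be illustrated with a small cache sketch. This is a conceptual illustration only, not PRPC's actual implementation: the first access "assembles" a rule, later accesses reuse the cached artifact, and a rule change evicts the cached entry so it is reassembled.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch (not PRPC's actual implementation) of the
// assemble-once pattern: assemble on first access, cache the result,
// and reassemble only after the rule changes.
public class RuleAssemblyCache {
    private final Map<String, String> assembled = new HashMap<>();
    private int assembleCount = 0;

    public String fetch(String ruleKey) {
        // computeIfAbsent "assembles" only on a cache miss
        return assembled.computeIfAbsent(ruleKey, k -> {
            assembleCount++;
            return "class-for-" + k; // stands in for generated/compiled Java
        });
    }

    public void invalidate(String ruleKey) {
        // A rule change evicts the cached class so it is reassembled
        assembled.remove(ruleKey);
    }

    public int getAssembleCount() { return assembleCount; }

    public static void main(String[] args) {
        RuleAssemblyCache cache = new RuleAssemblyCache();
        cache.fetch("MyFlow");
        cache.fetch("MyFlow"); // served from cache, no reassembly
        System.out.println(cache.getAssembleCount()); // 1
    }
}
```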
The run-time PRPC environment includes 'engine' and 'application' classes. The core PRPC engine is implemented through Java classes. Starting with PRPC v6.1, most of the PRPC 'engine' is actually loaded from the database table named pr_engineclasses. Storing the engine in the database provides the ability to apply engine maintenance without redeploying any files on the application servers.
PRPC utilizes custom Java class loaders. The custom loaders enable PRPC-based applications to be changed in a live system without an outage or the need to redeploy. PRPC applications are typically hosted inside an application server and accessed by human users via browsers, or by partner applications as a service. PRPC facilities can also be directly embedded in Java applications (JSR-94 API). Selected PRPC features and facilities may run from command-line Java scripts that are distributed with PRPC.
Applications developed on the PRPC platform require two main systems for their operation:
1. A Database Server containing the rules
2. An Application Server that contains the PRPC rule engine
To setup a PRPC system, the administrator needs access to these two systems. The database server is used to store the following objects:
1. Transactional data, such as case instances (often referred to as "work objects" or "work items") that are created during runtime. Case history, case attachments, and work assignments are examples of work objects created during runtime.
2. Rules that comprise applications that the PRPC engine uses to generate the application code at runtime. It also includes the rules from the Pega rulesets that make up the PegaRULES base application.
3. Reference data instances required for processing a case.
4. The core engine Java classes.
5. The assembled application Java classes.

PRPC generates the database schema during installation, which we discuss in detail in the Installing PRPC lesson. PRPC can generate database schemas specific to different vendors, namely Oracle, DB2, SQL Server, Postgres, and so on. This schema can be applied directly to the database during installation, or given to the DBA to create the database after the installation is complete. In Pega 7, the PRPC database supports a split schema (two separate schemas that store rules and data instances, respectively). We will learn more about database tables and their structure in the Database Architecture lesson group.
In addition to creating a new database, the administrator must also deploy three PRPC application archives as Java Applications in the application server. One of these archives is the actual PRPC engine which is bundled as an EAR (Enterprise ARchive) or as a WAR (Web ARchive).
The choice of implementing PRPC as an EAR or a WAR is usually made by the enterprise architecture design team and, in some cases, by the project sponsor. PRPC can be deployed as an EAR on IBM WebSphere, JBoss, Oracle WebLogic, and so on. When using Apache Tomcat, PRPC can be deployed only as a WAR.
Enterprise-tier deployments support Java capabilities such as JMS message services, two-phase commits, and JAAS and JEE security. PRPC supports heterogeneous deployment using both WAR and EAR engine deployments within the same environment if there is a need for platform-specific services on particular nodes. In theory one could use a different model for each environment (WAR/Tomcat for Dev and QA, WebSphere/EAR for production), but this is not recommended.
In general, EAR should be used when you have cross-system transactional integrity (two phase commits), JEE security requirements or enterprise standards for applications to be distributed as enterprise beans. WAR can be used in the absence of such requirements and when simpler configuration and operations are desired.

PRPC ships with three main application archives that have to be deployed in the application server (if using EAR deployment) or in the servlet container (if using WAR deployment).
1. The Pega engine is shipped as prweb.war or prpc_j2ee.ear (note: PRPC ships several vendor-specific EARs). This archive contains the classes that bootstrap PRPC and start loading classes from the database. Refer to the install kit and the installation guide to find the file that needs to be deployed; the installation guide provides specific instructions on how the EAR or WAR is deployed in the application server. Deploying this application archive enables users to access PRPC using the URL http://<servername>:<portnumber>/prweb/PRServlet .
2. System Management, the system management tool that we learned about in the previous lesson group, must also be deployed. The System Management Application (SMA) can be configured to monitor one or more PRPC nodes. SMA helps administrators and system architects monitor and control agents, requestors, listeners, and other processing. Deploying this application archive enables administrators to access the system management tool using the URL http://<servername>:<portnumber>/prsysmgmt .
3. PRPC Online Help provides excellent contextual or inline help on all rule forms and most of the dialogs. To access this help, we need to deploy the prhelp.war. Help is shipped only as a WAR file and can also be accessed using the URL http://<servername>:<portnumber>/prhelp .
The actual steps to deploy an EAR or WAR vary by application server and are typically done using the administrative tool provided by each application server. In the case of Tomcat, we use the Tomcat Web Application Manager to deploy these archives; this is how it looks after the archives are deployed successfully.

SMA and Help are usually deployed on a single node in a multi-node clustered PRPC environment. A single SMA can be used to monitor multiple PRPC nodes. SMA and Help are accessible from the browser using their URLs or can be launched from the Designer Studio. The URLs for SMA and Help can be defined using dynamic system setting records in the PRPC database. We will learn about this specific dynamic system setting in the next part of this lesson. As part of PRPC deployment it is necessary to set the SMA and Help URLs using the System Settings landing page, which is accessed in the Designer Studio by selecting System > Settings > URLs.

To access the inline help on all the rules form and to access the help menu, we must set the help URL in the settings page.

We must set the SMA URL in the settings page if we want to access the System Management application from the Designer Studio.

In addition to deploying the PRPC application archives, we must also configure the HTTP port and a JNDI data source for the PegaRULES database in the application server. (This differs for application servers, so refer to the administrator guide of the application server for these details.) For WebSphere, WebLogic, and JBoss this can be done using the administrative console. In Tomcat, we will need to edit configuration files, including server.xml and context.xml, located in the /conf directory. Please refer to the PRPC Installation guide specific to your target environment for details.
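For the Tomcat case, a minimal context.xml data source entry might look like the following sketch. The resource name, driver, URL, and credentials here are placeholders, not values from this document; consult the PRPC Installation guide for the names and settings your environment actually requires.

```xml
<!-- Hedged sketch of a Tomcat context.xml JNDI data source for the
     PegaRULES database. All values below are placeholder assumptions. -->
<Resource name="jdbc/PegaRULES"
          auth="Container"
          type="javax.sql.DataSource"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://dbhost:5432/pega"
          username="pegauser"
          password="secret"
          maxTotal="100"/>
```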

This diagram shows the PRPC components and how they fit in the JEE environment when deployed as an enterprise application. The application server shown here is not really a PRPC component, but the environment or platform on which PRPC is deployed; it is a third-party application. As of now, PRPC is compatible with the following application servers:
Apache Tomcat, which is a free web container.
Oracle WebLogic, IBM WebSphere, and Red Hat JBoss, all three being enterprise application servers.
Within the application server, one of the key components is the "web container", which hosts the two main PRPC archives, "prweb" and "prsysmgmt". The prsysmgmt WAR uses Java Management eXtension (JMX) technology to access PRPC facilities, represented as MBeans, or Managed Beans, and hosted inside the entity called the MBean Server.
Another key component within the application server is the "EJB container", which provides common functions such as transaction support through the EJB engine, and the connection to other enterprise information systems through the resource adapter. The EJB engine is hosted by the PRPC prbeans.jar, which also provides all EJB support, including the eTier engine EJB. The PRPC resource adapter is implemented through the pradapter RAR (Resource Adapter aRchive). Engine EJBs are stateless EJBs providing an interface between the PRServlet in the web container and the Rules Engine container in the EJB container. The Rules Engine container is a single container that does all processing and contains static content and data (stored as clipboard pages). Processing in the Rules Engine container can use bean-managed or container-managed transactions.
The database EJB handles secondary transactions such as locking and caching. This EJB uses bean-managed transactions, since direct access to the database is required for each transaction.
The diagram also represents other JEE frameworks. Let's review them one at a time.
The first framework is JAAS or Java Authentication and Authorization Service. It is a java security
framework that provides, for example, a representation of user identity, a login service and a service
that tests whether a user was granted a specific permission.
Another framework is JDBC, or Java Database Connectivity. PRPC uses JDBC for all database
interactions, including persistence. JDBC enables Java-based applications to execute SQL statements.
Next is the JTA or Java Transaction API. It allows JEE components to participate in distributed
transactions within the confines of a Transaction Manager. With the EAR deployment, JTA allows
the support of two-phase commits as long as the database driver is an XA-type driver.
The JMS server plays an important role in transparently handling many aspects of message pulling
or pushing through a message queue or a message topic area. When deployed as a WAR file,
PRPC only supports a JMS Listener whereas when deployed as an EAR file, it is possible to
implement a JMS MDB Listener.
PRPC leverages other JEE frameworks which are not represented in this diagram but are equally
used in both deployment models.
JNDI and JavaMail:
JNDI stands for Java Naming and Directory Interface. In an application server, a JNDI namespace is
structured like a tree: you add nodes and refer to the nodes rather than to what the nodes point
to. This allows named Java objects of any type to be stored and accessed. We will encounter the use
of JNDI in PRPC when creating Data-Admin-DB-Name instances for storing the connection details of a
database, or when specifying connection details in integration rules.
JavaMail is another JEE API. It supports POP and IMAP protocols only and, as a result, PRPC
supports only those two protocols. JavaMail is leveraged during Correspondence rules execution.

While PRPC applications usually are entirely self-contained and maintained inside the PRPC database, PRPC has three configuration files local to each JVM configuration that often require some degree of modification or customization by application server administrators. These files are:
1. prconfig.xml - the PRPC server configuration file. Administrators modify this file to 'classify' a node (web, batch, agent, or listener), tune cache sizes, set alert thresholds, or change other behaviors specific to a single JVM (node).
2. prlogging.xml - the logging configuration file. Administrators modify this file to implement log rolling, add custom log files, configure settings for integrating with the PRPC 'autonomic event services' and 'diagnostic cloud' monitoring systems, and to configure log filtering if required. Again, editing this file directly should be done only when the behavior is specific to a single node.
3. prmbeans.properties - the 'mbean properties' file controls the facilities available to the 'management bean' interface used by the Pega-provided system management application or any custom JMX scripting. By default, there are restrictions on certain operations, like viewing user session business data on the clipboard. It is quite common to edit the mbean properties to lift such restrictions in the pre-production environment to provide better centralized debugging capabilities. Editing prmbeans was covered in detail in the system management application lesson.
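As an illustration, node-specific settings in prconfig.xml are expressed as env entries. The sketch below is hedged: the values are examples only, and the exact setting names should be verified against your PRPC version's configuration reference.

```xml
<pegarules>
  <!-- Example only: raise the browser interaction alert threshold on this node -->
  <env name="alerts/browser/interactionTimeThreshold/warnMS" value="3000"/>
  <!-- Example only: tune the rule instance cache size on this node -->
  <env name="cache/instancecountlimit" value="3000"/>
</pegarules>
```

Note that settings such as these can also be made system-wide as Dynamic System Settings, as described later in this lesson, which is preferable unless the behavior really is specific to one node.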
Now, let's see how to access these files to make modifications.
The files for a WAR deployment in a Tomcat server are located in the <tomcat-install-dir>/webapps/prweb/WEB-INF/classes directory.
The files for the EAR deployment are located in <InstallDir>/APP-INF/lib/prresources.jar.
To access the file we need to extract prresources.jar. After editing the files, we need to package them into prresources.jar and then package prresources into the EAR file. After packaging the EAR, we need to redeploy it.
PRPC allows administrators to override default configuration file names and locations without having to package or redeploy PRPC. Follow one of these options to change the configuration and logging options.
1. Define JVM custom properties - In WebSphere we can define custom properties for accessing the prconfig.xml and prlogging.xml files. In other application servers, the custom property can be added directly in the startup file by using the syntax -Dpegarules.config=/<path>/prconfig.xml.
2. Set the user.home parameter - When a default value is set for this parameter in WebSphere, it instructs PRPC to look for the prconfig.xml and prlogging.xml files at that location. user.home also works in JBoss.
3. URL resource references - Some application servers like Websphere, offer an elegant way of accessing them using URL resource references. In EAR implementations, these are modified by changing url/pegarules.config and url/pegarules.logging.configuration. Refer to the WebSphere Administration Guide to see how to access these reference URLs.
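For option 1 on servers configured through a startup script, the overrides might look like the following sketch. The paths are hypothetical, and -Dpegarules.logging.configuration is assumed here as the companion property for prlogging.xml; verify both property names against the installation guide for your PRPC version.

```shell
# Append to the JVM startup options (e.g., JAVA_OPTS in Tomcat's setenv.sh)
JAVA_OPTS="$JAVA_OPTS -Dpegarules.config=/opt/pega/config/prconfig.xml"
JAVA_OPTS="$JAVA_OPTS -Dpegarules.logging.configuration=/opt/pega/config/prlogging.xml"
```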
Starting in PRPC v6.2, most of the PRPC server configuration settings can be stored in the PRPC database as instances of the Dynamic System Settings class (Data-Admin-System-Settings, or DASS). To simplify administration and ensure consistent behavior across servers, we recommend that configuration settings be set using Dynamic System Settings (DASS) instead of by editing prconfig.xml. The entries made using DASS are stored in the database table named pr_data_admin. By storing these entries in the database, we avoid modifying the configuration settings across multiple nodes in a clustered PRPC environment. Dynamic System Settings can be added using the Designer Studio. Dynamic System Settings are in the SysAdmin category and are accessed using the Records Explorer, since the settings apply to all applications across the system. Not all settings ship as DASS in the product, and we need to create a DASS only when it is required to override a default value.
Configuration Categories
There are several configuration categories that we can configure using DASS. These categories are:
1. Alerts which contain configuration settings to set alert thresholds. If the threshold value is exceeded, the system writes an entry into the PRPC alerts log file.
a. The browser interaction time threshold can be set by defining a new DSS as shown below. The Setting Purpose has the value prconfig/alerts/browser/interactionTimeThreshold/warnMS. By default this value is set to 1000 ms (1 second).

After entering these values, click Create and Open to display a new rule form. Then enter the value in milliseconds (3000). After this change, PRPC writes an entry to the alert file only if the time exceeds 3 seconds.
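Following the pattern of the entries below, the completed alert setting might look like this. The Owning Ruleset shown is an assumption (prconfig-style settings are commonly owned by an engine ruleset); confirm the convention on your system before creating the record.

```
Owning-Ruleset: Pega-Engine
Setting Purpose: prconfig/alerts/browser/interactionTimeThreshold/warnMS
Type: String
Value: 3000
```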
1. ProComHelpURI is the help URL which must be set using DASS in a clustered PRPC environment.
Owning-Ruleset: Pega-ProCom
Type: String
Value: http://servername:portnumber/prhelp
2. SystemManagementURI is the SMA URL which must be set using DASS in a clustered PRPC environment.
Owning-Ruleset: Pega-ProCom
Type: String
Value: http://servername:portnumber/prsysmgmt
3. SearchSOAPURI helps in identifying the URL of the node where the indexing is enabled. In a clustered PRPC environment, all nodes which do not have the index files connect with the indexed node through a SOAP connection.
Owning-Ruleset: Pega-Rules
Type: String
Value: http://servername:portnumber/prweb/PRSOAPServlet
Refer to the PDN link prpc-62 for additional categories and information.
PRPC applications may need to leverage non-Pega external Java classes and jar files to support PRPC implementations. The external java may be from infrastructure or interface vendors, such as IBM MQ Series java classes, or from custom java applications. These external classes may be referenced directly in rules that support Java. PRPC must have access to the appropriate java at 'compile time', when the rule is first assembled, and at run time, when rules are executed.
All application servers provide a standard 'library' directory (lib) for the server to automatically load java classes on behalf of PRPC applications.
When a PRPC application is loaded, the 'boot loader' logic in the WAR/EAR needs to access the database. To access the database, the application server must load appropriate JDBC drivers from the library directory.
To access vendor or custom Java in applications, the class must be both loadable and accessible to the internal Java assembly/compilation facilities. While the application server automatically loads classes from the lib directory, Java assembly requires that each class or jar file be explicitly passed in compiler arguments. To make an external class visible to the PRPC compiler, it must be added to the compiler class paths system setting.
defaultClasses: Lists the external class files. A semicolon is added at the end of an entry to start a new one, and either a period or a forward slash can be used to separate the parts of each class name.
defaultPaths: Lists the locations of the jar files placed in the application server. Since the entire path is stored, a file can be placed in any directory; if using a different directory, we need to make sure the jar is placed in the same directory on all nodes in a clustered PRPC environment.
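As a sketch, the two settings might be expressed in prconfig.xml as shown below. The "compiler/" prefix, class names and jar path are all assumptions for illustration; check the setting names documented for your PRPC version.

```xml
<!-- Sketch only: make hypothetical external Java visible to the PRPC compiler -->
<env name="compiler/defaultClasses" value="com/acme/mq/QueueHelper;com/acme/util/Codec"/>
<env name="compiler/defaultPaths" value="/opt/appserver/lib/acme-client.jar"/>
```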
To simplify server administration and configuration, it is possible to load external java code other than JDBC drivers into the PRPC database directly, and let PRPC manage class loading and class compilation. All external java loaded into PRPC is automatically recognized at system startup and does not require explicit listing in the compiler class path argument. To load external java into the PRPC database, one may directly import class and jar files from the Import gadget in Designer Studio. Once imported, the external java may be distributed as part of PRPC "product" rule and data bundles.
If we do not want to stop and restart the application when adding new classes, we can use the SMA to refresh the classes. Navigate to the Advanced section and click the Class Management link to load that page, then click the Refresh External Jars button to refresh the classes.
PRPC works on most leading databases such as Oracle, SQL Server, IBM DB2, PostgreSQL and so on. The PRPC installation comes with a database schema that stores rule, data and work instances in tables. These tables are expected to be used as-is, except for the work and custom data classes built for the applications. By default, the New Application wizard creates a custom mapping table for all work classes created in PRPC. Similarly, wizards such as the Live Data and optimization wizards create custom mapping tables for data classes and indexed page properties.
Persistence Model
The PRPC persistence model is designed to work like Object Relational Mapping (ORM) tools such as Hibernate. A PRPC class is mapped to a database table and the instances of that class are stored as rows in that table. The data is saved in the pzPVStream column as BLOB data; properties that have to be exposed are marked for exposure, and the table structure can be changed so that those properties are persisted as columns. Not all PRPC classes get stored in database tables. Only classes that are concrete and have instances are mapped to tables. Both Work and Data classes get mapped to tables, however they are mapped differently. Work table mappings work on the basis of the work pool the classes belong to; each class group is mapped to a database table, and all work classes that belong to that work pool (inheriting from the class group) get saved in that table. Similarly, Data classes can be mapped to database tables. Data classes can also be mapped as external classes, the primary difference being that external classes do not have Pega-specific columns such as pzInsKey, pzPVStream and so on.
The BLOB that is stored in the pzPVStream column of the table is obfuscated and is zipped to save space. The obfuscation is proprietary to Pega. For example, an aggregate such as a Value List property can have multiple values. When the system saves an object that includes an aggregate property, its values are compressed together (or "deflated") into a single column. When the instance is later opened and placed on a clipboard, the column is decompressed (or "inflated"). When deflated, the property names and values are present in a single text value. This text value has a proprietary format; the values are obfuscated.
The newer versions of PRPC support a split schema consisting of a rules schema, which includes the rule
base and system objects, and a data schema, which includes the work tables and data objects. Split
schemas are mainly useful in performing PRPC upgrades of rule tables without bringing down the server.
Connection Pooling
Software object pooling is not a new concept. There are many scenarios where some type of object pooling technique is employed to improve application performance, concurrency, and scalability. After all, having your database code create a new Connection object on every client request is an expensive process. Moreover, with today's demanding applications, creating new connections for data access from scratch, maintaining them, and tearing down open connections can place a massive load on the server. Connection pooling reduces this JDBC overhead. Further, object pooling also helps to reduce the garbage collection load.
PRPC can leverage the JNDI data source defined as part of the application server or the servlet container (in the case of Tomcat) for the database connection details. While installing the system, it is highly recommended that we set the connection pool size in the application server to make sure users are not waiting to acquire database connections. The PRPC application uses caching, which limits interaction with the database; however, each submit requires a DB connection to update the work item. Careful inspection of the alert log files helps in setting the connection pool size. PRPC also functions if we specify the database connection properties in prconfig.xml, but this approach should not be used, even when PRPC is deployed in Tomcat. In Tomcat the database connection should be defined specifically for our application through a resource definition in the context.xml file.
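A hedged sketch of such a Tomcat context.xml resource definition follows. The resource name jdbc/PegaRULES matches common PRPC installation guidance, but the driver, URL, credentials and pool sizes are placeholders; attribute names (maxActive, maxIdle, maxWait) are those of Tomcat's DBCP-based pool of that era.

```xml
<!-- context.xml: JNDI DataSource for the PegaRULES database (values illustrative) -->
<Context>
  <Resource name="jdbc/PegaRULES" auth="Container" type="javax.sql.DataSource"
            driverClassName="oracle.jdbc.OracleDriver"
            url="jdbc:oracle:thin:@dbhost:1521:PEGA"
            username="pegauser" password="secret"
            maxActive="50" maxIdle="10" maxWait="30000"/>
</Context>
```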
In PRPC we need to create a Database Table data instance to map a class to a database table. Database Table is associated to the SysAdmin records category. A Database Table instance maps to a schema name (in this case, PegaDATA) as well as a table name. The PegaDATA and PegaRULES database instance are created by default; we need to create a new Database record if additional databases are used.
PRPC uses several caching mechanisms to enhance application performance. Below is a list of the PRPC caches.
1. Rule Instance cache - stored in memory
2. Rule Assembly cache - stored in memory and database
3. Lookup list cache - stored on disk (Server File System)
4. Conclusion cache - stored in database and memory
5. Static Content cache - stored on disk (Server and Client File System)
6. Declarative Page cache - stored in memory
7. Declarative Networks cache - stored in memory
Most caches are stored in memory except a few that use the file system or database. If caches are primarily built to restrict accessing rules from database each time, why do we save some of these caches in database? We will learn about the significance of storing caches in the database when looking at the Rule Assembly in detail.
Cache content is automatically controlled by the engine and PRPC initializes the cache using default values for all caches. The default values are usually a good starting point however each application is unique and it's hard to come up with a number that can work for all applications. It is extremely important to look at conducting some load testing to simulate real time performance. This helps in understanding which settings need some adjustments. In general, it is highly recommended that you work with the Pega Support team to fine tune caches but let's learn about the various caches, what the default values are for each cache and how to modify these values if required.

The system then searches the Rule Instance cache to find the rule. The cache returns one of three results:
Cache Hit - Means that the rule data is stored in the cache
Cache Miss - Means that the rule has not been requested before, check the database
Not found - Means that the cache has registered that the rule was not found in the database
When a "cache miss" is returned, the system requests the rule from the database and, if found, returns the rule. To improve efficiency, the system does not record the rule in the cache until the same rule has been requested multiple times (three, and in some cases more). The system keeps track of the number of times a rule is requested and keeps the entries in a probationary state, so even when the rule is recorded as found in the database, the system requests it from the database every time until it has been accessed multiple times.
When the rule gets updated, the system checks the cache for any rule related to the changed rule and invalidates those entries, so that subsequent requests for this rule go to the database and get the updated rule information (rather than getting the outdated information from the cache).
The rule cache has two distinct parts. First, the "alias cache", which captures rule resolution relationships and provides the pzInsKeys of the candidate rules.
The cache is structured as a hashmap whose key is assembled from a hash of the requestor's ruleset list, the rule class, the applies-to class of the rule and the rule purpose, which is basically the rule name. As structured, multiple keys may link to the same exact rule.
Now that the system can access the candidate pzInsKeys through the alias cache, the instance cache, the second part of the rule cache, links each pzInsKey to the content of the associated BLOB. The instance cache captures the BLOBs from the database and stores the XML format in memory.
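The two-level structure just described can be sketched in plain Java. This is an illustrative model only, not PRPC source: all class, field and method names here are invented, and the real alias key is a hash built by the engine.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Illustrative sketch of a two-level rule cache: an "alias cache" maps a
// rule-resolution key to a pzInsKey, and an "instance cache" maps that
// pzInsKey to the stored rule content.
class RuleCacheSketch {
    private final Map<Integer, String> aliasCache = new HashMap<>();   // alias key hash -> pzInsKey
    private final Map<String, String> instanceCache = new HashMap<>(); // pzInsKey -> rule content (XML)

    // The alias key combines the requestor's ruleset list, the rule class,
    // the applies-to class and the rule purpose (name), as described above.
    static int aliasKey(String rulesetList, String ruleClass, String appliesTo, String ruleName) {
        return Objects.hash(rulesetList, ruleClass, appliesTo, ruleName);
    }

    void put(String rulesetList, String ruleClass, String appliesTo, String ruleName,
             String pzInsKey, String content) {
        aliasCache.put(aliasKey(rulesetList, ruleClass, appliesTo, ruleName), pzInsKey);
        instanceCache.put(pzInsKey, content);
    }

    // Returns the cached rule content, or null on a cache miss.
    String lookup(String rulesetList, String ruleClass, String appliesTo, String ruleName) {
        String pzInsKey = aliasCache.get(aliasKey(rulesetList, ruleClass, appliesTo, ruleName));
        return pzInsKey == null ? null : instanceCache.get(pzInsKey);
    }
}
```

Because the alias level is keyed separately from the instance level, many alias keys (different ruleset lists that resolve to the same rule) can share one cached instance, which is the point of the two-part design.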

The rule cache can grow substantially and consume a growing share of the total memory available to the system. To prevent this, we should invest time in tuning the cache size. Rule Instance cache details can be viewed in the SMA by expanding the Advanced section and then clicking Rule Cache Management.
The Rule Cache Summary section lists the instance count (this is the same count that we saw on the Memory Management screen). This section also provides additional details for the cache hits (rules found), cache misses (rules not found) and so on.
The SMA can also be used to tune the rule cache size. Rule Instance cache sizing is extremely important. By default, the rule cache is set to 3000 entries. We can change this limit by creating a new dynamic system setting (DASS) whose Setting Purpose references the instancecountlimit entry. We do not want this number to be too high or too low, and PRPC allows tuning only by setting the instancecountlimit, whose value can be determined using the procedure described below.
1. Developers should start by testing all the important parts of the application: create, modify and resolve cases, run the most commonly used reports, make sure all the agents used in the application are enabled and execute without any errors, log in as different operator profiles, and so on. It is important to run all the rules. Doing all this ensures all the rules are preassembled.
2. Now, clear the Rule Cache using the Clear Cache button in the Rule Cache Management page in the SMA. After clearing the caches, repeat testing all the important parts of the application and then go to the Rule Cache Management page in SMA and check the instance count.
3. If the instance count comes out significantly higher or lower than 2000, modify the instancecountlimit using a DASS. Set the instancecountlimit to 1.5 times the instance count you observed in testing. In a multi-node clustered environment, all nodes should use the same value, except when a node is designated to serve a specific purpose such as processing only agents or services.

Important Note: Apart from the tuning procedure above, the Clear Cache button should not be used at all, because clearing the rule cache slows the system down. This cache was created to provide efficiency in rule lookup; clearing the cache means that all the cached information has to be rebuilt.
Setting the size correctly is critical: a value that is too high causes too many rules to be cached, taking up more memory than required, while a value that is too low requires the system to access the database frequently.
PRPC also improves rule cache efficiency by using two types of mechanisms to clear entries in caches, the first one is pruning and the second one is draining.
Pruning is triggered when the instancecountlimit is reached. The system uses a most-recently-used (MRU) tracking algorithm to delete the oldest entries to make room for new ones.
Draining happens on an ongoing basis, every time a rule is read from the database and added to cache, the oldest cached item greater than the minimum age threshold is removed from the cache.
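The pruning behavior can be sketched with a small Java class. This is an illustrative model under stated assumptions, not PRPC source: it evicts the least-recently-accessed entry once the configured limit is exceeded, and it ignores draining's minimum-age threshold for brevity.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: a cache that keeps itself at or below
// instancecountlimit by pruning the least-recently-accessed entry
// whenever a new entry pushes it over the limit.
class PruningCacheSketch<K, V> extends LinkedHashMap<K, V> {
    private final int instanceCountLimit;

    PruningCacheSketch(int instanceCountLimit) {
        super(16, 0.75f, true); // access-order: iteration runs oldest-accessed first
        this.instanceCountLimit = instanceCountLimit;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Prune the oldest entry once the limit has been exceeded
        return size() > instanceCountLimit;
    }
}
```

Recently read entries survive pruning, which matches the intent described above: the cache sheds only entries that have not been used for a while.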
The rule cache should never be cleared unless a lot of pruning and draining is happening. At times it is hard for developers to predict and simulate the correct size for a production system. In this case, if performance testing indicates a lot of cache misses, the administrator might need to perform the steps above in production to estimate the cache size.
When accessing the Rule Cache Management page in the SMA, pay close attention to the Instance MRU section. The Limit Size is what is set using dynamic system settings. The target size and max size are computed based on the limit size. The current size reflects the number of rules currently cached.
Among the pruned counts in the second line, MaxPruned should never be greater than zero; if it is, a cache tuning exercise should be performed. DrainingPruned displays the number of rules being pruned once the current size exceeds the target size. LimitPruned appears when the current size exceeds the limit size. This should not occur, and when it does occur it makes sense to increase the limit size before max pruning occurs.
The Instance MRU (Most Recently Used) section displays the MRU details, pruning and draining. These are used by PRPC to remove the entries in cache. We will learn more about this in the next section.
PRPC offers three types of rule cache reports. We can click the report button, which gives the option to export the cache results to a csv file.
The Instance Cache report contains information about the cache status (hit, miss, etc.), rule aliases, cache hits, hits per second, estimated size, added (date and time), accessed (date and time)
The Rule Resolution report contains details on rule resolution alias keys. A rule resolution alias consists of the class name, the rule identity and a hash of the user's ruleset list. When the alias is cached, each of its candidate instances (narrowed by rule resolution logic) should have an Instance cache entry; if one does not exist, the system creates an entry and marks its status as Preloaded.
The Rule Identities report contains information such as the rule identity name, rule name, rule class, ruleset list and requested applies to class.
We can get cache information for a specific rule type or a particular rule by entering them in the text boxes above the rule cache summary. After entering the rule type (for example, Rule-Obj-Flow) or a specific rule name, we need to click the Rule Cache Detail button to see the caching behavior of that rule type (flow) or even of a specific rule. In the Rule Cache Detail page, we can see the instance count specific to that rule type, cache hits, misses and other information such as the date and time added and the date and time accessed.

Rules Assembly is the process by which the Pega 7 system generates and compiles the Java needed for the rules it runs. It provides access to the constructor of the generated Java class. Since Rules Assembly is an expensive process (in terms of system resources), it has been broken down into these four steps to maximize performance and efficiency.
When the system calls a rule to be run, if that rule was never executed, the Rules Assembly process must:
Assemble the rule, including all the other rules required. For example, if this is an activity rule, it may call other rules. This requires calculating which rules apply to the current user's request and context.
Code generation logic creates the Java code for the rule
The Java code is compiled and the resulting class is saved to the database
The class must be loaded by JVM for execution
Of these steps, the code compilation is the most expensive. A cache is used to remember which class applies to which request, in order to avoid repeating this work. The techniques used to cache assemblies and avoid extra work have changed over the past few PRPC releases.
In all PRPC releases prior to v6.3 this was referred to as the FUA (First Use Assembly) cache. The FUA cache key is a combination of the requested rule name, the Rule-Obj-Class to apply it against, and the user's ruleset list. The assembly process also uses "inlining", which means the logic from many rules is included in one assembled class. The cache metadata is stored in three database tables so that it can be shared across nodes and is retained even when the server is restarted. Caches are grouped by ruleset list, and if multiple users share the same ruleset list they are able to share the cache content. Rule Assembly cache details can be viewed in the SMA by expanding the Advanced section and then clicking Rule Assembly Cache Management.
Application Based Assembly Cache (ABA)
In PRPC v6.3, the cache was grouped by application instead of by ruleset list. This reduced the number of assemblies considerably, and the cache size shrank greatly by avoiding redundancy. ABA stores the Rule Assembly cache in three database tables:
pr_sys_appcache_shortcut: This table stores the assembly cache, mapping the cache key (rule name, requested applies-to class, and top-level application) to an assembled class name.
pr_sys_appcache_dep: This table stores the inlined rule names for each assembled class.
pr_sys_appcache_entry: This table stores the assembly cache, mapping the cache key (rule name, requested applies-to class, and 'owning application') to an assembled class name.
In addition, to support this sharing, the system also maintains the application hierarchy and rulesets in database tables, namely pr_sys_app_hierarchy_flat and pr_sys_app_ruleset_index respectively.

The Application Based Assembly Cache Management page in the Advanced section of the SMA helps in viewing the ABA. ABA is still used in 7.1 applications for all UI rules such as harnesses, flow actions, sections, and paragraphs. There are various reports that provide additional information on the caching entries stored in both memory and the database. It is essential to look at the count field in both the ABA shortcut and Assembled Class entries to see how they compare to the target, limit and max sizes. It might be necessary to reset these entries depending on how many are being used.
The ABA cache detail can be viewed for a specific rule by entering its name in the Rule Name field and clicking the ABA Cache detail button. This lists all the entries in the table for that specific rule.

In 7.1, ABA has been replaced by the Virtual Rules Table (VTable), which does not inline rules and eliminates the context. The VTable cache is stored in the database table named pr_sys_rule_impl and contains only the mapping of rules to assembled classes.
The VTable cache key is usually a combination of the rule class and the purpose, which is the name of the rule. When a rule is saved, the system creates an entry in the pr_sys_rule_impl table and saves the assembled class in the database table named pr_assembledclasses. When the rule is invoked, the system looks in the VTable cache and gets the mapped class name by searching on the purpose.
VTable caching offers several benefits, such as drastic reductions in cache size and database table size, zero contention (since updates happen only on rule save), and significantly improved performance, because the product ships with all Pega-provided rules preassembled in the database table.
VTable cache does not require any configurations since a very small footprint is stored in the table. In rare cases, when the system pulse throws some exceptions or if rules are not consistent along different nodes, we have the option to use SMA to reload rules to make the same rule appear in all nodes. To do this, we navigate to Virtual Rule Table Cache Management page in SMA and then enter the name of the rule which we want to reload.
The page refreshes to show the options to reload or reassemble that specific rule. Reassemble triggers assembling the rule again; this might be required in rare cases when the log files show exceptions such as UnresolvedAssemblerError.

The Static Assembler is useful for pre-assembling the rules developed by your development team. Running the static assembler builds the VTable and application-centric assembly caches. When rules are accessed for the first time, rule assembly kicks in, which incurs a significant performance cost. To avoid this, it makes sense to run the static assembler when new application rulesets are imported.
The static assembler can be run from the Designer Studio by navigating to System > Tools > Static Assembler.

There are two caches that use File System for its storage; they are the Static Content Cache and the Lookup List Cache. Let's look at them one at a time.
Static Content cache holds text and image files corresponding to the extracted text file and binary file rules. It is actually cached in two places, on the server and on the client. Static content refers to files that rarely change. Some examples include:
1. Binary file rules - image files of png or jpg extension
2. Text file rules - javascript and CSS used by PRPC rules
3. Form file rules - older rule forms, most of them are deprecated in 7.1
4. EForm file rules- rules that are used to generate PDFs
When they are requested, these files are extracted and stored in a directory named StaticContent on the application server. Because they are stored on the server's file system, even restarting the server does not remove these files unless they are manually deleted.
Some of these files are stored on the client machine, typically in the browser cache. When users request data, it is retrieved from the browser cache, which eliminates the need to request the information from the server. By default, PRPC sets an expiration tag of 24 hours, so a file is not re-fetched from the server for 24 hours. Users can still use the browser options menu to clear this cache.
Static Content cache stored on the server file system is structured as sub-directories of the StaticContent common directory in the Pega temp directory. If you need to clear the static content cache at the next startup, delete the PegaRULES_Extract_Marker.txt file from the Pega temp directory before restarting the system.
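As a sketch, clearing the cache at the next startup amounts to removing that marker file. The temp directory path below is hypothetical; use the Pega temp directory configured for your node.

```shell
# The Pega temp directory as configured for this node (hypothetical path)
PEGA_TEMP=/opt/pega/temp
# Deleting the marker file forces the static content cache to be
# re-extracted at the next startup
rm -f "$PEGA_TEMP/PegaRULES_Extract_Marker.txt"
```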
PRPC also creates the sub-directory names with a hash code derived from an entire ruleset list.

Static content stored on the server can also be managed using the SMA in the ETier Static Content Management page in the Advanced category. The reports and the summary provide statistics such as the number of files written, the number of files read from the database, the elapsed time to write files and read files from the database, the time to invalidate rules, and so on. This page also offers the ability to clear caches individually, such as the lookup list, rule-file-cache, rule file and service export caches.
The WebTier Static Content management page displays the cache information for the files on the web server such as images, javascript files, css files.
Static content caching performance can be improved by distributing static content to edge servers. If users are located all around the globe, those connecting to PRPC might see performance issues when accessing these files, since the cache is stored on a central server and access depends on network speed. Edge servers are web servers installed closer to the users' locations so static content can be accessed quickly.
Once edge servers are installed, users who can access those servers should be assigned to a specific access group. Then we need to customize the stub activity shipped in the product (ApplicationProfileSetup) to set the edge server URL, and save it in a production ruleset. The production ruleset should then be added to the access group. Use the Extract Edge Server Files landing page to create the zip file, and then import it to the corresponding edge server.
Refer to the linked PDN article for more detailed information on these steps.
How to improve response by distributing static content to remote edge servers (Node ID: 11728)
Lookup List Cache
The lookup list cache stores data instances that are requested by rules that perform a lookup on the PRPC database tables. It contains the XML format of the lookup list information displayed in drop-down boxes, such as SmartPrompts or other list-based controls. The results are saved as an XML file and as a gzip file under the LLC/ClassName directory in the Pega temp directory on the application server disk. The gzip file is used when the lookup list cache is served to the web browser, and the XML file is used to populate the clipboard. The system automatically deletes these files if the list becomes stale due to certain operations. This cache should rarely need clearing, but it should definitely be cleared when you import a file that updates these records or when there is an exception in the system pulse.

Certain rules such as classes, field values, properties, property aliases, and so on are saved in the conclusion cache. These rules are not saved in the rule cache, and they do not require assembly or compilation. This cache is saved in the database table named pr4_rule_sysgen. It groups similar rules, so searching is quicker even when accessing the database. The most common example is Field Value (one conclusion covers all locale values). The conclusion cache contains only the bare minimum information needed to run the rule. This cache also uses memory caching; the metadata about these rules is stored in memory.
The conclusion cache can be viewed in the SMA, in the Advanced category. The Conclusion Cache page lists the details of each cache across rule categories (property, class, field value, etc.), including the size estimate, instance count, prune age, and so on. It also offers options to clear the cache in both memory and the database. Clearing this cache has an adverse performance effect and hence should not be done.
Starting in PRPC v6.2, the conclusion cache parameters were externalized and can be overridden using dynamic system settings. The following table shows the default values. To override these values, use setting purposes of the form:
prconfig/conclusioncache/typeName/minimumsize (typeName = Property or any other type in the list)
prconfig/conclusioncache/typeName/pruneatage (value entered in seconds)
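For instance, overriding the prune age for Property conclusions could look like the following dynamic system setting. This is a sketch: the owning ruleset follows the usual convention for prconfig overrides, and the value shown is illustrative, not the shipped default.

```text
Owning Ruleset : Pega-Engine
Setting Purpose: prconfig/conclusioncache/Property/pruneatage
Value          : 86400     (seconds; prune Property conclusions idle for a day)
```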

In the list of factory reports, scroll down to locate the Property Reference Pool Report. Make sure that the Pruned field is 0. If you see "2nd class objects now ignored", then system performance is being affected. What does this mean? It means that the property reference pool has stopped tracking embedded properties such as pages, page lists, and page groups because the pool limit has been reached. This severely impacts performance when iterating over those properties.

Declarative Page cache
This cache supports the data pages that are being used in the application. A data page stores data which is used by applications during case processing.
Data pages are invoked declaratively: when a rule accesses a data page, the system looks for that data page in memory. If the page does not exist in memory, the system looks for the data page definition (rule) and then loads the data page into memory. The data page remains in memory for the next access, so when we access it a second time the system picks up the data page that is already loaded. To avoid stale data, the data page can be configured to refresh on a specific condition, on a time interval, whenever it is accessed, and so on.
Data pages are also scoped to determine who can access the page:
REQUESTOR (the data page is accessible anywhere in that user's session)
THREAD (the data page is accessible only in that specific thread; for example, the page is available for one case, and if the same requestor needs the information for another case, the cache has to be populated again)
NODE (the data page is accessible by all requestors logged on to that server)
NODE scoped caches remain in memory until node gets restarted, while REQUESTOR scoped caches
remain in memory until the user logs out.
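The three scopes can be pictured as three lookup levels. The sketch below is a conceptual illustration only (the class and function names are invented, not PRPC APIs): a NODE-scoped page is shared by every requestor on the server, a REQUESTOR-scoped page is shared across one user's threads, and a THREAD-scoped page is private to a single thread.

```python
# Conceptual model of data page scoping; not actual PRPC code.
node_cache = {}                      # shared by all requestors on this server

class Requestor:
    def __init__(self):
        self.requestor_cache = {}    # one per user session
        self.threads = {}            # thread name -> its private cache

    def get_data_page(self, name, scope, thread="STANDARD", loader=lambda: {}):
        if scope == "NODE":
            cache = node_cache
        elif scope == "REQUESTOR":
            cache = self.requestor_cache
        else:                        # THREAD scope
            cache = self.threads.setdefault(thread, {})
        if name not in cache:        # load on first access, reuse afterwards
            cache[name] = loader()
        return cache[name]

# A NODE-scoped page loaded by one requestor is visible to another:
a, b = Requestor(), Requestor()
page = a.get_data_page("D_Products", "NODE", loader=lambda: {"rows": 3})
assert b.get_data_page("D_Products", "NODE") is page
# A THREAD-scoped page is rebuilt for a different case (thread):
t1 = a.get_data_page("D_CaseData", "THREAD", thread="Case-1")
t2 = a.get_data_page("D_CaseData", "THREAD", thread="Case-2")
assert t1 is not t2
```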
The Declarative Page Cache management page in the SMA (Advanced Category) lists the options to clear data pages that are defined as Node scope. This is useful for administrators and developers to clear node specific data pages when the application renders stale data. This is one of the few caches that can be cleared ad-hoc though it is not mandatory.

Declarative Network cache
This cache supports declarative processing. PRPC declarative rules are executed by the engine using declarative networks, which are triggered either by forward chaining (calculating the target when any of the source properties changes) or backward chaining (calculating the target when it is requested and its value is null). The Declarative Rules management page in the SMA (Advanced category) provides details on this cache. This cache still uses the ruleset list to generate its entries, and it lists the various declarative networks associated with each ruleset list for a specific operator. When we select a specific ruleset list and click Get Cache Detail, we see a list of all classes, and we can then click through to get the list of all rules defined in each class. All the declarative network instances are also stored in the pr_sys_decchg database table, which is mapped to the System-Declare-Changes class. We can use the reports shipped with the product to see the details of these instances. These act as helper tables in constructing the cache and also store the relationships between the various rules; however, caching happens primarily in memory.

When PRPC is installed on a multi-node system, a copy of the various caches are stored on each node, and each of those nodes must be updated with rule changes. This update process is managed by the System Pulse functionality.
Saving a rule change (update, add, or delete) to one of the Rule tables in the database fires a PRPC database trigger, which then registers the change in the pr_sys_updatescache table (in the same database). In addition to rule updates, there are other events which add entries to this table.
The types of event which are saved to this table include:
Cache - for any changes to the rule cache (updates or deletes)
Index - when the Lucene index is changed
DELLC - when the lookup list cache is deleted
RFDEL - when any static content or rule file is deleted
IMPRT - when import occurs and it clears the lookup list and static content cache
RUF-X - when a function library is regenerated
Saving a rule change also automatically invalidates the appropriate information in the caches on that
node; however, all the other nodes in the system now have out-of-date information about that rule.
Every 60 seconds, the pulse (which is part of the standard PegaRULES agent) on each node wakes up (independently) and queries the pr_sys_updatescache table. The query retrieves records which are "not from this node" (which this node did not create), and which have a timestamp which falls within the window of time starting with the last pulse (or system startup) and now. In this way, all of the changes originating on other nodes are selected, and the appropriate cache entries on this node are invalidated by marking them as dirty. The next time one of the rules which has been invalidated is called, it is not found in the cache, and the updated version of the rule is read from the database and is eligible again for caching.
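The pulse query logic described above can be sketched as follows. This is a conceptual simulation, not engine code; the table and column names follow the description in this lesson, and the real implementation differs.

```python
# Conceptual simulation of the system pulse cache-invalidation query.
from datetime import datetime

# Rows a database trigger would have written to pr_sys_updatescache.
updates = [
    {"node": "node-A", "action": "Cache", "key": "Rule-Obj-Activity X", "ts": datetime(2024, 1, 1, 12, 0, 30)},
    {"node": "node-B", "action": "Cache", "key": "Rule-Obj-Activity Y", "ts": datetime(2024, 1, 1, 12, 0, 45)},
    {"node": "node-B", "action": "DELLC", "key": "Data-Admin-Operator",  "ts": datetime(2024, 1, 1, 11, 58, 0)},
]

def pulse(this_node, last_pulse, now, cache):
    """Mark cache entries dirty for changes made on OTHER nodes since the last pulse."""
    for rec in updates:
        if rec["node"] != this_node and last_pulse <= rec["ts"] <= now:
            if rec["key"] in cache:
                cache[rec["key"]]["dirty"] = True   # reload from DB on next access

cache = {"Rule-Obj-Activity X": {"dirty": False}, "Rule-Obj-Activity Y": {"dirty": False}}
pulse("node-A", datetime(2024, 1, 1, 12, 0, 0), datetime(2024, 1, 1, 12, 1, 0), cache)
# Only the other node's recent change is invalidated:
assert cache["Rule-Obj-Activity Y"]["dirty"] is True
assert cache["Rule-Obj-Activity X"]["dirty"] is False   # our own change, already handled locally
```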
System Pulse activity can be seen in System Management, under Administration > Pulse Status Page.
The concept of clustering involves taking two or more PRPC servers and organizing them to work together to provide higher availability, reliability, and scalability than can be obtained by using a single PRPC server. PRPC supports both horizontal and vertical clusters (scaling).
Horizontal scaling means that multiple application servers are deployed on separate physical or virtual machines.
Vertical scaling means that multiple PRPC servers are deployed on the same physical or virtual machine by running them on different port numbers. PRPC natively supports a combination setup which uses both horizontal and vertical clusters.
A cluster may have heterogeneous servers in terms of hardware and operating system. For instance, some servers can use Linux; some can use Windows, and so on. Usually, the only restriction is that all servers in a cluster must run the same PRPC version.
What does clustering involve? Redundancy across all tiers of an N-tier architecture. For true high availability, there can be multiple load balancers handling traffic and multiple JVMs on physical (horizontal cluster) and/or virtual (vertical cluster) machines. Similarly, we can also have redundancy in shared storage repositories and database servers.
A shared storage repository is the key component in achieving high availability because it is used for crash recovery and Quiesce (which we will learn more about later in this lesson). A shared storage interface allows PRPC to manage stateful application data between PRPC servers. Out of the box, PRPC supports a shared storage system which can either be a shared disk drive or use NFS. Both cases require read/write access on those systems so that PRPC can write data. If an organization decides on a different shared storage system, it needs to make sure the shared storage integrates with PRPC.
Pega qualifies a server as High Availability if it is available for 99.99% of the time. This means that the system should be down for a maximum of 53 minutes over a year. What does the 53 minutes include? This includes any unplanned outages due to a system crash and all planned outages for upgrading the system.

PRPC must be deployed on application servers such as WebSphere, WebLogic, or JBoss. These servers offer features such as shared messages and buses to handle services and listeners during planned or unplanned outages.
High Availability Roles
PRPC comes with two roles (PegaRULES:HighAvailabilityQuiesceInvestigator
and PegaRULES:HighAvailabilityAdministrator) that can be added to the access groups of administrators who will be managing high availability applications.
HighAvailabilityQuiesceInvestigator is given to administrative users who perform diagnostics or debug issues on a quiesced system. When a system is quiesced, the system reroutes all users other than those having this role. By default, a privilege named pxQuiesceAdminControl is created for granting this access.
The HighAvailabilityAdministrator role, in addition to the pxQuiesceAdminControl privilege, offers the ability to access the high availability landing pages in Designer Studio.
In general, High Availability Cluster management is performed using AES or SMA rather than using the landing page since Designer Studio updates rely on System Pulse for the update to be applied on other servers. This mechanism is slower (requires at least two minutes for the system pulse to update other servers) than using AES or SMA.
Similarly, the high availability cluster settings are set using dynamic system settings, which apply to all servers. Let's look at the different options for updating these configuration settings.
Setting Configuration Values
There are three ways you can set the configuration values.
1. Using prconfig.xml - This approach requires making configuration changes on each PRPC server. It can be used in cases where we want to make changes only on specific PRPC servers.
2. Using DASS (Data-Admin-System-Settings, also known as dynamic system settings) - You create a new DASS instance, which is stored in the database and is accessible to all servers in the cluster.
3. Using a shared prconfig.xml file - We use a shared drive or NFS to store the prconfig.xml, and all PRPC servers can be configured to access it using JNDI settings.
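As an illustration of option 1, a prconfig.xml override is a single env entry. The example below uses the passivation storage setting mentioned later in this lesson; the value is illustrative.

```xml
<!-- Fragment of one server's prconfig.xml; affects only that server.   -->
<!-- The cluster-wide equivalent (option 2) is a DASS whose setting     -->
<!-- purpose is prconfig/initialization/persistrequestor/storage.       -->
<env name="initialization/persistrequestor/storage" value="database" />
```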

Pega Applications have four sets of requestor types logging in to an application. They are:
1. Browser Requestors are all users, including developers who are accessing the application through the browser. The requestor IDs all start with the letter H.
2. Batch Requestors are background processes that are performed by PRPC agents (daemon processes) and child requestors. The requestor IDs start with the letter B.
3. Application Requestors are created when PRPC is accessed as a service from an external system or when the PRPC listener triggers a service request, the requestor IDs start with the letter A.
4. Portal Requestors are created when PRPC is accessed as a portlet using Service Portlet rules. The requestor IDs start with the letter P.
The requestor management page in the SMA (System Management Application) identifies all the requestors that are currently logged into a PRPC server at a specific time. Notice that it gives the number of browsers, batch and application requestors and also their IDs and the client address (machine name) from where the requestor is logged in. This page allows administrators to monitor these requestors individually by selecting the radio button and then by clicking the button above to perform a specific task such as looking at the performance details of the session or stopping the session.
The requestor type is a data instance record and it exists under the SysAdmin category. We can see the requestor type definitions using Records Explorer. Alternately we can also use landing pages (System > General).
There are two sets of requestor types and the current version of PRPC uses the one with pega (all lowercase) as the system name. The requestor type using prpc as the system name was used in prior releases and exists for backward compatibility.
This page also lists all the PRPC servers connected in the cluster. Notice that it displays both horizontal and vertical clusters.
If we click the BROWSER link in the TYPE column, it opens the associated requestor type data instance. We see that it has an entry for the access groups and uses a standard access group
named PRPC:Unauthenticated.
When we open the access group record, we see that it provides access to the standard PRPC application (configured in the APPLICATION section). Users belonging to this access group get guest access (configured in the ROLES section).

When users access PRPC using its URL, they are presented with a login screen. The default login screen is defined as part of the PRPC application, so all unauthenticated users need access to this screen to view it. Even when a third-party sign-on such as LDAP or single sign-on (SSO) is used, PRPC requires guest access for BROWSER requestors until they successfully log in to the application.
Once they successfully log in to the application, they switch to their own access group. Requestors of type APP or PORTAL also get the same access group (PRPC:Unauthenticated); if the application requires a separate access group, we may need to modify it to use a different one.
BATCH requestors use a separate access group named PRPC:Agents, which was used for legacy (older version) agents. It was configured to provide a separate role named PegaRULES:BATCH for PRPC agents. If the agents we create use this access group, we may need to modify it to make sure it gets access to the application.

A clipboard page is a data structure that holds name-value pairs. Clipboard pages are created based on what the user does at runtime: when a user creates a case, the system typically creates a user page for storing the case details, along with a few other top-level pages for storing static data, user information, application information, and so on. In a typical user session, many pages are created, and these pages remain in memory unless the application clears them out after use.
A thread or PRThread is an object created by PRPC in memory. PRThread consists of a set of clipboard pages and it provides the context in which the user is making changes. Thus, a PRThread is not related to a JVM thread but is a namespace used as a container for clipboard pages.
For instance when the user is creating a new case, PRPC creates a PRThread and it creates various clipboard pages as part of that thread while the user is working on that case. In a requestor session, the system generates multiple threads so users can work on multiple cases in parallel. When a developer works on Designer Studio and opens 10 rules, it creates 10 separate PRThreads and multiple clipboard pages in each of them.
Having all this in memory significantly enhances productivity and performance, since the data is at hand and users can switch between tabs to work on multiple cases or rules. The only downside is that it ends up using a large memory footprint.
Passivation and Requestor Timeouts
BROWSER requestor sessions get closed in one of three ways:
1. When users log off and close the browser
2. When the PRPC server is restarted.
3. When users get timed out.
If users do not log off, their sessions remain in memory along with all open PRThreads and clipboard pages. PRPC uses timeouts to clear these idle requestor sessions, PRThreads, and clipboard pages from memory, so that resources are used mainly by active users. The timeout used to clear requestor information from memory is known as the requestor timeout.
By default, a requestor session times out after 60 minutes of inactivity, while a PRThread and a clipboard page time out after 30 and 15 minutes respectively. These values can be overridden using dynamic system settings.
When the requestor timeout is reached, the corresponding information is saved to disk by default. The requestor information is retained on disk for a specified amount of time, after which it is removed. This process is known as passivation. PRPC passivates data on timeouts by default. If the user accesses the same session again, the system restores it to memory. This reverse process of restoring passivated data back into memory is known as activation.
Passivation is performed by a background daemon which runs periodically, looks for idle requestors, and moves their data to disk. If the data is activated again, or after 24-48 hours, it is cleared from the disk as well. We can customize the dynamic system settings to save it in a database instead of on disk (change prconfig/initialization/persistrequestor/storage to database instead of filesystem). However, passivated data cannot be saved in the database for applications that require high availability. When using disk, the system uses the temporary file directory by default, and we can use dynamic system settings to set the directory where these files should be saved (changing prconfig/storage/class/passivation/rootpath). For high availability applications, session passivation data is stored on a shared disk so that the session can be activated on any of the nodes. High availability applications can also use a custom passivation mechanism by implementing custom passivation classes. See the article titled Creating a Custom Passivation method on the PDN, in the related content, for a sample implementation using Memcached.
Passivation helps in managing heap sizes, as idle objects are cleared from memory regularly. Lowering the default timeout values a little helps clear idle objects from memory more quickly. However, we need to make sure sessions are not passivated too soon; for example, if we set a requestor session to time out in 5 minutes, the system might end up activating far more often than required because sessions are passivated prematurely. You can check the passivation settings using the SMA.

PRPC uses another timeout, named the authentication timeout. This timeout is configured in the access group record. PRPC forces users to log in again after this timeout is reached. In cases where external authentication is used, this timeout is disabled. If the authentication timeout expires, the user must log in again to reactivate their session.

Load balancing is a way to distribute the workload across the servers in a multi-node clustered environment. Since the requestor information (session, PRThreads, and clipboard pages) is stored in memory, PRPC requires all requests from the same browser session to go to the same JVM. In network parlance, this is known as sticky sessions or session affinity.
For high availability applications, the load balancer must also support:
Automatic monitoring for failure detection (the terms are usually defined during requirements gathering)
Ability to disable a JVM so that it does not allow any new user sessions but allows existing user sessions to continue. Disabling is done to facilitate shutdown.
Scripting capabilities to support cookie management, allocation of work to JVMs, and so on.
Load balancing can be achieved by using hardware routers that support "sticky" HTTP sessions. Cisco Systems Inc. and F5 Networks Inc. are examples of vendors who offer such hardware.

In addition, there are also software, virtual, and cloud-based load balancer solutions (such as Amazon EC2's elastic scaling) available.
Session affinity or sticky sessions can be established in many ways, and PRPC uses cookies to manage sticky sessions.
Using cookies is the preferred option and is commonly used in the industry. The cookies can be set using the session/ha/quiesce/customSessionInvalidationMethod setting in prconfig.xml or in dynamic system settings.
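In prconfig.xml form, the setting named above might look like the following. The setting name comes from the text; the value is a hypothetical customer-written class, shown only to illustrate the shape of the entry.

```xml
<!-- Setting name from the text above; the value is a hypothetical example. -->
<env name="session/ha/quiesce/customSessionInvalidationMethod"
     value="com.example.MySessionInvalidator" />
```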

Failover support allows requestor sessions to continue on a multi-node clustered environment when outage occurs. The outage can happen in two ways - the server where the user is logged in crashes or the browser where the user accessing the application crashes.
Failover strategy is determined based upon the cost and time effort. There are broadly two types of strategies - cold failover and hot failover. All high availability applications require hot failover. Let's briefly look at them.
In a cold failover, the load balancing system sends a heartbeat to the nodes to check whether they are still running. This heartbeat can be configured in the application server and can be sent every minute, once every two minutes, or so on. When a server goes down, the load balancer is notified and moves the sessions to another server. Any session information that was not stored or committed to the database is lost.
In the case of high availability applications, we can enable failover by changing the passivation setting to use shared storage (changing storage/class/passivation/rootpath). This setting is usually set using dynamic system settings. When configured, requestors accessing a JVM are redirected to another JVM through the load balancer when the server crashes. The user must authenticate again to establish their session on the new node. All information that was not committed is lost in the process.
We can preserve the user's screen state by configuring another setting (session/ha/crash/RecordWorkInProgress=true). This setting stores the user interface metadata in the shared file system. After a crash, it helps redraw the user's screen, bringing back all values and restoring the user interface to its previous state.
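Taken together, the two failover settings named above could appear in prconfig.xml (or as equivalent dynamic system settings) roughly as follows; the shared-storage path is an illustrative assumption.

```xml
<!-- Passivate to shared storage so any node can activate the session. -->
<env name="storage/class/passivation/rootpath" value="/mnt/shared/pega/passivation" />
<!-- Record UI metadata so the screen can be redrawn after a crash. -->
<env name="session/ha/crash/RecordWorkInProgress" value="true" />
```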
These are the detailed steps that occur to help recover a crashed session.
1. PRPC saves the structure of UI and relevant work metadata on shared storage devices for specific events. When a specific value is selected on a UI element, the form data is stored in the shared storage as a requestor clipboard page.
2. The Load balancer continuously monitors all connected PRPC servers. When one of the servers fails, it removes that PRPC server from the load balancer pool so that future requests are not directed to that server.
3. When the user sends the next request, it goes to a new server. A new requestor is created, and the session is activated using the passivated data stored in the shared storage layer. The system then uses the UI structure and work metadata to repaint the screen for the user.
Browser Crash
PRPC also handles browser crashes seamlessly, when configured appropriately. In the case of High Availability applications, when the browser crashes, a new browser session connects to the correct server based on session affinity. The user interface metadata and clipboard are used to repaint the screen.
PRPC applications must be HTML5 compliant for browser crash and node crash recovery to work. For user metadata recovery, PRPC applications must use dynamic containers (a feature used in Pega portals for displaying the users' work area). The dynamic container work area uses tabs and the application tab recovery feature to recover the data.
The following table explains the events that occur during a browser or a PRPC server crash.

One of the main reasons to perform planned outages is to upgrade PRPC to a new version. In high availability applications, we can upgrade a PRPC server without impacting user sessions.
In a multi-node clustered environment, these are the steps to follow to upgrade the system.
1. In the load balancer console, disable the PRPC server that is slated to be upgraded. This ensures that the load balancer does not direct new user traffic to this server; however, it continues sending existing user traffic there.
2. To start moving existing users, PRPC recommends using a process named Quiesce. We can use AES, the SMA, or the landing page to quiesce the PRPC server that has been disabled in the load balancer.
Quiesce Process
When the server is quiesced, it looks at the accelerated passivation setting. PRPC sets this to 5 seconds by default, so after 5 seconds it passivates all existing user sessions. When users send another request, their session is activated on another PRPC server without any loss of information.
The 5-second passivation timeout might be too aggressive in some applications, so you may need to increase it to reduce the load. In general, this timeout should be consistent with the application, that is, the typical time a user takes to submit a request.
Once all existing users are moved from the server, we can upgrade this server and then once the process is complete, we enable it in the load balancer and cancel Quiesce from AES/SMA. We can use the Requestor management page in the SMA to check the requestors.
To perform a rolling restart of all servers, we follow the same steps: disable each server in the load balancer and quiesce it. After all users have migrated to another server, the server can be restarted.

Pega 7 supports split schema database architecture which is useful to perform PRPC upgrades that minimally impact the user experience. The split schema separates rules and data by saving them into separate schemas. Splitting schemas enables minimal to zero down time during PRPC, application, and framework upgrades or patch installation. PRPC System Administrators can install and upgrade a new rules schema in the production database while the old schema is still in use. Pega 7 high availability features can then be used to move users from the old rule schema to the new schema on the production system to complete the upgrade.
The steps to do this upgrade are:
1. Freeze rule development in the existing system
2. Create a new schema
3. Migrate rules from the old schema to the new schema
4. Upgrade the new schema to a new PRPC release
5. Update the data instances in the existing data schema
6. Modify the DB connections to point to the new schema
7. Quiesce one server after another to perform a rolling start
Pega Mobile Client - The Pega Mobile Client is available in a generic form in the app store as well as in a more customizable form using the Build from PRPC functionality.
Pega Mobile Offline option - The ability to use some features of a Pega application in offline mode (when not connected to the internet).
Pega Mobile Mashup SDK - Supports both the iOS and Android Development kits so that the Pega application can be embedded in any custom iOS or Android Application.
Pega AMP - Pega AMP (Application Mobility Platform) is a platform for building, integrating, and managing native and hybrid mobile applications with PRPC. Pega AMP consists of a communications framework, a set of APIs, and services for mobile-specific tasks such as authentication, integration, GPS position reporting, and push notifications.
Pega AMP Manager - Pega AMP Manager provides the ability to manage users, devices, apps, and backend mobile services in an enterprise. Pega AMP Manager is the main component in implementing both Mobile Application Management (MAM) and Mobile Device Management (MDM) services.

Pega applications can be accessed as a mobile application without any additional development effort. All Pega 7 applications use dynamic layouts and the skin rule can be configured to be responsive and adjust the layout based on the screen resolution. All prior PRPC versions require that you use a special ruleset to render the application correctly on mobile devices. The mobile ruleset is also useful in Pega 7 applications because you can enable device specific functionality such as location services, camera, etc.
So what are the different ways a user can access Pega applications?
1. Open a browser such as Safari or Chrome from your device (iPhone or iPad or any other android device).
2. Download the Pega 7 app from Apple iTunes or Google Play store.
3. Build and then distribute the custom app for the Pega application.
4. Embed the Pega app inside another mobile app using the Mashup Model.
Let's look at the first two options here and then we will learn about building a custom native app and Mashup model.
Open Pega as a Mobile App in Browser
The simplest way is to open the browser on your device and enter the URL of the application. Once the page opens, we can create a shortcut using the Add to Home Screen option. This option is available on both iOS and Android devices.
The Add to Home Screen option provides the ability to define a name for the application, and that name is used as the label for the shortcut icon. In newer releases, PRPC offers a quicker way to access the application: in the Designer Studio, click the About PRPC link in the lower part of the window.
When clicked, it opens the About PRPC page, which shows the bar code that can be scanned to open the URL without entering it manually.

Pega applications built on Pega 7 can be accessed directly in a browser automatically; applications built on prior versions require an additional step. Pega 7 applications have been modified, especially the user portals (case manager), to handle different screen resolutions automatically. If Pega 7 applications use dynamic layouts, the responsiveness feature goes a long way in making sure the application renders appropriately.
Dynamic layouts are new to Pega 7, so all applications built in prior versions require a special ruleset (the PegaMobile ruleset). The Pega Mobile ruleset is shipped as a jar file and, if the application requires it, administrators can use the Import wizard to import it.
Using the Pega Mobile ruleset helps in the following situations:
Adding mobile device support for PRPC 6.2 and 6.3.
Getting access to specific mobile functionality that is not yet available in Pega 7 out of the box.
Open in the Pega 7 App
The Pega 7 mobile app shipped by the product team is a quicker way to access a Pega application than building a custom native app. The current version of the Pega 7 app can be downloaded from the Apple App Store or from the Google Play Store.
All Pega applications built on Pega 7 are enabled to be accessed using the Pega 7 mobile app. The newer release also shows the QR code directly, so we can use the mobile device to scan the code to launch the app.

There are some advantages to using an app rather than accessing the application as a web application. The app can be either the Pega 7 app or a custom mobile app (which we discuss shortly).
Access to device capabilities - we can use device features such as geo-location services, push notifications, and so on.
Offline Access - the application can be accessed even when the device is not connected to the Internet. Functionality is limited in offline mode, but it does allow many common actions such as creating a new work item, editing an existing work item, and submitting work.

Some customers prefer to use a custom mobile application instead of the Pega 7 app. To do this, the application rule needs to be modified to select Custom mobile app (which requires a license). The application rule can then be configured to set the application name, URL, and so on.
There are two important things to note. The first is to enable Push notifications, which send notifications to the mobile device. The Push notification content is configured in the flow using a smart shape.
The second is to customize the icon and splash screens to use different images than the Pega 7 defaults. The Help icon next to the label opens a PDN page that provides the Assets zip file. The development team can modify the images and upload them.
After updating the assets file, we can use the iOS Settings and Android Settings sections to make necessary changes that are required to create a build and publish it as an app in the Apple Store and Google play store.
The build process for the hybrid mobile app uses a hosted build server, which is a licensed product. There are two system-wide settings that we need to put in place based on our separate build server account: the build URL and the authentication profile.
1. Dynamic System Settings
The build server URL is entered in the pega/mobilebuild/baseurl setting, defined as part of the Pega-AppDefinition ruleset.
2. Authentication Profile: Pega ships a default profile named MobileBuildServerAuthentication, which should be modified to set the authentication scheme, user name, build server host name and so on.
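As a rough sketch, the two system-wide settings might be recorded like this (the URL and account details below are placeholders, not shipped values):

```
# Dynamic System Setting (Pega-AppDefinition ruleset)
pega/mobilebuild/baseurl = https://build.example.com/    # placeholder build server URL

# Authentication Profile: MobileBuildServerAuthentication
# Set the authentication scheme, user name, and build server host name
# to match your own build server account.
```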
Distributing Hybrid Apps
Hybrid apps can be built using the shipped build server and then hosted in the Apple or Google app stores respectively. However, enterprises may have reservations about hosting custom-built apps in public app stores. To help alleviate these concerns, Pega offers the ability to use the AMP (Application Mobility Platform) Manager to host and distribute the apps.
When using AMP, Pega offers the ability to push apps directly to devices instead of people pulling them from the app store. Administrators or lead developers managing AMP get a user portal that provides various pages that help them to manage mobile apps, manage devices, manage users and so on. The dashboard gives a holistic view of how many users, devices and apps are set up.

An Administrator creates user groups and each group has apps assigned to them.
Users can then be added to the group. Users then receive a notification on their mobile device to install the app automatically.
The administrator can remove users from the group and the app is removed from the user's mobile device automatically.
AMP Manager is separately licensed software. These fields in the application rule are used for communication between the Designer Studio and the AMP Manager when pushing or pulling user rights and distributing your mobile application across various mobile devices.
In addition to this option, the customized app can also be distributed by using a smart banner. We can create a banner that appears in mobile web browsers prompting users to download the application from the app store. Again the icon can be customized to show a different image.

The last mode of accessing a Pega application is by embedding it inside an existing mobile app built using other technologies. Pega applications can be launched from within a native app built with the iOS or Android development kits. This mashup approach also supports access to device capabilities.
This is relevant when an existing mobile app has many different functions, but only some part of that is a Pega process. For example, the custom mobile app is a complete customer-self-service app where only one part of the process (update billing information or requesting a refund) is controlled through a Pega process on the backend. That one piece of the process can be seamlessly integrated into the existing mobile app without new development efforts.
So the Pega 7 screen that we see here in this rule form can be embedded inside a custom mobile application.
This form is presented to users when they press the Report Accident button on their phone. Pega Mobile ships a mashup SDK which supports both Java and Objective C (for Android and iOS respectively). Native App developers can import the jar file into their environment and then use it as a bridge between the native app and web view. Pega applications are still launched as a web view inside the native app.
The Pega Cloud is built as a highly available and redundant architecture scaled to fit the customer's requirements.
Standard deployment consists of VPN and load balancers, multiple application servers scaled horizontally, a Primary Database, and a Secondary Database for DR (Disaster Recovery) purposes.
Each Pega customer gets a dedicated Pega Private Cloud, and Pega Cloud uses AWS (Amazon Web Services) as the IaaS (Infrastructure as a Service) provider. The Pega Cloud offering builds additional security layers on top of AWS to secure data in an encrypted form using Pega Cloud Cryptology. Data stored on disk (log files, data files, caches, and so on) is stored in an encrypted format. The key to decrypt is stored in memory, not on the disk itself. The Pega database can also be optionally encrypted using JCE (Java Cryptography Extension).
The Pega Cloud firewall restricts users from breaking into the Pega Cloud by using a three-tier firewall and the Pega Cloud encrypted overlay creates distance from IaaS and enables encryption in server traffic.
Pega Cloud provides automatic nightly backups and a quicker decommissioning process if the client decides to discontinue using the cloud.

Shared layer: The shared layer consists of the base PRPC that comes as part of the installation. All tenants share the same PRPC version; when a new version is released, the upgrade happens at the shared layer, and all tenants move to the new version when the shared layer is upgraded.
All PRPC frameworks are also stored in the shared layer. All tenants see the frameworks and hence the tenants are determined based on the frameworks that they use.
Tenant Layers: All application development done by the development team is stored in the tenant layer. The shared layer does not usually store application rules, even those common to different tenants. The primary reason is that any information in the shared layer is accessible by all tenants; information that should be shared by only some tenants must instead be replicated across those tenant layers. This is not a limitation per se, but it helps ensure that the data in a tenant layer applies only to that tenant, while the data in the shared layer applies to all tenants.
However, if we need to publish a shared application, we can create and store a multi-tenant implementation application in the shared layer. This application is shared by all tenants, who can customize its content if required; however, the data they create in their tenants is not shareable with other tenants. Multi-tenant administrators provide a non-multitenant system for developers to build this implementation layer, which the multi-tenant administrators then move to the shared layer.

When accessing a Pega application hosted on a cloud system, the debugging tools (Tracer and Clipboard), the System Management Application (SMA), and other performance tools behave the same way as on a regular system. You can access the log files from the Designer Studio; they are stored in the file system.
Remote tracing is allowed, so developers can log in and trace the session of another user logged in to the system. Similarly, the System Management Application displays all requestors logged into the cloud instance, and we can run the Tracer and Clipboard or view performance statistics for a specific user.
Multitenant systems work a little differently in a Cloud deployment. This is a sample URL for two separate tenants.
Both URLs are identical except for the string that appears right after PRServlet. That string identifies the Tenant ID; when the user logs in, the Tenant ID is used to load the data specific to that tenant from the Pega database tables.
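For illustration, two tenant URLs could look like the following, differing only in the string after PRServlet (the host name and tenant identifiers here are hypothetical):

```
http://mthost.example.com/prweb/PRServlet/TenantA
http://mthost.example.com/prweb/PRServlet/TenantB
```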
When hosting an application on a multitenant system, we get administrative access at the shared layer to access the application. The System Management Application, remote tracing, and a few other features are disabled when logging in as a tenant.
When we open the SMA for the MTTesting server, we see all the tenants logged in to the server. The Requestor page displays the tenant name to the left of the Requestor ID to identify the tenant.
There are four main roles involved in the deployment process.
Development Manager or Build Master
Leads the application and ruleset version planning efforts. This is of particular importance for multi-stream development.
Responsible for creating and versioning the deployment infrastructure: Application rules, rulesets, product rules and access groups.
Creates branches for development teams.
Merges branches back into the trunk.
Oversees creation of the deployment package.
Verifies version numbers of exported rulesets.
Coordinates the transfer of rules with System Administrator.
System Administrator
Imports the deployment package into target systems.
Performs additional configuration if necessary according to the Lead System Architect
Runs verification tests to ensure package is deployed correctly.
Database Administrator
Works with the System Administrator to identify tables/schema changes which must be deployed into the next system along with the application assets.
Lead System Architect
Lead developer on project.
Knows the intricacies of the application and assists in defining rules, data, and work instances for the product rule to create the deployment package.
May ensure all rules have been checked in by other developers.
May oversee the creation of certain test cases prior to deployment to QA.
Roles may vary based on each organization and some responsibilities may be carried out by the same individual.

As a best practice, we recommend that rulesets and data instances be included in the same product definition. In some specific cases, for example if you have a lot of data table content or if the data follows a different lifecycle than the rules, a separate Product rule might be appropriate to hold the data instances. If you decide to separate the data, make sure to balance the number of Product rules versus any complication of delivering the data.
Changes to data instances must be reported from the development team to the Development Manager before the release. The recommended communication vehicle is the Product rule itself, which should be created at the same time as the application ruleset versions.
Any new or updated instances should be listed directly in the Product rule by the developer who added or updated it. The Development Manager might need to extend this process for special cases, such as when a data instance needs to be deleted.

Create a Deployment Package
It is the responsibility of the Development Manager or Build Master to supervise the creation of the deployment package.
Before the planned release, the structure for the next release needs to be set up and the creation of the deployment package initiated. The following tasks need to be performed by the development team:
Create a release document in collaboration with the release engineer
Create versions of the application rulesets for the next release
Create a new application rule for the next release, if required
Create a product rule for the next release
Point the developers' access group to the next release
Make sure that all rules in the release are checked in
Lock the versions of the release's application rulesets
Make sure the smoke tests are executed
Validate that the Product rule contains all changes
Export the Product to the deployment package
If applicable, export SQL DDL scripts approved for the release version to the deployment package
If applicable, export PRPC hotfixes approved for the release version to the deployment package
If applicable, export environment change information and resources in the release version to the deployment package
Finalize the deployment package and upload it to the release repository
In all exception cases, the release plan needs to be amended accordingly.
We recommend that the delivery package be created as a single zip file containing all information and resources needed to execute the deployment of the release to a higher environment. The archive file should be versioned and stored in a repository. Subsequent drops to the environment are typically incremental. In other words, only the fixes are promoted with subsequent deployments.
We recommend that the deployment package archive file contains the release documents and one or more subfolders with the deployment artifacts.

The release document contains the release information, such as content, and installation and smoke test instructions.
If security policies allow, database schema changes can be applied automatically by the import process using information in the archive file. In other cases, this process needs to be done with the involvement of the Database Administrator (DBA). The SQL Script folder contains the SQL files provided by the development team. These files contain all the database modifications required for the release. Follow this naming convention "ABC_XX_XX_XX.sql", where ABC is the file name, for example, install or update, and XX_XX_XX is the current release version. Try to package the SQL commands in as few files as possible to ease deployment. If multiple files are necessary, mention the order of execution in the release document.
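Under that convention, a 01.02.03 release might ship scripts named as follows (example file names only):

```
install_01_02_03.sql
update_01_02_03.sql
```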
One or more Product files are provided by the development team. Follow this naming convention "" where ABC is the application name, XX_XX_XX the build version, and YYYYMMDDHHMM the timestamp. The timestamp can be useful to track differences in data instances.
Place changes to the environment, such as PRPC hotfixes, libraries, and Java property files, in the Environment Specific folder. Use subfolders to organize the different types of environmental changes. Do not forget to include instructions in the release document.

Deploying a Release
The System Administrator supervises deployment on the target systems. The following steps, typically outlined in a release plan, are required to prepare for the deployment on the target system:
1. Obtain the deployment package from the repository.
2. Read the release document.
3. Perform a backup of the database.
4. If necessary, perform a backup of the node.
When the preparation is complete, the actual deployment can start:
5. Apply the environment changes as described in the release document.
6. Execute the contents of the SQL file against the database.
7. Import the product archive file into the target system.
8. Copy the PegaRULES log file containing the import logs and store it for future reference.
The import tool displays a list of all rule or data instances where the IDs are the same but the update times differ. Normally, rules should never appear in this list since the rulesets should be locked on the target system. If rules do appear on the list, it should be investigated, since it probably means that someone has unlocked the ruleset and made changes. Verify the data instances that will be replaced before selecting overwrite existing data instances to complete the import.
Execute the smoke tests as described in the release document when the product file has been successfully imported. If the smoke tests fail and the product requires rebuilding, return to the development environment, create a new patch ruleset version, make the change, and create a new product file.
In certain situations you might need to revert the database to the backup taken prior to importing the new release. In that case it is important to understand whether cases have been created or updated since the backup was taken, and to decide on a strategy for handling those.
There are several tools available to support the application deployment process. In the Senior System Architect (SSA) course we looked at the Application Packaging wizard and the Import and Export utilities. In this lesson we'll continue and look at the Migrating Cases wizard and how to import and export applications using the Command Line tool. We'll also look at how applications can be migrated automatically to their target systems using the Product Migration wizard and how the rulebase on two systems can be compared using the Rulebase Compare tool.
At the end of this lesson, you should be able to:
Migrate Cases
Migrate an Application using the Command Line Tool
Use the Product Migration Wizard
Compare the Rulebase of two systems
Common Use Cases for Case Migration
There are several situations in which you might want to migrate cases from one system to another. For example:
If you need to investigate a reported issue on a production system you might migrate cases from the production system to a test system.
If cases are part of the application itself. For example, the lessons in Pega Academy are modeled as work objects; such cases need to be migrated as they are promoted from development through to production.
If applications are used offline. Imagine a complaint application used on cruise ships. When the cruise ships are out at sea complaints are entered in the system and when the ship reaches a harbor the complaint work objects are packaged and uploaded to the master system for further processing by the customer care team.
The Package Work wizard enables us to quickly create a product rule, also called a RAP file, which contains cases, assignments, history, and attachments. Start the Package Work wizard by selecting DesignerStudio > Application > Distribution > Package Work. The wizard consists of three steps: Enter Description, Enter Criteria, and Select Work Classes.
First we need to enter the name and version of the product rule to be created. The text entered in the description field appears on the history tab of the product rule. We also need to specify the ruleset and version in which we want to create the product rule.
In the next step we select the work pool that contains the work classes to be included in the product rule.
In addition to the work item instances, it is also possible to include assignments, attachments, and history instances in the specified work classes.
In the last step we need to select the work classes in the work pool we want to include in the product rule.
A Work ID Range must be provided. Make sure to enter the correct prefix. If the prefix is incorrect or missing, the work items are not included in the archive file.
The final screen shows the product rule generated.
It is possible to start the Product Migration wizard using the Start Migration wizard button.
Let's have a look at the product rule generated by the Package Work wizard. As expected, there are no applications or rulesets included.
The class instances section contains the instances selected in the wizard.
The first three lines are related to the work item itself. The first line specifies the work class as selected in the wizard. The second line contains the Link-Folder class, which defines the work items that belong to a folder. The third line contains the Index-WorkPartyURI class, which allows reporting and searching for work by party.
The next ten lines include assignment instances related to the work objects.
The following seven lines include work object attachment instances.
The last line for the class instances includes the work object history.
The ID counters for the ID prefixes are stored in a separate database table. The values of the counters for the included work object classes are included to ensure that no duplicate IDs are generated on the target system.
You can create the archive file using the Create Product File button.
In addition to the product rule the Package Work wizard also creates a set of when rules that are specified in the when filter fields. They are used to filter instances to make sure that only items relevant to the included cases of the classes are included.
The naming of the when rules follows the same pattern: Include, then the type of class it filters, for example, Work, then _ followed by the name of the product rule, in our case Candidates, and then _ followed by the product rule version. For example: IncludeWork_Candidates_01-01-01.
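As a quick illustration (plain Python, not Pega code), the generated name can be assembled from its three parts:

```python
# Build the generated when rule name described above:
# "Include" + filtered class type + "_" + product rule name + "_" + product rule version
def include_when_rule_name(class_type, product_name, product_version):
    return "Include{}_{}_{}".format(class_type, product_name, product_version)

# The example from the text:
print(include_when_rule_name("Work", "Candidates", "01-01-01"))
# prints IncludeWork_Candidates_01-01-01
```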
Use the Command Line Tool
The PRPC utility command line tool is part of the software distribution zip file. We can use it to import and export rules.
The command line tool should only be used when scripting implementations. Otherwise, use the import and export functionality in the Designer Studio.
The utility command line tool works directly on the database, the source or target systems do not need to be running when the script executes.
Extract the content of the software distribution zip file into a directory.
You need to have a JDK, Java 5 or higher, installed to run the command line tools
The path to the JDK must be defined in a JAVA_HOME environment variable
The target database vendor's JDBC driver Jar file must be available along with the information required to connect to the database
The utility command line tool files are located in the utils directory, inside the scripts directory. The prpcUtils.properties file needs to be updated with the parameters required for the utility to run.
The database connection details in the common section are required for all utilities.
Typically you want to use a user's access group to determine runtime context rather than use the default App Requestor's access group. Specify the operator ID and password in the pega.user.username and pega.user.password properties.
There are two scripts available: prpcUtils.bat for Windows and prpcUtils.sh for Unix derivatives. The command to run a utility looks like this (Unix shown here):
./prpcUtils.sh <utility> [--driverClass classname] [--driverJAR jarfile] [--dbType name] [--dbURL jdbc_url] [--dbUser username] [--dbPassword password]
Where the utility parameter is mandatory and can be one of the following:
importPegaArchive - import an archive file
importCodeArchive - imports code instances
importAppBundle - imports an application bundle
export - exports an archive
exportRAP - exports a Rule-Admin-Product (RAP)
scanInvalidRules - fetches all invalid rules present in an application
runagent - starts an agent
The database parameters are only needed if they are not provided in the properties file. If the parameters are provided on the command line, they override the ones specified in the properties file.
After supplying these parameters, the utility runs an ANT script, which performs the required actions based on the settings in the properties file. The ANT script is defined in the prpcUtils.xml file in the same directory. The script puts the generated logs in a logs directory inside the scripts directory.
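For example, an export run with the database parameters supplied on the command line might look like this; the script path, driver class, JDBC URL, and credentials are all placeholders to adapt to your environment:

```
./prpcUtils.sh export \
    --driverClass org.postgresql.Driver \
    --driverJAR /opt/jdbc/postgresql.jar \
    --dbType postgres \
    --dbURL jdbc:postgresql://dbhost:5432/prpc \
    --dbUser pegauser \
    --dbPassword secret
```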
Each utility has its own options in the file. Let's start by having a look at the import tool.
The Import Tool
Each of the three import commands has its own options in the file. All import commands require the full path for the file to be imported to be specified.
The importPegaArchive command is used to import archive files, typically created from a product rule or the export landing page.
Three modes are supported for the import.mode property.
install - does not update existing instances, but only imports new ones. A message is written in the log for each instance that already exists.
import - updates existing instances and removes duplicates.
hotfix - updates existing instances and removes duplicates only if the rules to be imported are newer than the existing ones.
We can define whether we want the import to fail on an error in the import.nofailonerror property. Never disable the inference engine. Specify how many records to import between each database commit in the import.commit.count property. Always leave the property empty.
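A minimal sketch of the import section of the properties file, using only the property names mentioned above (the values shown are examples, not defaults):

```
import.mode=import          # one of: install | import | hotfix
import.nofailonerror=false  # false: the import fails when an error occurs
import.commit.count=100     # records imported between database commits
```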
The importCodeArchive command is used to import code instances into the system.
Leave the import.code.mode property set to its default value, as the system determines which mode to use. Specify the codeset name and version in the import.codeset.name and import.codeset.version properties. Leave the import.codeset.patchdate property commented out; it is set by the system.
The importAppBundle command is used to import application bundles into the system.
An application bundle is an archive, similar to the archives produced by product rules or the export landing page. However, an application bundle contains an XML document known as the manifest that defines the order in which rulesets, rules, and other items in the bundle are imported. Application bundles are typically used to install a Pegasystems solution framework.
The import.slow.install property can be used when there are issues with the database driver. We also need to specify, in the relevant property, whether we want a report generated. Use the import.compile.libraries property to specify whether imported libraries should be compiled or not.
The Export Tool
Both the export commands require the full path to the exported archive to be specified.
If the exportRAP command is used the only relevant property is the export.archive.productkey which identifies a Rule-Admin-Product instance by its pzInsKey.
Use the View XML option in the actions menu to get an XML representation of the product rule showing the pzInsKey.
The rest of the properties in the export tool section apply to the export command.
It is mandatory to specify the classes to be included in the export.classes.included property, unless exporting a list of pzInsKeys. Enter the classes to include separated by commas. We must specify whether we want descendants of the classes included in the export.included.descendent property.
The properties export.classes.excluded and export.excluded.descendent allow us to filter out specific classes, and the properties export.startVersion and export.endVersion allow us to specify the ruleset version range. Always leave the export.template.file property empty.
We use the export.keys.file property if we want to export a list of pzInsKeys. Enter one pzInsKey per line and do not provide other properties such as export.classes.included when using this option.
The property allows us to specify a ruleset name to include. In the last property we have the option to preserve the lock details on exported rules that are checked out. Use this only if the corresponding locked instance is moved.
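A sketch of the export section, again using only the property names from the text (the class name and version range shown are hypothetical):

```
export.classes.included=MyCo-HR-Work-Candidate   # comma-separated list of classes
export.included.descendent=true                  # include descendants of these classes
export.classes.excluded=
export.excluded.descendent=false
export.startVersion=01-01-01                     # ruleset version range
export.endVersion=01-01-99
export.template.file=                            # always leave empty
```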
Use the Product Migration Wizard
The Migrate Product wizard lets us automatically archive, migrate, and import a product rule also called RAP (Rule-Admin-Product) to one or more destination systems.
This can be very useful when, for example, moving a product from the development system to one or more test systems. The wizard eliminates the need to:
Create the archive file on the source system.
Upload the archive file into the target systems
Log into each destination system and manually import the archive
Select DesignerStudio > Application > Distribution > Migrate Product to start the wizard.
First we need to select the name and version of the product we want to migrate. If the archive was already created and exists on the server we can select use existing file to eliminate the need to rebuild it.
Next we need to specify the target systems. The default server context root is prweb, but it may have been changed. It is possible to use HTTPS for the transfer.
Click Next to continue to the next screen.
Enter the username and password needed to authenticate access to each of the target systems. Click Finish to submit the request and start the migration process.
1. The product archive file is created on the source system. It uses the following naming convention: product name _ version number - patch number. In our case, no patch number is available.
2. A covered work item is created for each target system. The process then attempts to make a connection to each target system and places the archive file in its ServiceExport directory.
3. The target system returns a success message if the connection succeeds and the file successfully loads onto the server and is imported. The source system resolves the work item upon receiving the success message.
Migration Failures
If the connection fails or the file is not loaded due to a system error the work item stays unresolved.
It is possible to either retry the connection or cancel it, which withdraws and resolves the work item.
Save Target Systems
Rather than having to re-enter the host name, context root, and port number each time we submit a migration request, it is possible to create a list of saved target systems by creating an instance of the class Data-Admin-System-Targets.
The list of saved target systems appears the next time the Migrate Product wizard starts.
Upgrade Innovation Centers
Before we commit to performing our own upgrade, it is worth mentioning the Upgrade Innovation Centers. These centers provide a service offering that focuses on upgrading existing systems and bringing them up to date with the latest features. We should first evaluate whether leveraging one of these centers is in the best interest of our business.
For the sake of this lesson, we'll assume we are not leveraging a UIC and will be doing the upgrade ourselves.
Prior to starting an upgrade
Before we can upgrade our system, we need to identify the version of our existing system, and the platform we are upgrading from. We also need to determine if we're going to be doing an in-place upgrade or a parallel upgrade. The recommended best practice is to perform an in-place upgrade when possible; however, some situations may require a parallel upgrade. We will describe the process for parallel upgrades a little later in this lesson. For now, let's focus on an in-place upgrade as it is the best practice.
Upgrade guides
Within the release media, and available on the PDN with the deployment guides, are two upgrade guides we can use to perform our upgrade:
PRPC 7.1.X Upgrade Guide
PRPC 7.1.X Upgrade Guide for DB2-z/OS
The second one is specific to DB2-z/OS installations; all other installations use the first upgrade guide. These guides should be reviewed prior to initiating any upgrade.
Other considerations
Wait, the upgrade guides aren't the only things we need to consider. Several other factors need to be evaluated before we can commit to upgrading a system.
Is there a framework installed? Some PRPC frameworks may not yet be compatible with the newest version of PRPC. We cannot commit to upgrading a system until the framework is also ready to be upgraded.
Are there multiple applications on the system? In some instances, multiple applications may be concurrently installed in a system. While not as prevalent in Production systems, unless we're supporting multi-tenancy, this is a common occurrence in Development systems. We cannot commit to an in-place upgrade of a system unless all of the applications on the system are ready to upgrade.
Is there active development in progress? We want to plan our upgrades for when there is sufficient time and resources to regression test our applications on the new system. This helps identify whether any customizations made in the system are no longer compatible with the latest version of PRPC. Some businesses request that the upgrade be implemented at the same time as a new development release, but this often becomes problematic because we cannot attribute any issues that arise to either the upgrade or the new release. Therefore, we want to make sure our upgrades are done separately from any new development.
Is the business ready for an upgrade? It is also important to identify if the business has the bandwidth for an upgrade. Most businesses go through cyclical seasons of high and low activity. We want to ensure our upgrade coincides with one of the business's low seasons. For instance, we would not want to upgrade an accounting application during the critical tax season. Nor would it be a good idea to upgrade a call center just before a new, highly anticipated product launch.

Plan the upgrade
When committing to an upgrade, we should ensure we have a plan in place that the whole team is following. Both development and the business need to be aware that an upgrade is being performed, as well as commit resources to test and address any issues that might arise. Upgrades should always be performed in the lowest environments first, such as a sandbox or the development system, and then propagated through the environments similar to any other release.
Run the guardrail reports
Prior to implementing an upgrade, the guardrail reports should be run for all the applications on the server. These reports identify rules that do not follow the PRPC guardrails and as such might encounter issues during an upgrade. The best approach is to have the development team address any of these items prior to the upgrade. This is not always possible, as occasionally a specific business need requires breaking a guardrail. For those instances, careful notes should be taken on which rules are outside the guardrails and how they have been implemented, so that the development team can specifically target these rules post-upgrade and validate that there are no recurring issues.
Backup, backup, backup
There is never a guarantee when we're dealing with changing an underlying system. Before any upgrade, we should always ensure we've taken a backup of both the database and the application server files. This shouldn't be new to us, as we should be taking backups before any migration anyway. We just need to ensure we also back up our ear or war and any configuration files at the same time.
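As a rough illustration of the backup step, the sketch below archives the application server artifacts (ear/war plus configuration files) with a timestamp. The paths and the database dump command are assumptions; substitute your own DB vendor's backup tool and your real deployment paths.

```shell
# Hypothetical pre-upgrade backup sketch. Paths are assumptions, and the
# database backup command depends entirely on your DB vendor.
backup_prpc() {
  local app_home="$1"    # directory holding the ear/war and config files
  local backup_dir="$2"  # where the backup archives should be written
  local stamp
  stamp="$(date +%Y%m%d%H%M%S)"
  mkdir -p "$backup_dir"

  # 1. Database backup (vendor-specific; shown here as a commented example)
  # pg_dump -Fc pegarules > "$backup_dir/pegarules-$stamp.dump"

  # 2. Application server artifacts: ear/war and configuration files
  tar -czf "$backup_dir/prpc-files-$stamp.tar.gz" -C "$app_home" .
}
```

The key point is simply that the database and the file-system artifacts are captured together, at the same moment, so they can be restored as a matching pair.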

The steps of an in-place upgrade
When implementing an in-place upgrade, we should follow the published upgrade guide for this release. The guide will provide the details of what needs to be done to upgrade our system, but let's review the high level process now.
First, we need to validate our system will be compatible with the upgraded version. As new PRPC versions are released, occasionally backwards compatibility with all versions of application servers or databases cannot be maintained. We should consult the platform support guide to ensure the new version can run on the existing environment. If not, we will need to do a parallel upgrade.
Stop the existing applications from running.
There are occasionally some pre-upgrade scripts or processes that will need to run. This is to prepare systems that may have gone through multiple upgrades and ensure they are in the expected state for our upgrade.
The next step is to upgrade the database. This can either be done automatically, using the Installation and Upgrade Assistant (IUA) or manually by running the provided scripts. The best practice is to use the IUA for this task.
After the database is updated, we then need to import the new rulebase. This can also be done automatically using the IUA or manually via scripts. Again, the best practice is to use the IUA.
The next step is to deploy the new archives. We would first undeploy any existing war or ear files and then replace them with the new ear and war files. Follow the steps in the installation guides for deploying the ear or war.
Once the archives are in place, we can perform any configuration changes in the environment to support the new installation. If we upgraded using the manual scripts, we should first log into the system and run the Upgrade Wizard. This process ensures the additional steps necessary, such as updating rule types, have been completed.
At this point, we should be ready to address our applications. Our current applications have all been created based on the previous version of PRPC. The development team should lock and roll their Rulesets to a new version prior to continuing any new development, so that it is easily identifiable which version of the application is compatible with this upgraded version of PRPC. During this process, they should ensure their application rule is also updated to be built on the latest version of the PRPC rulesets.
Next, we work with the development team to run the upgrade tools available in the designer studio. These tools attempt to automatically update the existing application to take advantage of new features, such as an updated CSS or to point a deprecated rule at its replacement.
The system should now be ready to be validated. Validating an upgrade is covered later in this lesson.

The steps of a parallel upgrade
When necessary, especially in the case of multiple applications, we may need to take a parallel upgrade strategy. This process leaves the existing system intact, and implements the upgrade in a parallel system. Instead of following the upgrade guides for our system, a parallel upgrade requires us to install a new instance of PRPC using the installation guide. The lesson on new installations covers this process.
After the new system is installed
Once our new system has been installed and verified, we need to start migrating our rulebase and, in the case of production, our cases.
First we need to implement any customizations that have been done in the existing system into the new system, such as encryption, additional database tables, JNDI connections, certificates for SOAP and other connections, etc...
Next we migrate the application, just like we would for promotion through the environments. We can then install the application package into the new instance we've just created.
We will also need to migrate the data records, such as operator ids, business calendars, organization data, etc... These can be packaged and migrated just like we did for the application.
If this is a production system, we should migrate the cases from the existing system into the new instance. It is a best practice to also perform this migration of cases in one of the earlier environments, such as Preproduction or QA, to validate the process includes all the necessary case data.
In most instances we should be ready to begin validating our installation. Occasionally, the lead system architect (LSA) may have identified additional steps necessary for this specific implementation. Work with the development team to ensure there are no additional steps required.
Once the development and business teams have validated the new system, we can allow the users access to the new system. This can be done by either:
o Updating the DNS servers so that the new server now receives all requests for the previous URL (preferred)
o Updating the links the users use to access the system to point to the new URL.

Validating an upgrade
Whether we did an in-place upgrade or a parallel upgrade, we need to validate the upgrade was successful. We do this by having the development/QA team run their suite of regression tests against the new instance.
What to do if there's an issue?
In most cases, if an issue arises, it is due to the development team's customization of a rule that was outside the guardrails, as identified before the upgrade. The development team will need to address the customization and determine what changes need to occur in order to fix the issue.
The second most common issue with an upgrade is attempting to upgrade a PRPC system without upgrading the associated framework. This can come from staggering both upgrades. It is important to upgrade both PRPC and any installed frameworks in one shot. Failure to do so can potentially corrupt any existing cases.
Rarely, an issue occurs that cannot be immediately fixed. In these cases, we should first rollback to the previous version. A careful analysis of what went wrong will then need to take place before we commit to implementing the upgrade again.
Plan Performance Testing in Development Systems
Pega 7 offers several tools that help us to validate the application we are building meets the desired performance guidelines. It is highly recommended that we use these tools periodically to ensure the application is performing well.
After implementing any major functionality (user story), developers should perform the following steps to test application performance.
1. View Guardrail reports to verify that the application does not have any warnings.
2. Run Tracer with the performance settings enabled and check to see if there are any fixes we should make to improve the application.
3. Run through the entire application by creating a new case, navigating through all screens and then resolve the case. Then use the My Alerts tool to check the alerts that are captured in the current session.
4. Run PAL and then take readings for each interaction.
Before looking at each of these steps in detail, these are some points to consider during performance testing on development systems.
1. When looking at PAL readings, focus on readings that provide Counts. Counts are always accurate and hence more reliable than the timers. If we repeat the same test again and again, the counts (for example, Total Number of rules executed) always remain the same while the timers (for example, Total Elapsed time for the reading) might vary between tests.
2. Focus on Counts in the Alert logs as well. When looking at alert logs, times may not make sense when similar testing is done in production systems due to differences in the infrastructure, data volume, concurrent users, and so on. In addition to counts, pay close attention to all alerts caused by database interactions. This might suggest where to define indexes, tune queries and so on.
3. Less is better: Pega 7 offers a wide variety of features. For example, we can use SQL functions in report definitions to filter results or present data in a specific format, sort on any number of columns, or query an unexposed column. It is always important to make sure a feature is required and to use it only when it is appropriate, because these features do impact performance. Be sure to use them judiciously.
4. Testing early helps to address issues quicker. Use tracer, look at alert logs, and take PAL readings frequently. These are as important as creating applications to meet the requirements. During the development phase, it is very easy to pay more attention to building the rules and ignore testing. Not testing performance periodically impacts delivery if the performance is not meeting expectations.
5. When saving rules, pay close attention to the rule form to make sure there are no warning messages. These warning messages do appear in guardrail reports, but it is mandatory that all developers make sure that there are no warning messages in the rules they write. At a minimum, write a justification as to why a warning cannot be avoided. This provides some context when the lead developer runs these reports.
6. PRPC is deployed in an application server and uses a database server to read and write cases, read rules, report from database tables and so on. Make sure JVM settings are configured in the application server and appropriate database indexes are defined.

Let's take a look at various tools that developers can use while testing a Pega 7 application.
1. Clipboard: The clipboard is primarily used for application debugging to check if the properties are getting accurate values, the page names are accurate and so on. The clipboard can also be used as a performance testing tool by checking the page size. The tools menu provides two options - Analyze and Collect Details.
When we select Analyze, a popup window opens displaying a table that contains information about all pages in that specific thread as well as the requestor. The most important things to check are the page size and the INFO column for all pages returning a page list.
The Collect Details option (when enabled) shows details on all pages including the ones that are deleted. After enabling Collect Details, click Analyze to see the details. Collect Details is disabled by default, and can be enabled to see which process created a page that is not expected to be there in the first place.
Enabling this flag adds a new column called CREATION STACK, which shows which rule created a page and which rule deleted it. The flag should not be enabled unless we need to look at these details.

2. Guardrail Reports: Guardrail reports play a critical role in ensuring applications are built conforming to the best practices. Pega 7's guardrail reports make a developer's life much easier: they scan all the rules in the entire application and provide a report listing the rules that do not adhere to the recommended best practices.
Pega 7 also provides a compliance score report, which gives an indication of how many rules are not complying with the recommended practices. We can apply filters to focus on a specific sprint, and we can look at the number of alerts generated by the system and how it increases over the lifecycle of the project. This report can be exported and can also be scheduled to be delivered to a set of users automatically.
We can look at the list of rules with warnings by clicking the warning count in the report, or we can use additional reports on the landing page to drill down to the next level. The compliance details report adds more context and analysis in terms of when to resolve these issues, who introduced them, and so on.

The warning summary report provides the warnings grouped by rule category and by severity. The warning summary also offers the ability to see all warnings, even those justified by developers. Having this as a filter enables the lead developer to focus on unjustified warnings before addressing the justified ones.
Expanding any of the rows displays the list of all warnings for a specific rule type which is identical to the warning details report which displays the list of all warnings along with additional filtering capabilities.

3. Tracer: Tracer is similar to the Clipboard in that it is mostly used as a debugging tool while stepping through the case. However, Tracer has been enhanced significantly in recent releases, so we can use it for analyzing performance as well. Shortly, we will learn about the various tracer settings and additional tools that are useful when interpreting tracer output.
4. PAL: The Performance Analyzer is extremely important when testing the application in development and using this tool helps us to identify potential performance bottlenecks that can occur.
5. Alert Logs: Alerts are captured in log files and as we mentioned earlier, we do not need to pay too much attention to alerts that are written when the time threshold is exceeded.
Using Tracer to test performance
Tracer is an extremely useful tool for application debugging. It traces the session in real-time and we can use this when unit testing the application to see which rules are getting executed, the value of the properties before and after the rule gets executed and the exact step where the exception occurred.
Tracer has been enhanced to report on performance statistics and we recommend that developers use this extensively in the development phase of the project. The tracer settings dialog displays options to trace all performance related information.
Let us talk about a few important settings.
Abbreviate Events - Enable this setting when using Tracer to debug performance. It limits the page output and helps Tracer run faster.
Interaction - This flag enables PAL statistics and groups the tracer output by interaction in the Tracer viewer.
Tracer captures output directly in the Tracer window. Viewing the output in this window is helpful when there are not too many steps displayed; once it goes beyond a few hundred lines, viewing the output becomes extremely difficult. Use the Tracer Viewer tool to view the output instead. Set Max Trace Events to Display to 0 so that Tracer does not write any events to the window.

After the on-screen output is disabled, continue testing normally. When all tests are completed, click the Save icon at the top, which prompts you to save the file on the local system. Use the Tracer Viewer to interpret the results. An article on the PDN explains this tool in detail.
The Tracer Viewer allows us to look at the key events by highlighting them in red. We can expand the branch and drill down to the actual step which causes the issue.
On development systems, it is often necessary to trace sessions belonging to other users. The Tracer offers the ability to pick another operator who is logged in to the system. When Remote Tracer is clicked, it displays all users who are currently logged in to the system. Selecting a user from the list starts a tracing session for that user. It is still possible to modify the tracer settings and download the tracer output for the other user's session.

The Remote Tracer link in Designer Studio allows us to trace operators logging in to that specific node. If the development system uses a multi-node clustered setup, then we should use the System Management Application (SMA) to connect to the relevant node and then trace user sessions. Running Tracer from the SMA is helpful in development systems, though it may be prudent to secure it. Pega 7 supports this by configuring the PegaDiagnosticUser role in web.xml.
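For orientation, a web.xml security constraint built around that role looks roughly like the sketch below. The element names are standard servlet-spec constructs, but the exact url-pattern for the diagnostic servlet varies by PRPC release, so treat the `/DiagnosticData` pattern here as an assumption and consult the installation guide for your version.

```xml
<!-- Sketch only: the url-pattern is an assumption; verify against the
     installation guide for your PRPC release. -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Diagnostic Data</web-resource-name>
    <url-pattern>/DiagnosticData</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>PegaDiagnosticUser</role-name>
  </auth-constraint>
</security-constraint>
<security-role>
  <role-name>PegaDiagnosticUser</role-name>
</security-role>
```

With a constraint like this in place, only authenticated users mapped to the PegaDiagnosticUser role can reach the diagnostic endpoint that remote tracing relies on.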
The most important tool a developer should run in development systems is the Performance Analyzer (PAL). Though Pega collects the performance statistics at all times, we need to use PAL tool and take readings to get this data.
Taking a PAL Reading
When we take a reading, it displays as a DELTA (the incremental difference in the statistics since the previous reading). The first step after starting PAL is to check the Int # (Interaction number). This number is usually nonzero, since PAL statistics are collected in the background, so click Reset to delete all the interactions already captured before taking a fresh reading.
After resetting it to zero, perform the actions we need to performance test. Once done, click Add Reading, which displays another row. The DELTA row displays key statistics, and we should focus on the counts, such as Rule Count, Activity Count, and total Bytes.
Clicking DELTA displays another window listing additional counters and other performance statistics. In development, pay close attention to the counters in the Rule Execution Counts section, which provides information such as how many rules are executed and the distribution by rule type (number of data transforms, declarative rules, activities, when rules, and so on). The other section we should look at is Database Access Counts, which provides information such as how many rows are being read from the BLOB (storage stream), how much data is being read from the BLOB, and so on.
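The DELTA mechanic itself is simple and can be sketched in a few lines: each reading is a cumulative snapshot, and the delta is the element-wise difference from the previous snapshot. The counter names below are hypothetical illustrations, not actual PAL field names.

```python
# Illustrative sketch of how a PAL "DELTA" row is derived. The counter
# names are hypothetical; real PAL readings expose many more statistics.
def pal_delta(previous, current):
    """Return the per-interaction difference between two cumulative readings."""
    return {key: current[key] - previous[key] for key in current}

baseline = {"rule_count": 120, "activity_count": 35, "db_bytes_read": 50_000}
after_screen = {"rule_count": 320, "activity_count": 90, "db_bytes_read": 260_000}

delta = pal_delta(baseline, after_screen)
# delta == {"rule_count": 200, "activity_count": 55, "db_bytes_read": 210_000}
```

This is why resetting the interaction count first matters: without a clean baseline, the delta mixes in whatever background activity was already captured.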
Tips on PAL Testing:
1. When testing applications, take readings for each screen. In the reading, look for factors such as number of interactions on that single screen (server trips from browser), number of bytes read and written, DB I/O count, number of rules, number of declaratives, clipboard size estimates, number of declarative indexes written.
2. Look for outliers such as 20 server trips, 8 MB of data read from BLOB, 200 rules being executed in that interaction. When we notice anything abnormal, investigate to see the details. Run Tracer to specifically identify the 200 rules, enable the DB query in Tracer to see the query and identify which column is read from the BLOB.
3. Look for the growth pattern: When testing in development systems, repeat the performance testing periodically. Watch statistics such as the number of rows being returned from the database, and whether it grows with the number of cases being created. The counts should increase with more cases in the beginning, but watch out if they keep rising after an extended period of time. This may be a sign of an issue.
4. Tracer and PAL should help us narrow down the cause of a performance issue in most cases. DB Tracer and the Performance Profiler can be used in special cases when Tracer cannot provide all the details. The DB Tracer is useful for generating a stack trace to identify a hidden problem, while the Performance Profiler can be used to trace inline when rules (when rules not explicitly referenced).
Viewing Global PAL Data
PAL, as we know, runs in the background, but how do we leverage this information other than by taking readings whenever we would like?
We can access the Performance Details screen using System> Performance > My Performance Details.
This shows the list of sessions available for that requestor. We can pick a different user in the User ID field to look at that user's performance details. This list shows only the information for the current day, since passivation is set to one day; the performance details go back only as far as the stored passivated data.
In addition to this we can also use the Log-Usage reports through the SMA. Refer to the Monitoring and Diagnosing Production systems lesson for more information on the Log-Usage reports.
Performance Alerts
Alert messages are captured in log files and can also be viewed in both the Tracer output and PAL data. Developers, however, should view the alerts using the Alerts link in the Designer Studio. Clicking the Alerts link opens a window that displays all the alerts that happened during that session. If we need more information, click All My Sessions to see additional alerts. All My Sessions displays all the alerts on the current and all other passivated sessions for that requestor (typically set as 1 day).
We can customize the data by clicking the Options link; in the additional fields that appear, we can modify the Filter by field to display alerts on a different developer's session. When viewing alert logs, identify alerts that do not involve time; in some cases, the time alert is caused by another alert. In the example, there are three different alerts on Interaction ID 18: the first is a summary alert, which is usually followed by additional alerts, and the second is a time alert. Let's look at the third alert (BLOB Size Read).
Expanding the alert, we can see additional details such as the Alert ID, Line (explaining the cause of the alert) and PAL stats.

Use the PDN to search on the Alert ID (PEGA0039) to get additional details about how to fix it. In this case the following information appears.
The pzInsKey in this case indicates an attachment is read from the BLOB and it exceeds the threshold value. Alerts, as we can see, are extremely useful in narrowing down and resolving issues.
PegaRULES Log Analyzer (PLA)
Pega ships another tool, the PLA, which is effective at listing all alerts by importing a log file. If we need to debug alerts that occurred on sessions past the passivation timeline, we must use PLA to import the alert log files, identify how frequently these alerts occur, and then determine the priority in which to fix them. Repeated occurrences of alerts can have performance implications, and the PAL data included in the alert helps us identify the key statistics.
After importing the log file in PLA, the alerts summary displays the various alerts seen and how frequently they occur. The Alert Count by Application report, which groups alerts by application, is quite useful in implementations running multiple PRPC applications.
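Conceptually, the frequency part of what PLA produces is just a count of alert IDs across the log. The sketch below shows that idea on synthetic lines; real alert log entries carry many more fields, so this is an illustration, not a replacement for PLA.

```python
import re
from collections import Counter

# Minimal sketch of the alert-frequency idea behind PLA: count how often
# each alert ID appears so the most frequent ones can be prioritized.
# The sample lines below are synthetic, not real alert log entries.
def count_alerts(lines):
    ids = re.findall(r"PEGA\d{4}", "\n".join(lines))
    return Counter(ids)

sample = [
    "2015-03-01 10:00:01 *ALERT* PEGA0005 ... query time exceeded",
    "2015-03-01 10:00:09 *ALERT* PEGA0039 ... blob size read",
    "2015-03-01 10:02:12 *ALERT* PEGA0005 ... query time exceeded",
]
print(count_alerts(sample).most_common(1))  # [('PEGA0005', 2)]
```

Sorting by frequency is exactly how we decide which alert class to fix first, since repeated alerts are the ones with cumulative performance impact.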

The following are the list of performance alerts that must be resolved if found in development systems. Most of the alert thresholds can be configured using dynamic system settings. We can adjust the values appropriately in development systems if we do not want to see too many alerts. Setting threshold values must be done cautiously since it might suppress displaying a potential performance issue.
Each alert is listed below with its alert category in parentheses.
PEGA0004 - Quantity of data received by database query exceeds limit (DB Bytes Read)
PEGA0016 - Cache reduced to target size (Cache Reduced)
PEGA0017 - Cache exceeds limit (Cache Force Reduced)
PEGA0018 - Number of PRThreads exceeds limit (PRThreads Limit)
PEGA0019 - Long-running requestor detected (Long Requestor Time)
PEGA0021 - Clipboard memory for declarative pages exceeds limit (Declarative Page Memory)
PEGA0024 - Time to load declarative network exceeds limit (Loading Declarative Network)
PEGA0025 - Performing list with blob due to non-exposed columns (Reading Blob Need Columns)
PEGA0027 - Number of rows exceeds database list limit (DB List Rows)
PEGA0028 - GC cannot reclaim memory from memory pools (Memory Pool Collection)
PEGA0029 - HTML stream size exceeds limit (HTML Stream Size)
PEGA0030 - The number of requestors for the system exceeds limit (Requestor Limit)
PEGA0031 - Generated stream overwritten and not sent to client (Stream Overwritten)
PEGA0033 - Database query length has exceeded a specified threshold (DB Query Length)
PEGA0034 - The number of declare indexes from a single interaction exceeds a threshold (Declare Index)
PEGA0035 - A Page List property has a number of elements that exceed a threshold (Clipboard List Size)
PEGA0036 - PegaRULES engine intentionally shut down (PRPC Shutdown)
PEGA0039 - The size of a BLOB column read exceeds a threshold (Blob Size Read)
PEGA0040 - BLOB size written to the database exceeds a threshold (Blob Size Written)
PEGA0041 - Work object written to the pr_other table (Work Object PR_OTHER)
PEGA0042 - Packaging of database query has exceeded operation time threshold (DB Query)
PEGA0043 - Queue waiting time is more than x for x times (Asynchronous Declare Page)
PEGA0044 - Reached threshold limit for message ID: PEGA00XX, will send again after [Date] (Throttle alert)
PEGA0045 - A new request has been submitted for a page without using the existing one (ADP Duplicate Request)
PEGA0046 - Queue entry not yet started by the load activity (ADP Queue Not Started)
PEGA0047 - Page copy time is more than the loader activity execution time for the data page (ADP Copy Too Long)
PEGA0048 - Page copy time and waiting time are more than the loader activity execution time for the data page (ADP Copy Wait Too Long)
PEGA0049 - Search status check alert. The alert identifies that there is a problem with the Lucene index, and that search results may therefore not be accurate (Search Status Check)
PEGA0050 - Lightweight list has been copied n times (Lightweight List Copy)
PEGA0052 - Wait time exceeded for the page ADP to load asynchronously (ADP Load Wait Exceeded)
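As noted above, most of these thresholds are tunable. One common mechanism is prconfig.xml entries (or equivalent dynamic system settings); the sketch below shows the general shape, but the specific setting names and default values here are assumptions, so verify the exact names for your PRPC release before relying on them.

```xml
<!-- Illustrative prconfig.xml threshold entries; the setting names and
     values are assumptions and vary by release. -->
<env name="alerts/database/operationTimeThreshold/warnMS" value="500"/>
<env name="alerts/browser/interactionTimeThreshold/warnMS" value="3000"/>
```

Raising a threshold silences the alert rather than fixing the cause, which is why the text cautions that adjusting these values in development must be done carefully.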

What should we do when alerts occur? Look up the alert ID on the PDN and open the corresponding article for that alert. The article on an individual alert gives additional information on how to handle it.

Alerts are usually one of these types:
Threshold Alerts: Alert messages written when a threshold value is exceeded; the threshold can be expressed in milliseconds, in bytes, as a whole number, and so on. Most alerts belong to this category, and fixing these alerts in development prevents issues from recurring in production. For an alert such as PEGA0021, see if the application can address it in one of these ways: expire pages after non-use, restrict the number of rows being returned, or check whether all the information stored in the data page is actually referenced and remove what is not.
Best Practice Alerts: Alert messages written when a best practice is violated. Note that alerts are not written for all best practice violations; the guardrail reports are still the primary mechanism for identifying rules violating best practices. For example, PEGA0050 indicates that a rule is copying a clipboard page list to another page list in an inefficient manner that is unnecessary in Pega 7.1. See the PDN article on this alert for more details. PEGA0025 is another example: it is written when a report reads from unexposed columns, which is also shown in the guardrail reports.
Observation Alerts: Alert messages written based on what happened or what is being observed. For example, PEGA0028 is generated when the Garbage Collection (GC) process cannot reclaim enough memory to remedy the performance impact.
Event Alerts: Alert messages that indicate an event, which may or may not be important. For example, PEGA0049 is written when there is an issue with the Lucene server. This might mean that search results will be erroneous until it is corrected; there may be a problem in the search settings, or reindexing may be required to fix the problem.
1. PegaDiffer Tool:
Consider this scenario: when testing a Pega 7 application, we notice that specific functionality is not working, and we are pretty sure it worked earlier, so something must have changed recently to cause this issue. Sound familiar? There is a fairly easy way to track down the issue. Pega offers the PegaDiffer tool, which can be downloaded from the Pega Exchange; a good user guide for the tool is available on the PDN.
This tool allows us to compare Tracer outputs. We can test the application on the current ruleset version and also create another access group to test the previous ruleset version. While testing, we run Tracer against each version and then compare the two outputs to see exactly what changed in between.
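The underlying idea of comparing two trace runs can be illustrated with a plain sequence diff. The sketch below diffs the ordered rule names from two hypothetical runs; real PegaDiffer works on the full Tracer output files, not just rule names, and the rule names shown are invented for illustration.

```python
import difflib

# Conceptual sketch of a tracer-output comparison: given the ordered rule
# names from two Tracer runs, show where the executions diverge. The rule
# names here are hypothetical examples.
def diff_rule_sequences(old_run, new_run):
    return [
        line
        for line in difflib.unified_diff(old_run, new_run, lineterm="")
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
    ]

old = ["pyStartCase", "ValidateOrder", "CalculatePrice"]
new = ["pyStartCase", "ValidateOrder", "ApplyDiscount", "CalculatePrice"]
print(diff_rule_sequences(old, new))  # ['+ApplyDiscount']
```

A "+" line marks a rule that executes only in the newer run and a "-" line marks one that disappeared, which is usually exactly the clue needed to localize what changed between ruleset versions.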
2. DB Tracer:
DB Tracer should only be run when we need additional details while tracing database issues. Tracer should always be used first to check database issues, but when more information is needed, we can use DB Tracer. DB Tracer can be started from the performance landing page: System > Performance > Database Trace.
On the Performance tab, click Play to start collecting database trace. Enable the Trace options to only trace what is required.
3. Performance Profiler:
The Profiler is useful for identifying the list of rules that are being executed and for explaining the CPU time for each step in an activity rule. In addition to activities, it also traces when rules and data transforms. The Profiler traces the whole system and impacts performance severely, so we should stop running it immediately after troubleshooting. Tracer can track all of these rules as well, so use of the Profiler is rare unless we need to trace inline when rules.
The Performance Profiler can be started from the Performance landing page.
Performance Testing Tools
When it comes to production applications, most application performance issues should already have been resolved; however, there may be some lingering issues that require us to reproduce the problem in another environment if an application fix is needed. Let's take a look at tools that a developer can use while testing a PRPC application.
1. Pega Predictive Diagnostic Cloud (PDC): When it comes to diagnosing or proactively monitoring production issues, Pega offers two separately licensed products. Pega PDC is extremely useful for proactively monitoring nodes and sending reports to the concerned users; PDC is hosted on the cloud. Autonomic Event Services (AES) is the other tool, similar to PDC except that AES is installed on premise. PegaRULES Log Analyzer (PLA) is a lightweight tool that the lead developer can install on their own system; it can be downloaded from the PDN for free.
2. System Reports: Export the results of pr_perf_stats table so it can be used for offline analysis in a non-production environment.
3. System Management application (SMA): SMA is shipped with Pega 7 and can be deployed as an EAR or WAR application. SMA can be deployed in any Pega 7 server and can be used to monitor multiple Pega 7 servers. SMA can be configured to connect to the production servers if required. Pega 7 supports security using the PegaDiagnosticUser Role so access is restricted. Using SMA against a live production server comes with a huge performance hit, so we should be cognizant of the reports we are running from SMA.
a. Log-Usage Reports: Useful tool that is launched from SMA to check various PAL statistics across the whole system.
b. Log files: These files can be accessed from the SMA if access is enabled. SMA also allows setting logging levels on specific loggers by selecting Logging Level Settings under the Logging and Tracing category.
This opens a screen that allows us to set the logging levels.

We can select a logger and then check the current logging level for that logger. We can set it to a different level if we need some additional debugging. We also have the option to reset all the loggers to their initial level, which must be done in a production system after the issue is resolved.

c. Tracer: Tracer is useful to trace any requestor session, which can be done from the Requestor Management Page. However, it is important to know that tracer adversely impacts performance, so it should be run with caution. We should attempt to debug using other tools and try to reproduce the issue in another environment before running tracer. When running tracer, we should control what is being traced. Using the settings icon, enable only the rules that we want to trace, enable abbreviate events and disable logging on the screen. The tracer output should be saved and interpreted using the Tracer Viewer. We can also use PegaDiffer which is extremely useful in production systems to compare two different systems using the tracer output files taken from those systems.
d. Clipboard: The clipboard can be launched from SMA to lookup requestor sessions. The clipboard can be used to check the size of different pages in the requestor session or in the global session.
e. Performance Details and Profiling: We can run PAL or performance profiler on a specific requestor session from the Requestor Management Page.
f. DB Tracer: DB Tracer can be run on any session to debug DB issues. DB Tracer is expensive in terms of memory and other resource consumption, but can be relied on when debugging database issues. DB Tracer also displays the stack traces, which helps find the hidden problem.
Requestor Management Page in SMA provides access to the tools listed above.

Using SMA we can also run the Global DB Tracer to trace DB sessions across the whole node. As with Tracer, we should enable only the options that we are interested in, since it is being run on a production system.

Performance Debugging in Production Environments
Performance testing in a production environment can be done:
1. After the issue occurs, to diagnose the cause of the issue and fix it quickly.
2. By constantly monitoring the system to identify the potential candidates which might cause performance issues.
Pega Predictive Diagnostic Cloud (PDC) is an extremely useful monitoring tool that can be configured to receive alerts and exceptions from Pega 7 systems. Alerts and exceptions can also be interpreted easily using the PegaRULES Log Analyzer (PLA). If a PDC license was not purchased, you will need to use the PLA.
Another tool that is useful on a production system is the Log-Usage reports. These reports provide hourly statistics showing time spent, CPU usage, memory, network and other key performance statistics. We will learn more about this later in this lesson.
In production, elapsed time becomes the interesting metric and receives significant importance. If performance testing is always performed in development, most of the issues relating to counts should already be addressed. Alert logs provide clues to where the problem potentially lies in the system.
Alerts are usually one of these types:
Threshold Alerts: Threshold alerts are written when a measured value, such as elapsed time or the size of data being read or written, exceeds its default threshold. For example, if the log file has a lot of PEGA0026 alerts, then we need to add more database connections to the connection pool of the application server. If there are lots of PEGA0037 alerts, it might suggest that we run the static assembler, or perhaps the database connection is slow, in which case other database alerts will also occur.
Observation Alerts: Alert messages written based on what happened or what is being noticed. For example, PEGA0028 which is an alert generated when the Garbage Collection (GC) process cannot reclaim enough memory to remedy the performance impact. Occurrence of this alert indicates that Garbage Collection statistics need to be collected on the node where this occurs.
Event Alerts: Alert messages that indicate an event, such as a server restart, an agent being disabled, a cache being disabled and so on. For example, PEGA0008 is written when the server is started. If this appears in the alert log file, it might be a problem if there was no scheduled restart; it might also be accompanied by another alert, PEGA0036, indicating the server was shut down. In a development system this alert may not be that critical because servers go down more frequently, but in a production system it could be a problem.
Summary Alerts: Alert messages belonging to this category are usually a consequence of other alerts and can be handled only by addressing those other alerts. Quite often this alert is not indicative of any one particular long-running process; instead it indicates there are other alerts associated with it. PEGA0001 is a good example. It is the most commonly observed alert, and when it is thrown we need to identify the other alerts that are thrown along with it. We can do this by looking at the interaction ID field.
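The correlation step above, finding which alerts occur alongside a summary alert such as PEGA0001, starts with simply counting alert IDs in the log. A hedged sketch (the sample lines are simplified stand-ins, not real asterisk-delimited alert log output):

```python
# Count alert occurrences by alert ID so summary alerts (e.g. PEGA0001)
# can be correlated with the other alerts raised alongside them.
from collections import Counter
import re

def count_alerts(lines):
    """Tally PEGAnnnn alert IDs found in alert log lines."""
    counts = Counter()
    for line in lines:
        match = re.search(r"PEGA\d{4}", line)
        if match:
            counts[match.group()] += 1
    return counts

sample = [
    "ALERT*2023-01-01*PEGA0001*Browser interaction exceeded threshold",
    "ALERT*2023-01-01*PEGA0005*Query time exceeds limit",
    "ALERT*2023-01-01*PEGA0001*Browser interaction exceeded threshold",
]
print(count_alerts(sample).most_common())
```

In practice, PLA or PDC does this categorization for us; the sketch only shows why a high PEGA0001 count should prompt a look at whichever other IDs dominate the same log.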
Check for all the performance and security alerts; clicking on an alert opens the corresponding PDN article for it. The article on each individual alert also provides information about how to resolve it.
Alerts can be seen in multiple places and in production it is usually not possible to login to Designer Studio and click the Alerts icon to open up the My Alerts window. Therefore, request a copy of the alert logs and open it using PLA.
Pega PDC, as the name suggests is hosted on the Cloud, and is managed by the Pega Cloud team for all customers who license this product. After acquiring the license, administrators or lead developers need to configure the Pega 7 Server to push alerts and health messages to the Predictive Diagnostic Cloud.
A single Pega PDC is usually capable of monitoring several Pega 7 servers that are running on different environments (development, staging or production). Pega PDC is accessed from a browser by entering the URL which is typically
The xxxx at the end of the URL varies by customer and contains a set of alphanumeric characters. After logging in, users see the PDC Manager portal. The portal provides varied sets of dashboard reports that are extremely useful in identifying issues. There are also ad hoc reports available on the Reporting tab of the portal.
The landing page menu offers the ability to automatically send the scorecard reports by email. This can be configured using the landing page menu in the portal.

On the landing page, we can configure email specific reports by entering the email IDs of users who can assist in debugging the issue.
The manage subscriptions tab can be used to setup the users to subscribe to these reports so that they are delivered on a schedule.
PDC is useful not only in identifying the top 10 issues but also in providing recommendations about how to fix them. The Top 10 Performance Items report identifies the top 10 issues; each has a unique ID and is an action item case. The description field provides the recommendation and, in some cases, the cause of the issue. Click the ID to see additional details. Notice that fixing these 10 issues provides a 67% performance improvement, which is a big gain. Our goal should be not just to fix these top 10 issues but to get the list closer to zero.

Some actions include assigning the action item to yourself, assigning it to someone else, or resolving the action item.
For alerts other than the DB alerts, we may need to look at the Analysis tab, which lists all alert occurrences and the associated data such as the rules, PAL data, frequency of occurrence, the node where the alert occurs and so on.
Similar to fixing the top 10 performance issues, it is highly recommended that we fix the Top 10 Exceptions report, which appears right below the top 10 performance report. Fixing exceptions in most cases improves application performance, so it makes a lot of sense to fix them immediately. The exception report parses the system log and displays the stack trace. It is much easier to use Pega PDC to locate stack traces than to read the log file.
Another important report can be found in the Top Offenders tab on the dashboard.

System Activity Comparison Report
This report provides a summary in terms of how the system is performing compared to the previous week. PDC automatically highlights the important metrics such as the alert counts, exception counts and the average response times. This report also indicates that the number of alerts has increased considerably this week compared to the previous week.
In addition to the dashboard reports, PDC presents additional reports. Let's look at some of them.

For some reports the data can be filtered by selecting a start and end date. The All Alerts report provides a comprehensive list of all alerts, but if we are monitoring the system constantly, the Recent Alerts report is the better choice to see what is going on.
Using the reports listed in the With Date Range category, we can check how these alerts are distributed by day and by node. If a specific node is reported, these reports are useful in checking the alerts on that node. If a specific time is reported, we can use the day to figure out the distribution.
Lead developers should use the Action Item Reports category to monitor the progress of the action items. Use the various filters to drill down to a specific action item. If the customer has multiple applications on site, the Enterprise Charts and Enterprise Reports categories can be used to find the specifics of each application. There are also some relevant standard reports available to review.
The reports can also be run for different Pega 7 servers by selecting the system before running the reports.
PegaRULES Log Analyzer (PLA)
PLA is used if Pega PDC licensing is not purchased. PLA can import system logs, alert logs and GC logs.
The log files can be downloaded directly from the Logs landing page in the Designer Studio by clicking System > Operations > Logs. In production systems, access is usually protected using a password, typically set in the PegaDiagnosticUser Role.
If developers cannot login to the Designer Studio (which is true in most production systems) then they can use the System Management Application (SMA) which allows downloading the log files from the Logging and tracing section. Again we can set up role based security and the files can be downloaded in zip or in text format.
The third option is to download directly by logging in to the application server. The location of the log file is usually configured in prlogging.xml, which is defined as part of prweb.war in the case of WAR implementations, or in prresources.jar.
An administrator can download the alert files as text or as a zip and then email them to developers for offline analysis. The developers can use PLA, which parses the alert log files and provides the ability to categorize the results. PLA data can be exported to Excel and handed over to the development team, but we can also look at the list of all alerts in the log, formatted in the PAL Alerts screen.
The items in blue under the Msg ID column are hyperlinks to more information about that alert on the PDN.

PAL statistics are collected in production systems; however, it is not feasible for developers to run the Performance Tool in production. So, how do we leverage the performance details? This is where Log-Usage reports come in.
Log-Usage reports can be viewed in the SMA by clicking Logging and Tracing category > Garbage Collector and Log Usage. We can click the node ID and use View to look at the results on this page, or use the CSV button to export the data as a CSV file for offline analysis.
The Log-Usage statistics report in the SMA displays key statistics which can help in looking at the time elapsed, rule counts and bytes being read from BLOB and so on.
Log usage can also be used to get the hourly statistics which helps to narrow down when the issue occurred. This might be useful for cases when we need to determine what exactly happened at a specific time. In a customer application, the system performed badly at a specific time every single day. When looking into it further, they found that this behavior happened because a system agent was starting at that time and it usually processed a lot of cases as it is run only once a day.
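Bucketing the exported statistics by hour, as in the agent example above, is straightforward once the data is in CSV form. A sketch under invented assumptions (the row format below is made up for illustration; real exports come from the SMA CSV download):

```python
# Bucket exported Log-Usage rows by hour to spot the window where
# performance degrades (e.g. an agent that runs once a day).
from collections import defaultdict

def elapsed_by_hour(rows):
    """rows: (timestamp 'YYYY-MM-DD HH:MM:SS', elapsed_seconds) tuples."""
    totals = defaultdict(float)
    for timestamp, elapsed in rows:
        hour = timestamp[:13]  # keep 'YYYY-MM-DD HH' as the bucket key
        totals[hour] += elapsed
    return dict(totals)

rows = [("2023-05-01 02:00:12", 4.0),
        ("2023-05-01 02:30:44", 6.0),
        ("2023-05-01 03:05:01", 1.5)]
print(elapsed_by_hour(rows))
```

A daily spike in one hourly bucket, as in the once-a-day agent example, stands out immediately in this kind of aggregation.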
Administrators who have access in production can also export the My Performance details report that can be accessed from the System > Performance > My Performance Details page.
PAL statistics are collected in a database table named pr_perf_stats. DBAs or application administrators can package all the records in that table and import them into another instance for developers to debug. In addition, Pega 7 ships with several reports in the Log-Usage class (which is mapped to the pr_perf_stats table). Take a look at these reports and, if required, create customized versions to which you can subscribe by email.
Application Server Tuning
Pega 7 is a JEE application that is hosted on the application server. It is crucial that the JVM arguments are configured so that Pega applications use the JVM memory appropriately. Properly setting these arguments includes various factors and getting this correct in the development stage helps to ensure the system scales and performs well in production. When tuning JVMs for performance the main thing to consider is how to avoid wasting memory and draining the server's power to process requests. Certain automatic JVM processes, such as Garbage Collection (GC) and memory reallocation, can chew through memory if they occur more frequently than necessary.
VM Heap Size
The Java heap is where the objects of a Java program live. It is a repository for live objects, dead objects, and free memory. When an object can no longer be reached from any pointer in the running program, it is considered "garbage" and ready for collection. A best practice is to tune the time spent doing Garbage Collection to less than 3% of the execution time.
The goal of tuning the heap size is to minimize the time the JVM spends doing Garbage Collection while maximizing the number of clients the Application Server can handle at a given time. It can be tricky to determine the most balanced configuration. When setting up the JVM, make sure the heap size is set correctly: use -Xms and -Xmx to set the minimum and maximum heap size. The recommended setting from Pega is to set both of them to 4096m. Set it larger if the application requires supporting more users per JVM. When using Oracle JVMs, other parameters such as PermSize and NewSize are configured to set additional heap size allocations. Check with a JEE expert or the recommendations from IBM or Oracle for specific instructions.
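As a hedged illustration, the heap settings described above would typically appear among the JVM arguments roughly as follows (exact placement and any additional flags vary by application server and JVM vendor):

```
# Minimum and maximum heap both set to 4 GB, per the Pega recommendation
-Xms4096m -Xmx4096m
```

Setting -Xms equal to -Xmx avoids the cost of the JVM repeatedly growing and shrinking the heap at runtime.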
The heap sizes should be set to values such that the maximum amount of memory used by the VM does not exceed the amount of physical RAM available. If this value is exceeded, the OS starts paging and performance degrades significantly. The VM always uses more memory than the heap size. The memory required for internal VM functionality, native libraries outside of the VM, and permanent generation memory (for the Oracle JVM only: the memory required to store classes and methods) is allocated in addition to the heap size settings.
The heap has two areas: the nursery (young) and tenured (old) generations. Every clipboard page created by the application is allocated in the nursery space. If a clipboard page has been active for a long time, it is moved into the tenured space. The nursery scavenger collection runs 20 to 50 times more often than the Concurrent Mark Sweep (tenured generation) collector. The goal is to have the nursery big enough that most of the objects get collected in the nursery.
Garbage Collection
Capture GC statistics by appending -verbose:gc to the JVM arguments. Use -Xloggc in the case of Sun JVMs and -Xverbosegclog in the case of IBM JVMs to capture the GC output in a log file. IBM JVMs use the Mark Sweep Collector; for Pega applications, set -Xgcpolicy:gencon, since the gencon (Generational and Concurrent) policy is optimized for highly transactional workloads. Gencon GC considerably reduces the time spent on garbage collection by reducing wait times. For Oracle JVMs, we use -XX:+UseConcMarkSweepGC, and there are additional settings to be configured such as TargetSurvivorRatio, the policy for nursery object GC, and so on.
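For reference, the vendor-specific flag combinations mentioned above might look like this (the log file paths are placeholders; confirm the exact flags against your JVM version's documentation):

```
# IBM JVM: generational-concurrent policy, verbose GC written to a file
-Xgcpolicy:gencon -Xverbosegclog:/path/to/gc.log

# Oracle/Sun JVM: concurrent mark-sweep collector, verbose GC written to a file
-verbose:gc -XX:+UseConcMarkSweepGC -Xloggc:/path/to/gc.log
```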
What tools are good to analyze the Garbage Collection results?
1. PLA: Garbage Collection logs can be studied using Pega-supplied tools such as PLA (PegaRULES Log Analyzer). Import the GC log in PLA and look for the GC summary in the Manage Data tab. It displays the % of GC Time, which, as we saw earlier, should be less than 3%.
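The 3% rule of thumb is simple arithmetic: total GC pause time divided by total elapsed run time. A minimal sketch (the sample pause values are invented):

```python
# Compute the "% of GC Time" figure that PLA reports: total GC pause
# time as a percentage of total elapsed run time (target: below 3%).
def gc_time_percent(pause_times_ms, total_runtime_ms):
    return 100.0 * sum(pause_times_ms) / total_runtime_ms

pauses = [120, 80, 200, 95]     # GC pause durations in ms (sample data)
runtime = 60 * 60 * 1000        # one hour of elapsed time, in ms
pct = gc_time_percent(pauses, runtime)
print(f"GC time: {pct:.3f}% (healthy if below 3%)")
```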

PLA also offers GC summary reports.
2. GC reports in SMA: We can run ad hoc GC reports from SMA in Logging and Tracing > Garbage Collector and Log Usage. We can import the GC log directly into the SMA to look at certain key statistics.
Pega recommends using a separate JVM for agents, especially in multi-node systems. On an agent-specific JVM, use the -Xgcpolicy:optthruput policy for Garbage Collection, since agent sessions are short lived and setting gencon or concurrent mark sweep is expensive in this case.
3. PMAT: IBM PMAT (Pattern Modeling and Analysis Tool) can be downloaded from the IBM developerWorks community and used for analyzing GC reports. To analyze GC, we import the GC log file. PMAT parses the IBM verbose GC trace and provides a comprehensive analysis of Java heap usage by displaying it in charts. It then recommends key configurations by first executing a diagnosis engine and then employing a pattern modeling algorithm to make recommendations for optimizing Java heap usage for a given JVM cycle. If there are any errors related to Java heap exhaustion or fragmentation in the verbose GC trace, PMAT can diagnose the root cause of the failures. PMAT provides rich chart features that graphically display Java heap usage. PMAT offers various statistics, but one of the key ones to be aware of, Total Garbage Collection Duration, can be found by clicking the Statistics icon at the top. In this example, the time spent was approximately 0.04%; this number should always be less than 3%.

4. HPJMeter: When using Oracle JVMs, HPJMeter is a useful tool for interpreting GC log information. There are several other tools available in the market, such as JConsole, and the choice is up to the discretion of the person responsible for tuning the performance.

Application Server Tuning Tips
Unlike some of the other performance areas (which should be checked periodically as the application is being built), tuning for memory usage should be done after the application is completed. Begin the tuning process by enabling verboseGC. Note your start time and then start the application and run it with several users for some amount of time, making sure to sample all the processing. After all the features have been exercised, close the application, noting the stop time, and review the GC log.
There are a number of application issues that may be highlighted by the data in the verboseGC log, including:
High volume of garbage
Quick allocation failures
The same object being loaded repeatedly
Heap size not reducing with user reductions
When any of these issues occur, it is important to be able to know not only what the JVM is doing, but also what the application is doing. Since it is difficult to tell application processing directly from the verboseGC log, the SMA tool can show the verboseGC log information (Garbage Collection perspective) juxtaposed with the Log Trace information (application perspective).
This combined view of the statistics shows an hourly breakdown of the activity in the system, and allows us to see what the system is doing against what the JVM is doing, and how they relate. If there is a Garbage Collection problem, and too much garbage is being generated, we need to know if this is directly or indirectly related to the activities being run. Is a spike in Garbage Collection paralleled by a spike in processing? Is the growth across the two areas consistent? In other words, is more garbage being created simply because more activities are being run, or is one growing faster than the other? Use this tool to trace Garbage Collection anomalies to actions in the application.

Volume of Garbage primarily focuses on two factors: the bytes collected in each collection, and the time spent by the system collecting garbage (which should not be more than 3%). If the number of bytes collected is large, check the alert logs and system logs to see why the application is requesting so much data.
Quick Allocation Failures - Check to see if the GC log file shows a series of allocation failures within a short period of time. Allocation failures can occur because the heap size is small or the objects are not released even if they are not used.
Same Object Loaded Multiple Times - If the system is trying to load the exact same size object repeatedly, then something is wrong in the application and we need to figure out which rule is getting loaded and why it is not getting cached.
Heap Size - When tuning the JVM, check the heap size when users are logging in or logging off. Both should impact the heap size; if logoff does not trigger a reduction in the heap size, there may be a memory leak.
Memory Leak and its consequences
Memory leaks may be negligible in some cases, but when left alone on a system with a lot of concurrent users working on applications that use a huge page size, they can trigger a bigger issue. Typically, this results in either decreased response times due to constant Garbage Collection, or an out-of-memory exception if these objects cannot be removed.
In addition to tuning the JVM, there are several recommendations from Pega; refer to the application server tuning guidelines for additional information.
Alert Thresholds
Performance Alerts help in ensuring the application delivers expected performance. However the alert threshold values that are set by default may not apply in certain cases which might cause an overload of alerts in log files.
PEGA0001 (Browser Interaction) - Summary Alert
This is the most frequent alert. The default threshold value set by Pega is 1000 milliseconds; it can be modified to meet the expected time taken to render a screen, which is usually less than 3000 milliseconds. The threshold value can be modified using Dynamic System Settings (DASS). We can define a new DASS instance with the setting value prconfig/alerts/browser/interactionTimeThreshold/WarnMS and set the value in milliseconds.
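For example, a DASS instance raising the PEGA0001 threshold to 3 seconds might be defined roughly as follows (a config sketch; the Pega-Engine owning ruleset and the /default node-classification suffix are the usual conventions for prconfig settings, but verify against your version):

```
Owning Ruleset:  Pega-Engine
Setting Purpose: prconfig/alerts/browser/interactionTimeThreshold/WarnMS/default
Value:           3000
```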
PEGA0004 (DB Bytes Read) - Threshold Alert
This alert is written to the log file when the data received by the database query in a single interaction exceeds the threshold value. This alert can be enabled to behave either as a warning or as an error depending on the size.
There are two separate settings that can be modified using DASS:
prconfig/alerts/database/interactionByteThreshold/warnMB for warnings
prconfig/alerts/database/interactionByteThreshold/errorMB for errors
By default, warnMB is set to 50, so a warning message is displayed when the data exceeds 50 MB. However, errorMB is set to -1, because when errorMB is reached, an error message is displayed in the UI along with the stack trace, in addition to being recorded in the log file; processing of the query also stops. It is absolutely mandatory to set errorMB in production systems to prevent unbounded queries that can bring down the system. When this error occurs, we need to look at the database query (a report rule such as a report definition, or activities using the Obj-Browse method) and check what is being queried. In most cases we are querying the entire record instead of only what we really need.
We can fix the error by doing one or more of the following:
1. Modify the query to return only the columns that are required.
2. If more than one row is returned, apply appropriate filter criteria to get only the results that will be used.
3. Set the maximum number of rows that can be returned in the report rule.
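The three fixes above can be illustrated with a throwaway in-memory table (the table and column names are invented; in Pega the equivalent is tuning the report definition or Obj-Browse parameters, not hand-written SQL):

```python
# Illustrates the PEGA0004 fixes: select only the needed columns,
# filter the rows, and cap the result size, instead of pulling every
# column (including the large blob) for every row.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cases (case_id TEXT, status TEXT, blob_data TEXT)")
conn.executemany(
    "INSERT INTO cases VALUES (?, ?, ?)",
    [(f"C-{i}", "Open" if i % 2 else "Resolved", "x" * 1000)
     for i in range(100)])

# Instead of: SELECT * FROM cases  (pulls blob_data for all 100 rows)
rows = conn.execute(
    "SELECT case_id, status FROM cases WHERE status = ? LIMIT 10",
    ("Open",)).fetchall()
print(len(rows), rows[0])
```

The narrow, filtered, capped query moves a few hundred bytes instead of roughly 100 KB of blob data, which is exactly the reduction the warnMB/errorMB thresholds are pushing us toward.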
This is one of the five alerts that are marked as Critical implying that it needs to be fixed before the application goes to production.
PEGA0019 (Long running requestor) - Observation Alert
This alert indicates that a requestor has been servicing one interaction for a long elapsed interval. This can arise from an infinite loop, or because a request is made to a database that has failed. There are two settings that can be modified for this alert:
prconfig/alerts/longrunningrequests/notifications - this is set at 3, which means the agent tries 3 times to delete the requestor before sending the alert, and
prconfig/alerts/longrunningrequests/requesttime - this is set at 600, which is 10 minutes, and this is the time it waits before trying to delete the requestor again.
PEGA0030 (Requestor Limit) - Threshold Alert
This alert is written to the log file when the number of requestors logged on to a single Pega server exceeds 200 (the default value). This alert helps us decide how many servers we need and whether the load balancer is distributing requests equally among all servers. The threshold can be modified by creating a new DASS using the setting value of
Setting the alert thresholds to meet the business service levels prevents writing too many entries to the alert logs. Similarly, we need to set the system logging level to ERROR so that entries below that level (WARN, DEBUG and INFO) do not appear in the log files. Capturing all information in log files adds severe I/O overhead from writing these log files.
Tuning Database
The database plays a key role in the performance of applications built on the Pega platform. The system uses several caches to limit requests to the database for accessing rules; however, database operations while creating, updating and resolving cases always play a key role in application performance. Pega tables use a BLOB column, which provides flexibility in defining structures but comes with a cost when extracting information from the BLOB. We can optimize scalar properties, which creates additional columns, and optimizing page list properties creates declarative index tables. Optimizing too many properties means that each database row becomes big and performance is impacted with additional processing and space overhead: the more exposed columns and native database indexes we have, the more expensive each read, update, or delete operation becomes.
When it comes to tuning databases, involving DBAs is critical; however, Pega offers several tools to help identify issues when they occur. One of the most important is the alert log, which highlights several DB alerts such as:

Alert | Category
PEGA0002 - Commit operation time exceeds limit | DB Commit Time
PEGA0003 - Rollback operation time exceeds limit | DB Rollback Time
PEGA0004 - Quantity of data received by database query exceeds limit | DB Bytes Read
PEGA0005 - Query time exceeds limit | DB Time
PEGA0025 - Performing list with blob due to non-exposed columns | Reading Blob Need
PEGA0026 - Time to connect to database exceeds limit | Acquire DB Connection
PEGA0027 - Number of rows exceeds database list limit | DB List Rows
PEGA0033 - Database query length has exceeded a specified threshold | DB Query Length
PEGA0034 - The number of declare indexes from a single interaction exceeds a threshold | Declare Index
PEGA0039 - The size of a BLOB column read exceeds a threshold | Blob Size Read
PEGA0040 - BLOB size written to the database exceeds a threshold | Blob Size Written
PEGA0042 - Packaging of database query has exceeded operation time threshold | DB Query Time

Time Alerts
These alert codes are key critical alerts, and their threshold values can be altered by creating DASS instances using the following setting values:
prconfig/alerts/database/operationTimeThreshold - sets the alert threshold value for the time-based database alerts above; the default value is 500 milliseconds.
prconfig/alerts/database/packagingTime/warnMs - sets the warning threshold value for the operationTimeThreshold value for PEGA0042 only. Set this value lower than operationTimeThreshold so the alert provides a warning if operationTimeThreshold is in danger of being exceeded.
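As an illustration, DASS instances for these two settings might look like this (the values are examples only; the /default suffix is the usual node-classification convention for prconfig settings, so verify against your version):

```
Setting Purpose: prconfig/alerts/database/operationTimeThreshold/default
Value:           500        (milliseconds; the shipped default)

Setting Purpose: prconfig/alerts/database/packagingTime/warnMs/default
Value:           300        (set lower than operationTimeThreshold)
```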
BLOB and Declare Indexes
The PEGA0025, PEGA0039 and PEGA0040 alerts are related to the BLOB. The alert message is thrown as soon as the threshold is exceeded, and we need to look at PAL to find out exactly how much BLOB data is being read or written. Again, these can be reduced by exposing the frequently accessed properties as columns, and by checking whether the application is requesting more information than it really needs.
PEGA0034 indicates that the number of declare indexes being written exceeds the default value of 100 instances. Optimizing page list properties enhances reporting performance, but we need to make sure we are not hurting performance by exposing all page list properties, which causes a slowdown during updates.
In addition to this, consult with the DBA about setting LOB (Large OBject) tuning parameters in the database, as these differ for each database vendor. Some strategies include setting the chunk size, the caching strategy for LOBs, indexing the LOB column and so on. Look in v54 for more information.
Size Alerts
PEGA0027 is raised when the threshold value of 25,000 returned rows is exceeded. This may need to be a bigger number in some applications; the threshold value can be altered by creating a new DASS with the setting value prconfig/alerts/database/rowReadThreshold.
PEGA0033 is disabled by default and should be enabled in the development system to check the query length generated by the system. Investigate the report definition in terms of what feature is being used (functions, joins, filters, formatting data using a SQL expression) and look up the query generated using Tracer. Use the explain plan in the database to tune the SQL query.
Connection Alert
PEGA0026 occurs periodically if many users are accessing the database at the same time and there are no connections available in the connection pool. Each database operation should not exceed a few seconds; if this alert occurs despite the database performing quickly, then we need to modify the connection pool setting in the application server. The connection pool should initially be set to 200; increase the number if a lot of PEGA0026 alerts occur.
Other Database Tuning Considerations
1. Do not write explicit queries using Connect SQL unless necessary; if writing explicit queries, use proper tools to verify the query before promoting it to production. If using Oracle, generate an AWR report for the SQL statement.
2. Pega writes a lot of information to its logs. Change the configuration file so the log files are written on a different system than the one where the database files are stored.
3. When using reporting in Pega, report statistics are collected by default. Disable this in production by creating a new Dynamic System Setting for reporting/enablestatistics to reduce the overhead of writing this data.
Disabling Unused Agents
Pega comes with several standard agents which are in locked rulesets. It is important to review and tune the agent configuration on a production system since there are standard agents that:
Are not necessary for most applications as they implement legacy or seldom-used features
Should not ever run in production
May run at inappropriate times by default
Run more frequently than is needed - which might cause issues on a large multi-node system
By default run on all nodes but should only run on one node at most
This is done by changing the configuration for these agents: update the agent schedules generated from the agents rule. Let's have a look at some of these standard agents by walking through them ruleset by ruleset.
The agents in this ruleset are mainly used for integration with the Project Management Framework (PMF). Disable the agents in production and other controlled environments, since PMF integration is usually used only in development to track application development.
Disable the agents in production. The agents in this ruleset run test suites periodically and should only be enabled in other controlled environments, such as QA, and only if that feature is used.
Disable the agents if AES is not used.
Pega-EndUserUI
Make sure the recurrence time setting for the DeleteOrphanTags and PurgeRecentReports agents does not conflict with when the system is in use. Disable DeleteOrphanTags if your application does not use tags.
The SystemEventEvaluation agent is rarely used in most applications and should be disabled.
The agents in this ruleset support the purge/archive wizard and support certain one-time post-upgrade processing. Disable the agent if the purge/archive feature is not used, which is typically the case.
The checkPrintErrors, checkFaxErrors and purgeRequestsTable agents support the PegaDISTRIBUTION Manager and should be disabled unless that component is installed and used, which is very rarely the case.
The ProcessConnectQueue and ProcessServiceQueue agents support asynchronous connectors and services respectively and should be disabled unless the feature is used, which is very rarely the case.
The frequency for the ProcessFlowDependencies agent may be 'turned down' depending on how the application makes use of the functionality.
The frequency for the AgentBulkProcessing agent may be 'turned down' for some applications.
The frequency for the SendCorr agent may be 'turned down' for some applications.
The GenerateStartingFlows agent updates the developer portal to add starting flows for unit testing. Disable this agent in production.
The ServiceLevelEvents agent processes SLAs; it runs every 30 seconds and may be tuned down for some applications. There are three other settings, namely 'slaunitstoprocess', 'slaunitstoretrieve' and 'slarefreshlisteachiteration'. The first two settings configure how many cases are processed and retrieved each time the agent runs; set these based on the number of SLA events generated during each run (30 seconds, or whatever the new interval is). The third setting, slarefreshlisteachiteration, is disabled by default but is useful when multiple nodes are configured to process ServiceLevel events, to avoid contention. See the following PDN article for additional information: pega-procom-sla-agent
The agent in this ruleset supports the ruleset maintenance wizards. Disable the agent in production.
Change the SystemCleaner schedule from every 24 hours to a daily off peak time.
Change the SystemIndexer agent execution from every 60 seconds to every 600 seconds since rule changes occur rarely in production. The agent should only run on the index-owning node.
Change the RuleUsageSnapshot agent schedule from every 24 hours to a daily off peak time.
Change the frequency of the PurgeAssemblyDatabaseCache agent to weekly in production.
The ScheduledTaskProcessor agent runs scheduled report definitions and may be 'turned down' depending on application.
The PropertyOptimization agent is used by the Property Optimization tool. Disable this agent on production.
Change the frequency from daily to weekly in production for the DeleteUnusedCSSFiles agent.
Purging and Archiving Cases
Cases created in applications are saved in the database. When the cases get resolved they are not
deleted or moved to a different table. Resolved cases remain in the same table along with all active cases
for multiple reasons.
The Resolved cases may be reopened at a later point.
The Resolved cases may still be needed if their parent cases are not resolved yet.
The Resolved cases may be needed to support reporting requirements that include resolved cases.
Implement the Archiving Strategy
An application can contain millions of cases. Typical decisions about the long-term retention of information are based on IT operational needs and capacity. However, a better approach is to supplement capacity-based retention rules with policies aligned with the business value.
Examples of specific archival and purging criteria:
All cases and associated records resolved more than 5 years ago should be archived
All cases and associated records resolved more than 10 years ago should be deleted
The archiving needs to be scheduled to run on a regular basis to archive cases that meet the criteria.
Additional things to consider when implementing the archiving strategy:
Restoration SLA
Search of archived cases
Reopen versus read-only
Reporting on archived cases
Security and access control for archived cases
There are two options for archiving/purging cases:
Archiving Wizard - PRPC comes with archive wizards for configuring and scheduling the archival and/or purging process. The archiving wizards use activity-based processing and are suitable only for environments with fewer than 100,000 cases.
SQL Scripts - Cases can be copied to a set of mirror tables which parallel the main PRPC tables and hold the archived cases, alternatively the cases can be purged.
The wizard was detailed in the SSA lesson; let's take a look at the script option next.
Cases can either be purged or archived depending on the requirement. Archival is useful if we might need
to retrieve an archived case at a later point in time. Purging on the other hand should only be used if we
are sure that the case will never need to be retrieved because once it is deleted it is lost forever.

Using SQL scripts to archive or purge cases typically offers better performance and scales better than the archive agent.
Setup Purging/Archiving
Cases are saved in multiple tables; the work items typically get saved to a custom variant of the pc_work table.
There is a history table which saves all audit trail information for the work item.
If the work item includes attachments, they are saved in a separate table. If the work items use folders, there are entries in the link tables linking the folders and the work items.
There may be additional tables depending on how the application is configured. For example, there might be index tables created to support declare indexes used for reporting. By default all work items save work parties in index tables.
The following process is recommended when setting up archival on database level:
Create a set of mirror tables within the production PRPC schema which parallels the main PRPC tables and holds the cases which need to be archived (such as PC_WORK_ARCH for archiving entries on PC_WORK).
Write a set of database stored procedures that take work from the active tables, place it into the archive tables and remove the archived work from the active tables.
When the mirror tables are populated with the archive data, the export of these tables is done by the DBA team by scheduling a regular extract job. Mirror tables are then cleared of data and are ready for the next extract.
All types of work objects are part of a parent-child hierarchy. A parent case may have several types of child cases (subcases); the child cases themselves may be parent cases for additional subcases. Archival happens at the top case level only. Child cases are only archived if the top parent meets the archival criteria.

Once the SELECT scripts produce the desired results, change them to DELETE statements. The order of the tables is important; work backwards.
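As an illustrative sketch only: the fragment below assumes an Oracle database, assumes pyResolvedTimestamp is exposed as a column, and uses a hypothetical PC_WORK_ARCH mirror table. Verify every table and column name against your actual schema, and remember the history, attachment and link tables need the same treatment:

```sql
-- Hypothetical archive step: copy cases resolved more than 5 years ago
-- into the mirror table, then remove them from the active table.
INSERT INTO PC_WORK_ARCH
SELECT * FROM PC_WORK
 WHERE PYSTATUSWORK LIKE 'Resolved%'
   AND PYRESOLVEDTIMESTAMP < ADD_MONTHS(SYSDATE, -60);

DELETE FROM PC_WORK
 WHERE PYSTATUSWORK LIKE 'Resolved%'
   AND PYRESOLVEDTIMESTAMP < ADD_MONTHS(SYSDATE, -60);
```

When deleting, run this statement last: child tables (history, attachments, links) must be cleared first, working backwards through the hierarchy.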
Schedule Purging/Archiving
A Pega agent can be used to schedule the archival, but a UNIX script (cron job) or any other means of scheduling could also be used if preferred.
Retrieve Archived Cases
A stored procedure can be used to copy data from the mirror tables back into the production database when a restore is requested. Alternatively, a copy system can point to the mirror tables, allowing archived cases to be accessed through a separate application.
We recommend that you perform the following database maintenance tasks periodically:
1. Perform statistics gathering on the database; in some cases you might want to perform this task daily. Most databases support self-tuning that can be automated to run on a regular basis.
2. Pega recommends setting the initial data file size to 5 GB for the database using rules and then allowing it to grow with automatic extension of data files. Log files should be sized such that log file switching occurs every 20 minutes. Typically this is accomplished by increasing the size of the log files.
3. Rebuild indexes regularly if required. This can be determined by analyzing the indexes periodically. If there are a large number of PEGA0005 alerts, it might be useful to index the properties that are used in the WHERE clause.
4. Run Explain Plan on the database periodically if there are a lot of database alerts in the log files.
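When PEGA0005 alerts point at a report filtering on an un-indexed exposed column, the fix is a plain database index. The table and column names below are hypothetical stand-ins for your application's work table:

```sql
-- Hypothetical example: reports filter on pyStatusWork, so index the
-- exposed column to address repeated PEGA0005 (long-running query) alerts.
CREATE INDEX IX_WORK_STATUS ON PC_WORK (PYSTATUSWORK);
```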

The following tables are small and volatile, and thus vulnerable to fragmentation. Defragment them regularly by rewriting the table and rebuilding the indexes. In addition, it may be advisable to cache them in a separate database buffer pool. Share the following list with your DBA for tuning.
pr_sys_locks - PRPC keeps records of locks held on cases in this table. It is a primary candidate for fragmentation and needs to be rebuilt on a regular basis on some database platforms.
pr_sys_updatescache - Rule changes are recorded in this table for synchronization to other PRPC servers by the System Pulse in a multi-node environment.
pr_sys_context - Holds records for passivated requestors.
pr_page_store - When a user session times out, the system saves a requestor's entire thread
context in this table.
pc_data_unique_id - Holds the most recently assigned case ID for each ID format in use. This table is very small, containing only one row per ID format; since it is used to get the next unique ID for a case, it can become fragmented due to frequent updates of these few rows.
pr_sys_*_queues - These tables hold items from the various queues maintained in PRPC, so like pr_sys_locks they are subject to a lot of change and hence churn.
We already know that it's important to secure an application, and we do the due diligence to make sure we set up the correct security. Correct security entails that users are only able to access cases they are allowed to access and only see data they are allowed to see.
In this lesson, we'll examine some of the common mistakes that can open up vulnerabilities in the system, and how to address them including some Best Practices to help us avoid potential vulnerabilities. Then we'll finish up with the use of the Rule Security Analyzer and learn how to integrate this routine check into all of our projects.
Common Mistakes that Lead to Security Vulnerabilities
We already know that it's a good idea to follow the guardrails. But we might not be aware that the guardrails also protect the security of our applications. PRPC by default has several built-in protections against attacks such as injection or cross-site scripting. When we deviate from the guardrails, we can unknowingly bypass these protections and open ourselves up to vulnerabilities.
Common Types of Security Attacks
The Open Web Application Security Project (OWASP), a non-profit organization focused on software security, has documented a 'top ten' list of the most critical web application security risks. For the sake of this lesson, we'll review their top 10 as of their 2013 findings (the full report can be found on their website).
1. Injection (SQL, OS, LDAP)
2. Broken Authentication and Session Management
3. Cross-Site Scripting (XSS)
4. Insecure Direct Object References
5. Security Misconfiguration
6. Sensitive Data Exposure
7. Missing Function Level Access Control
8. Cross-Site Request Forgery (CSRF)
9. Using Components with Known Vulnerabilities
10. Unvalidated Redirects and Forwards
As we can see, the most common type of security risk is injection. Let's take a look at how PRPC combats injection and what we can do to prevent it.

Protecting Against Injection Attacks
First off, what is an injection attack? Injection vulnerabilities come from providing user entered values directly to an interpreter, such as SQL. Consider the following query:
Select pyCreateDateTime from pc_work where pyID = "some user provided value"
If the user was allowed to directly provide the input to this query, they could provide something like:
"W-1; Truncate pc_work"
When this is provided to the interpreter, if it isn't caught, the person could wipe out the entire work table.
Thankfully, in most situations PRPC doesn't allow this to happen. Report definitions and other standard rules make use of a prepared value instead of directly inserting user values into the SQL statements. This practice prevents the system from treating the user supplied value as native SQL.
So what do we want to watch out for? Non-standard practices, such as directly connecting to the database via Java or using one of the RDB methods or APIs, such as executeRDB, can lead to vulnerabilities. These approaches should be reserved for the rare cases when it is not possible to achieve the desired outcome using standard rule types, such as report definitions or the relevant Obj- methods.
Of course, if we absolutely must use one of these, then we must never allow users to directly provide query parameters.
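The prepared-value idea that report definitions rely on can be illustrated outside Pega. This sketch uses Python's sqlite3 module with a hypothetical in-memory stand-in for the work table (not a Pega API) to contrast string concatenation with a bound parameter:

```python
import sqlite3

# Hypothetical in-memory stand-in for the work table; not a Pega API.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pc_work (pyid TEXT, pycreatedatetime TEXT)")
conn.execute("INSERT INTO pc_work VALUES ('W-1', '2024-01-01')")

user_value = "W-1' OR '1'='1"  # a malicious "case ID"

# UNSAFE: concatenating user input into the SQL text lets it change the query.
unsafe_rows = conn.execute(
    "SELECT pycreatedatetime FROM pc_work WHERE pyid = '" + user_value + "'"
).fetchall()
print(len(unsafe_rows))  # 1 - the injected OR clause matched every row

# SAFE: binding the value as a parameter keeps it from being parsed as SQL.
safe_rows = conn.execute(
    "SELECT pycreatedatetime FROM pc_work WHERE pyid = ?", (user_value,)
).fetchall()
print(len(safe_rows))  # 0 - the literal string matches no pyid
```

This is the same design choice PRPC makes internally: the user's value is transported to the database as data, never spliced into the SQL text.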
Protecting Against Broken Authentication and Session Management Attacks
Broken authentication and session management vulnerabilities result from exposing user account information. Attackers use these flaws to obtain valid account information and can then impersonate the account to gain access.
Thankfully, PRPC takes measures to prevent this from occurring. PRPC handles this concern by not providing the authentication header after the user has logged in. Plus, PRPC expires old sessions when they time out.
However, sometimes businesses request unreasonably long time outs. For example, a timeout of 24 hours. These should already be avoided for their performance impact, but it also helps during a security audit to be able to confirm user sessions are invalidated after a reasonable time.

Protecting Against Cross Site Scripting (XSS) Attacks
This is probably one of the most famous types of attacks. Knowledge about protecting against these attacks, along with improvements in browsers, has led OWASP to downgrade this risk from the second most critical to the third. Cross-site scripting is similar to an injection attack, in that a user can provide input that allows code to execute.
For example, let's say we have a screen where a user can enter free form text. We then display the text the user enters back to them in a confirmation screen. So, if a user enters something like the following into one of those fields
<script>alert("some kind of code")</script>
and, if we don't escape the user-entered input, it gets inserted directly back into the HTML of the confirmation screen, allowing the script to run. Thankfully, PRPC automatically escapes all values, as long as we follow the guardrails. Do not use mode="literal" in a <pega:reference> tag, and do not access the value directly in Java with tools.getActiveValue() without passing the result through the StringUtils.crossScriptingFilter() function.
The easiest way to avoid these is to only use autogenerated UI rules. Of course, if you must use a non- autogenerated rule, always ensure the value has been properly filtered and escaped before displaying it back to the user.
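To illustrate the escaping idea in general terms — this uses plain Python's html.escape, not Pega's StringUtils — the script tag from the example above becomes inert text once escaped:

```python
import html

user_input = '<script>alert("some kind of code")</script>'

# Escaping converts markup-significant characters into HTML entities,
# so the browser renders the text instead of executing it as a script.
escaped = html.escape(user_input)
print(escaped)
# &lt;script&gt;alert(&quot;some kind of code&quot;)&lt;/script&gt;
```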

Protecting Against Insecure Direct Object References
We don't need to worry about protecting ourselves from insecure direct object references. The references to objects are all stored on the clipboard instead of being passed back and forth in the URL. PRPC's model does not provide direct object references, so they can't be considered insecure.
Protecting Against Security Misconfiguration and Sensitive Data Exposures
These are squarely on us to manage, but, we should already be doing this. We already know that we need to set up the right roles, privileges and access. And we already know that it's not a good practice to reveal sensitive data, such as Social Security Numbers.
However, it is a good idea to occasionally review the work of any junior developers, to ensure they're also following these good practices.
The Rest of the List
We don't need to concern ourselves with the following vulnerabilities since PRPC natively prevents these from occurring based on the way it handles every request or transaction.
Missing Function Level Access Control
Cross-Site Request Forgery (CSRF)
Unvalidated Redirects and Forwards
Things like redirects don't impact a PRPC system.
However, Using Components with Known Vulnerabilities is a concern for us, even though it lies outside the PRPC domain. If flaws are found in the current versions of Java, the application server, or other software, we should work with the administration team to ensure they update that asset to the latest version, thereby patching the security holes.

The Rule Security Analyzer
As diligent as we are, we just can't check every line of every rule that every one of our junior developers create. Thankfully PRPC provides a tool to scan all the custom code in the system for known security risks. This tool is called the Rule Security Analyzer.
To launch the tool, we select Org & Security -> Tools -> Security -> Rule Security Analyzer.
The tool then opens in another window where we can specify which rulesets to scan, which rule types to scan and which regular expression to use for the scan.
PRPC by default has several expressions already defined for us to use while searching for various vulnerabilities.
After running the expression, we get a report of the number of rules searched, as well as any that have an issue.
From here, we can then identify which rules need to be repaired. Ideally, the original developer who introduced the risk would be notified and be responsible for correcting the mistake.
Fitting into a Project
So when is a good time to run the analyzer? Right before a security audit? Just before QA? A best practice is to run the Rule Security Analyzer before locking a ruleset, be it for migration or for any other reason. This allows us to identify and correct issues in rules before they are locked. The tool only takes a couple of minutes to run through the different expressions, and it brings great peace of mind to know that no security risks are being deployed.
The Basics of Authentication
So what is authentication? Authentication is proving to the system that you are who you say you are. This should not be confused with Authorization, which is determining what rights you have in the system. The two may go hand in hand regarding how they are executed, but are two different animals in reality.
So, what goes into Authentication? At the very minimum, Authentication requires the use of some kind of user identifier. This UserId is what tells the system who you are. Our authentication could be as simple as:
"Hi, I'm Bob"
to as complex as:
"Hi, I'm Bob, here's the proof that I'm Bob, and here's the word of this other guy that you do trust who says that I'm indeed Bob."
Authentication types in PRPC
By default, PRPC supports several different authentication schemes. All authentication schemes get classified as one of these types:
PRBasic — Standard internal PRPC authentication based on Operator IDs and passwords stored in the system.
PRSecuredBasic — Same as PRBasic, but encrypts the credentials using Secure Sockets Layer (SSL)
PRExtAssign— Used for external assignments, such as those from Directed Web Access
J2EEContext — Used for container managed authentication
PRCustom — All other authentication schemes. These are mostly used with Single Sign-On processes.
We won't get into PRBasic, PRSecuredBasic or PRExtAssign in this lesson. Typically we just use these authentication types as-is without any need for additional configuration. J2EEContext will be covered later in this lesson, when we discuss Container Managed Authentication. So let's take a look at PRCustom.
PRCustom is the catch-all for any other authentication scheme. This includes the various Single Sign-On approaches, which are discussed in the 'Authentication using SSO' lesson, LDAP authentication, which was covered in the Senior System Architect course, and any other in-house authentication that we might encounter.
The web.xml file
So, how do we specify which authentication scheme we want to use? The system can support multiple authentications all at once by mapping different authentications to different servlets fronting the PRPC system. These mappings are done in the web.xml file. The default web.xml that is shipped with a fresh install of PRPC is available in the related content for you to download.
This file contains two important features. The definition of the Servlet itself.
And, the mapping of the servlet to a URL:
Note that the servlet gets mapped twice both to the root URL "/PRServletProtected" and to the children of that URL "/PRServletProtected/*". When specifying new mappings, we want to ensure we follow this same pattern to properly map both the parent and children.
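That double mapping looks like the fragment below. The servlet-name here is a placeholder; use the actual name from the servlet definition in your own web.xml:

```xml
<!-- Map the servlet to both the root URL and its children -->
<servlet-mapping>
  <servlet-name>MyProtectedServlet</servlet-name>
  <url-pattern>/PRServletProtected</url-pattern>
</servlet-mapping>
<servlet-mapping>
  <servlet-name>MyProtectedServlet</servlet-name>
  <url-pattern>/PRServletProtected/*</url-pattern>
</servlet-mapping>
```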
The servlet definition consists of four values, and then a series of parameters, identified as 'init-param'. All servlets will have a:
servlet-name
display-name
description
The 'init-param' values vary depending on which type of authentication we are leveraging. The starter web.xml provides several samples we can use to get started with most servlets.

One of these parameters is the AuthenticationType. This is where we specify our choice of PRBasic, J2EEContext, PRCustom and so forth. If this parameter is omitted the system uses PRBasic as the default. When specifying PRCustom, we need to also supply another parameter of AuthService.
So, what is an AuthService? The AuthService is short for Authentication Service. This is where we provide the name of a Data-Admin-AuthService, which is a record we create in the system. This tells the system which particular custom authentication scheme, such as single sign-on or LDAP, to leverage.
To create an Authentication Service, we click the Designer Studio button and navigate to Org & Security > Authentication > Create Authentication Service.
This launches the new form for our Authentication Service. On this form we have the choice of creating a service for either SAML 2.0 or a Custom authentication, the short description and the name of the authentication service.
Since SAML (Security Assertion Markup Language) is leveraged for single sign-on, we'll cover that particular authentication service in the single sign-on lesson. Let's choose Custom and proceed with our creation. A custom Auth Service allows us to specify the Authentication Activities:

These activities are where we provide all the necessary logic for our authentication. Thankfully, PRPC ships with several ready-made activities we can use as starting points for some of the more common authentication schemes.
Not shown here, the system also allows us to specify the JNDI Binding Parameters and the Search Parameters. These are typically leveraged just for an LDAP authentication. The Mapping tab is also leveraged only for an LDAP authentication, and is covered in the External Authorization lesson.
On the Custom tab, we have several options to control the behavior of our authentication. The first is whether or not to use SSL. If we elect to use SSL and someone accesses the system via a non-secure protocol such as HTTP instead of HTTPS, the system uses the HTML rule defined in the Initial Challenge Stream to redirect to the secured URL. This rule must reside in the @baseclass class. The standard HTML rule Web-Login-SecuredBasic is used as the default if one is not specified. We can use this rule as a starting point for any customizations we might need to make when using this feature.

The next option is used to provide a popup for gathering credentials instead of a standard login screen. The credential challenge stream is used to specify the HTML rule to use for this popup. As above, it must reside in @baseclass.
The timeout options allow us to configure how the system behaves during a timeout. The Use PegaRULES Timeout option lets us choose between the timeout specified in the user's access group and letting the application server handle the timeout. Use Basic Authentication is similar to the Challenge option in that it instructs the system to use a popup instead of redirecting to a login page. Timeout challenge stream is the same as above: an HTML rule in @baseclass used to gather credentials. The redirect URL is often used in single sign-on or container-managed situations, where the user first authenticates against an external system; it is used to direct them back to that initial system.

The last two options are used to display an alternative login screen when the user fails to authenticate, and to choose which types of operators can log in through this service. The source of operator credentials doesn't actually look up their credentials. Instead, it merely relates to the "Use external authentication" checkbox on the Security tab of an Operator record. If "Use externally stored credentials" is selected, then only operators that have the checkbox enabled on their record can log in. If "Use Credentials Stored in PegaRULES" is selected, then operators must have that checkbox cleared.
Now that we have an authentication service defined, we just need to make sure it's referenced in our servlet definition in the web.xml.
Container Managed Authentication is based on the Java Authentication and Authorization Service, typically called JAAS for short. This is a pluggable API that allows different sets of login modules to be used for each application. Typically, a system using JAAS also integrates with an LDAP server to store users' credentials.
The lesson does not cover JAAS concepts, or how to configure the application server to use it as every application server is a little different. It is important that you understand the JAAS concepts specific to your environment so that you can understand how PRPC works with JAAS.
What we will concentrate on is how to configure PRPC to interact with JAAS. Let's assume for the moment that application server and LDAP entries have already been setup. Within PRPC, the first thing we need to do is to set up a servlet that uses JAAS. The standard web.xml that ships with the system already has one of these setup. Look in the web.xml file for a Servlet named "WebStandardContainerAuth". This can either be cloned to a new servlet or just used as-is.

This servlet is mapped to the url-pattern '/PRServletContainerAuth'. If we were creating a new servlet we would need to clone these, but for the sake of this lesson let's just use them as-is.

When using Container Managed Authentication, we still need to have an operator in the system. This is where the "EstablishOperatorExternally" activity comes in. This is an extension point activity, meaning the one provided is blank. We can override it to perform any necessary update on our operator, such as pulling in updated information from an LDAP connection or creating a new operator when one doesn't exist. When leveraging an LDAP system, it is typical for this activity to access the LDAP system again, not to authenticate the user, but to pull any additional, updated details.
The important thing to note is that the step page of this activity is the intended operator page. Hence, any edits, updates or creations need to be merged into the current step page using a Page-Merge-Into.
The PDN has examples of this in the PegaSample-IntSvcs ruleset, available in the Integration Services and Connectors article in the Integration topic. Though written for version 5.x, the samples still apply to version 7.x.
Security policies can be used to enforce such things as minimum password lengths or whether or not the system should use a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart).
They are disabled by default. To enable them, first navigate to the Security Policies landing page by clicking Designer Studio > Org & Security > Authentication > Security Policies. The landing page can also be accessed from Designer Studio > System > Settings > Security Policies. Both navigations open the same landing page, so it doesn't matter which path we use.
Once there, when we click the Enable Security Policies button, we're reminded that these policies will take effect for the entire system and will affect all current operators.

If we accept, the system opens the policy settings for us to configure. These policies fall into three different functions. The first relates to users' passwords. Within these settings we can enforce a minimum operator ID length, a minimum password length, and the number of numeric, alphabetic, and special characters required. All of these can be set to a maximum value of 64, but be realistic: 64-character passwords are overly difficult for most people to remember.
The last two settings allow us to specify how many unique passwords must be used before an operator can reuse an old one, and the number of days before the operator must change their password. Both of these can be set to a maximum value of 128.

The second relates to the use of CAPTCHA. The default implementation leverages the CAPTCHA shipped with PRPC. To use a custom CAPTCHA, first review the PDN article Customizing CAPTCHA presentation and function.

Enabling the CAPTCHA Reverse Turing test module allows the system to present a CAPTCHA upon an authentication failure, based on the probability specified in the next setting. When disabled, no CAPTCHA displays on a failure. The last setting, enable presentation of CAPTCHA upon initial login, tells the system to display the CAPTCHA the first time somebody tries to access the system from a new computer.

And the third is for Authentication lockout penalties. When enabled, if the user fails to login for the set number of times, a lockout penalty is imposed before they can login again. These penalties compound. For example, using the settings below, if a user has failed to login 5 times they must wait 8 seconds before they can try to login again. If they fail a 6th time, then they must wait 16 seconds (8 + 8) and if they fail a 7th time then they must wait 24 seconds (16+8).
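The compounding penalty described above can be expressed as a small calculation. This is an illustrative sketch only: the method and parameter names are hypothetical, and the threshold (5 failures) and base penalty (8 seconds) are taken from the example settings.

```java
// Illustrative sketch of the compounding lockout penalty described above.
// Method and parameter names are hypothetical, not part of any Pega API.
public class LockoutPenalty {
    // failedAttempts: total failed logins so far
    // maxFailures: attempts allowed before the first penalty (5 in the example)
    // basePenaltySeconds: penalty added per additional failure (8 in the example)
    public static int penaltySeconds(int failedAttempts, int maxFailures,
                                     int basePenaltySeconds) {
        if (failedAttempts < maxFailures) {
            return 0; // no penalty imposed yet
        }
        // 5th failure -> 8s, 6th -> 16s, 7th -> 24s, and so on
        return (failedAttempts - maxFailures + 1) * basePenaltySeconds;
    }

    public static void main(String[] args) {
        System.out.println(penaltySeconds(7, 5, 8)); // prints 24
    }
}
```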
Note that the last setting, "Audit log level", applies to both lockouts and logins. It can be set to one of three values:
None - No log entries are written
Basic - Only logs failed login attempts
Advanced - Logs all logins, both failed and successful
These logs are available through the "Display Audit Log" button at the top of this landing page and are written as instances of the Log-SecurityAudit class. If required, additional custom reports can be written against these instances. The "View History" button allows us to see an audit trail of all the changes to these settings.
Single Sign-On (or SSO) makes it possible to log in only once - typically outside PRPC - thereby gaining access to multiple applications, including those built on PRPC.
There are multiple ways to drive SSO; we will cover only a subset of these in this lesson.
Windows-based authentication incorporates a mechanism whereby logging into a PC provides the credentials for logging into other applications. This involves SPNEGO (Simple and Protected Generic Security Services Application Programming Interface Negotiation Mechanism) and Windows integrated authentication.
It is also possible to use 3rd party desktop applications, such as SiteMinder, to drive SSO authentication. These applications can "screen scrape" to essentially push credentials through the login screen, or use "token security" to push out a token that is subsequently verified by PRPC.
It's also possible to drive single sign-on through a customer website. With this technique, there can be a link from the website to the PRPC application, or PRPC can be embedded using Internet Application Composer (IAC).
The association from the external website to PRPC can be configured to require token validation, if desired. Other techniques, such as IP address control, can be used to ensure a "trust" between the external website and PRPC.
Single sign-on and the General Design Pattern
Let's take a quick look at SSO in action to provide context for this lesson. This is an example of an external website redirecting to PRPC. This is a demonstration website - hence it has a somewhat minimalist design.

First, we login.
Once this is complete, the website displays a link to our PRPC application. Imagine several links here: some pointing to PRPC applications, others pointing elsewhere. The idea here is that the website handles the authentication, and then opens the door to other applications.
We click the link... and we're in.
There is a lot that went on behind the scenes in the few milliseconds after clicking the PRPC link, including token verification. So, let's take a look at this now.
Let's start by looking at the basic design pattern of how authentication is handled. This framework extends to all types of custom authentication, including not only SSO but LDAP as well.
Custom authentication is identified in the Servlet Descriptor - the web.xml file.
In the web.xml file, the AuthenticationType parameter must be set to "PRCustom" and the "AuthService" parameter must be set to point to an Authentication Service instance in the PRPC system.
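As a sketch, the relevant init-params look like this. The parameter names, AuthenticationType and AuthService, come from this lesson; the servlet name, servlet-class, and the AuthService value ("MyAuthService") are placeholders for your own configuration.

```xml
<!-- Sketch only: AuthenticationType and AuthService are the parameters
     named in this lesson; the other values are placeholders. -->
<servlet>
    <servlet-name>MyCustomAuthServlet</servlet-name>
    <servlet-class><!-- as shipped in the standard web.xml --></servlet-class>
    <init-param>
        <param-name>AuthenticationType</param-name>
        <param-value>PRCustom</param-value>
    </init-param>
    <init-param>
        <param-name>AuthService</param-name>
        <param-value>MyAuthService</param-value>
    </init-param>
</servlet>
```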

Below is the authentication service instance that this particular servlet points to.
The Authentication Service instance points to an authentication activity, and the authentication activity handles the actual business logic that is required post-authentication.
Authentication may involve token verification, or perhaps dynamically creating an operator instance if one does not exist yet. Once the activity is complete, the user is logged in.
Because the business logic is entirely handled by the authentication activity, let's take a look at those first.

Technically speaking, there is really only one absolute requirement for an authentication activity; it must identify an operator in the system so that PRPC can change context to that operator. Before the activity runs, the application context is essentially anonymous, treating all users the same.
In addition to establishing an operator, authentication activities can also do the following:
Create a new operator instance for the user logging in when one does not currently exist.
Authenticate. Please do not be thrown by this. Authentication activities only authenticate in certain circumstances, for example with LDAP integration. They do not authenticate with SSO since, in this case, authentication is handled by an external system or website.
Verify a token passed through from an access control system used for single sign-on. Specifically, with respect to SSO, the credential information must be passed from the external system to PRPC. This is done either through the query string portion of the URL or with custom HTTP headers.
Let's take a look at an example authentication activity that was used for our SSO example. In this activity, the system validates that the SSO request comes from a known system by comparing the Application ID provided against one that's stored in the system. If that check passes, it then calls VerifySecurityToken to make sure the token is legitimate. We'll get back to that process in a little bit.

Provided these checks pass, the system opens the operator's record, and subsequently passes the page with the Operator Record back in the pyOperPage parameter.
This parameter is an 'OUT' parameter of type Java Object; it is critical that we perform this step. The code that runs this activity reads the page from this parameter and subsequently establishes the session in the context of that operator. We don't need to worry about setting this context; it is done entirely by PRPC, provided that we properly set the pyOperPage parameter. The other out parameter, pyChallenge, is used for an alternative case, which we will discuss shortly.

It's also important to note that we need to make sure the 'Require authentication to run' option is not selected on the Security tab. This is because we need to run this activity before the user is authenticated.

That's it! We have just covered the essential requirements of an authentication activity: create an operator page and pass it into the pyOperPage parameter, and make sure the 'Require authentication to run' option is not selected.
The pyChallenge Parameter and How to Set It
Recall the pyChallenge parameter discussed above. There are two possible outcomes for a given run of an authentication activity. The first is that the activity successfully identifies an operator and PRPC subsequently associates the session with that operator so the user may go about their work. The second is that the session should not continue: perhaps an error was thrown when the token was tested, or reauthentication is required because the session timed out.
As we stated earlier, in the case of success, set the pyOperPage parameter to establish the operator for the session.
On the other hand, if the session should not continue, set the pyChallenge parameter. This determines what happens after the activity is complete. It determines what is rendered on the screen.
pyChallenge should be set to one of the following constant field values, all in the PRAuthentication interface:
Many of these render the PRPC interface based on a setting in the "Custom" tab of the corresponding Authentication Service Instance. Setting values in the "Custom" tab of an authentication service instance will, on its own, do nothing. This is only a repository for settings. Setting the pyChallenge parameter in the authentication activity is what truly drives which behavior is executed.
If pyChallenge is set to "PRAuthentication.DEFAULT_CHALLENGE", and it is an initial authentication rather than a timeout re-authentication, PRPC uses the standard PRPC authentication, or a customized HTML Stream, depending on how the "challenge options" are set in the authentication service instance.
On the other hand, if this is a timeout scenario, the settings in "Timeout Options" are used instead.
Please note that the "Challenge Options" are rarely used for SSO, since the authentication is typically started from an external system. These settings are used, however, with LDAP integration.
If pyChallenge is set to "PRAuthentication.GENERATED_CHALLENGE_STREAM", the activity itself displays HTML; the authentication service instance is ignored.
If pyChallenge is set to "PRAuthentication.DEFAULT_REDIRECT_URL", PRPC redirects to an external website. This is relevant for timeouts only. If an external website is used for the initial authentication, we probably want to reauthenticate using the same site.
"PRAuthentication.GENERATED_REDIRECT_URL" also redirects to a URL, but does so based on the URL set in the pyRedirectTo parameter.

Finally, setting pyChallenge to "PRAuthentication.DEFAULT_FAIL_STREAM" will render the HTML configured in the "Authentication Fail Stream" setting in the authentication service instance.
Reading Parameters for Authentication Activities
Let's take a closer look at our example web application. When we enter our username and password and click the link, the external website populates a URL Query String with UserId, UserName, and some other information. For example,
http://localhost:8080/prweb/SSOServlet? +User& Activity=&SenderTime=20141008022516&pw=5df2ff78d656114085c96f1a0bd2271d
When the authentication activity is executed, PRPC automatically populates these query parameters as activity parameters. So, we should be able to leverage these parameters throughout the activity.
This is known as reading user credentials from the URL query string. Alternatively the information could be passed in as custom HTTP headers.
As stated earlier, PRPC automatically populates activity parameters with values from the query string. This is not the case with HTTP headers, so we need to make an API call to look up these custom headers.
Specifically, we need to pull a value from the pxRequestor.pxHTTPServletRequest property.
The pxHTTPServletRequest is a façade object for the HttpServletRequest java object.
As such, it implements the javax.servlet.http.HttpServletRequest interface.
Once a user has gained access to PRPC, it is no longer possible to access the pxHTTPServletRequest property; as such it can only be queried from an authentication activity.
To retrieve a header value, use the built-in @java function as shown here:
@java("((javax.servlet.http.HttpServletRequest)tools.getRequestor().getRequestorPage().getObject(\"pxHTTPServletRequest\")).getHeader(\"UserId\")")
Note that this is an example of pulling a "UserId" custom header. Replace "UserId" as appropriate for each of the custom headers we need to read.
Organizing Custom Authentication Rulesets
When writing authentication activities, give careful consideration to the ruleset into which the rules are saved. The authentication activity is called shortly after authentication begins, prior to the requestor page being associated with the user who is attempting to log in. This makes sense, since it is the activity itself that establishes this operator.
Ordinarily, the operator dictates the access group, which dictates the application, which dictates the ruleset, which in turn houses rules like our authentication activities. A "chicken and egg" problem perhaps?
That is, how are rules called before PRPC knows the operator, the Application and its corresponding ruleset stack?
The answer is the use of a "Requestor Type" instance — specifically a "Browser Requestor Type".
It is this instance that is used to point to the access group, rather than an operator.
In fact, there are other rules that have these same "pre-operator" characteristics. Consider the user interface rules that are used to render the login screen when SSO is not used. When a login page is shown, a requestor page is created, and a ruleset stack is assembled based on the access group in the browser requestor type instance.

This ruleset stack continues to be used until some point after the authentication activity is run and the operator is established.
The conclusion here is that the custom authentication activity should be saved into a ruleset that is dedicated for rules called before authentication. Do not mix it with process and business rules related to your application.
Depending on what ruleset is used, update the "Browser" requestor type, and the Access Group it points to accordingly.
Also, add this ruleset to the main application ruleset stack. This ensures that the timeout activity can be found when the application ruleset stack is active.
A final note: the Applies To class of the authentication activity should be "Code-Security".

Let's take a closer look at how token verification works. In the previous example, we covered how our authentication activity performed two checks prior to locating the operator record. One validated against an Application ID; the other validated the token.
Here is the activity we showed before.
Let's explain token generation and verification, and then revisit our token security authentication activity. The token is generated in the external access control system, and then verified in PRPC.
The external system takes a password, one that is also known to PRPC, and combines it with a query string and the timestamp, to make a single string.
A hash is generated from this string; this hash represents the token.
The external system passes the token, the query string, and sender time to PRPC using either the URL or custom headers, as discussed earlier.
PRPC then checks the Sender Time and ensures that the request is recent. If not, the verification fails. If it is considered recent, PRPC generates its own copy of the token, forming it the same way as the external application: the query string, the sender time, and its own copy of the password are combined into a single string, and a hash is generated from it, creating the token.
If the tokens match, the verification passes.
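The generate-and-compare scheme above can be sketched in plain Java. Two assumptions to flag: the hash algorithm (MD5 here) and the concatenation order (query string, then sender time, then password) are illustrative; the actual algorithm and ordering are whatever the standard VerifySecurityToken activity defines, and they must match on both sides.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch of hash-token verification as described above. The algorithm
// (MD5) and the concatenation order are assumptions for illustration;
// both systems must agree on whatever scheme is actually used.
public class TokenCheck {

    // Build the token: hash of query string + sender time + shared password.
    public static String makeToken(String queryString, String senderTime,
                                   String sharedPassword) {
        try {
            String combined = queryString + senderTime + sharedPassword;
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(combined.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Receiving side: recompute the token with its own copy of the
    // password and compare it to the token the external system sent.
    public static boolean verify(String receivedToken, String queryString,
                                 String senderTime, String sharedPassword) {
        return makeToken(queryString, senderTime, sharedPassword)
                .equals(receivedToken);
    }

    public static void main(String[] args) {
        // External system generates the token...
        String token = makeToken("UserId=fred", "20141008022516", "secret");
        // ...the receiver recomputes and compares.
        System.out.println(verify(token, "UserId=fred", "20141008022516", "secret")); // prints true
    }
}
```

Note that a wrong shared password on either side changes the hash entirely, so the comparison fails without the password itself ever crossing the wire; only the token does.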
A PRPC "Application ID" instance plays a critical role in this process.

For clarity, Application ID instances are not the same as "Application" rules, which control the ruleset stack and other settings for the PRPC application you are developing.
"Application ID" data instances are specifically used for token verification. They store the password that is used, as well as the lag time, in seconds, that is considered acceptable for evaluating whether a request is considered recent.
Let's look at an example URL that is generated by the external website to show how these values are passed to PRPC.
http://localhost:8080/prweb/SSOServlet? +User& Activity=&SenderTime=20141008022516&pw=5df2ff78d656114085c96f1a0bd2271d
The "From" parameter indicates the Application ID instance, "SenderTime" is the time that the token was created, and "pw" represents the token. Do not be confused by the fact that this parameter is called "pw". It is not the password; it is the token, which, as discussed earlier, includes the password.
If possible, have the external application name the parameters as described here, making it easier to leverage standard authentication and token verification activities.
Looking back at our activity, we can see that the first step is to open the Application ID instance, which identifies the password and lag time that is used for token verification. As discussed, this is identified by the "From" parameter contained within the URL query string.
Note that the application ID instance is opened into the "AppIDPage" page. Please also use this very same page name, as it is expected by the standard token verification activity, which is called next.
Note that the "VerifySecurityToken-GMT" activity has the current parameter page passed into it. That way, it can readily consume the parameters that come from the URL. If we construct the token as shown in this lesson, it should be possible to call this standard activity. If not, create one based on it.
The step has a transition to check if the token verification succeeded.
If the "errorMessage" parameter is empty, the activity continues. If not, it advances to the error handling step.
The error handling step involves setting the "pyChallenge" parameter and then displaying the error message with HTML.

Let's take a look at how PRPC has improved SAML integration in Pega 7. If you recall the lesson on Authentication Services, you should remember that we could create either a SAML Authentication Service, or a custom one.
This time, let's create a SAML Authentication Service. The SAML Auth Service provides the ability to state whether or not to use SAML authentication, and to provide the Identity Provider (IdP) information.
To save time, we can also import the IdP information using the "Import IdP metadata" link. This opens a popup where we can either choose to get the information from a URL or a File.

Similar to the Identity Provider, the form gives us the ability to configure the Service Provider settings, as well as a link to download the SP metadata.
The last part of the form is where we specify the timeout and authentication activities to use with this SSO connection. The default activities for use with SAML are shown here:
These default activities work like the other authentication activities: their goal is to identify an operator, and then use that operator to set the correct context for the user session.
In the other security lessons, we cover how to access information from an external source, whether that comes from an LDAP connection or from a Single Sign-On process. In this lesson, we're going to take a look at how we can use this information to create or modify our operators.
Operator on The Fly
This process is known as Operator on the Fly, and is built into several sample Authentication Activities already.
In an authentication activity, we need to open an operator record. If one doesn't exist, we can create a new one. The new operator is based on a model operator, and has some of its attributes updated. We'll get back to that in a little bit.
In either case, whether the operator was created, or if an existing one is found, we finish up with updating some of the operator's properties. In the sample activities, this is typically just the user's name. But we can expand this to update any of the information available to us from the external system. For instance, we might need to change the operator's OrgUnit, or perhaps their phone number. In the later parts of this lesson, we'll be using this feature to also update their authorization.

Mapping Attributes from LDAP
When we're using LDAP authentication, we can leverage built-in functionality to set these properties for us. When used with the standard AuthenticationLDAP activity, we can specify the mapping of LDAP attributes to an operator on the Mapping tab of the Authentication Service record.
To do this, we open the Authentication Service record, go to the mapping tab and then specify a relation between LDAP attributes on the left to properties of a Data-Admin-Operator-ID on the right. In this example, we showed how we would map the operator's Organization and OrgUnit.
For other authentication schemes, where we would not need to process LDAP attributes, it's probably easier to just create a simple Data-Transform to handle any mappings.
Using a Model Operator to Create a New Operator
So how about creating a new operator? Most likely we won't have all the necessary information about an operator passed to us during the authentication process, so instead we need to get it from another source. This is where the concept of a model operator comes in.
The model operator is an operator record that doesn't reflect a real user. Or at least it shouldn't, but it's important to note that since the model operator is an operator record, somebody can log in as this operator. Therefore the model operator should always be granted the lowest authorizations by default. We'll override those and provide the correct authorizations when we map them from the external source.
To establish a model operator, we first need to create an operator record. This record can be as generic or detailed as needed. Obviously the more details provided here, the less that needs to come from the external source, but it still needs to be generic enough to apply to all operators.

The same is also true of properties on the Work tab. These should provide as much detail as possible while remaining generic enough to apply to all operators. It's important to note that we need to set the correct organizational unit for this operator, as we'll be creating one model operator per org unit that will be logging in.
A common practice, though not a specified best practice, is to uncheck the 'Operator is available to receive work' property. This prevents any work accidentally getting assigned to this operator, especially if a work routing practice like round robin or load balancing is used.
The next step is to define the model user in the Organization Unit record. We use this in our activity to determine which operator record to open as the model.

Back in our authentication activity, we'll use this model operator as the starting point for a new operator. During authentication, if an operator record is not found we would:
Identify which Org Unit record to use, based on a parameter passed. For example, something like a param.deptNumber could be used to identify the Org Unit based on a Cost Center's Number specified in the record.
Open the Org Unit and retrieve the ID of the model user. Using this ID, open the operator record we created.
Update the properties on the operator. Since later in the authentication activity we'll be updating the properties from our external source, the only two properties we'll need to worry about here are:
o pyUserIdentifier - since this is the key of the operator record it is not typically updated in the later step
o pyOpAvailable - if the common practice of disabling the 'Operator is available to receive work' property is being used, this property is set to 'false'. We will need to set it to 'true' for this user to be able to perform any work
Based on the business' auditing requirements, some additional properties, such as the '.pxCreateDateTime' may also need to be updated.
That's it. We can now let this new operator record flow through the remaining process of updating their information from the external system.
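The steps above can be sketched as a plain-Java simulation. Everything here is hypothetical scaffolding: the Map-backed "database" and all class and method names stand in for Pega's own record handling, which lives inside the authentication activity; only the two property names (pyUserIdentifier, pyOpAvailable) come from the lesson.

```java
import java.util.HashMap;
import java.util.Map;

// Simulation of the operator-on-the-fly flow described above.
// The Map-backed stores and all class/method names are hypothetical,
// not Pega APIs; in PRPC this logic lives in the authentication activity.
public class OperatorOnTheFly {

    // "Database" of operator records, keyed by operator ID.
    static Map<String, Map<String, Object>> operators = new HashMap<>();
    // Org Unit record -> ID of its model operator.
    static Map<String, String> orgUnitModelOperator = new HashMap<>();

    public static Map<String, Object> findOrCreate(String userId, String orgUnit) {
        Map<String, Object> op = operators.get(userId);
        if (op == null) {
            // 1. Look up the model operator named on the Org Unit record.
            String modelId = orgUnitModelOperator.get(orgUnit);
            // 2. Copy the model operator as the starting point.
            op = new HashMap<>(operators.get(modelId));
            // 3. Set the record key and make the operator able to receive work.
            op.put("pyUserIdentifier", userId);
            op.put("pyOpAvailable", true);
            operators.put(userId, op);
        }
        return op;
    }

    // Self-contained demo: seed a model operator, then "log in" a new user.
    public static String demo() {
        orgUnitModelOperator.put("HR", "ModelHRUser");
        Map<String, Object> model = new HashMap<>();
        model.put("pyUserIdentifier", "ModelHRUser");
        model.put("pyOpAvailable", false); // model cannot receive work
        operators.put("ModelHRUser", model);

        Map<String, Object> fred = findOrCreate("fred@co", "HR");
        return fred.get("pyUserIdentifier") + ":" + fred.get("pyOpAvailable");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints fred@co:true
    }
}
```

Copying the model first and then overriding the key and availability mirrors the order of the steps in the list above; the remaining attribute mapping from the external system happens afterwards.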
Access Group Authorizations
By now you should already know that Access Groups drive the rights and privileges a user receives. They specify which applications a user can access, which access roles they are granted, the portals they receive, and a host of other information. This makes them perfect for mapping external authorizations, provided we can classify our users as either having or not having these rights.
There are two approaches to this method. If the external system is capable, the best approach is to directly state the Access Group in the external system. This way, we can leverage direct mapping in the PRPC system. It's important to note that this needs to be set exactly the same in both systems. For example, let's take a look at the access group 'HRServices:Managers'. To be able to specify this access group for a user, the external system needs to provide this value. Then, in our mappings, it's as simple as setting the operator's access group as shown here:

However, we don't always have control over what information we'll receive from the external source. In those cases, which frankly are the more likely case, we need to use an alternative approach, by looking up the Access Group. We first create a lookup to relate the values we expect from the external system with an access group. Here is a simple example that leverages a Decision Table:
Alternatively this could have been accomplished with a decision tree, a map value, a look up against a custom data table, or any other method. But I like to use a decision table when possible because it provides a clean interface to the user. In this particular example, we either provide the manager's access group, if the LDAP group returns that they're a manager, or we default to the user access group. In reality, there is often more than just two choices. Back in our data transform, we then just need to use a function call to evaluate the decision table.
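The decision-table lookup reduces to logic like the sketch below. The "Managers" group and the "HRServices:Managers" access group come from the example above; the method itself and the assumed default of "HRServices:Users" are hypothetical.

```java
// Sketch of the access-group lookup described above. In PRPC this is a
// decision table evaluated from the data transform; the method below is
// hypothetical, and "HRServices:Users" is an assumed default group.
public class AccessGroupLookup {
    public static String accessGroupFor(String ldapGroup) {
        if ("Managers".equals(ldapGroup)) {
            return "HRServices:Managers";
        }
        // Default: the ordinary user access group.
        return "HRServices:Users";
    }

    public static void main(String[] args) {
        System.out.println(accessGroupFor("Managers")); // prints HRServices:Managers
    }
}
```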
Access When Authorizations
The Access Group method works great as long as all the users receive the same rights. But what do we do when we need to deal with granular rights? For example, let's say an operator 'Fred' has been put on probation. While on probation, Fred can only work on the Candidates that he's currently assigned. We could set up a different access group, different access roles, etc... and then create mapping rules to switch Fred's access group, but that seems like a lot of work for a temporary situation like probation.
Instead, we can leverage Access When rules to control this. Using an Access When, we can keep the same generic access group for all the operators and conditionally control Fred's access.
First, we need a way to store the fact that Fred's on probation. We could use an existing property on the Operator record, but for this lesson, let's create a new one. That's right. We can extend the Operator records with our own properties! Let's create one now. Here we have a new true/false property called "OnProbation" defined on the operator.

Now, we need to make sure it's mapped from our external attributes. In this example, we were using an LDAP authentication, so it's just a matter of adding it to the mapping tab of our authentication service. But we could have just as easily integrated this with any of our other mapping approaches.

Now that the property is available for operators, we can define a new Access When rule that looks at it. This Access When rule evaluates whether the person is on probation, and if true, also ensures the assignment belongs to them.
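The condition the Access When expresses amounts to simple boolean logic: a non-probation user may open anything, while a probation user may only open their own assignments. As a sketch (the method and parameter names are illustrative, not actual rule references):

```java
// Sketch of the probation Access When described above: access is granted
// unless the operator is on probation, in which case the assignment must
// be theirs. Names here are illustrative, not actual Pega rule references.
public class ProbationAccess {
    public static boolean mayOpen(boolean onProbation,
                                  String assignedTo, String operatorId) {
        if (!onProbation) {
            return true; // full access for everyone else
        }
        return operatorId.equals(assignedTo); // probation: own work only
    }

    public static void main(String[] args) {
        System.out.println(mayOpen(true, "Fred", "Fred")); // prints true
    }
}
```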

The final step is to update the Access Manager to use the Access When rule we created. Once we switch to the right Access Group, we can change the Open and Modify rights from Full Access to Conditional and then specify our new Access When rule.
That's it. Now whenever Fred, or anybody else who's on probation, logs in they will only be able to access the Candidates that they currently have assigned to them. Once their probation is lifted, and the corresponding flag is changed in the external system, they'll once again have the full access they had originally.

Custom Authorizations
These two approaches of course aren't the only ways to centralize the authorization model. Some clients have custom in-house systems they use instead to accomplish these tasks. Or they might need their authorizations to act against a scale. For example, a client might have a threshold of approved loan amounts. Below that threshold an operator can approve the loan themselves but over the threshold they must seek a manager's approval.
This kind of business logic is often built into the process rather than the various Access rules, but it is still an example of an external authorization. In these cases we follow the same kind of approach: we get the value from the external system, we store it against the user session, preferably attached to the Operator record directly, and then we evaluate it when necessary.
User Passwords
PRPC automatically encrypts the passwords for all operators in the system. If a user's operator record is accessed, all that you'll see is an indecipherable string, like this:
This encryption protects any user that is internally authenticated. If we're using an external authentication, it is up to the external system to protect the user passwords.
Ruleset Passwords
Similar to the user passwords, PRPC automatically encrypts the passwords for locked rulesets. This prevents developers from unlocking earlier versions and making changes to existing rules.
Stored Data
Data that's stored in the PRPC database is not encrypted. The data in the BLOB columns is stored using a compression algorithm that renders the data unreadable to standard DB queries, but that is not the same as encryption. Any PRPC system would be able to uncompress the data back to a readable form.
To provide some level of protection, PRPC offers the ability to encrypt data as a Password, as a TextEncrypted property, or as the entire BLOB. Properties that are encrypted as a Password use PRPC's existing encryption algorithms, the same ones used for user and ruleset passwords. A developer doesn't need any additional configuration and can use this feature out of the box.
The other two options, TextEncrypted and BLOB encryption require us to create a site-specific cipher. This functionality relies on Java Cryptography Extension (JCE) technology built into the PRPC system. We'll get back to that later.
Passwords in command line tools
If we're using any of the command line tools, such as for migration, these tools often require access to either the PRPC system or the Database. The passwords used for these are stored in either the or prconfig.xml files, depending on the tool. These files use clear text for these passwords normally, but they can be encrypted with some configuration.
We can generate passwords for use in the file using a built in cipher in the PRPC system, but the passwords for use in the prconfig.xml rely on first implementing the site-specific cipher referred to above.
To implement many of these encryptions, we need to first implement our own cipher. By using our own cipher, we can ensure that we're the only ones with the proper keys to un-encrypt any data that we've encrypted.
Running the scripts
Before we can get started, though, we need to review how to run the scripts we'll be using for creating this cipher. The script we need to use is called 'runPega'. This script is located in the scripts directory of the PRPC installation media and is available as either runPega.bat or In either case, the arguments are the same. To run the script, we need to specify:
--driver: the path to the JDBC driver
--prweb: the path to the directory used to run this instance
--propfile: the path to a properties file
Java class: the name of the class to execute
Args: any arguments that need to be provided to the class
To simplify executing this script, it is recommended to create a temporary directory with all the necessary files. The name of this directory doesn't matter, but let's refer to it as './OurTempPega' for the rest of this example.
Within this directory, we require a 'WEB-INF' directory, and within WEB-INF we require a 'lib' and a 'classes' directory. So, after creating all of these, we should have:
./OurTempPega/WEB-INF
./OurTempPega/WEB-INF/lib
./OurTempPega/WEB-INF/classes
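The directory layout above can be created with ordinary shell commands, or with a few lines of Java; the sketch below uses the hypothetical './OurTempPega' name from this example.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class MakeRunDir {
    public static void main(String[] args) throws Exception {
        // Hypothetical temporary directory name from the lesson's example
        Path base = Paths.get("./OurTempPega");

        // createDirectories also creates missing parents, so these two calls
        // produce WEB-INF, WEB-INF/lib and WEB-INF/classes in one pass
        Files.createDirectories(base.resolve("WEB-INF/lib"));
        Files.createDirectories(base.resolve("WEB-INF/classes"));

        System.out.println("layout ready: "
                + Files.isDirectory(base.resolve("WEB-INF/classes")));
    }
}
```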
Within the lib directory, we need to copy the required PRPC jar files, along with the corresponding jar files for our JDBC connection.
Within the classes directory, we need to copy these files: prlogging.xml
Once this is all done, we can execute the script by specifying:
./ -driver ./OurTempPega/WEB-INF/lib/<jdbc driver>.jar -prweb ./OurTempPega/WEB-INF -propfile ./OurTempPega/WEB-INF/classes/ <java class> <arguments>
Throughout the rest of this lesson, instead of repeating these arguments every time, we'll just refer to: runPega <java class> <arguments>
However, be sure to include the full set of arguments when actually entering them on the command line.

Creating the Cipher
The first step to creating our own cipher is to determine which ciphers are available on the system. We achieve this by running the script with the JCECapabilities class and an argument of none.
runPega com.pega.pegarules.exec.internal.util.crypto.JCECapabilities none
This provides a wealth of information, but we're only interested in the providers, ciphers, and key generators. We need to pick a cipher that is listed in both the cipher and key generator outputs.
[Screenshots: Cipher list (Cipher.png) and KeyGenerator list (KeyGenerator.png)]
Based on the lists provided in our sample system, as shown here, we can choose from among:
Blowfish
DES
DESede
RC2
These are the only ones that are present in both lists. Any of the others cannot be used because they're missing from one list or the other.
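The same intersection check that JCECapabilities performs can be sketched with the standard java.security API: enumerate every provider's services and keep only the algorithms that appear both as a Cipher and as a KeyGenerator. This is an illustrative stand-in, not the JCECapabilities implementation itself, and the resulting list varies by JDK.

```java
import java.security.Provider;
import java.security.Security;
import java.util.Set;
import java.util.TreeSet;

public class UsableCipherList {
    public static void main(String[] args) {
        // Algorithms advertised as Ciphers, narrowed to those that also
        // have a matching KeyGenerator -- only these are usable here
        Set<String> usable = serviceAlgorithms("Cipher");
        usable.retainAll(serviceAlgorithms("KeyGenerator"));

        System.out.println("DESede usable: " + usable.contains("DESede"));
        System.out.println("Blowfish usable: " + usable.contains("Blowfish"));
    }

    private static Set<String> serviceAlgorithms(String type) {
        Set<String> names = new TreeSet<>();
        for (Provider provider : Security.getProviders()) {
            for (Provider.Service service : provider.getServices()) {
                if (type.equals(service.getType())) {
                    names.add(service.getAlgorithm());
                }
            }
        }
        return names;
    }
}
```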
The next step is to generate a class skeleton for our cipher. We do this by running the same runPega script, but in this case we specify the PRCipherGenerator class. Note that there are no arguments.
runPega com.pega.pegarules.exec.internal.util.crypto.PRCipherGenerator
This command prompts for three inputs: the transform, the key length, and the provider.
Here, we can specify any of the above ciphers for the transform, or we can accept the suggested default. DESede is one of our valid ciphers, so let's go ahead and select that one.
The key length is specified next; again, let's accept the default of 112. The last prompt is the provider; once more, let's accept the default.
This generates an output we can use to create a class file:
Using this starting class, we need to replace two values:
YYYY.ZZZZ needs to be replaced with a new name for the package.
XXXX needs to be replaced with a new name for the class. Note that there are two locations where this needs to be replaced.
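What the generated skeleton ultimately does can be sketched as a plain JCE round trip using the choices we just made (DESede transform, 112-bit key). This is a simplified illustration of the technique, not the actual PRCipherGenerator output, and the class and property names here are invented for the example.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.Arrays;

public class SiteCipherSketch {
    // The same choices we gave PRCipherGenerator at the prompts
    private static final String TRANSFORM = "DESede";
    private static final int KEY_LENGTH = 112;

    public static void main(String[] args) throws Exception {
        // Generate a key matching the chosen transform and length
        KeyGenerator keyGen = KeyGenerator.getInstance(TRANSFORM);
        keyGen.init(KEY_LENGTH);
        SecretKey key = keyGen.generateKey();

        byte[] plain = "a TextEncrypted property value".getBytes("UTF-8");

        // Encrypt, then decrypt with the same key
        Cipher cipher = Cipher.getInstance(TRANSFORM);
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] encrypted = cipher.doFinal(plain);

        cipher.init(Cipher.DECRYPT_MODE, key);
        byte[] decrypted = cipher.doFinal(encrypted);

        System.out.println("round trip ok: " + Arrays.equals(plain, decrypted));
    }
}
```

In the real class, the key would be held by our site-specific cipher so that only our system can decrypt the data.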
Once we've created and updated the class, we need to get it into the rulebase. We do this by using the compileAndLoad script. Similar to runPega, this script takes the three parameters of:
--driver: the path to the JDBC driver for our database
--prweb: the path to our temporary directory
--propfile: the path to the properties file
And the additional parameters of:
--basedir: the path to the directory containing our class
--jarfile: a name for the jar file that will be created to hold our class
--codeset: either one of:
o Pega-EngineCode to put it in the base Pega codeset. This is considered best practice, but the class would need to be manually migrated to any new version of PRPC.
o Customer to put it in the customer codeset. This doesn't need to be manually migrated to a new version of PRPC.
--codesetversion: the latest version of the codeset. If using the Customer codeset, the version is always 06-01-01.
-prprivate: the full package path to the cipher class we've just created. This must be in quotes. Note that this parameter only uses a single '-' instead of the double '--' used by all the other parameters.
So, if we put this together with our sample directories, the command would look like:
./ --driver ./OurTempPega/WEB-INF/lib/<jdbc driver>.jar --prweb ./OurTempPega/WEB-INF --propfile ./OurTempPega/WEB-INF/classes/ --basedir ./OurTempPega --jarfile <nameofjar> --codeset Pega-EngineCode --codesetversion 07-10-09 -prprivate "<pathtoclass>"
And after running the command, we should see an output similar to this:

The last step is to update the system to use the cipher we just created. This is done in the prconfig.xml file.
<env name="crypto/sitecipherclass" value="<the full name of our class>" />
So, if we used the same class from our sample:
<env name="crypto/sitecipherclass" value="com.pega.pegarules.exec.internal.util.crypto.testCipherClass" />
After updating the prconfig.xml, we'll need to restart the server so the changes can take effect.

Now that we've created the cipher and loaded it into the rulebase, we need to start implementing it throughout the various tools we use.
The first one we'll address is the database password in the properties file. This password is currently stored in clear text. To create the encrypted password, we need to use a standalone class called PassGen. This class encrypts a password using a PBEWithMD5AndAES cipher, which is different from our site-specific cipher. This is because our site-specific cipher is stored in the database, so the system needs a standardized, still-protected way to encrypt this password that doesn't depend on database access.
To generate the password, we again use the runPega command, passing the PassGen class and the password to encrypt: runPega <PassGen class> <password>
After running the command, we should see an output similar to this:
We then take this password and replace the clear text password in the properties file with the one shown here. The password for the database is now encrypted.
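The idea behind password-based encryption (PBE) can be illustrated with a minimal JCE sketch. Everything here is an illustrative stand-in, not PassGen's internals: the standard JCE provider ships PBEWithMD5AndDES (used below), and the master secret, salt, and iteration count are invented for the example.

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.PBEParameterSpec;
import java.util.Base64;

public class PasswordEncryptionSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical inputs; PassGen's real key material is internal to PRPC
        char[] masterSecret = "illustrative-master-secret".toCharArray();
        byte[] salt = {0x12, 0x34, 0x56, 0x78, 0x1a, 0x2b, 0x3c, 0x4d};
        int iterations = 1000;

        // Derive a key from the passphrase, then encrypt the DB password with it
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBEWithMD5AndDES");
        SecretKey key = factory.generateSecret(new PBEKeySpec(masterSecret));

        Cipher cipher = Cipher.getInstance("PBEWithMD5AndDES");
        cipher.init(Cipher.ENCRYPT_MODE, key, new PBEParameterSpec(salt, iterations));
        byte[] encrypted = cipher.doFinal("myDbPassword".getBytes("UTF-8"));

        // The Base64 form is the kind of string that replaces a clear text entry
        String encoded = Base64.getEncoder().encodeToString(encrypted);
        System.out.println("encrypted form generated: " + !encoded.isEmpty());
    }
}
```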
Encrypting prconfig.xml
Some of the command line tools rely on prconfig.xml instead of the properties file. To encrypt these connections, we can create a JCE keyring file to hold the passwords. This file is created or updated using the KeyringImpl class. This class takes three parameters:
The path to the keyring file and the keyring file name. The name must match the name given for the database in the prconfig.xml file, i.e., pegarules.
The path to the prconfig.xml file
The path to our running directory
Again, we execute this class using the runPega command:
runPega com.pega.pegarules.exec.internal.util.crypto.KeyringImpl ./OurTempPega/WEB-INF/classes/<keyring file> ./OurTempPega/WEB-INF/classes/prconfig.xml ./OurTempPega
Note that this class doesn't use parameter keywords, so the order is important.
Once we execute this command, we receive a series of prompts to step us through the process:
1. Provide the password to update the keyring file. If this is an existing keyring then the password must match the one given at this step during creation.
2. Confirm the clear text values for the database URL, database user name, and database user password that are read from the prconfig.xml file.
3. We're then prompted to provide the password to encrypt. Accepting the default and pressing enter will work in most cases, as this will be the password that was already read from prconfig.xml.
4. Optionally, we could have entered REMOVE which will remove this entry from the keyring.
5. After this step, we can remove the clear text password from the prconfig.xml file.
The system will now recognize that there is no password provided in the prconfig.xml file and then look for the corresponding pegarules.keyring file to retrieve the password.
Using the keyring in a command line
When running from a command line, we can explicitly state the keyring to use by providing the -Dpegarules.keyring parameter with the full path to the keyring file. In our sample, the parameter would look like this:
-Dpegarules.keyring=./OurTempPega/WEB-INF/classes/pegarules.keyring
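On the Java side, a -D parameter surfaces as a system property. This tiny sketch, with a hypothetical class name, shows how a tool could check whether pegarules.keyring was supplied (run without the -D flag, it reports false).

```java
public class KeyringLocator {
    public static void main(String[] args) {
        // Run as: java -Dpegarules.keyring=/path/to/pegarules.keyring KeyringLocator
        String path = System.getProperty("pegarules.keyring");
        System.out.println("keyring set: " + (path != null));
    }
}
```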
Additional steps for BIX
If we're encrypting these values to use with a BIX implementation, then there are a few extra steps we need to take to also encrypt the PRPC user and password BIX uses to access the system.
We follow the same steps as we did for encrypting the database password, but we include a fourth parameter of bix. Using our same sample, the command would look like:
runPega com.pega.pegarules.exec.internal.util.crypto.KeyringImpl ./OurTempPega/WEB-INF/classes/<keyring file> ./OurTempPega/WEB-INF/classes/prconfig.xml ./OurTempPega bix
This tells the system we're encrypting for BIX and triggers some additional prompts.
1. Enter bix username:
2. Enter bix password:
We provide the username and password when prompted, then press Enter to complete the process.
The last step is to update the BIX command line to use the keyring using the same -Dpegarules.keyring parameter we've covered.