
Recommending an initial system

Because you are unlikely to know the capacity requirements of every process application that will eventually be hosted by your CoE environment at the time that environment must be designed, the best approach might be to build a system with known capacity and scalability attributes. With this approach, you can grow your environment as required over the life of the process application system.

Although you do not yet know the characteristics of the process applications that will eventually be hosted by your CoE environment, you might know certain things about your organization. Imagine a scenario in which a company has fully embraced BPM and every employee will, in some way, interact with the system. One way to make a rough calculation is to consider the maximum number of concurrent users that will ever use the system. For a successful BPM program, this number is every employee. If your BPM program targets only a certain department, it is every employee of that department.

Testing revealed that, for a mildly complex process application, 900 concurrent users can be supported on a 2-core X86 2.6 GHz system. The same workload scales to 1800 concurrent users on a similar 4-core system.

Table 3.4.3/1  Hardware recommendations by number of concurrent users

Number of concurrent users    Hardware
Fewer than 900                X86 2.6 GHz 2-core
900 - 1800                    X86 2.6 GHz 4-core
Greater than 1800             Several options exist. BPM has demonstrated that it can support more than 10,000 concurrent users on an 8-core IBM POWER7® system running the IBM AIX® operating system. This approach can be combined with horizontal scalability to address very large concurrent user requirements.

To be conservative: if you have fewer than 1000 potential business process users, a 4-core system might be an appropriate starting point for your initial CoE platform.
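
For illustration, the sizing guidance in Table 3.4.3/1 can be expressed as a simple lookup. The following Python sketch encodes the user-count thresholds from the table; the function name and surrounding structure are assumptions for illustration only and are not part of any BPM product or tool.

# Illustrative sizing helper based on the thresholds in Table 3.4.3/1.
# The user-count tiers come from the table; the function itself is only a
# sketch, not part of any IBM BPM tooling.

def recommend_initial_hardware(peak_concurrent_users: int) -> str:
    """Map an estimated peak concurrent user count to a starting hardware tier."""
    if peak_concurrent_users < 900:
        return "X86 2.6 GHz 2-core"
    if peak_concurrent_users <= 1800:
        return "X86 2.6 GHz 4-core"
    # Beyond 1800 concurrent users, consider a larger system (for example,
    # an 8-core POWER7 server) or horizontal scaling across additional nodes.
    return "Larger system or horizontal scaling (see Table 3.4.3/1)"

if __name__ == "__main__":
    # About 1000 potential users: the conservative starting point is 4 cores.
    print(recommend_initial_hardware(1000))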

Recall the availability requirements, which are best addressed by a 2-node cluster in which the nodes run on separate hardware. This approach effectively doubles the hardware requirements: instead of a single 4-core machine, you have two 4-core machines in a cluster. Although such a cluster can handle twice the load, you should design the system so that, in the event of a node failure, the surviving node can handle 100% of the load.
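
As a rough illustration of this sizing rule, the following sketch computes how many nodes are needed so that the remaining nodes can still carry the full load after a single node failure. The per-node figure of 1800 concurrent users on a 4-core system comes from the testing results cited earlier; the function and its name are hypothetical.

import math

# Minimal sketch: size a cluster so that the surviving nodes can carry the
# full load after one node fails. The 1800-users-per-4-core-node figure is
# taken from the testing results cited earlier; everything else here is
# illustrative.

def nodes_required(peak_concurrent_users, users_per_node=1800):
    """Return a node count such that N - 1 surviving nodes carry the full load."""
    nodes_for_capacity = math.ceil(peak_concurrent_users / users_per_node)
    return nodes_for_capacity + 1  # one extra node tolerates a single failure

# Example: 1000 users fit on one 4-core node, so a 2-node cluster provides
# both the required capacity and the availability headroom.
print(nodes_required(1000))  # prints 2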

A 2-node cluster running on 4-core hardware can provide most organizations with more than enough initial capacity to support a successful CoE program.

Medium-term solution-based partitioning

As the “starter” system begins to mature and take on load from hosting more BPM solutions, it might make sense to consider partitioning resources according to specific categorization classes (such as criticality, security, region, or department).

No rule says that all process applications must be deployed to a single process server runtime environment. As you approach the capacity of your existing production process server environment, you must consider expanding into multiple production server environments. The most natural way to approach this option is to roll new process applications into new environments. Other options are to create departmental, regional, or security-specific runtime environments.

In fact, if your foreseeable process applications are expected to be strictly departmental, now might be the time to create departmentally partitioned runtime environments. As of BPM 7.5, it is not practical to migrate a business process application and all of its in-flight data from one process server to another. Therefore, it is even more important to make the correct platform decision now, before you reach a capacity limit and face the difficult task of migrating a process application and its data to another platform.

Long-term capacity planning

A shared-infrastructure BPM deployment will eventually consist of multiple BPM process applications. It is important that these applications coexist without affecting the performance of the neighboring applications that share the same BPM platform. Your BPM runtime environment has finite resources, including CPU, memory, network bandwidth, database throughput, and I/O capacity. These resources must all be shared by the hosted business process applications and used in a way that does not introduce performance degradation or failures. Therefore, some level of performance testing and capacity planning is required to project the potential impact of a new process application on your BPM runtime environments.

A strong suggestion is to make a capacity planning exercise part of your business process onboarding practice. Further, business processes are never static; they change over time and take different paths based on market, seasonal, and business conditions. By their nature, business processes are in a constant state of evolution. Therefore, be sure to make capacity planning a regular part of your release cycle.

A reasonable capacity planning exercise naturally calls for regular capacity testing to measure the limits of the existing system against the needs of the planned future system. The best possible capacity test might be one that exercises every deployed process application concurrently. Although it is theoretically possible to test all process applications concurrently under a single massive test plan, doing so is not always practical; we are often constrained by time, budget, and resources. Compared to testing the entire system of process applications, a typical single-process-application BPM deployment is relatively easy to test. This chapter introduces a methodology for testing individual process applications and extrapolating those results to determine plausible system impact.
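
To make the extrapolation step concrete, the sketch below sums per-application resource utilization measured in isolated tests and compares the projection with a headroom target. The application names, utilization figures, and the 80% target are hypothetical assumptions for illustration; a real exercise would also track memory, database throughput, and I/O, as noted above.

# Minimal sketch of the extrapolation idea: measure each process application
# in isolation, then project the combined demand on the shared platform.
# All names and figures below are illustrative assumptions, not measurements.

# Average CPU utilization (fraction of the shared platform) observed for each
# application during an isolated capacity test at expected production load.
measured_cpu_utilization = {
    "claims_intake": 0.22,
    "order_fulfillment": 0.31,
    "hr_onboarding": 0.12,
}

HEADROOM_TARGET = 0.80  # keep projected utilization below 80% of capacity

projected = sum(measured_cpu_utilization.values())
print("Projected combined CPU utilization: {:.0%}".format(projected))

if projected > HEADROOM_TARGET:
    print("Projected load exceeds the headroom target; plan additional "
          "capacity or partition applications across runtime environments.")
else:
    print("The shared environment has headroom for the planned applications.")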