Version 5.1
Document Number GC09-3955-01
Second edition (November 2003)
This edition applies to:
and to all subsequent releases and modifications until otherwise indicated in new editions.
Order publications through your IBM representative or through the IBM branch office serving your locality.
(C) Copyright International Business Machines Corporation 2003. All rights reserved.
U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Edge component concepts and discussions
Building networks with Edge components
This book, WebSphere(R) Application Server Concepts, Planning, and Installation for Edge Components, serves as an introduction to the WebSphere Application Server Edge components. It provides high-level product overviews, detailed functionality discussions for key components, edge-of-the-network scenarios, installation and initial configuration information, and demonstration networks.
WebSphere Application Server Concepts, Planning, and Installation for Edge Components is written for experienced network and system administrators who are familiar with their operating systems and with providing Internet services. Prior exposure to the WebSphere Application Server or to WebSphere Application Server Edge components is not required.
Accessibility features help a user who has a physical disability, such as restricted mobility or limited vision, to use software products successfully. These are the major accessibility features in WebSphere Application Server, Version 5.1:
This documentation uses the following typographical and keying conventions.
Table 1. Conventions used in this book
Convention | Meaning |
---|---|
Bold | When referring to graphical user interfaces (GUIs), bold face indicates menus, menu items, labels, buttons, icons, and folders. It also can be used to emphasize command names that otherwise might be confused with the surrounding text. |
Monospace | Indicates text you must enter at a command prompt. Monospace also indicates screen text, code examples, and file excerpts. |
Italics | Indicates variable values that you must provide (for example, you supply the name of a file for fileName). Italics also indicates emphasis and the titles of books. |
Ctrl-x | Where x is the name of a key, indicates a control-character sequence. For example, Ctrl-c means hold down the Ctrl key while you press the c key. |
Return | Refers to the key labeled with the word Return, the word Enter, or the left arrow. |
% | Represents the UNIX command-shell prompt for a command that does not require root privileges. |
# | Represents the UNIX command-shell prompt for a command that requires root privileges. |
C:\ | Represents the Windows command prompt. |
Entering commands | When instructed to "enter" or "issue" a command, type the command and then press Return. For example, the instruction "Enter the ls command" means type ls at a command prompt and then press Return. |
[ ] | Enclose optional items in syntax descriptions. |
{ } | Enclose lists from which you must choose an item in syntax descriptions. |
| | Separates items in a list of choices enclosed in { } (braces) in syntax descriptions. |
... | Ellipses in syntax descriptions indicate that you can repeat the preceding item one or more times. Ellipses in examples indicate that information was omitted from the example for the sake of brevity. |
This part introduces the WebSphere Application Server Edge components, Caching Proxy and Load Balancer, and discusses their integration with Application Server. It also defines the components of Caching Proxy and Load Balancer. In addition, this section introduces other related WebSphere family products.
This part contains the following chapters:
Introducing WebSphere Application Server Edge components
Edge components and the WebSphere family
More information on Application Server and Edge components
WebSphere is Internet infrastructure software that enables companies to develop, deploy, and integrate next-generation e-business applications such as those for business-to-business e-commerce. WebSphere middleware supports business applications from simple Web publishing through enterprise-scale transaction processing.
As the foundation of the WebSphere platform, WebSphere Application Server offers a comprehensive set of middleware that enables users to design, implement, deploy, and manage business applications. These applications can range from a simple Web site storefront to a complete revision of an organization's computing infrastructure.
Processor-intensive features, such as personalization, offer a competitive advantage to every e-business. However, habitually relegating these features to central servers can prevent valuable functions from scaling to Internet proportions. Consequently, with the constant addition of new Web applications, a business's Internet infrastructure must grow in scope and impact. In addition, reliability and security are extraordinarily important to an e-business. Even a minimal service disruption can result in a loss of business.
Edge components (formerly Edge Server) are now a part of the WebSphere Application Server offering. Edge components can be used in conjunction with WebSphere Application Server to control client access to Web servers and to enable business enterprises to provide better service to users who access Web-based content over the Internet or a corporate intranet. Using Edge components can reduce Web server congestion, increase content availability, and improve Web server performance. As the name indicates, Edge components usually run on machines that are close (in a network configuration sense) to the boundary between an enterprise's intranet and the Internet.
The WebSphere Application Server includes the Caching Proxy and Load Balancer Edge components.
Caching Proxy reduces bandwidth use and improves a Web site's speed and reliability by providing a point-of-presence node for one or more back-end content servers. Caching Proxy can cache and serve static content and content dynamically generated by WebSphere Application Server.
The proxy server intercepts data requests from a client, retrieves the requested information from content-hosting machines, and delivers that content back to the client. Most commonly, the requests are for documents stored on Web server machines (also called origin servers or content hosts) and delivered using the Hypertext Transfer Protocol (HTTP). However, you can configure the proxy server to handle other protocols, such as File Transfer Protocol (FTP) and Gopher.
The proxy server stores cacheable content in a local cache before delivering it to the requester. Examples of cacheable content include static Web pages and JavaServer Pages files that contain dynamically generated, but infrequently changing, information. Caching enables the proxy server to satisfy subsequent requests for the same content by delivering it directly from the local cache, which is much quicker than retrieving it again from the content host.
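The cache-then-forward flow described above can be sketched in a few lines. This is an illustrative model only, not the actual Caching Proxy implementation; the class and parameter names (`MiniCachingProxy`, `default_ttl`, the `fetch_from_origin` callable) are hypothetical.

```python
import time

class CacheEntry:
    def __init__(self, body, ttl):
        self.body = body
        self.expires = time.time() + ttl

class MiniCachingProxy:
    """Illustrative cache-then-forward logic (hypothetical, not the real product)."""
    def __init__(self, fetch_from_origin, default_ttl=300):
        self.fetch = fetch_from_origin   # callable: url -> (body, cacheable_flag)
        self.default_ttl = default_ttl
        self.cache = {}

    def get(self, url):
        entry = self.cache.get(url)
        if entry and entry.expires > time.time():
            return entry.body, "HIT"       # served directly from the local cache
        body, cacheable = self.fetch(url)  # retrieve from the content host
        if cacheable:
            self.cache[url] = CacheEntry(body, self.default_ttl)
        return body, "MISS"
```

A second request for the same URL within the time-to-live window is answered from the cache without contacting the content host, which is the source of the bandwidth and latency savings the text describes.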
Plug-ins for Caching Proxy add functionality to the proxy server.
You can further extend the functions of Caching Proxy by writing custom plug-in modules that use its application programming interface (API). The API is flexible, easy to use, and platform independent. The proxy performs a sequence of steps for each client request it processes, and a plug-in can modify or replace designated steps within that request-processing workflow, such as client authentication or request filtering; you can also invoke more than one plug-in for a particular step. The powerful Transmogrify interface, for example, provides access to HTTP data and enables substitution or transformation of URLs and Web content.
Load Balancer creates edge-of-network systems that direct network traffic flow, reducing congestion and balancing the load on various other services and systems. Load Balancer provides site selection, workload management, session affinity, and transparent failover.
Load Balancer is installed between the Internet and the enterprise's back-end servers, which can be content hosts or Caching Proxy machines. Load Balancer acts as the enterprise's single point-of-presence node on the Internet, even if the enterprise uses multiple back-end servers because of high demand or a large amount of content. You can also guarantee high availability by installing a backup Load Balancer to take over if the primary one fails temporarily.
Load Balancer intercepts data requests from clients and forwards each request to the server that is currently best able to fill the request. In other words, it balances the load of incoming requests among a defined set of machines that service the same type of requests. Load Balancer can distribute requests to many types of servers, including WebSphere Application Servers and Caching Proxy machines. Load balancing can be customized for a particular application or platform by using custom advisors. Special purpose advisors are available to obtain information for load balancing WebSphere Application Servers.
If the Content Based Routing component is installed together with the Caching Proxy, HTTP and HTTPS requests can even be distributed based on URLs or other administrator-determined characteristics, eliminating the need to store identical content on all back-end servers. The Dispatcher component can also provide the same function for HTTP requests.
Load balancing improves your Web site's availability and scalability by transparently clustering content servers, including HTTP servers, application servers, and proxy servers, which are surrogate content servers. Availability is achieved through parallelism, load balancing, and failover support. When a server is down, business is not interrupted. An infrastructure's scalability is greatly improved because back-end processing power can be added transparently.
Load Balancer includes the following components:
For all Internet services, such as HTTP, FTP, HTTPS, and Telnet, the Dispatcher component performs load balancing for servers within a local area network (LAN) or wide area network (WAN). For HTTP services, Dispatcher can perform load balancing of servers based on the URL content of the client request.
The Dispatcher component enables stable, efficient management of a large, scalable network of servers. With Dispatcher, you can link many individual servers into what appears to be a single virtual server. Your site thus appears as a single IP address to the world.
For HTTP and HTTPS services, the Content Based Routing component performs load balancing for servers based on the content of the client request. The Content Based Routing component works in conjunction with the Application Server Caching Proxy component.
The Site Selector component enhances a load-balancing system by allowing it to act as the point-of-presence node for a network and load balance incoming requests by mapping DNS names to IP addresses. In conjunction with Metric Server, Site Selector can monitor the level of activity on a server, detect when a server is the least heavily loaded, and detect a failed server.
The Cisco CSS Controller component generates server-weighting metrics that are sent to a Cisco CSS switch for server selection, load optimization, and fault tolerance.
The Nortel Alteon Controller component generates server-weighting metrics that are sent to a Nortel Alteon switch for server selection, load optimization, and fault tolerance.
The Metric Server component runs as a daemon on a load-balanced server and provides information about system loads to Load Balancer components.
The IBM WebSphere family is designed to help users realize the promise of e-business. It is a set of software products that helps users develop and manage high-performance Web sites and integrate Web sites with new or existing non-Web business information systems.
The WebSphere family consists of WebSphere Application Server, including the Edge components, and other WebSphere family software that is tightly integrated with the WebSphere Application Server and enhances its performance. For an overview of WebSphere Application Server and its components, see Introducing WebSphere Application Server Edge components.
Tivoli Access Manager (formerly Tivoli Policy Director) is available separately. It provides access control and centralized security for existing Web applications and offers one-time authentication capability with access to multiple Web resources. A Caching Proxy plug-in exploits Access Manager's security framework, enabling the proxy server to use Access Manager's integrated authorization or authentication services.
WebSphere Portal Server (available separately) offers a framework to meet the presentation, security, scalability, and availability issues associated with portals. Using Portal Server, companies can build their own custom portal Web site to serve the needs of employees, business partners, and customers. Users can sign on to the portal and receive personalized Web pages that provide access to the information, people, and applications they need. This personalized single point of access to all necessary resources reduces information overload, accelerates productivity, and increases Web site usage.
WebSphere Portal Server runs in a WebSphere Application Server cluster to achieve scalability and reliability. The Application Server Load Balancer component can also be used for additional load balancing and high availability.
WebSphere Site Analyzer (available separately) helps enterprises to anticipate capacity and performance problems. With Site Analyzer, Caching Proxy and Load Balancer logs and other manageability aids can be used to anticipate the demand for additional resources by monitoring, analyzing, and reporting your Web site usage. In addition, Site Analyzer manageability components assist users who install and upgrade Edge components, manage and store configurations, operate Edge components remotely, and view and report events.
WebSphere Transcoding Publisher (available separately) can convert a Web page for viewing on a mobile device, such as an Internet-capable phone, translate Web content to the user's preferred national language (by invoking WebSphere Translation Server), and convert markup languages. Transcoding Publisher enhances Caching Proxy's capabilities by allowing it to serve content for different devices and users. After accessing content from a Web server, Caching Proxy's Transmogrify interface can be configured to invoke Transcoding Publisher to transform the data and tag it for variant caching and possible reuse. At Caching Proxy's post-authentication interface, Transcoding Publisher then checks the proxy server for content matching the user and device requirements and, if a match is found, serves the content from the proxy server's cache.
The following documentation specific to the WebSphere Application Server Edge Components is available in the Edge Components InfoCenter.
Other WebSphere Application Server documentation is available from the WebSphere Application Server library page.
Self-help support information on Edge Components is available from the WebSphere Application Server support page or from the Edge Components InfoCenter.
The following Web sites provide information about Edge Components and related topics:
This part includes detailed discussions that highlight some of the functionality available with Edge components. See Introducing WebSphere Application Server Edge components for an overview of the Application Server's Caching Proxy component.
This part contains the following chapters:
Caching Proxy's caching functionality helps to minimize network bandwidth utilization and ensure that end users receive faster, more reliable service. This is accomplished because the caching performed by the proxy server offloads back-end servers and peering links. Caching Proxy can cache static content and content dynamically generated by WebSphere Application Server. To provide enhanced caching, Caching Proxy also functions in conjunction with the Application Server Load Balancer component. See Introducing WebSphere Application Server Edge components for an introduction to these systems.
Caching Proxy machines are located between the Internet and the enterprise's content hosts. Acting as a surrogate, the proxy server intercepts user requests arriving from the Internet, forwards them to the appropriate content host, caches the returned data, and delivers that data to the users across the Internet. Caching enables Caching Proxy to satisfy subsequent requests for the same content directly from the cache, which is much quicker than retrieving it again from the content host. Caching behavior can be configured based on when information will expire, how large the cache can grow, and when the information should be updated. Faster download times for cache hits mean better quality of service for customers. Figure 1 depicts this basic Caching Proxy functionality.
Figure 1. Basic proxy configuration
Legend:
1--Client 2--Internet 3--Router/Gateway 4--Caching
Proxy 5--Cache 6--Content host
In this configuration, the proxy server (4) intercepts requests whose URLs include the content host's host name (6). When a client (1) requests file X, the request crosses the Internet (2) and enters the enterprise's internal network through its Internet gateway (3). The proxy server intercepts the request, generates a new request with its own IP address as the originating address, and sends the new request to the content host (6).
The content host returns file X to the proxy server rather than directly to the end user. If the file is cacheable, Caching Proxy stores a copy in its cache (5) before passing it to the end user. The most prominent example of cacheable content is static Web pages; however, Caching Proxy also provides the ability to cache and serve content dynamically generated by WebSphere Application Server.
To provide more advanced caching functionality, use Caching Proxy in conjunction with Application Server's Load Balancer component. By integrating caching and load-balancing capabilities, you can create an efficient, highly manageable Web performance infrastructure.
Figure 2 depicts how you can combine Caching Proxy with Load Balancer to deliver Web content efficiently even in circumstances of high demand. In this configuration, the proxy server (4) is configured to intercept requests whose URLs include the host name for a cluster of content hosts (7) being load-balanced by Load Balancer (6).
Figure 2. Caching Proxy acting as proxy server for a load-balanced cluster
Legend:
1--Client 2--Internet 3--Router/Gateway 4--Caching
Proxy 5--Cache 6--Load
Balancer 7--Content host
When a client (1) requests file X, the request crosses the Internet (2) and enters the enterprise's internal network through its Internet gateway (3). The proxy server intercepts the request, generates a new request with its own IP address as the originating address, and sends the new request to Load Balancer at the cluster address. Load Balancer uses its load-balancing algorithm to determine which content host is currently best able to satisfy the request for file X. That content host returns file X directly to the proxy server rather than routing it back through Load Balancer. The proxy server determines whether to cache the file and delivers it to the end user in the same way as described previously.
Advanced caching functionality is also provided by Caching Proxy's Dynamic Caching plug-in. When used in conjunction with WebSphere Application Server, Caching Proxy has the ability to cache, serve, and invalidate dynamic content in the form of JavaServer Pages (JSP) and servlet responses generated by a WebSphere Application Server.
Generally, dynamic content with an indefinite expiration time must be marked "do not cache" because the standard time-based cache expiration logic does not ensure its timely removal. The Dynamic Caching plug-in's event-driven expiration logic enables content with an indefinite expiration time to be cached by the proxy server. Caching such content at the edge of the network relieves content hosts from repeatedly invoking an Application Server to satisfy requests from clients. This can offer the following benefits:
Servlet response caching is ideal for dynamically produced Web pages that expire based on application logic or an event such as a message from a database. Although such a page's lifetime is finite, the time-to-live value cannot be set at the time of creation because the expiration trigger cannot be known in advance. When the time-to-live for such pages is set to zero, content hosts incur a high penalty when serving dynamic content.
The responsibility for synchronizing the dynamic cache of Caching Proxy and Application Server is shared by both systems. For example, a public Web page dynamically created by an application that gives the current weather forecast can be exported by Application Server and cached by Caching Proxy. Caching Proxy can then serve the application's execution results repeatedly to many different users until notified that the page is invalid. Content in Caching Proxy's servlet response cache is valid until the proxy server removes an entry because the cache is congested, the default timeout set by the ExternalCacheManager directive in Caching Proxy's configuration file expires, or Caching Proxy receives an Invalidate message directing it to purge the content from its cache. Invalidate messages originate at the WebSphere Application Server that owns the content and are propagated to each configured Caching Proxy.
Caching Proxy offers other key advanced caching features:
Network performance is affected by the introduction of Caching Proxy functionality. Use Caching Proxy alone or in conjunction with Load Balancer to improve the performance of your network. See Introducing WebSphere Application Server Edge components for an introduction to these systems.
Caching Proxy's performance within your enterprise is only as good as the hardware on which it runs and the overall architecture of the system into which it is introduced. To optimize network performance, model hardware and overall network architecture to the characteristics of proxy servers.
This section discusses network hardware issues to consider when introducing Caching Proxy functionality into your network.
A large amount of memory must be dedicated to the proxy server. Caching Proxy can consume 2 GB of virtual address space when a large memory-only cache is configured. Memory is also needed for the kernel, shared libraries, and network buffers. Therefore, it is possible to have a proxy server that consumes 3 or 4 GB of physical memory. Note that a memory-only cache is significantly faster than a raw disk cache, and this configuration change alone can be considered a performance improvement.
It is important to have a large amount of disk space on the machine on which Caching Proxy is installed. This is especially true when disk caches are used. Reading and writing to a hard disk is an intensive process for a computer. Although Caching Proxy's I/O procedures are efficient, the mechanical limitations of hard drives can limit performance when Caching Proxy is configured to use a disk cache. The disk I/O bottleneck can be alleviated by practices such as using multiple hard disks for raw cache devices and log files, and by using disk drives with fast seek times, high rotational speeds, and high transfer rates.
Network requirements such as the speed, type, and number of NICs, and the speed of the network connection to the proxy server affect the performance of Caching Proxy. It is generally in the best interest of performance to use two NICs on a proxy server machine: one for incoming traffic and one for outgoing traffic. A single NIC is likely to be saturated by HTTP request and response traffic alone. Furthermore, NICs should run at 100 Mbps or faster, and they should always be configured for full-duplex operation, because automatic negotiation between routing and switching equipment can cause errors and hinder throughput. Finally, the speed of the network connection is very important. For example, you cannot expect to service a high request load and achieve optimal throughput if the connection to the Caching Proxy machine is a saturated T1 carrier.
The central processing unit (CPU) of a Caching Proxy machine can possibly become a limiting factor. CPU power affects the amount of time it takes to process requests and the number of CPUs in the network affects scalability. It is important to match the CPU requirements of the proxy server to the environment, especially to model the peak request load that the proxy server will service.
For overall performance, it is generally beneficial to scale the architecture and not just add individual pieces of hardware. No matter how much hardware you add to a single machine, that hardware still has a maximum level of performance.
This section discusses network architecture issues to take into consideration when introducing Caching Proxy functionality into your network.
If your enterprise's Web site is popular, there can be greater demand for its content than a single proxy server can satisfy effectively, resulting in slow response times. To optimize network performance, consider including clustered, load-balanced Caching Proxy machines or using a shared cache architecture with Remote Cache Access (RCA) in your overall network architecture.
One way to scale the architecture is to cluster proxy servers and use the Load Balancer component to balance the load among them. Clustering proxy servers is a beneficial design consideration not only for performance and scalability reasons but for redundancy and reliability reasons as well. A single proxy server represents a single point of failure; if it fails or becomes inaccessible because of a network failure, users cannot access your Web site.
Also consider a shared cache architecture with RCA. A shared cache architecture spreads the total virtual cache among multiple Caching Proxy servers that usually use an intercache protocol like the Internet Cache Protocol (ICP) or the Cache Array Routing Protocol (CARP). RCA is designed to maximize clustered cache hit ratios by providing a large virtual cache.
Performance benefits result from using an RCA array of proxy servers as opposed to a single stand-alone Caching Proxy or even a cluster of stand-alone Caching Proxy machines. For the most part, the performance benefits are caused by the increase in the total virtual cache size, which maximizes the cache hit ratio and minimizes cache inconsistency and latency. With RCA, only one copy of a particular document resides in the cache. With a cluster of proxy servers, the total cache size is increased, but multiple proxy servers are likely to fetch and cache the same information. The total cache hit ratio is therefore not increased.
RCA is commonly used in large enterprise content-hosting scenarios. However, RCA's usefulness is not limited to extremely large enterprise deployments. Consider using RCA if your network's load requires a cluster of cache servers and if the majority of requests are cache hits. Depending on your network setup, RCA does not always improve enterprise performance, because it increases the number of TCP connections that each array member uses. An RCA member is not only responsible for servicing URLs for which it has the highest score; it must also forward requests to other members or clusters when it receives a request for a URL for which it does not have the highest score. This means that any given member of an RCA array might have more open TCP connections than it would if it operated as a stand-alone server.
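The "highest score" routing mentioned above is the key idea behind hash-based intercache protocols such as CARP: every member deterministically computes a score for each URL, and the member with the highest score owns (and caches) that URL, so only one copy exists across the array. The following is a simplified sketch of that scheme, not the product's actual implementation; the hash construction shown is an assumption for illustration.

```python
import hashlib

def owner(url, members):
    """CARP-style routing sketch: the member with the highest score for a
    URL services and caches it, so each document is cached exactly once
    across the array (illustrative hash construction, not the real protocol)."""
    def score(member, url):
        digest = hashlib.md5((member + url).encode()).hexdigest()
        return int(digest, 16)
    return max(members, key=lambda m: score(m, url))
```

Because the scoring is deterministic, every member computes the same owner for a given URL; a member that receives a request for a URL it does not own forwards the request to the owner, which is why RCA members can hold more open TCP connections than stand-alone proxies.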
Major contributions to improved performance stem from Caching Proxy's caching capabilities. However, the cache of the proxy server can become a bottleneck if it is not properly configured. To determine the best cache configuration, a significant effort must be made to analyze traffic characteristics. The type, size, amount, and attributes of the content affect the performance of the proxy server in terms of the time it takes to retrieve documents from origin servers and the load on the server. When you understand the type of traffic that Caching Proxy is going to proxy or serve from its cache, then you can factor in those characteristics when configuring the proxy server. For example, knowing that 80% of the objects being cached are images (*.gif or *.jpg) and are approximately 200 KB in size can certainly help you tune caching parameters and determine the size of the cache. Additionally, understanding that most of the content is personalized dynamic pages that are not candidates for caching is also pertinent to tuning Caching Proxy.
Analyzing traffic characteristics enables you to determine whether using a memory or disk cache can optimize your cache's performance. Also, familiarity with your network's traffic characteristics enables you to determine whether improved performance can result from using the Caching Proxy's dynamic caching feature.
Disk caches are appropriate for sites with large amounts of information to be cached. For example, if the site content is large (greater than 5 GB) and there is an 80 to 90% cache hit rate, then a disk cache is recommended. However, a memory (RAM) cache is faster, and there are many scenarios in which a memory-only cache is feasible even for large sites. For example, if Caching Proxy's cache hit rate is not as important or if a shared cache configuration is being used, then a memory cache is practical.
Caching Proxy can cache and invalidate dynamic content (JSP and servlet results) generated by the WebSphere Application Server dynamic cache, providing a virtual extension of the Application Server cache into network-based caches. Enabling the caching of dynamically generated content is beneficial to network performance in an environment where there are many requests for dynamically produced public Web pages that expire based on application logic or an event such as a message from a database. The page's lifetime is finite, but an expiration trigger cannot be set at the time of its creation; therefore, hosts without a dynamic caching and invalidation feature must designate such a page as having a time-to-live value of zero.
If such a dynamically generated page will be requested more than once during its lifetime by one or more users, then dynamic caching provides a valuable offload and reduces the workload on your network's content hosts. Dynamic caching also improves network performance, providing faster responses to users by eliminating network delays and reducing bandwidth usage through fewer Internet traversals.
Functioning in conjunction with content hosts, such as WebSphere Application Server, or with the Application Server Caching Proxy component, the Application Server Load Balancer component enables you to enhance your network's availability and scalability. (See Introducing WebSphere Application Server Edge components for an introduction to these Edge components.) Load Balancer is used by enterprise networks and is installed between the Internet and the enterprise's back-end servers. Load Balancer acts as the enterprise's single point-of-presence on the Internet, even if the enterprise uses multiple back-end servers because of high demand or a large amount of content.
Availability is achieved through load balancing and failover support.
Load balancing improves your Web site's availability and scalability by transparently clustering proxy servers and application servers. An IT infrastructure's scalability is greatly improved because back-end processing power can be added transparently.
You can satisfy high demand by duplicating content on multiple hosts, but then you need a way to balance the load among them. Domain Name Service (DNS) can provide basic round-robin load balancing, but there are several situations in which it does not perform well.
A more sophisticated solution for load balancing multiple content hosts is to use Load Balancer's Dispatcher component as depicted in Figure 3. In this configuration, all of the content hosts (the machines marked 5) store the same content. They are defined to form a load-balanced cluster, and one of the network interfaces of the Load Balancer machine (4) is assigned a host name and IP address dedicated to the cluster. When an end user working on one of the machines marked 1 requests file X, the request crosses the Internet (2) and enters the enterprise's internal network through its Internet gateway (3). The Dispatcher intercepts the request because its URL is mapped to the Dispatcher's host name and IP address. The Dispatcher determines which of the content hosts in the cluster is currently best able to service the request, and forwards the request to that host, which, when the MAC forwarding method is configured, returns file X directly to the client (that is, file X does not pass through Load Balancer).
Figure 3. Load balancing multiple content hosts
Legend:
1--Client 2--Internet 3--Router/Gateway 4--Dispatcher 5--Content host
By default, the Dispatcher uses round-robin load balancing like DNS, but even so it addresses many of DNS's inadequacies. Unlike DNS, it tracks whether a content host is unavailable or inaccessible and does not continue to direct clients to an unavailable content host. Further, it takes the current load on the content hosts into account by tracking new, active, and finished connections. You can further optimize load balancing by activating Load Balancer's optional advisor and manager components, which track a content host's status even more accurately and incorporate the additional information into the load-balancing decision process. The manager enables you to assign different weights to the different factors used in the decision process, further customizing load balancing for your site.
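The kind of decision described above can be sketched as follows. This is a hypothetical model, not Load Balancer's actual algorithm or weights: unavailable hosts are skipped, and among the remaining hosts the one with the lowest weighted connection load is chosen.

```python
# Hypothetical sketch of a weighted load-balancing decision: skip hosts
# that are unavailable, then prefer the host with the lowest weighted
# load. The metrics and weights here are illustrative, not the actual
# factors used by Load Balancer's manager component.
def pick_host(hosts, w_active=0.6, w_new=0.4):
    candidates = [h for h in hosts if h["available"]]
    if not candidates:
        return None
    return min(candidates,
               key=lambda h: w_active * h["active_conns"] + w_new * h["new_conns"])

hosts = [
    {"name": "hostA", "available": True,  "active_conns": 10, "new_conns": 2},
    {"name": "hostB", "available": False, "active_conns": 0,  "new_conns": 0},
    {"name": "hostC", "available": True,  "active_conns": 3,  "new_conns": 1},
]
best = pick_host(hosts)   # hostB is skipped; hostC has the lightest load
```

The manager's configurable weights correspond to the `w_active` and `w_new` parameters here: changing them shifts how much each factor influences the decision.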
Load Balancer's Dispatcher can also perform load balancing for multiple Caching Proxy machines. If your enterprise's Web site is popular, there can be greater demand for its contents than a single proxy server can satisfy effectively, potentially degrading the proxy server's performance.
You can have multiple Caching Proxy systems performing proxy functions for a single content host (similar to the configuration depicted in Figure 1), but if your site is popular enough to need multiple proxy servers, then you probably also need multiple content hosts whose loads are balanced by Load Balancer. Figure 4 depicts this configuration. The Dispatcher marked 4 load balances a cluster of two proxy servers (5), and the Dispatcher marked 7 load balances a cluster of three content hosts (8).
Figure 4. Load balancing multiple proxy servers and content hosts
Legend:
1--Client 2--Internet 3--Router/Gateway 4--Dispatcher 5--Proxy server 6--Cache 7--Dispatcher 8--Content host
The cluster host name of the Dispatcher marked 4 is the host name that appears in URLs for the enterprise's Web content (that is, it is the name of the Web site as visible on the Internet). The cluster host name for the Dispatcher marked 7 is not visible on the Internet and so can be any value you wish. As an example, for the ABC Corporation an appropriate host name for the Dispatcher marked 4 is www.abc.com, whereas the Dispatcher marked 7 can be called something like http-balancer.abc.com.
Suppose that a browser on one of the client machines marked 1 needs to access file X stored on the content servers marked 8. The HTTP request crosses the Internet (2) and enters the enterprise's internal network at the gateway (3). The router directs the request to the Dispatcher marked 4, which passes it to the proxy server (5) that is currently best able to handle it according to the load-balancing algorithm. If the proxy server has file X in its cache (6), it returns it directly to the browser, bypassing the Dispatcher marked 4.
If the proxy server does not have a copy of file X in its cache, it creates a new request that has its own host name in the header's origin field and sends that to the Dispatcher marked 7. The Load Balancer determines which content host (8) is currently best able to satisfy the request, and directs the request there. The content host retrieves file X from storage and returns it directly to the proxy server, bypassing the Dispatcher marked 7. The proxy server caches file X if appropriate, and forwards it to the browser, bypassing the Dispatcher marked 4.
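The proxy server's hit/miss path described in the preceding two paragraphs can be sketched as follows. The function names are illustrative; `fetch_from_origin` stands in for the hop through the second Dispatcher to a content host.

```python
# Sketch of the proxy's cache hit/miss flow described above: serve from
# the cache when possible, otherwise fetch from the (load-balanced)
# origin and cache the result if it is cacheable. Names are illustrative.
def proxy_request(url, cache, fetch_from_origin, cacheable):
    if url in cache:
        return cache[url], "hit"          # returned directly, bypassing Load Balancer
    body = fetch_from_origin(url)         # new request toward the content hosts
    if cacheable(url):
        cache[url] = body                 # store a copy for later requests
    return body, "miss"

cache = {}
origin = lambda url: f"content of {url}"
cacheable = lambda url: not url.startswith("/orders")   # dynamic content stays uncached

body, status = proxy_request("/catalog/X", cache, origin, cacheable)
body2, status2 = proxy_request("/catalog/X", cache, origin, cacheable)
```

The first request is a miss and populates the cache; the second is a hit and never reaches the content hosts.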
Load Balancer acts as a single point-of-presence for your enterprise's content hosts. This is beneficial because you advertise the cluster host name and address in DNS, rather than the host name and address of each content host, which provides a level of protection against casual attacks and provides a unified feel for your enterprise's Web site. To further enhance Web site availability, configure another Load Balancer to act as a backup for the primary Load Balancer, as depicted in Figure 5. If one Load Balancer fails or becomes inaccessible due to a network failure, end users can still reach the content hosts.
Figure 5. Using a primary and backup Load Balancer to make Web content highly available
Legend:
1--Client 2--Internet 3--Router/Gateway 4--Primary Dispatcher 5--Backup Dispatcher 6--Content host
In the normal case, a browser running on one of the machines marked 1 directs its request for a file X to the cluster host name that is mapped to the primary Load Balancer (4). The Dispatcher routes the request to the content host (6) selected on the basis of the Dispatcher's load-balancing criteria. The content host sends file X directly to the browser, routing it through the enterprise's gateway (3) across the Internet (2) but bypassing Load Balancer.
The backup Dispatcher (5) does not perform load balancing as long as the primary one is operational. The primary and backup Dispatchers track each other's status by periodically exchanging messages called heartbeats. If the backup Dispatcher detects that the primary has failed, it automatically takes over the responsibility for load balancing by intercepting requests directed to the primary's cluster host name and IP address.
It is also possible to configure two Dispatchers for mutual high availability. In this case, each actively performs load balancing for a separate cluster of content hosts, simultaneously acting as the backup for its colleague.
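The heartbeat-based takeover described above can be sketched as follows. The timeout and interval values are illustrative assumptions, not Load Balancer's actual defaults.

```python
import time

# Sketch of heartbeat-based failover: the backup considers the primary
# failed once no heartbeat has arrived for longer than a timeout, and
# then takes over the cluster address. Values here are illustrative.
class BackupDispatcher:
    def __init__(self, timeout=2.0):
        self.timeout = timeout
        self.last_heartbeat = time.monotonic()
        self.active = False               # True once this backup has taken over

    def on_heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def check(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_heartbeat > self.timeout:
            self.active = True            # start intercepting cluster traffic
        return self.active

backup = BackupDispatcher(timeout=2.0)
backup.on_heartbeat()
assert backup.check(backup.last_heartbeat + 1.0) is False   # primary still healthy
assert backup.check(backup.last_heartbeat + 3.0) is True    # heartbeats missed: take over
```

In the mutual configuration, each Dispatcher would run both roles at once: actively balancing its own cluster while monitoring heartbeats for its colleague's.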
The Dispatcher does not generally consume many processing or memory resources, and other applications can run on the Load Balancer machine. If it is vital to minimize equipment costs, it is even possible to run the backup Dispatcher on one of the machines in the cluster it is load balancing. Figure 6 depicts such a configuration, in which the backup Dispatcher runs on one of the content hosts (5) in the cluster.
Figure 6. Locating the backup Load Balancer on a content host
Legend:
1--Client 2--Internet 3--Router/Gateway 4--Primary Dispatcher 5--Backup Dispatcher and content host 6--Content host
Functioning in conjunction with the Application Server Caching Proxy component, the Application Server Load Balancer component enables you to distribute requests to multiple back-end servers that host different content. (See Introducing WebSphere Application Server Edge components for an introduction to these Edge components.)
If Load Balancer's Content Based Routing (CBR) component is installed together with Caching Proxy, HTTP requests can be distributed based on URL or other administrator-determined characteristics, eliminating the need to store identical content on all back-end servers.
Using CBR is especially appropriate if your Web servers need to perform several different functions or offer several types of services. For example, an online retailer's Web site must both display its catalog, a large portion of which is static, and accept orders, which means running an interactive application such as a Common Gateway Interface (CGI) script to accept item numbers and customer information. Often it is more efficient to have two different sets of machines perform the distinct functions, and to use CBR to route the different types of traffic to different machines. Similarly, an enterprise can use CBR to provide better service to paying customers than to casual visitors to its Web site, by routing the paid requests to more powerful Web servers.
CBR routes requests based on rules that you write. The most common type is the content rule, which directs requests based on the path name in the URL. For example, the ABC Corporation can write rules that direct requests for the URL http://www.abc.com/catalog_index.html to one cluster of servers and http://www.abc.com/orders.html to another cluster. There are also rules that route requests based on the IP address of the client who sent them or on other characteristics. For a discussion, see the WebSphere Application Server Load Balancer Administration Guide chapters about configuring CBR and about advanced Load Balancer and CBR functions. For syntax definitions for the rules, see the WebSphere Application Server Load Balancer Administration Guide appendix about CBR rule types.
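The effect of content rules can be sketched as follows. This is an illustrative model only; the actual rule syntax and evaluation order are defined in the Load Balancer Administration Guide. The cluster names and patterns here are hypothetical.

```python
import re

# Illustrative model of CBR content rules: each rule pairs a URL-path
# pattern with a target cluster, and the first matching rule wins. The
# real rule syntax is documented in the Load Balancer Administration
# Guide; the patterns and cluster names below are hypothetical.
RULES = [
    (re.compile(r"^/catalog"), "catalog-cluster"),
    (re.compile(r"^/orders"),  "orders-cluster"),
]

def route(path, rules=RULES, default="default-cluster"):
    for pattern, cluster in rules:
        if pattern.search(path):
            return cluster
    return default

assert route("/catalog_index.html") == "catalog-cluster"
assert route("/orders.html") == "orders-cluster"
```

A rule based on the client's IP address would fit the same shape, with a predicate on the source address in place of the path pattern.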
Figure 7 depicts a simple configuration in which Load Balancer's CBR component and Caching Proxy are installed together on the machine marked 4 and route requests to three content hosts (6, 7, and 8) that house different content. When an end user working on one of the machines marked 1 requests file X, the request crosses the Internet (2) and enters the enterprise's internal network through its Internet gateway (3). The proxy server intercepts the request and passes it to the CBR component on the same machine, which parses the URL in the request and determines that content host 6 houses file X. The proxy server generates a new request for file X, and if its caching feature is enabled, determines whether the file is eligible for caching when host 6 returns it. If the file is cacheable, the proxy server stores a copy in its cache (5) before passing it to the end user. Routing for other files works in the same manner: requests for file Y go to content host 7, and requests for file Z go to content host 8.
Figure 7. Routing HTTP requests with CBR
Legend:
1--Client 2--Internet 3--Router/Gateway 4--Caching Proxy and Load Balancer's CBR component 5--Cache 6, 7, 8--Content host
Figure 8 depicts a more complex configuration suitable for an online retailer. Load Balancer's CBR component and the proxy server are installed together on the machine marked 4 and route requests to two Load Balancer machines. The Load Balancer machine marked 6 load balances a cluster of content hosts (8) that house the mostly static content of the retailer's catalog, whereas the Load Balancer marked 7 load balances a cluster of Web servers that handle orders (9).
When an end user working on one of the machines marked 1 accesses the URL for the retailer's catalog, the request crosses the Internet (2) and enters the enterprise's internal network through its Internet gateway (3). The proxy server intercepts the request and passes it to the CBR component on the same machine, which parses the URL and determines that the Load Balancer machine marked 6 handles that URL. The proxy server creates a new access request and sends it to the Load Balancer, which determines which of the content hosts marked 8 is currently best able to service the request (based on criteria that you define). That content host passes the catalog content directly to the proxy server, bypassing Load Balancer. As in the preceding example, the proxy server determines whether the content is cacheable and stores it in its cache (5) if appropriate.
The end user places an order by accessing the retailer's ordering URL, presumably via a hyperlink in the catalog. The request travels the same path as the catalog access request, except that the CBR component on machine 4 routes it to the Load Balancer machine marked 7. Load Balancer forwards it to the most suitable of the Web servers marked 9, which replies directly to the proxy server. Because ordering information is generally dynamically generated, the proxy server probably does not cache it.
Figure 8. Load balancing HTTP requests routed with CBR
Legend:
1--Client 2--Internet 3--Router/Gateway 4--Caching Proxy and Load Balancer's CBR component 5--Cache 6, 7--Load Balancer 8--Content host 9--Web server
Load Balancer's CBR function supports cookie affinity. This means that the identity of the server that serviced an end user's first request is recorded in a special packet of data (a cookie) included in the server's response. When the end user accesses the same URL again within a period of time that you define, and the request includes the cookie, CBR routes the request to the original server rather than reapplying its standard rules. This generally improves response time if the server has stored information about the end user that it does not have to obtain again (such as a credit card number).
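The affinity behavior can be sketched as follows. The cookie contents, names, and time window here are illustrative assumptions; the real mechanism is configured through CBR.

```python
import time

# Sketch of cookie affinity: record the chosen server in a cookie, and
# on later requests within the affinity window honor the cookie instead
# of re-running the routing rules. The window and names are illustrative.
AFFINITY_SECONDS = 300

def choose_server(request_cookie, pick_by_rules, now=None):
    now = now if now is not None else time.time()
    if request_cookie is not None:
        server, stamped = request_cookie
        if now - stamped <= AFFINITY_SECONDS:
            return server, request_cookie       # sticky: same server as before
    server = pick_by_rules()                    # standard rule-based decision
    return server, (server, now)                # set or refresh the cookie

server1, cookie = choose_server(None, lambda: "serverA")
server2, _ = choose_server(cookie, lambda: "serverB")
assert server1 == server2 == "serverA"          # second request stays on serverA
```

Once the window expires, the cookie is ignored and the standard rules apply again, so a long-idle user may land on a different server.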
This part discusses business scenarios that use IBM WebSphere Application Server Edge components. These are architecturally sound and tested solutions that can provide excellent performance, availability, scalability, and reliability.
This part contains the following chapters:
Business-to-client banking solution
The basic electronic commerce Web site is a business-to-consumer network. In the first phase of Internet growth, businesses typically focus on simply creating a Web presence. Corporate information and product catalogs are converted to digital formats and made available on the Web site. Shopping can be available by providing e-mail addresses, telephone and fax numbers, and even automated forms. True online shopping, however, is not available. All transactions have an inherent latency because humans need to process the order.
In phase two, businesses eliminate this latency and streamline their sales operation by implementing secure shopping carts for direct online purchases. Synchronization with warehouse databases and integration with banking systems are crucial to completing these sales transactions. Product that is not available cannot be sold, and a customer's account cannot be charged for that item. Likewise, a product cannot be taken from inventory and shipped to a customer until a valid financial transaction occurs.
In the third phase, the corporate Web site evolves into a dynamic presentation site where the consumer begins to take on the aspects of a client and is provided with personalized content.
Figure 9 shows a small commercial Web site designed to provide efficient catalog browsing. All client requests pass through the firewall to a Dispatcher that routes the requests to a cluster of proxy servers with active caches that act as surrogate servers to the Web servers. Metric servers are colocated with the proxy servers to provide load-balancing data to the Dispatcher. This arrangement reduces the network load on the Web servers and isolates them from direct contact with the Internet.
Figure 9. Business to consumer network (Phase 1)
Figure 10 shows the second phase of evolution for a commercial Web site designed to provide efficient catalog browsing and fast, secure shopping carts for potential customers. All customer requests are routed to the appropriate branch of the network by a Dispatcher that separates requests based on Internet protocol. HTTP requests go to the static Web site; HTTPS requests go to the shopping network. The primary, static Web site is still served by a cluster of proxy servers with active caches that acts as a surrogate for the Web servers. This part of the network mirrors the network in the first phase.
The electronic commerce portion of the Web site is also served by a cluster of proxy servers. However, the Caching Proxy nodes are enhanced with several plug-in modules. The SSL handshaking is offloaded to a cryptographic hardware card, and authentication is performed through the Access Manager (formerly Policy Director) plug-in. A Dynamic Caching plug-in reduces the workload on the WebSphere Application Server by storing common data. A plug-in on the application server invalidates objects in the Dynacache when necessary.
All shopping cart applications are tied into the customer database that was used to authenticate the user. This prevents the user from having to enter personal information into the system twice, once for authentication and once for shopping.
This network divides traffic according to client usage, removing the processor-intensive SSL authentication and electronic commerce shopping carts from the primary Web site. This dual-track Web site allows the network administrator to tune the various servers to provide excellent performance based on the role of the server within the network.
Figure 10. Business to consumer network (Phase 2)
Figure 11 shows the third phase of the evolution of a business-to-consumer network, with the static Web adopting a dynamic presentation method. The proxy server cluster has been enhanced to support the caching of dynamic Web content and assembly of page fragments written to comply with the Edge Side Includes (ESI) protocol. Rather than using server-side include mechanisms to build Web pages on the content servers and then propagating these client-specific, noncacheable, pages through the entire network, ESI mechanisms permit pages to be assembled from cached content at the edge of the network, thereby reducing bandwidth consumption and decreasing response time.
ESI mechanisms are crucial in this third-phase scenario, where each client receives a personalized home page from the Web site. The building blocks of these pages are retrieved from a series of WebSphere Application Servers. Application servers containing sensitive business logic and ties to secure databases are isolated behind a firewall.
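The assembly step can be sketched as follows. The marker syntax mimics the ESI `<esi:include>` element, but the fragment paths and page content here are hypothetical, and a real ESI processor handles far more (alternatives, error handling, caching rules per fragment).

```python
import re

# Sketch of edge-side page assembly in the spirit of ESI: a cacheable
# page template contains include markers that the edge server resolves
# per client, so only the small personal fragments vary per user.
# Fragment paths and contents are hypothetical.
TEMPLATE = ('Welcome! <esi:include src="/fragments/greeting"/> '
            '<esi:include src="/fragments/offers"/>')

def assemble(template, fetch_fragment):
    # Replace each include marker with the fragment it names.
    return re.sub(r'<esi:include src="([^"]+)"/>',
                  lambda m: fetch_fragment(m.group(1)),
                  template)

fragments = {"/fragments/greeting": "Hello, Jane.",
             "/fragments/offers": "Today's offers: ..."}
page = assemble(TEMPLATE, fragments.__getitem__)
```

Because the template and the shared fragments are cacheable at the edge, only the personalized fragments ever travel from the application servers, which is the bandwidth and latency saving the paragraph above describes.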
Figure 11. Business to consumer network (Phase 3)
Figure 12 shows an efficient online-banking solution that is similar to the business-to-consumer network described in Business-to-consumer network. All client requests pass through the firewall to a Dispatcher that separates traffic according to Internet protocol. HTTP requests pass to a cluster of proxy servers with active caches that act as surrogate servers for the Web servers. Metric servers are colocated with the proxy servers to provide load-balancing data to the Dispatcher. This arrangement reduces the network load on the Web servers and creates an additional buffer between them and the Internet.
HTTPS requests are passed to a secure network designed to provide clients with personal financial information and permit online-banking transactions. A cluster of enhanced proxy servers provides scalability to the site. These proxy servers support the caching of dynamic Web content and assembly of page fragments written to comply with the Edge Side Includes (ESI) protocol. A cryptographic hardware card manages SSL handshakes, which significantly reduces the processing required of the proxy server host, and an Access Manager (formerly Policy Director) directs client authentication.
A collection of application server clusters distributes the processing of requests by separating the business logic, contained in EJB components, from the presentation layer, contained in servlets and JSP files. Each of these clusters is managed by a separate session server.
Figure 12. Business to consumer banking solution
Figure 13 shows a Web portal network designed to support a heavy volume of traffic while providing each client with personalized content. To minimize the processing load on the various servers, no part of the network carries SSL traffic. Because the portal does not deliver sensitive data, security is not an important issue. It is important for the databases containing client IDs, passwords, and settings to remain moderately secure and uncorrupted, but this requirement does not impair the performance of the rest of the Web site.
All client requests pass through the firewall to a Dispatcher that balances the network load across a cluster of proxy servers with active caches that act as surrogate servers for the Web servers. Metric servers are colocated with the proxy servers to provide load-balancing data to the Dispatcher.
The actual dynamic Web site is a cluster of application servers that generate ESI fragments to be passed to the proxy servers for assembly. Because of the reduced security concerns, each application server performs all necessary functions for constructing the Web site. All application servers are identical. If one application server goes out of service, the session server can route requests to the other servers, providing high availability for the entire site. This configuration also allows the Web site to be expanded rapidly when traffic spikes occur, for example, when the portal hosts a special event. Additional proxy servers and application servers can quickly be configured into the site.
All static content, such as image files and boilerplate text, is stored on separate Web servers, allowing it to be updated as necessary without risking corruption to the more complex application servers.
This part discusses the hardware and software requirements for Edge components and provides procedures for installing them.
This part contains the following chapters:
Requirements for Edge components
Installing Edge components using the setup program
Installing Caching Proxy using system packaging tools
Installing Load Balancer using system packaging tools
This topic provides hardware and software requirements for Edge components and guidelines for using Web browsers with the Caching Proxy Configuration and Administration forms and with the Load Balancer online help.
IMPORTANT: For the most current information on hardware and software requirements, link to the following Web page: http://www.ibm.com/software/webservers/appserv/doc/latest/prereq.html.
This section describes the hardware and software prerequisites for WebSphere Application Server, Version 5.1 Edge components.
This section describes the hardware and software prerequisites for installing the Caching Proxy on a machine that runs the AIX operating system.
This section describes the hardware and software prerequisites for installing Load Balancer components on a machine that runs the AIX operating system.
This section describes the hardware and software prerequisites for installing the Caching Proxy on a machine that runs the HP-UX operating system.
The latest available version of the fix pack, HP-UX 11i Quality Pack (GOLDQPK11i), is required. More information and download instructions for the latest Quality Pack are available at the HP Support Plus Web site: http://www.software.hp.com/SUPPORT_PLUS/qpk.html.
This section describes the hardware and software prerequisites for installing Load Balancer components on a machine that runs the HP-UX operating system.
The latest available version of the fix pack, HP-UX 11i Quality Pack (GOLDQPK11i), is required. More information and download instructions for the latest Quality Pack are available at the HP Support Plus Web site: http://www.software.hp.com/SUPPORT_PLUS/qpk.html.
This section describes the hardware and software prerequisites for installing the Caching Proxy on a machine that runs the Linux operating system.
The table below lists the supported systems for Linux. For updates and additional information on hardware and software prerequisites, refer to the following Web page: http://www.ibm.com/software/webservers/appserv/doc/latest/prereq.html.

Table 2. Supported Linux systems

Operating System | Linux for Intel (32-bit mode) | Linux for S/390 (31-bit mode) | Linux for PowerPC (64-bit mode) |
---|---|---|---|
Red Hat Enterprise Linux Advanced Server 2.1 (2.4 kernel) | x | | |
SuSE Linux Enterprise Server 8.0 (2.4 kernel) | | x | x |
SuSE Linux Enterprise Server 8.0 SP2a (2.4 kernel) | x | | |
UnitedLinux 1.0 | x | x | x |
UnitedLinux 1.0 SP2a | x | | |
This section describes the hardware and software prerequisites for installing Load Balancer components on a machine that runs the Linux operating system.
The table below lists the supported systems for Linux. For updates and additional information on hardware and software prerequisites, refer to the following Web page: http://www.ibm.com/software/webservers/appserv/doc/latest/prereq.html.

Table 3. Supported Linux systems

Operating System | Linux for Intel (32-bit mode) | Linux for S/390 (31-bit mode) | Linux for PowerPC (64-bit mode) |
---|---|---|---|
Red Hat Enterprise Linux Advanced Server 2.1 (2.4 kernel) | x | | |
SuSE Linux Enterprise Server 8.0 (2.4 kernel) | | x | x |
SuSE Linux Enterprise Server 8.0 SP2a (2.4 kernel) | x | | |
UnitedLinux 1.0 | x | x | x |
UnitedLinux 1.0 SP2a | x | | |
export JAVA_HOME=/opt/IBMJava2-14/jre
export PATH=$JAVA_HOME/bin:$PATH
This section describes the hardware and software prerequisites for installing the Caching Proxy on a machine that runs the Solaris operating system.
For Solaris 8, the install wizard requires the linker to be at level 109147-16 or later and the shared libraries for C++ to be at level 108434-8 or later.
For the most consistent behavior, download and apply the most-recent Solaris patches from Sun Microsystems at http://sunsolve.sun.com.
This section describes the hardware and software prerequisites for installing Load Balancer components on a machine that runs the Solaris operating system.
For Solaris 8, the install wizard requires the linker to be at level 109147-16 or later and the shared libraries for C++ to be at level 108434-8 or later.
For the most consistent behavior, download and apply the most-recent Solaris patches from Sun Microsystems at http://sunsolve.sun.com.
This section describes the hardware and software prerequisites for installing the Caching Proxy on a machine that runs a Windows operating system.
This section describes the hardware and software prerequisites for installing Load Balancer components on a machine that runs a Windows operating system.
Minimum browser requirements
To configure the Caching Proxy using the Configuration and Administration forms, your browser must do the following:
Recommended browsers
The following browsers were used for testing the Configuration and Administration forms. Note that during testing of the National Language Version (NLV) software, only browsers running on Microsoft(R) Windows systems were used.
Table 4. Test supported browsers (for Caching Proxy)
Operating system | Browser |
---|---|
AIX | Mozilla 0.9.9 and Mozilla 1.0 |
HP-UX | Netscape Communicator v4.79 |
Microsoft Windows 2000 | Microsoft IE v5.5.x and v6.0.x |
Microsoft Windows 2003 | Microsoft IE v6.0.x |
Red Hat Linux | Mozilla 0.9.9 and 1.0 |
SuSE Linux / United Linux | Mozilla 1.0.1 |
Solaris 9 | Netscape Communicator v4.78 |
In order to properly display forms, the operating system that is actually displaying the form (the one on which the browser resides) must contain the appropriate font sets for the language in which the form is written. The browser interface, however, does not necessarily need to be in the same language as the forms.
For example, a Chinese version of the proxy server is running on a Solaris 9 system. A Netscape browser with an English-language interface is loaded onto the Solaris host. This browser can be used locally to edit the Configuration and Administration forms. (Forms are served to the browser in the character set used by the proxy server--in this example, Chinese; however, the forms might not be displayed correctly if the browser and its underlying operating system are not properly configured to display the character set sent by the proxy server.)
Alternatively, if a Windows workstation with Chinese language support is available to remotely connect to the proxy server, it is possible to load a Chinese version of a Netscape browser onto the Windows workstation and use this browser to enter values in the forms. This second solution has the advantage of maintaining a consistent language interface for the administrator.
The font sets specific to operating systems greatly affect the display of various languages, particularly of double-byte characters, within the browsers. For example, a particular Chinese font set on AIX does not look exactly the same as a Chinese font set on Windows platforms. This causes some irregularities in the appearance of HTML text and Java applets within the Configuration and Administration forms. For the best appearance, only browsers running on Windows operating systems are recommended.
Notes about Netscape 4.x browsers
Netscape browsers have limitations associated with displaying the Configuration and Administration forms that include, but are not necessarily limited to, the following:
Netscape 6 is not supported
Netscape 6 is not supported for use with the Caching Proxy Configuration and Administration forms.
KDE Konqueror is not supported
KDE Konqueror is not supported for use with the Caching Proxy Configuration and Administration forms.
To use the Load Balancer online help, your browser must support the following:
Using a browser that does not support these requirements can result in incorrectly formatted pages and functions that might not work correctly. The following browsers support these requirements:
The following browsers were used for testing the Load Balancer online helps. Note that during testing of the National Language Version (NLV) software, only browsers running on Microsoft Windows systems were used.
Table 5. Test supported browsers (for Load Balancer)
Operating system | Browser |
---|---|
AIX | Netscape Communicator v4.79 (AIX 5.2), Netscape Communicator v4.76i (AIX 5.1) |
HP-UX | Netscape Communicator v4.7.x |
Microsoft Windows 2000 | Microsoft IE v6.0.x |
Microsoft Windows 2003 | Microsoft IE v6.0.x |
Red Hat Linux | Mozilla 1.0.1-2.2.1 |
SuSE Linux / United Linux | Mozilla 1.0.1-35 |
Solaris 8 and 9 | Netscape Communicator v4.7.x |
This topic provides instructions for installing Edge components using the setup program.
IMPORTANT: After installation, scripts within the Caching Proxy packaging attempt to start the proxy server using the default configuration. If port 80 is in use, such as by another Web server, the proxy server will fail to start.
Use the setup program to install Edge components onto your Windows system as follows:
Use the setup program to install Edge components onto your UNIX system as follows:
# ./install

The Welcome window opens.
The setup program begins installing the selected Edge components and required packages.
This topic provides instructions for installing Caching Proxy using the system packaging tools.
IMPORTANT: After installation, scripts within the Caching Proxy packaging attempt to start the proxy server using the default configuration. If port 80 is in use, such as by another Web server, the proxy server will fail to start.
Using your operating system's package installation system, install the packages in the order listed in Table 6. The following procedure details the typical steps necessary to complete this task.
su - root
Password: password
cd mount_point/package_directory/
On AIX:
installp -acXd ./filename
On HP-UX:
swinstall -s ./filename
On Linux:
rpm -i ./filename
On Solaris:
pkgadd -d ./filename
Table 6. Caching Proxy components
Component | Packages installed (in recommended order) |
---|---|
Caching Proxy | |
Edge component documentation | doc-lang1 |

Notes:
Table 7. AIX, HP-UX, and Solaris package file names
Generic package name | Solaris file name | AIX fileset | HP-UX fileset |
---|---|---|---|
admin | WSESadmin | wses_admin.rte | WSES-ADMIN |
cp | WSEScp | wses_cp.base | WSES-CP |
doc-lang | WSESdoclang1 | wses_doc.lang2 | WSES-DOC-lang3 |
gskit7 | gsk7bas | gskkm.rte | gsk7bas |
icu | WSESicu | wses_icu.rte | WSES-ICU |
msg-cp-lang | WSEScpmlang1 | wses_cp.msg.lang2 .base | WSES-cpmlang3 |
Notes:
Table 8. Linux package file names
Generic package name | Linux file name |
---|---|
admin | WSES_Admin_Runtime-5.1.0-0.hardw1.rpm |
cp | WSES_CachingProxy-5.1.0-0.hardw1.rpm |
doc-lang | WSES_Doc_lang2-5.1.0-0.hardw1.rpm |
gskit7 | gsk7bas.rpm |
icu | WSES_ICU_Runtime-5.1.0-0.hardw1.rpm |
msg-cp-lang | WSES_CachingProxy_msg_lang2-5.1.0-0.hardw1.rpm |
Notes:
This topic documents the installation of Load Balancer on AIX, HP-UX, Linux, Solaris, and Windows systems:
Notes:
To ensure that the Load Balancer components use the correct version of Java when multiple versions are installed, do the following:
Edit the following script files for the components of Load Balancer that you are upgrading:
For example, on Windows systems, if Java 1.4.1 is installed in C:\Program Files\IBM\Java141\jre\bin, in the dsserver.cmd file, change javaw to the following:
C:\Program Files\IBM\Java141\jre\bin\javaw
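The edit described above can also be scripted. The following sketch replaces a bare javaw at the start of a line in a start script with a full JRE path; the path shown is an example location, not a required one, and you should keep a backup of the original file:

```shell
# Sketch: point a Load Balancer start script at a specific JRE by
# replacing a bare "javaw" at the start of a line with a full path.
# FULL_JAVAW is an assumed example location; the file is rewritten
# through a temporary copy, so back up the original first.
FULL_JAVAW='/opt/IBMJava141/jre/bin/javaw'   # assumed JRE location
fix_java_path() {                            # usage: fix_java_path script-file
    sed "s|^javaw|$FULL_JAVAW|" "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}
```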
Table 9 lists the AIX filesets for Load Balancer.
Table 9. AIX filesets for Load Balancer

Load Balancer components | AIX filesets |
---|---|
Load Balancer components (with messages) | ibmlb.component.rte ibmlb.msg.language.lb |
Device Driver | ibmlb.lb.driver |
Base | ibmlb.base.rte |
Administration (with messages) | ibmlb.admin.rte ibmlb.msg.language.admin |
Documentation (with messages) | ibmlb.doc.rte ibmlb.msg.language.doc |
License | ibmlb.lb.license |
Metric Server | ibmlb.ms.rte |
Notes:
Before you install Load Balancer for AIX, ensure the following:
installp -u ibmlb
or, for previous versions, enter the following command:
installp -u ibmnd
To uninstall specific filesets, list them specifically instead of specifying the package name ibmlb.
When you install the product, you are given the option of installing any or all of the following:
It is recommended that you use SMIT to install Load Balancer for AIX because SMIT ensures that all messages are installed automatically.
mkdir /cdrom
mount -v cdrfs -p -r /dev/cd0 /cdrom
Table 10. AIX installation commands
Packages | Commands |
---|---|
Load Balancer components (with msgs). Includes: Dispatcher, CBR, Site Selector, Cisco CSS Controller, and Nortel Alteon Controller | installp -acXgd device ibmlb.component.rte ibmlb.msg.language.lb |
Device Driver | installp -acXgd device ibmlb.lb.driver |
Documents (with messages) | installp -acXgd device ibmlb.doc.rte ibmlb.msg.language.doc |
Base | installp -acXgd device ibmlb.base.rte |
Administration (with messages) | installp -acXgd device ibmlb.admin.rte ibmlb.msg.language.admin |
License | installp -acXgd device ibmlb.lb.license |
Metric Server | installp -acXgd device ibmlb.ms.rte |
installp -ld device
To unmount the CD, enter the following command:
unmount /cdrom
Verify that the product is installed by entering the following command:
lslpp -h | grep ibmlb
If you installed the full product, this command returns the following:
ibmlb.admin.rte
ibmlb.base.rte
ibmlb.doc.rte
ibmlb.ms.rte
ibmlb.msg.language.admin.rte
ibmlb.msg.language.doc
ibmlb.msg.language.lb.rte
ibmlb.lb.driver
ibmlb.lb.license
ibmlb.component.rte
Load Balancer installation paths include the following:
This section explains how to install Load Balancer on HP-UX using the product CD.
Before beginning the installation procedure, ensure that you have root authority to install the software.
If an earlier version is installed, uninstall that copy before installing the current version. First, ensure that you have stopped both the executor and the server. Then uninstall Load Balancer as described in Instructions for uninstalling the packages.
Table 11 lists the names of the installation packages for Load Balancer and the order in which to install them using the system's package installation tool.
Table 11. HP-UX package installation details for Load Balancer
Package description | HP-UX package name |
---|---|
Base | ibmlb.base |
Administration | ibmlb.admin |
Load Balancer License | ibmlb.lic |
Load Balancer components | ibmlb.component |
Documentation | ibmlb.lang |
Metric Server | ibmlb.ms |
Notes:
The following procedure details the steps necessary to complete this task.
su - root Password: password
Issue the install command:
swinstall -s source/package_name
where source is the directory for the location of the package, and package_name is the name of the package.
For example, the following command installs the base package for Load Balancer (ibmlb.base) when installing from the root of the CD:
swinstall -s /lb ibmlb.base
Issue the swlist command to list all of the packages that you have installed. For example:
swlist -l fileset ibmlb
Use the swremove command to uninstall the packages. Remove the packages in the reverse of the order in which they were installed. For example, issue the following:
swremove ibmlb
To uninstall an individual package (for example, the Cisco CSS Controller), issue the following:
swremove ibmlb.cco
Load Balancer installation paths include the following:
This section explains how to install Load Balancer on Linux using the Edge components CD.
Before installing Load Balancer, ensure the following:
rpm -e pkgname
When uninstalling, reverse the order used for package installation, ensuring that the administration packages are uninstalled last.
The installation image is a file in the format lblinux-version.tar.
tar -xf lblinux-version.tar

The result is the following set of files with the .rpm extension:
Where --
rpm -i package.rpm

It is important to install the packages in the order shown in the following list of packages needed for each component.
rpm -i --nodeps package.rpm
rpm -qa | grep ibmlb
Installing the full product produces the following output:
Load Balancer installation paths include the following:
If you need to uninstall the packages, reverse the order used for package installation, ensuring that the administration packages are uninstalled last.
This section explains how to install Load Balancer on Solaris using the Edge components CD.
Before beginning the installation procedure, ensure that you are logged in as root and that any previous version of the product is uninstalled.
To uninstall, ensure that all the executors and the servers are stopped. Then, enter the following command:
pkgrm pkgname
pkgadd -d pathname

where -d pathname is the device name of the CD-ROM drive or the directory on the hard drive where the package is located; for example: -d /cdrom/cdrom0/.
The following list of packages is displayed:
Where the variable lang refers to the substitution of one of the following language-specific codes: deDE, esES, frFR, itIT, jaJP, koKR, ptBR, zhCN, zhTW. For English, the variable lang refers to the substitution of doc.
To install all of the packages, type all and press Return. To install only some of the components, enter the name or names corresponding to the packages to be installed, separated by a space or comma, and press Return. You might be prompted to change permissions on existing directories or files; press Return or respond yes. Because the installation proceeds in alphabetical rather than prerequisite order, you must also install the prerequisite packages yourself. If you type all and respond yes to all prompts, the installation completes successfully.
All of the packages depend on the common package, ibmlbadm. This common package must be installed along with any of the other packages.
For example, if you want to install just the Dispatcher component with the documentation and Metric Server, you must install: ibmlbadm, ibmlbbase, ibmlblic, ibmdisp, ibmlbms, and ibmlbdoc.
If you want to install the remote administration, install only one piece: ibmlbadm.
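As a concrete illustration of the Dispatcher example above, the pkgadd invocation can name those packages directly. This is a sketch: the mount point is the one assumed earlier, and the echo prints the command for review rather than executing it:

```shell
# Sketch: install only the Dispatcher component plus documentation and
# Metric Server, naming the packages from the example above. PKGSRC is
# the CD-ROM mount point assumed earlier; the echo prints the command
# for review (remove it to run the command as root).
PKGSRC=/cdrom/cdrom0/
PKGS="ibmlbadm ibmlbbase ibmlblic ibmdisp ibmlbms ibmlbdoc"
echo pkgadd -d "$PKGSRC" $PKGS
```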
pkginfo | grep ibm
The Load Balancer installation paths include the following:
This section explains how to install Load Balancer on Windows 2000 and Windows Server 2003 using the Edge components CD.
Notes:
Before beginning the installation procedure, ensure the following:
Follow these steps to install Load Balancer:
Alternatively, you can use the command line to start the InstallShield Wizard from the Load Balancer directory. At a command prompt, change to the CD-ROM drive, for example D, and enter the following command:
D:\LB_directory\setup.exe
Where LB_directory is the name of the directory containing the Load Balancer files.
Load Balancer installation paths include the following:
This part provides procedures for building basic demonstration networks using Edge components. These networks are not intended to be used in production environments. The process of initially configuring a network can clarify many edge-of-network concepts for administrators who are new to the product. For complete coverage of all component features and for in-depth configuration information, refer to the Caching Proxy Administration Guide and the Load Balancer Administration Guide.
The procedures permit any computer system supported by the component to be used at any node.
This part contains the following chapters:
Build a Caching Proxy network.
Build a Load Balancer network.
Figure 14 shows a basic proxy server network that uses three computer systems located at three network nodes. The proxy server on Server 1 is dedicated to a single content host, IBM HTTP Server, which runs on Server 2; the proxy server serves content on that host's behalf. The Internet lies between the workstation and Server 1.
Figure 14. Caching Proxy demonstration network
To build a Caching Proxy network, perform these procedures in the following order:
The following computer systems and software components are needed:
Install and configure the Caching Proxy as follows:
# htadm -adduser /opt/ibm/edge/cp/server_root/protect/webadmin.passwd
When prompted, provide the htadm program with a user name, password, and real name for the administrator.
Install and configure the Caching Proxy as follows:
cd "Program Files\IBM\edge\cp\server_root\protect" htadm -adduser webadmin.passwd"
When prompted, provide the htadm program with a user name, password, and real name for the administrator.
From the workstation, do the following:
From the workstation, do the following:
Figure 15 shows a basic Load Balancer network with three locally attached workstations using the Dispatcher component's MAC forwarding method to load balance Web traffic between two Web servers. The configuration is similar when load balancing any other TCP or stateless UDP application traffic.
Figure 15. Load Balancer demonstration network
To build a Load Balancer network, perform these procedures in the following order:
The following computer systems and software components are needed:
Workstation | Name | IP Address |
---|---|---|
1 | server1.company.com | 9.67.67.101 |
2 | server2.company.com | 9.67.67.102 |
3 | server3.company.com | 9.67.67.103 |
Netmask = 255.255.255.0
Each of the workstations contains only one standard Ethernet network interface card.
Name= www.company.com IP=9.67.67.104
Add an alias for www.company.com to the loopback interface on server2.company.com and server3.company.com.
ifconfig lo0 alias www.company.com netmask 255.255.255.0
ifconfig lo0:1 www.company.com 127.0.0.1 up
You have now completed all configuration steps that are required on the two Web server workstations.
With Dispatcher, you can create a configuration by using the command line, the configuration wizard, or the graphical user interface (GUI).
If you are using the command line, follow these steps:
dscontrol executor start
dscontrol cluster add www.company.com
dscontrol port add www.company.com:80
dscontrol server add www.company.com:80:server2.company.com
dscontrol server add www.company.com:80:server3.company.com
dscontrol executor configure www.company.com
dscontrol manager start
Dispatcher now does load balancing based on server performance.
dscontrol advisor start http 80
Dispatcher now ensures that client requests are not sent to a failed Web server.
Your basic configuration with locally attached servers is now complete.
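The command-line steps above can be collected into a single script. This is a sketch using the example host names from this section: with DRYRUN=echo it only prints each dscontrol command for review, and setting DRYRUN to empty executes the commands on the Dispatcher machine after dsserver is started.

```shell
#!/bin/sh
# Sketch: the Dispatcher command-line configuration from this section.
# With DRYRUN=echo each dscontrol command is printed instead of run;
# set DRYRUN= (empty) to execute for real after dsserver is started.
CLUSTER=www.company.com      # example cluster name from this section
DRYRUN=echo
run_cfg() {
    $DRYRUN dscontrol executor start
    $DRYRUN dscontrol cluster add "$CLUSTER"
    $DRYRUN dscontrol port add "$CLUSTER:80"
    $DRYRUN dscontrol server add "$CLUSTER:80:server2.company.com"
    $DRYRUN dscontrol server add "$CLUSTER:80:server3.company.com"
    $DRYRUN dscontrol executor configure "$CLUSTER"
    $DRYRUN dscontrol manager start
    $DRYRUN dscontrol advisor start http 80
}
run_cfg
```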
If you are using the configuration wizard, follow these steps:
dsserver
The wizard guides you step-by-step through the process of creating a basic configuration for the Dispatcher component. It asks questions about your network and guides you through the setup of a cluster for Dispatcher to load balance the traffic for a group of servers.
The configuration wizard contains the following panels:
To start the GUI, follow these steps:
dsserver