Thursday, August 30, 2012

BIG IP F5 LTM Tutorial - Part 8

5. Monitors

Overview:

Health monitors test the availability of devices and services on the network and are used to determine whether pool members are working properly. A monitor tests the availability of a member based on criteria defined by the administrator. We usually test the availability of network devices using ICMP, but ICMP cannot verify the accessibility of an application. To resolve this we can use application-based monitors: we can monitor HTTP, TCP, or any port that is used by a particular application, or send a request to that port and check for the expected response. In this way we can monitor the availability of applications, and we can also assign multiple monitors to a node, member, or pool to verify availability.

BIG IP LTM offers different types of monitors:

Ø  Node/Address Checking
o        Some monitors are primarily designed to determine whether an IP address is reachable. When a monitor associated with a node is unsuccessful, that node is marked OFFLINE. These monitors usually use ICMP to determine the availability of the node.

Ø  Service Checking
o      This type of monitor determines whether a service is available by opening a connection to an IP address and port. The TCP monitor is an example of this type of check. The monitor reports UP when a TCP connection is established with the particular IP and port (it then closes the connection); if the TCP connection cannot be established, it reports OFFLINE.

Ø  Content Checking
o       Some monitors do more than service verification: they also test whether the server is serving appropriate content. For example, if you have set up an HTTP server and assign a monitor that only tests the HTTP port, there could still be an error in the HTTP content, and a service-checking monitor cannot identify that error. Content-checking monitors help you verify the appropriate response from a particular server: the monitor establishes a TCP connection and then issues a command and checks the response from the server, e.g. for HTTP it issues a GET / request.

Ø  Interactive Checking
o       These are advanced monitors that determine the availability of a server using multiple content checks. Such monitors can also interact with the server, sending multiple commands and processing multiple responses.
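As a concrete sketch of how these monitor types translate into configuration, the tmsh commands below create one monitor of each of the first three types and attach the HTTP monitor to a pool. The monitor names, pool name, URI, and receive string are illustrative assumptions, not defaults:

```
# Node/address check: a custom ICMP monitor based on the built-in gateway_icmp
create ltm monitor gateway-icmp my_icmp_mon defaults-from gateway_icmp interval 5 timeout 16

# Service check: a plain TCP monitor (UP if the three-way handshake succeeds)
create ltm monitor tcp my_tcp_mon defaults-from tcp

# Content check: an HTTP monitor that sends a GET and expects "200 OK" in the reply
create ltm monitor http my_http_mon defaults-from http send "GET /index.html HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n" recv "200 OK"

# Attach the content-checking monitor to a pool; all members inherit it
modify ltm pool web_pool monitor my_http_mon
```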


Ø  Virtual Server, Pool, Member & Node Status :


  •  BIG IP F5 will forward traffic to a virtual server even when its status is unknown. Unknown status means either no monitor has been assigned or BIG IP F5 has not yet received a reply for that monitor.


Ø  Green Status :
o   General          : Monitor is successful.
o   Node             : The most recent monitor check was successful.
o   Pool Member : The most recent monitor check was successful.
o   Pool               : At least one pool member is available.
o   Virtual Server : At least one pool member is available.

Ø  Unknown Status :
o   General          : No monitor was assigned, or the monitor timeout has not yet been reached.
o   Node             : No monitor has been assigned.
o   Pool Member : No monitor has been assigned.
o   Pool               : All members are unknown.
o   Virtual Server : All pools are unknown.

Ø  Offline Status :
o   General          : Monitor has failed.
o   Node             : Monitor did not succeed during the most recent check.
o   Pool Member : Monitor did not succeed during the most recent check.
o   Pool               : All members failed during the most recent check and are showing offline.
o   Virtual Server : All associated pools are offline & no pool members are available.
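The status colors above correspond to the availability that tmsh reports at each level; for a hypothetical pool web_pool, virtual server web_vs, and node 10.0.0.1, the status can be checked with:

```
show ltm node 10.0.0.1          # node status
show ltm pool web_pool          # pool status
show ltm pool web_pool members  # per-member status
show ltm virtual web_vs         # virtual server status
```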

Till now we have covered:

  • Overview of Load Balancers
  • Basic Config of BIG IP LTM : licensing, VLANs, IP address assignment, etc.
  • BIG IP CLI Utilities : bigpipe, tmsh, etc.
  • Load Balancer Terminology : Nodes, Pool Members, Pools & Virtual Servers.
  • Monitors : types of monitors, etc.
So now we are ready to configure everything we have learned, & in my next post we will do only configuration & verification.




Tuesday, August 28, 2012

BIG IP F5 LTM Tutorial - Part 7


4. Load Balancing Terminology


Ø  Nodes  
  • The devices represented by the IP addresses of pool members are called nodes. A node is identified by IP address only & may be represented in many pool members. ( IP Address )

Ø  Pool Members 
  • A pool member is an IP address and service port combination that is hosted by a physical server.
  • A single server can host any number of pool members, because different service ports may run different services. ( IP Address:Port )

Ø   Pools   
  •  A pool is a group of pool members and is identified by name. In addition to pool members, pools also have their own load balancing method, monitors, and other features.

Ø  Virtual Servers
  •   A virtual server is a combination of a virtual IP address & service port.
  • A virtual server is mapped to multiple actual servers or pools.
  • A virtual server performs multiple functions, such as: checking server availability, load balancing, translating the virtual IP address to actual IP addresses, and translating virtual server service ports to actual server service ports.
  • By default, translation of addresses & ports is enabled on a virtual server.
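Putting the four terms together, a minimal tmsh configuration might look like the sketch below (all names and addresses are made up for illustration):

```
# Two pool members (IP:port); the nodes 10.0.0.1 and 10.0.0.2 are created implicitly
create ltm pool web_pool members add { 10.0.0.1:80 10.0.0.2:80 } monitor http

# Virtual server: virtual IP 192.168.10.10, service port 80, mapped to the pool
create ltm virtual web_vs destination 192.168.10.10:80 ip-protocol tcp pool web_pool
```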



Ø  Packet Flow through BIG IP :

ü  Client initiates a request to the BIG IP virtual server IP address & service port
ü  BIG IP translates the VIP to an actual server IP address
ü  The server sees the client IP address as the source & its own IP address as the destination
ü  The server replies to that session & BIG IP translates the server's actual IP back to the VIP address
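The VIP-to-server translation in the second and fourth steps is controlled by the virtual server's translation settings, which can be inspected from tmsh (web_vs is a hypothetical virtual server name):

```
# Both properties default to "enabled" on a standard virtual server
list ltm virtual web_vs translate-address translate-port
```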

BIG IP F5 LTM Tutorial - Part 6


3. BIG IP CLI Utility                     

Ø  BIG IP F5 Command Line Utilities & Tools :
  •  Bigpipe Shell Utility  : Type “bigpipe shell” & hit enter for this utility {Prompt : bp>}
  •  Config utility             : Type “config” {Initial Config Tool}
  •  Bigtop utility             : Type “bigtop” {Displays real-time traffic}
  •  Bigstart utility           : Type “bigstart” {Start, stop, restart various daemons}
  •  TCL                        : iRules {Tool Command Language}
  •  Shell CLI Mode       : Type “tmsh” {Traffic Management Shell – Prompt : (tmos)#}


Ø  The BIG-IP system includes a tool known as the Traffic Management Shell (tmsh) that you can use to configure and manage the system from the command line.

Ø  “tmsh” is an interactive shell that you can use to manage the BIG-IP system. The structure of tmsh is hierarchical and modular as shown below. The highest level is the root module, which contains six subordinate modules: auth, cli, gtm, ltm, net, and sys.

Ø  Important things to remember when examining commands in TMSH:

  • “Show” (usually) provides just statistical information, with configuration parameters present to provide a level of disambiguation.
  • “List” provides actual configuration information, but just variations from the default. For example, “list /ltm virtual TEST” will show you the configuration.
  • “all-properties” extends a “list” command to show every configuration option, not just the variations from the default.
  • “Help” is very useful if you are not familiar with the tmsh CLI. To use “help”, just follow the same structure mentioned above. For example, “help ltm pool” will show you detailed CLI syntax, a description, a configuration example & related options with their descriptions.
  • “Save” is used to save the running configuration.
  • “Delete” is used to delete configuration objects.


As per BIG IP F5:
    
“Show”                 :  View runtime information, statistics and status
“List”                     :  View configuration and settings

When I tested the above commands and their output, I found that “list” & “show” often give similar output, so you can use whichever command you find easier to use.
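A short tmsh session illustrating the commands above (TEST is the example virtual server name used earlier; the pool name is assumed):

```
show ltm virtual TEST                  # runtime statistics and status
list ltm virtual TEST                  # configuration, non-default settings only
list ltm virtual TEST all-properties   # full configuration, including defaults
help ltm pool                          # built-in reference for the pool component
save sys config                        # persist the running configuration to disk
```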

Note: Bigpipe Mapping with tmsh command: https://devcentral.f5.com/wiki/tmsh.BigpipeMappings.ashx

Monday, August 20, 2012

BIG IP F5 LTM Tutorial - Part 5

2. Initial Config
Ø  Licensing BIG IP F5 is very easy:
o   Access the BIG IP system
o   Enter the registration key
o   Access the dossier
o   Send the registration key and dossier to the license server
o   Install the BIG IP license file, or copy & paste the license from the license server
o   Restart the BIG IP processes
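If the device has internet access, the same can be done in one step from tmsh; the registration key below is a placeholder:

```
# Fetch and install the license using the registration key
install sys license registration-key XXXXX-XXXXX-XXXXX-XXXXX-XXXXXXX
```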

Ø  For the initial configuration you can use either web-based mode or legacy CLI mode. In CLI mode, type
“config” & provide the necessary details.


Ø  Basic Configuration :
o   Assign FQDN name
o   Assign mgmt IP address  
o   Assign Self IP (Internal -with VLAN ID)
o   Assign Self IP (External- with VLAN ID)
o   Assign Floating IP address
o   Change Password : CLI & WEB
o   Assign default gateway
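A sketch of the basic configuration steps above in tmsh (v11-style syntax; the hostname, addresses, VLAN tags, and interface numbers are example assumptions):

```
# FQDN
modify sys global-settings hostname bigip1.example.com

# VLANs with tags on the internal and external interfaces
create net vlan internal interfaces add { 1.1 } tag 10
create net vlan external interfaces add { 1.2 } tag 20

# Self IPs on each VLAN
create net self self_int address 10.0.0.5/24 vlan internal
create net self self_ext address 172.16.0.5/24 vlan external

# Floating self IP (shared between units in an HA pair)
create net self float_ext address 172.16.0.6/24 vlan external traffic-group traffic-group-1

# Default gateway
create net route default gw 172.16.0.1
```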





BIG IP F5 LTM Tutorial - Part 4


1. Overview

Ø  Default management IP address of BIG IP : 192.168.1.245
Ø  By default, no default route is installed on BIG IP; it needs to be configured manually.
Ø  The license can be configured manually or automatically; without a license, BIG IP features will not be visible.
Ø  BIG IP can be configured through the console, telnet/SSH, the CLI config utility, or web-based mode.
Ø  Default users: CLI : user root, password default ; Web : user admin, password admin.
Ø  Every BIG IP hardware platform runs a primary OS known as TMM : Traffic Management Microkernel.
Ø  Every BIG IP hardware platform also has a secondary subsystem known as AOM or SCCP.
  o   AOM : Always On Management
  o   SCCP : Switch Card Control Processor
Ø  SSL Chip: TMOS has its own SSL stack and can process SSL entirely in software, but it is much faster to offload cryptographic operations to specialized SSL ASICs.
Ø  Switch Fabric in BIG IP F5

Ø  Hardware & software details :


The Switch Module, where all application delivery traffic enters and exits, connects to the PVA (Packet Velocity ASIC), F5’s custom-engineered L4 load balancing ASIC switching fabric. Traffic that can be handled within the PVA never goes any further; at this step, all packet and connection management occurs at the hardware level by the PVA through the Switch Module. Traffic enters through the switch into the PVA, where the appropriate logic and transformations are applied before the traffic is sent back out through the Switch Module. Generically speaking, this is typically referred to as the fastL4 profile. Traffic that is not handled by the PVA is simply passed through the PVA to the next layer, F5’s primary traffic management processing system, called TMM (Traffic Management Microkernel). TMM handles all of BIG-IP’s local traffic functionality such as intelligent load balancing, compression, SSL, iRules, packet filters, etc. (with the exception of L4-only load balancing, which can be handled in either the PVA or TMM). TMM can manage traffic using several optional hardware acceleration modules such as SSL, FIPS, and Compression, and has entirely dedicated hardware. TMM is also responsible for delivering traffic to the Host Management subsystem as necessary for products such as BIG-IP Global Traffic Manager (GTM).

Note : For more Detail please refer : http://www.f5.com/pdf/white-papers/tmos-dev-wp.pdf

Sunday, August 12, 2012

BIG IP F5 LTM Tutorial - Part 3


BIG IP Traffic Management

F5 offers three types of traffic management:

Ø  BIG IP Local Traffic Manager : BIG IP's local traffic management solution
Ø  BIG IP Link Controller : High availability and intelligent routing for multi-homed networks
Ø  BIG IP Global Traffic Manager : Wide-area network high availability and intelligent load balancing



Table of Contents:

In further posts we will discuss all of this content & configuration.

Note : All configuration will be based on Shell CLI Mode.  


BIG IP F5 LTM Tutorial - Part 2


BIG IP F5 LTM:

Historical Overview: -

Historically, there have been two ways to build application delivery networking appliances: build them for performance or build them for intelligence. In the open market, customers have traditionally selected solutions that exhibit the best performance. As a result, most vendors have built their devices on faster, packet-based designs instead of the poorer-performing proxy-based architecture. As the need for intelligence in these devices has grown, vendors find themselves in a precarious position: the more intelligence they add to the devices in response to customer demand, the closer they resemble a proxy and the worse they perform.

F5 initially took the packet-based path, but simultaneously started addressing the root problem—making an intelligent solution that also delivers high performance. The result is the F5 TMOS® architecture, a collection of real-time features and functions, purpose-built and designed as a full-proxy solution with the power and performance required in today’s network infrastructure.


TMOS Platform ( Traffic Management Operating System )

A unified product platform that delivers complete control and scalability: -

TMOS is the universal product platform shared by F5 BIG-IP products. No single competing technology can solve such a wide variety of application delivery problems over networks.
With its application control plane architecture, TMOS gives you intelligent control over the acceleration, security, and availability services your applications require. TMOS establishes a virtual, unified pool of highly scalable, resilient, and reusable services that can dynamically adapt to the changing conditions in data centres and virtual and cloud infrastructures.

TMOS is a collective term used to describe the completely purpose-built, custom architecture which F5 spent years and significant investment developing as the foundation for F5 products going forward. At a high level, TMOS is:

Ø  A Collection of Modules : a networking driver module, an Ethernet module, an ARP module, an IP module, a TCP module, and so on.
Ø  Self-Contained and Autonomous : a TMOS-based device has a form of Linux running on it, but TMOS & Linux are two different parts; it is important to note that this Linux system is not involved in any aspect of the traffic flowing through TMOS.
Ø  A Real-Time Operating System : being a real-time operating system means TMOS does not have a preemptive CPU scheduler.
Ø  Both Hardware and Software : because TMOS is inherently modular in design, it doesn't matter whether individual functions are handled by software or by hardware. With TMOS, everything can be done in software utilizing highly optimized and purpose-built modules.
Ø  Stateful Inspection
Ø  Dynamic Packet Filtering : by default, a deny-all policy in F5

Note: For more detail please refer http://www.f5.com/pdf/white-papers/tmos-wp.pdf

BIG IP F5 LTM Tutorial - Part 1


WELCOME TO F5
BIG IP LTM 



Application delivery network appliances are known as load balancers & are used to manage the applications or server farms that deliver services to end users or customers.

Server load balancing is the process of distributing service requests across a cluster of servers. There are many benefits to this process; these include:

Benefits of Application Delivery Network:

Ø  Improves performance - The highest performance is achieved when the processing power of servers is used intelligently. Advanced server load balancing products can direct end-user service requests to the servers that are least busy and therefore capable of providing the fastest response times.
Ø  Creates Resilience - High-availability pairs provide load balancer resilience and create fault tolerance for your back-end servers.
Ø  Adds Intelligence - Content inspection rules allow you to send requests for certain web pages to specific groups of servers.
Ø  Improves Reliability - By continually monitoring the health of your back-end servers, failed servers are automatically detected and removed from the cluster until they recover.
Ø  Ease of Use - Easy to install and configure, and include a secure, web-based graphical user interface which provides you with visual alerting, health monitoring and cluster diagnostics.
Ø  One-Click Session Persistence - If sticky sessions are required for your web application, just turn on session affinity with one click.
Ø  Improved flexibility and scalability - Many content intensive applications have scaled beyond the point where a single server can provide adequate processing power. Both enterprises and service providers need the flexibility to deploy additional servers quickly and transparently to end-users.
Ø  Enhanced availability - A key benefit of server load balancing is its ability to improve application availability. If an application or server fails, load balancing can automatically redistribute end-user service requests to other servers within the server farm or to servers in another location.
Ø  Less disruption - Server load balancing also prevents planned outages for software or hardware maintenance from disrupting service to end users.


Distributed server load balancing products can also provide disaster recovery services by redirecting service requests to a backup location when a catastrophic failure disables the primary site; this is known as GSLB (Global Server Load Balancing).