Traffic Shaping and Bandwidth Management For Novell NetWare
"My Shaper is Better Than Your Shaper"
Or so they would have you think. The typical "technology" page from other bandwidth management vendors is written by the marketing department - and as such has no specific legal meaning, and doesn't really say much about the technology either. Much of the detail of how the TSE works is incredibly dull and boring stuff.
If you read the "technology" information for other vendors' products, you'll find the standard marketing hype we all hate so much: "Our product is so far superior to everyone else's that you'd be an idiot to even think about buying something from them." Or so it seems. Packeteer goes on at great length about their technology, most of which appears to be a highly elaborate form of TCP window size adjustment and pacing - which does no good for UDP, ICMP, or the rest of the protocols that deliver MP3s to your door. Packeteer literature trashes queuing as a methodology while simultaneously using it virtually exclusively for all non-TCP traffic! They also offer TCP pacing, despite the large body of research showing that paced TCP generally performs much worse when mixed with non-paced flows (which is the rest of the Internet).
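To see why window-size adjustment only helps with TCP, consider the basic math behind it: a TCP sender can never have more than one window of data in flight per round trip, so capping the advertised window caps throughput at roughly window / RTT. The sketch below (my own illustration, not Packeteer's actual method) computes the window needed for a target rate - and note that UDP and ICMP have no window field for a shaper to adjust at all.

```python
# Illustrative only: window-based TCP rate limiting works because
# throughput <= window / RTT. Pick the window that yields a target rate.

def window_for_rate(target_bps: float, rtt_seconds: float) -> int:
    """Advertised window (bytes) that limits one TCP flow to target_bps."""
    return int(target_bps / 8 * rtt_seconds)

# A 1 Mbit/s cap on a 100 ms round trip needs roughly a 12,500 byte window.
print(window_for_rate(1_000_000, 0.100))  # 12500
```

The same arithmetic shows the technique's blind spot: a UDP flow sending MP3s has no receive window to shrink, which is why queue-based methods are needed for non-TCP traffic.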
Don't get the wrong idea, the Packeteer boxes offer tons of features. Does the TSE offer the bazillion-item feature list Packeteer does? No way! But numerous TSE users have found that the TSE solved their bandwidth problems as efficiently as an expensive Packeteer unit... and without adding yet another box to the network. The TSE does offer several features the "big boys" have yet to quite figure out. The idea behind the TSE is that it is so effective and inexpensive that you can afford to use it throughout your organization.
I'm writing a piece of server software, not a word processor. So, in order of importance: reliability, speed, scalability, and finally functionality. In the past four years of public releases, there have been virtually no reported AbEnds on production servers. As a network manager, I understand the frustration of having critical software abend, refuse to unload, or go flaky. Every possible effort has been made to ensure the TSE meets and exceeds even your high standards for reliability. The TSE requires very little CPU and memory to do its job, even when configured with many rules and managing many connections. Consequently the TSE scales to enormous numbers of individually managed connections - numbers far in excess of ANY product in existence. TSE 3.2 can manage in excess of 1,000,000 individual connections, all on a 500 MHz server with 256 MB RAM. The theoretical limit of the current code is something like 8,000,000 connections before performance starts to degrade, and it is limited by available memory. In summary, the TSE is designed to be highly reliable, and offers the most commonly desired features aimed at providing significant relief.
Companies like Packeteer spill a lot of ink trying to argue that queuing is bad. Of course, they have to say something to get you to spend $20,000 on proprietary hardware. For the most part, every router on the Internet works by using queuing and the "drop tail" queuing discipline. Why? It works. If MCI or Verizon could squeeze out an extra 1% of bandwidth on their backbone by switching to a different algorithm, they would. "They ain't stupid." And they are surely a lot smarter than Packeteer, or me. Many of the claims made by Packeteer are, frankly, not supported by the vast body of research. In fact, much of the theoretical research on congestion control is either contradictory or based on assumptions which just don't make a lot of sense. So when you look around for research based on actual networks using production equipment, the work of Sprint Labs stands out as being both practical and oriented at achieving measurable performance increases in real networks. This is why TSE 3.2 incorporates a mechanism to implement virtually any queue management scheme. Queuing works, it works well, and it works against all types of traffic, not just TCP - and considering that many of the latest P2P apps rely on UDP, this is a big deal.
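For readers unfamiliar with the term, "drop tail" is about the simplest queuing discipline there is: packets queue up until the buffer is full, and anything arriving after that is discarded. A minimal sketch (my own illustration, not router or TSE source code):

```python
from collections import deque

class DropTailQueue:
    """Minimal drop-tail discipline: enqueue until full, drop new arrivals."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.q = deque()
        self.drops = 0

    def enqueue(self, packet) -> bool:
        if len(self.q) >= self.capacity:
            self.drops += 1   # tail drop: the newest packet is discarded
            return False
        self.q.append(packet)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

q = DropTailQueue(capacity=3)
accepted = [q.enqueue(n) for n in range(5)]   # last two arrivals dropped
```

Note that this mechanism never inspects the packet, which is exactly why it disciplines UDP and ICMP just as effectively as TCP.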
The TSE's underlying technology is capable of scaling to manage up to 8,000,000 traffic flows individually using commodity uniprocessor Intel-based hardware. This claim goes well beyond being able to handle the traffic of a million TCP connections. By this claim, I mean that you can assign a different and distinct data rate to each TCP connection, UDP connection, etc. - and do that for 1,000,000 of them! The current Beta 2 and the upcoming Beta 3 version of the TSE have been tested with individual rate assignments for up to 1,000,000 flows without noticeable performance degradation. All that was needed was a couple of command line parameters when the TSE was loaded to make it accommodate such a large number of connections. Beta 3 will allow each flow to specify an action, just like a rule, allowing each of these 1,000,000 connections to handle traffic in any way desired, not just with a specified data rate. Since that action can drop, forward, log, tag, rate shape, ... the TSE is the foundation for robust filtering and traffic management applications. A server with only 256 MB of RAM is capable of hosting a 1,000,000 connection TSE configuration - yes, and still have room to run the OS.
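Conceptually, per-flow actions amount to a big lookup table keyed by the connection's addresses and ports, with each entry carrying its own action. The sketch below is purely hypothetical - the names, key layout, and action set are mine, not the TSE's internals - but it shows the shape of the idea:

```python
# Hypothetical sketch of a per-flow action table. All identifiers here
# are invented for illustration; the TSE's real data structures differ.

FORWARD, DROP, RATE_LIMIT = "forward", "drop", "rate_limit"

# Flow key: (src_ip, src_port, dst_ip, dst_port, protocol)
flows = {}

def set_flow(key, action, param=None):
    """Assign a distinct action (and optional parameter) to one flow."""
    flows[key] = (action, param)

def classify(key):
    """Look up a flow's action; unknown flows get a default."""
    return flows.get(key, (FORWARD, None))

# Cap one particular connection at 16 kB/s while others pass untouched.
set_flow(("10.0.0.5", 4662, "1.2.3.4", 80, "tcp"), RATE_LIMIT, 16_000)
```

Since a dictionary lookup is cheap and each entry is small, it is plausible that a million such entries fit in a modest amount of RAM, which is consistent with the 256 MB figure quoted above.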
The TSE is rule based. This makes the TSE a bit more difficult to configure for simple applications, as there is a learning curve to overcome. But it also allows the TSE to be used in a variety of difficult situations where the shrink-wrapped functionality of appliances makes a solution impossible. The TSE can be used to limit the number of simultaneous users accessing an external web site, erect filters to block workstations using known malicious / suspicious protocols, and even modify rule outcomes based on access to other resources. Lists of workstations or connections can be built dynamically and then used programmatically to determine the outcome of further traffic. Because of this flexibility, the TSE can take on some of the aspects of a firewall, intrusion detection system, bandwidth manager, traffic auditor, or whatever you like. TSE 3.2 extends this to make the connection table / flow an additional point of programmability.
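The essence of rule-based processing is first-match evaluation: each packet is compared against an ordered list of rules, and the first rule that matches decides the outcome. The example below is a generic sketch of that idea - the rule fields and syntax are invented for illustration and do not reflect the TSE's actual rule language:

```python
# Generic first-match rule evaluation, as found in rule-based shapers
# and firewalls. Rule fields here are illustrative, not TSE syntax.

rules = [
    {"proto": "udp", "dst_port": 6881, "action": "drop"},    # block P2P port
    {"proto": "tcp", "dst_port": 80,   "action": "shape"},   # shape web traffic
    {"action": "forward"},                                   # default rule
]

def match(packet: dict) -> str:
    """Return the action of the first rule whose fields all match."""
    for rule in rules:
        if all(packet.get(k) == v for k, v in rule.items() if k != "action"):
            return rule["action"]
    return "forward"
```

Rule order matters: moving the default rule to the top would shadow everything beneath it, which is the kind of subtlety behind the learning curve mentioned above.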
With the ability to manage up to a million connections, you might get the idea that the TSE is pretty efficient. In fact, the TSE is designed to operate acceptably on 486-based hardware. The CPU utilization generated by the TSE is essentially unnoticeable. Typical per-packet overhead is on the order of a couple of microseconds. The TSE takes advantage of SMP under NetWare and uses multiple processors to offload tasks such as connection import / export / auditing, making the best use of the hardware you have. The low-level traffic processing code is so efficient that it would not benefit from additional processors.
The TSE adds several valuable technologies to the NetWare OS. The TSE offers basic bandwidth management such as rate limiting. In addition, the TSE adds DiffServ / TOS tagging, and the opportunity to use the NetWare server as a point to implement the PHB (per hop behaviour) based on inbound tags from other devices. While NetWare 6.x offers tagging, it provides only for assigning a single tag value to all egress traffic - i.e., all egress traffic is marked as "high priority." The TSE allows you to choose a tag based on the type of traffic - truly integrating NetWare into an existing DiffServ / TOS environment.
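For the curious, DiffServ tagging comes down to setting six bits in the IP header: the DSCP occupies the top six bits of the old TOS byte, with the low two bits reserved for ECN. The snippet below shows the bit arithmetic and, on platforms that expose `IP_TOS` (such as Linux), how an application can mark its own outbound traffic; this is general socket usage, not TSE code:

```python
import socket

# The DSCP is the top six bits of the IP TOS byte; the low two are ECN.
def tos_from_dscp(dscp: int) -> int:
    return (dscp & 0x3F) << 2

EF = 46     # Expedited Forwarding (e.g. voice traffic) -> TOS byte 0xB8
AF11 = 10   # Assured Forwarding class 1, low drop precedence -> 0x28

# Marking outbound traffic where the platform exposes IP_TOS:
if hasattr(socket, "IP_TOS"):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_from_dscp(EF))
    s.close()
```

A shaper sitting in the traffic path, like the TSE, performs the same marking on packets in flight - choosing the tag per traffic type rather than one fixed value for everything, which is exactly the limitation of NetWare 6.x tagging described above.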
Programmable AQM Profiles
The TSE includes a programmable advanced queue management system allowing it to implement arbitrary drop probability characteristics. Drop probability curves such as drop tail, RED, GRED, eGRED, or various stepped piecewise linear functions can all be implemented. So rather than implementing a small number of fixed algorithms, you can implement any of the most popular methods, or even invent one of your own.
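As one concrete example of such a curve, classic RED drops nothing while the average queue is short, then ramps the drop probability linearly up to a maximum before dropping everything. A textbook sketch of that profile (an illustration of the published RED algorithm, not the TSE's implementation):

```python
import random

def red_drop_probability(avg_qlen, min_th, max_th, max_p):
    """Classic RED curve: 0 below min_th, linear ramp to max_p at max_th,
    certain drop beyond. Drop tail is the degenerate case min_th == max_th."""
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

def should_drop(avg_qlen, min_th=5, max_th=15, max_p=0.1):
    """Randomize so drops are spread across flows instead of bursty."""
    return random.random() < red_drop_probability(avg_qlen, min_th, max_th, max_p)
```

A programmable profile generalizes this: swap in a stepped or piecewise linear function for the ramp and you have GRED-style or custom behavior, all from the same drop-decision hook.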
Last Modified 05-18-2002