Technology Positioning Statement Report

6.1.7 Network Administration and Monitoring Tools

Description: Software tools for network and server monitoring, performance measurement, and optimization.

Category: 6 - Networks   Subcategory: 1 - Enterprise Networks
Old Category: none





Performance Metrics

Appropriate applications for Windows servers; ease of use; functionality; server and network integration.

Usage and Dependencies

Industry Usage:  The network administration and monitoring market is made up of all products that help manage the infrastructure and its performance. It addresses the main aspects of the infrastructure, such as networks (local area network (LAN) and wide area network (WAN)), network devices (routers, switches, bridges, firewalls), servers, applications and databases and the main parameters of service-level management (SLM): availability, performance (speed) and accuracy/security. Simple Network Management Protocol (SNMP) has historically been the basis for commercial products and the de facto standard for network management.
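The SLM parameters named above (availability, performance, accuracy) can be quantified directly from periodic poll data. A minimal sketch in Python; the sample data and function names are hypothetical, not taken from any of the products discussed:

```python
# Minimal sketch: computing SLM availability and performance figures
# from periodic poll results. All sample data below is hypothetical.

def availability(poll_results):
    """Fraction of polls in which the device responded (0.0 - 1.0)."""
    if not poll_results:
        return 0.0
    return sum(1 for up in poll_results if up) / len(poll_results)

def avg_response_ms(samples):
    """Mean response time over successful polls, in milliseconds."""
    return sum(samples) / len(samples) if samples else None

# Hypothetical router polled every 5 minutes over an hour:
polls = [True] * 11 + [False]          # one missed poll
latencies = [12, 15, 11, 14, 13, 12, 16, 12, 11, 13, 14]

print(f"availability: {availability(polls):.1%}")    # 91.7%
print(f"avg response: {avg_response_ms(latencies):.1f} ms")
```

In a real deployment, the poll results would come from SNMP queries to each managed device rather than hard-coded lists.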

Desktop Management: Microsoft Systems Management Server (SMS) 2.0 is the core product for Microsoft-based environments; it is now at SP3. SMS provides targeted software distribution to desktops using a dynamic distribution list (DDL) based on user roles, discovery-based software inventories, automated installation and removal, CD-ROM distribution, NT server health monitoring, network topology tracing, and intelligent network monitoring. SMS links to a SQL Server 7.0 database, against which customized queries can be run to report software inventories.

Microsoft is supporting the Common Information Model (CIM) specification developed by the Distributed Management Task Force (DMTF), as part of the Web-Based Enterprise Management (WBEM) initiative. This provides a common way of presenting management information from multiple sources, such as SNMP, DMI, and the Microsoft® Win32® application programming interface. Microsoft has built Windows® Management Instrumentation, which is CIM-compliant, into the Microsoft Windows NT® version 4.0 and Windows 2000 operating system environments. Microsoft Systems Management Server 2.0 has been designed to collect data in a CIM format. This means that it has access to data from many sources, including Win32, SNMP, and DMI, and administrators have a much richer collection of inventory information available. Given the large number of inventory objects, filtering options have been added so that an administrator can choose which data is most important.
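The core idea of CIM, presenting management data drawn from different instrumentation sources in one common format, can be sketched in a few lines of Python. The source formats, property names and filter below are illustrative only, not the actual CIM schema or SMS query syntax:

```python
# Sketch of the CIM idea: normalize inventory records from several
# instrumentation sources into one common record format.
# Property names below are illustrative, not the real CIM schema.

def from_win32(raw):
    return {"source": "Win32", "host": raw["ComputerName"],
            "os": raw["OSVersion"], "memory_mb": raw["TotalPhysMB"]}

def from_snmp(raw):
    return {"source": "SNMP", "host": raw["sysName"],
            "os": raw["sysDescr"], "memory_mb": raw.get("hrMemorySize")}

inventory = [
    from_win32({"ComputerName": "SCNSMS1P", "OSVersion": "NT 4.0",
                "TotalPhysMB": 512}),
    from_snmp({"sysName": "router1", "sysDescr": "IOS 12.0",
               "hrMemorySize": 64}),
]

# A filter of the kind SMS 2.0 offers: keep only the data that matters,
# regardless of which source the record originally came from.
low_memory = [r for r in inventory if (r["memory_mb"] or 0) < 128]
print([r["host"] for r in low_memory])   # ['router1']
```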

"The traditional intracorporate-management model [e.g. Microsoft SMS] is characterized by a centric “cognizance” (Manager) that is dependent on significant IT resource support, parallel but distinct application types (Application Stubs) and fragmented client awareness (Agents)."

"For the model to become intelligent, two fundamental changes must take place at the client. First, the client device must be recognized as a collection of interrelated objects — hardware, OS and applications — with discoverable configurations and acceptable boundaries of operation that can be instrumented through a matrix of coordinated agent services. Second, local application intelligence, that can examine and act on the managed objects through the same matrix of agent services, must reside at the client."

"A number of management vendors — ASDIS, Cognet, Marimba, Novadigm, Novell, ON Technology, SWAN and XcelleNet — have already adopted slight variations of the managed-object model. Through local inventory and software distribution intelligence, these vendors can provide desktop configuration management (a distinct class above “push and pray” software distribution) and some vendors — Cognet, Marimba, Novadigm, Novell, SWAN and XcelleNet — provide application self-healing by incorporating configuration diagnostics."

"The use of application communication standards, such as XML-SOAP or XML-CIM, by IT management vendors is still quite low, but Giga believes this will increase in the next 12 months [.8p]." -- Business Demands an Intelligent Client Management Architecture, Norbert Kriebel, Giga, Dec. 14, 2000.

Performance Monitoring: "Each evolution of IT has brought another level of complexity and increased the obsolescence of traditional IT management processes and tools, especially in the area of capacity planning and performance prediction. The lack of planning capabilities brought into the limelight a number of performance management suites and point products able to monitor and diagnose the infrastructure bottlenecks, a substitute for proper forecasting. The proof of this new focus is found in the success of the performance management product market."

"The Network System Management (NSM) market is extremely fragmented and made of a number of point solutions. It is fairly common to find several products used in conjunction at a client site, each covering a particular area of technology. Even end-to-end products may fail to provide all of the functions a customer is looking for."

"The major vendors are traditional NSM vendors that have expanded their offerings to include performance management of servers, applications and databases. Computer Associates (CA), BMC, Hewlett-Packard (HP) and Tivoli form this group of “framework vendors.” Next is a group of companies proposing end-to-end management suites, such as Aprisma, Compuware, Concord, Lucent and NetIQ, which are application SLM oriented and provide an integrated view of performance across the infrastructure. The next group of dominant players is made of the following point solutions:

  • Network-centric solutions: Micromuse, Riversoft and Entuity provide technologies aimed at fault management of networks and propose advanced technologies for root cause analysis. NetScout provides capacity planning of the network component. 
  • Application and database solutions: Propose an original technology to resolve a specific problem or capture data from a specific part of the infrastructure. This is the case with companies like Cyrano, Quest and Precise for databases or Dirig for Web application servers. 
  • Web and e-business performances: Point solutions are provided through services or products and include major players Keynote and Candle. Mercury Interactive is omnipresent with hosting and service providers. 
  • Data integration: Will probably be the next wave to hit the space. It is composed of companies proposing integration of data, either for reporting (Managed Objects, Systar) or corrective action (Peakstone) or capacity planning (Netuitive, Metron).

Most companies that embraced the new data capture technologies discovered that the sheer data fragmentation required a lot of work to put together a consistent infrastructure performance management process. Service providers were quick to build some form of integration among the products they use. Giga expects that within the next year most large companies will have started or completed an infrastructure data integration project. While technology will certainly remain a key differentiator during the next year, Giga expects the focus to shift from how the data is captured to how it is reported and how useful it is for operation management. In other words, from reporting on service levels to sustaining service levels. The major trends and differentiators for products will include the following:
  • Holistic integrated reporting is already the key evolution in end-to-end products (Concord, Lucent) and new point products (Managed Objects). Such reports show aggregated data for a given application or a given business process and help in root cause analysis. 
  • Root cause analysis of performance problems in quasi-real time will require another level of integration, probably with topology data. 
  • Capacity planning for which another level of global statistical analysis will be needed."
--Market Overview: Infrastructure Performance Management, Jean-Pierre Garbani, Giga, March 9, 2001.

"Computer Associates opened the door to the AI (Artificial Intelligence) application to root cause analysis by introducing their “Neugent” technology, a systems management application based on a neural network concept. Neural networks are the result of using mathematical formulations to model nervous system operations. Neural networks involve a large number of nodes, processing data in parallel and capable of adapting (learning) themselves to the relation between the data that they are fed. In short, a node converts two data sets into a logical value by adjusting the weight (relation) of the data sets. Neural networks used in conjunction with “fuzzy logic” have been used as a basis for predictive systems in a number of applications — from financial to data mining. In CA’s application, they are used as a way to predict an impending problem. The great interest of this application is that it tackles the root cause analysis problem from the angle of prevention rather than correction."

"Peakstone eAssurance, a new product, provides a closed loop system, which not only analyzes the problem, but also automatically applies the required correction. Peakstone “learns” about the infrastructure through a load testing session. By applying a load testing product such as Mercury Interactive Loadrunner (one of Peakstone’s partners), the product is capable of identifying patterns corresponding to load situations and to correlate a set of traffic and component load thresholds that correspond to a service level. For example, if a certain response time is desired, the software will learn the corresponding traffic and load levels of the application infrastructure. Each identified situation, or pattern, has a corrective answer that triggers load balancers, QoS or, in the case of Microsoft servers, just-in-time capacity (the ability to fire up a new server to absorb the extra load). With this type of application, we are getting closer and closer to industrial automation and process control loops." --Root Cause Analysis: The Next Frontier in Infrastructure Performance Management, Jean-Pierre Garbani, Giga, March 27, 2001.
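The "node" the excerpts describe, weighting its inputs and producing a logical value, is essentially a single artificial neuron, and the Peakstone approach wires such learned patterns to corrective actions. A toy sketch of both ideas together; the weights, threshold and "start standby server" action are made up for illustration and bear no relation to the actual Neugent or eAssurance internals:

```python
# Toy sketch of the ideas above: one weighted node flags an impending
# overload, and a closed loop maps that prediction to a corrective
# action. Weights, threshold and action names are all illustrative.

def node(traffic, cpu_load, w_traffic=0.6, w_cpu=0.4, threshold=0.75):
    """Weighted sum of two normalized inputs -> logical value.
    In a real neural network these weights would be learned, not fixed."""
    return (w_traffic * traffic + w_cpu * cpu_load) >= threshold

def closed_loop(traffic, cpu_load):
    """If the node predicts trouble, apply the corrective action
    associated with that pattern (here: just-in-time capacity)."""
    if node(traffic, cpu_load):
        return "start standby server"
    return "no action"

print(closed_loop(0.9, 0.8))   # overload pattern -> corrective action
print(closed_loop(0.3, 0.4))   # normal load -> no action
```

The prevention-over-correction angle comes from running such a node continuously against live traffic and load data, so the action fires before the service level is actually breached.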

SC Usage:  WebTrends, Cisco Works, NetIQ, HP OpenView with Network Node Manager, Compaq Insight Manager and SMS (Systems Management Server) are products available for use on the SC network. SMS is installed on its own server in the NMIC, named SCNSMS1P. There are also various admin utilities bundled with Microsoft Windows NT Server.

About 80% of the functionality of SMS is being used in SC. Features not currently used include software metering and some advanced reporting capabilities.

NetIQ, WebTrends and HP OpenView are underutilized due to time restrictions and the effort that would be required to plan a comprehensive monitoring procedure. Although these tools have many features, the core difficulty with performance monitoring is that we do not know what to measure, or where the main bottleneck in a process lies. The new root-cause and artificial intelligence-based products may finally allow automated measurement and real-time optimization of server and network performance. These new products should be investigated for use in SC. Alternatively, the existing features of current products should be utilized in the context of a budgeted capacity planning and upgrade project.
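On the question of what to measure: a common starting point is response-time percentiles rather than averages, since a mean hides the occasional slow transaction that users actually notice. A small sketch with hypothetical samples; the percentile method shown (nearest-rank) is one of several conventions:

```python
# Sketch: a 95th-percentile response time often exposes a bottleneck
# that the average hides. Sample data below is hypothetical.

def percentile(samples, p):
    """p-th percentile by the nearest-rank method (p in 0-100)."""
    ordered = sorted(samples)
    rank = max(1, -(-p * len(ordered) // 100))  # ceiling division
    return ordered[rank - 1]

latencies_ms = [12, 14, 13, 15, 11, 210, 12, 13, 14, 12]  # one outlier

print(f"mean: {sum(latencies_ms) / len(latencies_ms):.0f} ms")
print(f"p95 : {percentile(latencies_ms, 95)} ms")
```

Here the mean (about 33 ms) looks healthy while the 95th percentile (210 ms) points straight at the problem, which is why percentile thresholds are a useful first measurement target in a capacity planning exercise.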

SC Application Impacts: General performance (latency and capacity) for all applications; reducing the risk of data unavailability on the one hand, and of excessive cost from purchasing unnecessary equipment on the other.

Last Update: Valid Until:

