IT Organisations: operating in a world of constant change

04 Dec, 2007

The mathematician and philosopher Alfred North Whitehead once said: "The art of progress is to preserve order amid change and to preserve change amid order". The nature of business today is that change is the only constant. Organisations, be they public or private entities, face change as a result of reorganisation, business expansion, competition, the impact of new technology, mergers and acquisitions, industry or government regulatory controls and a myriad of other factors.
The reality is that any change affecting an organisation will have a flow-on effect on the IT organisation. One could say that an organisation's ability to adapt to change is directly related to its IT systems' ability to adapt to those changes. There are many examples of organisations that have suffered considerable harm to their reputations and market values through IT disasters resulting from poorly implemented systems and upgrades that went wrong.
From the release of the first commercially available relational database system in 1979, to support for Very Large Database (VLDB) requirements in the late 1990s, to databases for grid computing environments in recent years -- the last 30 years have seen many important innovations, with new server architectures emerging to support mission-critical systems.
In the past, customers had fewer choices in server architectures, as symmetric multiprocessing (SMP) servers were the de facto standard for UNIX-based applications. Today, however, we are witnessing the emergence of architectures such as blade servers and clustered servers, and of new operating systems such as Linux.
Back then, moving from one vendor's SMP server to another was relatively simple, as benchmarks could be conducted to ensure that the new server would deliver the required performance. Today, customers looking to migrate from a UNIX SMP architecture to a Linux architecture based on blade servers face a significantly more complex task. The potential for error is higher, and this can lead to decisions with disastrous results.
CHANGE ASSURANCE: Data centres have changed fundamentally in the way they look and operate with the introduction of grid computing. Moving from silos of disparate resources to shared pools of servers and storage, organisations now cluster low-cost commodity servers and modular storage arrays into a grid. Databases built for grid environments have enabled organisations to improve user service levels, reduce downtime and make more efficient use of their IT resources, while still increasing the performance, scalability and security of their business applications.
Nevertheless, managing service-level objectives remains an ongoing challenge. Users expect fast and secure access to business applications 24/7, and IT managers must deliver without increasing costs and resources. Databases play a key role in ensuring high availability. In the next generation of databases, the ability to run real-time queries on a physical standby system for reporting, to perform online rolling database upgrades by temporarily converting a physical standby to a logical standby, or to use a snapshot standby to support test environments can all help ensure rapid data recovery in the event of an IT disaster.
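The standby roles described above can be illustrated with a small, purely conceptual sketch. This is a toy model of the behaviour, not Oracle's Data Guard interface; the class, method names and sample redo entries are all hypothetical, chosen only to show the role transitions.

# A toy, purely conceptual sketch of the standby behaviour described above.
# Not Oracle's Data Guard interface: all names here are hypothetical.

class StandbyDatabase:
    """Minimal model of a standby database's roles and transitions."""

    def __init__(self):
        self.role = "physical_standby"  # normally applying redo from the primary
        self.applied_redo = []          # changes already applied
        self.pending_redo = []          # changes queued while in snapshot mode

    def receive_redo(self, change):
        # Redo keeps arriving from the primary regardless of the current role.
        if self.role == "physical_standby":
            self.applied_redo.append(change)   # applied as it arrives
        else:
            self.pending_redo.append(change)   # retained, applied after testing

    def query(self, sql):
        # Real-time query: run read-only reporting against the standby while
        # redo apply continues, offloading that work from the primary.
        return f"result of {sql!r} as of change #{len(self.applied_redo)}"

    def convert_to_snapshot_standby(self):
        # Open the standby read-write for testing; redo is queued, not applied.
        self.role = "snapshot_standby"

    def convert_to_physical_standby(self):
        # Return to standby duty and catch up on the queued redo (a real system
        # would also discard any changes made during testing).
        self.role = "physical_standby"
        self.applied_redo.extend(self.pending_redo)
        self.pending_redo.clear()

standby = StandbyDatabase()
standby.receive_redo("INSERT order 1001")
print(standby.query("SELECT COUNT(*) FROM orders"))  # reporting offloaded
standby.convert_to_snapshot_standby()                # test against live-like data
standby.receive_redo("INSERT order 1002")            # queued while testing
standby.convert_to_physical_standby()                # back in sync with primary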
APPLICATION PERFORMANCE TESTING IS A NECESSITY, NOT A LUXURY: To understand the impact of application performance testing on businesses, let us take a closer look at a key IT issue for organisations in relation to managing change. During the lifespan of any application system, changes are a fact of life, but their complete impact has to be understood before the application goes into production. Common system changes include:
-- Updates to an application requiring it to be moved from a testing to a production environment
-- Upgrading or patching the database or operating system
-- Changes to the database schema
-- Changes to storage or network
-- Testing a potential new hardware platform (eg comparing UNIX platforms)
-- Testing a potential new operating system (eg migrating from Windows to Linux)
To provide some structure to this process, a range of tools has been released to help customers better manage testing and give them some capability to test application performance. Although such tools make the testing process easier, many of them require a significant investment of time and effort to gain a functional understanding of the underlying application before test workloads can be generated. In the vast majority of cases, the bigger issue is that the resulting workloads are, to a large degree, artificial.
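To make the point concrete, here is a minimal sketch of how such script-based tools typically synthesise a workload. The table names, query templates and rates are hypothetical; the artificiality described above is visible in the code itself, since the query mix, arrival times and data values are all invented by the tester rather than drawn from production traffic.

import random
import time

# Query templates the tester hand-writes after studying the application.
QUERY_TEMPLATES = [
    "SELECT * FROM orders WHERE customer_id = {id}",
    "UPDATE inventory SET qty = qty - 1 WHERE item_id = {id}",
]

def synthetic_workload(duration_s=5.0, rate_per_s=10):
    """Emit queries at a uniform rate with uniformly random parameters --
    unlike real production traffic, where arrivals are bursty and access
    is heavily skewed towards a small set of hot customers and items."""
    end = time.time() + duration_s
    while time.time() < end:
        template = random.choice(QUERY_TEMPLATES)
        yield template.format(id=random.randint(1, 100_000))
        time.sleep(1.0 / rate_per_s)

for query in synthetic_workload(duration_s=1.0):
    print(query)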
Despite extensive testing and validation, both time-consuming and expensive, the success rate has traditionally been low: many issues still go undetected and application performance can be affected, leading to potentially disastrous outcomes.
To help customers deal with application performance testing, the latest release of the industry's leading database incorporates new features that allow customers to capture a production workload and "replay" it on a test system to show how the application behaves in the new environment. The key difference in this approach is that all external client requests directed to the database are captured -- so it is the real workload that is captured and then replayed on the test system. The comprehensive reporting system provided will surface any errors or unexpected results (eg a different number of rows returned by a query).
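The idea can be sketched in miniature. The harness below is a simplified illustration of capture-and-replay, not Oracle's implementation (in Oracle Database 11g this capability is delivered by the Database Replay feature); the helper names and sample data are hypothetical. Every client request is recorded in production with its timing and a summary of its result, then re-issued against the test system with the same pacing, and any divergence, such as a different number of rows returned, is reported.

import time

def capture(requests, run_query):
    """Record every production request with its time offset and row count."""
    log, start = [], time.time()
    for sql in requests:
        rows = run_query(sql)
        log.append({"offset": time.time() - start, "sql": sql, "rows": len(rows)})
    return log

def replay(log, run_query_on_test):
    """Re-issue the captured requests against the test system with the same
    pacing, and collect any divergence in the number of rows returned."""
    start, divergences = time.time(), []
    for entry in log:
        wait = entry["offset"] - (time.time() - start)
        if wait > 0:
            time.sleep(wait)  # reproduce the original request timing
        rows = run_query_on_test(entry["sql"])
        if len(rows) != entry["rows"]:
            divergences.append((entry["sql"], entry["rows"], len(rows)))
    return divergences

# Hypothetical stand-ins for the production and test databases.
prod = {"SELECT * FROM orders": [("o1",), ("o2",)]}
test = {"SELECT * FROM orders": [("o1",)]}  # eg a row lost in migration

log = capture(prod.keys(), lambda sql: prod[sql])
for sql, expected, got in replay(log, lambda sql: test[sql]):
    print(f"DIVERGENCE in {sql!r}: expected {expected} rows, got {got}")

With this innovative feature, organisations will be better prepared to cope with change -- without fear.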
SIDEBAR: ORACLE DATABASE INNOVATION OVER 30 YEARS:
-- 1979: World's first commercially available relational database system released
-- 1983: Industry started to move away from proprietary to open system platforms with a portable version of the database
-- 1988: Enterprise scalability of mission-critical applications enabled through support for row-level locking
-- 1992: New database capabilities such as stored procedures, triggers and declarative referential integrity, making the database programmable and able to enforce business rules in a client/server environment
-- 1997: Support for Very Large Database (VLDB) requirements in anticipation of the massive growth in the storage of data online
-- 1998: New features to support the growing role of the Internet in the business environment
-- 2001: Support for Oracle Real Application Clusters and advanced Data Guard standby environments
-- 2004: Introduction of the first database able to take advantage of low-cost hardware and storage arrays in a grid environment
-- 2007: Innovation continues, extending the ability to deliver grid computing benefits
