Posts Tagged ‘smart computing’

Key Recommendations

There are many factors that determine success in large-scale projects. What is surprising is that management and motivation, combined with measurement and tracking of the right things, explain success far more than the selection of tools or the specific methods of work employed within the systems development function. All too often we find well-organized teams, using the most advanced tools and methodologies, that nevertheless are not successful. As a result, we have focused on the management and leadership skills that are required. In addition, we have identified a serious problem with governance of large-scale projects and have formulated recommendations that address these issues as well. Putting all of these factors together will significantly raise your chances of success, or help you turn around a project that is teetering on the brink of disaster.

Large Projects


Make sure the business vision has been clearly articulated and championed by senior executive management. The fundamental building block for a successful large-scale systems effort is senior executive management support. The champion for the project must come from this group. Without this important element, the likelihood of failure is very great. Systems of this magnitude need to be defined in terms of key business outcomes. It is by focusing on the business outcomes that the meaning of the large investment required for different systems becomes clear. Outcomes need to be defined not in terms of process or other inward-looking criteria, but rather in terms of specific deliverables of business results in the market or with customers. If the organization is unable to define these factors for the system, then consideration should be given to not building it.

Place the program under a leader with obvious business “subject matter” credentials, and credibility within the enterprise. Leadership of large-scale systems efforts is critical to success. What kind of leadership? A common mistake is to treat the effort as though it is a systems project. It is not. It is a business project, and should as a result be led by a well-recognized business leader. Picking the leader is critical, and is not done by the IT group. Instead, it is an enterprise matter, and senior executive management needs to be part of the consultations. Picking a senior business executive to lead these projects is a strong signal for success. It makes it easier to communicate systems initiatives to others in senior management and to the various business function elements whose cooperation is needed.

Establish an effective “governance” linkage to executive management. Failure comes from lack of a clear governance model that links together the IT group in charge of the project and the executive management that ultimately is responsible for signing the checks for funding. The purpose of the governance model is to build into the process a system of collaboration between IT and executive management that ensures continued funding. This continued support comes in the form of consultations that continue to clarify the strategic mission of the project, and to review the continued change in its strategic dimensions. Sometimes external factors change, for example, leaving the IT shop without any clear way to adjust its efforts. Business senior management is generally charged with being outward-looking and is responsible for monitoring external competitive threats and other market developments, including emerging opportunities, in order to provide constant steering of the systems development process for large-scale systems.

Assure the presence of in-depth business function membership on the project team. Another failure factor is a project that does not have business function membership on the operational side of the project. Successful projects keep this part of the governance structure in place, and ensure that the business function representatives are constantly engaged as the system is created and rolled out into the organization. The most common problem is the “fair weather friend” syndrome, when the business function leaders are there at the beginning of the project, but then disappear or divorce themselves from the effort the moment a problem appears, or there is the slightest hint of difficulty. “It’s not my project – I would have done it differently,” says the back-biting business function leader. “They want to do it that way, then let them pay for it.”

Maximize continuity among the project team participants. Even short-term turnover is a major problem for most IT shops. For long-term projects, the problems magnify. Only a few original members of a project are around to see the project through to its conclusion. A key best practice in this regard is to work hard and carefully at retaining the senior talent in any large-scale job. This is not to say that only senior talent should be preserved and sheltered, but continuity among team members is a major success factor in staying on time and budget. The nature of large-scale systems is such that the cumulative effects of a series of small delays from various sub-teams can add up to really large delays overall, particularly when the critical path is changed – as it almost always is. For smaller projects, this issue can be dealt with on an ad hoc basis; for large-scale projects, however, the ripple effect can become debilitating. Therefore, a formalized system for maintaining continuity of participants is a critical element.

Implement stage gate reviews with executive management. Stage gate reviews are points in the project where the organization makes a major assessment of its continued viability. They have their parallel in the milestone process of classical project management. Stage gates are defined so that a clear message can be formulated for executive management in the company. It is their continued support upon which rests the funding and mission of the effort, and they need to be kept constantly informed regarding the status of the project. In this way, funding is assured, and the higher elements of strategy are continually injected into the systems delivery process. This is in contrast to the practice of letting a large-scale systems effort gradually sink into oblivion, away from the eyes of executive management. The stage gate review process needs to be scheduled with executive management far in advance, and kept on their calendar. A sense of importance and mission must be maintained, so that actual executive management, not lower-level representatives, participates in these meetings.

Track and publish schedule results and costs. Good program management rests upon carefully watching expenditures for each stage in the process. A best practice is to publish the results regarding time and budget to all persons involved in the project – both on the IT as well as on the business side. What is the effect? This helps all parties involved to realize the critical nature of the effort that is underway. Doing this usually helps to motivate the various teams on the critical path, and to increase a general level of awareness of the project, thus stimulating overall efforts at cooperation, something sorely missing from many projects. In most organizations, it is human psychology to band together even more tightly into cooperative teams when there is a sense of urgency and a pressing business-critical mission that needs to be accomplished. In contrast, it is the silent projects that die a silent death, unnoticed by the larger organization.
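As a rough illustration of publishing schedule and cost results, the sketch below builds a per-stage variance report; all stage names and figures are invented for the example.

```python
# Hypothetical stage data: (stage name, planned cost, actual cost,
# planned days, actual days).
stages = [
    ("Requirements", 200_000, 240_000, 60, 75),
    ("Design",       350_000, 340_000, 90, 90),
    ("Build",        900_000, 980_000, 180, 205),
]

def variance_report(stages):
    """Return one status line per stage showing cost and schedule variance."""
    lines = []
    for name, plan_cost, act_cost, plan_days, act_days in stages:
        cost_var = (act_cost - plan_cost) / plan_cost * 100  # percent over/under
        sched_var = act_days - plan_days                     # days late/early
        lines.append(f"{name}: cost {cost_var:+.1f}%, schedule {sched_var:+d} days")
    return lines

for line in variance_report(stages):
    print(line)
```

Publishing even this simple a report to both the IT and business sides makes slippage visible stage by stage, rather than only at the end.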

Assure the effectiveness of mechanisms to coordinate with other projects. Large-scale systems projects are never an island unto themselves, but rather exist within the rich context of many other initiatives that are going on, some of which occasionally may have higher short-term priorities. It is necessary to ensure that any large-scale systems effort is integrated into the planning for other smaller projects that may be underway. The purpose of this is multifold. It is useful in ensuring allocation of resources, both funding, and skills, among the various projects. It is also necessary in order to ensure a consistency of architecture. This coordination process needs to be constant, and be built into the overall governance mechanism of the project.

Assure effective coordination in planning the Technical Architecture. Due to the enterprise-wide nature of large-scale systems, it is often the case that different systems (and sub-systems) overlap or even conflict with one another. This is why the role of the chief architect is so important. Coordination planning around architecture is one of the key high-level tasks of management for a large-scale systems implementation. This process is not a one-time step in a long sequence of systems development events. Rather, it is a constant point of review that is done as a punch list element for each major step along the way. In this way, many problematic issues are resolved before they occur.

Assure that the project team has adequate technical staffing and liaison. Throughout a lengthy project, the skills required will change. This factor, combined with the inevitable turnover in IT organizations and the long-term nature of a large-scale project, introduces a structural uncertainty into any proposed project. We have seen that between different sub-projects, there is always the challenge of distributing adequate technical and specialized talent effectively. This in itself is a serious management challenge. At the same time, most projects are forced to rely upon outside contractors for major pieces of the work. Finally, all of the teams involved – whether insiders or outsiders – must be set into a program management structure that will facilitate adequate communications with the business. The best practice is not to set this type of communication as a policy, but to systematically build it into the schedule of work so that there is little doubt it will take place. An additional benefit is that this type of scheduling ensures that eventually this type of exchange of important information becomes an accepted part of the corporate life-style. This, in turn, helps to inculcate these same values into the different sub-contractors who might be working on various projects from time to time.

Assure that effective escalation processes are in place. Since large-scale development programs are composed of different projects, and these projects in turn are sometimes composed of sub-projects, inevitably there arise conflicts between different teams. Disagreements over different technical solutions, cross-team effects of one solution spilling over onto another, and the struggle for resources that must carefully be allocated across and between different teams – these are only a few of the issues that spawn disagreements. From the very beginning it is best practice to design into the organizational structure of the project management group the specific ways for escalation to take place. This will ensure that a coherent decision-making structure is put into place.

Make change leadership a planned and visible component. All too many IT shops take on large-scale systems development projects and gradually allow them to slip out of visibility to the business side of the house. The danger in this behavior is that IT eventually comes to be seen as the “owner” of the project. In reality, of course, it is the entire business (not IT) that is the owner of the project. In order to ensure this idea remains at the forefront of people’s minds, leadership of change must be engineered into the program management of the project. Visibility is important in order to continually refresh corporate learning and consciousness of the changes that are taking place. In addition, it helps the IT team remain in contact with the business so that important information can be picked up regarding anticipated changes in strategy.

Actively envision and plan for the “Living There” stage – supporting the new environment and harvesting the business benefits. Effectively conducting a large-scale development effort is more than the day-to-day operations involved with program management and reporting. The best projects have in their field of vision a picture of how the system will operate when it is completed – not only a technical vision, but an operational vision encompassing the business processes that will of necessity change. In order to harvest business benefits, companies prepare for the completed system and how it will be used long in advance, instead of waiting until the system is near completion and only then worrying about how it will be used and implemented into business processes. When companies wait too long before focusing on the end state, we have found their chances of success drastically reduced. Envisioning the future, and actually training for the future, needs to be built into the systems development process from the very beginning, or at a minimum from the middle of a project. This will help ensure that everyone is on board and that the organization will be prepared to take immediate and full advantage of the system once it finally is ready (see Project ES: Implementing Enterprise Systems for details regarding the different stages of ERP rollouts).

Assure ongoing communication and dialogue with the target business operation – manage expectations. The politics of large-scale systems implementation require constant communications and expectations management. This must be systematically maintained throughout the project, so that towards the end – or at any other crucial moment – it is possible to get the support and funding needed to accomplish milestones in the project. Management of expectations is a large part of this effort. Pace, timing, and verification of benefits to end-user groups need to be controlled carefully so that the system does not develop a poor reputation before it is even completed.

Article by Shaun White http://www.sacherpartners.eu



Define your strategy. Determine the basic strategy you are employing (e.g., why you are adopting Cloud Computing and what it is going to provide for you). We have identified several generic strategies that are available, including time to market and cost reduction, and several variations. Without knowing your basic raison d’être for going the Cloud Computing route, it will be impossible to evaluate your success, or to value the service you are buying.

Assess your current platform and investment. You need to determine the compatibility of the proposed Cloud Computing service with your platform, both from a technology point of view and from a systems maturity viewpoint. Look for application sets that are not well developed in your organization, that have a relatively small number of interfaces to other applications, that will tend to be relatively stable over time, and that do not require a large amount of customization to meet your needs.

Cloud Computing


Determine the source of value added for Cloud Computing. Evaluate where Cloud Computing will give you value added. Are you merely replacing an application process, or are you going to get something from Cloud Computing that you could not obtain otherwise? Understanding this helps to set the pricing for the service. Another critical factor is the level of commoditization in the market. Cloud Computing providers will charge premiums for services that are unavailable elsewhere.

Estimate integration issues (and who is going to pay for them). Our research suggests companies tend to underestimate the cost and complexity of integration between their infrastructure and the Cloud Computing service. You may be able to save by having the basic development of the system paid for by the Cloud Computing provider, but if there are counter-balancing integration challenges that must be funded by your own organization, the Cloud Computing advantage may disappear. Understanding who is going to pay for what during the entire lifetime of the Cloud Computing relationship is necessary to truly understand the value you are getting.

Contract the service level agreement (SLA). Negotiation of SLAs for the Cloud Computing relationship cannot be done too carefully. The trend is to have a single point of contact for any problem – whether application or network performance. First, second, and third tier escalation, Problem Determination Procedures (PDPs), and Trouble Ticket Tracking need to be well defined and subjected to a testing period.
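A minimal sketch of tiered escalation, using assumed (hypothetical) time thresholds rather than the terms of any real SLA:

```python
# Hypothetical escalation thresholds; a real SLA would define these
# contractually and per severity level.
TIERS = [
    ("first", 4),             # first tier owns tickets open under 4 hours
    ("second", 24),           # second tier owns tickets open under 24 hours
    ("third", float("inf")),  # third tier owns everything older
]

def current_tier(hours_open):
    """Return the support tier that should own a ticket open this long."""
    for name, limit in TIERS:
        if hours_open < limit:
            return name
```

Writing the thresholds down as data like this makes them easy to test during the trial period and to renegotiate later without touching the routing logic.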

Implement “vanilla,” then add value. Our analysis indicates that a user should go as long as possible (in the contracted relationship) without introducing customization or any other changes in the services being purchased. Contracting for the “vanilla” layer of services will give the best price-performance. When absolutely necessary, and after the bugs in the Cloud Computing relationship have been worked out, you can then begin to add value to the contract gradually by introducing extra services (and features) as required. According to the VCM, the key to limiting the unpredictability of long-term contract costs in an outsourcing relationship is to avoid customization as long as possible. One advantage of the Cloud Computing model is that it almost always provides a vanilla level of basic services that can be hitched onto in order to stabilize the long-term costs of the contract.

Do a cost analysis. For any consideration of the Cloud Computing model, a cost analysis needs to be done so that the Cloud Computing option is compared to alternative paths. Any cost analysis has a diverse set of cost factor elements that can be either included or excluded from the analysis, and the outcome varies depending on what is included. One advantage of the Cloud Computing option is that it is possible to receive a fixed-fee commitment from the provider along with a clear bill of services. Although what is included will vary from contract to contract and from one service provider to another, it should be possible to define the services in a bundle that can be compared to your own costs of providing them internally, although there will be many judgement calls concerning where to load on costs.
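The sketch below (all figures invented) shows why the included cost factors drive the outcome: the same internal operation looks cheaper or more expensive than a fixed-fee quote depending on which factors enter the comparison.

```python
# Invented annual internal cost factors for one service bundle.
internal_costs = {
    "licenses": 120_000,
    "hardware": 80_000,
    "support_staff": 300_000,
    "facilities_overhead": 60_000,
}

def internal_total(costs, included_factors):
    """Sum only the cost factors this analysis chooses to include."""
    return sum(costs[f] for f in included_factors)

cloud_fixed_fee = 420_000  # hypothetical provider quote

full_cost = internal_total(internal_costs, internal_costs.keys())       # 560,000
narrow_cost = internal_total(internal_costs, ["licenses", "hardware"])  # 200,000
```

With staff and overhead included, the provider quote beats the internal cost; with only licenses and hardware counted, it looks far worse. The judgement call about where to load costs decides the comparison.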

Chapter 2 of this report reviews the value proposition being made by Cloud Computing providers. What value are they bringing to customers in the market, and what are the factors that determine whether it is likely to be successful?

Chapter 3 details the basic types of Cloud Computing and describes the “delivery chain” from infrastructure and applications through networks to desktops that must be managed to produce high performance in Cloud Computing.

Chapter 4 introduces our Why-What-Who-How framework for making decisions about going with a Cloud Computing approach.

Chapter 5 reviews the basics of negotiating a Cloud Computing contract for services.

Chapter 6 identifies the current risks and limitations of the Cloud Computing model, and proposes various amelioration strategies that can be employed. We have also provided three appendices.

Appendix A provides a checklist of factors to consider when evaluating a Cloud Computing provider.

Appendix B provides a watch list to monitor for the Cloud Computing sector.

Appendix C provides a more detailed look at the Cloud Computing value proposition from a cost standpoint.

Article by Shaun White http://www.sacherpartners.eu



Why provision application services externally? Since the Cloud Computing market is still emerging, you should only consider Cloud Computing if the value propositions are directly translatable into business advantage for your firm. You should clearly understand the underlying business forces, competitive pressures, and urgency that may make Cloud Computing an attractive option. Is first mover advantage for a greenfield operation or spin-out likely to translate into lasting competitive advantage? Is flexibility to exit a business, or rapidly ramp up business volume important? Can you reliably forecast the transaction processing scale required of your technology infrastructure twelve months in the future? Could the wrong in-house technology decision now create an unscalable wall that blocks business growth? Should you ration capital funds, and focus them solely on core, differentiating assets, not operating infrastructure?

Cloud Computing Decision Sequence


In the first stages of the decision process, it is necessary to carefully determine and assess the underlying forces that are compelling change in your IT infrastructure or business. In some cases, the reason could be that external competitive pressures are forcing your enterprise to develop new eBusiness services, or to go to market in a different way. Or the external pressures could be simply along the low-cost provider trajectory. In any case, there can be a variety of external forces that will compel the organization to make significant changes in its business processes and in how it delivers IT support to make them work.

At the same time, significant internal pressures can be a driver for adoption of the Cloud Computing model. For example, if there is a chronic shortage of IT personnel, then it may be completely impossible to deliver the required IT services any other way. There may be core competency issues coming to the surface (e.g., if there is consensus around the idea that many IT services should be done by outsiders, leaving key personnel to focus on activities that support core competencies of the organization, instead of frittering away their talent elsewhere).


What are the specific business results and performance levels the Cloud Computing solution must deliver? Since few vendors have tackled the end-to-end service delivery chain (and demonstrated consistent competence provisioning each specific service), it’s critical to understand the performance characteristics and limitations of the applications, networks, infrastructure and support services (starting with help desks) that make up your Cloud Computing delivery chain. Are the application’s business process design and the Cloud Computing technology integration sufficient to support everyday business operations? Will the technology infrastructure (network and operations) prove reliable, and sufficiently robust, to meet transaction processing needs? Should you limit the number of vendors providing service to reduce finger pointing, or should you consciously involve sufficient partners to optimize contingency and exit planning?


Who should you choose as your providers? And should the arrangements be viewed as transactions, or as longer-term strategic partnerships? Since contracts are predominantly short-term, the accepted rules for prioritization, risk, and relationship management could shift dramatically. Should you structure arrangements to capture intended financial advantages quickly, while hedging your company’s most critical risks? Or should you take the time to negotiate arrangements that address each potential issue in advance? Will your service level agreements be little more than mutual goals in a situation where contracts may expire before default agreements and remedy options can be enforced? The internal cost of service forms the baseline against which the Cloud Computing model is compared. After the baseline costs for providing the service internally are established for a period of time, usually 2-3 years, the next step is for the user to contact different Cloud Computing vendors and begin selection and development of contracts.


How should you organize to manage transition and ongoing operations in a Cloud Computing-based service model? Is “service sourcing” destined to become a key competency in your organization? Will dramatic changes redefine the role of your IT organization, or will the continuing evolution away from custom development be sufficient? Will traditional internal application maintenance and support become obsolete? Can technology and service integration be outsourced, or will rapid integration become a core competency that distinguishes operational and technology leaders? This has several implications and challenges:

  • IT Organization. The IT organization must readjust itself to working and “interleaving” with an outside service provider. This can mean either that people will be re-assigned to more ‘core’ activities for the company, or they will leave. Support structures and how the help desk operates must be debugged, and changed so that users or customers are not disadvantaged by the transition to the new model.
  • Project Management. The way in which the IT organization and the business units that drive priorities in IT work together must change to accommodate the new Cloud Computing delivery model. Instead of making demands against internal resources, it is now necessary to work with partners, and this completely changes how the budget approval and planning process operates.
  • Business Processes. Finally, in order to make full use of the Cloud Computing provisioning of IT services, it is clear that many if not all business processes must be changed, or at least modified, in order to adjust to the new model. For example, policies for handling sensitive data that is going to be stored and processed by the Cloud Computing provider must be worked out. Also, it is important to keep track of business processes over time to see if any significant potential for synergy or consolidation appears.

In summary, the Why-What-Who-How framework for choosing Cloud Computing starts with the large “macro” forces that are shaping the utilization of IT in the organization, then narrows down the options by first understanding the scope of what is required. After that is determined, the nature of the required application set determines the general type of Cloud Computing to choose. Then pro forma cost estimations are made to establish a baseline for cost and expenditure that serves as a point of comparison for the Cloud Computing model. Cloud Computing providers are then selected on the basis of a variety of both financial and non-financial data, and contracts, including SLAs, are negotiated and signed. Even then, the organization still faces a serious amount of work in adapting to the new provisioning model.

Article by Shaun White Sacher Partners Ltd


Management Summary

For the past several years, information technology budgets have been increasing for a number of reasons, including the deployment of enterprise systems, new communications infrastructure needed to handle the explosion of email and data transmission, and entries into electronic commerce. At the same time, companies are facing increased competitive pressures, are often unable to increase prices, and thus continue to look aggressively for ways to wring out costs. The IT organization is under pressure to deliver new business capability and to aid in cost reduction initiatives in other parts of the business. It is also under pressure to reduce its own costs, despite having no direct control over consumption patterns or many of the drivers that influence IT costs.

Cutting IT Costs While Building Capability


Thus, the CIO may face the demand to reduce costs while simultaneously upgrading the technology infrastructure to support electronic commerce, rapid applications development and, in many cases, a global communications network. Sound impossible? Not necessarily. After reviewing practices at more than twenty-five companies, we concluded that well-executed techniques for managing supply and demand of IT services can free up the funds needed for new infrastructure investment. The most significant barriers to achieving these savings are typically not technical or operational, but rather political and cultural, especially where business units view standardization of technology and controls over demand for IT services as threats to their autonomy. IT cost control is not just a numbers game – it is a general management challenge.

Cost Management Framework

We find that there are three ways to look at IT cost:

  • Cost efficiency looks at how IT is delivering the basket of services that it controls. For example, does IT deliver help desk services of high quality at a best-in-class price?
  • Cost effectiveness looks at how information systems, technology and IT services affect the total cost of specific business processes. The focus here is on business performance and costs.
  • Market growth looks at IT’s contribution to new product development, market expansion or business transformation. The focus here is on the cost and value of IT’s contribution to business flexibility and innovation.

Employing such a framework is essential to effective cost management, because the techniques for cost control and the measures of success vary with these categories. Absent a useful cost categorization, executive teams fall into the trap of looking only at the total IT budget, and sometimes issuing potentially business-damaging directives like “cut 10% like all the other G&A departments.”

Cost Control Techniques

Cost control techniques in each of these categories address two major and interrelated needs: controlling the cost of supplying technology and services, and controlling demand for technology and services. For example, in most companies IT is responsible for provision of long distance phone services. Through negotiation, audit, comparison shopping, and use of new technologies and techniques (such as voice over the Internet), IT can lower the cost of a minute of long distance. However, these savings may be overwhelmed by increases in long distance usage, and arbitrary or misguided attempts to control these costs may result in both higher unit costs and lower service. Companies must address cost and demand control together and anticipate the possible business outcomes of cost control moves.
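The long distance example can be put in toy numbers (all rates invented): total spend is simply the unit rate times usage, so a lower rate can still produce a higher bill when demand is left uncontrolled.

```python
def total_spend(rate_per_minute, minutes):
    """Total spend is unit cost times consumption."""
    return rate_per_minute * minutes

before = total_spend(0.10, 1_000_000)  # old rate, old usage: $100,000
after = total_spend(0.07, 1_800_000)   # 30% lower rate, 80% higher usage
# Despite the lower unit cost, total spend rises from 100,000 to 126,000.
```

This is why the report insists that supply-side savings be paired with demand management: each variable alone is an incomplete picture.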

Managing the Cost of Supply

Techniques for managing human resources in IT yield significant benefits for many companies – understandable, given that expenditures on staff and contractors are typically in the range of 30-50% of total IT costs.

Probably the single most important technique for managing staff costs (yet the least practiced) is “total time accounting,” having IT staff account for their time at a sufficiently detailed level to be able to match hours with results. In order to do activity-based costing and budgeting, you need to know which resources are linked with which activities. For example, how many hours really go into minor systems enhancements? Total time accounting enables you to redeploy resources away from low-value activities to more strategic initiatives, as well as providing information useful in demand management.
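A total time accounting roll-up can start as small as the sketch below (timesheet records are hypothetical), matching hours to activities so that low-value work becomes visible:

```python
from collections import defaultdict

# Hypothetical weekly timesheet entries: (person, activity, hours).
timesheet = [
    ("ana", "minor enhancements", 22),
    ("ana", "strategic project", 10),
    ("ben", "minor enhancements", 30),
    ("ben", "production support", 8),
]

def hours_by_activity(entries):
    """Roll up staff hours by activity for activity-based costing."""
    totals = defaultdict(int)
    for _person, activity, hours in entries:
        totals[activity] += hours
    return dict(totals)
```

Here “minor enhancements” absorbs 52 of the 70 recorded hours: exactly the kind of low-value concentration the roll-up is meant to expose so that resources can be redeployed to more strategic work.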

Staff cost control is also improved by performance management and retention management. Performance management means that the company has a career development plan with an appropriate mix of incentives and rewards for high performers, as well as willingness, when necessary, to actively deal with poor performance. (Almost half of the 20 companies we interviewed on this point did not actively deal with poor performance.) Retention management includes a good understanding of the costs of losing and replacing an employee, plus the flexibility to shape benefits and compensation plans creatively to retain top performers.

In managing the costs of technology supply, standardization of hardware, software and other infrastructure yields significant benefits. Companies that track costs associated with the introduction of new or different technology, including software, soon recognize the amount of hidden cost – especially long-term support cost – associated with a variant technology. While there are legitimate business reasons for using non-standard technologies in specific business units, lower-cost companies ensure that the differentiation is legitimate. They manage technology infrastructure through a hierarchy of standards: for example, email and database management systems must be standardized; back-office software such as HR and payroll should be standardized; enterprise resource management systems must be standardized at points of corporate coordination; and business-specific strategic systems have more flexibility but are encouraged to leverage as much corporate infrastructure as possible.

Managing the Level of Demand

Management of demand is where firms experience truly substantial savings. One company in this study reduced IT costs by 50% over a two-year period by elimination of unnecessary systems, more appropriate use of existing systems, and reduction in support and enhancement of systems.

In many companies, baseline costs (keeping what we have running, doing routine maintenance, and making externally imposed changes) consume 80-90% of the IT dollar spend, and IT management may find these costs difficult to reduce. We recommend close inspection, because it often happens that support and operation of questionable legacy systems consume a disproportionate amount of money. Companies under strong cost control mandates have dramatically clamped down on “maintenance and support” with large savings and little degradation of business capability or service.

We also recommend that companies carefully analyze their portfolio of applications and decommission those whose operating costs are no longer justified by their benefits. Some companies have used Y2K as the reason to eliminate systems with limited use or life expectancy.

Barriers to demand management are predominantly political and cultural. In several cases we studied, IT professionals knew that applications had limited usefulness or consumed excessive resources, but they felt unable to raise this issue. In other situations, the cultural norm called for IT to respond to all user requests. In one company where a new CIO escalated this issue to the executive team, they discovered that 30% of the IT development/maintenance projects lacked specific tactical or strategic value.

Implementing a Cost Control Program

As a first step in implementing an IT cost control program, companies must establish their objectives, desired business outcomes (including user and IT behaviors), and cost/benefit measurement techniques. The next step is to assess current costs and technological and organizational capabilities in order to find the most significant opportunities to control or reduce costs. This assessment is very important, because there are typically too many potential cost reduction initiatives to attempt at once (e.g., there are more than 40 sets of techniques listed in this report). The third step is to select and follow through on those cost control opportunities with the highest impact for your business.
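The selection step above is, at heart, a ranking problem: too many candidate initiatives, limited capacity. As an illustrative sketch (the initiative names, savings, and effort figures are invented for the example, not drawn from the report), one simple screen is to rank candidates by estimated savings per unit of implementation effort and keep only the highest-impact few:

```python
# Hypothetical candidate initiatives with rough annual savings ($k)
# and implementation effort (person-months); all figures illustrative.
initiatives = [
    {"name": "decommission legacy app",    "savings": 400, "effort": 6},
    {"name": "standardize email platform", "savings": 250, "effort": 10},
    {"name": "total time accounting",      "savings": 300, "effort": 4},
    {"name": "renegotiate vendor contract","savings": 120, "effort": 2},
]

def shortlist(candidates, top_n=2):
    """Rank by savings per unit of effort and keep the highest-impact few,
    since there are too many potential initiatives to attempt at once."""
    ranked = sorted(candidates,
                    key=lambda c: c["savings"] / c["effort"],
                    reverse=True)
    return [c["name"] for c in ranked[:top_n]]
```

In practice the scoring would also weigh risk, dependencies, and organizational readiness (the maturity matching discussed in Chapter 4), but the principle of an explicit, comparable ranking is the same.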

About this Report

Chapter 2 explores useful ways to categorize, assess and discuss IT costs, including efficiency, effectiveness and support for business transformation.

Chapter 3 summarizes more than 40 sets of techniques used by companies to help reduce IT cost by becoming more efficient or by appropriately managing demand.

Chapter 4 describes approaches to assessing the current maturity level of a company and its IT organization and then matching the cost control implementation approach to the current level.

Appendix A outlines an approach, with examples, for activity-based budgeting in IT. The principles of activity-based costing developed for the broader business environment can help in surfacing and managing the cost drivers specific to IT.

Appendix B outlines the application of real options to IT investment decision making, especially in the area of new infrastructure. For more information on this report, please contact Shaun White at http://www.sacherpartners.eu
