Wednesday, December 31, 2008

Taking Measures for True Success: How to Choose Software That Meets Your Needs

You’ve probably already read about recent large-scale software implementation failures and their costs to companies: more than $33 million lost as a project runs over budget and behind schedule; in excess of $54 million spent, and the project canceled before completion; revenue loss of $100 million, as the new software fails to integrate with the old. These are just some examples—global averages are often difficult to determine accurately, due to companies’ reluctance to publicize failed projects.

Is the success of a software selection and implementation project really nothing more than a happy accident, almost completely beyond your control? Why is the failure rate so high?

One main factor Technology Evaluation Centers (TEC) has observed is that the less-than-inspiring rates of success noted above often boil down to the organization’s current experience and historical success with larger-scale implementations involving complex functionality. In other words, the success of a software selection and implementation project depends on whether your current internal resources are sufficiently skilled and experienced to define your organizational needs.

Software Selection (and Implementation) Failure

But in addition to considering your internal resources, it’s important to realize that the probability of success also depends on how you define success.
Organizations often employ “fuzzy thinking” when determining the metrics that define a successful software implementation project. Companies tend to use three common metrics: if the project is completed within the scheduled time frame and within budget, and brings the full spectrum of new functionality to help automate processes and improve efficiency, then the project is deemed a success.

However, upwards of 60 percent of companies fail to consider that completing a project on time and within budget means nothing if the software itself doesn’t meet the needs of the organization.

Let’s look at why time, budget, and functionality aren’t the only metrics you need to consider, and why choosing and implementing a software solution might be more complicated than it seems at first glance.
  • Time: “Short-cuts” can be very expensive in the long (or even short) run. For instance, many organizations elect to shave time from the painstaking requirements-gathering process. But if you discover after implementation that crucial functionality is lacking, you’ll need to spend additional time and effort to work around the deficiencies.
  • Budget: You may be tempted to cut back on certain expenses to bring your project in under budget—but, as an example, shrinking your budget for training staff means the software may not be used efficiently, as staff struggle to learn how to use it themselves, on your dollar.
  • Functionality: Even if a software package offers comprehensive functionality, there’s no guarantee it’s exactly what your organization needs (full-featured doesn’t necessarily mean “best”). The only way to properly rank your functional requirements is to define them—before you even start choosing a solution.
Time, budget, and functionality are important considerations, but they shouldn’t be used as the only measures of a successful project. And even if you meet all of them, your project isn’t a guaranteed success.

The real measure of success is whether or not the software you implement meets your business requirements.


The Big Question: How Do You Succeed?

So if it isn’t just a matter of time, budget, and a spread of features and functionality, how do you succeed? The most important part of the software selection and implementation process is to define your requirements. But, it’s not quite as easy as it sounds.
Know what you do and the steps you have to take to do it, so the software you choose does what you need it to do.

Don’t let the software features dictate your business processes and core activities—make sure your business activities determine the software functionality you require. To do this, you’ll need to get consensus from all stakeholders about what your core business activities are, and then create one shared definition of those activities, also known as business process modeling (BPM). Not doing so may result in disagreement about what activities are most important—and worse, disagreement about whether your chosen software is meeting the needs of your business.
To understand your software requirements—the starting point for a successful software selection and implementation—you must define your desired-state business processes.

Elements of Business Process Modeling

To define your processes, you’ll have to ensure you address all aspects of BPM. Note that these are not necessarily discrete, sequential steps; best-practice software selection projects often address these elements concurrently or iteratively.
  • Defining current processes
  • Describing changes or additions to existing processes that improve business results
  • Conducting review, analysis, and validation to ensure that requirements are clearly and completely described and aligned to your business objectives
  • Prioritizing your needs in terms of what you would like a new software system or custom application development to address: What are your “must-have” versus “nice-to-have” application features and functionalities? (A brief sketch of this screening step follows below.)
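
To make that prioritization concrete, here is a minimal, hypothetical sketch (in Python) of screening a candidate package against ranked requirements; the requirement names and feature list are invented purely for illustration:

```python
# Hypothetical ranked requirements: "must-have" versus "nice-to-have".
requirements = {
    "order entry": "must-have",
    "multi-currency support": "must-have",
    "demand forecasting": "nice-to-have",
}

# Features claimed by a hypothetical candidate package.
candidate_features = {"order entry", "demand forecasting"}

# A candidate missing any must-have is screened out, however full-featured it is.
missing = [name for name, rank in requirements.items()
           if rank == "must-have" and name not in candidate_features]

if missing:
    print("Screen out - missing must-haves:", missing)
else:
    print("Candidate passes the must-have screen.")
```

The point of the sketch is simply that ranking comes before evaluation: until requirements are defined and prioritized, there is nothing meaningful to screen a vendor against.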

BPM + DIY = DOA?

Your internal business analysts (BAs) may have the necessary ability and experience to fully define your requirements, but if they don’t, your organization is set up for a less-than-successful selection and implementation. One study found that “the vast majority of projects surveyed did not use sufficient business analysis skill….” According to the same study, 70 percent of organizations lost control of time and budget while attaining far less functionality than planned. Why? Because they didn’t have analysts sufficiently experienced for this type of strategic project.

In other words, most companies’ internal business analysts do not have the needed background, experience, or expertise to perform adequate or effective business process modeling. You may be imperiling your odds of success by going through the selection and implementation process alone.

If your business lacks the necessary BA expertise, what can you do about it? There are three solutions that can help you deal with the problem of inadequate analyst expertise.
  1. Hire someone to coach and mentor your current business analysts through effective business process modeling and analysis, including driving consensus and requirements-gathering, and steering you away from common BPM pitfalls.
  2. Hire outside consultants to do your business process modeling and requirements analysis for you.
  3. Let the software vendor take care of defining your processes. Attention: this “vanilla implementation”—one flavor for everyone—may sound all right, until you realize that vendors will naturally fit your requirements to the functionality of their software offerings. But the fit may not be right, and the last thing you need is the headache of relearning your business processes to accommodate an unwieldy software package.

You may have noticed that we haven’t talked about training your internal analysts in how to do BPM. The simple reason is that in-depth BPM training is a very long-term, possibly multi-year commitment—which the typical software selection project cannot accommodate.

The Last Word

Are you ready to start your software selection and implementation project? To see if you’ve now got the know-how for a successful outcome, consider these questions.

1. Your first objective for your software selection and implementation project is to
  • a) get a good price, or a price that meets your predetermined budget
  • b) meet industry best practices and regulatory compliance issues
  • c) meet your business process needs
  • d) find a solution that is “user-friendly” and easy to implement
2. The most important measurement of success for any software selection and implementation project is
  • a) finishing the project on budget
  • b) getting the maximum range of software functionality
  • c) ensuring the project is not delayed
  • d) defining your requirements accurately
3. The best way to define your business processes and requirements is to
  • a) use only your internal resources, including business analysts
  • b) hire outside analysts to perform BPM
  • c) get experts to guide your internal analysts through the BPM process
  • d) hand the task over to the vendor
There are no right answers, but there are better answers. Any approach can lead to success. But the question is this: how do you improve your odds, faced with a risky and costly software selection project?
By having your requirements thoroughly and accurately defined, you have a solid basis for the other measures of success (time, budget, functionality). And if you decide to engage the services of dedicated experts, you can benefit from best-practice services for successful requirements definition, including courses to coach and train your business analysts, project requirements elicitation, request for proposal (RFP) preparation, and project scoping and requirements planning.

Whatever you do, make sure you don’t end up a sorry statistic. Define your requirements with the help of specialized and experienced professionals. Contact Symbyo Technologies Experts today for a free consultation.

Tuesday, December 30, 2008

Steps to select the right Outsourcing Vendor

Selecting a software outsourcing vendor is a complex, multistage process of evaluating not only what the provider can do, but also how it does it.

If your project ends up in the wrong hands, it could endanger all your plans for business growth. Conversely, with a well-selected provider you will see savings, enhanced product value, and greater speed to market, giving your business a competitive edge.

Introduction

First of all, it’s important to know that this process can and should take some time. Sometimes, this means months.

A well-organized vendor selection usually takes between 6 and 12 months and can add approximately 1 to 10 percent to the total cost of the project. (For further information on this, read the “Real Cost of Outsourcing” white paper.)

Costs associated with this phase include analysis and documentation of requirements; creation, distribution, and evaluation of RFPs (requests for proposals); contract negotiations; development of SLAs (service level agreements); and fees for external players such as consultants and lawyers. The selection of a vendor is therefore not a process to be rushed. Companies should follow a well-established methodology that defines each step of the journey. After all, the final goal is to end up with the best service provider for delivering the desired outcome.
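
To put that 1-to-10-percent range in perspective, here is a back-of-the-envelope calculation; the budget figure below is hypothetical:

```python
# Hypothetical total project budget, for illustration only.
project_budget = 2_000_000  # USD

# Selection-phase overhead of roughly 1% to 10% of total project cost.
low, high = 0.01 * project_budget, 0.10 * project_budget
print(f"Selection-phase cost: ${low:,.0f} to ${high:,.0f}")
# On a $2M project, that is $20,000 to $200,000 before development even starts.
```

Even at the low end this is real money, which is one more reason to run the phase methodically rather than rush it.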

Step 1: Define your objectives and goals
This is a basic step for all future outsourcing activities. You have to clearly describe the process, service, or product that you want outsourced. You should also indicate what your goals are in outsourcing.

Another of the first things you should do is gather a core team to evaluate vendors and participate in negotiations. The team should consist of individuals from various parts of the company, such as executives from affected business departments, legal staff, and human resources representatives.

Make sure you include the answer to the following questions in formulating your objectives:
* What do you want to outsource?
* What type of outsourcing agreement are you looking for?
* What are the offshore outsourcing locations that you are interested in?
* What are your goals in outsourcing?
* What services do you expect a vendor to provide?
* How much do you plan to spend?
* What are the risks associated with such an outsourcing agreement?

The team’s first task should be to define the high-level requirements of the outsourcing engagement. For instance, if the goal for outsourcing is to reduce costs, the organization should state it openly and leverage this process to explore ways to achieve that goal. The next step is to benchmark the current process against others in the industry. Drawing "before" and "after" process maps is a great exercise that helps companies explain where they are today and show where they want the outsourcer to take them.

Next, it’s critical that the core team determine the right type of services to be outsourced. Many different kinds of work are outsourced; however, all of these services fall into two broad categories: technology services outsourcing and business process outsourcing.

Technology Services Outsourcing
Today’s fast-moving business world requires companies to use sophisticated, fast computer systems and software. These technologies and systems also need to be scalable and highly adaptive, so it is imperative to choose the right partner for developing them. Here are some of the categories that come under technology services:

* E-commerce
* Network/infrastructure
* Software/custom application development
* Telecom
* Web development and hosting

Business Process Outsourcing (BPO)
The new global scenario requires that each company find its own niche that can add value to the world economy. Companies now try to focus their resources on areas that give maximum yield. As a spin-off of this trend, service providers that focus on the narrow business processes these enterprises need have also emerged. Thus the term business process outsourcing (BPO) came into being around 1995. The proliferation of the Internet and its emergence as a business tool helped make BPO highly popular.

Below are the subcategories of services that come under BPO.
* Customer Relations/ Customer Contact Management
* Finance/ Accounting Processes
* Logistics
* Equipment Management
* Security

Step 2: Find out all you need to know about the vendor – Plan the RFI

The Request for Information (RFI) provides material for the first rounds of vendor evaluations. Organizations generally use the RFI to validate vendor interest and to evaluate the business climate in the organization’s industry. As opposed to a highly specific, formal Request for Proposal (RFP), the RFI encourages vendors to respond freely. It also spells out the business requirements defined by the core team, so the vendor understands what the company is trying to accomplish.
* A request for information is just that – a request for information.
* It is usually issued to learn what is available, from whom, and at what approximate cost, before writing an RFP that is based on real information rather than wishful thinking.
* Typically, vendors will not respond to an RFI unless the effort to do so is not excessive and there is an expectation that an order, or at least an RFP, will follow.

Contents of the RFI
* The type of information usually sought by RFIs includes:
o The availability of equipment or needed services.
o The approximate one-time and recurring costs.
o The differentiating factors between the goods or services proposed and similar offerings from other vendors.
* The last item is especially useful in determining the mandatory and desirable characteristics to be included in an RFP.

After vendors return the questionnaire, the issuing company matches the vendors’ responses to the company’s requirements and weights the criteria based on importance. Providers that don’t meet stated needs or haven’t responded to the specific questions are eliminated.
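
As a minimal sketch of that matching-and-weighting step, a simple weighted scoring model might look like the following; the criteria, weights, vendor names, and scores are hypothetical and would come from your own requirements work:

```python
# Hypothetical evaluation criteria with importance weights (chosen to sum to 1.0).
weights = {
    "process expertise": 0.4,
    "cost": 0.3,
    "infrastructure": 0.2,
    "references": 0.1,
}

# Hypothetical vendor responses scored per criterion on a 1-5 scale.
vendors = {
    "Vendor A": {"process expertise": 4, "cost": 3, "infrastructure": 5, "references": 4},
    "Vendor B": {"process expertise": 5, "cost": 2, "infrastructure": 3, "references": 5},
}

# Rank vendors by weighted total; low scorers drop off the long list.
for name, scores in sorted(
        vendors.items(),
        key=lambda item: -sum(weights[c] * item[1][c] for c in weights)):
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: weighted score = {total:.2f}")
```

The exact scale matters less than agreeing on the weights up front, so that every vendor response is judged against the same yardstick.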

Eventually, the RFI process helps companies make the "go or no go" decision—that is, the choice to proceed with or walk away from a project. The data solicited identifies the availability and viability of outsourcing, cost estimate ranges, and risks. It also provides detail useful for developing project requirements.

Step 3: Prepare the RFP
The third step is to develop the RFP; send it to at least three short-listed suppliers; evaluate the responses; and, of course, select the best one.

The RFI and RFP are complementary. Information collected during the RFI process can prepare the solution requirements section of the related RFP. You should have by now a better understanding of project scope and requirements, as well as a list of qualified suppliers. Leveraging the information-gathering focus of the RFI will lead to a concise RFP that articulates the business needs.

The RFP outlines the engagement’s requirements—relevant skill sets, language skills, intellectual property protection, infrastructure, and quality certifications—and gives prospective vendors the information necessary to prepare a bid. The responsibility of developing the RFP rests with the project’s sourcing leader, but various aspects of the document will require input from other domain experts.

A good RFP includes one section that states what the company seeks (business requirements) and four sections that ask about the vendor and what it will be able to provide:
* Business requirements. In brief, this section details the company’s project goal, deliverables, performance and fulfillment requirements, and liquidated damages.
* Vendor profile. External service providers differ greatly in performance, style, and experience. This part of the RFP details the vendor’s stability, services, and reputation.
* Vendor employee information. This section addresses the resources assigned at the project management, middle management, team leader, and task levels, along with the quality of people, their skills, training, compensation, and retention. If your company ranks technical skills highest, it should look at technical expertise before examining costs.

* Vendor methodology. The methodology segment details project management, quality, regulatory compliance and security.

* Infrastructure. This part outlines the vendor’s infrastructure stability and disaster-recovery abilities.

Step 4: Due Diligence
After vendors have sent their RFP responses, you begin the evaluation. Vendors usually propose different strategies when they respond to an RFP: they may suggest a sole-provider, co-sourcing, or multisourcing scenario, in which one, two, or several vendors, respectively, deliver the service to the company. Regardless of the structure, if the proposal meets the stated requirements, each vendor must then undergo a due diligence review.

Due diligence confirms or invalidates the information the vendor supplied on processes, financials, experience, and performance. It helps you determine what the provider can do right now, as opposed to what it might do if given the business. Due diligence should confirm the information supplied in the RFP and address the following data (a simple tracking sketch follows the list):
* Company profile, strategy, mission and reputation
* Financial status - reviews of audited financial statements
* Customer references - preferably from similar outsourced processes
* Management qualifications, including criminal background checks
* Process expertise, methodology and effectiveness
* Quality initiatives and certifications
* Technology, infrastructure stability and applications
* Security and audit controls
* Legal and regulatory compliance, including any outstanding complaints or litigation
* Use of subcontractors
* Insurance
* Disaster recovery, security and business continuity policies
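
Because the list is long and must often be repeated for several vendors, even a trivial tracker helps keep the review honest. A minimal hypothetical sketch:

```python
# Hypothetical due-diligence items and their status for one vendor.
status = {
    "audited financial statements reviewed": True,
    "customer references checked": True,
    "quality certifications verified": False,
    "security and audit controls assessed": False,
    "disaster recovery and continuity policies reviewed": True,
}

outstanding = [item for item, done in status.items() if not done]
print(f"{len(status) - len(outstanding)}/{len(status)} items complete")
print("Outstanding:", outstanding)
```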

Pay attention also to employee policies, attrition, service attitudes and management values; the company and the vendor need to fit together culturally.

You should evaluate the vendor’s project management competency, the level of success achieved, the quality and standards of work delivered, adherence to contract terms, and the communication process. Reliable, ongoing communication is especially important in offshore outsourcing; potential pitfalls can result from infrequent or vague communications. For instance, if the onshore company doesn’t clearly communicate deliverables and timelines, offshore resources might not be allocated correctly, endangering on-time completion of the project.

Sometimes you must perform due diligence on more than one of the vendors that respond. The length and formality of the due diligence process varies according to companies’ experience with outsourcing, the timeline for implementing outsourcing, the risk, and familiarity with the vendor.

Step 5 (Optional): Test Project
Some companies even conduct test projects to ensure a good fit between the company and the vendor.

These tests allow companies to review the vendor’s project management process for efficiency and effectiveness. Specifically, they look at whether project execution is completed within guidelines, whether deliverables are timely and whether the vendor has adhered to defined quality standards. Tests serve as a good method for companies to check and review the facts before making a final vendor decision.

Test projects also let companies experience the benefits of outsourcing before jumping into a long-term relationship. Often, companies will conduct a "proof of concept" (POC) with a couple of vendors to compare results and, after evaluation, choose the best one. A good method to select the best vendor is by taking the top two vendors from the RFP process and having them complete the same test project. This will demonstrate their project management capabilities, communication style, and ability to meet deadlines for deliverables. Many companies are using POCs as test beds before offshoring larger projects.

Step 6: Choose the Vendor
Ultimately, the biggest step in the selection process is picking a service provider to manage business processes and applications. Making the final decision means signing a contract that clearly defines the performance measures, team size, team members, pricing policies, business continuity plans, and overall quality-of-work standards.

Conclusion
Last, but not least, remember that outsourcing is a long-term relationship, and choosing the right vendor is crucial to meeting your technology, business, and financial objectives. If you base your decision on the steps above, you will reduce the risk of engaging in a poorly chosen affiliation that can not only fail to improve your business, but actively harm it.

Sunday, December 28, 2008

The Three keys to IT Outsourcing success

As business strategies go, information technology outsourcing (ITO) is certainly not the new kid on the block. But despite ITO’s maturity and widespread acceptance, success still eludes some companies.

Logic would say that any business process that has been around as long as ITO should achieve more consistent results. In reality, many clients tell us that they’re not getting the financial and operational efficiencies they expected. Some have actually reported that IT outsourcing has resulted in a greater spend than insourcing. We believe this experience may be more widespread than reported and may be a factor contributing to the relatively short average tenure of a chief information officer at a Fortune 1000 company—which is currently around three years.

When it comes to ITO, big bets are often being made by organizations and big risks taken by CIOs. And success or failure is quickly determined. So why do some organizations appear to struggle with such a mature business process as ITO? Two processes inherent to ITO are complex and difficult: ITO governance—defined as the oversight, control, and smooth operation of outsourcing among client, vendor, and line of business—and the people component of ITO implementation. Often, ITO transactions are heavily focused on procurement and technology, and not nearly as focused on the human capital side of the equation. So while HR participates in the outsourcing process, it usually does so primarily as it relates to people leaving the organization and not in terms of the retained IT organization, i.e., those roles and the people filling them in the client organization who will remain onboard to make sure the deal is successful.

Without HR’s steadfast focus on the retained organization, retained employees may experience a combination of confusion about their new goals and responsibilities, doubt surrounding their career and promotion prospects in the new organization, and uncertainty about management’s long-term business plans. Unless reversed, these feelings can leave employees disengaged.

And instead of lowering costs and increasing worker productivity, ITO can actually lead to higher operating costs and the delivery of less measurable business value.

People Are a Lot More Complicated Than Technology

Nowadays, there’s no shortage of technical prowess available in the marketplace. So it’s unlikely that some organizations’ lack of ITO success is a result of inadequate technology implementation. There are undoubtedly some technology issues that undermine fulfillment of the ITO business case. But our experience is that most of the problem revolves around people-related issues. Enterprises often aren’t sufficiently focused on the suite of fundamental human capital and change management issues involved in outsourcing, including ITO governance, skills development, communications, consequence management, and program results measurement.

As companies shift IT operational oversight to a vendor, the day-to-day job responsibilities of the retained IT organization change markedly. The hands-on project management style familiar to most employees must be replaced by supplier management skills. Instead of dealing directly with their line clients, employees must now act alongside their outsourcer as part of a two-party team to address clients’ needs. And as clients’ needs change, employees must repress the urge to intercede directly with clients and instead communicate those changes to the outsourcer for handling. Perhaps most important, employees need to be strong advocates for change within their organizations.

But IT people quite naturally tend to focus on what they know best and what they’re most comfortable with: technology enablement issues. And they often do so at the expense of people-related issues. Absent an equal (or greater) focus on the people issues, ITO success may be in jeopardy. In fact, by some industry estimates, if companies are not placing at least 15 to 20 percent of their “spend” on human capital issues relating to a major business transformation like ITO, the chances of success are greatly diminished. Those organizations that follow a disciplined process that includes a strong human capital orientation are much more likely to succeed. Such a process requires a clear outsourcing strategy, operational discipline, and close ongoing attention to the people-related issues of outsourcing. We believe there are five critical elements contributing to ITO success.


BEHAVIORAL CHANGES FOLLOWING OUTSOURCING

Current behavior → New behavior
■ Caring about the “how” → Focusing on the “what”
■ Providing IT services as “part of the family” → Overseeing and facilitating IT services that are being provided by a third party
■ Jumping in when problems arise → Monitoring problem escalation and resolution processes
■ Reacting to customer changes and new/additional requirements → Facilitating line-of-business planning and communication with the vendor involving new/additional requirements
■ Acting as an internal team to achieve clients’ objectives → Acting as a two-party team to achieve clients’ objectives
■ A potentially different execution model in each locale → Single, global execution model
■ High procurement skills → High supplier management skills
■ Two-party relationship management skills → Three-party relationship management skills
■ Technology orientation → Business and services orientation

Addressing them all may not ensure your enterprise’s success. But not addressing them almost surely guarantees failure.

These elements include:

ITO Governance—how to design and then manage the overall control structure required to integrate the various internal and external players in the retained IT organization

Performance Metrics—establishing agreed-upon operational and financial performance targets along with the ability to measure results versus goals

Skills Inventory & Career Mapping— identifying retained IT employees’ skills gaps vis-à-vis their new responsibilities— before entering an ITO deal—and backfilling shortfalls with customized training and development programs

Consequence Management—achieving desired ITO program results by consistently rewarding employees for doing the right things and disciplining them for doing the wrong things

Rational & Behavioral Communication—targeted and comprehensive communication programs that are an appropriate blend of rational (just the facts) and behavioral (how will I be impacted?) messages

ITO Governance Is Key

Once processes have been outsourced, the retained IT organization isn’t so much an organizational structure as it is a series of connected IT governance control structures that address the basic issues of performance management, change management, enterprise architecture, program management, business architecture, dispute resolution, and business organization representation. This structure includes not just client people but supplier people as well. These control structures don’t need a lot of people. In fact, except for maintaining adequate staff for the collection and analysis of performance metrics, they operate best with fewer rather than more people.

The overall goal of effective governance is to encourage an environment of collaboration and consensus, strategic and business alignment, audit and measurement, leveraging of supplier assets, and the ability to deal with risk and crisis more effectively. IT governance isn’t something you put in place just because you outsource; ideally, IT governance already exists within the client organization and just needs to be modified or adapted to reflect the reality of now having an outsourcing partner. Not only does process become more important following IT outsourcing, so do the skills of the retained organization. In this case, the retained organization refers to those roles and the people filling them in the client organization who are there to facilitate and make the deal successful. By its very nature, the success of the organization post-outsourcing is now far more dependent on many fewer employees, and every single individual must be handpicked at this point.

Finally, the specific roles and approach to managing the client-outsourcer relationship need to be properly established from the start. And client and outsourcer together need to ensure that those lines of business affected by the changes understand both the benefits and consequences of the new deal for them. This includes how IT services will now be delivered as well as any changes in service level agreements that they should expect. If the new client-vendor working arrangement isn’t clarified, there’s a good chance that line businesses may become dissatisfied with the changes and perceive that the overall transition is being poorly handled. The reputations of the senior business leaders and of the service provider will be damaged, and the outsourcer will be forced to quickly shift from service delivery to service recovery mode.

Performance Metrics
When it comes to outsourcing, you won’t get what you don’t measure. In too many cases, companies merely pay lip service to this crucial part of the outsourcing process. They neglect to develop a solid strategic outsourcing plan up front—with ITO plans and goals that support the broader company business plan—and then measure results against target metrics.
Organizations should never enter into an outsourcing arrangement without a good quantitative sense of where the starting point is and where the enterprise is headed. In most cases, companies already have a set of metrics that they’ve been using to measure performance prior to outsourcing. When this is the case, these same metrics can often be shifted from the client directly to the outsourcer. This is especially useful because, instead of starting from scratch, enterprises start with a solid historical baseline of performance against which to measure future results. At a minimum, metrics should cover operational performance (including service quality), financial performance (what’s being achieved for the dollars spent), and the overall health of the client-vendor relationship.
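
As a minimal illustrative sketch of measuring against such a baseline (the metric names and figures below are hypothetical), the comparison itself is simple once a pre-outsourcing baseline exists:

```python
# Hypothetical pre-outsourcing baseline vs. post-outsourcing results.
baseline = {"avg incident resolution (hrs)": 8.0, "cost per ticket (USD)": 42.0, "uptime (%)": 99.2}
current = {"avg incident resolution (hrs)": 6.5, "cost per ticket (USD)": 35.0, "uptime (%)": 99.5}

# Report each metric's movement relative to the historical baseline.
for metric, base in baseline.items():
    change_pct = (current[metric] - base) / base * 100
    print(f"{metric}: {base} -> {current[metric]} ({change_pct:+.1f}%)")
```

The hard part is rarely the arithmetic; it is capturing the baseline before the deal is signed so that later results have something to be measured against.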

But enterprises need to measure more than just the overall effectiveness of the retained IT organization against its new job responsibilities. Accountability for program results should include management performance as well. Organizations need to seek out ways to measure the work of members of the senior IT management team and the value they’re contributing to the success of the new operating model, then reward their performance accordingly.

Finally, offshoring of IT services is increasingly used for infrastructure and applications development outsourcing, adding another element of complexity to the benchmarking challenge. While IT outsourcing has a long history, offshoring of IT services for most companies began during or immediately after the Y2K push. As a result, historical metrics for offshoring are less available. Here especially, enterprises are being forced to abandon the old cost and service quality metrics and baselines for entirely new performance measures.

Skills Inventory and Career Mapping
Tying outsourcing to the overall business vision and getting the retained IT organization to become more strategic in focus—as opposed to the tactical organization it had been—is absolutely critical. For starters, it’s the only way that companies can hope to quickly and completely achieve the operating efficiencies and cost savings inherent to IT outsourcing. But in too many cases, this process is poorly managed. More often than not, instead of recruiting the best available talent, retained organizations are staffed by simply reassigning past high performers to fill new jobs that require entirely different skill sets. To achieve real transformational value, companies need to critically assess the makeup and skills of their retained IT generalists in relation to their new job descriptions, including the questions below (a minimal gap-check sketch follows the list):
■ How big are the skills gaps among retained employees?
■ Can gaps be closed within a reasonably short timeframe using training programs designed for retained employees’ unique needs?
■ To what extent will it be necessary to replace IT generalists with new professionals from outside the organization?
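
The first of those questions can be made concrete with a simple set comparison; the role and skill names below are hypothetical, purely to illustrate the shape of a skills-gap inventory:

```python
# Hypothetical skills required by a retained-organization role vs. one employee's current skills.
required = {"supplier management", "contract negotiation", "performance measurement", "business analysis"}
current = {"hands-on project management", "business analysis", "systems administration"}

gap = required - current       # skills to close through training or hiring
covered = required & current   # skills that already transfer to the new role

print("Skills gap:", sorted(gap))
print("Already covered:", sorted(covered))
```

Run across the whole retained organization, this kind of inventory shows whether gaps can realistically be trained away or whether outside hiring is unavoidable.
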
Retained IT organizations can most effectively plan for the post-outsourcing future by augmenting the skills inventory with job/career mapping.

Career mapping represents a global approach to measuring job worth, including:
■ External market pay and competitive market pay levels
■ Internal alignment within and across job families
■ Opportunities for career growth and a clear understanding of what differentiates available career paths. For each step on the career ladder, there should be a detailed description of the employee’s expected contribution at that particular level.

In too many deals, the retained IT organization gets lost in the shuffle of the deal negotiations and is more of an afterthought than a primary consideration. The career map framework not only contributes to accurate job evaluation and pay linkage, but can also help managers quickly gain an understanding of which jobs sit where in the retained IT organization. At its broadest level, it provides a valuable tool to support workforce planning. At an individual level, it can provide a framework for one-on-one discussions about employees’ career path opportunities and development needs.

Unfortunately, organizations frequently don’t even know what they have in relation to what they will need in terms of employees and skills in the new retained organization. And the result is as predictable as it is damaging:
■ Enterprises don’t distinguish between mission-critical and noncritical skills needs.
■ Retained IT workers end up with jobs that are only partly mission-critical and largely noncritical in nature.
■ People continue to do the nonstrategic/tactical work they always did, either because they’re unsure of what to do, lack the requisite skills to tackle their new responsibilities, or, worst of all, to keep busy and feel useful.
■ Enterprises end up with “shadow” organizations that actually compete with the outsourcer in an effort to demonstrate their value.
■ Instead of decreasing, IT costs after outsourcing often increase.
This becomes the low point—the “valley of death,” as some call it—that many retained IT organizations go through as they transition from the old to the new service model. It represents a costly and discouraging departure from plan. Worse yet, retained employees can become a source of dissonance; they become anti-ITO and exacerbate already difficult issues associated with outsourcing.
Finally, when it comes to skills inventory and career mapping, timing is everything. Assessing and addressing retained IT generalists’ capabilities and competency gaps works best when done during the early stages of the client-vendor negotiations.

The total elapsed time from when a company first embarks on its ITO initiative—including vendor identification, negotiations, and contract signing—to live outsourcing can take many months. It’s a time when organizations are most vulnerable to having high-performing workers exit the company because of uncertainty about their future prospects. Completing skills assessments during contract negotiations gives employers an advantage in terms of being able to engage high performers before they leave, design clear career paths within the new organization structure, and create job training programs unique to the organization’s and employees’ particular needs. Companies that choose to complete skills inventory and career mapping later—first executing the outsourcing contract and only then worrying about the retained staff’s skills gaps—make a costly mistake.

Consequence Management

A fourth critical component of cultural transformation is consequence management. Experience shows that it’s impossible to execute effective change management without consequences—sometimes positive and sometimes negative. We define consequence management as the method of ensuring that outsourcing processes are followed and strategic goals are met by systematically and consistently rewarding people for doing the right things and discouraging them from doing the wrong things.

There will always be some employees who are unable to accept the strategic decision to outsource IT services. These individuals (leaders, frequently) are incapable of making the transition and working in a committed way with new vendors. Expect it. It comes with the territory.

But deal with it quickly—either through education and encouragement, or, if that’s unsuccessful, through employee separation from the business. Unless you reward and censure behaviors appropriately, the ITO transition will never be entirely successful.

Ironically, many organizations tolerate or reward behaviors that are contrary to cultural transformation and consequence management. This includes employee actions such as 11th-hour heroics, where internal business clients reach out to their colleagues in the retained IT organization and ask that they temporarily shift their attention away from their new strategic focus and back to their former (now-outsourced) transactional focus in order to complete a particular client request. Instead of enforcing a policy of automatically redirecting these requests to the outsourcer, some organizations actually reward employees who undertake these in-the-nick-of-time efforts. When they do this, enterprises plant the seeds for phantom organizations to emerge and flourish, in the process damaging the viability of their own outsourcing programs.
Rational and Behavioral Communication

Finally, the early phase of any IT outsourcing agreement is particularly challenging for client companies as they walk a fine line between overcommunicating with their employees on the one hand and providing insufficient information about the deal and its implications on the other.

Clients need to develop targeted and comprehensive communication programs for their affected staff. They need to ensure that individuals have enough information about the deal and what will happen to them throughout the process that they can be supportive of it.

Successful retention of client personnel—with their specialized technical, applications, and business process skills—is essential for ongoing service quality. Failure to address the possibility of unexpected or unwanted employee attrition may have severe business consequences for both the client and the outsourcer. This is the point at which clear, targeted, and timely communication is critical. When it comes to communicating with employees about the impact of IT outsourcing on them and on the organization, a one-size-fits-all approach is clearly inadequate. Communication programs need to address a combination of rational as well as behavioral messages. “Rational communication” focuses on the what, why, when, and how of ITO. “Behavioral communication,” on the other hand, focuses on “what’s in it for me if I do/don’t make the change?” Organizations tend to focus more on rational than on behavioral communications. Both are critical to successful ITO implementation.
As an example, a major financial services company recently found itself struggling under the burden of a high fixed-cost operating model with limited flexibility. To survive in its highly competitive market space, the company needed to replace its existing processes with a system where operating costs moved up or down in tandem with changes in product volumes. An outsourcing partner arrangement was deemed the most expeditious solution, and the company embarked on a major infrastructure deal to help it operate under a more flexible business model. The change involved transferring several thousand employees to the supplier organization. The company realized that all impacted employees—those transitioning to the new organization and those moving into the retained IT organization—would be understandably anxious about their new deal. Using a series of rational communication messages, followed quickly by “softer” behavioral communication programs, senior management embarked on an 18-month, centrally managed global communication effort detailing the economics of the proposed change, how and when employees would be affected, and the net long-term benefit that would accrue to them and to the organization as a result of outsourcing.

To demonstrate the importance it placed on the changes, senior management delivered most of the initial communication at in-person, on-site town hall meetings where employees had an opportunity to hear and question management’s reasoning and vision for the future. Once employees were able to literally look their senior leaders in the eye and understand the rationale for the new business model and how they would be positively affected, most embraced and supported the change. In the end, almost 100 percent of employees accepted a transfer to the vendor organization.


Getting Started

There’s an inherent risk profile that comes with IT outsourcing. Here’s a short list of items, all of which you need to be able to answer with an emphatic yes before you seriously undertake an IT outsourcing program.

  • Do you have a well-defined internal governance structure in place today that includes focus and expertise around the development of metrics, the creation of historical baselines, and good external reference points against which to measure your own internal performance?
  • Do you have a well-defined, highly skilled, and mature program management office (PMO) structure in place that can successfully manage a portfolio of work for the businesses? By introducing an outsourcer into the IT mix, organizations increase the complexity of their IT processes. If you decide to outsource processes—whether infrastructure or application development—and don’t have sound PMO practices and an effective organization in place, then no matter how well the outsourcer is performing, the addition of a third party into the equation puts program performance at risk.
  • Does your organization possess a deep-seated human capital/change management orientation as part of its culture? Are behavioral communications and consequence management already core elements in how your organization operates? Moving to an IT outsourced environment is challenge enough without having to simultaneously change the organization’s fundamental customs.
Organizations need to look seriously at these three areas. And unless they feel positive about the answers to these questions, no matter how good their plans for outsourcing, the risk profile may be dangerously high.

Positive News

Each ITO undertaking and structure will be unique, determined largely by the individual organization, its background, personnel, and particular needs. Clearly, outsourcing involves a major culture shift in the way employees perform their work in the retained organization. As such, there are no shortcuts. Experience shows a direct correlation between upfront preparation and ultimate success. The commercial payback that companies will enjoy tomorrow is directly related to the amount of time, attention, and effort invested in managing the change process today. For IT organizations, the news is positive: studies show that, when all is said and done, success is within the reach of every company. It may be difficult, but there is a discipline for managing a successful transition. Companies that focus on the discipline tend to have the greatest success with ITO. It’s not out of reach, but it’s frequently outside of IT organizations’ comfort zones. And that is the real challenge.

Companies need to be convinced of the link between people and outsourcing results. How well companies—clients and service providers alike—manage the people aspects of ITO is the single biggest factor contributing to the success or failure of outsourcing deals. Companies that incorporate people-focused programs into their pre-deal, transition, and transformation processes will enjoy greater success, at a faster rate, than companies that fail to recognize the full impact of people issues on outsourcing.

Monday, December 22, 2008

ERP at the Speed of Light

ERP Solutions at the Speed of Light
Making Rapid Implementation Work for You
Clients implementing enterprise resource planning (ERP) software for the first time tend to be intimidated by the time and cost of an implementation and seek to accelerate the go-live date. Such acceleration, when accomplished with the right strategy and tools, can be of tremendous benefit, including a reduction of costs and reduced time-to-benefit (TTB). However, without the right strategy and tools, implementation acceleration carries the risk of abbreviated end-user training and change management, a lack of post-implementation planning, over-engineering of business processes, and other problems that in fact lead to a higher overall cost of ownership and the erosion of business benefit.
For some, the question is: to accelerate or not to accelerate? Without acceleration, the implementation will be more costly but other risks will be mitigated. Cost versus risk appears to be the choice.
In point of fact, as we will illuminate herein, clients can have it both ways.
A Brief History of ERP Implementation Methods
Prior to 1997, methodologies deployed by the various systems integrators were not adequately tailored to the unique requirements of ERP. Most relied heavily on the As-Is and To-Be phases as per pre-ERP enterprise applications projects. In the As-Is phase, a firm’s current business processes were inventoried, charted, and scripted. In the To-Be phase, a firm’s future business processes were designed, charted, and scripted. Ideally, these steps went as follows:
As-Is described the status quo of business processes
To-Be described the direct transfer of the as-is process into a to-be process that eliminates the weak points and achieves the intended benefit.
The key weakness of these methodologies lay in the slavish attention to the As-Is phase, in which lower-level business processes were pointlessly charted and scripted at an exorbitant cost to clients and with little or no benefit for the To-Be phase. Many thought of this as the “consulting partner’s retirement fund phase,” and this aspect was one of the key drivers of the highly publicized cost overruns of the mid-1990s.
Beginning in 1997, new methodologies emerged that more directly addressed enterprise software implementations and all stressed speed through a more direct approach, the use of conference room pilots, the deployment of templates, and greater leverage of best practices (i.e. the re-use of business processes that had demonstrably done the job).
In order to address client concerns about the high cost of implementations, many of these methodologies were branded as rapid: Deloitte’s “Fast Track”, Oracle Consulting’s “Fast Forward”, and KPMG’s (now BearingPoint) “Rapid Return on Investment” (also labeled R2i) are a few examples.
In short order, the market was saturated with “success” stories of six-month implementations, four month implementations, and even two-week implementations. Many were expecting an ultimate claim of an implementation being completed during a long lunch hour.
While it is true that these new methodologies reduced the time needed to implement, the sheer acceleration created new problems, such as inadequate knowledge transfer, abbreviated change management, and deficient post-implementation planning. In the ensuing ten years, most of these methodologies have been refined to address such problems. All the same, as will be detailed further in this document, accelerated implementations bear such risks.
Elements of Acceleration
The most crucial element of acceleration is the re-use of existing and proven assets. As the business flows, or processes, of firms within an industry are nearly identical, pre-configured processes can be easily implemented. For example, an order to cash business process that has already proven viable for hundreds of consumer packaged goods firms will probably be a good fit for another consumer packaged goods firm. In similar fashion, how much will sales order entry differ for a firm that sells automotive parts from a firm that sells aircraft parts?
Re-usability depends upon a client’s willingness to adapt itself to new business processes rather than bending the software to fit custom processes. The closer a client adheres to this principle, the faster the implementation, due to:
* A major reduction in the business process design and software configuration phases, which normally comprise more than half of the consulting effort expended
* Higher level of re-usability of scripts, templates, set-up tools, reports, and user documentation
* A reduction in scope management.
The rise of industry-focused solutions has resulted from the thousands of ERP implementations that have occurred over the past fifteen years and is a major step in the evolution of enterprise applications.

The Benefits of an Accelerated Implementation
Several key benefits can be derived from an accelerated implementation:
* Reduced time and cost
* Less disruption to the client’s existing operations
* Reduced probability of over-engineering
* Accelerated time-to-benefit
We begin with time and cost, the traditional measures of engagement success. An accelerated implementation is first and foremost intended to reduce time to implementation and, by consequence, time-to-benefit. In both instances, an accelerated implementation should result in reduced cost.
The level of cost reduction is not simply a matter of total hours spent but also a matter of the client-systems integrator relationship. There are two poles of this relationship. At one extreme is client ownership, in which the client actively partners with the systems integrator in order to hasten the go-live and knowledge transfer. At the other extreme is client acquisition in which the systems integrator completes the implementation with a minimum of client input or collaboration.



Another advantage of accelerated implementation is the reduced probability of over-engineering. Following more standard implementation methods, the business process design and software configuration activities tend to be iterative in a trial-and-error fashion as clients and systems integrators seek an “ideal” process. In doing so, the team will continually re-configure the software until they “get it right” and often the result is unwieldy for users and difficult to maintain.


One feature of accelerated implementation is a reduction of the business process design (or blueprint) phase as clients accept “out-of-the-box”, proven business processes. Such processes are not over-engineered and are often pre-configured, which also contributes to a reduction of the configuration process.
In all implementations, as clients climb the ERP learning curve, they discover that there is more they can do than was included in the original project scope. The temptation is to expand the scope to include new benefits, thus lengthening the time to go-live. The age-old term “scope creep” does not quite apply to ERP: while scope creep can occur for individual applications, ERP is enterprise-wide and embraces a suite of applications, so “runaway scope” is always a risk.
In an accelerated implementation, project scope is usually frozen prior to business process design. This means that potential benefits newly identified in the course of the project will not be addressed. Obviously, if such benefits are truly desirable, they can be pursued after go-live. In any case, clients are urged to adopt a strategy of continuous business improvement after go-live, in which business processes (and, by consequence, configuration) will continue to be refined.

Beyond cost reduction, the greatest advantage of an accelerated implementation is the reduction in time to benefit. Depending upon the business goals, this reduction can be marginal or dramatic. For example, if a client is targeting a new market that requires the software, the difference between a six-month implementation and a ten-month implementation will be dramatic.
Prior to the advent of accelerated implementation methodologies, one of the most successful implementation projects I observed was for a firm that was going out of business. For such a business, acceleration was an obvious primary requirement. With potential bankruptcy looming, the client froze scope to address the most critical areas of its operations, accepted out-of-the-box business processes with little debate, and suffered only minor disruption of business operations as they were rapidly shifted to ERP support that, in the end, saved the company from bankruptcy. In essence, perhaps the greatest advantage of an accelerated implementation is the sense of urgency and purpose it engenders.
The Risks of an Accelerated Implementation
Establishing a sense of urgency is essential to the success of an accelerated implementation. However, if the sense of urgency turns to alarm because deadlines are slipping or budgets are stretched, project speed can become a liability.
Key risks to a client opting for accelerated implementation are:
* Abbreviated end-user training
* Abbreviated or inadequate change management
* Deficient knowledge transfer
* Lack of post-implementation planning
End users fulfill the business processes that are supported by ERP software, and their competency, or lack thereof, has a direct effect on the efficacy of those processes. Unfortunately, end-user training is one of the more neglected aspects of ERP and can be even more neglected in an accelerated setting. This training is nearly always the penultimate step before go-live, and if the project is running late and/or over budget, the tendency has long been to shorten it in order to save time. Any such time savings will later be overwhelmed by end-user incompetence and an inability to effectively fulfill the intended business processes.
Further, organizational change management often goes by the wayside in an accelerated implementation as there may be insufficient time to orient business staff to new business processes. This can be further exacerbated by the fact that “out-of-the-box” business processes may well be vastly different from those being replaced. The result of inadequate organizational change management is business disruption after go-live that can erode benefits as well as nerves.
Speeding toward go-live without taking a long-term view is also a risk of an accelerated implementation. In essence, clients should view the implementation phase as the “wedding” and the deployment of their software as the “marriage”. While the wedding may last six months or longer, the marriage may well last twenty years. Failing to plan for the post-implementation phase, in which the client must be properly positioned to operate and enhance its ERP environment, will lead to a longer and costlier shake-out and erode the intended benefits.
Acceleration is not about deadlines, and it is certainly not about cutting corners. To avoid an outcome in which the go-live deadline is met but end users are incompetent, business leaders are enraged, and senior management is asking what benefit it is getting from the investment, the following elements should be closely adhered to:
Why: Visible, measurable criteria for success
What: Mastery of scope
How: Effective transfer of knowledge from consultant to client
When: Acceleration methodology and associated tools & templates
A successful accelerated implementation will combine timely completion in accordance with established budgets and a client’s ability to “thrive after go-live”.
Best Practices for a Successful Accelerated Implementation
During an accelerated ERP implementation, there are a number of best practices and all should be given careful consideration prior to launching a project.
Cost of Implementation
Cost is the overriding concern in ERP implementations. As a result, many projects are under-funded from the beginning and doomed to finish “over budget”.
A best practice in regard to controlling cost is to plan with a realistic approach rather than an optimistic view. Has your organization successfully completed large-scale engagements in the past? How well has your organization worked with outside consultants? What levels of in-house expertise are available? Positive answers to such questions bode well for an accelerated approach. Clients answering negatively to such questions may need to consider less acceleration than is desired.
In this regard, clients are also advised to assure their organizational readiness for an implementation and to set deadlines according to business requirements rather than as artificial milestones.
The cost of the implementation will necessarily be a multiple of the combined cost of hardware and software; the historical benchmark is a ratio of 2 to 1. For example, $500,000 in combined hardware and software would imply roughly $1 million in implementation costs.

Earlier in this document, acquisition and ownership implementation scenarios were presented. Clients opting for an ownership scenario are advised to assign their best staff to the implementation project, despite the fact that in doing so they will almost certainly suffer a greater disruption of current business operations. Again, it is wise to take a long-term view. This is more difficult in small and medium businesses (SMBs), where the critical mass of top talent is smaller than in large businesses. Thus, SMBs often assign staff to implementation on a part-time basis. Such assignments will only succeed if senior management remains committed to them and resists the day-to-day temptation to satisfy short-term business needs and let the project slip. To assure that this does not happen, many firms have used an off-site location for much of the project work, thus rendering key staff “unavailable”.
The staff required for the project are at the management or director level: individuals capable of seeing across departments and understanding horizontal business flow. For example, an individual currently assigned to sales order processing may well not be in a position to grasp the full Order to Cash (OTC) business process.


Since the advent of enterprise-wide software in the early 1990s, the old paradigm of making the software enable the processes that firms choose for themselves has been turned on its head. Today, clients are urged to adopt the business processes inherent to the software on the premise that these processes are “best practices”. This paradigm shift leads to a natural tug of war between systems integrators, who tout best practices, and clients, who insist they know their business better than outsiders.
Recent Performance Monitor field research confirms this: 697 survey respondents were clearly split on the issue.



For at least the first three months after go-live of an accelerated implementation, most clients will still be somewhat dependent upon their systems integrator and a necessary level of coverage should be contracted. In addition to systems integration support, clients can rely upon the software support and help desk services included in a maintenance contract. Depending upon a client’s level of self-reliance, higher levels of such support can be contracted.
Many clients, before or after go-live, opt to outsource their ERP operations. Such outsourcing is not an all-or-nothing proposition as clients can tailor the support to their needs:
Application Maintenance: basic applications hosting/operations, break/fix, debug, backup, etc. In short, keeping the ERP lights on.
Application Management: maintenance functions (above) plus a level of application improvement, upgrade, and/or business process transformation.
For the latter, there are various levels of management:
• Functional application enhancement as needed to assure basic continuity
• Frequent application enhancements to provide some optimization
• Defined levels/stages of business process transformation
Conclusion: the End of the Beginning
Think of an accelerated implementation as “entry level” ERP. Go-live is only the end of the beginning.
Post go-live, client self-reliance will depend upon the level of acceleration. Highly-accelerated implementations will leave the client in a vulnerable position if continued knowledge transfer and change management are not emphasized after go-live.
A successful accelerated implementation will provide a client the power of ERP in the fastest way possible while causing the least disruption to existing business operations. With a working core of ERP software in place, time pressures are removed and clients are positioned to go back to the gaps that could not be addressed in the first pass. Over time, a client will reach a higher level of ERP maturity and, hopefully, will thrive after go-live.

Tuesday, December 9, 2008

How IT companies must act within the world’s financial crisis

In today’s business world, specialists use words such as “tsunami” and “collapse” to describe the current financial crisis. Taxpayers and governments around the world have agreed to help the hardest-hit firms offload their loans and resolve their liquidity concerns. The multi-billion-dollar offshoring industry is keeping its fingers crossed, hoping to withstand the crisis by attracting more companies that recognize offshore outsourcing as one of the major ways to cut costs.

Some outsourcing gurus claim that the financial crisis will ultimately revive their business, even though it will have some very negative effects in the short term. As noted in a recent publication by The Economist, “banking survivors that already use outside contractors will give them more to do as they cut costs” and will have to embrace outsourcing to protect their margins.

According to Gartner, the IT research and advisory firm, today’s “IT leaders must find the courage to look beyond the immediate threat towards the future.” Whit Andrews, a distinguished analyst at Gartner, believes that what global economies currently face is, in fact, the end of a financial era: conspicuous consumption is over, and conspicuous thrift is beginning. As the world’s economy changes, the role of each IT leader must change accordingly to shape the wants and needs of the coming age.

Gartner makes four key recommendations for IT offshore outsourcing services providers to survive the crisis and enter the new era well prepared:
1. Approach each existing project as if it had never existed. Determine the role each project plays in your company’s growth and efficiency, how many people it involves, and whether you could deliver it with fewer people.
2. Based on that inventory, come up with a list of the things that really matter.
3. Conserve as much of the budget as possible, recognizing and respecting the power of cash on hand while trying to preserve the business for the future.
4. Make it a high priority to protect the IT organization’s personnel, while being clear that in the event of a collapse the company will be able to keep only those who make the greatest contribution.

Expectations of IT and the reality of IT differ by nearly every measure. To close the expectations gap, IT leaders must change how they innovate. Gartner analysts say that leaders who focus solely on technology are likely to lose in the near future; Gartner sees the coming innovation in human activity, which means delivering faster to bring IT leaders into the business and radically restructuring how IT money is spent. Right before the financial crisis, most IT leaders were focused on speeding up transactions. Gartner believes that after the crisis, IT leaders will start investing in connecting people to speed up decisions.

Under the new business conditions it is important for IT outsourcing services providers to deliver agile business processes in order to support rapid change and business outcomes.
Software outsourcing companies will have to change some of their infrastructure components: modernizing those of high priority and shedding those that are less important. As for companies looking to outsource their processes, their IT managers will have to learn not only how to save, but also how to reprioritize the IT resources that really matter.
Historically, IT companies have delivered technology, not business performance. Gartner believes this is no longer enough: all outsourced projects must deliver the promised improvements in business performance. From now on, businesses will only need IT that can deliver true value in terms of business performance.

Saturday, December 6, 2008

Composite Web Client Applications

A composite Web client application is an application that is composed of discrete, functionally complete pieces. These modules are integrated within a Web server environment to create a custom software application. Users access the application with a Web browser. To users, the application appears as a single Web client solution with many capabilities.

Composite Web applications are based on the Composite pattern. This pattern is popular because it provides a flexible and scalable architecture that has several benefits, including the following:

  • It allows a higher degree of separation between the application infrastructure and the business logic.
  • It allows independent development of the individual business logic components.
  • It provides flexibility because business logic components can be quickly combined to yield a specific solution.
  • It promotes code re-use because it allows business logic components and the application infrastructure to be re-used across multiple solutions.
  • It provides an excellent architecture for the front-end integration of line-of-business systems or service-oriented systems into a task-oriented user experience.

The Composite pattern also allows you to separate the roles of the developer and the architect. Developers typically focus on implementing modules that provide the business logic for a specific piece of functionality, such as access to the inventory system, the customer relationship management (CRM) system, the enterprise resource planning (ERP) system, or the human resources (HR) system. The architect provides the approach to a business problem, such as the overall design for a call center or an on-line banking program.

Figure 1 illustrates a composite Web client application that presents an integrated view of multiple modules to the user. These modules can include Web services and functionality from other applications and other systems (Web client applications often interact with multiple back-end systems).

Modules

A Web client application that is based on the Composite pattern generally uses a shell module to provide the overall user interface structure. A shell module typically registers user interface components shared by other modules and contains global pages and one or more ASP.NET master pages that module developers can use to create a consistent layout for pages.

Modules contain functionally discrete pieces, but they integrate with the user interface and communicate with each other. The shell module provides access to common services that all the modules can use. Using the services that the shell provides instead of implementing them for each module allows you to develop Web client solutions quickly because the infrastructure is already in place. Each module should only implement the business logic that applies to a particular piece of the overall solution.
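
To make the shell-and-module relationship concrete, here is a minimal TypeScript sketch. All names (Shell, Module, ShellServices) are invented for illustration; this shows only the registration idea and is not taken from any particular framework:

```typescript
// Minimal sketch of a shell that supplies shared services to modules.
// All names (Shell, Module, ShellServices) are invented for illustration.

interface Logger {
  log(message: string): void;
}

// Infrastructure services the shell provides so that no module has to
// implement them itself.
interface ShellServices {
  logger: Logger;
  navigate(route: string): void;
}

// Each functionally discrete module implements only its own business logic.
interface Module {
  name: string;
  initialize(services: ShellServices): void;
}

class Shell {
  private modules: Module[] = [];
  private services: ShellServices = {
    logger: { log: (m) => console.log(`[shell] ${m}`) },
    navigate: (route) => console.log(`navigating to ${route}`),
  };

  // Registration hands each module the shared infrastructure.
  registerModule(module: Module): void {
    this.modules.push(module);
    module.initialize(this.services);
  }
}

// A CRM module contains only CRM business logic; navigation and logging
// come from the shell.
const crmModule: Module = {
  name: "crm",
  initialize(services) {
    services.logger.log("CRM module ready");
  },
};

new Shell().registerModule(crmModule);
```

Because every module receives the shared services at registration time, each one stays focused on its own slice of the business logic, which is exactly the division of labor described above.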

Service-Oriented Architecture

This kind of architecture fits extremely well into a service-oriented architecture. Frequently, an organization defines its Web service granularity based on business functions (which, in turn, is typically how the IT infrastructure itself is structured). This means that there will be a family of Web services for the ERP system, the CRM system, the inventory systems, and so on. This is a natural way for a service-oriented architecture to be developed and to evolve. Solutions are then built on top of these services, or on composites of these services; this forms "composite solutions."

Typically, in a service-oriented architecture, each service needs a certain amount of knowledge about the consuming client (for Web applications, the client of the service is the code that runs on the Web server) so the service can be properly consumed. For example, a client application may need to gather the appropriate security credentials, perform appropriate data caching, handle the semantics of dealing with the service in terms of tentative and cancelable operations, and so on. Typically, the client-side piece of logic that handles these issues for a service is known as a service agent.
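
As a rough illustration of a service agent, the following TypeScript sketch wraps a hypothetical inventory Web service and handles credentials and caching on behalf of its consumers. The URL, types, and method names are all invented for the example:

```typescript
// Illustrative service agent for a hypothetical inventory Web service.
// It gathers credentials and caches results on behalf of consuming modules.

interface InventoryItem {
  sku: string;
  quantity: number;
}

class InventoryServiceAgent {
  private cache = new Map<string, InventoryItem>();

  constructor(
    private baseUrl: string,
    private getToken: () => string, // supplies the security credentials
  ) {}

  async getItem(sku: string): Promise<InventoryItem> {
    // Serve repeated lookups from the local cache.
    const cached = this.cache.get(sku);
    if (cached) return cached;

    // Attach credentials so consuming modules never handle them directly.
    const response = await fetch(`${this.baseUrl}/items/${sku}`, {
      headers: { Authorization: `Bearer ${this.getToken()}` },
    });
    if (!response.ok) {
      throw new Error(`Inventory lookup failed: ${response.status}`);
    }

    const item = (await response.json()) as InventoryItem;
    this.cache.set(sku, item);
    return item;
  }
}

// A module would construct the agent once and reuse it.
const agent = new InventoryServiceAgent("https://example.com/inventory", () => "token");
```

The consuming module simply calls the agent; credentials, caching, and error handling stay in one place per service.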

There is a natural correspondence between the service agents and the modules that comprise a composite Web client application. By using modularity, developers who implement the business capabilities and the Web services that expose them can also develop the user interface and the client-side logic to take maximum advantage of those services. This means that in addition to a number of business capabilities and Web services, you can also have a number of service agents that allow you to construct a composite Web client solution.

Updates and Deployment

Because the modules that comprise the server-side solution are loosely coupled (that is, there is no hard dependency between them because the shell provides a standard but simple mechanism for them to interact with each other), these modules can be independently updated, developed, and deployed.

Symbyo Technologies is a global offshore software development outsourcing company offering ASP.NET outsourcing, Oracle outsourcing, WebSphere consulting, and Java outsourcing services through our Global Delivery Model and Centers of Excellence.

Tuesday, November 18, 2008

Global Product Development Seen as a Boon for Product Lifecycle Management Vendors

Nearly all leading product lifecycle management (PLM) software vendors argue that companies can now leverage globalization, outsourcing, and Web-based collaboration technologies to gain tremendous growth potential. Nonetheless, this relatively new phenomenon has been a long time in the making, as the following history of offshoring reveals.

In the 1980s, offshore manufacturing became commonplace, as manufacturing companies looked to reduce their labor costs and maximize their profits by moving some, or most, of their manufacturing capacity to low-cost labor markets like Mexico, South Korea, and Taiwan. This was purely a cost-saving initiative, exploiting low-wage regions and tax incentives around the globe. It was based on the assumption that straightforward manufacturing "build" instructions with discrete inputs and outputs and strong management oversight would minimize risk and preserve intellectual capital.

In the mid-1990s, as a result of the Internet and Web-based software technology revolution, the concept of using low-cost resources to develop software and maintain existing systems was born, and offshore development facilities in countries like India, China, Ireland, and the Czech Republic flourished. Software companies began to offshore low-intensity functions like documentation, quality assurance, and product maintenance for maturing products. Over time, whole business processes, like help desk support, claims processing, and other traditional call center functions, were moved offshore.

Today, we are witnessing the advent of modern three-dimensional computer-aided design (CAD), computer-aided manufacturing (CAM), product visualization, and PLM technologies with sophisticated data synchronization, product data management capabilities, workforce collaboration, digitized document management, and IT-enabled product development workflow. Major companies like GE, United Technologies, and Toyota have leveraged this technology, in conjunction with low-cost offshore engineering and manufacturing services, to exploit the benefits of global product development. By rearranging product development activities, personnel, and processes around the globe to take advantage of favorable cost structures, these companies have experienced increased product development productivity with real and quantifiable financial rewards.

Global Product Development Requires Significant Process Change

A lot of hard work and significant investment is required early on in order to take full advantage of global product development. Implementing global product development involves a lengthy transition process, and requires reconfiguring product development functions across multiple regions in order to maximize productivity and minimize long-term cost while balancing and mitigating risk.

Each step in the product development process must be broken down into clear and concise modular processes, so that each can be individually assessed as a candidate for potential offshore outsourcing. This is a major challenge for many manufacturing companies, as they do not have a good handle on their current design and development processes, and therefore have trouble breaking each discrete process down into manageable pieces that can be examined for cost efficiencies. It is imperative that any manufacturing enterprise have a formal cross-functional product development process road map and an instinctive process discipline before consideration is given to onshore or offshore outsourcing. The bottom line is that a company's internal product development house must be in order before it starts moving pieces of the equation offshore.

Concerns Abound

There typically is no shortage of concerns over a decision to offshore part or all of a product development process. Even setting political concerns aside, there are considerable business, technical, and organizational concerns. Merely coming to "what" and "where" decisions is a stressful and lengthy experience that could give any product development or corporate executive grey hair. This is because in most cases there is no going back without incurring significant costs and triggering second guesses about the merits of the initial decision to outsource.

Organizational concerns most often fall into the realm of control and communication. Dispersed product development processes require the same level of management scrutiny as non-dispersed ones, and are potentially subject to more unexpected change. Thus, formal and effective business processes that address change control are paramount. A well-defined and well-understood hierarchy of management control is also needed, as are 24-hour-a-day, 7-day-a-week communication channels that have redundancy in order to avert risk. Moreover, the decision to offshore product development has an impact on departments besides engineering and manufacturing. Its effects on logistics and supply chain, human resources, marketing, and customer support have to be taken into account when assessing total internal costs and impact.

Technology concerns still exist, even though technology advances in PLM, collaboration, and business networking are the key reasons that global product development is feasible today. Most of the target countries for global product development via offshore outsourcing, like Ireland, Israel, and especially India, have invested heavily in technology infrastructure, education, and technology awareness with strong government support, especially in regions such as Bangalore and Pune in India. Technology investments have been supplemented by national governments through tax incentives and educational subsidies since the mid-90s, and have had time to take root and grow. These investments have been especially beneficial to contract manufacturing, engineering services, software development, and business process outsourcing firms in these countries. Despite these investments, however, the technology concerns most often cited are localized network reliability, local and global Web disruptions, IT governance, technology distribution for things like software upgrades, and common business practices for digitized intellectual property.

Pressures on Software Product Development

For product development in most manufacturing sectors, increasing global competition consistently translates into increasing pressure to do the following:

* improve product quality
* reduce product cost
* respond to changing customer needs
* react to shortened product life cycles
* improve product innovation
* improve product retirement processes
* create byproducts and up-sell products
* reduce time-to-market

With these competitive pressures driving product development at an ever-increasing rate, analyst firms have found, based on recent studies, that over 90 percent of manufacturing firms across a diverse array of industries are formally examining global product development opportunities. How quickly manufacturing firms actually outsource to offshore entities will be a calculated business balance between product development risk and the need to remain competitive.

PLM vendors are on the cutting edge of the global product development opportunity curve, in that their technology is what makes global product development feasible. Major software providers are honing their marketing messages with a view to global product development. Several PLM vendors have even adopted their own PLM products as control tools for internal product development.

Summary

Growth in the PLM software solutions market will be a good barometer of the transition to global product development. Enterprises that shift over time to a global product development strategy must invest in modern PLM technology in order to minimize risk and ensure that the perceived cost savings and productivity gains are achieved. The PLM software market could be on the edge of a significant growth cycle, thanks to the coincidence of new PLM software technology capabilities and innovation with the increasing global competitive forces driving manufacturing enterprises toward global product development strategies.

Sunday, November 16, 2008

HOW TO DEVELOP, MAINTAIN, AND SUPPORT A QUALITY MANAGEMENT AND DEVELOPMENT PROCESS

By James Downs

The task of defining test plans, acceptance criteria, and testing deliverables and processes for any software development effort can face many different and evolving challenges, from identifying applicable processes to maintaining those decisions over time.
Choosing a tool to help support these practices and strategies can not only alleviate the burden placed on those involved, but also add efficiency, organization, and a backbone for success.

Before explaining how we started and implemented the quality initiative at my company, let me provide a little background information. Meridian Knowledge Solutions is a leading provider of learning management system (LMS) and learning content management system software. We also provide professional services, courseware, development, and hosting services. We serve 4.5 million users at more than 200 public- and private-sector employers.

Our flagship product, Meridian Global LMS, integrates learning content management, workforce analytics, knowledge management, and competency modeling in one LMS. Meridian Global LMS provides users with access to courseware, documents, data, instructors, and other learners on demand. Any material designed to aid job performance is easily and readily available and completely integrated into a single Web site.

In this article, you will get an overview of the testing methodologies and processes used at Meridian, why we chose Oracle Test Manager for Web Applications to help us manage and support our processes and quality initiatives, and how we use Oracle Test Manager for Web Applications on a daily basis.

MERIDIAN QA: IMPLEMENTING A QUALITY INITIATIVE
As many people are aware, defining a single process from the ground up to support any software lifecycle is an arduous task. Trying to define all processes to join every portion of the lifecycle can feel practically impossible. Fortunately it can be done, and it really is not as tough as you think.
In 2004, we decided to go to the drawing board and redefine the responsibilities and accountability of quality assurance (QA) as it related to our new product (Meridian Global LMS), and the companywide image in general.

Accomplishing this goal meant not only defining new methodologies for the product QA team, but figuring out how to tie these processes together with existing processes from other teams in the lifecycle. Our initial priority was to keep things as simple and streamlined as possible. “Working smarter, not harder” appeared to be the perfect motto for our agenda.

We started with centralized and individual processes based on normal industry standards, as well as our own knowledge of what has worked in real-world examples throughout our many years of collective experience. Portions of our core processes are derived from very basic and normal industry best practices from leading entities, such as the Capability Maturity Model Integration approach by the Software Engineering Institute. Lastly, we wanted to make sure any shortcomings we experienced in the past would not be repeated in the new processes.

We developed simple and rational best practices for a change control process, a test strategy, documentation practices, and readiness review procedures, as well as supporting templates and guidelines for QA deliverables. None of these deviates far from what other companies and organizations implement when they set up a QA and lifecycle program. We believe our advantage is our commitment to these QA initiatives and how well they work with our development, requirements, and management processes.

So what are all these fancy processes, and how can you determine what to create? Start by defining basic required processes, and supplement them with optional processes that add more value when needed. Core deliverables include formal detailed test plans, final test reports, and cumulative testing metrics, among others. Processes that can be very advantageous include test readiness reviews, release readiness reviews, and a formalized change control board (CCB). All of these, among others, make up the Meridian ideology and methodology.

So we are all set, right? Now what? The big question and concern then became, How can we maintain and support all of this while trying to keep up with constant code changes, requirement updates, and product scope tangents?

The answer was Oracle Test Manager for Web Applications: a tool that has helped us seamlessly bring all these pieces together, while collaboratively enabling communication through a common portal.

ORACLE TEST MANAGER FOR WEB APPLICATIONS ARCHITECTURE: HOW IT SUPPORTS OUR INITIATIVES

The biggest advantage of Oracle Test Manager for Web Applications is its simplicity: all fundamental areas of the software lifecycle are available in a simple and intuitive interface. I personally have used numerous other applications for requirements, testing, and defect tracking that required too much time cross-pollinating information and trying to sync the independent applications. Those applications were often much more expensive to implement and maintain than Oracle Test Manager for Web Applications, and they still are.

One of the most time-consuming, yet most important, processes in QA is establishing traceability between requirements, tests, and defects. This is probably the most valuable capability Oracle Test Manager for Web Applications provides to us at Meridian. By allowing users to associate requirements to tests (see figure 1) and tests to issues (see figure 2), and thus automatically associating issues to requirements (see figure 3), Oracle Test Manager for Web Applications provides traceability and mapping that aids our CCB process, issue resolution, and testing preparation and execution.

Figure 1: Associating requirements to tests

Figure 2: Associating tests to issues

Figure 3: Issues automatically associated to requirements
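
The benefit of that automatic association is easy to model. This TypeScript sketch illustrates the traceability idea in plain data structures; it says nothing about Oracle Test Manager for Web Applications internals, and all identifiers are invented:

```typescript
// Minimal model of requirement/test/issue traceability.
// Linking requirements to tests and tests to issues means the
// requirement-to-issue mapping can be derived instead of maintained by hand.

const requirementToTests = new Map<string, string[]>([
  ["REQ-1", ["TEST-10", "TEST-11"]],
]);

const testToIssues = new Map<string, string[]>([
  ["TEST-10", ["BUG-7"]],
  ["TEST-11", []],
]);

// Derive which issues trace back to a given requirement.
function issuesForRequirement(reqId: string): string[] {
  const tests = requirementToTests.get(reqId) ?? [];
  return tests.flatMap((testId) => testToIssues.get(testId) ?? []);
}

console.log(issuesForRequirement("REQ-1")); // ["BUG-7"]
```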
Let us examine the Oracle Test Manager for Web Applications architecture and its independent Requirements, Tests, and Issues modules. The basic Oracle Test Manager for Web Applications setup using the dedicated license server and Microsoft SQL Server back end was an easy choice and convenient setup for us. Using Microsoft SQL Server provides a powerful and simple solution for database maintenance and backup support. Additionally, the dedicated Oracle Test Manager for Web Applications server does not have to be overly robust for basic operation, unlike other lifecycle solutions.

The Oracle Test Manager for Web Applications Requirements module provides a standardized platform for creating and maintaining design and functional requirements. Its out-of-the-box fields and options provide adequate support for even the most complex application, and the ability to create custom fields in the Oracle Test Manager for Web Applications Administrator provides an even more powerful platform for requirements flexibility and management, enabling customization of the application to match your defined processes. The additional ability to attach files, for such things as design images and functionality workflows, increases productivity by giving developers and testers the information they need to write code and tests correctly the first time. Also, Oracle Test Manager for Web Applications maintains all previously saved versions of a requirement and provides the ability to save comments with each saved version (through custom fields).

The Issues module shares all the same productivity, efficiency, and flexibility standards as the Requirements module, but it also provides us at Meridian with the platform we need to effectively and seamlessly manage our change control process. We take advantage of the custom field functionality to add any additional fields and options we need to manage ownership and expectations of defect and enhancement changes (see figure 4). The Issues module does not manage this process for us, but without it our change control process would not be anywhere close to the efficient and seamless process we use today. Additionally, the information contained in the Issues module gives us great flexibility in managing release readiness, as well as metrics reporting for each software release.


Figure 4: Custom fields for managing defect and enhancement changes

Last, but certainly not least, is the Tests module in Oracle Test Manager for Web Applications. Obviously, from a natural QA perspective, this is the focal point of the application. The ability to structure the tests by means of folders and test groups is integral for proper test management and maintenance. But the ability to manage individual manual, automated (from Oracle Functional Testing for Web Applications), and third-party tests is the core advantage to this module. For anyone converting existing QA material to a managed system like Oracle Test Manager for Web Applications, the manual test process is second to none. Simply and easily, existing tests in such applications as Microsoft Word, Microsoft Excel, or the like can be ported to the Oracle Test Manager for Web Applications test structure. Instantly, tests can be managed in a central location and reused countless times (see figure 5). What better way to make a test update in one location and seamlessly have this change applied to all instances of that test’s use?



Figure 5: Manual tests managed and reused in a central location
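
As a rough sketch of what such a port involves, assuming a simple CSV export from Excel (the format and field names here are hypothetical), rows of a legacy test document can be grouped into structured test records:

```typescript
// Illustrative conversion of a legacy test document, exported from Excel
// as CSV rows of (test name, step action, expected result), into
// structured test records. The format and names are hypothetical.

interface TestCase {
  name: string;
  steps: { action: string; expected: string }[];
}

function parseLegacyTests(csv: string): TestCase[] {
  const byName = new Map<string, TestCase>();
  for (const line of csv.trim().split("\n")) {
    const [name, action, expected] = line.split(",").map((s) => s.trim());
    const test = byName.get(name) ?? { name, steps: [] };
    test.steps.push({ action, expected });
    byName.set(name, test);
  }
  return [...byName.values()];
}

const legacy = `Login test, Open login page, Page loads
Login test, Submit valid credentials, User is logged in`;

console.log(JSON.stringify(parseLegacyTests(legacy), null, 2));
```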
Similarly, for more-advanced QA departments, the ability to maintain automated tests from Oracle Functional Testing for Web Applications shares the same core advantages as manual tests. In the end, the Tests module alone can be enough of an advantage to outweigh a QA department's costs of implementation and maintenance. It is such a good tool for us that all of our QA personnel can easily spend an entire working day logged into Oracle Test Manager for Web Applications conducting their daily and long-term tasks.

Collectively, as stated previously, these three core modules enable the associations that provide the complete traceability so vital to many organizations. Some could say this traceability coverage, which is practically seamless by nature in Oracle Test Manager for Web Applications, saves our team days, if not weeks, on every release by making sure our functionality is covered from A to Z.

MANUAL VERSUS AUTOMATED TESTING
The same age-old question exists for us as for all other QA organizations: Do we use automated testing or manual testing? No surprise here; we have to make the same decisions as any other managed QA department does. Is it cost effective to automate and maintain this type of testing? If so, what volume of tests do we automate? How often do we run these tests? How do we properly implement a sound automation practice? The questions are nearly endless, and an entire white paper could be devoted (and undoubtedly has been) to this subject.

One of the biggest advantages of automated testing is repetitive execution of functional test scenarios against a consistent and expected interface. Basically, automated regression testing is probably the safest route to take: because regression testing is most often executed against relatively unchanged functions, it is a safer bet to automate and requires less maintenance.
We spend so much time adding new features and changing outdated functionality that we have had little chance to automate portions of our application. This is in no way a bad thing, especially since we identified it from the get-go and have not talked ourselves into a false hope that automation can somehow “save us” from the “perils” of manual testing. In fact, manual testing has been extremely effective for us. Our features change often, the application is highly customizable, and our user interface is always being tweaked, so we have to be very selective in what we automate so that we do not create unnecessary maintenance.

That said, now that our QA processes have been in place for years, we have been designing a formal process to implement automation at the levels that would give us the greatest benefit. When the time is right, we will slowly introduce smoke testing at the basic functional level, where it is easier to maintain yet yields the greatest benefit. Because we apply new builds of code to our QA environments so often, smoke testing will help us identify low-level defects at the root level more efficiently, rather than waiting for a tester to execute the process manually.
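
A build-level smoke test can be as small as a scripted check that core pages respond at all. The sketch below is a generic, hypothetical example in TypeScript (a plain HTTP check with invented URLs), not a script from Oracle Functional Testing for Web Applications:

```typescript
// Generic build-level smoke test: confirm that core pages of a freshly
// deployed build respond before deeper testing begins.
// The base URL and paths are invented for this sketch.

const baseUrl = "https://qa.example.com";
const smokePaths = ["/login", "/catalog", "/reports"];

async function smokeTest(): Promise<void> {
  for (const path of smokePaths) {
    const response = await fetch(baseUrl + path);
    const status = response.ok ? "OK" : `FAIL (${response.status})`;
    console.log(`${path}: ${status}`);
  }
}

smokeTest().catch((err) => console.error("Smoke test aborted:", err));
```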
Just like all our processes, automated testing is a great support tool, but it does not dictate our processes and end results. Using Oracle Test Manager for Web Applications to manage the execution of the automated tests, as well as scheduling out the process runs, will fall right into place with the other modules we use in Oracle Test Manager for Web Applications.

REPORTING AND STAKEHOLDER BUY-IN
The reporting capability has vastly improved in recent releases of Oracle Test Manager for Web Applications. Of particular interest is the dashboard-style setup of the reports. In the past, we have used our own collaborative dashboard (in the form of weekly/monthly metrics) to present to management and owners such things as product status, defect resolution rate, and test/issue ownership. These are always vital for buy-in and progress explanation; however, when they need to be presented frequently, they can take a real chunk of time to create. The dashboard reporting feature can greatly reduce the time needed to put these numbers together, especially on a moment’s notice.

Another somewhat new, but empowering, feature is the ability to create your own reports and publish them for others to use. Different members of our product team employ their own private reports to help track internal progress, but also publish some of them to the team so information can be shared. This makes it possible to check status and other information instantly, without the need to call, e-mail, or meet with team members.
Standard out-of-the-box reports should not be overlooked either. We use these reports in varying degrees to report such things as test progress/status for release readiness, issue resolution/results to support our final test reports, and requirements-to-test traceability, as alluded to earlier. The Reporting module is another great advantage of the single, integrated solution offered by Oracle Test Manager for Web Applications.

QUICK TIPS AND TRICKS
With all applications, there are certain shortcuts and tidbits that can help manage and maintain efficiency, as well as contribute to knowledge transfer within the team. Here are some of the tips and tricks I have found over years of using Oracle Test Manager for Web Applications in our organization.

One of the biggest time-savers is to reuse custom fields within the three core modules as often as possible. For example, we use the Version field across all three modules. Because we apply new versions of code to our testing environment at least once daily, the ability to log in and update the current version in one central location saves us time and ensures we don’t make any typos or mistakes. Naturally, correct versioning helps with reporting and metrics accuracy.

A very good tool recently included in Oracle Test Manager for Web Applications is the Screen Capture Utility. This wonderful application can really cut the time needed to include screenshots of bugs and GUI captures, simplifying the process of entering issues and of attaching GUI captures to requirements or tests. It is far more efficient than the old method of pressing the print screen (PrtScn) button and then using a general tool such as Microsoft Paint to paste, crop, and save the picture.

Another interesting tidbit we have found is not to abandon the Oracle Test Manager for Web Applications desktop interface. With more and more improvements to the Web interface, it can be enticing to move to the more modern interface full time. However, there are a few key advantages to continuing to use the desktop interface. One of these is drag-and-drop functionality: when working with large volumes of functional tests and test groups, as we do, the ability to organize these test assets by dragging and dropping is a huge time-saver compared with the move left/right/up/down buttons in the Web interface. The same advantage exists for requirements as well.
A final, but important, trick is to make sure unique IDs are turned on. In the Tools > Options menu, select the check box to display unique IDs rather than using the default index sorting (see figure 6). As you reorganize requirements and tests, the default indexing can change, which can be a real nuisance if you monitor traceability closely: the index value changes as an entity is moved, whereas a unique ID follows the entity and maintains its identification through any movement. Also under Options, you can control the number of records that appear within a single node.



Figure 6: Enabling unique IDs under Tools > Options
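
The index-versus-ID point is a general one about identifying records, and a tiny sketch (independent of the tool itself, with invented IDs) shows why positional indexes break traceability while unique IDs do not:

```typescript
// Why positional indexes break traceability: reordering changes the index,
// while a unique ID follows the record wherever it moves. IDs are invented.

const tests = [
  { id: "TEST-100", name: "Login" },
  { id: "TEST-101", name: "Checkout" },
];

console.log(tests[1].name); // "Checkout" is at index 1 right now

tests.reverse(); // a reorganization

console.log(tests[1].name); // "Login" -- the index now points elsewhere
console.log(tests.find((t) => t.id === "TEST-101")?.name); // still "Checkout"
```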
Another note about unique IDs: use a different database instance per test manager project when copying a project (one project per database). Why? If you copy a project within the same database, the unique IDs will jump, so to speak: because the IDs must remain unique, what used to be TEST100 in Project A will become TEST500 in the copy of Project A. This obviously can throw a kink into maintaining consistent traceability. To avoid it, copy the project to a new, clean database instance, where the ID mapping will remain the same. Copying within the same database can certainly have its advantages in certain situations, but we have found one database instance per test manager project to be the best way to maintain our Oracle Test Manager for Web Applications archiving process and overall traceability. Following a major release, we copy the test manager project to a new database; the copy becomes the archive for that release, and we continue work in the “main” project.

CONCLUSION
In the end, Oracle Test Manager for Web Applications offers Meridian a relatively low-cost solution (especially when compared to similar vendors) that supports almost all aspects of our product development lifecycle. Because it is an extremely low-maintenance and easily manageable application, we can spend more time relying on its benefits and less time worrying about its dependability. It is important to note that we chose Oracle Test Manager for Web Applications as a supportive solution to help drive and maintain our processes and initiatives after they had been identified and defined. I believe that if you reverse this order and choose a tool first, an organization can too easily become dependent on that solution, and its processes can become too restrictive and inflexible as the product and organization change over time.
In summary, Oracle Test Manager for Web Applications is a tool that assists us with decision-making, productivity, and knowledge throughput in a centralized interface. There is no doubt our efficiency would wane greatly without instant traceability and the easy reuse and optimization of tests. Oracle Test Manager for Web Applications is a viable and credible solution that any organization should strongly consider to help support its lifecycle efforts.