
    Social ties, knowledge sharing and successful collaboration in globally distributed system development projects

    Julia Kotlarsky1 and Ilan Oshri2

    1. Warwick Business School, The University of Warwick, Coventry, U.K.
    2. Rotterdam School of Management, Erasmus University, Rotterdam, The Netherlands

    Correspondence: Julia Kotlarsky, Warwick Business School, The University of Warwick, Coventry CV4 7AL, U.K. Tel: +44 24 7652 4692; Fax: +44 24 7652 4539; E-mail:

    Received 5 April 2004; Revised 29 September 2004; Re-revised 1 December 2004; Accepted 8 February 2005.



    Traditionally, the main focus of the information system (IS) literature has been on technical aspects related to system development projects. Furthermore, research in the IS field has mainly focused on co-located project teams. In this respect, social aspects involved in IS projects were neglected or scarcely reported. To fill this gap, this paper studies the contribution of social ties and knowledge sharing to successful collaboration in distributed IS development teams. Data were drawn from two successful globally distributed system development projects at SAP and LeCroy. Data collected were coded using Atlas.ti software. The results suggest that human-related issues, such as rapport and transactive memory, were important for collaborative work in the teams studied. The paper concludes by discussing the implications for theory and suggesting a practical guide to enhance collaborative work in globally distributed teams.


    social aspects, knowledge sharing, collaborative work, dispersed teams

    The biggest problem is a people problem: if people from different sites don't have the respect and trust for each other, they don't work well together
    (Anthony, Chief Software Architect, LeCroy)



    Recent years have witnessed the globalization of many industries. Consequently, globally distributed collaborations and virtual teams have become increasingly common in many areas, for example new product development (Malhotra et al., 2001) and information systems (IS) development (Carmel & Agarwal, 2002; Herbsleb & Mockus, 2003; Sarker & Sahay, 2004).

    Managing dispersed development projects is far more challenging than co-located projects. However, ongoing innovations in information and communication technologies (ICT) make it possible to cooperate in a distributed mode. Indeed, recent research in the IS field has focused on ICT in the context of globally distributed IS development teams (Carmel, 1999; Herbsleb et al., 2002; Mockus & Herbsleb, 2002). However, little is known about the social aspects associated with the management of globally distributed IS development projects and, in some studies, social aspects are perceived to be constraints on globally distributed collaboration (Jarvenpaa & Leidner, 1999; Evaristo, 2003; Sarker & Sahay, 2004). While other disciplines, such as organizational behaviour, have acknowledged the importance of social aspects, such as trust (Storck, 2000; Child, 2001), in global collaborations, evidence about the role that human and social aspects play in global collaborative work is still missing. To fill this gap, this paper attempts to address the following questions: do social ties and knowledge sharing contribute to successful collaboration in globally distributed IS development teams and, if so, through what mechanisms are social ties established and facilitated?

    Following the Introduction, the paper will discuss the literature on globally distributed IS development projects, including a review of past studies related to social ties, knowledge sharing and successful collaboration in various contexts, such as co-located sites and global alliances. The motivation for this research and the gap in the literature will then be identified. After an outline of the research methods applied, data drawn from SAP and LeCroy, two companies that have engaged in globally distributed IS development projects, will be presented. A qualitative presentation of the findings will be followed by a quantification of the research data, providing evidence for the importance of social ties and knowledge sharing to collaborative work in globally distributed IS development teams. Evidence regarding the mechanisms supporting the build-up of social ties observed in the companies studied will also be outlined. Finally, the implications for theory and practice are discussed.



    Globally distributed IS development projects are projects that consist of two or more teams working together to accomplish project goals from different geographical locations. In addition to geographical dispersion, globally distributed teams face time zone and cultural differences that may include, but are not limited to, different languages, national traditions, values and norms of behaviour (Carmel, 1999).

    Traditionally, the main focus of the IS literature on globally distributed teams has been on technical aspects related to system development projects. Past research in the IS field suggests that the proper application of technical and operational mechanisms such as collaborative technologies, IS development tools and coordination mechanisms is the key to successful system development projects (Carmel, 1999; Majchrzak et al., 2000; Herbsleb et al., 2002). It has been claimed, for example, that a powerful ICT infrastructure is required to ensure connectivity and data transfer at high speed between remote sites (Carmel, 1999). Additionally, generic collaborative technologies (e.g. Groupware) are needed to enable remote colleagues to connect and communicate. The most commonly suggested collaborative technologies are e-mail, chat (e.g. Instant Messaging), phone/teleconferencing, video-conferencing, intranet, group calendar, discussion lists and electronic meeting systems (Smith & Blanck, 2002; Herbsleb & Mockus, 2003). Finally, in addition to generic collaborative technologies, a number of specific tools for software development have been suggested to support globally distributed teams. These include configuration and version management tools, document management systems, replicated databases and CASE tools (Ebert & De Neve, 2001; Carmel & Agarwal, 2002; Smith & Blanck, 2002). Recent studies have focused on integrating software development tools (e.g. Integrated Development Environment) with collaborative tools (e.g. email, Instant Messaging) in order to offer solutions that deal with breakdowns in communication and coordination among developers in dispersed development teams (Cheng et al., 2004).

    A related stream of studies has focused on issues pertaining to the geographical dispersion of work. Naturally, because of several constraints associated with globally distributed work, such as distance, time zone and cultural differences, traditional coordination and control mechanisms tend to be less effective in global development projects (Herbsleb & Mockus, 2003). Distance, for example, reduces the intensity of communications, in particular when people experience problems with media that cannot substitute for face-to-face communication (Smith & Blanck, 2002). Cultural differences, expressed in different languages, values, working and communication habits and implicit assumptions, are believed to be embedded in the collective knowledge of a specific culture (Baumard, 1999) and thus may cause misunderstanding and conflicts. Time zone differences reduce opportunities for real-time collaboration, as response time increases considerably when working hours at remote locations do not overlap (Sarker & Sahay, 2004). Such challenges raise the question of whether globally distributed work can benefit from other factors, human in nature, involved in dispersed projects. The following sections provide a review of the literature on the human and social aspects involved in collaborative work. We draw on studies from several disciplines in order to assess the extent to which human and social aspects have been considered as enablers for collaborative work in globally distributed projects.


    Social aspects in globally distributed teams

    A large number of factors that may contribute to collaborative work have been given consideration in earlier studies. Among the many socially related factors contributing to collaboration, past studies have considered formal and informal communications (Storck, 2000; Child, 2001; Dyer, 2001), trust (Arino et al., 2001; Child, 2001), motivation (Child, 2001) and social ties (Granovetter, 1973; Storck, 2000; Child, 2001). The literature on IS development projects is far more limited in addressing the impact that human-related factors may have on IS projects in general and successful collaboration in particular. It has been argued, for example, that informal communications play a critical role in coordination activities leading to successful collaboration in co-located IS development (Kraut & Streeter, 1995). As the size and complexity of IS development increase, the need to support informal communications also increases (Herbsleb & Moitra, 2001). Consequently, one of the central problems in distributed development projects is induced by time, cultural and geographical distances that greatly reduce the amount of such communication. Nonetheless, past studies related to IS in the context of globally distributed teams have mainly raised concerns about managers' ability to overcome geographical, time zone and cultural differences. According to Smith & Blanck (2002, p. 294), for example, 'an effective team depends on open, effective communication, which in turn depends on trust among members. Thus, trust is the foundation, but it is also the very quality that is most difficult to build at a distance'. Trust was defined by Child (2001, p. 275) as 'the willingness of one person or group to relate to another in the belief that the other's action will be beneficial rather than detrimental, even though this cannot be guaranteed'. Trust is more likely to be built if personal contact, frequent interactions and socializing between teams and individuals are facilitated (Arino et al., 2001; Child, 2001).

    Additional challenges to globally distributed work have been raised by Herbsleb & Mockus (2003). They claim that (i) distributed social networks are much smaller than same-site social networks, (ii) there is far less frequent communication in distributed social networks compared to same-site social networks, (iii) people find it much more difficult to identify distant colleagues with necessary expertise and to communicate effectively with them, and (iv) people at different sites are less likely to perceive themselves as part of the same team than people who are at the same site. Studies that have sought solutions to overcome the above challenges, often induced by the lack of personal interactions between remote teams, have suggested a division of labour and task between remote sites (e.g. Grinter et al., 1999; Battin et al., 2001). While it seems that the main challenge is to create rapport between members of the dispersed teams, the solutions proposed have been mainly in the field of technical and project procedures. Rapport is defined as 'the quality of the relation or connection between interactants, marked by harmony, conformity, accord, and affinity' (Bernieri et al., 1994, p. 113). Past research has indeed confirmed that rapport is the key to collaboration between project teams and individuals, albeit only in the context of co-located project sites (Gremler & Gwinner, 2000). Little is known about creating rapport between globally distributed teams.

    To summarize, while past studies in the various disciplines have acknowledged the importance of social aspects in collaborative work, the studies that have focused on the IS field have tended to see such social aspects (e.g. trust and rapport) as very difficult to encourage or foster in the context of globally distributed projects.


    Knowledge sharing in globally distributed teams

    The importance of knowledge sharing for collaborative work has already been established in past studies (e.g. Hendriks, 1999; Goodman & Darr, 1998). Storck (2000), for example, claims that sharing knowledge is important to building trust and improving the effectiveness of group work. Herbsleb & Moitra (2001) reiterated such an observation, claiming that without an effective sharing of information, projects might suffer from coordination problems leading to unsuccessful collaborations. Nonetheless, establishing an effective knowledge sharing process can be challenging, in particular when teams are faced with cultural, geographical and time zone differences (Kobitzsch et al., 2001; Herbsleb & Mockus, 2003). Herbsleb et al. (2000, p. 3) described how one global IS development project was facing major challenges in trying to identify who knows what: 'difficulties of knowing who to contact about what, of initiating contact, and of communicating effectively across sites, led to a number of serious coordination problems'. There seemed to be a need to know whom to contact about what in this particular organization, something that is far more challenging in globally distributed teams. This organizational aspect, knowing who knows what, has been acknowledged as the key to knowledge sharing activities by several studies (Orlikowski, 2002; Herbsleb & Mockus, 2003). Faraj & Sproull (2000), for example, suggested that instead of sharing specialized knowledge, individuals should focus on knowing where expertise is located and needed. Such an approach towards knowledge sharing is also known as transactive memory. Transactive memory is defined as the set of knowledge possessed by group members coupled with an awareness of who knows what (Wegner, 1987). It has been claimed that transactive memory may positively affect group performance and collaboration by quickly bringing the needed expertise to knowledge seekers (Faraj & Sproull, 2000; Storck, 2000).

    Another socially constructed concept that was proposed as a connecting mechanism between individuals and teams is collective knowledge. Grant (1996) claims that collective knowledge comprises elements of knowledge that are common to all members of an organization. In the case of globally distributed system development projects, the 'organization' involves all people participating in the project in remote locations. Collective knowledge is defined as 'a knowledge of the unspoken, of the invisible structure of a situation, a certain wisdom' (Baumard, 1999, p. 66). Such a concept may entail the profound knowledge of an environment, of established rules, laws and regulations. It may include language, other forms of symbolic communication and shared meaning (Grant, 1996). Building a sense of collective knowledge in co-located organizations would mean the development of a collective mind (Weick & Roberts, 1993; Weick et al., 1999) through participation in tasks and social rituals (Orr, 1990; Baumard, 1999; Orlikowski, 2002).

    To conclude, while globally distributed teams have employed a range of communication tools (e.g. Groupware applications comprising chat, e-mail, discussion list and application sharing capabilities) that support the sharing of knowledge across remote sites, evidence from recent research suggests that the challenges involved in sharing knowledge across globally distributed teams are still widespread, and that breakdowns in sharing knowledge do occur. Indeed, technical solutions are important, but are not sufficient. This calls for further investigation of socially constructive elements involved in developing collective knowledge and transactive memory as complementary mechanisms to existing technical solutions.


    Successful collaboration in information system projects

    The word collaboration comes from the Latin words com (prefix together) and laborare (verb to work). It means that two or more individuals work jointly on an intellectual endeavour (Webster, 1992). Collaboration is a complex, multi-dimensional process characterized by constructs such as coordination (Faraj & Sproull, 2000), communication (Weick & Roberts, 1993), meaning (Bechky, 2003), relationships (Gabarro, 1990), trust (Meyerson et al., 1996) and structure (Adler & Borys, 1996).

    The IS literature has discussed at length some factors that support successful collaboration. Successful collaboration is the process through which a specific outcome, such as a product or desired performance, is achieved through group effort. In this sense, successful collaboration is represented in this paper as either product success or a desired performance of a distributed team. Product success can be represented by various indicators, such as growth in sales, product delivery on time and within budget (Nellore & Balachandra, 2001; Andres, 2002) or short time-to-market (Datar et al., 1997). In line with these indicators, product success is thus defined as the achievement of project objectives (Gallivan, 2001). This criterion for product success can either be objective, that is, based on market or company data, or subjective, that is, based on project participants' perception of product success.

    A desired result of a distributed team can also be a people-related outcome (Hoegl & Gemuenden, 2001), which entails meeting the psychological needs of the members (Gallivan, 2001). Hoegl & Gemuenden (2001) and Gallivan (2001), for example, suggest that, in addition to performance objectives, teams must also work in a way that increases members' motivation to engage in future teamwork. There should be some level of personal satisfaction that motivates individuals and teams to continue their engagement in collaborative work despite geographical, time and cultural differences. We perceive personal satisfaction as the outcome of a positive social experience. Such positive social experience can, for example, be in the form of stress-free communication rituals between remote counterparts and collegial relationships between remote teams. Some factors that may foster people-related outcomes and thus may improve personal satisfaction are open and multiple informal communication channels (Hoegl & Gemuenden, 2001), the encouragement of interactions between parties involved in the development process (Nelson & Cooprider, 1996), and the cohesion of a team (Gallivan, 2001; Hoegl & Gemuenden, 2001). Naturally, geographical, cultural and time-zone differences pose additional challenges for globally distributed teams in achieving successful collaboration, whether seen as a people-related outcome or as a product outcome.


    The motivation for the research: the gap

    Thus far, the solutions proposed to support globally distributed teams have been technical in nature, paying little attention to the human and social aspects involved in globally distributed work (Al-Mushayt et al., 2001). Furthermore, in the few studies that focused on social aspects in globally distributed projects, these aspects were presented as concepts that added challenges to the coordinating of collaborative work because of cultural, geographical and time-zone differences. Jarvenpaa & Leidner (1999), for example, indicated that lack of trust is likely to develop between globally distributed teams, while Carmel (1999) raised a concern about possible breakdowns in communications that may cause coordination problems because of language barriers, cultural differences, asymmetry in distribution of information among sites and lack of team spirit.

    While we accept the observation that insufficient trust and poor social relationships may act as barriers to successful collaboration in globally distributed teams, and sufficient trust and well-established social relationships may act as enablers to collaborative work, we also argue that there is a need to understand whether, and how, social aspects actually contribute to successful collaboration. The importance, and the contribution, of social aspects to collaborative work in globally distributed projects is neglected in the IS literature, and the little that is known about this area is mainly based on co-located project teams. To fill this gap, three concepts – social ties, knowledge sharing and successful collaboration – will be studied in an attempt to address the following questions: do social ties and knowledge sharing contribute to successful collaboration in globally distributed IS development teams and, if so, through what mechanisms are social ties established and facilitated?

    Figure 1 illustrates the three main concepts, social ties, knowledge sharing and successful collaboration; and their categories, trust and rapport, transactive memory and collective knowledge, and product success and personal satisfaction, respectively. In addition, the importance of collaborative tools will be studied in order to assess their impact on successful collaboration in comparison to the contribution that social ties and knowledge sharing have made to successful collaborative work. Lastly, the mechanisms that support social ties will be explored in an attempt to explain how companies may create social ties between globally distributed team members.

    Figure 1. Main concepts and their categories.

    Research method and approach

    An in-depth ethnographic study of globally distributed software development projects is provided in this paper. A qualitative, interpretive approach is adopted. In line with much past IS research (e.g. Palvia et al., 2003), a case study method was selected for this research.

    Applying a case study method as a research strategy involves the use of an all-inclusive method and offers several approaches to data collection and analysis (Yin, 1994). Typically, a study based on a case study methodology from an interpretive perspective starts with a discussion of the existing literature followed by data collection and analysis procedures (Yin, 1994). In this study, evidence was gathered from a variety of sources such as documentation, archival records and interviews (Eisenhardt, 1989; Yin, 1994). Data were also triangulated through interviews with team counterparts in different locations and in cases where the interpretation of subjective evidence was questionable, such as in the case of successful collaboration. In addition, data analysis methods involved both the presentation of qualitative data in the form of statements made by interviewees as well as a quantification of data in the form of statement frequencies.

    To correspond with the main interests of the research, only project teams at SAP and LeCroy that were globally distributed across at least two locations were considered for this study (see Company background in Appendix A). Interviews were conducted at two remote sites per company: in India and Germany for SAP, and in Switzerland and the USA for LeCroy. Interviewees were chosen to include (1) counterparts working closely at remote locations, and (2) diverse roles such as managers and developers. In total, 10 interviews (five at each company) were conducted (see Interviewees' details in Appendix B). Interviews lasted 1 h and 30 min on average; they were recorded and fully transcribed. A semi-structured interview protocol was applied, to allow the researchers to clarify specific issues and follow up with questions (see Interview protocol in Appendix C).

    Data analysis followed several steps. It relied on iterative coding of the data using the open-coding technique (Strauss & Corbin, 1998), sorting and refining themes emerging from the data based on the definitions of the categories with some level of diversity (Miles & Huberman, 1994; Strauss & Corbin, 1998), and linking these to categories and concepts (see Appendix D).

    Coding was done in Atlas.ti, a packaged Qualitative Data Analysis (QDA) tool. The QDA software facilitated the analysis process. In particular, it was used for coding, linking codes and text segments, documenting diversity in codes, creating memos, searching, editing and re-organizing, and the visual representation of data and findings (Miles & Huberman, 1994; Weitzman, 2000).
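    The coding workflow just described (text segments tagged with codes, codes grouped into categories, categories grouped into concepts) can be sketched as a simple data model. All code, category and quotation names below are invented stand-ins for illustration, not the study's actual codebook (which is in Appendix D):

```python
from collections import defaultdict

# Hypothetical coding scheme: each code belongs to a category,
# and each category belongs to a top-level concept.
CATEGORY_OF = {"mutual respect": "trust",
               "reliance": "trust",
               "personal bond": "rapport"}
CONCEPT_OF = {"trust": "social ties", "rapport": "social ties"}

# Each coded segment links an interview quotation to one or more codes.
segments = [
    ("quotation A", ["mutual respect"]),
    ("quotation B", ["personal bond", "reliance"]),
]

def statements_per_concept(segments):
    """Roll coded segments up to concept-level statement counts."""
    counts = defaultdict(int)
    for _text, codes in segments:
        for code in codes:
            counts[CONCEPT_OF[CATEGORY_OF[code]]] += 1
    return dict(counts)

print(statements_per_concept(segments))  # → {'social ties': 3}
```

    The two-level rollup mirrors the paper's code → category → concept linking; in Atlas.ti itself this grouping is done interactively rather than in code.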

    Data were analysed by the researchers independently. The interpretation of selective codes (those that seemed to have dual meaning), the consolidation of codes into categories and the examination of empirical findings against the literature were done by both researchers together. In addition, feedback sessions with key informants in the case companies were organized and their comments were incorporated into the research findings. Such a data analysis approach is believed to enhance confidence in the findings (Eisenhardt, 1989).


    Empirical results and analysis

    In this section, the results of two case studies carried out at SAP and LeCroy will be presented. Based on the empirical evidence presented below, we argue that social ties and knowledge sharing contributed to successful collaboration in the companies studied. In principle, we claim, based on the data analysed, that in globally distributed IS development teams, social ties and knowledge sharing improved collaboration. Furthermore, several organizational mechanisms supporting the build-up of social ties between remote sites were reported. In order to support the above claim, three levels of evidence will be outlined in the following section. The first level is an outline of statements made by interviewees associated with the concepts under investigation (i.e. social ties, knowledge sharing and successful collaboration). The second level is the frequency of these statements. The third level will present the number of instances in which social ties, knowledge sharing and collaborative tools were linked to successful collaboration.


    Social ties in globally distributed teams: evidence

    Statements made by interviewees about rapport and trust are presented below. These statements were analysed and associated with rapport and trust based on the definitions provided above.


    Most of the guys know each other very well – we try to make sure they interact, we increase the possibility that they really get to know each other (Anthony: see Interviewees' details in Appendix B).
    I need to have good relationships with the people I am working with [...] the better you know the people the easier it gets. I know Sudhir and Thomas, both of them I think by now quite well (Christoph).



    It makes a big difference, when the guys know each other but more importantly when the guys trust each other (Anthony).
    The team-building exercise was a way to show that we care about remote locations. The end result of that exercise was that the entire team [globally distributed] feels more comfortable to work together. Now they know each other and trust each other better (Stefan).


    Knowledge sharing in globally distributed teams: evidence

    Statements made by interviewees about transactive memory and collective knowledge are presented below. These statements were analysed and associated with transactive memory and collective knowledge based on the definitions provided above.

    Transactive memory


    When a problem occurs it is important for the team, instead of finding the bug, to find quickly who knows best about the failing component (Gilles).
    What I did in the past was – this was in the very early phase of the project, I sent requests only to Sudhir and he would distribute the issues between people. But by now, after 6 months, I know quite well what everybody is doing. So after a time, you just know who's doing what (Christoph).


    Collective knowledge

    How do you pick all the guys that we had – pure embedded programmers – and teach them all about Windows and a new Microsoft COM technology at the same time. Well, we all got together in the mountains of France. It was a real fun week with two purposes: one was to teach us all about this new technology. The other which was fairly equally important if not more important in some way – was to really try to build relationships between people (Larry).
    It [team-building] was a pretty good experience for myself: learning the culture and also how the team internally works. So my understanding of what you can expect from the team, and what you cannot expect, is very important for us (Stefan).


    Successful collaboration in globally distributed teams: evidence

    Successful collaboration can be defined by various indicators. The perception of interviewees that a project team was collaborative is one indication of successful collaboration. However, there may also be external indicators of successful collaboration, such as project and product success. These indicators can be either subjective or objective. Subjective evidence may include statements made by interviewees about their perception of product success, while objective evidence takes the form of sales, growth and industry recognition associated with the product. While objective evidence should not be biased, one has to acknowledge that some indicators may have been manipulated prior to presentation by the company (e.g. sales figures). The perception of interviewees with regard to product success and personal satisfaction, representing successful collaboration, is presented below. These statements were analysed and associated with product success and personal satisfaction based on the definitions provided above.

    Product success

    Engineers described the Maui project as the first project to adopt a component-based architecture, claiming that this new approach serves as a basis for future products because 'we can take the bunch of different components and create different instruments [...] within a few months rather than in a few years' (Larry).
    We just went through a merger, so setting up a global project was not an easy task. Despite all the difficulties we managed to have a successful second software release in 8 months (Stefan).


    Personal satisfaction

    The job here is very demanding and challenging. I think that those who stay onboard are the engineers who share the same goal: to work on complex problems in cutting edge technologies. I think that the fact that we share this goal helps us to work well together (Gilles).
    The team building exercise from our side [Bangalore team] was more of a building of awareness about the whole team of Stefan, because he heads now all our team, so he needed to have a good picture of how the team composition is, what each individual is like or what different people are like (Sudhir).


    In addition, objective evidence, presented below, supports the perception of product success that was reported by interviewees.

    Product and project success (objective evidence)


    • LeCroy's WaveMaster 8600, the first release of the Maui Project, was announced as the Best Product of Year 2002 by EDN, a leading magazine for design engineers.
    • While revenues in 2003 were down to $107.8M from $111.5M in 2002 because of the difficult economic environment, the WaveMaster had a positive impact on the financial results of year 2003: 'Our high-end oscilloscope product orders grew by 7% in the first quarter of fiscal 2003 over a comparable period in fiscal 2002. This success is due to the new WaveMaster product line, including the introduction of the world's highest performance oscilloscope during the quarter, the WaveMaster 8600A' (Tom Reslewic, CEO, LeCroy, news release, 16 October 2002).



    • According to JupiterResearch, a leading research and consulting company in emerging technologies, SAP Enterprise Portal was the third largest software solution, with 17% of the USA market in 2002. The Collaboration Project studied here developed collaborative tools, one of the three main features of the SAP Enterprise Portal.
    • The 2003 revenues for SAP Enterprise Portal were up by 5%, representing 13% of SAP software sales (SAP's 2003 annual report).


    Concept frequencies for social ties, knowledge sharing, collaborative tools and successful collaboration

    The above section presented a sample of statements made by interviewees from SAP and LeCroy with regard to social ties, knowledge sharing and successful collaboration. This section presents a count of all statements made by interviewees at SAP and LeCroy in the context of social ties, knowledge sharing, collaborative tools and successful collaboration; we refer to this count as concept frequencies. For example, 51 statements were made by interviewees from SAP with regard to knowledge sharing in globally distributed teams. In addition, 'diversity in codes' was calculated: the number of different codes grouped within one category (as illustrated in Appendix D). Under the category 'trust', for example, three different codes were identified. In other words, 'diversity in codes' represents the number of instances in which a statement was found to differ in some way from another statement in the context of a particular category (Table 1).
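    The tallying described above can be sketched in a few lines. The coded statements below are invented for illustration only (the study's actual coding was carried out in Atlas.ti, and its data are not reproduced here); the sketch simply shows how concept frequencies and 'diversity in codes' are computed from coded statements.

```python
from collections import defaultdict

# Hypothetical coded statements: (company, concept, category, code).
# These tuples are illustrative, not the study's data.
statements = [
    ("SAP", "social ties", "trust", "trust built via short visits"),
    ("SAP", "social ties", "trust", "trust through open communication"),
    ("SAP", "social ties", "rapport", "rapport with remote counterparts"),
    ("LeCroy", "knowledge sharing", "transactive memory", "knowing who knows what"),
    ("LeCroy", "knowledge sharing", "transactive memory", "knowing who knows what"),
]

def concept_frequencies(stmts):
    """Count statements per concept ('concept frequencies')."""
    freq = defaultdict(int)
    for _, concept, _, _ in stmts:
        freq[concept] += 1
    return dict(freq)

def diversity_in_codes(stmts):
    """Count distinct codes per category ('diversity in codes')."""
    codes = defaultdict(set)
    for _, _, category, code in stmts:
        codes[category].add(code)
    return {category: len(code_set) for category, code_set in codes.items()}

print(concept_frequencies(statements))
print(diversity_in_codes(statements))
```

    Note that two identical codes within a category count once towards 'diversity in codes' but twice towards the concept frequency, matching the distinction drawn in the text.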

    Our calculations show that 81 statements were made with regard to social ties, 72 statements concerning knowledge sharing and 102 statements about collaborative tools. Within the concepts, a large number of statements were associated with rapport (71). These findings may suggest that interviewees have considered developing rapport with counterparts from remote sites to be an important element in collaborative work. The importance of social ties and knowledge sharing in successful collaboration will be further discussed in the following section.

    The relationships between social ties, knowledge sharing, collaborative tools and successful collaboration

    To assess the importance of social ties and knowledge sharing for successful collaboration, a calculation was made of statements that represented explicit relationships between social ties, knowledge sharing, collaborative tools and successful collaboration (see an example in Appendix D). These calculations are presented in Table 2 under the column 'Relationships with successful collaboration'.

    Two conclusions can be drawn from the calculations presented in Table 2. Firstly, Table 2 suggests that social ties and knowledge sharing were positively associated with successful collaboration in 30% and 43% of the statements made, respectively. Collaborative tools were positively associated with successful collaboration in 37% of statements made about this concept. Secondly, social ties (30%) and knowledge sharing (43%) were associated with successful collaboration to almost the same extent as, or to a greater extent than, collaborative tools (37%). The significance of these findings is further underlined by the observation that interviewees were asked a similar number of questions about human-related issues and about collaborative tools (see Interview protocol in Appendix C).

    Based on the evidence above, we argue that our findings suggest that, in addition to technical solutions, human-related issues in the form of social ties and knowledge sharing were considered key to successful collaboration.

    Organizational mechanisms supporting social ties in globally distributed teams

    The analysis of the evidence collected at SAP and LeCroy suggests that there were two phases of activities that supported the build-up of social ties: (i) before Face-to-Face (F2F) meetings and (ii) after F2F meetings. In addition, the analysis of the empirical evidence suggests that the projects studied applied some particular tools. Table 3 outlines the activities associated with the two phases of building up social ties and the set of tools applied in the projects studied. In addition, a count of the statements made with regard to a particular activity or tool is provided per company. The highest frequency is shown in bold.

    Table 3 suggests that interviewees from SAP considered activities prior to an F2F meeting important for building social ties, that is, rapport and trust, between members of the globally distributed team. In particular, a short visit to a remote location was mentioned as an important mechanism prior to a formal introduction of the team. Interviewees from LeCroy considered activities before and after F2F meetings as equally important for the build-up of social ties. Nonetheless, managers from LeCroy also considered an initial introduction activity before F2F as important for instituting social relationships. In terms of post-F2F activities, interviewees from both companies indicated the importance of open communication channels. A non-hierarchical communication approach was another mechanism contributing to social relationships. Lastly, the tools through which social relationships were created across different sites were mainly phone, email and groupware applications. Nonetheless, interviewees also indicated that the quality of messages, that is, the assurance that a message conveys the issue successfully and is understood and interpreted properly, is important for establishing social relationships between team members.

    So far, evidence has been presented about the importance of social aspects in globally distributed teams and the means through which social ties can be established. The following section discusses the implications for research and practice.



    Human and organizational aspects involved in system development projects are at the centre of this study. The cases of SAP and LeCroy demonstrated the importance of certain human aspects, for example social ties and knowledge-sharing activities, and organizational aspects, for example tools and project procedures, in globally dispersed collaborative work. The implications for human and organizational aspects are both theoretical and practical.


    Theoretical implications

    From a theoretical perspective, this study suggests that more attention is needed to understand the relationships between social ties, knowledge sharing and successful collaboration in globally distributed teams. As it stands, the IS literature tends to overemphasize the contribution of technical solutions and collaborative tools to the flow and sharing of information (e.g. Battin et al., 2001; Ebert & De Neve, 2001), and in some cases to downplay the role of social aspects, such as rapport, in globally distributed collaborative work. We claim that collaborative work can also be understood from a social construction viewpoint, in which the quality of the relation or connection between interactants in globally distributed teams can be enhanced through storytelling (Orr, 1991) and participation in social rituals (Lave & Wenger, 1991). In this respect, the social practice is the primary activity and collaboration is one of its characteristics. The learning involved in the manner in which people successfully collaborate is located within the social world. As part of the participation involved in a collaborative practice, members of a globally distributed project change locations and perspectives to create and sustain learning trajectories (Lave & Wenger, 1991, p. 36). We argue that collaboration is actually about renewing the set of relations between globally distributed project members through continuous participation and engagement. In this sense, collaborative tools are one mediator through which collaboration as a learned social practice is developed.


    Practical implications

    From a practical viewpoint, we argue that in order to achieve successful collaboration in globally distributed teams, companies need to introduce organizational mechanisms that create social spaces between team members. There is substantial support in research and practice, as for example in this study, for F2F meetings, suggesting that such meetings are important for teamwork and performance (Jarvenpaa et al., 1998; Govindarajan & Gupta, 2001).

    We argue that some activities should be planned both before and after F2F meetings to ensure the participation and engagement of project members in collaborative work. We suggest, for example, that managers should facilitate social interaction prior to an F2F meeting, through activities such as short visits by key project members to a remote location, the introduction of a contact person to the dispersed team, support for language courses and the dissemination of clear communication procedures. These activities, often ignored prior to an F2F meeting in globally distributed teams, have been reported as key to establishing social and human contact and supporting the build-up of rapport between counterparts from remote sites. Regular meetings after F2F meetings, either virtual or in the form of short visits, will ensure the participation of project members over time. We also suggest that a variety of communication tools, such as phone, videoconferencing and email, be utilized to help maintain a high level of participation of project members and to enrich the quality of messaging involved in collaborative work.

    Lastly, from a strategic viewpoint, management should demonstrate strong commitment to addressing human-related issues in globally distributed system development projects and should dedicate resources that ensure the renewal of social relationships, as was done at SAP and LeCroy.


    Concluding remarks

    In this paper, the contribution of social ties and knowledge sharing to successful collaboration in distributed IS development teams has been explored. We conclude that in addition to technical solutions, human-related issues in the form of social ties and knowledge sharing were reported as keys to successful collaboration. In particular, the importance of rapport and transactive memory was evident in the studied projects. Furthermore, organizational mechanisms that create and maintain social ties between dispersed team members were reported in detail.

    The conclusions offered in this paper are based on an in-depth study of two companies, applying a qualitative, interpretive methodological lens. Additional methodological approaches may contribute to a further understanding of the relationships between social ties, knowledge sharing and successful collaboration in globally distributed teams. We propose that future studies conduct a survey across the IS industry in which the causal relationships between these three main concepts are further investigated.



    1. Adler PS and Borys B (1996) Two types of bureaucracies: enabling and coercive. Administrative Science Quarterly 41, 61–89.
    2. Al-Mushayt O, Doherty NF and King M (2001) An investigation into the relative success of alternative approaches to the treatment of organizational issues in system development projects. Organization Development Journal 19(1), 31–48.
    3. Andres HP (2002) A comparison of face-to-face and virtual software development teams. Team Performance Management 8(1/2), 39–48.
    4. Arino A, De La Torre J and Ring PS (2001) Relational quality: managing trust in corporate alliances. California Management Review 44(1), 109–131.
    5. Battin RD, Crocker R and Kreidler J (2001) Leveraging resources in global software development. IEEE Software (March/April), 70–77.
    6. Baumard P (1999) Tacit Knowledge in Organizations. Sage, London.
    7. Bechky BA (2003) Sharing meaning across occupational communities: the transformation of understanding on a production floor. Organization Science 14(3), 312–330.
    8. Bernieri FJ, Davis JM, Rosenthal R and Knee CR (1994) Interactional synchrony and rapport: measuring synchrony in displays devoid of sound and facial affect. Personality and Social Psychology Bulletin 20, 303–311.
    9. Carmel E (1999) Global Software Teams: Collaborating across Borders and Time Zones. Prentice-Hall, Upper Saddle River, NJ.
    10. Carmel E and Agarwal R (2002) The maturation of offshore sourcing of information technology work. MIS Quarterly Executive 1(2), 65–77.
    11. Cheng L, De Souza CRB, Hupfer S, Patterson J and Ross S (2004) Building collaboration into IDEs. Queue 1(9), 40–50.
    12. Child J (2001) Trust – the fundamental bond in global collaboration. Organizational Dynamics 29(4), 274–288.
    13. Datar S, Jordan C, Kekre S and Srinivasan K (1997) New product development structures and time-to-market. Management Science 43(4), 452–464.
    14. Dyer JH (2001) How to make strategic alliances work. MIT Sloan Management Review 42(4), 37–43.
    15. Ebert C and De Neve P (2001) Surviving global software development. IEEE Software (March/April), 62–69.
    16. Eisenhardt KM (1989) Building theories from case study research. Academy of Management Review 14(4), 532–550.
    17. Evaristo R (2003) The management of distributed projects across cultures. Journal of Global Information Management 11(4), 58–70.
    18. Faraj S and Sproull L (2000) Coordinating expertise in software development teams. Management Science 46(12), 1554–1568.
    19. Gabarro JJ (1990) The development of working relationships. In Intellectual Teamwork: Social and Technological Foundations of Cooperative Work (GALEGHER J, KRAUT RE and EGIDO C, Eds), pp 79–110, Lawrence Erlbaum Associates, New Jersey.
    20. Gallivan MJ (2001) Striking a balance between trust and control in a virtual organization: a content analysis of open source software case studies. Information Systems Journal 11(4), 277–304.
    21. Goodman PS and Darr ED (1998) Computer-aided systems and communities: mechanisms for organizational learning in distributed environments. MIS Quarterly 22(4), 417–440.
    22. Govindarajan V and Gupta AK (2001) Building an effective global business team. MIT Sloan Management Review 42(4), 63–71.
    23. Granovetter MS (1973) The strength of weak ties. American Journal of Sociology 78(6), 1360–1380.
    24. Grant RM (1996) Toward a knowledge-based theory of the firm. Strategic Management Journal 17(Winter), 109–122.
    25. Gremler DD and Gwinner KP (2000) Customer–employee rapport in service relationships. Journal of Service Research 3(1), 82–104.
    26. Grinter RE, Herbsleb JD and Perry DE (1999) The geography of coordination: dealing with distance in R&D work. In Proceedings of the International ACM SIGGROUP Conference on Supporting Group Work (GROUP '99). ACM Press, Phoenix, Arizona.
    27. Hendriks P (1999) Why share knowledge? The influence of ICT on the motivation for knowledge sharing. Knowledge and Process Management 6(2), 91–100.
    28. Herbsleb JD, Atkins DL, Boyer DG, Handel M and Finholt TA (2002) Introducing instant messaging and chat into the workplace. In Proceedings of the Conference on Computer–Human Interaction, pp 171–178, Minneapolis, Minnesota.
    29. Herbsleb JD, Mockus A, Finholt TA and Grinter RE (2000) Distance, dependencies, and delay in global collaboration. In Proceedings of the Conference on Computer Supported Cooperative Work, Philadelphia, PA, USA.
    30. Herbsleb JD and Mockus A (2003) An empirical study of speed and communication in globally-distributed software development. IEEE Transactions on Software Engineering 29(6), 1–14.
    31. Herbsleb JD and Moitra D (2001) Global software development. IEEE Software (March/April), 16–20.
    32. Hoegl M and Gemuenden HG (2001) Teamwork quality and the success of innovative projects: a theoretical concept and empirical evidence. Organization Science 12(4), 435–449.
    33. Jarvenpaa SL, Knoll K and Leidner DE (1998) Is anybody out there? Antecedents of trust in global virtual teams. Journal of Management Information Systems 14(4), 29–64.
    34. Jarvenpaa SL and Leidner DE (1999) Communication and trust in global virtual teams. Organization Science 10(6), 791–815.
    35. Kobitzsch W, Rombach D and Feldmann RL (2001) Outsourcing in India. IEEE Software (March/April), 78–86.
    36. Kraut RE and Streeter LA (1995) Coordination in software development. Communications of the ACM 38(3), 69–81.
    37. Lave J and Wenger E (1991) Situated Learning: Legitimate Peripheral Participation. Cambridge University Press, Cambridge.
    38. Majchrzak A, Rice RE, King N, Malhotra A and Ba S (2000) Computer-mediated inter-organizational knowledge-sharing: insights from a virtual team innovating using a collaborative tool. Information Resources Management Journal 13(1), 44–54.
    39. Malhotra A, Majchrzak A, Carman R and Lott V (2001) Radical innovation without collocation: a case study at Boeing-Rocketdyne. MIS Quarterly 25(2), 229–249.
    40. Meyerson D, Weick KE and Kramer RM (1996) Swift trust and temporary groups. In Trust in Organizations: Frontiers of Theory and Research (KRAMER RM and TYLER TR, Eds). Sage, Thousand Oaks, CA.
    41. Miles MB and Huberman AM (1994) Qualitative Data Analysis: An Expanded Sourcebook (2nd edn). Sage, Thousand Oaks, CA.
    42. Mockus A and Herbsleb JD (2002) Expertise browser: a quantitative approach to identifying expertise. In Proceedings of the International Conference on Software Engineering, pp 503–512, Orlando, FL.
    43. Nellore R and Balachandra R (2001) Factors influencing success in integrated product development projects. IEEE Transactions on Engineering Management 48(2), 164–174.
    44. Nelson KM and Cooprider JG (1996) The contribution of shared knowledge to IS group performance. MIS Quarterly 20(4), 409–432.
    45. Orlikowski WJ (2002) Knowing in practice: enacting a collective capability in distributed organizing. Organization Science 13(3), 249–273.
    46. Orr J (1991) Sharing knowledge, celebrating identity: community memory in a service culture. In Collective Remembering (MIDDLETON D and EDWARDS D, Eds). Sage, London.
    47. Palvia P, Mao E, Salam AF and Soliman KS (2003) Management information systems research: what's there in a methodology? Communications of the Association for Information Systems 11, 289–309.
    48. Sarker S and Sahay S (2004) Implications of space and time for distributed work: an interpretive study of US–Norwegian system development teams. European Journal of Information Systems 13(1), 3–20.
    49. Smith PG and Blanck EL (2002) From experience: leading dispersed teams. The Journal of Product Innovation Management 19(4), 294–304.
    50. Storck J (2000) Knowledge diffusion through 'strategic communities'. Sloan Management Review 41(2), 63–74.
    51. Strauss AL and Corbin JM (1998) Basics of Qualitative Research (2nd edn). Sage, Thousand Oaks, CA.
    52. Webster (1992) Webster's Dictionary. Oxford University Press, Oxford.
    53. Wegner DM (1987) Transactive memory: a contemporary analysis of the group mind. In Theories of Group Behaviour (MULLEN B and GOETHALS G, Eds), Springer-Verlag, New York.
    54. Weick KE and Roberts KH (1993) Collective mind in organizations: heedful interrelating on flight decks. Administrative Science Quarterly 38(3), 357–382.
    55. Weick KE, Sutcliffe KM and Obstfeld D (1999) Organizing for high reliability: processes of collective mindfulness. In Research in Organizational Behaviour (STAW BM and CUMMINGS LL, Eds), vol. 21, pp 81–123, JAI Press, Greenwich, Connecticut.
    56. Weitzman EA (2000) Software and qualitative research. In Handbook of Qualitative Research (2nd edn) (DENZIN NK and LINCOLN YS, Eds), pp 803–820, Sage, Thousand Oaks, CA.
    57. Yin RK (1994) Case Study Research: Design and Methods. Sage, Newbury Park, CA.


    Appendix A

    Company background

    Background of LeCroy and studied project

    Founded in 1964, LeCroy Research Systems is recognized as an innovator in instrumentation. LeCroy specializes in the design and production of oscilloscopes and other signal analyzer equipment. LeCroy employs more than 400 people worldwide and its 2003 sales amounted to $107.8 million. LeCroy's teams are located in New York (headquarters, manufacturing and software development) and Geneva (software development). The software development team, globally distributed between New York and Geneva, is described in this paper; there were about 10–15 people in Geneva and a similar number in New York. In particular, the Maui project ('Maui' stands for Massively Advanced User Interface) was investigated. The Maui project developed a software platform for new generations of oscilloscopes and oscilloscope-like instruments based on the Windows operating system.


    Background of SAP and Studied Project

    Founded in 1972, SAP is a recognized leader in software solutions. SAP employs nearly 30,000 people in more than 50 countries, with software sales of EUR 2,148 million in 2003. This case study focuses on the Knowledge Management (KM) Collaboration Group, which is part of the Enterprise Portal Division. The KM Collaboration Group develops a collaborative platform to foster teamwork. This Group consisted of four teams: two teams in Walldorf, Germany (10 people in each team), one team in Bangalore, India (six people) and one team in Palo Alto, USA (five people). Each team worked on a different part of the Collaboration project. The Collaboration project started in September 2001.


    Appendix B

    Interviewees' details

    LeCroy: interviewees' details

    Interviews were carried out between November 2001 and January 2003.


    SAP: interviewees' details

    Interviews were carried out between February and June 2002.

    Roles are correct for 2002. Interviewees were selected based on the criteria presented in the Research method and approach section. Interviewees were not selected based on gender; however, they all happened to be male because of the team composition.

    Appendix C

    Interview protocol

    1. Please tell me about your role and involvement in the project.
    2. Please describe the structure and division of work in your project across different sites.
    3. The use of media and collaborative tools:
      1. What tools do you use for collaboration:
        1. Which media and collaborative tools?
        2. Which software development/technical tools?
      2. Why did you choose these particular tools?
      3. Did the use of these tools have any impact on the level of collaboration between remote sites? How and why?
      4. What problems did these tools have? How did you solve these problems?
    4. Human- and socially-related issues:
      1. Please describe with whom you mainly collaborate within the project and across remote sites and explain why.
      2. Do human-related elements matter in collaborative work in these cases? Which ones and why?
      3. Did your project have socially-related activities to assist in collaboration across remote sites? What kind of activities? What was the impact?
      4. Were there any challenges related to human factors in this respect?
    5. Methodologies:
      1. Did your project have any methodologies (project management, product development) for collaboration across remote sites? Were they helpful?
      2. Were there particular challenges that negatively affected collaboration between sites?
    6. Coordination:
      1. What were the criteria for dividing work between the different sites in your project?
      2. How was the coordination of work carried out during the project?
      3. What organizational mechanisms were important for coordinating global work in your project?
      4. Were there particular problems in coordinating work across the different sites? What kind of problems and why?


    Appendix D

    Figure 2 presents the process through which codes, which are chunks of text that are partial or complete sentences or expressions describing specific activities (Strauss & Corbin, 1998), were associated with categories. A bottom-up, interpretive approach was used to associate codes with particular categories and concepts.

    Figure 2. Data sorting and linking.

    Interview transcripts were analysed using Atlas.ti software. Figure 3 illustrates how the data were analysed: in the statements analysed, codes were identified and grouped, and their association with categories (e.g. trust and rapport) as well as their corresponding concepts (e.g. social ties) were established.

    Figure 3. Example interview statements analysed according to codes and categories.

    Figure 3 also shows how relationships between concepts were established. The types of relationships examined were: 'lead to' (as shown in Figure 3), 'therefore' and 'in order to'. Given that these relationships were based on our interpretation and on interviewees' perceptions, a triangulation procedure was carried out by validating the relationships with counterparts from remote locations.
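    The concept-to-concept links described above can be represented as a small set of typed relations and queried for a given target concept. The links below are invented for illustration (they mirror the Atlas.ti link types named in the text, but not the study's actual data):

```python
# Hypothetical typed links between concepts: (source, relation, target).
# The relation labels are the three types named in the analysis.
links = [
    ("rapport", "lead to", "successful collaboration"),
    ("knowledge sharing", "therefore", "successful collaboration"),
    ("collaborative tools", "in order to", "knowledge sharing"),
]

def links_to(target, relations):
    """Return (source, relation) pairs that point at a target concept."""
    return [(src, rel) for src, rel, dst in relations if dst == target]

print(links_to("successful collaboration", links))
```

    Counting the pairs returned for a target such as 'successful collaboration' is how the 'Relationships with successful collaboration' column in Table 2 can be derived from the coded links.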



    Data collection in India was sponsored by a grant from the Netherlands Foundation for the Advancement of Tropical Research (WOTRO).


    About the authors

    Julia Kotlarsky is a lecturer in Information Systems in the Operations Research and Information Systems Group, Warwick Business School, U.K. She is completing her Ph.D. in management and information systems at Rotterdam School of Management, The Netherlands. Her main research interests revolve around social, technical and design aspects of globally distributed teams. Julia has written on this subject; her work has been presented at various conferences and published in the International Journal of Production Research.

    Ilan Oshri is Assistant Professor in the Department of Strategy and Business Environment, Rotterdam School of Management, The Netherlands. He holds a Ph.D. in strategic management and technological innovation from Warwick Business School, England. His main research interest lies in the area of innovation and the organization of the firm for innovation. Ilan has written extensively on this subject and his work has been published in several books and journals, including Management Learning and Knowledge Management Research and Practice.


    The turnaround of the London Ambulance Service Computer-Aided Despatch system (LASCAD)

    Guy Fitzgerald1 and Nancy L Russo2

    1. 1Department of Information Systems and Computing, Brunel University, Uxbridge, Middlesex, U.K.
    2. 2Department of Operations Management & Information Systems, Northern Illinois University, DeKalb, IL, U.S.A.

    Correspondence: Guy Fitzgerald, Department of Information Systems and Computing, Brunel University, Uxbridge, Middlesex UB8 3PH, U.K. Tel: +44 1895 266018; Fax: +44 1895 251686; E-mail:

    Received 4 May 2004; Revised 25 October 2004; Re-revised 25 November 2004; Accepted 19 July 2005.



    The implementation of the Computer-Aided Despatch system at the London Ambulance Service has been one of the most notorious cases of failure within the information systems (IS) literature. What is less well known is that there followed, some time later, a much more successful implementation, described as a turnaround. This paper, based on a case study approach, describes the context and detail of that implementation. A framework from the literature, used in an analysis of the initial failure, is used to analyse and compare the similarities and differences in the development of the two systems. The framework provides four interacting elements and relationships for analysis. These elements are Supporters, Project Organisation, Information System, and the Environment in which they operate. The turnaround system was found to address directly almost all the issues identified as problematic in the failure. These included the approach taken by management to understand the needs of users, including issues unrelated to the system itself, their involvement in the development process, an improvement in the availability of resources (brought about in some part because of the previous failure), the ability to follow a relaxed timeline driven by users' acceptance levels, the preparation of infrastructure projects to develop confidence, participation and prototyping, thorough testing, phased and simple implementation, and trust building. Certain environmental factors could not be so directly addressed but nevertheless were overcome by attention to detail and internal needs. Conclusions indicate that the factors addressed are not new and are to be found in the success literature. What is unusual is that they were implemented in this case in such unlikely circumstances.


    London Ambulance Service, Computer-Aided Despatch, information system, implementation, systems, success factors, failure, case study



    The London Ambulance Service (LAS) Computer-Aided Despatch (CAD) system (LASCAD) has become widely known as a prime example of an information systems (IS) failure (see, e.g., Beynon-Davies, 1995; Finkelstein & Dowell, 1996; Collins, 1997). The LASCAD 'crash' happened in 1992, hitting the newspaper headlines with suggestions that 20–30 people had died as a result and leading to the resignation of the Chief Executive (CE) (The Guardian, 1992; The Independent, 1992). Questions were asked in Parliament and a Public Inquiry was instigated. This was followed by intense media interest and further government enquiries. Subsequently, however, the LAS disaster and its aftermath faded from prominence, with little media coverage and few front-page stories. In 1996, a new LAS CAD system was implemented, with relatively little fanfare, which was very successful, enabling LAS to improve its performance substantially and to win the BCS (British Computer Society) award for Excellence in IS Management in 1997. Given the magnitude of the failure of the 1992 system, this was a significant turnaround, and this paper examines how such a transformation was achieved and what lessons might be learnt. The next section examines some of the issues of failure and success in the IS literature.



    Information systems failure and success

    Failure, including time and budget overruns, is an ongoing theme in the IS literature. For example, according to a Standish Group survey (Jiang & Klein, 1999) only 16% of IS projects are completed on time and within budget. Of the remainder, approximately 53% are over budget in terms of both time and money, and 31% of all projects are cancelled. Despite the fact that much has been written about IS success and failure over the years, there is no generally agreed definition of these terms. Lyytinen & Hirschheim (1987), for example, identify four types of failure: correspondence failure, process failure, interaction failure, and expectation failure. Correspondence failure refers to the failure to meet the objectives originally specified for the system. Process failure is when the system is not developed within time and budget constraints, or when the system is never implemented. Interaction failure refers to poor usage of a system, where the system meets technical specifications but fails to meet the needs of the users and is either partly or entirely unused. Expectation failure encompasses the others and refers to the 'inability of an IS to meet a specific stakeholder group's expectations'. Sauer (1993), however, criticises this definition of failure for being too broad. Expectation failure could be applied to unreasonable expectations, and does not take into account expectations that could not be known when the system was created. Under this category, a system could at one time be viewed as a success by one group of stakeholders and as a failure by other groups; indeed, Wilson & Howcroft (2002) argue that it is not even necessary for a technology to actually change for it to be perceived differently over time. For them, success or failure is a 'social accomplishment' dependent on the perspective of the subject and how legitimacy is ascribed to different voices. Sauer (1993) defines failure as having finally and irreversibly occurred when the level of dissatisfaction with a system is such that there is no longer enough support to sustain it. Similar to this is the definition used by Markus & Keil (1994), which takes failure to mean an unused system, not simply a system that does not live up to expectations.

    There are also various definitions of success. For some this relates to the benefits provided, and results obtained, through the use of the system. Ein Dor & Segev (1978) identified success definitions such as profitability, application to major problems, quality of decisions/performance, user satisfaction, and widespread use. Ives et al. (1983) identified several aspects of success, including system quality (decision-making performance, perceived quality), system acceptance, use or change in attitudes or behaviour. For others the success of an IS refers to qualities of the system itself, such as the timeliness, accuracy, and reliability of output (DeLone & McLean, 1992, 2003; Li, 1997), or to the users' satisfaction with the system (Bailey & Pearson, 1983; DeLone & McLean, 1992).

    Beyond the various definitions are debates about the causes of such success and failure. These fall broadly into two categories. The first relates to factors that are inadequately addressed, known as 'risk factors', or, at least in some sense, their opposite, that is, 'success factors'. Jiang & Klein (1999), for example, suggest that lack of system success can be related to risks inherent in the development process, including non-existent or unwilling users, multiple users, personnel turnover, inability to specify purpose, lack of management support, lack of user experience, and technical complexity. Sarkis & Sundarraj (2003), in the context of their study of a successful IS implementation at Texas Instruments, identify important lessons (or factors) that relate to strategic planning. These are: aligning IT with the business, top management support, addressing change management issues, rationalising business processes, identifying the importance of intangible issues, and focusing on metrics. These factors are frequently mentioned and are specifically identified by Sarkis and Sundarraj as having 'strong literature support'. Nevertheless, they suggest that they are often ignored in practice in the 'rush' to implement. Further factors identified include issues related to information characteristics and physical software design (Kirs et al., 2001), scope creep, lack of communication, and isolation of IT (Al-Mashari & Al-Mudimigh, 2003). A particular sub-set of the factors category is the importance of involving the users in the systems development process, which is frequently identified as predisposing IS projects to a greater chance of success (from the early work of Olsen & Ives (1981) through to more recent studies, for example, Iivari & Igbaria (1997), which takes a broader view of users, including organisational levels of users, task variety, and computing experience).
Other factors that might particularly relate to the success of a system include the relationship between the IS staff and the users (Li, 1997), users' confidence (Li, 1997), service quality and conflict resolution, organisation size, structure, time frame, organisational resources, maturity, project climate of organisation, responsible executive, and the existence of a steering committee (Ein Dor & Segev, 1978).

    The second category relates to the effects of broader organisational, social, and political elements and interactions. These effects go beyond individual success or failure factors, and in particular beyond purely technological factors. Markus (1983), for example, illustrated that user resistance to a new system was motivated by political interests rather than technological deficiencies. Sauer (1993) views the causes of failure as occurring in the inter-relationship between the system itself, the supporters, and the project organisation, as do Kanellis et al. (1999), who view success as 'a perspective that emerges from the social and technical interplay within an organisation'. In other words, success is not a one-dimensional concept, but instead is reflected in 'multiple perceptions influenced by context'.

    A number of attempts have been made to analyse the LAS failure. For example, Flowers (1997) utilises Critical Failure Factors, Collins (1997) identifies '10 steps' to failure, and Finkelstein & Dowell (1996) adopt a 'false assumptions' approach. These studies essentially adopt analysis frameworks that utilise various failure factors. Introna (1996) uses Actor Network Theory and identifies 'Episodic Circuits of Power', Beynon-Davies (1995) adopts Sauer's (1993) Exchange framework, and Wastell & Newman (1996) use a multi-perspective psychophysiology methodology. These three studies go far beyond the simple concept of technical failure and relate to the wider context of organisational, social, and/or political elements for their explanations and interpretations. Such approaches seem highly appropriate for the complex LAS case. Therefore, to structure the comparison of the LAS failure with the subsequent turnaround and reduce potential bias, the authors chose the Exchange Framework (Sauer, 1993), utilised by Beynon-Davies (1995), in his analysis of the 1992 LAS failure, to underpin the analysis of the case. Beynon-Davies' analysis adheres closely to the findings of the Public Inquiry (Page et al., 1993), which is also beneficial as it is the major source for most analyses of the LAS crash. The framework itself is relatively comprehensive, well known, and well referenced. Further, it encompasses some of the broader social, political, and organisational perspectives, characteristic of the second category of the literature identified above.

    The Exchange Framework (Figure 1) describes the development of an IS as dependent upon interactions between the project organisation, the IS, and the supporters, all within a particular environment. This forms a 'triangle of dependencies' where the IS depends on the project organisation, the project organisation depends on its supporters, and the supporters depend on the IS. If there are problems ('flaws' in Sauer's terms) in any of the factors or relationships, then that is likely to have a detrimental effect on the IS development project, leading possibly to failure or termination. The triangle is not a closed system, as each relationship is also influenced by external and environmental factors.

    Figure 1.

    Sauer's exchange framework (Sauer, 1993).


    Research methodology

    The research methodology followed in this study is that of a case study in which the authors investigated the situation and environment of the LAS and its new CAD system primarily via a series of interviews with key players. The benefits of the case study approach are the degree of breadth and detail that can be obtained in complex real-world situations (Galliers, 1992; Darke et al., 1998). Avison (1993) suggests that 'the strength of the case is... in its use for examining natural situations and in the opportunity it provides for deep and comprehensive analysis'.

    Interviews and visits were conducted over a period of 6 months, with some subsequent follow-ups to check queries and issues. The formal interviews were conducted using a semi-structured questionnaire designed to collect common information but allowing the interviewee the freedom to tell their own story, in their own words, and reflect on what had happened and on what they regarded as important. Formal interviews were conducted with five key players: Martin Gorham (CE), Avril Hardy (Training Manager), Ian Tighe (IT Director), Quentin Armitage (Systems Developer) and John Jennings (Sector Controller), and some were interviewed more than once. These people were chosen based on the objectives of the study and the key roles that they played in the development of the new LAS system. Of these, Gorham, Tighe, and Armitage were appointed after the 1992 crash, whereas Hardy and Jennings experienced both the old and the new system. All formal interviews were recorded and transcribed to ensure accuracy. Additional informal discussions were held with other people in LAS, in particular despatch staff on various shifts. Observation of the operations and control room was also undertaken. Despatch staff and sector controllers are the immediate users of the system with the ambulance teams being the resources that the CAD system despatches.

    Weaknesses of the case study approach are acknowledged. Common criticisms relate to the possibility of either researchers or interviewees, or both, biasing or unduly influencing the results. Interviewees who have played a key role in a development may wish for it to be seen in the best and most successful light. The authors were aware of this and attempted to cross-check data and perceptions wherever possible. Other criticisms relate to researchers having preconceived notions that result in them 'finding what they were looking for'. To minimise these potential problems, the authors interviewed the variety of people mentioned above, representing a range of managerial, operational, and user perspectives. Some interviewees reviewed versions of the case description to check that the data had been credibly interpreted from their perspective. Despite attempts to minimise these potential issues, the case nevertheless reflects the biases of the participants and the authors.


    The CAD system

    This section begins with a brief description of the 1992 crash and its aftermath. This is followed by a description of the development of the new system, introduced in 1996.


    The crash of the 1992 LASCAD system

    The LAS is the largest ambulance service in the world and covers an area of 620 square miles with responsibility for the seven million people who live in the area plus the many who commute or visit. The LAS comprises 70 ambulance stations, 700 vehicles (around 400 ambulances, plus helicopter, motorcycles, and other patient transfer vehicles), and over 3000 staff (including 670 paramedics and 300 control staff). On average, the Service responds to around 2500 calls per day (1500 of which are 999 emergency calls). The demand for emergency services has increased steadily over the years with an annual growth rate of around 15%.

    A new CAD system was introduced on the night of 26th October 1992 to replace the previous manual despatching system. According to Beynon-Davies (1999), 'a flood of 999 calls apparently swamped operators' screens. It was also claimed that many recorded calls were being wiped off screens. This in turn caused a mass of automatic alerts to be generated indicating that calls to ambulances had not been acknowledged'. Operators were unable to clear the queues that developed and ambulances that had completed a job were not always cleared and made available, with the result that the system had fewer and fewer resources to allocate. Finally, at 1400 hours on the 27th October 1992 the system was unable to cope and LAS decided to terminate the system and revert to semi-manual operation. Calls continued to be taken via the system but the incident details were printed out and allocation was done manually, followed by mobilisation of ambulances via the system again. This improved the situation and LAS was at least able to respond to emergency calls and continue to despatch ambulances. This failure became known in the U.K. media as the 'crash of the London Ambulance system'. In the context of the definition of failure discussion above, the 1992 CAD system did not crash completely, although according to the Public Inquiry the problems 'cumulatively led to all the symptoms of systems failure' (Page et al., 1993).

    As a result the CE of LAS, who had championed the 1992 system, resigned. The next day a new CE, Martin Gorham, was appointed. He had been in the NHS (National Health Service) for about 25 years, mainly in hospital management, and had been director of corporate planning for a large health authority. Despite the change of CE it was not long before further problems emerged. On 4th November 1992, the semi-manual system failed to print out calls and LAS was forced to revert to a fully manual, paper-based system, with voice or telephone ambulance mobilisation. The Times (London) of 5 November 1992 reported a 25 min delay in despatching an ambulance and senior management were forced to 'concede that the system could not cope with its task'. In operational terms LAS was now back where it was prior to the 1992 system.

    A Public Inquiry was set up by the government and its findings were published in February 1993. The Report (Page et al., 1993) was highly critical of the management of LAS. In relation to the programme of change, including the implementation of the CAD system, the report stated that '...the speed and depth of the change was simply too aggressive for the circumstances. Management clearly underestimated the difficulties involved in changing the deeply ingrained culture of LAS and misjudged the industrial relations climate so that staff were alienated to the changes rather than brought on board'. The report drew a series of conclusions and made recommendations for the future of LAS. Despite the significant problems experienced, it recommended that LAS continue to seek a computer solution for ambulance despatch, but that 'it must be developed and introduced in a time scale which, whilst recognising the need for earliest introduction, must allow fully for consultation, quality assurance, testing, and training' (Page et al., 1993). In relation to the management of LAS, a restructuring was recommended together with a range of new appointments. It was acknowledged that such recommendations had resource implications, and the South West Thames RHA (Regional Health Authority), now responsible for LAS, was encouraged to devise a financial strategy to achieve this.

    Gorham agreed that the LAS needed restructuring. He says, 'The simple fact was that the current structure was a complete obstacle to making progress. We didn't have the level of management resources that were needed. I think that's one of the reasons why my predecessor wasn't able to deliver what he set out to do. He just never had the amount of high level management resources you need to turn around a big high-profile, complex organisation, which had drifted 10–15 years behind the time.' Gorham implemented a four divisional structure and created an executive board, consisting of the CE, Finance Director, Director of Personnel, Operations Director, four Divisional Directors, and a Deputy who also managed the Control Room. Gorham also created a planning and an IT function with Ian Tighe appointed from the West Midlands Police as IT Director. Tighe in turn appointed Quentin Armitage as an IT developer.

    Meanwhile, the manual despatch system continued to operate, but problems were still being experienced, as highlighted by the case of Nasima Begum in June 1994. The 11-year-old had a liver condition, for which she was receiving regular treatment, but her condition worsened and, despite four emergency calls, she had to wait 53 min for an ambulance, only to die of renal failure. The tragedy was compounded by the fact that she lived only two minutes from a hospital and that the only available ambulance was sent elsewhere, to someone who did not really require the emergency service. Again, very bad publicity resulted, with the media attributing the death to the delay in despatching an ambulance (Collins, 1997).

    The Nasima Begum case resulted in another review of the Service, this time by William Wells, South West Thames RHA Chairman, on behalf of the Secretary of State. This review underlined the Page Report recommendations and introduced more initiatives. Further, in the first part of 1995 the House of Commons, through their Select Committee on Health, carried out a further inquiry into the Service and suggested 'that lives may well have been lost'.

    The new CAD system (1996)

    Gorham and the new LAS management team were under severe pressure to introduce a new computerised CAD system, but they felt that to do this quickly was likely to lead to at least some of the same problems that afflicted the 1992 system. They recognised that a manual system was not viable in the long-term and that a computerised solution would be necessary at some point, due partly to the increasing volumes of calls but also to meet new challenging performance targets required by the government for ambulance despatch.

    Thus, the approach adopted was one of continuing operation of the manual system, despite its problems, to buy time. Additional resources and extra staff were allocated to help the system function more efficiently. Meanwhile, Gorham attempted to build bridges with the LAS workforce, alongside a series of infrastructure improvements (known as warm-up projects), long deemed necessary but also calculated to help build confidence and trust before any new computerised system was publicly contemplated. These projects included replacing the electrical system, a new control room, a digital phone system, and upgrading of the ambulance fleet, with new vehicles and improvements to make conditions better for the crews. However, to the outside world this was perceived as inaction, as Ian Tighe, the new IT Director, reflected: 'Most observers were certain that change should come far quicker than it was, and at times it was very hard to resist the pressure...'.

    However, the successful implementation of the warm-up projects eventually enabled the LAS to conclude that the time was right to begin addressing the design of a new CAD system. It was decided to develop the system in-house and although a package-based solution was considered and evaluated it was rejected. A participative approach utilising prototyping was adopted to help involve the users and instil ownership and acceptance of the system. A very slow and deliberate approach was adopted that provided time for participation and iteration. A great deal of attention was paid to testing and training and the system was only implemented when it was felt to be ready, not just in a technical sense but when the users were convinced about its capabilities. Indeed the implementation date for the new system was delayed at one point as a result.

    In relation to technology, a new hardware platform was chosen, as the old system was essentially a PC architecture, which was not thought to be adequate for a command and control system. The new system was UNIX based with an Informix database supporting around 60 workstations. Two systems were implemented, each with mirror disks and data replication between the two, with the second system capable of running the entire system. According to Tighe, 'We took the safe solution, this has worked for 20 years in other emergency services, and we know it works'.

    The new system went live on 17th January 1996. After about a week of successful running the operation moved into the new control room. The initial system was a very basic one, enabling the operators to receive a call and enter the details of an incident directly into the system. The computerised Gazetteer, using a Postal Address File, looked up the location and provided a map reference within one second. The system provided the controller with information on which of the 150 or so deployment points around London was the nearest to the incident and the controller would then despatch an appropriate ambulance. The system additionally provided the location of the nearest hospital for the controller to pass to the ambulance crew. The new CAD system was implemented with few problems and it provided immediate benefits.

    After a short period further enhancements were introduced with the most significant being in September 1996 when 'early call viewing' was introduced. Once the call-takers had established the address of the incident that information was immediately made available to the controllers to begin the despatch process, that is, before the call had finished. For the first time an element of re-engineering of the original manual process had been implemented and the benefits of a computerised system demonstrated. According to Armitage, in the hour before this new phase was implemented 38% of the calls were despatched within 3 min, in the next hour it was 50%, and in the following hour 60%. Next an AVLS (Automatic Vehicle Location System) was implemented, providing real-time information about what resources were available, where the ambulances were, their status, etc.

    The result was that the annual performance rates improved significantly, as shown in Table 1.

    These improvements also need to be viewed in the light of increasing demand for the service; for example, in 1996/97 emergency 999 calls increased by 16% on the previous year.

    The impact in other terms was also impressive. As Armitage says, 'although there were one or two people who were still sceptical, I think the majority had confidence. They wanted a computer system, to move away from their antiquated procedures. Now they are desperate for more... it's very rewarding'. John Jennings, one of the sector controllers, says, 'there is no doubt about it, things really changed for the better. Gorham made a big impact on this Service and... improved it dramatically.' Hardy agrees, stating that now people have 'trust in the system, it very rarely goes wrong. It does what you want it to do and it's very simple to use'.

    Other indicators also suggest success. The number of complaints from the public dropped quickly after the implementation of the system, from 100 per month to about 25 and below over the following few months. The House of Commons Health Committee Report of December 1996 stated that they 'were struck not only by the technological improvements but also by the orderly and efficient atmosphere in the Central Ambulance Control. This contrasted strongly with the impression we had gained on our previous visit... which was of a Central Ambulance Control that was "not a pleasant environment in which to work, being noisy, overcrowded and claustrophobic"'. They went on to say: 'We warmly welcome the improvements in response times that the management and staff of the LAS have achieved... and the effective way in which new technology appears to have been introduced. We wish to congratulate both management and staff for their efforts in turning around an organisation which... was on "the brink of collapse" only four years ago'.


    Analysis and comparison of the 1992 and 1996 systems

    The above briefly outlines the 1992 LAS crash and the development of the new, more successful, system in 1996. This now forms the basis for analysis and comparison of the two systems and in this section they are analysed using Sauer's Exchange Framework as utilised by Beynon-Davies (1995) in his analysis of the 1992 LAS failure. The four main elements of the framework are: environment, project organisation, IS, and supporters. Table 2 lists these elements and the factors within these elements and summarises the findings from the analysis in 1992 and 1996. The dates of 1992 and 1996 are used to denote which of the two systems is being discussed, although these are just the implementation years and of course much of the development of each system was in fact prior to these dates.


    Project organisation

    The first element in the framework is Project Organisation, and Beynon-Davies identifies seven factors that contributed to the failure in this context. These are the inexperience of the developers, a history of failure, an over-ambitious project timetable, contractor problems, poor project management, incomplete software, and poor training. These are now examined in turn.

    Beynon-Davies (1995) and the Page Report comment that the 1992 developers had 'no previous experience of building despatch systems for ambulance services'. This resulted in insufficient attention being paid to the critical nature of the project, with the specification being 'poor and leaving many areas undefined' (Beynon-Davies, 1995) and the system being implemented without having undergone proper testing. This issue was directly addressed in the 1996 system, with experienced developers, who had knowledge of building command and control systems, appointed from the West Midlands Police. This appears to be a significant difference between the two developments and an important factor in the turnaround. This experience led the developers to approach the 1996 project in a significantly different way and to opt for a more robust technical infrastructure.

    The second factor is the 'history of failure' in LAS. Prior to 1992, there had been an earlier attempt to computerise the despatch system that was abandoned as an expensive failure. This may well have been an important factor contributing to the 1992 failure. However, in 1996 the history of failure was even greater, because of the traumatic 1992 crash. Thus, this presents a problem for the analysis because in 1992 it was seen as a flaw or problem, leading to failure and yet in 1996 it was seen as an important driver for success. The explanation is probably to do with the degree of failure. The pre-1992 abandonment was only an internal LAS issue, whereas the 1992 system was a national disaster and was at the forefront of everyone's mind and acted as a catalyst for not letting the same thing happen again, that is, the greater the disaster the more it pushes the developers to adopt the opposite approach to that taken previously. However, this may be difficult to identify in other studies of failure/success due to the unusually high level of visibility of this failure, and the resulting attention and resources devoted to its solution.

    The third factor is the 'over-ambitious project timetable' identified as a problem in 1992. In 1996, despite the enormous political pressures on Gorham and LAS to move quickly, they resisted, and a very cautious approach was adopted before publicly considering a new computer system. The developers of the new system recognised this problem with the 1992 development and went to the other extreme with a relatively relaxed timetable that was even further delayed at one stage.

    The fourth factor is the contractor problems that were identified in 1992. It was felt that LAS management had been misled over the contractors' development experience, and that there was confusion over the roles of the prime contractor and subsidiaries in the project. No such problems were encountered in 1996, as the development was undertaken in-house by LAS itself. This was a deliberate decision based on the problems experienced previously, and it ensured that there were very clear lines of responsibility. It also meant that LAS could control the project themselves; as Tighe states, 'We wanted to control the pace of change, we didn't want to be in the position that to carry out a simple function you had to know about ten others, because that would have changed the pace. We needed to dictate the agenda'. (This was also one of the reasons for rejecting a package solution.) Given the history, the system had to be acceptable to the staff and, according to Tighe, 'the only thing they would find acceptable is the thing that they invent'.

    The next factor was poor project management in 1992. It had originally been specified that PRINCE (Projects in Controlled Environments), a project management method, should be used but the contractors appointed did not know PRINCE and therefore it was not followed. Exception reports were never raised because of the fear of delivering bad news. In 1996, PRINCE was used as the method to drive the project, but on its own it would not have helped much, rather the culture needed to be changed to make it work. Tighe says, 'we had to convince people that if they saw a problem that they must not feel ashamed and feel that they have made a mistake and want to hide it. We had to instil the understanding that problems had to be aired, that it could be put right. This could only be achieved in relation to the development of the system if the culture of the whole organisation was also changing. The project management method was used throughout, so everyone understood it, the reports required, the project assurance team, the significance of it, etc. It began with the warm-up projects and thus people knew about it long before the computer system was developed'.

    The next factor was the 'incomplete software' and indeed the 1992 system had serious flaws in both the delivered system and the software development process. The most serious problem was the lack of adequate testing and the system failed, in part, because it could not handle the volume of calls required. In the 1996 system, testing was extensive including system testing, stress testing, and usability and organisational testing. The testing process was seen as critical in a command and control environment, but it was also recognised to be necessary for gaining the confidence of some, still sceptical, users. Users were involved throughout the process and their input was used to make modifications even at this relatively late stage. Armitage states, 'We did a lot of stress related testing on the system... we loaded the system up with 3000 jobs in an hour and at that time the busiest days were about 3000 jobs in the whole day. We let the users see this happen and they took heart, they saw the system working much much harder than it would ever need to, and surviving'.

    The final factor in relation to project organisation was the poor training of users in 1992. Although some training had taken place, it was deemed to have been too early and by the time the system had been implemented the skills had been forgotten (Page et al., 1993). In developing the 1996 system a great deal of effort was devoted to training, not just in relation to the computer system but training on all procedures. Hardy headed the training responsibility and she comments that 'it was a very strange experience... to be asked the question 'how long do you need to train people?', it was an absolute luxury compared to the time before when it was what can we cram into the short time that we've got left'. Indeed the need for full training was one reason why implementation was delayed at one point. Tighe adds, 'We weren't prepared to go live early... until we were all happy that the programs were fully tested, the system tested, integration tested and the users had tested and accepted and that we'd performance tested.'

    Thus, the factors relating to the element of Project Organisation in 1996 were quite different from those in 1992. Development was done in-house, with experienced developers following a well-established project management method. The developers were able to align the development activities with the specific needs and history of the LAS, with an appropriate timeline, an incremental process, and substantial user involvement.

    Information system

    The next element of the framework relates to the IS itself and Beynon-Davies (1995) identifies three factors of significance in 1992; the complexity of the system, communication and response time problems, and the frustration of the ambulance crews.

    The 1992 system had been seen as a way to automate the entire process with relatively little manual/user intervention. It was indeed complex, and substantially different from the previous way of working, with many new and different functions. Communication was also identified as problematic in the 1992 system, such as ambulance crews pressing the wrong buttons, or ambulances being in radio black-spots (Beynon-Davies, 1995), and this meant that incorrect information was sometimes used in allocation decisions. The misdirection of ambulances, the large queue of exception messages, and frustrated patients calling in multiple times all contributed to a 'vicious circle' leading to unacceptable response times. The 1996 system was in contrast deliberately straightforward. The initial system was a simple call-taking one, which was then built upon in stages, as each stage was understood and accepted.

    A second principle adopted for the new 1996 system was that it should be as close as possible to the functioning of the manual system. This was thought to be the best way to obtain buy-in and acceptance from the staff. As Hardy states, 'We wanted everything to be the same, the screen format was to follow as closely as possible the printouts on paper and the printed version of the call should also be the same format that would be used if you were hand-writing the form'. The past was influential here, with the 1992 crash system having had a very different look and feel to the manual system, which was thought to have caused unnecessary problems. It seems clear that these policies of simplicity, implementing in phases, and reflecting the manual system with which people were familiar enhanced the chances of success in this context. Further the system was tested thoroughly at each stage and was supported by a full back-up system that could take over immediately if any problem arose with the primary system. As a result, no response time problems were reported with the 1996 system and the performance was significantly improved.

    The final factor in this dimension relates to 'crew frustration' with the system. Beynon-Davies (1995) states that there was a belief that 'this may have led to an increased number of instances where crews failed to press the right buttons, or took a different vehicle to an incident than that suggested by the system'. The ambulance crews were very negative about the system in 1992, they felt that they had not been involved in its development, that they had not been listened to, and were very frustrated with its ineffectiveness in use. In 1996 as has already been indicated, these frustration problems were addressed in various ways, including the 'warm-up' projects, one of which not only upgraded the ambulance fleet but also included features to make the crews' job more comfortable, for example, air conditioning in the ambulances, an internal door, and portable radios. The crews, although some were still somewhat reluctant, were generally more amenable to the 1996 system than they had been in 1992.

    Thus, the element of IS in 1996 was also quite different to 1992. The system was very simple and straightforward, it reflected the manual way of working, and its development involved the users to overcome the previous resistance and frustrations. The result was a system that not only worked well from a technical perspective but also one that integrated well with the operators and users and with which they felt comfortable.

    Supporters


    The third element of the framework is that of Supporters of the system (Sauer, 1993). Beynon-Davies (1995) states that he prefers the term Stakeholders to Supporters, to indicate that some of those with interests in the system may be negative rather than supportive, and he identifies the staff of LAS as falling into this category. He identifies four factors in relation to the 1992 staff/stakeholders as follows: mistrust of management, low morale, lack of ownership of the system, and an anti-computer bias.

    In 1992, the then CE wanted to push through automation as quickly as possible, and in one go. The rank and file of the organisation were not behind him in his attempt to impose what was seen as a technological solution to a wider set of organisational problems, and there was a good deal of mistrust. The background of mistrust of management was partly the result of previous history, in particular an earlier pay dispute in which the London branch had held out against a national agreement, which had left relations between staff and management extremely poor. Also, the CAD system, with its automation of manual tasks, was seen as a threat to jobs. Further, Beynon-Davies (1995) quotes suppliers talking about their perceptions of LAS as exhibiting 'disorganisation, low staff morale, friction between management and the workforce, and an atmosphere of hostility towards computing systems'. Gorham concurs with this and says that some of the war stories he came across in his discussions with staff were 'frightening – this was nasty stuff' and 'it was an organisation that didn't respect individuals and that was a core problem'.

    There was clearly a very deep underlying mistrust of management in LAS and this had to be changed prior to any attempt to introduce a new computer system in 1996. As has been shown, Gorham attempted to build bridges with the workforce and demonstrate good faith. One important element of this was the warm-up or infrastructure projects, already discussed. These projects were undertaken with a great deal of care and effort was devoted to involving the staff in their development. For Tighe it was about asking, 'Why didn't this work before? The technological aspects had always spelt trouble but if we ignore that side, why did people reject it? How do we generate a different reaction? Can we create a different environment where people are actually more willing to accept that change is positive and that there might be something in it for them, as well as for everybody else'? One project reflecting this was the provision of portable radios for the ambulance crews so that they could be in communication with each other and the control room when away from their vehicles. This saved time but also was designed to improve staff security, with the addition of an emergency button, which gave something of benefit directly to the crews. Changing such a culture was a slow and laborious process, involving high levels of consultation and persuasion, but little by little perceptions changed and as each project was successfully implemented and seen to be delivering benefits the mood began to change. As Tighe states, 'people began to gain confidence in us, they saw that we actually did know a little bit about technology and implementation'.

    The participative approach adopted for the new CAD system was also important in re-establishing trust. One of the techniques employed initially was to have open forum sessions that anyone could attend. As Tighe recalls 'We constantly sat down with team and non-team members in open sessions where we pledged to answer any question as honestly as we could. We stood out at the front, as Directors, and they gave us hell, but we shared as much as we could of what was going on and our understandings. People wanted to know what was happening, what the view was on any topic, they wanted to know what would happen if you did this in this way, and that went on a lot'. There were a good number of these forum sessions, but attendance gradually dwindled: whereas initially the meetings would attract 30–40 people, in the end only two or three were turning up. This was interpreted by Tighe as the staff showing confidence and being happy to leave others to get on with it.

    The participative approach adopted included prototyping. The users would be presented with designs for comment and reactions. They were not expected to come up with formal specifications, they could just react to prototypes. The idea was to bring the users on board and give them confidence in the idea of computerisation and using the system. Initially, it was decided that the first part to be tackled would be the management of the resources, that is, the ambulances, their location, and deployment. This seemed sensible as it would deliver important benefits. Armitage, the project leader and also the main programmer of the system, recalls, 'I went away and produced the first prototype, not talking to users at all but having observed how the control room worked'. Although it might seem strange not to talk to the users, to begin with it was felt that a computer system was still such a contentious issue that it would be best to start with a prototype that people could see rather than asking them to provide a specification. The resources prototype was demonstrated to users and they said 'actually we'd like the computer to do the call-taking first, we're not so concerned about the resources. That can come later'. With some reluctance this was agreed and Armitage thinks that it was an important decision, 'I think to some extent it helped that we had gone down the wrong route (as the staff perceived it) and they said no, we want you to do something else, and we did. They saw that they could have considerable influence over the way things were done and that they weren't frightened to say 'oh we don't like it like that, we want it changed' and although it took time to build up the relationships it worked very well'.

    Hardy recalls group meetings to have a first attempt at designing the new screen layouts. 'A big group of us all sat there with the systems people who made a first attempt... and we would say that we'd like it to look somewhat different and they would make some changes, there and then, so that we could actually see it. This was one of those exercises where we went full-circle, loads of times. We'd all sit round the table with all the stakeholders there... Gradually we worked through it all until we were happy with the end result and this went an awfully long way to convincing people that things were going to be different this time'. Thus, people who wanted to be involved would participate in the working groups, while those who did not want to be directly involved could attend the Forums to hear what was happening and why.

    The degree of flexibility and response to users' comments and requests was significant, particularly in the early stages. Clearly, the system could have been developed much faster without striving for this consensus, but it was deemed to be the over-riding consideration. Hardy suggests that this was very painful at times, 'we would go around and around because you'd be presented with something and share it with the users. The users would mull it over and come up with all sorts of suggestions and then you'd come back again and review it... and it would just go on and on and on like that. You would often come full circle and be back where you started. But unless you go through that process you don't feel like you've been involved'.

    This participation, especially in the early period where the focus was on the call-taking, involved about 300 people, primarily the Control Room staff, rather than absolutely everybody. This 'Golden Circle', as it became known, has been criticised by McGrath (2002) for not involving the ambulance crews adequately and she suggests that some parties were deliberately kept out of this process, as they 'might challenge the legitimacy of the project'. Tighe confirms that at this point, the ambulance crews were not involved in the design of the call-taking system. He says that, firstly, it was felt unnecessary to involve those who were not directly affected and, secondly, that not everybody could be involved from a purely practical perspective. Further, he suggests that it was importantly about empowering those whose views had been particularly ignored in the 1992 system, that is, the Controllers.

    Thus, the element relating to Supporters in 1996 was also quite different to 1992. We have seen the impact of the activities undertaken by the new LAS management and the development team to address the identified stakeholder problems, that is, mistrust of management, low morale, lack of ownership of the system, and an anti-computer bias. In 1996, they involved a broad set of users in many different ways, ranging from communication with management via the open forums or directly with the CE as he participated in the operations of the organisation or through participation and prototypes. This helped contribute to the feeling of ownership and buy-in of users, and the increasing level of trust helped reduce the anti-computer bias of 1992.

    Environment


    The final element in the framework is the environment. Beynon-Davies identifies eight environment factors that contributed to the failure in 1992, including the poor NHS and labour relations background, the lack of a strategic vision for the organisation, the aggressive pace of change, the lack of investment in LAS, a 'fear of failure' on the part of management, and the assumption that changes in working practices could automatically be achieved by the use of information technology.

    The NHS reforms were clearly an important contextual factor in the 1992 development. Beynon-Davies states that 'A great deal of the shape of the LASCAD project was determined by the internal tensions within the NHS'. The government of the time was attempting to reform the NHS to make it more 'efficient and effective' with the establishment of NHS Trusts and the introduction of more market-oriented purchaser/provider relationships. These changes were highly contentious and seen by some as a threat to the very existence of the NHS. The NHS unions actively resisted these changes, and this had a detrimental effect on morale within LAS, resulting in a lack of support for the CAD project and an antipathy towards management.

    The 1996 development also took place in this environment, with the NHS still being 'reformed' with opposition and bad feeling within the NHS an ongoing factor. However, after the crash Gorham, the CE, had the reports of the various public inquiries into the crash to help in the argument for the provision of additional time and money. The inquiry reports also helped set a different agenda for LAS. Gorham's goal was to improve LAS, in terms of management, personnel, infrastructure, and efficient use of resources and the new CAD system was an important, but just a small part of that. Additionally, the 1992 failure meant that the government did not want another set of bad publicity, which could and would be used by its political enemies. This was of some considerable benefit to the 1996 development.

    The poor industrial relations of the 1992 development were, at least initially, still very bad in 1996. When Gorham took over he found himself immediately facing the staff union representatives who came into his office demanding that the computer system be switched off. Gorham admits he did not really know what to do. He had the unions on one side saying it must be shut down, and what was left of his management team telling him that to shut down would be the final management abdication. Gorham says 'there I was sat in the middle and didn't really understand what they were talking about anyway but I managed to buy some time being new in post'.

    Gorham used this time to talk to the union representatives and try and establish some kind of relationship and dialogue. He saw their role as crucial and needed them to be, 'if not completely supportive, at least not too antagonistic'. He tried to appeal to the trade unions by stressing that it was about the future of LAS rather than just the system. He felt that getting them to work together with him was not impossible because the unions were apparently somewhat shocked and a little surprised at the turn of events, particularly the resignation of the CE. They had seen him as the problem and now found themselves having 'won' that battle but not quite knowing what to do next. Gorham describes this as the unions 'no longer having this frame of reference, which was quite useful'.

    The fact that there was no overall IT responsibility in the NHS in 1992 has been highlighted as a factor by Beynon-Davies, and possibly, if there had been such a function, there might have been some IT strategy, standards, or controls in place that would have helped the 1992 development. However, such an overall IT responsibility in the NHS was still not in place by 1996 and the new system was developed without the benefits that such a responsibility might have provided. This seems to indicate that while an overall NHS IT responsibility might have been helpful, it was not a necessary condition.

    The next factor identified by Beynon-Davies as problematic was the 'lack of a strategic vision' in 1992. It is not clear whether Beynon-Davies meant an NHS vision or an LAS vision. Certainly there was no specific NHS vision, as has been discussed above, but the 1992 CE did have a strong vision for the LAS and the CAD system, and here we disagree with Beynon-Davies; there was a vision, it was just that it might be considered somewhat inappropriate. In 1996, there was also a strong strategic vision for LAS, on the part of the CE, but it was somewhat different and to be achieved in very different ways.

    In 1992, management's fear of failure was argued to be the reason that the CE drove forward with the system in the face of evidence, even prior to implementation, that the system was inadequate. The fear of being seen to fail was perhaps the reason that the implementation was not delayed when problems came to light. Ironically the failure was actually much more public and disastrous than any loss of face that would have occurred if the system had been delayed, or even abandoned, prior to implementation. In 1996, there was still a fear of failure and a desire not to be seen to lose face. The commitment to a new computerised system was contentious with the staff, and the management needed to make sure it was successful, but this time losing face was not about backing down in the face of staff and unions but about bringing those people on board.

    Beynon-Davies identified as an additional environmental factor in 1992 the desire of the CE to use IT as the driver for 'changing working practices'. It was thought that the automation of the process would be to a level where the staff and operators would not really be required to make any decisions and thus did not need to be much involved. IT was seen as a battering ram for process change, which resulted in staff fearing for their jobs. In 1996, there was also a desire to see changes in working practices in order to improve performance, but IT was used very gently rather than as the mechanism for pushing major change; it was not a threat to jobs, but part of a larger change process.

    Thus, it can be seen that when comparing the element of environment of the 1992 crash and the 1996 system, some important aspects were the same or at least quite similar (in the comparison of the previous elements significant differences were observed). Similarities can be seen in relation to the context of the NHS, which was still exhibiting internal tensions and continuing to be politically charged, with the government demanding modernisation of the NHS and LAS. Labour relations were also poor, at least initially in the 1996 development. However, there were some significant environment differences. The aggressive pace of change of 1992 was substituted with the very gentle, deliberate pace of 1996. The 1992 use of IT to drive changes in what were seen as restrictive working practices was replaced by the use of IT to more gently enable change, with the system closely reflecting the manual system, at least initially. In relation to investment in LAS, the impact of the 1992 crash was such that the government was prepared to temper its demands of LAS somewhat and to provide significant extra funding to make sure that such a disaster did not occur again. The Public Inquiry reports were also undoubtedly helpful to the 1996 development in terms of putting pressure on the government to address certain key issues. Thus, a mix of similarities and differences is found in relation to the element of environment.


    The above analysis is summarised in Table 2, with the 1992 factors, identified by Beynon-Davies (1995), compared with the development of the 1996 turnaround system. In the next section, some implications of this analysis are discussed.

    Discussion




    The analysis shows that almost all the problem factors of 1992 were directly addressed in 1996, mainly by adopting exactly the opposite or inverse approach. For example, the problems of complexity of the 1992 system were addressed by developing, at least initially, a very simple system, reflecting the structure and outputs of the existing manual system, and implementing in a staged manner. The factors that were not addressed or changed were typically those outside the direct control of LAS management, for example, the history of failure, the context of the NHS reforms, and the lack of an overall NHS IT responsibility. It is interesting that none of these factors that remained similar were sufficient to undermine or derail the success of the new system, although, as has been noted, the history of failure, resulting in the media interest and the Page Report, probably contributed significantly to obtaining the improved funding that enabled the 'warm-up' projects and the system itself.

    Overall the following general factors emerge from the analysis as of key importance: good project management; realistic (relatively relaxed) project timescales; use of experienced developers; a participative approach; prototyping; good training; extensively tested software; staged development and implementation; ownership by users and line management; strategic vision and buy-in from senior management; establishment of trust between staff and management; adequate investment for the project in hand; and a combined IT and wider organisational focus of the project. None of these are particularly surprising or indeed original, as many have been identified in previous studies (e.g., Iivari & Igbaria, 1997; Jiang & Klein, 1999; Kirs et al., 2001; Sarkis & Sundarraj, 2003); nevertheless, the case provides further evidence of their importance in relation to successful IS.

    This study set out explicitly to compare the 1992 and 1996 development environments based on Beynon-Davies' (1995) analysis of the 1992 failure, which utilised Sauer's Exchange Framework, and therefore for consistency our analysis followed this framework. This framework enabled an interesting comparison that particularly highlights the way in which the new development effectively addressed the identified problems of the old. It shows how flawed the 1992 development was, which is well known, and also how it was possible to overcome such flaws, which in 1992 seemed unlikely. It shows the importance of the four elements of the framework and their interaction. None of the factors discussed within each element can be said to stand alone, but instead relate to, and are influenced by, other factors in other elements. It also, in our view, shows the importance of the environment factors in framing the two developments and contributing to understanding the effect of these factors and thus how to address them.

    There are, however, some limitations to this approach. The framework, or at least Beynon-Davies' use of it, seems to focus more on factors than processes, although the authors have tried to consider both in this analysis. Also the authors do not always concur with Beynon-Davies' categorisations of factors within Sauer's four elements; nevertheless, his categorisations have been followed. This framework also prevents the case being presented chronologically, as the four elements have to be addressed separately, and it is hoped that this has not prevented the story of the 1996 development emerging in its own right.

    Conclusions



    Given the high cost of IS failure, the importance of determining failure and success factors cannot be ignored. This paper addresses issues of failure and success in IS by outlining the 1992 LASCAD crash and then describing and analysing the development of the 1996 turnaround system utilising a framework from the literature (Sauer, 1993, as used by Beynon-Davies, 1995) to compare the two cases in relation to factors that were identified as significant in the 1992 failure. The findings indicate that the failure factors identified in relation to the 1992 crash were, in the main, addressed successfully in the 1996 system. From this some specific and generic issues relating to successful systems development are suggested.

    The most important differences between the two processes relate to the IS, the project organisation, and to the supporters. Fewer of the environmental factors were different at the time the 1996 system was introduced. When the changes are examined, it becomes apparent that leadership and the understanding of the needs of staff, as opposed to the forcing through of change without consensus, were important. In general, there were a number of key people who were able to act in a way that reflected the overall ethos of the approach. Their presence was critical, as was the availability of resources and the ability to learn from previous mistakes. The analysis highlights the need for thorough testing and training and a sensible implementation deadline. It also shows the importance of taking into account the broader context, including the human element (supporters in Sauer's terms), into which an IS will be introduced.

    It is hoped that this study is interesting in its own right but also contributes to our understanding of IS success and failure. The study is one of a very small number of longitudinal examinations of a turnaround process. The application of this framework to other cases of IS development and indeed turnarounds would contribute further to our understanding of long-term success and failure of IS implementations.

    Whereas a single case study cannot be viewed as directly generalisable, the results do contain outcomes that are supported by various other studies and provide some implications for practice. The experience of LAS provides managers with a number of strategies for achieving successful IS implementation. Foremost is the recognition that turnaround is a gradual process, requiring an understanding of the context, including the system, the project organisation, the stakeholders, and the environment in which they interact.

    Notes



    1 LAS has continued to evolve since this time. Currently over 3000 calls are received a day and around 1 million emergency calls a year, with ambulances responding to over 800,000 incidents. Recent government performance measures for ambulance services have changed with a wider range of indicators now used. One of these relates to Category A incidents (immediately life threatening), and in 2004, LAS met the target with more than 76% of emergencies reached within 8 min.

    References



    1. Al-Mashari M and Al-Mudimigh A (2003) ERP implementation: lessons from a case study. Information Technology & People 16(1), 21–33.
    2. Avison DE (1993) Human, Organizational and Social Dimensions of Information Systems Development. North-Holland, Amsterdam, 496pp.
    3. Bailey JE and Pearson SW (1983) Development of a tool for measuring and analyzing computer user satisfaction. Management Science 29(5), 530–545.
    4. Beynon-Davies P (1995) Information systems 'Failure': the case of the London Ambulance Service's Computer Aided Despatch Project. European Journal of Information Systems 4, 171–184.
    5. Beynon-Davies P (1999) Human error and information systems failure: the case of the London Ambulance Service Computer-Aided Despatch system project. Interacting with Computers 11, 699–720.
    6. Collins T with Bicknell D (1997) Crash: Ten Easy Ways to Avoid a Computer Disaster. Simon and Schuster, London.
    7. Darke P, Shanks G and Broadbent M (1998) Successfully completing case study research: combining rigour, relevance and pragmatism. Information Systems Journal 8(4), 273–289.
    8. DeLone WH and McLean ER (1992) Information systems success: the quest for the dependent variable. Information Systems Research 3(1), 60–95.
    9. DeLone WH and McLean ER (2003) The DeLone and McLean model of information systems success: a ten-year update. Journal of Management Information Systems 19(4), 9–30.
    10. Ein Dor P and Segev E (1978) Organizational context and the success of management information systems. Management Science 24(10), 1064–1077.
    11. Finkelstein A and Dowell J (1996) A comedy of errors: the London Ambulance Service case study. In Proceedings of the Eighth International Workshop on Software Specification & Design IWSSD-8, pp 2–4, IEEE CS Press, Washington, DC, USA.
    12. Flowers S (1997) Information systems failure: identifying the critical failure factors. Failure and Lessons Learned in Information Technology Management 1, 19–29.
    13. Galliers RD (1992) Choosing information systems research approaches. In Information Systems Research: Issues, Methods and Practical Guidelines (Galliers RD, Ed.), pp 144–162, Blackwell Scientific, Oxford.
    14. The Guardian (1992) Ambulance Chief Resigns, 29th November, pp 1–2.
    15. Iivari J and Igbaria M (1997) Determinants of user participation: a Finnish survey. Behaviour and Information Technology 16(2), 111–121.
    16. The Independent (1992) Software Failure May be Behind Ambulance Crisis, by Susan Watts and Ian McKinnon, 30th October 1992, p 2.
    17. Introna L (1996) Management, Information and Power. Macmillan.
    18. Ives B, Olsen M and Baroudi JJ (1983) The measurement of user information satisfaction. Communications of the ACM 26(10), 785–793.
    19. Jiang JJ and Klein G (1999) Risks to different aspects of system success. Information and Management 36, 263–272.
    20. Kanellis P, Lycett M and Paul RJ (1999) Evaluating business information systems fit: from concept to practical application. European Journal of Information Systems 8, 65–76.
    21. Kirs JP, Pflughoeft K and Kroeck G (2001) A process model of cognitive biasing effects in information systems development and usage. Information and Management 38, 153–165.
    22. Li Y (1997) Perceived importance and information system success factors: a meta analysis of group differences. Information and Management 32, 15–28.
    23. Lyytinen K and Hirschheim R (1987) Information systems failures – a survey and classification of the empirical literature. Oxford Surveys in Information Technology 4, 257–309.
    24. Markus L (1983) Power, politics and MIS implementation. Communications of the ACM 26, 430–444.
    25. Markus ML and Keil M (1994) If We Build It, They Will Come: Designing Information Systems That People Want to Use. Sloan Management Review 35(4), 11–25.
    26. McGrath K (2002) The Golden Circle: a way of arguing and acting about technology in the London Ambulance Service. European Journal of Information Systems 11, 251–256.
    27. Olsen MH and Ives B (1981) User involvement in systems design: an empirical test of alternative approaches. Information & Management 4, 183–195.
    28. Page D, Williams P and Boyd D (1993) Report of the Public Inquiry into the London Ambulance Service. HMSO, London (referred to as the Page Report).
    29. Sarkis J and Sundarraj RP (2003) Managing large-scale global enterprise resource planning systems: a case study at Texas Instruments. International Journal of Information Management 23(5), 431–442.
    30. Sauer C (1993) Why Information Systems Fail: A Case Study Approach. Alfred Waller, Henley-on-Thames, Oxfordshire.
    31. The Times (1992) New Failings Force 999 Staff to Ditch Computers, by Tim Jones, 11th May 1992, p 6.
    32. Wastell D and Newman M (1996) Information systems design, stress and organisational change in the Ambulance Services: a tale of two cities. Accounting, Management & Information Technology 6(4), 283–299.
    33. Wilson M and Howcroft D (2002) Re-conceptualising failure: social shaping meets IS research. European Journal of Information Systems 11, 236–250.

    Acknowledgements


    We thank all those who participated in the case for all their time and effort, especially Ian Tighe who also helped facilitate the research.


    About the authors

    Guy Fitzgerald is Professor of Information Systems at Brunel University and is Director of Research in the School of Information Systems, Computing, and Maths. Prior to this, he was at Birkbeck College, University of London, Templeton College, Oxford University and Warwick University. He has also worked in the computer industry with companies such as British Telecom, Mitsubishi, and CACI Inc., International. His research interests are concerned with the effective management and development of information systems and he has published widely in these areas, including articles in European Journal of Information Systems, Journal of Strategic Information Systems, International Journal of Information Management, Communications of the ACM, and Journal of Information Technology. He is co-author, with David Avison, of the text Information Systems Development: Methodologies, Techniques, and Tools, and is co-editor of the Information Systems Journal (ISJ), an international journal, from Blackwell Publishing.

    Professor Nancy L. Russo is Chair of the Department of Operations Management and Information Systems at Northern Illinois University. She received her Ph.D. in Management Information Systems from Georgia State University in 1993. In addition to studies of the use and customisation of system development methods in evolving contexts, her research has addressed web application development, the impact of enterprise-wide software adoption on the IS function, IT innovation, research methods, and IS education issues. Her work has appeared in Information Systems Journal, Communications of the ACM, Journal of Information Technology, Information Technology & People, and other journals, books, and conference proceedings.




    The business model concept: theoretical underpinnings and empirical illustrations

    Jonas Hedman1 and Thomas Kalling2

    1Department of Informatics, School of Economics and Management, Lund University, Lund, Sweden
    2Institute of Economic Research, School of Economics and Management, Lund University, Lund, Sweden
    Correspondence: Jonas Hedman, Department of Informatics, School of Economics and Management, Lund University, Ole Römers väg 6, SE-223 63 Lund, Sweden. Tel: +46 46 222 46 03; E-mail:

    Received 13 December 2001; Revised 27 March 2002; Re-revised 26 July 2002; Accepted 15 October 2002.

    The business model concept is becoming increasingly popular within IS, management and strategy literature. It is used within many fields of research, including both traditional strategy theory and in the emergent body of literature on e-business. However, the concept is often used independently from theory, meaning model components and their interrelations are relatively obscure. Nonetheless, we believe that the business model concept is useful in explaining the relation between IS and strategy. This paper offers an outline for a conceptual business model, and proposes that it should include customers and competitors, the offering, activities and organisation, resources and factor market interactions. The causal inter-relations and the longitudinal processes by which business models evolve should also be included. The model criticises yet draws on traditional strategy theory and on the literature that addresses business models directly. The business model is illustrated by an ERP implementation in a European multi-national company.

    'Business model' is a term often used to describe the key components of a given business. It is particularly popular among e-businesses and within research on e-businesses (Timmers, 1998; Afuah & Tucci, 2001; Amit & Zott, 2001; Applegate, 2001; Cheng et al., 2001; Rayport & Jaworski, 2001; Weill & Vitale, 2001). Business models are even subject to patent law; Amazon.com, for example, has a patent for one-click purchase (Rappa, 2002). Within business research, the concept is used more sparsely, even if strategy research covers many if not all of the theoretical components that are included in the business model concept.

    The empirical use of the concept has been criticised for being unclear, superficial and not theoretically grounded (Porter, 2001). However, we believe that it has promise, one reason being that it could integrate disparate strategic perspectives such as the resource-based view (RBV) and industrial organisation (I/O). There are few integrative strategy models that unite finer aspects of strategy, such as resource-bases, activities, structure, products and external factors. In fact, strategists still tend to argue about what it is that makes companies successful, whether it is firm-internal resources (Barney, 1991) or successful reconfiguration of the value chain (Porter, 1985), or a well-implemented generic strategy (Porter, 1980).

    More importantly, a theoretically sound definition of the business model would also help the field of IS strategy research. Research into how IS improves strategies and provides competitive advantage has not sufficiently recognised RBV and the importance of the sustainability of advantage (Ciborra, 1994; Powell & Dent-Micallef, 1997; Sambamurthy, 2000). On a general level, it has been indicated that IS research tends not to be able to measure the bottom-line contribution of IS investments – the so-called IT productivity paradox (e.g., Strassman, 1985; Brynjolfsson, 1993; Shin, 2001). This, we believe, is partly related to the issues just mentioned and partly to the fact that IS does not always contribute to business performance. In order to contribute to performance, IS must be acquired cleverly, fit with other resources, implemented effectively, understood and used, and aligned and embedded with the organisation in a unique way. Any improvements in value chain activities must be materialised by an offering that increases customer-perceived quality and/or reduces cost. All these factors and their causal inter-relations need to be understood for any specific business model.

    The aim of the paper is to propose which components should be included in a business model, through which managers and researchers can understand the causal relation between IS and business. We use concepts from strategy theory, extend them with models and concepts from strategy-related IS research, present a conceptually generic business model, and illustrate it empirically.

    Strategy theory
    Strategy theory concerns the explanations of firm performance in a competitive environment (Porter, 1991). There are many strategy perspectives, but we shall focus here on three 'paradigmatic' perspectives: I/O, RBV, and the strategy process perspective. I/O and RBV are both interested in competitive advantage. However, their views on what competitive advantage is and on what it is based differ. While both RBV and I/O may be seen as content-based approaches (cf. variance theories in Markus & Robey, 1988) to strategic management, the process-based view on strategy focuses on the processes through which strategy contents are created and managed over time.

    Porter (1980) brought in the I/O perspective (Bain, 1968), by claiming that external industrial forces affect the work of managers. Substitute products, customers and suppliers as well as potential and present competitors determine strategic choices. The two 'generic strategies' are differentiation and low-cost. Porter's work was further developed in 1985, with the value-chain model, which focuses on the activities and functions of the firm, the underlying factors that drive cost and differentiation advantages. Thorough control and grouping of activities enable firms to utilise cost and differentiation potentials through the reaping of scale advantages or the creation of innovative forums. The Porterian framework has been used extensively within IS research. McFarlan (1984) suggests that IS can be used to manipulate 'switching costs', and erect 'barriers to entry'. Porter & Millar (1985) argue that IS can be used to enhance value chain activities to gain competitive advantage through low cost or differentiation. Further, IS can be used for cost rationalisation (e.g., automation) and for niche positioning (Rackoff et al., 1985). The models have been used in research into the role of IS in competitive pricing (Wiseman, 1985), and customer and partner relationship management (Johnston & Vitale, 1988; Ives & Mason, 1990).

    Already in the mid-1970s, a focus on the strategy process (rather than strategy content such as market positions and strengths and weaknesses) initiated criticism of the ex ante and normative approach of the strategy field (Mintzberg, 1978, 1994; Quinn, 1978). Uncertainty about the future leads to incrementalism, shorter planning horizons, less revolutionary strategic actions, and tentative moves. The pattern of action visible ex post makes up the 'emergent strategy' (Mintzberg, 1978). The focus on strategy content such as competitive position (or any other independent content concept, e.g. structure, size, degree of diversification, etc.) and its relation with performance became less interesting compared to research on how firms actually created the favourable positions over time. The independent variables of content research become the dependent variables in process research. The independent variables in process research are found in management- and organisation-related fields, including the acceptance of bounded rationality and the attention to the role of norms and values in formulation and implementation (Chakravarthy & Doz, 1992). The focal point of the process perspective is the management of cognitive and cultural constraints on strategic development and firm evolution (Whittington, 2000). The process perspective has progressed, focusing on the managerial function (Prahalad & Bettis, 1986; Ginsberg, 1994), and has also been combined with RBV (Amit & Schoemaker, 1993; Oliver, 1997). Process approaches are also applied in IS research (Robey & Boudreau, 1999) and viewed as 'valuable aids in understanding issues pertaining to designing and implementing information systems, assessing their impact, and anticipating and managing the process of change associated with them' (Kaplan, 1991, p. 593).
One of the first such process models in IS was the Nolan stage model (Gibson & Nolan, 1974; Nolan, 1979); recent developments include the MIT90s framework (Scott-Morton, 1990) and the strategic alignment movement (Henderson & Venkatraman, 1992). Recently, approaches combining a process approach and RBV have been applied to explain the processes by which organisations develop and utilise IS (Ciborra, 1994; Andreu & Ciborra, 1996; Kalling, 1999).

    Whereas I/O states that environmental pressure and the ability to respond to it are the prime determinants of firm success, RBV states that idiosyncratic and firm-specific sets of imperfectly mobile resources determine which firm will reach above-normal performance (Wernerfelt, 1984; Dierickx & Cool, 1989; Barney, 1991; Peteraf, 1993). RBV emphasises the characteristics of the underlying factors behind low-cost and differentiation and the value chain, that is, the resources of the company. The RBV literature holds numerous descriptions of resource attributes that render competitive advantage. Barney's typology (1991) summarises the main ones: value, rareness, and imperfect imitability and substitutability. A firm's resources are valuable if they lower costs or raise the price of a product. Certain resources have a better fit with certain organisations, and hence expectations, and value, are different depending on who is considering resource investment (Barney, 1986; Dierickx & Cool, 1989). A key RBV attribute is resource rareness, but a valuable, rare resource also needs to be costly to imitate or to substitute to sustain the advantage of the resource. A resource that could be acquired at an imperfect market price will only remain a source of advantage as long as competitors fail to realise and materialise the potential. A resource and its outcome can be imitated either by building/acquiring the same resource or by creating the same intermediate or final outcome with a different resource. The costs associated with imitation are driven by unique historical conditions, causal ambiguity, and the social complexity of resources (Barney, 1991). Using RBV in IS settings is becoming increasingly popular (Clemons & Row, 1991; Mata et al., 1995; Powell & Dent-Micallef, 1997; Andreu & Ciborra, 1996; Bharadwaj, 2000; Duhan et al., 2001; Wade, 2001). 
In an empirical analysis of IS-enabled competitive advantage at firms acclaimed for their pioneering role in IS usage, Kettinger et al. (1994) found that 'the pre-existence of unique structural characteristics is an important determinant of strategic IS outcomes' (p. 46). Frustrated over the inability of I/O to explain sustained advantages, researchers emphasised the difference between strategic advantage and necessity, and claimed that in order for IS to generate sustained competitive advantages, they need to be embedded with other unique resources. Interestingly, these researchers never saw IS as being able to generate advantage on its own, but only by facilitating other resources (cf. Powell & Dent-Micallef, 1997).

    The strategy field is fragmented, meaning there is no such thing as one theory of strategy. Proponents of the three perspectives coexist, which is possible since each focuses on different aspects of strategy. RBV occupies a more prominent role in strategy today than I/O, but RBV too has limitations. Critics point to the lack of empirical studies, the relative lack of process-orientation, and shortcomings in explaining hyper-competitive industries (D'Aveni, 1994; Foss, 1997; Williamson, 1999; Eisenhardt & Martin, 2000). Important criticism concerns the object of analysis: what, exactly, should be unique: the resource, its impact on activities, or the profit? Mosakowski and McKelvey (1997) and Chatterjee (1998) suggest that the relevant unit of measurement is the so-called intermediate outcome, for example, a product feature that increases quality or a swifter handling process, that is, something between the resource and the product offering and profitability. In addition, strategy process researchers criticise both RBV and I/O for neglecting the obstacles to strategic dynamics and management (Sanchez & Heene, 1997).

    In theory, the strategy concept means whatever phenomenon we subjectively attach to it, such as the choice of industry, industry position, customers, geographical markets, product range, structure, culture, value chain, resource-bases, and so forth. We believe, however, that it is possible to integrate the relevant components into one model, and below we shall review some of the research that attempts to do so. As a starting point, however, the three perspectives do offer a set of valuable concepts: customers and competitors (industry), the offering (generic strategy), activities and organisation (the value chain), the resource-base (resources) and the source of resources and production inputs (factor markets and sourcing), as well as the process by which a business model evolves (in longitudinal processes affected by cognitive limitations and norms and values).

    Business model literature
    Business research
    One comprehensive, yet neglected, text on business strategy is Porter (1991). Porter claims that the low-cost and differentiation advantages that firms enjoy on the product market ultimately stem from 'initial conditions' and 'managerial choices'. Decisions taken affect the so-called drivers (resources or properties such as scale and scope), which are acted upon in activities, which in turn enable low cost and/or differentiation. These enable specific strategic positions in markets/industries, allowing, potentially, for firm success. It is not referred to as a business model, but it incorporates many features that could be included in such a model. Porter is not specific about the contents of the components, but the model summarises his previous models and adds the causal relations between initial conditions and managerial choices and firm success. Inherent in this model is also the strategy process, as the managerial choices are seen as taking place in a longitudinal dimension; the model is thus a response to criticism from the process perspective field (Mintzberg, 1978). The model encompasses both RBV and I/O, and highlights the complementary nature of the two viewpoints – a complementarity based on causality. So Porter's integrative causality model is also a response to the criticism from RBV. Ironically, Porter's criticism of the business model concept (2001) could be resolved by using his 'causality chain' model (1991).
    Others have described conceptually similar models, including Normann's work on the business idea (1977, 2001). Normann used the business idea concept, which distinguishes between three different components: (1) the external environment, its needs and what it values; (2) the offering of the company; and (3) internal factors such as organisation structure, resources, knowledge and capabilities, systems, and values. The concept is systemic in nature and the relation to the external environment depends on the offering, which in turn is dependent upon firm-internal factors.

    Much of the research within entrepreneurship is free from the RBV–I/O dichotomy and inherently longitudinal and process-orientated in nature. These approaches normally focus on the evolution of entire businesses and therefore often use concepts such as 'business models'. McGrath & MacMillan (2000) include 'the way an organisation organises its inputs, converts these into valuable outputs, and gets customers to pay for them' in the business model concept. Schumpeter (1934, 1950) stated that entrepreneurial innovation included the combining of previously disconnected 'production factors', and could result in new markets and industries, products, production processes, and sources of supply, all being potential business model components. Eisenhardt & Sull (2001) suggest that the source of advantage is found in the position a company takes on the product market, in its resource base or in the key processes – all of which could be referred to as components of a business model. They claim that in the rapidly changing, ambiguous markets, the focus is more towards processes and, most importantly, the 'simple rules' that guide the key processes. The robustness that comes with a strategy based on resources and positions makes it difficult to act rapidly. Growth, rather than profit, is the ultimate objective of these fast-moving firms.

    E-business research
    As stated earlier, the business model concept is often used in e-business research. Cherian (2001) identified 33 types of e-business models, Applegate (2001) classified 22 e-business models, and Timmers (1998) listed 11 specific e-business models. E-business model research, empirical or conceptual, can be organised around two complementary streams. The first stream aims to describe and define the components of an e-business model. The other stream aims to develop descriptions of specific e-business models.

    Timmers (1998, p. 4) defines an e-business model as: 'An Architecture for the products, service and information flows, including a description of the various business activities and their roles'. Weill & Vitale (2001) present a similar definition: 'A description of the roles and relations among a firm's consumers, customers, allies, and suppliers that identifies the major flows of product, information, and money, and the major benefits to participants.' Amit & Zott (2001) presented three components of e-business models, including content (exchanged goods and information), structure (the links between transaction stakeholders), and governance of transactions (the control of the flows of goods, information and resources). Afuah & Tucci (2001) presented a list of components including customer value (distinctive offering or low cost), scope (customers and products/services), price, revenue sources, connected activities, implementation (required resources), capabilities (required skills), and sustainability. Their list is applicable to both e-business models and conventional business models, but addresses neither causality between components nor processes of change. Applegate's (2001) business model framework, based on an I/O logic, consists of three components: concept, capabilities, and value. The business concept defines a business market opportunity, products and services offered, competitive dynamics, strategy to obtain a dominant position, and strategic option for evolving the business. Capabilities are built and delivered through its people and partners, organisational structure, culture, operating model, marketing and sales model, management model, development model, and infrastructure model. The value of a business model is measured by its return to all stakeholders, return to the organisation, market share, brand and reputation, and financial performance. 
The difference between industrial-age business models and e-business models lies in the different business rules and assumptions about how business is done (Applegate, 2001). A summary of components is included in Appendix A.

    The other stream of research on e-business models aims to describe specific business models, which explain how businesses use the Internet to interact and how value is created for customers and other stakeholders (Applegate, 2001). Weill & Vitale (2001) define eight finite e-business models (direct customer, full-service provider, intermediary, whole of enterprise, shared infrastructure, virtual community, value net integrator, and content provider) based on a systematic and practical analysis of several case studies. They show how each model works in practice, including how it makes money and the core competencies and critical factors required. Timmers (1998) and Rappa (2002) state that there is no single comprehensive taxonomy for classifying e-business models, yet they list a range of different e-business models. Applegate (2001) presents five general categories of business models and 22 specific types of e-business models. This classification is based on generic market role (suppliers, producers, distributors, and customers), digital business (whether or not the business is dependent on the Internet), and platform (whether or not the business is a provider of the infrastructure upon which digital business is built and operated) (see Appendix B).

    Even if concepts differ in e-business research, the ideas are similar and can be related to strategy theory. This research provides useful descriptions of business models, but could benefit from a broader use of strategy theory, which would provide more content as well as clearer coherence in terms of causality. Furthermore, these models are based on e-business, not business in general.

    A business model proposal
    Based on the above review of the widely ramified literature, we would propose a generic business model that includes the following causally related components, starting at the product market level: (1) customers, (2) competitors, (3) offering, (4) activities and organisation, (5) resources, and (6) supply of factor and production inputs. These components are all cross-sectional and can be studied at a given point in time. To make this model complete, we also include a longitudinal process component (7), to cover the dynamics of the business model over time and the cognitive and cultural constraints that managers have to cope with. In Figure 1, we refer to it as the scope of management.

    Figure 1. The components of a business model.
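The seven components listed above can be sketched as a simple data structure. This is a purely illustrative sketch: the class and attribute names are our own labels for the model's concepts, not part of the model itself, and the example instance is a caricature of the IKEA illustration used below.

```python
from dataclasses import dataclass, field

@dataclass
class BusinessModel:
    """Illustrative sketch of the seven-component business model.

    Components (1)-(6) are cross-sectional; (7) is longitudinal.
    All names are hypothetical labels, not the paper's terminology.
    """
    customers: list[str] = field(default_factory=list)                    # (1)
    competitors: list[str] = field(default_factory=list)                  # (2)
    offering: str = ""                                                    # (3) quality/price position
    activities_and_organisation: list[str] = field(default_factory=list)  # (4) value chain and structure
    resources: list[str] = field(default_factory=list)                    # (5) human, organisational, physical
    factor_market_inputs: list[str] = field(default_factory=list)         # (6) capital, labour, raw material
    management_processes: list[str] = field(default_factory=list)         # (7) longitudinal scope of management

# A caricature of the IKEA illustration discussed in the text
ikea = BusinessModel(
    customers=["young families"],
    competitors=["craft-orientated furniture makers"],
    offering="low-cost furniture",
    activities_and_organisation=["design", "sourcing", "storing", "retailing"],
    resources=["design skills", "supplier relations", "cost-focused culture"],
)
```

The point of the structure is only that the components are distinct yet belong to one object: studying any one field in isolation misses the causal relations between them.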

    The model integrates firm-internal aspects that transform factors to resources, through activities, in a structure, to products and offerings, to market. The logic is that in order to be able to manage industrial forces and serve the product market, businesses need activities, resources and input from the factor market (capital and labour) and the supply of raw material. For instance, IKEA's low-cost strategy clearly gave them a unique position in relation to craft-orientated furniture competitors and customer segments such as young families. The low-cost strategy is based on effective value-chain configuration (design, sourcing, storing and retailing) based on scale and strategic locations. The value chain, in turn, is based on resources such as design skills, supplier relations, sourcing networks, and cultural factors like strong commitment and leadership visibly enforcing cost effectiveness.

    The same resource-base and value chain can produce different products and hence have a scope of different offerings, but at some point during diversification, new activities are needed and potentially also new resources, thus forcing the development of business models. With this view, even a non-diversified firm can have many different business models. However, the more profound the differences between products, the higher the probability that the businesses are organised independently of each other.

    There are causal relations between the different components. In order to serve a particular customer segment and compete with the forces within that segment, the offering must have a favourable quality/price position. In order to achieve this, firms need to offer customer-perceived quality of physical product features and service, which in turn requires effective configuration and execution of value chain activities and organisational structure (efficient communication and division of labour and authority). This requires human, organisational, and physical resources that have to be acquired on factor markets and from suppliers of production inputs. Although not depicted graphically, external actors are potential partners or competitors in all aspects of the business: in the bundling of products, in activities and in the configuration of resources. Change can arise through both exogenous and endogenous processes. A poor offering (too high a price relative to quality) may initiate change programmes that result in reformed activities and a reconfigured resource base, but it can also work the other way: firms take stock of their resource base and may find new ways to combine resources, and new ways to dispose of activities as a result of resource modifications. This can result in new offerings and improved market positions. So change can take either direction, and the depth of change will vary. What is important, though, is the realisation that whatever the modification, it will affect other components of the model.
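The claim in this paragraph, that change can run in either direction along the causal chain and that a modification to one component touches the others, can be sketched as follows. This is a toy illustration with hypothetical names; the model itself prescribes no algorithm.

```python
# Causal ordering from factor markets up to the product market,
# following the chain described above. Change can travel both ways.
CAUSAL_CHAIN = [
    "factor_markets",
    "resources",
    "activities_and_organisation",
    "offering",
    "market",  # customers and competitors
]

def affected_components(changed: str) -> list[str]:
    """Every other component a change potentially touches.

    Because change can run market-downward (a poor offering forces
    activity and resource reconfiguration) or resource-upward (new
    resource combinations yield new offerings), every component other
    than the changed one is potentially affected.
    """
    if changed not in CAUSAL_CHAIN:
        raise ValueError(f"unknown component: {changed}")
    return [c for c in CAUSAL_CHAIN if c != changed]

def downstream(changed: str) -> list[str]:
    """Components reached when change propagates toward the market."""
    i = CAUSAL_CHAIN.index(changed)
    return CAUSAL_CHAIN[i + 1:]
```

For example, `downstream("resources")` yields the activity, offering, and market levels, mirroring the resource-upward direction of change described in the text.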

    The business model has to be managed and developed over time. This is how the process perspective is included. The model can be studied in a cross-sectional dimension (the causal dimension, vertical in the outline of the model) but it also evolves over time (the longitudinal dimension, horizontal in the outline of the model) as managers and people on the inside and customers and competitors on the outside continue to evolve. These processes include the bridging of cognitive, cultural, and political obstacles, issues that managers deal with on a regular basis for all components of the model; all three perspectives are needed in order to understand the factors of success and failure. Resources must be acquired, activated, and organised in a way that improves the cost and quality of the offering in relation to customer preferences and competitors.

    An illustrative example
    Below we shall illustrate the business model and its components by discussing the experiences of ABC Multinational Manufacturing (anonymised) when implementing an ERP application. ABC is a B2B operation and is one of the largest European suppliers within its industry, with a sales turnover of roughly 4 billion Euros per annum. Having grown dramatically during the 1990s, they have developed into a company with more than 200 plants, represented in almost all European countries. The structure of ABC is geographical, with each geographic region holding 10–30 production units, each of which is run as a profit centre and relatively self-sufficient.

    In 1991, ABC decided to develop an ERP system. The prime reason was a desire to reduce costs in activities such as customer service, order entry, production planning and logistics, and to improve service in terms of customer response, and delivery performance and accuracy. ABC was also keen on replacing a broad range of legacy systems in place across units. ABC was aiming for both cost reductions and service differentiation.

    Resource level
    Together with consultants, and with the assistance of business experts across the organisation, ABC specified the functionality they wanted. It was summarised in three modules: sales, manufacturing, and logistics. No vendor could deliver exactly according to specifications, and ABC decided to cooperate with the one closest to the original specification to get the desired functionality. The contract was signed in mid-1994. Since top management had stated that 'the system should give competitive advantage', ABC initially tried, and failed, to restrict further sale of the system to competitors. The software was not implemented until 1997, when the sales module was piloted in two plants. The other two modules were implemented in 1998 and 2000, respectively. In terms of impact on the resource level, first of all, the system cost several tens of millions of Euros in licences, hardware, software, and consulting. The system also radically challenged existing knowledge required to conduct business tasks. The increased influence of customer service, the perceived lack of control that followed from the fact that data are entered only once and have to be correct, and the increased visibility of data on performance, were aspects of the system that influenced the existing resource base.
    Activity and organisation level
    It turned out, perhaps not surprisingly, that using the system was not easy. Many plants struggled for a long time to get the system to work reliably. As intended, the value chain activities of customer service, order entry, production planning, and logistics planning got new tools to work with that were radically different from what they had. As a consequence, the system was not always used very effectively. Orders were entered incorrectly, and planners did not trust the automatic planning and had to spend more time than previously, doing it both on paper and on screen. Manually entered data were not always accurate, meaning a lot of control-related work had to be done. Semi-automating production planning also disrupted the manual routines that had been used successfully since the birth of ABC's industry. The level of systems usage differed between plants: the worst plants struggled to make operations reliable, and reported reduced operational performance with the new system. They needed to work harder, and in some cases more staff were hired. Other plants, after the initial hardships, brought operational performance (for instance, number of complaints, late deliveries, orders entered/full-time equivalents and capital costs for stock) back to the level they had before the system was installed. To these units, the system did not significantly improve the value chain activities. The successful plants, however, managed to tackle the initial problems and implemented change programmes that helped them improve operational performance metrics, such as time spent per job, stock turnover, and complaints. They used the system, if not optimally, then at least at a level above previous performance, resulting in improved activities.
    Offering level
    Since so many plants failed to use the system in a way that improved activities, few plants did actually improve their offering in terms of quality or cost. Those plants that improved value chain activities and still failed to improve profits suffered from two problems: the improved activity either increased costs elsewhere in the value chain, or reduced the customer-perceived quality of the offering. For instance, one plant reduced staff in customer service because of the automation of order entry processes, but this triggered extra work in finance, since invoices had to be checked regularly for any mishaps. Other units simply could not realise improvements in operations: managers claimed that rather than making people redundant they trusted business would grow and thus require more staff; some managers claimed they simply were not able to demonstrate or communicate to customers that the quality of service had been improved and that price increases were justified. Those plants that improved individual activities and their overall performance actively searched for opportunities to cut costs and increase sales volume or price, were effective in communicating to customers that they would get better service, and made sure ABC was being paid for stock keeping. They undertook organisational change programmes, including process re-engineering and structural change, made people redundant, and optimised logistics.
    Market level
    Those units that improved their offerings achieved at least temporary competitive advantage, since the improvements took some time and effort for competitors to respond to. The system was unique, and so were the cost efficiency and quality of the supply chain activities. Other actors had, like ABC historically, focused their attention on production rather than administration or service, meaning ABC had first-mover advantages, albeit for a short period. The initial strategic intent of the system was to differentiate supply chain management, and a few plants that used the system and improved the offering did succeed with that objective.
    Management processes
    Moving from a business model without ERP to one with ERP was difficult for ABC, at least for certain plants. Cognitively, it required learning a new system, how to improve work tasks, and how to convert those work task improvements into improved performance. For individuals, it required that the knowledge base be expanded: top managers had to couple their strategic insight with knowledge about detailed operations and technology. Operative experts had to learn about technology and to put things into a strategic perspective. The move also required the management of culture; making users and middle and local managers favour the new system was not easy, since it forced major rethinking of existing ways of doing business, ways that often were healthy and profitable. Managing the cultural side also required strong communications of the strategic purpose of the system; ABC management reasoned that rather than directly controlling usage and challenging the decentralised structure, they would make users and profit centre managers understand the strategic purpose of the system. This gave mixed results, with some plants requiring much stronger incentives than profit responsibility to actually use the system and ensure that financial performance was improved.
    The validity of the business model construct will be discussed in terms of its integration (logical coherence), relative explanatory power and relevance (Glaser, 1978).

    Business model integration
    The resource (the system), the activation of the resource (activities), and the quality and cost of the offering in the light of competition are central factors that need to be understood and managed in order for IS investments to generate profit. In certain instances, systems are simply installed, not used – the business model is affected only on the resource level. Even if they are used, they may not be used effectively. And even if they are used effectively, they may affect other activities negatively. Even if they improve profit, they might not create competitive advantage, since competitors could imitate.
    An IS application is a potential resource. Bringing it in means the resource base is altered, and that factor market sourcing skills are needed. Large amounts of capital are traded for the new resource. However, bringing the system in is not a difficult task: it normally 'only' requires a financial commitment by decision makers. Anybody with a reasonable amount of cash or credibility among banks is able to buy an IS system.

    What is more difficult, though, is to use the system. The business model construct acknowledges that IS resources may not always be used optimally. Reasons behind non-optimal use include lack of knowledge, incentives, and aspiration. Whether a system is used well can be gauged by measuring improvements in operational performance: time spent on work tasks (for instance, response time for a customer enquiry, production planning, or design), stock level reductions, accuracy of accounting, and customer complaints, all depending on the functionality of the system.

    However, even if a system is used well, it is not certain that profits improve in terms of the cost and price of the offering. Operational improvements on the cost side may have negative effects on customer-perceived quality and vice versa. Improvements in one activity may affect another negatively. Furthermore, improvements may not result in improved profits if managers and users are unable to capitalise on the changes made. Failure to make staff redundant, failure to source in a way that realises stock reductions, and failure to prove to customers that quality has been improved result in unaltered profitability. This connects with the market level: if the customer base does not favour the new offering – in view of competing offerings – sales will not improve. Again, knowledge, aspiration, and incentives are required. Being able to orchestrate improvements in individual activities in a way that prevents negative effects elsewhere is important, underlining the need for a strategic perspective. The ultimate metric for improvement of the offering is improved profit.

    Whether IS-based profit improvements render competitive advantage depends on the ability of competitors to imitate the offering improvement with equivalent resources. An investment in an application that reduces costs might be imitated by a competitor using another system, using the same system more efficiently, or using a non-IS resource.

    Apart from the causal inter-relations, the business model also includes a longitudinal component. Moving from a business model without IS to one that uses IS successfully is not simply a matter of buying a system, but of making sure that activities and the quality and cost of the offering are improved. If not, the only change brought about is the creation of an idle, costly resource. If firms are unsuccessful in identifying, developing and using IS to improve activities in a way that is visible in the profit statement, nothing significant will happen to the business model. This process involves the management of knowledge, norms and values, aspiration levels, and organisational incentives.

    The business model in comparison
    The business model is characterised by an integration of various theoretical perspectives, including both variance and process theories (Webster & Watson, 2002), and addresses the interdependency between the components of the business context of IS. There are other studies addressing the same issue within both IS and strategy research. IS research (Scott-Morton, 1990; Brynjolfsson, 1993; Mata et al., 1995) has been based on a deterministic view of IS, meaning IS is studied with a content approach, yet it still fails to present causalities between IS and performance. Furthermore, changes over time in the business model components are neglected (Markus & Robey, 1988; Robey & Boudreau, 1999). Within strategy research, Porter's causality chain model (1991) offers a similar approach, but the model described here is clearer on resources and organisational processes. Normann's models (1977, 2001) are not detailed enough about causalities and the finer aspects of the business model. Entrepreneurship research is not clear about business model components and their causalities. Eisenhardt and Sull's (2001) strategy approaches are, if integrated, similar to the business model concept presented here. However, their proposal that certain components of the business model are more important during certain life cycle phases or within certain environments seems a little hard to digest. The debacle of Enron, one of the success cases referred to, proved that strategic management is much more than 'simple rules for key processes'. E-business research provides formal descriptions of how to conduct business and make revenues over the Internet (Rappa, 2002), but it has several shortcomings: for example, it does not address competition, causality between the components, or longitudinal management processes. Furthermore, such models lack a theoretical grounding, a notable exception being Amit & Zott (2001). The specific e-business models can be viewed as empirical examples of business models based on the Internet. Each of the specific e-business models is applicable to either the whole or parts of the model (Timmers, 1998; Applegate, 2001). However, none of these addresses how IS in general relates to their models.
    It is not difficult to see how IS other than ERP affects business models. A customer relationship management (CRM) application is a resource that mostly affects activities related to sales and marketing. If done effectively, costs for sales activities, such as market and customer analysis will be reduced, and the overall knowledge about customers will increase, meaning 'sharper' offerings in relation to customer preferences, which can increase customer-perceived quality. If implemented successfully, profits will rise, possibly to an extent that renders competitive advantage if competitors are idle. As another example, e-business applications mean radically changing logistics, customer service, marketing and the geographical scope of business, all being potential sources of competitive advantage.
    One of the strengths of the model is its general applicability – any IS applied in a business could benefit from the model in explaining factors of success or failure. However, this general nature can also be seen as a weakness. The details of a given business model are so many that it is relatively pointless to list metrics and factors on the different levels in a general sense. Such operationalisation will have to be made in relation to specific IS applications and specific businesses (cf. Shin's conclusions on aligning IS with business strategy). However, the core concept of any operationalisation of the business model is 'use', that is, the correlation between use of IS and performance. The conceptual discussion here has obvious limitations, being based on theory and the ABC illustration, and we believe future research should have a strong empirical focus, potentially based on system types.

    One can ascribe many roles to IS, but it does not have a single, fixed role. We are interested in the economic role of IS, with a particular focus on the business context. We claim that one of the roles of IS has become to improve businesses, and that the business model construct is a good tool to understand how this is done or not done. The business model concept is becoming increasingly popular, both within e-business and general business. However, the construct is not well defined, nor is there theory to support it (Porter, 2001). We believe these questions can be partly resolved by an integration of existing business strategy theory and emergent strategy-related research into IS and e-business.

    With this paper, we propose a business model that gives structure to the broader business context of IS. IS is at best a potential resource, something with a potential value. Theoretically, the bottom line is that its economic value is determined by a firm's ability to trade and absorb IS resources, to align (and embed) them with other resources, to diffuse them in activities, and to manage the activities in a way that creates an offering at uniquely low cost or with unique qualities in relation to the industry the firm competes in. Any empirically defined business using IS can be viewed through the business model, but a contingency view must be applied: the value and the relations within the business model vary between different IS applications and between different businesses. As a generic model, we believe it captures relevant aspects for any IS decision-maker or student of IS and business to consider.

    There are obvious avenues for research in relation to the business model. Different IS systems could be studied in different settings to understand their impact on given business models. Conversely, the impact of different IS on specific business models would also be interesting to research. Detailed correlation studies of, for instance, IS application investments and their effects on activity metrics or on financial performance could be conducted. Cross-sectional comparisons of different firms (business models) and of the impact of a given business model are another potential area of research. The concept of 'use' has to be further investigated and operationalised beyond user satisfaction and traditional diffusion and adoption models. The business model concept is useful not just within the domain of e-business, but also for understanding the impact of any IS. Hence, more business model research should be conducted on general IS. Case studies will be important, at least initially. However, quantitative studies could be conducted on certain variables. Furthermore, we believe the business model concept can be used for retrospective research, reinterpreting previously reported cases.

    References
    Afuah A, Tucci CL (2001) Internet Business Models and Strategies: Text and Cases. McGraw-Hill, Boston.
    Amit R, Schoemaker PJH (1993) Strategic assets and organizational rent. Strategic Management Journal 14 (1), 33–46.
    Amit R, Zott C (2001) Value creation in e-business. Strategic Management Journal 22, 493–520.
    Andreu R, Ciborra C (1996) Core capabilities and information technology: an organizational learning approach. In Organizational Learning and Competitive Advantage (Moingeon B and Edmondson A, Eds), pp 139–163, Sage, London.
    Applegate LM (2001) Emerging e-business models: lessons from the field. HBS No. 9-801-172. Harvard Business School, Boston.
    Bain JS (1968) Industrial Organization (2nd edn). Wiley, New York.
    Barney J (1986) Strategic factor markets, expectations, luck and business strategy. Management Science 42, 1231–1241.
    Barney J (1991) Firm resources and sustained competitive advantage. Journal of Management 17, 99–120.
    Bharadwaj AS (2000) A resource-based perspective on information technology capability and firm performance: an empirical investigation. MIS Quarterly 24 (2), 169–183.
    Brynjolfsson E (1993) The productivity paradox of information technology. Communications of the ACM 36 (12), 66–77.
    Chakravarthy BS, Doz Y (1992) Strategy process research: focusing on corporate self-renewal. Strategic Management Journal 13, 5–14.
    Chatterjee S (1998) Delivering desired outcomes efficiently: the creative key to competitive strategy. California Management Review 40, 78–95.
    Cheng EWL, Heng L, Love P, Irani Z (2001) An e-business model to support supply chain activities in construction. Logistics Information Management 14 (1/2), 68–77.
    Cherian E (2001) Electronic business: the business model makes the difference. In Proceedings of the Eighth European Conference on Information Technology Evaluation (Remenyi D and Brown A, Eds), pp 171–174, Oriel College Oxford.
    Ciborra C (1994) The grassroots of IT and strategy. In Strategic Information Systems: A European Perspective (Ciborra C and Jelassi T, Eds), pp 3–24, John Wiley, Chichester.
    Clemons E, Row M (1991) Sustaining IT advantage: the role of structural differences. MIS Quarterly 15 (3), 275–293.
    D'Aveni R (1994) Hypercompetition: The Dynamics of Strategic Maneuvering. Free Press, New York.
    Dierickx I, Cool K (1989) Asset stock accumulation and sustainability of competitive advantage. Management Science 35 (12), 1504–1511.
    Duhan S, Levy M, Powell P (2001) Information systems strategies in knowledge-based SMEs: the role of core competencies. European Journal of Information Systems 10, 25–40.
    Eisenhardt KM, Martin JA (2000) Dynamic capabilities: what are they? Strategic Management Journal 21(S), 1105–1121.
    Eisenhardt KM, Sull DN (2001) Strategy as simple rules. Harvard Business Review, 79 (1), 107–116.
    Foss NJ (1997) Resources and strategy: problems, open issues and ways ahead. In Resources, Firms and Strategies: A Reader in the Resource-based Perspective (Foss NJ, Ed), Oxford University Press, Oxford.
    Gibson CF, Nolan RL (1974) Managing the four stages of EDP growth. Harvard Business Review, 52 (1), 76–88.
    Ginsberg A (1994) Minding the competition: from mapping to mastery. Strategic Management Journal 15, 153–174.
    Glaser BG (1978) Theoretical Sensitivity: Advances in the Methodology of Grounded Theory. Sociology Press, California.
    Henderson J, Venkatraman N (1992) Strategic alignment: a framework for strategic information technology management. In Transforming Organizations (Kochan T and Useem M, Eds), pp 97–117, Oxford Press, New York.
    Ives B, Mason RO (1990) Can information technology revitalize your customer service? The Academy of Management Executive 4(4), 52–69.
    Johnston RH, Vitale MR (1988) Creating competitive advantage with interorganizational information systems. MIS Quarterly 12 (2), 153–165.
    Kalling T (1999) Gaining competitive advantage through information technology. A resource based approach to the creation and employment of strategic IT resources. Doctoral dissertation, Lund University. Lund: Lund Business Press.
    Kaplan B (1991) Models of change and information systems research. In Information Systems Research: Contemporary Approaches and Emergent Traditions (Nissen H-E, Klein HK and Hirschheim R, Eds), pp 593–611, North-Holland, Elsevier, Amsterdam.
    Kettinger WJ, Grover V, Guha S, Segars AH (1994) Strategic information systems revisited: a study in sustainability and performance. MIS Quarterly 18 (1), 31–58.
    Markus ML, Robey D (1988) Information technology and organizational change: causal structure in theory and research. Management Science 34 (5), 583–598.
    Mata FJ, Fuerst WL, Barney JB (1995) Information technology and sustained competitive advantage: a resource-based analysis. MIS Quarterly 19 (4), 487–505.
    McFarlan FW (1984) Information technology changes the way you compete. Harvard Business Review 62 (3), 98–103.
    McGrath RG, MacMillan IC (2000) The Entrepreneurial Mindset: Strategies for Continuously Creating Opportunity in an Age of Uncertainty. Harvard Business School Press, Cambridge.
    Mintzberg H (1978) Patterns in strategy formation. Management Science 24, 934–948.
    Mintzberg H (1994) The Rise and Fall of Strategic Planning. Prentice-Hall International, Englewood Cliffs, NJ.
    Mosakowski E, McKelvey B (1997) Predicting rent generation in competence-based competition. In Competence-based Strategic Management (Heene A and Sanchez R, Eds), John Wiley, Chichester.
    Nolan RL (1979) Managing the crises in data processing. Harvard Business Review 57 (2), 115–126.
    Normann R (1977) Management for Growth. Wiley, Chichester.
    Normann R (2001) Reframing Business: When the Map Changes the Landscape. Wiley, Chichester.
    Oliver C (1997) Sustainable competitive advantage: combining institutional and resource based views. Strategic Management Journal 18, 697–713.
    Peteraf MA (1993) The cornerstones of competitive advantage: a resource-based view. Strategic Management Journal 14 (3), 179–191.
    Porter ME (1980) Competitive Strategy. Free Press, New York.
    Porter ME (1985) Competitive Advantage. Free Press, New York.
    Porter ME (1991) Towards a dynamic theory of strategy. Strategic Management Journal 12(S), 95–119.
    Porter ME (2001) Strategy and the Internet. Harvard Business Review, 79 (2), 63–78.
    Porter ME, Millar V (1985) How information technology gives you competitive advantage. Harvard Business Review 63 (4), 149–160.
    Powell TC, Dent-Micallef A (1997) Information technology as competitive advantage: the role of human, business and technology resources. Strategic Management Journal 18 (5), 375–405.
    Prahalad CK, Bettis RA (1986) The dominant logic: a new linkage between diversity and performance. Strategic Management Journal 7 (6), 485–501.
    Quinn JB (1978) Strategic change: logical incrementalism. Sloan Management Review 20 (1), 7–21.
    Rackoff N, Wiseman C, Ullrich WA (1985) Information systems for competitive advantage: implementation of a planning process. MIS Quarterly 9 (4), 112–124.
    Rappa M (2002) Business Models on the Web.
    Rayport J, Jaworski B (2001) Introduction to E-Commerce (Int. edn.) McGraw-Hill/Irwin, Boston.
    Robey D, Boudreau M-L (1999) Accounting for the contradictory organizational consequences of information technology: theoretical directions and methodological implications. Information Systems Research 10 (2), 167–185.
    Sanchez R, Heene A (1997) Competence-based strategic management: concepts and issues for theory, research, and practice. In Competence-Based Strategic Management (Heene A and Sanchez R, Eds), John Wiley, Chichester.
    Sambamurthy V (2000) Business strategy in hypercompetitive environments: rethinking the logic of IT differentiation. In Framing the Domains of IT Management: Projecting the Future Through the Past (Zmud RW, Ed), pp 245–261, Pinnaflex, Cincinnati.
    Schumpeter JA (1934) The Theory of Economic Development. Harvard University Press, Cambridge.
    Schumpeter JA (1950) Capitalism, Socialism, and Democracy (3rd edn). Harper & Row, New York.
    Scott-Morton MS (1990) The Corporation of the 1990s: Information Technology and Organizational Transformation. Oxford University Press, New York.
    Shin N (2001) The impact of information technology on financial performance: the importance of strategic choice. European Journal of Information Systems 10 (4), 227–236.
    Strassmann P (1985) Information Payoff: The Transformation of Work in the Electronic Age. Free Press, New York.
    Timmers P (1998) Business models for electronic markets. Electronic Markets 8 (2), 2–8.
    Wade M (2001) Exploring the role of information systems resources in dynamic environments. In Proceedings of 22nd International Conference on Information Systems (George J and Ives B, Eds), pp 491–496, New Orleans.
    Webster J, Watson RT (2002) Analyzing the past to prepare for the future: writing a literature review. Management Information Systems Quarterly 26 (2), xiii–xxiii.
    Weill P, Vitale MR (2001) Place to Space. Harvard Business School Press, Boston.
    Wernerfelt B (1984) A resource-based view of the firm. Strategic Management Journal 5 (2), 171–180.
    Whittington R (2000) What is Strategy – and Does it Matter? International Thomson, London.
    Williamson OE (1999) Strategy research: governance and competence perspectives. Strategic Management Journal 20 (12), 1087–1108.
    Wiseman C (1985) Strategy and Computers. Dow Jones-Irwin, Homewood.



  • Understanding online purchase intentions: Contributions from technology and trust perspectives

    Understanding online purchase intentions: contributions from technology and trust perspectives

    Hans van der Heijden1, Tibert Verhagen1 and Marcel Creemers1

    1Department of Information Systems, Marketing, and Logistics, Faculty of Economics and Business Administration, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands

    Correspondence: Hans van der Heijden, Department of Information Systems, Marketing, and Logistics, Faculty of Economics and Business Administration, Vrije Universiteit Amsterdam, De Boelelaan 1105, 1081 HV Amsterdam, The Netherlands Tel: +31 20 444 6050; Fax: +31 20 444 6005;

    Received 12 July 2000; Revised 20 August 2001; Re-revised 30 July 2002; Accepted 15 October 2002.



    This paper explores factors that influence consumers' intentions to purchase online at an electronic commerce website. Specifically, we investigate online purchase intention using two different perspectives: a technology-oriented perspective and a trust-oriented perspective. We summarise and review the antecedents of online purchase intention that have been developed within these two perspectives. An empirical study in which the contributions of both perspectives are investigated is reported. We study the perceptions of 228 potential online shoppers regarding trust and technology and their attitudes and intentions to shop online at particular websites. In terms of relative contributions, we found that the trust antecedent 'perceived risk' and the technology antecedent 'perceived ease-of-use' directly influenced the attitude towards purchasing online.



    The research objective of this paper is to explore the factors that influence online purchase intentions in consumer markets. Firms operating in this segment sell their goods and services to consumers via a website. These online stores are important and sometimes highly visible representatives of the 'new economy', yet despite this they have not been the subject of much sound conceptual and empirical research (Hoffman & Novak, 1996; Alba et al., 1997). An increased understanding of online consumer behaviour can benefit them in their efforts to market and sell products online.

    We investigate consumers' intentions to purchase products at online stores using two different perspectives: a technology-oriented perspective and a trust-oriented perspective. Technology and trust issues are highly relevant to online consumer behaviour, yet their inclusion in traditional consumer behaviour frameworks is limited. We discuss the contributions of each perspective to our understanding of online purchase intentions. We also present an empirical study that examines the contribution of each perspective by surveying 228 potential online shoppers.

    The paper is organised as follows. First, we deal with the theoretical background, paying attention to technology and trust-oriented perspectives of online consumer behaviour. The subsequent section deals with the empirical study. Next, we present a summary of the findings. We conclude with a discussion and further directions for research.



    To a very large extent, online consumer behaviour can be studied using frameworks from 'offline' or traditional consumer behaviour. A number of general frameworks in consumer behaviour are available that capture the decision-making processes of consumers (Engel et al., 1995; Schiffman & Kanuk, 2000). These frameworks distinguish a number of stages, typically including at least the following: need recognition, prepurchase search, evaluation of alternatives, the actual purchase, and postpurchase evaluation. These stages are relatively abstract and do not consider the medium through which the consumer buys. Hence, the stages can be applied to online consumer behaviour (O'Keefe & McEachern, 1998).

    Looking more closely at the difference between online and 'off-line' consumer behaviour, we can identify at least two types of issues that differentiate online consumers from off-line consumers. First, online consumers have to interact with technology to purchase the goods and services they need. The physical shop environment is replaced by an electronic shopping environment or, in other words, by an information system (IS). This gives rise to technical issues that have traditionally been the domain of IS and human computer interaction (HCI) researchers (O'Keefe et al., 2000).

    Second, a greater degree of trust is required in an online shopping environment than in a physical shop. It is by now a folk theorem that trust is an important issue for those who engage in electronic commerce (Keen et al., 1999). Trust mitigates the feelings of uncertainty that arise when the shop is unknown, the shop owners are unknown, the quality of the product is unknown, and the settlement performance is unknown (Tan & Thoen, 2001). These conditions are likely to arise in an electronic commerce environment.

    Given these differences, research in online consumer behaviour can benefit from models that have been developed to study technology and trust issues in particular. We will examine the contributions of each of these models in more detail in the following sections.

    Contributions from technology-oriented models

    The technology perspective focuses on the consumer's assessment of the technology required to conduct a transaction online. In the context of this paper, technology refers to the website that an online store employs to market and sell its products. Researchers have long been studying how consumers search for information about products and how useful technology can be to acquire this information (Stigler, 1961; Thorelli & Engledow, 1980; Keller & Staelin, 1987; Widing & Talarzyk, 1993; Moorthy et al., 1997). Information-seeking behaviour by consumers is characterised by a trade-off between the cost of searching and evaluating more alternative products and the benefit of a better decision when more alternatives are taken into account (Hauser & Wernerfelt, 1990). Technology has the potential to both decrease the cost of searching and evaluating alternatives and increase the quality of the decision (Haubl & Trifts, 2000).

    The advent of the internet and the proliferation of online stores have given rise to a number of studies that look at the consumer's intention to purchase online. There is some evidence that online consumers not only care for the instrumental value of the technology, but also the more immersive, hedonic value (Childers et al., 2001; Heijden, forthcoming). These and other studies (Chau et al., 2000) build their models upon a well-known theory in IS research: the technology acceptance model (TAM).

    The TAM was first developed by Davis to explain user acceptance of technology in the workplace (Davis, 1989; Davis et al., 1989). TAM adopts a causal chain of beliefs, attitudes, intention, and overt behaviour that social psychologists Fishbein and Ajzen (Fishbein & Ajzen, 1975; Ajzen, 1991) have put forward, and that has become known as the Theory of Reasoned Action (TRA). Based on certain beliefs, a person forms an attitude about a certain object, on the basis of which he/she forms an intention to behave with respect to that object. The intention to behave is the prime determinant of the actual behaviour.

    Davis adapted the TRA by developing two key beliefs that specifically account for technology usage. The first of these beliefs is perceived usefulness, defined by Davis as 'the degree to which a person believes that using a particular system would enhance his or her job performance'. The second is perceived ease-of-use, defined as 'the degree to which a person believes that using a particular system would be free of effort'. Moreover, TAM theorises that all other external variables, such as system-specific characteristics, are fully mediated by these two key beliefs. The model has recently been updated (Venkatesh & Davis, 2000) with a number of antecedents of usefulness and ease-of-use, including subjective norms, experience, and output quality. There is ample evidence that not only usefulness (i.e., external motivation) but also enjoyment (i.e., internal motivation) is a direct determinant of user acceptance of technology (Davis et al., 1992; Venkatesh, 1999). This is in line with a recent evaluation of the TRA, in recognition of the evidence that attitudes are based not only on cognition, but also on affect (Ajzen, 2001). Viewed in this light, perceived usefulness and ease-of-use represent the cognitive component of the user evaluation, while perceived enjoyment represents the affective component.
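    The TRA/TAM causal chain described above is typically estimated as a set of linear structural equations. A minimal sketch is given below; the path coefficients and error terms are our illustrative labels, not taken from the original studies:

    ```latex
    \begin{aligned}
    A            &= \beta_1\,\mathit{PU} + \beta_2\,\mathit{PEOU} + \varepsilon_A
                    && \text{attitude formed from the two key beliefs} \\
    \mathit{BI}  &= \beta_3\,A + \beta_4\,\mathit{PU} + \varepsilon_{BI}
                    && \text{intention from attitude (and usefulness directly, per TAM)} \\
    B            &= \beta_5\,\mathit{BI} + \varepsilon_B
                    && \text{behaviour determined primarily by intention}
    \end{aligned}
    ```

    Here PU denotes perceived usefulness, PEOU perceived ease-of-use, A attitude, BI behavioural intention, and B actual behaviour.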

    Researchers have empirically validated the original TAM in a variety of settings. Of particular interest here are recent studies on technology acceptance in internet usage and website usage. These studies by and large confirm the relevance and appropriateness of ease-of-use and usefulness in an online context, and find substantial evidence for the intrinsic enjoyment that many consumers have when surfing the web (Teo et al., 1999; Lederer et al., 2000; Moon & Kim, 2001). Support has also been found for ease-of-use being an antecedent of usefulness and perhaps not directly contributing to attitude formation (Gefen & Straub, 2000).

    Summarising, the contribution of TAM and other similar models is that they explain why online transactions are conducted from a technological point of view. In doing so, they highlight the importance of website usefulness and the usability of the website. Also, these models direct our attention to the hedonic features of the technology and demonstrate how these features can affect consumer's intention to purchase online.

    Contributions from trust-oriented models

    The trust-oriented perspective quickly gained momentum after the introduction of wide-scale electronic commerce in the beginning of the 1990s (Keen et al., 1999). Trust is a multidimensional concept that can be studied from the viewpoint of many disciplines, including social psychology, sociology, economics, and marketing (Doney & Cannon, 1997). While there are many definitions of trust, the one that we will adopt in this paper is 'the willingness of a consumer to be vulnerable to the actions of an online store based on the expectation that the online store will perform a particular action important to the consumer, irrespective of the ability to monitor or control the online store' (cf. the more general definition from Mayer et al. (1995)).

    While researchers have been concerned with interpersonal trust and inter-organisational trust, they have paid less attention to trust between people and organisations (Lee & Turban, 2001). Recent conceptual and empirical research has started to look into this type of trust in more detail, in particular in the context of business to consumer electronic commerce. Researchers have developed instruments to measure trust in internet shopping (Cheung & Lee, 2000; Jarvenpaa et al., 2000), and these measures are beginning to be used in the testing of empirical frameworks.

    To what extent does trust in the company influence the intention to buy at a specific website? The existing empirical evidence suggests that trust in the company negatively influences the perceived risk that is associated with buying something on the internet (Featherman, 2001; Pavlou, 2001). Perceived risk can be regarded as a consumer's subjective function of the magnitude of adverse consequences and the probabilities that these consequences may occur if the product is acquired (Dowling & Staelin, 1994). The more a person trusts the internet company, the less the person will perceive risks associated with online buying. Perceived risk, in turn, negatively influences the attitude towards internet shopping. Trust in the online store may also directly influence this attitude (Jarvenpaa et al., 2000).
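    Dowling & Staelin's definition treats perceived risk as a subjective function of adverse-consequence magnitudes and their probabilities. A minimal expected-loss sketch of that idea (the scenarios, probabilities, and magnitudes below are hypothetical illustrations, not data from the study):

```python
# Perceived risk as a subjective, probability-weighted sum of adverse
# consequences (illustrative sketch of the Dowling & Staelin definition).
scenarios = [
    (0.05, 9.0),  # card details stolen: unlikely, very severe
    (0.20, 4.0),  # product never delivered: possible, severe
    (0.30, 2.0),  # product differs from description: likely, mild
]

def perceived_risk(scenarios):
    """Sum of subjective probability times perceived loss magnitude."""
    return sum(p * magnitude for p, magnitude in scenarios)

print(round(perceived_risk(scenarios), 2))  # 0.05*9 + 0.20*4 + 0.30*2 = 1.85
```

Greater trust in the store would, in this reading, lower the subjective probabilities and hence the overall risk score.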

    People develop trust in the webstore through a number of factors. One is the perceived size of the company; another is its reputation (Jarvenpaa et al., 2000). The larger the perceived size and the perceived reputation, the greater the trust in the company. Reputation is closely related to familiarity with the store, which researchers have also identified as an antecedent of trust. Familiarity deals with an understanding of current actions of the store, while trust deals with beliefs about the future actions of other people (Gefen, 2000).

    It should be noted that trust in the company does not have to be a necessary condition to purchase online. It has been argued that lack of trust in the organisation can be offset by trust in the control system (Tan & Thoen, 2001). Such a control system would include the procedures and protocols that monitor and control the successful performance of a transaction, and could include the option to insure oneself against damage. We may not trust the internet company, but we may trust the control system that monitors its performance (Tan & Thoen, 2002).

    In sum, the trust-oriented perspective highlights the importance of trust in determining online purchase intentions, and its antecedents include a number of trust drivers. In doing so, it emphasises constructs such as perceived risk, trust in the online store, perceived size, and perceived reputation.



    To explore the contributions and the relative importance of each perspective, we conducted an empirical study. The following sections describe the model, the measurement instrument, and the sample.

    Conceptual model

    The model that we attempted to test is depicted in Figure 1. The backbone of this model is the relation between attitude towards online purchasing and intention to purchase online. This conforms to the general relation between attitudes and intentions that the theory of reasoned action predicts, and is consistent with prior online purchase models (Jarvenpaa et al., 2000; Pavlou, 2001).

    Figure 1.

    Conceptual model (adapted from Ajzen & Fishbein, 1980; Davis, 1989; Jarvenpaa et al., 2000).


    The attitude construct has four antecedents in total: two from the technology perspective and two from the trust perspective. The technological antecedents are the perceived usefulness and perceived ease-of-use. Both constructs originate from TAM. The trust antecedents are trust in the online store and perceived risk; these constructs appear in the Jarvenpaa et al. study. Signs and directions of the relations between the constructs are displayed in Figure 1 and have been discussed in the Theory section.

    Measurement instrument

    In order to increase reliability and ease of comparison with previous work in this area, we operationalised each construct with multiple items. The operationalisations for the trust constructs were taken from Jarvenpaa et al. (2000). The operationalisations for the usefulness constructs were taken from Chau et al. (2000), based on Davis (1989). We made modifications, most of which were adaptations to increase the applicability of the items to the local context. A substantial adaptation involved the replacement of the word 'Internet' with 'This website' to increase consistency in the unit of analysis for each construct (Ajzen & Fishbein, 1980; DeVellis, 1991). Also, we changed the wording of the ease-of-use and usefulness items to make them more suitable for e-commerce websites. The resulting items can be found in the Appendix.


    Our sample consisted of a group of undergraduate students enrolled in a mandatory IS course at a Dutch academic institution. Each student was notified of the survey in class and invited to participate for partial class credit.

    Before the subjects started the survey, their task was to study two specific websites carefully. The first one was the website of a 'pure player' CD store: this website was a newcomer to the Dutch CD market and sold its products only over the internet. The second one was the website of a 'bricks-n-clicks' CD store. This website represented a large and well-known chain of CD stores in the Netherlands. Like the first online store, the website's purpose was to sell CDs directly over the internet.

    After the students had studied the websites, they were asked to complete a survey for each website. Respondents could complete the questionnaires at home or on campus, and could either submit their responses through the internet or return them handwritten.



    Eventually, 228 students took part in the survey. Table 1 provides information on their internet experience and their experience with online shopping.

    As the profile data show, this group is relatively homogeneous in terms of age and balanced in terms of internet experience. The gender balance in the current sample is similar to the gender balance of the entire population of internet users (Kehoe et al., 1999). The online purchase experience of the respondents is heterogeneous, and this reveals a large set of inexperienced online purchasers and a small set of very experienced online purchasers.

    Reliability and validity

    We used Cronbach's alpha and exploratory factor analysis to examine the reliability and unidimensionality of each construct. Passing these tests is a prerequisite for further analysis (Nunally, 1967; DeVellis, 1991). To obtain acceptable values, we modified the trust and perceived risk scales, in line with the modifications that Jarvenpaa et al. made (see the Appendix for the exact changes). Table 2 displays the resulting alpha coefficients for each of the constructs and for each of the websites. All resulting scales are unidimensional and sufficiently reliable.
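    For readers less familiar with the reliability test, Cronbach's alpha follows the standard formula alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)). A minimal sketch (the response matrix below is hypothetical, not the study's data):

```python
# Cronbach's alpha for a multi-item scale:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's responses."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]   # per-respondent total score
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 7-point Likert responses: 3 items x 5 respondents.
items = [
    [5, 6, 4, 7, 5],
    [5, 7, 4, 6, 5],
    [4, 6, 5, 7, 5],
]
print(round(cronbach_alpha(items), 2))  # items co-vary strongly -> high alpha
```

Values of roughly 0.7 or above are conventionally treated as sufficiently reliable for scales of this kind.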

    We estimated the model's parameters using the scales and structural equation modelling (SEM). The goodness-of-fit measures, depicted in Table 3, show good fit with the data; consequently, we can proceed with an analysis of the path parameters.
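    SEM estimates all paths simultaneously in dedicated software, but a useful back-of-the-envelope check is that a single-predictor path on standardized variables (such as trust in the store predicting perceived risk) reduces to the Pearson correlation. A minimal sketch with hypothetical responses:

```python
# Back-of-the-envelope check for a single-predictor path: on standardized
# (z-scored) variables, the OLS slope equals the Pearson correlation.
# The data below are hypothetical, not the study's survey responses.
import statistics

def zscore(xs):
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [(x - m) / s for x in xs]

def std_path_coefficient(x, y):
    """Standardized slope of y on x (== Pearson r for one predictor)."""
    zx, zy = zscore(x), zscore(y)
    return sum(a * b for a, b in zip(zx, zy)) / (len(x) - 1)

trust = [2, 3, 4, 5, 6, 7]   # trust in the store (7-point scale)
risk  = [6, 6, 5, 4, 3, 2]   # perceived risk declines as trust rises
print(round(std_path_coefficient(trust, risk), 2))  # strongly negative path
```

Paths with multiple predictors (the four antecedents of attitude) additionally partial out the other predictors, which is why the full model requires SEM rather than pairwise correlations.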

    Figure 2 displays the path coefficients that the SEM package estimated, along with the squared multiple correlations of the intermediate and dependent variables.

    Figure 2.

    SEM estimation results. Standardised path coefficients are significant at p<0.005 except where otherwise noted. Normal font represents values of the pure player online store, italic font represents values of the bricks-n-clicks player. Percentages indicate squared multiple correlations (variance explained).


    The data supported a strong positive relation between attitude towards online purchasing and intention to purchase online. The only predictor of attitude that was significant in both cases was the perceived risk. Perceived ease-of-use was a significant predictor only in the case of the bricks-n-clicks online store. The relation between trust and risk and the relation between perceived ease-of-use and perceived usefulness were supported.



    The results of this research suggest that perceived risk and perceived ease-of-use are antecedents of attitude towards online purchasing. The effect of perceived risk was strongly negative in both cases, and the effect of perceived ease-of-use was positive in one case. The data did not support a positive effect from trust in the online store or from the perceived usefulness of the website. Trust in the store appears to be indirectly related to a positive attitude through its direct negative effect on perceived risk. In sum, contributions from both the trust perspective and the technology perspective could be found, although the contribution of the technology perspective appears to be smaller.

    Our results are only partly in line with other TAM studies. The impact of perceived ease-of-use on perceived usefulness is in line with earlier research. The lack of effect of perceived usefulness and perceived ease-of-use on attitudes towards online purchasing is not. One explanation for this result is that the dependent variables of our study are narrower in scope than the ones commonly found in TAM models. TAM models focus on usage intention of the technology, as opposed to purchase intention. In an e-commerce context, usage intention is broader in scope than purchase intention, because a person may use an online store not only to purchase, but also to learn about products and services. Hence, respondents may not intend to purchase items at the online store, even though they perceive the store as useful.

    A second, related explanation is that, in retrospect, perceived usefulness may have been too narrowly operationalised. For example, 'speed' and 'convenience' are included in the scale, but perceptions about the price levels of the online store are not. A more detailed assessment of the perceived usefulness of online stores may reveal more appropriate items. Perhaps our operationalisation failed to tap salient aspects of usefulness, and in doing so hampered its predictive value.

    It is interesting to compare the results from this study with the study by Jarvenpaa et al., which examined the impact of trust and perceived risk in a similar empirical setting. Our results partly corroborate their findings. We did find the expected negative effect of trust in the store on perceived risk, and an effect of perceived risk on the attitude towards online purchasing. In contrast, we did not find any effect of trust in the store on attitude.

    How can these differences be explained? Why were trust, ease-of-use, and usefulness not significant in our study, even though previous research has found empirical support for them? We believe it is conceivable that trust, ease-of-use, and usefulness are 'threshold' variables. This means that once a certain evaluation level is reached, the variable no longer contributes to a favourable attitude. Hence, these variables affect attitudes only at low evaluation levels, that is, when respondents evaluate them as being poor. A shopper may or may not purchase at a trustworthy website, but he/she will definitely not purchase at an untrustworthy site. A shopper may or may not purchase at a user-friendly website, but he or she will definitely not purchase at a user-unfriendly website.
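    The threshold idea can be made concrete: below the threshold the variable drags attitude down, while improvements above it add nothing. A minimal sketch (the cut-off value and the scale are hypothetical, chosen purely for illustration):

```python
# Threshold ('hygiene') effect: a variable hurts attitude when evaluated
# poorly, but contributes nothing further once a satisfactory level is
# reached.
THRESHOLD = 4.0  # hypothetical cut-off on a 7-point evaluation scale

def attitude_contribution(evaluation, threshold=THRESHOLD):
    """Negative contribution below the threshold, flat (zero) above it."""
    return min(evaluation - threshold, 0.0)

# Raising trust from 2 to 4 helps; raising it from 5 to 7 changes nothing.
print(attitude_contribution(2.0))  # -2.0: poor evaluation hurts attitude
print(attitude_contribution(5.0))  #  0.0: already above the threshold
print(attitude_contribution(7.0))  #  0.0: extra trust adds nothing
```

A linear model fitted to such data would show no effect whenever most respondents sit above the threshold, which matches the pattern observed here.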

    This line of reasoning suggests another explanation of our findings: we did not select websites where trust in the store, ease-of-use, and usefulness were poorly evaluated. Indeed, the three no-effect constructs (trust, perceived usefulness, and perceived ease-of-use) received high evaluations from the respondents. While the unbridled trust in the 'pure player' may seem odd in hindsight, the reader should keep in mind that the survey was administered at a time when stock exchanges and investors were still euphoric about internet start-ups. We also observed that the two websites appeared to be well designed and offered efficient online purchasing facilities. So, we speculate that trust, perceived usefulness, and perceived ease-of-use reached threshold levels in many of the respondents' minds and this is why they did not generate any effect on their attitudes towards purchasing online.

    Our conceptualisation of these constructs as threshold variables borrows from Herzberg's research on hygiene and motivational factors in the area of job satisfaction. Herzberg theorised that some factors influence job satisfaction but not dissatisfaction (achievement, recognition), while others influence only job dissatisfaction but not satisfaction (salary, relationship with supervisor). The former are called motivational factors, the latter hygiene factors (Herzberg et al., 1959). Similar to this line of reasoning, it is possible that some or all of the antecedents in our model are hygiene factors. In other words, perceived risk, trust in the store, perceived usefulness, and perceived ease-of-use negatively influence an unfavourable attitude towards online purchasing, but do not positively influence a favourable attitude towards online purchasing.

    Clearly, this threshold hypothesis requires further theoretical and empirical analysis, because it requires the conceptualisation of two separate attitudes (one favourable, one unfavourable). Further research may shed additional light on why antecedents appear to influence online consumer behaviour in some studies, but not in other studies.

    An important limitation of our empirical study is the relatively large proportion of inexperienced online shoppers. While we are confident that our findings extend to other populations with the same profile, the generalisability of the results to larger, more experienced populations is limited. For this reason, we encourage other researchers to replicate and extend our study in settings with more experienced online shoppers.

    Online consumer behaviour is a broad area of study, and we realise that our research has only investigated a modest part of this area. Besides further exploration of the threshold hypothesis, a promising direction of further research is the extent to which technology itself helps to build trust. This calls for an interesting mixture of both perspectives. For example, it is defensible to argue that a website's design can increase a consumer's confidence, much like a good interior design in a restaurant can promote confidence in the quality of the forthcoming food. In doing so, the technology can increase a person's trust in the store. A related subject is the degree to which specific website features help bolster trust. Are third party assurance seals and certifications of any importance? Do testimonials from consumers, or a personal note from the shop owner increase trust? To what extent, and under which conditions? We encourage other researchers to examine these subjects further.



    * Indicates dropped item by Jarvenpaa et al. (2000); ** indicates dropped item in our own research; 'modified' indicates adaptations from original work.



    1. Ajzen I (1991) The theory of planned behaviour. Organisational Behaviour and Human Decision Processes 50, 179–211.
    2. Ajzen I (2001) Nature and operation of attitudes. Annual Review of Psychology 52, 27–58.
    3. Ajzen I and Fishbein M (1980) Understanding Attitudes and Predicting Social Behaviour. Prentice-Hall, Englewood Cliffs, NJ.
    4. Alba J, Lynch J et al. (1997) Interactive home shopping: consumer, retailer, and manufacturer incentives to participate in electronic marketplaces. Journal of Marketing 61, 38–53.
    5. Chau PYK, Au G et al. (2000) Impact of information presentation modes on online shopping: an empirical evaluation of a broadband interactive shopping service. Journal of Organisational Computing and Electronic Commerce 10 (1), 1–22.
    6. Cheung C and Lee MKO (2000) Trust in Internet Shopping: A Proposed Model and Measurement Instrument. American Conference on Information Systems, Long Beach.
    7. Childers TL, Carr CL et al. (2001) Hedonic and utilitarian motivations for online retail shopping behaviour. Journal of Retailing 77, 511–535.
    8. Davis FD (1989) Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 13 (3), 319–340.
    9. Davis FD, Bagozzi RP et al. (1989) User acceptance of computer technology: a comparison of two theoretical models. Management Science 35 (8), 983–1003.
    10. Davis FD, Bagozzi RP et al. (1992) Extrinsic and intrinsic motivation to use computers in the workplace. Journal of Applied Social Psychology 22, 1111–1132.
    11. DeVellis RF (1991) Scale Development. Sage, Newbury Park, CA.
    12. Doney PM and Cannon JP (1997) An examination of the nature of trust in buyer–seller relationships. Journal of Marketing 61 (4), 35–51.
    13. Dowling GR and Staelin R (1994) A model of perceived risk and intended risk-handling activity. Journal of Consumer Research 21, 119–134.
    14. Engel JF, Blackwell RD et al. (1995) Consumer Behavior. Dryden, Fort Worth.
    15. Featherman MS (2001) Extending the Technology Acceptance Model by Inclusion of Perceived Risk. American Conference on Information Systems, Boston.
    16. Fishbein M and Ajzen I (1975) Belief, Attitude, Intention and Behaviour: An Introduction to Theory and Research. Addison-Wesley, Reading, MA.
    17. Gefen D (2000) E-commerce: the role of familiarity and trust. Omega 28, 725–737.
    18. Gefen D and Straub D (2000) The relative importance of perceived ease of use in IS adoption: a study of e-commerce adoption. Journal of the Association for Information Systems 1 (8), 1–30.
    19. Hair JF, Anderson RE et al. (1998) Multivariate Data Analysis. Prentice-Hall, Upper Saddle River, NJ.
    20. Haubl G and Trifts V (2000) Consumer decision making in online shopping environments: the effects of interactive decision aids. Marketing Science 19 (1), 4–21.
    21. Hauser JR and Wernerfelt B (1990) An evaluation cost model of consideration sets. Journal of Consumer Research 16, 393–408.
    22. Heijden Hvd (forthcoming) Factors influencing the usage of websites: the case of a generic portal in the Netherlands. Information & Management.
    23. Herzberg F, Mausner B et al. (1959) The Motivation to Work. John Wiley, New York.
    24. Hoffman DL and Novak TP (1996) Marketing in hypermedia computer-mediated environments: conceptual foundations. Journal of Marketing 60, 50–68.
    25. Jarvenpaa SL, Tractinsky N et al. (2000) Consumer trust in an internet store. Information Technology & Management 1 (1), 45–71.
    26. Keen P, Ballance C et al. (1999) Electronic Commerce Relationships: Trust By Design. Prentice-Hall, Englewood Cliffs, NJ.
    27. Kehoe C, Pitkow J et al. (1999) Results of GVU's Tenth World Wide Web User Survey, viewed 24 July 2001.
    28. Keller KL and Staelin R (1987) Effects of quality and quantity of information on decision effectiveness. Journal of Consumer Research 14, 200–213.
    29. Lederer AL, Maupin DJ et al. (2000) The technology acceptance model and the World Wide Web. Decision Support Systems 29, 269–282.
    30. Lee MKO and Turban E (2001) A trust model for consumer internet shopping. International Journal of Electronic Commerce 6 (1), 75–91.
    31. Mayer RC, Davis JH et al. (1995) An integrative model of organizational trust. Academy of Management Review 20 (3), 709–734.
    32. Moon J-W and Kim Y-G (2001) Extending the TAM for a World-Wide-Web context. Information & Management 38, 217–230.
    33. Moorthy S, Ratchford BT et al. (1997) Consumer information search revisited: theory and empirical analysis. Journal of Consumer Research 23, 263–277.
    34. Nunally J (1967) Psychometric Theory. McGraw-Hill, New York.
    35. O'Keefe B and McEachern T (1998) Web-based customer decision support systems. Communications of the ACM 41 (3), 71–78.
    36. O'Keefe RM, Cole M et al. (2000) From the user interface to the consumer interface: results from a global experiment. International Journal of Human-Computer Studies 53, 611–628.
    37. Pavlou PA (2001) Integrating Trust in Electronic Commerce with the Technology Acceptance Model: Model Development and Validation. American Conference on Information Systems, Boston.
    38. Schiffman LG and Kanuk LL (2000) Consumer Behavior. Prentice-Hall, Upper Saddle River, NJ.
    39. Stigler G (1961) The economics of information. Journal of Political Economy 69, 213–225.
    40. Tan Y-H and Thoen W (2001) Toward a generic model of trust for electronic commerce. International Journal of Electronic Markets 5 (2), 61–74.
    41. Tan Y-H and Thoen W (2002) Formal aspects of a generic model of trust for electronic commerce. Decision Support Systems 33, 233–246.
    42. Teo TSH, Lim VKG et al. (1999) Intrinsic and extrinsic motivation in Internet usage. Omega: International Journal of Management Science 27, 25–37.
    43. Thorelli HB and Engledow JL (1980) Information seekers and information systems: a policy perspective. Journal of Marketing 44, 9–27.
    44. Venkatesh V (1999) Creation of favorable user perceptions: exploring the role of intrinsic motivation. MIS Quarterly 23 (2), 239–260.
    45. Venkatesh V and Davis FD (2000) A theoretical extension of the Technology Acceptance Model: four longitudinal case studies. Management Science 46 (2), 186–204.
    46. Widing RE and Talarzyk WW (1993) Electronic information systems for consumers: an evaluation of computer-assisted formats in multiple decision environments. Journal of Marketing Research 30, 125–141.



    All items were measured on a seven-point Likert strongly disagree/strongly agree scale, unless mentioned otherwise.

    Trust in store

    1. This store is trustworthy
    2. This store wants to be known as one who keeps his promises (modified).**
    3. I trust this store keeps my best interests in mind.
    4. I think it makes sense to be cautious with this store (modified)(reverse).*
    5. This retailer has more to lose than to gain by not delivering on their promises.*,**
    6. This store's behaviour meets my expectations.*,**
    7. This store could not care less about servicing students*,** (modified) (reverse).

    Attitude towards online purchasing

    1. The idea of using this website to buy a product or service is appealing (modified).
    2. I like the idea of buying a product or service on this website (modified).
    3. Using this website to buy a product or service at this store would be a good idea (modified).

    Online purchase intention

    1. How likely is it that you would return to this store's website?
    2. How likely is it that you would consider purchasing from this website in the short term? (modified).
    3. How likely is it that you would consider purchasing from this website in the longer term? (modified).
    4. For this purchase, how likely is it that you would buy from this store?*

    Risk perception

    1. How would you characterise the decision to buy a product through this website? (a very small risk – a very big risk).**
    2. How would you characterise the decision to buy a product through this website? (high potential for loss – high potential for gain) (reverse).
    3. How would you characterise the decision to buy a product through this website? (a very negative situation – a very positive situation) (reverse).
    4. What is the likelihood of your making a good bargain by buying from this store through the Internet? (very unlikely – very likely) (reverse).
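    Several items above are marked (reverse); on a seven-point scale the standard recode is new = 8 − old, so that agreement with a negatively worded item counts against the construct before items are combined. A minimal sketch (the item names and responses are hypothetical):

```python
# Reverse-code negatively worded items on a 7-point Likert scale:
# 8 - score maps 1<->7, 2<->6, ... so all items point in the same
# direction before being averaged into a scale score.
def reverse_code(score, points=7):
    return (points + 1) - score

responses = {"item1": 6, "item2_reversed": 2, "item3": 5}
aligned = {
    name: reverse_code(v) if name.endswith("_reversed") else v
    for name, v in responses.items()
}
print(aligned)  # {'item1': 6, 'item2_reversed': 6, 'item3': 5}
```

Reliability statistics such as Cronbach's alpha should be computed only after this alignment, since an unrecoded reverse item deflates inter-item correlations.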

    Perceived ease-of-use
    1. Learning to use the website is easy.
    2. It is easy to get the website to do what I want.
    3. The interactions with the website are clear and understandable.
    4. The website is flexible to interact with.
    5. The website is easy to use.

    Perceived usefulness
    1. The online purchasing process on this website is fast.
    2. It is easy to purchase online on this website.
    3. This website is useful for buying the products or services it sells.

  • How does solar power work?

    The sun—that power plant in the sky—bathes Earth in ample energy to fulfill all the world's power needs many times over. It doesn't give off carbon dioxide emissions. It won't run out. And it's free.


    So how on Earth can people turn this bounty of sunbeams into useful electricity?

    The sun's light (and all light) contains energy. Usually, when light hits an object the energy turns into heat, like the warmth you feel while sitting in the sun. But when light hits certain materials the energy turns into an electrical current instead, which we can then harness for power.

    Old-school solar technology uses large crystals made out of silicon, which produces an electrical current when struck by light. Silicon can do this because the electrons in the crystal get up and move when exposed to light instead of just jiggling in place to make heat. The silicon turns a good portion of light energy into electricity, but it is expensive because big crystals are hard to grow.

    Newer materials use smaller, cheaper crystals, such as copper-indium-gallium-selenide, that can be shaped into flexible films. This "thin-film" solar technology, however, is not as good as silicon at turning light into electricity.

    Right now, solar energy only accounts for a tiny portion of the U.S.'s total electricity generation, because it is more expensive than alternatives like cheap but highly polluting coal. Solar power is about five times as expensive as what people pay for the current that comes out of the outlets.

    In order to have a hope of replacing fossil fuels, scientists need to develop materials that can be easily mass-produced and convert enough sunlight to electricity to be worth the investment.

    We asked Paul Alivisatos, deputy laboratory director at Lawrence Berkeley National Laboratory in California and a leader of their Helios solar energy research project, to explain how people capture energy from sunlight and how we can do it better.

    [An edited transcript of the interview follows.]

    What is a solar cell?
    A solar cell is a device people can make that takes the energy of sunlight and converts it into electricity.

    How does a solar cell turn sunlight into electricity?
    In a crystal, the bonds [between silicon atoms] are made of electrons that are shared between all of the atoms of the crystal. The light gets absorbed, and one of the electrons that's in one of the bonds gets excited up to a higher energy level and can move around more freely than when it was bound. That electron can then move around the crystal freely, and we can get a current.

    Imagine that you have a ledge, like a shelf on the wall, and you take a ball and you throw it up on that ledge. That's like promoting an electron to a higher energy level, and it can't fall down. A photon [packet of light energy] comes in, and it bumps up the electron onto the ledge [representing the higher energy level] and it stays there until we can come and collect the energy [by using the electricity].
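    The height of that ledge corresponds to the material's band gap, roughly 1.1 electron volts for crystalline silicon; only photons carrying at least that much energy can bump the electron up. Using the standard relation E = hc/λ ≈ 1240 eV·nm divided by the wavelength, a short sketch (the wavelengths are illustrative sample values):

```python
# Photon energy vs. silicon's band gap: only photons with energy above
# the gap (~1.1 eV for crystalline Si) can promote an electron "onto
# the ledge" and contribute to the current.
EV_NM = 1239.84          # hc expressed in eV*nm
SI_BAND_GAP_EV = 1.12    # approximate band gap of crystalline silicon

def photon_energy_ev(wavelength_nm):
    return EV_NM / wavelength_nm

for wavelength in (500, 1000, 1500):  # visible, near-infrared, deeper IR
    e = photon_energy_ev(wavelength)
    fate = "absorbed" if e >= SI_BAND_GAP_EV else "passes through"
    print(f"{wavelength} nm: {e:.2f} eV -> {fate}")
```

This is one reason no single material captures all of sunlight: photons below the gap are wasted entirely, while the excess energy of photons far above it is lost as heat.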

    What's the biggest difference between how a plant captures light energy and how we do it with solar cells?
    We wish we could do what plants do because plants absorb the light, and [they use] that electron to change a chemical bond inside the plant to actually make fuel.

    Could you do artificial photosynthesis and emulate a plant?
    We would love to be able to make a solar cell that instead of making electricity makes fuel. That would be a very big advance. It's a very active topic right now among researchers, but it's hard to predict when we will be able to use it.

    One of the reasons we like to plant trees is because they take the CO2 out of the air. If we could do that [with a solar cell], then we could actually deal with global warming problems even more directly because we'd be pulling the CO2 out of the air to make our fuel.

    How good are current solar cells at capturing light energy?
    So we can talk about the power efficiency. The power efficiency of a typical crystalline silicon cell is in the 22 to 23 percent [range, meaning they convert as much as 23 percent of the light striking them into electricity]. The ones that you typically might be able to afford to put on your rooftop are lower than that, somewhere between 15 and 18 percent. The most efficient, like the ones that go on satellites, might have power efficiencies approaching 50 percent.
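    Those efficiency figures translate directly into deliverable power: output equals irradiance times panel area times efficiency. A rough sketch, assuming the standard peak-sun test irradiance of 1,000 watts per square metre and a round-number panel area of 1.6 m² (both assumptions for illustration, not figures from the interview):

```python
# Rough solar panel output: power = irradiance * area * efficiency.
IRRADIANCE_W_PER_M2 = 1000.0  # standard peak-sun test condition

def panel_output_watts(area_m2, efficiency):
    return IRRADIANCE_W_PER_M2 * area_m2 * efficiency

# Rooftop range (15-18%) vs. lab-grade crystalline silicon (~23%).
for eff in (0.15, 0.18, 0.23):
    print(f"{eff:.0%} efficient 1.6 m^2 panel: "
          f"{panel_output_watts(1.6, eff):.0f} W at peak sun")
```

Real-world yields are lower, since panels rarely see peak irradiance and lose output to heat, angle, and shading.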

    The power efficiency is one measure, but the other thing that we're very concerned about is the cost of making them and the scale of production.

    In my opinion, the silicon technology doesn't scale [up] too well [because it's expensive to make]. We need to invent some new technology, [which] may not be as efficient, but you need to be able to make millions of acres of stuff if you want to get a lot of energy. People are trying to use new materials like plastics and nanoparticles.

    The total solar production in 2004 was around one thousandth of the total power consumption of the U.S. It's just not enough. Something's gotta change. We're not there yet. There's a lot of discoveries still to be made.

