
Online Customer Service Chat: Usability and Sociability Issues

By Dorine C. Andrews, D.C.D., Georgetown University

Email: dca4@georgetown.edu
Web: http://georgetown.edu/faculty/dca4


Karla N. Haworth, The Chronicle

Email: Karla.haworth@chronicle.com

Dorine Andrews is a research professor at Georgetown University's Communication, Culture and Technology Master's Program in Washington, DC. She teaches, researches and consults in the areas of marketing and the Internet, online community development, managing organization and technology change, and knowledge management. She received her doctorate in Communications Design from the University of Baltimore after 18 years as a business owner and management consultant. She has published books in systems design and business reengineering.

Karla Haworth is a graduate of Georgetown University's Communication, Culture and Technology Program. She is the web site manager for the Chronicle of Higher Education where she applies her expertise and continuing interest in website usability issues and methodologies.


Interest in customer service chat for use on e-commerce websites has grown significantly in recent years. It is viewed as a cost-effective way to reduce purchasing risk by increasing social interaction, responsiveness to consumer questions, and personalization of the shopping experience. However, there is little evidence demonstrating that this customer service solution improves the online shopping experience, reduces perceived purchase risk, or lowers purchase abandonment rates.

A usability study of five e-commerce websites that provide customer service chat was conducted to evaluate its viability as a customer service solution, whether operational problems interfere with its effective application, and whether a positive experience using customer service chat affects online shoppers' inclination to purchase.

Results indicate that technical and sociability issues must be addressed for customer service chat to be a successful customer service solution. Key findings include: (1) access to customer service chat is a primary barrier to a successful experience; (2) the customer service chat experience begins before the connection is made, as the user searches for the link/icon; (3) users expect to interact with an actual person during the chat and are disappointed by behavior that indicates “canned” or inattentive interaction; (4) when a positive customer service chat experience occurs, intention to purchase increases; and (5) poor usability may be mitigated by positive trust in an established company and brand. An operational checklist of critical end-user usability features is proposed based on the learning from the study.

The study concludes that the customer service chat experience is more complex than the chat itself; from the customer's perspective, the total experience begins before and extends beyond the chat session. A five-phase model with performance measures for each phase is proposed. An operational evaluation guide is also proposed based on the usability study results.


Although email is the most frequently offered form of online customer service support on websites, many consumers still prefer to call 24-hour, toll-free phone numbers to ask questions and place orders because it is faster and people can interact synchronously with a customer service representative (Smart Computing, 2000). Email can provide a significant level of social interaction, but consumers have been disappointed with its responsiveness. The average wait-time for a response, if any is received at all, is 28 hours (Stellin, 2002; White, 2000; Standley and Chu, 2000). An emerging alternative to slow email or expensive telephone customer service support is a service called "customer service chat."

Customer service chat consists of synchronous online text-based one-on-one interaction. Its technological foundation is instant messaging software, which allows individuals to communicate one-on-one with each other while they perform other computer-based tasks. Companies such as Sideware, Avaya Communications, Echopass, EGain Communications, Kana Communications, Servicesoft, Oracle, LivePerson, FaceTime, Cisco Systems, and NetDialog Inc. have created customized software for call center customer service representatives to interact synchronously, one-on-one, with consumers before, during and after they make online purchases. It is marketed as a cost-effective way to add social interaction and immediate responsiveness to the Internet shopping experience.

Customer service chat has been shown to be operationally less costly than telephone customer service support and about the same as email support. The total weighted cost for a customer service representative handling a phone call is $33 for the typical 8-10 minute call (Hulme, 2000). Email costs about one-third of telephone support, at $10 per typical inquiry (Zetlin, 2000). In contrast, customer service chat is purported to reduce these costs to between $4 and $10 per chat session (Wasserman, 2001; Standley and Chu, 2000). With customer service chat costing less than or equal to email service support, the question becomes whether chat technology is significantly more responsive to customers than email, where purchase abandonment rates are significant.
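Taken at face value, the cited cost figures imply the following rough economics. This is an illustrative sketch of the arithmetic only, not data from the present study:

```python
# Back-of-the-envelope comparison of per-contact support costs, using the
# figures cited above (Hulme, 2000; Zetlin, 2000; Wasserman, 2001;
# Standley and Chu, 2000).
PHONE_COST = 33.0               # typical 8-10 minute call, fully weighted
EMAIL_COST = 10.0               # roughly one-third of phone support
CHAT_COST_RANGE = (4.0, 10.0)   # purported cost per chat session

# Even at the top of its range, chat matches email and undercuts phone.
worst_case_saving = 1 - CHAT_COST_RANGE[1] / PHONE_COST
best_case_saving = 1 - CHAT_COST_RANGE[0] / PHONE_COST

print(f"Savings vs. phone: {worst_case_saving:.0%} to {best_case_saving:.0%}")
```

On these numbers, chat would cut the per-contact cost of phone support by roughly 70-88%, which is the cost case vendors were making for the technology.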

The Importance of Customer Service in E-commerce

At least 28% of online shoppers abandon a purchase before completing it (Greenfield Online, 2000). Some consumers abandon purchases because they perceive it as riskier to shop online than shopping in stores or over the telephone where consumers can obtain direct and immediate information to offset the risk of the item not meeting their expectations (Greenfield Online, 2000). Other consumers find the checkout process confusing, and others abandon their purchases because they cannot speak with a customer service representative (Wasserman, 2001; White, 2000; LivePerson, 2000).

These factors -- purchase risk, confusion, and lack of contact -- appear to be the primary, but not the only, reasons for purchase abandonment. Poor interface design (Lohse and Spiller, 1998; Nielsen, 2000), privacy and security (Culnan and Armstrong, 1999; Hoffman, Novak and Peralta, 1999), search attributes and pricing (Brynjolfsson and Smith, 1999), and emotional trade-offs of not shopping in a store (Luce, Payne, and Bettman, 1999) also have been documented.

Some of these primary and secondary factors may be mitigated with more effective online customer service interactions to improve contact and reduce risk factors. Others, such as poor interface design, may not. However, purchase risk factors such as perceived quality (Brucks, Zeithaml, and Naylor, 2000; Lal and Sarvary, 1997) and related purchase selection problems (Bauer, 1960, Levitt, 1986; Chaudhuri, 2000) are factors that may be directly addressed with customer service chat interaction. It appears that organizations are moving toward customer service chat as a low-cost mechanism to reduce online shopping risk (Wasserman, 2001; Hollman, 2000). Estimates for implementation of this software in call centers range from 27% to 45% within the next one to three years (Hulme, 2000; Vaczek, 2000).

Online Purchase Selection Problems That Increase Purchase Risk

When prospective consumers cannot taste, test, feel, smell, or watch a product in operation before buying it, they are buying a mere promise of satisfaction – a process that tends to increase the consumers' sense of purchase risk (Bauer, 1960; Levitt, 1986; Chaudhuri, 2000). Products that have variables such as size, texture, and color can be termed "high touch" goods. These variables, along with the lack of strong brand recognition, tend to create a greater perception of purchase risk for online consumers (E-Commerce Times, 2000). It appears that many consumers are motivated by their senses when they shop and want to interact with products directly (e.g., by touching, feeling, smelling, testing or watching the product first-hand), especially with non-standard products (Underhill, 1999; Tehrani, 2001; Greenberg, 2001).

Studies to better understand the risk issues for online consumers found online purchase risks similar to the risks associated with catalog and television home shopping (Cox and Rich, 1964; Kwon, Paek, and Arzeni, 1991; Spence, Engel, and Blackwell, 1970). For example, consumers cannot touch or try on items in catalog or TV shopping. Lessons from these types of distance shopping provide insights into potential solutions for online shopping. One lesson is to increase social interaction with the use of a shopping companion (Gronhaug, 1972; Kirsner, 1999; Swaminathan et al., 1999). A second lesson is to increase responsiveness to consumer questions, which has been linked to satisfaction and trust and may also reduce purchase risk (de Ruyter and Wetzels, 2000). Lastly, highly social interactive services, such as personalized shopper services, can be used to increase perceived service quality, satisfaction, and purchase intention (Reynolds and Arnold, 2000). In summary, research is needed to determine if customer service chat is a purchase risk-reducing mechanism. The research should address whether customer service chat can reduce purchase risk by increasing social interaction when shopping for "high touch" items online, improving responsiveness to consumer questions, and providing personalized shopping experiences. If customer service chat can perform in this manner, then there would appear to be an increased likelihood that customer service chat will contribute to increasing e-commerce sales by reducing purchase abandonment.

Anecdotal evidence that customer service chat increases sales has been documented (Johmann, 2000; LivePerson, 2000; Heggenstaller, 2001); however, systematic, methodologically sound research has not been conducted to demonstrate customer service chat's ability to decrease purchase abandonment or increase sales. It appears that operational problems may be contributing to this lack of purchase risk-reducing results. For example, a lack of broad availability reduces chat performance, and some companies offer the service only to selected customers (Vaczek, 2000). Implementing customer service chat requires the redesign of customer service operations to reshape how customer service representatives interact with customers (Waltner, 2000; Hollman, 2000). Additionally, the software can be difficult to properly install and manage (Hodge, 2000). Some companies find it poorly designed (Puente, 2000), inappropriate for complex problem solving (Bannan, 2000), and the source of workload management issues (Hodge, 2000).

Given the lack of systematic performance evidence, research is needed to determine whether customer service chat is a viable and beneficial online customer service mechanism in its current form. This paper documents the results of an in-depth usability study of five retail websites offering customer service chat to find the answers to these research questions:

  • Does customer service chat increase responsiveness to customer questions, social interaction of the shopping experience and personalization of the shopping experience?

  • Do operational problems interfere with effective application of customer service chat?

  • How does the usability of customer service chat affect shoppers’ inclination to purchase?

Study Methodology

Five retail shopping websites were selected for the usability study using the following criteria: (1) brand of customer service chat software installed, (2) hours customer service chat was provided, and (3) location of icons/links to initiate the customer service chat (Figure 1). Two websites were Internet-only e-commerce ventures. Three were extensions of established, well-branded retail companies.

E-commerce Website     | Chat Software Brand      | Hours of Chat                                  | Chat Icon Location*
1. Art/Posters         | Acuity WebCenter Express | 9 a.m.-7 p.m. Mon-Fri (CST)                    | CSP
2. Toys                | FaceTime                 | 9 a.m.-7 p.m. Mon-Fri (EST)                    | HP, CSP
3. Casual Clothing     | Webline                  | 9 a.m.-5 p.m. Sat-Sun; 24 hours/7 days a week  |
4. Casual Clothing     | EGain                    | 24 hours/7 days a week                         | CSP
5. Wedding Gifts       | LivePerson               | 9 a.m.-6 p.m. Mon-Fri (EST)                    | HP, OP, CSP
* CSP = customer service page; HP = home page; OP = other pages

Figure 1: Usability Test Websites

The usability study design sought to simulate the actual experiences of consumers shopping on the Internet. A variety of browser software was used: one subject used an AOL-modified IE browser, while the other subjects used either Netscape or Internet Explorer. Half of the subjects worked on a Macintosh platform and half used a PC platform. Tests were conducted in locations (e.g., work and home) that were comfortable and familiar to the subjects. This ensured that dial-up modem, DSL, and T1 connections were all used in the test, as recommended by Nielsen (2000) and Gordon (2000).

The usability study protocol included an opening statement describing the study and defining terms. Both pre- and post-test surveys were administered to each subject. The second-author researcher observed all website testing, using a "think aloud" protocol approach. She asked the subjects to describe their experiences aloud as they accessed and completed each website usability test. The researcher also reminded subjects that the websites, not the subjects, were under study (Gordon, 2000). In addition to what was said, the researcher observed body language and facial expressions to confirm confusion, frustration, satisfaction, and surprise.

The usability tests were administered in the last week of March 2001. Twelve subjects participated during the test period. The order in which a subject tested the websites was rotated to control for familiarity and learning. Subjects participated in the tests at different times of day, seven days a week, shopping at the times they normally shop; some tests were therefore conducted in the evening. Each subject's usability test of all five websites took approximately 90 minutes to complete. Twenty-five percent of the subjects were tested in two sessions due to scheduling needs. The pre-test survey collected data on subject demographics, Internet shopping history, Internet shopping experience, and attitude toward online shopping. Categories for product types used in the survey were based on those reported by other researchers (Hanrahan, 1998; Tehrani, 2001; Underhill, 1999, p. 213).

After completing the pre-test survey (Appendix 1), subjects were asked to complete six tasks on each of the five websites (Figure 2). Before initiating a customer service chat, subjects were also given a list of optional questions to ask the customer service representative (Appendix 2). Subjects were advised that they could ask their own questions or use ones from the list. The questions covered five broad categories: (1) item attributes such as size, feel, durability, taste or smell; (2) questions about availability, shipping, and order tracking; (3) price and billing; (4) return policies and consumer satisfaction policies; and (5) additional customer services, such as gift wrapping. These questions were developed based on research of questions consumers typically ask (Greenfield Online, 1999; Johmann and Rafferty, 2000).

A post-test survey (Appendix 3), using five-point Likert scale questions where appropriate, was administered after each website usability test. It contained 19 multiple-choice questions to assess subject feelings and attitudes about the website and its customer service chat. If a question could not be answered, it was marked "not applicable." After all five websites had been tested, subjects were asked for their overall impressions of online chat technology and their likelihood of using it again. The complete study protocol was tested with two subjects and revised accordingly before the full study was conducted. Subjects were assigned a control number to protect their privacy.

Study Subjects

The subjects were experienced Internet users (Figure 3) with profiles very similar to those most likely to shop online (Lohse, Bellman, and Johnson, 2000). Subjects were selected from people known to the researcher, not from a random sample. Subjects expressed continued interest in purchasing standardized products (e.g., CDs, books, videotapes, computers) online, but 83% said they “would not buy” or had “hesitation about purchasing” high touch or try-on products (e.g., jewelry, electronics costing more than $100, clothing, shoes, cars, furniture or perfume).

None of the subjects had used the customer service chats on any of the test websites prior to the usability test. They did not have extensive experience with the selected websites either.

Usability Test Results

A total of 35 customer service chats were completed out of 60 opportunities to chat. Two of the five e-commerce websites had considerable availability or connection problems, which accounted for the 42% test non-completion rate and indicates that customer service chat is not always convenient to use. In the 58% of customer service chat opportunities that were successful, subjects identified the chat features that performed well or poorly in each opportunity. An analysis of the results revealed four major groups of usability problems that made the chat difficult to use: (1) access, (2) technical design, (3) sociability design, and (4) website usability. These usability problems affected the customer experience and buying intention, especially when performance problems were compounded by low trust in the brand or company reputation. Appendix 4 summarizes the test results.
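The headline completion figures can be recomputed directly from the counts reported here, assuming the 60 opportunities correspond to the 12 subjects and 5 websites described in the methodology:

```python
# Recomputing the study's headline completion figures:
# 12 subjects each attempted a chat on 5 websites.
subjects = 12
websites = 5
attempts = subjects * websites    # 60 chat opportunities
completed = 35                    # chats actually completed

completion_rate = completed / attempts
failure_rate = 1 - completion_rate

print(f"{completed}/{attempts} completed: "
      f"{completion_rate:.0%} success, {failure_rate:.0%} failure")
```

This yields the 58% success and 42% non-completion rates cited throughout the results.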

Access Problems

Access is defined by two factors: the ability to find the chat feature (findability) and the time required to access it once found (immediacy), as defined in Figure 4. A key finding of the study is that over 33% of the subjects had difficulty finding the customer service chat feature. In an effort to simulate actual online shopping behavior, participants were not alerted to the chat icon placement and were instructed to seek out the chat feature only after finding something of potential purchase interest (see Figure 2 for the usability task protocol). Researchers observed that subjects scanned, but did not read, the website pages, as has been shown in previous Internet usage research (Nielsen, 2000b). If the links or icons for the chat were not above the fold on the home page or the product pages, subjects were unaware of chat availability. Some websites placed the feature only on their help pages or at the bottom of pages. Only one website, where 92% of the subjects found the feature "very easy" or "easy" to find, had placed the customer service chat icon in the upper left-hand corner of every page. This finding suggests that design issues for customer service chat begin with “above the fold” icon placement for the feature.

A connection to a customer service chat function was completed when the chat window opened and the user was welcomed and/or prompted to enter information. While subjects waited to be connected, they expressed their reactions to what was happening. A wait time of more than two minutes to reach a representative through the chat “window” was unacceptable to the test subjects. Slowness to connect was interpreted as a lack of company responsiveness and created frustration. Impatience and negative attitudes toward a website were dominant behavioral expressions while waiting. Had this not been a usability test situation, 80% of the subjects who had to wait more than two minutes said they would have abandoned the attempt to connect, although one could speculate that some of these users might have opened another browser to do other work or amuse themselves while they waited. However, waiting alternatives were not explored in this study.

Once a connection was completed and a chat window opened, subjects developed negative attitudes and became impatient when the customer service chat was not available, that is, when they could not interact with a customer service representative immediately. For example, when the service was unavailable, some subjects were automatically pushed to an email form rather than being offered a choice of next-step options. When there was a waiting queue after connecting, missing or inaccurate response-time estimates made some subjects feel like they were being “strung along.” Other frequent causes of unavailability included broken connections and “not open for business” announcements. One website experienced more service failures with Macintosh computers paired with Internet Explorer browser software. Some subjects were forced to wait anywhere from 5 to 10 minutes for interaction once connected. Subjects expected 24 x 7 availability comparable to their Internet connections, or at least availability during evenings and weekends, when shopping is most convenient for those who work full time. Many subjects stated that if customer service chat is offered at all, it should be available without having to wait for a customer service representative to come online.

Another aspect of immediacy is the pacing of the chat interaction itself. Although 67% of the subjects who participated in the chats were satisfied with the pacing of the interactions, 32% commented on the slowness and awkwardness of the interaction timing. For example, a subject would enter a question in the chat window and wait up to 60 seconds for a response. In summary, subject dissatisfaction appeared to increase when there was no interaction (silence), when they did not know whether the representative was connected, or when the customer's actual wait time and place in the queue were not indicated. This mirrors many telephone-based customer service interactions, however. Subjects responded more positively to silence when a representative typed, “Please hold while I check that for you,” or “I will get to you within a few minutes.” Researchers speculate that these silences may be due to workload issues with the chat customer service representatives. Unlike telephone customer service interactions, where representatives handle one customer at a time once connected, chat software providers estimate that representatives can handle up to four chats simultaneously. Companies that offer chat might consider limiting representatives to fewer simultaneous chats to quicken the pace of individual chat sessions and reduce periods of silence. It would be interesting to study the similarities and differences between waiting for online customer service chat interaction and waiting to speak with a customer service representative on the telephone. Post-test questions comparing telephone and online chat wait-time reactions were not asked in this study.

Sociability Design Problems

Sociability defines the human aspects of online interaction that create common ground, reciprocity, and other aspects of interaction that build trust among people communicating online (Preece, 2000). Cooperative, friendly behavior can be strengthened with social interaction and can be exhibited through words that engender commitment, excitement and optimism (Jarvenpaa, 1998). The results of this study indicate that sociability design problems are rooted in technical design problems as well as in the customer service representative’s communication during the chat itself (Figure 5). This study showed that sociability-related design mistakes created suspicion, a sense of distrust in the minds of some subjects, and a lack of confidence in the company behind the website. The sociability problems included:

  • Asking for personal information that subjects did not consider necessary for a chat (e.g., email address, mailing address, full name)

  • Missing or hidden privacy statements

  • Referring the subject to a URL instead of answering the question that was asked. Subjects were surprised and confused when this was done in lieu of answering their questions via the customer service chat session.

  • Lack of personalized interactions. For example, subjects believed some answers were generic or canned answers. Some subjects felt that the representative should have made suggestions or shared their insights. Other subjects resented brief cryptic answers that kept a conversation that subjects would characterize as “real” from developing.

  • Lack of politeness and etiquette. For example, subjects were offended by typographic errors in responses, incomplete sentences as responses, typing in capital letters, not alerting the subject to what was happening, not saying “Please” and “Thank you,” or requiring the entry of personal data to initiate the chat.

Three out of the five problems relate to the use of textual language by the customer service representatives during the chat. A poorly worded response negatively affected the test subjects’ opinions of the chat, even if the subjects received the information they needed. Some subjects demonstrated a desire to “connect” with a customer service representative. Subjects who gave the highest chat scores tended to feel that they were communicating with an actual person. For example, one subject was extremely pleased that a customer service representative said, “I would wear it in spring” and “This might look good on your body type.” Some of the most memorable chat experiences occurred with subjects who had conversations, however brief, about topics unrelated to the product being discussed. For instance, one subject asked a customer service representative whether she usually conducted several chats at once [the answer was “yes”]; another asked a representative if she was in the same state as the retailer’s main location [the answer was “yes”]. The representatives answered these questions directly, which impressed the subjects. As one subject commented, “I like to know I’m talking to a real person.” In other cases, subjects thanked the customer service representative at the end of the chat, and waited for a response, even though they already had the information they needed. When the researcher asked one subject why he was waiting for a response, he said, “To see if she’s nice.” Another expressed satisfaction after the chat when the customer service representative said, “Thank you, Jeff.” The subject, who had complained about the slow pace of the chat just a few minutes before, now seemed satisfied. “She was personal. 
I like that, even though she didn’t tell me what I wanted to know.” On the post-test survey, this subject indicated that the final customer service interaction positively impacted his overall feelings about the chat experience as often occurs in a face-to-face interaction. Again, comparative research on expectations surrounding telephone-based customer service interactions and chat interactions is needed to understand whether past experiences with telephone based customer service set expectations for chat customer service, or whether expectations about the internet itself override transference of expectations.

Observations of subjects during the usability tests also suggest that a single “right” way to interact may not be the most appropriate. Subjects tended to interpret customer service representatives' words differently. For example, one subject perceived it as rude when two customer service representatives on different websites ended the chat, instead of allowing the subject to do so. Some subjects also interpreted short answers from customer service representatives as curt or rude, while others appreciated them. Likewise, some subjects liked receiving extra information about a product during the chat, while others saw this as an unnecessary waste of their time. These findings support the conclusions of Reynolds and Arnold (2000), who found that personalized customer service encounters could be important determinants of perceived service quality, satisfaction and purchase intention. This result also points to the important role that verbal and visual cues play in managing the customer service experience. It appears that a lack of cues or a lack of training in this medium can degrade the overall customer service experience.

Technical Design Problems

As mentioned above, technical design problems can create or exacerbate sociability problems (Figure 5). The more complex the customer service chat's technical design, the more difficult it was for subjects to use it. Subjects were consistently confused by one website that used one window for the chat set-up and three additional windows for interaction during the chats. Technical design problems frequently frustrated subjects. These included:

  • Small or limited data entry areas (text boxes)

  • Inaccurate, cryptic or untimely messages, such as “Please wait. A service representative will be with you in approximately 2 minutes,” which remained displayed for 10 minutes before the chat began and throughout the chat

  • Inability to chat and view product pages simultaneously. Some subjects became confused when the chat window floated over the product pages

  • Being automatically directed to email when the initial chat connection failed

  • Misplacement of icons (e.g., sending a message or ending the chat)

  • Requiring answers to questions before chat is initiated

  • Lack of information concerning what was happening while waiting for a response

Website Usability Problems

The three websites with the poorest technical customer service chat performance also had additional website usability problems. These problems appeared to aggravate subject suspicions about the company behind the website and their ability to trust it. The worst of these problems included poor, missing or failed image displays, confusing terminology, limited product selection, inconsistent page design and navigation. From observing the participants, it became obvious to the researchers that unless clear visual cues are provided to direct a user through the flow of a website, users become lost or frustrated. This included obscuring or hiding access to customer service chat icons.

Chat Experience and Buying Intention

In 33% of the 35 completed customer service chat encounters, subjects said the experience exceeded expectations; in 31% of the encounters, subjects said it performed “as expected”; and in 36%, subjects said it performed “worse” than expected. Comparative baseline expectations about customer service chat were not captured in the pre-test. It could be that subjects had few definable expectations going into the study, since they had never experienced customer service chat prior to the usability test. In approximately two-thirds of the completed customer service chat encounters, subjects said they would buy from the website based on their customer service chat experience; in one-third, they said they would not. Combined with the 42% of attempts that failed to reach the customer service chat at all, there appear to be major obstacles to overcome before customer service chat can contribute to increased sales at e-commerce websites. When subjects tried to use the feature and it did not function, many noted they would either abandon the chat, attempt a different method of contact, or abandon the website altogether to find the item on another website.

When asked directly for their reasons for buying at a website after experiencing the customer service chat, subjects consistently and most frequently stated five reasons for a positive buying intention: (1) overall positive experience with the website (of which the chat is a part); (2) trust in the retailer; (3) confidence in product (brand); (4) price acceptability; and (5) low risk of purchase. This finding supports the research of Jarvenpaa, Tractinsky, and Vitale (1999), who found that consumers recognized differences in size and reputation among Internet stores and that these differences influenced consumers’ assessments of store trustworthiness and their perception of risk, as well as their willingness to patronize the store. This result implies that previous reputation and established positive brand experience can mitigate some negative online experience.

However, this study suggests that performance and established reputation (trust) are related in some very concrete ways. Two of the websites were new online ventures with little established reputation or brand recognition among the study subjects; the other three websites were extensions of well-known off-line retailers. The new ventures delivered the worst customer service chat performance: only 1 subject was able to connect on the arts/posters website, and only 4 subjects were able to connect on the wedding gifts website. Subjects clearly indicated that they would not buy items from these websites because of their overall experience and lack of trust in these retailers. Conversely, most subjects knew the brand and reputation of the other e-commerce retailers. Of the subjects who said they would not buy from the well-known retailer websites, only 20% stated non-trust issues (e.g., technical design problems) as underlying reasons. The implication is that a good reputation can help keep consumer trust from eroding when a technical problem occurs with customer service chat. The opposite, however, does not appear to hold: a lack of reputation combined with poor customer service chat technical performance appears to be a potent obstacle to acquiring new online customers.

Results Summary

This study attempted to answer three questions regarding the usability of customer service chat. The first question was "Does customer service chat increase responsiveness to customer questions, social interaction of the shopping experience and personalization of the shopping experience?" From this study, it appears that customer service chat has only the potential to increase responsiveness, social interaction and personalization of the shopping experience. Getting connected is a major obstacle. Forty-two percent of the attempts to use customer service chat failed, and in those instances a majority of subjects noted they would either abandon the chat, try another contact method, or abandon the website altogether. When a subject did connect to the customer service chat and interact with a representative, the chat experience was often positive in terms of responsiveness to the customer and the socialization and personalization of the shopping experience.

The second question was "Do operational problems interfere with effective application of customer service chat?" This study identified four categories of operational problems that interfere with the effective application of customer service chat: access, sociability design, technical design, and website performance. Until these operational problems are solved, it appears that customer service chat will not decrease customers’ risk perceptions of buying goods online. Based on the results of this study, those designing and implementing customer service chat functionality should remember four key points:

  1. Customer service chat begins before the connection is made. Online shoppers begin measuring their experience with customer service chat when they look for the link/icon on the retailer’s website, not after the connection is established. Access reliability and immediacy in making the connection are critical. Standards for consistent icon positioning above the fold on every page are recommended.

  2. People expect to be able to reach customer service whenever they are on the Internet. Online shoppers assume that a service offered on the Internet is supported at all times, or at least during the hours when people interested in those products are most likely to shop online. Physical store hours do not translate to the Internet.

  3. People expect to interact with an actual individual in customer service chat. Although their expectations of how the interaction should take place may vary, people appear to expect consistently that the experience be personalized, private, and conducted with appropriate grammar and language.

  4. Interactions must be tailored to the communication needs of the customer. People have different expectations concerning how the conversation should proceed and end. These expectations, not just whether their questions were answered, may be the drivers of customer satisfaction with customer service chat.

The third question asked, "How does the usability of customer service chat affect shoppers’ inclination to purchase?" Although no conclusive evidence of an actual increase in purchases can be offered from this study, satisfaction with customer service chat did result in several purchases. In addition, in approximately two-thirds of the 35 completed customer service chat encounters, subjects said they would buy from the website based on their customer service chat experience. Dissatisfaction with customer service chat consistently resulted in negative attitudes toward the website/retailer, eroding trust and willingness to purchase. There was some indication that poor usability may be mitigated by positive trust in an established company or brand; however, in this study customer service chat also performed best on the established retail and brand websites, so the two effects are difficult to separate.

Study Limitations

Although this study tested five different customer service chat software applications across a variety of customer platforms and browser technologies, only five websites currently offering customer service chat were tested, and the subject base was small. Therefore, the results of this study should not be generalized to all websites having the application or to the general population. However, the results do indicate some of the problems most likely to be encountered when people attempt to use customer service chat, and they can guide the design, development, operation and evaluation of customer service chat effectiveness. Research across a broader range of e-commerce websites would be welcome, along with research to better understand the drivers of customer service chat satisfaction and its relationship to other types of customer service experiences.

A Model for Measuring Customer Service Chat Satisfaction

It is clear from this small usability study that the customer service chat experience is complex and, from the customer's perspective, encompasses much more than the interactive chat itself. The experience begins with the search for the chat icon/link and ends only after the last line of text has been sent and received and the connection is severed. Some factors may mitigate the overall experience (e.g., website performance, reputation of the retailer), but the experience itself is a major driver of overall customer satisfaction. A model for that experience is proposed. It includes five distinct phases, each of which contributes to overall satisfaction with the customer service chat experience (Figure 6).

Phase 1: Icon Identification begins when the customer recognizes the need to talk with a customer service representative and ends when the customer has successfully found and clicked on the icon/link that takes him/her to the customer service chat page/window. This "access findability" could be measured in terms of time required to find the icon. The primary performance issue in this phase is the placement of the icon for easy recognition by the customer.

Phase 2: Service Connection begins when the customer arrives at the customer service page and ends when the service connection has been made. This phase should be performed automatically and immediately when the icon/link is clicked, and the customer should not be prompted for data entry during it. This "access immediacy" can be measured in terms of the time required to establish a successful connection as well as the number of attempts required to complete the connection.
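The "access immediacy" measure described for this phase (connection time plus attempt count) could be captured with a small instrumentation wrapper. The following sketch is illustrative only; `try_connect` is a hypothetical stand-in for whatever connection call a given chat client exposes, not an API from the study.

```python
import time

def measure_access_immediacy(try_connect, max_attempts=3, retry_delay=0.0):
    """Attempt a chat service connection and report access-immediacy metrics.

    `try_connect` is a hypothetical zero-argument callable returning True on
    a successful service connection, False otherwise. Returns a tuple of
    (connected, attempts_used, elapsed_seconds).
    """
    start = time.monotonic()
    for attempt in range(1, max_attempts + 1):
        if try_connect():
            return True, attempt, time.monotonic() - start
        time.sleep(retry_delay)  # brief pause before retrying
    return False, max_attempts, time.monotonic() - start
```

A connection that succeeds on the second try would be reported as two attempts, matching the "number of attempts" measure proposed for this phase.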

Phase 3: Representative Connection begins once the connection to the service is operational and ends with the recognition of the customer by the service representative. Performance issues in this phase include queuing, the wait time for a representative to become available, and requirements for registration and question entry before the connection with the representative is completed. Performance in this phase could be measured in terms of "access immediacy": the time to establish a connection with the representative and the number of fields/keystrokes required before the connection is made.

Phase 4: Chat is the actual interactive communication between the customer and the service representative. It begins with the recognition of the customer by the service representative and ends when the customer perceives no further need for interaction. Performance issues in this phase include the technical design and service representative sociability. The quality of the conversation could be measured in terms of customer-perceived "satisfaction with the conversation," the number of technical difficulties and, lastly, whether or not the customer's questions were answered. Factors contributing to conversation satisfaction could include response time intervals, language appropriateness, response appropriateness, and status signaling. Factors contributing to technical difficulties could include the number of lost connections, complexity of windowing, and maintenance of reference relationships (e.g., seeing the product and conducting the conversation at the same time).

Phase 5: Closure begins when the customer perceives no further need for interaction and ends when the service connection is broken. Performance issues in this phase focus on representative sociability. The quality of the closure could be measured in terms of whether the customer was allowed to end the conversation, whether the representative asked if there were more questions, and language appropriateness.
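The five phases and their proposed measures could be recorded as a single per-encounter structure for evaluation. The following Python sketch is a hypothetical illustration; the field names and the 60-second wait threshold are assumptions of this example, not values from the study.

```python
from dataclasses import dataclass

@dataclass
class ChatEncounter:
    icon_find_seconds: float      # Phase 1: access findability
    connect_seconds: float        # Phase 2: access immediacy (time)
    connect_attempts: int         # Phase 2: access immediacy (attempts)
    rep_wait_seconds: float       # Phase 3: wait for a representative
    fields_before_rep: int        # Phase 3: registration/question-entry burden
    questions_answered: bool      # Phase 4: conversation outcome
    technical_difficulties: int   # Phase 4: lost connections, windowing, etc.
    customer_ended_chat: bool     # Phase 5: closure quality

    def flags(self, max_rep_wait=60.0):
        """List phase-level problems worth reviewing (threshold is arbitrary)."""
        problems = []
        if self.connect_attempts > 1:
            problems.append("service connection required retries")
        if self.rep_wait_seconds > max_rep_wait:
            problems.append("long wait for a representative")
        if not self.questions_answered:
            problems.append("questions left unanswered")
        if not self.customer_ended_chat:
            problems.append("representative closed the chat")
        return problems
```

Aggregating such records across encounters would yield the per-phase success rates reported in Appendix 4 (e.g., share of encounters requiring retries, median wait times).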

This model, which should be validated in future research, provides a guide for online retailers who face a dichotomy when it comes to customer service. Many companies are attempting to reduce the labor-intensive costs of customer service by automating services and employing new technologies such as customer service chat (Brady, 2000). However, as this study and others demonstrate (Bitner, Brown, and Meuter, 2000), this has not yet led to service quality improvements. Twenty-five percent of Internet shoppers will not return to shop at a web site at which they had a purchase failure or were unable to obtain help (Vaczek, 2000). This study suggests an even greater risk of defection: 42% of attempts to use customer service chat could not be completed, and many of those subjects said they would abandon the chat or the website altogether. These findings suggest that offering customer service chat with less than optimal human resources or a poor technical software implementation is detrimental to a retailer's reputation and ability to acquire and keep customers.

If this usability study of five sites is indicative of the larger e-commerce world, customer service chat does not appear to be ready for use as a primary mechanism for delivering customer service. However, as this study demonstrates, if operational usability problems can be corrected, customer service chat can help reinforce the intention to buy and, as a result, reduce abandonment rates. The proposed model and a supporting evaluation guide (Figure 7) provide practitioners with an end-user-defined set of “must have” customer service chat features. For researchers, there is much to be explored in understanding how customer service experiences influence customer expectations, buying intention and the use of customer service chat online.

References


Bannan, K. (2000, October 23). Chatting up a Sale. The Wall Street Journal. [On-Line]. Available: http://interactive.wsj.com/public/current/articles/SB971800147663017706.htm

Bauer, R.A. (1960). Consumer Behavior as Risk Taking. In R.S. Hancock (Ed.), Dynamic Marketing for a Changing World (p. 389). Chicago: American Marketing Association.

Bitner, M.J., Brown, S.W, & Meuter, M.L. (2000). Technology Infusion in Service Encounters. Academy of Marketing Science Journal, 28 (1), 138-149.

Brady, D. (2000, October 23). Why Service Stinks. Business Week. [On-Line]. Available: http://www.businessweek.com/2000/00_43/b3704001.htm

Brucks, M., Zeithaml, V.A., & Naylor, G. (2000). Price and Brand Name as Indicators of Quality Dimensions for Consumer Durables. Academy of Marketing Science Journal, 28 (3), 359-64.

Brynjolfsson, E., & Smith, M.D. (1999). Frictionless Commerce? A Comparison of Internet and Conventional Retailers. Management Science, 46 (4), 563-585. Available: http://ebusiness.mit.edu/papers/friction.

Chaudhuri, A. (2000). A Macro Analysis of the Relationship of Product Involvement and Information Search: The Role of Risk. Journal of Marketing Theory and Practice, 8 (1), 1-15.

Cox, D.F. and S.U. Rich (1964). Perceived Risk and Consumer Decision Making: The Case of Telephone Shopping. Journal of Marketing Research, 1, 32-39.

Culnan, M. J., & Armstrong, P.K. (1999). Information Privacy Concerns, Procedural Fairness, and Impersonal Trust: An Empirical Investigation. Organization Science, 10 (1), 104-115.

Greenberg, P.A. (2001, February 21). E-Grocers: Express Line to Oblivion. E-Commerce Times. [On-Line]. Available: http://www.ecommercetimes.com/perl/story/?id=7647

Greenfield Online. (1999). Shopping 2000: A Digital Consumer Study. [On-Line]. Available: http://www.greenfieldcentral.com/research_findings/Shopping%202000/shopping_2000.htm

Gronhaug, K. (1972). Buying Situation and Buyer’s Information Behaviour. European Marketing Research Review, 7, 33-48.

Hanrahan, T. (1998, December 7). Lessons Learned: A Guide to What Does and Doesn’t Sell Online. The Wall Street Journal. [On-Line]. Available: http://interactive.wsj.com/public/current/articles/SB912728853262774500.htm

B. Heggenstaller, Vice President of Operations for Woolrich, Inc. (Telephone interview, March 6, 2001).

C. Hodge. (Message posted to the Netpreneur Ad-Marketing Listserv, January 23, 2000). http://www. netpreneur.org

Hoffman, D.L., Novak, T.P., & Peralta, M. (1999). Building Consumer Trust Online. Communications of the ACM, 42 (4), 80-85.

Hollman, L. (2000, December 5). Web Surf’s Up for Call Centers. Call Center. [On-Line]. Available: http://www.callcentermagazine.com/article/CCM20001129S0006/3

Hulme, G.V. (2000). Help! Sales and Marketing Management, 152 (2), 78-84.

Jarvenpaa, S.L., Tractinsky, N., & Vitale, M. (1999). Consumer Trust in an Internet Store. Information Technology and Management, 1 (1-2), 45-71.

Jarvenpaa, S.L. (1998) Communication and trust in global virtual teams. Journal of Computer Mediated Communication, 3, (4), Available: http://www.ascusc.org/vol3/issue4/jarvenpaa.html.

Johmann, J.M. (2000, September 19). The Wedding List Online Sales Surge After LivePerson Implementation. Press release on LivePerson web site. [On-Line]. Available: http://ir.liveperson.com/ireye/ir_site.zhtml?ticker=lpsn&script=410&layout=6&item_id=117676

Johmann, J.M., & Rafferty, A. (2000, January). Half of Online Shoppers Want Better Customer Service or the Ability to Speak or Chat With a Sales Representative. Press release on LivePerson web site. [On-Line]. Available: http://ir.liveperson.com/ireye/ir_site.zhtml?ticker=lpsn&script=410&layout=6&item_id=146242

Kirsner, S. (1999, August 1). Complex Commerce. CIO Magazine. [On-Line]. Available: http://www.vio.com/archive/webbusiness/080199_main_content.html.

Kwon, Y.H., Paek, S.L., & Arzeni, M. (1991). Catalog vs. Non-Catalog Shoppers Apparel: Perceived Risks, Shopping Orientations, Demographics, and Motivations. Clothing and Textiles Research Journal, 10 (1), 13-19.

Lal, R., & Sarvary, M. (1998). Does the Internet Always Intensify Price Competition? Stanford Research Paper 1457R (April).

Levitt, T. (1986). Marketing Intangible Products and Product Intangibles. In Levitt, T. (Ed.), The Marketing Imagination (new, expanded edition) (p. 95-110). New York: The Free Press.

LivePerson customer service white paper. (November 2000). Creating a Service-Driven Web Site: The Value of Customer Service to an Online Brand.

Lohse, G.L., Bellman, S., & Johnson, E.J. (2000). Consumer Buying Behavior on the Internet: Findings from Panel Data. Journal of Interactive Marketing 14 (1) (Winter). [On-Line]. Available: http://ecom.gsb.columbia.edu/Papers/99wvtm2.pdf

Luce, M.F., Payne, J.W., & Bettman, J.R. (1999). Emotional Trade-Off Difficulty and Choice. Journal of Marketing Research, 36 (2), 143-159.

Nielsen, J. (2000a). Designing Web Usability: The Practice of Simplicity. Indianapolis: New Riders Publishing.

Nielsen, J. (2000b, March 19). Why You Only Need to Test With 5 Users. In Nielsen’s online Alertbox column at www.useit.com. [On-Line]. Available: http://useit.com/alertbox/20000319.html

Oberndorf, S. (1996). A New Breed of Catalogers. Catalog Age, 13 (12), 560.

Preece, J. (2000). Online Communities: Designing Usability, Enabling Sociability. New York: John Wiley & Sons.

Preece, J., Rogers, Y., Sharp, H., & Benyon, D. (1994). Human-Computer Interaction. New York: Addison-Wesley.

Puente, M. (2000, December 6). Customer Service With a :-). USA Today. Available in USA Today online archive, http://www.usatoday.com.

Reynolds, K.E., & Arnold, M.J. (2000). Customer Loyalty to the Salesperson and the Store: Examining Relationship Customers in an Upscale Retail Context. The Journal of Personal Selling and Sales Management, 20 (2), 89-98.

de Ruyter, K., & Wetzels. M.G.M. (2000). The Impact of Perceived Listening Behavior in Voice-to-Voice Service Encounters. Journal of Service Research, 2 (3), 276-284.

Live Customer Assistance: Live Chat Tools Can Turn Browsers Into Buyers. (2000). Smart Computing, 6 (12), 45-47.

Spence, H.E., Engel, J.F., & Blackwell, R.D. (1970). Perceived Risk in Mail-Order and Retail Store Buying. Journal of Marketing Research, 7, 364-369.

Standley, A., & Chu, J. (2000). Live Web-Based Customer Assistance: Using Service to Increase Sales. Mainspring report.

Swaminathan, V., Lepkowska-White, E., & Rao, B.P. (1999). Browsers or Buyers in Cyberspace? An Investigation of Factors Influencing Electronic Exchange. Journal of Computer-Mediated Communication, 5 (2). [On-Line]. Available: http://www.ascusc.org/jcmc/vol5/issue2/swaminathan.htm.

Tehrani, R. (2001). Apres Le Deluge: Or Click, Bam… Thank You, Ma’am. Customer Inter@ction Solutions, 19 (7), 12-16.

Underhill, P. (1999). Why We Buy: The Science of Shopping. New York: Simon and Schuster.

Vaczek, D. (2000, December 8). Talking Up Voice and Text. eMarketing Magazine. [On-Line]. Available: http://www.emarketingmag.com/articles/dec00/dec-8.cfm

Waltner, C. (2000, December 4). Live Internet Service Set to Capture Customer Attention. InformationWeek.com. [On-Line]. Available: http://www.informationweek.com/815/live.htm

Wasserman, L. (2001). Live Interaction: What’s Needed on the Web. Customer Inter@ction Solutions, 19 (7), 58-60.

J. White, Internet strategist for White and Associates. (Telephone interview, October 11, 2000).

Zetlin, M. (2000). E-Customer Service Gets Real. Computerworld, 34 (44), 56-57.

Appendix 1: Pre-test Survey

Pre-test survey

Note: This questionnaire has been compressed for publication. The survey was completed for each chat experience.

Before we begin the usability exercise, please take a minute to fill out the questions below.

  1. What is your age?
    ___ 21-35       ___ 36-50      ___ 51-65      ___ 66 or older

  2. What is your gender?
    __ Female      __ Male

  3. How comfortable are you using the Internet to gather information about products?
    __ Extremely      __ Moderately      __ Not at all      __ I don't use the Internet to gather product information

  4. How comfortable are you buying products via the Internet?
    __ Extremely      __ Moderately      __ Not at all      __ I don't use the Internet to buy products

  5. How likely are you to buy these products on the Internet?

    • Compact Disc (CD):
      __ Very likely      __ Somewhat likely      __ Unlikely      __ I would not buy these on the Internet

  6. Have you bought CDs, books, videotapes or DVDs, computer software or computer hardware via the Internet before?
    __ Yes      __ No

  7. If you answered yes to question 6, estimate the number of times during the past 6 months that you have bought CDs, books, videotapes or DVDs, computer software or computer hardware on the Internet.
    __ 0      __ 1-5      __ 6-10      __ 11-15      __ 16 or more

  8. How likely are you to buy items via the Internet that you might usually examine in person prior to purchase? Examples include jewelry, electronics over $100, clothing, shoes, perfume or cologne, cars, or furniture that you have not examined in person in a store prior to purchase:
    __ Very likely      __ Somewhat likely      __ Unlikely      __ I would not buy these on the Internet

  9. Have you bought the types of items mentioned in question 8 via the Internet?
    __ Yes      __ No

  10. If you answered yes to question 9, estimate the number of times during the past 6 months that you have bought the types of items mentioned in question 8 on the Internet:
    __ 0      __ 1-5      __ 6-10      __ 11-15      __ 16 or more

  11. If you did not buy the types of items mentioned in question 8 during the past 6 months, why not? (please explain)

Appendix 2: Questions for Customer Service Representatives

Questions for customer service representatives

Item attributes:

Size, texture, or color

  • Does a size run larger or smaller than normal?

  • How do I know if it will fit me?

  • Is the color I see on the screen the "true" color?

  • What would you recommend for someone with my body shape/hair color/skin tone?

Type, feel, or durability of the item

  • What kind of fabric is the item made out of?

  • How durable is it?

  • Can I put it in the washing machine/dish washer, or should it be hand-washed?

  • Can you send me a fabric sample in the mail?

Smell/taste of an item

  • If it's a perfume, can you describe the smell?

  • Can you send me a swatch or sample of the item?

How fast you can get the item:

Order tracking

  • How do I track when my order has been shipped?


  • How long will it take this product to arrive?

  • Is it available now, or is it back ordered?

  • Will I have to wait to receive the item? If so, how long?

Pricing and billing:


  • I'm looking for an item, but I don't want to spend more than X amount. Can you recommend something?

Shipping and handling

  • How much would it cost to ship to me?

  • Can this product be sent overseas?


  • Do I have to pay with a credit card, or can I be billed and pay by check?

What if I don't like the item:

Return policy/customer satisfaction policy/guarantees

  • If I want to return an item, who pays for shipping?

  • Can I return an item for any reason?

  • Is there a time limit on returns?

  • What if a product is defective or arrives damaged?

  • Can I cancel an order before it ships?

Customer services:

Help completing a transaction

  • I'm trying to buy X, but I can't figure out how. (I got an error message, the site isn't working as it should, etc.)

Gift wrapping

  • Can the item be gift wrapped?

  • Is there a charge for wrapping?

  • Can I include a card?

  • If so, is there a word limit to my message?

  • If it is a gift, can it be shipped to somebody else without a receipt in it?

Appendix 3: Post-test Survey

Post-test survey

Note: This questionnaire has been compressed for publication. The survey was completed for each chat experience.

  1. Have you used this Web site before?
    __ Yes      __ No

If you answered “No” to this question, please skip to question number 6.

  2. If so, how many times?
    __ 1-5 times       __ 6-10 times      __ 11-15 times      __ 16 or more

  3. What did you use the site for? (check all that apply)
    __ Browsing      __ Buying      __ Price comparison      __ Gathering information on product      __ Other

  4. Before today, have you used the chat function on this Web site?
    __ Yes      __ No

  5. If so, what did you use it for? (please describe)

In this section, please answer only the questions that apply to your experience.

Note: For these questions the answer scale is:

     __ Strongly agree      __ Agree      __ Neither agree nor disagree      __ Disagree      __ Strongly disagree

  6. The chat application was easy to find on the Web site:

  7. The chat application was easy to launch:

  8. The chat function was available when I wanted to use it:

  9. The waiting time before the chat started was satisfactory to me:

If you did not use the chat function on this site, skip this section and go on to question 20.

Note: For these questions the answer scale is:

     __ Strongly agree      __ Agree      __ Neither agree nor disagree      __ Disagree      __ Strongly disagree

  10. The chat function on this site was easy to use:

  11. The company used terminology I understood:

  12. The information I received was helpful:

  13. The customer service interaction was courteous:

  14. Questions were answered to my satisfaction:

  15. The customer service interaction felt personalized:

  16. The company recommended other items I might consider buying:

  17. The customer service interaction moved at a pace that was satisfactory to me:

  18. Overall, this chat encounter was:
         __ Better than expected      __ About what I expected      __ Worse than expected

  19. If you were actually to consider buying the item you chatted about, based on this customer service experience, would you buy the item from this Web site?
         __ Yes      __ No

    a. If yes, why? (Check all that apply)
      __ Good overall experience
      __ I have confidence that the product would meet my expectations
      __ This would be a low-risk purchase about which I do not have concerns
      __ I trust the retailer to guarantee that I am satisfied with the product once it arrives
      __ The price of the item is acceptable to me.
      __ I did not want to compare the price of this product on different sites.
      __ Other _________________________________________________________

    b. If no, why not? (Check all that apply)
      __ Poor overall experience
      __ I have no confidence that the product would meet my expectations
      __ This would be a high-risk purchase about which I have concerns
      __ I do not trust the retailer to guarantee that I am satisfied with the product once it arrives
      __ The price of the product is unacceptable to me.
      __ I want to compare the price of this product on competing Web sites.
      __ Other _______________________________________________________________

Please take a moment to reflect on this chat experience, and fill out the questions below.

  20. What did you like about the customer interaction experience on this site? Please be specific.

  21. What did you dislike about the customer interaction experience on this site? Please be specific.

  22. Your comments about your experience.

Appendix 4: Usability Test Results Summary

Usability Test Subject Experience     n=12

  #1 Art #2 Toys #3 Clothing #4 Clothing #5 Gifts
Subject Previous Test Website Experience
Customer Svc. Chat 0 0 0 0 0
Browse/gather only 4 1 1 3 0
Purchase and browse 0 1 3 0 0
No experience 8 10 8 9 12
(1) Finding/launching chat
1st attempt 1 6** 9 11 2*
Reqt. 2nd attempt     3 1 2
Got email instead   4**      
Failed to connect 5 2      
Not available to connect 6       8
Summary of Test Opportunities
Total opportunities 60 100%      
Successful chat tests 35 58%      
Non-completions 25 42%      

* Although 4 connected eventually, only 2 chatted; 2 were dropped and unable to reconnect
** 10 connected, but only 6 chatted; 4 were transferred to email entry page

Post-Test Survey Results #1 Art #2 Toys #3 Clothing #4 Clothing #5 Gifts
Easy to find n=12 n=12 n=12 n=12 n=12
Agreed 2 7 8 6 11
Neutral 2 1 0 1 1
Disagreed 7 4 4 5 0
No answer 1 0 0 0 0
Easy to launch          
Agreed 2 5 9 10 8
Neutral 2 1 1 1 1
Disagreed 2 5 2 1 0
No answer 6 1 0 0 3
Available when needed          
Yes 1 5 12 10 4
Neutral 0 1 0 1 1
No 11 6 0 1 7
Met expectations n=1 n=10** n=12 n=12 n=4*
Better   3 4 6  
As expected   3 6 3 2
Worse 1 4 2 3 2
Connect Wait (0-5 min)
Satisfactory 1 3 7 7 2
Neutral   1 0 1 2
Unsatisfactory   6 5 4  
Pace/response speed
Satisfactory   6 7 7 1
Neutral   0 2 3  
Unsatisfactory 1 4 3 2 1
Easy to Use once connected
Strongly Agreed/Agreed 1 6 8 10 2
Neutral   2 1 2  
Strongly Disagree/Disagreed   2 3   1
Information helpful
Strongly Agreed/Agreed   8 12 8 2
Neutral   2 0 1  
Strongly Disagree/Disagreed 1 0 0 1  
Terminology OK
Strongly Agreed/Agreed 1 6 11 10 2
Neutral   2 1 1  
Strongly Disagree/Disagreed   2   1  
Personalized Interaction
Strongly Agreed/Agreed 1 8 8 7 1
Neutral   0 2 1 1
Strongly Disagree/Disagreed   2 1 3  
Strongly Agreed/Agreed 1 8 11 9 1
Neutral   2 1 2 1
Strongly Disagree/Disagreed   0   1  
Questions Answered
Strongly Agreed/Agreed   8 11 10 2
Neutral   2 1 0  
Strongly Disagree/Disagreed 1 0   2  

* Although 4 connected eventually, only 2 chatted; 2 were dropped and unable to reconnect
** 10 connected, but only 6 chatted; 4 were transferred to email entry page

  #1 Art #2 Toys #3 Clothing #4 Clothing #5 Gifts
  n=1 n=6 n=12 n=12 n=2
Based on chat, subject would buy
Yes   3 10 10  
No 1 3 2 2 2

Reasons for buying (stated by at least half of subjects):
  • Overall experience
  • Trust in retailer
  • Price acceptable
  • Low risk purchase
  • Product confidence