884 results for web content


Relevance: 20.00%

Abstract:

Since the 1980s, industries and researchers have sought to better understand the quality of services due to the rise in their importance (Brogowicz, Delene and Lyth 1990). More recent developments with online services, coupled with growing recognition of service quality (SQ) as a key contributor to national economies and as an increasingly important competitive differentiator, amplify the need to revisit our understanding of SQ and its measurement. Although ‘SQ’ can be broadly defined as “a global overarching judgment or attitude relating to the overall excellence or superiority of a service” (Parasuraman, Berry and Zeithaml 1988), the term has many interpretations. There has been considerable progress on how to measure SQ perceptions, but little consensus has been achieved on what should be measured. There is agreement that SQ is multi-dimensional, but little agreement as to the nature or content of these dimensions (Brady and Cronin 2001). For example, within the banking sector there exist multiple SQ models, each consisting of varying dimensions. The existence of multiple conceptions and the lack of a unifying theory cast doubt on the credibility of existing conceptions and raise the question of whether it is possible, at some higher level, to define SQ broadly such that it spans all service types and industries. This research aims to explore the viability of a universal conception of SQ, primarily through a careful re-visitation of the services and SQ literature. The study analyses the strengths and weaknesses of the highly regarded and widely used global SQ model (SERVQUAL), which reflects a single-level approach to SQ measurement. The SERVQUAL model states that customers evaluate the SQ of each service encounter based on five dimensions, namely reliability, assurance, tangibles, empathy and responsiveness. SERVQUAL, however, fails to address what needs to be reliable, assured, tangible, empathetic and responsive. This research also addresses a more recent global SQ model from Brady and Cronin (2001), the B&C (2001) model, which has the potential to be the successor of SERVQUAL in that it encompasses other global SQ models and addresses the ‘what’ questions that SERVQUAL did not. The B&C (2001) model conceives of SQ as multidimensional and multi-level, a hierarchical approach to SQ measurement that better reflects human perceptions. In line with the initial intention of SERVQUAL, which was developed to be generalizable across industries and service types, this research aims to develop a conceptual understanding of SQ, via literature and reflection, that encompasses the content and nature of factors related to SQ, and that addresses the benefits and weaknesses of the various SQ measurement approaches (i.e. disconfirmation versus perceptions-only). Such an understanding of SQ seeks to transcend industries and service types, with the intention of extending our knowledge of SQ and assisting practitioners in understanding and evaluating SQ. The candidate’s research has been conducted within, and seeks to contribute to, the ‘IS-Impact’ research track of the IT Professional Services (ITPS) Research Program at QUT. The vision of the track is “to develop the most widely employed model for benchmarking Information Systems in organizations for the joint benefit of research and practice.” The ‘IS-Impact’ research track has developed an Information Systems (IS) success measurement model, the IS-Impact Model (Gable, Sedera and Chan 2008), which seeks to fulfill the track’s vision.
Results of this study will help future researchers in the ‘IS-Impact’ research track address questions such as:
• Is SQ an antecedent or consequence of the IS-Impact model, or both?
• Has SQ already been addressed by existing measures of the IS-Impact model?
• Is SQ a separate, new dimension of the IS-Impact model?
• Is SQ an alternative conception of the IS?
Results from the candidate’s research suggest that SQ dimensions can be classified at a higher level which is encompassed by the B&C (2001) model’s three primary dimensions (interaction, physical environment and outcome). The candidate also notes that it might be viable to re-word the ‘physical environment quality’ primary dimension to ‘environment quality’ so as to better encompass both physical and virtual scenarios (e.g. web sites). The candidate does not rule out the global feasibility of the B&C (2001) model’s nine sub-dimensions, but acknowledges that more work has to be done to better define them. The candidate observes that the ‘expertise’, ‘design’ and ‘valence’ sub-dimensions are supportive representations of the ‘interaction’, ‘physical environment’ and ‘outcome’ primary dimensions respectively. The latter statement suggests that customers evaluate each primary dimension (or each higher level of SQ classification), namely ‘interaction’, ‘physical environment’ and ‘outcome’, based on the ‘expertise’, ‘design’ and ‘valence’ sub-dimensions respectively. The ability to classify SQ dimensions at a higher level, coupled with support for the measures that make up this higher level, leads the candidate to propose the B&C (2001) model as a unifying theory that acts as a starting point for measuring SQ and the SQ of IS. The candidate also notes, in parallel with the continuing validation and generalization of the IS-Impact model, that there is value in alternatively conceptualizing the IS as a ‘service’ and ultimately triangulating measures of IS SQ with the IS-Impact model. These further efforts are beyond the scope of the candidate’s study. Results from the candidate’s research also suggest that both the disconfirmation and perceptions-only approaches have their merits, and that the choice of approach depends on the objective(s) of the study. Should the objective be an overall evaluation of SQ, the perceptions-only approach is more appropriate, as it is more straightforward and reduces administrative overhead. However, should the objective be to identify SQ gaps (shortfalls), the (measured) disconfirmation approach is more appropriate, as it can identify areas that need improvement.
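To make the contrast between the two measurement approaches concrete, the sketch below computes disconfirmation (gap) scores and a perceptions-only score over the five SERVQUAL dimensions named above. The ratings are entirely hypothetical 7-point values invented for illustration; nothing here reproduces the candidate's instruments or data.

```python
# Toy comparison of disconfirmation (gap) scoring versus perceptions-only scoring.
# Ratings are hypothetical 7-point values; dimension names follow the SERVQUAL
# dimensions mentioned in the abstract.

DIMENSIONS = ["reliability", "assurance", "tangibles", "empathy", "responsiveness"]

expectations = {"reliability": 6.5, "assurance": 6.0, "tangibles": 5.0,
                "empathy": 5.5, "responsiveness": 6.2}
perceptions = {"reliability": 5.8, "assurance": 6.1, "tangibles": 4.5,
               "empathy": 5.0, "responsiveness": 5.2}

def disconfirmation_gaps(expect, perceive):
    """Gap score per dimension: perception minus expectation.
    Negative values flag dimensions falling short of expectations."""
    return {d: perceive[d] - expect[d] for d in DIMENSIONS}

def perceptions_only_score(perceive):
    """Overall SQ as the mean of perception ratings alone."""
    return sum(perceive[d] for d in DIMENSIONS) / len(DIMENSIONS)

gaps = disconfirmation_gaps(expectations, perceptions)
for dim, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"{dim:15s} gap = {gap:+.2f}")
print(f"perceptions-only overall SQ = {perceptions_only_score(perceptions):.2f}")
```

The gap scores point to specific shortfalls (the diagnostic use described above), while the perceptions-only score gives a single overall evaluation with half the questionnaire burden.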

Relevance: 20.00%

Abstract:

Web 1.0 referred to the early, read-only internet; Web 2.0 refers to the ‘read-write web’ in which users actively contribute to as well as consume online content; Web 3.0 is now being used to refer to the convergence of mobile and Web 2.0 technologies and applications. One of the most important developments in Web 3.0 is geography: with many mobile phones now equipped with GPS, mobiles promise to “bring the internet down to earth” through geographically aware, or locative, media. The internet was earlier heralded as “the death of geography”, with predictions that, because anyone could access information from anywhere, geography would no longer matter. But mobiles are disproving this. GPS allows the location of the user to be pinpointed, and the mobile internet allows the user to access locally relevant information, or to upload content which is geotagged to a specific location. It also allows locally specific content to be sent to the user when the user enters a specific space. Location-based services are one of the fastest-growing segments of the mobile internet market: the 2008 AIMIA report indicates that user access of local maps increased by 347% over the previous 12 months, and restaurant guides/reviews increased by 174%. The central tenet of cultural geography is that places are culturally constructed, comprising the physical space itself, culturally inflected perceptions of that space, and people’s experiences of the space (Lefebvre 1991). This paper takes a cultural geographical approach to locative media, anatomising the various spaces which have emerged through locative media, or “the geoweb” (Lake 2004). The geoweb is such a new concept that, to date, critical discourse has treated it as a somewhat homogeneous spatial formation. To counter this, and to demonstrate the dynamic complexity of the emerging spaces of the geoweb, the paper provides a topography of different types of locative media space, including: the personal/aesthetic, in which individual users geotag specific physical sites with their own content and meanings; the commercial, like the billboards which speak to individuals as they pass in Minority Report; and the social, in which one’s location is defined by the proximity of friends rather than by geography.

Relevance: 20.00%

Abstract:

In the field of the semantic grid, QoS-based Web service scheduling for workflow optimization is an important problem. However, in a semantic- and service-rich environment such as the semantic grid, context constraints on Web services are common, requiring the scheduler to consider not only the quality properties of Web services but also the inter-service dependencies formed by the context constraints imposed on them. In this paper, we present a repair genetic algorithm, namely a minimal-conflict hill-climbing repair genetic algorithm, to address scheduling optimization problems in workflow applications in the presence of domain constraints and inter-service dependencies. Experimental results demonstrate the scalability and effectiveness of the genetic algorithm.
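The following is a minimal sketch of a repair genetic algorithm of the kind described above, under heavily simplified assumptions: each workflow task selects one candidate service scored by a single hypothetical QoS cost, and inter-service dependencies are modelled as pairs of selections that must not co-occur. The data, fitness function and parameters are illustrative only, not the paper's.

```python
# Toy repair GA for QoS-based workflow service scheduling. Each of N_TASKS tasks
# picks one of N_CANDIDATES services (scored by a hypothetical QoS cost);
# "incompatible" selection pairs stand in for inter-service dependency constraints.
import random

random.seed(42)
N_TASKS, N_CANDIDATES = 8, 5
COST = [[random.uniform(1, 10) for _ in range(N_CANDIDATES)] for _ in range(N_TASKS)]
# Pairs of (task, candidate) selections that must not occur together.
INCOMPATIBLE = [((0, 1), (3, 2)), ((2, 0), (5, 4)), ((1, 3), (6, 3))]

def violations(plan):
    """Number of violated inter-service dependency constraints."""
    return sum(1 for (a, ca), (b, cb) in INCOMPATIBLE if plan[a] == ca and plan[b] == cb)

def repair(plan):
    """Minimal-conflict hill climbing: move a conflicted task to the candidate
    with the fewest resulting violations (ties broken by QoS cost)."""
    plan = plan[:]
    while violations(plan) > 0:
        conflicted = [t for pair in INCOMPATIBLE
                      if all(plan[t2] == c2 for (t2, c2) in pair)
                      for (t, _) in pair]
        task = random.choice(conflicted)
        plan[task] = min(range(N_CANDIDATES),
                         key=lambda c: (violations(plan[:task] + [c] + plan[task + 1:]),
                                        COST[task][c]))
    return plan

def fitness(plan):
    """Total QoS cost (lower is better); repaired plans are always feasible."""
    return sum(COST[t][plan[t]] for t in range(N_TASKS))

def tournament(pop):
    return min(random.sample(pop, 3), key=fitness)

def evolve(pop_size=30, generations=50, mutation_rate=0.2):
    pop = [repair([random.randrange(N_CANDIDATES) for _ in range(N_TASKS)])
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament(pop), tournament(pop)
            cut = random.randrange(1, N_TASKS)
            child = p1[:cut] + p2[cut:]                   # one-point crossover
            if random.random() < mutation_rate:
                child[random.randrange(N_TASKS)] = random.randrange(N_CANDIDATES)
            new_pop.append(repair(child))                 # repair instead of penalise
        pop = new_pop
    return min(pop, key=fitness)

best = evolve()
print("best assignment:", best, "QoS cost:", round(fitness(best), 2))
```

Repairing offspring rather than penalising infeasible ones keeps the whole population feasible, which is the motivation for a minimal-conflict hill-climbing step inside the GA loop.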

Relevance: 20.00%

Abstract:

Query reformulation is a key user behavior during Web search. Our research goal is to develop predictive models of query reformulation during Web searching. This article reports results from a study in which we automatically classified the query-reformulation patterns for 964,780 Web searching sessions, composed of 1,523,072 queries, to predict the next query reformulation. We employed an n-gram modeling approach to describe the probability of users transitioning from one query-reformulation state to another to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction, coverage of the dataset, and complexity of the possible pattern set. The results show that Reformulation and Assistance account for approximately 45% of all query reformulations; furthermore, the results demonstrate that the first- and second-order models provide the best predictability, between 28 and 40% overall and higher than 70% for some patterns. Implications are that the n-gram approach can be used for improving searching systems and searching assistance.
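As an illustration of the n-gram approach, the sketch below estimates transition probabilities between reformulation states from session histories and predicts the most likely next state. The state labels and toy sessions are hypothetical; the paper's classification scheme and dataset are far larger.

```python
# Toy n-gram model over query-reformulation states (not the paper's implementation).
from collections import Counter, defaultdict

# Hypothetical session histories: each session is a sequence of reformulation states.
sessions = [
    ["New", "Reformulation", "Assistance", "Reformulation", "Content"],
    ["New", "Assistance", "Reformulation", "Reformulation"],
    ["New", "Reformulation", "Content", "Reformulation"],
]

def train_ngram(sessions, order=1):
    """Count transitions from the last `order` states to the next state."""
    model = defaultdict(Counter)
    for states in sessions:
        for i in range(order, len(states)):
            model[tuple(states[i - order:i])][states[i]] += 1
    return model

def predict_next(model, context):
    """Return (most probable next state, probability), or None for an unseen context."""
    counts = model.get(tuple(context))
    if not counts:
        return None
    state, freq = counts.most_common(1)[0]
    return state, freq / sum(counts.values())

bigram = train_ngram(sessions, order=1)    # first-order model
trigram = train_ngram(sessions, order=2)   # second-order model
print(predict_next(bigram, ["Reformulation"]))          # most likely state after one state
print(predict_next(trigram, ["New", "Reformulation"]))  # most likely state after two states
```

Higher-order models condition on longer histories and so cover fewer contexts, which mirrors the accuracy/coverage trade-off the study evaluates.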

Relevance: 20.00%

Abstract:

This paper reports findings from a study investigating the effect of integrating sponsored and nonsponsored search engine links into a single web listing. The premise underlying this research is that web searchers are chiefly interested in relevant results. Given the reported negative bias that web searchers have concerning sponsored links, separate listings may be a disservice to web searchers, as they might not direct searchers to relevant websites. Some web meta-search engines integrate sponsored and nonsponsored links into a single listing. Using a web search engine log of over 7 million interactions from hundreds of thousands of users of a major web meta-search engine, we analysed the click-through patterns for both sponsored and nonsponsored links. We also classified web queries as informational, navigational or transactional based on the expected type of content, and analysed the click-through patterns of each classification. The findings show that for more than 35% of queries, there are no clicks on any result. More than 80% of web queries are informational in nature, approximately 10% are transactional, and 10% are navigational. Sponsored links account for approximately 15% of all clicks. Integrating sponsored and nonsponsored links does not appear to increase the clicks on sponsored listings. We discuss how these research results could enhance future sponsored search platforms.
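A minimal sketch of the kind of aggregation behind figures like these: counting, per query class, the share of interactions that end in a sponsored or organic click. The log format and records are hypothetical, not the meta-search engine's actual schema.

```python
# Toy click-through aggregation by query class and link type.
from collections import Counter

# Each record: (query_class, clicked_link_type or None for "no click").
log = [
    ("informational", "organic"), ("informational", None),
    ("transactional", "sponsored"), ("navigational", "organic"),
    ("informational", "organic"), ("transactional", "organic"),
]

def click_shares(log):
    """Fraction of interactions per query class that click a sponsored/organic link."""
    totals, clicks = Counter(), Counter()
    for query_class, link_type in log:
        totals[query_class] += 1
        if link_type is not None:
            clicks[(query_class, link_type)] += 1
    return {qc: {lt: clicks[(qc, lt)] / totals[qc] for lt in ("sponsored", "organic")}
            for qc in totals}

for query_class, shares in click_shares(log).items():
    print(query_class, shares)
```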

Relevance: 20.00%

Abstract:

In this paper, we use time series analysis to evaluate predictive scenarios using search engine transactional logs. Our goal is to develop models for the analysis of searchers’ behaviors over time and to investigate whether time series analysis is a valid method for predicting relationships between searcher actions. Time series analysis is a method often used to understand the underlying characteristics of temporal data in order to make forecasts. In this study, we used a Web search engine transactional log and time series analysis to investigate users’ actions. We conducted our analysis in two phases. In the initial phase, we employed a basic analysis and found that 10% of searchers clicked on sponsored links. However, from 22:00 to 24:00, searchers almost exclusively clicked on organic links, with almost no clicks on sponsored links. In the second and more extensive phase, we used a one-step prediction time series analysis method along with a transfer function method. The time period rarely affects navigational and transactional queries, while rates for transactional queries vary across different periods. Our results show that the average length of a searcher session is approximately 2.9 interactions and that this average is consistent across time periods. Most importantly, our findings show that searchers who submit the shortest queries (i.e., in number of terms) click on the highest-ranked results. We discuss implications, including predictive value, and future research.
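As a toy illustration of one-step-ahead prediction on search log data, the sketch below fits a first-order autoregressive model to an hourly interaction series by least squares and forecasts the next hour. The series is invented, and the paper's transfer-function modelling is considerably richer than this.

```python
# Toy one-step-ahead forecast of an hourly search-log series using an AR(1) fit.

# Hypothetical hourly counts of sponsored-link clicks over a day and a half.
series = [52, 47, 40, 35, 30, 28, 33, 45, 60, 72, 80, 85,
          83, 78, 70, 66, 64, 61, 55, 48, 41, 20, 12, 10,
          50, 46, 41, 36, 29, 27, 34, 47, 62, 74, 79, 86]

def fit_ar1(y):
    """Least-squares fit of y[t] = a + b * y[t-1]."""
    x, z = y[:-1], y[1:]
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    b = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = mz - b * mx
    return a, b

def one_step_forecast(y):
    """Predict the next value from the fitted AR(1) model and the last observation."""
    a, b = fit_ar1(y)
    return a + b * y[-1]

print("next-hour forecast:", round(one_step_forecast(series), 1))
```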

Relevance: 20.00%

Abstract:

This paper reports preliminary results from a study modeling the interplay between multitasking, cognitive coordination, and cognitive shifts during Web search. Study participants conducted three Web searches on personal information problems. Data collection techniques included pre- and post-search questionnaires, think-aloud protocols, Web search logs, observation, and post-search interviews. Key findings include: (1) users’ Web searches included multitasking, cognitive shifting and cognitive coordination processes; (2) cognitive coordination is the hinge linking multitasking and cognitive shifting that enables Web search construction; (3) cognitive shift levels determine the process of cognitive coordination; and (4) cognitive coordination is the interplay of task, mechanism and strategy levels that underpins multitasking and task switching. An initial model depicts the interplay between multitasking, cognitive coordination, and cognitive shifts during Web search. Implications of the findings and further research are also discussed.

Relevance: 20.00%

Abstract:

In recent years greater emphasis has been placed by many Law Schools on teaching not only the substantive content of the law but also the skills needed for the practice of the law. Negotiation is one such skill. However, effective teaching of negotiation may be problematic in the context of large numbers of students studying in a variety of modes and often juggling other time commitments. This paper examines the Air Gondwana program, a blended learning environment designed to address these challenges. The program demonstrates that ICT can be used to create an authentic learning experience which engages and stimulates students.

Relevance: 20.00%

Abstract:

The artwork describes the web as a network environment and a space where people are connected; as a result, it can reshape you as an interactive participant who is able to regenerate an object into a new form through truly collaborative and cooperative interactions with others. The artwork was created based on research findings on the characteristics of the web: 1) Participatory (Slater 2002, p.536), 2) Communicational (Rheingold 1993), 3) Connected (Jordan 1999, 80), and 4) Stylising (Jordan 1999, 69). The artwork conceptualises and visualises those characteristics of the web based on principles of graphic design and visual communication.

Relevance: 20.00%

Abstract:

Web design elements are important for web designers in understanding target users, in designing effective communication, and in developing a successful web site. However, commonly recognised web design elements are so broad and varied that they are difficult to conceive of and classify, so many practitioners and design researchers approach web design elements from a graphic and visual design perspective that mainly focuses on print media design. This paper discusses web design elements in terms of online user experience, since web media differ from print media. It proposes a fundamentally new concept, called 'UEDUs: User Experience Design Units', which enables web designers to define web design elements and conceptualise user experience depending on the purpose of web site development.

Relevance: 20.00%

Abstract:

We argue that web service discovery technology should help the user navigate a complex problem space by providing suggestions for services which they may not be able to formulate themselves, because they lack the epistemic resources to do so. Free-text documents in service environments provide an untapped source of information for augmenting the epistemic state of the user and hence their ability to search effectively for services. A quantitative approach to semantic knowledge representation is adopted in the form of semantic space models computed from these free-text documents. Knowledge of the user’s agenda is promoted by associational inferences computed from the semantic space. The inferences are suggestive and aim to promote human abductive reasoning, guiding the user from fuzzy search goals towards a better understanding of the problem space surrounding the given agenda. Experimental results are discussed based on a complex and realistic planning activity.
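One common way to build such a semantic space is to reduce a term-document matrix of the free-text service descriptions with a truncated SVD and rank services by cosine similarity to a folded-in query, as sketched below. The service names and texts are hypothetical and the construction is an LSA-style stand-in; the paper's semantic space model and inference mechanism may differ.

```python
# Toy LSA-style semantic space over free-text service descriptions.
import numpy as np

services = {  # hypothetical service descriptions
    "BookFlight":  "book a flight between two cities on a chosen date",
    "HotelSearch": "find hotel rooms in a city for chosen dates",
    "CarRental":   "rent a car at an airport or city location",
}

names = list(services)
vocab = sorted({w for text in services.values() for w in text.split()})
# Term-document count matrix (terms x services).
M = np.array([[text.split().count(w) for text in services.values()] for w in vocab], float)

U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 2
doc_vecs = Vt[:k].T                       # k-dimensional coordinates per service

def suggest(query, top=2):
    """Fold a fuzzy query into the semantic space and rank services by cosine similarity."""
    q = np.array([query.split().count(w) for w in vocab], float)
    q_vec = (q @ U[:, :k]) / s[:k]        # standard LSA fold-in
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9)
    return sorted(zip(names, sims.round(3)), key=lambda p: -p[1])[:top]

# Suggestions can surface services the user did not explicitly ask for.
print(suggest("plan a trip and book a flight to a city"))
```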

Relevance: 20.00%

Abstract:

XML document clustering is essential for many document handling applications such as information storage, retrieval, integration and transformation. An XML clustering algorithm should process both the structural and the content information of XML documents in order to improve the accuracy and meaning of the clustering solution. However, the inclusion of both kinds of information in the clustering process results in a huge overhead for the underlying clustering algorithm because of the high dimensionality of the data. This paper introduces a novel approach that first determines the structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine the content similarity. The proposed method reduces the high dimensionality of input data by using only the structure-constrained content. The empirical analysis reveals that the proposed method can effectively cluster even very large XML datasets and outperform other existing methods.
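The sketch below illustrates the structure-constrained content idea on toy data: it treats frequent root-to-node paths as a stand-in for frequent subtrees, keeps only content occurring under those frequent structural units, and computes document similarities on which any clustering algorithm could then operate. The paper's frequent-subtree mining is more sophisticated than this path-based proxy.

```python
# Toy structure-constrained content features for XML document similarity.
import xml.etree.ElementTree as ET
from collections import Counter
from math import sqrt

corpus = [  # hypothetical XML documents
    "<book><title>xml data mining</title><author>smith</author></book>",
    "<book><title>xml clustering methods</title><author>jones</author></book>",
    "<cd><title>jazz classics</title><artist>davis</artist></cd>",
]

def paths_and_text(xml):
    """Yield (root-to-node path, text) for every element carrying text."""
    root = ET.fromstring(xml)
    stack = [(root, root.tag)]
    while stack:
        node, path = stack.pop()
        if node.text and node.text.strip():
            yield path, node.text.strip()
        stack.extend((child, f"{path}/{child.tag}") for child in node)

docs = [list(paths_and_text(x)) for x in corpus]

# Structural step: keep paths occurring in at least half the documents.
support = Counter(p for d in docs for p in {path for path, _ in d})
frequent = {p for p, c in support.items() if c >= len(docs) / 2}

# Content step: features are (frequent path, term) pairs only.
vectors = [Counter((p, w) for p, text in d if p in frequent for w in text.split())
           for d in docs]

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    return dot / (sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values())) + 1e-9)

# Pairwise similarities on which a clustering algorithm could then operate.
for i in range(len(docs)):
    for j in range(i + 1, len(docs)):
        print(f"sim(doc{i}, doc{j}) = {cosine(vectors[i], vectors[j]):.2f}")
```

Restricting content features to frequent structural units is what keeps the dimensionality down relative to using all structure and all content.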

Relevance: 20.00%

Abstract:

The emergent field of practice-led research is a unique research paradigm that situates creative practice as both a driver and outcome of the research process. The exegesis that accompanies the creative practice in higher research degrees remains open to experimentation and discussion around what content should be included, how it should be structured, and its orientations. This paper contributes to this discussion by reporting on a content analysis of a large, local sample of exegeses. We have observed a broad pattern in contents and structure within this sample. Besides the introduction and conclusion, it has three main parts: situating concepts (conceptual definitions and theories), practical contexts (precedents in related practices), and new creations (the creative process, the artifacts produced and their value as research). This model appears to combine earlier approaches to the exegesis, which oscillated between academic objectivity in providing a context for the practice and personal reflection or commentary upon the creative practice. We argue that this hybrid or connective model assumes both orientations and so allows the researcher to effectively frame the practice as a research contribution to a wider field while doing justice to its invested poetics.

Relevance: 20.00%

Abstract:

Social tags in Web 2.0 are becoming another important information source for profiling users’ interests and preferences to make personalized recommendations. However, the uncontrolled vocabulary causes many problems for profiling users accurately, such as ambiguity, synonymy, misspellings and low information sharing. To solve these problems, this paper proposes to use popular tags to represent the actual topics of tags, the content of items, and the topic interests of users. A novel user profiling approach is proposed that first identifies popular tags, then represents users’ original tags using the popular tags, and finally generates users’ topic interests based on the popular tags. A collaborative filtering based recommender system has been developed that builds the user profile using the proposed approach. The user profile generated using the proposed approach can represent user interests more accurately, and the information sharing among users in the profile is also increased. Consequently, the neighborhood of a user, which plays a crucial role in collaborative filtering based recommenders, can be determined much more accurately. The experimental results, based on real-world data obtained from Amazon.com, show that the proposed approach outperforms other approaches.
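A minimal sketch of the popular-tag idea: identify popular tags across users, map each user's raw tags onto them, build topic-interest profiles from the result, and rank neighbours by profile similarity for collaborative filtering. The tag data, popularity threshold and token-overlap mapping are all illustrative assumptions, not the paper's method.

```python
# Toy popular-tag user profiling and neighbourhood ranking.
from collections import Counter
from math import sqrt

user_tags = {  # hypothetical raw (uncontrolled) tags per user
    "u1": ["sci-fi", "scifi", "space opera", "robots"],
    "u2": ["science fiction", "robots", "ai"],
    "u3": ["cooking", "recipes", "baking"],
}

# Step 1: popular tags = tags used by at least two users (toy threshold).
counts = Counter(t for tags in user_tags.values() for t in tags)
popular = {t for t, c in counts.items() if c >= 2}

# Step 2: represent raw tags by popular tags (naive token-overlap mapping).
def to_popular(tag):
    return [p for p in popular if set(tag.split()) & set(p.split())]

# Step 3: topic-interest profile = frequency of popular tags per user.
profiles = {u: Counter(p for t in tags for p in to_popular(t))
            for u, tags in user_tags.items()}

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na, nb = sqrt(sum(v * v for v in a.values())), sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Neighbourhood of u1: other users ranked by profile similarity.
print(sorted(((cosine(profiles["u1"], profiles[u]), u) for u in profiles if u != "u1"),
             reverse=True))
```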

Relevance: 20.00%

Abstract:

Given the size and state of the Internet today, a high-quality approach to organizing this mass of information is of great importance. Clustering web pages into groups of similar documents is one such approach, but it relies heavily on good feature extraction and document representation, as well as a good clustering algorithm. Because the changing nature of the Internet results in a dynamic dataset, an incremental approach is preferred. In this work we propose an enhanced incremental clustering approach to develop a better algorithm for organizing the information available on the Internet in an incremental fashion. Experiments show that the enhanced algorithm outperforms the original histogram-based algorithm by up to 7.5%.
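For illustration, the sketch below shows a generic leader-style incremental clusterer: each arriving page joins the most similar existing cluster if the similarity exceeds a threshold, otherwise it seeds a new cluster, with centroids updated on the fly. This is not the histogram-based algorithm the paper enhances; it only illustrates the incremental setting.

```python
# Toy incremental (leader-style) document clustering over a stream of pages.
from collections import Counter
from math import sqrt

THRESHOLD = 0.3

def vectorise(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na, nb = sqrt(sum(v * v for v in a.values())), sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

clusters = []  # each cluster: {"centroid": Counter, "docs": [doc_id]}

def add_document(doc_id, text):
    """Assign the new page to the nearest cluster or start a new one."""
    vec = vectorise(text)
    best, best_sim = None, 0.0
    for cluster in clusters:
        sim = cosine(vec, cluster["centroid"])
        if sim > best_sim:
            best, best_sim = cluster, sim
    if best is not None and best_sim >= THRESHOLD:
        best["docs"].append(doc_id)
        best["centroid"].update(vec)       # running term sum acts as the centroid
    else:
        clusters.append({"centroid": Counter(vec), "docs": [doc_id]})

stream = [
    ("p1", "python web scraping tutorial"),
    ("p2", "web scraping with python requests"),
    ("p3", "chocolate cake recipe"),
    ("p4", "easy chocolate brownie recipe"),
]
for doc_id, text in stream:
    add_document(doc_id, text)
print([c["docs"] for c in clusters])
```

Because each page is processed once and clusters are updated in place, the approach naturally handles a dynamic, growing dataset.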