798 results for applicazione web, semantic web, semantic publishing, angularJS, user experience, usabilità
Abstract:
This thesis addresses a challenging issue: enhancing users' experience of massive and overloaded web information. The novel pattern-based topic model proposed in this thesis generates high-quality multi-topic user interest models by incorporating statistical topic modelling and pattern mining. We have successfully applied the pattern-based topic model to both information filtering and information retrieval. The success of the proposed model in finding the information most relevant to users comes mainly from its precise semantic representation of documents and its accurate classification of topics at both the document and collection levels.
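The abstract gives no implementation detail, but the combination it describes can be sketched. Below is a minimal, illustrative sketch (not the thesis's actual algorithm): a standard LDA topic model is trained with gensim, and frequent word patterns are then mined from the documents assigned to each topic. All data and function names are made up for the example.

```python
# Minimal sketch: combine LDA topics with frequent-pattern mining.
# Illustrative only -- not the thesis's actual algorithm.
from itertools import combinations
from collections import Counter
from gensim import corpora, models

docs = [["web", "search", "ranking"], ["topic", "model", "lda"],
        ["web", "search", "query"], ["topic", "model", "pattern"]]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)

def frequent_pairs(documents, min_support=2):
    """Count word pairs that co-occur in at least `min_support` documents."""
    counts = Counter()
    for doc in documents:
        for pair in combinations(sorted(set(doc)), 2):
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

# Represent each topic by the frequent patterns of its dominant documents.
for topic_id in range(2):
    assigned = [doc for doc, bow in zip(docs, corpus)
                if max(lda.get_document_topics(bow),
                       key=lambda t: t[1])[0] == topic_id]
    print(topic_id, frequent_pairs(assigned))
```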
Abstract:
Purpose – Following the perspective of frustration theory, customer frustration incidents lead to frustration behavior such as protest (negative word‐of‐mouth). On the internet, customers can express their emotions verbally and non‐verbally in numerous web‐based review platforms. The purpose of this study is to investigate online dysfunctional customer behavior, in particular negative “word‐of‐web” (WOW) in online feedback forums, among customers who participate in frequent‐flier programs in the airline industry.

Design/methodology/approach – The study employs a variation of the critical incident technique (CIT) referred to as the critical internet feedback technique (CIFT). Qualitative data from customer reviews of 13 different frequent‐flier programs posted on the internet were collected and analyzed with regard to frustration incidents, verbal and non‐verbal emotional effects, and types of dysfunctional word‐of‐web customer behavior. The sample includes 141 negative customer reviews based on non‐recommendations and low program ratings.

Findings – Problems with loyalty programs evoke negative emotions that are expressed in a spectrum of verbal and non‐verbal negative electronic word‐of‐mouth. Online dysfunctional behavior can vary widely, from low ratings and non‐recommendations, to voicing switching intentions, to even stronger forms such as manipulation of others and revenge intentions.

Research limitations/implications – Results have to be viewed carefully due to methodological challenges with regard to the measurement of emotions, in particular the accuracy of self‐report techniques and the quality of online data. Generalization of the results is limited because the study utilizes data from only one industry. Further research is needed with regard to the exact differentiation of frustration from related constructs. In addition, large‐scale quantitative studies are necessary to specify and test the relationships between frustration incidents and subsequent dysfunctional customer behavior expressed in negative word‐of‐web.

Practical implications – The study yields important implications for the monitoring of the perceived quality of loyalty programs. Management can obtain valuable information about program‐related and/or relationship‐related frustration incidents that lead to online dysfunctional customer behavior. A proactive response strategy should be developed to deal with severe cases, such as sabotage plans.

Originality/value – This study contributes to the limited research on online dysfunctional customer behavior and on frustration incidents in loyalty programs. The article also presents a theoretical “customer frustration‐defection” framework that describes different levels of online dysfunctional behavior in relation to the level of frustration customers have experienced. The framework extends the existing perspective of the “customer satisfaction‐loyalty” framework developed by Heskett et al.
Abstract:
Background – Psychotic-like experiences (PLEs) are subclinical delusional ideas and perceptual disturbances that have been associated with a range of adverse mental health outcomes. This study reports a qualitative and quantitative analysis of the acceptability, usability and short-term outcomes of Get Real, a web program for PLEs in young people.

Methods – Participants were twelve respondents to an online survey who reported at least one PLE in the previous 3 months and were currently distressed. Ratings of the program were collected after participants trialled it for a month. Individual semi-structured interviews then elicited qualitative feedback, which was analyzed using Consensual Qualitative Research (CQR) methodology. PLEs and distress were reassessed at 3 months post-baseline.

Results – User ratings supported the program's acceptability, usability and perceived utility. Significant reductions in the number, frequency and severity of PLE-related distress were found at the 3-month follow-up. The CQR analysis identified four qualitative domains: initial and current understandings of PLEs, responses to the program, and the context of its use. Initial understanding involved emotional reactions, avoidance or minimization, limited coping skills and non-psychotic attributions. After using the program, participants saw PLEs as normal and common, had greater self-awareness and understanding of stress, and reported an increased capacity to cope with and accept the experiences. Positive responses to the program focused on its normalization of PLEs, the usefulness of its strategies, its self-monitoring of mood, and information putting PLEs into perspective. Some respondents wanted more specific and individualized information, thought the program would be more useful for other audiences, or doubted its effectiveness. The program was mostly used in low-stress situations.

Conclusions – The current study provides initial support for the acceptability, utility and positive short-term outcomes of Get Real. The program now requires efficacy testing in randomized controlled trials.
Abstract:
We present a novel framework and algorithms for the analysis of Web service interfaces to improve the efficiency of application integration in wide-spanning business networks. Our approach addresses the notorious issue of large and overloaded operational signatures, which are becoming increasingly prevalent on the Internet as services are opened up for third-party aggregation. Extending existing techniques that refactor service interfaces based on derived artefacts of applications, namely business entities, we propose heuristics for deriving relations between business entities and, in turn, the permissible orders in which operations are invoked. As a result, service operations are refactored around business-entity CRUD operations, from which behavioural protocols are generated, supporting fine-grained and flexible service discovery, composition and interaction. A prototypical implementation and analysis of web services, including those of commercial logistics systems (FedEx), are used to validate the algorithms and open up further insights into service interface synthesis.
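To make the refactoring idea concrete, here is a minimal sketch of one plausible heuristic of the kind the abstract describes: classifying operation names onto business-entity CRUD actions by naming convention. It is illustrative only; the verb lists and operation names are assumptions, not the paper's actual rules.

```python
# Minimal sketch: map service operations onto business-entity CRUD
# actions by naming convention. Verb lists and names are illustrative.
import re

CRUD_VERBS = {
    "create": ("create", "add", "register", "submit"),
    "read":   ("get", "find", "track", "list"),
    "update": ("update", "modify", "reschedule"),
    "delete": ("delete", "cancel", "remove"),
}

def classify(operation: str):
    """Map an operation name like 'createShipment' to (crud, entity)."""
    words = re.findall(r"[A-Za-z][a-z]*", operation)
    verb = words[0].lower()
    entity = "".join(words[1:]) or None
    for crud, verbs in CRUD_VERBS.items():
        if verb in verbs:
            return crud, entity
    return None, entity

for op in ["createShipment", "trackShipment", "cancelPickup"]:
    print(op, "->", classify(op))
```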
Abstract:
In this paper, we first describe a framework to model the sponsored search auction on the web as a mechanism design problem. Using this framework, we describe two well-known mechanisms for sponsored search auctions: Generalized Second Price (GSP) and Vickrey-Clarke-Groves (VCG). We then derive a new mechanism for sponsored search auctions which we call the optimal (OPT) mechanism. The OPT mechanism maximizes the search engine's expected revenue while achieving Bayesian incentive compatibility and individual rationality of the advertisers. We then undertake a detailed comparative study of the mechanisms GSP, VCG, and OPT. We compute and compare the expected revenue earned by the search engine under the three mechanisms when the advertisers are symmetric and some special conditions are satisfied. We also compare the three mechanisms in terms of incentive compatibility, individual rationality, and computational complexity.

Note to Practitioners – The advertiser-supported web site is one of the successful business models in the emerging web landscape. When an Internet user enters a keyword (i.e., a search phrase) into a search engine, the user gets back a page with results containing the links most relevant to the query and also sponsored links (also called paid advertisement links). When a sponsored link is clicked, the user is directed to the corresponding advertiser's web page. The advertiser pays the search engine in some appropriate manner for sending the user to its web page. Against every search performed by any user on any keyword, the search engine faces the problem of matching a set of advertisers to the sponsored slots. In addition, the search engine also needs to decide on a price to be charged to each advertiser. Due to increasing demand for Internet advertising space, most search engines currently use auction mechanisms for this purpose. These are called sponsored search auctions. A significant percentage of the revenue of Internet giants such as Google, Yahoo!, MSN, etc., comes from sponsored search auctions. In this paper, we study two auction mechanisms, GSP and VCG, which are quite popular in the sponsored search auction context, and pursue the objective of designing a mechanism that is superior to these two mechanisms. In particular, we propose a new mechanism which we call the OPT mechanism. This mechanism maximizes the search engine's expected revenue subject to achieving Bayesian incentive compatibility and individual rationality. Bayesian incentive compatibility guarantees that it is optimal for each advertiser to bid his/her true value provided that all other agents also bid their respective true values. Individual rationality ensures that the agents participate voluntarily in the auction since they are assured of gaining a non-negative payoff by doing so.
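As a concrete illustration of how GSP and VCG differ, here is a small worked example using the standard textbook payment rules (it says nothing about the paper's OPT mechanism; bids and click rates are made up). Bidders are ranked by bid; GSP charges each winner the next-highest bid per click, while VCG charges each winner the click value its presence takes away from the bidders below it.

```python
# Worked example: per-click GSP and VCG payments for a slot auction.
# Standard textbook formulas; the bids and click rates are made up.

bids = [10.0, 6.0, 4.0, 2.0]   # per-click bids, sorted descending
ctr  = [0.30, 0.20, 0.10]      # click-through rate per slot, descending

def gsp_payments(bids, ctr):
    """Slot i's winner pays the (i+1)-th highest bid per click."""
    return [bids[i + 1] for i in range(len(ctr))]

def vcg_payments(bids, ctr):
    """Slot i's winner pays the total click value lost by the bidders
    below it, divided by its own clicks to get a per-click price."""
    rates = ctr + [0.0]
    pay = []
    for i in range(len(ctr)):
        externality = sum(bids[k] * (rates[k - 1] - rates[k])
                          for k in range(i + 1, len(ctr) + 1))
        pay.append(externality / ctr[i])
    return pay

print("GSP per click:", gsp_payments(bids, ctr))  # [6.0, 4.0, 2.0]
print("VCG per click:", vcg_payments(bids, ctr))  # [4.0, 3.0, 2.0]
```

Note that the VCG price never exceeds the GSP price for the same slot, which is one reason search engines earn more under GSP when advertisers bid truthfully.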
Abstract:
Distributed collaborative computing services have taken over from centralized computing platforms, allowing the development of distributed collaborative user applications. These applications enable people and computers to work together more productively. The Multi-Agent System (MAS) has emerged as a distributed collaborative environment that allows a number of agents to cooperate and interact with each other in a complex environment. We want to place our agents in problems whose solutions require the collation and fusion of information, knowledge or data from distributed and autonomous information sources. In this paper we present the design and implementation of an agent-based conference planner application that uses the collaborative effort of agents functioning continuously and autonomously in a particular environment. The application also enables the collaborative use of services deployed across wide geographical areas in different technologies, i.e. software agents, Grid computing and Web services. The premise of the application is that it allows autonomous agents, interacting with web and grid services, to plan a conference as a proxy for their owners (humans). © 2005 IEEE.
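The proxy-agent idea can be sketched in a few lines. The following is a minimal, illustrative sketch only; the class names and the in-memory stand-in for remote calendar services are assumptions, not the paper's design.

```python
# Minimal sketch of the proxy-agent idea: an agent queries several
# (hypothetical) availability services on its owner's behalf and picks
# a conference day all participants can attend. Names are illustrative.
from dataclasses import dataclass

@dataclass
class CalendarService:
    """Stand-in for a remote web/grid service exposing availability."""
    owner: str
    free_days: set

    def is_free(self, day: str) -> bool:
        return day in self.free_days

class PlannerAgent:
    def __init__(self, services):
        self.services = services

    def plan(self, candidate_days):
        """Return the first day every participant's service reports free."""
        for day in candidate_days:
            if all(s.is_free(day) for s in self.services):
                return day
        return None

services = [CalendarService("alice", {"mon", "wed"}),
            CalendarService("bob", {"wed", "fri"})]
print(PlannerAgent(services).plan(["mon", "wed", "fri"]))  # -> "wed"
```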
Abstract:
LiteSteel beam (LSB) is a hollow flange channel section made from cold-formed steel using a patented manufacturing process involving simultaneous cold-forming and dual electric resistance welding. LSBs are currently used as floor joists and bearers in buildings. However, there are no appropriate design standards available due to the section's unique hollow flange geometry, residual stress characteristics and initial geometric imperfections arising from the manufacturing process. Recent research studies have focused on investigating the structural behaviour of LSBs under pure bending, predominant shear and combined actions; however, the web crippling behaviour and strengths of LSBs still need to be examined. Therefore, an experimental study was undertaken to investigate the web crippling behaviour and strengths of LSBs under EOF (End One Flange) and IOF (Interior One Flange) load cases. A total of 23 web crippling tests were performed and the results were compared with the current AS/NZS 4600 and AISI S100 design standards, which showed that the cold-formed steel design rules predict the web crippling capacity of LSB sections very conservatively under EOF and IOF load cases. Therefore, suitably improved design equations were proposed to determine the web crippling capacity of LSBs based on the experimental results. In addition, new design equations were also developed in the Direct Strength Method format. This paper presents the details and results of this experimental study on the web crippling behaviour and strengths of LiteSteel beams under EOF and IOF load cases.
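For context, the cold-formed steel standards named above express web crippling capacity through a unified equation of the following general form (shown here for orientation only; the paper's calibrated coefficients for LSB sections are not reproduced):

```latex
% Unified web crippling equation of AS/NZS 4600 / AISI S100 (general form).
% C, C_R, C_N, C_h are empirical coefficients calibrated per section type.
R_b = C\, t^2 f_y \sin\theta
      \left(1 - C_R\sqrt{\tfrac{r_i}{t}}\right)
      \left(1 + C_N\sqrt{\tfrac{\ell_b}{t}}\right)
      \left(1 - C_h\sqrt{\tfrac{d_1}{t}}\right)
```

Here t is the web thickness, f_y the yield stress, θ the web inclination, r_i the inside bend radius, ℓ_b the bearing length and d_1 the flat web depth; improved coefficient sets for LSBs are the kind of outcome the study proposes.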
Abstract:
The identification of sequence (amino acid or nucleotide) motifs occurring in a particular order in biological sequences has proved to be of interest. This paper describes a computing server, SSMBS, which can locate and display the occurrences of user-defined biologically important sequence motifs (a maximum of five) present in a specific order in protein and nucleotide sequences. While the server can efficiently locate motifs specified using regular expressions, it can also find occurrences of long and complex motifs. The computation is carried out by an algorithm developed using the concepts of quantifiers in regular expressions. The web server is available to users around the clock at http://dicsoft1.physics.iisc.ernet.in/ssmbs/.
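The core idea of ordered motif search with regular-expression quantifiers can be sketched as follows. This is an illustration of the general technique, not SSMBS's own algorithm; the toy sequence and motifs are made up.

```python
# Minimal sketch of ordered motif search (not SSMBS's own algorithm):
# join the user's motifs into one regular expression with non-greedy
# gaps (.*?) so they must occur in the given order.
import re

def find_ordered_motifs(sequence, motifs):
    """Return one occurrence of `motifs` (regex patterns) appearing in
    the given order, with arbitrary gaps between them."""
    pattern = ".*?".join(f"({m})" for m in motifs)
    match = re.search(pattern, sequence)
    return match.groups() if match else None

# Example: two protein motifs, required in order, in a toy sequence.
seq = "MKVLAAGHTKCPDGSWLNDEAVKRGHE"
print(find_ordered_motifs(seq, [r"G.{2}K", r"N[DE]E"]))  # ('GHTK', 'NDE')
```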
Abstract:
As the virtual world grows more complex, finding a standard way of storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so that data can maintain its identity while being passed around. This way there will be only one copy of a user's family photo album, while the user can use multiple tools to show or manipulate it. Copies of the user's data could be stored on some of his family members' computers, on some of his own computers, and also at some online services he uses. When all actors operate on one replicated copy of the data, the system automatically avoids a single point of failure: the data will not disappear because one computer breaks or one service provider goes out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request.

In our research we tried to find a model that would make data manageable to users and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing an anonymous web, and preventing censorship in file sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship and monitoring. All of the systems use cryptography to secure the names used for content and to protect the data from outsiders.

Based on the knowledge gained, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items are protected with cryptography against forgery but not encrypted, as the focus has been on disseminating data directly among family and friends rather than letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by exposing its contents through an integrated HTTP server. The REST-like HTTP API supports the development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we do not expect our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.
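To illustrate the cryptographically verifiable references the thesis calls for, here is a minimal sketch of content addressing, one common way to achieve that property. It is an assumption for illustration, not Peerscape's actual scheme.

```python
# Minimal sketch of cryptographically verifiable data references via
# content addressing: the reference is a hash of the bytes, so data
# fetched from any replica can be checked against its reference.
# Not Peerscape's actual scheme; names are illustrative.
import hashlib

def make_reference(data: bytes) -> str:
    """Derive a self-verifying reference (SHA-256 hex digest)."""
    return hashlib.sha256(data).hexdigest()

def verify(reference: str, data: bytes) -> bool:
    """Check that bytes fetched from any replica match the reference."""
    return make_reference(data) == reference

album_page = b"photo-album, page 1"
ref = make_reference(album_page)
print(verify(ref, album_page))        # True
print(verify(ref, b"tampered copy"))  # False
```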
Abstract:
In this paper we propose a new method of data handling for web servers, which we call Network Aware Buffering and Caching (NABC for short). NABC reduces data copies in a web server's data-sending path by doing three things: (1) laying out the data in main memory in a way that allows protocol processing without data copies, (2) keeping a unified cache of data in the kernel and ensuring safe access to it by various processes and the kernel, and (3) passing only the necessary metadata between processes so that the bulk data handling time spent during IPC can be reduced. We realize NABC by implementing a set of system calls and a user library. The end product of the implementation is a set of APIs specifically designed for use by web servers. We port an in-house web server called SWEET to the NABC APIs and evaluate performance using a range of workloads, both simulated and real. The results show an impressive gain of 12% to 21% in throughput for static file serving, and a 1.6 to 4 times gain in throughput for lightweight dynamic content serving, for a server using the NABC APIs over one using the UNIX APIs.
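NABC's system calls are custom, but the copy-avoidance idea it pursues can be illustrated with the standard sendfile mechanism available to ordinary programs: the file's bytes travel from the page cache to the socket inside the kernel, never passing through a user-space buffer. The sketch below uses only standard-library calls and is an analogy for NABC's goal, not its API.

```python
# Illustration of kernel-side copy avoidance with standard os.sendfile
# (an analogy for NABC's goal, not NABC's own API): file bytes go from
# the page cache to the socket without a user-space buffer.
import os
import socket

def serve_file_once(port: int, path: str):
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, open(path, "rb") as f:
            size = os.fstat(f.fileno()).st_size
            sent = 0
            while sent < size:  # sendfile may transfer fewer bytes
                sent += os.sendfile(conn.fileno(), f.fileno(), sent,
                                    size - sent)

# serve_file_once(8080, "index.html")  # then e.g.: curl localhost:8080
```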
Abstract:
CDS/ISIS is an advanced non-numerical information storage and retrieval software package developed by UNESCO. With the emergence of WWW technology, most information activities are becoming Web-centric. Libraries and information providers are taking advantage of these Internet developments to provide access to their resources and information on the Web. A number of tools are now available for publishing CDS/ISIS databases on the Internet. One such tool is the WWWISIS Web gateway software, developed by BIREME, Brazil. This paper illustrates porting sample records from a bibliographic database into CDS/ISIS, and then publishing this database on the Internet using WWWISIS.
Abstract:
PDB Goodies is a web-based graphical user interface (GUI) for manipulating Protein Data Bank files containing the three-dimensional atomic coordinates of protein structures. The program also allows users to save the manipulated three-dimensional atomic coordinate file on their local client system; such edited coordinate files are used in various stages of structure elucidation and analysis. The software works with all the three-dimensional protein structures available in the Protein Data Bank, which presently holds approximately 18,000 structures. In addition, the program accepts a three-dimensional atomic coordinate file (in Protein Data Bank format) uploaded from the client machine. The program is written using CGI/PERL scripts and is platform independent. PDB Goodies can be accessed over the World Wide Web at http://144.16.71.11/pdbgoodies/.
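As a flavour of the kind of manipulation such a tool performs, here is a minimal sketch (not PDB Goodies itself, which is CGI/Perl) that extracts one chain's residue range from a PDB-format file using the format's fixed columns; the file name is a placeholder.

```python
# Minimal sketch of a typical PDB manipulation (not PDB Goodies itself):
# keep only the ATOM records of one chain within a residue range,
# using the fixed-column layout of the PDB format.
def extract_fragment(lines, chain="A", start=1, end=50):
    out = []
    for line in lines:
        if line.startswith("ATOM"):
            chain_id = line[21]          # column 22: chain identifier
            resseq = int(line[22:26])    # columns 23-26: residue number
            if chain_id == chain and start <= resseq <= end:
                out.append(line)
    return out

with open("input.pdb") as f:         # placeholder input file
    fragment = extract_fragment(f.read().splitlines(), chain="A",
                                start=10, end=25)
print("\n".join(fragment))
```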