982 results for user behavior


Relevance:

100.00%

Publisher:

Abstract:

Because of the conservation and use issues they share with archives, special collections tend to be the most heavily digitized areas of libraries. The Nineteenth Century Schoolbooks collection comprises rarely held nineteenth-century schoolbooks painstakingly assembled over a lifetime of work by Prof. John A. Nietz and donated to the Hillman Library at the University of Pittsburgh in 1958; the original 9,000 volumes have since grown to about 15,000. About 140 of these texts have been fully digitized and are showcased on a publicly accessible website maintained by the University of Pittsburgh's library, along with a searchable bibliography of the entire collection, which has expanded awareness of the collection and extended its user base beyond the academic community. The URL for the website is http://digital.library.pitt.edu/nietz/. The collection is a rich resource for researchers studying the intellectual, educational, and textbook publishing history of the United States. In this study, we examined several existing records collected by the Digital Research Library at the University of Pittsburgh in order to determine the identity and searching behaviors of the users of this collection. The records examined include: 1) the results of a three-month user survey, 2) user access statistics, including search queries, for a one-year period beginning a year after the digitized collection became publicly available in 2001, and 3) e-mail received by the website over the four years from 2000 to 2004. The results demonstrate the differences in online retrieval strategies used by academic researchers and historians, archivists, avocational users, and the general public, and the importance of facilitating the discovery of digitized special collections through electronic finding aids and an interactive interface with detailed metadata.

Relevance:

100.00%

Publisher:

Abstract:

Companies have looked for many new ways to communicate with their customers, and Facebook has proven to be an efficient communication channel between consumers and businesses. This study aims to understand differences in the complaint messages sent to companies through an experiment that measured the emotional tone and the informality of each message received via the company's website and its Facebook page. As expected, people are more informal on Facebook. Contrary to our intuition, however, participants tended to display more emotion on the company website. Social norms theory and impression management help explain these findings.

Relevance:

100.00%

Publisher:

Abstract:

We present a complex neural network model of user behavior in distributed systems. The model reflects both dynamical and statistical features of user behavior and consists of three components: an on-line model, an off-line model, and a change detection module. The on-line model reflects dynamical features by predicting user actions on the basis of previous ones. The off-line model is based on the analysis of statistical parameters of user behavior. In both cases, neural networks are used to reveal uncharacteristic user activity. The change detection module is intended for trend analysis in user behavior. The efficiency of the complex model is verified on real data from users of the Space Research Institute of NASU-NSAU.
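
As a rough illustration of the three-component structure described above, the sketch below pairs an on-line next-action predictor with an off-line statistical profile and a drift detector. The abstract specifies neural networks for both models; here simple transition counts and frequency profiles stand in for them, and all class names and thresholds are illustrative assumptions rather than details from the paper.

```python
from collections import defaultdict, deque

class OnlineModel:
    """Stand-in for the on-line predictor: a first-order transition table
    flags an action as uncharacteristic when its conditional probability,
    given the previous action, falls below a threshold."""
    def __init__(self, threshold=0.05):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.threshold = threshold

    def update(self, prev_action, action):
        self.counts[prev_action][action] += 1

    def is_anomalous(self, prev_action, action):
        total = sum(self.counts[prev_action].values())
        if total == 0:
            return False  # nothing learned for this context yet
        return self.counts[prev_action][action] / total < self.threshold

class OfflineModel:
    """Stand-in for the off-line model: compares a session's action
    frequencies against the user's stored profile."""
    def __init__(self, max_distance=0.5):
        self.profile = defaultdict(float)
        self.max_distance = max_distance

    def fit(self, sessions):
        n = 0
        for session in sessions:
            for action in session:
                self.profile[action] += 1
                n += 1
        for action in self.profile:
            self.profile[action] /= n

    def is_anomalous(self, session):
        freq = defaultdict(float)
        for action in session:
            freq[action] += 1 / len(session)
        keys = set(freq) | set(self.profile)
        # L1 distance between the session and profile distributions
        return sum(abs(freq[k] - self.profile[k]) for k in keys) > self.max_distance

class ChangeDetector:
    """Tracks a sliding window of anomaly flags to spot gradual drift
    (trends) in user behavior."""
    def __init__(self, window=50, drift_ratio=0.3):
        self.flags = deque(maxlen=window)
        self.drift_ratio = drift_ratio

    def observe(self, anomalous):
        self.flags.append(bool(anomalous))

    def drifting(self):
        if not self.flags:
            return False
        return sum(self.flags) / len(self.flags) > self.drift_ratio
```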

Relevance:

80.00%

Publisher:

Abstract:

An agent-based approach is proposed for building an intelligent intrusion detection system. The system detects known types of attacks as well as anomalies in user activity and computer system behavior. It includes different types of intelligent agents, the most important being a user agent based on a neural network model of user behavior. The proposed approach is verified by experiments on the real Intranet of the Institute of Physics and Technologies of the National Technical University of Ukraine "Kiev Polytechnic Institute".

Relevance:

80.00%

Publisher:

Abstract:

The ever-increasing number and severity of cybersecurity breaches make it vital to understand the factors that make organizations vulnerable. Since humans are considered the weakest link in the cybersecurity chain of an organization, this study evaluates users' individual differences (demographic factors, risk-taking preferences, decision-making styles and personality traits) to understand online security behavior. This thesis studies four different yet tightly related online security behaviors that influence organizational cybersecurity: device securement, password generation, proactive awareness and updating. A survey (N=369) of students, faculty and staff at a large mid-Atlantic U.S. public university identifies individual characteristics that relate to online security behavior and characterizes the higher-risk individuals who pose threats to the university's cybersecurity. Based on these findings and insights from interviews with phishing victims, the study concludes with recommendations to help similar organizations increase end-user cybersecurity compliance and mitigate the risks caused by humans in the organizational cybersecurity chain.

Relevance:

70.00%

Publisher:

Abstract:

Personalised social matching systems can be seen as recommender systems that recommend people to other people in a social network. However, with the rapid growth in the number of users of social networks and in the information a social matching system requires about them, conventional recommender system techniques have become insufficient for matching users in social networks. This paper presents a hybrid social matching system that takes advantage of both collaborative and content-based concepts of recommendation. Clustering is used to reduce the number of users the matching system needs to consider and to overcome other problems from which social matching systems suffer, such as the cold-start problem caused by the absence of implicit information about a new user. The proposed system has been evaluated on a dataset obtained from an online dating website. Empirical analysis shows that the accuracy of the matching process increases when both user information (explicit data) and user behavior (implicit data) are used.
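
A minimal sketch of how such a hybrid matcher might combine the two signals, assuming each user is represented by an explicit profile vector and an implicit behavior vector: clustering on explicit profiles narrows the candidate pool (and gives a new user a pool even without behavioral data), and candidates are then ranked by a weighted mix of the two similarities. The function names, the use of k-means, and the weighting parameter alpha are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def hybrid_match(profiles, behaviors, target_idx, n_clusters=10,
                 alpha=0.5, top_k=5):
    """Recommend candidate users for profiles[target_idx].

    profiles  : (n_users, d1) explicit attributes (content-based signal)
    behaviors : (n_users, d2) implicit interaction features (collaborative signal)
    alpha     : weight given to explicit similarity versus implicit similarity
    """
    # 1. Cluster on explicit profiles, so even a new user with little
    #    implicit data gets a candidate pool (eases the cold-start problem).
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(profiles)
    labels = km.labels_
    candidates = np.where(labels == labels[target_idx])[0]
    candidates = candidates[candidates != target_idx]

    def cosine(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return 0.0 if denom == 0 else float(a @ b) / denom

    # 2. Score candidates with a weighted mix of explicit and implicit similarity.
    scores = [
        (c, alpha * cosine(profiles[target_idx], profiles[c])
            + (1 - alpha) * cosine(behaviors[target_idx], behaviors[c]))
        for c in candidates
    ]
    scores.sort(key=lambda x: x[1], reverse=True)
    return scores[:top_k]
```

Weighting towards the explicit profile (a larger alpha) is one way to keep recommendations usable for new users who have generated little implicit data.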

Relevance:

70.00%

Publisher:

Abstract:

The article focuses on how the information seeker makes decisions about relevance. It employs a novel decision theory based on quantum probabilities. This direction derives from mounting research within cognitive science showing that decision theories based on quantum probabilities are superior to standard probability models in capturing human judgements [2, 1]. By quantum probabilities, we mean that the decision event space is modelled as a vector space rather than the usual Boolean algebra of sets. In this way, incompatible perspectives around a decision can be modelled, leading to an interference term which modifies the law of total probability. The interference term is crucial in modifying the probability judgements made by current probabilistic systems so that they align better with human judgement. The goal of this article is thus to model the information seeker as a decision maker. For this purpose, signal detection models are sketched which are in principle applicable in a wide variety of information-seeking scenarios.
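
For reference, the interference term mentioned above is commonly written as follows in the quantum-cognition literature on which the article draws; the notation (a relevance event R, an intermediate judgement B, and a phase angle theta) is a generic illustration rather than the article's own derivation.

```latex
% Classical law of total probability for a relevance decision R,
% resolved through an intermediate judgement B:
\[
  P(R) = P(B)\,P(R \mid B) + P(\neg B)\,P(R \mid \neg B)
\]
% Quantum-probability version: events are subspaces of a vector space,
% and incompatible perspectives contribute an interference term.
\[
  P_q(R) = P(B)\,P(R \mid B) + P(\neg B)\,P(R \mid \neg B)
           + 2\sqrt{P(B)\,P(R \mid B)\,P(\neg B)\,P(R \mid \neg B)}\,\cos\theta
\]
% \theta is the phase angle between the two perspectives; \theta = \pi/2
% recovers the classical law, while other values raise or lower P_q(R)
% relative to it.
```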

Relevance:

70.00%

Publisher:

Abstract:

Rowley, J. & Urquhart, C. (2007). Understanding student information behavior in relation to electronic information services: lessons from longitudinal monitoring and evaluation, Part 1. Journal of the American Society for Information Science and Technology, 58(8), 1162-1174. Sponsorship: JISC

Relevance:

70.00%

Publisher:

Abstract:

Urquhart, C. & Rowley, J. (2007). Understanding student information behavior in relation to electronic information services: lessons from longitudinal monitoring and evaluation, Part 2. Journal of the American Society for Information Science and Technology, 58(8), 1188-1197. Sponsorship: JISC

Relevance:

70.00%

Publisher:

Abstract:

The central objective of research in Information Retrieval (IR) is to discover new techniques to retrieve relevant information in order to satisfy an Information Need. The Information Need is satisfied when relevant information can be provided to the user. In IR, relevance is a fundamental concept that has changed over time, from popular to personal: what was considered relevant before was information for the whole population, whereas what is considered relevant now is specific information for each user. Hence there is a need to connect the behavior of the system to the condition of a particular person and his or her social context; from this need the interdisciplinary field of Human-Centered Computing was born. For the modern search engine, the information extracted for the individual user is crucial. According to Personalized Search (PS), two different techniques are necessary to personalize a search: contextualization (the interconnected conditions that occur in an activity) and individualization (the characteristics that distinguish an individual). This shift of focus to the individual's need undermines the rigid linearity of the classical model, which has been overtaken by the "berry picking" model: search terms change in response to the informational feedback received during the search activity, introducing the concept of evolving search terms. The development of Information Foraging theory, which observed the correlations between animal foraging and human information foraging, also contributed to this transformation through attempts to optimize the cost-benefit ratio. This thesis arose from the need to satisfy human individuality when searching for information, and it develops a synergistic collaboration between the frontiers of technological innovation and recent advances in IR. The search method developed exploits what is relevant for the user by radically changing the way in which an Information Need is expressed: it is now expressed through the generation of the query together with its own context. The method was conceived to improve the quality of search by rewriting the query based on contexts automatically generated from a local knowledge base. Furthermore, the aim of improving any existing IR system led to developing the method as a middleware of interaction between the user and the IR system, so the system has just two possible actions: rewriting the query and reordering the results. Equivalent actions are described in PS, which generally exploits information derived from the analysis of user behavior, whereas the proposed approach exploits knowledge provided explicitly by the user. The thesis goes further to propose a novel assessment procedure, in line with the Cranfield paradigm, for evaluating this type of IR system. The results achieved are interesting considering both the effectiveness obtained and the innovative approach undertaken, together with the several applications inspired by the use of a local knowledge base.
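
As a rough illustration of the middleware's two actions, the sketch below rewrites a query with context terms pulled from a local knowledge base and reorders the results returned by an unchanged backend IR system. The knowledge-base format (a simple term-to-context-terms mapping), the function names, and the counting-based reranking score are hypothetical simplifications, not the mechanisms developed in the thesis.

```python
def context_terms(query, knowledge_base):
    """Collect the context terms the local knowledge base associates with
    the query terms. `knowledge_base` is assumed to be a dict mapping a
    term to a list of related context terms (a hypothetical format)."""
    terms = []
    for term in query.split():
        terms.extend(knowledge_base.get(term, []))
    return list(dict.fromkeys(terms))  # de-duplicate, keep order

def rewrite_query(query, context):
    """Action 1: expand the query with its context before it reaches the IR system."""
    return (query + " " + " ".join(context)).strip()

def rerank(results, context):
    """Action 2: reorder the results by how often they mention the context terms.
    `results` is a list of (doc_id, text) pairs returned by the backend."""
    def score(item):
        text = item[1].lower()
        return sum(text.count(t.lower()) for t in context)
    return sorted(results, key=score, reverse=True)

def search_with_middleware(query, knowledge_base, backend_search):
    """The middleware sits between the user and an unchanged IR system and
    only ever rewrites the query and reorders the results."""
    context = context_terms(query, knowledge_base)
    results = backend_search(rewrite_query(query, context))
    return rerank(results, context)
```

Keeping the backend untouched is what lets the approach act as a drop-in layer for any existing IR system, as the abstract describes.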

Relevance:

70.00%

Publisher:

Abstract:

The design of interfaces to facilitate user search has become critical for search engines, e-commerce sites, and intranets. This study investigated the use of targeted instructional hints to improve search by measuring their quantitative effects on users' performance and satisfaction. The effects of syntactic, semantic, and exemplar search hints on user behavior were evaluated in an empirical investigation using naturalistic scenarios. Combining the three search-hint components, each with two levels of intensity, in a factorial design generated eight search engine interfaces. Eighty participants took part in the study and each completed six realistic search tasks. Results revealed that the inclusion of search hints improved user effectiveness, efficiency, and confidence when using the search interfaces, but with complex interactions that require specific guidelines for search interface designers. These design guidelines will allow designers to create more effective interfaces for a variety of search applications.
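
The factorial design is small enough to enumerate directly; the sketch below generates the eight interface conditions from the three hint components at two intensity levels. The level names "low" and "high" are placeholders, not the labels used in the study.

```python
from itertools import product

# Three hint components, each at one of two intensity levels, gives 2**3 = 8
# search interface conditions.
components = ["syntactic", "semantic", "exemplar"]
levels = ["low", "high"]

interfaces = [dict(zip(components, combo)) for combo in product(levels, repeat=3)]
assert len(interfaces) == 8
for i, interface in enumerate(interfaces, 1):
    print(f"Interface {i}: {interface}")
```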

Relevance:

70.00%

Publisher:

Abstract:

Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and Intranet. Many employees work from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user who is in control of the session. Therefore, highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems. It is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model captures sequences and sub-sequences of user actions, their orderings, and their temporal relationships, providing a model of how each user typically behaves. Users are then continuously monitored during software operations. Large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in the web logs generated in response to those actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences of user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis. A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types. The tool is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems such as mobile devices and the analysis of network traffic.
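
A rough sketch of the n-gram idea described above, assuming user actions have already been extracted from the web logs as a sequence of symbolic events: an add-alpha-smoothed n-gram model is trained per user, and a monitored window of actions is accepted as that user only if its average log-probability stays above a threshold. The class and method names, the smoothing scheme, and the threshold are illustrative assumptions; they are not taken from the Intruder Detector implementation.

```python
import math
from collections import defaultdict

class NGramUserModel:
    """Per-user n-gram model over symbolic actions (e.g. request types)."""

    def __init__(self, n=3, alpha=0.1):
        self.n = n
        self.alpha = alpha                      # add-alpha smoothing
        self.context_counts = defaultdict(int)
        self.ngram_counts = defaultdict(int)
        self.vocab = set()

    def train(self, actions):
        padded = ["<s>"] * (self.n - 1) + list(actions)
        self.vocab.update(actions)
        for i in range(len(actions)):
            context = tuple(padded[i:i + self.n - 1])
            self.ngram_counts[context + (padded[i + self.n - 1],)] += 1
            self.context_counts[context] += 1

    def log_prob(self, actions):
        """Average per-action log-probability of an observed window."""
        padded = ["<s>"] * (self.n - 1) + list(actions)
        total = 0.0
        vocab_size = max(len(self.vocab), 1)
        for i in range(len(actions)):
            context = tuple(padded[i:i + self.n - 1])
            num = self.ngram_counts[context + (padded[i + self.n - 1],)] + self.alpha
            den = self.context_counts[context] + self.alpha * vocab_size
            total += math.log(num / den)
        return total / max(len(actions), 1)

    def looks_like_user(self, actions, threshold=-4.0):
        """Continuous-authentication check: large deviations (low average
        log-probability) suggest the session is not the modelled user."""
        return self.log_prob(actions) >= threshold
```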

Relevance:

60.00%

Publisher:

Abstract:

Query reformulation is a key user behavior during Web search. Our research goal is to develop predictive models of query reformulation during Web searching. This article reports results from a study in which we automatically classified the query-reformulation patterns for 964,780 Web searching sessions, composed of 1,523,072 queries, to predict the next query reformulation. We employed an n-gram modeling approach to describe the probability of users transitioning from one query-reformulation state to another to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction, coverage of the dataset, and complexity of the possible pattern set. The results show that Reformulation and Assistance account for approximately 45% of all query reformulations; furthermore, the results demonstrate that the first- and second-order models provide the best predictability, between 28 and 40% overall and higher than 70% for some patterns. Implications are that the n-gram approach can be used for improving searching systems and searching assistance.
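
The prediction step itself can be illustrated with a small sketch: transition counts over reformulation-state histories are turned into conditional probabilities, and the most probable next state is returned for a given history. The state labels other than Reformulation and Assistance, and the function names, are illustrative assumptions rather than the classification scheme used in the study.

```python
from collections import defaultdict

def train_transition_model(sessions, order=2):
    """Estimate P(next state | previous `order` states) from sequences of
    query-reformulation states observed in search sessions."""
    counts = defaultdict(lambda: defaultdict(int))
    for states in sessions:
        for i in range(order, len(states)):
            history = tuple(states[i - order:i])
            counts[history][states[i]] += 1
    model = {}
    for history, nxt in counts.items():
        total = sum(nxt.values())
        model[history] = {state: c / total for state, c in nxt.items()}
    return model

def predict_next(model, recent_states):
    """Return the most likely next reformulation state, or None if the
    history was never observed (a coverage gap, as discussed in the article)."""
    dist = model.get(tuple(recent_states))
    if not dist:
        return None
    return max(dist, key=dist.get)

# Example: a second-order model predicts the next state from the last two.
sessions = [["New", "Reformulation", "Assistance", "Reformulation"],
            ["New", "Assistance", "Reformulation", "Reformulation"]]
model = train_transition_model(sessions, order=2)
print(predict_next(model, ["New", "Reformulation"]))  # -> "Assistance"
```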