974 results for user behavior
Abstract:
Most current computer systems authorise the user at the start of a session and do not detect whether the current user is still the initially authorised user, a substitute user, or an intruder pretending to be a valid user. A system that continuously verifies the identity of the user throughout the session, without being intrusive to the end-user, is therefore needed. Such a system is called a continuous authentication system (CAS). Researchers have applied several approaches to CAS, most of them based on biometrics. These continuous biometric authentication systems (CBAS) are driven by user traits and characteristics. One of the main biometric modalities is keystroke dynamics, which has been widely tried and accepted for providing continuous user authentication. Keystroke dynamics is appealing for several reasons. First, it is unobtrusive, since users are typing on the computer keyboard anyway. Second, it requires no extra hardware. Finally, keystroke data remain available after the authentication step at the start of the computer session. Research on CBAS with keystroke dynamics is still limited. To date, most existing schemes ignore the continuous authentication scenarios, which may limit their practicality in different real-world applications. Moreover, contemporary keystroke-dynamics CBAS use character sequences as features representing user typing behavior, but their feature selection criteria do not guarantee features with strong statistical significance, which may lead to a less accurate statistical representation of the user. Furthermore, their selected features do not inherently incorporate user typing behavior. Finally, existing keystroke-dynamics CBAS typically depend on pre-defined user-typing models for continuous authentication, which restricts them to authenticating only known users whose typing samples have been modelled. This research addresses these limitations by developing a generic model to better identify and understand the characteristics and requirements of each type of CBAS and continuous authentication scenario. It also proposes four statistical feature selection techniques that retain features with the highest statistical significance and encompass different user typing behaviors, so that typing patterns are represented effectively. Finally, it proposes a user-independent threshold approach that can authenticate a user accurately without requiring any pre-defined user typing model, and enhances this technique to detect an impostor or intruder who may take over at any point during the computer session.
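As a rough illustration of the kind of timing features keystroke-dynamics systems build on, the sketch below extracts digraph (key-pair) latencies from a typing stream and keeps only digraphs with enough samples to be statistically meaningful. The event format, function names, and the five-sample cutoff are illustrative assumptions, not the feature selection techniques developed in the thesis.

```python
from collections import defaultdict
from statistics import mean, stdev

def digraph_latencies(events):
    """Key-down-to-key-down latencies for consecutive key pairs.

    `events` is a time-ordered list of (key, press_time_ms) tuples;
    the format is illustrative, not the one used in the thesis.
    """
    latencies = defaultdict(list)
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        latencies[(k1, k2)].append(t2 - t1)
    return latencies

def digraph_profile(latencies, min_samples=5):
    """Keep only digraphs observed often enough to estimate mean and spread."""
    return {
        pair: (mean(times), stdev(times))
        for pair, times in latencies.items()
        if len(times) >= min_samples
    }

# Example: a short typing session of repeated "the" (key, timestamp in ms)
session = [("t", 0), ("h", 110), ("e", 205), ("t", 900), ("h", 1005), ("e", 1102),
           ("t", 1900), ("h", 2020), ("e", 2110), ("t", 2800), ("h", 2895), ("e", 3005),
           ("t", 3700), ("h", 3815), ("e", 3910)]
print(digraph_profile(digraph_latencies(session), min_samples=5))
```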
Abstract:
The intersection of the Social Web and the Semantic Web has put folksonomy in the spotlight for its potential to overcome the knowledge acquisition bottleneck and to provide insight into the "wisdom of the crowds". Folksonomy, which emerges from collaborative tagging activities, offers insight into users' understanding of Web resources and can be useful for searching and organizing purposes. However, the collaborative tagging vocabulary poses challenges, since tags are freely chosen by users and may exhibit synonymy and polysemy problems. To overcome these challenges and exploit the potential of folksonomy as emergent semantics, we propose to consolidate the diverse vocabulary into unified entities and concepts. We extract a tag ontology through an ontology learning process to represent the semantics of a tagging community. This paper presents a novel approach to learning the ontology based on the widely used lexical database WordNet. We present personalization strategies that disambiguate the semantics of tags by combining the opinions of WordNet lexicographers with users' tagging behavior. We provide an empirical evaluation by using the semantic information contained in the ontology in a tag recommendation experiment. The results show that using the semantic relationships in the ontology improves the accuracy of the tag recommender.
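As a rough sketch of how WordNet lexicographer information could be combined with co-tagging behavior to disambiguate a tag, the snippet below uses NLTK's WordNet interface and picks the sense whose lexicographer category is most supported by a user's co-occurring tags. The weighting scheme and function names are illustrative assumptions, not the personalization strategies of the paper.

```python
# Requires: pip install nltk; then nltk.download('wordnet')
from collections import Counter
from nltk.corpus import wordnet as wn

def candidate_senses(tag):
    """WordNet synsets for a tag, paired with their lexicographer file
    (e.g. 'noun.food' vs. 'noun.artifact' for 'apple'), a coarse sense
    category that can anchor an ontology concept."""
    return [(s, s.lexname()) for s in wn.synsets(tag)]

def disambiguate(tag, co_tags):
    """Choose the sense whose category is most frequent among the senses of
    co-occurring tags; a crude proxy for the user's tagging behavior."""
    co_categories = Counter(lex for co in co_tags for _, lex in candidate_senses(co))
    senses = candidate_senses(tag)
    if not senses:
        return None
    return max(senses, key=lambda pair: co_categories.get(pair[1], 0))[0]

print(disambiguate("apple", ["fruit", "banana", "juice"]))
```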
Abstract:
Background: General practitioners (GPs) and nurses are ideally placed to address the significant unmet demand for the treatment of cannabis-related problems, given the number of people who regularly seek their care. The aim of this study was to evaluate differences between GPs' and nurses' perceived knowledge, beliefs, and behaviors toward cannabis use and its screening and management. Methods: The study involved 161 nurses and 503 GPs who completed a survey distributed via conference satchels to delegates of Healthed seminars focused on topics relevant to women's and children's health. Differences between GPs and nurses were analyzed using χ²-tests and two-sample t-tests, while logistic regression examined predictors of service provision. Results: GPs were more likely than nurses to have engaged in cannabis-related service provision, but also more frequently reported barriers related to time, interest, and having more important issues to address. Nurses reported less knowledge, fewer skills, and lower role legitimacy. Perceived screening skills predicted screening and referral to alcohol and other drug (AOD) services, while knowing a regular cannabis user increased the likelihood of referrals only. Conclusions: Approaches to increase cannabis-related screening and intervention in primary care may be improved by involving nurses and by leveraging the relationship between nurses and doctors.
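For readers unfamiliar with the analyses named above, the sketch below shows how a χ²-test, a two-sample t-test, and a logistic regression might be run with SciPy and statsmodels. All numbers are synthetic except the group sizes (503 GPs, 161 nurses); this is not the study's data or code.

```python
import numpy as np
from scipy.stats import chi2_contingency, ttest_ind
import statsmodels.api as sm

# Invented 2x2 table: profession (GP / nurse) x engaged in screening (yes / no)
table = np.array([[260, 243],    # GPs
                  [55, 106]])    # nurses
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")

# Two-sample t-test on an invented perceived-knowledge score
rng = np.random.default_rng(0)
gp_knowledge = rng.normal(3.4, 0.8, 503)
nurse_knowledge = rng.normal(3.0, 0.8, 161)
print(ttest_ind(gp_knowledge, nurse_knowledge))

# Logistic regression: does perceived screening skill predict screening?
skill = rng.normal(0, 1, 664)
screened = (rng.random(664) < 1 / (1 + np.exp(-(0.5 + 1.2 * skill)))).astype(int)
model = sm.Logit(screened, sm.add_constant(skill)).fit(disp=0)
print(model.params)
```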
Abstract:
There are different ways to authenticate humans, which is an essential prerequisite for access control. The authentication process can be subdivided into three categories relying on something someone i) knows (e.g. a password), and/or ii) has (e.g. a smart card), and/or iii) is (biometric features). Besides classical attacks on password solutions and the risk that identity-related objects can be stolen, traditional biometric solutions have their own disadvantages, such as the need for expensive devices and the risk of stolen bio-templates. Moreover, existing approaches usually perform authentication only once, at the start of a session. Non-intrusive and continuous monitoring of user activities has emerged as a promising way to harden the authentication process, extending the third category: iii-2) how someone behaves. In recent years, various keystroke-dynamics behavior-based approaches have been published that are able to authenticate humans based on their typing behavior. The majority focus on so-called static text approaches, in which users are requested to type a previously defined text. Relatively few techniques are based on free text approaches, which allow transparent monitoring of user activities and provide continuous verification. Unfortunately, only a few solutions are deployable in application environments under realistic conditions; unsolved problems include scalability, high response times, and high error rates. The aim of this work is the development of behavior-based verification solutions. Our main requirement is to deploy these solutions under realistic conditions within existing environments, in order to enable transparent, free-text-based continuous verification of active users with low error rates and response times.
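A minimal sketch of the free-text monitoring loop such systems implement: compare the digraph latencies observed in the most recent window of keystrokes against a reference profile and raise an alarm when the deviation exceeds a threshold. The scoring rule, data layout, and threshold are illustrative assumptions, not the verification method developed in this work.

```python
from statistics import mean

def window_score(reference, window):
    """Average absolute z-score over digraphs shared between a reference
    profile {digraph: (mean_ms, std_ms)} and the latencies observed in the
    current window {digraph: [ms, ...]}. Purely illustrative."""
    scores = []
    for pair, (mu, sigma) in reference.items():
        if pair in window and sigma > 0:
            scores.append(abs(mean(window[pair]) - mu) / sigma)
    return mean(scores) if scores else float("inf")

def verify_stream(reference, windows, threshold=2.0):
    """Yield a decision for each window of recent keystrokes."""
    for w in windows:
        yield "accept" if window_score(reference, w) <= threshold else "alarm"

reference = {("t", "h"): (110.0, 12.0), ("h", "e"): (95.0, 10.0)}
windows = [{("t", "h"): [112, 108], ("h", "e"): [97, 93]},      # genuine-looking
           {("t", "h"): [180, 175], ("h", "e"): [150, 160]}]    # deviating
print(list(verify_stream(reference, windows)))
```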
Abstract:
Previous studies have shown that users' cognitive styles play an important role during Web searching. However, only limited studies have shown the relationship between cognitive styles and Web search behavior, and it is not clear which components of Web search behavior are influenced by cognitive styles. This paper examines the relationships between users' cognitive styles and their Web searching and develops a model that portrays these relationships. The study uses qualitative and quantitative analyses of data gathered from 50 participants. A questionnaire was used to collect participants' demographic information, and Riding's (1991) Cognitive Style Analysis (CSA) test was used to assess their cognitive styles. Results show that users' cognitive styles influenced their information searching strategies, query reformulation behavior, Web navigational styles, and information processing approaches. The user model developed in this study depicts the fundamental relationships between users' Web search behavior and their cognitive styles. Modeling Web search behavior with a greater understanding of users' cognitive styles can help information science researchers and information systems designers to bridge the semantic gap between the user and the system. Implications of the research for theory and practice, and future work, are discussed.
Abstract:
In this thesis we study a series of multi-user resource-sharing problems for the Internet, which involve distributing a common resource among the participants of multi-user systems (servers or networks). We study concurrently accessible resources, which for end-users may be either exclusively or non-exclusively accessible. For each kind we suggest a separate algorithm or a modification of a common reputation scheme. Every algorithm or method is studied from different perspectives: optimality of the protocol, selfishness of end users, and fairness of the protocol for end users. On the one hand, this multifaceted analysis allows us to select the most suitable protocols from the available ones, based on trade-offs between optimality criteria. On the other hand, predictions about the future Internet dictate new rules for the optimality we should take into account and new properties of the networks that can no longer be neglected. In this thesis we have studied new protocols for resource-sharing problems such as the backoff protocol, defense mechanisms against denial-of-service attacks, and fairness and confidentiality for users in overlay networks. For the backoff protocol we present an analysis of a general backoff scheme, in which an optimization is applied to a general-form backoff function. This leads to an optimality condition for backoff protocols in both slotted-time and continuous-time models. Additionally, we present an extension of the backoff scheme to achieve fairness for participants in an unfair environment, such as one with unequal wireless signal strengths. Finally, for the backoff algorithm we suggest a reputation scheme that deals with misbehaving nodes. For the next problem, denial-of-service attacks, we suggest two schemes that deal with malicious behavior under two conditions: forged identities and unspoofed identities. For the former we suggest a novel most-knocked-first-served algorithm, while for the latter we apply a reputation mechanism to restrict resource access for misbehaving nodes. Finally, we study a reputation scheme for overlay and peer-to-peer networks, where the resource is not located at a common station but spread across the network. The theoretical analysis suggests what behavior end stations will adopt under such a reputation mechanism.
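As a concrete instance of the protocol family analyzed here, the sketch below simulates plain slotted binary exponential backoff on a saturated shared channel. It only illustrates the setting; it is not the generalized backoff function or the optimality analysis of the thesis.

```python
import random

def simulate_backoff(n_nodes=10, n_packets=200, max_exp=10, seed=1):
    """Slotted binary exponential backoff: after a collision each involved node
    doubles its contention window and retries in a uniformly chosen later slot."""
    random.seed(seed)
    collisions = 0
    delivered = 0
    # Each node: [next slot in which it will transmit, current backoff exponent]
    nodes = [[0, 0] for _ in range(n_nodes)]
    slot = 0
    while delivered < n_packets:
        transmitting = [n for n in nodes if n[0] == slot]
        if len(transmitting) == 1:                      # success
            delivered += 1
            node = transmitting[0]
            node[1] = 0
            node[0] = slot + 1 + random.randint(0, 1)   # saturated: next packet soon
        elif len(transmitting) > 1:                     # collision
            collisions += 1
            for node in transmitting:
                node[1] = min(node[1] + 1, max_exp)
                node[0] = slot + 1 + random.randint(0, 2 ** node[1] - 1)
        slot += 1
    return slot, collisions

slots, collisions = simulate_backoff()
print(f"{slots} slots and {collisions} collisions to deliver 200 packets")
```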
Abstract:
Human age is surrounded by an assumed set of rules and behaviors imposed by local culture and the society people live in. This paper introduces software that tracks a person's presence on the Internet and examines the activities he or she conducts online. The paper answers questions such as: how "old" are you on the Internet? How soon will a newbie be exposed to adult websites? How long will it take a new Internet user to discover social networking sites? And how many years does a user have to surf online to celebrate his or her first "birthday" of Internet presence? Findings are presented from a database of 105 school and university students containing every click of their first 24 hours of Internet usage. The findings provide valuable insights for Internet marketing, ethics, Internet business, and the mapping of Internet life onto real life. Privacy and ethical issues related to the study are discussed at the end.
Abstract:
Assessment of seismic performance and estimation of permanent displacements for submerged slopes require the accurate description of the soil's stress-strain-strength relationship under irregular cyclic loading. The geological profile of submerged slopes on the continental shelf typically consists of normally to lightly overconsolidated clays with depths ranging from a few meters to a few hundred meters and very low slope angles. This paper describes the formulation of a simplified effective-stress-based model, which is able to capture the key aspects of the cyclic behavior of normally consolidated clays. The proposed constitutive law incorporates anisotropic hardening and bounding surface principles to allow the user to simulate different shear strain and stress reversal histories as well as provide realistic descriptions of the accumulation of plastic shear strains and excess pore pressures during successive loading cycles.
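For readers unfamiliar with bounding surface plasticity, such models typically make the plastic modulus depend on the distance between the current stress point and its image point on the bounding surface: the response is stiffest far from the surface and softens to the bounding-surface value as the stress point approaches it. The expression below is only a generic interpolation of that kind, not the specific hardening law proposed in the paper.

```latex
% Generic bounding-surface interpolation for the plastic modulus (illustrative
% form only, not this paper's hardening law):
%   K_p        plastic modulus at the current stress point
%   \bar{K}_p  plastic modulus at the image point on the bounding surface
%   \delta     distance from the current stress point to its image point
%   \delta_0   reference (maximum) distance, h a model hardening parameter
K_p \;=\; \bar{K}_p \;+\; h\,\frac{\delta}{\delta_0 - \delta}
```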
Abstract:
Server performance has become a crucial issue for improving the overall performance of the World-Wide Web. This paper describes Webmonitor, a tool for evaluating and understanding server performance, and presents new results for a realistic workload. Webmonitor measures activity and resource consumption, both within the kernel and in HTTP processes running in user space. Webmonitor is implemented using an efficient combination of sampling and event-driven techniques that exhibit low overhead. Our initial implementation is for the Apache World-Wide Web server running on the Linux operating system. We demonstrate the utility of Webmonitor by measuring and understanding the performance of a Pentium-based PC acting as a dedicated WWW server. Our workload uses a file size distribution with a heavy tail. This captures the fact that Web servers must concurrently handle some requests for large audio and video files, and a large number of requests for small documents, containing text or images. Our results show that in a Web server saturated by client requests, over 90% of the time spent handling HTTP requests is spent in the kernel. Furthermore, keeping TCP connections open, as required by TCP, causes a factor of 2-9 increase in the elapsed time required to service an HTTP request. Data gathered from Webmonitor provide insight into the causes of this performance penalty. Specifically, we observe a significant increase in resource consumption along three dimensions: the number of HTTP processes running at the same time, CPU utilization, and memory utilization. These results emphasize the important role of operating system and network protocol implementation in determining Web server performance.
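Webmonitor itself instruments the kernel and the HTTP processes with a combination of sampling and event tracing. As a loose illustration of the user-space versus kernel-space split reported above, the sketch below times a system-call-heavy workload and separates user-mode from kernel-mode CPU time using the Unix-only resource module; the workload and function names are illustrative, not Webmonitor's implementation.

```python
# Unix-like systems only: the resource module is not available on Windows.
import os
import resource

def cpu_split(work):
    """Run `work()` and return (user-mode seconds, kernel-mode seconds)."""
    before = resource.getrusage(resource.RUSAGE_SELF)
    work()
    after = resource.getrusage(resource.RUSAGE_SELF)
    user = after.ru_utime - before.ru_utime      # CPU time spent in user space
    kernel = after.ru_stime - before.ru_stime    # CPU time spent in the kernel
    return user, kernel

def io_heavy():
    # System-call-heavy work: most of its CPU time should be kernel time.
    with open(os.devnull, "wb") as f:
        for _ in range(200_000):
            f.write(b"x" * 512)

user, kernel = cpu_split(io_heavy)
print(f"user: {user:.3f}s  kernel: {kernel:.3f}s")
```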
Abstract:
Effective engineering of the Internet is predicated upon a detailed understanding of issues such as the large-scale structure of its underlying physical topology, the manner in which it evolves over time, and the way in which its constituent components contribute to its overall function. Unfortunately, developing a deep understanding of these issues has proven to be a challenging task, since it in turn involves solving difficult problems such as mapping the actual topology, characterizing it, and developing models that capture its emergent behavior. Consequently, even though there are a number of topology models, it is an open question as to how representative the topologies they generate are of the actual Internet. Our goal is to produce a topology generation framework which improves the state of the art and is based on design principles which include representativeness, inclusiveness, and interoperability. Representativeness leads to synthetic topologies that accurately reflect many aspects of the actual Internet topology (e.g. hierarchical structure, degree distribution, etc.). Inclusiveness combines the strengths of as many generation models as possible in a single generation tool. Interoperability provides interfaces to widely-used simulation and visualization applications such as ns and SSF. We call such a tool a universal topology generator. In this paper we discuss the design, implementation and usage of the BRITE universal topology generation tool that we have built. We also describe the BRITE Analysis Engine, BRIANA, which is an independent piece of software designed and built upon BRITE design goals of flexibility and extensibility. The purpose of BRIANA is to act as a repository of analysis routines along with a user-friendly interface that allows its use on different topology formats.
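Not BRITE itself, but a small sketch of the kind of model family such generators incorporate: generating a synthetic topology with a preferential-attachment (Barabási-Albert) model using networkx, inspecting its degree distribution, and exporting a plain edge list (BRITE's own formats and its ns/SSF interfaces are richer).

```python
from collections import Counter
import networkx as nx

# Preferential-attachment topology: 1000 nodes, each new node attaches to 2 others.
g = nx.barabasi_albert_graph(n=1000, m=2, seed=42)

degrees = [d for _, d in g.degree()]
distribution = Counter(degrees)
print("nodes:", g.number_of_nodes(), "edges:", g.number_of_edges())
print("most common degrees:", distribution.most_common(5))
print("highest degrees:", sorted(degrees, reverse=True)[:5])

# Export in a format a simulator or analysis tool could read.
nx.write_edgelist(g, "topology.edges", data=False)
```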
Abstract:
The Feeding Experiments End-user Database (FEED) is a research tool developed by the Mammalian Feeding Working Group at the National Evolutionary Synthesis Center that permits synthetic, evolutionary analyses of the physiology of mammalian feeding. The tasks of the Working Group are to compile physiologic data sets into a uniform digital format stored at a central source, develop a standardized terminology for describing and organizing the data, and carry out a set of novel analyses using FEED. FEED contains raw physiologic data linked to extensive metadata. It serves as an archive for a large number of existing data sets and a repository for future data sets. The metadata are stored as text and images that describe experimental protocols, research subjects, and anatomical information. The metadata incorporate controlled vocabularies to allow consistent use of the terms used to describe and organize the physiologic data. The planned analyses address long-standing questions concerning the phylogenetic distribution of phenotypes involving muscle anatomy and feeding physiology among mammals, the presence and nature of motor pattern conservation in the mammalian feeding muscles, and the extent to which suckling constrains the evolution of feeding behavior in adult mammals. We expect FEED to be a growing digital archive that will facilitate new research into understanding the evolution of feeding anatomy.
Abstract:
Polymer nanocomposites offer the potential of enhanced properties, such as increased modulus and barrier properties, to the end user. Much work has been carried out on the effects of extrusion conditions on melt-processed nanocomposites, but very little research has been conducted on the use of polymer nanocomposites in semi-solid forming processes such as thermoforming and injection blow molding. These processes are used to make much of today's packaging, and any improvements in performance, such as possible lightweighting due to increased modulus, would bring significant benefits both economically and environmentally. The work described here looks at the biaxial deformation of polypropylene–clay nanocomposites under industrial forming conditions in order to determine whether the presence of clay affects the processability, structure and mechanical properties of the stretched material. Melt-compounded polypropylene/clay composites in sheet form were biaxially stretched at a variety of processing conditions to examine the effect of high temperature, high strain and high strain rate processing on sheet structure and properties. A biaxial test rig was used to carry out the testing, imposing conditions on the sheet that are representative of those applied in injection blow molding and thermoforming. Results show that the presence of clay increases the yield stress relative to the unfilled material at typical processing temperatures and that the sensitivity of the yield stress to temperature is greater for the filled material. The stretching process is found to have a significant effect on the delamination and alignment of clay particles (as observed by TEM) and on the yield stress and elongation at break of the stretched sheet.
Abstract:
In this paper we present a complete interactive system able to detect human laughs and respond appropriately, by integrating information about the human behavior and the context. Furthermore, the impact of our autonomous laughter-aware agent on the user's humor experience and on the interaction between user and agent is evaluated by subjective and objective means. Preliminary results show that the laughter-aware agent increases the humor experience (i.e., the user's felt amusement and the funniness rating of the film clip) and creates the notion of a shared social experience, indicating that the agent is useful for eliciting positive humor-related affect and emotional contagion.
Abstract:
The technique of externally bonding fiber-reinforced polymer (FRP) composites has become very popular worldwide for retrofitting existing reinforced concrete (RC) structures. Debonding of FRP from the concrete substrate is a typical failure mode in such strengthened structures. The bond behavior between FRP and concrete thus plays a crucial role in these structures. The FRP-to-concrete bond behavior has been extensively investigated experimentally, commonly using a single or double shear test of the FRP-to-concrete bonded joint. Comparatively, much less research has been concerned with numerical simulation, chiefly due to difficulties in the accurate modeling of the complex behavior of concrete. This paper presents a simple but robust finite-element (FE) model for simulating the bond behavior in the entire debonding process for the single shear test. A concrete damage plasticity model is proposed to capture the concrete-to-FRP bond behavior. Numerical results are in close agreement with test data, validating the model. In addition to accuracy, the model has two further advantages: it only requires the basic material parameters (i.e., no arbitrary user-defined parameter such as the shear retention factor is required) and it can be directly implemented in the FE software ABAQUS.
Abstract:
The ISO 9241 series of standards defines criteria for the ergonomics of human-system interaction. In markets with a huge variety of offers and little possibility of differentiation, providers can gain a decisive competitive advantage through user-oriented interfaces. A precondition for this is that relevant information can be obtained for entrepreneurial decisions in this regard. To test how users of universal search result pages use those pages and how they attend to different elements, an eye tracking experiment with a mixed design was developed. Twenty subjects were confronted with search engine result pages (SERPs) and were instructed to make a decision, while the conditions "national vs. international city" and "with vs. without miniaturized Google map" were varied. Parameters such as fixation count, fixation duration and time to first fixation were computed from the raw eye tracking data and supplemented by click rate data as well as data from questionnaires. Results of this pilot study revealed some remarkable findings, such as a vampire effect of the miniaturized Google maps. Furthermore, Google maps did not shorten the decision-making process, Google ads were not fixated, and visual attention on the SERPs was influenced by the position of elements on the SERP and by the users' familiarity with the search target. These results support the theory of Amount of Invested Mental Effort (AIME) and give providers empirical evidence for taking users' expectations into account. Furthermore, the results indicated that the task-oriented goal mode of participants moderated the attention paid to ads. Most importantly, SERPs with images attracted the viewers' attention for much longer than those without images. This unique selling proposition may lead to a distortion of competition in these markets.
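As an illustration of the gaze metrics named above (fixation count, fixation duration, time to first fixation), the sketch below computes them for one area of interest (AOI) from a list of fixations. The record format and AOI coordinates are hypothetical, not those of the study's eye tracker or analysis software.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float           # screen position, px
    y: float           # screen position, px
    start_ms: float    # onset relative to stimulus start
    duration_ms: float

def aoi_metrics(fixations, aoi):
    """aoi = (x_min, y_min, x_max, y_max) in screen pixels."""
    x0, y0, x1, y1 = aoi
    hits = [f for f in fixations if x0 <= f.x <= x1 and y0 <= f.y <= y1]
    return {
        "fixation_count": len(hits),
        "total_fixation_duration_ms": sum(f.duration_ms for f in hits),
        "time_to_first_fixation_ms": min((f.start_ms for f in hits), default=None),
    }

fixations = [Fixation(400, 300, 120, 240), Fixation(820, 150, 400, 180),
             Fixation(810, 160, 620, 310)]
map_aoi = (760, 100, 1000, 400)   # hypothetical region of a miniaturized map
print(aoi_metrics(fixations, map_aoi))
```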