885 results for implicit authentication
Abstract:
In the context of products from certain regions or countries being banned because of an identified or non-identified hazard, proof of geographical origin is essential with regard to feed and food safety issues. Usually, the product labeling of an affected feed lot shows origin, and the paper documentation shows traceability. Incorrect product labeling is common in embargo situations, however, and alternative analytical strategies for controlling feed authenticity are therefore needed. In this study, distillers' dried grains and solubles (DDGS) were chosen as the product on which to base a comparison of analytical strategies aimed at identifying the most appropriate one. Various analytical techniques were investigated for their ability to authenticate DDGS, including spectroscopic and spectrometric techniques combined with multivariate data analysis, as well as proven techniques for authenticating food, such as DNA analysis and stable isotope ratio analysis. An external validation procedure (called the system challenge) was used to analyze sample sets blind and to compare analytical techniques. All the techniques were adapted so as to be applicable to the DDGS matrix. They produced positive results in determining the botanical origin of DDGS (corn vs. wheat), and several of them were able to determine the geographical origin of the DDGS in the sample set. The maintenance and extension of the databanks generated in this study through the analysis of new authentic samples from a single location are essential in order to monitor developments and processing that could affect authentication.
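The multivariate data-analysis side of the workflow described above (spectral fingerprints plus chemometrics to separate botanical origins) can be sketched with a toy principal-component analysis. The spectra, band positions, and noise levels below are invented for illustration and are not data or parameters from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(0.0, 1.0, 100)

def spectra(center, n):
    """Hypothetical absorption spectra: one Gaussian band plus noise."""
    band = np.exp(-((wavelengths - center) ** 2) / 0.005)
    return band + rng.normal(0.0, 0.05, size=(n, wavelengths.size))

# Pretend corn-based and wheat-based DDGS differ in one band position.
corn = spectra(0.3, 30)
wheat = spectra(0.7, 30)
X = np.vstack([corn, wheat])

# PCA via SVD on mean-centered spectra.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T  # first two principal-component scores

# With well-separated bands, the two origins split cleanly along PC1.
pc1_corn, pc1_wheat = scores[:30, 0], scores[30:, 0]
separated = (pc1_corn.max() < pc1_wheat.min()) or (pc1_wheat.max() < pc1_corn.min())
```

In a real study the score plot would be inspected (or fed to a classifier) and validated against blind samples, as in the system challenge described above.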
Abstract:
In recent years, the adaptation of Wireless Sensor Networks (WSNs) to application areas requiring mobility has increased the security threats against the confidentiality, integrity and privacy of information, as well as against network connectivity. Since key management plays an important role in securing both information and connectivity, a proper authentication and key management scheme is required in mobility-enabled applications, where the authentication of a node with the network is a critical issue. In this paper, we present an authentication and key management scheme supporting node mobility in a heterogeneous WSN consisting of many low-capability sensor nodes and a few high-capability sensor nodes. We analyze the proposed solution analytically (using MATLAB) and by simulation (in the OMNET++ simulator) to show that it has lower memory requirements and better network connectivity and resilience against attacks than some existing schemes. We also propose a two-level secure authentication method for mobile sensor nodes for authentication and key establishment.
Abstract:
This paper presents an integer programming model for developing optimal shift schedules while allowing extensive flexibility in terms of alternate shift starting times, shift lengths, and break placement. The model combines the work of Moondra (1976) and Bechtold and Jacobs (1990) by implicitly matching meal breaks to implicitly represented shifts. Moreover, the new model extends the work of these authors to enable the scheduling of overtime and the scheduling of rest breaks. We compare the new model to Bechtold and Jacobs' model over a diverse set of 588 test problems. The new model generates optimal solutions more rapidly, solves problems with more shift alternatives, and does not generate schedules violating the operative restrictions on break timing.
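The set-covering core of such shift-scheduling models can be illustrated with a toy instance. The demands and candidate shifts below are hypothetical, and a brute-force search stands in for the integer-programming solver (the paper's contribution — implicitly represented shifts and breaks — is far more compact than this explicit enumeration):

```python
from itertools import product

# Toy instance: 6 planning periods with staffing demands, and 3 candidate
# shifts, each covering a contiguous block of periods (hypothetical data).
demand = [2, 3, 4, 4, 3, 1]
shifts = [
    {0, 1, 2},  # early shift covers periods 0-2
    {2, 3, 4},  # mid shift covers periods 2-4
    {3, 4, 5},  # late shift covers periods 3-5
]

def coverage(counts):
    """Staffing per period when counts[s] workers are assigned to shift s."""
    cov = [0] * len(demand)
    for s, n in enumerate(counts):
        for p in shifts[s]:
            cov[p] += n
    return cov

# Minimize total workers subject to covering every period's demand.
best = None
for counts in product(range(6), repeat=len(shifts)):
    if all(c >= d for c, d in zip(coverage(counts), demand)):
        if best is None or sum(counts) < sum(best):
            best = counts

print(best, sum(best))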
Abstract:
This qualitative study explores the subjective experience of being led by investigating the impact of followers’ Implicit Leadership Theories (ILTs) on their cognitive processes, affective responses and behavioural intentions towards leadership-claimants. The study explores how such responses influence the quality of hierarchical workplace relationships, using a framework based on Leader-Member Exchange (LMX) Theory. The research uses focus groups to elicit descriptions of the ILTs held by forty final-year undergraduate Business and Management students. The data were then analysed using an abductive process, permitting an interpretative understanding of the meanings participants attach to their past experiences and future expectations. This research addresses a perceived gap by making a theoretical contribution to knowledge and understanding in this field, focusing on how followers’ emotional responses affect their behaviour, how this impacts organisational outcomes, and what the implications are for HRD practitioners. The findings support previous research into the content and structure of ILTs, but extend it by examining the impact of affect on workplace behaviour. Findings demonstrate that where follower ILT needs were met, positive outcomes ensued for participants, their superiors, and their organisations. Conversely, where follower ILT needs were not matched, various negative effects emerged, ranging from poor performance and impaired well-being to withdrawal behaviour and outright rebellion. The findings suggest dynamic reciprocal links amongst outcomes, behaviours, and LMX, and demonstrate an alignment of cognitive, emotional and behavioural responses corresponding to either high-LMX or low-LMX relationships, with major impacts on job satisfaction, commitment and well-being.
Abstract:
This work presents a tool to support authentication studies of paintings attributed to the modernist Portuguese artist Amadeo de Souza-Cardoso (1887-1918). The strategy adopted was to quantify and combine information extracted from analysis of the brushstroke with information on the pigments present in the paintings. The brushstroke analysis was performed by combining Gabor filters and the Scale Invariant Feature Transform (SIFT). Hyperspectral imaging and elemental analysis were used to compare the materials in a painting with those in a database of oil paint tubes used by the artist. The outputs of the tool are a quantitative indicator of authenticity and a mapping image that indicates any areas where materials inconsistent with Amadeo's palette were detected. This output is a simple and effective way of assessing the results of the system. The method was tested on twelve paintings, obtaining promising results.
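The orientation-sensitive part of such brushstroke analysis can be sketched with plain Gabor filtering. The kernel parameters and the synthetic stroke image below are illustrative assumptions, not the tool's actual pipeline (which also combines SIFT features and hyperspectral material maps):

```python
import numpy as np

def gabor_kernel(theta, size=21, sigma=4.0, lam=8.0):
    """Real-valued Gabor kernel oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam))

def orientation_energies(img, n_orient=4):
    """Mean squared response of img to Gabor filters at n_orient angles,
    computed by FFT-based (circular) convolution."""
    F = np.fft.fft2(img)
    energies = []
    for k in range(n_orient):
        kern = gabor_kernel(k * np.pi / n_orient)
        K = np.fft.fft2(kern, s=img.shape)
        resp = np.real(np.fft.ifft2(F * K))
        energies.append(float(np.mean(resp ** 2)))
    return energies

# Synthetic "canvas" with vertical strokes: intensity varies along x only,
# with a stripe period matching the filter wavelength.
img = np.zeros((64, 64))
img[:, ::8] = 1.0
e = orientation_energies(img)
dominant = int(np.argmax(e))  # index of the strongest orientation
```

A histogram of such dominant orientations over image patches gives a simple quantitative brushstroke descriptor that can be compared across paintings.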
Abstract:
Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and intranets. Many employees work from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is still the user in control of the session. Therefore, highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems. It is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model captures the sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make each user unique, providing a model of how each user typically behaves. Users are then continuously monitored during software operation. Large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in the web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis. A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types, and is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be applied to other user-based systems, such as mobile devices and the analysis of network traffic.
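The n-gram modelling idea can be sketched in a few lines. The action vocabulary, the additive smoothing scheme, and the session scores below are illustrative assumptions, not Intruder Detector's actual implementation:

```python
import math
from collections import Counter

def ngrams(seq, n):
    """All contiguous n-length windows of a sequence, as tuples."""
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

class NgramUserModel:
    """Bigram model of one user's action stream, with additive smoothing."""
    def __init__(self, actions, n=2, alpha=1.0):
        self.n, self.alpha = n, alpha
        self.counts = Counter(ngrams(actions, n))
        self.context = Counter(ngrams(actions, n - 1))
        self.vocab = set(actions)

    def prob(self, gram):
        c = self.counts[gram]
        ctx = self.context[gram[:-1]]
        return (c + self.alpha) / (ctx + self.alpha * len(self.vocab))

    def score(self, actions):
        """Average log-probability of a session; lower = more anomalous."""
        grams = ngrams(actions, self.n)
        return sum(math.log(self.prob(g)) for g in grams) / len(grams)

# Hypothetical web-log action streams (not the paper's data).
train = ["login", "search", "view", "search", "view", "logout"] * 20
model = NgramUserModel(train)
typical = model.score(["login", "search", "view", "logout"])
odd = model.score(["login", "admin", "export", "export"])
```

A session whose score falls far below the user's historical range would be flagged as a possible intrusion; `typical` scores well above `odd` here.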
Abstract:
After a crime has occurred, one of the most pressing objectives for investigators is to identify and interview any eyewitnesses who can provide information about the crime. Depending on his or her training, the investigative interviewer will use (to varying degrees) mostly yes/no questions and some cued and multiple-choice questions, with few open-ended questions. When the witness cannot generate any more details about the crime, one assumes the eyewitness's memory for the critical event has been exhausted. However, given what we know about memory, is this a safe assumption? In line with the extant literature on human cognition, if one assumes that (a) an eyewitness has more memories of the crime available than accessible and (b) only explicit probes have been used to elicit information, then one can argue this eyewitness may still be able to provide additional information via implicit memory tests. Accordingly, the present study had two goals: to demonstrate that (1) eyewitnesses can reveal memory implicitly for a detail-rich event and (2) particularly for brief crimes, eyewitnesses can implicitly reveal memory for event details that were inaccessible when probed for explicitly. Undergraduates (N = 227) participated in a psychological experiment in exchange for research credit. Participants were presented with one of three stimulus videos (brief crime vs. long crime vs. irrelevant video). Then, participants either completed a series of implicit memory tasks or worked on a puzzle for 5 minutes. Lastly, participants were interviewed explicitly about the video via free recall and recognition tasks. Findings indicated that participants who viewed the brief crime provided significantly more crime-related details implicitly than those who viewed the long crime. The data also showed that participants who viewed the long crime provided marginally more accurate details during free recall than participants who viewed the brief crime. Furthermore, participants who completed the implicit memory tasks provided significantly less accurate information during the explicit interview than participants who were not given implicit memory tasks. This study was the first to investigate implicit memory in eyewitnesses to a crime. To determine its applied value, additional empirical work is required.
Abstract:
We survey articles covering how hedge fund returns are explained, using largely non-linear multifactor models that examine the non-linear pay-offs and exposures of hedge funds. We provide an integrated view of the implicit factor and statistical factor models that are largely able to explain the hedge fund return-generating process. We present their evolution through time by discussing pioneering studies that made a significant contribution to knowledge, and also recent innovative studies that examine hedge fund exposures using advanced econometric methods. This is the first review that analyzes very recent studies that explain a large part of hedge fund variation. We conclude by presenting some gaps for future research.
Abstract:
Interactions with mobile devices normally happen in an explicit manner: they are initiated by the users. Yet users are typically unaware that they also interact implicitly with their devices. For instance, our hand pose changes naturally when we type text messages. While the touchscreen captures finger touches, the hand movements during this interaction go unused. If this implicit hand movement is observed, it can be used as additional information to support or enhance the user's text-entry experience. This thesis investigates how implicit sensing can be used to improve the quality of existing, standard interaction techniques. In particular, it looks into enhancing front-of-device interaction through implicit sensing of back-of-device touch and hand movement. We conduct this investigation using machine learning techniques, examining how sensor data obtained via implicit sensing can be used to predict aspects of an interaction. For instance, one question this thesis attempts to answer is whether hand movement during a touch-targeting task correlates with the touch position. This is a complex relationship to understand, but it can be captured through machine learning: such correlation can be measured, quantified, understood and used to predict future touch positions. Furthermore, this thesis also evaluates the predictive power of the sensor data. We show this through a number of studies. In Chapter 5 we show that probabilistic modelling of sensor inputs and recorded touch locations can be used to predict the general area of future touches on the touchscreen. In Chapter 7, using SVM classifiers, we show that implicit sensing data from general mobile interactions is user-specific and can be used to identify users implicitly. In Chapter 6, we show that touch-interaction errors can be detected from sensor data: in our experiment, there are sufficiently distinguishable patterns between normal interaction signals and signals strongly correlated with interaction errors. In all studies, we show that a performance gain can be achieved by combining sensor inputs.
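The user-identification idea can be sketched with a minimal linear SVM trained by the Pegasos sub-gradient method on synthetic two-user "sensor" features. The feature semantics (grip tilt, touch pressure) and all data below are invented for illustration; the thesis itself applies standard SVM classifiers to real implicit-sensing logs:

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM (hinge loss, L2 regularization) with the
    Pegasos stochastic sub-gradient method. Labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            w = (1 - eta * lam) * w
            if y[i] * X[i].dot(w) < 1:  # margin violated: hinge sub-gradient
                w = w + eta * y[i] * X[i]
    return w

# Synthetic implicit-sensing features for two users (illustrative only):
# say, mean grip tilt and mean touch pressure per session.
rng = np.random.default_rng(1)
user_a = rng.normal([0.2, 0.8], 0.1, size=(50, 2))
user_b = rng.normal([0.6, 0.4], 0.1, size=(50, 2))
X = np.vstack([user_a, user_b])
y = np.array([1] * 50 + [-1] * 50)

Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias feature
w = pegasos_svm(Xb, y)
pred = np.sign(Xb.dot(w))
accuracy = float((pred == y).mean())
```

Because the two synthetic users' feature clusters barely overlap, the learned hyperplane identifies the session's user with high training accuracy; real sensor data is noisier and needs held-out evaluation, as done in the thesis.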