859 results for Human Computer Cryptography


Relevance: 30.00%

Abstract:

This research pursued the conceptualization, implementation, and verification of a system that enhances digital information displayed on an LCD panel for users with visual refractive errors. The target user group for this system is individuals with moderate to severe visual aberrations for which conventional means of compensation, such as glasses or contact lenses, do not improve their vision. This research is based on a priori knowledge of the user's visual aberration, as measured by a wavefront analyzer. With this information it is possible to generate images that, when displayed to this user, will counteract his/her visual aberration. The method described in this dissertation advances the development of techniques for providing such compensation by integrating spatial information in the image as a means to eliminate some of the shortcomings inherent in using display devices such as monitors or LCD panels. Additionally, physiological considerations are discussed and integrated into the method for providing this compensation. In order to provide a realistic sense of the performance of the methods described, they were tested by mathematical simulation in software, by using a single-lens high-resolution CCD camera that models an aberrated eye, and finally with human subjects having various forms of visual aberrations. Experiments were conducted on these systems, and the data collected from them were evaluated using statistical analysis. The experimental results revealed that the pre-compensation method produced a statistically significant improvement in vision on all of the systems. Although significant, the improvement was not as large as expected in the human subject tests. Further analysis suggests that even under the controlled conditions employed for testing with human subjects, the characterization of the eye may be changing. This would require real-time monitoring of relevant variables (e.g. pupil diameter) and continuous adjustment of the pre-compensation process to yield maximum viewing enhancement.
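As an illustration of the general idea behind such pre-compensation (not the dissertation's actual algorithm), one can model the aberrated eye as a linear system with a known point spread function (PSF) and pre-filter the displayed image with a regularized inverse (Wiener-style) filter, so that subsequent blurring by the eye approximately restores the intended image. A minimal Python sketch, assuming a known PSF and an image normalized to [0, 1]:

    import numpy as np

    def pad_psf(psf, shape):
        """Embed a small centered PSF into a zero array of the target shape,
        then roll its peak to the origin for FFT-based filtering."""
        big = np.zeros(shape)
        py, px = psf.shape
        big[:py, :px] = psf
        return np.roll(big, (-(py // 2), -(px // 2)), axis=(0, 1))

    def wiener_precompensate(image, psf, k=0.01):
        """Pre-filter 'image' so that blurring by 'psf' approximately undoes
        the filtering. 'k' is a regularization constant (assumed, tunable)."""
        H = np.fft.fft2(pad_psf(psf, image.shape))
        W = np.conj(H) / (np.abs(H) ** 2 + k)      # regularized inverse filter
        pre = np.real(np.fft.ifft2(np.fft.fft2(image) * W))
        return np.clip(pre, 0.0, 1.0)              # stay within display range

    # Toy usage: a Gaussian PSF stands in for a measured ocular PSF.
    y, x = np.mgrid[-15:16, -15:16]
    psf = np.exp(-(x ** 2 + y ** 2) / (2 * 4.0 ** 2))
    psf /= psf.sum()
    image = np.random.rand(256, 256)               # placeholder target image
    precompensated = wiener_precompensate(image, psf)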

Relevance: 30.00%

Abstract:

This dissertation establishes a novel system for human face learning and recognition based on incremental multilinear Principal Component Analysis (PCA). Most existing face recognition systems require training data during the learning process. The system proposed in this dissertation uses an unsupervised or weakly supervised learning approach in which the learning phase requires only a minimal amount of training data. It also overcomes the inability of traditional systems to adapt during the testing phase, where the decision process for newly acquired images continues to rely on the original training set. Consequently, when a new training set is to be used, the traditional approach requires that the entire eigensystem be regenerated. To speed up this computation, the proposed method instead combines the eigensystem generated from the old training set with the new images to generate the new eigensystem more efficiently, in a so-called incremental learning process. In the empirical evaluation, two key factors are essential for assessing the performance of the proposed method: (1) recognition accuracy and (2) computational complexity. To establish the most suitable algorithm for this research, a comparative analysis of the best-performing methods was carried out first; its results supported the initial use of multilinear PCA. To address the computational complexity of the subspace update procedure, a novel incremental algorithm was developed that combines the traditional sequential Karhunen-Loeve (SKL) algorithm with a newly developed incremental modified fast PCA algorithm. To use multilinear PCA in the incremental process, a new unfolding method was developed that appends the newly added data to the end of the previous data. Results of the incremental process based on these two methods bear out these theoretical improvements. Object-tracking results on video images are also provided as a further challenging task to demonstrate the soundness of this incremental multilinear learning method.
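To make the incremental subspace update concrete, here is a hedged sketch of a generic sequential Karhunen-Loeve (SKL) style update (a textbook formulation, not the dissertation's exact multilinear algorithm): the current eigenbasis is augmented with the residual of the new data, a small matrix is re-diagonalized, and the result is truncated to the desired rank.

    import numpy as np

    def skl_update(U, S, mean, n_old, X_new, rank):
        """One SKL-style update of a PCA model.
        U: (d, k) current eigenvectors; S: (k,) singular values;
        mean: (d,) current mean; n_old: samples seen so far;
        X_new: (d, m) new column observations; rank: subspace size to keep."""
        m = X_new.shape[1]
        n = n_old + m
        mean_new = (n_old * mean + X_new.sum(axis=1)) / n

        # Center the new data and add a column accounting for the mean shift.
        Xc = X_new - mean_new[:, None]
        shift = np.sqrt(n_old * m / n) * (mean - mean_new)
        B = np.hstack([Xc, shift[:, None]])

        # Residual of B with respect to the current subspace.
        proj = U.T @ B
        Q, _ = np.linalg.qr(B - U @ proj)

        # Small re-diagonalization problem.
        k = U.shape[1]
        R = np.zeros((k + Q.shape[1], k + B.shape[1]))
        R[:k, :k] = np.diag(S)
        R[:k, k:] = proj
        R[k:, k:] = Q.T @ B
        Ur, Sr, _ = np.linalg.svd(R, full_matrices=False)

        U_new = np.hstack([U, Q]) @ Ur
        return U_new[:, :rank], Sr[:rank], mean_new, n

    # Toy usage: batch PCA on 50 samples, then incrementally add 10 more.
    rng = np.random.default_rng(0)
    X0 = rng.normal(size=(100, 50))
    mean0 = X0.mean(axis=1)
    U0, S0, _ = np.linalg.svd(X0 - mean0[:, None], full_matrices=False)
    U, S, mean, n = skl_update(U0[:, :10], S0[:10], mean0, 50,
                               rng.normal(size=(100, 10)), rank=10)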

Relevance: 30.00%

Abstract:

Fueled by the increasing human appetite for high computing performance, semiconductor technology has now marched into the deep sub-micron era. As transistor size keeps shrinking, more and more transistors are integrated into a single chip. This has tremendously increased the power consumption and heat generation of IC chips. The rapidly growing heat dissipation greatly increases packaging/cooling costs and adversely affects the performance and reliability of a computing system. It also reduces the processor's life span and may even crash the entire computing system. Dynamic thermal management (DTM) is therefore becoming a critical problem in modern computer system design. Extensive theoretical research has been conducted to study the DTM problem. However, most of it is based on idealized assumptions or simplified models. While these models and assumptions help to greatly simplify a complex problem and make it theoretically manageable, practical computer systems and applications must deal with many practical factors and details beyond them. The goal of our research was to develop a test platform that can be used to validate theoretical results on DTM under well-controlled conditions, to identify the limitations of existing theoretical results, and to develop new and practical DTM techniques. This dissertation details the background and our research efforts in this endeavor. Specifically, we first developed a customized test platform based on an Intel desktop. We then tested a number of related theoretical works and examined their limitations under a practical hardware environment. With these limitations in mind, we developed a new reactive thermal management algorithm for single-core computing systems to optimize throughput under a peak temperature constraint. We further extended our research to a multicore platform and developed an effective proactive DTM technique for throughput maximization on multicore processors, based on task migration and dynamic voltage and frequency scaling (DVFS). The significance of our research lies in the fact that it complements the current extensive theoretical research in dealing with increasingly critical thermal problems and enabling the continued evolution of high-performance computing systems.
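To illustrate what a reactive DTM policy of this kind looks like, here is a minimal sketch in Python: the loop reads the core temperature and steps the DVFS level down when a hot threshold is crossed, then back up once the core cools. The sysfs paths, frequency table, and thresholds are assumptions standing in for a typical Linux platform, not the dissertation's test platform or algorithm.

    import time

    # Assumed sysfs paths on a typical Linux box; adjust for the actual platform.
    TEMP_PATH = "/sys/class/thermal/thermal_zone0/temp"       # millidegrees C
    FREQ_PATH = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed"
    FREQS_KHZ = [800_000, 1_600_000, 2_400_000]               # assumed DVFS levels

    T_HOT, T_COOL = 85.0, 75.0    # throttle above T_HOT, recover below T_COOL

    def read_temp_c():
        with open(TEMP_PATH) as f:
            return int(f.read().strip()) / 1000.0

    def set_freq(khz):
        with open(FREQ_PATH, "w") as f:
            f.write(str(khz))

    def reactive_dtm(period_s=0.5):
        """Simple reactive policy: step frequency down when hot, up when cool."""
        level = len(FREQS_KHZ) - 1
        set_freq(FREQS_KHZ[level])
        while True:
            t = read_temp_c()
            if t > T_HOT and level > 0:
                level -= 1                     # throttle one DVFS step
            elif t < T_COOL and level < len(FREQS_KHZ) - 1:
                level += 1                     # recover one DVFS step
            set_freq(FREQS_KHZ[level])
            time.sleep(period_s)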

Relevance: 30.00%

Abstract:

With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may encounter difficulties in perceiving icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to the graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, taking advantage of modeling tools such as Zernike polynomials, wavefront aberration, the Point Spread Function (PSF), and the Modulation Transfer Function (MTF). The ocular aberration of the computer user was first measured by a wavefront aberrometer, as a reference for the precompensation model. The dynamic precompensation was then generated from the aberration rescaled to the pupil diameter monitored in real time. The potential visual benefit of the dynamic precompensation method was explored through software simulation, using aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of the image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image-recognition tests performed in a more realistic computer-use viewing environment. The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method, showing a significant improvement in recognition accuracy. The merit and necessity of dynamic precompensation were also substantiated by comparing it with static precompensation. The visual benefit of the dynamic precompensation was further confirmed by the subjective assessments collected from the participants.
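A hedged sketch of the retinal-image-formation model named above: build the wavefront from a few Zernike coefficients over the current pupil, form the generalized pupil function, and take the squared magnitude of its Fourier transform to obtain the PSF. Only second-order terms are included here for brevity, and the coefficient values in the example are illustrative, not measured data.

    import numpy as np

    def psf_from_zernike(coeffs_um, pupil_diam_mm, wavelength_um=0.55, N=256):
        """Monochromatic PSF from second-order Zernike coefficients
        (defocus Z(2,0), astigmatism Z(2,-2) and Z(2,2), in micrometres)
        over the currently measured pupil. A simplified Fourier-optics sketch."""
        x = np.linspace(-1.0, 1.0, N)
        X, Y = np.meshgrid(x, x)
        rho, theta = np.hypot(X, Y), np.arctan2(Y, X)
        inside = rho <= 1.0

        c20, c2m2, c2p2 = coeffs_um
        W = (c20 * np.sqrt(3) * (2 * rho ** 2 - 1)
             + c2m2 * np.sqrt(6) * rho ** 2 * np.sin(2 * theta)
             + c2p2 * np.sqrt(6) * rho ** 2 * np.cos(2 * theta))

        # Generalized pupil function; the PSF is |FT|^2 of it.
        P = inside * np.exp(1j * 2 * np.pi * W / wavelength_um)
        psf = np.abs(np.fft.fftshift(np.fft.fft2(P, s=(2 * N, 2 * N)))) ** 2
        # Angular sample spacing in the PSF plane (radians per pixel).
        d_theta = (wavelength_um * 1e-3) / (2 * pupil_diam_mm)
        return psf / psf.sum(), d_theta

    # Illustrative call: 0.5 um of defocus over a 4 mm pupil.
    psf, d_theta = psf_from_zernike((0.5, 0.0, 0.0), pupil_diam_mm=4.0)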

Relevance: 30.00%

Abstract:

Age-related macular degeneration (AMD) is the leading cause of blindness in America. The fact that AMD wreaks most of its damage in the center of the retina raises the question of whether light, integrated over long periods, is more concentrated in the macula. A method based on eye-tracking was developed to measure the distribution of light on the retina under natural viewing conditions. The hypothesis was that, integrated over time, retinal illumination peaks in the macula. Additionally, a possible relationship between age and retinal illumination was investigated. The eye tracker superimposed the subject's gaze position on a video recorded by a scene camera. Five informed subjects were employed in feasibility tests, and 58 naïve subjects participated in 5 phases. In phase 1, the subjects viewed a gray-scale image. In phase 2, they observed a sequence of photographic images. In phase 3, they viewed a video. In phase 4, they worked on a computer; in phase 5, they walked around freely. The informed subjects were instructed to gaze at bright objects in the field of view and then at dark objects. Naïve subjects were allowed to gaze freely in all phases. Using the subject's gaze coordinates and the video provided by the scene camera, the cumulative light distribution on the retina was calculated for ∼15° around the fovea. As expected for control subjects, cumulative retinal light distributions peaked and dipped in the fovea when they gazed at bright or dark objects, respectively. The light distribution maps obtained from the naïve subjects showed a tendency to peak in the macula in phases 1, 2, and 3, a consistent tendency in phase 4, and a variable tendency in phase 5. The feasibility of using an eye-tracker system to measure the distribution of light on the retina was demonstrated, thus helping to understand the role played by light exposure in the etiology of AMD. The results showed that a tendency for light to peak in the macula is a characteristic of some individuals and of certain tasks; in these situations, the risk of AMD could be increased. No significant difference was observed based on age.
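The accumulation step can be sketched as follows (an illustrative reconstruction, not the authors' code): for each scene-camera frame, a window centred on the gaze point is added into a gaze-centred accumulator, so that over time the map approximates the cumulative light distribution around the fovea. Calibrated luminance frames and pixel gaze coordinates are assumed.

    import numpy as np

    def accumulate_retinal_light(frames, gaze_xy, half_window=100):
        """Accumulate scene luminance in gaze-centred coordinates.
        frames: iterable of 2-D luminance arrays (assumed calibrated);
        gaze_xy: matching iterable of (x, y) gaze positions in pixels;
        half_window: half-size of the patch around the fovea, in pixels
        (how many pixels correspond to ~15 deg depends on the scene camera)."""
        size = 2 * half_window + 1
        acc = np.zeros((size, size))
        count = np.zeros((size, size))
        for frame, (gx, gy) in zip(frames, gaze_xy):
            h, w = frame.shape
            y0, y1 = int(gy) - half_window, int(gy) + half_window + 1
            x0, x1 = int(gx) - half_window, int(gx) + half_window + 1
            # Clip the patch to the frame, keeping the fovea at the map centre.
            fy0, fy1 = max(y0, 0), min(y1, h)
            fx0, fx1 = max(x0, 0), min(x1, w)
            ay0, ax0 = fy0 - y0, fx0 - x0
            patch = frame[fy0:fy1, fx0:fx1]
            acc[ay0:ay0 + patch.shape[0], ax0:ax0 + patch.shape[1]] += patch
            count[ay0:ay0 + patch.shape[0], ax0:ax0 + patch.shape[1]] += 1
        return acc / np.maximum(count, 1)   # mean luminance per retinal location

    # Toy usage with random frames standing in for calibrated scene video.
    frames = [np.random.rand(480, 640) for _ in range(100)]
    gaze = [(320 + np.random.randn() * 30, 240 + np.random.randn() * 30)
            for _ in range(100)]
    light_map = accumulate_retinal_light(frames, gaze)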

Relevance: 30.00%

Abstract:

As the development of a viable quantum computer nears, existing widely used public-key cryptosystems, such as RSA, will no longer be secure. Thus, significant effort is being invested in post-quantum cryptography (PQC). Lattice-based cryptography (LBC) is one such promising area of PQC, offering versatile, efficient, and high-performance security services. However, the vulnerability of these implementations to side-channel attacks (SCA) remains significantly understudied. Most, if not all, lattice-based cryptosystems require noise samples generated from a discrete Gaussian distribution, and a successful timing-analysis attack can break the whole cryptosystem, making the discrete Gaussian sampler the module most vulnerable to SCA. This research proposes countermeasures against timing information leakage with FPGA-based designs of CDT-based discrete Gaussian samplers with constant response time, targeting both encryption and signature-scheme parameters. The proposed designs are compared against the state of the art and shown to significantly outperform existing implementations. For encryption, the proposed sampler is 9x faster than the only other existing time-independent CDT sampler design. For signatures, the first time-independent CDT sampler in hardware is proposed.
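To illustrate the countermeasure in software (the dissertation's designs are FPGA hardware, so this is only a behavioural sketch), a constant-time CDT sampler builds a cumulative table of the discrete Gaussian and then scans the entire table for every sample instead of doing an early-exit search, so the running time does not depend on the value drawn. The fixed-point precision and tail bound below are illustrative choices.

    import math
    import secrets

    def build_cdt(sigma, tail_cut=12):
        """Cumulative table of a centred discrete Gaussian, folded onto z >= 0
        (z = 0 counted once, each z > 0 counted twice), stored as 64-bit
        fixed-point integers. Double precision limits the table's accuracy;
        real designs use higher-precision tables."""
        bound = int(math.ceil(tail_cut * sigma))
        probs = [math.exp(-(z * z) / (2.0 * sigma * sigma)) for z in range(bound + 1)]
        norm = probs[0] + 2.0 * sum(probs[1:])
        cdt, acc = [], 0.0
        for z, p in enumerate(probs):
            acc += (p if z == 0 else 2.0 * p) / norm
            cdt.append(min(int(acc * (1 << 64)), (1 << 64) - 1))
        return cdt

    def sample_constant_time(cdt):
        """Scan the whole table for every sample so the number of comparisons
        (and hence timing) does not depend on the value drawn. A software
        illustration only; compiler and micro-architectural effects matter too."""
        u = secrets.randbits(64)
        z = 0
        for threshold in cdt:
            z += (u >= threshold)    # count entries the random word has passed
        sign = secrets.randbits(1)
        return -z if (sign and z != 0) else z

    # Illustrative parameters (not a specific scheme's): sigma ~ 3.3.
    table = build_cdt(3.3)
    samples = [sample_constant_time(table) for _ in range(5)]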

Relevance: 30.00%

Abstract:

Automatic detection systems do not perform as well as human observers, even on simple detection tasks. A potential solution to this problem is training vision systems on appropriate regions of interest (ROIs), in contrast to training on predefined and arbitrarily selected regions. Here we focus on detecting pedestrians in static scenes. Our aim is to answer the following question: can automatic vision systems for pedestrian detection be improved by training them on perceptually defined ROIs?

Relevance: 30.00%

Abstract:

Enabling natural human-robot interaction using computer-vision-based applications requires fast and accurate hand detection. However, previous works in this field assume various constraints, such as a limit on the number of detected gestures, because hands are highly complex objects that are difficult to locate. This paper presents an approach that integrates temporal coherence cues with hand detection based on wrists using a cascade classifier. With this approach, we introduce three main contributions: (1) a transparent initialization mechanism, requiring no user participation, for segmenting hands independently of their gesture; (2) a larger number of detected gestures as well as a faster training phase than previous cascade-classifier-based methods; and (3) near real-time performance for hand pose detection in video streams.
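A minimal sketch of combining a cascade classifier with a temporal-coherence cue, in the spirit of the approach described (not the paper's implementation): detections in the current frame are kept only if they lie close to a detection from the previous frame. The cascade file name is a hypothetical stand-in, since the paper's trained wrist cascade is not reproduced here.

    import cv2

    # Stand-in path; the paper trains its own wrist cascade, not reproduced here.
    cascade = cv2.CascadeClassifier("wrist_cascade.xml")

    def detect_with_temporal_coherence(gray, prev_boxes, max_shift=40):
        """Detect candidate wrist regions and keep those near a previous
        detection. On the first frame (prev_boxes empty), keep everything."""
        boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
        if len(prev_boxes) == 0:
            return list(boxes)
        kept = []
        for (x, y, w, h) in boxes:
            cx, cy = x + w / 2, y + h / 2
            for (px, py, pw, ph) in prev_boxes:
                pcx, pcy = px + pw / 2, py + ph / 2
                if abs(cx - pcx) < max_shift and abs(cy - pcy) < max_shift:
                    kept.append((x, y, w, h))
                    break
        return kept

    # Typical loop over a video stream (webcam index 0 assumed).
    cap = cv2.VideoCapture(0)
    prev = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        prev = detect_with_temporal_coherence(gray, prev)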

Relevance: 30.00%

Abstract:

Purpose: Drafting in cycling influences the collective behaviour of pelotons. Whilst evidence for collective behaviour in competitive running events exists, it is not clear whether this results from energetic savings conferred by drafting. This study modelled the effects of drafting on behaviour in elite 10,000 m runners. Methods: Using performance data from a men's elite 10,000 m track running event, computer simulations were constructed in NetLogo 5.1 to test the effects of three different drafting quantities on collective behaviour: no drafting; drafting up to 3 m behind with up to ~8% energy savings (a realistic running draft); and drafting up to 3 m behind with up to 38% energy savings (a realistic cycling draft). Three measures of collective behaviour were analysed in each condition: mean speed; mean group stretch (distance between the first- and last-placed runner); and the Runner Convergence Ratio (RCR), which represents the degree of drafting benefit obtained by the follower in a pair of coupled runners. Results: Mean speeds were 6.32 ± 0.28 m·s⁻¹, 5.57 ± 0.18 m·s⁻¹, and 5.51 ± 0.13 m·s⁻¹ in the cycling-draft, running-draft, and no-draft conditions, respectively (all P < 0.001). RCR was lower in the cycling-draft condition but did not differ between the other two. Mean stretch did not differ between conditions. Conclusions: Collective behaviours observed in running events cannot be fully explained by the energetic savings conferred by realistic drafting benefits. They may therefore result from other, possibly psychological, processes. The benefits or otherwise of engaging in such behaviour are, as yet, unclear.
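Two of the three reported measures can be computed directly from simulated positions and speeds, as in the illustrative sketch below (not the authors' NetLogo code); the RCR is omitted because its exact formula is not given in the abstract.

    import numpy as np

    def collective_metrics(positions, speeds):
        """positions: (T, N) runner positions along the track (m) per time step;
        speeds: (T, N) instantaneous speeds (m/s).
        Returns time-averaged mean speed and mean group stretch
        (leading runner minus last-placed runner)."""
        mean_speed = speeds.mean(axis=1)
        stretch = positions.max(axis=1) - positions.min(axis=1)
        return mean_speed.mean(), stretch.mean()

    # Toy usage with random data standing in for simulation output.
    rng = np.random.default_rng(0)
    spd = rng.normal(5.6, 0.2, size=(600, 20))     # 600 steps, 20 runners
    pos = np.cumsum(spd, axis=0)                   # integrate speed over steps
    print(collective_metrics(pos, spd))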

Relevance: 30.00%

Abstract:

In this extended abstract, we discuss recent research at Worcester into the inclusion of AI in 'Serious Games'. Serious Games research aims to harness the power of computer game technology to produce educational and training materials. We prefer the name 'Immersive Environments' (IEs), since this emphasises the human psychological dimension. Creating compelling learning software requires rich engagement of the learner and a convincing learning experience. We believe that various aspects of the AI tradition can inform the production of such learning software.

Relevance: 30.00%

Abstract:

At a recent conference on games in education, we made a radical decision to transform our standard presentation of PowerPoint slides and computer game demonstrations into a unified whole, inserting the PowerPoint presentation into the computer game. This opened up various questions relating to learning and teaching theories, which were debated by the conference delegates. In this paper, we reflect on these discussions, present our initial experiment, and relate it to various theories of learning and teaching. In particular, we consider the applicability of "concept maps" to informing the construction of educational materials, especially their topological, geometrical, and pedagogical significance. We supplement this "spatial" dimension with a theory of the dynamic, temporal dimension, grounded in a context of learning processes such as Kolb's learning cycle. Finally, we address the multi-player aspects of computer games and relate these to theories of social and collaborative learning. This paper explores various theoretical bases in order to support the development of a new virtual-reality approach to learning and teaching.

Relevance: 30.00%

Abstract:

Decision-making is often dependent on uncertain data, e.g. data associated with confidence scores or probabilities. We present a comparison of different information presentations for uncertain data and, for the first time, measure their effects on human decision-making. We show that the use of Natural Language Generation (NLG) improves decision-making under uncertainty, compared to state-of-the-art graphics-based representation methods. In a task-based study with 442 adults, we found that presentations using NLG led to 24% better decision-making on average than the graphical presentations, and to 44% better decision-making when NLG was combined with graphics. We also show that women achieve significantly better results when presented with NLG output (an 87% increase on average compared to graphical presentations).

Relevance: 30.00%

Abstract:

Scheduling problems are generally NP-hard combinatorial problems, and a great deal of research has been done to solve them heuristically. However, most previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention for solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and it is difficult to make small changes to a solution. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used to search a mapped solution space and separate decoding routines then build solutions to the original problem.

In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others; he can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, and thus has the ability to finish a schedule using flexible, rather than fixed, rules.

In this research we intend to design more human-like scheduling algorithms, using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and here each variable corresponds to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed from an initial set of promising solutions. Subsequently, a new instance for each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning can amount to 'counting' in the case of multinomial distributions.
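A hedged sketch of the 'counting' idea just described: with the network structure fixed and all rule choices observed, conditional probabilities can be estimated by counting rule transitions in the current set of promising rule strings, and new strings are then sampled node by node from those estimates. For brevity, a simple chain over consecutive construction steps stands in for the full network structure.

    import random
    from collections import defaultdict

    def learn_counts(good_strings, n_rules):
        """Estimate P(rule at step t | rule at step t-1) by counting, with
        add-one smoothing, from a set of promising rule strings."""
        counts = defaultdict(lambda: defaultdict(int))
        for s in good_strings:
            prev = None
            for r in s:
                counts[prev][r] += 1
                prev = r

        def conditional(prev):
            total = sum(counts[prev].values()) + n_rules
            return [(counts[prev][r] + 1) / total for r in range(n_rules)]

        return conditional

    def sample_string(conditional, length, n_rules):
        """Sample a new rule string step by step from the learned conditionals."""
        s, prev = [], None
        for _ in range(length):
            probs = conditional(prev)
            r = random.choices(range(n_rules), weights=probs, k=1)[0]
            s.append(r)
            prev = r
        return s

    # Toy usage: 4 candidate rules, schedules built in 10 construction steps.
    good = [[random.randrange(4) for _ in range(10)] for _ in range(20)]
    cond = learn_counts(good, n_rules=4)
    new_string = sample_string(cond, length=10, n_rules=4)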
In the LCS approach, each rule has a strength reflecting its current usefulness in the system, and this strength is constantly reassessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which proceeds as follows: each rule at each stage is first assigned a constant initial strength; rules are then selected using the roulette-wheel strategy; the strengths of the rules used in the previous solution are reinforced, while the strengths of unused rules remain unchanged; and, finally, fitter rules are selected for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA.

This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.

References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.