888 results for Scleral Search Coils
Abstract:
High-fidelity eye tracking is combined with a perceptual grouping task to provide insight into the likely mechanisms underlying compensation for the retinal image motion caused by movement of the eyes. The experiments involve the covert detection of minute temporal and spatial offsets incorporated into a test stimulus. Analysis of eye motion on individual trials indicates that the temporal offset sensitivity is actually due to motion of the eye inducing artificial spatial offsets in the briefly presented stimuli. The results have strong implications for two popular models of compensation for fixational eye movements, namely efference copy and image-based models. If an efference copy model is assumed, the results place constraints on the spatial accuracy and source of compensation. If an image-based model is assumed, limitations are placed on the integration time window over which motion estimates are calculated. (c) 2006 Elsevier Ltd. All rights reserved.
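The central mechanism here, eye drift converting a temporal offset into a spatial one, can be sketched as a back-of-envelope calculation. The drift velocity and offset values below are illustrative assumptions, not figures reported in the paper:

```python
# Hedged sketch (illustrative values, not the paper's data): a fixational
# drift converts a purely temporal offset between two briefly flashed
# stimulus elements into a spatial offset on the retina.

def induced_spatial_offset(drift_deg_per_s: float, temporal_offset_ms: float) -> float:
    """Retinal spatial offset, in arcminutes, produced by eye drift
    over the interval separating the two stimulus presentations."""
    drift_deg = drift_deg_per_s * (temporal_offset_ms / 1000.0)
    return drift_deg * 60.0  # degrees -> arcminutes

# e.g. a slow drift of ~0.5 deg/s over a 10 ms temporal offset
# yields a 0.3 arcmin spatial offset on the retina.
offset_arcmin = induced_spatial_offset(0.5, 10.0)
```

Even such a sub-arcminute artificial offset can matter, since spatial offset detection operates in the hyperacuity range.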
Abstract:
This paper presents an analytical modelling approach for the Brushless Doubly-Fed Machine (BDFM) taking iron saturation into account. A generalised coupled-circuit model is developed which considers stator and rotor teeth saturation effects. A method of calculating the machine inductance parameters is presented which can be implemented in time-stepping simulations. The model has been implemented in MATLAB/Simulink and verified by Finite Element analysis and experimental tests. The tests are carried out on a 180 frame size BDFM. Flux search coils have been utilised to measure airgap and teeth flux densities. © 2010 IEEE.
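The flux search-coil measurement mentioned above rests on Faraday's law: the coil EMF is proportional to dB/dt, so the flux density follows from numerically integrating the measured voltage. The sketch below is a hedged illustration; the turn count, coil area, and test waveform are assumed values, not those of the 180 frame size machine:

```python
import math

# Hedged sketch of recovering flux density from a search-coil voltage.
# By Faraday's law |v(t)| = N * A * |dB/dt| (sign convention aside), so
# B(t) follows from numerically integrating the measured EMF. N, A, and
# the test waveform are illustrative, not the tested BDFM's values.

def flux_density(voltage, dt, turns, area_m2):
    """Trapezoidal integration of coil EMF -> flux density trace B(t) in tesla."""
    b, trace = 0.0, [0.0]
    for v0, v1 in zip(voltage, voltage[1:]):
        b += 0.5 * (v0 + v1) * dt / (turns * area_m2)
        trace.append(b)
    return trace

# Sanity check: a 50 Hz sinusoidal B of 1 T peak induces
# v(t) = N*A*omega*cos(omega*t); integrating recovers B(t) = sin(omega*t).
N, A = 10, 1e-3                      # turns, coil area (m^2) -- assumed
omega = 2 * math.pi * 50.0
dt = 1e-5
ts = [i * dt for i in range(2001)]   # one full 20 ms period
v = [N * A * omega * math.cos(omega * t) for t in ts]
B = flux_density(v, dt, N, A)
```

In practice the same integration is applied to sampled coil voltages from the airgap and teeth coils, with care taken to remove DC offset before integrating so the trace does not drift.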
Abstract:
This document introduces the planned new search for the neutron Electric Dipole Moment at the Spallation Neutron Source at Oak Ridge National Laboratory. A spin precession measurement is to be carried out using ultracold neutrons diluted in a superfluid helium bath at T = 0.5 K, where spin-polarized 3He atoms act as a detector of the neutron spin polarization. This manuscript describes some of the key aspects of the planned experiment, with contributions from Caltech to the development of the project.
Techniques used in the design of magnet coils for Nuclear Magnetic Resonance were adapted to the geometry of the experiment. An initial design approach is described using a pair of coils tuned to shield outer conductive elements from resistive heat loads while inducing an oscillating field in the measurement volume. A small prototype was constructed to test the model of the field at room temperature.
A large-scale test of the high voltage system was carried out in a collaborative effort at the Los Alamos National Laboratory. The application and amplification of high voltage to polished steel electrodes immersed in a superfluid helium bath were studied, as well as the electrical breakdown properties of the electrodes at low temperatures. A suite of Monte Carlo simulation software tools modelling the interaction of neutrons, 3He atoms, and their spins with the experimental magnetic and electric fields was developed and implemented to further the study of the expected systematic effects of the measurement, with particular focus on the false Electric Dipole Moment induced by a geometric phase akin to Berry's phase.
An analysis framework was developed and implemented using an unbinned likelihood to fit the time-modulated signal expected from the measurement data. A collaborative Monte Carlo data set was used to test the analysis methods.
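An unbinned likelihood fit of this kind can be sketched in a few lines. The rate model r(t) ∝ 1 − a·cos(ωt + φ), the parameter names, and all numbers below are illustrative assumptions, not the experiment's actual signal model; the point is only the shape of the method, maximising the likelihood over the individual event times rather than over binned counts:

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch of an unbinned likelihood fit to a time-modulated event
# rate r(t) proportional to 1 - a*cos(omega*t + phi). The model and toy
# numbers are illustrative, not the collaboration's signal model.

OMEGA = 2 * np.pi * 1.0   # assumed known modulation frequency (rad/s)
T_OBS = 20.0              # observation window (s)

def nll(params, times):
    """Unbinned negative log-likelihood of the observed event times."""
    a, phi = params
    if not 0.0 < a < 1.0:
        return np.inf
    rate = 1.0 - a * np.cos(OMEGA * times + phi)
    # analytic normalisation of the rate over [0, T_OBS]
    norm = T_OBS - (a / OMEGA) * (np.sin(OMEGA * T_OBS + phi) - np.sin(phi))
    return -np.sum(np.log(rate / norm))

def fit(times, a0=0.3, phi0=0.0):
    """Maximise the likelihood (minimise nll) over (a, phi)."""
    res = minimize(nll, x0=[a0, phi0], args=(times,), method="Nelder-Mead")
    return res.x

# Toy check: rejection-sample events from the model and recover (a, phi).
rng = np.random.default_rng(0)
a_true, phi_true = 0.6, 0.8
t = rng.uniform(0.0, T_OBS, 200_000)
keep = rng.uniform(0.0, 2.0, t.size) < 1.0 - a_true * np.cos(OMEGA * t + phi_true)
a_hat, phi_hat = fit(t[keep])
```

Working with individual event times avoids the information loss of binning, which matters when the modulation period is short compared with sensible bin widths.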
Abstract:
This paper addresses the widespread call in the literature for the cross-cultural examination (and validation) of accepted concepts within consumer behaviour, such as consumer risk perceptions and information search. The findings of the study provide support for a number of accepted relationships, whilst identifying distinct cross-cultural differences in external information search and in consumers' willingness to buy genetically modified (GM) food products.
Abstract:
For the most part, the literature base for Integrated Marketing Communication (IMC) has developed from an applied or tactical level rather than from an intellectual or theoretical one. Since industry, practitioner and even academic studies have provided little insight into what IMC is and how it operates, our approach has been to investigate that other IMC community, that is, the academic or instructional group responsible for disseminating IMC knowledge. We proposed that the people providing course instruction and directing research activities have some basis for how they organize, consider and therefore instruct in the area of IMC. A syllabi analysis of 87 IMC units in six countries investigated the content of the unit, its delivery both physically and conceptually, and defined the audience of the unit. The study failed to discover any type of latent theoretical foundation that might be used as a base for understanding IMC. The students who are being prepared to extend, expand and enhance IMC concepts do not appear to be well-served by the curriculum we found in our research. The study concludes with a model for further IMC curriculum development.
Abstract:
Search engines have forever changed the way people access and discover knowledge, allowing information about almost any subject to be quickly and easily retrieved within seconds. As increasingly more material becomes available electronically, the influence of search engines on our lives will continue to grow. This presents the problem of how to find what information is contained in each search engine, what bias a search engine may have, and how to select the best search engine for a particular information need. This research introduces a new method, search engine content analysis, to solve the above problem. Search engine content analysis is a new development of the traditional information retrieval field of collection selection, which deals with general information repositories. Current research in collection selection relies on full access to the collection or estimations of the size of the collections, and collection descriptions are often represented as term occurrence statistics. An automatic ontology learning method is developed for search engine content analysis, which trains an ontology with world knowledge of hundreds of different subjects in a multilevel taxonomy. This ontology is then mined to find important classification rules, and these rules are used to perform an extensive analysis of the content of the largest general-purpose Internet search engines in use today. Instead of representing collections as a set of terms, as commonly occurs in collection selection, they are represented as a set of subjects, leading to a more robust representation of information and a decrease in synonymy. The ontology-based method was compared with ReDDE (Relevant Document Distribution Estimation), the current state-of-the-art collection selection method, which relies on collection size estimation, using the standard R-value metric, with encouraging results.
The method was also used to analyse the content of the most popular search engines in use today, including Google and Yahoo, as well as several specialist search engines such as PubMed and that of the U.S. Department of Agriculture. In conclusion, this research shows that the ontology-based method obviates the need for collection size estimation.
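The core idea, describing each collection by subject frequencies rather than term statistics and ranking collections by coverage of a query's subjects, can be illustrated with a toy sketch. The one-level keyword ontology, engine names, and document samples below are invented placeholders, not the thesis's trained multilevel ontology or measured engine data:

```python
from collections import Counter

# Hedged sketch of subject-based collection selection: sampled documents
# from each engine are mapped to subjects via a toy keyword ontology,
# and engines are ranked for a query by how much of their content falls
# under the query's subjects. All names and data are placeholders.

ONTOLOGY = {  # keyword -> subject (a one-level toy taxonomy)
    "gene": "biology", "protein": "biology",
    "tractor": "agriculture", "soil": "agriculture",
}

def subject_profile(sampled_docs):
    """Collection description as subject frequencies, not term statistics."""
    counts = Counter()
    for doc in sampled_docs:
        for word in doc.lower().split():
            if word in ONTOLOGY:
                counts[ONTOLOGY[word]] += 1
    total = sum(counts.values()) or 1
    return {subject: n / total for subject, n in counts.items()}

def rank_collections(query, profiles):
    """Order collections by coverage of the query's subjects."""
    subjects = {ONTOLOGY[w] for w in query.lower().split() if w in ONTOLOGY}
    scores = {name: sum(p.get(s, 0.0) for s in subjects)
              for name, p in profiles.items()}
    return sorted(scores, key=scores.get, reverse=True)

profiles = {
    "biomed_engine": subject_profile(["gene protein", "protein soil"]),
    "farming_engine": subject_profile(["tractor soil", "soil soil"]),
}
best = rank_collections("protein gene", profiles)[0]  # -> "biomed_engine"
```

Because several terms map to one subject, synonymous vocabulary collapses into the same profile entry, which is the robustness gain the abstract describes.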