939 results for Quantum information processing
Abstract:
This thesis examines children's consumer choice behaviour using an information processing perspective, with the fundamental goal of applying academic research to practical marketing and commercial problems. Following a preface, which describes the academic and commercial terms of reference within which this interdisciplinary study is couched, the thesis comprises four discernible parts. Initially, the rationale inherent in adopting an information processing perspective is justified and the diverse array of topics which have bearing on children's consumer processing and behaviour is aggregated. The second part uses this perspective as a springboard to appraise the little-explored role of memory, and especially memory structure, as a central cognitive component in children's consumer choice processing. The main research theme explores the ease with which 10- and 11-year-olds retrieve contemporary consumer information from subjectively defined memory organisations. Adopting a sort-recall paradigm, hierarchical retrieval processing is stimulated and it is contended that when two items, known to be stored proximally in the memory organisation, are not recalled adjacently, this discrepancy is indicative of retrieval processing ease. Results illustrate the marked influence of task conditions and orientation of memory structure on retrieval; these conclusions are accounted for in terms of input and integration failure. The third section develops the foregoing interpretations in the marketing context. A straightforward methodology for structuring marketing situations is postulated, a basis for segmenting children's markets using processing characteristics is adopted, and criteria for communicating brand support information to children are discussed. A taxonomy of market-induced processing conditions is developed. Finally, a case study with topical commercial significance is described. The development, launch and marketing of a new product in the confectionery market is outlined, the aetiology of its subsequent demise is identified and expounded, and prescriptive guidelines are put forward to help avert future repetition of marketing misjudgements.
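As an aside on what such a sort-recall adjacency measure can look like in practice, here is a minimal illustrative sketch (the function names, scoring rule and brand labels are assumptions for illustration, not the thesis's actual procedure): pairs that a child sorts into the same group are treated as proximally stored, and the recall sequence is scanned to see whether those pairs are also recalled in adjacent positions.

from itertools import combinations

def proximal_pairs(sort_groups):
    """Pairs of items the participant placed in the same sorting group."""
    pairs = set()
    for group in sort_groups:
        pairs.update(frozenset(p) for p in combinations(group, 2))
    return pairs

def recall_discrepancy(sort_groups, recall_order):
    """Fraction of proximally stored pairs NOT recalled in adjacent positions."""
    stored = proximal_pairs(sort_groups)
    adjacent = {frozenset(recall_order[i:i + 2]) for i in range(len(recall_order) - 1)}
    return len(stored - adjacent) / len(stored) if stored else 0.0

# Hypothetical example: six snack brands sorted into two groups, then recalled.
groups = [["BrandA", "BrandB", "BrandC"], ["BrandD", "BrandE", "BrandF"]]
recall = ["BrandA", "BrandD", "BrandB", "BrandC", "BrandE", "BrandF"]
print(recall_discrepancy(groups, recall))  # ~0.67: four of six grouped pairs were split at recall

A low discrepancy score would indicate that recall follows the structure imposed at sorting; the thesis's own index and task conditions are naturally more elaborate than this toy version.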
Abstract:
For over 30 years, information-processing approaches to leadership, and more specifically research on Implicit Leadership Theories (ILTs), have contributed a significant body of knowledge on leadership processes in applied settings. A new line of research on Implicit Followership Theories (IFTs) has re-ignited interest in information-processing and socio-cognitive approaches to leadership and followership. In this review, we focus on organizational research on ILTs and IFTs and highlight their practical utility for the exercise of leadership and followership in applied settings. We clarify common misperceptions regarding the implicit nature of ILTs and IFTs, review both direct and indirect measures, synthesize current and ongoing research on ILTs and IFTs in organizational settings, address issues related to different levels of analysis in the context of leadership and follower schemas and, finally, propose future avenues for organizational research. © 2013 Elsevier Inc.
Abstract:
Three studies tested the impact of properties of behavioral intention on intention-behavior consistency, information processing, and resistance. Principal components analysis showed that properties of intention formed distinct factors. Study 1 demonstrated that temporal stability, but not the other intention attributes, moderated intention-behavior consistency. Study 2 found that greater stability of intention was associated with improved memory performance. In Study 3, participants were confronted with a rating scale manipulation designed to alter their intention scores. Findings showed that stable intentions were able to withstand attack. Overall, the present research findings suggest that different properties of intention are not simply manifestations of a single underlying construct ("intention strength"), and that temporal stability exhibits superior resistance and impact compared to other intention attributes. © 2013 Wiley Periodicals, Inc.
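For readers less familiar with the statistical sense of "moderated" here, the toy regression below (simulated data; variable names and effect sizes are assumptions, not the study's materials) shows the usual way such a moderation effect is tested, namely as an intention × stability interaction term.

import numpy as np

# Illustrative simulation: intention predicts behaviour more strongly when
# the intention is temporally stable, i.e. the interaction term is positive.
rng = np.random.default_rng(0)
n = 500
intention = rng.normal(size=n)
stability = rng.normal(size=n)
behaviour = (0.2 * intention + 0.1 * stability
             + 0.4 * intention * stability + rng.normal(scale=0.5, size=n))

# Moderated regression: behaviour ~ intention + stability + intention:stability
X = np.column_stack([np.ones(n), intention, stability, intention * stability])
coef, *_ = np.linalg.lstsq(X, behaviour, rcond=None)
print(dict(zip(["intercept", "intention", "stability", "interaction"],
               coef.round(2))))
# A reliably non-zero "interaction" coefficient is what "temporal stability
# moderates intention-behaviour consistency" means in regression terms.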
Abstract:
Due to copyright restrictions, only available for consultation at Aston University Library and Information Services with prior arrangement.
Abstract:
The paper describes the educational complex "Multi-agent Technologies for Parallel and Distributed Information Processing in Telecommunication Networks".
Abstract:
A system for the automated administration of examinations is described. The system's capabilities, its architecture, and the structure of communications between its functional subsystems are presented. The features of the constituent subsystems are described, and the types of questions that can be used during an examination are listed. In the near future, a working version of the system will be ready to enter commercial operation.
Abstract:
A model of the cognitive process of natural language processing has been developed using the formalism of generalized nets. Following this stage-simulating model, the treatment of information inevitably includes phases that require joint operations in two knowledge spaces – language and semantics. In order to examine and formalize the relations between the language and semantic levels of treatment, the language is presented as an information system, conceived on the basis of human cognitive resources, semantic primitives, semantic operators, and language rules and data. This approach is applied to modeling a specific grammatical rule – secondary predication in Russian. Grammatical rules of the language space are expressed as operators in the semantic space. Examples from the linguistics domain are treated and several conclusions about the semantics of the modeled rule are drawn. The results of applying the information-system approach to the language turn out to be consistent with the stages of treatment modeled with the generalized net.
Abstract:
The problems of, and methods for, adaptive control and multi-agent information processing in global telecommunication and computer networks (TCNs) are discussed. Criteria for the controllability and communication (routing) ability of dataflows are described. A multi-agent model for the exchange of shared information resources in global TCNs is proposed. The peculiarities of adaptive and intelligent control of dataflows under uncertain conditions and network collisions are analyzed.
Abstract:
A number of factors influence the information processing needs of organizations, particularly with respect to the coordination and control mechanisms within a hotel. The authors use a theoretical framework to illustrate alternative mechanisms that can be used to coordinate and control hotel operations.
Abstract:
We theoretically study the resonance fluorescence spectrum of a three-level quantum emitter coupled to a spherical metallic nanoparticle. We consider the case where the quantum emitter is driven by a single laser field along one of the optical transitions. We show that the development of the spectrum depends on the relative orientation of the dipole moments of the optical transitions in relation to the metal nanoparticle. In addition, we demonstrate that the location and width of the peaks in the spectrum are strongly modified by the exciton-plasmon coupling and the laser detuning, allowing a controlled, strongly subnatural spectral line to be achieved. A strong antibunching of the fluorescent photons along the undriven transition is also obtained. Our results may be used for creating a tunable source of photons which could be used for a probabilistic entanglement scheme in the field of quantum information processing.
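For orientation, the quantities mentioned in this abstract (peak positions, linewidths, antibunching) are the standard outputs of a driven-emitter master-equation calculation; the sketch below is only that generic textbook framework, with symbols (Δ, Ω, Γ_j) introduced here for illustration rather than taken from the paper. The emitter's density matrix ρ evolves under a Lindblad master equation in which the plasmonic environment enters through orientation-dependent decay rates Γ_j:

\[ \dot{\rho} = -\frac{i}{\hbar}[H,\rho] + \sum_j \frac{\Gamma_j}{2}\left(2\sigma_j \rho \sigma_j^{\dagger} - \sigma_j^{\dagger}\sigma_j\rho - \rho\,\sigma_j^{\dagger}\sigma_j\right), \qquad H = -\hbar\Delta\,|e\rangle\langle e| + \frac{\hbar\Omega}{2}\left(\sigma + \sigma^{\dagger}\right), \]

where Δ is the laser detuning, Ω the Rabi frequency of the driven transition, and σ_j the lowering operators of the optical transitions. The resonance fluorescence spectrum then follows from the steady-state two-time dipole correlation function,

\[ S(\omega) \propto \operatorname{Re}\int_0^{\infty} \lim_{t\to\infty}\langle \sigma^{\dagger}(t+\tau)\,\sigma(t)\rangle\, e^{-i\omega\tau}\,\mathrm{d}\tau , \]

which is why modified decay rates and detuning shift both the positions and the widths of the spectral peaks.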
Abstract:
Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as errors can be fully characterized. For multiqubit operations, though, this is no longer the case, as in the most general case, analyzing the effect of the operation on the system requires a full state tomography for which resources scale exponentially with the system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which additionally increases the number of parameters that need to be controlled. For the optimization of the experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model and also efficient observables to estimate the parameters of the model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.
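The abstract does not spell out the estimation step, so purely as a toy illustration (the observables, state, shot counts and noise model below are assumptions, not the experiment's protocol), the sketch shows the generic idea of recovering an unknown constant phase φ from measured ⟨X⟩ and ⟨Y⟩ expectation values of a qubit prepared in an equal superposition.

import numpy as np

rng = np.random.default_rng(1)

def measure_expectation(true_phase, basis, shots=2000):
    """Simulate <X> or <Y> for the state (|0> + e^{i*phi}|1>)/sqrt(2).

    <X> = cos(phi) and <Y> = sin(phi); finite shots add binomial noise.
    """
    mean = np.cos(true_phase) if basis == "X" else np.sin(true_phase)
    counts = rng.binomial(shots, 0.5 * (1 + mean))
    return 2 * counts / shots - 1

true_phi = 0.73                      # hypothetical unknown constant phase shift
x = measure_expectation(true_phi, "X")
y = measure_expectation(true_phi, "Y")
phi_est = np.arctan2(y, x)           # point estimate of the phase
print(f"estimated phase = {phi_est:.3f} rad (true {true_phi} rad)")
# In an experiment, such an estimate would feed back into pulse calibration
# before running the encoding circuit.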
Abstract:
While a great amount of attention is being given to the development of nanodevices, both through academic research and private industry, the field is still in its early stages. Progress hinges upon the development of tools and components that can precisely control the interaction between light and matter, and that can be efficiently integrated into nanodevices. Nanofibers are one of the most promising candidates for such purposes. However, in order to fully exploit their potential, a more intimate knowledge of how nanofibers interact with single neutral atoms must be gained. As we learn more about the properties of nanofiber modes and the way they interface with atoms, and as the technology for preparing fibers with precisely known properties develops, they become more and more adaptable and effective. The work presented in this thesis touches on many topics, which is testament to the broad range of applications and high degree of promise that nanofibers hold. For immediate use, we need to fully grasp how they can best be implemented as sensors, filters, detectors, and switches in existing nanotechnologies. Areas of interest also include how they might best be exploited for probing atom-surface interactions, single-atom detection and single-photon generation. Nanofiber research is also motivated by their potential integration into fundamental cold-atom quantum experiments, and the role they can play there. Combining nanofibers with existing optical and quantum technologies is a powerful strategy for advancing areas like quantum computation, quantum information processing, and quantum communication. In this thesis I present a variety of theoretical work exploring a range of the applications listed above. The first work presented concerns the use of the evanescent fields around a nanofiber to manipulate an existing trapping geometry and thereby influence the centre-of-mass dynamics of the atom. The second work presented explores interesting trapping geometries that can be achieved in the vicinity of a fiber in which just four modes are allowed to propagate. In a third study I explore the use of a nanofiber as a detector of small numbers of photons by calculating the rate of emission into the fiber modes when the fiber is moved along next to a regularly spaced array of atoms. Also included are some results from a work in progress, in which I consider the scattered field that appears along the nanofiber axis when a small number of atoms trapped along that axis are illuminated orthogonally; some interesting preliminary results are outlined. Finally, in contrast with the rest of the thesis, I consider some interesting physics that can be done in one of the trapping geometries that can be created around the fiber: here I explore the ground states of a phase-separated two-component superfluid Bose-Einstein condensate trapped in a toroidal potential.
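As background for the evanescent-field manipulation described above (this is standard fiber-optics theory rather than a result quoted from the thesis): outside a nanofiber of radius a, the field of a guided mode with propagation constant β decays with the modified Bessel functions K_l,

\[ E(r) \propto K_l(q r), \qquad q = \sqrt{\beta^2 - n_{\mathrm{out}}^2 k_0^2}, \qquad r > a, \]

where k_0 is the vacuum wavenumber and n_out the refractive index of the surrounding medium, so that for qr ≫ 1 the field falls off essentially exponentially. Any atom-trapping, detection or emission-rate scheme based on the evanescent field therefore depends sensitively on the atom's distance from the fiber surface.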
Abstract:
An extensive literature review failed to uncover an adequate operational definition of dyslexia applicable to education. The predominant fields of research that have produced most of the studies on dyslexia are neurology, neurolinguistics and genetics. Their perspectives were shown to be more pertinent to medical experts than to teachers. The categorization of surface and deep dyslexia was shown to be the best description of dyslexia in an educational context. The purpose of the present thesis was to develop a theoretical conceptual framework which describes a link between dyslexia, a text-processing model and problem solving. This conceptual framework was validated by three experts, each specializing in a specific field (cognitive psychology, dyslexia or teaching). The concept of problem solving was based on information-processing theories in cognitive psychology. This framework applies specifically to reading difficulties which are manifested by dyslexic children.
Abstract:
Interventions targeting cognitive enhancement are attracting growing interest in many fields, including neuropsychology. Although numerous methods exist for maximizing a person's cognitive potential, they are rarely supported by scientific research. This thesis first briefly reviews the state of cognitive enhancement interventions, describes the weaknesses observed in current practice, and accordingly proposes a standard model against which the various cognitive enhancement techniques could, and should, be evaluated. A research study is then presented that examines a new cognitive enhancement tool, a perceptual-cognitive training task: 3-dimensional multiple object tracking (3D-MOT). It assesses the current evidence for 3D-MOT against the proposed standard model. The results of this project show gains in attention, visual working memory and speed of information processing. This study represents the first step toward establishing 3D-MOT as a cognitive enhancement tool.