916 results for multiple discrepancies theory
Abstract:
An alternating combination of genetic algorithm and neural network (AGANN) is presented to correct the systematic error of density functional theory (DFT) calculations. It treats the DFT as a black box and models the error through external statistical information. As a demonstration, the AGANN method has been applied to the correction of the lattice energies from DFT calculations for 72 metal halides and hydrides. With the AGANN correction, the mean absolute value of the relative errors of the calculated lattice energies with respect to the experimental values decreases from 4.93% to 1.20% in the testing set. For comparison, the neural network approach alone reduces the mean value to 2.56%, and the conventional combination of genetic algorithm and neural network brings it to 2.15%. The multiple linear regression method has almost no corrective effect here.
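A minimal sketch (not the authors' code) of the AGANN idea under stated assumptions: a small neural network learns the systematic relative error of black-box DFT lattice energies from simple compound descriptors, while a genetic algorithm alternately searches the network hyperparameters. The descriptors and data below are hypothetical placeholders.

```python
# Toy AGANN-style sketch: an MLP models the systematic error of black-box
# DFT energies; a GA searches (hidden units, learning rate). All data and
# descriptors are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical descriptors (e.g. ionic radii, electronegativity difference)
# and a synthetic "systematic relative error" target for 72 compounds.
X = rng.normal(size=(72, 4))
y = 0.05 * np.tanh(X @ rng.normal(size=4))

def train_mse(hidden, lr, epochs=200):
    """Train a one-hidden-layer MLP by gradient descent; return its MSE,
    used here as the GA fitness (lower is fitter)."""
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=hidden)
    for _ in range(epochs):
        h = np.tanh(X @ W1)
        err = h @ W2 - y
        gW2 = h.T @ err / len(y)
        gW1 = X.T @ ((err[:, None] * W2) * (1 - h ** 2)) / len(y)
        W1 -= lr * gW1
        W2 -= lr * gW2
    return float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

# Genetic algorithm: keep the fittest half, refill with mutated offspring.
pop = [(int(rng.integers(2, 16)), 10 ** rng.uniform(-3, -1)) for _ in range(8)]
for _ in range(5):
    parents = sorted(pop, key=lambda p: train_mse(*p))[:4]
    pop = parents + [(max(2, h + int(rng.integers(-2, 3))),
                      lr * 10 ** rng.uniform(-0.3, 0.3)) for h, lr in parents]
best = min(pop, key=lambda p: train_mse(*p))
print("best (hidden units, learning rate):", best)
```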
Abstract:
In this paper, a novel mathematical model of neuron, the Double Synaptic Weight Neuron (DSWN), is presented. The DSWN can simulate many kinds of neuron architectures, including Radial-Basis-Function (RBF), Hyper-Sausage, and Hyper-Ellipsoid models. Moreover, this new model has been implemented in the new CASSANN-II neurocomputer, which can be used to form various types of neural networks with multiple mathematical models of neurons. The flexibility of the DSWN in constructing neural networks is also described. Based on the theory of Biomimetic Pattern Recognition (BPR) and high-dimensional space covering, a recognition system for omnidirectionally oriented rigid objects on a horizontal surface and a face recognition system have been implemented on the CASSANN-II neurocomputer. In these two cases, the results showed that DSWN neural networks have great potential in pattern recognition.
Abstract:
A self-consistent calculation of the subband energy levels of n-doped quantum wells is studied, and theoretical results are compared with experimental data. In order to account for the deviations between them, the ground-state electron-electron exchange interactions, the ground-state direct Coulomb interactions, the depolarization effect, and the exciton-like effect are considered in the simulations. The agreement between theory and experiment is greatly improved when all these aspects are taken into account. The ground-to-excited-state energy difference increases by 8 meV from its self-consistent value if one considers the depolarization effect and the exciton-like effect only. It appears that the electron-electron exchange interactions account for most of the observed residual blueshift of the infrared intersubband absorbance in AlxGa1-xN/GaN multiple quantum wells. Electrons near the surface of the k-space Fermi gas make the main contribution to the electron-electron exchange interactions, while electrons deeper inside the Fermi gas can hardly exchange their positions.
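For reference, the depolarization and exciton-like corrections mentioned above are commonly folded into the observed intersubband resonance energy via the standard relation from the intersubband-absorption literature (stated here as background, not quoted from this paper):

```latex
\tilde{E}_{21}^{\,2} = E_{21}^{2}\,\bigl(1 + \alpha - \beta\bigr)
```

where $E_{21}$ is the bare self-consistent subband spacing, $\alpha$ the depolarization (collective) shift, and $\beta$ the exciton-like correction; the residual blueshift attributed above to exchange is what remains after $\alpha$ and $\beta$ are accounted for.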
Abstract:
Following an introduction to the traditional mathematical models of neurons used in general-purpose neurocomputers, a novel all-purpose mathematical model, the Double Synaptic Weight Neuron (DSWN), is presented, which can simulate many kinds of neuron architectures, including Radial-Basis-Function (RBF) and Back-Propagation (BP) models. This new model has been realized in hardware in the new CASSANN-II neurocomputer, which can be used to form various types of neural networks with multiple mathematical models of neurons. The paper also describes the flexibility of the new model in constructing neural networks. Based on the theory of Biomimetic Pattern Recognition (BPR) and high-dimensional space covering, a recognition system for omnidirectionally oriented rigid objects on a horizontal surface and a face recognition system have been implemented on the CASSANN-II neurocomputer. The results showed that DSWN neural networks have great potential in pattern recognition.
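A minimal sketch, assuming one commonly described DSWN form in which each input carries a direction weight w_j and a core weight wp_j, combined as f(sum_j (w_j (x_j - wp_j))^p - theta); the exact CASSANN-II parameterization may differ. Choosing p and the two weight sets recovers BP-style and RBF-style neurons:

```python
# Double-synaptic-weight neuron sketch: two weights per input plus a power p.
import numpy as np

def dswn(x, w, wp, p, theta=0.0, f=np.tanh):
    """One commonly described DSWN form: f( sum_j (w_j*(x_j - wp_j))**p - theta )."""
    return f(np.sum((w * (x - wp)) ** p) - theta)

x = np.array([0.5, -1.0, 2.0])

# p = 1, wp = 0: reduces to the classic weighted sum, i.e. a BP-style neuron.
bp_like = dswn(x, w=np.array([0.2, -0.4, 0.1]), wp=np.zeros(3), p=1)

# p = 2, w = 1: reduces to squared Euclidean distance to a centre wp,
# i.e. an RBF-style neuron when paired with a Gaussian output function.
rbf_like = dswn(x, w=np.ones(3), wp=np.array([0.4, -0.9, 1.8]), p=2,
                f=lambda s: np.exp(-s))
print(bp_like, rbf_like)
```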
Abstract:
At present, in order to image complex structures more accurately, seismic migration methods have been extended from isotropic media to anisotropic media. This dissertation systematically develops a prestack time migration algorithm for complex structures and its application aspects. For transversely isotropic media with a vertical symmetry axis (VTI media), the dissertation starts from the premise that prestack time migration is an approximation of prestack depth migration and, based on the one-way wave equation and the VTI time-migration dispersion relation, combined with stationary-phase theory, derives a wave-equation-based VTI prestack time migration algorithm. With this algorithm we can analytically obtain the traveltime and amplitude expressions in VTI media, determine how the anisotropy parameter influences the time migration, and, by analyzing the normal moveout of far-offset seismic data and the lateral inhomogeneity of velocity, update the velocity model and estimate the anisotropy parameter model through the migration itself. When the anisotropy parameter is zero, the algorithm degenerates naturally to isotropic time migration, so an isotropic processing procedure for imaging can also be proposed. This procedure keeps the main advantages of time migration, such as high computational efficiency and velocity estimation through migration, and additionally compensates partially for geometric divergence by adopting the deconvolution imaging condition of wave-equation migration. Application of this algorithm to complicated synthetic datasets and field data demonstrates the effectiveness of the approach. The dissertation also presents an approach for estimating the velocity model and the anisotropy parameter model. After analyzing the influence of velocity and the anisotropy parameter on time migration, and based on the normal moveout of far-offset seismic data and the lateral inhomogeneity of velocity, the velocity model is updated and the anisotropy parameter model is estimated through migration, combining the advantages of velocity analysis in isotropic media and anisotropy parameter estimation in VTI media. Tests on synthetic and field data demonstrate that the method is effective and very stable. Massive synthetic datasets, a 2D marine dataset, and 3D field datasets are processed with VTI prestack time migration and compared with the NMO-stacked section and the prestack isotropic time migration stack, demonstrating that the VTI prestack time migration method of this dissertation achieves better focusing and smaller positioning errors for complicated dipping reflectors. When the subsurface is more complex, primaries and multiples cannot be separated in the Radon domain, because they can no longer be described by simple (parabolic) functions. We propose a multiple-attenuation method in the image domain to resolve this problem. For a given velocity model, since time migration takes the wavefield propagation through complex structures into account, primaries and multiples exhibit different offset-domain moveouts after migration and can then be separated using techniques similar to pre-migration Radon-transform filtering. Since every individual offset-domain common-reflection-point gather incorporates complex 3D propagation effects, the method has the advantage of working with 3D data and complicated geology. Tests on synthetic and real data demonstrate the power of the method in discriminating between primaries and multiples after prestack time migration; multiples can be attenuated considerably in the image space.
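For context, far-offset velocity and anisotropy-parameter estimation of the kind described above is usually formulated with the Alkhalifah-Tsvankin nonhyperbolic moveout relation for VTI media (a standard result in this literature, stated here as background rather than quoted from the dissertation):

```latex
t^{2}(x) = t_0^{2} + \frac{x^{2}}{V_{\mathrm{nmo}}^{2}}
         - \frac{2\eta\, x^{4}}{V_{\mathrm{nmo}}^{2}\left[\, t_0^{2} V_{\mathrm{nmo}}^{2} + (1+2\eta)\, x^{2} \,\right]},
\qquad \eta = \frac{\varepsilon - \delta}{1 + 2\delta}
```

Setting $\eta = 0$ removes the $x^4$ term and recovers the hyperbolic isotropic moveout, mirroring the algorithm's natural degeneration to isotropic time migration described above.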
Abstract:
A fundamental understanding of the information carrying capacity of optical channels requires the signal and physical channel to be modeled quantum mechanically. This thesis considers the problems of distributing multi-party quantum entanglement to distant users in a quantum communication system and of determining the ability of quantum optical channels to reliably transmit information. A recent proposal for a quantum communication architecture that realizes long-distance, high-fidelity qubit teleportation is reviewed. Previous work on this communication architecture is extended in two primary ways. First, models are developed for assessing the effects of amplitude, phase, and frequency errors in the entanglement source of polarization-entangled photons, as well as fiber loss and imperfect polarization restoration, on the throughput and fidelity of the system. Second, an error model is derived for an extension of this communication architecture that allows for the production and storage of three-party entangled Greenberger-Horne-Zeilinger states. A performance analysis of the quantum communication architecture in qubit teleportation and quantum secret sharing communication protocols is presented. Recent work on determining the channel capacity of optical channels is extended in several ways. Classical capacity is derived for a class of Gaussian Bosonic channels representing the quantum version of classical colored Gaussian-noise channels. The proof is strongly motivated by the standard technique of whitening Gaussian noise used in classical information theory. Minimum output entropy problems related to these channel capacity derivations are also studied. These single-user Bosonic capacity results are extended to a multi-user scenario by deriving capacity regions for single-mode and wideband coherent-state multiple access channels. An even larger capacity region is obtained when the transmitters use nonclassical Gaussian states, and an outer bound on the ultimate capacity region is presented.
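For context, the canonical single-mode result in the literature this thesis extends (stated here as background, not quoted from the thesis) is the classical capacity of the pure-loss Bosonic channel with transmissivity $\eta$ under a mean photon-number constraint $\bar{N}$:

```latex
C = g(\eta \bar{N}), \qquad g(N) = (N+1)\log_2(N+1) - N \log_2 N
```

where $g(N)$ is the von Neumann entropy of a thermal state with mean photon number $N$. Coherent-state encodings achieve this rate, which is one reason coherent-state multiple access channels are a natural multi-user extension.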
Abstract:
Fitzgerald, S., Simon, B., and Thomas, L. 2005. Strategies that students use to trace code: an analysis based in grounded theory. In Proceedings of the First international Workshop on Computing Education Research (Seattle, WA, USA, October 01 - 02, 2005). ICER '05. ACM, New York, NY, 69-80
Abstract:
Visual search data are given a unified quantitative explanation by a model of how spatial maps in the parietal cortex and object recognition categories in the inferotemporal cortex deploy attentional resources as they reciprocally interact with visual representations in the prestriate cortex. The model's visual representations are organized into multiple boundary and surface representations. Visual search in the model is initiated by organizing multiple items that lie within a given boundary or surface representation into a candidate search grouping. These items are compared with object recognition categories to test for matches or mismatches. Mismatches can trigger deeper searches and recursive selection of new groupings until a target object is identified. This search model is algorithmically specified to quantitatively simulate search data using a single set of parameters, as well as to qualitatively explain a still larger database, including data of Aks and Enns (1992), Bravo and Blake (1990), Chelazzi, Miller, Duncan, and Desimone (1993), Egeth, Virzi, and Garbart (1984), Cohen and Ivry (1991), Enns and Rensink (1990), He and Nakayama (1992), Humphreys, Quinlan, and Riddoch (1989), Mordkoff, Yantis, and Egeth (1990), Nakayama and Silverman (1986), Treisman and Gelade (1980), Treisman and Sato (1990), Wolfe, Cave, and Franzel (1989), and Wolfe and Friedman-Hill (1992). The model hereby provides an alternative to recent variations on the Feature Integration and Guided Search models, and grounds the analysis of visual search in neural models of preattentive vision, attentive object learning and categorization, and attentive spatial localization and orientation.
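A toy sketch (not the published simulations) of the search loop described above: items are grouped by a shared feature, each candidate grouping is compared with the target's recognition category, and a mismatch or ambiguity triggers a recursive deeper search within refined subgroupings. The two-feature scene and the depth rule are illustrative assumptions.

```python
# Toy recursive grouping-and-matching search over (color, shape) items.
from collections import defaultdict

def search(items, target, depth=0):
    """items: list of (color, shape) tuples; target: (color, shape)."""
    groups = defaultdict(list)
    for it in items:                       # group by the feature at this depth
        groups[it[depth % 2]].append(it)
    for key, group in groups.items():
        if key != target[depth % 2]:
            continue                       # category mismatch: reject grouping
        if len(group) == 1 or depth >= 1:
            return [it for it in group if it == target]  # attentive check
        return search(group, target, depth + 1)          # deeper recursive search
    return []

scene = [("red", "circle"), ("red", "square"), ("green", "circle")]
print(search(scene, ("red", "square")))    # -> [('red', 'square')]
```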
Abstract:
A key goal of computational neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how laminar neocortical circuits give rise to biological intelligence. These circuits embody two new and revolutionary computational paradigms: Complementary Computing and Laminar Computing. Circuit properties include a novel synthesis of feedforward and feedback processing, of digital and analog processing, and of pre-attentive and attentive processing. This synthesis clarifies the appeal of Bayesian approaches but has a far greater predictive range that naturally extends to self-organizing processes. Examples from vision and cognition are summarized. A LAMINART architecture unifies properties of visual development, learning, perceptual grouping, attention, and 3D vision. A key modeling theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. It is noted how higher-order attentional constraints can influence multiple cortical regions, and how spatial and object attention work together to learn view-invariant object categories. In particular, a form-fitting spatial attentional shroud can allow an emerging view-invariant object category to remain active while multiple view categories are associated with it during sequences of saccadic eye movements. Finally, the chapter summarizes recent work on the LIST PARSE model of cognitive information processing by the laminar circuits of prefrontal cortex. LIST PARSE models the short-term storage of event sequences in working memory, their unitization through learning into sequence, or list, chunks, and their read-out in planned sequential performance that is under volitional control. LIST PARSE provides a laminar embodiment of Item and Order working memories, also called Competitive Queuing models, that have been supported by both psychophysical and neurobiological data. These examples show how variations of a common laminar cortical design can embody properties of visual and cognitive intelligence that seem, at least on the surface, to be mechanistically unrelated.
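A toy sketch, under stated assumptions, of the Item-and-Order (Competitive Queuing) readout that LIST PARSE embodies: a stored list is held as a primacy gradient of activations, and performance repeatedly selects the most active plan and self-inhibits it. The gradient values here are illustrative.

```python
# Competitive Queuing readout: primacy gradient + choose-max + self-inhibition.
import numpy as np

items = ["A", "B", "C", "D"]
activation = np.array([1.0, 0.8, 0.6, 0.4])  # primacy gradient: earlier = stronger

order = []
act = activation.copy()
while np.any(act > 0):
    i = int(np.argmax(act))   # competitive choice of the strongest plan
    order.append(items[i])
    act[i] = 0.0              # self-inhibition after performance
print(order)                  # -> ['A', 'B', 'C', 'D']
```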
Abstract:
This work is a critical introduction to Alfred Schutz’s sociology of multiple realities and an enterprise that seeks to reassess and reconstruct the Schutzian project. In the first part of the study, I inquire into the biographical context surrounding the germination of this conception and analyse the main texts in which Schutz dealt directly with ‘finite provinces of meaning.’ On the basis of this analysis, I suggest and discuss, in Part II, several solutions to the shortcomings of the theoretical system that Schutz built around the sociological problem of multiple realities. Specifically, I discuss problems related to the structure, the dynamics, and the interrelations of finite provinces of meaning, as well as the way they relate to questions of narrativity, experience, space, time, and identity.
Abstract:
The analytic advantages of central concepts from linguistics and information theory, and the analogies demonstrated between them, are developed for understanding patterns of retrieval from full-text indexes to documents. The interaction between the syntagm and the paradigm in computational operations on written language in indexing, searching, and retrieval is used to account for transformations of the signified, or meaning, between documents and their representations and between queries and the documents retrieved. Characteristics of the message, and of the messages available for selection, in written language are brought in to explain the relative frequency of occurrence of words and of multiple-word sequences in documents. The examples given in the companion article are revisited and a fuller example is introduced. The signified of the sequence 'stood for', the term classically used in definitions of the sign as something standing for something else, can itself change rapidly according to its syntagm. A greater than ordinary discourse understanding of patterns in retrieval is obtained.
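A minimal illustration (hypothetical, not from the article) of the frequency claim above: relative frequencies of single words and of multiple-word sequences (syntagms) in a text, as a full-text index might record them.

```python
# Count unigram and bigram relative frequencies in a toy text.
from collections import Counter

text = "the sign stood for something and the sign stood for something else"
words = text.split()
unigrams = Counter(words)
bigrams = Counter(zip(words, words[1:]))

n = len(words)
print({w: c / n for w, c in unigrams.most_common(3)})
print({" ".join(b): c / (n - 1) for b, c in bigrams.most_common(3)})
```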
Abstract:
Purpose - The aim of this paper is to explore the issues involved in developing and applying performance management approaches within a large UK public sector department, using a multiple stakeholder perspective and an accompanying theoretical framework.
Design/methodology/approach - An initial short questionnaire was used to determine perceptions about the implementation and effectiveness of the new performance management system across the organisation. In total, 700 questionnaires were distributed. Running concurrently with an ethnographic approach, and informed by the questionnaire responses, was a series of semi-structured interviews and focus groups.
Findings - Staff at all levels had an understanding of the new system and perceived it as being beneficial. However, there were concerns that the approach was not continuously managed throughout the year and was in danger of becoming an annual event rather than an ongoing process. Furthermore, the change process seemed to have advanced without corresponding changes to appraisal and reward and recognition systems. Thus, the business objectives were not aligned with motivating factors within the organisation.
Research limitations/implications - Additional research to test the validity and usefulness of the theoretical model, as discussed in this paper, would be beneficial.
Practical implications - The strategic integration of the stakeholder performance measures and scorecards was found to be essential to producing an overall stakeholder-driven strategy within the case study organisation.
Originality/value - This paper discusses in detail the approach adopted and the progress made by one large UK public sector organisation as it attempts to develop better relationships with all of its stakeholders and hence improve its performance. It provides a concerted attempt to link theory with practice.
Abstract:
This research was published in the foremost international journal in information theory and shows the interplay between complex random matrices and multi-antenna information theory. Dr T. Ratnarajah is a leader in this area of research, and his work has contributed to the development of graduate curricula (a course reader) at the Massachusetts Institute of Technology (MIT), USA, by Professor Alan Edelman. The course is named "The Mathematics and Applications of Random Matrices"; see http://web.mit.edu/18.338/www/projects.html