937 results for high-order reasoning
Abstract:
The need for high performance, high precision, and energy saving in rotating machinery demands an alternative solution to traditional bearings. Because of their contactless operation principle, rotating machines employing active magnetic bearings (AMBs) provide many advantages over traditional ones. Advantages such as contamination-free operation, low maintenance costs, high rotational speeds, low parasitic losses, programmable stiffness and damping, and vibration insulation come at the expense of high cost and a complex technical solution. All these properties make the use of AMBs appropriate primarily for specific and highly demanding applications. High-performance and high-precision control requires model-based control methods and accurate models of the flexible rotor. In turn, complex models lead to high-order controllers and impose a considerable computational burden. Fortunately, advances in signal-processing devices over the last few years provide a new perspective on the real-time control of AMBs. The design and the real-time digital implementation of high-order LQ controllers, with a focus on fast execution times, are the subjects of this work. In particular, control design and implementation in field-programmable gate array (FPGA) circuits are investigated. The optimal design is guided by the physical constraints of the system when selecting the optimal weighting matrices. The plant model is augmented with appropriate disturbance models. Compensation of the force-field nonlinearities is proposed to decrease the uncertainty of the actuator. A disturbance-observer-based unbalance compensation for canceling the magnetic force vibrations, or the vibrations in the measured positions, is presented. The theoretical studies are verified by practical experiments on a custom-built laboratory test rig. The test rig uses a prototyping control platform developed in the scope of this work. To sum up, the work takes a step toward an embedded single-chip FPGA-based controller for AMBs.
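The weighting-matrix-driven LQ design mentioned above can be illustrated with a minimal sketch. The sketch below assumes a hypothetical two-state linear plant (A, B) and diagonal weights Q and R chosen by hand; it is not the thesis's rotor model or its FPGA implementation, only the standard continuous-time LQR gain computation.

```python
# Minimal LQ state-feedback design sketch (hypothetical plant matrices, not the
# thesis's rotor model). Q and R stand in for the physically motivated
# weighting matrices discussed in the abstract.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-1.0, -0.1]])      # placeholder suspension dynamics
B = np.array([[0.0],
              [1.0]])             # placeholder actuator input matrix
Q = np.diag([100.0, 1.0])         # penalize position error more than velocity
R = np.array([[0.01]])            # cheap control effort -> faster response

P = solve_continuous_are(A, B, Q, R)   # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal state-feedback gain: u = -K x

print("LQ gain K =", K)
```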
Abstract:
Prediction filters are well-known models for signal estimation in communications, control, and many other areas. The classical method for deriving linear prediction coding (LPC) filters is often based on the minimization of a mean square error (MSE). Consequently, only second-order statistics are required, but the estimate is optimal only if the residue is independent and identically distributed (iid) Gaussian. In this paper, we derive the ML estimate of the prediction filter. Relationships with robust estimation of auto-regressive (AR) processes, with blind deconvolution, and with source separation based on mutual information minimization are then detailed. The algorithm, based on the minimization of a high-order statistics criterion, uses on-line estimation of the residue statistics. Experimental results emphasize the interest of this approach.
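For context, the classical MSE-based LPC baseline that the paper contrasts with can be sketched as follows. The autocorrelation (Yule-Walker) normal equations and the AR(2) test signal below are illustrative assumptions; the paper's ML estimator driven by high-order statistics of the residue is not reproduced here.

```python
# Classical MSE-based LPC baseline (autocorrelation / Yule-Walker form).
# Illustrative only; the paper's contribution is an ML estimate driven by
# high-order statistics of the residue, which is not implemented here.
import numpy as np

def lpc_mse(x, order):
    """Return coefficients a minimizing E[(x[n] - sum_k a_k x[n-k])^2]."""
    x = np.asarray(x, dtype=float)
    r = np.correlate(x, x, mode="full")[len(x) - 1:]   # autocorrelation r[0..]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])          # normal equations

# Example: fit a 2nd-order predictor to a synthetic AR(2) signal.
rng = np.random.default_rng(0)
x = np.zeros(4000)
for n in range(2, len(x)):
    x[n] = 1.5 * x[n - 1] - 0.7 * x[n - 2] + rng.normal()
print(lpc_mse(x, order=2))   # close to [1.5, -0.7]
```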
Abstract:
Linear prediction coding of speech is based on the assumption that the generation model is autoregressive. In this paper we propose a structure to cope with the nonlinear effects present in the generation of the speech signal. This structure consists of two stages: the first is a classical linear prediction filter, and the second models the residual signal by means of two nonlinearities surrounding a linear filter. The coefficients of this filter are computed by means of a gradient search on the score function, in order to deal with the fact that the probability distribution of the residual signal is still not Gaussian. This fact is taken into account when the coefficients are computed by an ML estimate. The algorithm, based on the minimization of a high-order statistics criterion, uses on-line estimation of the residue statistics and builds on the blind deconvolution of Wiener systems [1]. Improvements in the experimental results with speech signals emphasize the interest of this approach.
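A structural sketch of the two-stage predictor described above may help: an LPC stage followed by a nonlinearity, a linear FIR filter, and a second nonlinearity applied to the residual. The tanh nonlinearities, the averaging filter, and the fixed coefficients are illustrative placeholders; the gradient search on the score function used to train the inner filter is not reproduced.

```python
# Structural sketch of the two-stage predictor: LPC residual passed through a
# nonlinearity - linear FIR filter - nonlinearity cascade (Wiener-type model).
# The tanh nonlinearities, filter length, and coefficients are illustrative.
import numpy as np

def lpc_residual(x, a):
    """Residual of a linear predictor with coefficients a."""
    pred = np.zeros_like(x)
    for k, ak in enumerate(a, start=1):
        pred[k:] += ak * x[:-k]
    return x - pred

def second_stage(residual, w):
    """Nonlinearity -> linear FIR filter w -> nonlinearity."""
    z = np.tanh(residual)                 # first (memoryless) nonlinearity
    z = np.convolve(z, w, mode="same")    # inner linear filter
    return np.tanh(z)                     # second nonlinearity

x = np.sin(0.1 * np.arange(400)) + 0.05 * np.random.default_rng(1).normal(size=400)
e = lpc_residual(x, a=[1.5, -0.7])        # first stage: linear prediction
y = second_stage(e, w=np.ones(5) / 5.0)   # second stage: nonlinear residual model
```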
Abstract:
A diagnostic instrument was developed to evaluate the basic chemistry concepts held by freshmen students of the three Chemistry undergraduate courses offered by the University of São Paulo. The instrument minimizes the use of algorithms or memorization by students and values high-order cognitive skills. Analysis of the students' performance reveals systematic use of "displacement reaction" as an algorithm and a mechanical use of Le Chatelier's Principle. Failure to comprehend the chemical equation and chemical language drives students to alternative models for chemical reactions in aqueous solution. For instance, a reaction would occur between "ionic pairs" and/or between species situated in separate compartments.
Abstract:
The work presents the concept of a structural universal and the criticisms that have been leveled against it. A structural universal is a property had by an individual due to the nature of its proper parts and to the relations obtaining between those parts. Mellor has argued that there is no reason to accept such universals in addition to the basic universals that compose them. David Lewis has argued, on the other hand, that it has not been satisfactorily explained how universals are composed of other universals. The composition by which a structural universal is given cannot be a set-theoretical construction or a mereological sum. Several proposals to explain the nature of structural universals are discussed. Finally, it is argued that a structural universal should be understood as a complexion of higher-order universals.
Abstract:
Communication, the flow of ideas and information between individuals in a social context, is the heart of educational experience. Constructivism and constructivist theories form the foundation for the collaborative learning processes of creating and sharing meaning in online educational contexts. The Learning and Collaboration in Technology-enhanced Contexts (LeCoTec) course comprised 66 participants drawn from four European universities (Oulu, Turku, Ghent and Ramon Llull). These participants were split into 15 groups with the express aim of learning about computer-supported collaborative learning (CSCL). The Community of Inquiry model (social, cognitive and teaching presences) provided the content and tools for learning and researching the collaborative interactions in this environment. The sampled comments from the collaborative phase were collected and analyzed at chain level and group level, with the aim of identifying the various message types that sustained high learning outcomes. Furthermore, Social Network Analysis helped to view the density of whole-group interactions, as well as the popular and active members within the highly collaborating groups. It was observed that long chains occur in groups having high-quality outcomes. These chains were also characterized by Social, Interactivity, Administrative and Content comment types. In addition, high outcomes were realized in the highly interactive cases and high-density groups. In the low-interactive groups, commenting was patterned around one or two central group members. In conclusion, future online environments should support high-order learning and develop greater metacognition and self-regulation. Moreover, such an environment, with a wide variety of problem-solving tools, would enhance interactivity.
Abstract:
The quantitative component of this study examined the effect of computer-assisted instruction (CAI) on science problem-solving performance, as well as the significance of logical reasoning ability to this relationship. I had the dual role of researcher and teacher, as I conducted the study with 84 grade seven students to whom I simultaneously taught science on a rotary basis. A two-treatment research design using this sample of convenience allowed for a comparison between the problem-solving performance of a CAI treatment group (n = 46) versus a laboratory-based control group (n = 38). Science problem-solving performance was measured by a pretest and posttest that I developed for this study. The validity of these tests was addressed through critical discussions with faculty members and colleagues, as well as through feedback gained in a pilot study. High reliability was revealed between the pretest and the posttest; in this way, students who tended to score high on the pretest also tended to score high on the posttest. Interrater reliability was found to be high for 30 randomly selected test responses which were scored independently by two raters (i.e., myself and my faculty advisor). Results indicated that the form of computer-assisted instruction (CAI) used in this study did not significantly improve students' problem-solving performance. Logical reasoning ability was measured by an abbreviated version of the Group Assessment of Logical Thinking (GALT). Logical reasoning ability was found to be correlated with problem-solving performance in that students with high logical reasoning ability tended to do better on the problem-solving tests and vice versa. However, no significant difference was observed in problem-solving improvement, in the laboratory-based instruction group versus the CAI group, for students varying in level of logical reasoning ability. Insignificant trends were noted in results obtained from students of high logical reasoning ability, but they require further study. It was acknowledged that conclusions drawn from the quantitative component of this study were limited, as further modifications of the tests were recommended, as well as the use of a larger sample size. The purpose of the qualitative component of the study was to provide a detailed description of my thesis research process as a Brock University Master of Education student. My research journal notes served as the database for open-coding analysis. This analysis revealed six main themes which best described my research experience: research interests, practical considerations, research design, research analysis, development of the problem-solving tests, and scoring scheme development. These important areas of my thesis research experience were recounted in the form of a personal narrative. It was noted that the research process was a form of problem solving in itself, as I made use of several problem-solving strategies to achieve desired thesis outcomes.
Abstract:
Our objective is to develop a diffusion Monte Carlo (DMC) algorithm to estimate the exact expectation values, ⟨Ψ₀|Â|Ψ₀⟩, of multiplicative operators, such as polarizabilities and high-order hyperpolarizabilities, for isolated atoms and molecules. The existing forward-walking pure diffusion Monte Carlo (FW-PDMC) algorithm which attempts this has a serious bias. On the other hand, the DMC algorithm with minimal stochastic reconfiguration provides unbiased estimates of the energies, but the expectation values ⟨Ψ₀|Â|Ψ⟩ are contaminated by Ψ, a user-specified, approximate wave function, when Â does not commute with the Hamiltonian. We modified the latter algorithm to obtain the exact expectation values for these operators, while at the same time eliminating the bias. To compare the efficiency of the FW-PDMC and the modified DMC algorithms we calculated simple properties of the H atom, such as various functions of coordinates and polarizabilities. Using three non-exact wave functions, one of moderate quality and the others very crude, in each case the results are within statistical error of the exact values.
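To make the walker-and-branching machinery concrete, the following is a generic, unguided diffusion Monte Carlo sketch for a 1D harmonic oscillator (exact ground-state energy 0.5 a.u.). It is neither the forward-walking PDMC nor the minimal-stochastic-reconfiguration variant compared in the abstract; all parameters are illustrative.

```python
# Generic, unguided diffusion Monte Carlo sketch for a 1D harmonic oscillator
# (V = x^2 / 2, exact ground-state energy 0.5 a.u.). Only the basic walker and
# branching machinery is shown; it is not the FW-PDMC or the modified DMC
# algorithm of the abstract.
import numpy as np

rng = np.random.default_rng(0)
tau, n_steps, target = 0.01, 2000, 2000          # time step, steps, walker target
walkers = rng.normal(size=target)                # initial walker positions
E_ref = 0.5                                      # running reference energy

def V(x):
    return 0.5 * x * x

for step in range(n_steps):
    walkers = walkers + rng.normal(scale=np.sqrt(tau), size=walkers.size)  # diffuse
    weights = np.exp(-tau * (V(walkers) - E_ref))                          # branch weights
    copies = (weights + rng.random(walkers.size)).astype(int)              # birth/death
    walkers = np.repeat(walkers, copies)
    E_ref += 0.1 * np.log(target / max(walkers.size, 1)) / tau * tau       # gentle population control

print("estimated E0 ~", E_ref)   # should hover near 0.5
```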
Abstract:
Antigen presentation by major histocompatibility complex class II molecules (MHC II) is an essential mechanism for the control of pathogens by the immune system. Human MHC II exists in three isotypes, HLA-DP, DQ and DR, all heterodimers composed of an α chain and a β chain. MHC II is expressed, among others, at the surface of antigen-presenting cells (APCs) and of activated epithelial cells, and its function is to present peptides of exogenous origin to CD4+ T lymphocytes. The oligomerization and intracellular trafficking of MHC II are largely facilitated by a chaperone, the invariant chain (Ii), a non-polymorphic type II protein. After its biosynthesis in the endoplasmic reticulum (ER), Ii hetero- or homotrimerizes, then interacts through its CLIP region with MHC II to form an αβIi complex. The complex exits the ER to begin its journey toward different compartments and the cell surface. In humans, four Ii isoforms have been reported: p33, p35, p41 and p43. The two predominantly expressed isoforms, Iip33 and p35, differ by a 16-amino-acid N-terminal extension carried by Iip35. This extension bears an endoplasmic reticulum retention motif (ERM) composed of RXR residues. This motif must be masked by the MHC II β chain to allow the complex to leave the ER. Our group has investigated the masking mechanism and the mode of ER exit of αβIi complexes. We show here that the direct, or cis, interaction between the MHC II β chain and Iip35 within an αβIi structure is essential for its exit from the ER, promoting the formation of structures of a higher order of complexity. Furthermore, we demonstrate that NleA, a bacterial virulence factor, alters the trafficking of αβIi complexes containing Iip35. This phenotype is mediated by the interaction between p35 and the COPII subunits. In short, Iip35 plays a central role in the formation of αβIi complexes and in their transport out of the ER, making Iip35 a key regulator of antigen presentation by MHC II.
Abstract:
Organic crystals possess extremely large optical nonlinearity compared to inorganic crystals. Organic compounds are also amenable to synthesis and offer scope for introducing desirable characteristics through inclusions. A wide variety of organic materials having electron donor and acceptor groups generate a high order of nonlinearity. In the present work, a new nonlinear optical crystal, L-citrulline oxalate (LCO), based on the amino acid L-citrulline, was grown using the slow evaporation technique. Structural characterization was carried out by single-crystal XRD. It crystallizes in a noncentrosymmetric, orthorhombic structure with space group P2₁2₁2₁. Functional groups present in the sample were identified by Fourier transform infrared (FTIR) and FT-Raman spectral analysis. On studying the FTIR and Raman spectra of the precursors L-citrulline and oxalic acid, used for growing the L-citrulline oxalate crystal, it is found that the significant peaks of the precursors are present in the spectra of the L-citrulline oxalate crystal. This observation, along with the presence of the NH₃⁺ group in the spectra of L-citrulline oxalate, confirms the formation of the charge transfer complex.
Abstract:
This is a Named Entity based Question Answering System for the Malayalam language. Although a vast amount of information is available today in digital form, no effective information access mechanism exists to provide humans with convenient access to it. Information Retrieval and Question Answering systems are the two mechanisms available now for information access. Information retrieval systems typically return a long list of documents in response to a user's query, which the user must skim to determine whether they contain an answer. A Question Answering System, in contrast, allows the user to state his or her information need as a natural language question and returns the most appropriate answer as a word, a sentence or a paragraph. This system is based on Named Entity Tagging and Question Classification. Document tagging extracts useful information from the documents which will be used in finding the answer to the question. Question Classification extracts useful information from the question to determine the type of the question and the way in which the question is to be answered. Various Machine Learning methods are used to tag the documents, and a Rule-Based Approach is used for Question Classification. Malayalam belongs to the Dravidian family of languages and is one of the four major languages of this family. It is one of the 22 Scheduled Languages of India, with official language status in the state of Kerala, and is spoken by 40 million people. Malayalam is a morphologically rich, agglutinative language with a relatively free word order. It also has a productive morphology that allows the creation of complex words which are often highly ambiguous. Document tagging tools such as a Parts-of-Speech Tagger, Phrase Chunker, Named Entity Tagger, and Compound Word Splitter were developed as part of this research work; no such tools were previously available for the Malayalam language. Finite State Transducers, High Order Conditional Random Fields, Artificial Immune System principles, and Support Vector Machines are the techniques used for the design of these document preprocessing tools. This research work describes how Named Entities are used to represent the documents. Single-sentence questions are used to test the system. The overall Precision and Recall obtained are 88.5% and 85.9% respectively. This work can be extended in several directions: the coverage of non-factoid questions can be increased, and the system can be extended to include open-domain applications. Reference Resolution and Word Sense Disambiguation techniques are suggested as future enhancements.
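The rule-based question classification step can be pictured with a minimal sketch: surface patterns map a question to the named-entity type expected in the answer. The English patterns below are placeholders, not the Malayalam rules developed in the thesis.

```python
# Minimal rule-based question classifier of the kind described above: surface
# patterns map a question to the named-entity type expected in the answer.
# The patterns are English placeholders, not the Malayalam rules of the thesis.
import re

RULES = [
    (re.compile(r"\bwho\b", re.I), "PERSON"),
    (re.compile(r"\bwhere\b", re.I), "LOCATION"),
    (re.compile(r"\bwhen\b|\bwhat year\b", re.I), "DATE"),
    (re.compile(r"\bhow many\b|\bhow much\b", re.I), "NUMBER"),
]

def classify(question: str) -> str:
    """Return the expected answer entity type, or OTHER if no rule matches."""
    for pattern, answer_type in RULES:
        if pattern.search(question):
            return answer_type
    return "OTHER"

print(classify("Who wrote the report?"))      # PERSON
print(classify("When was Kerala formed?"))    # DATE
```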
Abstract:
This thesis investigated the potential use of Linear Predictive Coding in speech communication applications. A Modified Block Adaptive Predictive Coder is developed, which reduces the computational burden and complexity without sacrificing the speech quality, as compared to the conventional adaptive predictive coding (APC) system. For this, changes in the evaluation methods have been evolved. This method differs from the usual APC system in that the difference between the true and the predicted value is not transmitted. This allows the replacement of the high-order predictor in the transmitter section of a predictive coding system by a simple delay unit, which makes the transmitter quite simple. Also, the block length used in the processing of the speech signal is adjusted relative to the pitch period of the signal being processed, rather than choosing a constant length as hitherto done by other researchers. The efficiency of the newly proposed coder has been supported with results of computer simulation using real speech data. Three methods for voiced/unvoiced/silent/transition classification have been presented. The first one is based on energy, zero-crossing rate and the periodicity of the waveform. The second method uses the normalised correlation coefficient as the main parameter, while the third method utilizes a pitch-dependent correlation factor. The third algorithm, which gives the minimum error probability, has been chosen in a later chapter to design the modified coder. The thesis also presents a comparative study between the autocorrelation and the covariance methods used in the evaluation of the predictor parameters. It has been proved that the autocorrelation method is superior to the covariance method with respect to filter stability and also in an SNR sense, though the increase in gain is only small. The Modified Block Adaptive Coder applies a switch from pitch prediction to spectrum prediction when the speech segment changes from a voiced or transition region to an unvoiced region. The experiments conducted in coding, transmission and simulation used speech samples of Malayalam and English phrases. A proposal for a speaker recognition system and a phoneme identification system has also been outlined towards the end of the thesis.
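The first classification method mentioned above (energy plus zero-crossing rate) lends itself to a short sketch. The frame length and thresholds below are illustrative assumptions, not the values tuned in the thesis, and the periodicity check is omitted.

```python
# Sketch of a frame-wise voiced/unvoiced/silence decision from short-time
# energy and zero-crossing rate. Thresholds are illustrative, not the ones
# tuned in the thesis; the periodicity test of the waveform is omitted.
import numpy as np

def classify_frame(frame, energy_sil=1e-4, zcr_voiced=0.1):
    energy = np.mean(frame ** 2)                        # short-time energy
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2  # sign changes per sample
    if energy < energy_sil:
        return "silence"
    return "voiced" if zcr < zcr_voiced else "unvoiced"

fs = 8000
t = np.arange(0, 0.02, 1 / fs)                                # one 20 ms frame
print(classify_frame(0.3 * np.sin(2 * np.pi * 150 * t)))                     # voiced
print(classify_frame(0.05 * np.random.default_rng(2).normal(size=t.size)))   # unvoiced
```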
Abstract:
The ionization of H2 in intense laser pulses is studied by numerical integration of the time-dependent Schrödinger equation for a single-active-electron model including the vibrational motion. The electron kinetic energy spectra in high-order above-threshold ionization are strongly dependent on the vibrational quantum number of the created H2+ ion. For certain vibrational states, the electron yield in the mid-plateau region is strongly enhanced. The effect is attributed to channel closings, which were previously observed in atoms by varying the laser intensity.
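For background, the channel-closing condition invoked above is usually stated as follows (a textbook strong-field relation, not a result specific to this work): the n-photon ionization channel closes when the ponderomotively shifted threshold exceeds the energy of n photons.

```latex
% n-photon channel closing (standard strong-field background):
\[
  n\hbar\omega \;\le\; I_p + U_p,
  \qquad
  U_p = \frac{e^2 E_0^2}{4 m_e \omega^2},
\]
% so increasing the laser intensity raises U_p and successively closes
% low-order channels, redistributing yield across the above-threshold-ionization plateau.
```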
Abstract:
The interaction of short intense laser pulses with atoms/molecules produces a multitude of highly nonlinear processes requiring a non-perturbative treatment. A detailed study of these highly nonlinear processes by numerically solving the time-dependent Schrödinger equation becomes a daunting task when the number of degrees of freedom is large. The coupling between the electronic and nuclear degrees of freedom further aggravates the computational problems. In the present work we show that the time-dependent Hartree (TDH) approximation, which neglects the correlation effects, gives an unreliable description of the system dynamics both in the absence and presence of an external field. A theoretical framework is required that treats the electrons and nuclei on an equal footing and fully quantum mechanically. To address this issue we discuss two approaches, namely the multicomponent density functional theory (MCDFT) and the multiconfiguration time-dependent Hartree (MCTDH) method, that go beyond the TDH approximation and describe the correlated electron-nuclear dynamics accurately. In the MCDFT framework, where the time-dependent electronic and nuclear densities are the basic variables, we discuss an algorithm to calculate the exact Kohn-Sham (KS) potentials for small model systems. By simulating the photodissociation process in a model hydrogen molecular ion, we show that the exact KS potentials contain all the many-body effects and give an insight into the system dynamics. In the MCTDH approach, the wave function is expanded as a sum of products of single-particle functions (SPFs). The MCTDH method is able to describe the electron-nuclear correlation effects, as the SPFs and the expansion coefficients evolve in time and give an accurate description of the system dynamics. We show that the MCTDH method is suitable for studying a variety of processes such as the fragmentation of molecules, high-order harmonic generation, the two-center interference effect, and the lochfrass effect. We discuss these phenomena in a model hydrogen molecular ion and a model hydrogen molecule. The inclusion of absorbing boundaries in the mean-field approximation and its consequences are discussed using the model hydrogen molecular ion. To this end, two types of calculations are considered: (i) a variational approach with a complex absorbing potential included in the full many-particle Hamiltonian and (ii) an approach in the spirit of time-dependent density functional theory (TDDFT), including complex absorbing potentials in the single-particle equations. It is elucidated that for small grids the TDDFT approach is superior to the variational approach.
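The complex-absorbing-potential idea discussed in the last part of the abstract can be illustrated with a minimal split-operator propagation of a 1D wave packet on a grid. This is a generic sketch in atomic units with an ad hoc quadratic CAP at the grid edges; it implements none of the TDH, MCTDH, or MCDFT machinery discussed above.

```python
# Minimal split-operator propagation of a free 1D wave packet with a complex
# absorbing potential (CAP) at the grid edges. Generic illustration of the
# grid propagation and absorbing-boundary idea only (atomic units, hbar = m = 1);
# it does not implement the TDH, MCTDH, or MCDFT schemes of the abstract.
import numpy as np

n, L, dt, steps = 1024, 100.0, 0.1, 800
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

cap = -1j * 0.02 * np.clip(np.abs(x) - 0.4 * L, 0, None) ** 2   # CAP for |x| > 40

psi = np.exp(-(x + 20.0) ** 2 / 4.0 + 1j * 1.0 * x)             # packet moving right
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

expV = np.exp(-1j * cap * dt / 2)          # half-step: the only "potential" is the CAP
expT = np.exp(-1j * (k ** 2 / 2) * dt)     # full kinetic step in momentum space

for _ in range(steps):
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi

print("surviving norm:", np.sum(np.abs(psi) ** 2) * dx)   # < 1: flux absorbed at the edges
```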
Abstract:
This thesis develops a model for the topological structure of situations. In this model, the topological structure of space is altered by the presence or absence of boundaries, such as those at the edges of objects. This allows the intuitive meaning of topological concepts such as region connectivity, function continuity, and preservation of topological structure to be modeled using the standard mathematical definitions. The thesis shows that these concepts are important in a wide range of artificial intelligence problems, including low-level vision, high-level vision, natural language semantics, and high-level reasoning.