997 results for Bootstrapping techniques
Abstract:
In a Communication Bootstrapping system, peer components with different perceptual worlds invent symbols and syntax based on correlations between their percepts. I propose that Communication Bootstrapping can also be used to acquire functional definitions of words and causal reasoning knowledge. I illustrate this point with several examples, then sketch the architecture of a system in progress which attempts to execute this task.
Abstract:
Carbonaceous deposits formed during the temperature-programmed surface reaction (TPSR) of methane dehydro-aromatization (MDA) over Mo/HZSM-5 catalysts have been investigated by TPH, TPCO2 and TPO, in combination with thermal gravimetric analysis (TG). The TPO profiles of the coked catalyst after TPSR of MDA show two temperature peaks: one at about 776 K and the other at about 865 K. Subsequent TPH experiments reduced only the area of the high-temperature peak and had no effect on the area of the low-temperature peak. On the other hand, the TPO profiles of the coked catalyst after subsequent TPCO2 experiments exhibited an obvious reduction in the areas of both the high- and low-temperature peaks, particularly in the area of the low-temperature peak. On the basis of TPSR, TPR and TPCO2 experiments and the corresponding TG analysis, quantitative analysis of the coke and the kinetics of its burning-off process have been studied. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
This report explores the relation between image intensity and object shape. It is shown that image intensity is related to surface orientation and that a variation in image intensity is related to surface curvature. Computational methods are developed which use the measured intensity variation across surfaces of smooth objects to determine surface orientation. In general, surface orientation is not determined locally by the intensity value recorded at each image point. Tools are needed to explore the problem of determining surface orientation from image intensity. The notion of gradient space, popularized by Huffman and Mackworth, is used to represent surface orientation. The notion of a reflectance map, originated by Horn, is used to represent the relation between surface orientation and image intensity. The image Hessian is defined and used to represent surface curvature. Properties of surface curvature are expressed as constraints on possible surface orientations corresponding to a given image point. Methods are presented which embed assumptions about surface curvature in algorithms for determining surface orientation from the intensities recorded in a single view. If additional images of the same object are obtained by varying the direction of incident illumination, then surface orientation is determined locally by the intensity values recorded at each image point. This fact is exploited in a new technique called photometric stereo. The visual inspection of surface defects in metal castings is considered. Two casting applications are discussed. The first is the precision investment casting of turbine blades and vanes for aircraft jet engines. In this application, grain size is an important process variable. The existing industry standard for estimating the average grain size of metals is implemented and demonstrated on a sample turbine vane.
Grain size can be computed from the measurements obtained in an image, once the foreshortening effects of surface curvature are accounted for. The second is the green sand mold casting of shuttle eyes for textile looms. Here, physical constraints inherent to the casting process translate into constraints on image intensity; to exploit these constraints, it is necessary to interpret features of intensity as features of object shape. Both applications demonstrate that successful visual inspection requires the ability to interpret observed changes in intensity in the context of surface topography. The theoretical tools developed in this report provide a framework for this interpretation.
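The core of the photometric stereo technique described above is a small linear system: for a Lambertian surface imaged under three known, non-coplanar light directions, albedo and surface orientation follow locally at each pixel. A minimal sketch of that calculation (the light directions and surface values below are invented for illustration, not taken from the report):

```python
import numpy as np

# Three known unit light directions, one per image (illustrative values).
L = np.array([[0.0, 0.0, 1.0],
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])

# A "true" surface point: albedo 0.5, normal tilted slightly from vertical.
rho_true = 0.5
n_true = np.array([0.3, 0.0, 1.0])
n_true = n_true / np.linalg.norm(n_true)

# Lambertian image formation: intensity = light direction . (albedo * normal).
I = L @ (rho_true * n_true)

# Photometric stereo: invert the 3x3 system to recover g = albedo * normal.
g = np.linalg.solve(L, I)
rho = np.linalg.norm(g)   # recovered albedo
n = g / rho               # recovered unit surface normal
```

With a single image the same equation is underdetermined at each pixel, which is exactly why the report's single-view methods need curvature assumptions; the three-image case makes the recovery local and exact.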
Abstract:
C.R. Bull, R. Zwiggelaar and R.D. Speller, 'Review of inspection techniques based on the elastic and inelastic scattering of X-rays and their potential in the food and agricultural industry', Journal of Food Engineering 33 (1-2), 167-179 (1997)
Abstract:
Accurate knowledge of traffic demands in a communication network enables or enhances a variety of traffic engineering and network management tasks of paramount importance for operational networks. Directly measuring a complete set of these demands is prohibitively expensive because of the huge amounts of data that must be collected and the performance impact that such measurements would impose on the regular behavior of the network. As a consequence, we must rely on statistical techniques to produce estimates of actual traffic demands from partial information. The performance of such techniques is, however, limited by their reliance on partial information and by the heavy computation they incur, which constrains their convergence behavior. In this paper we study strategies to improve the convergence of a powerful statistical technique based on an Expectation-Maximization iterative algorithm. First, we analyze modeling approaches to generating starting points. We call these starting points informed priors, since they are obtained using actual network information such as packet traces and SNMP link counts. Second, we provide a very fast variant of the EM algorithm which extends its computation range, increasing its accuracy and decreasing its dependence on the quality of the starting point. Finally, we study the convergence characteristics of our EM algorithm and compare it against a recently proposed Weighted Least Squares approach.
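The flavour of EM-based traffic demand estimation can be conveyed with a toy version (a generic multiplicative EM-style update for Poisson-type link counts, not the paper's exact algorithm; the routing matrix and demands below are invented): origin-destination demands x are inferred from link counts y = A x, starting from a point x0 that plays the role of the informed prior:

```python
import numpy as np

def em_estimate(A, y, x0, iters=500):
    """Multiplicative EM-style update for demands x given link counts y = A @ x."""
    x = x0.astype(float).copy()
    col = A.sum(axis=0)                      # how many links observe each demand
    for _ in range(iters):
        yhat = A @ x                         # predicted link counts
        ratio = y / np.maximum(yhat, 1e-12)  # observed / predicted, per link
        x *= (A.T @ ratio) / col             # reweight each demand estimate
    return x

# Toy network: 3 links carrying 2 origin-destination demands (0/1 routing matrix).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([3.0, 7.0])
y = A @ x_true                               # noiseless SNMP-style link counts
x_hat = em_estimate(A, y, x0=np.ones(2))     # uninformative starting point
```

A better starting point x0 (the informed prior) shortens the iteration; with noisy counts the update converges to a maximum-likelihood fit rather than the exact demands.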
Abstract:
Training data for supervised learning neural networks can be clustered such that the input/output pairs in each cluster are redundant. Redundant training data can adversely affect training time. In this paper we apply two clustering algorithms, ART2-A and the Generalized Equality Classifier, to identify training data clusters and thus reduce the training data and training time. The approach is demonstrated for a high-dimensional nonlinear continuous-time mapping. The demonstration shows a six-fold decrease in training time at little or no loss of accuracy in the handling of evaluation data.
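The reduction step can be sketched generically (the paper uses ART2-A and the Generalized Equality Classifier; the one-pass leader clustering below is a stand-in with a vigilance-like radius, and the data are invented): input/output pairs whose inputs fall within `radius` of an existing cluster leader are treated as redundant and dropped, shrinking the training set:

```python
import math

def reduce_training_set(X, y, radius):
    """Keep one representative input/output pair per input-space cluster."""
    leaders, kept = [], []
    for i, x in enumerate(X):
        # a pair is novel only if its input is far from every cluster leader
        if all(math.dist(x, leader) > radius for leader in leaders):
            leaders.append(x)
            kept.append(i)
    return [X[i] for i in kept], [y[i] for i in kept]

# Six pairs, three tight input clusters -> three representatives survive.
X = [(0.0,), (0.01,), (0.02,), (1.0,), (1.01,), (2.0,)]
y = [0.0, 0.0, 0.0, 1.0, 1.0, 4.0]
Xr, yr = reduce_training_set(X, y, radius=0.1)
```

The radius plays the role of the clustering algorithm's vigilance: larger values discard more pairs (faster training, coarser coverage), smaller values keep more.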
Abstract:
A massive change is currently taking place in the manner in which power networks are operated. Traditionally, power networks consisted of large power stations which were controlled from centralised locations. The trend in modern power networks is for generated power to be produced by a diverse array of energy sources which are spread over a large geographical area. As a result, controlling these systems from a centralised controller is impractical. Thus, future power networks will be controlled by a large number of intelligent distributed controllers which must work together to coordinate their actions. The term Smart Grid is the umbrella term used to denote this combination of power systems, artificial intelligence, and communications engineering. This thesis focuses on the application of optimal control techniques to Smart Grids, with a particular focus on iterative distributed MPC. A novel convergence and stability proof for iterative distributed MPC based on the Alternating Direction Method of Multipliers is derived. The performance of distributed MPC, centralised MPC, and an optimised PID controller is then compared on a highly interconnected, nonlinear MIMO testbed based on part of the Nordic power grid. Finally, a novel tuning algorithm is proposed for iterative distributed MPC which simultaneously optimises both the closed-loop performance and the communication overhead associated with the desired control.
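The coordination mechanism named above, the Alternating Direction Method of Multipliers, can be illustrated in its consensus form (a toy scalar problem with invented quadratic costs, not the thesis's MPC formulation): two "controllers" each hold a private cost (x - a_i)^2 and must agree on a shared decision variable by iterating local minimisations, an averaging step, and dual updates:

```python
# Private targets of the two agents (illustrative); the centralised optimum
# of (x - a_0)^2 + (x - a_1)^2 is the mean of the targets, here 3.0.
a = [1.0, 5.0]
rho = 1.0                      # ADMM penalty parameter
x = [0.0, 0.0]                 # local copies of the decision variable
u = [0.0, 0.0]                 # scaled dual variables
z = 0.0                        # consensus variable

for _ in range(100):
    # local step: each agent minimises (x - a_i)^2 + (rho/2)(x - z + u_i)^2
    x = [(2 * ai + rho * (z - ui)) / (2 + rho) for ai, ui in zip(a, u)]
    # coordination step: average the (shifted) local copies
    z = sum(xi + ui for xi, ui in zip(x, u)) / len(x)
    # dual step: each agent penalises its disagreement with the consensus
    u = [ui + xi - z for ui, xi in zip(u, x)]
```

Only z and the local x_i, u_i are exchanged per iteration, which is what makes the scheme attractive when the controllers are geographically distributed.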
Abstract:
There is much common ground between the areas of coding theory and systems theory. Fitzpatrick has shown that a Gröbner basis approach leads to efficient algorithms in the decoding of Reed-Solomon codes and in scalar interpolation and partial realization. This thesis simultaneously generalizes and simplifies that approach and presents applications to discrete-time modeling, multivariable interpolation and list decoding. Gröbner basis theory has come into its own in the context of software and algorithm development. By generalizing the concept of polynomial degree, term orders are provided for multivariable polynomial rings and free modules over polynomial rings. The orders are not, in general, unique, and this adds, in no small way, to the power and flexibility of the technique. As well as being generating sets for ideals or modules, Gröbner bases always contain an element which is minimal with respect to the corresponding term order. Central to this thesis is a general algorithm, valid for any term order, that produces a Gröbner basis for the solution module (or ideal) of elements satisfying a sequence of generalized congruences. These congruences, based on shifts and homomorphisms, are applicable to a wide variety of problems, including key equations and interpolations. At the core of the algorithm is an incremental step. Iterating this step lends a recursive/iterative character to the algorithm. As a consequence, not all of the input to the algorithm need be available from the start, and different "paths" can be taken to reach the final solution. The existence of a suitable chain of modules satisfying the criteria of the incremental step is a prerequisite for applying the algorithm.
Abstract:
Error correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are ubiquitously used in communication, data storage, etc. Error correction allows reconstruction of the original data from the received word. The classical decoding algorithms are constrained to output just one codeword. However, in the late 1950s researchers proposed a relaxed error correction model for potentially large error rates known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from an algorithmic as well as an architectural standpoint. The codes in consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced as compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is proven to perform better than Kötter's decoder for high-rate codes. The thesis work also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, which are a natural extension of RS codes over several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, from which the dimension and a bound on the minimum distance are computed.
The algebraic structure of the polynomials evaluating into the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to RS codes) for multihop wireless sensor network (WSN) applications.
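The evaluation-based view of RS codes mentioned above is easy to state concretely (a hedged sketch over the small prime field GF(101) with invented parameters; the thesis targets hardware-oriented finite-field arithmetic): a k-symbol message defines a polynomial of degree less than k, the codeword is its evaluations at n distinct points, and any k error-free symbols recover the message by Lagrange interpolation:

```python
P = 101  # prime field modulus (illustrative choice)

def rs_encode(msg, points):
    # codeword symbol at x is msg[0] + msg[1]*x + msg[2]*x^2 + ... mod P
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P for x in points]

def polymul_linear(poly, c):
    # multiply a coefficient list (low to high degree) by (x - c) mod P
    out = [0] * (len(poly) + 1)
    for i, a in enumerate(poly):
        out[i + 1] = (out[i + 1] + a) % P
        out[i] = (out[i] - c * a) % P
    return out

def rs_interpolate(points, values, k):
    # Lagrange interpolation: recover the k message coefficients from any
    # k error-free (point, value) pairs of the codeword
    coeffs = [0] * k
    for j in range(k):
        num, denom = [1], 1
        for m in range(k):
            if m != j:
                num = polymul_linear(num, points[m])
                denom = (denom * (points[j] - points[m])) % P
        scale = (values[j] * pow(denom, P - 2, P)) % P  # inverse via Fermat
        for i, a in enumerate(num):
            coeffs[i] = (coeffs[i] + scale * a) % P
    return coeffs

msg = [42, 7, 99]             # k = 3 message symbols
points = list(range(7))       # n = 7 distinct evaluation points
code = rs_encode(msg, points)
recovered = rs_interpolate(points[:3], code[:3], 3)
```

Shortening an RS code amounts to using fewer evaluation points, and list decoding replaces this interpolation step with a more elaborate one that can return several candidate polynomials.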
Abstract:
Modern neuroscience relies heavily on sophisticated tools that allow us to visualize and manipulate cells with precise spatial and temporal control. Transgenic mouse models, for example, can be used to manipulate cellular activity in order to draw conclusions about the molecular events responsible for the development, maintenance and refinement of healthy and/or diseased neuronal circuits. Although it is fairly well established that circuits respond to activity-dependent competition between neurons, we have yet to understand either the mechanisms underlying these events or the higher-order plasticity that synchronizes entire circuits. In this thesis we aimed to develop and characterize transgenic mouse models that can be used to directly address these outstanding biological questions in different ways. We present SLICK-H, a Cre-expressing mouse line that can achieve drug-inducible, widespread, neuron-specific manipulations in vivo. This model is a clear improvement over existing models because of its particularly strong, widespread, and even distribution pattern that can be tightly controlled in the absence of drug induction. We also present SLICK-V::Ptox, a mouse line that, through expression of the tetanus toxin light chain, allows long-term inhibition of neurotransmission in a small subset (<1%) of fluorescently labeled pyramidal cells. This model, which can be used to study how a silenced cell performs in a wildtype environment, greatly facilitates the in vivo study of activity-dependent competition in the mammalian brain. As an initial application we used this model to show that tetanus toxin-expressing CA1 neurons experience a 15% - 19% decrease in apical dendritic spine density. Finally, we also describe the attempt to create additional Cre-driven mouse lines that would allow conditional alteration of neuronal activity either by hyperpolarization or inhibition of neurotransmission. 
Overall, the models characterized in this thesis expand upon the wealth of tools available that aim to dissect neuronal circuitry by genetically manipulating neurons in vivo.
Abstract:
In this thesis I theoretically study quantum states of ultracold atoms. The majority of the Chapters focus on engineering specific quantum states of single atoms with high fidelity in experimentally realistic systems. In the sixth Chapter, I investigate the stability and dynamics of new multidimensional solitonic states that can be created in inhomogeneous atomic Bose-Einstein condensates. In Chapter three I present two papers in which I demonstrate how the coherent tunnelling by adiabatic passage (CTAP) process can be implemented in an experimentally realistic atom chip system, to coherently transfer the centre-of-mass of a single atom between two spatially distinct magnetic waveguides. In these works I also utilise GPU (Graphics Processing Unit) computing, which offers a significant performance increase in the numerical simulation of the Schrödinger equation. In Chapter four I investigate the CTAP process for a linear arrangement of radio frequency traps, where the centre-of-mass of both single atoms and clouds of interacting atoms can be coherently controlled. In Chapter five I present a theoretical study of adiabatic radio frequency potentials, where I use Floquet theory to more accurately model situations where frequencies are close and/or field amplitudes are large. I also show how one can create highly versatile 2D adiabatic radio frequency potentials using multiple radio frequency fields with arbitrary field orientation, and demonstrate their utility by simulating the creation of ring vortex solitons. In the sixth Chapter I discuss the stability and dynamics of a family of multidimensional solitonic states created in harmonically confined Bose-Einstein condensates. I demonstrate that these solitonic states have interesting dynamical instabilities, where a continuous collapse and revival of the initial state occurs.
Through Bogoliubov analysis, I determine the modes responsible for the observed instabilities of each solitonic state and also extract information related to the time at which instability can be observed.
Abstract:
Existing work in Computer Science and Electronic Engineering demonstrates that Digital Signal Processing techniques can effectively identify the presence of stress in the speech signal. These techniques use datasets containing real or actual stress samples, i.e. real-life stress such as 911 calls. Studies that use simulated or laboratory-induced stress have been less successful and inconsistent. Pervasive, ubiquitous computing is increasingly moving towards voice-activated and voice-controlled systems and devices. Speech recognition and speaker identification algorithms will have to improve and take emotional speech into account. Modelling the influence of stress on speech and voice is of interest to researchers from many different disciplines including security, telecommunications, psychology, speech science, forensics and Human Computer Interaction (HCI). The aim of this work is to assess the impact of moderate stress on the speech signal. In order to do this, a dataset of laboratory-induced stress is required. While attempting to build this dataset it became apparent that reliably inducing measurable stress in a controlled environment, when speech is a requirement, is a challenging task. This work focuses on the use of a variety of stressors to elicit a stress response during tasks that involve speech content. Biosignal analysis (commercial Brain Computer Interfaces, eye tracking and skin resistance) is used to verify and quantify the stress response, if any. This thesis explains the basis of the author's hypotheses on the elicitation of affectively-toned speech and presents the results of several studies carried out throughout the PhD research period. These results show that the elicitation of stress, particularly the induction of affectively-toned speech, is not a simple matter and that many modulating factors influence the stress response process.
A model is proposed to reflect the author’s hypothesis on the emotional response pathways relating to the elicitation of stress with a required speech content. Finally the author provides guidelines and recommendations for future research on speech under stress. Further research paths are identified and a roadmap for future research in this area is defined.
Abstract:
There are difficulties with utilising self-report and physiological measures of assessment amongst forensic populations. This study investigates implicit-based measures amongst sexual offenders, non-sexual offenders and low risk samples. Implicit measurement is a term applied to measurement methods that make it difficult to influence responses through conscious control. The test battery includes the Implicit Association Test (IAT), Rapid Serial Visual Presentation (RSVP), Viewing Time (VT) and the Structured Clinical Interview for Disorders (SCID). The IAT proposes that people will perform better on a task when they depend on well-practiced cognitive associations. The RSVP task requires participants to identify a single target image that is presented amongst a series of rapidly presented visual images. RSVP operates on the premise that if two target images are presented within 500 milliseconds of each other, the possibility that the participant will recognize the second target is significantly reduced when the first target is of salience to the individual. This is the attentional blink phenomenon. VT is based on the principle that people will look longer at images that are of salience. Results showed that on the VT task, child sexual offenders took longer to view images of children than low risk groups. Nude images induced a greater attentional blink than clothed images amongst low risk and offending samples on the RSVP task. Sexual offenders took longer than low risk groups on word-pairing tasks where sexual words were paired with adult words on the IAT. The SCID highlighted differences between the offending and non-offending groups on the subscales for personality disorders. More erotic stimulus items on the VT and RSVP measures are recommended to better differentiate sexual preference between offending and non-offending samples. A pictorial IAT is recommended. Findings provide the basis for further development of implicit measures within the assessment of sexual offenders.
Abstract:
Quantitative analysis of penetrative deformation in sedimentary rocks of fold and thrust belts has largely been carried out using clast-based strain analysis techniques. These methods analyse the geometric deviations from an original state that populations of clasts, or strain markers, have undergone. The characterisation of these geometric changes, or strain, in the early stages of rock deformation is not entirely straightforward. This is in part due to the paucity of information on the original state of the strain markers, but also the uncertainty of the relative rheological properties of the strain markers and their matrix during deformation, as well as the interaction of two competing fabrics, such as bedding and cleavage. Furthermore, one of the single largest setbacks for accurate strain analysis has been associated with the methods themselves: they are traditionally time-consuming and labour-intensive, and results can vary between users. A suite of semi-automated techniques has been tested and found to work very well, but in low strain environments the problems discussed above persist. Additionally, these techniques have been compared to Anisotropy of Magnetic Susceptibility (AMS) analyses, which is a particularly sensitive tool for the characterisation of low strain in sedimentary lithologies.