936 results for Non-ideal systems
Abstract:
A simple but efficient voice activity detector based on the Hilbert transform and a dynamic threshold is presented for use in the pre-processing of audio signals. The algorithm defining the dynamic threshold is a modification of a convex combination found in the literature. This scheme allows the detection of prosodic and silence segments in speech under non-ideal conditions such as spectrally overlapped noise. The present work shows preliminary results on a database built from political speeches. The tests were performed by adding artificial noise, on top of the natural noise already present, to the audio signals, and several algorithms are compared. The results will be extended to the fields of adaptive filtering of monophonic signals and the analysis of speech pathologies in future work.
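As a rough illustration of the kind of detector described above (not the paper's exact algorithm), the following Python sketch computes the Hilbert envelope of a signal and applies a dynamic threshold updated during silence by a convex combination of the current threshold and the local envelope level; the smoothing constant `alpha` and the frame length are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_envelope_vad(x, fs, frame_ms=20, alpha=0.95):
    """Frame-wise voice activity decisions from the Hilbert envelope.

    A sketch of an envelope-plus-dynamic-threshold detector; alpha and
    the frame length are illustrative choices, not values from the paper.
    """
    envelope = np.abs(hilbert(x))                 # analytic-signal magnitude
    frame_len = int(fs * frame_ms / 1000)
    n_frames = len(envelope) // frame_len
    decisions = np.zeros(n_frames, dtype=bool)

    # Initialise the threshold from the first frame (assumed non-speech).
    threshold = envelope[:frame_len].mean()
    for k in range(n_frames):
        frame = envelope[k * frame_len:(k + 1) * frame_len]
        level = frame.mean()
        decisions[k] = level > threshold
        if not decisions[k]:
            # Convex combination: slowly track the noise floor during silence.
            threshold = alpha * threshold + (1 - alpha) * level
    return decisions

if __name__ == "__main__":
    fs = 8000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 200 * t) * (t > 0.5) + 0.05 * np.random.randn(fs)
    print(hilbert_envelope_vad(x, fs).astype(int))
```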
Abstract:
Jerne's idiotypic network theory postulates that the immune response involves inter-antibody stimulation and suppression as well as matching to antigens. The theory has proved the most popular Artificial Immune System (AIS) model for incorporation into behavior-based robotics, but guidelines for implementing idiotypic selection are scarce. Furthermore, the direct effects of employing the technique have not been demonstrated in the form of a comparison with non-idiotypic systems. This paper aims to address these issues. A method for integrating an idiotypic AIS network with a reinforcement learning (RL) based control system is described, and the mechanisms underlying antibody stimulation and suppression are explained in detail. Some hypotheses that account for the network advantage are put forward and tested using three systems of increasing idiotypic complexity: the basic RL system, a simplified hybrid AIS-RL that implements idiotypic selection independently of derived concentration levels, and a full hybrid AIS-RL scheme. The test bed takes the form of a simulated Pioneer robot that is required to navigate through maze worlds, detecting and tracking door markers.
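The abstract does not give the update rules, but a minimal sketch of idiotypic (Farmer-style) concentration dynamics, in which each antibody is stimulated by matching the antigen and other antibodies and suppressed by antibodies that recognise it, might look as follows; the matrices, rate constants, and renormalisation step are hypothetical placeholders, not the paper's method.

```python
import numpy as np

def idiotypic_step(conc, antigen_match, stim, supp,
                   k1=0.1, k2=0.05, death=0.02, dt=1.0):
    """One update of Farmer-style idiotypic network dynamics (a sketch).

    conc          : antibody concentrations, shape (n,)
    antigen_match : affinity of each antibody to the current antigen, shape (n,)
    stim, supp    : (n, n) inter-antibody stimulation / suppression matrices
    All structures and rate constants are illustrative, not from the paper.
    """
    growth = antigen_match + k1 * stim @ conc - k2 * supp @ conc - death
    conc = np.clip(conc * (1.0 + dt * growth), 0.0, None)
    return conc / conc.sum()          # keep relative concentrations bounded

rng = np.random.default_rng(0)
n = 4                                  # four candidate antibodies / behaviours
conc = np.full(n, 1.0 / n)
stim, supp = rng.random((n, n)), rng.random((n, n))
antigen_match = np.array([0.2, 0.8, 0.1, 0.4])   # current antigen (environment)
for _ in range(20):
    conc = idiotypic_step(conc, antigen_match, stim, supp)
print("winning antibody (selected behaviour):", int(np.argmax(conc)))
```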
Abstract:
In this work, we generalize the Principle of Least Action proposed by Riewe for non-conservative systems containing linear dissipative forces that depend on time derivatives of any order. The generalized action is constructed from Lagrangian functions depending on derivatives of integer and fractional order. Unlike other formulations, the use of fractional derivatives allows the construction of physical Lagrangians for non-conservative systems. A Lagrangian is said to be physical if it provides physically consistent relations for the momentum and the Hamiltonian of the system. In this generalized Principle of Least Action, the equations of motion are obtained from the Euler-Lagrange equation by taking the limit in which the time interval defining the action goes to zero. Finally, as an example of application, we formulate for the first time a physical Lagrangian for the problem of the accelerated point charge.
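For orientation, a Riewe-type action containing a fractional derivative of order $\alpha$ and the corresponding Euler-Lagrange equation can be written schematically as follows; the notation is assumed for illustration and is not copied from the work itself:

\[
S[q] = \int_{a}^{b} L\big(q, \dot{q}, {}_{a}D_{t}^{\alpha} q, t\big)\, dt,
\qquad
\frac{\partial L}{\partial q}
- \frac{d}{dt}\frac{\partial L}{\partial \dot{q}}
+ {}_{t}D_{b}^{\alpha}\, \frac{\partial L}{\partial\,({}_{a}D_{t}^{\alpha} q)} = 0 .
\]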
Abstract:
Successful implementation of fault-tolerant quantum computation on a system of qubits places severe demands on the hardware used to control the many-qubit state. It is known that an accuracy threshold $P_a$ exists for any quantum gate that is to be used for such a computation to be able to continue for an unlimited number of steps. Specifically, the error probability $P_e$ for such a gate must fall below the accuracy threshold: $P_e < P_a$. Estimates of $P_a$ vary widely, though $P_a \sim 10^{-4}$ has emerged as a challenging target for hardware designers. I present a theoretical framework based on neighboring optimal control that takes as input a good quantum gate and returns a new gate with better performance. I illustrate this approach by applying it to a universal set of quantum gates produced using non-adiabatic rapid passage. Performance improvements are substantial compared to the original (unimproved) gates, both for ideal and non-ideal controls. Under suitable conditions detailed below, all gate error probabilities fall by 1 to 4 orders of magnitude below the target threshold of $10^{-4}$. After applying the neighboring optimal control theory to improve the performance of quantum gates in a universal set, I further apply the general control theory in a two-step procedure for fault-tolerant logical state preparation, which I illustrate by preparing a logical Bell state fault-tolerantly. The two-step preparation procedure is as follows. Step 1 provides a one-shot procedure using neighboring optimal control theory to prepare a state of physical qubits that is a high-fidelity approximation to the Bell state $|\beta_{01}\rangle = \frac{1}{\sqrt{2}}(|01\rangle + |10\rangle)$. I show that for ideal (non-ideal) control, an approximate $|\beta_{01}\rangle$ state can be prepared with error probability $\epsilon \sim 10^{-6}$ ($10^{-5}$) using one-shot local operations. Step 2 then takes a block of $p$ pairs of physical qubits, each prepared in the $|\beta_{01}\rangle$ state using Step 1, and fault-tolerantly prepares the logical Bell state for the $C_4$ quantum error detection code.
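As a small numerical aside (not part of the thesis), the target Bell state and the error probability of an approximate preparation can be expressed compactly; the "noisy" density matrix below is a hypothetical example of how $\epsilon = 1 - F$ would be evaluated, with the mixing level `p` chosen arbitrarily.

```python
import numpy as np

# Target Bell state |beta_01> = (|01> + |10>)/sqrt(2) in the two-qubit
# computational basis ordered |00>, |01>, |10>, |11>.
beta_01 = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)

def preparation_error(rho, target):
    """Error probability eps = 1 - <target|rho|target> for a prepared state rho."""
    fidelity = np.real(np.vdot(target, rho @ target))
    return 1.0 - fidelity

# Hypothetical imperfect preparation: mostly |beta_01>, slightly depolarised.
p = 1e-5                      # illustrative mixing level, not a thesis result
rho_ideal = np.outer(beta_01, beta_01.conj())
rho_noisy = (1 - p) * rho_ideal + p * np.eye(4) / 4

print(f"eps = {preparation_error(rho_noisy, beta_01):.2e}")
```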
Abstract:
In Part 1 of this thesis, we propose that biochemical cooperativity is a fundamentally non-ideal process. We show quantal effects underlying biochemical cooperativity and highlight apparent ergodic breaking at small volumes. The apparent ergodic breaking manifests itself in a divergence of deterministic and stochastic models. We further predict that this divergence of deterministic and stochastic results is a failure of the deterministic methods rather than an issue of stochastic simulations.
Ergodic breaking at small volumes may allow these molecular complexes to function as switches to a greater degree than has previously been shown. We propose that this ergodic breaking is a phenomenon the synapse might exploit to differentiate Ca$^{2+}$ signaling that would lead to either the strengthening or weakening of a synapse. Techniques such as lattice-based statistics and rule-based modeling allow us to directly confront this non-ideality. A natural next step towards understanding the chemical physics that underlies these processes is to consider \textit{in silico} methods, specifically atomistic simulation methods, that might augment our modeling efforts.
In the second part of this thesis, we use evolutionary algorithms to optimize \textit{in silico} methods that might be used to describe biochemical processes at the subcellular and molecular levels. While we have applied evolutionary algorithms to several methods, this thesis will focus on the optimization of charge equilibration methods. Accurate charges are essential to understanding the electrostatic interactions that are involved in ligand binding, as frequently discussed in the first part of this thesis.
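As a minimal sketch of the deterministic-versus-stochastic comparison invoked in the first part above, the code below runs an exact stochastic simulation (Gillespie SSA) of a simple reversible dimerisation at small copy number and sets it against the matching deterministic (mass-action) steady state; the reaction and rate constants are illustrative placeholders, far simpler than the cooperative Ca$^{2+}$-binding models studied in the thesis, and the small discrepancy shown here is only a toy version of the divergence discussed above.

```python
import numpy as np

def gillespie_dimerisation(n_A, n_A2, kf, kr, t_end, rng):
    """Exact SSA of 2A <-> A2 with discrete copy numbers (a sketch)."""
    t = 0.0
    while t < t_end:
        a1 = kf * n_A * (n_A - 1) / 2.0   # dimerisation propensity
        a2 = kr * n_A2                    # dissociation propensity
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)
        if rng.random() < a1 / a0:
            n_A -= 2; n_A2 += 1
        else:
            n_A += 2; n_A2 -= 1
    return n_A2

rng = np.random.default_rng(1)
kf, kr, n0, t_end = 0.1, 1.0, 10, 50.0     # illustrative rates and copy number
stoch_mean = np.mean([gillespie_dimerisation(n0, 0, kf, kr, t_end, rng)
                      for _ in range(2000)])

# Deterministic steady state matched to the same constants:
# (kf/2) a^2 = kr (n0 - a)/2, with a + 2*A2 = n0.
a = (-kr + np.sqrt(kr**2 + 4 * kf * kr * n0)) / (2 * kf)
det_A2 = (n0 - a) / 2.0
print(f"stochastic mean A2 = {stoch_mean:.2f}, deterministic A2 = {det_A2:.2f}")
```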
Abstract:
Coprime and nested sampling are well known deterministic sampling techniques that operate at rates significantly lower than the Nyquist rate, and yet allow perfect reconstruction of the spectra of wide sense stationary signals. However, theoretical guarantees for these samplers assume ideal conditions such as synchronous sampling and the ability to compute statistical expectations exactly. This thesis studies the performance of coprime and nested samplers in the spatial and temporal domains when these assumptions are violated. In the spatial domain, the robustness of these samplers is studied by considering arrays with perturbed sensor locations (with unknown perturbations). Simplified expressions for the Fisher information matrix for perturbed coprime and nested arrays are derived, which explicitly highlight the role of the co-array. It is shown that even in the presence of perturbations, it is possible to resolve $O(M^2)$ sources under appropriate conditions on the size of the grid. The assumption of small perturbations leads to a novel ``bi-affine'' model in terms of source powers and perturbations. The redundancies in the co-array are then exploited to eliminate the nuisance perturbation variable and reduce the bi-affine problem to a linear underdetermined (sparse) problem in the source powers. This thesis also studies the robustness of coprime sampling to a finite number of samples and to sampling jitter, by analyzing their effects on the quality of the estimated autocorrelation sequence. A variety of bounds on the error introduced by such non-ideal sampling schemes are computed by considering a statistical model for the perturbation. They indicate that coprime sampling leads to stable estimation of the autocorrelation sequence in the presence of small perturbations. Under appropriate assumptions on the distribution of the WSS signals, sharp bounds on the estimation error are established which indicate that the error decays exponentially with the number of samples. The theoretical claims are supported by extensive numerical experiments.
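As background to the co-array argument, the sketch below shows how a coprime pair $(M, N)$ of sub-samplers yields $O(MN)$ distinct lags in the difference co-array from only $O(M + N)$ physical elements. The "extended" coprime geometry used here (one sub-array of $M$ elements spaced $N$ apart, one of $2N-1$ elements spaced $M$ apart) is a common variant and may differ in detail from the thesis; the pair $(3, 5)$ is just an example.

```python
import numpy as np

def coprime_positions(M, N):
    """Element positions of an extended coprime array built from coprime M, N."""
    first = N * np.arange(M)            # 0, N, 2N, ..., (M-1)N
    second = M * np.arange(1, 2 * N)    # M, 2M, ..., (2N-1)M
    return np.union1d(first, second)

def difference_coarray(positions):
    """All pairwise differences (lags) reachable by the physical array."""
    diffs = positions[:, None] - positions[None, :]
    return np.unique(diffs)

M, N = 3, 5                              # example coprime pair
pos = coprime_positions(M, N)
lags = difference_coarray(pos)
print("physical elements:", len(pos))
print("distinct non-negative lags:", np.count_nonzero(lags >= 0))
```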
Abstract:
The electrical and optical coupling between subcells in a multijunction solar cell affects its external quantum efficiency (EQE) measurement. In this study, we show how a low breakdown voltage of a component subcell impacts the EQE determination of a multijunction solar cell and demands the use of a finely adjusted external voltage bias. The optimum voltage bias for the EQE measurement of a Ge subcell in two different GaInP/GaInAs/Ge triple-junction solar cells is determined both by sweeping the external voltage bias and by tracing the I–V curve under the same light bias conditions applied during the EQE measurement. It is shown that the I–V curve gives rapid and valuable information about the adequate light and voltage bias needed, and also helps to detect problems associated with non-ideal I–V curves that might affect the EQE measurement. The results also show that, if a non-optimum voltage bias is applied, a measurement artifact can result. Only when the problems associated with a non-ideal I–V curve and/or a low breakdown voltage have been ruled out can any remaining measurement artifacts be attributed to other effects, such as luminescent coupling between subcells.
Abstract:
Because of their relative simplicity and the barriers to gene flow, islands are ideal systems for studying the distribution of biodiversity. However, the knowledge that can be extracted from this peculiar ecosystem regarding the epidemiology of economically relevant diseases has not been widely addressed. We used information available in the scientific literature for 10 Old World islands or archipelagos, together with original data on Sicily, to gain new insights into the epidemiology of the Mycobacterium tuberculosis complex (MTC). We explored three non-exclusive working hypotheses on the processes modulating bovine tuberculosis (bTB) herd prevalence in cattle and MTC strain diversity: insularity, hosts and trade. Results suggest that bTB herd prevalence was positively correlated with island size, the presence of wild hosts, and the number of imported cattle, but neither with isolation nor with cattle density. MTC strain diversity was positively related to cattle bTB prevalence, the presence of wild hosts and the number of imported cattle, but not to island size, isolation, or cattle density. The three most common spoligotype patterns coincided between Sicily and mainland Italy. However, in Sicily these common patterns showed a clearer dominance than on the Italian mainland, and seven of 19 patterns (37%) found in Sicily had not been reported from continental Italy. Strain patterns were not spatially clustered in Sicily. We were able to infer several aspects of MTC epidemiology and control on islands, and thus in fragmented host and pathogen populations. Our results point out the relevance of the intensity of cattle trade networks in the epidemiology of MTC, and suggest that eradication will prove more difficult with increasing size of the island and its environmental complexity, mainly in terms of the diversity of suitable domestic and wild MTC hosts.
Abstract:
Various load compensation schemes proposed in the literature assume that the voltage source at the point of common coupling (PCC) is stiff. In practice, however, the load is remote from a distribution substation and is supplied by a feeder. In the presence of feeder impedance, the PWM inverter switchings distort both the PCC voltage and the source currents. In this paper, load compensation with such a non-stiff source is considered. A switching control of the voltage source inverter (VSI) based on state feedback is used for load compensation with a non-stiff source. The design of the state feedback controller requires careful consideration in the choice of the gain matrix and in the generation of the reference quantities. These aspects are considered in this paper. Detailed simulation and experimental results are given to support the control design.
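The paper's own gain-selection procedure is not reproduced here; as a generic sketch, one common way to choose a state feedback gain for a linearised inverter-filter model is an LQR design via the continuous-time Riccati equation. The state-space matrices, filter parameters, and weights below are placeholders, not values from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """State feedback gain K (u = -K x) from the continuous-time Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Placeholder 2nd-order model of an inverter output filter (not from the paper):
# states x = [filter-capacitor voltage, filter-inductor current].
L_f, C_f, R_f = 5e-3, 50e-6, 0.1
A = np.array([[0.0, 1.0 / C_f],
              [-1.0 / L_f, -R_f / L_f]])
B = np.array([[0.0],
              [1.0 / L_f]])
Q = np.diag([1.0, 0.1])        # state weights: illustrative choices
R = np.array([[1e-3]])         # control weight: illustrative choice

K = lqr_gain(A, B, Q, R)
print("state feedback gain K =", K)
```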
Abstract:
Modern society has come to expect electrical energy on demand, while many of the facilities in power systems are aging beyond repair and maintenance. The risk of failure increases as equipment ages and can pose serious consequences for the continuity of electricity supply. As the equipment used in high voltage power networks is very expensive, it may not be economically feasible to purchase and store spares in a warehouse for extended periods of time. On the other hand, there is normally a significant lead time between ordering equipment and receiving it. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provision of spares in bulk supply networks, and can be of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification based on selected failure criteria. The problem is very complex because of the need to include detailed modelling and operation of substation and sub-transmission equipment using network flow evaluation, and to consider multiple levels of component failures. In this thesis, a new model for aging equipment is developed that combines the standard treatment of random failures with a specific model for aging failures. This technique is applied to examine the impact of aging equipment on the reliability of bulk supply loads and of consumers in the distribution network over a defined range of planning years. The power system risk indices depend on many factors, such as the actual physical network configuration and operation, the aging condition of the equipment, and the relevant constraints. The impact and importance of equipment reliability on power system risk indices in a network with aging facilities contain valuable information for utilities to better understand network performance and the weak links in the system. In this thesis, algorithms are developed to measure the contribution of individual equipment to the power system risk indices, as part of the novel risk analysis tool. A new cost-worth approach is also developed that supports early planning decisions on replacement activities for non-repairable aging components, in order to maintain an economically acceptable level of system reliability. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicitly considering the effect of equipment entering a period of increased risk of a non-repairable failure.
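The combination of random (constant-hazard) and aging (increasing-hazard) failures mentioned above is often modelled, generically, with an exponential term plus a Weibull term; the sketch below, with invented parameters, shows how the probability of surviving a further planning year could be computed for an aged component. This is a textbook-style reliability calculation offered for illustration only, not the specific model developed in the thesis.

```python
import math

def survival(t, lam, beta, eta):
    """Survival to age t with a constant (random) hazard lam plus a
    Weibull aging hazard with shape beta and scale eta (illustrative model)."""
    return math.exp(-lam * t - (t / eta) ** beta)

def prob_fail_next_year(age, lam=0.01, beta=3.0, eta=40.0):
    """Conditional probability that a component of a given age fails
    within the next year, given that it has survived so far."""
    return 1.0 - survival(age + 1, lam, beta, eta) / survival(age, lam, beta, eta)

for age in (10, 30, 50):
    print(f"age {age:2d} y: P(fail within next year) = "
          f"{prob_fail_next_year(age):.3f}")
```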
Abstract:
Fire safety of buildings has been recognised as very important by the building industry and the community at large. Traditionally, increased fire rating is provided by simply adding more plasterboards to light gauge steel frame (LSF) walls, which is inefficient. Many research studies have been undertaken to investigate the thermal behaviour of traditional LSF stud wall systems under standard fire conditions. However, no research has been undertaken on the thermal behaviour of LSF stud walls using the recently proposed composite panel. Extensive fire testing of both non-load bearing and load bearing wall panels was conducted in this research based on the standard time-temperature curve in AS1530.4. Three groups of LSF wall specimens were tested with no insulation, cavity insulation and the new composite panel based on an external insulation layer between plasterboards. This paper presents the details of this experimental study into the thermal performance of non-load bearing walls lined with various configurations of plasterboard and insulation. Extensive descriptive and numerical results of the tested non-load bearing wall panels given in this paper provide a thorough understanding of their thermal behaviour, and valuable time-temperature data that can be used to validate numerical models. Test results showed that the innovative composite stud wall systems outperformed the traditional stud wall systems in terms of their thermal performance, giving a much higher fire rating.
Abstract:
In many modeling situations in which parameter values can only be estimated or are subject to noise, the appropriate mathematical representation is a stochastic ordinary differential equation (SODE). However, unlike the deterministic case in which there are suites of sophisticated numerical methods, numerical methods for SODEs are much less sophisticated. Until a recent paper by K. Burrage and P.M. Burrage (1996), the highest strong order of a stochastic Runge-Kutta method was one. But K. Burrage and P.M. Burrage (1996) showed that by including additional random variable terms representing approximations to the higher order Stratonovich (or Ito) integrals, higher order methods could be constructed. However, this analysis applied only to the one Wiener process case. In this paper, it will be shown that in the multiple Wiener process case all known stochastic Runge-Kutta methods can suffer a severe order reduction if there is non-commutativity between the functions associated with the Wiener processes. Importantly, however, it is also suggested how this order can be repaired if certain commutator operators are included in the Runge-Kutta formulation. (C) 1998 Elsevier Science B.V. and IMACS. All rights reserved.
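The commutativity condition referred to above can be checked directly: for an SDE $dX = f(X)\,dt + g_1(X)\,dW_1 + g_2(X)\,dW_2$, the order reduction is tied to the Lie bracket $[g_1, g_2](x) = J_{g_2}(x)\,g_1(x) - J_{g_1}(x)\,g_2(x)$ being nonzero. The sketch below evaluates this bracket by finite differences for an example pair of diffusion functions chosen for illustration (not taken from the paper).

```python
import numpy as np

def jacobian(g, x, eps=1e-6):
    """Finite-difference Jacobian of a vector field g at x."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (g(x + dx) - g(x - dx)) / (2 * eps)
    return J

def lie_bracket(g1, g2, x):
    """[g1, g2](x) = J_g2(x) g1(x) - J_g1(x) g2(x); identically zero means the
    diffusion fields commute and no order reduction is expected."""
    return jacobian(g2, x) @ g1(x) - jacobian(g1, x) @ g2(x)

# Example (illustrative, not from the paper): two diffusion fields in R^2.
g1 = lambda x: np.array([x[1], 0.0])     # state-dependent noise terms
g2 = lambda x: np.array([0.0, x[0]])
x0 = np.array([1.0, 2.0])
print("Lie bracket at x0:", lie_bracket(g1, g2, x0))   # nonzero: non-commutative
```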
Abstract:
There are different ways to authenticate humans, which is an essential prerequisite for access control. The authentication process can be subdivided into three categories that rely on something someone i) knows (e.g. a password), and/or ii) has (e.g. a smart card), and/or iii) is (biometric features). Besides classical attacks on password solutions and the risk that identity-related objects can be stolen, traditional biometric solutions have their own disadvantages, such as the requirement of expensive devices and the risk of stolen bio-templates. Moreover, existing approaches usually perform authentication only once, at the beginning of a session. Non-intrusive and continuous monitoring of user activities emerges as a promising solution for hardening the authentication process, adding a further category: iii-2) how someone behaves. In recent years, various keystroke-dynamics behavior-based approaches have been published that are able to authenticate humans based on their typing behavior. The majority focus on so-called static text approaches, where users are requested to type a previously defined text. Relatively few techniques are based on free text approaches, which allow transparent monitoring of user activities and provide continuous verification. Unfortunately, only a few solutions are deployable in application environments under realistic conditions; unsolved problems include scalability, high response times and high error rates. The aim of this work is the development of behavior-based verification solutions. Our main requirement is to deploy these solutions under realistic conditions within existing environments in order to enable transparent, free-text-based continuous verification of active users with low error rates and response times.
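The thesis's actual verification algorithms are not described in the abstract; as a generic illustration of free-text keystroke dynamics, the sketch below builds per-digraph latency profiles from enrolment data and scores a new typing sample against them with a simple normalised-distance test. The feature choice, threshold, and data format are assumptions made for this example.

```python
import numpy as np
from collections import defaultdict

def digraph_profile(samples):
    """Mean/std of inter-key latency (ms) per digraph from enrolment samples.

    samples: iterable of (digraph, latency_ms) pairs, e.g. ("th", 112.0).
    """
    buckets = defaultdict(list)
    for digraph, latency in samples:
        buckets[digraph].append(latency)
    return {d: (np.mean(v), np.std(v) + 1e-6) for d, v in buckets.items()}

def verify(profile, session, threshold=2.0):
    """Accept the session if the mean absolute z-score of its digraph
    latencies against the enrolled profile is below the threshold.
    The threshold value is an illustrative assumption."""
    scores = [abs(lat - profile[d][0]) / profile[d][1]
              for d, lat in session if d in profile]
    return bool(scores) and float(np.mean(scores)) < threshold

enrol = [("th", 110), ("th", 120), ("he", 90), ("he", 95), ("er", 130), ("er", 125)]
profile = digraph_profile(enrol)
genuine = [("th", 115), ("he", 92), ("er", 128)]
impostor = [("th", 240), ("he", 200), ("er", 60)]
print("genuine accepted:", verify(profile, genuine))
print("impostor accepted:", verify(profile, impostor))
```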