994 results for Filtering theory


Relevance: 30.00%

Abstract:

In the last few years a state-space formulation has been introduced into self-tuning control. This has not only allowed a wider choice of possible control actions, but has also provided insight into the theory underlying, and hidden by, the polynomial description. This paper considers many of the self-tuning algorithms, both state-space and polynomial, presently in use, and by starting from first principles develops the observers which are, effectively, used in each case. At any specific time instant the state estimator can be regarded as taking one of two forms. In the first case the most recently available output measurement is excluded, and here an optimal and conditionally stable observer is obtained. In the second case the present output signal is included, and here it is shown that although the observer is once again conditionally stable, it is no longer optimal. This result is of significance, as many of the popular self-tuning controllers lie in the second, rather than the first, category.
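
Where the abstract contrasts the two estimator forms, a minimal sketch may help. The Python fragment below shows a generic predictor-type observer (newest output excluded) next to a current-type observer (newest output included) for a linear system x[k+1] = A x[k] + B u[k], y[k] = C x[k]; the matrices and gains are placeholders, not the paper's derivations.

```python
import numpy as np

def predictor_observer(A, B, C, L, x_hat, u, y):
    """Form 1: the most recent output is excluded; the estimate for k+1
    is corrected only with the innovation already available at time k."""
    return A @ x_hat + B @ u + L @ (y - C @ x_hat)

def current_observer(A, B, C, K, x_hat, u, y_next):
    """Form 2: the present output y[k+1] is folded into the estimate,
    as many popular self-tuning controllers effectively do."""
    x_pred = A @ x_hat + B @ u                  # time update
    return x_pred + K @ (y_next - C @ x_pred)   # measurement update using y[k+1]
```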

Relevance: 30.00%

Abstract:

Traditional mathematical tools, like Fourier analysis, have proven to be efficient when analyzing steady-state distortions; however, the growing use of electronically controlled loads and the new dynamics of industrial environment signals have suggested the need for a more powerful tool to analyze non-stationary distortions, overcoming the limitations of frequency-domain techniques. Wavelet theory provides a new approach to harmonic analysis, focusing on the decomposition of a signal into non-sinusoidal components that are translated and scaled in time, generating a time-frequency basis. The correct choice of the waveshape used in the decomposition is very important and is discussed in this work. A brief theoretical introduction to the wavelet transform is presented, and some cases (practical and simulated) are discussed. Distortions commonly found in industrial environments, such as the current waveform of a switched-mode power supply and the input phase voltage waveform of a motor fed by an inverter, are analyzed using wavelet theory. Applications such as extracting the fundamental frequency of a non-sinusoidal current signal, or using the ability of compact representation to detect non-repetitive disturbances, are presented.
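
As a rough illustration of the kind of decomposition described (not the paper's own cases), the sketch below uses the PyWavelets library to decompose a 60 Hz waveform carrying a short non-repetitive disturbance; the sampling rate, wavelet choice ('db4') and disturbance are all assumptions.

```python
import numpy as np
import pywt  # PyWavelets

fs = 3840                                  # 64 samples per 60 Hz cycle
t = np.arange(0, 0.2, 1 / fs)
signal = np.sin(2 * np.pi * 60 * t)        # fundamental component
signal[400:410] += 0.5                     # short, non-repetitive disturbance

# Multilevel wavelet decomposition: approximation tracks the fundamental,
# detail coefficients localize the disturbance in time.
cA4, cD4, cD3, cD2, cD1 = pywt.wavedec(signal, 'db4', level=4)
print('disturbance index in finest details:', np.argmax(np.abs(cD1)))
```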

Relevance: 30.00%

Abstract:

A correction procedure based on digital signal processing theory is proposed to smooth the numeric oscillations in electromagnetic transient simulation results from transmission line modeling based on an equivalent representation by lumped parameters. The proposed improvement to this well-known line representation is carried out with a Finite Impulse Response (FIR) digital filter used to exclude the high-frequency components associated with the spurious numeric oscillations. To prove the efficacy of this correction method, a well-established frequency-dependent line representation using state equations is modeled with an FIR filter included in the model. The results obtained from the state-space model with and without the FIR filtering are compared with the results simulated by a line model based on distributed parameters and inverse transforms. Finally, the line model integrated with the FIR filtering is also tested and validated based on simulations that include nonlinear and time-variable elements.
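
A minimal sketch of the correction idea, under assumed parameters (the cutoff, filter order and synthetic transient are illustrative, not the paper's): a linear-phase low-pass FIR filter, applied with zero-phase filtering, strips the high-frequency oscillation while leaving the underlying transient intact.

```python
import numpy as np
from scipy import signal

fs = 1.0e6                                        # simulation sampling rate, Hz
t = np.arange(0, 5e-3, 1 / fs)
transient = 1 - np.exp(-t / 5e-4)                 # smooth step-like response
oscillation = 0.1 * np.sin(2 * np.pi * 2e5 * t)   # spurious numeric oscillation
raw = transient + oscillation

b = signal.firwin(numtaps=101, cutoff=5e4, fs=fs)  # linear-phase low-pass FIR
smoothed = signal.filtfilt(b, [1.0], raw)          # zero-phase: no filter delay
```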

Relevance: 30.00%

Abstract:

The Internet of Things (IoT) is the next industrial revolution: we will interact naturally with real and virtual devices as a key part of our daily life. This technology shift is expected to be greater than the Web and Mobile combined. As extremely different technologies are needed to build connected devices, the Internet of Things field is a junction between electronics, telecommunications and software engineering. Internet of Things application development happens in silos, often using proprietary and closed communication protocols. There is a common belief that only by solving the interoperability problem can we have a real Internet of Things. After a deep analysis of the IoT protocols, we identified a set of primitives for IoT applications. We argue that each IoT protocol can be expressed in terms of those primitives, thus solving the interoperability problem at the application protocol level. Moreover, the primitives are network and transport independent and make no assumptions in that regard. This dissertation presents our implementation of an IoT platform: the Ponte project. Privacy issues follow the rise of the Internet of Things: it is clear that the IoT must ensure resilience to attacks, data authentication, access control and client privacy. We argue that it is not possible to solve the privacy issue without solving the interoperability problem: enforcing privacy rules implies the need to limit and filter the data delivery process. However, filtering data requires knowledge of the format and the semantics of the data: after an analysis of the possible data formats and representations for the IoT, we identify JSON-LD and the Semantic Web as the best solution for IoT applications. Then, this dissertation presents our approach to increasing the throughput of filtering semantic data by a factor of ten.
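
A hypothetical sketch of the primitives idea (the names below are illustrative, not Ponte's actual API): a transport-agnostic core exposing publish, subscribe and retrieve operations, onto which MQTT, CoAP or HTTP verbs could be mapped by protocol front-ends.

```python
from typing import Callable, Dict, List

class Broker:
    """Network/transport-agnostic core exposing shared IoT primitives."""
    def __init__(self) -> None:
        self.store: Dict[str, bytes] = {}
        self.subs: Dict[str, List[Callable[[bytes], None]]] = {}

    def publish(self, topic: str, payload: bytes) -> None:
        # e.g. MQTT PUBLISH, CoAP PUT/POST, HTTP PUT
        self.store[topic] = payload
        for cb in self.subs.get(topic, []):
            cb(payload)

    def subscribe(self, topic: str, cb: Callable[[bytes], None]) -> None:
        # e.g. MQTT SUBSCRIBE, CoAP Observe
        self.subs.setdefault(topic, []).append(cb)

    def retrieve(self, topic: str) -> bytes:
        # e.g. CoAP GET, HTTP GET of the last retained value
        return self.store[topic]
```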

Relevance: 30.00%

Abstract:

Theory of plant succession predicts a temporal increase in the complexity of spatial community structure and of competitive interactions: initially random occurrences of early colonising species shift towards spatially and competitively structured plant associations in later successional stages. Here we use long-term data on early plant succession in a German post-mining area to disentangle the importance of random colonisation, habitat filtering, and competition in the temporal and spatial development of plant community structure. We used species co-occurrence analysis and a recently developed method for assessing competitive strength and hierarchies (transitive versus intransitive competitive orders) in multispecies communities. We found that species turnover decreased through time within interaction neighbourhoods, but increased through time outside interaction neighbourhoods. Successional change did not lead to modular community structure. After accounting for species richness effects, the strength of competitive interactions and the proportion of transitive competitive hierarchies increased through time. Although effects of habitat filtering were weak, random colonisation and subsequent competitive interactions had strong effects on community structure. Because competitive strength and transitivity were poorly correlated with soil characteristics, there was little evidence for context-dependent competitive strength associated with intransitive competitive hierarchies.
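
As an illustration of what transitive versus intransitive hierarchies mean computationally (the matrix below is invented, not the study's data): a pairwise competitive-outcome matrix is transitive exactly when it admits a linear ranking, i.e. it contains no three-species cycle.

```python
import numpy as np

# W[i, j] = 1 means species i outcompetes species j in pairwise contests.
W = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]])   # species 0 beats 1 beats 2: a strict pecking order

def has_three_cycle(W: np.ndarray) -> bool:
    """True if some triple i -> j -> k -> i exists (rock-paper-scissors),
    i.e. the competitive hierarchy is intransitive."""
    n = len(W)
    return any(W[i, j] and W[j, k] and W[k, i]
               for i in range(n) for j in range(n) for k in range(n))

print(has_three_cycle(W))   # False: fully transitive hierarchy
```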

Relevance: 30.00%

Abstract:

The paper describes two new transport layer (TCP) options and an expanded transport layer queuing strategy that facilitate three functions fundamental to the dispatching-based clustered service. One transport layer option has been developed to facilitate the use of client wait time data within the service request processing of the cluster. A second transport layer option has been developed to facilitate the redirection of service requests by the cluster dispatcher to the cluster processing member. An expanded transport layer service request queuing strategy facilitates the trust-based filtering of incoming service requests so that a graceful degradation of service delivery may be achieved during periods of overload, most dramatically evidenced by distributed denial of service attacks against the clustered service. We describe how these new options and queues have been implemented and successfully tested within the transport layer of the Linux kernel.
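
A conceptual sketch of the trust-based queuing strategy, written in Python rather than kernel C and with invented trust scores and capacities: under overload, a bounded queue retains the most-trusted pending requests and sheds the least-trusted first, degrading service gracefully.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Request:
    trust: float                        # higher = more trusted client
    payload: str = field(compare=False)

class TrustQueue:
    """Bounded queue that sheds the least-trusted requests under overload."""
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.heap = []                  # min-heap: least-trusted request at root

    def offer(self, req: Request) -> None:
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, req)
        elif req.trust > self.heap[0].trust:
            heapq.heapreplace(self.heap, req)  # evict the least-trusted request
        # else: drop the incoming request (graceful degradation)
```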

Relevance: 30.00%

Abstract:

Marr's work offered guidelines on how to investigate vision (the theory-algorithm-implementation distinction), as well as specific proposals on how vision is done. Many of the latter have inevitably been superseded, but the approach was inspirational and remains so. Marr saw the computational study of vision as tightly linked to psychophysics and neurophysiology, but the last twenty years have seen some weakening of that integration. Because feature detection is a key stage in early human vision, we have returned to basic questions about representation of edges at coarse and fine scales. We describe an explicit model in the spirit of the primal sketch, but tightly constrained by psychophysical data. Results from two tasks (location-marking and blur-matching) point strongly to the central role played by second-derivative operators, as proposed by Marr and Hildreth. Edge location and blur are evaluated by finding the location and scale of the Gaussian-derivative 'template' that best matches the second-derivative profile ('signature') of the edge. The system is scale-invariant, and accurately predicts blur-matching data for a wide variety of 1-D and 2-D images. By finding the best-fitting scale, it implements a form of local scale selection and circumvents the knotty problem of integrating filter outputs across scales. [Supported by BBSRC and the Wellcome Trust]
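
A sketch of the scale-selection step under simple assumptions (a 1-D blurred step edge; all parameters are illustrative, not the authors' fitted model): filtering with Gaussian second-derivative operators over a range of scales and weighting the peak response by scale makes the response greatest when the operator scale matches the edge blur, which is one way to read off blur from the best-fitting scale.

```python
import numpy as np
from scipy import ndimage

x = np.arange(-128, 128)
edge = ndimage.gaussian_filter1d((x > 0).astype(float), sigma=6.0)  # blur = 6

scales = np.arange(1.0, 16.0, 0.5)
# For a blurred step, the 2nd-derivative response is a Gaussian derivative of
# combined width sqrt(s^2 + blur^2); the scale-weighted peak s/(s^2 + blur^2)
# is maximized exactly at s = blur.
responses = [s * np.abs(ndimage.gaussian_filter1d(edge, s, order=2)).max()
             for s in scales]
print('estimated blur:', scales[int(np.argmax(responses))])        # -> 6.0
```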

Relevance: 30.00%

Abstract:

Removing noise from piecewise constant (PWC) signals is a challenging signal processing problem arising in many practical contexts. For example, in exploration geosciences, noisy drill hole records need to be separated into stratigraphic zones, and in biophysics, jumps between molecular dwell states have to be extracted from noisy fluorescence microscopy signals. Many PWC denoising methods exist, including total variation regularization, mean shift clustering, stepwise jump placement, running medians, convex clustering shrinkage and bilateral filtering; conventional linear signal processing methods are fundamentally unsuited. This paper (part I, the first of two) shows that most of these methods are associated with special cases of a generalized functional that is minimized to achieve PWC denoising. The minimizer can be obtained by diverse solver algorithms, including stepwise jump placement, convex programming, finite differences, iterated running medians, least angle regression, regularization path following and coordinate descent. In the second paper, part II, we introduce novel PWC denoising methods and compare them on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods with a useful role to play.
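
A minimal sketch of one method on the list, iterated running medians (the jump pattern, noise level and window size are invented): repeated median filtering drives a noisy signal toward a piecewise constant "root" signal while preserving the jump locations.

```python
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 2.0, 1.0, 3.0], 100)        # PWC signal with 3 jumps
noisy = clean + 0.3 * rng.standard_normal(clean.size)

x = noisy.copy()
for _ in range(10):                                  # iterate to a root signal
    x = medfilt(x, kernel_size=15)                   # odd window, jump-preserving
```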

Relevance: 30.00%

Abstract:

With the advent of peer-to-peer networks, and more importantly sensor networks, the desire to extract useful information from continuous and unbounded streams of data has become more prominent. For example, in tele-health applications, sensor-based data streaming systems are used to continuously and accurately monitor Alzheimer's patients and their surrounding environment. Typically, the requirements of such applications necessitate the cleaning and filtering of continuous, corrupted and incomplete data streams gathered wirelessly in dynamically varying conditions. Yet, existing data stream cleaning and filtering schemes are incapable of capturing the dynamics of the environment while simultaneously suppressing the losses and corruption introduced by uncertain environmental, hardware, and network conditions. Consequently, existing data cleaning and filtering paradigms are being challenged. This dissertation develops novel schemes for cleaning data streams received from a wireless sensor network operating under non-linear and dynamically varying conditions. The study establishes a paradigm for validating spatio-temporal associations among data sources to enhance data cleaning. To reduce the complexity of the validation process, the developed solution maps the requirements of the application onto a geometrical space and identifies the potential sensor nodes of interest. Additionally, this dissertation models a wireless sensor network data reduction system, showing that segregating the data adaptation and prediction processes augments the data reduction rates. The schemes presented in this study are evaluated using simulation and information theory concepts. The results demonstrate that dynamic conditions of the environment are better managed when validation is used for data cleaning. They also show that when a fast-convergent adaptation process is deployed, data reduction rates are significantly improved. Targeted applications of the developed methodology include machine health monitoring, tele-health, environment and habitat monitoring, intermodal transportation and homeland security.
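
A conceptual sketch of the spatio-temporal validation idea (the radius and tolerance thresholds are invented, and this is not the dissertation's algorithm): a reading is accepted only when it is consistent with recent values from spatially nearby nodes.

```python
import numpy as np

def validate(reading: float, node_xy: np.ndarray,
             neighbor_xy: np.ndarray, neighbor_vals: np.ndarray,
             radius: float = 10.0, tol: float = 2.0) -> bool:
    """Accept a sensor reading if it agrees with its spatial neighborhood."""
    dists = np.linalg.norm(neighbor_xy - node_xy, axis=1)
    nearby = neighbor_vals[dists <= radius]   # candidate nodes of interest
    if nearby.size == 0:
        return True                           # no spatial evidence either way
    return abs(reading - np.median(nearby)) <= tol
```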

Relevance: 30.00%

Abstract:

One challenge in data assimilation (DA) methods is how the error covariance of the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied to solve the problems of memory storage and huge matrix inversion required by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of a 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate the 30 171-dimensional model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme: an external program sends and receives information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal, only a few lines that facilitate input and output. Apart from being simple to set up, the approach can be employed even if the two codes are written in different programming languages, because the communication does not go through code. The non-intrusive approach accommodates parallel computing simply by telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized; nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² with an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for 7 days between May 16 and July 6, 2009. The effect of the organic matter was computationally eliminated to obtain the TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, a close match could not be obtained.
The use of multiple automatic stations with real-time data is important to alleviate the time-sparsity problem; with DA, this will help, for instance, in better understanding environmental hazard variables. We found that using a very large ensemble does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF and the ensemble-size limit on performance lead to the emerging area of Reduced Order Modelling (ROM). To save computational resources, ROM avoids running the full-blown model. When ROM is applied with the non-intrusive DA approach, it might result in a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
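
For orientation, a generic stochastic ensemble Kalman analysis step is sketched below. It shows the ensemble machinery the thesis builds on; it is not the VEnKF update itself (which obtains the estimate by variational optimization over the ensemble-estimated covariance), and all shapes and names are assumptions.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng=np.random.default_rng(0)):
    """Stochastic EnKF analysis step.
    X: n x N state ensemble; y: m observations;
    H: m x n observation operator; R: m x m observation error covariance."""
    n, N = X.shape
    Xp = X - X.mean(axis=1, keepdims=True)           # ensemble perturbations
    S = H @ Xp                                       # perturbations in obs space
    C = S @ S.T / (N - 1) + R                        # innovation covariance
    K = (Xp @ S.T / (N - 1)) @ np.linalg.inv(C)      # gain from ensemble statistics
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T  # perturbed obs
    return X + K @ (Y - H @ X)                       # updated ensemble
```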

Relevance: 20.00%

Abstract:

The models of teaching social sciences and clinical practice are insufficient for the needs of practical-reflective teaching of social sciences applied to health. The aim of this article is to reflect on the challenges and perspectives of social science education for health professionals. The important movement bringing together the social sciences and the field of health began in the 1950s; however, weak credentials still prevail. This is due to the low professional status of social scientists in health and the ill-defined position of social sciences professionals in the health field. It is also due to the scant importance attributed by students to the social sciences, the small number of professionals, and the colonization of the social sciences by the biomedical culture in the health field. Thus, professionals of the social sciences applied to health are still faced with the need to build an identity, even after six decades of presence in the field of health. This is because their ambivalent status has established them as a partial, incomplete and virtual presence, requiring a complex survival strategy in the nebulous area between the social sciences and health.

Relevance: 20.00%

Abstract:

Atomic charge transfer-counter polarization effects determine most of the infrared fundamental CH intensities of simple hydrocarbons: methane, ethylene, ethane, propyne, cyclopropane and allene. The quantum theory of atoms in molecules/charge-charge flux-dipole flux (QTAIM/CCFDF) model predicted the values of 30 CH intensities ranging from 0 to 123 km mol⁻¹ with a root mean square (rms) error of only 4.2 km mol⁻¹, without including a specific equilibrium atomic charge term. Sums of the contributions from terms involving charge flux and/or dipole flux averaged 20.3 km mol⁻¹, about ten times larger than the average charge contribution of 2.0 km mol⁻¹. The only notable exceptions are the CH stretching and bending intensities of acetylene and two of the propyne vibrations for hydrogens bound to sp-hybridized carbon atoms. Calculations were carried out at four quantum levels: MP2/6-311++G(3d,3p), MP2/cc-pVTZ, QCISD/6-311++G(3d,3p) and QCISD/cc-pVTZ. The results calculated at the QCISD level are the most accurate among the four, with root mean square errors of 4.7 and 5.0 km mol⁻¹ for the 6-311++G(3d,3p) and cc-pVTZ basis sets. These values are close to the estimated aggregate experimental error of the hydrocarbon intensities, 4.0 km mol⁻¹. The atomic charge transfer-counter polarization effect is much larger than the charge effect at all four quantum levels. Charge transfer-counter polarization effects are expected to also be important in vibrations of more polar molecules, for which equilibrium charge contributions can be large.
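
A hedged sketch of the partition the abstract refers to: in the QTAIM/CCFDF picture the molecular dipole moment is a sum of atomic charge and atomic dipole contributions, so its derivative along a normal coordinate Q_j splits into charge, charge flux and dipole flux terms, and the IR intensity scales with the squared derivative (where the charge transfer-counter polarization interplay enters through the flux terms and their cross terms).

```latex
\vec{p} = \sum_i q_i \vec{r}_i + \sum_i \vec{m}_i
\qquad\Longrightarrow\qquad
\frac{\partial \vec{p}}{\partial Q_j}
  = \underbrace{\sum_i q_i \frac{\partial \vec{r}_i}{\partial Q_j}}_{\text{charge}}
  + \underbrace{\sum_i \vec{r}_i \frac{\partial q_i}{\partial Q_j}}_{\text{charge flux}}
  + \underbrace{\sum_i \frac{\partial \vec{m}_i}{\partial Q_j}}_{\text{dipole flux}},
\qquad
A_j \propto \left|\frac{\partial \vec{p}}{\partial Q_j}\right|^2 .
```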

Relevance: 20.00%

Abstract:

Objective: to identify salient behavioral, normative, control and self-efficacy beliefs related to the behavior of adherence to oral antidiabetic agents, using the Theory of Planned Behavior. Method: cross-sectional, exploratory study with 17 diabetic patients in chronic use of oral antidiabetic medication and in outpatient follow-up. Individual interviews were recorded, transcribed and content-analyzed using pre-established categories. Results: behavioral beliefs concerning advantages and disadvantages of adhering to medication emerged, such as the possibility of avoiding complications from diabetes, preventing or delaying the use of insulin, and a perception of side effects. The children of patients and physicians are seen as important social references who influence medication adherence. The factors that facilitate adherence include access to free-of-cost medication and taking medications associated with temporal markers. On the other hand, a complex therapeutic regimen was considered a factor that hinders adherence. Understanding how to use medication and forgetfulness impact the perception of patients regarding their ability to adhere to oral antidiabetic agents. Conclusion: medication adherence is a complex behavior permeated by behavioral, normative, control and self-efficacy beliefs that should be taken into account when assessing determinants of behavior.

Relevance: 20.00%

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física