994 results for Filtering theory
Abstract:
Multiexponential decays may contain time constants differing by several orders of magnitude. In such cases, uniform sampling results in very long records featuring a high degree of oversampling at the final part of the transient. Here, we analyze a nonlinear time-scale transformation that reduces the total number of samples with minimum signal distortion, achieving a substantial reduction in the computational cost of subsequent analyses. We propose a time-varying filter whose length is optimized for minimum mean square error.
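A minimal sketch of the idea, assuming a logarithmic time grid as the nonlinear transformation and an illustrative two-exponential decay (the paper's actual transformation and MMSE-optimized filter are not reproduced here):

```python
import numpy as np

# Hypothetical decay with time constants spanning three orders of magnitude
# (tau = 1e-3 s and 1 s); all values are illustrative assumptions.
fs = 100_000.0                          # uniform sampling rate (Hz)
t_uniform = np.arange(0.0, 5.0, 1.0 / fs)
y_uniform = 2.0 * np.exp(-t_uniform / 1e-3) + 1.0 * np.exp(-t_uniform / 1.0)

# Nonlinear (logarithmic) time-scale transformation: samples are dense at
# the start of the transient and sparse in the heavily oversampled tail.
n_log = 500
t_log = np.logspace(np.log10(t_uniform[1]), np.log10(t_uniform[-1]), n_log)
y_log = np.interp(t_log, t_uniform, y_uniform)

reduction = len(t_uniform) / len(t_log)  # factor saved by resampling
```

With these numbers the record shrinks by a factor of 1000 while the interpolated decay stays within interpolation error of the original.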
Abstract:
This paper proposes a spatial filtering technique for the reception of pilot-aided multirate multicode direct-sequence code division multiple access (DS/CDMA) systems such as wideband CDMA (WCDMA). These systems introduce a code-multiplexed pilot sequence that can be used for the estimation of the filter weights, but the presence of the traffic signal (transmitted at the same time as the pilot sequence) corrupts that estimation and degrades the performance of the filter significantly. This is caused by the fact that although the traffic and pilot signals are usually designed to be orthogonal, the frequency selectivity of the channel degrades this orthogonality at the receiving end. Here, we propose a semi-blind technique that eliminates the self-noise caused by the code-multiplexing of the pilot. We derive analytically the asymptotic performance of both the training-only and the semi-blind techniques and compare them with the actual simulated performance. It is shown, both analytically and via simulation, that high gains can be achieved with respect to training-only-based techniques.
Abstract:
In this paper we obtain the linear minimum mean square estimator (LMMSE) for discrete-time linear systems subject to state and measurement multiplicative noises and Markov jumps on the parameters. It is assumed that the state of the Markov chain is not available. Using geometric arguments, we obtain a Kalman-type filter conveniently implementable in a recurrence form. The stationary case is also studied, and the convergence of the error covariance matrix of the LMMSE to a stationary value is proved under the assumptions of mean square stability of the system and ergodicity of the associated Markov chain. It is shown that there exists a unique positive semi-definite solution to the stationary Riccati-like filter equation and, moreover, that this solution is the limit of the error covariance matrix of the LMMSE. The advantage of this scheme is that it is very easy to implement and all calculations can be performed offline.
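The offline computability claimed above can be illustrated with a plain stationary Kalman recursion (a standard time-invariant special case, not the Markov-jump LMMSE itself; the system matrices are illustrative assumptions):

```python
import numpy as np

# Illustrative stable system (values are assumptions, not from the paper).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)      # process noise covariance
R = np.array([[0.5]])    # measurement noise covariance

# Iterate the Riccati difference equation offline until the error
# covariance converges to its stationary value.
P = np.eye(2)
for _ in range(1000):
    S = C @ P @ C.T + R
    P = A @ P @ A.T + Q - A @ P @ C.T @ np.linalg.solve(S, C @ P @ A.T)

# Stationary filter gain, computable once and stored.
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
```

The fixed point of this iteration is the unique positive semi-definite solution of the stationary Riccati equation, so the gain never needs to be recomputed online.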
Abstract:
A particular property of the matched desired impulse response receiver is introduced in this paper, namely, the fact that full exploitation of the diversity is obtained with multiple beamformers when the channel is dispersive in both space and time. This particularity makes the receiver especially suitable for mobile and underwater communications. The new structure provides better performance than conventional and weighted VRAKE receivers, and a diversity gain with no need for additional radio frequency equipment. The baseband hardware needed for this new receiver may be obtained through reconfigurability of the RAKE architectures available at the base station. The proposed receiver is tested through simulations assuming the UTRA frequency-division-duplexing mode.
Abstract:
A general criterion for the design of adaptive systems in digital communications, called the statistical reference criterion, is proposed. The criterion is based on imposing the probability density function of the signal of interest at the output of the adaptive system, with its application to scenarios with highly powerful interferers being the main focus of this paper. Knowledge of the pdf of the wanted signal is used as a discriminator between signals, so that interferers with differing distributions are rejected by the algorithm. Its performance is studied over a range of scenarios. Equations for gradient-based coefficient updates are derived, and the relationship with other existing algorithms, such as the minimum variance and Wiener criteria, is examined.
Abstract:
This paper deals with the design of nonregenerative relaying transceivers in cooperative systems where channel state information (CSI) is available at the relay station. The conventional nonregenerative approach is amplify and forward (A&F), where the signal received at the relay is simply amplified and retransmitted. In this paper, we propose an alternative linear transceiver design for nonregenerative relaying (including the pure relaying and cooperative transmission cases), making proper use of CSI at the relay station. Specifically, we design the optimum linear filtering performed on the data to be forwarded at the relay. As the optimization criterion, we consider the maximization of mutual information (which provides an information rate for which reliable communication is possible) for a given available transmission power at the relay station. Three different levels of CSI can be considered at the relay station: only first-hop channel information (between the source and relay); first-hop and second-hop channel (between relay and destination) information; or a third situation where the relay may have complete cooperative channel information including all the links: the first- and second-hop channels and also the direct channel between source and destination. Although the latter is a less realistic situation, since it requires the destination to inform the relay station about the direct channel, it is useful as an upper benchmark. In this paper, we consider the last two cases of CSI. We compare the performance so obtained with the performance of the conventional A&F approach, and also with the performance of regenerative relays and of direct noncooperative transmission, for two particular cases: narrowband multiple-input multiple-output transceivers and wideband single-input single-output orthogonal frequency division multiplex transmissions.
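A rough numerical illustration of the mutual-information criterion (the generic log-det formula for Gaussian signaling; the channel matrices, SNR, and unit relay gain below are assumptions, and the paper's optimized relay filter is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 MIMO channels: H1 (source->relay), H2 (relay->destination).
H1 = rng.standard_normal((2, 2))
H2 = rng.standard_normal((2, 2))

def mutual_information(H, snr):
    """Gaussian-input mutual information log2 det(I + snr * H H^H), bits/use."""
    n = H.shape[0]
    return float(np.log2(np.linalg.det(np.eye(n) + snr * H @ H.conj().T)).real)

# Plain amplify-and-forward: the relay just scales by g, so the end-to-end
# channel is the cascade H2 (g H1); this is the baseline the paper improves on.
g = 1.0
I_af = mutual_information(H2 @ (g * H1), 10.0)
I_direct = mutual_information(H1, 10.0)
```

Replacing the scalar gain g with an optimized linear filter at the relay is precisely what raises this rate in the paper's design.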
Abstract:
A sieve plate distillation column has been constructed and interfaced to a minicomputer with the necessary instrumentation for dynamic, estimation and control studies, with special bearing on low-cost and noise-free instrumentation. A dynamic simulation of the column with a binary liquid system has been compiled using deterministic models that include fluid dynamics via Brambilla's equation for tray liquid holdup calculations. The simulation predictions have been tested experimentally under steady-state and transient conditions. The simulator's predictions of the tray temperatures have shown reasonably close agreement with the measured values under steady-state conditions and in the face of a step change in the feed rate. A method of extending linear filtering theory to highly nonlinear systems with very nonlinear measurement relationships has been proposed and tested by simulation on binary distillation. The simulation results have shown that the proposed methodology can overcome the typical instability problems associated with Kalman filters. Three extended Kalman filters have been formulated and tested by simulation. The filters have been used to refine a much simplified model sequentially and to estimate parameters such as the unmeasured feed composition using information from the column simulation. It is first assumed that corrupted tray composition measurements are made available to the filter; corrupted tray temperature measurements are then accessed instead. The simulation results have demonstrated the powerful capability of the Kalman filters to overcome the typical hardware problems associated with the operation of on-line analyzers in relation to distillation dynamics and control by, in effect, replacing them. A method of implementing estimator-aided feedforward (EAFF) control schemes has been proposed and tested by simulation on binary distillation.
The results have shown that the EAFF scheme provides much better control and energy conservation than the conventional feedback temperature control in the face of a sustained step change in the feed rate or multiple changes in the feed rate, composition and temperature. Further extensions of this work are recommended as regards simulation, estimation and EAFF control.
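The sequential refinement described above can be sketched with a generic extended Kalman filter step applied to a toy scalar system (the distillation model itself is not reproduced; the cubic measurement and all numerical values are illustrative assumptions):

```python
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of a generic extended Kalman filter."""
    # Predict: propagate the estimate through the nonlinear model and the
    # covariance through its linearization.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update: correct with the nonlinear measurement, linearized at x_pred.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy demo: a scalar "composition" observed only through a nonlinear proxy,
# loosely analogous to inferring composition from tray temperatures.
f = lambda x: np.array([0.5 * x[0] + 1.0])
h = lambda x: np.array([x[0] ** 3])
F_jac = lambda x: np.array([[0.5]])
H_jac = lambda x: np.array([[3.0 * x[0] ** 2]])
Q = np.array([[1e-4]])
R = np.array([[1e-2]])

x_true = np.array([1.0])
x_est, P = np.array([0.0]), np.eye(1)
for _ in range(30):
    x_true = f(x_true)        # noise-free truth keeps the demo deterministic
    x_est, P = ekf_step(x_est, P, h(x_true), f, h, F_jac, H_jac, Q, R)
```

Here the estimate converges to the system's fixed point at 2.0 despite starting from a wrong prior, which is the qualitative behavior the thesis relies on for replacing on-line analyzers.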
Abstract:
This thesis is a compilation of six papers that the author has written together with Alberto Lanconelli (chapters 3, 5 and 8) and Hyun-Jung Kim (chapter 7). The logical thread linking these chapters is the aim of analyzing and approximating the solutions of certain stochastic differential equations using the so-called Wick product as the basic tool. In the first chapter we present arguably the most important achievement of this thesis, namely the generalization to multiple dimensions of a Wick-Wong-Zakai approximation theorem proposed by Hu and Oksendal. By exploiting the relationship between the Wick product and the Malliavin derivative, we propose an original reduction method which allows us to approximate semi-linear systems of stochastic differential equations of the Itô type. Furthermore, in chapter 4 we present a non-trivial extension of the aforementioned results to the case in which the system of stochastic differential equations is driven by a multi-dimensional fractional Brownian motion with Hurst parameter greater than 1/2. In chapter 5 we employ our approach and present a "short time" approximation for the solution of the Zakai equation from non-linear filtering theory and provide an estimate of the speed of convergence. In chapters 6 and 7 we study some properties of the unique mild solution of the stochastic heat equation driven by spatial white noise of the Wick-Skorohod type. In particular, by means of our reduction method we obtain an alternative derivation of the Feynman-Kac representation for the solution, find its optimal Hölder regularity in time and space, and present a Feynman-Kac-type closed form for its spatial derivative. Chapter 8 treats a somewhat different topic: we investigate some probabilistic aspects of the unique global strong solution of a two-dimensional system of semi-linear stochastic differential equations describing a predator-prey model perturbed by Gaussian noise.
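As background for the role the Wick product plays here, its action on stochastic exponentials can be stated in two standard white-noise-analysis identities (quoted as background, not as results of the thesis). For jointly Gaussian zero-mean random variables X and Y, with the Wick exponential defined as

```latex
\exp^{\diamond}(X) := \exp\!\Big(X - \tfrac{1}{2}\,\mathbb{E}[X^{2}]\Big),
```

one has

```latex
\exp^{\diamond}(X)\diamond\exp^{\diamond}(Y) = \exp^{\diamond}(X+Y),
\qquad
\exp^{\diamond}(X)\cdot\exp^{\diamond}(Y) = \exp^{\diamond}(X+Y)\,e^{\mathbb{E}[XY]},
```

so the Wick product removes exactly the covariance correction $e^{\mathbb{E}[XY]}$ that the ordinary product introduces; this is the mechanism that makes it the natural multiplication for Wick-Wong-Zakai-type approximations.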
Abstract:
Includes bibliography.
Abstract:
The objective of this thesis is to discover "How are informal decisions reached by screeners when filtering out undesirable job applications?" Grounded theory techniques were employed in the field to observe and analyse informal decisions at the source, by screeners, in three distinct empirical studies. Whilst grounded theory provided the method for case and cross-case analysis, literature from academic and non-academic sources was evaluated and integrated to strengthen this research and create a foundation for understanding informal decisions. As informal decisions in early hiring processes have been under-researched, this thesis contributes to current knowledge in several ways. First, it introduces the Cycle of Employment, which enhances Robertson and Smith's (1993) Selection Paradigm through the integration of the stages that individuals occupy whilst seeking employment. Secondly, a general depiction of the Workflow of General Hiring Processes provides a template for practitioners to map and further develop their organisational processes. Finally, it highlights the emergence of the Locality Effect, a geographically driven heuristic and bias that can significantly impact recruitment and informal decisions. Although screeners make informal decisions using multiple variables, informal decisions are made in stages, as evidenced in the Cycle of Employment. Moreover, informal decisions can be erroneous as a result of majority and minority influence, the weighting of information, the injection of inappropriate information and criteria, and the influence of an assessor. This thesis considers these faults and develops a basic framework for understanding informal decisions from which future research can be launched.
Abstract:
Nitrogen-doped carbon nanotubes can provide reactive sites at their porphyrin-like defects. It is well known that many porphyrins host transition-metal atoms, and we have explored transition-metal atoms bonded to those porphyrin-like defects in N-doped carbon nanotubes. The electronic structure and transport are analyzed by means of a combination of density functional theory and recursive Green's function methods. The results show that the heme B-like defect (an iron atom bonded to four nitrogens) is the most stable and yields the highest polarization current for a single defect. With randomly positioned heme B defects in nanotubes a few hundred nanometers long, the polarization reaches nearly 100%, meaning they are effective spin filters. A disorder-induced magnetoresistance effect is also observed in those long nanotubes, and values as high as 20,000% are calculated with nonmagnetic electrodes.
Abstract:
This paper deals with the H(infinity) recursive estimation problem for general rectangular time-variant descriptor systems in discrete time. Riccati-equation-based recursions for filtered and predicted estimates are developed based on a data-fitting approach and game theory. In this approach, nature determines a state sequence seeking to maximize the estimation cost, whereas the estimator tries to find an estimate that brings the estimation cost to a minimum. A solution exists for a specified gamma-level if the resulting cost is positive. In order to present some computational alternatives to the H(infinity) filters developed, they are rewritten in information form along with the respective array algorithms.
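One common discrete-time H-infinity recursion of this game-theoretic flavor can be sketched as follows (a standard state-space form with an explicit gamma-level feasibility check, not the rectangular descriptor recursions derived in the paper; all system values are illustrative assumptions):

```python
import numpy as np

A = np.array([[0.95]])
C = np.array([[1.0]])
Q = np.array([[0.01]])   # process noise weight
R = np.array([[0.1]])    # measurement noise weight
S_bar = np.eye(1)        # weight on the estimation error
gamma = 2.0              # prescribed disturbance-attenuation level

P = np.eye(1)
feasible = True
for _ in range(200):
    # Existence condition: this matrix must stay positive definite for the
    # chosen gamma-level, otherwise no filter achieves that attenuation.
    M = np.linalg.inv(P) - S_bar / gamma**2 + C.T @ np.linalg.inv(R) @ C
    if np.any(np.linalg.eigvalsh(M) <= 0.0):
        feasible = False
        break
    inner = (np.eye(P.shape[0]) - (S_bar / gamma**2) @ P
             + C.T @ np.linalg.inv(R) @ C @ P)
    P = A @ P @ np.linalg.inv(inner) @ A.T + Q
```

Letting gamma grow removes the indefinite term and recovers the ordinary Kalman Riccati recursion, which mirrors the cost-positivity condition stated in the abstract.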
Abstract:
The synthesis of so-called photorealistic images requires numerically evaluating how light and matter physically interact, which, despite the impressive and ever-increasing computing power we enjoy today, is still far from a trivial task for our computers. This is largely due to the way we represent objects: to reproduce the subtle interactions that lead to the perception of detail, phenomenal amounts of geometry must be modeled. At render time, this complexity inexorably leads to heavy input/output requests which, coupled with the evaluation of complex filtering operators, make the computation times required to produce flawless images wholly unreasonable. To overcome these limitations under current constraints, a multiscale representation of matter must be derived. In this thesis, we construct such a representation for matter whose interface is a perturbed surface, a configuration generally built from height maps in computer graphics. We derive our representation in the context of microfacet theory (originally designed to model the reflectance of rough surfaces), which we first review and then extend in two steps. First, we make the theory applicable across several observation scales by generalizing it to non-centered microfacet statistics. Second, we derive an inversion procedure capable of reconstructing microfacet statistics from the reflectance responses of an arbitrary material in retroreflective configurations.
We show how this extended theory can be exploited to derive a general and efficient operator for the approximate downsampling of height maps that (a) preserves the anisotropy of light transport at any resolution, (b) can be applied before rendering and stored in MIP maps to drastically reduce the number of input/output requests, and (c) considerably simplifies per-pixel filtering operations, all of which leads to shorter rendering times. To validate and demonstrate the effectiveness of our operator, we synthesize antialiased photorealistic images and compare them to reference images. In addition, we provide a complete C++ implementation throughout the dissertation to facilitate the reproduction of the results obtained. We conclude with a discussion of the limitations of our approach, as well as the remaining obstacles to deriving an even more general multiscale representation of matter.
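A small sketch of the MIP-mapped moment idea (the LEAN-style slope moments and 2x2 averaging here are assumptions for illustration, not the thesis's exact operator, and Python stands in for its C++ implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
H = rng.standard_normal((64, 64)) * 0.1     # hypothetical height map

gx, gy = np.gradient(H)                     # per-texel slopes
# Store first and second slope moments: averaging them is linear, which is
# exactly what makes them safe to prefilter into MIP levels.
moments = np.stack([gx, gy, gx * gx, gy * gy, gx * gy])

def downsample(m):
    """Average 2x2 texel blocks of each moment channel."""
    return 0.25 * (m[:, ::2, ::2] + m[:, 1::2, ::2]
                   + m[:, ::2, 1::2] + m[:, 1::2, 1::2])

mips = [moments]
while mips[-1].shape[-1] > 1:
    mips.append(downsample(mips[-1]))

# Slope variance (surface roughness) stays recoverable at any level.
top = mips[-1][:, 0, 0]
var_x = top[2] - top[0] ** 2                # E[gx^2] - E[gx]^2
```

Because the stored quantities are moments rather than heights, a renderer can fetch a single coarse texel instead of thousands of fine ones, which is the I/O reduction the thesis targets.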
Abstract:
A quasi-optical de-embedding technique for characterizing waveguides is demonstrated using wide-band time-resolved terahertz spectroscopy. A transfer-function representation is adopted for the description of the signal at the input and output ports of the waveguides. The time-domain responses were discretized, and the waveguide transfer function was obtained through a parametric approach in the z-domain after describing the system with an AutoRegressive with eXogenous input (ARX) model, as well as with a state-space model. Prior to the identification procedure, filtering was performed in the wavelet domain to minimize both the signal distortion and the noise propagating in the ARX and subspace models. The optimal filtering procedure used in the wavelet domain for the recorded time-domain signatures is described in detail. The effect of filtering prior to the identification procedures is elucidated with the aid of pole-zero diagrams. Models derived from measurements of terahertz transients in a precision WR-8 waveguide adjustable short are presented.
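ARX identification by least squares can be sketched as follows (the ARX(2,2) system and noise-free data are assumptions for illustration, not the measured terahertz signatures):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated input/output pair standing in for the waveguide port signals;
# the "true" system below is an assumed ARX(2,2) example.
a_true = [0.6, -0.2]        # autoregressive coefficients
b_true = [1.0, 0.5]         # exogenous-input coefficients
u = rng.standard_normal(400)
y = np.zeros(400)
for k in range(2, 400):
    y[k] = (a_true[0] * y[k-1] + a_true[1] * y[k-2]
            + b_true[0] * u[k-1] + b_true[1] * u[k-2])

# ARX identification: stack lagged outputs and inputs into a regressor
# matrix and solve for the coefficients by least squares.
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
```

With noisy data the same normal equations propagate the measurement noise into the estimated poles and zeros, which is why the abstract's wavelet-domain prefiltering matters before this step.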