893 results for unknown-input estimation
Abstract:
A multiuser scheduling multiple-input multiple-output (MIMO) cognitive radio network (CRN) with space-time block coding (STBC) is considered in this paper, where one secondary base station (BS) communicates with one secondary user (SU) selected from K candidates. The joint impact on the outage performance of imperfect channel state information (CSI) in the BS → SU and BS → PU links, caused by channel estimation errors and feedback delay, is first investigated, where PU denotes the primary user. We obtain exact outage probability expressions for the considered network under the peak interference power IP at the PU and the maximum transmit power Pm at the BS, covering both perfect and imperfect CSI scenarios in the BS → SU and BS → PU links. In addition, asymptotic expressions of the outage probability in the high-SNR region are derived, from which we obtain several important insights into the system design; for example, multiuser diversity can be exploited only with perfect CSI in the BS → SU links, i.e., without channel estimation errors and feedback delay. Finally, simulation results confirm the correctness of our analysis.
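A minimal Monte Carlo sketch of the underlay power constraint and best-user scheduling described above is given below. It assumes single-antenna Rayleigh fading links and perfect CSI, so the STBC/MIMO structure and the imperfect-CSI effects analysed in the paper are not reproduced; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def outage_probability(K=4, Pm=10.0, Ip=1.0, N0=1.0, rate=1.0, trials=200_000):
    # |g|^2 (BS -> PU) and |h_k|^2 (BS -> SU_k) channel power gains, Rayleigh fading
    g = 0.5 * (rng.standard_normal(trials) ** 2 + rng.standard_normal(trials) ** 2)
    h = 0.5 * (rng.standard_normal((trials, K)) ** 2 + rng.standard_normal((trials, K)) ** 2)
    p_tx = np.minimum(Pm, Ip / g)              # peak-interference / maximum-power constraint
    snr = p_tx * h.max(axis=1) / N0            # best of the K secondary users is scheduled
    return np.mean(np.log2(1.0 + snr) < rate)  # outage: achievable rate falls below target

print(outage_probability())
```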
Abstract:
Bridge construction responds to the need for environmentally friendly design of motorways and facilitates passage through sensitive natural areas and the bypassing of urban areas. However, according to numerous research studies, bridge construction presents substantial budget overruns. It is therefore necessary, early in the planning process, for decision makers to have reliable estimates of the final cost based on previously constructed projects. At the same time, the current European financial crisis reduces the capital available for investment, and financial institutions are even less willing to finance transportation infrastructure. Consequently, it is even more necessary today to estimate the budget of high-cost construction projects, such as road bridges, with reasonable accuracy, in order for state funds to be invested with lower risk and the projects to be designed with the highest possible efficiency. In this paper, a Bill-of-Quantities (BoQ) estimation tool for road bridges is developed in order to support the decisions made at the preliminary planning and design stages of highways. Specifically, a Feed-Forward Artificial Neural Network (ANN) with a hidden layer of 10 neurons is trained to predict the superstructure material quantities (concrete, pre-stressed steel and reinforcing steel) using the width of the deck, the adjusted length of span or cantilever and the type of the bridge as input variables. The training dataset includes actual data from 68 recently constructed concrete motorway bridges in Greece. According to the relevant metrics, the developed model captures the complex interrelations in the dataset very well and demonstrates strong generalisation capability. Furthermore, it outperforms the linear regression models developed for the same dataset. The proposed cost estimation model therefore stands as a useful and reliable tool for the construction industry, as it enables planners to reach informed decisions for the technical and economic planning of concrete bridge projects from their early implementation stages.
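A hedged sketch of the kind of estimator described above is shown below, using a generic feed-forward network with one hidden layer of 10 neurons. The feature layout, toy data and training settings are assumptions for illustration only; they are not the paper's dataset or exact configuration.

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Feed-forward ANN with a single hidden layer of 10 neurons, mapping
# (deck width, adjusted span/cantilever length, bridge type) to the three
# superstructure quantities. Data values and settings are illustrative only.
X = np.array([[12.0, 35.0, 0],     # [deck_width_m, adjusted_span_m, bridge_type_id]
              [14.5, 45.0, 1],
              [11.0, 28.0, 0]])
y = np.array([[420.0, 9.5, 55.0],  # [concrete_m3, prestressing_steel_t, reinforcing_steel_t]
              [610.0, 14.2, 78.0],
              [330.0, 7.1, 41.0]])

model = Pipeline([
    ("prep", ColumnTransformer([("type", OneHotEncoder(), [2])], remainder=StandardScaler())),
    ("ann", MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)),
])
model.fit(X, y)
print(model.predict(np.array([[13.0, 40.0, 1]])))  # predicted quantities for a new bridge
```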
Abstract:
In this paper, we consider the uplink of a single-cell multi-user single-input multiple-output (MU-SIMO) system with in-phase and quadrature-phase imbalance (IQI). In particular, we investigate the effect of receive (RX) IQI on the performance of MU-SIMO systems with large antenna arrays employing maximum-ratio combining (MRC) receivers. In order to study how IQI affects channel estimation, we derive a new channel estimator for the IQI-impaired model and show that the higher the signal-to-noise ratio (SNR), the greater the impact of IQI on the spectral efficiency (SE). Moreover, a novel pilot-based joint estimator of the augmented MIMO channel matrix and the IQI coefficients is described, and a low-complexity IQI compensation scheme is then proposed that is based on the estimated IQI coefficients and is independent of the channel gain. The performance of the proposed compensation scheme is analytically evaluated by deriving a tractable approximation of the ergodic SE, assuming transmission over Rayleigh fading channels with large-scale fading. Furthermore, we investigate how many mobile stations (MSs) should be scheduled in massive multiple-input multiple-output (MIMO) systems with IQI and show that the highest SE loss occurs at the optimal operating point. Finally, by deriving asymptotic power scaling laws and proving that the SE loss due to IQI is asymptotically independent of the number of base station (BS) antennas, we show that massive MIMO is resilient to the effect of RX IQI.
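The sketch below illustrates the kind of RX IQI impairment discussed above using the common image-leakage model r_iqi = K1*r + K2*conj(r) per antenna, followed by MRC with a naively estimated channel. This is a textbook IQI model with assumed imbalance values, not necessarily the paper's exact formulation, and the proposed joint estimator and compensation scheme are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 64                                          # BS antennas
g_amp, phi = 1.05, np.deg2rad(3.0)              # assumed amplitude/phase imbalance
K1 = 0.5 * (1 + g_amp * np.exp(-1j * phi))      # direct-path IQI coefficient
K2 = 0.5 * (1 - g_amp * np.exp(1j * phi))       # image-leakage IQI coefficient

h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
snr = 10.0
noise = lambda: (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2 * snr)

pilot = 1.0 + 0j
r = h * pilot + noise()                         # ideal received pilot
r_pilot = K1 * r + K2 * np.conj(r)              # pilot observation impaired by RX IQI
h_hat = r_pilot / pilot                         # naive LS estimate, ignoring the IQI

s = (1 + 1j) / np.sqrt(2)                       # data symbol
r = h * s + noise()
r_data = K1 * r + K2 * np.conj(r)               # impaired data observation
s_hat = h_hat.conj() @ r_data / np.linalg.norm(h_hat) ** 2   # MRC with impaired estimate
print("symbol error magnitude:", abs(s_hat - s))
```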
Abstract:
Doctoral thesis, Electronic Engineering and Computing - Signal Processing, Universidade do Algarve, 2008
Abstract:
Thesis (Ph.D.)--University of Washington, 2015
Abstract:
This paper employs the Lyapunov direct method for the stability analysis of fractional-order linear systems subject to input saturation. A new stability condition based on the saturation function is adopted for estimating the domain of attraction via an ellipsoidal approach. To further improve this estimate, an auxiliary feedback is also employed, supported by the concept of the stability region. The advantages of the proposed method are twofold: (1) the problem is straightforward to handle, in both analysis and design, because the Lyapunov method is used; (2) the estimation leads to less conservative results. A numerical example illustrates the feasibility of the proposed method.
Abstract:
The attached file was created with Scientific WorkPlace LaTeX.
Abstract:
My thesis is composed of three chapters related to the estimation of state-space and stochastic volatility models. In the first article, we develop a computationally efficient state-smoothing procedure for linear Gaussian state-space models. We show how to exploit the particular structure of state-space models to draw the latent states efficiently. We analyse the computational efficiency of methods based on the Kalman filter, the Cholesky factor algorithm, and our new method, using operation counts and computational experiments. We show that for many important cases our method is more efficient. The gains are particularly large when the dimension of the observed variables is large or when repeated draws of the states are required for the same parameter values. As an application, we consider a multivariate Poisson model with time-varying intensities, which is used to analyse transaction count data in financial markets. In the second chapter, we propose a new technique for analysing multivariate stochastic volatility models. The proposed method is based on efficiently drawing the volatility from its conditional density given the parameters and the data. Our methodology applies to models with several types of cross-sectional dependence. We can model time-varying conditional correlation matrices by incorporating factors into the returns equation, where the factors are independent stochastic volatility processes. We can incorporate copulas to allow conditional dependence of the returns given the volatility, permitting different Student-t marginals with specific degrees of freedom to capture the heterogeneity of the returns. The volatility is drawn as a block in the time dimension and one at a time in the cross-sectional dimension. We apply the method introduced by McCausland (2012) to obtain a good approximation of the conditional posterior distribution of the volatility of one return given the volatilities of the other returns, the parameters, and the dynamic correlations. The model is evaluated using real data for ten exchange rates. We report results for univariate stochastic volatility models and two multivariate models. In the third chapter, we assess the information contributed by realized volatility measures to the estimation and forecasting of volatility when prices are measured with and without error. We use stochastic volatility models. We take the viewpoint of an investor for whom volatility is an unknown latent variable and realized volatility is a sample quantity that carries information about it. We employ Bayesian Markov chain Monte Carlo methods to estimate the models, which allow the computation not only of the posterior densities of volatility but also of the predictive densities of future volatility. We compare volatility forecasts, and the hit rates of forecasts, that do and do not use the information contained in realized volatility.
This approach differs from existing approaches in the empirical literature, which are most often limited to documenting the ability of realized volatility to forecast itself. We present empirical applications using daily returns of stock indices and exchange rates. The various competing models are applied to the second half of 2008, a notable period in the recent financial crisis.
Abstract:
Identification and control of non-linear dynamical systems are challenging problems for control engineers. The topic is equally relevant in communication, weather prediction, biomedical systems and even social systems, where nonlinearity is an integral part of the system behaviour. Most real-world systems are nonlinear in nature, and nonlinear system identification/modelling has wide applications. The basic approach to analysing nonlinear systems is to build a model from known behaviour manifested in the form of the system output. The modelling problem boils down to computing a suitably parameterized model representing the process. The parameters of the model are adjusted to optimize a performance function based on the error between the given process output and the identified process/model output. While linear system identification is well established, with many classical approaches, most of those methods cannot be directly applied to nonlinear system identification. The problem becomes more complex if the system is completely unknown and only the output time series is available; the blind recognition problem is the direct consequence of such a situation. The thesis concentrates on such problems. The capability of artificial neural networks to approximate many nonlinear input-output maps makes them particularly suitable for building a function for the identification of nonlinear systems where only the time series is available. The literature is rich with a variety of algorithms to train the neural network model. A comprehensive study of the computation of the model parameters using the different algorithms, together with a comparison among them to choose the best technique, is still a demanding requirement from practical system designers and is not available in a concise form in the literature. The thesis is thus an attempt to develop and evaluate some of the well-known algorithms and to propose some new techniques in the context of blind recognition of nonlinear systems. It also attempts to establish the relative merits and demerits of the different approaches. Comprehensiveness is achieved by utilizing the benefits of well-known evaluation techniques from statistics. The study concludes by providing the results of implementing the currently available, modified and newly introduced techniques for nonlinear blind system modelling, followed by a comparison of their performance. It is expected that such a comprehensive study and comparison process can be of great relevance in many fields, including chemical, electrical, biological, financial and weather data analysis. Further, the results reported would be of immense help to practical system designers and analysts in selecting the most appropriate method, based on the goodness of the model, for the particular context.
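As a small illustration of the time-series-only setting described above, the sketch below trains a neural network to predict the next output sample from a window of past samples, i.e. an autoregressive surrogate of the unknown system. The lag order, network size and toy series are assumptions, not taken from the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(series, lags=5):
    # Build (past window -> next sample) training pairs from the output time series
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = series[lags:]
    return X, y

t = np.arange(2000)
series = np.sin(0.05 * t) + 0.3 * np.sin(0.21 * t) ** 3   # toy nonlinear output series
X, y = make_windows(series)

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X, y)
print("one-step-ahead R^2:", net.score(X, y))
```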
Abstract:
We present a statistical image-based shape + structure model for Bayesian visual hull reconstruction and 3D structure inference. The 3D shape of a class of objects is represented by sets of contours from silhouette views simultaneously observed from multiple calibrated cameras. Bayesian reconstructions of new shapes are then estimated using a prior density constructed with a mixture model and probabilistic principal components analysis. We show how the use of a class-specific prior in a visual hull reconstruction can reduce the effect of segmentation errors from the silhouette extraction process. The proposed method is applied to a data set of pedestrian images, and improvements in the approximate 3D models under various noise conditions are shown. We further augment the shape model to incorporate structural features of interest; unknown structural parameters for a novel set of contours are then inferred via the Bayesian reconstruction process. Model matching and parameter inference are done entirely in the image domain and require no explicit 3D construction. Our shape model enables accurate estimation of structure despite segmentation errors or missing views in the input silhouettes, and works even with only a single input view. Using a data set of thousands of pedestrian images generated from a synthetic model, we can accurately infer the 3D locations of 19 joints on the body based on observed silhouette contours from real images.
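A minimal numerical sketch of the PPCA-style reconstruction underlying such a prior is given below: a noisy observation vector is mapped to its posterior-mean reconstruction in a low-dimensional latent subspace. The data here is synthetic, and the class-specific mixture components and silhouette handling of the actual model are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
D, q, N = 50, 5, 500                       # contour dimension, latent dimension, samples
W_true = rng.standard_normal((D, q))
train = rng.standard_normal((N, q)) @ W_true.T + 0.1 * rng.standard_normal((N, D))

# Maximum-likelihood PPCA fit from the training set
mu = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mu, full_matrices=False)
sigma2 = (S[q:] ** 2).sum() / (N * (D - q))                 # residual (noise) variance
W = Vt[:q].T * np.sqrt(np.maximum(S[:q] ** 2 / N - sigma2, 0.0))

def reconstruct(x):
    M = W.T @ W + sigma2 * np.eye(q)                        # latent posterior precision
    z = np.linalg.solve(M, W.T @ (x - mu))                  # posterior-mean latent code
    return W @ z + mu                                       # denoised reconstruction

x_noisy = train[0] + 0.5 * rng.standard_normal(D)
print("error reduced:",
      np.linalg.norm(reconstruct(x_noisy) - train[0]) < np.linalg.norm(x_noisy - train[0]))
```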
Abstract:
This paper presents an image-based rendering system using algebraic relations between different views of an object. The system uses pictures of an object taken from known positions. Given three such images, it can generate "virtual" ones, showing how the object would look from any position near those from which the input images were taken. The extrapolation from the example images can be up to about 60 degrees of rotation. The system is based on the trilinear constraints that bind any three views of an object. As a side result, we propose two new methods for camera calibration; we developed and used one of them. We implemented the system and tested it on real images of objects and faces. We also show experimentally that even when only two images taken from unknown positions are given, the system can be used to render the object from other viewpoints, as long as we have a good estimate of the internal parameters of the camera used and we are able to find good correspondences between the example images. In addition, we present the relation between these algebraic constraints and a factorization method for shape and motion estimation. As a result, we propose a method for motion estimation in the special case of orthographic projection.
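The factorization idea mentioned at the end of the abstract can be sketched as follows for the orthographic case, in the spirit of the classical rank-3 factorization: centred 2D tracks are stacked into a measurement matrix and factored by SVD into motion and shape up to an affine ambiguity. The data is synthetic, and the metric-upgrade step and the trilinear-constraint machinery are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
P, F = 40, 6                                     # tracked points, frames
X = rng.standard_normal((3, P))                  # synthetic 3D shape
rows = []
for _ in range(F):
    Q, _r = np.linalg.qr(rng.standard_normal((3, 3)))   # random camera rotation
    rows.append(Q[:2] @ X)                       # orthographic projection (2 x P)
W = np.vstack(rows)                              # 2F x P measurement matrix
W = W - W.mean(axis=1, keepdims=True)            # centring removes the translations

U, S, Vt = np.linalg.svd(W, full_matrices=False)
M_hat = U[:, :3] * S[:3]                         # motion, up to an affine ambiguity
S_hat = Vt[:3]                                   # shape,  up to an affine ambiguity
print("rank-3 factorization residual:", np.linalg.norm(W - M_hat @ S_hat))
```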
Abstract:
We present a technique for the rapid and reliable evaluation of linear-functional outputs of elliptic partial differential equations with affine parameter dependence. The essential components are (i) rapidly uniformly convergent reduced-basis approximations — Galerkin projection onto a space WN spanned by solutions of the governing partial differential equation at N (optimally) selected points in parameter space; (ii) a posteriori error estimation — relaxations of the residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs; and (iii) offline/online computational procedures — stratagems that exploit affine parameter dependence to decouple the generation and projection stages of the approximation process. The operation count for the online stage — in which, given a new parameter value, we calculate the output and associated error bound — depends only on N (typically small) and the parametric complexity of the problem. The method is thus ideally suited to the many-query and real-time contexts. In this paper, based on this technique, we develop a robust inverse computational method for the very fast solution of inverse problems characterized by parametrized partial differential equations. The essential ideas are three-fold: first, we apply the technique to the forward problem for the rapid certified evaluation of PDE input-output relations and associated rigorous error bounds; second, we incorporate the reduced-basis approximation and error bounds into the inverse problem formulation; and third, rather than regularize the goodness-of-fit objective, we may instead identify all (or almost all, in the probabilistic sense) system configurations consistent with the available experimental data — well-posedness is reflected in a bounded "possibility region" that furthermore shrinks as the experimental error is decreased.
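A hedged sketch of the offline/online split for an affinely parametrized problem is given below, using a two-term expansion A(mu) = A0 + mu*A1 chosen purely for illustration. The snapshot selection is ad hoc and the a posteriori error bounds of the actual method are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
A0 = np.diag(2 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
A1 = np.diag(np.linspace(1.0, 2.0, n))
f = np.ones(n)

# --- offline: snapshot solutions at selected parameters, then Galerkin projection
mus_train = [0.1, 1.0, 10.0]
snapshots = np.column_stack([np.linalg.solve(A0 + mu * A1, f) for mu in mus_train])
Z, _ = np.linalg.qr(snapshots)                        # orthonormal reduced basis (n x N)
A0_N, A1_N, f_N = Z.T @ A0 @ Z, Z.T @ A1 @ Z, Z.T @ f # parameter-independent reduced pieces

# --- online: cheap N x N solve for a new parameter value
mu_new = 3.7
u_N = Z @ np.linalg.solve(A0_N + mu_new * A1_N, f_N)  # reduced solution lifted to full space
u_truth = np.linalg.solve(A0 + mu_new * A1, f)
print("relative error:", np.linalg.norm(u_N - u_truth) / np.linalg.norm(u_truth))
```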
Abstract:
In this paper, we present an on-line estimation algorithm for an uncertain time delay in a continuous system, based on observational input-output data subject to observational noise. The first-order Padé approximation is used to approximate the time delay. At each time step, the algorithm combines the well-known Kalman filter algorithm and the recursive instrumental variable least squares (RIVLS) algorithm in cascade form. The instrumental variable least squares algorithm is used in order to achieve consistency of the delay parameter estimate, since an errors-in-variables model is involved. An illustrative example is utilized to demonstrate the efficacy of the proposed approach.
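The consistency-restoring ingredient, the recursive instrumental-variable least-squares update, can be sketched as follows on a synthetic errors-in-variables example. The instrument here is simply the noise-free regressor, and the first-order Padé parametrization of the delay and the cascaded Kalman filter are not reproduced.

```python
import numpy as np

def rivls_step(theta, P, x, z, y):
    """One recursive IV-LS update: regressor x (noisy), instrument z, observation y."""
    K = P @ z / (1.0 + x @ P @ z)          # gain built from the instrument
    theta = theta + K * (y - x @ theta)    # parameter correction
    P = P - np.outer(K, x @ P)             # covariance-like matrix update
    return theta, P

rng = np.random.default_rng(5)
theta_true = np.array([0.8, 0.5])          # [AR coefficient, input gain]
theta, P = np.zeros(2), 1e3 * np.eye(2)
x_clean = np.zeros(2)
for _ in range(500):
    u = rng.standard_normal()
    x_clean = np.array([theta_true @ x_clean, u])      # noise-free regressor (instrument)
    y = theta_true @ x_clean + 0.1 * rng.standard_normal()
    x_noisy = x_clean + 0.1 * rng.standard_normal(2)   # errors-in-variables regressor
    theta, P = rivls_step(theta, P, x_noisy, x_clean, y)
print("estimate:", theta)
```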
An empirical study of process-related attributes in segmented software cost-estimation relationships
Abstract:
Parametric software effort estimation models consisting of a single mathematical relationship suffer from poor adjustment and predictive characteristics when the historical database considered contains data coming from projects of a heterogeneous nature. Segmentation of the input domain according to clusters obtained from the database of historical projects serves as a tool for more realistic models that use several local estimation relationships. Nonetheless, it may be hypothesized that using clustering algorithms without previously considering the influence of well-known project attributes misses the opportunity to obtain more realistic segments. In this paper, we describe the results of an empirical study, using the ISBSG-8 database and the EM clustering algorithm, that examines the influence of two process-related attributes as drivers of the clustering process: the use of engineering methodologies and the use of CASE tools. The results provide evidence that their consideration significantly conditions the final model obtained, even though the resulting predictive quality is of a similar magnitude.
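A minimal sketch of the segmented-estimation idea is shown below: projects are clustered with an EM-based Gaussian mixture and a separate local effort relationship is fitted per cluster. The features and synthetic data are assumptions for illustration; they are not the ISBSG-8 attributes or the study's configuration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
size = rng.uniform(50, 2000, 300)                       # e.g. functional size
uses_case = rng.integers(0, 2, 300)                     # process-related attribute (0/1)
effort = (4 + 2 * uses_case) * size + rng.normal(0, 200, 300)

# EM clustering on size plus the process-related attribute
X_cluster = np.column_stack([size, uses_case])
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X_cluster)

# Local estimation relationship per segment
local_models = {}
for c in np.unique(labels):
    idx = labels == c
    local_models[c] = LinearRegression().fit(size[idx].reshape(-1, 1), effort[idx])
    print(f"cluster {c}: effort ~ {local_models[c].coef_[0]:.2f} * size")
```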
Abstract:
This study investigates the superposition-based cooperative transmission system. In this system, a key point is for the relay node to detect the data transmitted from the source node. This issue has received little attention in the existing literature, as the channel is usually assumed to be flat-fading and known a priori. In practice, however, the channel is not only unknown a priori but also subject to frequency-selective fading. Channel estimation is thus necessary. Of particular interest is the channel estimation at the relay node, which imposes extra requirements on the system resources. The authors propose a novel turbo least-squares channel estimator that exploits the superposition structure of the transmitted data. The proposed channel estimator not only requires no pilot symbols but also has significantly better performance than the classic approach. The soft-in-soft-out minimum mean square error (MMSE) equaliser is also re-derived to match the superimposed data structure. Finally, computer simulation results are shown to verify the proposed algorithm.
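The least-squares building block inside such a turbo estimator can be sketched as follows: given (re)detected symbols, the frequency-selective channel taps are estimated by solving a linear least-squares problem built from the convolution (Toeplitz) matrix. The block below is a synthetic illustration under assumed parameters; the turbo iterations and the MMSE equaliser are not reproduced.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(7)
L, N = 4, 200                                       # channel taps, block length
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N) / np.sqrt(2)   # QPSK symbols

X = toeplitz(s, np.r_[s[0], np.zeros(L - 1)])       # N x L convolution matrix of the symbols
y = X @ h + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

h_hat, *_ = np.linalg.lstsq(X, y, rcond=None)       # LS channel estimate from detected data
print("tap estimation error:", np.linalg.norm(h_hat - h))
```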