939 results for Channel estimation error


Relevance:

80.00%

Publisher:

Abstract:

This doctoral thesis consists of three chapters dealing with large portfolio selection and risk measurement. The first chapter addresses the estimation-error problem in large portfolios within the mean-variance framework. The second chapter explores the importance of currency risk for portfolios of domestic assets and studies the links between the stability of large-portfolio weights and currency risk. Finally, under the assumption that the decision maker is pessimistic, the third chapter derives the risk premium and a measure of pessimism, and proposes a methodology for estimating the derived measures. The first chapter improves optimal portfolio choice within Markowitz's (1952) mean-variance framework. This is motivated by the very disappointing results obtained when the mean and variance are replaced by their sample estimates, a problem that is amplified when the number of assets is large and the sample covariance matrix is singular or nearly singular. In this chapter, we examine four regularization techniques for stabilizing the inverse of the covariance matrix: ridge, spectral cut-off, Landweber-Fridman, and LARS Lasso. Each of these methods involves a tuning parameter that must be selected. The main contribution of this part is to derive a purely data-driven method for selecting the regularization parameter optimally, i.e., so as to minimize the expected loss of utility. Specifically, a cross-validation criterion that takes the same form for all four regularization methods is derived. The resulting regularized rules are then compared with the rule that uses the data directly and with the naive 1/N strategy, in terms of expected utility loss and Sharpe ratio.
These performances are measured in-sample and out-of-sample for various sample sizes and numbers of assets. The simulations and the empirical illustration show, above all, that regularizing the covariance matrix significantly improves the data-based Markowitz rule and outperforms the naive portfolio, especially when the estimation-error problem is severe. In the second chapter, we investigate the extent to which optimal and stable portfolios of domestic assets can reduce or eliminate currency risk, using monthly returns on 48 US industries over the period 1976-2008. To address the instability problems inherent in large portfolios, we adopt the spectral cut-off regularization method. This yields a family of optimal and stable portfolios, allowing investors to choose different percentages of principal components (or degrees of stability). Our empirical tests are based on an international asset pricing model (IAPM), in which currency risk is decomposed into two factors representing the currencies of industrialized countries on the one hand and those of emerging countries on the other. Our results indicate that currency risk is priced and time-varying for stable minimum-risk portfolios. Moreover, these strategies lead to a significant reduction in exposure to exchange-rate risk, while the contribution of the currency risk premium remains, on average, unchanged. Optimal portfolio weights are an alternative to market-capitalization weights; this chapter therefore complements the literature finding that the risk premium is important at the industry level and at the national level in most countries.
In the last chapter, we derive a risk-premium measure for rank-dependent preferences and propose a measure of the degree of pessimism, given a distortion function. The measures introduced generalize the risk-premium measure derived under expected utility theory, which is frequently violated in both experimental and real-world settings. Within the broad family of preferences considered, particular attention is given to the CVaR (conditional value at risk). This risk measure is increasingly used for portfolio construction and has been advocated as a complement to the VaR (value at risk) used since 1996 by the Basel Committee. In addition, we provide the statistical framework needed for inference on the proposed measures. Finally, the properties of the proposed estimators are assessed through a Monte Carlo study and an empirical illustration using daily US stock market returns over the period 2000-2011.
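As an illustration of the regularization idea in the first chapter, the sketch below stabilizes the inverse of a (near-)singular sample covariance matrix with a ridge penalty, one of the four methods studied. This is a minimal sketch on simulated returns: the penalty is fixed by hand here, whereas the thesis selects it via a data-driven cross-validation criterion.

```python
import numpy as np

def ridge_mv_weights(returns, alpha):
    """Mean-variance-style weights using a ridge-regularized covariance inverse.

    returns: (T, N) array of asset returns; alpha: ridge penalty (> 0).
    Illustrative only: the thesis chooses alpha by cross-validation.
    """
    mu = returns.mean(axis=0)
    sigma = np.cov(returns, rowvar=False)
    n_assets = returns.shape[1]
    # Ridge: add alpha * I before inverting, which stabilizes a singular
    # sample covariance when N is large relative to T.
    inv = np.linalg.inv(sigma + alpha * np.eye(n_assets))
    w = inv @ mu
    return w / w.sum()  # normalize to fully invested weights

rng = np.random.default_rng(0)
R = rng.normal(0.001, 0.02, size=(60, 100))  # T=60 < N=100: singular sample cov
w = ridge_mv_weights(R, alpha=1e-3)
```

Without the `alpha * np.eye(...)` term, `np.linalg.inv` would fail (or return a numerically meaningless inverse) because the 100-by-100 sample covariance built from 60 observations is rank deficient.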

Relevance:

80.00%

Publisher:

Abstract:

An Orthogonal Frequency Division Multiplexing (OFDM) communication system with a transmitter and a receiver. The transmitter is arranged to transmit channel estimation sequences, and data, on each of a plurality of band groups or bands. The receiver is arranged to receive the channel estimation sequences for each band group or band, to calculate channel state information from each of the channel estimation sequences transmitted on that band group or band, and to form averaged channel state information. The receiver receives the transmitted data, transforms the received data into the frequency domain, equalizes the received data using the channel state information, demaps the equalized data to reconstruct the received data as soft bits, and modifies the soft bits using the averaged channel state information.
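A minimal sketch of the averaging step described above (not the patented implementation): per-subcarrier channel state information is estimated from several known training sequences, averaged to reduce estimation noise, and used for frequency-domain equalization. The subcarrier count, pilot design, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sc = 64                                    # subcarriers in one band
h = rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc)  # true channel (freq domain)

pilot = np.ones(n_sc, dtype=complex)         # known channel-estimation sequence
noise = lambda: 0.05 * (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc))

# CSI from each received training sequence; averaging reduces estimation noise.
estimates = [(h * pilot + noise()) / pilot for _ in range(8)]
csi_avg = np.mean(estimates, axis=0)

data = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=n_sc)  # QPSK
received = h * data + noise()
equalized = received / csi_avg               # zero-forcing equalization with averaged CSI
```

The averaged estimate `csi_avg` has roughly 1/sqrt(8) of the error of any single estimate, which is the point of transmitting the sequence repeatedly.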

Relevance:

80.00%

Publisher:

Abstract:

The primary purpose of this study was to model the partitioning of evapotranspiration in a maize-sunflower intercrop at various canopy covers. The Shuttleworth-Wallace (SW) model was extended for intercropping systems to include both crop transpiration and soil evaporation, allowing interaction between the two. To test the accuracy of the extended SW model, two field experiments on a maize-sunflower intercrop were conducted in 1998 and 1999. Plant transpiration and soil evaporation were measured using sap flow gauges and lysimeters, respectively. The mean prediction error (simulated minus measured values) for transpiration was zero (indicating no overall bias in the estimation error), and its accuracy was not affected by the plant growth stages, although simulated transpiration tended to be slightly underestimated during periods of high measured transpiration. Overall, the predictions of daily soil evaporation were also accurate. Model estimation errors were probably due to the simplified modelling of soil water content, stomatal resistances and soil heat flux, as well as to uncertainties in characterising the micrometeorological conditions. The SW model's prediction of transpiration was most sensitive to the parameters most directly related to canopy characteristics, such as the partitioning of captured solar radiation, canopy resistance, and bulk boundary layer resistance.
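The bias check quoted above (a mean prediction error of zero indicating no overall bias) amounts to a one-line computation. The sketch below uses made-up transpiration values, not the experiment's data.

```python
import numpy as np

# Mean prediction error = mean(simulated - measured); ~0 means no systematic
# bias, while the RMSE summarizes the spread of the errors. Values invented.
simulated = np.array([2.1, 3.4, 1.8, 4.0, 2.7])   # e.g. daily transpiration, mm/day
measured = np.array([2.0, 3.6, 1.9, 3.9, 2.6])

errors = simulated - measured
mean_prediction_error = errors.mean()             # overall bias of the model
rmse = np.sqrt((errors ** 2).mean())              # typical size of the errors
```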

Relevance:

80.00%

Publisher:

Abstract:

High bandwidth-efficiency quadrature amplitude modulation (QAM) signaling, widely adopted in high-rate communication systems, suffers from a high peak-to-average power ratio, which may drive the high power amplifier (HPA) at the transmitter into nonlinear saturation. Practical high-throughput QAM communication systems therefore exhibit nonlinear and dispersive channel characteristics that must be modeled as a Hammerstein channel, and standard linear equalization becomes inadequate for such systems. In this paper, we advocate an adaptive B-spline neural network based nonlinear equalizer. Specifically, during the training phase, an efficient alternating least squares (LS) scheme is employed to estimate the parameters of the Hammerstein channel, including both the channel impulse response (CIR) coefficients and the parameters of the B-spline neural network that models the HPA's nonlinearity. In addition, another B-spline neural network is used to model the inversion of the nonlinear HPA, and the parameters of this inverting B-spline model can easily be estimated using the standard LS algorithm based on pseudo training data obtained as a natural byproduct of the Hammerstein channel identification. Nonlinear equalization of the Hammerstein channel is then accomplished by linear equalization based on the estimated CIR together with the inverse B-spline neural network model. Furthermore, during the data communication phase, decision-directed LS channel estimation is adopted to track the time-varying CIR. Extensive simulation results demonstrate the effectiveness of our proposed B-spline neural network based nonlinear equalization scheme.
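To make the Hammerstein structure concrete, the sketch below simulates a static nonlinearity followed by a linear CIR, and shows one half of an alternating-LS-style step: with the nonlinearity output held fixed (here at its true value, for brevity), the CIR estimate is an ordinary least-squares fit. A simple cubic polynomial stands in for the HPA nonlinearity; the paper models it with a B-spline neural network, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def hpa(x):
    # Memoryless nonlinearity standing in for the HPA (assumed, illustrative).
    return x - 0.1 * x**3

cir = np.array([1.0, 0.5, -0.2])     # true CIR taps
x = rng.uniform(-1, 1, 2000)         # transmitted signal (real-valued for simplicity)
v = hpa(x)                           # output of the nonlinear stage
y = np.convolve(v, cir)[:len(x)] + 0.01 * rng.normal(size=len(x))

# With v fixed, y is linear in the CIR taps: build the shifted-regressor
# matrix and solve the LS problem, as in one alternating-LS half-step.
L = len(cir)
V = np.column_stack([np.concatenate([np.zeros(k), v[:len(v) - k]]) for k in range(L)])
cir_hat, *_ = np.linalg.lstsq(V, y, rcond=None)
```

The other half-step of alternating LS (re-estimating the nonlinearity's parameters with the CIR fixed) has the same linear-in-parameters form when the nonlinearity is a B-spline expansion.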

Relevance:

80.00%

Publisher:

Abstract:

1. The rapid expansion of systematic monitoring schemes necessitates robust methods to reliably assess species' status and trends. Insect monitoring poses a challenge where there are strong seasonal patterns, requiring repeated counts to reliably assess abundance. Butterfly monitoring schemes (BMSs) operate in an increasing number of countries with broadly the same methodology, yet they differ in their observation frequency and in the methods used to compute annual abundance indices. 2. Using simulated and observed data, we performed an extensive comparison of two approaches used to derive abundance indices from count data collected via BMS, under a range of sampling frequencies. Linear interpolation is most commonly used to estimate abundance indices from seasonal count series. A second method, hereafter the regional generalized additive model (GAM), fits a GAM to repeated counts within sites across a climatic region. For the two methods, we estimated bias in abundance indices and the statistical power for detecting trends, given different proportions of missing counts. We also compared the accuracy of trend estimates using systematically degraded observed counts of the Gatekeeper Pyronia tithonus (Linnaeus 1767). 3. The regional GAM method generally outperforms the linear interpolation method. When the proportion of missing counts increased beyond 50%, indices derived via the linear interpolation method showed substantially higher estimation error as well as clear biases, in comparison to the regional GAM method. The regional GAM method also showed higher power to detect trends when the proportion of missing counts was substantial. 4. Synthesis and applications. Monitoring offers invaluable data to support conservation policy and management, but requires robust analysis approaches and guidance for new and expanding schemes. 
Based on our findings, we recommend the regional generalized additive model approach when conducting integrative analyses across schemes, or when analysing scheme data with reduced sampling efforts. This method enables existing schemes to be expanded or new schemes to be developed with reduced within-year sampling frequency, as well as affording options to adapt protocols to more efficiently assess species status and trends across large geographical scales.
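The linear-interpolation method compared above can be sketched in a few lines: missing weekly counts are filled by interpolating between observed weeks, and the annual abundance index is taken as the area under the seasonal count curve. The counts below are invented; real BMS data and the regional GAM alternative are not reproduced here.

```python
import numpy as np

weeks = np.arange(1, 11)
counts = np.array([0, 2, np.nan, 10, 14, np.nan, np.nan, 6, 2, 0])  # NaN = missed visit

observed = ~np.isnan(counts)
filled = counts.copy()
# Fill the missed weeks by linear interpolation between observed weeks.
filled[~observed] = np.interp(weeks[~observed], weeks[observed], counts[observed])

# Abundance index: trapezoidal area under the (weekly) seasonal count curve.
abundance_index = float(np.sum((filled[1:] + filled[:-1]) / 2.0))
```

As the abstract notes, the reliability of this index degrades quickly once a large fraction of counts is missing, because long interpolated segments can misrepresent the seasonal flight curve.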

Relevance:

80.00%

Publisher:

Abstract:

The l1-norm sparsity constraint is a widely used technique for constructing sparse models. In this contribution, two zero-attracting recursive least squares algorithms, referred to as ZA-RLS-I and ZA-RLS-II, are derived by employing an l1-norm constraint on the parameter vector to promote model sparsity. In order to achieve a closed-form solution, the l1-norm of the parameter vector is approximated by an adaptively weighted l2-norm, in which the weighting factors are set to the inverses of the magnitudes of the associated parameter estimates, which are readily available in the adaptive learning environment. ZA-RLS-II is computationally more efficient than ZA-RLS-I, exploiting known results from linear algebra as well as the sparsity of the system. The proposed algorithms are proven to converge, and adaptive sparse channel estimation is used to demonstrate the effectiveness of the proposed approach.
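The l1-via-weighted-l2 approximation described above can be illustrated in batch form: penalizing each coefficient with weight 1/(|w_i| + eps) and re-solving turns the l1 penalty into a sequence of closed-form ridge-like solves. This is iteratively reweighted least squares, a batch analogue of the recursive ZA-RLS updates, not the paper's exact algorithm; the sparse "channel" below is simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 10
w_true = np.zeros(p)
w_true[[1, 4]] = [1.5, -2.0]                  # sparse channel: 2 active taps
X = rng.normal(size=(n, p))
y = X @ w_true + 0.01 * rng.normal(size=n)

lam, eps = 1.0, 1e-6
w = np.linalg.lstsq(X, y, rcond=None)[0]      # initial LS estimate
for _ in range(20):
    # Adaptive l2 weights approximating the l1 penalty: lam * |w_i| is
    # rewritten as (lam / (|w_i| + eps)) * w_i**2 using current estimates.
    D = np.diag(lam / (np.abs(w) + eps))
    w = np.linalg.solve(X.T @ X + D, X.T @ y)  # closed-form weighted-l2 solve
```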

Relevance:

80.00%

Publisher:

Abstract:

OBJECTIVE: To assess the prevalence of trachoma among schoolchildren in Botucatu, São Paulo, Brazil, and the spatial distribution of cases. METHODS: A cross-sectional study was carried out in November 2005 on children aged 7-14 years attending primary schools in Botucatu. The sample size was estimated at 2,092 children, based on a historical prevalence of 11.2%, an estimation error of 10% and a 95% confidence level. The sample was probabilistic and weighted, and was increased by 20% to allow for possible losses; 2,692 children were examined. Diagnosis was clinical, following World Health Organization (WHO) guidelines. Spatial data were processed with CartaLinx (v1.2), with the school-demand sectors digitized according to the planning divisions of the Department of Education. The data were analysed statistically, with the spatial structure of the events computed using GeoDa. RESULTS: The prevalence of trachoma among Botucatu schoolchildren was 2.9%, with cases of follicular trachoma detected. Exploratory spatial analysis did not allow rejection of the null hypothesis of randomness (I = -0.45, p > 0.05), with no significant school-demand sectors. The analysis of Thiessen polygons also showed a random global pattern (I = -0.07; p = 0.49). However, local indicators pointed to a low-low cluster for one polygon north of the urban area. CONCLUSION: The prevalence of trachoma among Botucatu schoolchildren was 2.9%. The spatial distribution analysis revealed no areas with a higher concentration of cases. Although the global pattern of the disease does not reproduce the socioeconomic conditions of the population, the lowest trachoma prevalence was found in sectors of lower social vulnerability.
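The sample-size calculation mentioned in the methods follows the standard formula for estimating a proportion, n = z² p(1-p)/d². The sketch below shows the arithmetic for a historical prevalence of 11.2% at 95% confidence under two possible readings of the "10% estimation error"; it does not reproduce the study's weighting or the 20% inflation for losses, so the figures are illustrative rather than a re-derivation of the published 2,092.

```python
import math

def sample_size_proportion(p, d, z=1.96):
    """Minimum n to estimate a proportion p within +/- d at ~95% confidence."""
    return math.ceil(z**2 * p * (1 - p) / d**2)

p = 0.112                                       # historical prevalence (11.2%)
n_relative = sample_size_proportion(p, d=0.10 * p)  # 10% error relative to p
n_absolute = sample_size_proportion(p, d=0.10)      # 10% absolute precision
```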

Relevance:

80.00%

Publisher:

Abstract:

An algorithm for adaptive IIR filtering that uses a prefiltering structure in direct form is presented. This structure yields an estimation error that is a linear function of the coefficients, a property that greatly simplifies the derivation of gradient-based algorithms. Computer simulations show that the proposed structure improves convergence speed.
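The benefit of an estimation error that is linear in the coefficients can be seen in a toy example: when e(n) = d(n) - phi(n)ᵀtheta, the gradient with respect to theta is simply -phi(n), so the gradient update is one line. The sketch below identifies an FIR system as a stand-in (the paper's structure is an IIR filter with prefiltering, which is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(4)
theta_true = np.array([0.8, -0.4, 0.2])      # true coefficients
x = rng.normal(size=5000)                    # input signal
# Regressor matrix of delayed inputs: phi[n] = [x[n], x[n-1], x[n-2]].
phi = np.column_stack([np.concatenate([np.zeros(k), x[:len(x) - k]]) for k in range(3)])
d = phi @ theta_true + 0.01 * rng.normal(size=len(x))  # desired signal

theta = np.zeros(3)
mu = 0.01                                    # step size
for n in range(len(x)):
    e = d[n] - phi[n] @ theta                # error is linear in theta...
    theta += mu * e * phi[n]                 # ...so the gradient step is trivial
```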

Relevance:

80.00%

Publisher:

Abstract:

This paper deals with the problem of establishing a state estimator for switched affine systems. To that end, a modification of the Luenberger observer is proposed, the switched Luenberger observer, the idea of which is to design one output gain matrix for each mode of the original system. The efficiency of the proposed method relies on a simplification of the estimation error that is proved to be always valid, guaranteeing that the estimation error converges asymptotically to zero for any initial state and switching law. Next, a dynamic output-dependent switching law is formulated. Design methodologies using linear matrix inequalities are then proposed which, to the authors' knowledge, have not yet been applied to this problem. Finally, observers for DC-DC converters are designed and simulated as application examples. © 2013 Brazilian Society for Automatics - SBA.
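The per-mode gain idea can be sketched directly on the estimation-error dynamics: under the active mode i, the error evolves as e[k+1] = (A_i - L_i C) e[k]. The matrices and gains below are invented so that both closed-loop error matrices are contractions (hence the error decays under arbitrary switching); the paper designs the gains via linear matrix inequalities, which is not reproduced here.

```python
import numpy as np

# Two-mode discrete-time example (matrices and gains are illustrative).
A = [np.array([[0.9, 0.2], [0.0, 0.8]]),     # mode 0 dynamics
     np.array([[0.7, -0.1], [0.1, 0.9]])]    # mode 1 dynamics
C = np.array([[1.0, 0.0]])                   # shared output matrix
L = [np.array([[0.9], [0.3]]),               # observer gain for mode 0
     np.array([[0.7], [0.2]])]               # observer gain for mode 1

e = np.array([1.0, -1.0])                    # initial estimation error
rng = np.random.default_rng(5)
for k in range(200):
    i = int(rng.integers(2))                 # arbitrary switching law
    e = (A[i] - L[i] @ C) @ e                # error dynamics of the active mode
```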

Relevance:

80.00%

Publisher:

Abstract:

This work was carried out by the author during his PhD course in Electronics, Computer Science and Telecommunication at the University of Bologna, Faculty of Engineering, Italy. The subject of this thesis is important channel estimation aspects of wideband wireless communication systems, such as echo cancellation in digital video broadcasting systems and pilot-aided channel estimation through an innovative pilot design in a Multi-Cell Multi-User MIMO-OFDM network. The documentation reported here summarizes years of work under the supervision of Prof. Oreste Andrisano, coordinator of the Wireless Communication Laboratory - WiLab, in Bologna. All the instrumentation used for the characterization of the telecommunication systems belongs to CNR (National Research Council), CNIT (Italian Inter-University Center), and DEIS (Dept. of Electronics, Computer Science, and Systems). From November 2009 to May 2010, the author worked abroad in collaboration with DOCOMO - Communications Laboratories Europe GmbH (DOCOMO Euro-Labs) in Munich, Germany, in the Wireless Technologies Research Group. Several scientific papers by the author have been submitted and/or published in IEEE journals and conference proceedings.

Relevance:

80.00%

Publisher:

Abstract:

One of the scarcest resources in wireless communication systems is the limited frequency spectrum. Many wireless communication systems are hindered by this bandwidth limitation and cannot provide high-speed communication. Ultra-wideband (UWB) communication, however, promises high-speed communication thanks to its very wide bandwidth of 7.5 GHz (3.1-10.6 GHz). This unprecedented bandwidth promises many advantages for 21st-century wireless communication systems. However, UWB poses many hardware challenges, such as the very high sampling rate required for analog-to-digital conversion, channel estimation, and implementation complexity. In this thesis, a new method is proposed that uses compressed sensing (CS), a mathematical framework for sub-Nyquist-rate sampling, to reduce the hardware complexity of the system. The method takes advantage of the unique signal structure of the UWB symbol. A new digital implementation method for CS-based UWB is also proposed, and a comparative study of CS-UWB hardware implementation methods is carried out. Simulation results show that applying compressed sensing with the proposed method significantly reduces the hardware complexity compared to the conventional compressed-sensing-based UWB receiver.
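A toy compressed-sensing example illustrates the sub-Nyquist idea underpinning the thesis (this is not the proposed receiver): a sparse signal is observed through far fewer random projections than Nyquist samples, then recovered by orthogonal matching pursuit (OMP). Dimensions and sparsity are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, k = 128, 40, 3                        # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k) + 3.0  # sparse signal
Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
y = Phi @ x                                 # m << n sub-Nyquist measurements

# OMP: greedily add the column most correlated with the residual, then
# re-fit the coefficients on the selected support by least squares.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ r))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    r = y - Phi[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef                       # reconstructed sparse signal
```

With 40 measurements instead of 128 samples, the 3-sparse signal is recovered essentially exactly in this noiseless setting, which is the kind of hardware saving the CS-based receiver exploits.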

Relevance:

80.00%

Publisher:

Abstract:

Spacecraft formation flying navigation continues to receive a great deal of interest. The research presented in this dissertation focuses on developing methods for estimating spacecraft absolute and relative positions, assuming measurements of only relative positions using wireless sensors. Applying the extended Kalman filter to the spacecraft formation navigation problem results in high estimation errors and, at times, instabilities in the state estimation, owing to the strong nonlinearities in the system dynamic model. Several approaches are attempted in this dissertation to increase estimation stability and improve estimation accuracy. A differential geometric filter is implemented for spacecraft position estimation. The differential geometric filter avoids the linearization step (always required in the extended Kalman filter) through a mathematical transformation that converts the nonlinear system into a linear one. A linear estimator is designed in the linear domain and then transformed back to the physical domain. This approach demonstrated better estimation stability for spacecraft formation position estimation, as detailed in this dissertation. The constrained Kalman filter is also implemented for spacecraft formation flying absolute position estimation. The orbital motion of a spacecraft is characterized by two range extrema (perigee and apogee), at which the rate of change of the spacecraft's range vanishes. This motion constraint can be used to improve the position estimation accuracy. Applying the constrained Kalman filter at only two points in the orbit causes filter instability, so two variables are introduced into the constrained Kalman filter to maintain stability and improve the estimation accuracy. An extended Kalman filter is implemented as a benchmark for comparison with the constrained Kalman filter.
Simulation results show that the constrained Kalman filter provides better estimation accuracy than the extended Kalman filter. A Weighted Measurement Fusion Kalman Filter (WMFKF) is also proposed in this dissertation. In wireless localizing sensors, the measurement error grows with the distance the signal travels and with the sensor noise. In the proposed WMFKF, the signal traveling time delay is not modeled; instead, each measurement is weighted based on the measured signal travel distance. The estimation performance is compared to that of the standard Kalman filter in two scenarios. The first scenario assumes a wireless local positioning system (WLPS) in a GPS-denied environment; the second assumes the availability of both WLPS and GPS measurements. The simulation results show that the WMFKF has accuracy similar to the standard Kalman filter (KF) in the GPS-denied environment. However, the WMFKF maintains the position estimation error within its expected error boundary when the WLPS detection range limit is above 30 km. In addition, the WMFKF shows better accuracy and stability when GPS is available. Also, the computational cost analysis shows that the WMFKF has a lower computational cost than the standard KF, and a higher ellipsoid error probable percentage than the standard measurement fusion method. Finally, a method to determine the relative attitudes between three spacecraft is developed. The method requires four direction measurements between the three spacecraft. The simulation results and covariance analysis show that the method's error falls within a three-sigma boundary without exhibiting any singularity issues. A study of the accuracy of the proposed method with respect to the shape of the spacecraft formation is also presented.
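The distance-based weighting idea behind the WMFKF can be illustrated with a scalar fusion sketch: when ranging error grows with the distance the signal travels, fusing several measurements of the same quantity with weights inversely proportional to their (distance-dependent) error variance beats a plain average. All numbers below are invented; this is not the dissertation's filter.

```python
import numpy as np

rng = np.random.default_rng(7)
true_pos = 10.0
distances = np.array([5.0, 20.0, 40.0])       # signal travel distances
sigmas = 0.01 * distances                     # error std grows with distance
z = true_pos + sigmas * rng.normal(size=3)    # noisy measurements of the same quantity

# Inverse-variance weighting: nearer (lower-variance) measurements count more.
weights = 1.0 / sigmas**2
weights /= weights.sum()
fused = weights @ z                           # distance-weighted fusion
naive = z.mean()                              # unweighted average, for comparison
```

In a Kalman-filter setting the same idea appears as a distance-dependent measurement noise covariance, which is effectively what weighting each measurement by its travel distance achieves.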

Relevance:

80.00%

Publisher:

Abstract:

This is the pCO2 record reconstructed from tree-ring cellulose δ13C data, with estimation errors, for 10 sites (locations given below) using a geochemical model, as described in the publication by Trina Bose, Supriyo Chakraborty, Hemant Borgaonkar and Saikat Sengupta. The data were generated at the Stable Isotope Laboratory, Indian Institute of Tropical Meteorology, Pune - 411008, India.

Relevance:

80.00%

Publisher:

Abstract:

Coccolithophores are important calcifying phytoplankton predicted to be impacted by changes in ocean carbonate chemistry caused by the absorption of anthropogenic CO2. However, it is difficult to disentangle the effects of the simultaneously changing carbonate system parameters (CO2, bicarbonate, carbonate and protons) on the physiological responses to elevated CO2. Here, we adopted a multifactorial approach at constant pH or CO2 whilst varying dissolved inorganic carbon (DIC) to determine physiological and transcriptional responses to individual carbonate system parameters. We show that Emiliania huxleyi is sensitive to low CO2 (growth and photosynthesis) and low bicarbonate (calcification), as well as to low pH beyond a limited tolerance range, but is much less sensitive to elevated CO2 and bicarbonate. Multiple up-regulated genes at low DIC bear the hallmarks of a carbon-concentrating mechanism (CCM) that is responsive to CO2 and bicarbonate but not to pH. Emiliania huxleyi appears to have evolved mechanisms to respond to limiting rather than elevated CO2. Calcification does not function as a CCM, but is inhibited at low DIC to allow the redistribution of DIC from calcification to photosynthesis. The presented data provide a significant step towards understanding how E. huxleyi will respond to changing carbonate chemistry at the cellular level.

Relevance:

80.00%

Publisher:

Abstract:

In this article, a tool for simulating the channel impulse response for indoor visible light communications using 3D computer-aided design (CAD) models is presented. The simulation tool is based on a previous Monte Carlo ray-tracing algorithm for indoor infrared channel estimation, but includes wavelength response evaluation. The 3D scene, or simulation environment, can be defined using any CAD software in which the user specifies, in addition to the scene geometry, the reflection characteristics of the surface materials as well as the structures of the emitters and receivers involved in the simulation. In an effort to improve computational efficiency, two optimizations are proposed. The first divides the scene into cubic regions of equal size, which speeds up the calculation by approximately 50% compared with not dividing the 3D scene into sub-regions. The second parallelizes the simulation algorithm, which provides a computational speed-up proportional to the number of processors used.
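The first optimization above rests on a simple spatial index: partition the scene's bounding box into equal cubes so that geometric queries only need to consider the surfaces registered in a given cell. The cell-indexing step is sketched below with invented parameters; the ray-traversal and intersection logic of the actual tool is not reproduced.

```python
import numpy as np
from collections import defaultdict

cell = 0.5                                       # cube edge length (illustrative)
origin = np.array([0.0, 0.0, 0.0])               # corner of the scene bounding box

def cell_index(point):
    """Map a 3D point to the (i, j, k) index of its cubic sub-region."""
    idx = (np.asarray(point, dtype=float) - origin) // cell
    return tuple(int(i) for i in idx)

# Register surface sample points in their cells; a ray tracer would then
# test intersections only against the contents of the cells a ray crosses.
grid = defaultdict(list)
for p in [(0.1, 0.2, 0.3), (0.6, 0.2, 0.3), (0.6, 0.7, 1.4)]:
    grid[cell_index(p)].append(p)
```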