126 results for Robust estimation
Abstract:
The relief of the seafloor is an important source of data for many scientists. In this paper we present an optical system for underwater 3D reconstruction. The system consists of three cameras that take images synchronously at a constant frame rate. We use the images taken by these cameras to compute dense 3D reconstructions, and Bundle Adjustment to estimate the motion of the trinocular rig. Given the path followed by the system, we obtain a dense map of the observed scene by registering the different dense local reconstructions into a single, larger one.
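To make the motion-estimation step concrete, the following is a minimal sketch of the reprojection-error minimization at the heart of Bundle Adjustment, written in Python around scipy.optimize.least_squares. The simplified pinhole model with a single focal length f, and the data layout (cam_idx, pt_idx, obs), are illustrative assumptions, not the paper's implementation.

```python
# Minimal bundle-adjustment sketch: jointly refine camera poses and 3D
# points by minimizing reprojection error (simplified pinhole model).
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def reprojection_residuals(params, n_poses, n_points, cam_idx, pt_idx, obs, f):
    """Stacked differences between observed and predicted pixel coordinates."""
    poses = params[:n_poses * 6].reshape(n_poses, 6)    # axis-angle | translation
    points = params[n_poses * 6:].reshape(n_points, 3)  # 3D scene points
    res = []
    for c, p, uv in zip(cam_idx, pt_idx, obs):
        R, t = rodrigues(poses[c, :3]), poses[c, 3:]
        X = R @ points[p] + t              # world -> camera coordinates
        res.append(f * X[:2] / X[2] - uv)  # pinhole projection minus observation
    return np.concatenate(res)

# Usage: pack initial pose/point guesses into x0 and refine.
# x0 = np.concatenate([poses0.ravel(), points0.ravel()])
# sol = least_squares(reprojection_residuals, x0, method="trf",
#                     args=(n_poses, n_points, cam_idx, pt_idx, obs, f))
```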
Abstract:
Every year, flash floods cause economic losses and major disruption to daily activity in the Catalonia region (NE Spain). Sometimes catastrophic damage and casualties occur. When a long-term analysis of floods is undertaken, a question arises regarding the changing roles of vulnerability and hazard in the evolution of risk. This paper sets out to provide information to address this question, on the basis of an analysis of all the floods that have occurred in Barcelona county (Catalonia) since the 14th century, as well as the flooded area, urban evolution, impacts and the weather conditions for the most severe events. With this objective, the identification and classification of historical floods, and the characterisation of flash floods among these, have been undertaken. In addition, the main meteorological factors associated with recent flash floods in this city and neighbouring regions are well known. Rainfall trends that could explain the historical evolution of flood hazard occurrence in this city have also been analysed, and the influence of urban development on flood vulnerability has been identified. Barcelona city was selected for its long continuous data series (daily rainfall since 1854, and one of the longest rainfall-rate series in Europe, since 1921) and for the accurate historical archive information that is available (since the Roman Empire for the urban evolution). The evolution of flood occurrence shows oscillations in the earlier and later modern-age periods that can be attributed to climatic variability, evolution of the perception threshold and changes in vulnerability. A great increase in vulnerability can be assumed for the period 1850–1900. The analysis of the time evolution of the Barcelona rainfall series (1854–2000) shows no trend, although, due to changes in urban planning, the impact of flash floods has altered over this time. The number of catastrophic flash floods has diminished, while the extraordinary ones have increased.
Abstract:
The current operational very short-term and short-term quantitative precipitation forecast (QPF) at the Meteorological Service of Catalonia (SMC) is produced by three different methodologies: advection of the radar reflectivity field (ADV); identification, tracking and forecasting of convective structures (CST); and numerical weather prediction (NWP) models using observational data assimilation (radar, satellite, etc.). These precipitation forecasts have different characteristics, lead times and spatial resolutions. The objective of this study is to combine these methods in order to obtain a single, optimized QPF at each lead time. This combination (blending) of the radar forecasts (ADV and CST) and the precipitation forecast from the NWP model is carried out by means of different methodologies according to the prediction horizon. Firstly, in order to take advantage of the rainfall location and intensity from radar observations, a phase-correction technique is applied to the NWP output to derive an additional corrected forecast (MCO). To select the best precipitation estimate in the first and second hours (t+1 h and t+2 h), the information from radar advection (ADV) and the corrected model outputs (MCO) are mixed using different weights, which vary dynamically according to indexes that quantify the quality of these predictions. This procedure integrates the skill in rainfall location and patterns given by the advection of the radar reflectivity field with the capacity of NWP models to generate new precipitation areas. From the third hour (t+3 h), as radar-based forecasting generally has low skill, only the quantitative precipitation forecast from the model is used. This blending of different sources of prediction is verified for different types of episodes (convective, moderately convective and stratiform) to obtain a robust methodology that can be implemented in an operational and dynamic way.
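As a rough illustration of the blending idea, the Python sketch below mixes a radar-advection field with a phase-corrected model field using quality-driven weights for the first two lead hours, and falls back to the NWP forecast from t+3 h onward. The normalized-weight scheme is a hypothetical stand-in for the paper's quality indexes.

```python
# Illustrative QPF blending: quality-weighted mix of radar advection (ADV)
# and phase-corrected NWP output (MCO) at short lead times, NWP-only later.
import numpy as np

def blend_qpf(adv, mco, nwp, q_adv, q_mco, lead_h):
    """Return a single precipitation field for the given lead time (hours).

    adv, mco, nwp : 2D precipitation fields on the same grid
    q_adv, q_mco  : scalar quality indexes in [0, 1] for ADV and MCO
    """
    if lead_h >= 3:                          # radar-based skill is low beyond t+2 h
        return nwp
    w_adv = q_adv / (q_adv + q_mco + 1e-9)   # dynamic, quality-driven weight
    return w_adv * adv + (1.0 - w_adv) * mco

# Toy 2x2 grid example:
adv = np.array([[1.0, 2.0], [0.0, 1.0]])
mco = np.array([[0.5, 1.5], [0.5, 0.5]])
nwp = np.array([[0.8, 1.2], [0.2, 0.9]])
print(blend_qpf(adv, mco, nwp, q_adv=0.7, q_mco=0.3, lead_h=1))  # weighted mix
print(blend_qpf(adv, mco, nwp, q_adv=0.7, q_mco=0.3, lead_h=3))  # NWP only
```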
Abstract:
A transformed kernel estimator suitable for heavy-tailed distributions is presented. Using a transformation based on the Beta probability distribution, the choice of the bandwidth parameter becomes very direct. An application to insurance data is presented, and it is shown how to compute the Value at Risk.
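The idea of transformed kernel estimation can be sketched as follows: estimate the density on a transformed scale where the tail is lighter, then map it back and read the Value at Risk off the fitted distribution. The log transform and the Pareto sample below are simple stand-ins for the paper's Beta-based transformation and insurance data.

```python
# Transformed kernel density sketch for heavy-tailed losses: fit a kernel
# density on a transformed scale, back-transform, and invert the cdf for VaR.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
losses = rng.pareto(2.5, size=2000) + 1.0   # synthetic heavy-tailed claims

t = np.log(losses)                          # transform toward a lighter tail
kde = gaussian_kde(t)                       # kernel estimate on the transformed scale

def density(x):
    """Back-transformed density: f(x) = g(log x) * d(log x)/dx."""
    x = np.asarray(x, dtype=float)
    return kde(np.log(x)) / x

# Value at Risk at the 99% level, by numerically inverting the fitted cdf.
grid = np.linspace(losses.min(), losses.max() * 2, 20000)
cdf = np.cumsum(density(grid)) * (grid[1] - grid[0])
cdf /= cdf[-1]                              # normalize the numerical cdf
var_99 = grid[np.searchsorted(cdf, 0.99)]
print(f"VaR(99%) ~ {var_99:.2f}")
```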
Abstract:
We propose an iterative procedure to minimize the sum-of-squares function which avoids the nonlinear nature of estimating the first-order moving average parameter and provides a closed form of the estimator. The asymptotic properties of the method are discussed, and the consistency of the linear least squares estimator is proved for the invertible case. We perform various Monte Carlo experiments in order to compare the sample properties of the linear least squares estimator with its nonlinear counterpart for the conditional and unconditional cases. Some examples are also discussed.
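A generic fixed-point version of this kind of procedure can be sketched in a few lines: rebuild the residuals of the MA(1) model x_t = e_t + θ·e_{t-1} with the current θ, then re-estimate θ by an ordinary (linear) least-squares regression of x_t on e_{t-1}. This illustrates the linear-iteration idea, not the paper's exact closed-form estimator.

```python
# Iterative linear least squares for the MA(1) parameter theta in
# x_t = e_t + theta * e_{t-1}: each pass rebuilds residuals with the
# current theta and re-estimates theta by a closed-form regression step.
import numpy as np

def ma1_linear_ls(x, n_iter=20):
    theta = 0.0
    for _ in range(n_iter):
        e = np.empty_like(x)
        e[0] = x[0]                          # residual recursion, e_{-1} = 0
        for t in range(1, len(x)):
            e[t] = x[t] - theta * e[t - 1]
        lagged = e[:-1]
        theta = (lagged @ x[1:]) / (lagged @ lagged)  # linear LS step
    return theta

# Check on a synthetic invertible MA(1) with theta = 0.5:
rng = np.random.default_rng(1)
eps = rng.standard_normal(5000)
x = eps.copy()
x[1:] += 0.5 * eps[:-1]
print(ma1_linear_ls(x))   # should land close to 0.5
```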
Abstract:
We present optimal measuring strategies for an estimation of the entanglement of unknown two-qubit pure states and of the degree of mixing of unknown single-qubit mixed states, of which N identical copies are available. The most general measuring strategies are considered in both situations, to conclude in the first case that a local, although collective, measurement suffices to estimate entanglement, a nonlocal property, optimally.
Abstract:
An accurate mass formula at finite temperature has been used to obtain a more precise estimation of temperature effects on fission barriers calculated within the liquid drop model.
Abstract:
The statistical theory of signal detection and the estimation of its parameters are reviewed and applied to the case of detection of the gravitational-wave signal from a coalescing binary by a laser interferometer. The correlation integral and the covariance matrix for all possible static configurations are investigated numerically. Approximate analytic formulas are derived for the case of narrow band sensitivity configuration of the detector.
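The correlation integral at the core of these detection statistics can be illustrated with a toy matched filter: slide a template across the data and normalize by the template's energy. White noise, the toy chirp, and the injected amplitude are all illustrative simplifications; real analyses weight the correlation by the detector's noise spectral density.

```python
# Toy matched filter: correlate data with a chirp-like template over all
# offsets and normalize by template energy (white noise assumed).
import numpy as np

fs = 4096.0                                        # sample rate (Hz), illustrative
t = np.arange(0, 1.0, 1.0 / fs)
template = np.sin(2 * np.pi * (50.0 * t + 40.0 * t**2))  # frequency-increasing "chirp"

rng = np.random.default_rng(2)
data = rng.standard_normal(2 * t.size)             # unit-variance white noise
data[3000:3000 + t.size] += 0.2 * template         # weak injected signal

corr = np.correlate(data, template, mode="valid")  # correlation integral
snr = corr / np.sqrt(template @ template)          # normalized statistic
print("peak SNR", snr.max(), "at offset", int(np.argmax(snr)))  # ~ offset 3000
```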
Abstract:
The amino acid composition of the protein from three strains of rat (Wistar, Zucker lean and Zucker obese), subjected to reference and high-fat diets, has been used to determine the mean empirical formula, molecular weight and N content of whole-rat protein. The combined whole protein of the rat was uniform across the six experimental groups, with an estimated 17.3% N and a mean aminoacyl residue molecular weight of 103.7. This suggests that the appropriate protein factor for the calculation of rat protein from its N content should be 5.77 instead of the classical 6.25. In addition, the size of the non-protein N mass in the whole rat was estimated at about 5.5% of all N. The combination of the two calculations gives a protein factor of 5.5 for the conversion of total N into rat protein.
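The two conversion factors follow from simple arithmetic, reproduced below: a protein containing 17.3% N gives a nitrogen-to-protein factor of 100/17.3, and discounting the roughly 5.5% of body N that is non-protein lowers the factor applied to total N. (The exact values differ slightly from the abstract's rounded figures.)

```python
# Arithmetic behind the reported conversion factors.
n_content = 17.3                       # % N in whole-rat protein
factor_protein_n = 100.0 / n_content   # ~5.78, reported as 5.77
non_protein_n = 0.055                  # ~5.5% of all N is non-protein
factor_total_n = factor_protein_n * (1.0 - non_protein_n)  # ~5.46, reported as 5.5
print(round(factor_protein_n, 2), round(factor_total_n, 2))
```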
Abstract:
In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision of autocorrelation estimation is examined. The performance of the ten lag-one autocorrelation estimators is compared in terms of Mean Square Error (combining bias and variance) using data series generated by Monte Carlo simulation. The results show that there is no single optimal estimator for all conditions, suggesting that the estimator ought to be chosen according to sample size and to the information available on the possible direction of the serial dependence. Additionally, the probability of labelling a truly existing autocorrelation as statistically significant is explored using Monte Carlo sampling. The power estimates obtained are quite similar across the tests associated with the different estimators. These estimates evidence the small probability of detecting autocorrelation in series with fewer than 20 measurement times.
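The Monte Carlo comparison described can be reproduced in miniature: generate AR(1) series, apply an estimator, and accumulate bias and Mean Square Error. Only the conventional moment estimator is shown here, as a sketch of the study's setup rather than its full ten-estimator comparison.

```python
# Monte Carlo bias/MSE of the conventional lag-one autocorrelation estimator
# on AR(1) series; short series show the well-known negative bias.
import numpy as np

def lag1_autocorr(x):
    """r1 = sum (x_t - m)(x_{t+1} - m) / sum (x_t - m)^2."""
    d = x - x.mean()
    return (d[:-1] @ d[1:]) / (d @ d)

def mc_bias_mse(phi, n, reps=5000, seed=3):
    rng = np.random.default_rng(seed)
    errs = np.empty(reps)
    for r in range(reps):
        e = rng.standard_normal(n)
        x = np.empty(n)
        x[0] = e[0]
        for t in range(1, n):
            x[t] = phi * x[t - 1] + e[t]   # AR(1) generation
        errs[r] = lag1_autocorr(x) - phi
    return errs.mean(), (errs**2).mean()   # bias, MSE

print(mc_bias_mse(phi=0.3, n=20))          # 20 points: noticeable bias and variance
```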
Abstract:
Type 1 diabetic patients depend on external insulin delivery to keep their blood glucose within near-normal ranges. In this work, two robust closed-loop controllers for blood glucose regulation are developed to prevent life-threatening hypoglycemia, as well as to avoid extended hyperglycemia. The proposed controllers are designed using the sliding mode control technique in a Smith predictor structure. To improve meal disturbance rejection, a simple feedforward controller is added to inject a meal-time insulin bolus. Simulation scenarios were used to test the controllers and showed their ability to maintain glucose levels within safe limits in the presence of errors in measurement, modeling and meal estimation.
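A bare-bones sliding-mode control law of the type mentioned can be sketched as below: define a sliding surface from the glucose error and its derivative, and apply a switching action smoothed by a boundary layer to limit chattering. The gains, units, and saturation limits are illustrative assumptions; the Smith predictor structure and the feedforward meal bolus of the paper are not shown.

```python
# Generic sliding-mode control law: s = de/dt + lam * e, with a saturated
# switching term (boundary layer) instead of sign() to reduce chattering.
import numpy as np

def smc_insulin_rate(error, d_error, lam=0.1, k=2.0, phi=0.5, u_max=5.0):
    """error = glucose - setpoint (mg/dL); returns an insulin rate command."""
    s = d_error + lam * error                # sliding surface
    u = k * np.clip(s / phi, -1.0, 1.0)      # sat(s/phi) in place of sign(s)
    return float(np.clip(u, 0.0, u_max))     # insulin delivery cannot be negative

print(smc_insulin_rate(error=40.0, d_error=1.5))  # above setpoint -> dose insulin
```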
Abstract:
Chromosomal anomalies, such as Robertsonian and reciprocal translocations, represent a significant problem in cattle breeding, as their presence induces a well-documented fertility reduction in carrier subjects. In cattle, reciprocal translocations (RCPs, a chromosome abnormality caused by an exchange of material between nonhomologous chromosomes) are considered rare, as only 19 reciprocal translocations have been described to date. It is common knowledge that Robertsonian translocations represent the most common cytogenetic anomalies in cattle, probably due to the existence of the endemic 1;29 Robertsonian translocation. However, these considerations are based on data obtained using techniques that are unable to identify all reciprocal translocations, so their frequency is clearly underestimated. The purpose of this work is to provide a first realistic estimate of the impact of RCPs in the cattle population studied, trying to eliminate the factors that have caused the underestimation of their frequency so far. We performed this work using a mathematical as well as a simulation approach and, as biological data, we considered the cytogenetic results obtained in the last 15 years. The results show that only 16% of reciprocal translocations can be detected using simple Giemsa techniques, and consequently they could be present in no less than 0.14% of cattle, a frequency five times higher than that shown by de novo Robertsonian translocations. These data are useful for opening a debate about the need to introduce a more efficient method to identify RCPs in cattle.
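In the spirit of the paper's simulation approach, a toy model is sketched below: draw random breakpoints on two nonhomologous chromosomes and call the exchange Giemsa-detectable only when it changes a chromosome's length noticeably. The chromosome sizes, uniform breakpoints, and the 10% detectability threshold are invented for illustration, so the resulting fraction will not match the paper's 16%.

```python
# Toy simulation: what fraction of random reciprocal exchanges change
# chromosome length enough to be seen with plain Giemsa staining?
import numpy as np

rng = np.random.default_rng(4)
reps = 100_000
len_a, len_b = 1.0, 0.6                  # relative chromosome sizes (illustrative)
bp_a = rng.uniform(0, len_a, reps)       # breakpoint on chromosome A
bp_b = rng.uniform(0, len_b, reps)       # breakpoint on chromosome B

# After exchanging distal segments, A's length changes by the difference
# between the two exchanged segments; call it detectable above 10% of A.
delta = np.abs((len_b - bp_b) - (len_a - bp_a))
detectable = delta / len_a > 0.10
print(f"detectable fraction: {detectable.mean():.2f}")
```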
Abstract:
In this paper we propose an endpoint detection system based on several features extracted from each speech frame, followed by a robust classifier (AdaBoost and bagging of decision trees, and a multilayer perceptron) and a finite state automaton (FSA). We compare the use of four different classifiers in this task and present results for each. The FSA module consists of a 4-state decision logic that filters false alarms and false positives. The look-ahead of the proposed method is 7 frames, the number of frames that maximized the accuracy of the system. The system was tested with real signals recorded inside a car, with signal-to-noise ratios ranging from 6 dB to 30 dB. Finally, we present experimental results demonstrating that the system yields robust endpoint detection.
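The decision-logic stage can be illustrated with a small state machine over per-frame classifier outputs: tentative speech must persist for several frames before an onset is declared, and tentative silence must persist before the segment is closed. The state names and the 7-frame thresholds follow the look-ahead in the abstract, but the exact transition rules are assumptions.

```python
# Sketch of a 4-state endpoint-decision FSA over per-frame 0/1 classifier
# decisions; hysteresis in both directions filters isolated false alarms.
SIL, MAYBE_SPEECH, SPEECH, MAYBE_SIL = range(4)

def fsa_filter(frames, on_frames=7, off_frames=7):
    """frames: per-frame speech decisions (0/1) -> smoothed endpoint labels."""
    state, count, out = SIL, 0, []
    for f in frames:
        if state == SIL:
            state, count = (MAYBE_SPEECH, 1) if f else (SIL, 0)
        elif state == MAYBE_SPEECH:           # confirm an onset
            count = count + 1 if f else 0
            if count == 0:
                state = SIL
            elif count >= on_frames:
                state = SPEECH
        elif state == SPEECH:
            state, count = (MAYBE_SIL, 1) if not f else (SPEECH, 0)
        else:                                 # MAYBE_SIL: confirm an ending
            count = count + 1 if not f else 0
            if count == 0:
                state = SPEECH
            elif count >= off_frames:
                state = SIL
        out.append(1 if state in (SPEECH, MAYBE_SIL) else 0)
    return out

# An isolated positive frame is rejected; a sustained run is accepted.
print(fsa_filter([0, 1, 0] + [1] * 10 + [0] * 9))
```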