925 results for Estimation Of Distribution Algorithm
Abstract:
On December 17, the Community standard on marine fuels came into force. SOx emissions along the main shipping routes will increase at a rate of 3 to 4% annually, and most of the sulphur burden will be attributable to shipping activity. The extension of SECAs could therefore be beneficial for improving air quality. This paper begins with a review of the current situation of the SECA and ECA areas, highlighting the rules to be implemented shortly. The aim of the paper is to describe the current bunkering situation and to estimate its short-term evolution in Spain from economic variables.
Abstract:
After the 2010 Haiti earthquake, which hit the city of Port-au-Prince, the capital of Haiti, a multidisciplinary working group of specialists (seismologists, geologists, engineers and architects) from several Spanish universities and from Haiti joined efforts under the SISMO-HAITI project (financed by the Universidad Politecnica de Madrid), with one objective: the evaluation of seismic hazard and risk in Haiti and its application to seismic design, urban planning, and emergency and resource management. In this paper, as a first step towards estimating the structural damage from future earthquakes in the country, a calibration of damage functions has been carried out by means of a two-stage procedure. After compiling a database of the damage observed in the city after the earthquake, the exposure model (building stock) was classified and, through an iterative two-step calibration process, a specific set of damage functions for the country is proposed. Additionally, Next Generation Attenuation (NGA) models and Vs30 models have been analysed to choose the most appropriate ones for the seismic risk estimation in the city. Finally, in a forthcoming paper, these functions will be used to estimate a seismic risk scenario for a future earthquake.
Abstract:
This thesis presents a comprehensive quality control procedure to be applied in photovoltaic plants, covering from the initial phase of energy production estimation to the monitoring of the installation's performance once it is in operation. This protocol reduces the uncertainty associated with the behaviour of photovoltaic plants and increases their long-term reliability, thereby optimizing their performance. The situation of photovoltaic technology has evolved drastically in recent years, making photovoltaic plants capable of producing energy at prices fully competitive with other energy sources. This fact raises the requirements on the performance and reliability of these facilities. To meet this demand, it is necessary to adapt the quality control procedures and to develop new methods able to provide a more complete knowledge of the state of health of the plants, and to maintain surveillance on them over time. In addition, the tight margins in which these installations currently operate require procedures capable of estimating energy production with the lowest possible uncertainty during the design phase.
The quality control procedure presented in this work starts from previous protocols oriented to the commissioning phase of a photovoltaic system and completes them with procedures for the operation phase, paying particular attention to the major problems that arise in photovoltaic plants during their lifetime (hot spots, dust impact, ageing...). It also incorporates a protocol to monitor and analyse the installation's performance directly from its monitoring data, which ranges from checking the validity of the recorded data itself to the detection and diagnosis of failures, and which allows an automated and detailed knowledge of PV plant performance that can be oriented to facilitate the operation and maintenance of the installation, so as to ensure high operational availability of the system. Returning to the initial stage of calculating production expectations, the data recorded in the photovoltaic plants are used to improve the methods for estimating the incident irradiation, which is the component that adds the most uncertainty to the modelling process. The development and implementation of the presented quality control procedure have been carried out in 39 large photovoltaic plants, with a total power of 250 MW, located in different European and Latin American countries.
Abstract:
When many protein sequences are available for estimating the time of divergence between two species, it is customary to estimate the time for each protein separately and then use the average for all proteins as the final estimate. However, it can be shown that this estimate generally has an upward bias, and that an unbiased estimate is obtained by using distances based on concatenated sequences. We have shown that two concatenation-based distances, i.e., average gamma distance weighted with sequence length (d2) and multiprotein gamma distance (d3), generally give more satisfactory results than other concatenation-based distances. Using these two distance measures for 104 protein sequences, we estimated the time of divergence between mice and rats to be approximately 33 million years ago. Similarly, the time of divergence between humans and rodents was estimated to be approximately 96 million years ago. We also investigated the dependency of time estimates on statistical methods and various assumptions made by using sequence data from eubacteria, protists, plants, fungi, and animals. Our best estimates of the times of divergence between eubacteria and eukaryotes, between protists and other eukaryotes, and between plants, fungi, and animals were 3, 1.7, and 1.3 billion years ago, respectively. However, estimates of ancient divergence times are subject to a substantial amount of error caused by uncertainty of the molecular clock, horizontal gene transfer, errors in sequence alignments, etc.
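The upward bias of averaging per-protein time estimates can be sketched with a toy example (ours, using the simple Jukes-Cantor correction rather than the authors' gamma distances d2 and d3): because the distance correction is a convex function of the raw proportion of differences, averaging separately corrected distances exceeds the distance obtained from the pooled, concatenation-style data.

```python
import math

def jc_distance(p):
    """Jukes-Cantor corrected distance from the proportion p of differing sites."""
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

def mean_of_per_gene_distances(ps):
    """The customary procedure: correct each gene separately, then average."""
    return sum(jc_distance(p) for p in ps) / len(ps)

def concatenated_distance(ps, lengths):
    """The concatenation-based procedure: pool the raw differences first."""
    pooled_p = sum(p * n for p, n in zip(ps, lengths)) / sum(lengths)
    return jc_distance(pooled_p)
```

For two genes of equal length with raw difference proportions 0.1 and 0.5, the per-gene average exceeds the concatenated estimate, which is the direction of the bias described above.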
Abstract:
Estimation of evolutionary distances has always been a major issue in the study of molecular evolution because evolutionary distances are required for estimating the rate of evolution in a gene, the divergence dates between genes or organisms, and the relationships among genes or organisms. Other closely related issues are the estimation of the pattern of nucleotide substitution, the estimation of the degree of rate variation among sites in a DNA sequence, and statistical testing of the molecular clock hypothesis. Mathematical treatments of these problems are considerably simplified by the assumption of a stationary process in which the nucleotide compositions of the sequences under study have remained approximately constant over time, and there now exist fairly extensive studies of stationary models of nucleotide substitution, although some problems remain to be solved. Nonstationary models are much more complex, but significant progress has been recently made by the development of the paralinear and LogDet distances. This paper reviews recent studies on the above issues and reports results on correcting the estimation bias of evolutionary distances, the estimation of the pattern of nucleotide substitution, and the estimation of rate variation among the sites in a sequence.
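The paralinear distance mentioned above can be sketched compactly (our implementation of Lake's 1994 formula; function and variable names are ours). Unlike stationary-model distances, it remains consistent when base compositions drift, because it depends only on the determinant of the joint frequency matrix and the marginal frequencies.

```python
import numpy as np

BASES = "ACGT"

def paralinear_distance(seq1, seq2):
    """Paralinear distance between two aligned DNA sequences:
    d = -(1/4) ln[ det(J) / sqrt(prod(dx) * prod(dy)) ],
    where J is the joint base-frequency matrix and dx, dy its marginals."""
    n = len(seq1)
    J = np.zeros((4, 4))
    for a, b in zip(seq1, seq2):
        J[BASES.index(a), BASES.index(b)] += 1.0 / n
    dx = J.sum(axis=1)   # base frequencies in seq1
    dy = J.sum(axis=0)   # base frequencies in seq2
    return -0.25 * np.log(np.linalg.det(J) / np.sqrt(np.prod(dx) * np.prod(dy)))
```

For identical sequences J is diagonal and the distance is zero; any substitution makes it positive.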
Abstract:
We report a previously unappreciated property of the signals that target organelle-specific proteins to their subcellular sites of action. Such targeting sequences are shown to be polymorphic. We discovered this polymorphism when we cloned the mitochondrial manganese-containing superoxide dismutase from cell lines of normal individuals and patients with genetic diseases of premature aging and compared their sequences to each other and to those previously reported. The polymorphism consists of a single nucleotide change in the region of the DNA that encodes the signal sequence such that either an alanine or valine is present. Subsequently, eight cell lines were analyzed and all three possible combinations of the two signal sequences were observed. Such signal sequence polymorphisms could result in diseases of distribution, where essential proteins are not properly targeted, thereby leading to absolute or relative deficiencies of critical enzymes within specific cellular compartments. Progeria and related syndromes may be diseases of distribution.
Abstract:
Context. Chromospheric activity produces both photometric and spectroscopic variations that can be mistaken as planets. Large spots crossing the stellar disc can produce planet-like periodic variations in the light curve of a star. These spots clearly affect the spectral line profiles, and their perturbations alter the line centroids creating a radial velocity jitter that might “contaminate” the variations induced by a planet. Precise chromospheric activity measurements are needed to estimate the activity-induced noise that should be expected for a given star. Aims. We obtain precise chromospheric activity measurements and projected rotational velocities for nearby (d ≤ 25 pc) cool (spectral types F to K) stars, to estimate their expected activity-related jitter. As a complementary objective, we attempt to obtain relationships between fluxes in different activity indicator lines, that permit a transformation of traditional activity indicators, i.e., Ca II H & K lines, to others that hold noteworthy advantages. Methods. We used high resolution (~50 000) echelle optical spectra. Standard data reduction was performed using the IRAF ECHELLE package. To determine the chromospheric emission of the stars in the sample, we used the spectral subtraction technique. We measured the equivalent widths of the chromospheric emission lines in the subtracted spectrum and transformed them into fluxes by applying empirical equivalent width and flux relationships. Rotational velocities were determined using the cross-correlation technique. To infer activity-related radial velocity (RV) jitter, we used empirical relationships between this jitter and the R’_HK index. Results. We measured chromospheric activity, as given by different indicators throughout the optical spectra, and projected rotational velocities for 371 nearby cool stars. We have built empirical relationships among the most important chromospheric emission lines. 
Finally, we used the measured chromospheric activity to estimate the expected RV jitter for the active stars in the sample.
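The spectral subtraction and equivalent-width steps described in the Methods can be sketched as follows (a schematic illustration, ours; the conversion of equivalent widths into fluxes through the empirical relationships, and the IRAF reduction itself, are omitted):

```python
import numpy as np

def subtract_template(flux_active, flux_template):
    """Spectral subtraction: remove the photospheric contribution of an
    inactive template star, leaving the chromospheric excess emission."""
    return np.asarray(flux_active) - np.asarray(flux_template)

def equivalent_width(wavelength, excess_flux):
    """Equivalent width (in wavelength units) of the excess emission,
    integrated with the trapezoidal rule."""
    x = np.asarray(wavelength)
    y = np.asarray(excess_flux)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
```

As a check, a Gaussian emission excess of amplitude 0.5 and sigma 0.5 Å on a unit continuum has an equivalent width of 0.5 · 0.5 · √(2π) ≈ 0.63 Å.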
Abstract:
Communication presented at the VII Symposium Nacional de Reconocimiento de Formas y Análisis de Imágenes (SNRFAI), Barcelona, April 1997.
Abstract:
In this paper we present a study of the computational cost of the GNG3D algorithm for mesh optimization. The algorithm is based on a new neural-network method and consists of two distinct phases: an optimization phase and a reconstruction phase. The optimization phase applies an optimization algorithm based on the Growing Neural Gas model, an unsupervised incremental clustering algorithm. The primary goal of this phase is to obtain a simplified set of vertices representing the best approximation of the original 3D object. In the reconstruction phase we use the information provided by the optimization algorithm to reconstruct the faces, thus obtaining the optimized mesh. The computational cost of both phases is calculated and illustrated with some examples.
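A highly simplified sketch of the optimization phase is given below (ours; a full Growing Neural Gas model also grows new nodes, ages and prunes edges, and accumulates per-node errors, all omitted here). Each input vertex attracts its nearest reference node strongly and the second-nearest weakly, so the node set drifts towards a compact approximation of the vertex cloud.

```python
import numpy as np

rng = np.random.default_rng(1)

def simplify_vertices(points, n_nodes=4, epochs=20,
                      eps_winner=0.1, eps_neighbour=0.01):
    """Adapt a small set of reference nodes towards the input vertices,
    as in the competitive-learning core of the Growing Neural Gas model."""
    points = np.asarray(points, dtype=float)
    nodes = points[rng.choice(len(points), n_nodes, replace=False)].copy()
    for _ in range(epochs):
        for x in points:
            d = np.linalg.norm(nodes - x, axis=1)
            s1, s2 = np.argsort(d)[:2]                  # two nearest nodes
            nodes[s1] += eps_winner * (x - nodes[s1])   # move the winner
            nodes[s2] += eps_neighbour * (x - nodes[s2])  # nudge the runner-up
    return nodes
```

Since every update is a convex step towards a data point, the simplified node set always stays inside the convex hull of the original vertices.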
Abstract:
Information on crop phenology is essential for evaluating crop productivity. In a previous work, we determined phenological stages from remote sensing data using a dynamic system framework and an extended Kalman filter (EKF) approach. In this paper, we demonstrate that the particle filter is a more reliable method than the EKF for inferring any phenological stage, and we discuss the improvements achieved with this approach. In addition, this methodology enables the estimation of key cultivation dates, thus providing a practical product for many applications. The dates of some important stages, such as the sowing date and the day when the crop reaches the panicle initiation stage, have been chosen to show the potential of this technique.
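A minimal bootstrap particle filter of the kind referred to above can be sketched as follows (the scalar state model, drift and noise levels are illustrative assumptions of ours, not the paper's phenology model):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500,
                    process_std=0.05, obs_std=0.1):
    """Bootstrap particle filter for a scalar state that grows
    monotonically, a stand-in for a phenological-stage index."""
    particles = rng.uniform(0.0, 0.1, n_particles)
    estimates = []
    for z in observations:
        # propagate: the stage advances with process noise
        particles = particles + 0.1 + rng.normal(0.0, process_std, n_particles)
        # weight each particle by the Gaussian likelihood of the observation
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        w /= w.sum()
        # multinomial resampling according to the weights
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
        estimates.append(particles.mean())
    return estimates
```

Unlike the EKF, this scheme places no linearity or Gaussianity requirement on the state or observation models, which is what makes it attractive for inferring arbitrary phenological stages.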
Abstract:
The aim of this study was to obtain the exact value of the keratometric index (nkexact) and to clinically validate a variable keratometric index (nkadj) that minimizes this error. Methods: The nkexact value was determined by obtaining differences (DPc) between keratometric corneal power (Pk) and Gaussian corneal power (PGauss) equal to 0. The nkexact was defined as the value associated with an equivalent difference in the magnitude of DPc for extreme values of the posterior corneal radius (r2c) for each anterior corneal radius value (r1c). This nkadj was considered for the calculation of the adjusted corneal power (Pkadj). Values of r1c ∈ (4.2, 8.5) mm and r2c ∈ (3.1, 8.2) mm were considered. Differences of True Net Power with PGauss, Pkadj, and Pk(1.3375) were calculated in a clinical sample of 44 eyes with keratoconus. Results: nkexact ranged from 1.3153 to 1.3396 and nkadj from 1.3190 to 1.3339, depending on the eye model analyzed. All the nkadj values adjusted perfectly to 8 linear algorithms. Differences between Pkadj and PGauss did not exceed ±0.7 D (diopters). Clinically, nk = 1.3375 was not valid in any case. Pkadj and True Net Power, and Pk(1.3375) and Pkadj, were statistically different (P < 0.01), whereas no differences were found between PGauss and Pkadj (P > 0.01). Conclusions: The use of a single value of nk for the calculation of the total corneal power in keratoconus has been shown to be imprecise, leading to inaccuracies in the detection and classification of this corneal condition. Furthermore, our study shows the relevance of corneal thickness in corneal power calculations in keratoconus.
Abstract:
Purpose: The aim of this study was to analyze theoretically the errors in the central corneal power calculation in eyes with keratoconus when a keratometric index (nk) is used, and to clinically confirm the errors induced by this approach. Methods: Differences (DPc) between the central corneal power estimated with the classical nk (Pk) and with the Gaussian equation (PGauss) in eyes with keratoconus were simulated and evaluated theoretically, considering the potential range of variation of the central radius of curvature of the anterior (r1c) and posterior (r2c) corneal surfaces. Further, these differences were also studied in a clinical sample including 44 keratoconic eyes (27 patients, age range: 14–73 years). The clinical agreement between Pk and PGauss (true net power) obtained with a Scheimpflug photography–based topographer was evaluated in these eyes. Results: For nk = 1.3375, an overestimation was observed in most cases in the theoretical simulations, with DPc ranging from an underestimation of 0.1 diopters (D) (r1c = 7.9 mm and r2c = 8.2 mm) to an overestimation of 4.3 D (r1c = 4.7 mm and r2c = 3.1 mm). Clinically, Pk always overestimated the PGauss given by the topography system, in a range between 0.5 and 2.5 D (P < 0.01). The mean clinical DPc was 1.48 D, with limits of agreement of 0.71 and 2.25 D. A very strong statistically significant correlation was found between DPc and r2c (r = −0.93, P < 0.01). Conclusions: The use of a single value of nk for the calculation of corneal power is imprecise in keratoconus and can lead to significant clinical errors.
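The two corneal power models compared in these studies can be sketched with standard textbook formulas (the refractive indices and example radii below are generic assumptions, not the papers' clinical data):

```python
def keratometric_power(r1c, nk=1.3375):
    """Classical keratometric power, Pk = (nk - 1) / r1c,
    with the anterior radius r1c in metres; result in diopters."""
    return (nk - 1.0) / r1c

def gaussian_power(r1c, r2c, cct,
                   n_air=1.0, n_cornea=1.376, n_aqueous=1.336):
    """Gaussian (thick-lens) corneal power from both surfaces:
    P = P1 + P2 - (d / n_cornea) * P1 * P2, lengths in metres."""
    p1 = (n_cornea - n_air) / r1c       # anterior surface power
    p2 = (n_aqueous - n_cornea) / r2c   # posterior surface power (negative)
    return p1 + p2 - (cct / n_cornea) * p1 * p2
```

With typical values (r1c = 7.8 mm, r2c = 6.5 mm, central thickness 550 µm), Pk(1.3375) exceeds the Gaussian power by roughly 1 D, in line with the clinical overestimation range reported above; the thickness term also shows why corneal pachymetry matters in these calculations.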
Abstract:
This paper deals with the estimation of a time-invariant channel spectrum from its own nonuniform samples, assuming there is a bound on the channel’s delay spread. Except for this last assumption, this is the basic estimation problem in systems providing channel spectral samples. However, as shown in the paper, the delay spread bound leads us to view the spectrum as a band-limited signal, rather than the Fourier transform of a tapped delay line (TDL). Using this alternative model, a linear estimator is presented that approximately minimizes the expected root-mean-square (RMS) error for a deterministic channel. Its main advantage over the TDL is that it takes into account the spectrum’s smoothness (time width), thus providing a performance improvement. The proposed estimator is compared numerically with the maximum likelihood (ML) estimator based on a TDL model in pilot-assisted channel estimation (PACE) for OFDM.
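The tapped-delay-line baseline against which the proposed band-limited estimator is compared can be sketched as a least-squares fit to nonuniform spectral samples (grid size, frequencies and delays below are illustrative assumptions of ours):

```python
import numpy as np

rng = np.random.default_rng(7)

def tdl_spectrum_estimate(f_samples, h_samples, tau_max, f_grid, n_taps=5):
    """Least-squares fit of a tapped-delay-line (TDL) channel model
    H(f) = sum_k g_k exp(-j 2 pi f tau_k) to nonuniform spectral
    samples, with the delays tau_k gridded over the delay-spread
    bound [0, tau_max]; returns H evaluated on f_grid."""
    taus = np.linspace(0.0, tau_max, n_taps)
    A = np.exp(-2j * np.pi * np.outer(f_samples, taus))   # model at samples
    g, *_ = np.linalg.lstsq(A, h_samples, rcond=None)     # tap gains
    B = np.exp(-2j * np.pi * np.outer(f_grid, taus))      # model on grid
    return B @ g
```

The paper's point is that this TDL view ignores the spectrum's smoothness; treating the spectrum as a band-limited signal (with bandwidth set by the delay-spread bound) lets the estimator exploit that smoothness and lower the RMS error.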
Abstract:
AIM: To evaluate the prediction error in intraocular lens (IOL) power calculation for a rotationally asymmetric refractive multifocal IOL, and the impact on this error of optimizing the keratometric estimation of the corneal power and the prediction of the effective lens position (ELP). METHODS: Retrospective study including a total of 25 eyes of 13 patients (age, 50 to 83y) with previous cataract surgery and implantation of the Lentis Mplus LS-312 IOL (Oculentis GmbH, Germany). In all cases, an adjusted IOL power (PIOLadj) was calculated based on Gaussian optics, using a variable keratometric index value (nkadj) for the estimation of the corneal power (Pkadj) and a new value for the ELP (ELPadj) obtained by multiple regression analysis. This PIOLadj was compared with the IOL power implanted (PIOLReal) and with the values proposed by three conventional formulas (Haigis, Hoffer Q and Holladay). RESULTS: PIOLReal was not significantly different from PIOLadj or from the Holladay IOL power (P>0.05). In the Bland-Altman analysis, PIOLadj showed a lower mean difference (-0.07 D) and narrower limits of agreement (+1.47 and -1.61 D) against PIOLReal than the IOL power obtained with the Holladay formula. Furthermore, ELPadj was significantly lower than the ELP calculated with the other conventional formulas (P<0.01) and was found to depend on axial length, anterior chamber depth and Pkadj. CONCLUSION: Refractive outcomes after cataract surgery with implantation of the multifocal IOL Lentis Mplus LS-312 can be optimized by minimizing the keratometric error and by estimating the ELP using a mathematical expression dependent on anatomical factors.
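For context, the generic thin-lens vergence relation that links corneal power, ELP and axial length to the IOL power for emmetropia can be sketched as follows (a textbook formula, not the paper's adjusted PIOLadj calculation; the example values are ours):

```python
def iol_power_emmetropia(axial_length_mm, corneal_power_d, elp_mm,
                         n_aqueous=1.336):
    """Thin-lens vergence formula for the IOL power giving emmetropia:
    P = n/(AL - ELP) - n/(n/K - ELP), with lengths in metres,
    corneal power K in diopters, and n the aqueous/vitreous index."""
    n = n_aqueous
    al = axial_length_mm / 1000.0
    elp = elp_mm / 1000.0
    return n / (al - elp) - n / (n / corneal_power_d - elp)
```

The formula makes the paper's two levers explicit: an error in the corneal power K or in the assumed ELP propagates directly into the predicted IOL power, which is why optimizing both (Pkadj and ELPadj) reduces the prediction error.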
Abstract:
The Remez penalty and smoothing algorithm (RPSALG) is a unified framework for penalty and smoothing methods for solving min-max convex semi-infinite programming problems, whose convergence was analyzed in a previous paper by three of the authors. In this paper we consider a partial implementation of RPSALG for solving ordinary convex semi-infinite programming problems. Each iteration of RPSALG involves two types of auxiliary optimization problems: the first consists of obtaining an approximate solution of some discretized convex problem, while the second requires solving a non-convex optimization problem involving the parametric constraints as the objective function, with the parameter as the variable. In this paper we tackle the latter problem with a variant of the cutting angle method called ECAM, a global optimization procedure for solving Lipschitz programming problems. We implement different variants of RPSALG, which are compared with the only publicly available SIP solver, NSIPS, on a battery of test problems.
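The discretization idea behind the first auxiliary problem can be illustrated with a toy convex SIP (our example, not one of the paper's test problems): minimize x subject to x ≥ sin(t) for all t in [0, π], whose optimum is x* = 1. Replacing the infinite index set by a finite grid yields a relaxation whose optimal value is simply the maximum of sin(t) over the grid, a lower bound that improves as the grid is refined.

```python
import math

def discretized_value(grid):
    """Optimal value of the grid-relaxed toy SIP
        minimize x  s.t.  x >= sin(t) for every t in the grid,
    namely max_t sin(t) over the finite grid."""
    return max(math.sin(t) for t in grid)
```

The second auxiliary problem (globally maximizing the constraint over the parameter, handled by ECAM in the paper) is what certifies feasibility of the relaxed solution for the full index set; it is not sketched here.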