37 results for Competing risks, Estimation of predator mortality, Over dispersion, Stochastic modeling
at Universidad Politécnica de Madrid
Abstract:
This paper proposes a new multi-objective estimation of distribution algorithm (EDA) based on joint modeling of objectives and variables. This EDA uses the multi-dimensional Bayesian network as its probabilistic model. In this way it can capture the dependencies between objectives and between variables and objectives, in addition to the dependencies between variables that other Bayesian network-based EDAs learn. This model leads to a problem decomposition that helps the proposed algorithm find better trade-off solutions to the multi-objective problem. In addition to Pareto set approximation, the algorithm is also able to estimate the structure of the multi-objective problem. To apply the algorithm to many-objective problems, it includes four different ranking methods proposed in the literature for this purpose. The algorithm is applied to the set of Walking Fish Group (WFG) problems, and its optimization performance is compared with an evolutionary algorithm and another multi-objective EDA. The experimental results show that the proposed algorithm performs significantly better on many of the problems and for different objective-space dimensions, and achieves comparable results on some of the problems compared with the other algorithms.
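For readers unfamiliar with the EDA loop referred to above, the following minimal Python sketch illustrates the idea of jointly modeling variables and objectives: non-dominated solutions are selected, a joint probabilistic model is fitted over the concatenated (variables, objectives) vectors, and new candidates are sampled from it. A multivariate Gaussian stands in for the multi-dimensional Bayesian network described in the abstract, and the bi-objective test function is a placeholder, not one of the WFG problems.

```python
import numpy as np

def toy_objectives(x):
    # Placeholder bi-objective function (NOT one of the WFG problems).
    return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

def non_dominated(F):
    # Mask of points not dominated by any other point (minimization).
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        for j in range(len(F)):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                keep[i] = False
                break
    return keep

def joint_model_eda(n_var=6, pop=100, gens=40, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 2.0, size=(pop, n_var))
    for _ in range(gens):
        F = np.array([toy_objectives(x) for x in X])
        joint = np.hstack([X, F])                  # joint (variables, objectives) space
        elite = joint[non_dominated(F)]
        if len(elite) < n_var + 2:                 # degenerate front: fall back to all points
            elite = joint
        mu = elite.mean(axis=0)                    # fit the joint probabilistic model
        cov = np.cov(elite, rowvar=False) + 1e-6 * np.eye(joint.shape[1])
        # Sample new candidates and keep only the decision-variable part.
        X = np.clip(rng.multivariate_normal(mu, cov, size=pop)[:, :n_var], 0.0, 2.0)
    F = np.array([toy_objectives(x) for x in X])
    mask = non_dominated(F)
    return X[mask], F[mask]

pareto_X, pareto_F = joint_model_eda()
print(len(pareto_F), "non-dominated solutions found")
```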
Abstract:
Real-time tritium concentrations in air originating from an ITER-like reactor as the source were obtained by coupling the European Centre for Medium-Range Weather Forecasts (ECMWF) numerical model with the Lagrangian atmospheric dispersion model FLEXPART. This ECMWF/FLEXPART tool was analyzed under normal operating conditions in the Western Mediterranean Basin over 45 days in the summer of 2010. Comparison with NORMTRI plumes over the Western Mediterranean Basin showed that the real-time results overestimate the corresponding climatological sequence of tritium concentrations in air at several distances from the reactor. For this comparison, two cloud development patterns were established: the first followed a cyclonic circulation over the Mediterranean Sea, and the second was based on the cloud carried over the interior of the Iberian Peninsula by a stabilized circulation corresponding to a high-pressure system. One of the important remaining activities identified at that point was the qualification of the tool. The aim of this paper is to present the ECMWF/FLEXPART products confronted with tritium concentration in air data. For this purpose, a database to develop and validate the ECMWF/FLEXPART tritium assessments has been selected from a NORMTRI run. Similarities and differences, underestimations and overestimations with respect to NORMTRI, will allow for the refinement of some features of ECMWF/FLEXPART.
Abstract:
The optimum quality that can be asymptotically achieved in the estimation of a probability p using inverse binomial sampling is addressed. A general definition of quality is used in terms of the risk associated with a loss function that satisfies certain assumptions. It is shown that the limit superior of the risk for p asymptotically small has a minimum over all (possibly randomized) estimators. This minimum is achieved by certain non-randomized estimators. The model includes commonly used quality criteria as particular cases. Applications to the non-asymptotic regime are discussed considering specific loss functions, for which minimax estimators are derived.
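As a point of reference for the sampling scheme discussed above, here is a small Monte Carlo sketch of inverse binomial (negative binomial) sampling: Bernoulli trials are observed until a fixed number of successes r is reached, and p is estimated from the number of trials. The estimator (r-1)/(N-1) is the classical unbiased choice for r >= 2 and is only illustrative; it is not the loss-specific minimax estimator derived in the paper, and the loss function below is just one of the quality criteria covered by the general framework.

```python
import numpy as np

def inverse_binomial_estimate(p_true, r, rng):
    """Observe Bernoulli(p_true) trials until r successes; return an estimate of p."""
    n, successes = 0, 0
    while successes < r:
        n += 1
        successes += rng.random() < p_true
    # Classical unbiased estimator under inverse binomial sampling (r >= 2);
    # the paper derives loss-specific minimax estimators instead.
    return (r - 1) / (n - 1) if r > 1 else 1.0 / n

def empirical_risk(p_true=0.01, r=10, runs=2000, seed=0):
    rng = np.random.default_rng(seed)
    est = np.array([inverse_binomial_estimate(p_true, r, rng) for _ in range(runs)])
    # Normalized quadratic loss as an example quality criterion.
    return np.mean(((est - p_true) / p_true) ** 2)

print("empirical normalized MSE:", empirical_risk())
```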
Abstract:
A comprehensive assessment of nitrogen (N) flows at the landscape scale is fundamental to understand spatial interactions in the N cascade and to inform the development of locally optimised N management strategies. To explore these interactions, complete N budgets were estimated for two contrasting hydrological catchments (dominated by agricultural grassland vs. semi-natural peat-dominated moorland), forming part of an intensively studied landscape in southern Scotland. Local scale atmospheric dispersion modelling and detailed farm and field inventories provided high resolution estimations of input fluxes. Direct agricultural inputs (i.e. grazing excreta, N2 fixation, organic and synthetic fertiliser) accounted for most of the catchment N inputs, representing 82% in the grassland and 62% in the moorland catchment, while atmospheric deposition made a significant contribution, particularly in the moorland catchment, where it supplied 38% of the N inputs. The estimated catchment N budgets highlighted areas of key uncertainty, particularly N2 exchange and stream N export. The resulting N balances suggest that the study catchments have a limited capacity to store N within soils, vegetation and groundwater. The "catchment N retention", i.e. the amount of N which is either stored within the catchment or lost through atmospheric emissions, was estimated to be 13% of the net anthropogenic input in the moorland and 61% in the grassland catchment. These values contrast with regional scale estimates: catchment retentions of net anthropogenic input estimated within Europe at the regional scale range from 50% to 90%, with an average of 82% (Billen et al., 2011). This study emphasises the need for detailed budget analyses to identify the N status of European landscapes.
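To make the budget terms concrete, the short sketch below computes a "catchment N retention" fraction from hypothetical input and export fluxes, following the definition used above (N stored within the catchment or lost through atmospheric emissions, expressed as a share of the net anthropogenic input). All numbers are invented for illustration and the balance is deliberately simplified; they are not the values reported for the study catchments.

```python
# Hypothetical fluxes in kg N ha^-1 yr^-1, invented only to illustrate the budget terms.
n_inputs = {"synthetic_fertiliser": 120.0, "grazing_excreta": 60.0,
            "N2_fixation": 15.0, "atmospheric_deposition": 20.0}
stream_n_export = 55.0   # hydrological export leaving the catchment

net_input = sum(n_inputs.values())
# "Retained" N = stored in soils/vegetation/groundwater or re-emitted to the atmosphere,
# i.e. everything that does not leave via the stream in this simplified balance.
retention_fraction = (net_input - stream_n_export) / net_input
print(f"Catchment N retention: {retention_fraction:.0%} of the net input")
```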
Abstract:
One of the most promising areas in which probabilistic graphical models have shown an incipient activity is the field of heuristic optimization and, in particular, Estimation of Distribution Algorithms. Due to their inherent parallelism, different research lines have been studied trying to improve Estimation of Distribution Algorithms from the point of view of execution time and/or accuracy. Among these proposals, we focus on the so-called distributed or island-based models. This approach defines several islands (algorithm instances) running independently and exchanging information with a given frequency. The information sent by the islands can be either a set of individuals or a probabilistic model. This paper presents a comparative study of a distributed univariate Estimation of Distribution Algorithm and a multivariate version, paying special attention to the comparison of two alternative methods for exchanging information, over a wide set of parameters and problems: the standard benchmark developed for the IEEE Workshop on Evolutionary Algorithms and other Metaheuristics for Continuous Optimization Problems of the ISDA 2009 Conference. Several analyses from different points of view have been conducted to study both the influence of the parameters and the relationships between them, including a characterization of the configurations according to their behavior on the proposed benchmark.
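A compact sketch of the island model compared above is given below: several univariate Gaussian EDA (UMDA-style) islands evolve independently and, every few generations, exchange either their best individuals or their probabilistic-model parameters. The test function, ring topology and mixing rules are simplifications chosen for brevity; they are not the ISDA 2009 benchmark or the paper's exact migration policy.

```python
import numpy as np

def sphere(x):
    return float(np.sum(np.asarray(x) ** 2))

def island_umda(n_islands=4, n_var=10, pop=50, gens=100,
                exchange_every=10, exchange="individuals", seed=0):
    """Toy island-based EDA with univariate Gaussian models (UMDA-style).

    exchange="individuals": each island pulls its model mean towards the best
    solution of its ring neighbour (a crude stand-in for migration).
    exchange="model": islands move their model means towards the average model.
    """
    rng = np.random.default_rng(seed)
    mus = [rng.uniform(-5.0, 5.0, n_var) for _ in range(n_islands)]
    sigmas = [np.full(n_var, 2.0) for _ in range(n_islands)]
    best = [mu.copy() for mu in mus]
    for g in range(gens):
        for i in range(n_islands):
            X = rng.normal(mus[i], sigmas[i], size=(pop, n_var))
            f = np.array([sphere(x) for x in X])
            sel = X[np.argsort(f)[: pop // 2]]               # truncation selection
            mus[i], sigmas[i] = sel.mean(axis=0), sel.std(axis=0) + 1e-8
            best[i] = sel[0]
        if (g + 1) % exchange_every == 0:
            if exchange == "individuals":                     # ring migration of best solutions
                mus = [0.5 * (mus[i] + best[(i - 1) % n_islands])
                       for i in range(n_islands)]
            else:                                             # exchange of probabilistic models
                mu_avg = np.mean(mus, axis=0)
                mus = [0.5 * (m + mu_avg) for m in mus]
    return min(best, key=sphere)

print("best fitness:", sphere(island_umda(exchange="model")))
```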
Abstract:
Esta tesis doctoral presenta un procedimiento integral de control de calidad en centrales fotovoltaicas, que comprende desde la fase inicial de estimación de las expectativas de producción hasta la vigilancia del funcionamiento de la instalación una vez en operación, y que permite reducir la incertidumbre asociada a su comportamiento y aumentar su fiabilidad a largo plazo, optimizando su funcionamiento. La coyuntura de la tecnología fotovoltaica ha evolucionado enormemente en los últimos años, haciendo que las centrales fotovoltaicas sean capaces de producir energía a unos precios totalmente competitivos en relación con otras fuentes de energía. Esto hace que aumente la exigencia sobre el funcionamiento y la fiabilidad de estas instalaciones. Para cumplir con dicha exigencia, es necesaria la adecuación de los procedimientos de control de calidad aplicados, así como el desarrollo de nuevos métodos que deriven en un conocimiento más completo del estado de las centrales, y que permitan mantener la vigilancia sobre las mismas a lo largo del tiempo. Además, los ajustados márgenes de explotación actuales requieren que durante la fase de diseño se disponga de métodos de estimación de la producción que comporten la menor incertidumbre posible. La propuesta de control de calidad presentada en este trabajo parte de protocolos anteriores orientados a la fase de puesta en marcha de una instalación fotovoltaica, y los complementa con métodos aplicables a la fase de operación, prestando especial atención a los principales problemas que aparecen en las centrales a lo largo de su vida útil (puntos calientes, impacto de la suciedad, envejecimiento…). Además, incorpora un protocolo de vigilancia y análisis del funcionamiento de las instalaciones a partir de sus datos de monitorización, que incluye desde la comprobación de la validez de los propios datos registrados hasta la detección y el diagnóstico de fallos, y que permite un conocimiento automatizado y detallado de las plantas. Dicho procedimiento está orientado a facilitar las tareas de operación y mantenimiento, de manera que se garantice una alta disponibilidad de funcionamiento de la instalación. De vuelta a la fase inicial de cálculo de las expectativas de producción, se utilizan los datos registrados en las centrales para llevar a cabo una mejora de los métodos de estimación de la radiación, que es la componente que más incertidumbre añade al proceso de modelado. El desarrollo y la aplicación de este procedimiento de control de calidad se han llevado a cabo en 39 grandes centrales fotovoltaicas, que totalizan una potencia de 250 MW, distribuidas por varios países de Europa y América Latina.
ABSTRACT This thesis presents a comprehensive quality control procedure to be applied in photovoltaic plants, covering from the initial phase of energy production estimation to the monitoring of the installation performance once it is in operation. This protocol reduces the uncertainty associated with the behaviour of photovoltaic plants and increases their long-term reliability, thereby optimizing their performance. The situation of photovoltaic technology has evolved drastically in recent years, making photovoltaic plants capable of producing energy at prices fully competitive with other energy sources. This fact increases the requirements on the performance and reliability of these facilities. To meet this demand, it is necessary to adapt the quality control procedures and to develop new methods able to provide a more complete knowledge of the state of health of the plants and to maintain surveillance on them over time. In addition, the current narrow margins in which these installations operate require procedures capable of estimating energy production with the lowest possible uncertainty during the design phase. The quality control procedure presented in this work starts from previous protocols oriented to the commissioning phase of a photovoltaic system and completes them with procedures for the operation phase, paying particular attention to the major problems that arise in photovoltaic plants during their lifetime (hot spots, dust impact, ageing...). It also incorporates a protocol to control and analyse the installation performance directly from its monitoring data, which ranges from checking the validity of the recorded data to the detection and diagnosis of failures, and which allows an automated and detailed knowledge of PV plant performance. This procedure is oriented to facilitate the operation and maintenance of the installation, so as to ensure a high operational availability of the system. Returning to the initial stage of calculating production expectations, the data recorded in the photovoltaic plants are used to improve the methods for estimating the incident irradiation, which is the component that adds the most uncertainty to the modelling process. The development and implementation of the presented quality control procedure have been carried out in 39 large photovoltaic plants, with a total power of 250 MW, located in different European and Latin-American countries.
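As an illustration of the kind of automated monitoring-data check such a surveillance protocol relies on, the sketch below flags days with an anomalously low performance ratio (delivered energy relative to incident irradiation). The formula is the standard performance-ratio definition; the plant size, threshold and daily data are hypothetical and do not come from the thesis.

```python
import numpy as np

# Minimal daily check of PV monitoring data using the performance ratio
# PR = (E_AC / P_nom) / (H_poa / G_STC); plant size, threshold and data are illustrative.
G_STC = 1.0            # kW/m^2, irradiance at standard test conditions
P_NOM = 1000.0         # kWp, hypothetical nominal plant power
PR_MIN = 0.70          # flag days with suspiciously low performance

def daily_performance_ratio(e_ac_kwh, h_poa_kwh_m2):
    reference_yield = h_poa_kwh_m2 / G_STC   # equivalent sun-hours
    final_yield = e_ac_kwh / P_NOM           # kWh delivered per kWp installed
    return final_yield / reference_yield

energy = np.array([4100.0, 4350.0, 1500.0, 4200.0])   # kWh/day (hypothetical)
irradiation = np.array([5.4, 5.7, 5.5, 5.6])          # kWh/m^2/day (hypothetical)
for day, pr in enumerate(daily_performance_ratio(energy, irradiation), start=1):
    status = "OK" if pr >= PR_MIN else "check plant (possible fault or bad data)"
    print(f"day {day}: PR = {pr:.2f} -> {status}")
```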
Abstract:
Triple-Play (3P) and Quadruple-Play (4P) services are being widely offered by telecommunication service providers. Such services must be able to offer quality levels equal to or higher than those obtained with traditional systems, especially for the most demanding services such as broadcast IPTV. This paper presents a matrix-based model, defined in terms of service components, user perceptions, agent capabilities, performance indicators and evaluation functions, which makes it possible to estimate the overall quality of a set of convergent services, as perceived by the users, from a set of performance and/or Quality of Service (QoS) parameters of the convergent IP transport network.
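The abstract describes the model only in general terms, so here is a small hypothetical example of a matrix-based evaluation: normalized network performance indicators are mapped through one matrix to user-perception dimensions and through a second matrix to per-service quality scores. The indicator names, matrices and linear evaluation functions are invented for illustration and are not the calibration proposed in the paper.

```python
import numpy as np

# Hypothetical example: 3 QoS/performance indicators measured on the transport
# network, 2 user-perception dimensions, 2 convergent services (IPTV, VoIP).
qos = np.array([0.9, 0.7, 0.95])        # normalized indicators: e.g. loss, jitter, availability

# Rows: perception dimensions (e.g. picture quality, interactivity);
# columns: contribution of each performance indicator to that perception.
indicator_to_perception = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.6, 0.2],
])

# Rows: services; columns: weight of each perception dimension in that service.
perception_to_service = np.array([
    [0.8, 0.2],     # broadcast IPTV: dominated by picture quality
    [0.3, 0.7],     # VoIP: dominated by interactivity
])

perceptions = indicator_to_perception @ qos       # evaluation functions (here, linear)
service_quality = perception_to_service @ perceptions
print(dict(zip(["IPTV", "VoIP"], service_quality.round(3))))
```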
Abstract:
This paper describes new approaches to improve the local and global approximation (matching) and modeling capability of the Takagi–Sugeno (T-S) fuzzy model. The main aim is to obtain high function-approximation accuracy and fast convergence. The main problem encountered is that the T-S identification method cannot be applied when the membership functions are overlapped by pairs. This restricts the application of the T-S method, because this type of membership function has been widely used during the last two decades in the stability analysis and controller design of fuzzy systems and is popular in industrial control applications. The approach developed here can be considered a generalized version of the T-S identification method with optimized performance in approximating nonlinear functions. We propose a noniterative method based on a weighting-of-parameters approach and an iterative algorithm that applies the extended Kalman filter, based on the same idea of parameter weighting. We show that the Kalman filter is an effective tool in the identification of the T-S fuzzy model. A fuzzy controller based on a linear quadratic regulator is proposed in order to show the effectiveness of the estimation method developed here in control applications. An illustrative example of an inverted pendulum is chosen to evaluate the robustness and the remarkable performance of the proposed method, locally and globally, in comparison with the original T-S model. Simulation results indicate the potential, simplicity, and generality of the algorithm. We also prove that these algorithms converge very fast, thereby making them very practical to use.
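To make the structure of a T-S model concrete, the sketch below builds a simple single-input T-S model with Gaussian membership functions that overlap by pairs and identifies the linear consequents by a global least-squares fit. This is a stand-in for the paper's weighting-of-parameters and extended-Kalman-filter methods, which are not reproduced here; the test function and rule centers are illustrative.

```python
import numpy as np

def memberships(x, centers, width):
    # Gaussian membership functions that overlap by pairs, normalized firing strengths.
    w = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)
    return w / w.sum(axis=1, keepdims=True)

def identify_ts_consequents(x, y, centers, width):
    """Global least-squares fit of the local linear consequents y_i = a_i * x + b_i."""
    W = memberships(x, centers, width)               # (N, rules)
    Phi = np.hstack([W * x[:, None], W])             # regressor: [w_i*x, w_i] per rule
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta

def ts_predict(x, theta, centers, width):
    W = memberships(x, centers, width)
    Phi = np.hstack([W * x[:, None], W])
    return Phi @ theta

# Approximate a nonlinear function with a 3-rule T-S model.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 200))
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)
centers, width = np.array([-2.0, 0.0, 2.0]), 1.2
theta = identify_ts_consequents(x, y, centers, width)
print("max abs error:", np.max(np.abs(ts_predict(x, theta, centers, width) - y)))
```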
Abstract:
The aim of this paper was to accurately estimate the local truncation error of partial differential equations that are numerically solved using a finite difference or finite volume approach on structured and unstructured meshes. In this work, we approximated the local truncation error using the τ-estimation procedure, which compares the residuals on a sequence of grids with different spacing. First, we focused the analysis on one-dimensional scalar linear and non-linear test cases to examine the accuracy of the estimation of the truncation error for both finite difference and finite volume approaches on different grid topologies. Then, we extended the analysis to two-dimensional problems: first to linear and non-linear scalar equations and finally to the Euler equations. We demonstrated that this approach yields a highly accurate estimation of the truncation error if some conditions are fulfilled. These conditions are related to the accuracy of the restriction operators, the choice of the boundary conditions, the distortion of the grids and the magnitude of the iteration error.
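The following sketch illustrates the τ-estimation idea on a one-dimensional Poisson model problem: the equation is solved on a fine grid, the solution is restricted (here by injection) to a coarser grid, and the coarse-grid residual of that restricted solution is taken as an estimate of the coarse-grid truncation error (up to a factor that depends on the order of the scheme). The discretization, restriction operator and test problem are deliberately simple stand-ins for the finite-difference/finite-volume settings studied in the paper.

```python
import numpy as np

def residual(u, f, h):
    """Interior residual of the centered second-order discretization of u'' = f."""
    r = np.zeros_like(u)
    r[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 - f[1:-1]
    return r

def tau_estimate(n_fine=129):
    # Model problem u'' = f on [0, 1] with u(0) = u(1) = 0 and known exact solution.
    x_f = np.linspace(0.0, 1.0, n_fine)
    h_f = x_f[1] - x_f[0]
    u_exact = np.sin(np.pi * x_f)
    f = -np.pi**2 * np.sin(np.pi * x_f)

    # Solve on the fine grid (direct solve of the tridiagonal system).
    n_i = n_fine - 2
    A = (np.diag(-2.0 * np.ones(n_i)) + np.diag(np.ones(n_i - 1), 1)
         + np.diag(np.ones(n_i - 1), -1)) / h_f**2
    u_f = np.zeros(n_fine)
    u_f[1:-1] = np.linalg.solve(A, f[1:-1])

    # Restrict (injection) to the coarse grid and evaluate the coarse-grid residual:
    # tau ~ R_2h(I_h^2h u_h), used as the truncation-error estimate.
    u_c, f_c = u_f[::2], f[::2]
    h_c = 2.0 * h_f
    tau = residual(u_c, f_c, h_c)

    # Reference: truncation error of the coarse operator evaluated on the exact solution.
    tau_exact = residual(u_exact[::2], f_c, h_c)
    return np.max(np.abs(tau[1:-1])), np.max(np.abs(tau_exact[1:-1]))

print("estimated vs reference truncation error:", tau_estimate())
```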
Abstract:
In this paper, we seek to expand the use of direct methods in real-time applications by proposing a vision-based strategy for pose estimation of aerial vehicles. The vast majority of approaches make use of features to estimate motion. Conversely, the strategy we propose is based on a multi-resolution (MR) implementation of an image registration technique (Inverse Compositional Image Alignment, ICIA) using direct methods. An on-board camera in a downwards-looking configuration, and the assumption of planar scenes, are the bases of the algorithm. The motion between frames (rotation and translation) is recovered by decomposing the frame-to-frame homography obtained by the ICIA algorithm applied to a patch that covers around 80% of the image. When the visual estimation is required (e.g. GPS drop-out), this motion is integrated with the previously known estimate of the vehicle's state, obtained from the on-board sensors (GPS/IMU), and the subsequent estimates are based only on the vision-based motion estimation. The proposed strategy is tested with real flight data in representative stages of a flight: cruise, landing, and take-off, two of which (take-off and landing) are considered critical. The performance of the pose estimation strategy is analyzed by comparing it with the GPS/IMU estimates. Results show correlation between the visual estimation obtained with the MR-ICIA and the GPS/IMU data, which demonstrates that the visual estimation can be used to provide a good approximation of the vehicle's state when required (e.g. during GPS drop-outs). In terms of performance, the proposed strategy is able to maintain an estimate of the vehicle's state for more than one minute, at real-time frame rates, based only on visual information.
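The homography decomposition step mentioned above can be illustrated with OpenCV, which provides a decomposition returning candidate rotation/translation/plane-normal triplets. The sketch below estimates the homography from synthetic point matches rather than with the MR-ICIA direct method used in the paper, and the camera intrinsics and correspondences are hypothetical, included only to exercise the call.

```python
import numpy as np
import cv2

def motion_from_homography(pts_prev, pts_curr, K):
    """Recover candidate frame-to-frame rotations/translations from a planar homography.

    Only the decomposition step is sketched here; the paper obtains the homography
    with multi-resolution Inverse Compositional Image Alignment (ICIA) over a large
    image patch instead of point matches.
    """
    H, _ = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC)
    # Up to four (R, t, n) candidates; plane-visibility constraints and the
    # downward-looking camera assumption select the physically valid one.
    n_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return rotations, translations, normals

# Hypothetical intrinsics and synthetic correspondences (pure image-plane shift).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
rng = np.random.default_rng(0)
pts_prev = rng.uniform(0, 640, size=(20, 2)).astype(np.float32)
pts_curr = pts_prev + np.float32([5.0, -3.0])
Rs, ts, ns = motion_from_homography(pts_prev, pts_curr, K)
print(len(Rs), "candidate decompositions")
```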
Abstract:
Background: Most aerial plant parts are covered with a hydrophobic lipid-rich cuticle, which is the interface between the plant organs and the surrounding environment. Plant surfaces may have a high degree of hydrophobicity because of the combined effects of surface chemistry and roughness. The physical and chemical complexity of the plant cuticle limits the development of models that explain its internal structure and interactions with surface-applied agrochemicals. In this article we introduce a thermodynamic method for estimating the solubilities of model plant surface constituents and relating them to the effects of agrochemicals. Results: Following the van Krevelen and Hoftyzer method, we calculated the solubility parameters of three model plant species and eight compounds that differ in hydrophobicity and polarity. In addition, intact tissues were examined by scanning electron microscopy and the surface free energy, polarity, solubility parameter and work of adhesion of each were calculated from contact angle measurements of three liquids with different polarities. By comparing the affinities between plant surface constituents and agrochemicals derived from (a) theoretical calculations and (b) contact angle measurements we were able to distinguish the physical effect of surface roughness from the effect of the chemical nature of the epicuticular waxes. A solubility parameter model for plant surfaces is proposed on the basis of an increasing gradient from the cuticular surface towards the underlying cell wall. Conclusions: The procedure enabled us to predict the interactions among agrochemicals, plant surfaces, and cuticular and cell wall components, and promises to be a useful tool for improving our understanding of biological surface interactions.
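For readers unfamiliar with the group-contribution step, the sketch below applies the Hoftyzer–van Krevelen relations (delta_d = sum(F_d)/V, delta_p = sqrt(sum(F_p^2))/V, delta_h = sqrt(sum(E_h)/V)) to a long-chain alcohol taken as a rough stand-in for a cuticular wax constituent. The group values are approximate literature numbers included only to show the calculation; they are not the parameters reported in the article.

```python
import math

# Approximate Hoftyzer-van Krevelen group contributions (illustrative values only).
# Units: F in J^0.5 cm^1.5 mol^-1, E_h in J mol^-1, V in cm^3 mol^-1.
GROUPS = {
    "CH3": {"Fd": 420.0, "Fp": 0.0,   "Eh": 0.0,     "V": 33.5},
    "CH2": {"Fd": 270.0, "Fp": 0.0,   "Eh": 0.0,     "V": 16.1},
    "OH":  {"Fd": 210.0, "Fp": 500.0, "Eh": 20000.0, "V": 10.0},
}

def hansen_parameters(composition):
    """composition: dict mapping group name -> count; returns (dd, dp, dh, total) in MPa^0.5."""
    V = sum(GROUPS[g]["V"] * n for g, n in composition.items())
    Fd = sum(GROUPS[g]["Fd"] * n for g, n in composition.items())
    Fp2 = sum((GROUPS[g]["Fp"] ** 2) * n for g, n in composition.items())
    Eh = sum(GROUPS[g]["Eh"] * n for g, n in composition.items())
    dd, dp, dh = Fd / V, math.sqrt(Fp2) / V, math.sqrt(Eh / V)
    return dd, dp, dh, math.sqrt(dd**2 + dp**2 + dh**2)

# 1-octadecanol as a rough stand-in for a cuticular wax constituent.
print(hansen_parameters({"CH3": 1, "CH2": 17, "OH": 1}))
```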
Abstract:
Evolutionary search algorithms have become an essential asset in the algorithmic toolbox for solving high-dimensional optimization problems across a broad range of bioinformatics applications. Genetic algorithms, the best-known and most representative evolutionary search technique, have been the subject of the major part of such applications. Estimation of distribution algorithms (EDAs) offer a novel evolutionary paradigm that constitutes a natural and attractive alternative to genetic algorithms. They make use of a probabilistic model, learnt from the promising solutions, to guide the search process. In this paper, we set out a basic taxonomy of EDA techniques, underlining the nature and complexity of the probabilistic model of each EDA variant. We review a set of innovative works that make use of EDA techniques to solve challenging bioinformatics problems, emphasizing the EDA paradigm's potential for further research in this domain.
Abstract:
In this paper we investigate whether conventional text categorization methods may suffice to infer different verbal intelligence levels. This research goal relies on the hypothesis that the vocabulary speakers make use of reflects their verbal intelligence level. Automatic verbal intelligence estimation of users in a spoken language dialog system may be useful when defining an optimal dialog strategy by improving its adaptation capabilities. The work is based on a corpus containing descriptions (i.e. monologs) of a short film by test persons with different educational backgrounds, together with the verbal intelligence scores of the speakers. First, a one-way analysis of variance was performed to compare the monologs with the film transcription and to demonstrate that there are differences in the vocabulary used by test persons with different verbal intelligence levels. Then, for the classification task, the monologs were represented as feature vectors using the classical TF–IDF weighting scheme. The Naive Bayes, k-nearest neighbors and Rocchio classifiers were tested. In this paper we describe and compare these classification approaches, define the optimal classification parameters and discuss the classification results obtained.
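A minimal scikit-learn sketch of the classification pipeline described above is given below: TF-IDF vectors feed a Naive Bayes, a k-nearest-neighbours and a nearest-centroid (Rocchio-style) classifier. The toy texts and labels are invented; the study's corpus of film-description monologs and its evaluation protocol are not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
from sklearn.pipeline import make_pipeline

# Toy monolog snippets with invented labels, only to show the TF-IDF + classifier setup.
texts = [
    "the protagonist reflects on the consequences of his decision",
    "then the guy does the thing and it ends",
    "an elaborate sequence of events culminates in an ambiguous resolution",
    "he walks and then stuff happens and that is it",
]
labels = ["high", "low", "high", "low"]

for name, clf in [("Naive Bayes", MultinomialNB()),
                  ("k-NN", KNeighborsClassifier(n_neighbors=1)),
                  ("Rocchio/centroid", NearestCentroid())]:
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(texts, labels)
    print(name, model.predict(["a nuanced account of the narrative arc"]))
```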
Abstract:
An increasing number of neuroimaging studies are concerned with the identification of interactions or statistical dependencies between brain areas. Dependencies between the activities of different brain regions can be quantified with functional connectivity measures such as the cross-correlation coefficient. An important factor limiting the accuracy of such measures is the amount of empirical data available. For event-related protocols, the amount of data also affects the temporal resolution of the analysis. We use analytical expressions to calculate the amount of empirical data needed to establish whether a certain level of dependency is significant when the time series are autocorrelated, as is the case for biological signals. These analytical results are then contrasted with estimates from simulations based on real data recorded with magnetoencephalography during a resting-state paradigm and during the presentation of visual stimuli. Results indicate that, for broadband signals, 50–100 s of data is required to detect a true underlying cross-correlation coefficient of 0.05. This corresponds to a resolution of a few hundred milliseconds for typical event-related recordings. The required time window increases for narrow-band signals as frequency decreases. For instance, approximately 3 times as much data is necessary for signals in the alpha band. Important implications can be derived for the design and interpretation of experiments to characterize weak interactions, which are potentially important for brain processing.
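To give a feel for the kind of sample-size calculation involved, the sketch below uses the standard Fisher z approximation for detecting a correlation, inflated by an AR(1)-style correction for autocorrelated time series. This is a textbook approximation, not the analytical expressions derived in the paper, and the lag-1 autocorrelation coefficients are illustrative.

```python
import numpy as np
from scipy.stats import norm

def required_samples(rho=0.05, alpha=0.05, power=0.8, ac1=0.0, ac2=0.0):
    """Independent samples needed to detect correlation rho via the Fisher z-test,
    inflated by an AR(1)-style variance correction for autocorrelated signals.
    """
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n_indep = ((z_a + z_b) / np.arctanh(rho)) ** 2 + 3
    # Variance inflation for two AR(1) processes with lag-1 coefficients ac1, ac2.
    inflation = (1 + ac1 * ac2) / (1 - ac1 * ac2)
    return int(np.ceil(n_indep * inflation))

# Example: rho = 0.05, weak autocorrelation (broadband-like) vs. strong (narrow-band-like).
for a in (0.0, 0.9):
    print(f"lag-1 autocorrelation {a}: ~{required_samples(rho=0.05, ac1=a, ac2=a)} samples")
```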