920 results for Estimation Of Distribution Algorithms
Abstract:
In marginal lands, Opuntia ficus-indica (OFI) could be used as an alternative fruit and forage crop. Plant vigour and biomass production were evaluated in Portuguese germplasm (15 individuals from 16 ecotypes) by non-destructive methods, two years after planting in a marginal soil under dryland conditions. Two Italian cultivars (Gialla and Bianca) were included in the study for comparison. Biomass production and plant vigour were estimated by measuring cladode number and area, and the fresh (FW) and dry weight (DW) per plant. Linear models were selected, using biometric data from 60 cladodes, to predict the cladode area and the FW and DW per plant. Significant differences among ecotypes were found in the studied biomass-related parameters, and several homogeneous groups were established. Four Portuguese ecotypes had higher biomass production than the others, 3.20 Mg ha−1 on average, a value not significantly different from that of the improved ‘Gialla’ cultivar, which averaged 3.87 Mg ha−1. Those ecotypes could be used to start a breeding programme and to deploy material for animal feeding and fruit production.
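As an illustration of the linear-model step described above, here is a minimal sketch with hypothetical biometric predictors (cladode length and width); the study's actual predictors and data are not reproduced:

```python
# Minimal sketch: ordinary least squares predicting cladode area from
# hypothetical biometric predictors (length and width, in cm). Synthetic data
# stand in for the 60 measured cladodes of the study.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform([15, 8], [40, 25], size=(60, 2))          # [length_cm, width_cm]
area = 0.7 * X[:, 0] * X[:, 1] + rng.normal(0, 5, 60)     # measured area (cm^2)

# area ~ b0 + b1*length + b2*width
A = np.column_stack([np.ones(60), X])
coef, *_ = np.linalg.lstsq(A, area, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
print(f"coefficients: {coef}, R^2 = {r2:.3f}")
```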
Abstract:
The aim was to verify the correlation between follicular population count, superovulatory response, and the recovery of viable structures in the in vivo production of sheep embryos. In conclusion, there is a moderate correlation between the follicular population observed by ultrasonography and the viable structures recovered after the superovulation protocol. Therefore, this tool alone is not indicated as a screening tool in the selection of Santa Inês sheep embryo donors.
Abstract:
The goal of this thesis is the estimation of the ephemerides of the Saturnian satellites using optical data from Cassini. In the first part we describe the software employed for the reduction of the images, presenting its main features and the accuracy that can be achieved, by comparing the results with published astrometry. Afterwards we describe the orbit determination problem (ODP), with particular focus on the selection of weights for the estimation process. The third chapter describes the dynamical model used and the sources of potential errors in the residuals. The model has been validated by attempting to replicate JPL's published ephemerides SAT365, SAT375, SAT389 and SAT409. The final part investigates the residuals and the estimated ephemerides, with particular focus on the giant moon Titan, the only moon in the solar system with a dense atmosphere. No astrometry of Titan derived from optical observables has been found in the literature, so this represents one of the first such investigations of this moon.
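The weight selection mentioned above enters the ODP through the usual weighted least-squares correction; in generic notation (not necessarily the thesis's own):

```latex
% Generic weighted least-squares correction in orbit determination:
\hat{\mathbf{x}} = \left( H^{\top} W H \right)^{-1} H^{\top} W \, \boldsymbol{\xi},
\qquad
W = \mathrm{diag}\!\left( 1/\sigma_1^{2}, \ldots, 1/\sigma_m^{2} \right)
```

where H collects the partial derivatives of the observables with respect to the estimated parameters, ξ is the vector of observation residuals, and σ_i is the accuracy assumed for the i-th astrometric observation.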
Abstract:
Jupiter and its moons are a complex dynamical system that includes several phenomena such as tidal interactions, the moons' librations, and resonances. One of the most interesting characteristics of the Jovian system is the Laplace resonance, in which the orbital periods of Ganymede, Europa and Io maintain a 4:2:1 ratio. It is interesting to study the role of the Laplace resonance in the dynamics of the system, especially regarding the dissipative nature of the tidal interaction between Jupiter and its closest moon, Io. Numerous theories have been proposed regarding the orbital evolution of the Galilean satellites, but they disagree about the amount of dissipation in the system, and therefore about the magnitude and direction of its evolution, mainly because of the lack of experimental data. The future JUICE space mission is a great opportunity to settle this dispute. JUICE is an ESA (European Space Agency) L-class mission (the largest category of missions in the ESA Cosmic Vision) that will be inserted into the Jovian system at the beginning of the 2030s and will perform several flybys of the Galilean satellites, with the exception of Io. Subsequently, during the last part of the mission, it will orbit Ganymede for nine months, with a possible extension. The data that JUICE will collect will have exceptional accuracy, allowing several aspects of the dynamics of the system to be investigated, especially the evolution of the Laplace resonance of the Galilean moons and its stability. This thesis focuses on the JUICE mission, in particular on the gravity estimation and orbit reconstruction of the Galilean satellites during the Jovian orbital phase using radiometric data. This is accomplished through an orbit determination technique called the multi-arc approach, using JPL's orbit determination software MONTE (Mission-analysis, Operations and Navigation Tool-kit Environment).
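Schematically, a multi-arc estimation separates global parameters g, shared by all tracking arcs (e.g. the moons' ephemerides and gravity fields), from local parameters ℓ_k describing the spacecraft state in arc k, minimizing a cost of the form (illustrative notation, not MONTE's own):

```latex
% Schematic multi-arc least-squares cost over N tracking arcs:
J(\mathbf{g}, \boldsymbol{\ell}_1, \ldots, \boldsymbol{\ell}_N)
  = \sum_{k=1}^{N}
    \boldsymbol{\xi}_k(\mathbf{g}, \boldsymbol{\ell}_k)^{\top}
    W_k \,
    \boldsymbol{\xi}_k(\mathbf{g}, \boldsymbol{\ell}_k)
```

where ξ_k are the radiometric residuals of arc k and W_k the corresponding weight matrix.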
Abstract:
The objective of this thesis is the small area estimation of an economic security indicator. Economic security is a complex concept that carries a variety of meanings. In the literature there is no formal, unambiguous definition of economic security, and in this work we refer to the definition recently provided for its opposite, economic insecurity, as the “anxiety produced by the possible exposure to adverse economic events and by the anticipation of the difficulty to recover from them” (Bossert and D’Ambrosio, 2013). In the last decade interest in economic insecurity/security has grown steadily, especially since the financial crisis of 2008, and even more in the last year owing to the economic consequences of the Covid-19 pandemic. In this research, economic security is measured through a longitudinal indicator that takes into account the income levels of Italian households from 2014 to 2016. The target areas are groups of Italian provinces, for which the indicator is estimated using longitudinal data from the EU-SILC survey. Since the sample size is too small to obtain reliable direct estimates for our target areas, we resort to Small Area Estimation strategies to improve the reliability of the results. In particular, we consider small area models specified at the area level. Besides the basic Fay-Herriot area-level model, we propose longitudinal extensions that include time-specific random effects following an autoregressive process of order 1 (AR(1)) or a moving average process of order 1 (MA(1)). We find that all the small area models used show a significant efficiency gain, especially the MA(1) model.
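For reference, the basic Fay-Herriot model and the kind of longitudinal extension considered can be written as follows (a standard formulation with an AR(1) area-by-time effect; the MA(1) variant replaces the autoregressive process with a first-order moving average; the thesis's exact specification may differ):

```latex
% Basic Fay-Herriot area-level model for area i:
\hat{\theta}_i = \theta_i + e_i, \qquad
\theta_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + u_i,
\qquad u_i \sim N(0, \sigma_u^2), \quad
e_i \sim N(0, \psi_i) \ \text{(known } \psi_i\text{)}

% Longitudinal extension with an AR(1) area-by-time effect:
\theta_{it} = \mathbf{x}_{it}^{\top}\boldsymbol{\beta} + u_i + v_{it}, \qquad
v_{it} = \rho\, v_{i,t-1} + \varepsilon_{it}
```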
Abstract:
In the agri-food sector, measurement and monitoring activities contribute to high-quality end products. In particular, for food of plant origin, several product quality attributes can be monitored. Among non-destructive measurement techniques, a large variety of optical techniques are available, including hyperspectral imaging (HSI) in the visible/near-infrared (Vis/NIR) range, which, owing to its capacity to integrate image analysis and spectroscopy, has proved particularly useful in agronomy and food science. Many published studies on HSI systems were carried out under controlled laboratory conditions; in contrast, few studies describe the application of HSI technology directly in the field, in particular for high-resolution proximal measurements carried out on the ground. Against this background, the activities of the present PhD project aimed at exploring and deepening knowledge of the application of optical techniques for the estimation of quality attributes of agri-food plant products. First, laboratory trials on apricots and kiwifruits are reported, in which soluble solids content (SSC) and flesh firmness (FF) were estimated through HSI; subsequently, FF was estimated on kiwifruits using a NIR-sensitive device; finally, the procyanidin content of red wine was estimated with a device based on the pulsed spectral sensitive photometry technique. In the second part, trials were carried out directly in the field to assess the ripeness of red wine grapes by estimating SSC through HSI, and a method for the automatic selection of regions of interest in hyperspectral images of the vineyard was developed. These activities revealed the potential of optical techniques for sorting-line applications; moreover, the application of HSI directly in the field proved particularly interesting, suggesting further investigations to solve the problems arising from the many environmental variables that may affect the results of the analyses.
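As an illustration of this kind of Vis/NIR calibration, the sketch below uses partial least squares (PLS) regression, a common chemometric choice for HSI-based SSC prediction, on synthetic placeholder data; the thesis's actual models and preprocessing may differ:

```python
# Sketch of a typical spectra-to-SSC calibration: PLS regression with
# cross-validated predictions. Spectra and SSC values are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
spectra = rng.normal(size=(120, 200))               # 120 samples x 200 bands
ssc = spectra[:, 50] * 2 + rng.normal(0, 0.1, 120)  # synthetic SSC (degrees Brix)

pls = PLSRegression(n_components=10)
pred = cross_val_predict(pls, spectra, ssc, cv=10).ravel()
rmsecv = np.sqrt(np.mean((ssc - pred) ** 2))        # cross-validated RMSE
print(f"RMSECV = {rmsecv:.3f} degrees Brix")
```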
Abstract:
The work carried out in this thesis aims at:
- studying, by both simulation and experiment, the effect of electrical transients (i.e., Voltage Polarity Reversals (VPRs), Temporary OverVoltages (TOVs), and Superimposed Switching Impulses (SSIs)) on the aging phenomena in HVDC extruded cable insulation. Dielectric spectroscopy, conductivity measurements, Fourier Transform Infra-Red (FTIR) spectroscopy, and space charge measurements show variations in the insulating properties of aged cross-linked polyethylene (XLPE) specimens compared to non-aged ones. Scission of XLPE bonds and formation of aging-related chemical bonds are also noticed in aged insulation, due to possible oxidation reactions. The aged materials show a greater ability to accumulate space charge than non-aged ones, and an increase in both the DC electrical conductivity and the imaginary permittivity is also observed;
- developing a life-based geometric design of HVDC cables through a detailed parametric analysis of all parameters that affect the design. The effect of both electrical and thermal transients on the design is also investigated;
- studying the intrinsic thermal instability in HVDC cables and the effect of the insulation characteristics on thermal stability, using an iterative temperature-field loop solved numerically with the Finite Difference Method (FDM). The dielectric loss coefficient is also calculated for DC cables and found to be smaller than in AC cables, which emphasizes that intrinsic thermal instability is critical in HVDC cables;
- fitting electrical conductivity models to the experimental measurements, using both models found in the literature and modified models, to find the best fit by considering the synergistic effect between the field and temperature coefficients of electrical conductivity.
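As a rough illustration of the final fitting task, the sketch below fits a generic empirical conductivity law σ(T, E) = σ₀ exp(aT + bE + cTE), where the cTE term stands in for the synergistic field-temperature coupling; the thesis's actual model forms and measurements are not reproduced:

```python
# Sketch of conductivity-model fitting with a generic empirical law.
# Data are synthetic placeholders in normalized conductivity units.
import numpy as np
from scipy.optimize import curve_fit

def sigma_model(TE, s0, a, b, c):
    T, E = TE
    return s0 * np.exp(a * T + b * E + c * T * E)

T = np.array([20.0, 20.0, 40.0, 40.0, 60.0, 60.0])   # temperature (degC)
E = np.array([10.0, 30.0, 10.0, 30.0, 10.0, 30.0])   # field (kV/mm)
sig = sigma_model((T, E), 1.0, 0.05, 0.03, 1e-4)     # synthetic "measurements"

popt, _ = curve_fit(sigma_model, (T, E), sig, p0=(1.0, 0.04, 0.02, 0.0))
print("fitted (s0, a, b, c):", popt)
```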
Abstract:
The cerebral cortex exhibits self-similarity over a proper interval of spatial scales, a property typical of natural objects with fractal geometry. Its complexity can therefore be characterized by the value of its fractal dimension (FD). In computing this metric, a frequentist approach to probability has usually been employed, with point-estimator methods yielding only the optimal values of the FD. In our study, we aimed at a more complete evaluation of the FD by using a Bayesian model for the linear regression analysis of the box-counting algorithm. We used T1-weighted MRI data of 86 healthy subjects (age 44.2 ± 17.1 years, mean ± standard deviation, 48% males) to gain insight into the confidence of our measure and to investigate the relationship between the mean Bayesian FD and age. Our approach yielded a stronger and significant (P < .001) correlation between mean Bayesian FD and age compared to the previous implementation. These results suggest that the Bayesian FD is a more faithful estimate of the fractal dimension of the cerebral cortex than the frequentist FD.
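A sketch of the approach described: the box-counting relation log N(s) = -FD · log s + c fitted with a Bayesian linear regression that yields a posterior over the FD rather than a point estimate (synthetic counts stand in for the cortical surface data):

```python
# Bayesian box-counting fit: with a flat prior, the coefficient posterior is a
# Student-t centred on the OLS solution, so the FD gets a mean and a spread.
import numpy as np

scales = np.array([2, 4, 8, 16, 32], dtype=float)              # box sizes
counts = np.array([50000, 9200, 1700, 310, 60], dtype=float)   # hypothetical N(s)

x, y = np.log(scales), np.log(counts)
X = np.column_stack([np.ones_like(x), x])

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # posterior centre (OLS)
resid = y - X @ beta_hat
s2 = resid @ resid / (len(y) - 2)                  # residual variance estimate
cov = s2 * np.linalg.inv(X.T @ X)                  # posterior scale matrix

fd_mean, fd_sd = -beta_hat[1], np.sqrt(cov[1, 1])
print(f"posterior FD: {fd_mean:.3f} +/- {fd_sd:.3f}")
```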
Abstract:
This work analyses particle-filter tracking algorithms for point objects and extended targets in a radar application. Through simulations, the number of particles and the process and measurement noise of the particle filter were optimized. Four scenarios were considered: a point object with a linear trajectory, a point object with a non-linear trajectory, an extended object with a linear trajectory, and an extended object with a non-linear trajectory. The extended target was modelled as an ellipse parametrized by the minor and major axes, the orientation angle, and the centre coordinates (5 parameters overall).
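A minimal bootstrap particle filter for the simplest of the four scenarios (point object, linear constant-velocity trajectory, noisy position measurements); the particle count and noise levels are illustrative, not the optimized values of the study:

```python
# Bootstrap particle filter: predict with the motion model, weight particles by
# the measurement likelihood, then resample (multinomial resampling).
import numpy as np

rng = np.random.default_rng(2)
n_particles, dt, steps = 500, 1.0, 20
q, r = 0.1, 1.0                         # process / measurement noise std

F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1.0]])
truth = np.array([0.0, 0.0, 1.0, 0.5])  # state: [x, y, vx, vy]
particles = rng.normal(truth, 1.0, size=(n_particles, 4))

for _ in range(steps):
    truth = F @ truth
    z = truth[:2] + rng.normal(0, r, 2)                              # measurement
    particles = particles @ F.T + rng.normal(0, q, particles.shape)  # predict
    d2 = np.sum((particles[:, :2] - z) ** 2, axis=1)
    weights = np.exp(-0.5 * d2 / r**2)                               # Gaussian likelihood
    weights /= weights.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=weights)]

print("estimate:", particles.mean(axis=0), "truth:", truth)
```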
Abstract:
Modern High-Performance Computing (HPC) systems are steadily increasing in size and complexity due to the corresponding demand for larger simulations requiring more complex tasks and higher accuracy. Moreover, as a side effect of Dennard scaling approaching its ultimate power limit, software efficiency also plays an important role in the overall performance of a computation. Tools that measure application performance in these increasingly complex environments provide insight into the intricate ways in which software and hardware interact. Monitoring power consumption in order to save energy is possible through processor interfaces such as Intel's Running Average Power Limit (RAPL). Given the low level of these interfaces, they are often paired with an application-level tool such as the Performance Application Programming Interface (PAPI). Since several problems in many heterogeneous fields can be represented as large linear systems, an optimized and scalable linear system solver can significantly decrease the time spent computing their solution. One of the most widely used algorithms for solving large systems is Gaussian elimination, whose most popular implementation for HPC systems is in the Scalable Linear Algebra PACKage (ScaLAPACK) library. Another relevant algorithm, which is gaining popularity in the academic field, is the Inhibition Method. This thesis compares the energy consumption of the Inhibition Method and of Gaussian elimination from ScaLAPACK, profiling their execution during the solution of linear systems on the HPC architecture offered by CINECA. Moreover, it also compares the energy and power values for different rank, node, and socket configurations. The monitoring tools employed to track the energy consumption of these algorithms are PAPI and RAPL, integrated with the parallel execution of the algorithms managed through the Message Passing Interface (MPI).
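For context, energy readings of the kind PAPI exposes on top of RAPL can also be obtained directly from the Linux powercap interface; a minimal sketch (Linux-only, requires read access to the sysfs counters; the workload function is a hypothetical stand-in for the linear-system solve under measurement):

```python
# Read the package-0 RAPL energy counter before and after a workload,
# handling a single counter wrap-around.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package-0 RAPL domain

def energy_uj() -> int:
    return int((RAPL / "energy_uj").read_text())

def workload():
    # hypothetical stand-in for the solver under measurement
    sum(i * i for i in range(10_000_000))

max_range = int((RAPL / "max_energy_range_uj").read_text())

before = energy_uj()
workload()
after = energy_uj()

delta = after - before if after >= before else after + max_range - before
print(f"energy consumed: {delta / 1e6:.3f} J")
```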
Abstract:
This dissertation consists of studies on seismic deconvolution, in which we seek to optimize performance in the smoothing operation, in the resolution of the estimate of the distribution of reflection coefficients, and in the recovery of the source pulse. The filters studied are single-channel, and the formulations treat the seismogram as the result of a stationary stochastic process; we also demonstrate the effects of windowing and prewhitening. The principle applied is the minimization of the variance of the deviations between the obtained and the desired values, resulting in the Wiener-Hopf system of normal equations, whose solution is the vector of filter coefficients to be applied in a convolution. The spiking deconvolution filter is designed by treating the distribution of reflection coefficients as a white series. The operator compresses the seismic events to impulses well, and its inverse is a good approximation of the source pulse. Windowing and prewhitening improve the result of this filter. The deconvolution-to-impulses filter is designed using the distribution of reflection coefficients, whose statistical properties affect the operator and its performance. Windowing the autocorrelation degrades the output, while an improvement is obtained when the window is applied to the deconvolution operator. The Hilbert transform does not follow the least-squares principle and produces good results in the recovery of the source pulse under the minimum-phase assumption. The inverse of the recovered source pulse compresses the seismic events to impulses well. When the trace contains additive noise, the results obtained with the aid of the Hilbert transform are better than those obtained with the spiking deconvolution filter. The smoothing filter suppresses noise present in the seismic trace as a function of the magnitude of the prewhitening parameter used. The use of smoothed traces improves the performance of spiking deconvolution. Double prewhitening yields better results than single prewhitening. The matched filter is obtained by maximizing a signal-to-noise function. The estimates of the distribution of reflection coefficients obtained with the matched filter have better resolution than those obtained with the smoothing filter.
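As a compact illustration of the Wiener-Hopf design principle described in this abstract, the following sketch builds a spiking deconvolution operator from the trace autocorrelation with prewhitening, using a synthetic trace (the dissertation's data and exact parameters are not reproduced):

```python
# Spiking deconvolution via the Wiener-Hopf (Toeplitz) normal equations R f = g,
# where R is the trace autocorrelation and g a spike at zero lag.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

rng = np.random.default_rng(3)
wavelet = np.array([1.0, -0.7, 0.3, -0.1])                 # minimum-phase-like pulse
refl = rng.normal(0, 1, 500) * (rng.random(500) < 0.05)    # sparse white reflectivity
trace = np.convolve(refl, wavelet)

n = 30                                                     # operator length
r = np.correlate(trace, trace, "full")[len(trace) - 1:len(trace) - 1 + n]
r[0] *= 1.01                                               # 1% prewhitening
g = np.zeros(n)
g[0] = 1.0                                                 # desired output: a spike

f = solve_toeplitz(r, g)                                   # Wiener-Hopf solution
spiked = lfilter(f, 1.0, trace)                            # approximates reflectivity
print("operator (first taps):", f[:5])
```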
Abstract:
Two so-called “integrated” polarimetric rate estimation techniques, ZPHI (Testud et al., 2000) and ZZDR (Illingworth and Thompson, 2005), are evaluated using 12 episodes of the year 2005 observed by the French C-band operational Trappes radar, located near Paris. The term “integrated” means that the concentration parameter of the drop size distribution is assumed to be constant over some area and the algorithms retrieve it using the polarimetric variables in that area. The evaluation is carried out in ideal conditions (no partial beam blocking, no ground-clutter contamination, no bright-band contamination, a posteriori calibration of the radar variables ZH and ZDR) using hourly rain gauges located at distances of less than 60 km from the radar. Also included in the comparison, for the sake of benchmarking, is a conventional Z = 282R^1.66 estimator, with and without attenuation correction and with and without adjustment by rain gauges, as currently done operationally at Météo France. Under those ideal conditions, the two polarimetric algorithms, which rely solely on radar data, appear to perform as well as, if not better than, the conventional algorithms, depending on the measurement conditions (attenuation, rain rates, …), even when the latter take rain gauges into account through the adjustment scheme. ZZDR with attenuation correction is the best estimator for hourly rain gauge accumulations lower than 5 mm h−1, and ZPHI is the best one above that threshold. A perturbation analysis was conducted to assess the sensitivity of the various estimators to biases on ZH and ZDR, considering the typical accuracy and stability that can reasonably be achieved with modern operational radars (1 dB on ZH and 0.2 dB on ZDR). A +1 dB bias on ZH (radar too hot) results in a +14% overestimation of the rain rate with the conventional estimator used in this study (Z = 282R^1.66), a -19% underestimation with ZPHI, and a +23% overestimation with ZZDR. Additionally, a +0.2 dB bias on ZDR results in a typical rain-rate underestimation of 15% by ZZDR.
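The quoted sensitivity of the conventional estimator can be checked by inverting Z = 282R^1.66 directly; a short sketch:

```python
# Invert the conventional Z-R relation and apply a +1 dB offset to ZH:
# the resulting rain-rate error matches the +14% figure above within rounding.
def rain_rate(dbz: float) -> float:
    z_lin = 10 ** (dbz / 10)            # reflectivity in mm^6 m^-3
    return (z_lin / 282) ** (1 / 1.66)  # R from Z = 282 R^1.66

r0 = rain_rate(40.0)                    # nominal measurement
r_biased = rain_rate(41.0)              # radar "too hot" by 1 dB
print(f"rain-rate error: {100 * (r_biased / r0 - 1):+.1f}%")  # about +15%
```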
Abstract:
The optimized allocation of protective devices at strategic points of the circuit improves the quality of the energy supply and the system reliability indices. This paper presents a nonlinear integer programming (NLIP) model with binary variables to deal with the problem of protective device allocation in the main feeder and all branches of an overhead distribution circuit, in order to improve the reliability indices and to provide customers with a service of high quality and reliability. The constraints of the problem take into account technical and economic limitations, such as coordination problems of serial protective devices, the available equipment, the importance of the feeder, and the circuit topology. The use of genetic algorithms (GAs) is proposed to solve this problem, with a binary representation indicating whether (1) or not (0) a protective device (recloser, sectionalizer or fuse) is allocated at each predefined point of the circuit. Results are presented for a real circuit (134 buses) with 29 candidate allocation points, showing the ability of the algorithm to find good solutions while significantly improving the reliability indicators.
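A minimal sketch of the binary-coded GA described above: each chromosome has 29 bits, with bit i = 1 meaning a device is allocated at candidate point i; the fitness function here is a hypothetical placeholder for the paper's NLIP objective and constraints:

```python
# Toy GA over 29 binary allocation decisions: truncation selection,
# one-point crossover, bit-flip mutation.
import numpy as np

rng = np.random.default_rng(4)
N_POINTS, POP, GENS = 29, 40, 100
cost = rng.uniform(1.0, 5.0, N_POINTS)      # hypothetical device costs
benefit = rng.uniform(2.0, 8.0, N_POINTS)   # hypothetical reliability benefit

def fitness(chrom):                          # placeholder objective
    return float(benefit @ chrom - cost @ chrom)

pop = rng.integers(0, 2, size=(POP, N_POINTS))
for _ in range(GENS):
    fit = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(fit)[-POP // 2:]]          # keep the best half
    cuts = rng.integers(1, N_POINTS, POP // 2)
    kids = np.array([
        np.concatenate([parents[i, :c], parents[(i + 1) % (POP // 2), c:]])
        for i, c in enumerate(cuts)                     # one-point crossover
    ])
    mut = rng.random(kids.shape) < 0.02                 # bit-flip mutation
    kids[mut] ^= 1
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(c) for c in pop])]
print("devices allocated at candidate points:", np.flatnonzero(best))
```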
Abstract:
This paper focuses on the general problem of coordinating multi-robot systems; more specifically, it addresses the self-election of heterogeneous and specialized tasks by autonomous robots. We propose experimenting with two biologically inspired techniques based chiefly on self-organization and emergence: response threshold models and ant colony optimization. Under this approach one can speak of multi-task selection instead of multi-task allocation, meaning that the agents or robots select tasks themselves instead of being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds: each robot performs this estimation locally, depending on the load, i.e. the number of pending tasks to be performed. We evaluated the robustness of the algorithms by perturbing the number of pending loads, to simulate a robot's error in estimating the real number of pending tasks, and by generating loads dynamically over time. The paper ends with a critical discussion of the experimental results.
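A sketch of the threshold mechanism named above, in the spirit of the standard response threshold model, where robot i engages task j with probability s_j² / (s_j² + θ_ij²) and thresholds adapt with experience; the update constants and load dynamics are illustrative assumptions:

```python
# Response threshold task selection: thresholds drop for tasks a robot performs
# (specialisation) and rise for the others; stimuli track pending loads.
import numpy as np

rng = np.random.default_rng(5)
n_robots, n_tasks = 6, 3
theta = rng.uniform(5, 15, size=(n_robots, n_tasks))   # per-robot thresholds
stimulus = np.array([10.0, 4.0, 1.0])                  # pending-load estimates

XI, PHI = 0.5, 0.2   # learning / forgetting rates (assumed values)
for step in range(50):
    p = stimulus**2 / (stimulus**2 + theta**2)         # engagement probabilities
    engaged = rng.random((n_robots, n_tasks)) < p
    theta = np.clip(theta - XI * engaged + PHI * ~engaged, 0.1, 20.0)
    stimulus = np.maximum(stimulus - 0.3 * engaged.sum(axis=0) + 0.2, 0.0)

print("final thresholds:\n", theta.round(2))
```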
Abstract:
A two-component survival mixture model is proposed to analyse a set of ischaemic stroke-specific mortality data. The survival experience of stroke patients after the index stroke may be described by one subpopulation of patients in the acute condition and another subpopulation of patients in the chronic phase. To adjust for the inherent correlation of observations due to random hospital effects, a mixture model of two survival functions with random effects is formulated. Assuming a Weibull hazard in both components, an EM algorithm is developed for the estimation of the fixed effect parameters and variance components. A simulation study is conducted to assess the performance of the two-component survival mixture model estimators, and its results confirm the applicability of the proposed model in a small-sample setting.
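In one common parametrization, the population survival function of such a two-component Weibull mixture (random hospital effects omitted here for brevity) is:

```latex
% Two-component Weibull survival mixture:
S(t) = \pi \exp\!\left\{ -(\lambda_1 t)^{k_1} \right\}
     + (1 - \pi) \exp\!\left\{ -(\lambda_2 t)^{k_2} \right\}
```

where π is the mixing proportion of the acute subpopulation and (λ_j, k_j) are the scale and shape of component j; the EM algorithm alternates between computing posterior component-membership probabilities (E-step) and maximizing the component-weighted likelihoods (M-step).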