6 results for Disease Outbreaks. Epidemiologic Methods. Spatial Analysis. Epidemics.
at Universidad de Alicante
Abstract:
Background: Geocoding is the assignment of geographic coordinates to points in space, most often postal addresses. The error introduced by this process can bias the estimates of spatio-temporal models in epidemiological studies. No studies measuring this error in Spanish cities were found. The objective is to evaluate the magnitude and directionality of the errors of two free resources (Google and Yahoo) with respect to GPS in two Spanish cities. Methods: Thirty random addresses were geocoded with the two resources and with GPS in Santa Pola (Alicante) and in Alicante. The median error in metres between each resource and GPS, with its 95% CI, was computed overall and by reported match status. Directionality of the error was evaluated by computing the quadrant of each location and applying a chi-squared test. GPS error was evaluated by measuring 11 addresses twice, 4 days apart. Results: The median overall Google-GPS error was 23.2 metres (16.0-32.2) in Santa Pola and 21.4 metres (14.9-31.1) in Alicante. For Yahoo it was 136.0 metres (19.2-318.5) in Santa Pola and 23.8 metres (13.6-29.2) in Alicante. By status, between 73% and 90% of addresses were geocoded as 'exact or interpolated' (lowest error); for these, both Google and Yahoo had a median error between 19 and 22 metres in both cities. The GPS error was 13.8 metres (6.7-17.8). No directionality was detected. Conclusions: Google's error is acceptable and stable across both cities, making it a reliable resource for geocoding postal addresses in Spain for epidemiological studies.
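A minimal sketch of the error computation the abstract describes, assuming (lat, lon) pairs in decimal degrees; the coordinates and function names below are illustrative, not taken from the study:

```python
# Median geocoding error and displacement quadrant, as in the study design:
# distances in metres between geocoded points and their GPS ground truth.
import math
from statistics import median

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in metres
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def quadrant(gps, geocoded):
    """Quadrant (NE/NW/SE/SW) of the geocoded point relative to GPS."""
    ns = "N" if geocoded[0] >= gps[0] else "S"
    ew = "E" if geocoded[1] >= gps[1] else "W"
    return ns + ew

# Hypothetical sample: GPS readings vs. geocoder output for two addresses.
gps_points = [(38.3452, -0.4810), (38.3460, -0.4900)]
geocoded_points = [(38.3454, -0.4812), (38.3458, -0.4897)]

errors = [haversine_m(g, e) for g, e in zip(gps_points, geocoded_points)]
print(median(errors))                                       # median error, m
print([quadrant(g, e) for g, e in zip(gps_points, geocoded_points)])
```

The quadrant counts would then feed a chi-squared test against a uniform distribution to check for directionality, as the abstract reports.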
Abstract:
Context. The Gaia-ESO Public Spectroscopic Survey is obtaining high-quality spectroscopy of some 100 000 Milky Way stars using the FLAMES spectrograph at the VLT, down to V = 19 mag, systematically covering all the main components of the Milky Way and providing the first homogeneous overview of the distributions of kinematics and chemical element abundances in the Galaxy. Observations of young open clusters, in particular, are giving new insights into their initial structure and kinematics and their subsequent evolution. Aims. This paper describes the analysis of UVES and GIRAFFE spectra acquired in the fields of young clusters whose population includes pre-main sequence (PMS) stars. The analysis is applied to all stars in such fields, regardless of any prior information on membership, and provides fundamental stellar atmospheric parameters, elemental abundances, and PMS-specific parameters such as veiling, accretion, and chromospheric activity. Methods. When feasible, different methods were used to derive raw parameters (e.g. line equivalent widths), fundamental atmospheric parameters, and derived parameters (e.g. abundances). To derive some of these parameters, we used methods that have been extensively used in the past as well as new ones developed in the context of the Gaia-ESO survey enterprise. The internal precision of these quantities was estimated by inter-comparing the results obtained by these different methods, while the accuracy was estimated by comparison with independent external data, such as effective temperature and surface gravity derived from angular diameter measurements, on a sample of benchmark stars. A validation procedure based on these comparisons was applied to discard spurious or doubtful results and produce recommended parameters. Specific strategies were implemented to resolve problems of fast rotation, accretion signatures, chromospheric activity, and veiling. Results. The analysis carried out on spectra acquired in young cluster fields during the first 18 months of observations, up to June 2013, is presented in preparation for the first release of advanced data products. These include targets in the fields of the ρ Oph, Cha I, NGC 2264, γ Vel, and NGC 2547 clusters. Stellar parameters obtained with the higher resolution and larger wavelength coverage of UVES are reproduced with comparable accuracy and precision using the smaller wavelength range and lower resolution of the GIRAFFE setup adopted for young stars, which allows us to provide stellar parameters with confidence for the much larger GIRAFFE sample. Precisions are estimated to be ≈120 K rms in Teff, ≈0.3 dex rms in log g, and ≈0.15 dex rms in [Fe/H] for both the UVES and GIRAFFE setups.
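An illustrative sketch (not Survey code) of the internal-precision estimate the abstract describes: for each star, the rms quoted figures such as ≈120 K in Teff follow from the spread among the values returned by the independent analysis methods. All numbers below are hypothetical.

```python
# Method-to-method dispersion of Teff: rows = stars, columns = analysis
# methods; the per-star sample standard deviation is the internal precision.
import numpy as np

teff = np.array([
    [5750.0, 5690.0, 5810.0],
    [4480.0, 4555.0, 4510.0],
    [6120.0, 6040.0, 6200.0],
])
per_star_rms = teff.std(axis=1, ddof=1)  # dispersion across methods, per star
print(per_star_rms)
print(per_star_rms.mean())               # one summary internal-precision figure
```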
Abstract:
Context. The ongoing Gaia-ESO Public Spectroscopic Survey is using FLAMES at the VLT to obtain high-quality medium-resolution Giraffe spectra for about 10⁵ stars and high-resolution UVES spectra for about 5000 stars. With UVES, the Survey has already observed 1447 FGK-type stars. Aims. These UVES spectra are analyzed in parallel by several state-of-the-art methodologies. Our aim is to present how these analyses were implemented, to discuss their results, and to describe how a final recommended parameter scale is defined. We also discuss the precision (method-to-method dispersion) and accuracy (biases with respect to the reference values) of the final parameters. These results are part of the Gaia-ESO second internal release and will be part of its first public release of advanced data products. Methods. The final parameter scale is tied to the scale defined by the Gaia benchmark stars, a set of stars with fundamental atmospheric parameters. In addition, a set of open and globular clusters is used to evaluate the physical soundness of the results. Each of the implemented methodologies is judged against the benchmark stars to define weights in three different regions of the parameter space. The final recommended results are the weighted medians of those from the individual methods. Results. The recommended results successfully reproduce the atmospheric parameters of the benchmark stars and the expected Teff-log g relation of the calibrating clusters. Atmospheric parameters and abundances have been determined for 1301 FGK-type stars observed with UVES. The median of the method-to-method dispersion of the atmospheric parameters is 55 K for Teff, 0.13 dex for log g and 0.07 dex for [Fe/H]. Systematic biases are estimated to be between 50−100 K for Teff, 0.10−0.25 dex for log g and 0.05−0.10 dex for [Fe/H]. Abundances for 24 elements were derived: C, N, O, Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Y, Zr, Mo, Ba, Nd, and Eu. The typical method-to-method dispersion of the abundances varies between 0.10 and 0.20 dex. Conclusions. The Gaia-ESO sample of high-resolution spectra of FGK-type stars will be among the largest of its kind analyzed in a homogeneous way. The extensive list of elemental abundances derived in these stars will enable significant advances in the areas of stellar evolution and Milky Way formation and evolution.
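A minimal sketch of the recombination rule named in the abstract: the recommended value is the weighted median of the individual methods' results, with weights judged against the benchmark stars (supplied here as given numbers; all values are hypothetical, not Survey data).

```python
# Weighted median: the value at which the cumulative weight first
# reaches half of the total weight.
import numpy as np

def weighted_median(values, weights):
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cdf = np.cumsum(w)
    return v[np.searchsorted(cdf, 0.5 * cdf[-1])]

# Hypothetical Teff estimates for one star from four methods, with the
# per-method weights that would come from benchmark-star performance.
teff_by_method = [5820.0, 5770.0, 5905.0, 5795.0]
method_weights = [1.0, 0.8, 0.5, 1.2]
print(weighted_median(teff_by_method, method_weights))  # recommended Teff
```

Compared with a weighted mean, the weighted median is robust to a single method returning an outlying value, which fits the validation-oriented scheme the abstract describes.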
Abstract:
A twenty-year period of severe land subsidence evolution in the Alto Guadalentín Basin (southeast Spain) is monitored using multi-sensor SAR images, processed by advanced differential interferometric synthetic aperture radar (DInSAR) techniques. The SAR images used in this study consist of four datasets acquired by the ERS-1/2, ENVISAT, ALOS and COSMO-SkyMed satellites between 1992 and 2012. The integration of ground surface displacement maps retrieved for different time periods allows us to quantify up to 2.50 m of cumulative displacement between 1992 and 2012 in the Alto Guadalentín Basin. DInSAR results were locally compared with global positioning system (GPS) data available for two continuous stations located in the study area, demonstrating the high consistency of local vertical motion measurements between the two surveying techniques. An average absolute error of 4.6 ± 4 mm for the ALOS data and of 4.8 ± 3.5 mm for the COSMO-SkyMed data confirmed the reliability of the analysis. The spatial analysis of DInSAR ground surface displacement reveals a direct correlation with the thickness of the compressible alluvial deposits. The ground subsidence detected over the past 20 years is most likely a consequence of a 100-200 m groundwater level drop that has persisted since the 1970s due to the overexploitation of the Alto Guadalentín aquifer system. The negative gradient of the pore pressure is responsible for the extremely slow consolidation of a very thick (>100 m) sequence of fine-grained silt and clay layers with low vertical hydraulic permeability (approximately 50 mm/h), in which the maximum settlement has still not been reached.
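A hypothetical sketch of the DInSAR-GPS cross-check reported above: given co-located vertical displacement time series at common epochs, the comparison statistic is the mean absolute difference (the abstract quotes e.g. 4.6 ± 4 mm for ALOS). The series below are illustrative values, not the study's measurements.

```python
# Mean absolute difference between DInSAR and GPS vertical displacements.
import numpy as np

dinsar = np.array([-12.0, -25.5, -40.2, -61.8])  # mm, hypothetical epochs
gps    = np.array([ -9.0, -22.0, -44.0, -58.0])  # mm, same epochs

diff = np.abs(dinsar - gps)
print(f"{diff.mean():.1f} +/- {diff.std(ddof=1):.1f} mm")
```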
Abstract:
The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas: in medicine for volumetric reconstruction of tomography data, in robotics to reconstruct surfaces or scenes from range-sensor data, in industrial systems for quality control of manufactured objects, and even in biology to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or the number of iterations required, or by lowering the complexity of the most expensive phase: the nearest-neighbour search. Despite reducing the complexity, some of these variants tend to degrade the final registration precision or the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm in the literature. In that analysis, the behaviour of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to assess the convergence, efficacy, and cost of the method and to determine which metric offers the best results. Given that distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation can be expected to improve the overall performance of the method significantly. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to the Euclidean distance on a heterogeneous set of objects, scenarios, and initial configurations.
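A minimal sketch, not the thesis implementation, of the idea discussed above: the matching step of ICP with interchangeable point-to-point metrics. The squared Euclidean distance avoids the square root while preserving the nearest-neighbour ranking, and the Manhattan distance is cheaper per pair but induces a different topology, so its matches can differ. All data below are synthetic.

```python
# Brute-force nearest-neighbour matching, the dominant cost of ICP,
# parameterized by the point-to-point metric.
import numpy as np

def sq_euclidean(p, q):
    d = p - q
    return np.dot(d, d)          # no sqrt: same ranking as Euclidean

def manhattan(p, q):
    return np.abs(p - q).sum()   # cheaper per pair; different topology

def match(src, dst, metric):
    """Index of the nearest dst point for every src point."""
    return np.array([min(range(len(dst)), key=lambda j: metric(p, dst[j]))
                     for p in src])

rng = np.random.default_rng(0)
dst = rng.random((200, 3))
src = dst[:50] + 0.01 * rng.standard_normal((50, 3))  # coarsely pre-aligned

# Fraction of correspondences on which the two metrics agree.
print((match(src, dst, sq_euclidean) == match(src, dst, manhattan)).mean())
```

The thesis question is precisely whether such cheaper metrics preserve enough of the correspondence structure for ICP to converge to a registration of comparable quality.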
Abstract:
Background and objective: In this paper, we have tested the suitability of different artificial intelligence-based algorithms for decision support when classifying the risk of congenital heart surgery. Classification of these surgical risks provides enormous benefits, such as the a priori estimation of surgical outcomes depending on the type of disease, the type of repair, and other elements that influence the final result. This preventive estimation may help to avoid future complications, or even death. Methods: We have evaluated four machine learning algorithms: multilayer perceptron, self-organizing map, radial basis function networks, and decision trees. The implemented architectures aim to classify among three types of surgical risk: low complexity, medium complexity, and high complexity. Results: Accuracies achieved range between 80% and 99%, with the multilayer perceptron offering the highest hit ratio. Conclusions: According to the results, it is feasible to develop a clinical decision support system using the evaluated algorithms. Such a system would help cardiology specialists, paediatricians and surgeons to forecast the level of risk related to congenital heart disease surgery.
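An illustrative sketch of one of the evaluated approaches, a multilayer perceptron classifying cases into the three risk levels. It uses synthetic data, not the study's clinical records, and the feature count is an assumption for demonstration only.

```python
# Three-class risk classification with a multilayer perceptron.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.standard_normal((300, 8))        # 8 hypothetical clinical features
y = rng.integers(0, 3, size=300)         # 0 = low, 1 = medium, 2 = high risk

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)

# With random labels this is near chance; real clinical features are what
# drive the 80-99% accuracies reported in the abstract.
print(f"accuracy: {clf.score(X_te, y_te):.2f}")
```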