404 results for regularization
Abstract:
In this article we address work and housing as two central dimensions in the reproduction of life, emphasizing the mutual feedback between them. We characterize the strategies deployed by a set of families in a settlement undergoing urban and land-title regularization in the city of Resistencia, capital of the Province of Chaco, in northeastern Argentina. The fieldwork was carried out as part of an undergraduate thesis in Labor Relations (RRLL) at the Universidad Nacional del Nordeste (UNNE) between late 2008 and early 2009; although the study is based on a set of interviews with residents of the settlement, a number of indicators were brought together to contextualize these subjective views. Integration into exchange networks at different levels (community-wide, through participation in an NGO or social movement, among families, and within domestic units) builds a collective social capital that allows these families to reproduce themselves socially. This approach, indebted to Bourdieu's formulations, has allowed us to reinterpret practices, broadening the notion of work to that of strategies for the reproduction of life. Our aim was to describe how some families recount and interpret their own experiences in relation to their survival strategies. The article concludes by proposing a typology of the trajectories studied.
Abstract:
The occupation of Rondônia from the 1970s onward was disorganized, since it attracted far more migrants than the settlement projects could sustain. The aim of this research was to analyze the circumstances in which the district of Nova Londrina was settled, using a bibliographic survey as well as the Oral History method. The arrival in Nova Londrina, Ji-Paraná, was marked by conflicts between the settlers and the colonization company Calama S/A. In spite of the company's violence, the colonists resisted until INCRA intervened through a land regularization program. In this context, the Urban Center for Rural Support (NUAR) was established to support agricultural workers.
Abstract:
The naïve Bayes approach is a simple but often satisfactory method for supervised classification. In this paper, we focus on the naïve Bayes model and propose the application of regularization techniques to learn a naïve Bayes classifier. The main contribution of the paper is a stagewise version of the selective naïve Bayes, which can be considered a regularized version of the naïve Bayes model. We call it forward stagewise naïve Bayes. For comparison's sake, we also introduce an explicitly regularized formulation of the naïve Bayes model, where conditional independence (absence of arcs) is promoted via an L1/L2 group penalty on the parameters that define the conditional probability distributions. Although already published in the literature, this idea has only been applied to continuous predictors. We extend this formulation to discrete predictors and propose a modification that yields an adaptive penalization. We show that, whereas the L1/L2 group penalty formulation only discards irrelevant predictors, the forward stagewise naïve Bayes can discard both irrelevant and redundant predictors, which are known to be harmful for the naïve Bayes classifier. Both approaches, however, usually improve the accuracy of the classical naïve Bayes model.
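A minimal sketch of the selective, forward-stagewise idea: predictors enter the model one at a time only while they improve a validation score, so irrelevant and redundant variables are left out. Gaussian naïve Bayes, cross-validated accuracy and the stopping rule below are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch: greedy forward selection for a naive Bayes classifier.
# It illustrates how adding predictors one at a time can drop both
# irrelevant and redundant features; it is not the paper's exact
# stagewise update, just a simple selective-NB baseline.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def forward_selective_nb(X, y, cv=5):
    n_features = X.shape[1]
    selected, best_score = [], -np.inf
    while True:
        candidates = [j for j in range(n_features) if j not in selected]
        if not candidates:
            break
        # Score each candidate predictor when added to the current set.
        scores = {
            j: cross_val_score(GaussianNB(), X[:, selected + [j]], y, cv=cv).mean()
            for j in candidates
        }
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best_score:   # no improvement: stop
            break
        selected.append(j_best)
        best_score = scores[j_best]
    return selected, best_score
```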
Abstract:
Learning the structure of a graphical model from data is a common task in a wide range of practical applications. In this paper, we focus on Gaussian Bayesian networks, i.e., on continuous data and directed acyclic graphs with a joint probability density of all variables given by a Gaussian. We propose to work in an equivalence class search space, specifically using the k-greedy equivalence search algorithm. This, combined with regularization techniques to guide the structure search, can learn sparse networks close to the one that generated the data. We provide results on some synthetic networks and on modeling the gene network of the two biological pathways regulating the biosynthesis of isoprenoids in the Arabidopsis thaliana plant.
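One way to picture how regularization sparsifies such a network: given a candidate ordering of the variables, each node's conditional Gaussian is a linear regression on its predecessors, and an L1 penalty prunes candidate parents. The sketch below illustrates only that pruning effect under a fixed ordering; it sidesteps the equivalence-class search actually used in the paper.

```python
# Hedged sketch: sparse parent selection for a Gaussian Bayesian network,
# assuming a fixed topological ordering of the variables. Each node is
# Lasso-regressed on its predecessors; nonzero coefficients become arcs.
import numpy as np
from sklearn.linear_model import LassoCV

def sparse_parents(X, order, min_coef=1e-6):
    arcs = []
    for pos, child in enumerate(order[1:], start=1):
        preds = order[:pos]                      # candidate parents
        model = LassoCV(cv=5).fit(X[:, preds], X[:, child])
        for parent, coef in zip(preds, model.coef_):
            if abs(coef) > min_coef:
                arcs.append((parent, child))
    return arcs
```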
Abstract:
Locally weighted regression is a technique that predicts the response for new data items from their neighbors in the training data set, where closer data items are assigned higher weights in the prediction. However, the original method may suffer from overfitting and fail to select the relevant variables. In this paper we propose combining a regularization approach with locally weighted regression to achieve sparse models. Specifically, the lasso is a shrinkage and selection method for linear regression. We present an algorithm that embeds the lasso in an iterative procedure that alternately computes weights and performs lasso regression. The algorithm is tested on three synthetic scenarios and two real data sets. Results show that the proposed method outperforms linear and local models in several kinds of scenarios.
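A minimal sketch of one step of this combination, assuming a Gaussian kernel for the local weights and a fixed lasso penalty (both illustrative choices rather than the paper's settings); the observation weights are passed to a weighted lasso fit around the query point.

```python
# Hedged sketch of locally weighted lasso prediction for one query point.
# Weights from a Gaussian kernel on the distance to the query enter the
# squared loss, and the L1 penalty then yields a sparse local model.
# The bandwidth `tau` and the penalty `alpha` are illustrative values.
import numpy as np
from sklearn.linear_model import Lasso

def local_lasso_predict(X, y, x_query, tau=1.0, alpha=0.1):
    # Gaussian kernel weights: training points near the query count more.
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * tau ** 2))
    # Weighted lasso: sparse local model around the query point.
    model = Lasso(alpha=alpha).fit(X, y, sample_weight=w)
    return float(model.predict(x_query.reshape(1, -1))[0])
```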
Abstract:
We develop a novel remote sensing technique for the observation of waves on the ocean surface. Our method infers the 3-D waveform and radiance of oceanic sea states via a variational stereo imagery formulation. In this setting, the shape and radiance of the wave surface are given by minimizers of a composite energy functional that combines a photometric matching term with regularization terms involving the smoothness of the unknowns. The desired ocean surface shape and radiance are the solution of a system of coupled partial differential equations derived from the optimality conditions of the energy functional. The proposed method is naturally extended to study the spatiotemporal dynamics of ocean waves and applied to three sets of stereo video data. Statistical and spectral analyses are carried out. Our results provide evidence that the observed omnidirectional wavenumber spectrum S(k) decays as k^(-2.5), in agreement with Zakharov's theory (1999). Furthermore, the 3-D spectrum of the reconstructed wave surface is exploited to estimate wave dispersion and currents.
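In hedged, generic form such a composite energy can be written as follows; the symbols (images I_k, reprojection maps \pi_k, surface height S, radiance R, weights \alpha and \beta) are illustrative notation, not the paper's exact formulation.

E(S, R) = \sum_{k} \int_{\Omega} \bigl( I_k(\pi_k(\mathbf{x}; S)) - R(\mathbf{x}) \bigr)^{2} \, d\mathbf{x} + \alpha \int_{\Omega} \lVert \nabla S \rVert^{2} \, d\mathbf{x} + \beta \int_{\Omega} \lVert \nabla R \rVert^{2} \, d\mathbf{x}

The first term scores photometric consistency between each camera view and the surface radiance, while the two regularization terms penalize non-smooth shape and radiance; setting the first variation of the functional to zero yields the coupled Euler-Lagrange PDEs mentioned above.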
Abstract:
Executive summary: For several years, RFID technology has appeared technologically mature, although it is still undergoing continuous evolution and improvement of its performance relative to barcode-based technology. Its commercialization and widespread use are beginning to grow. This project describes its operation and architecture, its applications and real deployments in different business sectors, and solutions or proposals to improve and encourage future research, development and implementations. The first part develops the theory of RFID operation, physical parameters and architecture. The second part studies potential applications, benefits and drawbacks, and several points of view: companies that invest in RFID and have carried out successful real deployments, and organizations opposed to it. The third and last part analyzes the negative consequences of misuse of the technology, such as loss of privacy or confidentiality, and proposes some security measures and solutions to the problem based on regularization and standardization. Finally, after studying RFID technology and observing how previously forecast expectations have played out, some conclusions are offered about its present and future.
Abstract:
With the Bonner sphere spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, regularization, parametrization, least squares, and maximum entropy are some of the techniques used for unfolding. In the last decade, methods based on artificial intelligence have also been used. Approaches based on genetic algorithms and artificial neural networks have been developed to overcome the drawbacks of previous techniques. Despite their advantages, artificial neural networks still have some drawbacks, mainly in the design process of the network, e.g., the optimal selection of the architectural and learning ANN parameters. In recent years, hybrid technologies combining artificial neural networks and genetic algorithms have been used for this purpose. In this work, several ANN topologies were trained and tested, using both conventionally trained and genetically evolved artificial neural networks, with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. A comparative study of both procedures is presented.
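As a hedged illustration of the ANN-based unfolding itself (leaving aside the genetic evolution of the topology), a small feed-forward network can be trained to map sphere count rates to spectrum bins; the architecture, data shapes and hyperparameters below are placeholders, not the study's configuration.

```python
# Hedged sketch: unfolding a neutron spectrum from Bonner-sphere count rates
# with a small feed-forward network. Shapes and hyperparameters are
# illustrative; the study also evolves the topology with a genetic algorithm.
import numpy as np
from sklearn.neural_network import MLPRegressor

# rates: (n_samples, n_spheres) count rates; spectra: (n_samples, n_bins) fluence bins
def train_unfolding_ann(rates, spectra, hidden=(16, 16)):
    ann = MLPRegressor(hidden_layer_sizes=hidden, activation="tanh",
                       max_iter=5000, random_state=0)
    return ann.fit(rates, spectra)

# spectrum_estimate = train_unfolding_ann(rates, spectra).predict(new_rates)
```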
Abstract:
Respiratory motion is a major source of reduced quality in positron emission tomography (PET). In order to minimize its effects, the use of respiratory synchronized acquisitions, leading to gated frames, has been suggested. Such frames, however, are of low signal-to-noise ratio (SNR) as they contain reduced statistics. Super-resolution (SR) techniques make use of the motion in a sequence of images in order to improve their quality. They aim at enhancing a low-resolution image belonging to a sequence of images representing different views of the same scene. In this work, a maximum a posteriori (MAP) super-resolution algorithm has been implemented and applied to respiratory gated PET images for motion compensation. An edge-preserving Huber regularization term was used to ensure convergence. Motion fields were recovered using a B-spline based elastic registration algorithm. The performance of the SR algorithm was evaluated through the use of both simulated and clinical datasets by assessing image SNR, as well as the contrast, position and extent of the different lesions. Results were compared to summing the registered synchronized frames on both simulated and clinical datasets. The super-resolution image had higher SNR (by a factor of over 4 on average) and lesion contrast (by a factor of 2) than the single respiratory synchronized frame using the same reconstruction matrix size. In comparison to the motion corrected or the motion free images, a similar SNR was obtained, while improvements of up to 20% in the recovered lesion size and contrast were measured. Finally, the recovered lesion locations on the SR images were systematically closer to the true simulated lesion positions. These observations concerning the SNR, lesion contrast and size were confirmed on two clinical datasets included in the study. In conclusion, the use of SR techniques applied to respiratory motion synchronized images leads to motion compensation combined with improved image SNR and contrast, without any increase in the overall acquisition times.
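In hedged, generic notation (not the paper's exact operators), the MAP estimate with an edge-preserving Huber prior can be written as

\hat{x} = \arg\min_{x} \sum_{k} \lVert y_k - D\,B\,M_k\,x \rVert_2^{2} + \lambda \sum_{j} \rho_\delta\bigl( (\nabla x)_j \bigr), \qquad \rho_\delta(t) = \begin{cases} \tfrac{1}{2}t^{2}, & |t| \le \delta \\ \delta|t| - \tfrac{1}{2}\delta^{2}, & |t| > \delta \end{cases}

where y_k are the gated low-resolution frames, M_k warps the high-resolution image x with the estimated (B-spline) respiratory motion, B and D model blur and downsampling, and the Huber penalty \rho_\delta smooths noise while preserving edges.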
Abstract:
Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), which determines their behavior and performance in optimization. Regularization is a well-known statistical technique used for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. L1-regularization is a type of this technique with the appealing variable selection property, which results in sparse model estimations. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented, and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA has a logarithmic scale with respect to the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs a more robust optimization, and is able to achieve significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method. Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure. Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that the ranking methods based on this dominance relation can result in competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on L1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets with small-medium dimensionality, when using two different Bayesian classifiers, shows that comparable or better Pareto sets of feature subsets are approximated in comparison to standard methods.
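A minimal sketch of the core idea in the first part, namely a continuous EDA whose Gaussian model is estimated with L1 regularization (here via scikit-learn's graphical lasso) so that the learned precision matrix stays sparse when the population is small relative to the dimension; the population size, penalty strength, truncation fraction and the sphere objective are illustrative choices, not the thesis's experimental setup.

```python
# Hedged sketch of a Gaussian EDA with regularized (sparse) model estimation.
import numpy as np
from sklearn.covariance import GraphicalLasso

def regularized_gaussian_eda(objective, dim, pop_size=100, n_gens=50,
                             trunc=0.3, alpha=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    best_x, best_f = None, np.inf
    for _ in range(n_gens):
        fitness = np.apply_along_axis(objective, 1, pop)
        order = np.argsort(fitness)
        if fitness[order[0]] < best_f:
            best_f, best_x = fitness[order[0]], pop[order[0]].copy()
        elite = pop[order[:max(2, int(trunc * pop_size))]]
        # Regularized model estimation: sparse Gaussian fitted to the elite.
        model = GraphicalLasso(alpha=alpha).fit(elite)
        pop = rng.multivariate_normal(model.location_, model.covariance_,
                                      size=pop_size)
    return best_x, best_f

# Example: minimize the sphere function in 20 dimensions.
x_star, f_star = regularized_gaussian_eda(lambda x: float(np.sum(x ** 2)), dim=20)
```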
Abstract:
Over the last 40 years, decision support for species selection in vegetation restoration in Spain has been based mainly on species distribution models, also called ecological niche models, which estimate the probability of occurrence of a species as a function of environmental conditions (climate, soil, etc.). This thesis aims to improve the predictive performance of such models through methodological proposals adapted to the data currently available in Spain and focused on the use of the models for species selection. Occurrence data at a spatial resolution appropriate to the scale of restoration projects are not always available, whereas coarse-resolution data exist for almost every plant species present in Spain. A recalibration method is therefore proposed that updates a coarse-resolution logistic regression model with a new fine-resolution sample. The method yields predictions of acceptable quality with relatively small samples (25 presences of the species), in contrast with the much larger samples (more than 100 presences) required by a conventional modeling strategy that discards the previous model. The choice of statistical method can have a decisive influence on predictive performance, and method comparisons have therefore received much attention in the last decade. Previous studies considered logistic regression inferior to more modern techniques such as maximum entropy. The results of this thesis show that the observed difference arises because maximum entropy models include regularization techniques and the version of logistic regression used in the comparisons did not. Once regularization is incorporated into logistic regression through penalization, the differences in predictive performance disappear. Penalized logistic regression is therefore a further option for fitting species distribution models, on a par with the best-performing modern methods such as maximum entropy. Species distribution models often omit soil variables because direct measurements of physical or chemical soil properties are rarely available. Incorporating coarse-resolution data from national or continental soil maps is a possible alternative, and the results of this thesis suggest that fine-resolution species distribution models improve their predictive performance slightly but statistically significantly when such soil variables are included. Validation is one of the key stages in the development of any empirical model, including species distribution models. Models are usually validated by evaluating their predictive performance species by species, that is, by comparing the observed presence or absence of the species at a set of sites with the model predictions. This type of evaluation does not answer a key question in vegetation restoration: which n species are the most suitable for the site to be restored? An evaluation method adapted to this question is proposed, which estimates the ability of a set of models to discriminate between the species present and absent at a specific site. The method has been successfully applied to the validation of 188 distribution models of woody species aimed at species selection for vegetation restoration in Spain. The proposed methodological improvements increase the predictive performance of species distribution models applied to species selection in vegetation restoration and also broaden the number of species for which a model is available to support decision making.
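A hedged sketch of the modeling alternative discussed above, a species distribution model fitted with penalized (L1-regularized) logistic regression; the predictor handling and the regularization grid are illustrative, not the thesis's actual covariates and tuning.

```python
# Hedged sketch: L1-penalized logistic regression as a species distribution model.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def fit_penalized_sdm(X_env, presence):
    """X_env: environmental predictors (climate, soil, ...) per site;
    presence: 1 where the species was observed, 0 for (pseudo-)absences."""
    model = make_pipeline(
        StandardScaler(),
        LogisticRegressionCV(Cs=10, penalty="l1", solver="liblinear",
                             scoring="roc_auc", cv=5, max_iter=1000),
    )
    return model.fit(X_env, presence)

# Probability of presence for new sites:
# p = fit_penalized_sdm(X_train, y_train).predict_proba(X_new)[:, 1]
```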
Abstract:
Whole brain resting state connectivity is a promising biomarker that might help to obtain an early diagnosis in many neurological diseases, such as dementia. Inferring resting-state connectivity is often based on correlations, which are sensitive to indirect connections, leading to an inaccurate representation of the real backbone of the network. The precision matrix is a better representation for whole brain connectivity, as it considers only direct connections. The network structure can be estimated using the graphical lasso (GL), which achieves sparsity through l1-regularization on the precision matrix. In this paper, we propose a structural connectivity adaptive version of the GL, where weaker anatomical connections are represented as stronger penalties on the corresponding functional connections. We applied beamformer source reconstruction to the resting state MEG recordings of 81 subjects, of whom 29 were healthy controls, 22 had single-domain amnestic Mild Cognitive Impairment (MCI), and 30 had multiple-domain amnestic MCI. An atlas-based anatomical parcellation of 66 regions was obtained for each subject, and time series were assigned to each of the regions. The fiber densities between the regions, obtained with deterministic tractography from diffusion-weighted MRI, were used to define the anatomical connectivity. Precision matrices were obtained from the region-specific time series in five different frequency bands. We compared our method with the traditional GL and a functional adaptive version of the GL, in terms of log-likelihood and classification accuracies between the three groups. We conclude that introducing an anatomical prior improves the expressivity of the model and, in most cases, leads to a better classification between groups.
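A hedged sketch of a graphical lasso with element-wise penalties, solved with a basic ADMM loop in NumPy; stronger penalties P[i, j] are meant to encode weaker anatomical connections, in the spirit of the structural prior above, but the penalty construction and all hyperparameters are illustrative, not the paper's method.

```python
# Hedged sketch: weighted graphical lasso via ADMM (NumPy only).
import numpy as np

def weighted_graphical_lasso(S, P, rho=1.0, n_iter=200):
    """S: empirical covariance (p x p); P: symmetric penalty matrix (p x p),
    typically with zeros on the diagonal so variances are not penalized."""
    p = S.shape[0]
    Theta, Z, U = np.eye(p), np.eye(p), np.zeros((p, p))
    for _ in range(n_iter):
        # Theta update: minimize -logdet(Theta) + tr(S Theta) + (rho/2)||Theta - Z + U||^2
        eigval, eigvec = np.linalg.eigh(rho * (Z - U) - S)
        theta_eig = (eigval + np.sqrt(eigval ** 2 + 4.0 * rho)) / (2.0 * rho)
        Theta = eigvec @ np.diag(theta_eig) @ eigvec.T
        # Z update: element-wise soft-thresholding with thresholds P / rho.
        A = Theta + U
        Z = np.sign(A) * np.maximum(np.abs(A) - P / rho, 0.0)
        # Dual update.
        U = U + Theta - Z
    return Z  # sparse precision (inverse covariance) estimate

# Example penalty (hypothetical names): penalize weak anatomical links more,
# with fiber_density in [0, 1]:
# P = alpha * (1.0 - fiber_density); np.fill_diagonal(P, 0.0)
```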
Abstract:
This work presents an investigation of the behavior of reinforced concrete slabs subjected to explosions and the numerical simulation of this phenomenon by the finite element method. The response of these structural elements is studied by comparing the results of full-scale (1:1) field tests with calculations performed with computer models, a procedure that makes it possible to verify whether the latter are adequate. First, the mechanical behavior of the material models available for this kind of simulation in the software used in the investigation is described, together with the different ways of applying blast loads to structures modeled with finite elements, and the choices made in both cases are justified. The available experimental tests are then described; they took place at the Laboratorio de Balística de Efectos of the Instituto Tecnológico de la Marañosa (ITM), in Madrid, in order to study the behavior of full-scale reinforced concrete slabs subjected to real explosions. A method is proposed for interpreting the damage level of the slabs with the Schmidt hammer, which later allows the results to be compared with those of the computer models. An analytical method is also proposed for determining the optimal mesh size in the simulations, based on the distribution of the internal energy of the system. It is well known that model behavior can be strongly influenced by the mesh: with a coarse mesh failure may not be reached, while with a fine mesh it may occur prematurely or excessively. Moreover, some material models include a "regularization" of the mesh size, but this investigation shows that such a procedure has a limited range of validity, and an optimal range of values is identified. Finally, the numerical models were built with the commercial software LS-DYNA, taking into account all the aspects outlined above, and the results of the simulations were compared with those of the full-scale tests. A very good correlation between the two was observed, which shows that the procedure proposed in this investigation is well suited to the simulation of reinforced concrete slabs subjected to explosions.
Abstract:
Driven by the latest discoveries enabled by recent technological advances and space missions, the study of asteroids has awakened the interest of the scientific community. In fact, asteroid missions have become very popular in recent years (Hayabusa, Dawn, OSIRIS-REx, ARM, AIMS-DART, ...), motivated by their outstanding scientific interest. Asteroids are fundamental constituents in the evolution of the Solar System, can be seen as vast concentrations of valuable natural resources, and are also considered strategic targets for the future of space exploration. It has long been hypothesized that small near-Earth objects (NEOs) could be captured and delivered to the vicinity of the Earth, allowing affordable access to them for in-situ science, resource utilization and other purposes. On the other side of the balance, asteroids are often seen as potential planetary hazards, since impacts with the Earth happen all the time, and an asteroid large enough could eventually trigger catastrophic events. In spite of the severity of such occurrences, they are also utterly hard to predict. In fact, the rich dynamical behavior of asteroids, their complex modeling and the observational uncertainties make it exceptionally challenging to predict their future position accurately enough. This becomes particularly relevant when asteroids exhibit close encounters with the Earth, and more so when these happen recurrently. In such situations, where mitigation measures may need to be taken, it is of paramount importance to be able to accurately estimate their trajectories and collision probabilities. As a consequence, advanced tools are needed to model their dynamics and accurately predict their orbits, as well as new technological concepts to manipulate their orbits if necessary. The goal of this Thesis is to provide new methods, techniques and solutions to address these challenges. The contributions of this Thesis fall into two areas: one devoted to the numerical propagation of asteroids, and another to asteroid deflection and capture concepts. Hence, the first part of the dissertation presents novel advances applicable to the high-accuracy dynamical propagation of near-Earth asteroids using regularization and perturbation techniques, with special emphasis on the DROMO method, whereas the second part presents pioneering ideas for asteroid retrieval missions and discusses the use of an "ion beam shepherd" (IBS) for asteroid deflection purposes.
Abstract:
The inner workings of the brain are still a mystery, and understanding them is one of the main challenges faced by modern science. The cerebral cortex is the region of the brain where the highest-level brain processes, such as imagination, judgment and abstract reasoning, take place. Pyramidal neurons, a specific type of neuron, constitute about 80% of the roughly 10 billion neurons that make up the cerebral cortex, which makes them a primary target in the study of brain function. Neuronal morphology, and more specifically dendritic morphology, determines how neurons process information and how they connect to one another, and computational models are essential tools for studying its role in brain function. In this work we have built a computational model, with more than 50 variables describing dendritic morphology, that can simulate the growth of complete basal dendritic arborizations from reconstructions of real pyramidal neurons, from the number of basal dendrites to the growth of the dendritic trees. Unlike previous work, our model, based on Bayesian networks, considers the dendritic arborization as a whole, taking into account the interactions between dendrites and automatically detecting the relationships between the morphological variables that characterize the arborization. Moreover, the analysis of the Bayesian networks can help to identify previously unknown relationships between morphological variables. Motivated by the study of the orientation of the basal dendrites, this work also introduces a generalized L1 regularization applied to the learning of the multivariate von Mises distribution, one of the main multivariate directional probability distributions. We also propose a multivariate circular distance that can be used to estimate the Kullback-Leibler divergence between two samples of circular data. We compare the regularized and unregularized models in the study of the orientation of basal dendrites in human neurons and show that, in general, the regularized model achieves better results. Sampling, fitting and plotting functions for the multivariate von Mises distribution are implemented in a new R package called mvCircular.
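As a hedged illustration of the regularized estimation just described, using the standard multivariate von Mises parameterization, a generic L1 penalty on the circular coupling matrix would take the following form (the notation and the exact penalty are illustrative; the thesis's generalized L1 regularization may differ in detail):

f(\boldsymbol{\theta}) \propto \exp\!\Bigl( \boldsymbol{\kappa}^{\top} \mathbf{c}(\boldsymbol{\theta}) + \tfrac{1}{2}\, \mathbf{s}(\boldsymbol{\theta})^{\top} \boldsymbol{\Lambda}\, \mathbf{s}(\boldsymbol{\theta}) \Bigr), \qquad c_i(\boldsymbol{\theta}) = \cos(\theta_i - \mu_i), \quad s_i(\boldsymbol{\theta}) = \sin(\theta_i - \mu_i),

with \boldsymbol{\Lambda} symmetric and zero on its diagonal, and a sparse estimate obtained as

\hat{\boldsymbol{\Lambda}} = \arg\max_{\boldsymbol{\Lambda}} \; \ell(\boldsymbol{\mu}, \boldsymbol{\kappa}, \boldsymbol{\Lambda}) - \lambda \sum_{i<j} |\Lambda_{ij}|,

where \ell is the (pseudo-)log-likelihood of the observed angles; the penalty shrinks weak circular couplings between dendrite orientations to exactly zero.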