63 results for Machine-tools - numerical control
Abstract:
Vicinal Ge(100) is the common substrate for state-of-the-art multi-junction solar cells grown by metal-organic vapor phase epitaxy (MOVPE). While triple-junction solar cells based on Ge(100) reach efficiencies greater than 40%, little is known about the microscopic III-V/Ge(100) nucleation and its interface formation. A suitable Ge(100) surface preparation prior to heteroepitaxy is crucial to achieve low defect densities in the III-V epilayers. Formation of single-domain surfaces with double-layer steps is required to avoid anti-phase domains in the III-V films. The step formation processes in the MOVPE environment strongly depend on the major process parameters such as substrate temperature, H2 partial pressure, group V precursors [1], and reactor conditions. Detailed investigation of these processes on the Ge(100) surface by ultrahigh vacuum (UHV) based standard surface science tools is complicated by the presence of the H2 process gas. However, in situ surface characterization by reflection anisotropy spectroscopy (RAS) allowed us to study the MOVPE preparation of Ge(100) surfaces directly as a function of the relevant process parameters [2, 3, 4]. A contamination-free MOVPE-to-UHV transfer system [5] enabled correlation of the RA spectra with results from UHV-based surface science tools. In this paper, we establish the characteristic RA spectra of vicinal Ge(100) surfaces terminated with monohydrides, arsenic and phosphorus. RAS enabled in situ control of oxide removal, H2 interaction and domain formation during MOVPE preparation.
Abstract:
Automatic control teaching in the new degree syllabi has been reduced, both in content and in the courses devoted to it, with respect to traditional engineering degrees. In degrees that do not train automatic control specialists, an adapted methodology is required to deliver the minimum contents the student needs to assimilate, even when students do not perceive these contents as the most important for their future career. In this paper we present the contents of a short automatic control course taught in the Naval Architecture and Marine Engineering Degrees at the School of Naval Engineering of the Polytechnic University of Madrid. We describe the contents covered using the proposed methodology, which is based on practical work after lectures: first, the students solve exercises by hand; second, they solve the same exercises using software tools; and finally, they validate their previous results and their knowledge on the laboratory platforms.
Abstract:
The aim of this chapter is to discuss the applicability of recently proposed knowledge modelling tools to the development of agent-based systems. The discussion is derived from the real world experience of a particular software tool called KSM (Knowledge Structure Manager). The chapter provides details about this tool and then proceeds to show in which forms the software may be used to support the development of agent-based systems. Two multiagent systems, one in the field of telecommunications management and the other one in the field of flood control, are described. Conclusions about these studies are presented, summarizing the main contributions that knowledge modelling tools can bring to the development of agent-based systems.
Abstract:
Hybrid Stepper Motors are widely used in open-loop position applications. They are the choice of actuation for the collimators in the Large Hadron Collider, the largest particle accelerator, located at CERN. In this case the positioning requirements and the highly radioactive operating environment are unique. The latter both forces the use of long cables, which act as transmission lines, to connect the motors to the drives, and prevents the use of standard position sensors. However, reliable and precise operation of the collimators is critical for the machine, requiring step loss in the motors to be prevented and maintenance to be foreseen in case of mechanical degradation. In order to make the above possible, an approach is proposed for the application of an Extended Kalman Filter to a sensorless stepper motor drive, when the motor is separated from its drive by long cables. When long cables and high-frequency pulse width modulated control voltage signals are used together, the electrical signals differ greatly between the motor side and the drive side of the cable. Since only drive-side data is available in the considered case, it is necessary to estimate the motor-side signals. Modelling the entire cable and motor system in an Extended Kalman Filter is too computationally intensive for standard embedded real-time platforms. It is, in consequence, proposed to divide the problem into an Extended Kalman Filter, based only on the motor model, and separate motor-side signal estimators, the combination of which is less demanding computationally. The effectiveness of this approach is shown in simulation. Then its validity is experimentally demonstrated via implementation in a DSP-based drive. A testbench to test its performance when driving an axis of a Large Hadron Collider collimator is presented along with the results achieved. It is shown that the proposed method is capable of achieving position and load torque estimates which allow step loss to be detected and mechanical degradation to be evaluated without the need for physical sensors. These estimation algorithms often require a precise model of the motor, but the standard electrical model used for hybrid stepper motors is limited when currents high enough to saturate the magnetic circuit are present. New model extensions are proposed in order to have a more precise model of the motor independently of the current level, whilst maintaining a low computational cost. It is shown that a significant improvement in the model fit is achieved with these extensions, and their computational performance is compared in order to weigh model improvement against computational cost. The applicability of the proposed model extensions is demonstrated via their use in an Extended Kalman Filter running in real-time for closed-loop current control and mechanical state estimation. An additional problem arises from the use of stepper motors. The mechanics of the collimators can wear due to the abrupt motion and torque profiles that are applied by them when used in the standard way, i.e. stepping in open-loop. Closed-loop position control, more specifically Field Oriented Control, would allow smoother profiles, more respectful to the mechanics, to be applied but requires position feedback. As mentioned already, the use of sensors in radioactive environments is very limited for reliability reasons.
Sensorless control is a known option, but when the speed is very low or zero, as is the case most of the time for the motors used in the LHC collimators, the loss of observability prevents its use. In order to allow the use of position sensors without reducing the long-term reliability of the whole system, the possibility to switch from closed to open loop is proposed and validated, allowing the use of closed-loop control when the position sensors function correctly and open-loop when there is a sensor failure. A different approach to deal with the switched drive working with long cables is also presented. Switched-mode stepper motor drives tend to have poor performance or even fail completely when the motor is fed through a long cable, due to the high oscillations in the drive-side current. The design of a stepper motor output filter which solves this problem is thus proposed. A two-stage filter, one stage dealing with the differential mode and the other with the common mode, is designed and validated experimentally. With this filter the drive performance is greatly improved, achieving a positioning repeatability even better than that of the drive working without a long cable; radiated emissions are reduced and the overvoltages at the motor terminals are eliminated.
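A minimal sketch of the generic Extended Kalman Filter recursion that underlies the estimator described above, written with a deliberately simplified two-phase hybrid stepper motor model. The state vector, motor parameters, noise covariances and the Euler discretisation are illustrative assumptions, not the values or model extensions of the thesis; in the proposed scheme the inputs u and measurements z would come from the separate motor-side signal estimators rather than from direct motor-side measurements.

```python
import numpy as np

# State x = [i_a, i_b, omega, theta], inputs u = [v_a, v_b] (motor-side voltages).
R, L, Km, J, B, p = 1.2, 2.5e-3, 0.25, 1e-4, 1e-4, 50   # assumed parameters
dt = 50e-6                                               # sampling period [s]

def f(x, u, tau_load=0.0):
    """Discretised (Euler) nonlinear motor model x_{k+1} = f(x_k, u_k)."""
    ia, ib, w, th = x
    va, vb = u
    dia = (va - R * ia + Km * w * np.sin(p * th)) / L
    dib = (vb - R * ib - Km * w * np.cos(p * th)) / L
    dw = (Km * (-ia * np.sin(p * th) + ib * np.cos(p * th)) - B * w - tau_load) / J
    return x + dt * np.array([dia, dib, dw, w])

def F_jac(x, u):
    """Jacobian of f, obtained numerically for brevity."""
    eps, n = 1e-6, len(x)
    Jm = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        Jm[:, j] = (f(x + dx, u) - f(x - dx, u)) / (2 * eps)
    return Jm

H = np.array([[1.0, 0, 0, 0],
              [0, 1.0, 0, 0]])          # only phase currents are observed

def ekf_step(x, P, u, z, Q, Rm):
    # Predict
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update with (estimated) motor-side currents z = [i_a, i_b]
    S = H @ P_pred @ H.T + Rm
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

if __name__ == "__main__":
    Q = np.diag([1e-3, 1e-3, 1e-1, 1e-6])   # illustrative process noise
    Rm = np.diag([1e-3, 1e-3])              # illustrative measurement noise
    x, P = np.zeros(4), np.eye(4) * 1e-3
    x, P = ekf_step(x, P, u=np.array([2.0, 0.0]), z=np.array([0.1, 0.0]), Q=Q, Rm=Rm)
    print(x)
```

Keeping only the motor in the filter state, as sketched here, is what keeps the recursion cheap enough for an embedded real-time platform; the long cable is handled outside the filter by the signal estimators.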
Abstract:
This thesis is part of a line of research started in 2004 and focused on energy-efficient constructive solutions for wine aging. The main objective is to promote energy conservation and sustainability in the design of wine aging rooms. To do so, the study focuses on underground cellars, an example of eco-construction, since they usually provide adequate conditions for wine aging without the energy cost of air conditioning. In particular, it addresses key aspects for characterizing and understanding how these constructions work, aspects which often determine their success or failure. Thus, the complex behavior of natural ventilation throughout the year is analyzed, determining the factors that condition it and revealing the role played by both the access tunnel and the ventilation chimneys. In addition, numerical models are developed and validated using CFD simulation, allowing detailed assessment and prediction of the thermo-fluid-dynamic behavior of underground constructions. Moreover, the uniformity and stability of the aging room throughout the year are quantified, providing information that is useful to set recommendations and design guidelines for monitoring plans. Finally, the differences in hygrothermal behavior between a wide range of underground cellars for red wine, wineries representative of alternative constructive solutions for red wine, and wineries for cava and fortified wines are determined, placing the behavior of these wineries in a global context. In addition, methodologies adapted to the particular characteristics of these constructions are developed, in particular a monitoring system based on infrared thermography to carry out spot inspections for the environmental control of large aging halls. It is expected that the technical studies and tools developed here will help to improve the design of new wineries and the hygrothermal conditions of the existing ones, helping Spain to remain at the forefront of quality wine production.
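As an illustration of how the uniformity and stability of an aging room can be quantified from a monitoring campaign, the following sketch computes simple spatial and temporal indicators from synthetic hourly temperatures at several sensors. The data, sensor count and choice of metrics are illustrative assumptions, not the indicators used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(365 * 24)
# temps[t, s]: temperature at hour t for sensor s (synthetic, cellar-like)
seasonal = 13 + 2.0 * np.sin(2 * np.pi * hours / (365 * 24))
temps = seasonal[:, None] + rng.normal(0, 0.15, size=(hours.size, 5))

spatial_std = temps.std(axis=1)            # spread across sensors, per hour
uniformity = spatial_std.mean()            # mean spatial non-uniformity [degC]
room_mean = temps.mean(axis=1)
stability = room_mean.std()                # temporal variability of the room mean
amplitude = room_mean.max() - room_mean.min()

print(f"mean spatial std: {uniformity:.2f} degC")
print(f"temporal std:     {stability:.2f} degC")
print(f"annual amplitude: {amplitude:.2f} degC")
```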
Abstract:
We examine, with recently developed Lagrangian tools, altimeter data and numerical simulations obtained from the HYCOM model in the Gulf of Mexico. Our data correspond to the months just after the Deepwater Horizon oil spill in the year 2010. Our Lagrangian analysis provides a skeleton that allows the interpretation of transport routes over the ocean surface. The transport routes are further verified by the simultaneous study of the evolution of several drifters launched during those months in the Gulf of Mexico. We find that there exist Lagrangian structures that justify the dynamics of the drifters, although the agreement depends on the quality of the data. We discuss the impact of the Lagrangian tools on the assessment of the predictive capacity of these data sets.
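One building block behind this kind of Lagrangian analysis is the advection of virtual drifters in a gridded surface velocity field. The sketch below integrates a trajectory with a fourth-order Runge-Kutta step over a steady synthetic field; the grid, the bilinear interpolator and the crude metres-to-degrees conversion are illustrative assumptions (time-dependent altimeter or HYCOM fields would also require interpolation in time), and the sketch does not compute the Lagrangian descriptors themselves.

```python
import numpy as np

def interp(field, lon, lat, lons, lats):
    """Bilinear interpolation of a 2-D field (lat x lon) at (lon, lat)."""
    i = np.clip(np.searchsorted(lons, lon) - 1, 0, len(lons) - 2)
    j = np.clip(np.searchsorted(lats, lat) - 1, 0, len(lats) - 2)
    tx = (lon - lons[i]) / (lons[i + 1] - lons[i])
    ty = (lat - lats[j]) / (lats[j + 1] - lats[j])
    return ((1 - tx) * (1 - ty) * field[j, i] + tx * (1 - ty) * field[j, i + 1]
            + (1 - tx) * ty * field[j + 1, i] + tx * ty * field[j + 1, i + 1])

def velocity(x, u, v, lons, lats):
    """Velocity in degrees/day at position x = [lon, lat]; u, v given in m/s."""
    deg = 111e3                       # metres per degree, crude approximation
    ms_u = interp(u, x[0], x[1], lons, lats)
    ms_v = interp(v, x[0], x[1], lons, lats)
    return np.array([ms_u / (deg * np.cos(np.radians(x[1]))), ms_v / deg]) * 86400

def rk4_step(x, dt, u, v, lons, lats):
    """One RK4 advection step of a virtual drifter (dt in days)."""
    k1 = velocity(x, u, v, lons, lats)
    k2 = velocity(x + 0.5 * dt * k1, u, v, lons, lats)
    k3 = velocity(x + 0.5 * dt * k2, u, v, lons, lats)
    k4 = velocity(x + dt * k3, u, v, lons, lats)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

if __name__ == "__main__":
    lons = np.linspace(-98, -80, 60)
    lats = np.linspace(18, 31, 45)
    LON, LAT = np.meshgrid(lons, lats)
    u = 0.3 * np.sin(0.5 * LAT)          # synthetic, steady surface field [m/s]
    v = 0.3 * np.cos(0.5 * LON)
    x = np.array([-88.0, 26.0])          # virtual drifter in the Gulf of Mexico
    for _ in range(30):                   # 30 days with dt = 1 day
        x = rk4_step(x, 1.0, u, v, lons, lats)
    print("final position:", x)
```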
Abstract:
Nanotechnology is a recently developed research area that deals with the manipulation and control of matter at dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, and reveal novel diagnostic and therapeutic properties. Nevertheless, the complexity of information at the nano level is much higher than at the conventional biological levels (from populations down to the cell) and, thus, any nanomedical research workflow inherently demands advanced information management. Unfortunately, Biomedical Informatics (BMI) has not yet provided the framework needed to deal with these information challenges, nor adapted its methods and tools to the new research field.
In this context, the novel area of nanoinformatics aims to build new bridges between medicine, nanotechnology and informatics, allowing the application of computational methods to solve informational issues at the wide intersection between biomedicine and nanotechnology. The above observations determine the context of this doctoral dissertation, which is focused on analyzing the nanomedical domain in-depth, and developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, for leveraging available nanomedical data. The author analyzes, through real-life case studies, some research tasks in nanomedicine that would require or could benefit from the use of nanoinformatics methods and tools, illustrating present drawbacks and limitations of BMI approaches to deal with data belonging to the nanomedical domain. Three different scenarios, comparing both the biomedical and nanomedical contexts, are discussed as examples of activities that researchers would perform while conducting their research: i) searching over the Web for data sources and computational resources supporting their research; ii) searching the literature for experimental results and publications related to their research, and iii) searching clinical trial registries for clinical results related to their research. The development of these activities will depend on the use of informatics tools and services, such as web browsers, databases of citations and abstracts indexing the biomedical literature, and web-based clinical trial registries, respectively. For each scenario, this document provides a detailed analysis of the potential information barriers that could hamper the successful development of the different research tasks in both fields (biomedicine and nanomedicine), emphasizing the existing challenges for nanomedical research —where the major barriers have been found. The author illustrates how the application of BMI methodologies to these scenarios can be proven successful in the biomedical domain, whilst these methodologies present severe limitations when applied to the nanomedical context. To address such limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. This approach consists of an in-depth analysis of the scientific literature and available clinical trial registries to extract relevant information about experiments and results in nanomedicine —textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.—, followed by the development of mechanisms to automatically structure and analyze this information. This analysis resulted in the generation of a gold standard —a manually annotated training or reference set—, which was applied to the automatic classification of clinical trial summaries, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the necessary methods for organizing, curating and validating existing nanomedical data on a scale suitable for decision-making. 
Similar analysis for different nanomedical research tasks would help to detect which nanoinformatics resources are required to meet current goals in the field, as well as to generate densely populated and machine-interpretable reference datasets from the literature and other unstructured sources for further testing novel algorithms and inferring new valuable information for nanomedicine.
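As an illustration of the final classification step, the sketch below trains a simple text classifier that separates nanotechnology-related trial summaries from traditional pharmaceutical ones. The toy corpus, and the choice of a TF-IDF representation with logistic regression from scikit-learn, are assumptions made for brevity and do not reproduce the gold standard or the classifier developed in the thesis.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, made-up trial summaries standing in for a manually annotated gold
# standard: label 1 = nanodrug/nanodevice study, label 0 = traditional drug.
summaries = [
    "liposomal nanoparticle formulation of doxorubicin for solid tumors",
    "gold nanoparticle contrast agent for photoacoustic imaging",
    "polymeric nanocarrier targeting HER2-positive breast cancer",
    "oral metformin versus placebo in type 2 diabetes",
    "randomized trial of atorvastatin dose escalation",
    "subcutaneous insulin regimen comparison in adolescents",
]
labels = [1, 1, 1, 0, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(summaries, labels)

# Classify a new, unseen summary.
print(clf.predict(["silica nanoparticle carrier for siRNA delivery"]))
```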
Abstract:
The Electronic and Microelectronic Design Group of the Universidad Politécnica de Madrid (GDEM) is devoted, among other things, to the study and improvement of power consumption in embedded systems; this project took shape and was developed there, around that topic. According to an article in the online magazine Revista de Electrónica Embebida, an embedded system is one "controlled by a microprocessor which, thanks to the program it includes or that must be added to it, performs the specific function it was designed for, integrating internally most of the elements needed to carry out that function". The motivation for studying this topic is the ever-growing presence of embedded systems in our daily life, driven by the tendency to add "intelligence" to anything that can make our lives a little easier. Such systems can be found in factories, citizen-service offices, home security systems, watches, mobile phones, washing machines, ovens, vacuum cleaners and, in general, in almost any device we can think of. Despite their great advantages, important drawbacks remain. The biggest problem today is the autonomy of the system itself, since these devices are often battery powered to keep them portable. For this reason, there is an effort to give such systems energy-saving and decision-making capabilities that could help double the battery autonomy. A clear example is today's smartphones, almost indispensable devices whose autonomy can be as short as one day; this is impractical for the user when travelling, working or in other situations of heavy use without access to the mains. Hence the need to investigate, without improving the system hardware, a way to improve this situation. This project works along that line by creating an automatic measurement system that generates the currents used as inputs to verify the acquisition system which, together with the BeagleBoard, will enable decision making related to energy consumption. To build this system, different tools available in the GDEM laboratory are used, mainly an Agilent power supply and the BeagleBoard. The main objective is the simulation of signals that, after a conversion and processing stage, represent the consumption of each of the parts that can make up a generic embedded system. The system therefore acts as a test bench that helps to simulate this consumption so that the system's microprocessor can eventually make a decision.
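A minimal sketch of the decision side of such a test bench: current samples representing the consumption of one block of a generic embedded system are averaged over a window and compared against a power budget. The read_current() stub, supply voltage and thresholds are hypothetical; in the real setup the samples would come from the acquisition chain feeding the BeagleBoard.

```python
import random, time

SUPPLY_VOLTAGE = 5.0        # volts, assumed
POWER_BUDGET_W = 1.5        # average power budget, assumed
WINDOW = 50                 # samples per decision window

def read_current():
    """Stand-in for one ADC reading of the simulated consumption (amperes)."""
    return random.uniform(0.1, 0.5)

def average_power(n_samples):
    """Average power over a window of current samples."""
    return sum(read_current() * SUPPLY_VOLTAGE for _ in range(n_samples)) / n_samples

if __name__ == "__main__":
    for _ in range(3):                       # three decision windows
        p_avg = average_power(WINDOW)
        if p_avg > POWER_BUDGET_W:
            print(f"{p_avg:.2f} W > budget: request low-power mode")
        else:
            print(f"{p_avg:.2f} W within budget: keep current mode")
        time.sleep(0.1)
```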
Abstract:
The aim of this work is to develop an automated tool for the optimization of turbomachinery blades based on an evolutionary strategy. This optimization scheme serves to deal with supersonic blade cascades for application to Organic Rankine Cycle (ORC) turbines. The blade geometry is defined using parameterization techniques based on B-spline curves, which allow local control of the shape. The locations in space of the control points of the B-spline curve define the design variables of the optimization problem. In the present work, the performance of the blade shape is assessed by means of fully turbulent flow simulations performed with a CFD package, in which a look-up table method is applied to ensure an accurate thermodynamic treatment. The solver is coupled with the optimization tool to determine the optimal shape of the blade. As only blade-to-blade effects are of interest in this study, quasi-3D calculations are performed, and a single-objective evolutionary strategy is applied to the optimization. As a result, a non-intrusive tool, with no need for gradient definitions, is developed. The computational cost is reduced by the use of surrogate models. A Gaussian interpolation scheme (Kriging model) is applied to estimate the n-dimensional objective function, and a surrogate-based local optimization strategy is shown to provide an accurate route to the optimum. In particular, the present optimization scheme has been applied to the re-design of a supersonic stator cascade of an axial-flow turbine. In this design exercise, very strong shock waves are generated on the rear blade suction side and shock-boundary layer interaction mechanisms occur. A significant efficiency improvement is achieved as a consequence of a more uniform flow at the blade outlet section of the stator. This is also expected to have beneficial effects on the design of a subsequent downstream rotor. The method provides an improvement over gradient-based methods, and an optimized blade geometry is easily obtained using the genetic algorithm.
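The surrogate-based part of such a scheme can be illustrated with a short Kriging loop: fit a Gaussian-process model to the designs evaluated so far, pick the next candidate from the model, evaluate it, and repeat. In the sketch below the design variables stand for B-spline control-point coordinates and expensive_eval() stands in for the quasi-3D CFD evaluation; the lower-confidence-bound selection over a random candidate pool replaces, for brevity, the evolutionary strategy actually used, and all parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
DIM, BOUNDS = 4, (-1.0, 1.0)          # assumed number of free control points

def expensive_eval(x):
    """Stand-in for the CFD-based loss (e.g. related to outlet non-uniformity)."""
    return float(np.sum((x - 0.3) ** 2) + 0.1 * np.sin(5 * x).sum())

# Initial design of experiments
X = rng.uniform(*BOUNDS, size=(8, DIM))
y = np.array([expensive_eval(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for it in range(15):
    gp.fit(X, y)
    # Candidate pool; pick the point with the best lower confidence bound (LCB)
    cand = rng.uniform(*BOUNDS, size=(2000, DIM))
    mu, sigma = gp.predict(cand, return_std=True)
    best = cand[np.argmin(mu - 1.5 * sigma)]
    X = np.vstack([X, best])
    y = np.append(y, expensive_eval(best))

print("best design:", X[np.argmin(y)], "loss:", y.min())
```

The benefit of the surrogate is that only a handful of expensive CFD evaluations are spent per iteration, while the cheap model is queried thousands of times to decide where to evaluate next.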
Abstract:
The European chestnut (Castanea sativa Mill.) is a multipurpose species that has been widely cultivated around the Mediterranean basin since ancient times. New varieties were brought to the Iberian Peninsula during the Roman Empire, which have coexisted since then with native populations that survived the last glaciation. The relevance of chestnut cultivation had been steadily growing since the Middle Ages, until the rural decline of the past century put a stop to this trend. Forest fires and diseases were also major factors. Chestnut cultivation is gaining momentum again due to its economic (wood, fruits) and ecological relevance, and currently represents an important asset in many rural areas of Europe. In this thesis we apply different molecular tools to help improve current management strategies. For this study we have chosen El Bierzo (Castile and Leon, NW Spain), which has a centenary tradition of chestnut cultivation and management, and also presents several unique features from a genetic perspective (see below). Moreover, its nuts are widely appreciated in Spain and abroad for their organoleptic properties. We have focused our experimental work on two major problems faced by breeders and the industry: the lack of a fine-grained genetic characterization and the need for new strategies to control blight disease. To characterize with sufficient detail the genetic diversity and structure of El Bierzo orchards, we analyzed DNA from 169 trees grafted for nut production covering the entire region. We also analyzed 62 nuts from all traditional varieties. El Bierzo constitutes an outstanding scenario to study chestnut genetics and the influence of human management because: (i) it is located at one extreme of the distribution area; (ii) it is a major glacial refuge for the native species; (iii) it has a long tradition of human management (since Roman times, at least); and (iv) its geographical setting ensures an unusual degree of genetic isolation. Thirteen microsatellite markers provided enough informativeness and discrimination power to genotype at the individual level. Together with an unexpected level of genetic variability, we found evidence of genetic structure, with three major gene pools giving rise to the current population. High levels of genetic differentiation between groups supported this organization. Interestingly, the genetic structure does not match spatial boundaries, suggesting that the exchange of material and cultivation practices have strongly influenced natural gene flow. The microsatellite markers selected for this study were also used to classify a set of 62 samples belonging to all traditional varieties. We identified several cases of synonymies and homonymies, evidencing the need to replace traditional classification systems with new tools for genetic profiling. Management and conservation strategies should also benefit from these tools. The advent of high-throughput sequencing technologies, combined with the development of bioinformatics tools, has paved the way for studying transcriptomes without the need for a reference genome. We took advantage of RNA sequencing and de novo assembly tools to determine the transcriptional landscape of chestnut in response to blight disease. In addition, we have selected a set of candidate genes with high potential for developing resistant varieties via genetic engineering. Our results evidenced a deep transcriptional reprogramming upon fungal infection. The plant hormones ET and JA appear to orchestrate the defensive response.
Interestingly, our results also suggest a role for auxins in modulating this response. Many transcription factors that interact with promoters of genes involved in disease resistance were identified in this work. Among these genes, we have conducted a functional characterization of two major thaumatin-like proteins (TLPs) belonging to the PR5 family. Two genes encoding chestnut cotyledon TLPs, termed CsTL1 and CsTL2, had been characterized previously. We substantiate here, for the first time, their protective role against blight disease, providing in silico, in vitro and in vivo evidence. The synergy between TLPs and other antifungal proteins, particularly endo-β-1,3-glucanases, bolsters their interest for future control strategies based on biotechnological approaches.
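A small sketch of how identical multilocus microsatellite profiles reveal synonymies among named varieties (the same clone grafted under different names). The variety names, markers and allele sizes below are invented for illustration and are not data from the El Bierzo survey.

```python
from collections import defaultdict

# genotype = tuple of (allele1, allele2) pairs, one pair per SSR marker,
# with alleles sorted so that identical profiles compare equal.
genotypes = {
    "variety_A": ((180, 186), (210, 214), (145, 145)),
    "variety_B": ((180, 186), (210, 214), (145, 145)),   # same profile as A
    "variety_C": ((178, 186), (212, 214), (141, 149)),
    "variety_D": ((182, 190), (208, 214), (145, 151)),
}

profiles = defaultdict(list)
for name, profile in genotypes.items():
    profiles[profile].append(name)

for profile, names in profiles.items():
    if len(names) > 1:
        print("putative synonymy:", ", ".join(names))
```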
Abstract:
The regulations governing dam safety in Spain have established the need to know the displacements and deformations of dam structures and their foundations. Today, many dams in operation do not have a monitoring system adequate for controlling this type of variable, since installing classical precision methods in them may not be technically feasible and, when it is, it entails a significant cost and doubtful guarantees for the execution of the corresponding civil works. With the development of new technologies, computing and telecommunications, new displacement monitoring systems have emerged. Current GPS systems, designed for structural monitoring, machine guidance, navigation and surveying, slope stability, subsidence, etc., achieve centimetre-level accuracy. A motion control system based on DGPS (differential GPS) technology, combined with a statistical filter, reaches sensitivities of up to ±1 mm, sufficient for the routine monitoring of dams according to current regulations. This accuracy is well suited to the radial displacements of dams, where crest amplitudes of up to 15 mm in gravity dams and up to 45 mm in arch dams are common. This research aims to analyse the feasibility of the DGPS system for monitoring the movements of concrete dams, comparing the different monitoring systems and their correlation with the physical variables and with the variables associated with the differential GPS system itself. In order to answer these questions and to validate and incorporate this technology into civil engineering practice in Spain, a case study has been carried out at the La Aceña dam (Ávila), one of the few Spanish dams monitored with this technology simultaneously with classical monitoring systems and some other recently introduced ones. The research has been organised to answer several questions that dam operators raise and that are not addressed in the state of the art: how to define the spatial configuration of the system and which points need to be controlled, which communication systems are the most reliable, what the associated costs are, how to calibrate the software, the service life and required maintenance, and the possibility of remote data monitoring. Among the advantages of the DGPS system are its low implementation cost, the possibility of remote control, and the accuracy and absolute nature of the data. It is also especially suitable for isolated or poorly connected dams, and for those in which the operator has no reference for the magnitude of the displacements or deformations of the dam over its whole history. Among the drawbacks of any system based on new technologies, the importance of telecommunications stands out, whether locally at the dam itself or from the dam to the operator's control centre. With the experience gained in dam safety management and on the basis of the recent implementation of the new monitoring methods described, each of their advantages and drawbacks has been analysed. Chapter 5 presents a decision table for the operator that will serve as a starting point for future investments. The impact of this research is reflected in the publication of several articles in indexed journals and in the discussion it has prompted among managers and professionals of the sector at the national and international conferences where preliminary results have been presented.
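The combination of DGPS fixes with a statistical filter can be illustrated with a one-dimensional recursive filter applied to a quasi-static monitoring point: centimetre-level fixes are smoothed until millimetre-level displacements become resolvable. The filter type, noise magnitudes and synthetic drift below are illustrative assumptions, not the filter implemented in the system studied.

```python
import numpy as np

def scalar_kalman(z, q=1e-8, r=1e-4):
    """1-D Kalman filter for one coordinate of a quasi-static point.

    z : sequence of DGPS fixes for the coordinate [m]
    q : process variance, allowing slow real displacement of the dam [m^2]
    r : measurement variance of a single DGPS fix [m^2]
    """
    x, p = z[0], r
    out = []
    for zk in z:
        p = p + q                      # predict (position assumed quasi-static)
        k = p / (p + r)                # Kalman gain
        x = x + k * (zk - x)           # update with the new fix
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_disp = np.linspace(0.0, 0.005, 2000)          # 5 mm seasonal drift
    fixes = true_disp + rng.normal(0, 0.01, 2000)      # ±1 cm noisy DGPS fixes
    est = scalar_kalman(fixes)
    print(f"final error: {abs(est[-1] - true_disp[-1]) * 1000:.2f} mm")
```

The process variance q controls the trade-off: a smaller q smooths more aggressively but lags behind real displacement, while a larger q follows the dam faster at the cost of residual noise.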
Abstract:
This thesis provides a significant amount of information on the use of a new advanced polymeric (polyolefin-based) material especially suitable, in the form of fibres, for addition to concrete. At the time of writing, there is a noteworthy lack of research and knowledge about its use as a randomly distributed reinforcing element in concrete. Fibres with a diameter of approximately 1 mm, lengths between 48 and 60 mm, an embossed surface and improved mechanical properties were employed. The promising properties of the polyolefin material (low density, low cost, good strength behaviour and high chemical stability) justify the research effort required to demonstrate the advantages of its use for practical purposes. While most of the research used self-compacting concrete as the matrix, given that this type of material is optimum for filling the concrete formwork, standard vibration-compacted mixes were also used for comparison purposes. In addition, the important development of fibre-reinforced concrete in recent years, both in research and in practical applications, underlines the interest of the results and design considerations provided by this thesis. The resulting composite material, polyolefin fibre reinforced concrete (PFRC), has been extensively tested and studied. The results have allowed the following objectives to be met: - Quantification of the mechanical properties of PFRC in order to demonstrate its good post-cracking performance in structural elements subjected to tensile stresses. - Assessment of the results against existing structural codes, regulations and test methods, and evaluation of the potential of PFRC to meet their requirements and to replace traditional steel-bar reinforcement in certain applications. - Development of numerical tools designed to evaluate the capability of PFRC to substitute, either partially or totally, standard steel reinforcing bars, either alone or in conjunction with steel fibres. - Provision, based on the large amount of experimental work and on real applications, of a series of guidelines and recommendations for the practical and reliable design and construction of PFRC elements. Furthermore, the thesis also reports promising results in an innovative line in the field of fibre-reinforced concrete: the design of a fibre cocktail that reinforces the concrete with two types of fibres simultaneously. Polyolefin fibres were combined with steel fibres in the same self-compacting concrete, identifying synergies that could serve as the basis for the future use of this fibre-reinforced concrete technology.
Abstract:
Machine learning and scientometrics are the scientific disciplines covered in this dissertation. Machine learning deals with the construction and study of algorithms that can learn from data, whereas scientometrics is mainly concerned with the analysis of science from a quantitative perspective. Nowadays, advances in machine learning provide the mathematical and statistical tools needed to work properly with the vast amount of scientometric data stored in bibliographic databases. In this context, the use of novel machine learning methods in scientometric applications is the focus of this dissertation, which proposes new machine learning contributions that shed light on the scientometrics area. These contributions are divided into three parts: Several supervised cost-(in)sensitive models are learned to predict the scientific success of articles and researchers. Cost-sensitive models are not interested in maximizing classification accuracy, but in minimizing the expected total cost derived from classification mistakes. In this context, publishers of scientific journals could have a tool capable of predicting the future citation count of an article before it is published, whereas promotion committees could predict the annual increase of researchers' h-indices within the first few years. These predictive models would pave the way for new assessment systems. Several probabilistic graphical models are learned to exploit and discover new relationships among the large number of existing bibliometric indices. In this context, the scientific community could measure how some indices influence others in probabilistic terms, and perform evidence propagation and abductive inference to answer bibliometric questions. The scientific community could also uncover which bibliometric indices have the highest predictive power. This is a multi-output regression problem in which the role of each variable, predictor or response, is unknown beforehand. The resulting indices could be very useful for prediction purposes; that is, once their values are known, knowledge of any other index value provides no additional information for predicting the remaining bibliometric indices. Finally, a scientometric study of Spanish computer science research is performed under the publish-or-perish culture. This study is based on a cluster analysis methodology that characterizes research activity in terms of productivity, visibility, quality, prestige and international collaboration, and also analyzes the effects of collaboration on productivity and visibility under different circumstances.
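The cost-sensitive decision rule mentioned above can be illustrated in a few lines: given class posterior probabilities and a misclassification cost matrix, the predicted class is the one that minimizes the expected cost rather than the one with the highest posterior. The cost matrix and posteriors below are invented for illustration and are not figures from the dissertation.

```python
import numpy as np

# cost[i, j] = cost of predicting class j when the true class is i
# (e.g. classes could be "highly cited" vs "not highly cited" articles)
cost = np.array([[0.0, 5.0],     # missing a highly cited article is expensive
                 [1.0, 0.0]])    # a false alarm is cheaper

def min_expected_cost(posteriors, cost):
    """Return the class minimizing the expected misclassification cost.

    posteriors : array of shape (n_classes,), e.g. output of predict_proba.
    """
    expected = posteriors @ cost       # expected cost of each possible decision
    return int(np.argmin(expected)), expected

p = np.array([0.35, 0.65])            # posterior for classes [cited, not cited]
decision, exp_cost = min_expected_cost(p, cost)
print(decision, exp_cost)             # accuracy-maximizing choice would be class 1
```

With these illustrative numbers the expected costs are 0.65 and 1.75, so the cost-sensitive decision is class 0 even though class 1 has the higher posterior probability.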
Abstract:
The objective of this project is to create a model for managing and controlling well integrity, based on the monitoring of the operation and the analysis of the well's design and construction. To this end, a software tool is designed that makes the necessary information available and manages it according to international standards. In the initial phase of the project, a literature review of the most relevant studies on well integrity was carried out, covering work by the SPE (Society of Petroleum Engineers), universities in Norway and Mexico, and companies such as ECOPETROL, in order to set the context and analyze previous models and similar tools. Secondly, a study of database options was performed, starting with an analysis of the PPDM (The Professional Petroleum Data Management Association) standards on which the tool is built; the database best suited to the tool was then selected, with Access considered the best option because it is easily accessible and incorporates a reporting module. To create the solution, a data model was designed, conditioned by the well integrity requirements of the Norwegian standard NORSOK D-010 and of the well integrity recommended guidelines of the Norwegian Oil and Gas Association. This model was implemented as a database on the Access platform, using Access's own language combined with SQL programming. Once the data model was defined and built, a visualization layer could be created on top of it; for this purpose a well status report is defined, visible to each user according to their roles and responsibilities.
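A minimal sketch of the idea behind such a data model and well status report, using sqlite3 only so the example is self-contained; the project itself uses an Access database built on PPDM standards, and every table, column and status value below is invented for illustration rather than taken from that schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE well   (well_id TEXT PRIMARY KEY, name TEXT);
CREATE TABLE barrier(id INTEGER PRIMARY KEY, well_id TEXT,
                     element TEXT,            -- e.g. casing, cement, valve
                     envelope TEXT,           -- 'primary' or 'secondary'
                     status TEXT,             -- 'OK', 'degraded', 'failed'
                     FOREIGN KEY(well_id) REFERENCES well(well_id));
""")
cur.executemany("INSERT INTO well VALUES (?, ?)",
                [("W-01", "Example well")])
cur.executemany(
    "INSERT INTO barrier(well_id, element, envelope, status) VALUES (?, ?, ?, ?)",
    [("W-01", "production casing", "primary", "OK"),
     ("W-01", "cement sheath", "primary", "degraded"),
     ("W-01", "annulus safety valve", "secondary", "OK")])

# Well status report: worst barrier status per envelope, per well.
cur.execute("""
SELECT w.name, b.envelope,
       MAX(CASE b.status WHEN 'failed' THEN 2 WHEN 'degraded' THEN 1 ELSE 0 END)
FROM well w JOIN barrier b ON b.well_id = w.well_id
GROUP BY w.well_id, b.envelope
""")
for name, envelope, worst in cur.fetchall():
    print(name, envelope, {0: "OK", 1: "degraded", 2: "failed"}[worst])
```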
Abstract:
The main objective of this PhD thesis is to go more deeply into the analysis and design of an intelligent system for surface roughness prediction and control in the end-milling machining process, based fundamentally on Bayesian network classifiers, with the aim of developing a methodology that eases the design of this type of system. The system, whose purpose is to make surface roughness prediction and control possible, consists of a model learnt from experimental data with the aid of Bayesian networks, which helps to understand the dynamic processes involved in machining and the interactions among the relevant variables. Since artificial neural networks are models widely used in material cutting processes, an end-milling model based on them is also included, in which the geometry and the hardness of the workpiece are introduced as novel variables not studied so far in this context. Thus, an important contribution of this thesis is these two models for surface roughness prediction, which are compared with respect to different aspects: the influence of the new variables, performance evaluation metrics, and interpretability. One of the main problems in modelling with Bayesian classifiers is understanding the enormous posterior probability tables they produce. We introduce an explanation method that generates a set of rules obtained from decision trees. Such trees are induced from a simulated data set generated from the posterior probabilities of the class variable, calculated with the Bayesian network learned from a training data set. Finally, we contribute to the multi-objective field in the case where some of the objectives cannot be quantified as real numbers but only as interval-valued functions. This often occurs in machine learning applications, especially those based on supervised classification. Specifically, the ideas of dominance and Pareto front are extended to this setting. They are applied to the surface roughness prediction studies, where the sensitivity and the specificity of the classifier induced from the Bayesian network are maximized simultaneously, rather than only maximizing the correct classification rate. The intervals for these two objectives come from an honest estimation method of both objectives, such as k-fold cross-validation or the bootstrap.
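The extension of Pareto dominance to interval-valued objectives can be sketched as follows for two objectives such as sensitivity and specificity estimated by cross-validation. The dominance rule used here (a solution dominates another only when it is better even in the worst case) is one reasonable choice adopted for illustration and is not necessarily the definition developed in the thesis; the classifier intervals are invented.

```python
from typing import List, Tuple

Interval = Tuple[float, float]                 # (lower, upper), to be maximized

def dominates(a: List[Interval], b: List[Interval]) -> bool:
    """a dominates b if a's lower bounds are >= b's upper bounds in every
    objective and strictly greater in at least one (worst-case dominance)."""
    at_least_as_good = all(al >= bu for (al, _), (_, bu) in zip(a, b))
    strictly_better = any(al > bu for (al, _), (_, bu) in zip(a, b))
    return at_least_as_good and strictly_better

def pareto_front(solutions: List[List[Interval]]) -> List[int]:
    """Indices of the non-dominated solutions."""
    return [i for i, s in enumerate(solutions)
            if not any(dominates(o, s) for j, o in enumerate(solutions) if j != i)]

# Three hypothetical classifiers with (sensitivity, specificity) intervals
classifiers = [
    [(0.80, 0.88), (0.70, 0.78)],
    [(0.60, 0.65), (0.90, 0.95)],
    [(0.55, 0.62), (0.60, 0.68)],   # dominated by the first classifier
]
print(pareto_front(classifiers))     # -> [0, 1]
```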