16 results for L71 - Mining, Extraction, and Refining:
Abstract:
Nanotechnology is a research area of recent development that deals with the manipulation and control of matter with dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, and reveal novel diagnostic and therapeutic properties. Nevertheless, the complexity of information at the nano level is much higher than the complexity at the conventional biological levels (from populations to the cell). Thus, any nanomedical research workflow inherently demands advanced information management. Unfortunately, Biomedical Informatics (BMI) has not yet provided the necessary framework to deal with such information challenges, nor adapted its methods and tools to the new research field.
In this context, the novel area of nanoinformatics aims to build new bridges between medicine, nanotechnology and informatics, allowing the application of computational methods to solve informational issues at the wide intersection between biomedicine and nanotechnology. The above observations determine the context of this doctoral dissertation, which is focused on analyzing the nanomedical domain in-depth, and developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, for leveraging available nanomedical data. The author analyzes, through real-life case studies, some research tasks in nanomedicine that would require or could benefit from the use of nanoinformatics methods and tools, illustrating present drawbacks and limitations of BMI approaches to deal with data belonging to the nanomedical domain. Three different scenarios, comparing both the biomedical and nanomedical contexts, are discussed as examples of activities that researchers would perform while conducting their research: i) searching over the Web for data sources and computational resources supporting their research; ii) searching the literature for experimental results and publications related to their research, and iii) searching clinical trial registries for clinical results related to their research. The development of these activities will depend on the use of informatics tools and services, such as web browsers, databases of citations and abstracts indexing the biomedical literature, and web-based clinical trial registries, respectively. For each scenario, this document provides a detailed analysis of the potential information barriers that could hamper the successful development of the different research tasks in both fields (biomedicine and nanomedicine), emphasizing the existing challenges for nanomedical research —where the major barriers have been found. The author illustrates how the application of BMI methodologies to these scenarios can be proven successful in the biomedical domain, whilst these methodologies present severe limitations when applied to the nanomedical context. To address such limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. This approach consists of an in-depth analysis of the scientific literature and available clinical trial registries to extract relevant information about experiments and results in nanomedicine —textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.—, followed by the development of mechanisms to automatically structure and analyze this information. This analysis resulted in the generation of a gold standard —a manually annotated training or reference set—, which was applied to the automatic classification of clinical trial summaries, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the necessary methods for organizing, curating and validating existing nanomedical data on a scale suitable for decision-making. 
Similar analyses of other nanomedical research tasks would help to detect which nanoinformatics resources are required to meet current goals in the field, as well as to generate densely populated and machine-interpretable reference datasets from the literature and other unstructured sources for further testing novel algorithms and inferring new valuable information for nanomedicine.
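To make the information extraction step described above concrete, the following is a minimal sketch of descriptor extraction from abstracts, assuming Python; the patterns and vocabulary here are invented examples, not the actual textual patterns derived in the thesis:

```python
import re

# Hypothetical example patterns for nanomedical experiment descriptors;
# the real vocabulary and patterns were derived from the literature analysis.
PATTERNS = {
    "nanoparticle_size": re.compile(r"(\d+(?:\.\d+)?)\s*nm\b"),
    "nano_term": re.compile(r"\bnano(?:particle|drug|device|material|carrier)s?\b", re.I),
}

def extract_descriptors(text):
    """Return all pattern matches found in an abstract."""
    return {name: pat.findall(text) for name, pat in PATTERNS.items()}

abstract = ("Liposomal nanoparticles of 80 nm were evaluated as a nanocarrier "
            "for targeted drug delivery.")
print(extract_descriptors(abstract))
# {'nanoparticle_size': ['80'], 'nano_term': ['nanoparticles', 'nanocarrier']}
```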
Abstract:
Twelve commercially available edible marine algae from France, Japan and Spain and the certified reference material (CRM) NIES No. 9 Sargassum fulvellum were analyzed for total arsenic and arsenic species. Total arsenic concentrations were determined by inductively coupled plasma atomic emission spectrometry (ICP-AES) after microwave digestion and ranged from 23 to 126 μg g⁻¹. Arsenic species in alga samples were extracted with deionized water by microwave-assisted extraction and showed extraction efficiencies from 49 to 98%, in terms of total arsenic. The presence of eleven arsenic species was studied by newly developed high performance liquid chromatography-ultraviolet photo-oxidation-hydride generation atomic fluorescence spectrometry (HPLC-(UV)-HG-AFS) methods, using both anion and cation exchange chromatography. Glycerol and phosphate sugars were found in all alga samples analyzed, at concentrations between 0.11 and 22 μg g⁻¹, whereas sulfonate and sulfate sugars were only detected in three of them (0.6-7.2 μg g⁻¹). Regarding toxic arsenic species, low concentration levels of dimethylarsinic acid (DMA) (<0.9 μg g⁻¹) and generally high arsenate (As(V)) concentrations (up to 77 μg g⁻¹) were found in most of the algae studied. The results highlight the need to perform speciation analysis and to introduce appropriate legislation limiting the toxic arsenic species content of these food products.
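The extraction efficiencies quoted above are simply the water-extractable arsenic expressed as a percentage of total arsenic; a one-line illustration with made-up numbers, not measured values from the study:

```python
# Extraction efficiency (%) relative to total arsenic; numbers are illustrative.
total_as = 50.0        # total As in alga, µg/g (ICP-AES after digestion)
extracted_as = 38.5    # As recovered in the water extract, µg/g

efficiency = 100.0 * extracted_as / total_as
print(f"extraction efficiency: {efficiency:.1f}%")  # 77.0%
```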
Abstract:
In this work, an analytical method was developed for the determination of pharmaceutical drugs in biosolids. Samples were extracted with an acidic mixture of water and acetone (1:2, v/v), and supported liquid extraction was used for the clean-up of extracts, eluting with ethyl acetate:methanol (90:10, v/v). The compounds were determined by gas chromatography-tandem mass spectrometry using matrix-matched calibration after silylation to form their t-butyldimethylsilyl derivatives. This method presents various advantages, such as fairly simple operation for the analysis of complex matrices, the use of inexpensive glassware and low solvent volumes. Satisfactory mean recoveries were obtained with the developed method, ranging from 70 to 120% with relative standard deviations (RSDs) ≤ 13%, and limits of detection between 0.5 and 3.6 ng g⁻¹. The method was then successfully applied to biosolids samples collected in Madrid and Catalonia (Spain). Eleven of the sixteen target compounds were detected in the studied samples, at levels up to 1.1 μg g⁻¹ (salicylic acid). Ibuprofen, caffeine, paracetamol and fenofibrate were detected in all of the samples analyzed.
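Mean recovery and relative standard deviation are the usual validation statistics behind the figures above; a short sketch with illustrative replicate values (not data from the study):

```python
import statistics

# Illustrative spike-recovery replicates (ng/g) for one compound; not study data.
spiked = 100.0
measured = [92.0, 88.5, 95.1, 90.3, 93.7]

recoveries = [100.0 * m / spiked for m in measured]
mean_rec = statistics.mean(recoveries)
rsd = 100.0 * statistics.stdev(recoveries) / mean_rec  # relative standard deviation

print(f"mean recovery: {mean_rec:.1f}%, RSD: {rsd:.1f}%")
```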
Abstract:
A sustainable manufacturing process must rely on an equally sustainable supply of raw materials and energy. This paper is intended to show the results of the studies developed on sustainable business models for the minerals industry as a fundamental precursor of a sustainable manufacturing process. As has happened in other economic activities, the mining and minerals industry has come under tremendous pressure to improve its social, developmental, and environmental performance. Mining, refining, and the use and disposal of minerals have in some instances led to significant local environmental and social damage. Nowadays, as in other parts of the corporate world, companies are more routinely expected to perform to ever higher standards of behavior, going well beyond achieving the best rate of return for shareholders. They are also increasingly being asked to be more transparent and subject to third-party audit or review, especially in environmental aspects. In environmental terms, there are three inter-related areas where innovation and new business models can make the biggest difference: carbon, water and biodiversity. The focus is on these three areas for two reasons. First, the industrial and energy minerals industry has significant footprints in each of these areas. Second, these three areas are where the potential environmental impacts go beyond local stakeholders and communities, and can even have global impacts, as in the case of carbon. So prioritizing efforts in these areas will ultimately be a strategic differentiator as the industry's businesses continue to grow. Over the next forty years, the world's population is predicted to rise from 6,300 million to 9,500 million people. This will mean a huge demand for natural resources. Indeed, consumption rates are such that current demand for raw materials will probably soon exceed the planet's capacity. As awareness of the actual situation grows, the public is demanding goods and services that are ever more environmentally sustainable. This means that massive efforts are required to reduce the amount of materials we use, including freshwater, minerals and oil, biodiversity, and marine resources. It's clear that business as usual is no longer possible. Today, companies face not only the economic fallout of the financial crisis; they face the substantial challenge of transitioning to a low-carbon economy that is constrained by dwindling, easily accessible natural resources. Innovative business models offer pioneering companies an early start toward the future. They can signal to consumers how to make sustainable choices and provide rewards for both the consumer and the shareholder. Climate change and carbon remain major risk discontinuities that we need to better understand and deal with. In the absence of a global carbon solution, the principal objective of any individual country should be to reduce its global carbon emissions by encouraging conservation. The mineral industry's internal response is to continue to focus on reducing the energy intensity of existing operations through energy efficiency and the progressive introduction of new technology. Planning of new projects must ensure that their energy footprint is minimal from the start. These actions will increase the long-term resilience of the business to uncertain energy and carbon markets.
This focus, combined with a strong demand for skills in this strategic area for the future, requires an appropriate change in the initial and continuing training of engineers and technicians, and their awareness of the issue of eco-design. It will also require the development of measurement tools for consistent comparisons between companies, and the integration of carbon footprint assessments of mining equipment and services into comprehensive impact studies on the sustainable development of the economy.
Abstract:
The focus of this chapter is to study feature extraction and pattern classification methods from two medical areas, Stabilometry and Electroencephalography (EEG). Stabilometry is the branch of medicine responsible for examining balance in human beings. Balance and dizziness disorders are probably two of the most common illnesses that physicians have to deal with. In Stabilometry, the key nuggets of information in a time series signal are concentrated within definite time periods known as events. In this chapter, two feature extraction schemes have been developed to identify and characterise the events in Stabilometry and EEG signals. Based on these extracted features, an Adaptive Fuzzy Inference Neural Network has been applied for the classification of Stabilometry and EEG signals.
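A minimal sketch of windowed feature extraction from a time series, in the spirit of the event-oriented schemes described; the window length, features and detection criterion below are illustrative, not those developed in the chapter:

```python
import numpy as np

def window_features(signal, fs, win_s=1.0):
    """Split a 1-D signal into consecutive windows; compute simple per-window features."""
    n = int(win_s * fs)
    windows = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
    return np.array([[w.mean(), w.std(), np.ptp(w)] for w in windows])

rng = np.random.default_rng(0)
sig = rng.normal(0.0, 1.0, 5000)
sig[2000:2200] += 5.0                    # inject a crude "event"
feats = window_features(sig, fs=100)     # columns: mean, std, peak-to-peak
events = np.where(feats[:, 0] > 3.0)[0]  # windows whose mean shifts markedly
print(events)                            # [20 21]
```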
Abstract:
This paper presents the Miracle team's approach to the 2005 Ad-Hoc Information Retrieval tasks. The goal for the experiments this year was twofold: to continue testing the effect of combination approaches on information retrieval tasks, and to improve our basic processing and indexing tools, adapting them to new languages with strange encoding schemes. The starting point was a set of basic components: stemming, transforming, filtering, proper noun extraction, paragraph extraction, and pseudo-relevance feedback. Some of these basic components were used in different combinations and orders of application for document indexing and for query processing. Second-order combinations were also tested, by averaging or selectively combining the documents retrieved by different approaches for a particular query. In the multilingual track, we concentrated our work on the process of merging the results of monolingual runs to obtain the overall multilingual result, relying on available translations. In both cross-lingual tracks, we used the available translation resources, and in some cases a combination approach.
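Second-order combination by score averaging can be illustrated with a small CombSUM-style fusion sketch; this is a simplification, not the exact Miracle merging algorithm, and the run names and scores are invented:

```python
from collections import defaultdict

def combine_runs(runs, weights=None):
    """Fuse several {doc_id: score} rankings by weighted score summation (CombSUM)."""
    weights = weights or [1.0] * len(runs)
    fused = defaultdict(float)
    for run, w in zip(runs, weights):
        # Normalize scores to [0, 1] so runs are comparable before summing.
        lo, hi = min(run.values()), max(run.values())
        for doc, s in run.items():
            fused[doc] += w * (s - lo) / (hi - lo or 1.0)
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

stem_run = {"d1": 12.0, "d2": 7.5, "d3": 3.1}   # e.g. a stemming-based run
prf_run = {"d2": 0.9, "d4": 0.7, "d1": 0.2}     # e.g. a pseudo-relevance-feedback run
print(combine_runs([stem_run, prf_run]))         # d2 ranks first
```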
Abstract:
The geochemical composition of overbank sediments from selected river basins is studied in this thesis in order to contribute to a better knowledge of the environmental conditions surrounding them. In each basin a vertical overbank profile has been sampled, dividing it into stretches that usually correspond to different flood events. Overbank sediments are those deposited during a flood event once the flow spills over the channel banks. They are usually characterized by a very fine grain size and a structure of horizontal layers corresponding to successive flood events. These sediments show two main advantages over other sampling media in geochemistry, such as soils or stream sediments: • They can store sediment deposited in the past as well as in current times, so that the geochemical history of a specific location can be studied at the very same sampling point (vertical profile). • Overbank sediments are able to characterize a large drainage area. The origin of the sediment is more diverse than in stream sediments, due to the larger areas the flood water comes from. The basins have been selected according to the anthropogenic activities developed in them, namely urban and industrial activities, mining and agriculture. In addition, two pristine basins have been studied as a reference. The alluvial sediments in each basin have been carefully studied in order to sample a vertical profile and make sure that lateral accretion deposits are not present at the selected point. The samples have been analysed by ICP-MS (total digestion) and INAA to determine the total contents of trace and major elements. Analysis of the mobile fraction has been carried out by ICP-MS (aqua regia digestion), and some selected samples have been subjected to sequential extraction for a more detailed study. The presence of organic matter has been estimated through the analysis of Total Organic Carbon (TOC). Finally, a lead isotope analysis of selected samples in the profiles was carried out in order to make an environmental assessment. Metal contents grow towards the surface in some of the profiles, while others show a very steady distribution, except for specific levels with an increase in most of the metals. It has been possible to determine the influence of anthropogenic activities in some of the profiles. Those belonging to mining, urban or industrialized basins generally show high contents of metal elements. This is the case of the profiles sampled in the Odiel and Tinto Rivers, the Besaya River, the Besós River and the Manzanares River.
Some of these profiles can even be correlated with the periods of time when more intense activity took place in their respective basins. The profiles which best correlate with the anthropogenic activity are the Rivas profile in the Manzanares River, which reflects the growth of pollution produced by urban and industrial activities in the city of Madrid over the last decades, and the Tinto profile, which shows a dramatic growth of the elemental contents (mostly metals) that can be related to the increase in mining activities that took place in the last 125 years. The analysis of lead isotopes has turned out to be a powerful tool for the environmental assessment of this type of sediment. With this study, and through comparison with natural and anthropogenic sources, it has been possible to distinguish samples affected by different sources of lead and to detect the most anthropogenically affected ones.
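Environmental assessments based on lead isotopes often reduce to a two-endmember mixing calculation; a sketch under assumed, illustrative ²⁰⁶Pb/²⁰⁷Pb endmember ratios, not values from this thesis:

```python
# Fraction of anthropogenic Pb from a two-endmember 206Pb/207Pb mixing model.
# Endmember ratios below are illustrative placeholders, not measured values.
r_natural = 1.20        # assumed natural background 206Pb/207Pb
r_anthropogenic = 1.10  # assumed anthropogenic source 206Pb/207Pb
r_sample = 1.14         # measured ratio in a sediment layer

f_anthro = (r_natural - r_sample) / (r_natural - r_anthropogenic)
print(f"anthropogenic Pb fraction: {100 * f_anthro:.0f}%")  # 60%
```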
Abstract:
Due to recent scientific and technological advances in information systems, it is now possible to perform almost every application on a mobile device. The need to make such devices more intelligent opens an opportunity to design data mining algorithms that are able to execute autonomously on local devices to provide the device with knowledge. The problem behind autonomous mining is the proper configuration of the algorithm to produce the most appropriate results. Contextual information, together with resource information about the device, has a strong impact both on the feasibility of a particular execution and on the production of the proper patterns. On the other hand, the performance of the algorithm, expressed in terms of efficacy and efficiency, highly depends on the features of the dataset to be analyzed together with the parameter values of a particular implementation of an algorithm. However, few existing approaches deal with the autonomous configuration of data mining algorithms, and in any case they do not deal with contextual or resource information. Both issues are of particular significance for social network applications. In fact, the widespread use of social networks, and consequently the amount of information shared, has made modeling context in social applications a priority. Resource consumption also plays a crucial role in such platforms, as users access social networks mainly on their mobile devices. This PhD thesis addresses the aforementioned open issues, focusing on: i) analyzing the behavior of algorithms; ii) mapping contextual and resource information to find the most appropriate configuration; and iii) applying the model to the case of a social recommender. Four main contributions are presented:
- The EE-Model: predicts the behavior of a data mining algorithm in terms of the resources consumed and the accuracy of the mining model it will obtain.
- The SC-Mapper: maps a situation defined by the context and resource state to a data mining configuration.
- SOMAR: a social activity (events and informal ongoings) recommender for mobile devices.
- D-SOMAR: an evolution of SOMAR which incorporates the configurator in order to provide updated recommendations.
Finally, the experimental validation of the proposed contributions using synthetic and real datasets allows us to achieve the objectives and answer the research questions proposed for this dissertation.
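Reduced to its simplest possible form, the SC-Mapper idea maps a situation to an algorithm configuration; the sketch below uses invented context features and hand-written rules, whereas the thesis derives the mapping from the EE-Model rather than a lookup table:

```python
from dataclasses import dataclass

@dataclass
class Situation:
    battery_pct: float   # remaining battery
    on_wifi: bool
    dataset_rows: int

def choose_config(s: Situation) -> dict:
    """Map a device/resource situation to a mining configuration (illustrative rules)."""
    if s.battery_pct < 20:
        return {"algorithm": "kmeans", "k": 5, "max_iter": 10}    # cheap, coarse
    if s.dataset_rows > 100_000 and not s.on_wifi:
        return {"algorithm": "minibatch_kmeans", "k": 10, "max_iter": 50}
    return {"algorithm": "kmeans", "k": 10, "max_iter": 300}      # full quality

print(choose_config(Situation(battery_pct=15, on_wifi=False, dataset_rows=50_000)))
```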
Abstract:
This paper introduces a semantic language developed to be used in a semantic analyzer based on linguistic and world knowledge. Linguistic knowledge is provided by a Combinatorial Dictionary and several sets of rules. Extra-linguistic information is stored in an Ontology. The meaning of the text is represented by means of a series of RDF-type triples of the form predicate (subject, object). The semantic analyzer is one of the options of the multifunctional ETAP-3 linguistic processor, and can be used for Information Extraction and Question Answering. We describe the semantic representation of expressions that provide an assessment of the number of objects involved and/or give a quantitative evaluation of different types of attributes. We focus on the following aspects: 1) parametric and non-parametric attributes; 2) gradable and non-gradable attributes; 3) ontological representation of different classes of attributes; 4) absolute and relative quantitative assessment; 5) punctual and interval quantitative assessment; and 6) intervals with precise and fuzzy boundaries.
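A toy illustration of the predicate(subject, object) triple representation for a quantitative assessment with a fuzzy boundary; the predicate and node names are invented, the actual inventory being fixed by the ETAP-3 semantic language and its Ontology:

```python
# Toy triples for "about 20 students came"; predicate and node names are
# invented for illustration, not taken from the ETAP-3 inventory.
triples = [
    ("instance_of", "event1",    "come"),
    ("agent",       "event1",    "students1"),
    ("cardinality", "students1", "quant1"),
    ("value",       "quant1",    "20"),
    ("precision",   "quant1",    "approximate"),  # fuzzy vs. punctual assessment
]

for predicate, subject, obj in triples:
    print(f"{predicate}({subject}, {obj})")
```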
Abstract:
This thesis proposes a hand biometric system oriented to unconstrained and contactless scenarios, together with a stress detection method able to elucidate to what extent an individual is under stress based on physiological signals. Concerning the biometric system, this thesis contributes the design and implementation of a hand-based biometric system, where the acquisition is carried out without contact and the template is created requiring information from a single individual only. In addition, this thesis proposes an algorithm based on multiscale aggregation in order to tackle the problem of segmentation in real unconstrained environments. Furthermore, feature extraction and matching are also specific contributions of this thesis, providing adequate schemes to carry out both actions with low computational cost but high recognition accuracy. Finally, this system is evaluated according to the international standard ISO/IEC 19795, considering six public databases. In relation to the stress detection method, this thesis proposes a system based on two physiological signals, namely heart rate and galvanic skin response, with the creation of an innovative stress detection template which gathers the behaviour of both physiological signals under stressing and non-stressing situations. Besides, this system is based on fuzzy logic to elucidate the level of stress of an individual.
Overall, this system is able to detect stress accurately and in real time, providing an adequate solution for current biometric systems, where a stress detection system can be applied directly to avoid situations in which individuals are forced to provide their biometric data. Finally, this thesis includes a user acceptability study, in which the acceptance of the proposed biometric technique is assessed by a total of 250 individuals. In addition, this thesis includes a prototype implemented on a mobile device and its evaluation.
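The fuzzy-logic decision can be pictured with a toy rule combining the two signals; the membership functions and thresholds below are invented for illustration and are not the stress template of the thesis:

```python
def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def stress_level(hr, gsr):
    """Toy fuzzy rule: stress is high when both HR and skin conductance are high."""
    hr_high = tri(hr, 70, 110, 150)       # beats per minute
    gsr_high = tri(gsr, 2.0, 10.0, 18.0)  # microsiemens
    return min(hr_high, gsr_high)         # AND as minimum (Mamdani-style)

print(f"stress degree: {stress_level(hr=95, gsr=8.0):.2f}")
```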
Abstract:
Sterile coal is a low-value residue associated with coal extraction and mining activity. Depending on the type and origin of the coal bed configuration, sterile coal production varies mainly in quantity, calorific value and presence of sulphur compounds. In addition, the potential availability of sterile coal within Spain is apparently high, and its contribution to local power generation would be of interest and could play a significant role. The proposed study evaluates the availability and deployment of gasification technologies to drive clean electricity generation from waste coal and sterile rock coal, incorporating greenhouse gas emission mitigation systems, such as CO2, H2S and NOx removal systems. It establishes the target facility and its conceptual basic design proposal. The syngas obtained after the gasification of sterile coal is processed through specific conditioning units before entering the combustion chamber of a gas turbine. Flue gas leaving the gas turbine is ducted to a heat recovery steam generation boiler; the steam produced within the boiler drives a steam turbine. The target facility resembles a singular Integrated Gasification Combined Cycle (IGCC) power station. The evaluation of the conceptual basic design, according to the power output set for a maximum sterile contribution, established that removal rates over 95% for H2S and 90% for CO2 can be achieved. A noticeable decrease in NOx compounds can also be achieved by the use of commercial technology. A techno-economic analysis of the conceptual basic design is made, evaluating the integration of potential units and their implementation within the target facility, aiming to achieve clean power generation. Compliance with the most restrictive regulations regarding environmental emissions is set as the criterion for this analysis.
Abstract:
Cover crop characterization may allow comparing the suitability of different species to provide ecological services such as erosion control, nutrient recycling or fodder production. Different techniques to characterize the plant canopy were studied under field conditions in order to establish a methodology for measuring and comparing cover crop canopies.
A field trial was established in Madrid (central Spain) to determine the relationship between leaf area index (LAI) and ground cover (GC) in a grass, a legume and a crucifer crop. Twelve plots were sown with either barley (Hordeum vulgare L.), vetch (Vicia sativa L.), or rape (Brassica napus L.). On 10 sampling dates the LAI (both direct and LAI-2000 estimations), the fraction of intercepted photosynthetically active radiation (FIPAR) and the GC were measured. A two-year field experiment (October-April) was established in the same location to evaluate different species (Hordeum vulgare L., Secale cereale L., x Triticosecale Whim, Sinapis alba L., Vicia sativa L.) and cultivars (20) according to their suitability to be used as cover crops. GC was monitored through digital image analysis with 21 and 22 samplings, and biomass was measured 8 and 10 times, respectively, for each season. A Gompertz model characterized ground cover until the decay observed after frosts, while biomass was fitted to Gompertz, logistic and linear-exponential equations. At the end of the experiment the C, N and fiber (neutral detergent, acid detergent and lignin) contents, and the N fixed by the legumes, were determined. Multicriteria decision analysis (MCDA) was applied in order to rank the species and cultivars according to their suitability to perform as cover crops in four different modalities: cover crop, catch crop, green manure and fodder. Intercropping legumes and non-legumes may affect the root growth and N uptake of both components in the mixture. Knowledge of how specific root systems affect the growth of the individual species is useful for understanding the interactions in intercrops as well as for planning cover cropping strategies. In a third trial, rhizotron studies were combined with root extraction and species identification by microscopy, and with studies of growth, N uptake and 15N uptake from deeper soil layers. Root interactions in growth and N foraging were studied for two of the best-ranked cultivars in the previous study: a barley (Hordeum vulgare L. cv. Hispanic) and a vetch (Vicia sativa L. cv. Aitana). N was added at 0 (N0), 50 (N1) and 150 (N2) kg N ha-1. As a result, linear and quadratic models were fitted to the relationship between the GC and the LAI for all of the crops, but they reached a plateau in the grass when the LAI > 4. Before reaching full cover, the slope of the linear relationship between both variables was within the range of 0.025 to 0.030. The LAI-2000 readings were linearly correlated with the LAI but tended to overestimate it. Corrections based on the clumping effect reduced the root mean square error of the LAI estimated from the LAI-2000 readings from 1.2 to less than 0.50 for the crucifer and the legume, but were not effective for barley. This determined that in the following studies only the GC and biomass were measured.
Quantification of these variables revealed variability among the species and provided information for further decisions involving cover crop selection and management. Aggregation of these variables through utility functions allowed ranking species and cultivars for each usage. Grasses were the most suitable for the cover crop, catch crop and fodder uses, while the vetches were the best as green manures. The mustard attained high ranks as cover and catch crop in the first season, but decayed in the second due to its low performance in cold winters. Hispanic was the most suitable barley cultivar as cover and catch crop, and Albacete as fodder. The triticale Titania attained the highest rank as cover crop, catch crop and fodder. The vetches Aitana and BGE014897 showed good aptitudes as green manures and catch crops. MCDA allowed comparison among species and cultivars and might provide relevant information for cover crop selection and management. In the rhizotron study, the intercrop and the barley attained slightly higher root intensity (RI) and root depth (RD) than the vetch, with values around 150 crosses m-1 and 1.4 m respectively, compared to 50 crosses m-1 and 0.9 m for the vetch. At deep soil layers, intercropping showed slightly larger RI values compared to the sole-cropped barley. The barley and the intercrop had larger root length density (RLD) values (200-600 m m-3) than the vetch (25-130) at 0.8-1.2 m depth. The topsoil N supply did not show a clear effect on the RI, RD or RLD; however, increasing topsoil N favored the proliferation of vetch roots in the intercropping at deep soil layers, with the barley/vetch root ratio ranging from 25 at N0 to 5 at N2. The N uptake of the barley was enhanced in the intercropping at the expense of the vetch (from ~100 to 200 mg plant-1). The intercropped barley roots also took up more labeled nitrogen (0.6 mg 15N plant-1) than the sole-cropped barley roots (0.3 mg 15N plant-1) from deep layers.
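The Gompertz ground-cover model mentioned above can be fitted with standard curve-fitting tools; a minimal sketch on synthetic data, not measurements from the experiment:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    """Gompertz growth: a = maximum ground cover (%); b, c = shape and rate."""
    return a * np.exp(-b * np.exp(-c * t))

# Synthetic thermal time (degree-days) vs. ground cover (%); not experiment data.
t = np.array([100.0, 300.0, 500.0, 700.0, 900.0, 1100.0, 1300.0])
gc = np.array([2.0, 10.0, 35.0, 62.0, 80.0, 88.0, 90.0])

(a, b, c), _ = curve_fit(gompertz, t, gc, p0=[90.0, 5.0, 0.005])
t30 = np.log(b / np.log(a / 30.0)) / c  # thermal time to reach 30% ground cover
print(f"max cover = {a:.0f}%, 30% cover at ~{t30:.0f} degree-days")
```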
Abstract:
The plant cuticle has traditionally been conceived as an independent hydrophobic layer that covers the external epidermal cell wall. Due to its complexity, the relationship between cuticle chemical composition and ultrastructure remains unclear to date. This study aimed to examine the link between the chemical composition and structure of isolated, adaxial leaf cuticles of Eucalyptus camaldulensis and E. globulus by the gradual extraction and identification of lipid constituents (cutin and soluble lipids), coupled with spectroscopic and microscopic analyses. The soluble compounds and cutin monomers identified could not be assigned to a specific internal cuticle ultrastructure. After cutin depolymerization, a cellulose network resembling the cell wall was observed, with different structural patterns in the regions ascribed to the cuticle proper and the cuticular layer, respectively. Our results suggest that the current cuticle model should be revised, stressing the presence and major role of cell wall polysaccharides. It is concluded that the cuticle may be interpreted as a modified cell wall region which contains additional lipids. The great heterogeneity of the plant cuticle makes it difficult to establish a direct link between cuticle chemistry and structure with the existing methodologies.
Abstract:
BACKGROUND: Clinical Trials (CTs) are essential for bridging the gap between experimental research on new drugs and their clinical application. Just like CTs for traditional drugs and biologics have helped accelerate the translation of biomedical findings into medical practice, CTs for nanodrugs and nanodevices could advance novel nanomaterials as agents for diagnosis and therapy. Although there is publicly available information about nanomedicine-related CTs, the online archiving of this information is carried out without adhering to criteria that discriminate between studies involving nanomaterials or nanotechnology-based processes (nano) and CTs that do not involve nanotechnology (non-nano). Finding out from CT summaries alone whether nanodrugs and nanodevices were involved in a study is a challenging task. At the time of writing, CTs archived in the well-known online registry ClinicalTrials.gov cannot easily be told apart as nano or non-nano CTs, even by domain experts, due to the lack both of a common definition of nanotechnology and of standards for reporting nanomedical experiments and results. METHODS: We propose a supervised learning approach for classifying CT summaries from ClinicalTrials.gov according to whether they fall into the nano or the non-nano category. Our method involves several stages: i) extraction and manual annotation of CTs as nano vs. non-nano; ii) pre-processing and automatic classification; and iii) performance evaluation using several state-of-the-art classifiers under different transformations of the original dataset. RESULTS AND CONCLUSIONS: The performance of the best automated classifier closely matches that of experts (AUC over 0.95), suggesting that it is feasible to automatically detect the presence of nanotechnology products in CT summaries with a high degree of accuracy. This can significantly speed up the process of finding whether reports on ClinicalTrials.gov might be relevant to a particular nanoparticle or nanodevice, which is essential for discovering any precedents for nanotoxicity events or advantages for targeted drug therapy.
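A minimal sketch of the kind of supervised pipeline described in METHODS, using scikit-learn with invented toy summaries; the study itself compared several state-of-the-art classifiers under different dataset transformations:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy stand-ins for manually annotated CT summaries (1 = nano, 0 = non-nano).
summaries = [
    "liposomal nanoparticle formulation of doxorubicin for solid tumors",
    "randomized trial of oral metformin in type 2 diabetes",
    "albumin-bound paclitaxel nanoparticles in metastatic breast cancer",
    "beta blocker dosage study for hypertension management",
] * 10  # repeated only so cross-validation has enough samples
labels = [1, 0, 1, 0] * 10

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
auc = cross_val_score(clf, summaries, labels, cv=5, scoring="roc_auc").mean()
print(f"mean AUC: {auc:.2f}")
```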
Abstract:
Cognitive linguistics is considered one of the most appropriate approaches to the study of scientific and technical language formation and development, where metaphor is accepted to play an essential role. This paper, based on the Cognitive Theory of Metaphor (CTM), takes as its starting point the terminological metaphors established in the research project METACITEC (Note 1), which was developed with the purpose of unfolding constitutive metaphors and their function in the language of science and technology. After the analysis of metaphorical terms, using a mixed corpus from the fields of Agriculture, Geology, Mining, Metallurgy and other related technical fields, this study presents a proposal for a hierarchy of the selected metaphors underlying the scientific conceptual system, based on the semantic distance found in the projection from the source domain to the target domain. We argue that this semantic distance can be considered an important parameter for establishing the metaphoricity of metaphorical terms in science and technology. The findings expand on the CTM stance that metaphor is a matter of cognition by reviewing the abstract-concrete conceptual relationship between the target and source domains, and help determine the role of human creativity and imagination in the configuration of the language of science and technology.