981 results for mining-related settlements
Abstract:
This report analyses the coastal and human settlements, tourism and transport sectors in Barbados to assess the potential economic impact of climate change on these sectors, with the fundamental aim of assisting the development of strategies to deal with that impact. Key anticipated manifestations of climate change for the Caribbean include elevated air and sea-surface temperatures, sea-level rise (SLR), possible changes in extreme events and a reduction in freshwater resources. The economic impact of climate change on the three sectors was estimated for the A2 and B2 IPCC scenarios until 2050 (tourism and transport sectors) and 2100 (coastal and human settlements sector). Various adaptation strategies were also explored for each sector using standard evaluation techniques. The analysis has shown that, based upon exposed assets and population, SLR has the potential to create a catastrophe in Barbados; the main contributing factor is the concentration of socioeconomic infrastructure along the coastline in vulnerable areas. The A2 and B2 projections indicate that the number of catastrophes that can be classified as great is likely to increase for the country, based upon the projected unplanned impacts on the economy in terms of both loss of life and economic infrastructure. These projections also indicate that the growth in the number of events and in losses is largely due to socioeconomic changes over the projection period, hence the need for increased adaptation strategies. A key recommended adaptation measure is for the government of Barbados to begin reducing the infrastructure deficit by continuously investing in protective infrastructure to decrease the country's vulnerability to changes in the climate.
With regard to the tourism sector, it was found that, combining the impacts of a reduction in tourist arrivals, coral reef loss and SLR, the estimated total economic impact of climate change is US$7,648 million (A2 scenario) and US$5,127 million (B2 scenario). An economic analysis of the benefits and costs of several adaptation options was undertaken to determine the cost-effectiveness of each, and it was found that four out of nine options had high cost-benefit ratios. It is therefore recommended that the strategies most attractive in terms of their cost-benefit ratios be pursued first: (1) enhanced reef monitoring systems to provide early warning alerts of bleaching events; (2) artificial reefs or fish-aggregating devices; (3) development of national adaptation plans (levee, sea wall and boardwalk); (4) revision of policies related to financing carbon-neutral tourism; and (5) increasing recommended design wind speeds for new tourism-related structures. The total cost of climate change on international transportation in Barbados aggregates the impacts of changes in temperature and precipitation, new climate policies and SLR. The impact for air transportation ranges from US$10,727 million (B2 scenario) to US$12,279 million (A2 scenario), and estimates for maritime transportation range from US$1,992 million (B2 scenario) to US$2,606 million (A2 scenario). For international transportation as a whole, the impact of climate change varies from US$12,719 million under the B2 scenario to US$14,885 million under the A2 scenario. Barbados has the institutions in place to implement adaptive strategies that strengthen the resilience of the existing international transportation system to climate change impacts: air and sea terminals and facilities can be made more robust, raised, or even relocated as needed, and, where critical to safety and mobility, expanded redundant systems may be considered.
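The ranking of adaptation options above rests on discounted benefit-cost arithmetic. A minimal sketch of that calculation follows; the cash flows and discount rate are entirely hypothetical, not the report's figures:

```python
def npv(flows, rate):
    """Net present value of yearly cash flows at a given discount rate."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def benefit_cost_ratio(benefits, costs, rate):
    """Discounted benefits over discounted costs; a ratio above 1 favours the option."""
    return npv(benefits, rate) / npv(costs, rate)

# Hypothetical adaptation option: pay 10 upfront plus 1 per year in upkeep,
# avoiding 4 per year of expected damages over five years (arbitrary units).
costs = [10, 1, 1, 1, 1, 1]
benefits = [0, 4, 4, 4, 4, 4]
print(round(benefit_cost_ratio(benefits, costs, 0.05), 2))
```

An option whose ratio exceeds 1 returns more in discounted avoided damages than it costs, which is the sense in which some options are "attractive" in a cost-benefit screening.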
Abstract:
Mercury released into the water system by mining operations and by lixiviation of soils after deforestation is considered the main contributor to contamination of the ecosystem in the Amazon Basin. The objectives of the present study were to examine cytogenetic functions in peripheral lymphocytes within a population living on the banks of the Tapajós River with respect to methylmercury (MeHg) contamination, using hair mercury as a biological indicator of exposure. Our investigation shows a clear relation between methylmercury contamination and cytogenetic damage in lymphocytes at levels well below 50 micrograms/gram, the level at which initial clinical signs and symptoms of mercury poisoning occur. The first apparent biological effect with increasing MeHg hair level was the impairment of lymphocyte proliferation, measured as the mitotic index (MI). The relation between mercury concentration in hair and MI suggests that this parameter, an indicator of changes in lymphocytes and their ability to respond to culture conditions, may be an early marker of cytotoxicity and genotoxicity in humans and should be taken into account in the preliminary evaluation of the risks to populations exposed in vivo. This is the first report showing clear cytotoxic effects of long-term exposure to MeHg. Although the results strongly suggest that, under the conditions examined here, MeHg is both a spindle poison and a clastogen, the biological significance of these observations is as yet unknown. A long-term follow-up of these subjects should be undertaken.
Abstract:
Background: Mycelium-to-yeast transition in the human host is essential for pathogenicity by the fungus Paracoccidioides brasiliensis, and both cell types are therefore critical to the establishment of paracoccidioidomycosis (PCM), a systemic mycosis endemic to Latin America. The infected population comprises about 10 million individuals, 2% of whom will eventually develop the disease. Previously, transcriptome analysis of mycelium and yeast cells resulted in the assembly of 6,022 sequence groups. Gene expression analysis, using both in silico EST subtraction and cDNA microarray, revealed genes differentially expressed in yeast or mycelium, and we discussed those involved in sugar metabolism. To advance our understanding of the molecular mechanisms of the dimorphic transition, we performed an extended analysis of gene expression profiles using the methods mentioned above. Results: In this work, continued data mining revealed 66 new differentially expressed sequences, which were categorised according to MIPS (Munich Information Center for Protein Sequences) cellular-process classes. Two well-represented classes were chosen for further analysis: (i) control of cell organisation (cell wall, membrane and cytoskeleton), represented by hex (encoding a hexagonal peroxisome protein) and bgl (encoding a 1,3-β-glucosidase) in mycelium cells, and by ags (an α-1,3-glucan synthase), cda (a chitin deacetylase) and vrp (a verprolin) in yeast cells; (ii) ion metabolism and transport, in which two genes putatively implicated in ion transport, isc and ktp, respectively an iron-sulphur cluster-like protein and a cation transporter, were confirmed to be highly expressed in mycelium cells, together with a putative P-type cation pump (pct) in yeast. In addition, several enzymes of the de novo cysteine biosynthesis pathway were shown to be up-regulated in the yeast form, including ATP sulphurylase, APS kinase and PAPS reductase.
Conclusion: Taken together, these data show that several genes involved in cell organisation and ion metabolism/transport are differentially expressed along the dimorphic transition. Hyperexpression in yeast of the enzymes of sulphur metabolism reinforces the idea that this metabolic pathway could be important for the process. Understanding these changes through functional analysis of such genes may lead to a better understanding of the infective process, thus providing new targets and strategies to control PCM.
Abstract:
Advances in biomedical signal acquisition systems for motion analysis have led to low-cost, ubiquitous wearable sensors that can record movement data in different settings, implying the potential availability of large amounts of quantitative data. It is then crucial to identify and extract the information of clinical relevance from this mass of available data; such quantitative, objective information can be an important aid for clinical decision making. Data mining is the process of discovering such information in databases through data processing, selection of informative data, and identification of relevant patterns. The databases considered in this thesis store motion data from wearable sensors (specifically accelerometers) and clinical information (clinical data, scores, tests). The main goal of this thesis is to develop data mining tools that can provide quantitative information to the clinician in the field of movement disorders, focusing on motor impairment in Parkinson's disease (PD). Different databases related to Parkinson subjects at different stages of the disease were considered, each characterised by the data recorded during a specific motor task performed by different groups of subjects. The data mining techniques used in this thesis are feature selection (used to find relevant information and discard useless or redundant data), classification, clustering, and regression. The aims were to identify subjects at high risk of PD, characterise the differences between early PD subjects and healthy ones, characterise PD subtypes, and automatically assess the severity of symptoms in the home setting.
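As an illustration of filter-style feature selection of the kind mentioned above, the following sketch ranks two features by a Fisher-type score (class-mean separation over within-class variance). The feature names and values are invented stand-ins, not data from the thesis databases:

```python
def fisher_score(xs, ys):
    """Score one feature: squared difference of class means over summed class variances."""
    a = [x for x, y in zip(xs, ys) if y == 0]
    b = [x for x, y in zip(xs, ys) if y == 1]
    def mean(v):
        return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    denom = var(a) + var(b) or 1e-12  # guard against zero variance
    return (mean(a) - mean(b)) ** 2 / denom

# Hypothetical accelerometer-derived features for healthy (0) vs PD (1) subjects.
features = {
    "step_regularity": ([0.9, 0.8, 0.85, 0.4, 0.35, 0.3], [0, 0, 0, 1, 1, 1]),
    "mean_amplitude":  ([1.0, 1.1, 0.9, 1.0, 1.05, 0.95], [0, 0, 0, 1, 1, 1]),
}
ranked = sorted(features, key=lambda f: fisher_score(*features[f]), reverse=True)
print(ranked)
```

A feature whose class means are well separated relative to its spread ranks first and would be kept; near-zero scores flag candidates for discarding as uninformative.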
Abstract:
This thesis analyses problems related to the applicability, in business environments, of Process Mining tools and techniques. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying circumstances where problems can emerge: data preparation, the actual mining, and interpretation of results; further problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigated the data preparation problem and proposed a solution for the identification of the "case-ids" whenever this field is not explicitly indicated. After that, we concentrated on problems at mining time and proposed a generalization of a well-known control-flow discovery algorithm that exploits non-instantaneous events; the use of interval-based recording leads to an important improvement in performance. Later on, we report our work on parameter configuration for non-expert users, presenting two approaches to select the "best" parameter configuration: one is completely autonomous, while the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for extending a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches.
Two actual mining algorithms are then proposed: the first adapts a frequency counting algorithm to the control-flow discovery problem; the second constitutes a framework of models that can be used for different kinds of streams (stationary versus evolving).
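The abstract does not name the frequency counting algorithm. A commonly used choice in streaming settings is Lossy Counting (Manku and Motwani), sketched below over directly-follows-style items; treating it as the algorithm in question is an assumption for illustration, not a claim about the thesis:

```python
from math import ceil

class LossyCounter:
    """Lossy Counting: approximate item frequencies over a stream using
    bounded memory, with undercount error at most eps * N."""
    def __init__(self, eps):
        self.eps = eps
        self.width = ceil(1 / eps)   # bucket width
        self.n = 0                   # items seen so far
        self.counts = {}             # item -> (count, max possible undercount)

    def add(self, item):
        self.n += 1
        bucket = ceil(self.n / self.width)
        count, delta = self.counts.get(item, (0, bucket - 1))
        self.counts[item] = (count + 1, delta)
        if self.n % self.width == 0:  # prune at bucket boundaries to bound memory
            self.counts = {k: (c, d) for k, (c, d) in self.counts.items()
                           if c + d > bucket}

    def frequent(self, support):
        """Items whose true frequency may reach support * N."""
        return {k for k, (c, d) in self.counts.items()
                if c >= (support - self.eps) * self.n}

# In control-flow discovery one would count directly-follows pairs extracted
# from an event stream; here each string stands for one such pair.
stream = ["ab", "ac", "ab", "ab", "ad", "ab"]
lc = LossyCounter(eps=0.1)
for pair in stream:
    lc.add(pair)
print(lc.frequent(0.5))
```

The counter keeps memory proportional to 1/eps rather than to the number of distinct pairs, which is what makes this family of algorithms attractive for on-line discovery over unbounded streams.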
Development of glass-ceramics from combination of industrial wastes together with boron mining waste
Abstract:
The utilization of borate mineral wastes in glass-ceramic technology was studied for the first time, and previously uninvestigated combinations of wastes were incorporated into the research. These wastes consist of soda-lime silica glass, meat and bone meal ash, and fly ash. In order to investigate possible and relevant application areas in ceramics, kaolin clay, an essential raw material for the ceramic industry, was also employed in some of the studied compositions. As a result, three different glass-ceramic articles were obtained using the powder sintering method via individual sintering processes. A lightweight, microporous glass-ceramic was developed from borate mining waste, meat and bone meal ash and kaolin clay; in some of these compositions, soda-lime silica glass waste was used as an additive, providing a lightweight structure with a density below 0.45 g/cm3 and a crushing strength of 1.8 ± 0.1 MPa. In another study within the research, compositions in the B2O3–P2O5–SiO2 glass-ceramic ternary system were prepared from borate wastes, meat and bone meal ash and soda-lime silica glass waste, and sintered at up to 950 °C. Low-porosity, highly crystallized glass-ceramic structures were achieved, with densities ranging from 1.8 ± 0.7 to 2.0 ± 0.3 g/cm3 and tensile strengths ranging from 8.0 ± 2 to 15.0 ± 0.5 MPa. Lastly, diopside-wollastonite (SiO2–Al2O3–CaO) glass-ceramics were successfully obtained from borate wastes, fly ash and soda-lime silica glass waste with controlled rapid sintering between 950 and 1050 °C; the wollastonite and diopside crystal sizes were improved by adopting varied combinations of formulations and heating rates. The properties of the obtained materials show that articles with a uniform pore structure could be useful for thermal and acoustic insulation and can be embedded in lightweight concrete, whereas the low-porosity glass-ceramics can be employed as building blocks or as additives in the cement and ceramic industries.
Abstract:
Public participation is an important component of Michigan's Part 632 Nonferrous Mining law, and researchers identify it as central to decision-making processes. The Kennecott Eagle Project, located near Marquette, Michigan, is the first mine permitted under Michigan's new mining regulation, and this research examines how public participation is structured in regulations, how the permitting process occurred for the Eagle Project, and how participants in that process perceived their participation. To understand these issues, this research involved a review of the existing mining policy and public participation literature, an examination of documents related to the Kennecott Eagle Project, and semi-structured ethnographic interviews with participants in the decision-making process. Interviewees identified issues with the structure of participation, the technical nature of the permitting process, concerns about the Michigan Department of Environmental Quality's (DEQ) handling of mine permitting, and trust among participants. This research found that the permitting of the Kennecott Eagle Mine progressed as structured by regulation and collected technical input on the mine permit application, but did not meet the expectations of some participants who opposed the project. Findings indicated that current mining regulation in Michigan is resilient to public opposition, that there is a need for more transparency from the Michigan DEQ during the permitting process, and that current participatory structures limit the opportunities for some stakeholder groups to influence decision-making.
Abstract:
The purpose of this thesis is to analyze the evolution of an early 20th century mining system in Spitsbergen as applied by the Boston-based Arctic Coal Company (ACC). This analysis addresses the following questions: Did the system evolve in a linear, technology-driven fashion, or was its progression more a product of interactions and negotiations with the natural and human landscapes present during the time of occupation? Answers are sought through review of historical records and of material residues identified during the 2008 field examination on Spitsbergen. The Arctic Coal Company's flagship mine, ACC Mine No. 1, serves as the focus for this analysis: it was the company's largest undertaking during its occupation of Longyear Valley and today exhibits a large collection of related features and artifacts. The study emphasizes the material record within an analysis of the technical, environmental and social influences that guided the course of the mining system. The intent of this thesis is a better understanding of how a particular resource extraction industry took root in the Arctic.
Abstract:
In a study of Lunar and Mars settlement concepts, an analysis was made of fundamental design assumptions in five technical areas against a model list of occupational and environmental health concerns. The technical areas included the proposed science projects to be supported; habitat and construction issues; closed-ecosystem issues; the "MMM" issues of mining, material processing and manufacturing; and the human elements of physiology, behavior and mission approach. Four major lessons were learned. First, it is possible to relate public health concerns to complex technological development in a proactive design mode, which has the potential for long-term cost savings. Second, it became very apparent that, before committing any nation or international group to spending the billions needed to start and complete a lunar settlement over the next century, a significantly different approach must be taken from those previously proposed to solve the closed-ecosystem and "MMM" problems. Third, the health concerns and technology issues to be addressed for human exploration of space are fundamentally those to be solved for human habitation of the Earth (as a closed ecosystem) in the 21st century. Finally, it is proposed that ecosystem design modeling must develop new tools based on probabilistic models, as a step up from closed-circuit models.
Abstract:
Academic and industrial research in the late 1990s brought about an exponential explosion of DNA sequence data. Automated expert systems are being created to help biologists extract patterns, trends and links from this ever-deepening ocean of information. Two such systems, aimed at retrieving and subsequently utilizing phylogenetically relevant information, were developed in this dissertation, whose major objective was to automate the often difficult and confusing phylogenetic reconstruction process. Popular phylogenetic reconstruction methods, such as distance-based methods, attempt to find an optimal tree topology (one that reflects the relationships among related sequences and their evolutionary history) by searching through the topology space. Various compromises between fast (but incomplete) and exhaustive (but computationally prohibitive) search heuristics have been suggested. An intelligent compromise algorithm that relies on a flexible "beam" search principle from the Artificial Intelligence domain, and uses pre-computed local topology reliability information to adjust the beam search space continuously, is described in the second chapter of this dissertation. However, sometimes even a (virtually) complete distance-based method is inferior to the significantly more elaborate (and computationally expensive) maximum likelihood (ML) method. In fact, depending on the nature of the sequence data in question, either method might prove superior; it is therefore difficult (even for an expert) to tell a priori which phylogenetic reconstruction method, distance-based, ML or maximum parsimony (MP), should be chosen for any particular data set. A number of factors, often hidden, influence the performance of a method. For example, it is generally understood that for a phylogenetically "difficult" data set, more sophisticated methods (e.g., ML) tend to be more effective and thus should be chosen.
However, it is the interplay of many factors that one needs to consider in order to avoid choosing an inferior method (a potentially costly mistake, both in terms of computational expense and in terms of reconstruction accuracy). Chapter III of this dissertation details a phylogenetic reconstruction expert system that selects the proper method automatically, using a classifier (a decision-tree-inducing algorithm) to map a new data set to the appropriate phylogenetic reconstruction method.
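The flexible beam search principle can be sketched generically: keep only the best few partial solutions at each depth instead of all of them. In the toy below, the expansion and scoring functions are invented stand-ins for real topology generation and reliability scoring, which the abstract does not specify:

```python
def beam_search(initial, expand, score, width):
    """Generic beam search: retain only the `width` best partial solutions per depth."""
    beam = [initial]
    while True:
        candidates = [c for s in beam for c in expand(s)]
        if not candidates:               # nothing left to expand: return the best
            return max(beam, key=score)
        beam = sorted(candidates, key=score, reverse=True)[:width]

# Toy stand-in for topology search: order four taxa, scoring orderings by a
# hypothetical per-taxon fit weighted by position.
taxa = ["A", "B", "C", "D"]
fit = {"A": 3, "B": 1, "C": 2, "D": 0}

def expand(seq):
    return [seq + [t] for t in taxa if t not in seq] if len(seq) < len(taxa) else []

def score(seq):
    return sum(fit[t] * (len(seq) - i) for i, t in enumerate(seq))

best = beam_search([], expand, score, width=2)
print(best)
```

With width 1 this degenerates to greedy search and with unbounded width to exhaustive search; adjusting the width from reliability information, as the dissertation describes, interpolates between those extremes.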
Abstract:
Access to medical literature collections such as PubMed, MedScape or Cochrane has increased notably in recent years thanks to web-based tools that provide instant access to the information. However, more sophisticated methodologies are needed to exploit all that information efficiently. The lack of advanced search methods in the clinical domain means that, even with well-defined questions about a particular disease, clinicians receive too many results; since no information analysis is applied afterwards, relevant results that do not appear at the top of the returned collection may be overlooked by the expert, causing an important loss of information. In this work we present a new method to improve scientific article search using patient information for query generation. Using a federated search strategy, it is able to search different resources simultaneously and present a single relevant literature collection, and by applying NLP techniques it presents semantically similar publications together, facilitating the identification of relevant information by clinicians. This method aims to be the foundation of a collaborative environment for sharing clinical knowledge related to patients and scientific publications.
Abstract:
A sustainable manufacturing process must rely on an equally sustainable supply of raw materials and energy. This paper presents the results of studies on sustainable business models for the minerals industry, a fundamental precursor to a sustainable manufacturing process. As has happened in other economic activities, the mining and minerals industry has come under tremendous pressure to improve its social, developmental and environmental performance. Mining, refining, and the use and disposal of minerals have in some instances led to significant local environmental and social damage. Nowadays, as in other parts of the corporate world, companies are routinely expected to perform to ever higher standards of behavior, going well beyond achieving the best rate of return for shareholders; they are also increasingly being asked to be more transparent and subject to third-party audit or review, especially in environmental matters. In environmental terms, there are three inter-related areas where innovation and new business models can make the biggest difference: carbon, water and biodiversity. The focus is on these three areas for two reasons. First, the industrial and energy minerals industry has significant footprints in each of them. Second, these are the areas where the potential environmental impacts go beyond local stakeholders and communities and can even be global, as in the case of carbon. Prioritizing efforts in these areas will therefore ultimately be a strategic differentiator as the industry's businesses continue to grow. Over the next forty years, the world's population is predicted to rise from 6,300 million to 9,500 million people. This will mean a huge demand for natural resources; indeed, consumption rates are such that current demand for raw materials will probably soon exceed the planet's capacity.
As awareness of this situation grows, the public is demanding goods and services that are more environmentally sustainable. This means that massive efforts are required to reduce the amount of materials we use, including freshwater, minerals and oil, biodiversity, and marine resources. It is clear that business as usual is no longer possible. Today, companies face not only the economic fallout of the financial crisis; they face the substantial challenge of transitioning to a low-carbon economy constrained by dwindling, easily accessible natural resources. Innovative business models offer pioneering companies an early start toward the future: they can signal to consumers how to make sustainable choices and provide rewards for both the consumer and the shareholder. Climate change and carbon remain major risk discontinuities that we need to understand and deal with better. In the absence of a global carbon solution, the principal objective of any individual country should be to reduce its global carbon emissions by encouraging conservation. The mineral industry's internal response is to continue to focus on reducing the energy intensity of existing operations through energy efficiency and the progressive introduction of new technology, and planning of new projects must ensure that their energy footprint is minimal from the start. These actions will increase the long-term resilience of the business to uncertain energy and carbon markets. This focus, combined with strong demand for skills in this strategically important area, requires an appropriate change in the initial and continuing training of engineers and technicians and in their awareness of eco-design. It will also require the development of measurement tools for consistent comparisons between companies, and the integration of carbon-footprint assessments of mining equipment and services into comprehensive impact studies on the sustainable development of the economy.
Abstract:
In the last few years there has been heightened interest in data treatment and analysis with the aim of discovering hidden knowledge and eliciting relationships and patterns within data. Data mining techniques (also known as Knowledge Discovery in Databases) have been applied over a wide range of fields such as marketing, investment, fraud detection, manufacturing, telecommunications and health. In this study, well-known data mining techniques, artificial neural networks (ANN), genetic programming (GP), forward-selection linear regression (LR) and k-means clustering, are proposed to the health and sports community as an aid to resistance training prescription. Appropriate resistance training prescription is effective for developing fitness and health and for enhancing general quality of life. Resistance exercise intensity is commonly prescribed as a percentage of the one repetition maximum (1RM). The 1RM, or dynamic muscular strength, is operationally defined as the heaviest load that can be moved over a specific range of motion, one time and with correct technique. The safety of 1RM assessment has been questioned, as such a maximal effort may lead to muscular injury. Prediction equations could help to tackle the problem of estimating the 1RM from submaximal loads, in order to avoid, or at least reduce, the associated risks. We built different models from data on 30 men who performed up to 5 sets to exhaustion at different percentages of the 1RM in the bench press, until reaching their actual 1RM. A comparison of different existing prediction equations is also carried out. The LR model seems to outperform the ANN and GP models for 1RM prediction in the range between 1 and 10 repetitions.
At 75% of the 1RM, some subjects (n = 5) could perform 13 repetitions with proper technique in the bench press, whilst other subjects (n = 20) performed significantly more repetitions (p < 0.05) at 70% than at 75% of their actual 1RM. Rating of perceived exertion (RPE) seems not to be a good predictor of the 1RM when all sets are performed to exhaustion, as no significant differences (p < 0.05) were found in the RPE at 75%, 80% and 90% of the 1RM. Also, years of experience and weekly hours of strength training are better correlated with the 1RM (p < 0.05) than body weight. The O'Connor et al. prediction equation appears most consistent with the data gathered and seems to be the most accurate 1RM prediction equation among those proposed in the literature and used in this study. Epley's 1RM prediction equation is reproduced by means of data simulation from the 1RM equations in the literature. Finally, future lines of research are proposed on the problem of 1RM prediction by means of genetic algorithms, neural networks and clustering techniques.
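The two equations compared above have commonly cited linear forms (Epley: 1RM = w(1 + r/30); O'Connor et al.: 1RM = w(1 + 0.025r)); the abstract does not restate them, so these forms are taken from the general literature. A quick sketch with illustrative numbers:

```python
def epley_1rm(weight, reps):
    """Epley equation: 1RM estimated from a submaximal load and repetitions."""
    return weight * (1 + reps / 30)

def oconnor_1rm(weight, reps):
    """O'Connor et al. equation, a slightly more conservative linear form."""
    return weight * (1 + 0.025 * reps)

# 100 kg bench press performed for 5 repetitions (illustrative numbers).
print(round(epley_1rm(100, 5), 1))    # 116.7
print(round(oconnor_1rm(100, 5), 1))  # 112.5
```

Both are linear in the number of repetitions, which is consistent with the abstract's finding that a linear regression model outperforms ANN and GP models in the 1-10 repetition range.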
Abstract:
La nanotecnología es un área de investigación de reciente creación que trata con la manipulación y el control de la materia con dimensiones comprendidas entre 1 y 100 nanómetros. A escala nanométrica, los materiales exhiben fenómenos físicos, químicos y biológicos singulares, muy distintos a los que manifiestan a escala convencional. En medicina, los compuestos miniaturizados a nanoescala y los materiales nanoestructurados ofrecen una mayor eficacia con respecto a las formulaciones químicas tradicionales, así como una mejora en la focalización del medicamento hacia la diana terapéutica, revelando así nuevas propiedades diagnósticas y terapéuticas. A su vez, la complejidad de la información a nivel nano es mucho mayor que en los niveles biológicos convencionales (desde el nivel de población hasta el nivel de célula) y, por tanto, cualquier flujo de trabajo en nanomedicina requiere, de forma inherente, estrategias de gestión de información avanzadas. Desafortunadamente, la informática biomédica todavía no ha proporcionado el marco de trabajo que permita lidiar con estos retos de la información a nivel nano, ni ha adaptado sus métodos y herramientas a este nuevo campo de investigación. En este contexto, la nueva área de la nanoinformática pretende detectar y establecer los vínculos existentes entre la medicina, la nanotecnología y la informática, fomentando así la aplicación de métodos computacionales para resolver las cuestiones y problemas que surgen con la información en la amplia intersección entre la biomedicina y la nanotecnología. 
The observations above determine the context of this doctoral dissertation, which focuses on analyzing the nanomedicine domain in depth and on developing strategies and tools to map across the different disciplines, data sources, computational resources, and information extraction and text mining techniques, with the final goal of making use of the available nanomedical data. Through real cases, the author analyzes some of the nanomedicine research tasks that require, or can benefit from, the use of nanoinformatics methods and tools, thereby illustrating the current drawbacks and limitations of biomedical informatics approaches when dealing with data belonging to the nanomedical domain. Three different scenarios are discussed as examples of activities that researchers perform while conducting their research, comparing the biomedical and nanomedical contexts: i) searching the Web for data sources and computational resources supporting their research; ii) searching the scientific literature for experimental results and publications related to their research; iii) searching clinical trial registries for clinical results related to their research. Carrying out these activities requires the use of informatics tools and services, such as web browsers, bibliographic reference databases indexing the biomedical literature, and online clinical trial registries, respectively.
For each scenario, this document provides a detailed analysis of the possible obstacles that may hinder the development and outcome of the different research tasks in each of the two fields (biomedicine and nanomedicine), with special emphasis on the challenges in nanomedical research, the field in which the greatest difficulties were found. The author illustrates how applying methodologies from biomedical informatics to these scenarios is effective in the biomedical domain, whereas those methodologies present serious limitations when applied to the nanomedical context. To address these limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics that information presents at the nano level. The approach consists of an in-depth analysis of the scientific literature and of the available clinical trial registries to extract relevant information about experiments and results in nanomedicine (textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.), followed by the development of mechanisms to structure and analyze this information automatically. This analysis concludes with the generation of a reference data model (gold standard), a manually annotated training and test set, which was applied to the classification of clinical trial records, automatically distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work intends to provide the methods needed to organize, curate, filter and validate part of the currently existing nanomedical data on a scale suitable for decision-making.
Similar analyses for other nanomedicine research tasks would help to detect which nanoinformatics resources are required to meet the current goals in the area, and to generate structured, information-dense reference datasets from the literature and other unstructured sources, in order to apply new algorithms and infer new valuable information for nanomedicine research.
ABSTRACT
Nanotechnology is a research area of recent development that deals with the manipulation and control of matter with dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, and reveal novel diagnostic and therapeutic properties. Nevertheless, the complexity of information at the nano level is much higher than the complexity at the conventional biological levels (from populations to the cell). Thus, any nanomedical research workflow inherently demands advanced information management. Unfortunately, Biomedical Informatics (BMI) has not yet provided the necessary framework to deal with such information challenges, nor adapted its methods and tools to the new research field. In this context, the novel area of nanoinformatics aims to build new bridges between medicine, nanotechnology and informatics, allowing the application of computational methods to solve informational issues at the wide intersection between biomedicine and nanotechnology.
The above observations determine the context of this doctoral dissertation, which focuses on analyzing the nanomedical domain in depth and on developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, with the final goal of leveraging available nanomedical data. The author analyzes, through real-life case studies, some research tasks in nanomedicine that require or could benefit from the use of nanoinformatics methods and tools, illustrating the present drawbacks and limitations of BMI approaches when dealing with data belonging to the nanomedical domain. Three different scenarios, comparing the biomedical and nanomedical contexts, are discussed as examples of activities that researchers perform while conducting their research: i) searching the Web for data sources and computational resources supporting their research; ii) searching the literature for experimental results and publications related to their research; and iii) searching clinical trial registries for clinical results related to their research. These activities depend on informatics tools and services such as web browsers, databases of citations and abstracts indexing the biomedical literature, and web-based clinical trial registries, respectively. For each scenario, this document provides a detailed analysis of the potential information barriers that could hamper the successful development of the different research tasks in both fields (biomedicine and nanomedicine), emphasizing the existing challenges for nanomedical research, where the major barriers were found. The author illustrates how the application of BMI methodologies to these scenarios proves successful in the biomedical domain, while the same methodologies present severe limitations when applied to the nanomedical context.
To address such limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. This approach consists of an in-depth analysis of the scientific literature and available clinical trial registries to extract relevant information about experiments and results in nanomedicine (textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.), followed by the development of mechanisms to automatically structure and analyze this information. The analysis resulted in the generation of a gold standard (a manually annotated training and reference set), which was applied to the automatic classification of clinical trial summaries, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the methods needed to organize, curate and validate existing nanomedical data on a scale suitable for decision-making. Similar analyses for other nanomedical research tasks would help to detect which nanoinformatics resources are required to meet current goals in the field, and to generate densely populated, machine-interpretable reference datasets from the literature and other unstructured sources for testing novel algorithms and inferring new valuable information for nanomedicine.
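The classification step described above (a supervised classifier trained on a manually annotated gold standard to separate nano-focused trials from traditional ones) can be sketched with a tiny multinomial Naive Bayes model. This is an illustrative stand-in, not the dissertation's actual pipeline, and the miniature labeled sample is invented:

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train_nb(samples):
    """Train a multinomial Naive Bayes model from (text, label) pairs,
    playing the role of the manually annotated gold standard."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in samples:
        label_counts[label] += 1
        for tok in tokenize(text):
            word_counts[label][tok] += 1
            vocab.add(tok)
    return word_counts, label_counts, vocab

def classify(model, text):
    """Return the label with the highest log-posterior (Laplace smoothing)."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label in label_counts:
        lp = math.log(label_counts[label] / total)
        n = sum(word_counts[label].values())
        for tok in tokenize(text):
            lp += math.log((word_counts[label][tok] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented miniature training set standing in for the annotated trial summaries
gold = [
    ("liposomal nanoparticle doxorubicin delivery", "nano"),
    ("gold nanoparticle contrast agent imaging", "nano"),
    ("oral aspirin tablet versus placebo", "traditional"),
    ("randomized statin tablet cholesterol trial", "traditional"),
]
model = train_nb(gold)
print(classify(model, "nanoparticle drug delivery trial"))  # → nano
```

The real study's vocabulary, descriptors and annotation scheme are of course far richer; the sketch only shows the shape of the train-then-classify workflow over trial summaries.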
Abstract:
Extensive knowledge is available today on the characterization of hydraulic fills, both static and dynamic. However, more general and comprehensive studies of these materials, closely tied to their uses and main problems in harbor and mining works, are scarce in the literature. The semi-empirical procedures for evaluating the silo effect in the cells of harbor caissons, and the liquefaction potential of these soils under sudden loads and earthquakes, are based on studies in which the influence of the governing parameters is largely unknown, leading to results with considerable scatter. This is the case, for example, of the damage reported by the Port of Barcelona research group: the failure of the harbor caissons in the Port of Barcelona in 2007. For these and other reasons, it was decided to develop an analysis for evaluating these problems through a proposed theoretical-numerical and empirical methodology. The theoretical-numerical approach developed in this study focuses on determining the theoretical framework and the numerical tools capable of meeting the challenges these problems pose. The complexity of the problem stems from several fundamental aspects: the nonlinear behavior of loose, lightly confined soils during self-weight consolidation; their high liquefaction potential; the hydromechanical characterization of soil-structure contacts (preferential paths for water flow and lateral consolidation); and the fact that these problems start from a state of practically zero effective stress. As for the experimental approach, a very simple laboratory methodology has been proposed for the hydromechanical characterization of the soil and the interfaces, without the need for complex laboratory equipment or excessively complicated procedures.
This work therefore includes a brief review of aspects related to the execution of hydraulic fills, their main uses and the associated phenomena, in order to establish a starting point for the present study. This review ranges from the evolution of the traditional consolidation equations (Terzaghi, 1943), (Gibson, English & Hussey, 1967) and calculation methodologies (Townsend & McVay, 1990), (Fredlund, Donaldson & Gitirana, 2009) to the contributions concerning the silo effect (Janssen, 1895), (Ravenet, 1977) and the liquefaction phenomenon (Casagrande, 1936), (Castro, 1969), (Been & Jefferies, 1985), (Pastor & Zienkiewicz, 1986). For this study, a code based on the finite element method (FEM) was developed exclusively, using MATLAB. To this end, a theoretical (Biot, 1941), (Zienkiewicz & Shiomi, 1984), (Segura & Carol, 2004) and numerical framework (Zienkiewicz & Taylor, 1989), (Huerta & Rodríguez, 1992), (Segura & Carol, 2008) was established to solve multidimensional consolidation problems with frictional boundary conditions, together with the corresponding constitutive models (Pastor & Zienkiewicz, 1986), (Fu & Liu, 2011). Likewise, an experimental methodology was developed through a series of laboratory tests for calibrating the constitutive models and characterizing index and flow parameters (Castro, 1969), (Bahda, 1997), (Been & Jefferies, 2006). Hostun sand was used as the reference material (hydraulic fill). As a main contribution, a series of new direct shear tests is included for the hydromechanical characterization of the soil-concrete interface, for different formwork types and roughnesses. Finally, a set of specific algorithms was designed for solving the set of governing differential equations that define this problem.
These algorithms are of great importance in this problem for handling the transient process of hydraulic fill consolidation and other effects related to their placement in caisson cells, such as the silo effect and self-induced liquefaction. To this end, a 2D axisymmetric model has been established, with a coupled u-p formulation for continuum elements and (zero-thickness) interface elements, aimed at simulating the conditions of these hydraulic fills when placed in harbor cells. This case study clearly concerns granular materials in a very loose initial state with scarcely any effective stress, i.e., with practically all excess pore pressures generated by the self-weight consolidation process. All this requires specific numerical algorithms, as well as particular constitutive models, for both the continuum and the interface elements. Simulating the different fill-placement procedures required modifying the algorithms in order to represent the placement of these materials numerically and to compare the results of the different procedures. The continuous updating of the soil parameters also makes this algorithm a powerful tool that yields an interesting set of variable profiles, such as density, void ratio, solid fraction, excess pore pressure, and stresses and strains. In short, the model provides a better understanding of the silo effect, a term commonly used to describe the transient lateral-pressure gradient in silo-like retaining structures.
Finally, a series of comparisons is included between the model results and different studies from the technical literature, both for the self-weight consolidation phenomenon (Fredlund, Donaldson & Gitirana, 2009) and for the silo effect (Puertos del Estado, 2006; EuroCode, 2006; Japan Tech. Stands., 2009; etc.). To conclude, the design of a decantation-column prototype with frictional walls is proposed as the main future line of research. Wide research is nowadays available on the characterization of hydraulic fills in terms of either static or dynamic behavior. However, reported comprehensive analyses of these soils when meant for port or mining works are scarce. Moreover, the semi-empirical procedures for assessing the silo effect on cells in floating caissons, and the liquefaction potential of these soils during sudden loads or earthquakes, are based on studies where the underlying influence parameters are not well known, yielding results with significant scatter. This is the case, for instance, of the hazards reported by the Barcelona working group after the failure of the harbor caissons in 2007. By virtue of this, a comprehensive approach has been undertaken to evaluate the problem through a proposed numerical and laboratory methodology. Within a theoretical and numerical scope, the study focuses on the numerical tools capable of facing the different challenges of this problem. The complexity is manifold: the highly non-linear behavior of consolidating soft soils; their potentially liquefiable nature; the significance of the hydromechanics of the soil-structure contact; the discontinuities as preferential paths for water flow; and the "negligible" effective stresses set as initial conditions.
Within an experimental scope, a straightforward laboratory methodology is introduced for the hydromechanical characterization of the soil and the interface without the need for complex laboratory devices or cumbersome procedures. This study therefore includes a brief overview of hydraulic fill execution, its main uses (land reclamation, filled cells, tailings dams, etc.) and the underlying phenomena (self-weight consolidation, silo effect, liquefaction, etc.). It ranges from the evolution of the traditional consolidation equations (Terzaghi, 1943), (Gibson, English & Hussey, 1967) and solving methodologies (Townsend & McVay, 1990), (Fredlund, Donaldson & Gitirana, 2009) to the contributions in terms of silo effect (Janssen, 1895), (Ravenet, 1977) and liquefaction phenomena (Casagrande, 1936), (Castro, 1969), (Been & Jefferies, 1985), (Pastor & Zienkiewicz, 1986). The novelty of the study lies in the development of a Finite Element Method (FEM) code formulated exclusively for this problem. Subsequently, a theoretical (Biot, 1941), (Zienkiewicz & Shiomi, 1984), (Segura & Carol, 2004) and numerical approach (Zienkiewicz & Taylor, 1989), (Huerta & Rodríguez, 1992), (Segura & Carol, 2008) is introduced for multidimensional consolidation problems with frictional contacts, along with the corresponding constitutive models (Pastor & Zienkiewicz, 1986), (Fu & Liu, 2011). An experimental methodology is presented for the laboratory tests and material characterization (Castro, 1969), (Bahda, 1997), (Been & Jefferies, 2006), using Hostun sand as the reference hydraulic fill. A series of singular interaction shear tests for the interface calibration is included. Finally, a specific model algorithm for solving the set of differential equations governing the problem is presented.
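The consolidation equations cited above descend from Terzaghi's classical one-dimensional theory, which reduces to the diffusion equation du/dt = cv d²u/dz² for the excess pore pressure u. A minimal explicit finite-difference sketch of that textbook equation follows; it is purely illustrative (the thesis itself uses a coupled u-p FEM formulation, not this scheme), and the boundary conditions and values are our own assumptions:

```python
def terzaghi_1d(u0, cv, dz, dt, steps):
    """Explicit FTCS scheme for du/dt = cv * d2u/dz2.
    Drained top boundary (u = 0), impermeable base (du/dz = 0).
    Stable only for r = cv*dt/dz**2 <= 0.5."""
    r = cv * dt / dz ** 2
    assert r <= 0.5, "explicit scheme unstable for this r"
    u = list(u0)
    for _ in range(steps):
        un = u[:]
        for i in range(1, len(u) - 1):
            u[i] = un[i] + r * (un[i + 1] - 2 * un[i] + un[i - 1])
        u[0] = 0.0       # drained surface: excess pressure dissipates
        u[-1] = u[-2]    # impermeable base: zero pressure gradient
    return u

# 1 m column on a 21-node grid, uniform initial excess pressure of 10 kPa,
# cv = 1e-6 m2/s (illustrative value for a soft fill)
u = terzaghi_1d([10.0] * 21, cv=1e-6, dz=0.05, dt=1000.0, steps=500)
```

Self-weight consolidation of a very loose fill additionally requires large-strain formulations of the Gibson, English & Hussey type, which this fixed-grid, small-strain sketch deliberately omits.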
The simulation covers the transient process of decantation and consolidation, the build-up of the silo effect in cells, and related phenomena such as self-compaction and liquefaction. To this end, a 2D axisymmetric coupled model with continuum and interface elements was implemented, aimed at simulating the conditions and self-weight consolidation of hydraulic fills once placed into floating caisson cells or close to retaining structures. This basically concerns a loose granular soil with a negligible initial effective stress level at the onset of the process. The implementation requires a specific numerical algorithm as well as specific constitutive models for both the continuum and the interface elements. Simulating the placement procedures for the fills required modifying the algorithm so that these procedures could be represented numerically; comparing the results for the different procedures is of interest for the global analysis. Furthermore, the continuous updating of the model provides an insightful log of variable profiles such as density, void ratio, solid fraction, total and excess pore pressure, and stresses and strains. This leads to a better understanding of complex phenomena such as the transient gradient in lateral pressures due to the silo effect in saturated soils. Comparisons between the model and the literature are included for self-weight consolidation (Fredlund, Donaldson & Gitirana, 2009) and for the silo effect (Puertos del Estado, 2006; EuroCode, 2006; Japan Tech. Stands., 2009). The study closes with the design of a decantation-column prototype with frictional walls as the main future line of research.