901 results for L71 - Mining, Extraction, and Refining:
Abstract:
Coal mining and the incineration of solid residues of health services (SRHS) release several contaminants into the environment, such as heavy metals and dioxins. These xenobiotics can cause excessive oxidative stress in organisms and lead to various pathologies, including cancer. In the present study, the urinary concentrations of heavy metals (lead, copper, iron, manganese and zinc) and several enzymatic and non-enzymatic blood biomarkers of oxidative stress (lipid peroxidation = TBARS, protein carbonyls = PC, protein thiols = PT, alpha-tocopherol = AT, reduced glutathione = GSH, and the activities of glutathione S-transferase = GST, glutathione reductase = GR, glutathione peroxidase = GPx, catalase = CAT and superoxide dismutase = SOD) were measured in six groups (n = 20 each) of subjects exposed to airborne contamination from coal mining or SRHS incineration, after supplementation with vitamin E (800 mg/day) and vitamin C (500 mg/day) for 6 months. The results were compared with those obtained before the antioxidant intervention (Avila et al., Ecotoxicology 18:1150-1157, 2009; Possamai et al., Ecotoxicology 18:1158-1164, 2009). Except for decreased manganese contents, heavy metal concentrations were elevated in all groups exposed to either source of airborne contamination when compared to controls. TBARS and PC concentrations, which were elevated before the antioxidant intervention, decreased after supplementation. Similarly, the contents of PT, AT and GSH, which were decreased before the intervention, reached values close to those found in controls; GPx activity was re-established in underground miners, and SOD, CAT and GST activities were re-established in all groups. The results show that the oxidative stress detected before antioxidant supplementation in subjects both directly and indirectly exposed to airborne contamination from coal dust and SRHS incineration was attenuated after the antioxidant intervention.
Abstract:
Nanotechnology is a recently established research area concerned with the manipulation and control of matter at dimensions between 1 and 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those they display at the conventional scale. In medicine, compounds miniaturized to the nanoscale and nanostructured materials offer greater efficacy than traditional chemical formulations, as well as improved targeting of the drug to its therapeutic target, thereby revealing new diagnostic and therapeutic properties. At the same time, the complexity of information at the nano level is much higher than at conventional biological levels (from the population level down to the cell level), so any nanomedicine workflow inherently requires advanced information management strategies. Unfortunately, biomedical informatics has not yet provided the framework needed to deal with these information challenges at the nano level, nor has it adapted its methods and tools to this new research field. In this context, the new area of nanoinformatics aims to identify and establish the links between medicine, nanotechnology and informatics, thereby fostering the application of computational methods to solve the questions and problems that arise with information at the broad intersection between biomedicine and nanotechnology. The observations above define the context of this doctoral thesis, which focuses on analyzing the nanomedicine domain in depth and on developing strategies and tools to map across the different disciplines, data sources, computational resources, and information extraction and text mining techniques, with the ultimate goal of making use of the available nanomedical data. Through real case studies, the author analyzes some of the research tasks in nanomedicine that require, or could benefit from, the use of nanoinformatics methods and tools, thus illustrating the current drawbacks and limitations of biomedical informatics approaches when dealing with data belonging to the nanomedical domain. Three different scenarios are discussed as examples of activities that researchers perform while conducting their research, comparing the biomedical and nanomedical contexts: i) searching the Web for data sources and computational resources that support their research; ii) searching the scientific literature for experimental results and publications related to their research; iii) searching clinical trial registries for clinical results related to their research. Carrying out these activities requires the use of informatics tools and services, such as web browsers, bibliographic reference databases indexing the biomedical literature, and online clinical trial registries, respectively.
For each scenario, this document provides a detailed analysis of the potential obstacles that may hinder the development and outcome of the different research tasks in each of the two fields mentioned (biomedicine and nanomedicine), with particular emphasis on the challenges in nanomedical research, the field in which the greatest difficulties were found. The author illustrates how applying methodologies from biomedical informatics to these scenarios is effective in the biomedical domain, whereas these methodologies show serious limitations when applied to the nanomedical context. To address these limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. The approach consists of an in-depth analysis of the scientific literature and of the available clinical trial registries to extract relevant information about experiments and results in nanomedicine (textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.), followed by the development of mechanisms to structure and analyze this information automatically. This analysis concludes with the generation of a reference data model (gold standard), a manually annotated training and test set, which was applied to the classification of clinical trial records, making it possible to automatically distinguish studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the methods needed to organize, curate, filter and validate part of the currently existing nanomedical data on a scale suitable for decision-making. Similar analyses for other nanomedicine research tasks would help to identify which nanoinformatics resources are required to meet current goals in the field, and to generate structured, information-dense reference datasets from the literature and other unstructured sources, so that new algorithms can be applied and new information of value for nanomedicine research can be inferred.
ABSTRACT
Nanotechnology is a research area of recent development that deals with the manipulation and control of matter with dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, and reveal novel diagnostic and therapeutic properties. Nevertheless, the complexity of information at the nano level is much higher than the complexity at the conventional biological levels (from populations to the cell). Thus, any nanomedical research workflow inherently demands advanced information management. Unfortunately, Biomedical Informatics (BMI) has not yet provided the necessary framework to deal with such information challenges, nor adapted its methods and tools to the new research field.
In this context, the novel area of nanoinformatics aims to build new bridges between medicine, nanotechnology and informatics, allowing the application of computational methods to solve informational issues at the wide intersection between biomedicine and nanotechnology. The above observations determine the context of this doctoral dissertation, which is focused on analyzing the nanomedical domain in-depth, and developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, for leveraging available nanomedical data. The author analyzes, through real-life case studies, some research tasks in nanomedicine that would require or could benefit from the use of nanoinformatics methods and tools, illustrating present drawbacks and limitations of BMI approaches to deal with data belonging to the nanomedical domain. Three different scenarios, comparing both the biomedical and nanomedical contexts, are discussed as examples of activities that researchers would perform while conducting their research: i) searching over the Web for data sources and computational resources supporting their research; ii) searching the literature for experimental results and publications related to their research, and iii) searching clinical trial registries for clinical results related to their research. The development of these activities will depend on the use of informatics tools and services, such as web browsers, databases of citations and abstracts indexing the biomedical literature, and web-based clinical trial registries, respectively. For each scenario, this document provides a detailed analysis of the potential information barriers that could hamper the successful development of the different research tasks in both fields (biomedicine and nanomedicine), emphasizing the existing challenges for nanomedical research —where the major barriers have been found. The author illustrates how the application of BMI methodologies to these scenarios can be proven successful in the biomedical domain, whilst these methodologies present severe limitations when applied to the nanomedical context. To address such limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. This approach consists of an in-depth analysis of the scientific literature and available clinical trial registries to extract relevant information about experiments and results in nanomedicine —textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.—, followed by the development of mechanisms to automatically structure and analyze this information. This analysis resulted in the generation of a gold standard —a manually annotated training or reference set—, which was applied to the automatic classification of clinical trial summaries, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the necessary methods for organizing, curating and validating existing nanomedical data on a scale suitable for decision-making. 
Similar analyses for other nanomedical research tasks would help to identify which nanoinformatics resources are required to meet current goals in the field, as well as to generate densely populated and machine-interpretable reference datasets from the literature and other unstructured sources for further testing novel algorithms and inferring new valuable information for nanomedicine.
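The record classification described in this abstract (separating nano-focused trials from traditional pharmaceutical trials using a manually annotated gold standard) can be sketched as a simple supervised text classifier. The dissertation does not specify this particular toolchain; the sketch below assumes scikit-learn, TF-IDF features and a linear SVM, and the training summaries and labels are invented placeholders.

```python
# Minimal sketch: classifying clinical trial summaries as nano vs. traditional.
# Assumes scikit-learn; the features, classifier and example texts are
# illustrative only and are not taken from the dissertation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Hypothetical, manually labelled training summaries (1 = nanomedicine, 0 = traditional).
train_texts = [
    "Phase I study of a liposomal nanoparticle formulation of doxorubicin",
    "Randomized trial of oral metformin in type 2 diabetes",
    "Evaluation of gold nanoshell mediated photothermal ablation",
    "Double-blind study of a conventional statin for hypercholesterolemia",
]
train_labels = [1, 0, 1, 0]

# TF-IDF features over unigrams and bigrams, then a linear SVM.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LinearSVC(),
)
model.fit(train_texts, train_labels)

# Classify a new registry entry.
print(model.predict(["Safety of a polymeric nanoparticle carrier for paclitaxel"]))
```

In practice the annotated gold standard described above would supply the training and test texts; the pipeline shape (vectorizer plus linear classifier) is only one reasonable choice for this kind of record classification.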
Abstract:
A multi-phase framework is typically required for the CFD modelling of metals reduction processes. Such processes typically involve the interaction of liquid metals, a gas (often air) top space, liquid droplets in the top space, and the injection of both solid particles and gaseous bubbles into the bath. The exchange of mass, momentum and energy between the phases is fundamental to these processes. Multi-phase algorithms are complex and can be unreliable in terms of convergence behaviour, the extent to which the physics is captured, or both. In this contribution, we discuss these multi-phase flow issues and describe an example of each of the main “single phase” approaches to modelling this class of problems (i.e., Eulerian–Lagrangian and Eulerian–Eulerian). Their utility is illustrated in the context of two problems: one involving the injection of sparging gases into a steel continuous slab caster, and the other based on the development of a novel process for aluminium electrolysis. In the steel caster, coupling the Lagrangian tracking of the gas phase with the continuum enables the simulation of the transient motion of the metal–flux interface. The model of the electrolysis process employs a novel method for calculating the slip velocities of the oxygen bubbles resulting from the dissolution of alumina, which allows the efficiency of the process to be predicted.
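The abstract refers to a novel slip-velocity method without describing it; purely as a generic illustration of the kind of calculation involved in Eulerian-Lagrangian bubble tracking, the sketch below solves the standard buoyancy-drag balance for a single bubble's terminal slip speed using the Schiller-Naumann drag correlation. It is not the authors' method, and the fluid properties are illustrative values only.

```python
# Generic sketch of a terminal slip-velocity calculation for a gas bubble in a
# liquid bath: iterate the buoyancy = drag balance with Schiller-Naumann drag.
# This is textbook bookkeeping, not the "novel method" in the abstract;
# the property values below are illustrative.
import math

def slip_velocity(d, rho_l, rho_g, mu_l, g=9.81, tol=1e-10):
    """Iteratively solve buoyancy = drag for the terminal slip speed (m/s)."""
    u = g * d**2 * (rho_l - rho_g) / (18.0 * mu_l)  # Stokes-regime first guess
    for _ in range(200):
        re = max(rho_l * u * d / mu_l, 1e-12)        # bubble Reynolds number
        cd = 24.0 / re * (1.0 + 0.15 * re**0.687)    # Schiller-Naumann drag law
        u_new = math.sqrt(4.0 * g * d * (rho_l - rho_g) / (3.0 * cd * rho_l))
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u

# Illustrative values only: roughly a 1 mm bubble in liquid aluminium.
print(slip_velocity(d=1e-3, rho_l=2375.0, rho_g=1.0, mu_l=1.3e-3))
```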
Abstract:
Two approaches were undertaken to characterize the arsenic (As) content of Chinese rice. First, a national market-basket survey (n = 240) was conducted in provincial capitals, sourcing grain from China's premier rice production areas. Second, to reflect rural diets, paddy rice samples (n = 195) were collected directly from farmers' fields in three regions of Hunan, a key rice-producing province in southern China. Two of the sites were within mining and smelting districts, and the third was devoid of large-scale metal processing industries. Arsenic levels were determined in all samples, and a subset (n = 33) was characterized for As species using a new, simple and rapid extraction method suitable for use with Hamilton PRP-X100 anion exchange columns and HPLC-ICP-MS. The vast majority (85%) of the market rice grains had total As levels below 150 ng g(-1). The rice collected from mine-impacted regions, however, was highly enriched in As, reaching concentrations of up to 624 ng g(-1). Inorganic As (As(i)) was the predominant species detected in all of the speciated grain, with As(i) levels in some samples exceeding 300 ng g(-1). The As(i) concentration in polished and unpolished Chinese rice was successfully predicted from total As levels. Based on this survey, the mean baseline concentration of As(i) in Chinese market rice was estimated to be 96 ng g(-1), while levels in mine-impacted areas were higher, with ca. 50% of the rice in one region predicted to fail the national standard.
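The abstract reports that inorganic As was predicted from total As. A minimal sketch of how such a calibration could be fitted by ordinary least squares is given below; the arrays are placeholders for illustration, not data from the survey.

```python
# Minimal sketch of the kind of calibration the abstract describes: predicting
# inorganic arsenic from total arsenic in rice grain by linear regression.
# The values below are placeholders, not data from the survey.
import numpy as np

total_as = np.array([80.0, 120.0, 150.0, 300.0, 624.0])     # ng/g, hypothetical
inorganic_as = np.array([60.0, 90.0, 110.0, 210.0, 400.0])  # ng/g, hypothetical

# Ordinary least squares fit: inorganic_As ~ slope * total_As + intercept.
slope, intercept = np.polyfit(total_as, inorganic_as, 1)
predicted = slope * total_as + intercept

# R^2 as a quick check of how well total As explains inorganic As.
ss_res = np.sum((inorganic_as - predicted) ** 2)
ss_tot = np.sum((inorganic_as - inorganic_as.mean()) ** 2)
print(f"slope={slope:.3f}, intercept={intercept:.1f} ng/g, R^2={1 - ss_res / ss_tot:.3f}")
```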
Abstract:
The purpose of this thesis is to analyze the evolution of an early 20th-century mining system in Spitsbergen as applied by the Boston-based Arctic Coal Company (ACC). This analysis will address the following questions: Did the system evolve in a linear, technology-based fashion? Or was its progression more a product of interactions and negotiations with the natural and human landscapes present during the time of occupation? Answers to these questions will be sought through a review of historical records and the material residues identified during the 2008 field examination on Spitsbergen. The Arctic Coal Company’s flagship mine, ACC Mine No. 1, will serve as the focus of this analysis. The mine was the company’s largest undertaking during its occupation of Longyear Valley and today exhibits a large collection of related features and artifacts. The study will emphasize the material record within an analysis of the technical, environmental and social influences that guided the course of the mining system. The intent of this thesis is to provide a better understanding of how a particular resource extraction industry took root in the Arctic.
Abstract:
Lithium is used in the cathode and electrolyte of rechargeable batteries in many portable electronics and electric vehicles, and is thus seen as a critical component of modern technology (Gruber et al., 2011). Electric vehicles are promoted as a way to reduce carbon emissions associated with the transportation sector, which accounts for 14.3% of anthropogenic greenhouse gas emissions (OECD International Transport Forum, 2010). However, the sustainability of lithium procurement will influence the overall environmental impact of this proposed “green” solution. It is estimated that 66% of the world’s lithium resource is contained in natural brines, 24% in pegmatites, and 8% in sedimentary rocks such as hectorite clays (Gruber et al., 2011). It has been shown that “[r]ecycling of lithium from Li-ion batteries may be a critical factor in balancing the supply of lithium with future demand” (Gruber et al., 2011). In an attempt to quantify energy and materials consumption associated with production of a unit of useful lithium compounds, industry reports and peer-reviewed scientific literature concerning lithium mining and lithium recycling were reviewed and compared. Other aspects of sustainability, such as waste or by-products produced in the production of a unit of useful lithium, were also explored. Thus, this paper will serve to further the evaluation of the comparative environmental consequences associated with lithium production via extraction versus recycling. Efficiencies must be achieved in both processes to maximize productivity while minimizing ecological harm.
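The comparison the abstract describes rests on a functional-unit accounting of energy per unit of useful lithium compound. The sketch below shows that bookkeeping only; every figure is a placeholder for illustration, not a value taken from the reviewed reports or literature.

```python
# Sketch of a functional-unit comparison of the kind the abstract describes:
# energy demand per kilogram of useful lithium carbonate obtained by primary
# extraction versus recycling. All figures are placeholders for illustration,
# not values from the reviewed industry reports or papers.

def energy_per_kg_li2co3(process_energy_mj, recovered_kg, yield_fraction):
    """Energy (MJ) per kg of useful Li2CO3, accounting for process losses."""
    return process_energy_mj / (recovered_kg * yield_fraction)

routes = {
    # route: (total process energy in MJ, kg Li2CO3 recovered, process yield)
    "brine evaporation (placeholder)":  (5_000.0, 100.0, 0.90),
    "spodumene roasting (placeholder)": (12_000.0, 100.0, 0.85),
    "battery recycling (placeholder)":  (7_000.0, 100.0, 0.70),
}

for name, (energy, mass, yld) in routes.items():
    print(f"{name}: {energy_per_kg_li2co3(energy, mass, yld):.1f} MJ per kg Li2CO3")
```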
Abstract:
No Spanish ed. in: NUC pre-1956, BLC, Sabin, Palau y Dulcet (2nd ed.).
Abstract:
Data mining can be defined as the extraction of implicit, previously unknown, and potentially useful information from data. Numerous researchers have been developing security technology and exploring new methods to detect cyber-attacks with the DARPA 1998 dataset for Intrusion Detection and the modified versions of this dataset, KDDCup99 and NSL-KDD, but until now no one has examined the performance of the Top 10 data mining algorithms selected by experts in data mining. The classification learning algorithms compared in this thesis are C4.5, CART, k-NN and Naïve Bayes. The performance of these algorithms is compared in terms of accuracy, error rate and average cost on modified versions of the NSL-KDD train and test datasets, where the instances are classified as normal or into one of four cyber-attack categories: DoS, Probing, R2L and U2R. Additionally, the most important features for detecting cyber-attacks, both overall and within each category, are evaluated with Weka’s Attribute Evaluator and ranked according to Information Gain. The results show that the classification algorithm with the best performance on the dataset is the k-NN algorithm. The most important features for detecting cyber-attacks are basic features such as the duration of a network connection in seconds, the protocol used for the connection, the network service used, the normal or error status of the connection, and the number of data bytes sent. The most important features for detecting DoS, Probing and R2L attacks are basic features, and the least important are content features; for U2R attacks, in contrast, content features are the most important.
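The thesis performs these steps in Weka; as a rough sketch of the same workflow in scikit-learn, the example below trains a k-NN classifier on an NSL-KDD-style feature matrix and ranks features by information gain (mutual information). The file names, the assumption of already-encoded categorical columns, and the choice of k are illustrative, not taken from the thesis.

```python
# Sketch of the two steps described above, using scikit-learn rather than Weka:
# (1) k-NN classification of NSL-KDD records, (2) ranking features by
# information gain (mutual information). Assumes the categorical columns have
# already been numerically encoded; file names and k are hypothetical.
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import accuracy_score

train = pd.read_csv("nsl_kdd_train_encoded.csv")   # hypothetical preprocessed file
test = pd.read_csv("nsl_kdd_test_encoded.csv")     # hypothetical preprocessed file
X_train, y_train = train.drop(columns=["label"]), train["label"]
X_test, y_test = test.drop(columns=["label"]), test["label"]

# k-NN, the best-performing of the four compared classifiers in the thesis.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, knn.predict(X_test)))

# Rank features by information gain with respect to the class label.
gain = mutual_info_classif(X_train, y_train, discrete_features="auto")
ranking = sorted(zip(X_train.columns, gain), key=lambda kv: kv[1], reverse=True)
for name, score in ranking[:10]:
    print(f"{name}: {score:.3f}")
```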
Abstract:
Road curves are an important feature of road infrastructure, and many serious crashes occur on them. In Queensland, the number of fatalities on curves is twice that on straight roads. There is therefore a need to reduce drivers’ exposure to crash risk on road curves. Road crashes in Australia and in the Organisation for Economic Co-operation and Development (OECD) have plateaued in the last five years (2004 to 2008), and the road safety community is urgently seeking innovative interventions to reduce the number of crashes. However, designing an innovative and effective intervention can be difficult, as it relies on providing theoretical foundation, coherence, understanding, and structure to both the design and the validation of the efficiency of the new intervention. Researchers from multiple disciplines have developed various models to determine the contributing factors for crashes on road curves with a view to reducing the crash rate. However, most existing methods are based on statistical analysis of contributing factors described in government crash reports. To further explore the contributing factors related to crashes on road curves, this thesis designs a novel method to analyse and validate these factors. The use of crash claim reports from an insurance company is proposed for analysis using data mining techniques; to the best of our knowledge, this is the first attempt to use data mining techniques to analyse crashes on road curves. Text mining is employed because the reports consist of thousands of textual descriptions, from which the contributing factors can be identified. Beyond identifying the contributing factors, few studies to date have investigated the relationships between these factors, especially for crashes on road curves. This study therefore proposes the use of rough set analysis to determine these relationships, and the results of this analysis are used to assess the effect of the contributing factors on crash severity. The findings obtained through the data mining techniques presented in this thesis are consistent with previously identified contributing factors. Furthermore, this thesis identifies new contributing factors and the relationships between them. A significant pattern related to crash severity is the time of day: severe road crashes occur more frequently in the evening or at night. Tree collision is another common pattern: crashes that occur in the morning and involve hitting a tree are likely to have a higher crash severity. Another factor that influences crash severity is the age of the driver; most age groups face a high crash severity except drivers between 60 and 100 years old, who have the lowest. The significant relationship identified between contributing factors involves the time of the crash, the year of manufacture of the vehicle, the age of the driver and hitting a tree. Having identified new contributing factors and relationships, a validation process is carried out using a traffic simulator to determine their accuracy, and this process indicates that the results are accurate. This demonstrates that data mining techniques are a powerful tool in road safety research and can be usefully applied within the Intelligent Transport System (ITS) domain. The research presented in this thesis provides an insight into the complexity of crashes on road curves.
The findings of this research have important implications for both practitioners and academics. For road safety practitioners, the results from this research illustrate practical benefits for the design of interventions for road curves that will potentially help in decreasing related injuries and fatalities. For academics, this research opens up a new research methodology to assess crash severity, related to road crashes on curves.
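The rough set analysis referred to in this abstract can be illustrated with a minimal sketch: records that are indiscernible on a chosen set of condition attributes form equivalence classes, and the "severe crash" concept is approximated from below (classes whose members are all severe) and from above (classes containing at least one severe crash). The attributes and records below are invented for illustration and are not taken from the insurance claim data.

```python
# Minimal sketch of rough-set lower and upper approximations of the
# "severe crash" concept. Records and attribute values are made up.
from collections import defaultdict

records = [
    {"time": "night",   "hit_tree": False, "age": "17-25",  "severe": True},
    {"time": "night",   "hit_tree": False, "age": "17-25",  "severe": True},
    {"time": "morning", "hit_tree": True,  "age": "26-59",  "severe": True},
    {"time": "morning", "hit_tree": True,  "age": "26-59",  "severe": False},
    {"time": "day",     "hit_tree": False, "age": "60-100", "severe": False},
]
conditions = ("time", "hit_tree", "age")

# Group records into equivalence classes of the indiscernibility relation.
classes = defaultdict(list)
for i, r in enumerate(records):
    classes[tuple(r[a] for a in conditions)].append(i)

target = {i for i, r in enumerate(records) if r["severe"]}

lower, upper = set(), set()
for members in classes.values():
    m = set(members)
    if m <= target:       # class lies entirely inside the concept
        lower |= m
    if m & target:        # class overlaps the concept
        upper |= m

print("lower approximation (certainly severe):", sorted(lower))
print("upper approximation (possibly severe): ", sorted(upper))
```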
Abstract:
In a seminal data mining article, Leo Breiman [1] argued that to develop effective predictive classification and regression models, we need to move away from sole dependency on statistical algorithms and embrace a wider toolkit of modeling algorithms that includes data mining procedures. Nevertheless, many researchers still rely solely on statistical procedures when undertaking data modeling tasks; this sole reliance has led to the development of irrelevant theory and questionable research conclusions ([1], p.199). We outline initiatives that the HPC & Research Support group is undertaking to engage researchers with data mining tools and techniques, including a new range of seminars, workshops, and one-on-one consultations covering data mining algorithms, the relationship between data mining and the research cycle, and the limitations and problems of these new algorithms. Organisational limitations and restrictions on these initiatives are also discussed.
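Breiman's point can be made concrete with a small sketch: fit a classical linear model and an algorithmic data mining model on the same task and judge both by predictive accuracy on held-out data. The data below are synthetic, and the choice of a random forest as the data mining model is illustrative, not prescribed by the abstract.

```python
# Illustration of the argument above: compare a linear model with an algorithmic
# (data mining) model on held-out predictive accuracy. Synthetic, nonlinear data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) * X[:, 1] + rng.normal(scale=0.1, size=500)  # nonlinear signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "held-out R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
```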
Abstract:
It is a major challenge to guarantee the quality of discovered relevance features in text documents for describing user preferences, because of the large number of terms and patterns and the presence of noise. Most existing popular text mining and classification methods have adopted term-based approaches; however, these all suffer from the problems of polysemy and synonymy. Over the years, it has often been hypothesized that pattern-based methods should perform better than term-based ones in describing user preferences, but many experiments do not support this hypothesis. The innovative technique presented in this paper makes a breakthrough on this difficulty: it discovers both positive and negative patterns in text documents as higher-level features and uses them to accurately weight low-level features (terms) based on their specificity and their distributions in the higher-level features. Substantial experiments using this technique on Reuters Corpus Volume 1 and TREC topics show that the proposed approach significantly outperforms both state-of-the-art term-based methods underpinned by Okapi BM25, Rocchio or Support Vector Machine, and pattern-based methods, on precision, recall and F-measure.
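Okapi BM25, named above as a term-based baseline, can be sketched directly from its standard scoring formula. The sketch below implements that baseline, not the proposed pattern-based technique; k1 = 1.2 and b = 0.75 are conventional default parameters, and the toy corpus is invented.

```python
# Sketch of the Okapi BM25 term-based baseline named above (not the proposed
# pattern-based technique). k1 and b are the commonly used default parameters.
import math
from collections import Counter

def bm25_score(query_terms, doc, corpus, k1=1.2, b=0.75):
    """Score one document against a list of query terms. Documents are token lists."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc)
    score = 0.0
    for term in query_terms:
        n = sum(1 for d in corpus if term in d)          # document frequency
        idf = math.log((N - n + 0.5) / (n + 0.5) + 1.0)  # BM25 idf (always >= 0)
        f = tf[term]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score

corpus = [
    "economic growth in emerging markets".split(),
    "stock markets react to interest rate decision".split(),
    "new pattern mining algorithm for text classification".split(),
]
for d in corpus:
    print(round(bm25_score("markets rate".split(), d, corpus), 3), " ".join(d))
```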