970 results for 650200 Mining and Extraction


Relevance:

100.00%

Publisher:

Abstract:

Two mechanisms of conduction were identified from temperature-dependent (120-340 K) DC electrical resistivity measurements of composites of poly(ε-caprolactone) (PCL) and multi-walled carbon nanotubes (MWCNTs). Activation of variable range hopping (VRH) occurred at lower temperatures than that of temperature fluctuation induced tunneling (TFIT). Experimental data were in good agreement with the VRH model, in contrast to the TFIT model, where broadening of tunnel junctions and increasing electrical resistivity at T > Tg are a consequence of the large difference between the coefficients of thermal expansion of PCL and MWCNTs. A numerical model accounting for this thermal expansion effect was developed to explain the behavior, supposing that the large increase in electrical resistivity corresponds to the larger relative deformation due to thermal expansion, associated with disintegration of the conductive MWCNT network. MWCNTs had a significant nucleating effect on PCL, increasing PCL crystallinity and producing an electrically insulating layer between MWCNTs. The onset of rheological percolation at ~0.18 vol% MWCNTs was clearly evident as the storage modulus G′ and complex viscosity |η*| increased by several orders of magnitude. From Cole-Cole and van Gurp-Palmen plots, and from extraction of crossover points (Gc) from overlaid plots of G′ and G″ as a function of frequency, the onset of rheological percolation at 0.18 vol% MWCNTs was confirmed, a MWCNT loading similar to that determined for electrical percolation.
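The VRH regime described above can be sketched numerically. A minimal example, assuming the 3D Mott form ρ(T) = ρ0·exp((T0/T)^(1/4)) with illustrative ρ0 and T0 values (not the paper's), linearizes the model so both parameters fall out of a straight-line fit:

```python
import numpy as np

# Mott 3D variable range hopping (VRH): rho(T) = rho0 * exp((T0/T)**0.25),
# which linearizes as ln(rho) = ln(rho0) + T0**0.25 * T**(-0.25).
# rho0_true and T0_true are illustrative placeholders, not values from the paper.
rho0_true, T0_true = 1e-2, 5e4
T = np.linspace(120, 340, 50)                    # measured range, in kelvin
rho = rho0_true * np.exp((T0_true / T) ** 0.25)

# A linear fit of ln(rho) against T^(-1/4) recovers both parameters
x = T ** -0.25
slope, intercept = np.polyfit(x, np.log(rho), 1)
T0_fit = slope ** 4
rho0_fit = np.exp(intercept)
```

Plotting ln ρ against T^(-1/4) in this way is also the usual visual check that data follow the 3D VRH law rather than, say, a simple Arrhenius form.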

Abstract:

The present invention relates to a novel class of water compatible molecularly imprinted polymers (AquaMIPs) capable of selectively binding target molecules such as riboflavin, or analogues thereof, in water or aqueous media, their synthesis and use thereof in food processing and extraction or separation processes.

Abstract:

This work aimed to chemically characterize rainwater collected in the city of Aveiro, in south-western Europe, between September 2008 and September 2009. For dilute matrices such as rainwater, the analytical methodologies used to achieve a rigorous chemical characterization are of great importance and are not yet standardized. Thus, to characterize the organic fraction, two filtration methodologies (0.22 and 0.45 μm) were first compared and two rainwater preservation procedures (refrigeration and freezing) were studied, using molecular fluorescence spectroscopy. In addition, two procedures for the isolation and extraction of dissolved organic matter (DOM) from rainwater, based on sorption onto the DAX-8 and C-18 sorbents, were compared using ultraviolet-visible and molecular fluorescence spectroscopies. Regarding the filtration and preservation methodologies, filtration through 0.45 μm is recommended, and rainwater samples should be kept in the dark at 4 °C for at most 4 days before spectroscopic analyses. Regarding the DOM isolation and extraction methodology, the results showed that the C-18-based isolation procedure extracted DOM representative of the overall matrix, whereas the DAX-8 procedure preferentially extracted the humic-like fraction. Since the present work aimed to characterize the humic-like fraction of rainwater DOM, the isolation and extraction methodology based on sorption onto the DAX-8 sorbent was chosen. Prior to the isolation and extraction of DOM from rainwater, the whole organic fraction of the rainwater samples was characterized by ultraviolet-visible and molecular fluorescence techniques.
The samples showed characteristics similar to those of other natural waters, and summer and autumn rainwater had a higher content of chromophoric dissolved organic matter than winter and spring rainwater. Subsequently, the humic-like fraction of some rainwater samples, isolated and extracted by the DAX-8-based procedure, was characterized using ultraviolet-visible, molecular fluorescence and proton nuclear magnetic resonance spectroscopic techniques. All extracts contained a complex mixture of hydroxylated compounds and carboxylic acids, with a predominance of the aliphatic component and a low content of the aromatic component. The inorganic fraction of the rainwater was characterized by determining the concentrations of the following ionic species: H+, NH4+, Cl-, NO3-, SO42-. The results were compared with those obtained for rain collected at the same site between 1986 and 1989 and showed that, of all the ions determined, only the NO3- concentration increased (roughly doubling) over 20 years, which was attributed to the increase in vehicle and industrial emissions in the sampling area. During the sampling period, about 80% of the precipitation was associated with oceanic air masses, while the remainder was related to air masses with anthropogenic and terrestrial influence. In general, for the organic and inorganic rainwater fractions analysed, the chemical content was lower for samples associated with maritime air masses than for samples with terrestrial and anthropogenic contributions.

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Abstract:

The development and applications of thermoset polymeric composites, namely fiber reinforced polymers (FRP), have shifted more and more into the mass market over the last decades [1]. Production and consumption have increased tremendously, mainly in the construction, transportation and automobile sectors [2, 3]. Despite the many successful uses of thermoset composite materials, recycling of byproducts and end-of-life products remains a difficult issue. The perceived lack of recyclability of composite materials is now increasingly important and is seen as a key barrier to the development, or even continued use, of these materials in some markets.

Abstract:

The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming to provide the best possible generalization and predictive ability rather than concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from data, providing the optimum mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide an efficient means to model local anomalies that may typically arise at an early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, which is a possible limitation of the method for implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of 137Cs activity from measurements taken in the Briansk region following the Chernobyl accident.

Abstract:

Mobile augmented reality applications are increasingly utilized as a medium for enhancing learning and engagement in history education. Although these digital devices facilitate learning through immersive and appealing experiences, their design should be driven by theories of learning and instruction. We provide an overview of an evidence-based approach to optimizing the development of mobile augmented reality applications that teach students about history. Our research aims to evaluate and model the impact of design parameters on learning and engagement. The research program is interdisciplinary in that we apply techniques derived from design-based experiments and educational data mining. We outline the methodological and analytical techniques and discuss the implications of the anticipated findings.

Abstract:

The curse of dimensionality is a major problem in the fields of machine learning, data mining and knowledge discovery. Exhaustive search for the optimal subset of relevant features in a high-dimensional dataset is NP-hard. Sub-optimal population-based stochastic algorithms such as GP and GA are good choices for searching through large search spaces, and are usually more feasible than exhaustive and deterministic search algorithms. On the other hand, population-based stochastic algorithms often suffer from premature convergence on mediocre sub-optimal solutions. The Age Layered Population Structure (ALPS) is a novel metaheuristic for overcoming the problem of premature convergence in evolutionary algorithms and for improving search in the fitness landscape. The ALPS paradigm uses an age measure to control breeding and competition between individuals in the population. This thesis uses a modification of the ALPS GP strategy called Feature Selection ALPS (FSALPS) for feature subset selection and classification in varied supervised learning tasks. FSALPS uses a novel frequency-count system to rank features in the GP population based on evolved feature frequencies. The ranked features are translated into probabilities, which are used to control evolutionary processes such as terminal-symbol selection for the construction of GP trees and sub-trees. The FSALPS metaheuristic continuously refines the feature subset selection process while simultaneously evolving efficient classifiers through a non-converging evolutionary process that favors selection of features with high discrimination of class labels. We investigated and compared the performance of canonical GP, ALPS and FSALPS on high-dimensional benchmark classification datasets, including a hyperspectral image. Using Tukey's HSD test at a 95% confidence level, ALPS and FSALPS dominated canonical GP in evolving smaller but efficient trees with less bloat. FSALPS significantly outperformed canonical GP, ALPS and some feature selection strategies reported in the related literature on dimensionality reduction.
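The frequency-count mechanism can be sketched in a few lines. This is a hedged illustration, not the thesis's implementation: the population is toy data standing in for real GP trees, and the helper names are invented. Feature occurrences across the population are counted, normalized into probabilities, and used to bias terminal-symbol selection:

```python
import random
from collections import Counter

# Toy stand-in for a GP population: each "tree" is just the list of feature
# terminals it contains (real GP trees would also hold function nodes).
population = [
    ["f3", "f1", "f3"], ["f3", "f7"], ["f1", "f3", "f5"], ["f3"],
]

# Count how often each feature appears across the evolved population,
# then turn counts into selection probabilities.
counts = Counter(f for tree in population for f in tree)
total = sum(counts.values())
probs = {f: c / total for f, c in counts.items()}

# Biased terminal-symbol selection for constructing new trees/sub-trees:
# frequently evolved features are proportionally more likely to be chosen.
rng = random.Random(42)
features, weights = zip(*sorted(probs.items()))
new_terminal = rng.choices(features, weights=weights, k=1)[0]
```

Because the probabilities are recomputed as the population evolves, the feature ranking is refined continuously rather than fixed up front, which is the property the abstract attributes to FSALPS.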

Abstract:

Doctoral essay presented to the Faculté des arts et des sciences in fulfillment of the requirements for the degree of Doctor of Clinical Psychology (D.Psy.)

Abstract:

Triple quadrupole mass spectrometers coupled with high-performance liquid chromatography are workhorses in quantitative bioanalysis. They provide substantial benefits for trace analysis, including reproducibility, sensitivity and selectivity. Selected Reaction Monitoring allows targeted assay development, but the data sets generated contain very limited information. Data mining and analysis of non-targeted high-resolution mass spectrometry profiles of biological samples offer the opportunity to perform more exhaustive assessments, including quantitative and qualitative analysis. The objectives of this study were to test method precision and accuracy, to statistically compare bupivacaine drug concentrations in real study samples, and to verify whether high-resolution, accurate-mass data collected in scan mode permit retrospective data analysis, more specifically the extraction of metabolite-related information. The precision and accuracy data obtained using both instruments were equivalent. Overall, accuracy ranged from 106.2 to 113.2% and precision from 1.0 to 3.7%. Statistical comparison using a linear regression between both methods revealed a coefficient of determination (R2) of 0.9996 and a slope of 1.02, demonstrating a very strong correlation between the methods. Individual sample comparisons showed differences from -4.5% to 1.6%, well within the accepted analytical error. Moreover, post-acquisition extracted ion chromatograms at m/z 233.1648 ± 5 ppm (M-56) and m/z 305.2224 ± 5 ppm (M+16) revealed the presence of desbutyl-bupivacaine and three distinct hydroxylated bupivacaine metabolites. Post-acquisition analysis allowed us to produce semi-quantitative evaluations of the concentration-time profiles of bupivacaine metabolites.
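The ± 5 ppm extraction windows quoted above are simple to compute. A minimal sketch (the two m/z values come from the abstract; the helper itself is an illustrative assumption, not the study's software):

```python
# An extracted-ion-chromatogram window of +/- p ppm around a target m/z
# spans mz * (1 - p/1e6) to mz * (1 + p/1e6).
def ppm_window(mz, ppm=5.0):
    delta = mz * ppm / 1e6
    return mz - delta, mz + delta

lo_m56, hi_m56 = ppm_window(233.1648)   # desbutyl-bupivacaine (M-56)
lo_m16, hi_m16 = ppm_window(305.2224)   # hydroxylated bupivacaine (M+16)
```

At m/z 233.1648 the 5 ppm half-width is only about 0.0012 Th, which is why such narrow windows require the mass accuracy of a high-resolution instrument rather than a triple quadrupole.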

Abstract:

In this paper, we discuss Conceptual Knowledge Discovery in Databases (CKDD) in its connection with Data Analysis. Our approach is based on Formal Concept Analysis, a mathematical theory which has been developed and proven useful during the last 20 years. Formal Concept Analysis has led to a theory of conceptual information systems which has been applied by using the management system TOSCANA in a wide range of domains. In this paper, we use such an application in database marketing to demonstrate how methods and procedures of CKDD can be applied in Data Analysis. In particular, we show the interplay and integration of data mining and data analysis techniques based on Formal Concept Analysis. The main concern of this paper is to explain how the transition from data to knowledge can be supported by a TOSCANA system. To clarify the transition steps we discuss their correspondence to the five levels of knowledge representation established by R. Brachman and to the steps of empirically grounded theory building proposed by A. Strauss and J. Corbin.
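The Formal Concept Analysis machinery underlying CKDD can be made concrete with a tiny example. The context below is invented for illustration (it is not from the TOSCANA application in the paper): a formal context is an object-attribute incidence relation, and the two derivation ("prime") operators map between object sets and attribute sets; a formal concept is a pair fixed by applying them in turn.

```python
# A formal context: objects mapped to the attributes they possess.
context = {
    "cheetah": {"predator", "fast"},
    "gazelle": {"prey", "fast"},
    "tortoise": {"prey"},
}
attributes = set().union(*context.values())

def objects_prime(objs):
    """Derivation: attributes shared by all objects in objs."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def attributes_prime(attrs):
    """Derivation: objects possessing all attributes in attrs."""
    return {o for o, a in context.items() if attrs <= a}

# A formal concept is a pair (extent, intent) with extent' = intent
# and intent' = extent; here we derive the concept generated by "fast".
extent = attributes_prime({"fast"})
intent = objects_prime(extent)
```

The set of all such concepts, ordered by extent inclusion, forms the concept lattice that systems like TOSCANA render as line diagrams for conceptual data analysis.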

Abstract:

Semantic Web Mining aims at combining the two fast-developing research areas Semantic Web and Web Mining. This survey analyzes the convergence of trends from both areas: an increasing number of researchers is working on improving the results of Web Mining by exploiting semantic structures in the Web, and they make use of Web Mining techniques for building the Semantic Web. Last but not least, these techniques can be used for mining the Semantic Web itself. The Semantic Web is the second-generation WWW, enriched by machine-processable information which supports users in their tasks. Given the enormous size even of today's Web, it is impossible to manually enrich all of these resources. Therefore, automated schemes for learning the relevant information are increasingly being used. Web Mining aims at discovering insights about the meaning of Web resources and their usage. Given the primarily syntactical nature of the data being mined, the discovery of meaning is impossible based on these data only. Therefore, formalizations of the semantics of Web sites and navigation behavior are becoming more and more common. Furthermore, mining the Semantic Web itself is another upcoming application. We argue that the two areas Web Mining and Semantic Web need each other to fulfill their goals, but that the full potential of this convergence is not yet realized. This paper gives an overview of where the two areas meet today, and sketches ways in which a closer integration could be profitable.

Abstract:

We study the role of natural resource windfalls in explaining the efficiency of public expenditures. Using a rich dataset of expenditures and public good provision for 1,836 municipalities in Peru for the period 2001-2010, we estimate a non-monotonic relationship between the efficiency of public good provision and the level of natural resource transfers. Local governments that were extremely favored by the boom in mineral prices were more efficient in using fiscal windfalls, whereas those benefiting from modest transfers were less efficient. These results can be explained by the increase in political competition associated with the boom. However, the fact that increases in efficiency were related to reductions in public good provision casts doubt on the beneficial effects of political competition in promoting efficiency.

Abstract:

How do resource booms affect human capital accumulation? We exploit time and spatial variation generated by the commodity boom across local governments in Peru to measure the effect of natural resources on human capital formation. We explore the effect of both mining production and tax revenues on test scores, finding a substantial and statistically significant effect for the latter. Transfers to local governments from mining tax revenues are linked to an increase in math test scores of around 0.23 standard deviations. We find that the hiring of permanent teachers, as well as increases in parental employment and improvements in the health outcomes of adults and children, are plausible mechanisms for such a large effect on learning. These findings suggest that redistributive policies could facilitate the accumulation of human capital in resource-abundant developing countries as a way to avoid the natural resource curse.