874 results for granule mining
Abstract:
In this paper, we describe NewsCATS (news categorization and trading system), a system implemented to predict stock price trends for the time immediately after the publication of press releases. NewsCATS consists mainly of three components. The first component retrieves relevant information from press releases through the application of text preprocessing techniques. The second component sorts the press releases into predefined categories. Finally, appropriate trading strategies are derived by the third component by means of the earlier categorization. The findings indicate that a categorization of press releases is able to provide additional information that can be used to forecast stock price trends, but that an adequate trading strategy is essential for the results of the categorization to be fully exploited.
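The three NewsCATS components described above can be sketched, purely illustratively, as a pipeline: preprocess the text, assign a predefined category, and derive a trading signal from it. The keyword lists, category names, and trading rules below are invented stand-ins, not the paper's actual feature sets or strategies.

```python
# Hypothetical sketch of a NewsCATS-style three-component pipeline.
# Keyword lists and categories are illustrative assumptions.
import re

POSITIVE = {"profit", "growth", "record", "upgrade", "dividend"}
NEGATIVE = {"loss", "lawsuit", "recall", "downgrade", "warning"}

def preprocess(text: str) -> list[str]:
    """Component 1: lowercase and tokenize the press release."""
    return re.findall(r"[a-z]+", text.lower())

def categorize(tokens: list[str]) -> str:
    """Component 2: sort the release into one of three predefined categories."""
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "good" if score > 0 else "bad" if score < 0 else "neutral"

def trading_signal(category: str) -> str:
    """Component 3: derive a trading strategy from the category."""
    return {"good": "buy", "bad": "sell", "neutral": "hold"}[category]

release = "Company reports record profit and raises dividend"
signal = trading_signal(categorize(preprocess(release)))
```

As the abstract notes, the categorization alone is not enough: the mapping in component 3 (here a trivial lookup) is where an adequate trading strategy would have to do real work.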
Abstract:
Hexanucleotide repeat expansions in the C9ORF72 gene are causally associated with frontotemporal lobar dementia (FTLD) and/or amyotrophic lateral sclerosis (ALS). The physiological function of the normal C9ORF72 protein remains unclear. In this study, we characterized the subcellular localization of C9ORF72 to processing bodies (P-bodies) and its recruitment to stress granules (SGs) upon stress-related stimuli. Gain-of-function and loss-of-function experiments revealed that the long isoform of the C9ORF72 protein regulates SG assembly. CRISPR/Cas9-mediated knockdown of C9ORF72 completely abolished SG formation, negatively impacted the expression of SG-associated proteins such as TIA-1 and HuR, and accelerated cell death. Loss of C9ORF72 expression further compromised cellular recovery responses after the removal of stress. Additionally, mimicking the pathogenic condition via the expression of the hexanucleotide expansion upstream of C9ORF72 impaired the expression of the C9ORF72 protein, caused an abnormal accumulation of RNA foci, and led to the spontaneous formation of SGs. Our study identifies a novel function for normal C9ORF72 in SG assembly and sheds light on how the mutant expansions might impair SG formation and cellular stress-related adaptive responses.
Abstract:
Academic and industrial research in the late 90s brought about an exponential explosion of DNA sequence data. Automated expert systems are being created to help biologists extract patterns, trends and links from this ever-deepening ocean of information. Two such systems, aimed at retrieving and subsequently utilizing phylogenetically relevant information, have been developed in this dissertation, the major objective of which was to automate the often difficult and confusing phylogenetic reconstruction process.

Popular phylogenetic reconstruction methods, such as distance-based methods, attempt to find an optimal tree topology (one that reflects the relationships among related sequences and their evolutionary history) by searching through the topology space. Various compromises between the fast (but incomplete) and exhaustive (but computationally prohibitive) search heuristics have been suggested. An intelligent compromise algorithm that relies on a flexible “beam” search principle from the Artificial Intelligence domain and uses pre-computed local topology reliability information to continuously adjust the beam search space is described in the second chapter of this dissertation.

However, sometimes even a (virtually) complete distance-based method is inferior to the significantly more elaborate (and computationally expensive) maximum likelihood (ML) method. In fact, depending on the nature of the sequence data in question, either method might prove superior. Therefore, it is difficult (even for an expert) to tell a priori which phylogenetic reconstruction method (distance-based, ML, or perhaps maximum parsimony (MP)) should be chosen for any particular data set.

A number of factors, often hidden, influence the performance of a method. For example, it is generally understood that for a phylogenetically “difficult” data set, more sophisticated methods (e.g., ML) tend to be more effective and thus should be chosen. However, it is the interplay of many factors that one needs to consider in order to avoid choosing an inferior method (potentially a costly mistake, both in terms of computational expense and in terms of reconstruction accuracy).

Chapter III of this dissertation details a phylogenetic reconstruction expert system that automatically selects the proper method. It uses a classifier (a Decision Tree-inducing algorithm) to map a new data set to the proper phylogenetic reconstruction method.
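The beam-search principle from the second chapter can be illustrated with a minimal generic skeleton: at each expansion step, keep only the `beam_width` best partial solutions. The toy objective below (build a bit string maximizing its number of 1s) is an invented stand-in for real topology scoring, and the beam width is fixed here, whereas the dissertation's algorithm adjusts it continuously from local topology reliability.

```python
# Generic beam search: expand every candidate in the beam, score the
# results, keep only the top `beam_width`. Objective and names are
# illustrative assumptions, not the dissertation's actual criterion.
def beam_search(start, expand, score, beam_width, steps):
    beam = [start]
    for _ in range(steps):
        candidates = [c for s in beam for c in expand(s)]
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]   # prune to the k best partial topologies
    return beam[0]

# Toy problem standing in for topology search: append bits, maximize 1s.
expand = lambda s: [s + "0", s + "1"]
score = lambda s: s.count("1")
best = beam_search("", expand, score, beam_width=3, steps=5)
```

With `beam_width=1` this degenerates to greedy search; letting it grow without bound recovers the exhaustive search, which is the compromise the chapter exploits.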
Abstract:
At the end of the communist era, which can be characterised as a closed social experiment, Romania found itself in the middle of a globalization process. Its industrial capacities were considerably reduced through poor and spendthrift management. There was a mass exodus of the labour force abroad, and the educational background of those who remained was no longer in step with the labour market. On these grounds, the vectors of globalization, in the form of foreign investments, entered Romania effortlessly. There were even local communities where the arrival of foreign investors was awaited like a second coming of Christ. This is the context in which a Canadian company set forth the Rosia Montana Gold Corporation mining project. The implementation of the project should have started in 2005. Nevertheless, the project has not been effectively launched yet. This situation stems from what we call Romanian glocalization, namely a specific confrontation between the global and the local on Romanian soil.
Abstract:
The panel is divided into three sections: historical mining, mining heritage, and museums.
A methodological model to assist the optimization and risk management of mining investment decisions
Abstract:
Identifying, quantifying, and minimizing the technical risks associated with investment decisions is a key challenge for mineral industry decision makers and investors. However, risk analysis in most bankable mine feasibility studies is based on stochastic modelling of the project “Net Present Value” (NPV), which in most cases fails to provide decision makers with a truly comprehensive analysis of the risks associated with technical and management uncertainty and, as a result, is of little use for risk management and project optimization. This paper presents a value-chain risk management approach in which project risk is evaluated at each step of the project lifecycle, from exploration to mine closure, and risk management is performed as part of a stepwise value-added optimization process.
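The stochastic NPV modelling that the paper argues is insufficient on its own looks, in its simplest form, like the Monte Carlo sketch below: sample perturbed yearly cash flows, discount them, and inspect the NPV distribution. All figures are invented for illustration; a value-chain approach would instead model each lifecycle stage (exploration through closure) separately.

```python
# Minimal Monte Carlo NPV sketch with normally perturbed cash flows.
# All parameter values are illustrative assumptions.
import random

def simulate_npv(cash_flows, discount_rate, volatility, n_sims, seed=0):
    """Return a list of simulated NPVs; cash flows get i.i.d. Gaussian noise."""
    rng = random.Random(seed)
    npvs = []
    for _ in range(n_sims):
        npv = sum(cf * (1 + rng.gauss(0, volatility)) / (1 + discount_rate) ** t
                  for t, cf in enumerate(cash_flows))
        npvs.append(npv)
    return npvs

# Year 0 capex, then five years of operating cash flow (invented numbers).
flows = [-100.0, 30.0, 35.0, 40.0, 40.0, 35.0]
npvs = simulate_npv(flows, discount_rate=0.10, volatility=0.15, n_sims=2000)
mean_npv = sum(npvs) / len(npvs)
prob_loss = sum(n < 0 for n in npvs) / len(npvs)
```

Note the limitation the paper points out: this yields one aggregate NPV distribution, with no visibility into which lifecycle step contributes which risk, and hence no handle for stepwise optimization.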
Abstract:
Twenty production blasts in two open pit mines were monitored, in rocks of medium to very high strength. Three different blasting agents (ANFO, watergel and emulsion blend) were used, with powder factors ranging between 0.88 and 1.45 kg/m3. The excavators were front loaders and rope shovels. Mechanical properties of the rock, blasting characteristics and mucking rates were carefully measured. From these data, a model for calculating excavator productivity is developed, in which the production rate is the product of an ideal, maximum productivity rate and an operating efficiency. The maximum rate is a function of the dipper capacity, and the efficiency is a function of rock density, strength, and explosive energy concentration in the rock. The model is statistically significant and explains up to 92% of the variance of the production rate measurements.
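The structure of that model (ideal rate times operating efficiency) can be encoded as below. The functional forms and every coefficient here are invented placeholders; the paper fits its own from the twenty monitored blasts.

```python
# Illustrative encoding of production_rate = max_rate(dipper) * efficiency(...).
# All coefficients are invented assumptions, not the fitted model.
def max_rate(dipper_capacity_m3: float) -> float:
    """Ideal productivity (m3/h), assumed proportional to dipper size."""
    return 180.0 * dipper_capacity_m3

def efficiency(density: float, strength_mpa: float, energy_kj_per_m3: float) -> float:
    """Operating efficiency in (0, 1]: stronger/denser rock lowers it,
    more explosive energy per cubic metre raises it (illustrative form)."""
    eff = (0.5 + 0.3 * (energy_kj_per_m3 / 4000.0)
               - 0.2 * (strength_mpa / 200.0)
               - 0.05 * (density / 2.7 - 1.0))
    return max(0.1, min(1.0, eff))

def production_rate(dipper_m3, density, strength_mpa, energy) -> float:
    return max_rate(dipper_m3) * efficiency(density, strength_mpa, energy)

rate = production_rate(dipper_m3=30.0, density=2.7, strength_mpa=150.0, energy=3000.0)
```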
Abstract:
Due to recent scientific and technological advances in information systems, it is now possible to perform almost every application on a mobile device. The desire to make such devices more intelligent opens an opportunity to design data mining algorithms that are able to execute autonomously on local devices and provide the device with knowledge. The problem behind autonomous mining is the proper configuration of the algorithm to produce the most appropriate results. Contextual information, together with resource information about the device, has a strong impact both on the feasibility of a particular execution and on the production of the proper patterns. On the other hand, the performance of the algorithm, expressed in terms of efficacy and efficiency, depends highly on the features of the dataset to be analyzed together with the parameter values of a particular implementation of the algorithm. However, few existing approaches deal with the autonomous configuration of data mining algorithms, and those that do ignore contextual or resource information. Both issues are of particular significance for social network applications. In fact, the widespread use of social networks, and consequently the amount of information shared, has made modeling context in social applications a priority. Resource consumption also plays a crucial role on such platforms, as users access social networks mainly from their mobile devices. This PhD thesis addresses the aforementioned open issues, focusing on i) analyzing the behavior of algorithms, ii) mapping contextual and resource information to find the most appropriate configuration, and iii) applying the model to the case of a social recommender. Four main contributions are presented:
- The EE-Model: predicts the behavior of a data mining algorithm in terms of the resources consumed and the accuracy of the mining model it will obtain.
- The SC-Mapper: maps a situation defined by the context and resource state to a data mining configuration.
- SOMAR: a social activity (events and informal ongoings) recommender for mobile devices.
- D-SOMAR: an evolution of SOMAR which incorporates the configurator in order to provide updated recommendations.
Finally, the experimental validation of the proposed contributions using synthetic and real datasets allows us to achieve the objectives and answer the research questions proposed for this dissertation.
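What a component like the SC-Mapper does, in the abstract's terms, is map a context/resource situation to a mining configuration. The toy mapper below makes that concrete; every rule, threshold, and parameter name is a hypothetical illustration, not the thesis's actual mapping.

```python
# Hypothetical SC-Mapper-style rule set: device situation -> mining config.
# Rules, thresholds, and parameter names are invented for illustration.
def map_situation(battery_pct: float, network: str, n_items: int) -> dict:
    """Choose algorithm parameters from the device's context and resources."""
    config = {"algorithm": "clustering", "max_iterations": 100, "sample_frac": 1.0}
    if battery_pct < 20:            # low battery: trade accuracy for energy
        config["max_iterations"] = 20
        config["sample_frac"] = 0.25
    if network == "cellular" and n_items > 10_000:   # avoid heavy work on metered links
        config["sample_frac"] = min(config["sample_frac"], 0.5)
    return config

cfg = map_situation(battery_pct=15, network="cellular", n_items=50_000)
```

A learned mapper (as in the thesis) would induce such rules from logged executions rather than hand-code them.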
Abstract:
Expert systems are built from knowledge traditionally elicited from the human expert. It is precisely knowledge elicitation from the expert that is the bottleneck in expert system construction. On the other hand, a data mining system, which automatically extracts knowledge, needs expert guidance on the successive decisions to be made in each of the system phases. In this context, expert knowledge and data mining discovered knowledge can cooperate, maximizing their individual capabilities: data mining discovered knowledge can be used as a complementary source of knowledge for the expert system, whereas expert knowledge can be used to guide the data mining process. This article summarizes different examples of systems where there is cooperation between expert knowledge and data mining discovered knowledge and reports our experience of such cooperation gathered from a medical diagnosis project called Intelligent Interpretation of Isokinetics Data, which we developed. From that experience, a series of lessons were learned throughout project development. Some of these lessons are generally applicable and others pertain exclusively to certain project types.
Abstract:
Acquired brain injury (ABI) is one of the leading causes of death and disability in the world and is associated with high health care costs as a result of the acute treatment and long-term rehabilitation involved. Different algorithms and methods have been proposed to predict the effectiveness of rehabilitation programs. In general, research has focused on predicting the overall improvement of patients with ABI. The purpose of this study is the novel application of data mining (DM) techniques to predict the outcomes of cognitive rehabilitation in patients with ABI. We generate three predictive models that allow us to obtain new knowledge to evaluate and improve the effectiveness of the cognitive rehabilitation process. Decision tree (DT), multilayer perceptron (MLP) and general regression neural network (GRNN) models have been used to construct the predictions. 10-fold cross-validation was carried out in order to test the algorithms, using the Institut Guttmann Neurorehabilitation Hospital (IG) patient database. Performance of the models was tested through specificity, sensitivity and accuracy analysis and confusion matrix analysis. The experimental results obtained by the DT are clearly superior, with an average prediction accuracy of 90.38%, while the MLP and GRNN obtained 78.7% and 75.96%, respectively. This study increases our knowledge of the factors contributing to the recovery of ABI patients and helps estimate treatment efficacy in individual patients.
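The evaluation metrics the study reports (specificity, sensitivity, accuracy from a confusion matrix) are computed as below for the binary case. The example counts are invented, not the Institut Guttmann data.

```python
# Specificity/sensitivity/accuracy from a binary confusion matrix.
# The counts in the example are illustrative, not the study's data.
def binary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP);
    accuracy = (TP+TN)/all."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

m = binary_metrics(tp=45, fp=3, fn=5, tn=47)
```

In a 10-fold cross-validation these figures would be computed on each held-out fold and then averaged, which is how a single accuracy such as 90.38% is obtained.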
Abstract:
The study examines the Capital Asset Pricing Model (CAPM) for the mining sector using weekly stock returns from 27 companies traded on the New York Stock Exchange (NYSE) or on the London Stock Exchange (LSE) for the period from December 2008 to December 2010. The results support the use of the CAPM for the allocation of risk to companies. Most companies involved in precious metals (particularly gold), which have a beta value of less than unity (Table 1), acted as safe-haven assets during the financial crisis. The values of R² show that the fitted models have limited explanatory power (R² < 70%). The estimated beta coefficients are not sufficient to determine the expected returns on securities, but the results of the tests conducted on sample data for the period analysed do not appear to clearly reject the CAPM.
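The beta estimation underlying such CAPM tests is an ordinary-least-squares slope of the stock's returns on the market's. The return series below are fabricated for illustration; the study uses weekly NYSE/LSE data.

```python
# OLS beta: cov(stock, market) / var(market). Data are fabricated.
def capm_beta(stock_returns, market_returns):
    n = len(market_returns)
    ms = sum(stock_returns) / n
    mm = sum(market_returns) / n
    cov = sum((s - ms) * (m - mm) for s, m in zip(stock_returns, market_returns))
    var = sum((m - mm) ** 2 for m in market_returns)
    return cov / var

market = [0.01, -0.02, 0.015, 0.005, -0.01]
stock = [2 * r for r in market]     # a stock moving 2x with the market: beta = 2
beta = capm_beta(stock, market)
```

A gold producer behaving as a safe haven would, as in the study's Table 1, show a beta below 1 under this estimator.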
Abstract:
Ubiquitous computing software needs to be autonomous so that essential decisions, such as how to configure its particular execution, are self-determined. Moreover, data mining serves an important role for ubiquitous computing by providing intelligence to several types of ubiquitous computing applications. Thus, automating ubiquitous data mining is also crucial. We focus on the problem of automatically configuring the execution of a ubiquitous data mining algorithm. In our solution, we generate configuration decisions in a resource-aware and context-aware manner, since the algorithm executes in an environment in which the context often changes and computing resources are often severely limited. We propose to analyze the execution behavior of the data mining algorithm by mining its past executions. By doing so, we discover the effects of resource and context states, as well as parameter settings, on the data mining quality. We argue that a classification model is appropriate for predicting the behavior of an algorithm's execution, and we concentrate on decision tree classifiers. We also define a taxonomy of data mining quality so that the tradeoff between prediction accuracy and classification specificity of each behavior model, each of which classifies by a different abstraction of quality, can be scored for model selection. Behavior model constituents and class label transformations are formally defined, and experimental validation of the proposed approach is also performed.
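The model-selection idea in that last point, scoring the tradeoff between prediction accuracy and label granularity across models built at different quality abstractions, can be illustrated with an invented scoring rule. The weight, the candidate accuracies, and the score's functional form are all assumptions, not the paper's actual criterion.

```python
# Illustrative tradeoff score for behavior models at different quality
# abstractions: reward accuracy, plus a bonus growing with label
# granularity. Weights and candidate figures are invented.
import math

def model_score(accuracy: float, n_classes: int, weight: float = 0.3) -> float:
    """Finer-grained quality labels are more useful (log2 bonus) but
    typically harder to predict (lower accuracy)."""
    return accuracy + weight * math.log2(n_classes)

candidates = {
    "coarse (2 quality classes)": model_score(0.92, 2),
    "medium (4 quality classes)": model_score(0.80, 4),
    "fine (8 quality classes)":   model_score(0.45, 8),
}
best = max(candidates, key=candidates.get)
```

The example shows the intended behavior: the medium abstraction wins because its accuracy loss is more than offset by the extra specificity, while the fine-grained model loses too much accuracy to be worth its granularity.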