939 results for 650200 Mining and Extraction


Relevance:

100.00%

Publisher:

Abstract:

Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured. Data can take the form of text, categorical values or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. A number of classification algorithms are now in practical use. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probability methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbors (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include associative classification (Liu et al., 1998) and ensemble classification (Tumer, 1996).
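
For concreteness, a minimal sketch of the classification task described above, using the k-nearest-neighbors method cited (Duda & Hart, 1973); the dataset and the choice of k are illustrative only, and scikit-learn is assumed to be available:

# Minimal k-nearest-neighbors classification sketch (Duda & Hart, 1973).
# Assumes scikit-learn is installed; dataset and k are illustrative.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5)  # k is a tuning parameter
clf.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))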

Relevance:

100.00%

Publisher:

Abstract:

Two hazard risk assessment matrices for the ranking of occupational health risks are described. The qualitative matrix uses qualitative measures of probability and consequence to determine risk assessment codes for hazard-disease combinations. A walk-through survey of an underground metalliferous mine and concentrator is used to demonstrate how the qualitative matrix can be applied to determine priorities for the control of occupational health hazards. The semi-quantitative matrix uses attributable risk as a quantitative measure of probability and uses qualitative measures of consequence. A practical application of this matrix is the determination of occupational health priorities using existing epidemiological studies. Calculated attributable risks from epidemiological studies of hazard-disease combinations in mining and minerals processing are used as examples. These historic response data do not reflect the risks associated with current exposures. A method using current exposure data, known exposure-response relationships and the semi-quantitative matrix is proposed for more accurate and current risk rankings.
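
As an illustration of how such matrices operate, a hedged Python sketch follows; the category labels, code assignments, and thresholds are hypothetical, not taken from the paper, and the attributable-risk helper uses the standard epidemiological attributable fraction among the exposed:

PROBABILITY = ["rare", "unlikely", "possible", "likely", "almost certain"]
CONSEQUENCE = ["negligible", "minor", "moderate", "major", "catastrophic"]

def risk_assessment_code(probability: str, consequence: str) -> int:
    """Map qualitative ratings to a priority code, 1 (highest) to 4 (lowest).
    Thresholds are hypothetical, for illustration only."""
    score = PROBABILITY.index(probability) + CONSEQUENCE.index(consequence)
    if score >= 6:
        return 1
    if score >= 4:
        return 2
    if score >= 2:
        return 3
    return 4

def attributable_risk(incidence_exposed: float, incidence_unexposed: float) -> float:
    """Attributable fraction among the exposed, a standard epidemiological
    measure of the kind used as the probability axis of the
    semi-quantitative matrix."""
    return (incidence_exposed - incidence_unexposed) / incidence_exposed

# Example: a hazard-disease combination rated "likely"/"major" gets top priority.
print(risk_assessment_code("likely", "major"))   # -> 1
print(attributable_risk(0.12, 0.03))             # -> 0.75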

Relevance:

100.00%

Publisher:

Abstract:

A numerical modelling strategy has been developed in order to quantify the magnitude of induced stresses at the boundaries of production level and undercut level drifts for various in situ stress environments and undercut scenarios. The results of the stress modelling were in line with qualitative experiential guidelines and a limited number of induced stress measurements documented from caving sites. A number of stress charts were developed which quantify the maximum boundary stresses in drift roofs for varying in situ stress regimes, depths and undercut scenarios. This enabled many of the experiential guidelines to be quantified and bounded. A limited number of case histories of support and support performance in cave mine drifts were compared to support recommendations using the NGI classification system. The stress charts were used to estimate the Stress Reduction Factor for this system. The back-analyses suggested that the NGI classification system might be able to give preliminary estimates of support requirements in caving mines, with modifications relating to rock bolt length and the support of production level intersections.
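
For reference, a minimal sketch of the NGI Q-system rock mass quality index referred to above; the formula is the standard published one, while the parameter values are illustrative only:

def q_value(rqd: float, jn: float, jr: float, ja: float,
            jw: float, srf: float) -> float:
    """Q = (RQD/Jn) * (Jr/Ja) * (Jw/SRF): block size, inter-block shear
    strength, and active stress terms of the NGI Q-system."""
    return (rqd / jn) * (jr / ja) * (jw / srf)

# Raising the Stress Reduction Factor (SRF) to reflect high induced boundary
# stresses lowers Q and hence increases the estimated support requirement.
print(q_value(rqd=75, jn=9, jr=1.5, ja=2, jw=1.0, srf=1.0))  # 6.25 ("fair")
print(q_value(rqd=75, jn=9, jr=1.5, ja=2, jw=1.0, srf=5.0))  # 1.25 ("poor")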

Relevance:

100.00%

Publisher:

Abstract:

Secreted anterior adhesives, used for temporary attachment to epithelial surfaces of fishes (skin and gills) by some monogenean (platyhelminth) parasites, have been partially characterised. The adhesive is composed of protein. Amino acid composition has been determined for seven monopisthocotylean monogeneans. Six of these belong to the Monocotylidae and one species, Entobdella soleae (van Beneden et Hesse, 1864) Johnston, 1929, is a member of the Capsalidae. Histochemistry shows that the adhesive does not contain polysaccharides, including acid mucins, or lipids. The adhesive, both before secretion and in its secreted form, contains no dihydroxyphenylalanine (dopa). Secreted adhesive is highly insoluble, but has a soft consistency and is mechanically removable from glass surfaces. Generally there are high levels of glycine and alanine, low levels of tyrosine and methionine, and histidine is often absent. However, amino acid content varies between species, with the biggest differences evident when the monocotylid monogeneans are compared with E. soleae. Monogenean adhesive shows similarity in amino acid profile to adhesives from starfish, limpets and barnacles. However, there are some differences in individual amino acids between the temporary adhesive secretions of the monogeneans on the one hand and the starfish and limpets on the other. These differences may reflect the fact that monogeneans, unlike starfish and barnacles, attach to living tissue (tissue adhesion). A method of extracting unsecreted adhesive was investigated for use in further characterisation studies of monogenean glues.

Relevance:

100.00%

Publisher:

Abstract:

The most widely used method for predicting the onset of continuous caving is Laubscher's caving chart. A detailed examination of this method was undertaken which concluded that it had limitations which may impact on results, particularly when dealing with stronger rock masses that are outside current experience. These limitations relate to inadequate guidelines for adjustment factors to the rock mass rating (RMR), concerns about the position on the chart of critical case history data, undocumented changes to the method, and an inadequate number of data points to be confident of stability boundaries. A review was undertaken of the application and reliability of a numerical method of assessing cavability. The review highlighted a number of issues which, at this stage, make numerical continuum methods problematic for predicting cavability, in particular their sensitivity to input parameters that are difficult to determine accurately, and mesh dependency. An extended version of the Mathews method for open stope design was developed as an alternative method of predicting the onset of continuous caving. A number of caving case histories were collected and analyzed, and a caving boundary was delineated statistically on the Mathews stability graph. The definition of the caving boundary was aided by the existence of a large and wide-ranging stability database from non-caving mines. A caving rate model was extrapolated from the extended Mathews stability graph but could only be partially validated due to a lack of reliable data.
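
For orientation, a minimal sketch of the two standard inputs to the Mathews stability graph mentioned above; the factor values are illustrative, and the design charts for A, B and C are in the published literature:

def stability_number(q_prime: float, a: float, b: float, c: float) -> float:
    """N = Q' * A * B * C: modified Q times the rock stress (A), joint
    orientation (B) and gravity (C) adjustment factors."""
    return q_prime * a * b * c

def hydraulic_radius(width: float, length: float) -> float:
    """Shape factor S = area / perimeter for a rectangular surface."""
    return (width * length) / (2.0 * (width + length))

# The point (S, N) is plotted against the statistically delineated
# stability and caving boundaries on the graph.
print(stability_number(q_prime=10.0, a=0.8, b=0.3, c=2.0))  # N = 4.8
print(hydraulic_radius(30.0, 40.0))                         # S ~ 8.57 m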

Relevance:

100.00%

Publisher:

Abstract:

There has been a resurgence of interest in the mean trace length estimator of Pahl for window sampling of traces. The estimator has been dealt with by Mauldon and by Zhang and Einstein in recent publications. The estimator is a very useful one in that it is non-parametric. However, despite some discussion regarding the statistical distribution of the estimator, none of the recent works, nor the original work by Pahl, provides a rigorous basis for the determination of a confidence interval for the estimator, or of a confidence region for the estimator and the corresponding estimator of trace spatial intensity in the sampling window. This paper shows, by consideration of a simplified version of the problem but without loss of generality, that the estimator is in fact the maximum likelihood estimator (MLE) and that it can be considered essentially unbiased. As the MLE, it possesses the least variance of all estimators, and confidence intervals or regions should therefore be available through application of classical ML theory. It is shown that valid confidence intervals can in fact be determined. The results of the work and the calculations of the confidence intervals are illustrated by example.
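
For reference, a hedged sketch of the commonly quoted form of the estimator for the simplified case of parallel traces: for a sampling window of dimension h measured parallel to the traces, with n observed traces of which n_t transect the window (both ends censored) and n_c are contained (both ends visible), the point estimate is mu_hat = h(n + n_t - n_c)/(n - n_t + n_c); consult Pahl's original work or Mauldon's papers for the derivation and validity conditions:

def pahl_mean_trace_length(n: int, n_t: int, n_c: int, h: float) -> float:
    """Pahl's endpoint estimator: mu_hat = h*(n + n_t - n_c)/(n - n_t + n_c).
    Uses only censoring counts, not measured trace lengths."""
    denom = n - n_t + n_c
    if denom <= 0:
        raise ValueError("estimator undefined for this sample")
    return h * (n + n_t - n_c) / denom

print(pahl_mean_trace_length(n=100, n_t=20, n_c=30, h=25.0))  # -> ~20.5 m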

Relevance:

100.00%

Publisher:

Abstract:

A number of authors concerned with the analysis of rock jointing have used the idea that the joint areal or diametral distribution can be linked to the trace length distribution through a theorem attributed to Crofton. This brief paper seeks to demonstrate why Crofton's theorem need not be used to link moments of the trace length distribution, captured by scan line or areal mapping, to the moments of the diametral distribution of joints represented as disks, and why it is incorrect to do so. The valid relationships, for areal or scan line mapping, between all the moments of the trace length distribution and those of the joint size distribution for joints modeled as disks are recalled and compared with those that would be applied were Crofton's theorem assumed to apply. For areal mapping the relationship is fortuitously correct, but for scan line mapping it is incorrect.
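
To make the distinction concrete, a hedged sketch of the first-moment version of the argument, from standard stereological reasoning; the precise conditions and the full set of moment relations are in the works the paper recalls:

% Mean chord of a disk of diameter D under uniformly offset parallel chords:
\[
  \mathbb{E}[\ell \mid D] = \frac{\pi}{4}\, D .
\]
% For areal (window) mapping, a disk intersects the mapping plane with
% probability proportional to D, so the observed mean trace length satisfies
\[
  \mathbb{E}[\ell] = \frac{\mathbb{E}\!\left[D \cdot \tfrac{\pi}{4} D\right]}{\mathbb{E}[D]}
                   = \frac{\pi}{4}\,\frac{\mathbb{E}[D^{2}]}{\mathbb{E}[D]} ,
\]
% whereas scan line mapping introduces an additional length bias (a trace is
% intersected with probability proportional to its length), so the same
% Crofton-type relation no longer holds even for the first moment.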

Relevance:

100.00%

Publisher:

Abstract:

Recent literature was analyzed in search of trends in business intelligence applications for the banking industry. Searches were performed in relevant journals, resulting in 219 articles published between 2002 and 2013. To analyze such a large number of manuscripts, text mining techniques were used in pursuit of relevant terms in both the business intelligence and banking domains. Moreover, latent Dirichlet allocation modeling was used in order to group articles into several relevant topics. The analysis was conducted using a dictionary of terms belonging to both the banking and business intelligence domains. This procedure allowed for the identification of relationships between terms and the topics grouping articles, enabling hypotheses to emerge regarding research directions. To confirm these hypotheses, relevant articles were collected and scrutinized, allowing the text mining procedure to be validated. The results show that credit in banking is clearly the main application trend, particularly predicting risk and thus supporting credit approval or denial. There is also relevant interest in bankruptcy and fraud prediction. Customer retention seems to be associated, although weakly, with targeting, justifying bank offers to reduce churn. In addition, a large number of articles focused more on business intelligence techniques and their applications, using the banking industry just for evaluation, and thus not explicitly claiming benefits for the banking business. By identifying these current research topics, this study also highlights opportunities for future research.
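
As a hedged sketch of the text-mining pipeline described above, the following snippet builds a term-count matrix from a domain dictionary and fits a latent Dirichlet allocation model; the corpus, vocabulary and parameter choices are illustrative, not the paper's:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Illustrative corpus and dictionary; the paper's actual dictionary spans
# both the banking and business intelligence domains.
abstracts = [
    "credit risk scoring models support loan approval and denial",
    "fraud detection in banking transactions using neural networks",
    "customer churn prediction and retention offers in retail banking",
]
vocab = ["credit", "risk", "loan", "fraud", "churn", "retention"]

counts = CountVectorizer(vocabulary=vocab).fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Topic-term weights indicate which dictionary terms characterize each topic.
for k, weights in enumerate(lda.components_):
    top = sorted(zip(vocab, weights), key=lambda t: -t[1])[:3]
    print(f"topic {k}:", [term for term, _ in top])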

Relevance:

100.00%

Publisher:

Abstract:

Earthworks tasks aim at levelling the ground surface of a target construction area and precede any kind of structural construction (e.g., road and railway construction). They comprise sequential tasks, such as excavation, transportation, spreading and compaction, and are strongly based on heavy mechanical equipment and repetitive processes. In this context, it is essential to optimize the usage of all available resources under two key criteria: the cost and duration of earthwork projects. In this paper, we present an integrated system that uses two artificial intelligence based techniques: data mining and evolutionary multi-objective optimization. The former is used to build data-driven models capable of providing realistic estimates of resource productivity, while the latter is used to optimize resource allocation considering the two main earthwork objectives (duration and cost). Experiments conducted using real-world data from a construction site have shown that the proposed system is competitive when compared with current manual earthwork design.
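
As a minimal sketch of the multi-objective component, the following snippet keeps the Pareto-optimal (duration, cost) allocations; the candidate data are illustrative, and the actual system uses evolutionary search rather than exhaustive filtering:

def pareto_front(candidates):
    """Keep (duration, cost) pairs not strictly dominated on both objectives."""
    front = []
    for c in candidates:
        dominated = any(
            o[0] <= c[0] and o[1] <= c[1] and (o[0] < c[0] or o[1] < c[1])
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

# (duration in days, cost in arbitrary units) for candidate allocations
allocations = [(120, 900), (100, 1100), (130, 850), (110, 1000), (125, 950)]
print(pareto_front(allocations))  # (125, 950) is dominated by (120, 900)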

Relevance:

100.00%

Publisher:

Abstract:

Polysaccharides and oligosaccharides can improve the quality and enhance the nutritional value of final food products thanks to technological and nutritional features ranging from their capacity to improve texture to their effect as dietary fibers. For this reason, they are among the most studied ingredients in the food industry. The use of natural polysaccharides and oligosaccharides as food additives has been a reality since the food industry understood their potential technological and nutritional applications. Currently, the replacement of traditional ingredients and/or the synergy between traditional ingredients and polysaccharides and oligosaccharides are perceived as promising approaches by the food industry. Traditionally, polysaccharides have been used as thickening, emulsifying, and stabilizing agents; at present, however, polysaccharides and oligosaccharides also claim health and nutritional advantages, thus opening a new market of nutritional and functional foods. Indeed, their use as nutritional food ingredients has enabled the food industry to develop countless applications, e.g., fat replacers, prebiotics, dietary fiber, and antiulcer agents. On this basis, in recent years many research studies and commercial products from the scientific community and the food industry have shown the possibility of using either new or already exploited sources (though with changed properties) of polysaccharides for the production of food additives with new and enhanced properties. The increasing interest in such products is clearly illustrated by market figures and consumption trends. As an example, the hydrocolloid market alone is estimated to reach $7 billion in 2018. Moreover, oligosaccharides can be found in more than 500 food products, resulting in significant daily consumption. A recent study by Transparency Market Research on the prebiotic ingredients market reported that demand for prebiotics was worth $2.3 billion in 2012 and is estimated to reach $4.5 billion in 2018, growing at a compound annual growth rate of 11.4% between 2012 and 2018. The entrance of this new generation of food additives into the market, often claiming health and nutritional benefits, demands an impartial analysis by the legal authorities regarding fulfilment of the requirements established for introducing novel ingredients/foods, including new poly- and oligosaccharides. This chapter deals with the potential use of polysaccharides and oligosaccharides as food additives, as well as alternative sources of these compounds and their possible applications in food products. Moreover, the regulation process for introducing novel polysaccharides and oligosaccharides into the market as food additives and for assigning them health claims is discussed.