983 results for Methods: numerical


Relevance: 20.00%

Publisher:

Abstract:

Data mining is the process of identifying valid, implicit, previously unknown, potentially useful, and understandable information in large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and can take the form of text, categorical values, or numerical values. One of the important characteristics of data mining is its ability to handle data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining is useful for market basket problems, clustering algorithms can discover trends in unsupervised learning problems, classification algorithms can be applied to decision-making problems, and sequential and time series mining algorithms can be used for event prediction, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. A number of classification algorithms are in practical use. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probabilistic methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbors (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include associative classification (Liu et al., 1998) and ensemble classification (Tumer, 1996).
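As a minimal, self-contained illustration of one of the example-based methods listed above, the following Python sketch implements a k-nearest-neighbors classifier on toy data. The data and names are purely illustrative, not drawn from any of the cited works:

```python
from collections import Counter
import math

def knn_predict(train, labels, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], x))[:k]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy two-class data: points near (0, 0) are class 'a', near (5, 5) are 'b'.
X = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
y = ['a', 'a', 'a', 'b', 'b', 'b']

print(knn_predict(X, y, (0.5, 0.5)))  # -> a
print(knn_predict(X, y, (5.5, 5.5)))  # -> b
```

Despite its simplicity, this is the complete algorithm: there is no training phase beyond storing the examples, which is why such methods are called example-based.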


There are many techniques for electricity market price forecasting. However, most of them are designed for expected-price analysis rather than price spike forecasting, and an effective method of predicting the occurrence of spikes has not yet been reported in the literature. In this paper, a data-mining-based approach is presented to give a reliable forecast of the occurrence of price spikes. Combined with the spike value prediction techniques developed by the same authors, the proposed approach aims to provide a comprehensive tool for price spike forecasting. Feature selection techniques are first described to identify the attributes relevant to the occurrence of spikes, and a brief introduction to the classification techniques is given for completeness. Two algorithms, a support vector machine and a probability classifier, are chosen as the spike occurrence predictors and are discussed in detail. Realistic market data are used to test the proposed model, with promising results.


The artificial dissipation effects in some solutions obtained with Navier-Stokes flow solvers are demonstrated. The solvers were used to calculate the flow of an artificially dissipative fluid, that is, a fluid whose dissipative properties arise entirely from the solution method itself. This was done by setting the viscosity and heat conduction coefficients in the Navier-Stokes solvers to zero everywhere inside the flow, while still applying the usual no-slip and thermally conducting boundary conditions at solid boundaries. Any dissipation in the resulting solution then depends entirely on the solver itself. If the solutions obtained with the viscosity and thermal conductivity set to zero differ little from those obtained with their correct values, it is clear that the artificial dissipation is dominant and the solutions are unreliable.
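The phenomenon can be reproduced in miniature: the first-order upwind scheme for the inviscid linear advection equation smears a sharp front even though the physical viscosity is zero, with the smearing coming entirely from the scheme's truncation error. This toy demonstration is an analogy, not the paper's solver:

```python
# Advect a step profile with zero physical viscosity using first-order
# upwind differencing; the smearing of the front comes entirely from the
# scheme's truncation error, i.e. artificial dissipation.
n, c = 100, 0.5            # grid points, Courant number (c = u*dt/dx)
u = [1.0 if i < n // 2 else 0.0 for i in range(n)]

for _ in range(40):        # march 40 time steps with fixed inflow u[0]
    u = [u[0]] + [u[i] - c * (u[i] - u[i - 1]) for i in range(1, n)]

# The exact inviscid solution keeps the jump perfectly sharp; instead
# it is now spread over many cells.
width = sum(1 for v in u if 0.01 < v < 0.99)
print(width)
```

The width of the smeared zone grows with the number of steps, exactly as a physically diffusive term would cause, which is why a vanishing-viscosity comparison of the kind described above is a useful diagnostic.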


Conferences that deliver interactive sessions designed to enhance physician participation, such as role play, small discussion groups, workshops, hands-on training, problem- or case-based learning and individualised training sessions, are effective for physician education.


An investigation was undertaken to test the effectiveness of two procedures for recording boundaries and plot positions for scientific studies on farms on Leyte Island, the Philippines. The accuracy of a Garmin 76 Global Positioning System (GPS) unit and of a compass and chain was checked under the same conditions. Tree canopies interfered with the ability of the satellite signals to reach the GPS receiver, so the GPS survey was less accurate than the compass-and-chain survey. Where a high degree of accuracy is required, a compass-and-chain survey remains the most effective method of surveying land underneath tree canopies, provided operator error is minimised. For a large number of surveys, and thus large amounts of data, a GPS is more appropriate than a compass-and-chain survey because data are easily uploaded into a Geographic Information System (GIS). However, under dense canopies where satellite signals cannot reach the GPS receiver, it may be necessary to revert to a compass survey or a combination of both methods.
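A compass-and-chain survey is reduced to coordinates by converting each bearing/distance leg into easting and northing increments. The sketch below shows this traverse computation on a hypothetical closed square traverse; the misclosure at the final point is a standard check on the operator error mentioned above. The leg values are invented for illustration:

```python
import math

# Hypothetical traverse legs: (bearing in degrees clockwise from north,
# horizontal chain distance in metres).
legs = [(45.0, 20.0), (135.0, 20.0), (225.0, 20.0), (315.0, 20.0)]

def traverse(legs, start=(0.0, 0.0)):
    """Convert bearing/distance legs into (easting, northing) points."""
    e, n = start
    points = [start]
    for bearing, dist in legs:
        b = math.radians(bearing)
        e += dist * math.sin(b)   # easting increment
        n += dist * math.cos(b)   # northing increment
        points.append((e, n))
    return points

pts = traverse(legs)
# A closed traverse should return (almost) exactly to the start point;
# the residual distance is the misclosure.
misclosure = math.dist(pts[0], pts[-1])
print(round(misclosure, 6))  # -> 0.0
```

Coordinates produced this way can be uploaded into a GIS just like GPS fixes, which is one way of combining the two methods under dense canopy.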


The second edition of An Introduction to Efficiency and Productivity Analysis is designed as a general introduction for those who wish to study efficiency and productivity analysis. The book provides an accessible, well-written introduction to the four principal methods involved: econometric estimation of average response models; index numbers; data envelopment analysis (DEA); and stochastic frontier analysis (SFA). For each method, a detailed introduction to the basic concepts is presented, numerical examples are provided, and some of the more important extensions to the basic methods are discussed. Of special interest is the systematic use of detailed empirical applications using real-world data throughout the book. In recent years, a number of excellent advanced-level books have been published on performance measurement. This book, however, is the first systematic survey of performance measurement with the express purpose of introducing the field to a wide audience of students, researchers, and practitioners. Indeed, the second edition maintains its uniqueness: (1) it is a well-written introduction to the field; (2) it outlines, discusses, and compares the four principal methods for efficiency and productivity analysis in a well-motivated presentation; and (3) it provides detailed advice on computer programs that can be used to implement these performance measurement methods. The book contains computer instructions and output listings for the SHAZAM, LIMDEP, TFPIP, DEAP and FRONTIER computer programs. More extensive listings of data and computer instruction files are available on the book's website (www.uq.edu.au/economics/cepa/crob2005).
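As a small taste of one of the four methods, DEA efficiency in the special case of a single input and a single output under constant returns to scale reduces to a ratio comparison; the general multi-input, multi-output case requires solving a linear program per firm (which is what packages like DEAP do). The firms and figures below are invented for illustration:

```python
# With one input and one output under constant returns to scale, the
# DEA (CCR) efficiency of each firm is its output/input ratio divided
# by the best ratio in the sample.
firms = {'A': (2.0, 2.0),   # (input x, output y): ratio 1.0
         'B': (4.0, 6.0),   # ratio 1.5 -- the frontier firm
         'C': (5.0, 5.0)}   # ratio 1.0

best = max(y / x for x, y in firms.values())
eff = {name: (y / x) / best for name, (x, y) in firms.items()}
print(eff)  # firm B is fully efficient; A and C score 2/3
```

Efficient firms score 1.0 and define the production frontier; inefficient firms' scores measure how far they sit inside it.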


In this paper, a progressive asymptotic approach procedure is presented for solving the steady-state Horton-Rogers-Lapwood problem in a fluid-saturated porous medium. The Horton-Rogers-Lapwood problem possesses a bifurcation and therefore makes the direct use of conventional finite element methods difficult. Even if the Rayleigh number is high enough to drive natural convection in a fluid-saturated porous medium, conventional methods will often produce the trivial non-convective solution. This difficulty can be overcome by using the progressive asymptotic approach procedure in association with the finite element method. The procedure considers a series of modified Horton-Rogers-Lapwood problems in which gravity is assumed to be tilted a small angle away from the vertical. The main idea is that, by solving a sequence of such modified problems with progressively smaller tilt, an accurate non-zero velocity solution to the Horton-Rogers-Lapwood problem can be obtained: each solution provides a very good initial prediction for the next, so that the non-zero velocity solution is successfully obtained when the tilt angle is finally set to zero. Comparison of numerical solutions with analytical ones for a benchmark problem of rectangular geometry has demonstrated the usefulness of the present procedure. Finally, the procedure has been used to investigate the effect of basin shape on natural convection of pore-fluid in a porous medium. (C) 1997 by John Wiley & Sons, Ltd.
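The continuation idea behind the progressive asymptotic approach can be sketched on a scalar analogue: a symmetric equation whose trivial root plays the role of the non-convective solution, with a small forcing term standing in for the tilted gravity. This is an illustrative analogy, not the paper's finite element formulation:

```python
# x**3 - x = 0 has the trivial root x = 0 alongside x = +/-1 (a
# pitchfork-like situation). A small symmetry-breaking term eps (the
# "tilt") removes the trivial solution; shrinking eps to zero while
# reusing each solution as the next initial guess steers Newton's
# method onto the non-trivial root instead of x = 0.

def newton(f, df, x, tol=1e-12):
    for _ in range(100):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

x = 2.0                                  # crude starting guess
for eps in [0.5, 0.1, 0.01, 0.0]:        # progressively smaller "tilt"
    f = lambda x, e=eps: x**3 - x - e
    df = lambda x: 3 * x**2 - 1
    x = newton(f, df, x)                 # warm-start from previous solve

print(round(x, 10))  # -> 1.0, the non-trivial root
```

Starting Newton's method directly at a symmetric guess could just as easily land on the trivial root; the decreasing-tilt sequence is what keeps the iteration on the non-trivial branch, mirroring the role of the tilted-gravity problems in the paper.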


The ability to predict leaf area and leaf area index is crucial in crop simulation models that predict crop growth and yield. Previous studies have shown existing methods of predicting leaf area to be inadequate when applied to a broad range of cultivars with different numbers of leaves. The objectives of the study were to (i) develop generalised methods of modelling individual and total plant leaf area, and leaf senescence, that do not require constants specific to environments and/or genotypes; (ii) re-examine the base, optimum, and maximum temperatures used to calculate thermal time for leaf senescence; and (iii) assess the method of calculating individual leaf area from leaf length and leaf width in experimental work. Five cultivars of maize differing widely in maturity and adaptation were planted in October 1994 in south-eastern Queensland and grown under non-limiting water and plant nutrient supplies. Additional data for maize plants with low total leaf number (12-17) grown at Katumani Research Centre, Kenya, were included to extend the range of total leaf number per plant. The equation for the modified (slightly skewed) bell curve could be generalised for modelling individual leaf area, as all of its coefficients were related to total leaf number. Use of coefficients for individual genotypes can thus be avoided, and individual and total plant leaf area can be calculated from total leaf number alone. A single logistic equation, relying on maximum plant leaf area and thermal time from emergence, was developed to predict leaf senescence. The base, optimum, and maximum temperatures for calculating thermal time for leaf senescence were 8, 34, and 40 degrees C, respectively, and apply over the whole crop cycle when used in modelling leaf senescence. Thus, the modelling of leaf production and senescence is simplified, improved, and generalised, and consequently the modelling of leaf area index (LAI) and of variables that rely on LAI will be improved.
For experimental purposes, we found that the calculation of leaf area from leaf length and leaf width remains appropriate, though the relationship differed slightly from previously published equations.
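The cardinal temperatures reported above translate directly into a thermal-time calculation. The piecewise-linear response used in the sketch below is a common convention and an assumption here, since the abstract reports only the three temperatures:

```python
# Piecewise-linear thermal-time response built from the cardinal
# temperatures for leaf senescence reported above: base 8, optimum 34,
# and maximum 40 degrees C.
TBASE, TOPT, TMAX = 8.0, 34.0, 40.0

def thermal_units(t):
    """Effective degree-days for one day at mean temperature t."""
    if t <= TBASE or t >= TMAX:
        return 0.0
    if t <= TOPT:
        return t - TBASE                 # rising limb up to the optimum
    # falling limb from the optimum down to zero at the maximum
    return (TOPT - TBASE) * (TMAX - t) / (TMAX - TOPT)

# Accumulated thermal time over a run of daily mean temperatures.
daily_means = [15, 20, 25, 30, 36, 38]
tt = sum(thermal_units(t) for t in daily_means)
print(round(tt, 6))  # -> 84.0
```

Accumulated thermal time of this kind is the driving variable of the logistic senescence equation described above.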


A new method of poly-beta-hydroxybutyrate (PHB) extraction from recombinant E. coli is proposed, using homogenization and centrifugation coupled with sodium hypochlorite treatment. The size of PHB granules and cell debris in homogenates was characterised as a function of the number of homogenization passes. Simulation was used to develop the PHB and cell-debris fractionation system, enabling numerical examination of the effects of repeated homogenization and of centrifuge feed-rate variation. The simulation provided a good prediction of experimental performance. Sodium hypochlorite treatment was necessary to optimise PHB fractionation. A PHB recovery of 80% at a purity of 96.5% was obtained with the final optimised process, and the protein and DNA content of the resulting product was negligible. The developed process holds promise for significantly reducing the recovery cost associated with PHB manufacture.


The moving finite element collocation method proposed by Kill et al. (1995) Chem. Engng Sci. 51 (4), 2793-2799 for solution of problems with steep gradients is further developed to solve transient problems arising in the field of adsorption. The technique is applied to a model of adsorption in solids with bidisperse pore structures. Numerical solutions were found to match the analytical solution when it exists (i.e. when the adsorption isotherm is linear). The method is simple yet sufficiently accurate for use in adsorption problems, where global collocation methods fail. (C) 1998 Elsevier Science Ltd. All rights reserved.


Conotoxins are valuable probes of receptors and ion channels because of their small size and highly selective activity. alpha-Conotoxin EpI, a 16-residue peptide from the mollusk-hunting cone snail Conus episcopatus, has the amino acid sequence GCCSDPRCNMNNPDY(SO3H)C-NH2 and appears to be an extremely potent and selective inhibitor of the alpha 3 beta 2 and alpha 3 beta 4 neuronal subtypes of the nicotinic acetylcholine receptor (nAChR). The desulfated form of EpI ([Tyr(15)]EpI) has a potency and selectivity for the nAChR similar to those of EpI. Here we describe the crystal structure of [Tyr(15)]EpI, solved at a resolution of 1.1 Angstrom using SnB. The asymmetric unit has a total of 284 non-hydrogen atoms, making this one of the largest structures solved de novo by direct methods. The [Tyr(15)]EpI structure brings to six the number of alpha-conotoxin structures determined to date. Four of these, [Tyr(15)]EpI, PnIA, PnIB, and MII, have an alpha 4/7 cysteine framework and are selective for the neuronal subtype of the nAChR. The structure of [Tyr(15)]EpI has the same backbone fold as the other alpha 4/7-conotoxin structures, supporting the notion that this conotoxin cysteine framework and spacing give rise to a conserved fold. The surface charge distribution of [Tyr(15)]EpI is similar to that of PnIA and PnIB but is likely to differ from that of MII, suggesting that [Tyr(15)]EpI and MII may have different binding modes at the same receptor subtype.


Expokit provides a set of routines aimed at computing matrix exponentials. More precisely, it computes either a small matrix exponential in full, the action of a large sparse matrix exponential on an operand vector, or the solution of a system of linear ODEs with constant inhomogeneity. The backbone of the sparse routines consists of matrix-free Krylov subspace projection methods (the Arnoldi and Lanczos processes), which is why the toolkit is capable of coping with sparse matrices of large dimension. The software handles real and complex matrices and provides specific routines for symmetric and Hermitian matrices. The computation of matrix exponentials is a numerical issue of critical importance in the area of Markov chains, where, furthermore, the computed solution is subject to probabilistic constraints. In addition to addressing general matrix exponentials, particular attention is given to the computation of transient states of Markov chains.
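For the "small matrix exponential in full" case, a truncated Taylor series makes the idea concrete. Expokit itself uses more robust algorithms (scaling-and-squaring for the dense case, Krylov projection for the sparse case); this pure-Python sketch is only illustrative:

```python
# Truncated Taylor series exp(A) = I + A + A**2/2! + ... demonstrated on
# a 2x2 nilpotent matrix (A @ A = 0, so the series is exact after two
# terms). Production code such as Expokit uses scaling-and-squaring
# instead, since the raw series loses accuracy for large ||A||.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def expm_taylor(a, terms=20):
    n = len(a)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in result]                               # A**k / k!
    for k in range(1, terms):
        term = matmul(term, a)
        term = [[v / k for v in row] for row in term]
        result = [[r + t for r, t in zip(rr, tt)]
                  for rr, tt in zip(result, term)]
    return result

A = [[0.0, 1.0], [0.0, 0.0]]
print(expm_taylor(A))  # -> [[1.0, 1.0], [0.0, 1.0]]
```

For the large sparse case that Expokit targets, one never forms exp(A) at all: the Krylov routines approximate exp(A) v directly from a small projected matrix, which is what makes large Markov chain computations feasible.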


This paper investigates effective diagnostic techniques for assessing the condition of insulation in aged power transformers. A number of electrical, mechanical, and chemical techniques were investigated. Many of these are already used by utility engineers, and two comparatively new techniques are proposed in this paper. Results showing the effectiveness of these diagnostics are presented, and correlations between the techniques are also presented. Finally, the merits and suitability of the different techniques are discussed.