994 results for predictive compensation


Relevância: 20.00%

Publicador:

Resumo:

Predictive performance evaluation is a fundamental issue in the design, development, and deployment of classification systems. Because predictive performance evaluation is a multidimensional problem, single scalar summaries such as error rate, although convenient due to their simplicity, can seldom capture all the aspects that a complete and reliable evaluation must consider. For this reason, graphical performance evaluation methods are increasingly drawing the attention of the machine learning, data mining, and pattern recognition communities. The main advantage of these methods resides in their ability to depict the trade-offs between evaluation aspects in a multidimensional space rather than reducing those aspects to an arbitrarily chosen (and often biased) single scalar measure. To select a suitable graphical method for a given task, it is therefore crucial to identify its strengths and weaknesses. This paper surveys graphical methods often used for predictive performance evaluation. By presenting these methods within the same framework, we hope to shed some light on which methods are more suitable in different situations.
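
As an illustration of the kind of graphical method such a survey covers, the sketch below computes the points of a ROC curve, one of the most common trade-off plots, by sweeping a decision threshold over classifier scores. The labels and scores are made-up data, and ties in scores are not handled specially.

```python
# Sketch: ROC curve points obtained by sweeping the decision threshold.
# Assumes at least one positive and one negative label are present.

def roc_points(labels, scores):
    """Return (false-positive-rate, true-positive-rate) pairs."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _score, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

labels = [1, 1, 0, 1, 0, 0]          # hypothetical ground truth
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]  # hypothetical classifier scores
print(roc_points(labels, scores))
```

Plotting these points gives the usual ROC curve; the whole curve, rather than a single scalar, shows the trade-off between true and false positives.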


The study of pharmacokinetic (PK) properties is of great importance in drug discovery and development. In the present work, PK/DB (a new freely available database for PK) was designed with the aim of creating a robust database for pharmacokinetic studies and in silico absorption, distribution, metabolism and excretion (ADME) prediction. Comprehensive, web-based and easy to access, PK/DB manages 1203 compounds representing 2973 pharmacokinetic measurements, and includes five models for in silico ADME prediction (human intestinal absorption, human oral bioavailability, plasma protein binding, blood-brain barrier penetration and water solubility).


Canalizing genes possess such broad regulatory power, and their action sweeps across such a wide swath of processes, that the full set of affected genes is not highly correlated under normal conditions. When the controlling gene is inactive, it will not be predictable to any significant degree by its subject genes, either alone or in groups, since their behavior will be highly varied relative to the inactive controlling gene. When the controlling gene is active, its behavior is not well predicted by any one of its targets, but it can be very well predicted by groups of genes under its control. To investigate this question, we introduce the concept of intrinsically multivariate predictive (IMP) genes and present a mathematical study of IMP in the context of binary genes with respect to the coefficient of determination (CoD), which measures the predictive power of a set of genes with respect to a target gene. A set of predictor genes is said to be IMP for a target gene if all properly contained subsets of the predictor set are bad predictors of the target but the full predictor set predicts the target with great accuracy. We show that the logic of prediction, predictive power, covariance between predictors, and the entropy of the joint probability distribution of the predictors jointly affect the appearance of IMP genes. In particular, high predictive power, small covariance among predictors, large entropy of the joint probability distribution of the predictors, and certain logics, such as XOR in the 2-predictor case, are factors that favor the appearance of IMP. The IMP concept is applied to characterize the behavior of the gene DUSP1, which exhibits control over a central, process-integrating signaling pathway, thereby providing preliminary evidence that IMP can be used as a criterion for the discovery of canalizing genes.
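
A minimal sketch of the CoD idea, assuming the standard definition CoD = (e0 - e_opt)/e0, where e0 is the error of the best constant predictor and e_opt the error of the optimal predictor built from the chosen predictor set. It is computed here by exhaustive enumeration over a toy truth table (not the authors' estimator). The XOR logic mentioned above produces exactly the IMP pattern: each predictor alone has CoD 0, while the pair has CoD 1.

```python
# Sketch: coefficient of determination (CoD) for binary predictors,
# estimated from an exhaustive truth table of samples.

from itertools import product

def cod(samples, predictor_idx, target_idx):
    """samples: list of binary tuples; CoD of the target given the predictors."""
    n = len(samples)
    targets = [s[target_idx] for s in samples]
    # Error of the best constant predictor (majority vote on the target).
    e0 = min(targets.count(0), targets.count(1)) / n
    if e0 == 0:
        return 0.0  # constant target: nothing left to predict
    # Optimal predictor: majority target value for each predictor pattern.
    groups = {}
    for s in samples:
        key = tuple(s[i] for i in predictor_idx)
        groups.setdefault(key, []).append(s[target_idx])
    e_opt = sum(min(ys.count(0), ys.count(1)) for ys in groups.values()) / n
    return (e0 - e_opt) / e0

# XOR target: each predictor alone is useless, the pair is perfect (IMP).
samples = [(x1, x2, x1 ^ x2) for x1, x2 in product([0, 1], repeat=2)]
print(cod(samples, [0], 2))     # single predictor: 0.0
print(cod(samples, [0, 1], 2))  # both predictors: 1.0
```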


Biological rhythms are regulated by homeostatic mechanisms that assure that physiological clocks function reliably, independent of temperature changes in the environment. Temperature compensation, the independence of the oscillatory period from temperature, is known to play a central role in many biological rhythms, but it is rather rare in chemical oscillators. We study the influence of temperature on the oscillatory dynamics during the catalytic oxidation of formic acid on a polycrystalline platinum electrode. The experiments are performed at five temperatures from 5 to 25 degrees C, and the oscillations are studied under galvanostatic control. Under oscillatory conditions, only non-Arrhenius behavior is observed. Overcompensation with a temperature coefficient (q10, defined as the ratio between the rate constants at temperature T + 10 degrees C and at T) < 1 is found in most cases, except that temperature compensation with q10 approximately equal to 1 predominates at high applied currents. The behavior of the period and the amplitude results from a complex interplay between temperature and applied current or, equivalently, the distance from thermodynamic equilibrium. High, positive apparent activation energies were obtained under voltammetric, non-oscillatory conditions, which implies that the non-Arrhenius behavior observed under oscillatory conditions results from the interplay among reaction steps rather than from a weak temperature dependence of the individual steps.
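
The q10 coefficient defined in the abstract can be illustrated with a hypothetical Arrhenius rate law (the prefactor and activation energy below are made up, not from the paper). For ordinary Arrhenius behavior with a positive activation energy, q10 > 1; temperature compensation corresponds to q10 close to 1, and overcompensation to q10 < 1.

```python
# Sketch: q10 = k(T + 10) / k(T), with a hypothetical Arrhenius rate law
# k(T) = A * exp(-Ea / (R * T)). Parameters are illustrative only.

import math

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius(T, A=1.0e6, Ea=50_000.0):
    """Rate constant at absolute temperature T (K) for assumed A and Ea."""
    return A * math.exp(-Ea / (R * T))

def q10(rate, T):
    """Ratio of the rate at T + 10 K to the rate at T."""
    return rate(T + 10) / rate(T)

T = 278.15  # 5 degrees C in kelvin
print(q10(arrhenius, T))  # > 1 for a positive activation energy
```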


Data mining can be used in the healthcare industry to “mine” clinical data and discover hidden information for intelligent and effective decision making. The discovery of hidden patterns and relationships often remains untapped, and advanced data mining techniques can be a remedy to this scenario. This thesis deals mainly with Intelligent Prediction of Chronic Renal Disease (IPCRD). The data cover blood tests, urine tests, and external symptoms used to predict chronic renal disease. Data from the database are first imported into Weka (3.6), and the Chi-Square method is used for feature selection. After normalizing the data, three classifiers were applied and the efficiency of the output was evaluated: Decision Tree, Naïve Bayes, and the K-Nearest Neighbour algorithm. Results show that each technique has its unique strength in realizing the objectives of the defined mining goals. The efficiency of the Decision Tree and KNN was almost the same, but Naïve Bayes showed a comparative edge over the others. Sensitivity and specificity tests are further used as statistical measures to examine the performance of the binary classification: sensitivity (also called recall in some fields) measures the proportion of actual positives that are correctly identified, while specificity measures the proportion of negatives that are correctly identified. The CRISP-DM methodology is applied to build the mining models. It consists of six major phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment.
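
The sensitivity and specificity measures described above can be sketched directly from predicted versus actual labels (the label vectors below are made-up examples, not the thesis data):

```python
# Sketch: sensitivity and specificity for a binary classifier.

def sensitivity_specificity(actual, predicted):
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    sensitivity = tp / (tp + fn)  # recall: actual positives found
    specificity = tn / (tn + fp)  # actual negatives correctly rejected
    return sensitivity, specificity

actual    = [1, 1, 1, 0, 0, 0, 0, 1]
predicted = [1, 1, 0, 0, 0, 1, 0, 1]
print(sensitivity_specificity(actual, predicted))  # (0.75, 0.75)
```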


In 1995 the Federal Commissioner of Taxation released Taxation Ruling TR 95/35, an attempt to comprehensively address the appropriate capital gains tax treatment of a receipt of compensation awarded either by the courts or via a settlement. There is still a lack of consensus regarding the appropriate treatment of such awards, and a private binding ruling is presently the only way a taxpayer can determine their liability with any certainty. The Australian position is compared with that of the United Kingdom and Canada.


This paper reports on the psychometric properties of the Social Phobic Inventory (SoPhI), a 21-item scale designed to measure social anxiety according to the criteria of DSM-IV (American Psychiatric Association, APA (1994) Diagnostic and Statistical Manual of Mental Disorders, 4th Edn., Washington). Factor analysis of the SoPhI, using data from a clinical sample of respondents with social phobia, revealed one factor which explained approximately 59% of the variance and demonstrated strong internal reliability (α = 0.93). The SoPhI demonstrated concurrent validity with the SPAI (r = 0.86) and convergent validity with the Fear of Negative Evaluations-Revised (r = 0.68). The predictive utility of the scale was demonstrated in a sample of university students classified as extroverted, normal, shy/introverted, and phobic/withdrawn (η² = 57%). Multivariate Analysis of Variance (MANOVA) revealed that the combined university sample differed from the clinical sample on the summated SoPhI scores and that 43% (η²) of this difference was attributable to group membership. This figure rose to 58% when the same groups were compared on the 21 individual items. Scores on the SoPhI that are indicative of concern and of possible diagnostic criteria, as well as suggestions for future research, are discussed.
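
The internal-reliability statistic reported above (α = 0.93) is Cronbach's alpha. A sketch of its computation, using the usual formula α = k/(k-1) · (1 - Σ item variances / total variance) on a small, entirely hypothetical item-response matrix:

```python
# Sketch: Cronbach's alpha from an item-response matrix
# (rows = respondents, columns = items; the data below are made up).

def cronbach_alpha(rows):
    """Internal consistency of a set of items across respondents."""
    k = len(rows[0])

    def var(values):  # population variance
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / len(values)

    item_vars = [var([r[i] for r in rows]) for i in range(k)]
    total_var = var([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

responses = [
    [4, 5, 4], [2, 2, 3], [5, 4, 5], [1, 2, 1], [3, 3, 4],
]
print(round(cronbach_alpha(responses), 3))  # → 0.936
```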


An angle measuring device based on a high-performance, very compact accelerometer provides a new method for producing highly compact and accurate angle measurement. Accelerometers are micro-machined and able to measure acceleration to very high accuracy. By using gravity as a reference, these compact devices can also be used to measure angles of rotation. Their inherent problem is that their response characteristic changes with temperature, which is detrimental to measurement accuracy. This paper describes an effective method to overcome this problem using a temperature sensor and intelligent software to compensate for the drift characteristic. To demonstrate the effectiveness of this work, experiments have been developed and conducted, with the results and analysis provided at the end of this paper for discussion.
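
A minimal sketch of the software-compensation idea described above: a drift model, fitted beforehand against readings from the temperature sensor, subtracts the predicted temperature-induced offset from the raw angle. The quadratic form and the coefficients are hypothetical, not taken from the paper.

```python
# Sketch: temperature-drift compensation for an accelerometer-based
# angle sensor. Hypothetical drift model: offset(T) = c0 + c1*T + c2*T^2.

C0, C1, C2 = 0.15, -0.012, 0.0004  # made-up calibration coefficients

def drift_offset(temp_c):
    """Angle offset (degrees) predicted at a given sensor temperature."""
    return C0 + C1 * temp_c + C2 * temp_c ** 2

def compensated_angle(raw_angle_deg, temp_c):
    """Raw tilt angle corrected by the modelled thermal drift."""
    return raw_angle_deg - drift_offset(temp_c)

print(round(compensated_angle(30.2, 25.0), 3))  # → 30.1
```

In practice the coefficients would be fitted from measurements of a fixed reference angle across the device's operating temperature range.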


Land-use patterns in the catchment areas of Sri Lankan reservoirs, quantified using Geographical Information Systems (GIS), were used to develop quantitative models for fish yield prediction. The validity of these models was evaluated by applying them to five reservoirs that were not used in model development and comparing the predictions with actual fish yield data collected for these reservoirs by an independent body. The robustness of the predictive models was tested by principal component analysis (PCA) on limnological characteristics, land-use patterns of the catchments, and fish yields. The predicted fish yields in the five Sri Lankan reservoirs, using the empirical models based on the ratios of forest cover and/or shrub cover to reservoir capacity or reservoir area, were in close agreement with the observed fish yields. The scores of the PCA ordination of productivity-related limnological parameters and those of land-use patterns were linearly related to fish yields. The relationship between the PCA scores of limnological characteristics and land-use types had the appropriate algebraic form, which substantiates the influence of limnological factors and land-use types on reservoir fish yields. It is suggested that the models developed on the basis of GIS methodologies, given their relatively high predictive power, can be used for more accurate assessment of reservoir fisheries. The study supports the importance of, and the need for, an integrated management strategy for the whole watershed to enhance fish yields.
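
An empirical yield model of the kind described can be sketched as an ordinary least-squares fit of fish yield against the ratio of forest cover to reservoir capacity. All numbers below are hypothetical (including the sign of the relationship), not the study's data.

```python
# Sketch: fitting yield = a + b * (forest cover / reservoir capacity)
# by ordinary least squares on made-up observations.

def ols(xs, ys):
    """Return (intercept, slope) minimising squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

ratio = [0.2, 0.5, 0.8, 1.1, 1.5]        # forest cover / reservoir capacity
yield_kg_ha = [310, 240, 190, 150, 90]   # hypothetical observed yields
a, b = ols(ratio, yield_kg_ha)
print(round(a + b * 1.0, 1))  # prediction for a new reservoir, ratio = 1.0
```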


During knowledge acquisition, multiple alternative potential rules may all appear equally credible. This paper addresses the dearth of formal analysis of how to select between such alternatives. It presents two hypotheses about the expected impact of selecting between classification rules of differing levels of generality in the absence of other evidence about their likely relative performance on unseen data. It is argued that the accuracy on unseen data of the more general rule will tend to be closer to that of a default rule for the class than will that of the more specific rule. It is also argued that, in comparison to the more general rule, the accuracy of the more specific rule on unseen cases will tend to be closer to the accuracy obtained on the training data. Experimental evidence is provided in support of these hypotheses. We argue that these hypotheses can be of use in selecting between rules in order to achieve specific knowledge acquisition objectives.


Wildlife managers are often faced with the difficult task of determining the distribution of species, and their preferred habitats, at large spatial scales. This task is even more challenging when the species of concern is in low abundance and/or the terrain is largely inaccessible. Spatially explicit distribution models, derived from multivariate statistical analyses and implemented in a geographic information system (GIS), can be used to predict the distributions of species and their habitats, making them a useful conservation tool. We present two such models: one for a dasyurid, the Swamp Antechinus (Antechinus minimus), and the other for a ground-dwelling bird, the Rufous Bristlebird (Dasyornis broadbenti), both rare species occurring in the coastal heathlands of south-western Victoria. Models were generated using generalized linear modelling (GLM) techniques, with species presence or absence as the dependent variable and a series of landscape variables derived from GIS layers and high-resolution imagery as the predictors. The most parsimonious model for each species, selected using the Akaike Information Criterion, was then extrapolated spatially in a GIS. The probability of species presence was used as an index of habitat suitability. Because habitat fragmentation is thought to be one of the major threats to these species, an assessment of the spatial distribution of suitable habitat across the landscape is vital in prescribing management actions to prevent further habitat fragmentation.
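
The modelling approach described, a logistic GLM for presence/absence with AIC-based selection, can be sketched in pure Python. The predictor, its values, and the presence records below are entirely hypothetical; a real analysis would use a statistics package with several candidate predictors and compare their AIC values.

```python
# Sketch: presence/absence habitat model as a logistic GLM fitted by
# gradient ascent on the Bernoulli log-likelihood, scored with AIC.

import math

def fit_logistic(xs, ys, lr=0.5, steps=20000):
    """Fit y ~ intercept + b1*x by maximising the log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1

def aic(xs, ys, b0, b1, k=2):
    """AIC = 2k - 2*log-likelihood for the fitted model (k parameters)."""
    ll = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return 2 * k - 2 * ll

# Hypothetical predictor (e.g. heath cover fraction) and species presence:
cover    = [0.05, 0.10, 0.20, 0.35, 0.50, 0.60, 0.75, 0.90]
presence = [0,    0,    0,    1,    0,    1,    1,    1]
b0, b1 = fit_logistic(cover, presence)
print(round(aic(cover, presence, b0, b1), 2))
```

The fitted probability of presence at each location would then serve as the habitat-suitability index, and candidate models with different predictor sets would be ranked by their AIC.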