999 results for Universal Decimal Classification
Abstract:
This paper is concerned with the use of a genetic algorithm to select financial ratios for corporate distress classification models. For this purpose, the fitness value associated with a set of ratios is made to reflect the requirements of maximizing the amount of information available to the model and minimizing the collinearity between the model inputs. A case study involving 60 failed and continuing British firms in the period 1997-2000 is used for illustration. The classification model based on ratios selected by the genetic algorithm compares favorably with a model employing ratios usually found in the financial distress literature.
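The selection mechanism described in the abstract can be sketched as a toy genetic algorithm: a candidate subset of ratios is a bit string, and its fitness rewards the information carried by the selected ratios while penalising collinearity between them. The `INFO` scores, the `CORR` matrix, the genetic operators and all parameter values below are invented for illustration; they are not taken from the paper.

```python
import random

# Invented toy data: an "information" score for each of 6 candidate ratios
# and their pairwise correlations (symmetric, ones on the diagonal).
INFO = [0.9, 0.7, 0.8, 0.4, 0.6, 0.5]
CORR = [[1.0, 0.8, 0.1, 0.2, 0.3, 0.1],
        [0.8, 1.0, 0.2, 0.1, 0.4, 0.2],
        [0.1, 0.2, 1.0, 0.3, 0.1, 0.6],
        [0.2, 0.1, 0.3, 1.0, 0.2, 0.1],
        [0.3, 0.4, 0.1, 0.2, 1.0, 0.3],
        [0.1, 0.2, 0.6, 0.1, 0.3, 1.0]]

def fitness(mask):
    """Reward total information of the selected ratios, penalise collinearity."""
    chosen = [i for i, bit in enumerate(mask) if bit]
    if not chosen:
        return 0.0
    info = sum(INFO[i] for i in chosen)
    collin = sum(CORR[i][j] for i in chosen for j in chosen if i < j)
    return info - collin

def evolve(pop_size=20, generations=50, n=6, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                # bit-flip mutation
                k = rng.randrange(n)
                child[k] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("selected ratios:", best, "fitness:", round(fitness(best), 2))
```

The two fitness terms pull in opposite directions: adding a ratio always adds information, but also adds its correlations with every ratio already selected, so highly collinear subsets are penalised.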
Abstract:
Diabetes, like many diseases and biological processes, is not mono-causal. On the one hand, multifactorial studies with complex experimental designs are required for its comprehensive analysis. On the other hand, the data from these studies often include a substantial amount of redundancy, such as proteins that are typically represented by a multitude of peptides. Coping simultaneously with both complexities (experimental and technological) makes data analysis a challenge for bioinformatics.
Abstract:
Over many millions of years of independent evolution, placental, marsupial and monotreme mammals have diverged conspicuously in physiology, life history and reproductive ecology. The differences in life histories are particularly striking. Compared with placentals, marsupials exhibit shorter pregnancy, smaller size of offspring at birth and longer period of lactation in the pouch. Monotremes also exhibit short pregnancy, but incubate embryos in eggs, followed by a long period of post-hatching lactation. Using a large sample of mammalian species, we show that, remarkably, despite their very different life histories, the scaling of production rates is statistically indistinguishable across mammalian lineages. Apparently all mammals are subject to the same fundamental metabolic constraints on productivity, because they share similar body designs, vascular systems and costs of producing new tissue.
Abstract:
Deep Brain Stimulation has been used in the study and treatment of Parkinson’s Disease (PD) tremor symptoms since the 1980s. In the research reported here, we carried out a comparative analysis to classify tremor onset based on intraoperative microelectrode recordings of a PD patient’s brain Local Field Potential (LFP) signals. In particular, we compared the performance of a Support Vector Machine (SVM) with two well-known artificial neural network classifiers, namely a Multi-Layer Perceptron (MLP) and a Radial Basis Function Network (RBN). The results show that in this study, using specifically PD data, the SVM provided the best overall classification rate, achieving a recognition accuracy of 81%.
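The experimental setup can be roughly sketched with scikit-learn; the synthetic data below stands in for LFP feature vectors, and the model settings are arbitrary choices for illustration, not those of the study.

```python
# Synthetic stand-in for labelled LFP feature vectors (invented data).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# SVM with an RBF kernel versus a small multi-layer perceptron.
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

print("SVM accuracy:", svm.score(X_te, y_te))
print("MLP accuracy:", mlp.score(X_te, y_te))
```

On real signals, the feature extraction step (turning raw LFP recordings into fixed-length vectors) is where most of the work lies; the classifier comparison itself is only a few lines.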
Abstract:
In a development from material introduced in recent work, we discuss the interconnections between ternary rings of operators (TROs) and right C*-algebras generated by JC*-triples, deducing that every JC*-triple possesses a largest universally reversible ideal, showing that the universal TRO commutes with appropriate tensor products, and establishing a reversibility criterion for type I JW*-triples.
Abstract:
Obesity is a key factor in the development of the metabolic syndrome (MetS), which is associated with increased cardiometabolic risk. We investigated whether obesity classification by body mass index (BMI) and body fat percentage (BF%) influences cardiometabolic profile and dietary responsiveness in 486 MetS subjects (LIPGENE dietary intervention study). Anthropometric measures, markers of inflammation and glucose metabolism, lipid profiles, adhesion molecules and haemostatic factors were determined at baseline and after 12 weeks of 4 dietary interventions (high saturated fat (SFA), high monounsaturated fat (MUFA) and 2 low fat high complex carbohydrate (LFHCC) diets, 1 supplemented with long chain n-3 polyunsaturated fatty acids (LC n-3 PUFAs)). 39% and 87% of subjects classified as normal and overweight by BMI were obese according to their BF%. Individuals classified as obese by BMI (≥ 30 kg/m²) and BF% (≥ 25% (men) and ≥ 35% (women)) (OO, n = 284) had larger waist and hip measurements, higher BMI and were heavier (P < 0.001) than those classified as non-obese by BMI but obese by BF% (NOO, n = 92). OO individuals displayed a more pro-inflammatory (higher C reactive protein (CRP) and leptin), pro-thrombotic (higher plasminogen activator inhibitor-1 (PAI-1)), pro-atherogenic (higher leptin/adiponectin ratio) and more insulin resistant (higher HOMA-IR) metabolic profile relative to the NOO group (P < 0.001). Interestingly, tumour necrosis factor alpha (TNF-α) concentrations were lower post-intervention in NOO individuals compared to OO subjects (P < 0.001). In conclusion, assessing BF% and BMI as part of a metabotype may help identify individuals at greater cardiometabolic risk than BMI alone.
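The two-way grouping used in the abstract can be written down directly. The cut-offs are the ones quoted above; the function name, the sex encoding and the label for subjects outside the OO/NOO groups are my own.

```python
def obesity_group(bmi, bf_pct, sex):
    """Classify a subject using the cut-offs quoted in the abstract:
    BMI >= 30 kg/m2; BF% >= 25% for men, >= 35% for women."""
    obese_bmi = bmi >= 30
    obese_bf = bf_pct >= (25 if sex == "M" else 35)
    if obese_bmi and obese_bf:
        return "OO"     # obese by both BMI and BF%
    if obese_bf:
        return "NOO"    # non-obese by BMI but obese by BF%
    return "non-obese"  # not obese by BF% (group not analysed in the abstract)

print(obesity_group(31.2, 28.0, "M"))  # obese by both measures -> "OO"
```

The 39%/87% figures in the abstract correspond to the `"NOO"` branch: subjects whose BMI says normal or overweight but whose BF% crosses the sex-specific threshold.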
Abstract:
This paper reviews the ways that quality can be assessed in standing waters, a subject that has hitherto attracted little attention but which is now a legal requirement in Europe. It describes a scheme for the assessment and monitoring of water and ecological quality in standing waters greater than about 1 ha in area in England & Wales, although it is generally relevant to North-west Europe. Thirteen hydrological, chemical and biological variables are used to characterise the standing water body in any current sampling. These are lake volume, maximum depth, conductivity, Secchi disc transparency, pH, total alkalinity, calcium ion concentration, total N concentration, winter total oxidised inorganic nitrogen (effectively nitrate) concentration, total P concentration, potential maximum chlorophyll a concentration, a score based on the nature of the submerged and emergent plant community, and the presence or absence of a fish community. Inter alia these variables are key indicators of the state of eutrophication, acidification, salinisation and infilling of a water body.
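The thirteen variables above amount to a fixed record per sampling, which could be held as a simple structure. The field names are my own shorthand for the variables listed in the abstract, and the values and units are invented for illustration.

```python
# One sampling of a standing water body; values are invented examples.
lake_sample = {
    "lake_volume_m3": 1.2e6,          # lake volume
    "max_depth_m": 14.0,              # maximum depth
    "conductivity_uS_cm": 250,        # conductivity
    "secchi_depth_m": 2.5,            # Secchi disc transparency
    "pH": 7.8,
    "total_alkalinity_meq_l": 1.4,    # total alkalinity
    "calcium_mg_l": 35.0,             # calcium ion concentration
    "total_n_mg_l": 1.1,              # total N concentration
    "winter_nitrate_mg_l": 0.9,       # winter total oxidised inorganic N
    "total_p_ug_l": 45,               # total P concentration
    "max_chlorophyll_a_ug_l": 30,     # potential maximum chlorophyll a
    "plant_community_score": 6,       # submerged/emergent plant score
    "fish_present": True,             # presence or absence of fish
}
print(len(lake_sample), "variables recorded")
```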
Abstract:
In a world where massive amounts of data are recorded on a large scale, we need data mining technologies to gain knowledge from the data in a reasonable time. The Top Down Induction of Decision Trees (TDIDT) algorithm is a very widely used technology for predicting the classification of newly recorded data. However, alternative technologies have been derived that often produce better rules but do not scale well on large datasets. One such alternative to TDIDT is the PrismTCS algorithm. PrismTCS performs particularly well on noisy data but does not scale well on large datasets. In this paper we introduce Prism and investigate its scaling behaviour. We describe how we improved the scalability of the serial version of Prism and investigate its limitations. We then describe our work to overcome these limitations by developing a framework to parallelise algorithms of the Prism family and similar algorithms. We also present the scale-up results of a first prototype implementation.
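The basic "separate and conquer" idea behind the Prism family can be sketched on a toy categorical dataset: for each target class, grow a rule one attribute-value test at a time, greedily maximising the probability of the target class, then remove the covered rows and repeat. This is a minimal sketch of basic Prism only; PrismTCS's rule ordering and the parallelisation framework are not shown, and the data and attribute names are invented.

```python
# Toy dataset: (attribute dict, class label) pairs, all invented.
DATA = [
    ({"outlook": "sunny", "windy": "no"}, "play"),
    ({"outlook": "sunny", "windy": "yes"}, "play"),
    ({"outlook": "rain", "windy": "no"}, "play"),
    ({"outlook": "rain", "windy": "yes"}, "stay"),
    ({"outlook": "overcast", "windy": "yes"}, "stay"),
]

def learn_rules_for(target, data):
    """Induce rules (dicts of attribute -> value tests) covering `target`."""
    rules, remaining = [], list(data)
    while any(label == target for _, label in remaining):
        rule, covered = {}, remaining
        # Specialise the rule until it covers only the target class.
        while any(label != target for _, label in covered):
            best = None
            for attrs, _ in covered:
                for a, v in attrs.items():
                    if a in rule:
                        continue
                    subset = [(x, c) for x, c in covered if x.get(a) == v]
                    p = sum(c == target for _, c in subset) / len(subset)
                    if best is None or p > best[0]:
                        best = (p, a, v)
            _, a, v = best
            rule[a] = v
            covered = [(x, c) for x, c in covered if x.get(a) == v]
        rules.append(rule)
        # Separate: drop the rows this rule covers, then conquer the rest.
        remaining = [(x, c) for x, c in remaining
                     if not all(x.get(a) == v for a, v in rule.items())]
    return rules

print(learn_rules_for("stay", DATA))
```

Unlike TDIDT, each rule is grown independently rather than read off a shared tree, which is what makes the Prism family modular and, as the paper discusses, amenable to parallelisation.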