15 results for Machine Learning Algorithm

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

100.00%

Publisher:

Abstract:

Radiation metabolomics employing mass spectral technologies represents a plausible means of high-throughput, minimally invasive radiation biodosimetry. A simplified metabolomics protocol is described that employs ubiquitous gas chromatography-mass spectrometry and open-source software, including the random forests machine learning algorithm, to uncover latent biomarkers of 3 Gy gamma radiation in rats. Urine was collected from six male Wistar rats and six sham-irradiated controls for 7 days: 4 prior to irradiation and 3 after irradiation. Water and food consumption, urine volume, body weight, and sodium, potassium, calcium, chloride, phosphate and urea excretion showed major effects of exposure to gamma radiation. The metabolomics protocol uncovered several urinary metabolites that were significantly up-regulated (glyoxylate, threonate, thymine, uracil, p-cresol) or down-regulated (citrate, 2-oxoglutarate, adipate, pimelate, suberate, azelaate) as a result of radiation exposure. Thymine and uracil were shown to derive largely from thymidine and 2'-deoxyuridine, which are known radiation biomarkers in the mouse. The radiation metabolomic phenotype in rats appeared to derive from oxidative stress and effects on kidney function. Gas chromatography-mass spectrometry is a promising platform on which to develop the field of radiation metabolomics further and to assist in the design of instrumentation for detecting the biological consequences of environmental radiation release.
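As a rough illustration of how such up- and down-regulation calls might be quantified, the sketch below computes log2 fold changes between group means. The metabolite intensities, group sizes, and the bare ratio-of-means statistic are invented for illustration; they are not the study's data or its exact analysis.

```python
from statistics import mean
from math import log2

# Hypothetical normalized urinary peak intensities (arbitrary units);
# values are illustrative only, not measurements from the study.
irradiated = {"glyoxylate": [8.1, 9.4, 7.8], "citrate": [2.0, 1.7, 2.3]}
control    = {"glyoxylate": [3.9, 4.2, 4.1], "citrate": [6.2, 5.8, 6.4]}

def log2_fold_change(treated, ctrl):
    """log2 of the ratio of group means; > 0 means up-regulated."""
    return log2(mean(treated) / mean(ctrl))

for metabolite in irradiated:
    fc = log2_fold_change(irradiated[metabolite], control[metabolite])
    direction = "up" if fc > 0 else "down"
    print(f"{metabolite}: log2FC = {fc:+.2f} ({direction}-regulated)")
```

A real pipeline would of course pair such fold changes with a significance test across the animals in each group.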

Relevance:

100.00%

Publisher:

Abstract:

This paper investigates machine learning (ML) classification techniques to assist in the problem of flash flood nowcasting. We have been building a wireless sensor network (WSN) to collect measurements from a river located in an urban area. The ML classification methods were investigated with the aim of enabling flash flood nowcasting, which in turn allows the WSN to issue alerts to the local population. We evaluated several types of ML techniques, taking into account the different nowcasting stages (i.e., the number of future time steps to forecast). We also evaluated different data representations to be used as input to the ML techniques. The results show that different data representations can lead to significantly better results at different stages of nowcasting.
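The interplay between data representation and nowcasting stage can be illustrated with a toy sliding-window construction. The window size, forecast horizon, and flood threshold below are arbitrary assumptions, not the paper's settings:

```python
# Toy sketch: turning a river-level series into (features, label) pairs
# for h-step-ahead flood classification. All parameters are illustrative.
def make_dataset(levels, window=3, horizon=2, flood_threshold=5.0):
    """Each sample's features are the last `window` readings; its label
    says whether the level `horizon` steps ahead exceeds the threshold."""
    samples = []
    for t in range(window, len(levels) - horizon + 1):
        features = levels[t - window:t]
        label = levels[t + horizon - 1] >= flood_threshold
        samples.append((features, label))
    return samples

levels = [1.0, 1.2, 1.5, 2.1, 3.4, 5.2, 6.0, 4.8]
for x, y in make_dataset(levels):
    print(x, "->", "flood" if y else "no flood")
```

Varying `window` changes the data representation fed to the classifier, while varying `horizon` corresponds to the different nowcasting stages discussed above.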

Relevance:

100.00%

Publisher:

Abstract:

Finite element (FE) analysis is an important computational tool in biomechanics. However, its adoption into clinical practice has been hampered by its computational complexity and the high technical competence it requires of clinicians. In this paper we propose a supervised learning approach to predict the outcome of FE analysis. We demonstrate our approach on clinical CT and X-ray femur images for FE prediction (FEP), with features extracted, respectively, from a statistical shape model and from 2D-based morphometric and density information. Using leave-one-out experiments and sensitivity analysis on a database of 89 clinical cases, our method is capable of predicting the distribution of stress values for a walking loading condition with average correlation coefficients of 0.984 and 0.976 for CT and X-ray images, respectively. These findings suggest that supervised learning approaches have the potential to leverage the clinical integration of mechanical simulations for the treatment of musculoskeletal conditions.
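The leave-one-out protocol with a correlation-coefficient score can be sketched as follows. The 1-nearest-neighbour regressor here is only a stand-in for the paper's actual learner, and the feature and stress values are invented:

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sqrt(sum((x - ma) ** 2 for x in a))
    sb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def loo_predictions(features, targets):
    """Leave-one-out: predict each case from its nearest neighbour among
    the remaining cases (a toy stand-in for the trained surrogate)."""
    preds = []
    for i in range(len(features)):
        best_j = min((j for j in range(len(features)) if j != i),
                     key=lambda j: abs(features[j] - features[i]))
        preds.append(targets[best_j])
    return preds

# Illustrative 1-D "shape feature" vs. peak stress values (made up).
feats  = [1.0, 1.1, 2.0, 2.1, 3.0, 3.2]
stress = [10.0, 10.5, 20.0, 21.0, 30.0, 31.0]
preds = loo_predictions(feats, stress)
print("LOO correlation:", round(pearson(stress, preds), 3))
```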

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a shallow dialogue analysis model aimed at human-human dialogues in the context of staff or business meetings. Four components of the model are defined, and several machine learning techniques are used to extract features from dialogue transcripts: maximum entropy classifiers for dialogue acts, latent semantic analysis for topic segmentation, and decision tree classifiers for discourse markers. A rule-based approach is proposed for resolving cross-modal references to meeting documents. The methods are trained and evaluated on a common data set and annotation format. The integration of the components into an automated shallow dialogue parser opens the way to multimodal meeting processing and retrieval applications.
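A rule-based cross-modal reference resolver in this spirit might, in its simplest form, score candidate documents by keyword overlap with the utterance. The document names, keyword sets, and scoring rule below are purely illustrative assumptions, not the paper's rules:

```python
# Toy rule-based resolver: map a referring expression in an utterance
# to a meeting document by keyword overlap (all names hypothetical).
documents = {
    "agenda":      {"budget", "schedule", "agenda"},
    "slide_3":     {"sales", "chart", "quarterly"},
    "minutes_doc": {"minutes", "action", "items"},
}

def resolve_reference(utterance):
    """Return the document sharing the most keywords with the utterance,
    or None if no document matches at all."""
    words = set(utterance.lower().replace(",", "").split())
    scores = {doc: len(words & kws) for doc, kws in documents.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(resolve_reference("as the quarterly sales chart shows"))
```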

Relevance:

100.00%

Publisher:

Abstract:

There has been limited analysis of the effects of hepatocellular carcinoma (HCC) on liver metabolism and circulating endogenous metabolites. Here, we report the findings of a plasma metabolomic investigation of HCC patients by ultraperformance liquid chromatography-electrospray ionization-quadrupole time-of-flight mass spectrometry (UPLC-ESI-QTOFMS), the random forests machine learning algorithm, and multivariate data analysis. Control subjects included healthy individuals as well as patients with liver cirrhosis or acute myeloid leukemia. We found that HCC was associated with increased plasma levels of glycodeoxycholate, deoxycholate 3-sulfate, and bilirubin. Accurate mass measurement also indicated up-regulation of biliverdin and the fetal bile acids 7α-hydroxy-3-oxochol-4-en-24-oic acid and 3-oxochol-4,6-dien-24-oic acid in HCC patients. Quantitative lipid profiling of patient plasma was also conducted by ultraperformance liquid chromatography-electrospray ionization-triple quadrupole mass spectrometry (UPLC-ESI-TQMS). By this method, we found that HCC was also associated with reduced levels of lysophosphocholines and, in 4 of 20 patients, with increased levels of lysophosphatidic acid [LPA(16:0)], which correlated with plasma α-fetoprotein levels. Interestingly, when fatty acids were quantitatively profiled by gas chromatography-mass spectrometry (GC-MS), we found that lignoceric acid (24:0) and nervonic acid (24:1) were virtually absent from HCC plasma. Overall, this investigation illustrates the power of the discovery technology represented by the UPLC-ESI-QTOFMS platform, combined with the targeted, quantitative UPLC-ESI-TQMS and GC-MS platforms, for conducting metabolomic investigations that can engender new insights into cancer pathobiology.

Relevance:

100.00%

Publisher:

Abstract:

To enhance understanding of the metabolic indicators of type 2 diabetes mellitus (T2DM) disease pathogenesis and progression, the urinary metabolomes of well-characterized rhesus macaques (normal or spontaneously and naturally diabetic) were examined. High-resolution ultra-performance liquid chromatography coupled with the accurate mass determination of time-of-flight mass spectrometry was used to analyze spot urine samples from normal (n = 10) and T2DM (n = 11) male monkeys. The machine learning algorithm random forests was used to classify urine samples as coming from either normal or T2DM monkeys, and the metabolites important for developing the classifier were further examined for their biological significance. Random forests models had a misclassification error of less than 5%. Metabolites were identified based on accurate masses (<10 ppm) and confirmed by tandem mass spectrometry of authentic compounds. Urinary compounds significantly increased (p < 0.05) in the T2DM group compared with the normal group included glycine betaine (9-fold), citric acid (2.8-fold), kynurenic acid (1.8-fold), glucose (68-fold), and pipecolic acid (6.5-fold). When compared with the conventional definition of T2DM, the metabolites were also useful in defining the T2DM condition, and the urinary elevations in glycine betaine and pipecolic acid (as well as proline) indicated defective re-absorption in the kidney proximal tubules by SLC6A20, a Na(+)-dependent transporter. The mRNA levels of SLC6A20 were significantly reduced in the kidneys of monkeys with T2DM. These observations were validated in the db/db mouse model of T2DM. This study provides convincing evidence of the power of metabolomics for identifying functional changes at many levels in the omics pipeline.
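The "<10 ppm" identification criterion above is a simple mass-accuracy calculation: the relative deviation of the observed m/z from the theoretical m/z, in parts per million. A minimal sketch, with illustrative m/z values that are not the study's measurements:

```python
def ppm_error(observed_mz, theoretical_mz):
    """Mass accuracy in parts per million (ppm)."""
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

# Hypothetical [M+H]+ ion: theoretical vs. observed m/z (illustrative).
theoretical = 118.0865
observed = 118.0858

err = ppm_error(observed, theoretical)
print(f"{err:+.1f} ppm", "-> candidate match" if abs(err) < 10 else "-> reject")
```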

Relevance:

100.00%

Publisher:

Abstract:

Activation of the peroxisome proliferator-activated receptor alpha (PPARalpha) is associated with increased fatty acid catabolism and is commonly targeted for the treatment of hyperlipidemia. To identify latent, endogenous biomarkers of PPARalpha activation, and hence of increased fatty acid beta-oxidation, healthy human volunteers were given fenofibrate orally for 2 weeks and their urine was profiled by UPLC-QTOFMS. Biomarkers identified by the machine learning algorithm random forests included significant depletion by day 14 of both pantothenic acid (>5-fold) and acetylcarnitine (>20-fold), observations consistent with known targets of PPARalpha, including pantothenate kinase and genes encoding proteins involved in the transport and synthesis of acylcarnitines. Serum cholesterol (-12.7%), triglycerides (-25.6%), and uric acid (-34.7%), together with urinary propylcarnitine (>10-fold), isobutyrylcarnitine (>2.5-fold), (S)-(+)-2-methylbutyrylcarnitine (5-fold), and isovalerylcarnitine (>5-fold), were all reduced by day 14. The specificity of these biomarkers as indicators of PPARalpha activation was demonstrated using the Ppara-null mouse. Urinary pantothenic acid and acylcarnitines may prove to be useful indicators of PPARalpha-induced fatty acid beta-oxidation in humans. This study illustrates the utility of a pharmacometabolomic approach to understanding drug effects on lipid metabolism in both human populations and inbred mouse models.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we propose an intelligent method, named the Novelty Detection Power Meter (NodePM), to detect novelties in electronic equipment monitored by a smart grid. Considering the entropy of each monitored device, which is calculated based on a Markov chain model, the proposed method identifies novelties through a machine learning algorithm. To this end, the NodePM is integrated into a platform for the remote monitoring of energy consumption, which consists of a wireless sensor network (WSN). It should be stressed that, unlike many related works that are evaluated in simulated environments, our experiments were conducted in real environments. The results show that the NodePM reduces the power consumption of the monitored equipment by 13.7%. In addition, the NodePM detects novelties more efficiently than an approach from the literature, surpassing it in all the scenarios evaluated.
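One way to assign an entropy to a device modelled as a Markov chain is the chain's entropy rate, H = -Σ_i π_i Σ_j P_ij log2 P_ij, where π is the stationary distribution. The sketch below computes this for a hypothetical two-state (on/off) appliance model; the transition matrix is invented, and the paper's exact entropy formulation may differ:

```python
from math import log2

def stationary_distribution(P, iters=1000):
    """Approximate the stationary distribution by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy_rate(P):
    """H = -sum_i pi_i sum_j P_ij log2 P_ij, in bits per transition."""
    pi = stationary_distribution(P)
    return -sum(pi[i] * sum(p * log2(p) for p in row if p > 0)
                for i, row in enumerate(P))

# Hypothetical on/off transition matrix for one monitored appliance.
P = [[0.9, 0.1],
     [0.2, 0.8]]
print(round(entropy_rate(P), 4))
```

A sustained change in a device's estimated entropy rate could then serve as the novelty signal fed to the learning algorithm.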

Relevance:

90.00%

Publisher:

Abstract:

This paper presents a non-rigid free-form 2D-3D registration approach using a statistical deformation model (SDM). In our approach the SDM is first constructed from a set of training data using a non-rigid registration algorithm based on B-spline free-form deformation, to encode a priori information about the underlying anatomy. A novel intensity-based non-rigid 2D-3D registration algorithm is then presented to iteratively fit the 3D B-spline-based SDM to the 2D X-ray images of an unseen subject, which requires a computationally expensive inversion of the instantiated deformation in each iteration. We propose to solve this challenge with a fast B-spline pseudo-inversion algorithm implemented on a graphics processing unit (GPU). Experiments conducted on C-arm and X-ray images of cadaveric femurs demonstrate the efficacy of the proposed approach.
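The paper's GPU B-spline pseudo-inversion is specialized, but the general idea of inverting an instantiated deformation can be sketched in 1-D with a fixed-point iteration: if the forward map is x → x + u(x), the inverse displacement v satisfies v(y) = -u(y + v(y)). The displacement field below is an invented stand-in, not a B-spline field from the paper:

```python
from math import sin

def invert_displacement(u, y, iters=100):
    """Fixed-point iteration v <- -u(y + v); converges when u is a
    contraction (|u'| < 1), giving the inverse displacement at y."""
    v = 0.0
    for _ in range(iters):
        v = -u(y + v)
    return v

# Hypothetical smooth 1-D forward displacement (illustrative only).
u = lambda x: 0.2 * sin(x)

y = 1.0                      # point in the target image
v = invert_displacement(u, y)
x = y + v                    # pre-image under the forward map
print(round(x + u(x), 6))    # -> 1.0, i.e. the forward map returns to y
```

In 3-D the same iteration runs per voxel, which is why a parallel GPU implementation pays off.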

Relevance:

90.00%

Publisher:

Abstract:

We present a novel surrogate-model-based global optimization framework allowing a large number of function evaluations. The method, called SpLEGO, is based on a multi-scale expected improvement (EI) framework relying on both sparse and local Gaussian process (GP) models. First, a bi-objective approach relying on a global sparse GP model is used to determine potential next sampling regions. Local GP models are then constructed within each selected region. The method subsequently employs the standard expected improvement criterion to deal with the exploration-exploitation trade-off within the selected local models, leading to a decision on where to perform the next function evaluation(s). The potential of our approach is demonstrated using the so-called sparse pseudo-input GP as the global model. The algorithm is tested on four benchmark problems whose numbers of starting points range from 10² to 10⁴. Our results show that SpLEGO is effective and capable of solving problems with a large number of starting points, and that it even provides significant advantages when compared with state-of-the-art EI algorithms.
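The standard expected improvement criterion has a closed form under a Gaussian posterior: for minimization, EI(x) = (f_best - μ)Φ(z) + σφ(z) with z = (f_best - μ)/σ. A minimal sketch, with made-up posterior means and standard deviations standing in for a fitted GP:

```python
from math import erf, exp, pi, sqrt

def expected_improvement(mu, sigma, f_best):
    """EI for minimization under a Gaussian posterior N(mu, sigma^2)."""
    if sigma == 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))      # standard normal CDF
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)    # standard normal PDF
    return (f_best - mu) * Phi + sigma * phi

# GP posterior at two hypothetical candidate points; best value seen is 1.0.
for mu, sigma in [(0.8, 0.1), (1.2, 0.5)]:
    print(round(expected_improvement(mu, sigma, f_best=1.0), 4))
```

The criterion rewards both low predicted mean (exploitation) and high predictive uncertainty (exploration), which is the trade-off the local models above must balance.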

Relevance:

80.00%

Publisher:

Abstract:

Robust and accurate identification of intervertebral discs from low-resolution, sparse MRI scans is essential for automated scan planning of MRI spine examinations. This paper presents a graphical-model-based solution for detecting both the positions and the orientations of intervertebral discs from low-resolution, sparse MRI scans. Compared with existing graphical-model-based methods, the proposed method does not need a training process using training data, and it can automatically determine the number of vertebrae visible in the image. Experiments on 25 low-resolution, sparse spine MRI data sets verified its performance.

Relevance:

80.00%

Publisher:

Abstract:

The task considered in this paper is the performance evaluation of region segmentation algorithms in the ground-truth-based paradigm: given a machine segmentation and a ground-truth segmentation, performance measures are needed. We propose to treat the image segmentation problem as one of data clustering and, as a consequence, to use measures for comparing clusterings that were developed in statistics and machine learning. By doing so, we obtain a variety of performance measures that have not been used before in image processing. In particular, some of these measures have the highly desirable property of being a metric. Experimental results are reported on both synthetic and real data to validate the measures and compare them with others.
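One of the simplest clustering-comparison measures of this kind is the Rand index: the fraction of point pairs on which two labelings agree (same cluster in both, or different clusters in both). A minimal sketch with invented label vectors; the paper's actual choice of measures may differ:

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of point pairs on which two segmentations agree
    (same-segment in both, or different-segment in both)."""
    agree = 0
    pairs = list(combinations(range(len(labels_a)), 2))
    for i, j in pairs:
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += same_a == same_b
    return agree / len(pairs)

machine      = [0, 0, 1, 1, 2, 2]   # hypothetical per-pixel segment labels
ground_truth = [0, 0, 1, 1, 1, 2]   # hypothetical ground-truth labels
print(round(rand_index(machine, ground_truth), 3))  # -> 0.8
```

Treating pixels as data points and segments as clusters, such pair-counting measures apply directly to region segmentations.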