62 results for Semi-implicit methods
at Université de Lausanne, Switzerland
Abstract:
Advancements in high-throughput technologies to measure increasingly complex biological phenomena at the genomic level are rapidly changing the face of biological research from the single-gene single-protein experimental approach to studying the behavior of a gene in the context of the entire genome (and proteome). This shift in research methodologies has resulted in a new field of network biology that deals with modeling cellular behavior in terms of network structures such as signaling pathways and gene regulatory networks. In these networks, different biological entities such as genes, proteins, and metabolites interact with each other, giving rise to a dynamical system. Even though there exists a mature field of dynamical systems theory to model such network structures, some technical challenges are unique to biology such as the inability to measure precise kinetic information on gene-gene or gene-protein interactions and the need to model increasingly large networks comprising thousands of nodes. These challenges have renewed interest in developing new computational techniques for modeling complex biological systems. This chapter presents a modeling framework based on Boolean algebra and finite-state machines that are reminiscent of the approach used for digital circuit synthesis and simulation in the field of very-large-scale integration (VLSI). The proposed formalism enables a common mathematical framework to develop computational techniques for modeling different aspects of the regulatory networks such as steady-state behavior, stochasticity, and gene perturbation experiments.
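As a hedged illustration of the Boolean/finite-state-machine formalism the chapter describes, the sketch below simulates a tiny synchronous Boolean network and enumerates its attractors; the three genes and their update rules are invented for the example, not taken from the chapter.

```python
# Minimal sketch of a synchronous Boolean network, in the spirit of the
# finite-state-machine formalism described in the abstract. The three-gene
# rules below are hypothetical, chosen only to illustrate the mechanics.
from itertools import product

# Each gene's next state is a Boolean function of the current state vector.
rules = {
    "geneA": lambda s: s["geneC"],                      # A activated by C
    "geneB": lambda s: s["geneA"] and not s["geneC"],   # B needs A, repressed by C
    "geneC": lambda s: not s["geneB"],                  # C repressed by B
}

def step(state):
    """Synchronous update: all genes read the same current state."""
    return {g: f(state) for g, f in rules.items()}

def attractor(state, max_steps=64):
    """Iterate until a previously seen state recurs (fixed point or cycle)."""
    seen = []
    for _ in range(max_steps):
        if state in seen:
            return seen[seen.index(state):]    # the recurring cycle
        seen.append(state)
        state = step(state)

# Exhaustive scan of the 2^3 state space to enumerate steady-state behavior.
for bits in product([False, True], repeat=3):
    cyc = attractor(dict(zip(rules, bits)))
    print(bits, "->", "fixed point" if len(cyc) == 1 else f"cycle of length {len(cyc)}")
```

Gene perturbation experiments can be mimicked by clamping a gene's rule to a constant, and stochasticity by flipping each updated bit with a small probability.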
Abstract:
Ultrasound segmentation is a challenging problem due to the inherent speckle and artifacts such as shadows, attenuation, and signal dropout. Existing methods need to include strong priors, such as shape priors or analytical intensity models, to succeed at the segmentation. However, such priors tend to limit these methods to a specific target or imaging setting, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates this limitation of fully automatic segmentation: it is applicable to any kind of target and imaging setting. Our methodology represents the ultrasound image as a graph of image patches and uses a user-assisted initialization with labels, which act as soft priors. The segmentation problem is formulated as a continuous minimum cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound imaging (prostate, fetus, and tumors of the liver and eye). We obtain high similarity agreement with ground truth provided by medical expert delineations in all applications (94% average Dice value), and the proposed algorithm compares favorably with the literature.
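The continuous min-cut solver itself is beyond a short snippet; as a simplified, hedged stand-in, the sketch below builds the patch graph and propagates the user's seed labels by harmonic (random-walker-style) interpolation. The edge weighting and threshold are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: graph of image patches with user seeds, solved by harmonic label
# propagation (a simplified stand-in for the continuous min-cut in the paper).
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def patch_graph_segment(patches, edges, seeds, beta=10.0):
    """patches: (n, d) array of patch features; edges: list of (i, j) pairs;
    seeds: dict node -> label in {0, 1} from user strokes (soft priors)."""
    n = patches.shape[0]
    W = lil_matrix((n, n))
    for i, j in edges:
        w = np.exp(-beta * np.sum((patches[i] - patches[j]) ** 2))
        W[i, j] = W[j, i] = w
    D = lil_matrix((n, n))
    D.setdiag(np.asarray(W.sum(axis=1)).ravel())
    L = (D - W).tocsr()                        # graph Laplacian
    labeled = np.array(sorted(seeds))
    free = np.setdiff1d(np.arange(n), labeled)
    y = np.array([seeds[i] for i in labeled], dtype=float)
    # Solve L_ff x_f = -L_fl y  (harmonic interpolation of the seeds).
    x = spsolve(L[free][:, free], -(L[free][:, labeled] @ y))
    out = np.empty(n)
    out[labeled], out[free] = y, x
    return out > 0.5                           # binary segmentation mask
```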
Abstract:
OBJECTIVE: The principal aim of this study was to develop a Swiss Food Frequency Questionnaire (FFQ) for the elderly population, for use in a study investigating the influence of nutritional factors on bone health. The secondary aim was to assess its validity and both its short-term and long-term reproducibility. DESIGN: A 4-day weighed record (4-d WR) was applied to 51 randomly selected women with a mean age of 80.3 years. Subsequently, a detailed FFQ was developed and cross-validated against a further 44 4-d WRs, and the short-term (1 month, n = 15) and long-term (12 months, n = 14) reproducibility was examined. SETTING: French-speaking part of Switzerland. SUBJECTS: The subjects were randomly selected women recruited from the Swiss Evaluation of the Methods of Measurement of Osteoporotic Fracture cohort study. RESULTS: Mean energy intakes by 4-d WR and FFQ showed no significant difference [1564.9 kcal (SD 351.1) and 1641.3 kcal (SD 523.2), respectively]. Mean crude nutrient intakes were also similar, with nonsignificant P-values for the differences in intake ranging from 0.13 (potassium) to 0.48 (magnesium). Similar results were found in the reproducibility studies. CONCLUSION: These findings provide evidence that this FFQ adequately estimates nutrient intakes and can be used to rank individuals within distributions of intake in specific populations.
Abstract:
Background: In patients with cervical spine injury, a cervical collar may prevent cervical spine movements but renders tracheal intubation with a standard laryngoscope difficult, if not impossible. We hypothesized that, despite the presence of a semi-rigid cervical collar and with the patient's head taped to the trolley, we would be able to intubate all patients with the GlideScope® and its dedicated stylet. Methods: 50 adult patients (ASA 1 or 2, BMI ≤35 kg/m²) scheduled for elective surgical procedures requiring tracheal intubation were included. After standardized induction of general anesthesia and neuromuscular blockade, the neck was immobilized with an appropriately sized semi-rigid Philadelphia Patriot® cervical collar and the head was taped to the trolley. Laryngoscopy was attempted using a Macintosh laryngoscope blade 4 and the modified Cormack-Lehane grade was noted. Subsequently, laryngoscopy with the GlideScope® was graded and followed by oro-tracheal intubation. Results: All patients were successfully intubated with the GlideScope® and its dedicated stylet. The median intubation time was 50 sec [43; 61]. The modified Cormack-Lehane grade was 3 or 4 at direct laryngoscopy. It was significantly reduced with the GlideScope® (p < 0.0001), reaching 2a in most patients. Maximal mouth opening was significantly reduced with the cervical collar applied: 4.5 cm [4.5; 5.0] vs. 2.0 cm [1.8; 2.0] (p < 0.0001). Conclusions: The GlideScope® allows oro-tracheal intubation in patients whose cervical spine is immobilized by a semi-rigid collar and whose head is taped to the trolley. It furthermore significantly decreases the modified Cormack-Lehane grade.
Abstract:
This review paper reports the consensus of a technical workshop hosted by the European network NanoImpactNet (NIN). The workshop aimed to review the collective experience of working at the bench with manufactured nanomaterials (MNMs) and to recommend modifications to existing experimental methods and OECD protocols. Current procedures for cleaning glassware are appropriate for most MNMs, although interference with electrodes may occur. Maintaining exposure is more difficult with MNMs than with conventional chemicals. A metal salt control is recommended for experiments with metallic MNMs that may release free metal ions. Dispersing agents should be avoided, but if they must be used, then natural or synthetic dispersing agents are possible, and dispersion controls are essential. Time constraints and technology gaps indicate that full characterisation of test media during ecotoxicity tests is currently not practical. Details of electron microscopy, dark-field microscopy, a range of spectroscopic methods (EDX, XRD, XANES, EXAFS), light scattering techniques (DLS, SLS) and chromatography are discussed. The development of user-friendly software to predict particle behaviour in test media according to DLVO theory is in progress, and simple optical methods are available to estimate the settling behaviour of suspensions during experiments. However, for soil matrices such simple approaches may not be applicable. Alternatively, a Critical Body Residue approach may be taken, in which body concentrations in organisms are related to effects and toxicity thresholds are derived. For microbial assays, the cell wall is a formidable barrier to MNMs, and end points that rely on the test substance penetrating the cell may be insensitive; instead, assays based on the cell envelope should be developed for MNMs. In algal growth tests, the abiotic factors that promote particle aggregation in the media (e.g. ionic strength) are also important in providing nutrients, and manipulation of the media to control the dispersion may also inhibit growth. Controls to quantify shading effects, and precise details of lighting regimes, shaking or mixing, should be reported in algal tests. Photosynthesis may be a more sensitive end point than traditional growth measures for algae and plants. Tests with invertebrates should consider non-chemical toxicity from particle adherence to the organisms. The use of semi-static exposure methods with fish can reduce the logistical issues of waste water disposal and facilitate aspects of animal husbandry relevant to MNMs. There are concerns that the existing bioaccumulation tests are conceptually flawed for MNMs and that new tests are required. In vitro testing strategies, as exemplified by genotoxicity assays, can be modified for MNMs, but the risk of false negatives in some assays is highlighted. In conclusion, most protocols will require some modifications, and recommendations are made to aid the researcher at the bench.
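As a rough, hedged illustration of the DLVO-type calculation such software performs, the sketch below evaluates the textbook sphere-sphere Derjaguin approximations for the total interaction energy; all parameter values are hypothetical and none of this comes from the NIN workshop itself.

```python
# Rough DLVO sketch: total interaction energy between two equal spheres in
# electrolyte, using textbook Derjaguin approximations. Illustrative only;
# the parameter values below are hypothetical, not from the NIN workshop.
import numpy as np

eps = 78.5 * 8.854e-12       # permittivity of water (F/m)
kT = 1.381e-23 * 298         # thermal energy at 25 degC (J)
A_H = 1e-20                  # Hamaker constant (J), typical oxide in water
a = 50e-9                    # particle radius (m)
psi0 = -0.030                # surface potential (V), ~zeta potential
kappa = 1.0 / 9.6e-9         # inverse Debye length (1/m), ~1 mM 1:1 salt

h = np.linspace(0.3e-9, 30e-9, 300)         # surface-to-surface separation (m)
V_vdw = -A_H * a / (12 * h)                  # van der Waals attraction
V_edl = 2 * np.pi * eps * a * psi0**2 * np.log(1 + np.exp(-kappa * h))
V_tot = (V_vdw + V_edl) / kT                 # total energy in kT units

print(f"energy barrier ~ {V_tot.max():.1f} kT")  # a high barrier suggests a stable dispersion
```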
Abstract:
Segmenting ultrasound images is a challenging problem where standard unsupervised segmentation methods, such as the well-known Chan-Vese method, fail. We propose in this paper an efficient segmentation method for this class of images. Our proposed algorithm is based on a semi-supervised approach (user labels) and the use of image patches as data features. We also consider the Pearson distance between patches, which has been shown to be robust with respect to the speckle noise present in ultrasound images. Our results on phantom and clinical data show a very high similarity agreement with the ground truth provided by a medical expert.
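A hedged sketch of the Pearson distance between patches, the similarity measure the abstract credits with robustness to speckle; the patch size and helper functions are illustrative assumptions, not the paper's implementation.

```python
# Sketch: Pearson distance between image patches, the patch-similarity
# measure the abstract credits with robustness to speckle noise.
import numpy as np

def pearson_distance(p, q, eps=1e-12):
    """1 - Pearson correlation between two flattened patches.
    0 for identical patterns, up to 2 for anti-correlated ones; invariant
    to affine intensity changes, which helps under multiplicative speckle."""
    p, q = p.ravel().astype(float), q.ravel().astype(float)
    p = (p - p.mean()) / (p.std() + eps)
    q = (q - q.mean()) / (q.std() + eps)
    return 1.0 - float(np.dot(p, q)) / p.size

def extract_patch(image, row, col, half=3):
    """7x7 patch centered at (row, col); the size is an illustrative choice."""
    return image[row - half:row + half + 1, col - half:col + half + 1]
```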
Abstract:
A semisupervised support vector machine is presented for the classification of remote sensing images. The method exploits the wealth of unlabeled samples for regularizing the training kernel representation locally by means of cluster kernels. The method learns a suitable kernel directly from the image and thus avoids assuming a priori signal relations by using a predefined kernel structure. Good results are obtained in image classification examples when few labeled samples are available. The method scales almost linearly with the number of unlabeled samples and provides out-of-sample predictions.
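One common cluster-kernel construction is a bagged k-means kernel blended with an RBF kernel; the sketch below is a hedged illustration of that idea, and its blend and parameters are assumptions rather than necessarily the paper's exact formulation.

```python
# Sketch: semi-supervised cluster kernel. Unlabeled pixels shape the kernel:
# two samples grow more similar if repeated k-means runs co-cluster them.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def bagged_cluster_kernel(X, n_runs=20, k=10, seed=0):
    rng = np.random.RandomState(seed)
    agree = np.zeros((len(X), len(X)))
    for _ in range(n_runs):
        labels = KMeans(n_clusters=k, n_init=1,
                        random_state=rng.randint(1 << 30)).fit_predict(X)
        agree += (labels[:, None] == labels[None, :])
    return agree / n_runs                      # co-clustering frequency in [0, 1]

# X_all stacks the few labeled pixels with many unlabeled ones.
def train(X_all, idx_labeled, y, gamma=1.0, mu=0.5, C=10.0):
    K = mu * rbf_kernel(X_all, gamma=gamma) + (1 - mu) * bagged_cluster_kernel(X_all)
    clf = SVC(kernel="precomputed", C=C).fit(K[np.ix_(idx_labeled, idx_labeled)], y)
    return clf, K

# Out-of-sample prediction for sample i:
#   clf.decision_function(K[np.ix_([i], idx_labeled)])
```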
Abstract:
Fluvial deposits are a challenge for modelling flow in sub-surface reservoirs. The connectivity and continuity of permeable bodies have a major impact on fluid flow in porous media. Contemporary object-based and multipoint statistics methods face a problem of robust representation of connected structures. An alternative approach to modelling petrophysical properties is based on a machine learning algorithm, Support Vector Regression (SVR). Semi-supervised SVR is able to establish spatial connectivity while taking into account prior knowledge of natural similarities. As a learning algorithm, SVR is robust to noise and captures dependencies from all available data. Semi-supervised SVR applied to a synthetic fluvial reservoir demonstrated robust results that are well matched to the flow performance.
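A hedged sketch of SVR for spatial property prediction: synthetic well data, with a hypothetical "distance to channel" feature standing in for the prior knowledge of natural similarities mentioned above; none of the values come from the paper.

```python
# Sketch: SVR interpolating a petrophysical property (e.g., permeability)
# from sparse well data. Synthetic values; the extra "channel distance"
# feature is a hypothetical way to inject geological prior knowledge.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
wells_xy = rng.uniform(0, 1000, size=(30, 2))          # well locations (m)
dist_to_channel = np.abs(wells_xy[:, 1] - 500.0)       # prior: channel at y = 500 m
X = np.column_stack([wells_xy, dist_to_channel])
perm = 500 * np.exp(-dist_to_channel / 100) + rng.normal(0, 10, 30)  # mD

model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=100.0, epsilon=5.0)).fit(X, perm)

# Predict on a dense grid to map connected high-permeability bodies.
gx, gy = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
grid = np.column_stack([gx.ravel(), gy.ravel(), np.abs(gy.ravel() - 500.0)])
perm_map = model.predict(grid).reshape(gx.shape)
```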
Abstract:
The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and/or spectral resolution images is treated with data-driven approaches relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of an image's pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent, is studied through the different models presented. The accent is put on algorithmic efficiency and on the simplicity of the proposed approaches, to avoid overly complex models that users would not adopt. The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information are the common root of the second and third models: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the data description. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model: by integrating output similarity, a source of information not previously considered in remote sensing, into the model, it opens new challenges and opportunities for remote sensing image processing.
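A hedged sketch of the active-learning interaction behind the second model: an SVM queries the user for the labels of the pixels it is least certain about. The margin heuristic and batch size are generic choices, not necessarily the thesis's criterion, and a binary task is assumed.

```python
# Sketch: pool-based active learning for pixel classification. At each
# round the SVM asks the user to label the pixels it is least sure about
# (smallest decision margin), growing a compact, informative training set.
import numpy as np
from sklearn.svm import SVC

def active_learning(X_pool, oracle, idx_seed, rounds=10, batch=5):
    """oracle(i) returns the user-provided label of pool sample i (binary task)."""
    labeled = list(idx_seed)
    y = [oracle(i) for i in labeled]
    for _ in range(rounds):
        clf = SVC(kernel="rbf", gamma="scale").fit(X_pool[labeled], y)
        margin = np.abs(clf.decision_function(X_pool))   # distance to boundary
        margin[labeled] = np.inf                         # skip already-labeled
        query = np.argsort(margin)[:batch]               # most uncertain pixels
        labeled.extend(query)
        y.extend(oracle(i) for i in query)
    clf = SVC(kernel="rbf", gamma="scale").fit(X_pool[labeled], y)  # final refit
    return clf, labeled
```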
Abstract:
We show how nonlinear embedding algorithms popular for use with shallow semi-supervised learning techniques such as kernel methods can be applied to deep multilayer architectures, either as a regularizer at the output layer, or on each layer of the architecture. This provides a simple alternative to existing approaches to deep learning whilst yielding competitive error rates compared to those methods, and existing shallow semi-supervised techniques.
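A hedged PyTorch sketch of the idea: a margin-based embedding loss on a hidden layer is added to the supervised loss, so neighbor pairs (e.g., nearest neighbors among unlabeled data) are pulled together and non-neighbors pushed apart. Layer sizes, margin, and weighting are illustrative, not the paper's settings.

```python
# Sketch: semi-supervised deep learning with an embedding regularizer on a
# hidden layer, added to the usual supervised cross-entropy loss.
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, d_in=64, d_hid=32, n_cls=10):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.out = nn.Linear(d_hid, n_cls)

    def forward(self, x):
        h = self.hidden(x)          # embedding layer being regularized
        return self.out(h), h

def embedding_loss(h_a, h_b, is_neighbor, margin=1.0):
    """Pull neighbor pairs together, push others at least `margin` apart."""
    d = (h_a - h_b).pow(2).sum(dim=1)
    pull = is_neighbor * d
    push = (1 - is_neighbor) * torch.clamp(margin - (d + 1e-12).sqrt(), min=0).pow(2)
    return (pull + push).mean()

net, ce = Net(), nn.CrossEntropyLoss()
opt = torch.optim.SGD(net.parameters(), lr=0.01)

def train_step(x_lab, y_lab, x_a, x_b, is_neighbor, lam=0.1):
    logits, _ = net(x_lab)          # labeled batch
    _, h_a = net(x_a)               # unlabeled pair, side A
    _, h_b = net(x_b)               # unlabeled pair, side B
    loss = ce(logits, y_lab) + lam * embedding_loss(h_a, h_b, is_neighbor)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```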
Abstract:
Background: This trial was conducted to evaluate the safety and immunogenicity of two virosome-formulated malaria peptidomimetics, derived from Plasmodium falciparum AMA-1 and CSP, in malaria semi-immune adults and children. Methods: The design was a prospective, randomized, double-blind, controlled, age-de-escalating study with two immunizations. Ten adults and 40 children (aged 5-9 years) living in a malaria-endemic area were immunized with PEV3B or the virosomal influenza vaccine Inflexal® V on days 0 and 90. Results: No serious or severe adverse events (AEs) related to the vaccines were observed. The only local solicited AE reported was pain at the injection site, which affected more children in the Inflexal® V group than in the PEV3B group (p = 0.014). In the PEV3B group, IgG ELISA endpoint titers specific for the AMA-1 and CSP peptide antigens were significantly higher at most time points compared to the Inflexal® V control group. Across all time points after the first immunization, the average ratio of endpoint titers to baseline values in PEV3B subjects ranged from 4 to 15 in adults and from 4 to 66 in children. As an exploratory outcome, we found that the incidence rate of clinical malaria episodes in vaccinated children was half the rate of the control children between study days 30 and 365 (0.0035 episodes per day at risk for PEV3B vs. 0.0069 for Inflexal® V; RR = 0.50 [95% CI: 0.29-0.88], p = 0.02). Conclusion: These findings provide a strong basis for the further development of multivalent virosomal malaria peptide vaccines.
Abstract:
BACKGROUND: Endurance athletes are advised to optimize nutrition prior to races. Little is known about athletes' actual beliefs, knowledge, and nutritional behaviour. We monitored the nutritional behaviour of amateur ski-mountaineering athletes during the 4 days prior to a major competition to compare it with official recommendations and with the athletes' beliefs. METHODS: Participants in the two routes of the 'Patrouille des Glaciers' were recruited (A: 26 km, ascent 1881 m, descent 2341 m, maximum altitude 3160 m; Z: 53 km, ascent 3994 m, descent 4090 m, maximum altitude 3650 m). Dietary intake diaries of 40 athletes (21 A, 19 Z) were analysed for energy, carbohydrate, fat, protein, and liquid; ten were interviewed about their pre-race nutritional beliefs and behaviour. RESULTS: Despite the belief that pre-race carbohydrate, energy, and fluid intake should be increased, energy consumption was 2416 ± 696 (mean ± SD) kcal·day⁻¹, 83 ± 17% of the recommended intake; carbohydrate intake was only 46 ± 13% of the minimum recommendation (10 g·kg⁻¹·day⁻¹) and fluid intake only 2.7 ± 1.0 l·day⁻¹. CONCLUSIONS: Our sample of endurance athletes did not comply with pre-race nutritional recommendations despite elementary knowledge and the belief that they were compliant. These athletes lacked a clear and reflective nutritional strategy. This suggests a potential for improving knowledge of and compliance with recommendations. Alternatively, some recommendations may be unrealistic.
Abstract:
BACKGROUND: Several analysis software packages for myocardial blood flow (MBF) quantification from cardiac PET studies exist, but they have not been compared using concordance analysis, which can characterize precision and bias separately. Reproducible measurements are needed for quantification to fully develop its clinical potential. METHODS: Fifty-one patients underwent dynamic Rb-82 PET at rest and during adenosine stress. Data were processed with PMOD and FlowQuant (Lortie model). MBF and myocardial flow reserve (MFR) polar maps were quantified and analyzed using a 17-segment model. Comparisons used Pearson's correlation ρ (measuring precision), Bland-Altman limits of agreement, and Lin's concordance correlation ρc = ρ·Cb (Cb measuring systematic bias). RESULTS: Lin's concordance and Pearson's correlation values were very similar, suggesting no systematic bias between software packages, with excellent precision for MBF (ρ = 0.97, ρc = 0.96, Cb = 0.99) and good precision for MFR (ρ = 0.83, ρc = 0.76, Cb = 0.92). On a per-segment basis, no mean bias was observed on Bland-Altman plots, although PMOD provided slightly higher values than FlowQuant at higher MBF and MFR values (P < .0001). CONCLUSIONS: Concordance between software packages was excellent for MBF and MFR, despite higher values by PMOD at higher MBF values. Both software packages can be used interchangeably for quantification in the daily practice of Rb-82 cardiac PET.
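A hedged numpy sketch of the concordance statistics used above, following the standard formulas for Pearson's ρ and Lin's ρc = ρ·Cb; the per-segment sample values are invented.

```python
# Sketch: Lin's concordance correlation rho_c = rho * Cb, separating
# precision (rho) from systematic bias (Cb), as used in the abstract.
import numpy as np

def concordance(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    rho = np.corrcoef(x, y)[0, 1]                    # precision
    sx, sy = x.std(), y.std()
    rho_c = (2 * rho * sx * sy) / (sx**2 + sy**2 + (x.mean() - y.mean())**2)
    return rho, rho_c, rho_c / rho                   # rho, rho_c, Cb

# Invented per-segment MBF values from two software packages:
pmod = np.array([0.9, 1.1, 2.4, 3.0, 1.8, 2.2])
flowquant = np.array([0.9, 1.0, 2.3, 2.8, 1.7, 2.1])
rho, rho_c, cb = concordance(pmod, flowquant)
print(f"rho={rho:.3f} rho_c={rho_c:.3f} Cb={cb:.3f}")
```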
Abstract:
PURPOSE: To evaluate a diagnostic strategy for pulmonary embolism that combined clinical assessment, plasma D-dimer measurement, lower limb venous ultrasonography, and helical computed tomography (CT). METHODS: A cohort of 965 consecutive patients presenting to the emergency departments of three general and teaching hospitals with clinically suspected pulmonary embolism underwent sequential noninvasive testing. Clinical probability was assessed by a prediction rule combined with implicit judgment. All patients were followed for 3 months. RESULTS: A normal D-dimer level (<500 microg/L by a rapid enzyme-linked immunosorbent assay) ruled out venous thromboembolism in 280 patients (29%), and finding a deep vein thrombosis by ultrasonography established the diagnosis in 92 patients (9.5%). Helical CT was required in only 593 patients (61%) and showed pulmonary embolism in 124 patients (12.8%). Pulmonary embolism was considered ruled out in the 450 patients (46.6%) with a negative ultrasound and CT scan and a low-to-intermediate clinical probability. The 8 patients with a negative ultrasound and CT scan despite a high clinical probability proceeded to pulmonary angiography (positive: 2; negative: 6). Helical CT was inconclusive in 11 patients (pulmonary embolism: 4; no pulmonary embolism: 7). The overall prevalence of pulmonary embolism was 23%. Patients classified as not having pulmonary embolism were not anticoagulated during follow-up and had a 3-month thromboembolic risk of 1.0% (95% confidence interval: 0.5% to 2.1%). CONCLUSION: A noninvasive diagnostic strategy combining clinical assessment, D-dimer measurement, ultrasonography, and helical CT yielded a diagnosis in 99% of outpatients suspected of pulmonary embolism, and appeared to be safe, provided that CT was combined with ultrasonography to rule out the disease.
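A hedged sketch encoding the sequential decision logic described above, purely to make the ordering of tests explicit; it is an illustrative toy, not a validated clinical tool.

```python
# Sketch of the sequential diagnostic strategy in the abstract:
# D-dimer -> ultrasound -> helical CT -> angiography (high probability only).
# Illustrative encoding only, not a clinical decision tool.
def pe_workup(d_dimer_ug_l, clinical_probability, us_dvt, ct_pe, angio_pe=None):
    """clinical_probability in {'low', 'intermediate', 'high'};
    us_dvt / ct_pe / angio_pe: test results, True/False (None if not done)."""
    if d_dimer_ug_l < 500:                     # rapid ELISA cutoff
        return "PE ruled out (normal D-dimer)"
    if us_dvt:                                 # DVT on ultrasound establishes VTE
        return "PE diagnosed (DVT on ultrasonography)"
    if ct_pe:
        return "PE diagnosed (helical CT)"
    if clinical_probability in ("low", "intermediate"):
        return "PE ruled out (negative US and CT, low/intermediate probability)"
    # Negative US and CT despite high clinical probability: angiography.
    if angio_pe is None:
        return "Proceed to pulmonary angiography"
    return "PE diagnosed (angiography)" if angio_pe else "PE ruled out (angiography)"
```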
Abstract:
The mouse has emerged as an animal model for many diseases. At IRO, we have used this animal to understand the development of many eye diseases and the treatment of some of them. Precise evaluation of vision is a prerequisite for both of these approaches. In this unit we describe three ways to measure vision: testing the optokinetic response, and evaluating the fundus by direct observation and by fluorescein angiography.