878 results for Multi-resolution Method
Abstract:
Although fetal anatomy can be adequately viewed in new multi-slice MR images, many critical limitations remain for quantitative data analysis. To this end, several research groups have recently developed advanced image processing methods, often denoted super-resolution (SR) techniques, to reconstruct a high-resolution (HR), motion-free volume from a set of clinical low-resolution (LR) images. The reconstruction is usually modeled as an inverse problem in which the regularization term plays a central role in the reconstruction quality. The literature has been strongly drawn to Total Variation (TV) energies because of their edge-preserving ability, but only standard explicit steepest-descent gradient techniques have been applied for their optimization. In a preliminary work, it was shown that recent fast convex optimization techniques could be successfully applied to design an efficient Total Variation optimization algorithm for the super-resolution problem. In this work, two major contributions are presented. Firstly, we briefly review the Bayesian and variational dual formulations of current state-of-the-art methods dedicated to fetal MRI reconstruction. Secondly, we present an extensive quantitative evaluation of our previously introduced SR algorithm on both simulated fetal and real clinical data (with both normal and pathological subjects). Specifically, we study the robustness of the regularization terms with respect to residual registration errors, and we also present a novel strategy for automatically selecting the weight of the regularization relative to the data-fidelity term. Our results show that our TV implementation is highly robust to motion artifacts and that it offers the best trade-off between speed and accuracy for fetal MRI recovery in comparison with state-of-the-art methods.
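To make the variational formulation above concrete, a generic TV-regularized super-resolution model can be written as follows (a sketch of the standard formulation only; the exact operators and weighting used by the authors are not given in the abstract):

\hat{x} = \arg\min_{x} \sum_{k} \tfrac{1}{2} \| D_k B_k M_k x - y_k \|_2^2 + \lambda \, \mathrm{TV}(x), \qquad \mathrm{TV}(x) = \int_{\Omega} |\nabla x| \, \mathrm{d}\Omega

where y_k are the low-resolution slice stacks, M_k, B_k and D_k model slice motion, blurring and downsampling, x is the high-resolution volume, and \lambda is the regularization weight whose automatic selection is addressed in the paper.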
Abstract:
UNLABELLED: In vivo transcriptional analyses of microbial pathogens are often hampered by the low proportion of pathogen biomass in host organs, hindering full coverage of the pathogen transcriptome. We aimed to address the transcriptome profiles of Candida albicans, the most prevalent fungal pathogen in systemically infected immunocompromised patients, during systemic infection in different hosts. We developed a strategy for high-resolution quantitative analysis of the C. albicans transcriptome directly from early and late stages of systemic infection in two different host models, the mouse and the insect Galleria mellonella. Our results show that transcriptome sequencing (RNA-seq) libraries were enriched for fungal transcripts up to 1,600-fold using biotinylated bait probes to capture C. albicans sequences. This enrichment biased the read counts of only ~3% of the genes, which can be identified and removed based on a priori criteria. This allowed an unprecedented resolution of the C. albicans transcriptome in vivo, with detection of over 86% of its genes. The transcriptional response of the fungus was surprisingly similar during infection of the two hosts and at the two time points, although some host- and time-point-specific genes could be identified. Genes that were highly induced during infection were involved, for instance, in stress response, adhesion, iron acquisition, and biofilm formation. Of the in vivo-regulated genes, 10% are still of unknown function, and their future study will be of great interest. The fungal RNA enrichment procedure used here will enable a better characterization of the C. albicans response in infected hosts and may be applied to other microbial pathogens. IMPORTANCE: Understanding the mechanisms utilized by pathogens to infect and cause disease in their hosts is crucial for rational drug development. Transcriptomic studies may help investigations of these mechanisms by determining which genes are expressed specifically during infection. This task has been difficult so far, since the proportion of microbial biomass in infected tissues is often extremely low, thus limiting the depth of sequencing and comprehensive transcriptome analysis. Here, we adapted a technology to capture and enrich C. albicans RNA, which was then used for deep RNA sequencing directly from infected tissues of two different host organisms. The high-resolution transcriptome revealed a large number of genes that were not previously known to participate in infection, which will likely constitute a focus of study in the future. More importantly, this method may be adapted to perform transcript profiling of any other microbe during host infection or colonization.
Abstract:
This work studies the multi-label classification of turns in Simple English Wikipedia talk pages into dialog acts. The dataset used was created and multi-labeled by Ferschke et al. (2012). The first part analyses dependencies between labels, in order to examine the annotation coherence and to determine a classification method. A multi-label classification is then computed, after transforming the problem into binary relevance sub-problems. Regarding features, whereas Ferschke et al. (2012) use features such as uni-, bi- and trigrams, the time distance between turns or the indentation level of the turn, other features are considered here: lemmas, part-of-speech tags and the meaning of verbs (according to WordNet). The dataset authors applied approaches such as Naive Bayes and Support Vector Machines for classification. The present paper proposes, as an alternative, to use and extend linear discriminant analysis with Schoenberg transformations which, in the manner of kernel methods, transform the original Euclidean distances into other Euclidean distances, in a space of high dimensionality.
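As an illustration of the binary-relevance step combined with a Schoenberg transformation of distances, the sketch below trains one independent classifier per label on transformed pairwise distances. The power transformation d^q is only one classical member of the Schoenberg family, and the k-nearest-neighbour classifier merely stands in for the discriminant-analysis approach of the paper, whose exact settings are not given in the abstract:

import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn.neighbors import KNeighborsClassifier

def schoenberg_power(d, q=0.5):
    # A power transformation with 0 < q <= 1 maps Euclidean distances
    # into distances that remain Euclidean-embeddable (one classical
    # Schoenberg transformation).
    return d ** q

def binary_relevance_fit_predict(X_train, Y_train, X_test, q=0.5):
    # Y_train: (n_samples, n_labels) binary indicator matrix.
    # The multi-label problem is split into one binary problem per label.
    d_train = schoenberg_power(pairwise_distances(X_train), q)
    d_test = schoenberg_power(pairwise_distances(X_test, X_train), q)
    predictions = []
    for j in range(Y_train.shape[1]):
        clf = KNeighborsClassifier(n_neighbors=5, metric="precomputed")
        clf.fit(d_train, Y_train[:, j])
        predictions.append(clf.predict(d_test))
    return np.column_stack(predictions)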
Abstract:
Aim: The aim of this study was to test different modelling approaches, including a new framework, for predicting the spatial distribution of richness and composition of two insect groups. Location: The western Swiss Alps. Methods: We compared two community modelling approaches: the classical method of stacking binary predictions obtained from individual species distribution models (binary stacked species distribution models, bS-SDMs), and various implementations of a recent framework (spatially explicit species assemblage modelling, SESAM) based on four steps that integrate the different drivers of the assembly process in a single modelling procedure. We used: (1) five methods to create bS-SDM predictions; (2) two approaches for predicting species richness, by summing individual SDM probabilities or by modelling the number of species (i.e. richness) directly; and (3) five different biotic rules based either on ranking probabilities from SDMs or on community co-occurrence patterns. Combining these various options resulted in 47 implementations for each taxon. Results: Species richness of the two taxonomic groups was predicted with good accuracy overall, and in most cases bS-SDM did not produce a biased prediction exceeding the actual number of species in each unit. In predicting community composition, bS-SDM also often yielded the best evaluation score. Where bS-SDM performed poorly (i.e. when it overestimated richness), the SESAM framework improved predictions of species composition. Main conclusions: Our results differ from previous findings using community-level models. First, we show that overprediction of richness by bS-SDM is not a general rule, highlighting the relevance of producing good individual SDMs to capture the ecological filters that are important for the assembly process. Second, we confirm the potential of SESAM when richness is overpredicted by bS-SDM; limiting the number of species in each unit and applying biotic rules (here using the ranking of SDM probabilities) can improve predictions of species composition.
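A minimal numerical sketch of the two richness predictions and of the probability-ranking biotic rule described above, with hypothetical occurrence probabilities and a simplified version of the SESAM steps:

import numpy as np

# probs[i, s]: SDM-predicted occurrence probability of species s in spatial unit i
probs = np.array([[0.9, 0.7, 0.2, 0.1],
                  [0.6, 0.4, 0.4, 0.3]])

# (1) bS-SDM: threshold each SDM, then stack the binary predictions
binary = probs >= 0.5
richness_bssdm = binary.sum(axis=1)

# (2) Richness predicted as the sum of individual SDM probabilities
richness_sum = probs.sum(axis=1)

# (3) SESAM-style biotic rule: in each unit, keep only the top-R species
#     ranked by SDM probability, where R is the predicted richness
composition = np.zeros_like(binary)
for i, r in enumerate(np.round(richness_sum).astype(int)):
    top = np.argsort(probs[i])[::-1][:r]
    composition[i, top] = 1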
Abstract:
This work describes the formation of transformation products (TPs) during the laboratory-scale enzymatic degradation of two highly consumed antibiotics: tetracycline (Tc) and erythromycin (ERY). The analysis of the samples was carried out by a fast and simple method based on a novel configuration of an on-line turbulent flow system coupled to a hybrid linear ion trap – high-resolution mass spectrometer. The method was optimized and validated for the complete analysis of ERY, Tc and their transformation products within 10 min without any other sample manipulation. Furthermore, the applicability of the on-line procedure was evaluated for 25 additional antibiotics, covering a wide range of chemical classes in different environmental waters, with satisfactory quality parameters. Degradation rates obtained for Tc by the laccase enzyme and for ERY by the EreB esterase enzyme, without the presence of mediators, were ∼78% and ∼50%, respectively. Concerning the identification of TPs, three suspected compounds for Tc and five for ERY have been proposed. In the case of Tc, the tentative molecular formulas, with mass errors within 2 ppm, have been based on the hypothesis of dehydroxylation, (bi)demethylation and oxidation of rings A and C as the major reactions. In contrast, the major TP detected for ERY has been identified as "dehydrated ERY-A", with the same molecular formula as its parent compound. In addition, the evaluation of the antibiotic activity of the samples along the enzymatic treatments showed a decrease of around 100% in both cases.
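The mass-accuracy criterion mentioned above (errors within 2 ppm) corresponds to the usual definition of relative mass error in high-resolution mass spectrometry:

\Delta m \,[\mathrm{ppm}] = \frac{m_{\mathrm{measured}} - m_{\mathrm{theoretical}}}{m_{\mathrm{theoretical}}} \times 10^{6}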
Abstract:
In fetal brain MRI, most high-resolution reconstruction algorithms rely on brain segmentation as a preprocessing step. Manual brain segmentation is, however, highly time-consuming and therefore not a realistic solution. In this work, we assess, on a large dataset, the performance of Multiple Atlas Fusion (MAF) strategies to automatically address this problem. Firstly, we show that MAF significantly increases the accuracy of brain segmentation compared with a single-atlas strategy. Secondly, we show that MAF compares favorably with the most recent approach (Dice above 0.90). Finally, we show that MAF could in turn improve reconstruction quality.
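For reference, the Dice score quoted above and a plain majority-vote label fusion, one of the simplest MAF strategies (the abstract does not state which fusion rule was actually used), can be sketched as:

import numpy as np

def dice(a, b):
    # Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def majority_vote_fusion(propagated_masks):
    # propagated_masks: list of binary brain masks, one per registered atlas.
    # A voxel is labelled as brain when at least half of the atlases agree.
    stack = np.stack(propagated_masks).astype(float)
    return stack.mean(axis=0) >= 0.5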
Abstract:
Simultaneous localization and mapping (SLAM) is a very important problem in mobile robotics. Many solutions have been proposed by different scientists during the last two decades; nevertheless, few studies have considered the use of multiple sensors simultaneously. The solution proposed here combines several data sources with the aid of an Extended Kalman Filter (EKF). Two approaches are proposed. The first is to run the ordinary EKF SLAM algorithm for each data source separately, in parallel, and then, at the end of each step, fuse the results into one solution. The second approach uses multiple data sources simultaneously in a single filter. A comparison of the computational complexity of the two methods is also presented: the first method is almost four times faster than the second.
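A schematic of the single-filter approach, with hypothetical linear measurement models and noise values, shown only to illustrate how several data sources can enter one (E)KF through sequential measurement updates:

import numpy as np

def kf_update(x, P, z, H, R):
    # Standard Kalman/EKF measurement update for one sensor.
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x = np.zeros(3)                        # robot pose [x, y, theta]
P = np.eye(3)
# Two hypothetical sensors observing the planar position with different noise.
sensors = [
    (np.array([1.2, 0.8]), np.array([[1.0, 0, 0], [0, 1.0, 0]]), 0.1 * np.eye(2)),
    (np.array([1.1, 0.9]), np.array([[1.0, 0, 0], [0, 1.0, 0]]), 0.3 * np.eye(2)),
]
for z, H, R in sensors:
    x, P = kf_update(x, P, z, H, R)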
Abstract:
Robotic platforms have advanced greatly in terms of their remote sensing capabilities, including the acquisition of optical information using cameras. Alongside these advances, visual mapping has become a very active research area, facilitating the mapping of areas inaccessible to humans. This requires efficient data processing to increase the final mosaic quality and computational efficiency. In this paper, we propose an efficient image mosaicing algorithm for large-area visual mapping in underwater environments using multiple underwater robots. Our method identifies overlapping image pairs across the trajectories carried out by the different robots during the topology estimation process, which is a cornerstone for efficiently mapping large areas of the seafloor. We present comparative results based on challenging real underwater datasets that simulate multi-robot mapping.
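One common way to identify overlapping image pairs during topology estimation is feature matching followed by a geometric consistency check. The rough sketch below uses ORB features and a RANSAC homography, which are illustrative choices rather than the specific components of the proposed algorithm:

import cv2
import numpy as np

def images_overlap(img1, img2, min_inliers=20):
    # Detect and match local features, then verify the pair geometrically.
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    if d1 is None or d2 is None:
        return False
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    if len(matches) < min_inliers:
        return False
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H is not None and int(mask.sum()) >= min_inliers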
Abstract:
The notion that human beings face internal conflicts is very old in the field of psychotherapy, as is the idea that symptoms may derive from those conflicts. However, attempts to develop ways of appraising those conflicts so that they can be measured and tested empirically are almost nonexistent. The Multi-Centre Dilemma Project is aimed precisely at investigating the role of those conflicts, termed implicative dilemmas or dilemmatic constructs, in health, using the Repertory Grid Technique as a method to identify them. So far, a higher presence of those conflicts has been found in a variety of clinical problems (depression, social phobia, somatic problems, etc.) in comparison with non-clinical samples. It therefore seems worthwhile to develop a form of intervention aimed at addressing and resolving these conflicts. In this paper, a therapy manual focused on the resolution of implicative dilemmas is presented. It consists of a structured 15-session intervention, designed mainly for research and training in psychotherapy, and based on Personal Construct Psychotherapy.
Abstract:
An HPLC method was developed and validated to quantify cyclosporine-A incorporated into intraocular implants, released from them, and in direct contact with the degradation products of PLGA. The separation was carried out in isocratic mode using acetonitrile/water (70:30) as the mobile phase, a C18 column at 80 ºC and UV detection at 210 nm. The method provided selectivity based on the resolution among peaks. It was linear over the range of 2.5-40.0 µg/mL. The detection and quantitation limits were 0.8 and 1.2 µg/mL, respectively. The recovery was 101.8%, and intra-day and inter-day precision were close to 2%.
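For context, detection and quantitation limits of chromatographic methods are frequently estimated from the calibration curve following the ICH convention (an assumption here, since the abstract does not state how the limits were obtained):

\mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad \mathrm{LOQ} = \frac{10\,\sigma}{S}

where \sigma is the standard deviation of the response (e.g. of the intercept) and S is the slope of the calibration curve.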
Abstract:
A reliable LC-UV method to assay mometasone furoate (MF) in creams or nasal sprays under the same chromatographic conditions was set up. Methanol:water 80:20 (v/v) at 1.0 mL min-1 was used as the mobile phase. MF was detected at 248 nm and analyzed over a concentration range from 1.0 to 20.0 µg mL-1. The method provided acceptable theoretical plates, peak symmetry, peak tailing factor and peak resolution in a short run (5 min). The method showed specificity and good linearity (r = 0.9999), and the quantification limit was 0.379 µg mL-1. Furthermore, the method was precise (RSD < 2.0%), accurate (recovery > 97%) and robust.
Abstract:
The purpose of this study was to develop a rapid, simple and sensitive quantitation method for pseudoephedrine (PSE), paracetamol (PAR) and loratadine (LOR) in plasma and pharmaceuticals using liquid chromatography-tandem mass spectrometry with a monolithic column. Separation was achieved using a gradient of methanol-0.1% formic acid at a flow rate of 1.0 mL min-1. Mass spectral transitions were recorded in SRM mode. The method was validated for precision, specificity and linearity. Limits of detection for pseudoephedrine, paracetamol and loratadine were determined to be 3.14, 1.86 and 1.44 ng mL-1, respectively, allowing easy determination in plasma with recoveries of 93.12% to 101.56%.
Abstract:
Large-scale preparation of hybrid electrical actuators represents an important step towards the production of low-cost devices. Interfacial polymerization of polypyrrole in the presence of multi-walled carbon nanotubes is a simple technique in which strong interactions between the components are established, providing composite materials with potential applications as actuators owing to the synergy between the individual components, i.e., the fast response of carbon nanotubes, the high strain of polypyrrole, and the diversity of available geometries of the resulting samples.
Abstract:
Dirt counting and dirt particle characterisation of pulp samples are an important part of quality control in pulp and paper production. An automatic image analysis system capable of characterising dirt particles in various pulp samples is therefore critical. However, existing image analysis systems use a single threshold to segment the dirt particles in different pulp samples, which limits their precision. Designing an automatic image analysis system that overcomes this deficiency would thus be very useful. In this study, a Niblack thresholding method developed for this purpose is proposed; it defines the threshold based on the number of segmented particles. Kittler thresholding is also utilised. Both thresholding methods determine the dirt count of the different pulp samples accurately when compared with visual inspection and the Digital Optical Measuring and Analysis System (DOMAS). In addition, the minimum resolution needed for acquiring a scanner image is defined. Among the dirt particle features considered, curl shows a sufficient difference to discriminate bark from fibre bundles in different pulp samples. Three classifiers, k-Nearest Neighbour, Linear Discriminant Analysis and Multi-layer Perceptron, are utilised to categorise the dirt particles. Linear Discriminant Analysis and Multi-layer Perceptron are the most accurate in classifying the dirt particles segmented by Kittler thresholding with morphological processing. The results show that the dirt particles are successfully categorised as bark or fibre bundles.
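For reference, the classical Niblack rule computes a local threshold from the mean and standard deviation in a sliding window. The sketch below shows that rule plus a simple search over the parameter k driven by the number of segmented particles, in the spirit of the count-based selection described above (the exact selection criterion of the paper is not given in the abstract):

import numpy as np
from scipy import ndimage

def niblack_threshold(img, window=25, k=-0.2):
    # Local threshold T = mean + k * std over a window around each pixel.
    img = img.astype(float)
    mean = ndimage.uniform_filter(img, window)
    sq_mean = ndimage.uniform_filter(img ** 2, window)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0))
    return mean + k * std

def segment_dirt(img, target_count, ks=np.linspace(-0.6, 0.2, 9), window=25):
    # Choose the k whose resulting particle count is closest to the expected
    # count (a stand-in for the count-based selection rule of the paper).
    best = None
    for k in ks:
        dirt = img < niblack_threshold(img, window, k)   # dark particles
        _, n = ndimage.label(dirt)
        if best is None or abs(n - target_count) < best[0]:
            best = (abs(n - target_count), dirt)
    return best[1]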
Abstract:
This work describes a method to determine Cu over a wide concentration range in a single run, without the need for further dilutions, employing high-resolution continuum source flame atomic absorption spectrometry. Different atomic lines for Cu at 324.754 nm, 327.396 nm, 222.570 nm, 249.215 nm and 224.426 nm were evaluated and the main figures of merit established. Absorbance measurements at 324.754 nm, 249.215 nm and 224.426 nm allow the determination of Cu in the 0.07-5.0 mg L-1, 5.0-100 mg L-1 and 100-800 mg L-1 concentration intervals, respectively, with linear correlation coefficients better than 0.998. Limits of detection were 21 µg L-1, 310 µg L-1 and 1400 µg L-1 for 324.754 nm, 249.215 nm and 224.426 nm, respectively, and relative standard deviations (n = 12) were ≤ 2.7%. The proposed method was applied to water samples spiked with Cu, and the results were in agreement at the 95% confidence level (paired t-test) with those obtained by line-source flame atomic absorption spectrometry.