820 results for Search-based algorithms
Abstract:
Until recently, the hard X-ray, phase-sensitive imaging technique called grating interferometry was thought to provide information only in real space. However, by utilizing an alternative approach to data analysis, we demonstrated that the angularly resolved ultra-small-angle X-ray scattering distribution can be retrieved from experimental data. Thus, reciprocal-space information is accessible by grating interferometry in addition to real-space information. Naturally, the quality of the retrieved data strongly depends on the performance of the employed analysis procedure, which in this context involves deconvolution of periodic and noisy data. The aim of this article is to compare several deconvolution algorithms for retrieving the ultra-small-angle X-ray scattering distribution in grating interferometry. We quantitatively compare the performance of three deconvolution procedures (Wiener, iterative Wiener and Lucy-Richardson) on realistically modeled, noisy and periodic input data. The simulations showed that the Lucy-Richardson algorithm is the most reliable and most efficient over the range of signal characteristics considered. The availability of a reliable data analysis procedure is essential for future developments in grating interferometry.
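The abstract names three candidate procedures; as an illustration of the winning one, here is a minimal sketch of the standard Lucy-Richardson (Richardson-Lucy) multiplicative update for periodic 1-D data. The FFT-based circular convolution is an assumption chosen to match the periodic signals described above; the authors' actual implementation is not given in the abstract.

```python
import numpy as np

def richardson_lucy_periodic(observed, psf, iterations=50, eps=1e-12):
    """Richardson-Lucy deconvolution of a periodic, non-negative 1-D signal.

    `psf` is assumed sampled on the same periodic grid as `observed` and
    normalized to sum to 1; circular convolution is done in Fourier space.
    """
    psf_ft = np.fft.fft(psf)
    psf_mirror_ft = np.conj(psf_ft)  # correlation = convolution with flipped PSF
    estimate = np.full(observed.shape, observed.mean(), dtype=float)
    for _ in range(iterations):
        blurred = np.real(np.fft.ifft(np.fft.fft(estimate) * psf_ft))
        ratio = observed / np.maximum(blurred, eps)  # guard against division by zero
        correction = np.real(np.fft.ifft(np.fft.fft(ratio) * psf_mirror_ft))
        estimate *= correction  # multiplicative update preserves non-negativity
    return estimate
```

The multiplicative form is what makes the algorithm well suited to the noisy, strictly non-negative scattering distributions discussed in the abstract.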
Abstract:
The high complexity of cortical convolutions in humans is challenging both for engineers, who must measure and compare it, and for biologists and physicians, who seek to understand it. In this paper, we propose a surface-based method for the quantification of cortical gyrification. Our method uses accurate 3-D cortical reconstruction and computes local measurements of gyrification at thousands of points over the whole cortical surface. The potential of our method to precisely identify and localize gyral abnormalities is illustrated by a clinical study on a group of children affected by 22q11 Deletion Syndrome, compared to control individuals.
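The abstract does not define the local measurement; a common surface-based choice, consistent with the description above, is the ratio of buried (folded) cortical surface area to the area of a smooth outer hull within a local region. The sketch below is a simplification with hypothetical mesh inputs and spherical rather than geodesic regions of interest.

```python
import numpy as np

def triangle_areas(vertices, faces):
    """Area of each triangle of a mesh (vertices: Nx3 floats, faces: Mx3 indices)."""
    a = vertices[faces[:, 1]] - vertices[faces[:, 0]]
    b = vertices[faces[:, 2]] - vertices[faces[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1)

def local_gyrification_index(pial_v, pial_f, hull_v, hull_f, center, radius=25.0):
    """Ratio of folded pial surface area to smooth hull area inside a sphere.

    Values > 1 indicate cortex buried in sulci around `center` (in mm).
    """
    def area_inside(v, f):
        centroids = v[f].mean(axis=1)                      # one point per triangle
        inside = np.linalg.norm(centroids - center, axis=1) < radius
        return triangle_areas(v, f)[inside].sum()
    return area_inside(pial_v, pial_f) / area_inside(hull_v, hull_f)
```

Evaluating such a ratio at thousands of surface points yields the kind of whole-surface gyrification map the abstract describes.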
Abstract:
The MAGIC collaboration has searched for high-energy gamma-ray emission from some of the most promising pulsar candidates above an energy threshold of 50 GeV, an energy not reachable until now by other ground-based instruments. Neither pulsed nor steady gamma-ray emission has been observed at energies of 100 GeV from the classical radio pulsars PSR J0205+6449 and PSR J2229+6114 (and their nebulae 3C58 and Boomerang, respectively) or from the millisecond pulsar PSR J0218+4232. Here, we present the flux upper limits for these sources and discuss their implications in the context of current model predictions.
Abstract:
In recent years, both homing endonucleases (HEases) and zinc-finger nucleases (ZFNs) have been engineered and selected for the targeting of desired human loci for gene therapy. However, enzyme engineering is lengthy and expensive, and the off-target effects of the manufactured endonucleases are difficult to predict. Moreover, enzymes selected to cleave a human DNA locus may not cleave the homologous locus in the genome of animal models because of sequence divergence, thus hampering attempts to assess the in vivo efficacy and safety of any engineered enzyme prior to its application in human trials. Here, we show that naturally occurring HEases can be found that cleave desirable human targets. Some of these enzymes are also shown to cleave the homologous sequence in the genome of animal models. In addition, the distribution of off-target effects may be more predictable for native HEases. Based on our experimental observations, we present the HomeBase algorithm, database and web server, which allow a high-throughput computational search and assignment of HEases for the targeting of specific loci in the human and other genomes. We experimentally validate the predicted target specificity of candidate fungal, bacterial and archaeal HEases using cell-free, yeast and archaeal assays.
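HomeBase is described only at a high level here; the core computational task, finding genomic loci compatible with a HEase recognition site, can be illustrated with a toy sliding-window scan. The simple mismatch-count model below is an assumption; real HEases tolerate position-specific degeneracy rather than a uniform mismatch budget.

```python
from typing import Iterator, Tuple

def scan_for_target_sites(genome: str, site: str,
                          max_mismatches: int = 2) -> Iterator[Tuple[int, int]]:
    """Yield (position, mismatches) wherever the genome matches the
    recognition site within the allowed mismatch budget (brute force)."""
    k = len(site)
    for i in range(len(genome) - k + 1):
        mm = sum(1 for g, s in zip(genome[i:i + k], site) if g != s)
        if mm <= max_mismatches:
            yield i, mm
```

A production tool would index the genome and score candidate sites with an enzyme-specific profile, but the input/output contract is the same.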
Abstract:
Segmenting ultrasound images is a challenging problem where standard unsupervised segmentation methods such as the well-known Chan-Vese method fail. We propose in this paper an efficient segmentation method for this class of images. Our proposed algorithm is based on a semi-supervised approach (user labels) and the use of image patches as data features. We also consider the Pearson distance between patches, which has been shown to be robust w.r.t. speckle noise present in ultrasound images. Our results on phantom and clinical data show a very high similarity agreement with the ground truth provided by a medical expert.
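Since the Pearson distance between patches is central to the method, a minimal sketch may help; this is the textbook definition (one minus the Pearson correlation of the two patch intensity vectors), not the authors' code.

```python
import numpy as np

def pearson_distance(patch_a, patch_b, eps=1e-12):
    """1 - Pearson correlation between two same-sized image patches.

    Centering and normalization make the distance invariant to affine
    intensity changes, which is why it tolerates speckle better than
    a plain Euclidean distance.
    """
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    r = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return 1.0 - r
```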
Abstract:
BACKGROUND: Prospective data describing the appropriateness of use of colonoscopy based on detailed panel-based clinical criteria are not available. METHODS: In a cohort of 553 consecutive patients referred for colonoscopy to two university-based Swiss outpatient clinics, the percentage of patients who underwent colonoscopy for appropriate, equivocal, and inappropriate indications and the relationship between appropriateness of use and the presence of relevant endoscopic lesions were prospectively assessed. This assessment was based on criteria of the American Society for Gastrointestinal Endoscopy and explicit American and Swiss criteria developed in 1994 by a formal panel process using the RAND/UCLA appropriateness method. RESULTS: The procedures were rated appropriate or equivocal in 72.2% by criteria of the American Society for Gastrointestinal Endoscopy, in 68.5% by explicit American criteria, and in 74.4% by explicit Swiss criteria (not statistically significant, NS). Inappropriate use (overuse) of colonoscopy was found in 27.8%, 31.5%, and 25.6%, respectively (NS). The proportion of appropriate procedures was higher with increasing age. Almost all reasons for using colonoscopy could be assessed by the two explicit criteria sets, whereas 28.4% of reasons for using colonoscopy could not be evaluated by the criteria of the American Society for Gastrointestinal Endoscopy (p < 0.0001). The probability of finding a relevant endoscopic lesion was distinctly higher in the procedures rated appropriate or equivocal than in procedures judged inappropriate. CONCLUSIONS: The rate of inappropriate use of colonoscopy is substantial in Switzerland. Explicit criteria allow assessment of almost all indications encountered in clinical practice. In this study, all sets of appropriateness criteria significantly enhanced the probability of finding a relevant endoscopic lesion during colonoscopy.
Abstract:
We estimate the attainable limits on the coupling of a nonstandard Higgs boson to two photons, taking into account the data collected by the Fermilab collaborations on diphoton events. We base our analysis on a general set of dimension-6 effective operators that give rise to anomalous couplings in the bosonic sector of the standard model. If the coefficients of all blind operators have the same magnitude, indirect bounds on the anomalous triple vector-boson couplings can also be inferred, provided there is no large cancellation in the Higgs-gamma-gamma coupling.
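The abstract does not list the operators; in the commonly used linearly realized effective-Lagrangian framework, the dimension-6 terms are suppressed by a new-physics scale Λ, and the so-called blind operators built from the Higgs doublet Φ and the field strengths are among those generating an anomalous Hγγ coupling. The operator names below follow the standard convention of the effective-operator literature and are an assumption, not a quotation from the paper.

```latex
\mathcal{L}_{\mathrm{eff}}
  = \mathcal{L}_{\mathrm{SM}}
  + \sum_i \frac{f_i}{\Lambda^{2}}\,\mathcal{O}_i ,
\qquad \text{e.g.}\quad
\mathcal{O}_{WW} = \Phi^{\dagger}\hat{W}_{\mu\nu}\hat{W}^{\mu\nu}\Phi ,
\quad
\mathcal{O}_{BB} = \Phi^{\dagger}\hat{B}_{\mu\nu}\hat{B}^{\mu\nu}\Phi .
```

Setting all blind-operator coefficients to a common magnitude is what lets the diphoton data constrain the triple vector-boson couplings as well.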
Abstract:
The research reported in this series of articles aimed at (1) automating the search of questioned ink specimens in ink reference collections and (2) evaluating the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples are analysed in an accurate and reproducible way and that they are compared in an objective and automated way. This latter requirement is due to the large number of comparisons that are necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science - Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science - Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited for different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, i.e. high-performance thin-layer chromatography, despite its reputation of lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model. It is therefore possible to move away from the traditional subjective approach, which is entirely based on experts' opinion and which is usually not very informative. While there is room for improvement, this report demonstrates the significant gains obtained over the traditional subjective approach for the search of ink specimens in ink databases and the interpretation of their evidential value.
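The probabilistic model is developed in the cited Part II paper; its essential output can be sketched as a score-based likelihood ratio. The kernel-density estimates below are an illustrative assumption, not necessarily the authors' choice of densities.

```python
import numpy as np
from scipy.stats import gaussian_kde

def evidential_value(score, same_ink_scores, different_ink_scores):
    """Likelihood ratio for an ink-comparison score.

    same_ink_scores / different_ink_scores: similarity scores from pairs of
    HPTLC analyses known to come from the same ink vs. from different inks.
    LR > 1 supports the same-source proposition; LR < 1 the alternative.
    """
    f_same = gaussian_kde(same_ink_scores)
    f_diff = gaussian_kde(different_ink_scores)
    return float(f_same(score)[0] / f_diff(score)[0])
```

Reporting a likelihood ratio rather than a categorical expert opinion is precisely the "transparent way" of assigning evidential value that the abstract advocates.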
Abstract:
The Organization of the Thesis

The remainder of the thesis comprises five chapters and a conclusion. The next chapter formalizes the envisioned theory into a tractable model. Section 2.2 presents a formal description of the model economy: the individual heterogeneity, the individual objective, the UI setting, the population dynamics and the equilibrium. The welfare and efficiency criteria for qualifying various equilibrium outcomes are proposed in section 2.3. The fourth section shows how the model-generated information can be computed. Chapter 3 transposes the model from chapter 2 into conditions that enable its use in the analysis of individual labor market strategies and their implications for the labor market equilibrium. In section 3.2 the Swiss labor market data sets, stylized facts, and the UI system are presented. The third section outlines and motivates the parameterization method. In section 3.4 the model's replication ability is evaluated and some aspects of the parameter choice are discussed. Numerical solution issues can be found in the appendix. Chapter 4 examines the determinants of search-strategic behavior in the model economy and its implications for the labor market aggregates. In section 4.2, the unemployment duration distribution is examined and related to search strategies. Section 4.3 shows how the search-strategic behavior is influenced by UI eligibility and section 4.4 how it is determined by individual heterogeneity. The composition effects generated by search strategies in labor market aggregates are examined in section 4.5. The last section evaluates the model's replication of empirical unemployment escape frequencies reported in Sheldon [67]. Chapter 5 applies the model economy to examine the effects on the labor market equilibrium of shocks to the labor market risk structure, to the deep underlying labor market structure and to the UI setting. Section 5.2 examines the effects of the labor market risk structure on the labor market equilibrium and labor market strategic behavior. The effects of alterations in the deep economic structural parameters of the labor market, i.e. individual preferences and production technology, are shown in section 5.3. Finally, the impacts of the UI setting on the labor market are studied in section 5.4. This section also evaluates the role of UI authority monitoring and the differences in the way changes in the replacement rate and the UI benefit duration affect the labor market. In chapter 6 the model economy is applied in counterfactual experiments to assess several aspects of the Swiss labor market movements in the nineties. Section 6.2 examines the two equilibria characterizing the Swiss labor market in the nineties: the "growth" equilibrium with a "moderate" UI regime and the "recession" equilibrium with a more "generous" UI. Section 6.3 evaluates the isolated effects of the structural shocks, while the isolated effects of the UI reforms are analyzed in section 6.4. Particular dimensions of the UI reforms, namely the duration, replacement rate and tax rate effects, are studied in section 6.5, while labor market equilibria without benefits are evaluated in section 6.6. In section 6.7 the structural and institutional interactions that may act as unemployment amplifiers are discussed in view of the obtained results. A welfare analysis based on individual welfare in different structural and UI settings is presented in the eighth section. Finally, the results are related to the more favorable unemployment trends after 1997.

The conclusion evaluates the features embodied in the model economy with respect to the resulting model dynamics, in order to derive lessons from the model design. The thesis ends by proposing guidelines for future improvements of the model and directions for further research.
Abstract:
MOTIVATION: Analysis of millions of pyro-sequences is currently playing a crucial role in the advance of environmental microbiology. Taxonomy-independent, i.e. unsupervised, clustering of these sequences is essential for the definition of Operational Taxonomic Units. For this application, reproducibility and robustness should be the most sought-after qualities, but they have thus far largely been overlooked. RESULTS: More than 1 million hyper-variable internal transcribed spacer 1 (ITS1) sequences of fungal origin have been analyzed. The ITS1 sequences were first properly extracted from 454 reads using generalized profiles. Then otupipe, cd-hit-454, ESPRIT-Tree and DBC454, a new algorithm presented here, were used to analyze the sequences. A numerical assay was developed to measure the reproducibility and robustness of these algorithms. DBC454 was the most robust, closely followed by ESPRIT-Tree. DBC454 features density-based hierarchical clustering, which complements the other methods by providing insights into the structure of the data. AVAILABILITY: An executable is freely available for non-commercial users at ftp://ftp.vital-it.ch/tools/dbc454. It is designed to run under MPI on a cluster of 64-bit Linux machines running Red Hat 4.x, or on a multi-core OSX system. CONTACT: dbc454@vital-it.ch or nicolas.guex@isb-sib.ch.
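The numerical assay is not specified in the abstract; one plausible reading of "reproducibility" for unsupervised clustering is agreement between runs on overlapping subsamples. The sketch below is a toy version of such an assay (the `cluster_fn` callable and the adjusted Rand index are assumptions, not the paper's definitions).

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def clustering_reproducibility(sequences, cluster_fn, fraction=0.8,
                               trials=10, seed=0):
    """Cluster two overlapping random subsamples and compare the labels
    assigned to the shared sequences; scores near 1 mean the partition
    is reproducible."""
    rng = np.random.default_rng(seed)
    n = len(sequences)
    scores = []
    for _ in range(trials):
        a = rng.choice(n, size=int(fraction * n), replace=False)
        b = rng.choice(n, size=int(fraction * n), replace=False)
        shared = np.intersect1d(a, b)
        labels_a = dict(zip(a, cluster_fn([sequences[i] for i in a])))
        labels_b = dict(zip(b, cluster_fn([sequences[i] for i in b])))
        scores.append(adjusted_rand_score([labels_a[i] for i in shared],
                                          [labels_b[i] for i in shared]))
    return float(np.mean(scores))
```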
Abstract:
OBJECTIVE: To describe the determinants of self-initiated smoking cessation lasting at least 6 months, as identified in longitudinal population-based studies of adolescent and young adult smokers. METHODS: A systematic search of the PubMed and EMBASE databases using smoking, tobacco, cessation, quit and stop as keywords was performed. Limits included articles related to humans, in English, published between January 1984 and August 2010, and study population aged 10-29 years. A total of 4502 titles and 871 abstracts were reviewed independently by 2 and 3 reviewers, respectively. Nine articles were retained for data abstraction. Data on study location, timeframe, duration of follow-up, number of data collection points, sample size, age/grade of participants, number of quitters, smoking status at baseline, definition of cessation, covariates and analytic method were abstracted from each article. The number of studies that reported a statistically significant association between each determinant investigated and cessation was tabulated, from among all studies that assessed the determinant. RESULTS: Despite heterogeneity in methods across studies, five factors robustly predicted quitting across studies in which the factor was investigated: not having friends who smoke, not having intentions to smoke in the future, resisting peer pressure to smoke, being older at first use of cigarette and having negative beliefs about smoking. CONCLUSIONS: The literature on longitudinal predictors of cessation in adolescent and young adult smokers is not well developed. Cessation interventions for this population will remain less than optimally effective until there is a solid evidence base on which to develop interventions.
Abstract:
Evidence-based practice (EBP) aims for a new distribution of power centered on scientific evidence rather than clinical expertise. The present article describes the operational process of EBP by describing the stages of implementing this type of practice. This presentation of stages is essential given that there are many conceptions and models of EBP and that some nurses have limited knowledge of its rules and implications. Given that the number and formulation of the stages vary by author, the process presented here attempts to integrate the different stages reviewed.
Abstract:
A novel approach for the identification of tumor antigen-derived sequences recognized by CD8(+) cytolytic T lymphocytes (CTL) consists in using synthetic combinatorial peptide libraries. Here we have screened a library composed of 3.1 x 10(11) nonapeptides arranged in a positional scanning format, in a cytotoxicity assay, to search for the antigen recognized by melanoma-reactive CTL of unknown specificity. The results of this analysis enabled the identification of several optimal peptide ligands, as most of the individual nonapeptides deduced from the primary screening were efficiently recognized by the CTL. The results of the library screening were also analyzed with a mathematical approach based on a model of independent and additive contributions of individual amino acids to antigen recognition. This biometrical data analysis enabled the retrieval, from public databases, of the native antigenic peptide SSX-2(41-49), which ranked among the sequences with the highest stimulatory scores and is highly homologous to those deduced from the library screening. These results underline the high predictive value of positional scanning synthetic combinatorial peptide library analysis and encourage its use for the identification of CTL ligands.
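The "independent and additive contribution" model lends itself to a compact sketch: each position of the nonapeptide contributes a per-amino-acid score derived from the library screen, and database peptides are ranked by the sum. The `contribution` table below is a hypothetical input standing in for the screening data.

```python
def peptide_score(peptide, contribution):
    """Additive score of a nonapeptide: sum over positions of the activity
    contribution of the amino acid found there (contribution: list of
    {amino_acid: value} dicts, one per position)."""
    return sum(contribution[pos][aa] for pos, aa in enumerate(peptide))

def rank_candidates(peptides, contribution, top=10):
    """Rank candidate peptides (e.g. all nonamers from a protein database)
    by predicted stimulatory score, highest first."""
    return sorted(peptides,
                  key=lambda p: peptide_score(p, contribution),
                  reverse=True)[:top]
```

Ranking all nonamers extracted from public protein databases by this score is how a native ligand such as SSX-2(41-49) can surface near the top of the list.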
Abstract:
Intensity-modulated radiotherapy (IMRT) treatment plan verification by comparison with measured data requires access to the linear accelerator and is time consuming. In this paper, we propose a method for monitor unit (MU) calculation and plan comparison for step-and-shoot IMRT based on the Monte Carlo code EGSnrc/BEAMnrc. The beamlets of an IMRT treatment plan are individually simulated using Monte Carlo and converted into absorbed dose to water per MU. The dose of the whole treatment can be expressed through a linear matrix equation of the MUs and the dose per MU of every beamlet. Due to the positivity of the absorbed dose and MU values, this equation is solved for the MU values using a non-negative least-squares (NNLS) optimization algorithm. The Monte Carlo plan is formed by multiplying the Monte Carlo absorbed dose to water per MU by the Monte Carlo/NNLS MUs. Several treatment plans for different localizations, calculated with a commercial treatment planning system (TPS), are compared with the proposed method for validation. The Monte Carlo/NNLS MUs are close to those calculated by the TPS and lead to a treatment dose distribution that is clinically equivalent to the one calculated by the TPS. This procedure can be used for IMRT QA, and further development could allow the technique to be applied to other radiotherapy techniques such as tomotherapy or volumetric modulated arc therapy.
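The linear system described above can be written as d = D m, with D the (voxels x beamlets) matrix of Monte Carlo dose to water per MU and m >= 0 the vector of monitor units. A minimal sketch of the NNLS step, assuming the per-beamlet dose matrix has already been produced with EGSnrc/BEAMnrc:

```python
import numpy as np
from scipy.optimize import nnls

def fit_monitor_units(dose_per_mu, target_dose):
    """Solve target_dose ~= dose_per_mu @ mu subject to mu >= 0.

    dose_per_mu : (n_voxels, n_beamlets) Monte Carlo dose to water per MU,
                  one column per simulated beamlet.
    target_dose : (n_voxels,) dose distribution the plan should deliver.
    """
    mu, residual_norm = nnls(dose_per_mu, target_dose)
    return mu, residual_norm

# The Monte Carlo plan is then the matrix-vector product:
# plan_dose = dose_per_mu @ mu
```

The non-negativity constraint is what ties the fit to the physics: neither absorbed dose nor monitor units can be negative, so an unconstrained least-squares solution would be unusable.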