897 results for DNA Sequence, Hidden Markov Model, Bayesian Model, Sensitive Analysis, Markov Chain Monte Carlo
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification to the test statistic that provided a better heteroskedasticity correction in our simulations.
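To make the source of the heteroskedasticity concrete: under the simplifying assumption that individual-level errors within group g are i.i.d. with variance σ², the group × time aggregate error is the within-group mean, so its variance falls with group size (a stylized illustration, not the paper's full error structure):

```latex
\bar{\varepsilon}_{gt} = \frac{1}{n_g}\sum_{i=1}^{n_g}\varepsilon_{igt}
\quad\Longrightarrow\quad
\operatorname{Var}\!\left(\bar{\varepsilon}_{gt}\right) = \frac{\sigma^2}{n_g}
```

Hence small treated groups have noisier aggregates than large control groups, which is exactly the case in which the abstract reports over-rejection.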
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
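The last sentence refers to a permutation test; a hedged sketch of the generic placebo-permutation logic it builds on, using the post/pre-treatment RMSPE-ratio statistic suggested by Abadie et al. (the abstract's modified statistic is not reproduced here), could look as follows. `estimate_gap`-style inputs are stand-ins for any DID or synthetic-control estimator's treated-minus-counterfactual gaps.

```python
# Generic placebo permutation test for a single treated unit: compute the
# test statistic for every control unit as if it were treated, then rank
# the treated unit's statistic among the placebo statistics.
import numpy as np

def rmspe_ratio(gap, t0):
    """Post/pre root-mean-squared-prediction-error ratio of a gap series."""
    pre, post = gap[:t0], gap[t0:]
    return np.sqrt(np.mean(post**2)) / np.sqrt(np.mean(pre**2))

def permutation_pvalue(gaps, t0, treated=0):
    """gaps: (units x periods) array of estimated treatment-control gaps."""
    stats = np.array([rmspe_ratio(g, t0) for g in gaps])
    return np.mean(stats >= stats[treated])     # rank-based p-value

# Toy example: 1 treated + 19 control units, effect appearing at t0 = 30.
rng = np.random.default_rng(3)
gaps = rng.normal(size=(20, 50))
gaps[0, 30:] += 1.0                             # treated unit's effect
print(permutation_pvalue(gaps, t0=30))
```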
Abstract:
We study the critical behavior of the one-dimensional pair contact process (PCP), using the Monte Carlo method for several lattice sizes and three different updating schemes: random, sequential and parallel. We also added a small modification to the model, called "Monte Carlo com Ressucitamento" (MCR, i.e. Monte Carlo with resuscitation), which consists of resuscitating one particle when the order parameter goes to zero. This was done because it is difficult to accurately determine the critical point of the model, since the order parameter (particle pair density) rapidly goes to zero using the traditional approach. With the MCR, the order parameter vanishes more smoothly, allowing us to use finite-size scaling to determine the critical point and the critical exponents β, ν and z. Our results are consistent with the ones already found in the literature for this model, showing that the process of resuscitating one particle not only leaves the critical behavior of the system unchanged, it also makes it easier to determine the critical point and critical exponents of the model. This extension to the Monte Carlo method has already been used in other contact process models, leading us to believe in its usefulness for studying several other non-equilibrium models.
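A minimal sketch of how such a simulation can be organized, assuming random-sequential updating, periodic boundaries, an annihilation probability p, and that the MCR step restores a pair by placing one particle next to a randomly chosen occupied site (parameter values and update details are illustrative, not the authors' code):

```python
# 1D pair contact process (PCP) with an MCR-style resuscitation step.
import numpy as np

rng = np.random.default_rng(0)

def pairs(s):
    """Indices i such that sites i and i+1 (periodic) are both occupied."""
    return np.where(s & np.roll(s, -1))[0]

def step(s, p):
    """One update: pick a random pair, annihilate it or create a particle."""
    pr = pairs(s)
    if len(pr) == 0:
        return s
    i = int(rng.choice(pr))
    L = len(s)
    if rng.random() < p:                       # pairwise annihilation
        s[i] = s[(i + 1) % L] = 0
    else:                                      # creation at a neighbouring site
        j = (i - 1) % L if rng.random() < 0.5 else (i + 2) % L
        s[j] = 1                               # no effect if already occupied
    return s

def run(L=100, p=0.07, sweeps=500):
    s = np.ones(L, dtype=int)
    rho = []
    for t in range(sweeps * L):
        s = step(s, p)
        if len(pairs(s)) == 0:                 # MCR: resuscitate one particle
            occ = np.where(s)[0]
            k = int(rng.choice(occ)) if len(occ) else int(rng.integers(L))
            s[(k + 1) % L] = 1                 # restores a pair next to site k
        if t % L == 0:
            rho.append(len(pairs(s)) / L)      # order parameter: pair density
    return np.array(rho)

print(run()[-5:])                              # late-time pair densities
```

Near the critical point the unmodified dynamics dies out irreversibly; the resuscitation step keeps the order parameter finite so that finite-size-scaling fits remain possible, which is the point made in the abstract.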
Abstract:
The correct identification of all human genes, and their derived transcripts, has not yet been achieved, and it remains one of the major aims of the worldwide genomics community. Computational programs suggest the existence of 30,000 to 40,000 human genes. However, definitive gene identification can only be achieved by experimental approaches. We used two distinct methodologies, one based on the alignment of mouse orthologous sequences to the human genome, and another based on the construction of a high-quality human testis cDNA library, in an attempt to identify new human transcripts within the human genome sequence. We generated 47 complete human transcript sequences, comprising 27 unannotated and 20 annotated sequences. Eight of these transcripts are variants of previously known genes. These transcripts were characterized according to size, number of exons, and chromosomal localization, and a search for protein domains was undertaken based on their putative open reading frames. In silico expression analysis suggests that some of these transcripts are expressed at low levels and in a restricted set of tissues.
Abstract:
A method for context-sensitive analysis of binaries that may have obfuscated procedure call and return operations is presented. Such binaries may use operators that directly manipulate the stack, instead of native call and ret instructions, to achieve equivalent behavior. Since the definition of context-sensitivity and algorithms for context-sensitive analysis have thus far been based on the specific semantics associated with procedure call and return operations, classic interprocedural analyses cannot be used reliably for analyzing programs in which these operations cannot be discerned. A new notion of context-sensitivity is introduced that is based on the state of the stack at any instruction. While changes in 'calling' context are associated with transfers of control, and hence can be reasoned about in terms of paths in an interprocedural control flow graph (ICFG), the same is not true of changes in 'stack' context. An abstract interpretation based framework is developed to reason about stack-contexts and to derive analogues of call-strings based methods for context-sensitive analysis using stack-contexts. The method presented is used to create a context-sensitive version of Venable et al.'s algorithm for detecting obfuscated calls. Experimental results show that the context-sensitive version of the algorithm generates more precise results and is also computationally more efficient than its context-insensitive counterpart.
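A toy illustration of the stack-context idea, assuming a made-up instruction set in which a call is obfuscated as an explicit push of the return address followed by a plain jump (all opcodes and the program below are invented for illustration, not the paper's analysis):

```python
# Enumerate (pc, stack-context) pairs for a toy program without call/ret:
# the context of an instruction is the abstract stack state, not a path
# of call edges.
from typing import List, Tuple

Prog = List[Tuple[str, int]]          # (opcode, operand)

def stack_contexts(prog: Prog, start: int = 0, fuel: int = 1000):
    seen = set()
    work = [(start, ())]              # context = tuple of stacked values
    while work and fuel:
        fuel -= 1
        pc, ctx = work.pop()
        if pc >= len(prog) or (pc, ctx) in seen:
            continue
        seen.add((pc, ctx))
        op, arg = prog[pc]
        if op == "push":              # may push a return address
            work.append((pc + 1, ctx + (arg,)))
        elif op == "jmp":             # obfuscated call: no call edge
            work.append((arg, ctx))
        elif op == "retjmp":          # obfuscated return: pop + jump
            if ctx:
                work.append((ctx[-1], ctx[:-1]))
        elif op == "halt":
            pass
        else:                         # ordinary instruction
            work.append((pc + 1, ctx))
    return seen

# 'main' pushes a return address and jumps to a 'callee' at pc 3;
# the callee returns by popping the address and jumping to it.
prog = [("push", 2), ("jmp", 3), ("halt", 0), ("nop", 0), ("retjmp", 0)]
for pc, ctx in sorted(stack_contexts(prog)):
    print(pc, ctx)
```

Instructions 3 and 4 are reached with a non-empty stack-context even though no call edge exists in the CFG, which is the distinction the abstract draws between 'calling' contexts and 'stack' contexts.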
Abstract:
Graduate Program in Physics - IFT
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Aims: Guided tissue regeneration (GTR) and enamel matrix derivatives (EMD) are two popular regenerative treatments for periodontal infrabony lesions. Both have been used in conjunction with other regenerative materials. We conducted a Bayesian network meta-analysis of randomized controlled trials on the treatment effects of GTR, EMD and their combination therapies. Material and Methods: A systematic literature search was conducted using the Medline, EMBASE, LILACS and CENTRAL databases up to and including June 2011. Treatment outcomes were changes in probing pocket depth (PPD), clinical attachment level (CAL) and infrabony defect depth. Different types of bone grafts (BG) were treated as one group, and so were barrier membranes. Results: A total of 53 studies were included in this review, and we found small differences between regenerative therapies that were neither statistically nor clinically significant. GTR and GTR-related combination therapies achieved greater PPD reduction than EMD and EMD-related combination therapies. Combination therapies achieved slightly greater CAL gain than the use of EMD or GTR alone. GTR with BG achieved the greatest defect fill. Conclusion: Combination therapies performed better than single therapies, but the additional benefits were small. Bayesian network meta-analysis is a promising technique to compare multiple treatments. Further analysis of methodological characteristics will be required prior to clinical recommendations.
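For readers unfamiliar with the method, a generic Bayesian random-effects network meta-analysis model of the kind such reviews fit (a sketch in standard notation, not necessarily the authors' exact specification) is:

```latex
y_j \sim \mathcal{N}(\theta_j,\; s_j^2)
  \qquad \text{observed effect in trial } j \text{ (e.g. CAL gain), with known SE } s_j \\
\theta_j \sim \mathcal{N}(d_{t_j} - d_{b_j},\; \tau^2)
  \qquad \text{trial-specific effect of treatment } t_j \text{ vs. baseline } b_j \\
d_{\mathrm{ref}} = 0, \qquad d_k \sim \mathcal{N}(0, 10^2), \qquad \tau \sim \mathrm{Uniform}(0, 5)
```

Indirect comparisons between any two treatments then follow from differences of the d_k, which is what allows GTR, EMD and their combinations to be ranked within a single model.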
Abstract:
Experimental investigations of visible Smith-Purcell radiation with a micro-focused, highly relativistic electron beam (E = 855 MeV) are presented in the near region, in which the electron beam grazes the surface of the grating. The radiation intensity was measured as a function of the angle of observation and of the distance between the electron beam axis and the surface of the grating, simultaneously for two different wavelengths (360 nm, 546 nm). In the experiments, Smith-Purcell radiation was identified by the measured angular distribution fulfilling the characteristic coherence condition. From the observed distance dependence of the intensity, two components of Smith-Purcell radiation could be separated: one component with the theoretically predicted interaction length h_int, which is produced by electrons passing over the surface of the grating, and an additional component in the near region leading to a strong enhancement of the intensity, which is produced by electrons hitting the surface. To describe the intensity of the observed additional radiation component, a simple model for optical grating transition radiation, caused by the electrons passing through the grating structure, is presented. Based on this simple scalar model, the results of a Monte Carlo calculation show that the additional radiation component can be explained by optical grating transition radiation.
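For reference, the characteristic coherence condition mentioned here is the standard Smith-Purcell relation between emitted wavelength and observation angle:

```latex
\lambda = \frac{d}{n}\left(\frac{1}{\beta} - \cos\Theta\right)
```

where d is the grating period, n the diffraction order, β = v/c the reduced electron velocity and Θ the observation angle measured from the beam direction.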
Abstract:
In this thesis, four different strongly correlated fermionic multiband systems are investigated: a multi-impurity Anderson model, two Hubbard models, and a multiband system as it arises from an ab initio description of a correlated semimetal.

The study of the multi-impurity Anderson model concentrates on the influence of the exchange interaction and of non-local correlations between two impurities in a simple cubic lattice. The central result is the distance dependence of the correlations of the impurity electrons, which depends strongly on the lattice dimension and on the relative position of the impurities. Remarkable here is the long range of the correlations along the diagonal direction of the lattice. Furthermore, an antiferromagnetic exchange interaction favors a singlet between the impurity electrons over the Kondo singlets of the individual impurities and thus suppresses the Kondo effect of the individual impurities.

A two-band Hubbard model, the Jz model, is investigated on the Bethe lattice with regard to its Mott phases as a function of doping and crystal-field splitting. The degeneracy of the bands is lifted by giving them different bandwidths. The most important results are the phase diagrams with respect to interaction strength, total filling and crystal-field parameter. Compared to single-band models, the Jz model additionally exhibits so-called orbital-selective Mott phases which, depending on interaction, total filling and crystal-field parameter, are metallic or insulating in character. A new aspect arises from the crystal-field parameter, which shifts the ionic single-particle levels relative to each other and, for certain values, enables an orbital-selective Mott phase of the wide band. Comparison with approximate analytical solutions and with single-band models allows generic many-body and correlation effects to be distinguished from typical multiband and single-particle effects.

The second Hubbard model investigated describes a magneto-optical trap with a finite number of lattice sites in which fermionic atoms are placed. A z-antiferromagnetic phase is obtained when non-local many-body correlations are taken into account, improving on known results of an effective single-particle description.

The correlated semimetal is investigated with regard to correlation effects within a multiband calculation. The starting point is an ab initio description based on density functional theory (DFT), which is then supplemented by the inclusion of local correlations. The many-body effects are illustrated by means of a simple interaction approximation and made precise for an interaction model in spherical symmetry. Only a weak quasiparticle renormalization results. Good agreement is obtained, in particular with X-ray spectroscopy experiments.

The numerical results for the Jz model are based on quantum Monte Carlo simulations within dynamical mean-field theory (DMFT). For all other systems, a multiband algorithm is developed and implemented that explicitly takes non-diagonal multiband processes into account.
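For orientation, a two-band Hubbard Hamiltonian with crystal-field splitting and purely density-density ("Jz"-type) interactions, of the kind the thesis investigates, can be written in standard notation as (a generic form, not the thesis's exact definition):

```latex
H = -\sum_{\langle ij\rangle, m, \sigma} t_m\, c^{\dagger}_{im\sigma} c_{jm\sigma}
    + \frac{\Delta}{2} \sum_{i\sigma} \left( n_{i1\sigma} - n_{i2\sigma} \right)
    + U \sum_{i,m} n_{im\uparrow} n_{im\downarrow}
    + \sum_{i,\sigma,\sigma'} \left( U' - \delta_{\sigma\sigma'} J \right) n_{i1\sigma} n_{i2\sigma'}
```

Here band-dependent hoppings t_m lift the band degeneracy through different bandwidths, and the crystal-field parameter Δ shifts the two ionic single-particle levels relative to each other.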
Abstract:
Phase-sensitive X-ray imaging shows a high sensitivity towards electron density variations, making it well suited for imaging soft-tissue matter. However, there are still open questions about the details of the image formation process. Here, a framework for numerical simulations of phase-sensitive X-ray imaging is presented which takes both the particle- and the wave-like properties of X-rays into consideration: a split approach combines a Monte Carlo method (MC) based sample part with a wave optics simulation based propagation part. The framework can be adapted to different phase-sensitive imaging methods, such as grating interferometry and propagation-based imaging, and has been validated through comparisons with experiments for both. The validation shows that the combination of wave optics and MC has been successfully implemented and yields good agreement between measurements and simulations. This demonstrates that the physical processes relevant for developing a deeper understanding of scattering in the context of phase-sensitive imaging are modelled in a sufficiently accurate manner.
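As a flavor of the wave-optics half of such a split approach, here is a minimal angular-spectrum (Fresnel) propagation of a complex wavefront using only numpy; the geometry and the phase object are invented for illustration, and the MC sample part is not modelled:

```python
# Free-space propagation of a 2D complex field via the angular-spectrum
# method: FFT, multiply by the free-space transfer function, inverse FFT.
import numpy as np

def propagate(wavefront, wavelength, dx, z):
    """Propagate a square complex field over distance z (all units in m)."""
    n = wavefront.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    kz = np.sqrt((k**2 - (2*np.pi*FX)**2 - (2*np.pi*FY)**2).astype(complex))
    H = np.exp(1j * kz * z)                      # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(wavefront) * H)

# Example: plane wave through a weakly refracting disc (pure phase object),
# then propagated 1 m; edge enhancement appears in the intensity.
n, dx, lam = 512, 1e-6, 0.5e-10                  # 1 um pixels, ~25 keV X-rays
y, x = np.mgrid[-n//2:n//2, -n//2:n//2] * dx
phase = -0.05 * (x**2 + y**2 < (100e-6)**2)      # small phase shift in a disc
field = np.exp(1j * phase)
intensity = np.abs(propagate(field, lam, dx, z=1.0))**2
print(intensity.mean(), intensity.max())
```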
Abstract:
PURPOSE This paper describes the development of a forward planning process for modulated electron radiotherapy (MERT). The approach is based on a previously developed electron beam model used to calculate dose distributions of electron beams shaped by a photon multileaf collimator (pMLC). METHODS As the electron beam model has already been implemented in the Swiss Monte Carlo Plan environment, the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) can be included in the planning process for MERT. In a first step, CT data are imported into Eclipse and a pMLC-shaped electron beam is set up. This initial electron beam is then divided into segments, with the electron energy in each segment chosen according to the distal depth of the planning target volume (PTV) in beam direction. In order to improve the homogeneity of the dose distribution in the PTV, a feathering process (Gaussian edge feathering) is launched, which results in a number of feathered segments. For each of these segments a dose calculation is performed employing the in-house developed electron beam model along with the macro Monte Carlo dose calculation algorithm. Finally, an automated weight optimization of all segments is carried out and the total dose distribution is read back into Eclipse for display and evaluation. One academic and two clinical situations are investigated for possible benefits of MERT treatment compared with standard treatments performed in our clinics and with treatment using the bolus electron conformal (BolusECT) method. RESULTS The MERT treatment plan of the academic case was superior to the standard single-segment electron treatment plan in terms of organs at risk (OAR) sparing. Further, a comparison between an unfeathered and a feathered MERT plan showed better PTV coverage and homogeneity for the feathered plan, with V95% increased from 90% to 96% and V107% decreased from 8% to nearly 0%. For a clinical breast boost irradiation, the MERT plan led to a similar homogeneity in the PTV compared to the standard treatment plan, while the mean body dose was lower for the MERT plan. Regarding the second clinical case, a whole breast treatment, MERT resulted in a reduction of the lung volume receiving more than 45% of the prescribed dose when compared to the standard plan. On the other hand, the MERT plan led to a larger low-dose lung volume and a degraded dose homogeneity in the PTV. For the clinical cases evaluated in this work, treatment plans using the BolusECT technique resulted in a more homogeneous PTV and CTV coverage but higher doses to the OARs than the MERT plans. CONCLUSIONS MERT treatments were successfully planned for phantom and clinical cases, applying a newly developed intuitive and efficient forward planning strategy that employs an MC-based electron beam model for pMLC-shaped electron beams. It is shown that MERT can lead to a dose reduction in OARs compared to other methods. The process of feathering MERT segments results in an improvement of the dose homogeneity in the PTV.
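The final weight-optimization step lends itself to a compact illustration. A hedged sketch using a non-negative least-squares fit of segment weights to a normalized PTV prescription; the dose matrices below are random stand-ins for the Monte Carlo dose calculations described above, not the planning system's actual optimizer:

```python
# Choose non-negative weights for the (feathered) segment dose
# distributions so that the summed dose matches the PTV prescription.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_vox, n_seg = 500, 6                    # PTV voxels, MERT segments
D = rng.random((n_vox, n_seg))           # dose per unit weight, per segment
prescription = np.ones(n_vox)            # normalized target dose in the PTV

w, residual = nnls(D, prescription)      # non-negative segment weights
total = D @ w
print("weights:", np.round(w, 3), "max deviation:", np.abs(total - 1).max())
```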
Abstract:
Luminescence and energy transfer in [Zn1-xRux(bpy)3][NaAl1-yCry(ox)3] (x ≈ 0.01, y = 0.006–0.22; bpy = 2,2′-bipyridine, ox = C2O42-) and [Zn1-x-yRuxOsy(bpy)3][NaAl(ox)3] (x ≈ 0.01, y = 0.012) are presented and discussed. Surprisingly, the luminescence of the isolated luminophores [Ru(bpy)3]2+ and [Os(bpy)3]2+ in [Zn(bpy)3][NaAl(ox)3] is hardly quenched at room temperature. Steady-state luminescence spectra and decay curves show that energy transfer occurs between [Ru(bpy)3]2+ and [Cr(ox)3]3- and between [Ru(bpy)3]2+ and [Os(bpy)3]2+ in [Zn1-xRux(bpy)3][NaAl1-yCry(ox)3] and [Zn1-x-yRuxOsy(bpy)3][NaAl(ox)3], respectively. For a quantitative investigation of the energy transfer, a shell type model is developed, using a Monte Carlo procedure and the structural parameters of the systems. A good description of the experimental data is obtained assuming electric dipole-electric dipole interaction between donors and acceptors, with a critical distance Rc for [Ru(bpy)3]2+ to [Cr(ox)3]3- energy transfer of 15 Å and for [Ru(bpy)3]2+ to [Os(bpy)3]2+ energy transfer of 33 Å. These values are in good agreement with those derived using the Förster-Dexter theory.
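A minimal sketch of the shell-type Monte Carlo idea, assuming Förster-type electric dipole-electric dipole coupling, where the transfer rate to an acceptor at distance R scales as (Rc/R)^6 relative to the intrinsic donor decay rate; the shell radii and site counts below are hypothetical, not the actual oxalate-network geometry:

```python
# Monte Carlo estimate of the donor-to-acceptor transfer probability for
# a donor surrounded by neighbour shells randomly doped with acceptors.
import numpy as np

rng = np.random.default_rng(2)

def transfer_probability(shells, y, rc, trials=20000):
    """shells: list of (radius_in_angstrom, n_sites); each site holds an
    acceptor with probability y; rc is the Foerster critical distance."""
    hits = 0.0
    for _ in range(trials):
        rate = 0.0                           # transfer rate / donor decay rate
        for r, n_sites in shells:
            n_acc = rng.binomial(n_sites, y) # acceptors doped into this shell
            rate += n_acc * (rc / r) ** 6    # dipole-dipole distance law
        hits += rate / (1.0 + rate)          # competition with donor decay
    return hits / trials

shells = [(9.5, 6), (13.4, 12), (16.4, 8)]   # hypothetical shell geometry
print(transfer_probability(shells, y=0.22, rc=15.0))
```

Sweeping y in such a model traces out how the donor luminescence is quenched as the acceptor concentration grows, which is the kind of concentration dependence the abstract compares against experiment.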
Abstract:
Several models for context-sensitive analysis of modular programs have been proposed, each with different characteristics and representing different trade-offs. The advantage of these context-sensitive analyses is that they provide information which is potentially more accurate than that provided by context-free analyses. Such information can then be applied to validating/debugging the program and/or to specializing the program in order to obtain important performance improvements. Some very preliminary experimental results have also been reported for some of these models, which provided initial evidence of their potential. However, further experimentation, which is needed in order to understand the many issues left open and to show that the proposed models scale and are usable in the context of large, real-life modular programs, was left as future work. The aim of this paper is two-fold. On one hand, we provide an empirical comparison of the different models proposed in previous work, as well as experimental data on the different choices left open in those designs. On the other hand, we explore the scalability of these models by using larger modular programs as benchmarks. The results have been obtained from a realistic implementation of the models, integrated in a production-quality compiler (CiaoPP/Ciao). Our experimental results shed light on the practical implications of the different design choices and of the models themselves. We also show that context-sensitive analysis of modular programs is indeed feasible in practice, and that in certain critical cases it provides better performance results than those achievable by analyzing the whole program at once, especially in terms of memory consumption and when reanalyzing after making changes to a program, as is often the case during program development.
Abstract:
Context-sensitive analysis provides information which is potentially more accurate than that provided by context-free analysis. Such information can then be applied in order to validate/debug the program and/or to specialize the program, obtaining important improvements. Unfortunately, context-sensitive analysis of modular programs poses important theoretical and practical problems. One solution, used in several proposals, is to resort to context-free analysis. Other proposals do address context-sensitive analysis, but are only applicable when the description domain used satisfies rather restrictive properties. In this paper, we argue that a general framework for context-sensitive analysis of modular programs, i.e., one that allows using all the domains which have proved useful in practice in the non-modular setting, is indeed feasible and very useful. Driven by our experience in the design and implementation of analysis and specialization techniques in the context of CiaoPP, the Ciao system preprocessor, in this paper we discuss a number of design goals for context-sensitive analysis of modular programs as well as the problems which arise in trying to meet these goals. We also provide a high-level description of a framework for analysis of modular programs which does substantially meet these objectives. This framework is generic in that it can be instantiated in different ways in order to adapt to different contexts. Finally, the behavior of the different instantiations w.r.t. the design goals that motivate our work is also discussed.