982 results for Embedded-atom-method
Abstract:
This paper describes the concept, technical realisation and validation of a largely data-driven method to model events with Z→ττ decays. In Z→μμ events selected from proton-proton collision data recorded at √s = 8 TeV with the ATLAS experiment at the LHC in 2012, the Z decay muons are replaced by τ leptons from simulated Z→ττ decays at the level of reconstructed tracks and calorimeter cells. The τ lepton kinematics are derived from the kinematics of the original muons. Thus, only the well-understood decays of the Z boson and τ leptons as well as the detector response to the τ decay products are obtained from simulation. All other aspects of the event, such as the Z boson and jet kinematics as well as effects from multiple interactions, are given by the actual data. This so-called τ-embedding method is particularly relevant for Higgs boson searches and analyses in ττ final states, where Z→ττ decays constitute a large irreducible background that cannot be obtained directly from data control samples.
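The kinematic replacement step above can be illustrated with a toy sketch (this is not the ATLAS implementation; the function name and numeric values are made up): the direction and transverse momentum of each selected muon are reused as those of a τ lepton, whose energy is then recomputed with the τ mass.

```python
import math

M_TAU = 1.77686  # tau mass in GeV (PDG value)

def mu_to_tau(pt, eta, phi, m_tau=M_TAU):
    """Toy illustration: reinterpret a muon's direction and momentum
    as a tau's, replacing the lepton mass when computing the energy."""
    px = pt * math.cos(phi)
    py = pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(px**2 + py**2 + pz**2 + m_tau**2)
    return (px, py, pz, e)

# Two hypothetical muons from a selected Z->mumu event (pt, eta, phi):
taus = [mu_to_tau(45.0, 0.3, 1.2), mu_to_tau(38.0, -0.7, -1.9)]
```

In the actual method the resulting τ leptons are then decayed in simulation and re-embedded into the data event at the track and calorimeter-cell level; the sketch only shows the four-vector bookkeeping.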
Abstract:
BACKGROUND: Iron deficiency is a common and undertreated problem in inflammatory bowel disease (IBD). AIM: To develop an online tool to support treatment choice at the patient-specific level. METHODS: Using the RAND/UCLA Appropriateness Method (RUAM), a European expert panel assessed the appropriateness of treatment regimens for a variety of clinical scenarios in patients with non-anaemic iron deficiency (NAID) and iron deficiency anaemia (IDA). Treatment options included adjustment of IBD medication only, oral iron supplementation, high-/low-dose intravenous (IV) regimens, IV iron plus erythropoietin-stimulating agent (ESA), and blood transfusion. The panel process consisted of two individual rating rounds (1148 treatment indications; 9-point scale) and three plenary discussion meetings. RESULTS: The panel reached agreement on 71% of treatment indications. 'No treatment' was never considered appropriate, and repeat treatment after previous failure was generally discouraged. For 98% of scenarios, at least one treatment was appropriate. Adjustment of IBD medication was deemed appropriate in all patients with active disease. Use of oral iron was mainly considered an option in NAID and mildly anaemic patients without disease activity. IV regimens were often judged appropriate, with high-dose IV iron being the preferred option in 77% of IDA scenarios. Blood transfusion and IV+ESA were indicated in exceptional cases only. CONCLUSIONS: The RUAM revealed high agreement amongst experts on the management of iron deficiency in patients with IBD. High-dose IV iron was more often considered appropriate than other options. To facilitate dissemination of the recommendations, panel outcomes were embedded in an online tool, accessible via http://ferroscope.com/.
Abstract:
The present study was performed to assess the interlaboratory reproducibility of the molecular detection and identification of species of Zygomycetes from formalin-fixed paraffin-embedded kidney and brain tissues obtained from experimentally infected mice. Animals were infected with one of five species (Rhizopus oryzae, Rhizopus microsporus, Lichtheimia corymbifera, Rhizomucor pusillus, and Mucor circinelloides). Samples with 1, 10, or 30 slide cuts of the tissues were prepared from each paraffin block, the sample identities were blinded for analysis, and the samples were mailed to each of seven laboratories for the assessment of sensitivity. A protocol describing the extraction method and the PCR amplification procedure was provided. The internal transcribed spacer 1 (ITS1) region was amplified by PCR with the fungal universal primers ITS1 and ITS2 and sequenced. As negative results were obtained for 93% of the tissue specimens infected by M. circinelloides, the data for this species were excluded from the analysis. Positive PCR results were obtained for 93% (52/56), 89% (50/56), and 27% (15/56) of the samples with 30, 10, and 1 slide cuts, respectively. There were minor differences, depending on the organ tissue, fungal species, and laboratory. Correct species identification was possible for 100% (30 cuts), 98% (10 cuts), and 93% (1 cut) of the cases. With the protocol used in the present study, the interlaboratory reproducibility of ITS sequencing for the identification of major Zygomycetes species from formalin-fixed paraffin-embedded tissues can reach 100%, when enough material is available.
Abstract:
In this study we compared two polymerase chain reaction (PCR) methods using either 16S ribosomal RNA (rRNA) or 23S rRNA gene primers for the detection of different Leptospira interrogans serovars. The performance of these two methods was assessed using DNA extracted from bovine tissues previously inoculated with several bacterial suspensions. PCR was performed on the same tissues before and after the formalin-fixed, paraffin-embedding procedure (FFPE tissues). The 23S rDNA PCR detected all fresh and FFPE positive tissues while the 16S rDNA-based protocol detected primarily the positive fresh tissues. Both methods are specific for pathogenic L. interrogans. The 23S-based PCR method successfully detected Leptospira in four dubious cases of human leptospirosis from archival tissue specimens and one leptospirosis-positive canine specimen. A sensitive method for leptospirosis identification in FFPE tissues would be a useful tool to screen histological specimen archives and gain a better assessment of human leptospirosis prevalence, especially in tropical countries, where large outbreaks can occur following the rainy season.
Real-Time implementation of a blind authentication method using self-synchronous speech watermarking
Abstract:
A blind speech watermarking scheme that meets hard real-time deadlines is presented and implemented. A key issue in such block-oriented watermarking techniques is preserving synchronization, that is, recovering the exact position of each block during mark extraction. The presented scheme can be split into two distinct parts: the synchronization method and the information mark method. The former is embedded in the time domain and is fast enough to run under real-time requirements. The latter carries the authentication information and is embedded in the wavelet domain. Both the synchronization and information mark techniques are tunable, yielding a configurable method: capacity, transparency and robustness can be configured depending on the needs. This makes the scheme useful for professional applications, such as telephony authentication or even transmitting information through radio applications.
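Time-domain synchronization of the kind described can be sketched as follows (an illustrative toy, not the paper's algorithm; the choice of a Barker-13 mark and all names are ours): a known sequence with low autocorrelation sidelobes is additively embedded, and the extractor locates it by sliding correlation.

```python
import math

# Barker-13 sequence: its low autocorrelation sidelobes make it a
# convenient synchronization mark for this toy example.
SYNC = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def correlate_at(signal, pos):
    """Dot product of the sync mark with the signal at offset pos."""
    return sum(s * x for s, x in zip(SYNC, signal[pos:pos + len(SYNC)]))

def find_sync(signal):
    """Return the offset where the sync mark correlates most strongly."""
    return max(range(len(signal) - len(SYNC) + 1),
               key=lambda p: correlate_at(signal, p))

# Quiet host signal with the mark additively embedded at offset 100.
alpha = 0.5  # embedding strength: trades transparency against robustness
host = [0.01 * math.sin(0.1 * i) for i in range(500)]
for i, s in enumerate(SYNC):
    host[100 + i] += alpha * s

print(find_sync(host))  # → 100
```

The embedding strength `alpha` plays the role of the tunable transparency/robustness trade-off mentioned in the abstract: a larger mark survives more distortion but is more audible.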
Abstract:
Selected configuration interaction (SCI) for atomic and molecular electronic structure calculations is reformulated in a general framework encompassing all CI methods. The linked cluster expansion is used as an intermediate device to approximate CI coefficients B_K of disconnected configurations K (those that can be expressed as products of combinations of singly and doubly excited ones) in terms of CI coefficients of lower-excited configurations, where each K is a linear combination of configuration state functions (CSFs) over all degenerate elements of K. Disconnected configurations up to sextuply excited ones are selected by Brown's energy formula, ΔE_K = (E − H_KK) B_K² / (1 − B_K²), with B_K determined from coefficients of singly and doubly excited configurations. The truncation energy error from disconnected configurations, ΔE_dis, is approximated by the sum of the ΔE_K of all discarded K. The remaining (connected) configurations are selected by thresholds based on natural orbital concepts. Given a model CI space M, a usual upper bound E_S is computed by CI in a selected space S, and E_M = E_S + ΔE_dis + δE, where δE is a residual error which can be calculated by well-defined sensitivity analyses. An SCI calculation on the Ne ground state featuring 1077 orbitals is presented. Convergence to within near-spectroscopic accuracy (0.5 cm⁻¹) is achieved in a model space M of 1.4 × 10⁹ CSFs (1.1 × 10¹² determinants) containing up to quadruply excited CSFs. Accurate energy contributions of quintuples and sextuples in a model space of 6.5 × 10¹² CSFs are obtained. The impact of SCI on various orbital methods is discussed. Since ΔE_dis can readily be calculated for very large basis sets without the need of a CI calculation, it can be used to estimate the orbital basis incompleteness error. A method for precise and efficient evaluation of E_S is taken up in a companion paper.
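The selection step driven by Brown's formula can be sketched as follows (all numbers and names below are made up for demonstration; only the formula itself comes from the abstract): each disconnected configuration K is scored by its estimated energy contribution, configurations below a threshold are discarded, and the discarded contributions are summed into the truncation error ΔE_dis.

```python
def brown_delta_e(E, H_KK, B_K):
    """Brown's energy formula: estimated contribution of configuration K,
    dE_K = (E - H_KK) * B_K**2 / (1 - B_K**2)."""
    return (E - H_KK) * B_K**2 / (1.0 - B_K**2)

E = -128.9  # current CI energy estimate (hartree); made-up value
configs = {              # K -> (H_KK, B_K); made-up values
    "K1": (-120.0, 0.02),
    "K2": (-118.0, 0.0005),
    "K3": (-121.5, 0.015),
}

threshold = 1e-5  # keep configurations whose |dE_K| exceeds this
selected = [K for K, (H, B) in configs.items()
            if abs(brown_delta_e(E, H, B)) > threshold]

# Discarded contributions accumulate into the truncation error dE_dis.
dE_dis = sum(brown_delta_e(E, H, B) for K, (H, B) in configs.items()
             if K not in selected)
print(selected, dE_dis)
```

Because ΔE_dis is accumulated rather than thrown away, the final energy estimate E_M = E_S + ΔE_dis + δE can correct for the truncation, which is the point of the formulation above.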
Abstract:
The present work provides a generalization of Mayer's energy decomposition for the density-functional theory (DFT) case. It is shown that one- and two-atom Hartree-Fock energy components in Mayer's approach can be represented as an action of a one-atom potential V_A on a one-atom density ρ_A or ρ_B. To treat the exchange-correlation term in the DFT energy expression in a similar way, the exchange-correlation energy density per electron is expanded into a linear combination of basis functions. Calculations carried out for a number of density functionals demonstrate that the DFT and Hartree-Fock two-atom energies agree to a reasonable extent with each other. The two-atom energies for strong covalent bonds are within the range of typical bond dissociation energies and are therefore a convenient computational tool for assessment of individual bond strength in polyatomic molecules. For nonspecific nonbonding interactions, the two-atom energies are low. They can be either repulsive or slightly attractive, but the DFT results more frequently yield small attractive values compared to the Hartree-Fock case. The hydrogen bond in the water dimer is calculated to be between the strong covalent and nonbonding interactions on the energy scale.
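Schematically, a Mayer-type decomposition groups the total energy into one- and two-atom terms; the sketch below shows only this generic structure (the precise content of each term, in particular the DFT exchange-correlation part, is what the paper works out):

```latex
E_{\mathrm{tot}} \;\approx\; \sum_{A} E_{A} \;+\; \sum_{A<B} E_{AB},
\qquad
E_{AB} \;\sim\; \int V_{A}(\mathbf{r})\,\rho_{B}(\mathbf{r})\,\mathrm{d}\mathbf{r}
```

The two-atom terms E_AB are the quantities compared against bond dissociation energies in the abstract.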
Abstract:
The correlation between the structural (average size and density) and optoelectronic properties [band gap and photoluminescence (PL)] of Si nanocrystals embedded in SiO2 is among the essential factors in understanding their emission mechanism. This correlation has been difficult to establish in the past due to the lack of reliable methods for measuring the size distribution of nanocrystals from electron microscopy, mainly because of the insufficient contrast between Si and SiO2. With this aim, we have recently developed a successful method for imaging Si nanocrystals in SiO2 matrices. This is done by using high-resolution electron microscopy in conjunction with conventional electron microscopy in dark field conditions. Then, by varying the time of annealing in a large time scale we have been able to track the nucleation, pure growth, and ripening stages of the nanocrystal population. The nucleation and pure growth stages are almost completed after a few minutes of annealing time at 1100°C in N2 and afterward the ensemble undergoes an asymptotic ripening process. In contrast, the PL intensity steadily increases and reaches saturation after 3-4 h of annealing at 1100°C. Forming gas postannealing considerably enhances the PL intensity but only for samples annealed previously in less time than that needed for PL saturation. The effects of forming gas are reversible and do not modify the spectral shape of the PL emission. The PL intensity shows at all times an inverse correlation with the amount of Pb paramagnetic centers at the Si-SiO2 nanocrystal-matrix interfaces, which have been measured by electron spin resonance. Consequently, the Pb centers or other centers associated with them are interfacial nonradiative channels for recombination and the emission yield largely depends on the interface passivation. We have correlated as well the average size of the nanocrystals with their optical band gap and PL emission energy. 
The band gap and emission energy shift to the blue as the nanocrystal size shrinks, in agreement with models based on quantum confinement. As a main result, we have found that the Stokes shift is independent of the average size of the nanocrystals and has a constant value of 0.26±0.03 eV, which is almost twice the energy of the Si–O vibration. This finding suggests that among the possible channels for radiative recombination, the dominant one for Si nanocrystals embedded in SiO2 is a fundamental transition spatially located at the Si–SiO2 interface with the assistance of a local Si–O vibration.
Abstract:
This thesis deals with a hardware accelerated Java virtual machine, named REALJava. The REALJava virtual machine is targeted for resource constrained embedded systems. The goal is to attain increased computational performance with reduced power consumption. While these objectives are often seen as trade-offs, in this context both of them can be attained simultaneously by using dedicated hardware. The target level of the computational performance of the REALJava virtual machine is initially set to be as fast as the currently available full custom ASIC Java processors. As a secondary goal all of the components of the virtual machine are designed so that the resulting system can be scaled to support multiple co-processor cores. The virtual machine is designed using the hardware/software co-design paradigm. The partitioning between the two domains is flexible, allowing customizations to the resulting system; for instance, the floating point support can be omitted from the hardware in order to decrease the size of the co-processor core. The communication between the hardware and the software domains is encapsulated into modules. This allows the REALJava virtual machine to be easily integrated into any system, simply by redesigning the communication modules. Besides the virtual machine and the related co-processor architecture, several performance enhancing techniques are presented. These include techniques related to instruction folding, stack handling, method invocation, constant loading and control in time domain. The REALJava virtual machine is prototyped using three different FPGA platforms. The original pipeline structure is modified to suit the FPGA environment. The performance of the resulting Java virtual machine is evaluated against existing Java solutions in the embedded systems field. The results show that the goals are attained, both in terms of computational performance and power consumption.
The computational performance in particular is evaluated thoroughly, and the results show that REALJava is more than twice as fast as the fastest full custom ASIC Java processor. In addition to standard Java virtual machine benchmarks, several new Java applications are designed both to verify the results and to broaden the spectrum of the tests.
Abstract:
The purpose of this study is to develop a crowdsourced videographic research method for consumer culture research. Videography provides opportunities for expressing contextual and culturally embedded relations. Thus, developing new ways to conduct videographic research is meaningful. This study develops the crowdsourced videographic method based on a literature review and evaluation of a focal study. The literature review follows a qualitative systematic review process. Through the literature review, based on different methodological, crowdsourcing and consumer research related literature, this study defines the method, its application process and evaluation criteria. Furthermore, the evaluation of the focal study, where the method was applied, completes the study. This study applies professional review with self-evaluation as a form of evaluation, drawing from secondary data including research task description, screenshots of the mobile application used in the focal study, videos collected from the participants, and self-evaluation by the author. The focal study is analyzed according to its suitability to consumer culture research, research process and quality. Definitions and descriptions of the research method, its process and quality criteria form the theoretical contribution of this study. Evaluating the focal study using these definitions underlines some best practices of this type of research, generating the practical contribution of this study. Finally, this study provides ideas for future research. First, defining the boundaries of the use of crowdsourcing in various parts of conducting research. Second, improving the method by applying it to new research contexts. Third, testing how changes in one dimension of the crowdsourcing models interact with other dimensions. Fourth, comparing the quality criteria applied in this study to various other quality criteria to improve the method's usefulness.
Overall, this study represents a starting point for further development of the crowdsourced videographic research method.
Abstract:
Numerical simulation of plasma sources is very important. Such models allow different plasma parameters to be varied with a high degree of accuracy. Moreover, they allow measurements to be made without disturbing the balance of the system. Recently, scientific and practical interest has increased in so-called two-chamber plasma sources. In one chamber (the small or discharge chamber) an external power source is embedded; there the plasma forms. In the other (the large or diffusion chamber) plasma exists due to the transport of particles and energy through the boundary between the chambers. In this particular work, models of two-chamber plasma sources with argon and oxygen as active media were constructed. These models give interesting results for the electric field profiles and, as a consequence, for the density profiles of charged particles.
Abstract:
The Zubarev equation of motion method has been applied to an anharmonic crystal to O(λ⁴). All possible decoupling schemes have been interpreted in order to determine finite temperature expressions for the one-phonon Green's function (and self-energy) to O(λ⁴) for a crystal in which every atom is on a site of inversion symmetry. In order to provide a check of these results, the Helmholtz free energy expressions derived from the self-energy expressions have been shown to agree in the high temperature limit with the results obtained from the diagrammatic method. Expressions for the correlation functions that are related to the mean square displacement have been derived to O(λ⁴) in the high temperature limit.
Abstract:
This is a Self-study about my role as a teacher, driven by the question: "How do I improve my practice?" (Whitehead, 1989). In this study, I explored the discomfort that I had with the way that I had been teaching. Specifically, I worked to uncover the reasons behind my obsessive (mis)management of my students. I wrote of how I came to give my Self permission for this critique: how I came to know that all knowledge is a construction, and that my practice, too, is a construction. I grounded this journey within my experiences. I constructed these experiences in narrative form in order to reach a greater understanding of how I came to be the teacher I initially was. I explored metaphors that impacted my practice, re-constructed them, and saw more clearly the assumptions and influences that have guided my teaching. I centred my inquiry into my teaching within an Action Reflection methodology, borrowing Jack Whitehead's (1989) term to describe my version of Action Research. I relied upon the embedded cyclical pattern of Action Reflection to understand my teaching Self: beginning from a critical moment, reflecting upon it, and then taking appropriate action, and continuing in this way, working to improve my practice. To understand these critical moments, I developed a personal definition of critical literacy. I then turned this definition inward. In treating my practice as a textual production, I applied critical literacy as a framework in coming to know and understand the construction that is my teaching. I grounded my thesis journey within my Self, positioning my study within my experiences of being a grade 1 teacher struggling to teach critical literacy. I then repositioned my journey to that of a grade 1 teacher struggling to use critical literacy to improve my practice. This journey, then, is about the transition from critical-literacy-as-subject to critical-literacy-as-instructional-method in improving my practice.
I journeyed inwards, using a critical moment to build new understandings, leading me to the next critical moment, and continued in this cyclical way. I worked in this meandering yet deliberate way to reach a new place in my teaching: one that is more inclusive of all the voices in my room. I concluded my journey with a beginning: a beginning of re-visioning my practice. In telling the stories of my journey, of my teaching, of my experiences, I changed into the teacher that I am more comfortable with. I've come to the frightening conclusion that I am the decisive element in the classroom. It's my personal approach that creates the climate. It's my daily mood that makes the weather. As a teacher, I possess a tremendous power to make a person's life miserable or joyous. I can be a tool of torture or an instrument of inspiration. I can humiliate or humour, hurt or heal. In all situations, it is my response that decides whether a crisis will be escalated or de-escalated and a person humanized or de-humanized. (Ginott, as cited in Buscaglia, 2002, p. 22)
Abstract:
Bank switching in embedded processors with a partitioned memory architecture results in code size as well as run-time overhead. This work presents an algorithm, and its application, to assist the compiler in eliminating the redundant bank-switching code introduced and in deciding the optimum data allocation to banked memory. A relation matrix formed for the memory bank state transition corresponding to each bank selection instruction is used for the detection of redundant codes. Data allocation to memory is done by considering all possible permutations of memory banks and combinations of data. The compiler output corresponding to each data mapping scheme is subjected to a static machine code analysis which identifies the one with the minimum number of bank-switching codes. Even though the method is compiler independent, the algorithm utilizes certain architectural features of the target processor. A prototype based on PIC 16F87X microcontrollers is described. The method scales well to larger numbers of memory banks and other architectures, so that high-performance compilers can integrate this technique for efficient code generation. The technique is illustrated with an example.
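The core idea of redundancy elimination can be sketched in a few lines (a simplified toy, not the paper's relation-matrix algorithm; the instruction encoding is invented): a pass tracks which bank is known to be active and drops any bank-select instruction that reselects it.

```python
def eliminate_redundant_selects(code):
    """Remove bank-select instructions that reselect the active bank.
    code: list of ("SELECT", bank_number) or ("OP", text) tuples.
    Simplification: assumes straight-line code; a real pass must reset
    the known bank state at labels and branch targets."""
    out, current = [], None  # current = bank known to be active, if any
    for instr in code:
        if instr[0] == "SELECT":
            if instr[1] == current:
                continue          # redundant: this bank is already selected
            current = instr[1]
        out.append(instr)
    return out

code = [("SELECT", 1), ("OP", "movwf A"),
        ("SELECT", 1), ("OP", "movwf B"),   # redundant reselect of bank 1
        ("SELECT", 2), ("OP", "movwf C")]
print(eliminate_redundant_selects(code))
```

The paper's data-allocation search then amounts to running such an analysis over the compiler output for every candidate mapping of data to banks and keeping the mapping with the fewest surviving SELECTs.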
Abstract:
For the theoretical investigation of local phenomena (adsorption at surfaces, defects or impurities within a crystal, etc.) one can assume that the effects caused by the local disturbance are limited to the neighbouring particles. With this model, well known as the cluster approximation, an infinite system can be simulated by a much smaller segment of the surface (the cluster). The size of this segment varies strongly between systems. Calculations of the convergence of the bond distance and binding energy of an adsorbed aluminum atom on an Al(100) surface showed that more than 100 atoms are necessary to obtain a sufficient description of the surface properties. With a fully quantum-mechanical approach, however, systems of this size cannot be calculated because of the required computer memory and processor time. We therefore developed an embedding procedure for the simulation of surfaces and solids, in which the whole system is partitioned into several parts that are treated differently: the internal part (the cluster), located near the site of the adsorbate, is calculated fully self-consistently and is embedded into an environment, whereas the influence of the environment on the cluster enters as an additional external potential in the relativistic Kohn-Sham equations. The procedure is based on density functional theory. This means, however, that the choice of the electronic density of the environment determines the quality of the embedding procedure. The environment density was modelled in three different ways: from atomic densities; from densities transferred from a large preceding calculation without embedding; and from copied bulk densities. The embedding procedure was tested on the atomic adsorption of Al on Al(100) and of Cu on Cu(100). The result was that, if the environment is chosen appropriately, only 9 embedded atoms are needed for the Al system to reproduce the results of exact slab calculations.
For the Cu system, calculations without the embedding procedure were performed first, with the result that 60 atoms already suffice as a surface cluster. Using the embedding procedure, the same values were obtained with only 25 atoms. This is a substantial improvement, considering that the calculation time increases cubically with the number of atoms. With the embedding method, infinite systems can be treated by molecular methods. Additionally, the program code was extended with the capability to perform molecular-dynamics simulations. It is now possible, beyond the previous fixed-core calculations, to also investigate the structures of small clusters and surfaces. As a first application we studied the adsorption of Cu on Cu(100): we calculated the relaxed positions of the atoms located close to the adsorption site and afterwards performed the fully quantum-mechanical calculation of this system, repeating the procedure for different distances to the surface. Thus a realistic adsorption process could be examined for the first time. It should be remarked that for the Cu reference calculations (without embedding) we began to parallelize the entire program code; only because of this were the investigations of the 100-atom Cu surface clusters possible. Due to the good efficiency of both the parallelization and the developed embedding procedure, we will be able to apply the combination in the future. In these areas it will be possible to bring in results of fully relativistic molecular calculations, which will be very interesting especially for the regime of heavy systems.
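Schematically, the embedding described above modifies the cluster's Kohn-Sham equations by one extra term; the sketch below shows only this generic structure (symbol names are ours, and the actual procedure is relativistic):

```latex
\left[\,\hat{T} \;+\; V_{\mathrm{eff}}\!\left[\rho_{\mathrm{cluster}}\right](\mathbf{r})
\;+\; V_{\mathrm{ext}}^{\mathrm{env}}\!\left[\rho_{\mathrm{env}}\right](\mathbf{r})\,\right]
\varphi_i(\mathbf{r}) \;=\; \varepsilon_i\,\varphi_i(\mathbf{r})
```

Only ρ_cluster is iterated to self-consistency; ρ_env is fixed by one of the three modelling choices listed in the abstract, which is why that choice governs the quality of the result.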