978 results for NUCLEAR DATA COLLECTIONS
Abstract:
We are investigating the performance of a data acquisition system for time-of-flight PET, based on LYSO crystal slabs and 64-channel silicon photomultiplier (SiPM) matrices (1.2 cm² of active area each). Measurements have been performed to test the timing capability of the detection system (SiPM matrices coupled to a LYSO slab and the read-out electronics) with both a test signal and a radioactive source.
Abstract:
A new version of the TomoRebuild data reduction software package is presented for the reconstruction of scanning transmission ion microscopy tomography (STIMT) and particle induced X-ray emission tomography (PIXET) images. First, we present a state-of-the-art review of the reconstruction codes available for ion beam microtomography. The algorithm proposed here brings several advantages. It is a portable, multi-platform code, designed in C++ with well-separated classes for easier use and evolution. Data reduction is separated into different steps, and the intermediate results may be checked if necessary. Although no additional graphics library or numerical tool is required to run the program from the command line, a user-friendly interface was designed in Java as an ImageJ plugin. All experimental and reconstruction parameters may be entered either through this plugin or directly in text-format files. A simple standard format is proposed for the input of experimental data. Optional graphic applications using the ROOT interface may be used separately to display and fit energy spectra. Regarding the reconstruction process, the filtered backprojection (FBP) algorithm, already present in the previous version of the code, was optimized so that it is now about 10 times faster. In addition, the Maximum Likelihood Expectation Maximization (MLEM) algorithm and its accelerated version, Ordered Subsets Expectation Maximization (OSEM), were implemented. A detailed user guide in English is available. A reconstruction example using experimental data from a biological sample is given. It shows the capability of the code to reduce noise in the sinograms and to deal with incomplete data, which opens a new perspective on tomography with a low number of projections or a limited angle.
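As an aside, the iterative MLEM reconstruction mentioned in this abstract follows a standard multiplicative update; the sketch below is a generic textbook illustration of that update, not the TomoRebuild implementation (matrix shapes and stopping criterion are assumptions):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Generic MLEM for an emission-tomography model y ~ Poisson(A @ x).

    A : (n_rays, n_pixels) system matrix of projection weights
    y : (n_rays,) measured sinogram counts
    Returns a non-negative image estimate x.
    """
    x = np.ones(A.shape[1])              # flat, strictly positive start image
    sens = A.sum(axis=0)                 # sensitivity image (column sums)
    for _ in range(n_iter):
        proj = A @ x                     # forward projection of current image
        ratio = y / np.maximum(proj, 1e-12)   # measured / estimated counts
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
    return x
```

OSEM accelerates this by applying the same update to ordered subsets of the rows of A within each iteration.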
Abstract:
GRS Results for the Burnup Pin-cell Benchmark: Propagation of Cross-Section, Fission Yields and Decay Data Uncertainties
Abstract:
Analysis of Neutron Thermal Scattering Data Uncertainties in PWRs
Abstract:
A “Collaborative Agreement” involving the collective participation of our students in the last year of our “Nuclear Engineering Master Degree Programme” in “the review and capturing of selected spent fuel isotopic assay data sets to be included in the new SFCOMPO database”.
Generation of Fission Yield covariance data and application to Fission Pulse Decay Heat calculations
Abstract:
Abstract:
A validation of the burn-up simulation system EVOLCODE 2.0 is presented here, involving the experimental measurement of U and Pu isotopes and some fission fragment production ratios after a burn-up of around 30 GWd/tU in a Pressurized Light Water Reactor (PWR). This work provides an in-depth analysis of the validation results, including the possible sources of the uncertainties. An uncertainty analysis based on the sensitivity methodology has also been performed, providing the uncertainties in the isotopic content propagated from the cross-section uncertainties. An improvement of the classical Sensitivity/Uncertainty (S/U) model has been developed to take into account the implicit dependence of the neutron flux normalization, that is, the effect of the constant power of the reactor. The improved S/U methodology, usually neglected in this kind of study, has proven to be an important contribution to the explanation of some simulation-experiment discrepancies. In general, the cross-section uncertainties are, for the most relevant actinides, an important contributor to the simulation uncertainties, of the same order of magnitude as, and sometimes even larger than, the experimental uncertainties and the experiment-simulation differences. Additionally, some hints for the improvement of the JEFF-3.1.1 fission yield library and for the correction of some errata in the experimental data are presented.
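The first-order sensitivity propagation underlying an S/U analysis of this kind is the standard "sandwich rule": the covariance of the responses is S C Sᵀ, where S holds the sensitivities of the responses (e.g. isotopic concentrations) to the input parameters (e.g. cross sections) and C is the parameter covariance matrix. The sketch below is a generic illustration with made-up numbers, not the EVOLCODE methodology:

```python
import numpy as np

def propagate_uncertainty(S, C):
    """First-order (sandwich rule) uncertainty propagation.

    S : (n_responses, n_parameters) sensitivity matrix dR/dp
    C : (n_parameters, n_parameters) parameter covariance matrix
    Returns the (n_responses, n_responses) response covariance S @ C @ S.T.
    """
    return S @ C @ S.T

# Hypothetical example: one response depending on two cross sections,
# with relative sensitivities 1.0 and 2.0 and uncorrelated 20% / 10% errors.
S = np.array([[1.0, 2.0]])
C = np.diag([0.04, 0.01])
cov = propagate_uncertainty(S, C)   # variance = 1*0.04 + 4*0.01 = 0.08
```

The square root of the diagonal of the result gives the propagated one-sigma uncertainties on each response.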
Abstract:
Since the beginning of time, human beings have needed to understand and analyze everything around them. To that end, they have relied on different tools: cave paintings, the Library of Alexandria, vast collections of books and, nowadays, an enormous amount of computerized information. All of this has been stored, as the technology of each era allowed, in the hope that it would prove useful through consultation and analysis. The same remains true today. Until a few years ago, information was analyzed manually or by means of relational databases. Now the time has come for a new technology, Big Data, which makes it possible to analyze vast quantities of data of all kinds in relatively short times. Throughout this book, the characteristics and advantages of Big Data are examined, together with a study of the Hadoop platform. Hadoop is a Java-based platform that can analyze large amounts of data in different formats and from different sources. Over the course of these pages, the reader is given the background needed for a better understanding of the concept, its place in time, its uses, and the forecasts for its evolution and growth over the coming years.
Abstract:
The present work is a preliminary study to establish the optimum experimental conditions and data processing for accomplishing the strategies established by the Action Plan for the EU olive oil sector. The objectives of the work were: a) to monitor the evolution of extra virgin olive oil exposed to indirect solar light in transparent glass bottles during four months; b) to identify spectral differences between edible and lampante virgin olive oil by applying high-resolution Nuclear Magnetic Resonance (HR-NMR) spectroscopy. The present study could contribute to determining the minimum storage date and the optimum storage conditions, and to properly characterizing olive oil.
Abstract:
Language resources, such as multilingual lexica and multilingual electronic dictionaries, contain collections of lexical entries in several languages. Having access to the corresponding explicit or implicit translation relations between such entries might be of great interest for many NLP-based applications. By using Semantic Web-based techniques, translations can be made available on the Web to be consumed by other (semantics-enabled) resources in a direct manner, without relying on application-specific formats. To that end, in this paper we propose a model for representing translations as linked data, as an extension of the lemon model. Our translation module represents some core information associated with term translations and does not commit to specific views or translation theories. As a proof of concept, we have extracted the translations of the terms contained in Terminesp, a multilingual terminological database, and represented them as linked data. We have made them accessible on the Web both for humans (via a Web interface) and software agents (with a SPARQL endpoint).
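To make the idea of reifying a translation relation concrete, the sketch below emits Turtle-style triples linking a source and a target lexical sense through a translation node. The prefix `tr:`, the property names, and the example sense IRIs are illustrative placeholders, not the actual vocabulary of the lemon translation module:

```python
def translation_triples(source, target, subj="ex:trans1"):
    """Emit Turtle triples reifying a translation between two lexical senses.

    source, target : CURIEs of the source/target sense nodes
    subj           : CURIE of the translation node itself
    All vocabulary terms here are illustrative placeholders.
    """
    return "\n".join([
        f"{subj} a tr:Translation ;",
        f"    tr:translationSource {source} ;",
        f"    tr:translationTarget {target} .",
    ])

print(translation_triples("ex:banco_es_sense", "ex:bank_en_sense"))
```

Reifying the translation as its own node (rather than a direct property between senses) is what lets additional core information, such as provenance or directionality, be attached to each translation.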
Abstract:
In this paper we present a dataset composed of domain-specific sentiment lexicons in six languages for two domains. We used existing collections of reviews from TripAdvisor, Amazon, the Stanford Network Analysis Project and the OpinRank Review Dataset. We use an RDF model based on the lemon and Marl formats to represent the lexicons. We describe the methodology that we applied to generate the domain-specific lexicons, and we provide access information to our datasets.
Abstract:
Yeast cells mutated in YRB2, which encodes a nuclear protein with similarity to other Ran-binding proteins, fail to export nuclear export signal (NES)-containing proteins including HIV Rev out of the nucleus. Unlike Xpo1p/Crm1p/exportin, an NES receptor, Yrb2p does not shuttle between the nucleus and the cytoplasm but instead remains inside the nucleus. However, by both biochemical and genetic criteria, Yrb2p interacts with Xpo1p and not with other members of the importin/karyopherin β superfamily. Moreover, the Yrb2p region containing nucleoporin-like FG repeats is important for NES-mediated protein export. Taken together, these data suggest that Yrb2p acts inside the nucleus to mediate the action of Xpo1p in at least one of several nuclear export pathways.
Abstract:
The endogenous clock that drives circadian rhythms is thought to communicate temporal information within the cell via cycling downstream transcripts. A transcript encoding a glycine-rich RNA-binding protein, Atgrp7, in Arabidopsis thaliana undergoes circadian oscillations with peak levels in the evening. The AtGRP7 protein also cycles with a time delay so that Atgrp7 transcript levels decline when the AtGRP7 protein accumulates to high levels. After AtGRP7 protein concentration has fallen to trough levels, Atgrp7 transcript starts to reaccumulate. Overexpression of AtGRP7 in transgenic Arabidopsis plants severely depresses cycling of the endogenous Atgrp7 transcript. These data establish both transcript and protein as components of a negative feedback circuit capable of generating a stable oscillation. AtGRP7 overexpression also depresses the oscillation of the circadian-regulated transcript encoding the related RNA-binding protein AtGRP8 but does not affect the oscillation of transcripts such as cab or catalase mRNAs. We propose that the AtGRP7 autoregulatory loop represents a “slave” oscillator in Arabidopsis that receives temporal information from a central “master” oscillator, conserves the rhythmicity by negative feedback, and transduces it to the output pathway by regulating a subset of clock-controlled transcripts.
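The negative feedback circuit proposed in this abstract (protein represses its own transcript, with a time delay between the two peaks) can be caricatured by a minimal two-variable model with Hill-type repression. This is a generic illustration with made-up parameters, not a model fitted to AtGRP7 data; note that without an explicit delay this two-variable version yields damped rather than sustained oscillations:

```python
def simulate(alpha=5.0, K=1.0, n=4, beta=1.0, deg_m=1.0, deg_p=1.0,
             dt=0.01, steps=3000):
    """Euler integration of a minimal transcript/protein feedback loop.

    m : transcript level, produced under Hill repression by protein p
    p : protein level, translated from m; both degrade linearly
    Returns the two trajectories as lists.
    """
    m, p = 0.0, 0.0
    ms, ps = [], []
    for _ in range(steps):
        dmdt = alpha / (1.0 + (p / K) ** n) - deg_m * m  # repressed synthesis
        dpdt = beta * m - deg_p * p                      # translation, decay
        m += dt * dmdt
        p += dt * dpdt
        ms.append(m)
        ps.append(p)
    return ms, ps
```

The protein peak lags the transcript peak, reproducing the phase delay described for AtGRP7; sustained rhythms additionally require the delay contributed by intermediate steps or the upstream master oscillator.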
Abstract:
A cellular protein, previously described as p35/38, binds to the complementary (−)-strand of the leader RNA and intergenic (IG) sequence of mouse hepatitis virus (MHV) RNA. The extent of the binding of this protein to IG sites correlates with the efficiency of the subgenomic mRNA transcription from that IG site, suggesting that it is a requisite transcription factor. We have purified this protein and determined by partial peptide sequencing that it is heterogeneous nuclear ribonucleoprotein (hnRNP) A1, an abundant, primarily nuclear protein. hnRNP A1 shuttles between the nucleus and cytoplasm and plays a role in the regulation of alternative RNA splicing. The MHV(−)-strand leader and IG sequences conform to the consensus binding motifs of hnRNP A1. Recombinant hnRNP A1 bound to these two RNA regions in vitro in a sequence-specific manner. During MHV infection, hnRNP A1 relocalizes from the nucleus to the cytoplasm, where viral replication occurs. These data suggest that hnRNP A1 is a cellular factor that regulates the RNA-dependent RNA transcription of the virus.
RanGTP-mediated nuclear export of karyopherin α involves its interaction with the nucleoporin Nup153
Abstract:
Using binding assays, we discovered an interaction between karyopherin α2 and the nucleoporin Nup153 and mapped their interacting domains. We also isolated a 15-kDa tryptic fragment of karyopherin β1, termed β1*, that contains a determinant for binding to the peptide repeat containing nucleoporin Nup98. In an in vitro assay in which export of endogenous nuclear karyopherin α from nuclei of digitonin-permeabilized cells was quantitatively monitored by indirect immunofluorescence with anti-karyopherin α antibodies, we found that karyopherin α export was stimulated by added GTPase Ran, required GTP hydrolysis, and was inhibited by wheat germ agglutinin. RanGTP-mediated export of karyopherin α was inhibited by peptides representing the interacting domains of Nup153 and karyopherin α2, indicating that the binding reactions detected in vitro are physiologically relevant and verifying our mapping data. Moreover, β1*, although it inhibited import, did not inhibit export of karyopherin α. Hence, karyopherin α import into and export from nuclei are asymmetric processes.