929 results for source code analysis


Abstract:

The objective of this work is to characterize chromosome 1 of the genome of A. thaliana, a small flowering plant used as a model organism in studies of biology and genetics, on the basis of a recent mathematical model of the genetic code. I analyze and compare different portions of the genome: genes, exons, coding sequences (CDS), introns, long introns, intergenic regions, untranslated regions (UTR) and regulatory sequences. To accomplish this task, I transformed nucleotide sequences into binary sequences based on the definition of three different dichotomic classes. The descriptive analysis of the binary strings indicates the presence of regularities in each portion of the genome considered. In particular, there are remarkable differences between coding sequences (CDS and exons) and non-coding sequences, suggesting that the reading frame is important only for coding sequences and that dichotomic classes can be useful for recognizing them. I then assessed the existence of short-range dependence between binary sequences computed on the basis of the different dichotomic classes, using three different measures of dependence: the well-known chi-squared test and two indices derived from the concept of entropy, i.e., Mutual Information (MI) and Sρ, a normalized version of the Bhattacharyya–Hellinger–Matusita distance. The results show a significant short-range dependence structure only for the coding sequences, whose existence is a clue to an underlying error detection and correction mechanism. Further studies are needed to assess how the information carried by dichotomic classes could discriminate between coding and non-coding sequences and thereby help unveil the role of this mathematical structure in error detection and correction mechanisms. Still, I have shown the potential of the approach presented for understanding the management of genetic information.
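As a minimal illustration of the pipeline described above — codons mapped to binary strings by a dichotomic class, then tested for dependence with Mutual Information — here is a Python sketch. The purine/pyrimidine rule and the toy sequence are placeholder assumptions, not the actual dichotomic class definitions, which follow the mathematical model of the genetic code cited in the thesis.

```python
from collections import Counter
from math import log2

PURINES = {"A", "G"}  # simplified stand-in for a dichotomic class rule

def to_binary(seq, pos=2):
    """Map each codon to 0/1 by the purine/pyrimidine character of
    one of its bases (default: the third base, index 2)."""
    codons = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]
    return [1 if c[pos] in PURINES else 0 for c in codons]

def mutual_information(x, y):
    """Empirical mutual information (bits) between two equal-length
    binary sequences."""
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    return sum((c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

cds = "ATGGCTTCGAAGGTACCAGATTGA"   # toy coding sequence
a = to_binary(cds, pos=2)          # class read off the third codon base
b = to_binary(cds, pos=0)          # class read off the first codon base
print(mutual_information(a, b))
```

In the same spirit, the chi-squared test of independence could be applied to the contingency table underlying `pxy`.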

Abstract:

Atmospheric aerosol particles directly impact air quality and participate in controlling the climate system. Organic Aerosol (OA) generally accounts for a large fraction (10–90%) of the global submicron (PM1) particulate mass. Chemometric methods for source identification are used in many disciplines, but methods relying on the analysis of NMR datasets are rarely used in atmospheric sciences. This thesis provides an original application of NMR-based chemometric methods to atmospheric OA source apportionment. The method was tested on chemical composition databases obtained from samples collected in different environments across Europe, hence exploring the impact of a great diversity of natural and anthropogenic sources. We focused on sources of water-soluble OA (WSOA), for which NMR analysis provides substantial advantages over alternative methods. Different factor analysis techniques were applied independently to NMR datasets from nine field campaigns of the EUCAARI project and allowed the identification of recurrent source contributions to WSOA in the European background troposphere: 1) Marine SOA; 2) Aliphatic amines from ground sources (agricultural activities, etc.); 3) Biomass burning POA; 4) Biogenic SOA from terpene oxidation; 5) “Aged” SOA, including humic-like substances (HULIS); 6) Other factors possibly including contributions from Primary Biological Aerosol Particles and products of cooking activities. Biomass burning POA accounted for more than 50% of WSOC in winter months. Aged SOA associated with HULIS was predominant (> 75%) during spring and summer, suggesting that secondary sources and transboundary transport become more important in those seasons. The comprehensive aerosol measurements carried out, involving several foreign research groups, provided the opportunity to compare source apportionment results obtained by NMR analysis with those provided by the more widespread Aerodyne aerosol mass spectrometer (AMS) techniques, whose OA categorization schemes are becoming a standard for atmospheric chemists. Results emerging from this thesis partly confirm the AMS classification and partly challenge it.
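The factor-analysis step can be sketched with one representative technique. The example below applies non-negative matrix factorization to a synthetic matrix of binned NMR intensities; it only illustrates the generic decomposition X ≈ G·F into factor spectra and sample contributions, not the specific algorithms or data used in the thesis.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical data matrix: rows = filter samples, columns = binned
# 1H-NMR spectral intensities (non-negative by construction).
rng = np.random.default_rng(0)
X = rng.random((40, 300))

# Non-negative matrix factorization: X ~ G @ F, with F holding the
# factor (source) spectral profiles and G the per-sample contributions.
model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)   # contributions of each factor per sample
F = model.components_        # spectral profile of each factor
print(G.shape, F.shape)      # (40, 5), (5, 300)
```

The number of factors (here 5) is the key modeling choice; in source apportionment studies it is typically selected by comparing residuals and the interpretability of the factor profiles.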

Abstract:

The discovery of the Cosmic Microwave Background (CMB) radiation in 1965 is one of the fundamental milestones supporting the Big Bang theory, and the CMB remains one of the most important sources of information in cosmology. The excellent accuracy of the recent CMB data from the WMAP and Planck satellites confirmed the validity of the standard cosmological model and set a new challenge for the data analysis processes and their interpretation. In this thesis we deal with several aspects and useful tools of the data analysis, focusing on their optimization in order to fully exploit the Planck data and contribute to the final published results. The issues investigated are: the change of coordinates of CMB maps using the HEALPix package, the problem of the aliasing effect in the generation of low-resolution maps, and the comparison of the Angular Power Spectrum (APS) extraction performance of the optimal QML method, implemented in the code BolPol, with that of the pseudo-Cl method, implemented in Cromaster. The QML method was then applied to the Planck data at large angular scales to extract the CMB APS. The same method was also applied to analyze the TT parity and Low Variance anomalies in the Planck maps, showing a consistent deviation from the standard cosmological model; the possible origins of these results are discussed. The Cromaster code was instead applied to the 408 MHz and 1.42 GHz surveys, focusing on the APS of selected regions of the synchrotron emission. The new generation of CMB experiments will be dedicated to polarization measurements, which require high-accuracy devices for separating the polarizations. Here a new technology, photonic crystals, is exploited to develop a new polarization splitter device, and its performance is compared to that of the devices used nowadays.
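Two of the tools mentioned — the HEALPix change of coordinates and pseudo-Cl APS extraction — can be sketched with the public healpy package. The toy map and parameters below are placeholders; the thesis applies the dedicated BolPol (QML) and Cromaster (pseudo-Cl) codes to actual Planck maps.

```python
import healpy as hp
import numpy as np

nside = 64
# Toy Gaussian sky from a flat input spectrum (stand-in for a CMB map).
cl_in = np.ones(3 * nside)
m = hp.synfast(cl_in, nside)

# Change of coordinates (here Galactic -> Ecliptic) via a HEALPix rotation.
rot = hp.Rotator(coord=["G", "E"])
m_ecl = rot.rotate_map_pixel(m)

# Pseudo-Cl APS estimate, the approach a pseudo-Cl code such as
# Cromaster implements; the optimal QML estimator (BolPol) is a
# separate, computationally heavier method suited to large scales.
cl_hat = hp.anafast(m_ecl, lmax=2 * nside)
print(cl_hat[:5])
```

On a cut sky, `anafast` returns pseudo-Cl values that must still be corrected for the mask; that correction is precisely where pseudo-Cl and QML pipelines differ most.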

Abstract:

The present study has been carried out with the following objectives: i) to investigate the attributes of source parameters of local and regional earthquakes; ii) to estimate, as accurately as possible, M0, fc, Δσ and their standard errors, in order to infer their relationship with source size; iii) to quantify high-frequency earthquake ground motion and to study the source scaling. This work is based on observational data from micro-, small and moderate earthquakes for three selected seismic sequences: Parkfield (CA, USA), Maule (Chile) and Ferrara (Italy). For the Parkfield seismic sequence, a data set of 757 repeating micro-earthquakes (0 ≤ MW ≤ 2) in 42 clusters, collected with the borehole High Resolution Seismic Network (HRSN), has been analyzed and interpreted. We used the coda methodology to compute spectral ratios and obtain accurate values of fc, Δσ and M0 for three target clusters (San Francisco, Los Angeles, and Hawaii) of our data. We also performed a general regression on peak ground velocities to obtain reliable seismic spectra of all earthquakes. For the Maule seismic sequence, a data set of 172 aftershocks of the 2010 MW 8.8 earthquake (3.7 ≤ MW ≤ 6.2), recorded by more than 100 temporary broadband stations, has been analyzed and interpreted to quantify high-frequency earthquake ground motion in this subduction zone. We completely calibrated the excitation and attenuation of the ground motion in Central Chile. For the Ferrara sequence, we calculated moment tensor solutions for 20 events, from MW 5.63 (the largest main event, which occurred on May 20, 2012) down to MW 3.2, using a 1-D velocity model for the crust beneath the Pianura Padana built from all the geophysical and geological information available for the area. The PADANIA model allowed a numerical study of the characteristics of the ground motion in the thick sediments of the flood plain.
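The link between fc, M0 and Δσ that underlies objective ii) can be illustrated with the standard ω-square (Brune) source model, taking Δσ = (7/16)·M0/r³ with source radius r = 0.37β/fc. The sketch below fits a synthetic displacement spectrum; all numbers and the noise model are illustrative, and the thesis's coda-based spectral-ratio procedure is considerably more elaborate.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune(f, omega0, fc):
    """Brune (1970) omega-square displacement source spectrum."""
    return omega0 / (1.0 + (f / fc) ** 2)

# Hypothetical attenuation-corrected displacement spectrum (synthetic).
f = np.logspace(-1, 2, 200)                      # frequency, Hz
rng = np.random.default_rng(1)
obs = brune(f, 1e-3, 8.0) * np.exp(0.05 * rng.standard_normal(f.size))

(omega0, fc), _ = curve_fit(brune, f, obs, p0=[1e-3, 1.0])

beta = 3500.0                                    # shear-wave speed, m/s (assumed)
r = 0.37 * beta / fc                             # Brune source radius, m
M0 = 1e13                                        # seismic moment, N*m (assumed known)
stress_drop = (7.0 / 16.0) * M0 / r ** 3         # Pa
print(f"fc = {fc:.2f} Hz, r = {r:.0f} m, stress drop = {stress_drop/1e6:.2f} MPa")
```

Because Δσ scales with fc³, small errors in the corner frequency propagate strongly into the stress drop, which is why the standard errors targeted in objective ii) matter.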

Abstract:

Human molecular biology is a highly complex and diverse field in which research is conducted across many areas. The focus lies in particular on genomics, proteomics, transcriptomics and metabolomics, and years of research have accumulated large amounts of valuable data. This collection is growing steadily, and no stagnation is in sight. By now, however, this permanent flood of information has buried valuable knowledge in unmanageable digital mountains of data and has turned the gathering of research-specific, reliable information into a major challenge. The work presented in this dissertation has generated a comprehensive compendium of human tissues for biomedical analyses. It is named medicalgenomics.org and has solved diverse biomedical problems in the search for specific knowledge across numerous databases. The compendium is the first of its kind, and the knowledge it captures will help scientists gain a better systematic overview of specific genes or functional profiles, with respect to regulation as well as pathological and physiological conditions. In addition, various query methods enable efficient analysis of signaling events and metabolic pathways, as well as the study of genes at the expression level. The full range of these query options enables scientists to create highly specialized genetic road maps with which future experiments can be planned more precisely. As a result, valuable resources and time can be saved while the prospects of success increase. Furthermore, the comprehensive knowledge of the compendium can be used to generate and test biomedical hypotheses.
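To make the idea of such query methods concrete, here is a hypothetical sketch of filtering a tissue-expression table with pandas. The column names, genes and values are invented for illustration and do not reflect the actual medicalgenomics.org schema or interface.

```python
import pandas as pd

# Hypothetical excerpt of a tissue-expression table, as a compendium of
# this kind might expose it; purely illustrative.
df = pd.DataFrame({
    "gene":       ["TP53", "TP53", "INS", "GCK"],
    "tissue":     ["liver", "pancreas", "pancreas", "pancreas"],
    "condition":  ["physiological", "pathological", "physiological", "physiological"],
    "expression": [8.2, 12.9, 15.1, 7.4],   # e.g. log2 intensities
})

# Example query: genes expressed above a threshold in pancreas under
# physiological conditions.
hits = df[(df.tissue == "pancreas")
          & (df.condition == "physiological")
          & (df.expression > 7.0)]
print(hits[["gene", "expression"]])
```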

Abstract:

As water quality interventions are scaled up to meet the Millennium Development Goal of halving the proportion of the population without access to safe drinking water by 2015, there has been much discussion on the merits of household- and source-level interventions. This study furthers the discussion by examining specific interventions through the use of embodied human and material energy. Embodied energy quantifies the total energy required to produce and use an intervention, including all upstream energy transactions. This model uses material quantities and prices to calculate embodied energy using national economic input/output-based models from China, the United States and Mali. Embodied energy is a measure of the aggregate environmental impacts of an intervention. Human energy quantifies the caloric expenditure associated with the installation and operation of an intervention and is calculated using physical activity ratios (PARs) and basal metabolic rates (BMRs). Human energy is a measure of the aggregate social impacts of an intervention. A total of four household treatment interventions – biosand filtration, chlorination, ceramic filtration and boiling – and four source-level interventions – an improved well, a rope pump, a hand pump and a solar pump – are evaluated in the context of Mali, West Africa. Source-level interventions slightly out-perform household-level interventions in terms of total embodied energy. Human energy, typically assumed to be a negligible portion of total embodied energy, is shown to be significant in all eight interventions, contributing over half of the total embodied energy in four of them. Traditional gender roles in Mali dictate the types of work performed by men and women: when human energy is disaggregated by gender, women are seen to perform over 99% of the work associated with seven of the eight interventions. This has profound implications for gender equality in the context of water quality interventions, and may justify investment in interventions that reduce human energy burdens.
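The human-energy term is directly computable as PAR × BMR × task time. The sketch below shows this arithmetic with invented numbers; the PAR values, BMRs and durations used in the study itself are not reproduced here.

```python
# Minimal sketch of the human-energy term: the caloric cost of a task
# is the BMR scaled by the task's physical activity ratio (PAR) and
# its duration. All numbers below are illustrative, not the study's.
BMR_MJ_PER_DAY = 5.8            # assumed adult BMR, MJ/day
MJ_PER_HOUR = BMR_MJ_PER_DAY / 24.0

def human_energy_mj(par, hours_per_day, days):
    """Human energy for a recurring task: PAR x BMR x time."""
    return par * MJ_PER_HOUR * hours_per_day * days

# e.g., hauling water from an improved well (hypothetical PAR = 4.0),
# 1.5 h/day over a 10-year intervention lifetime:
total = human_energy_mj(par=4.0, hours_per_day=1.5, days=365 * 10)
print(f"{total:.0f} MJ of human energy over the design life")
```

Summing such task-level terms over installation and operation, and attributing each task to the person who performs it, yields both the human-energy total and its disaggregation by gender.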

Abstract:

In 2010 more than 600 radiocarbon samples were measured with the gas ion source at the MIni CArbon DAting System (MICADAS) at ETH Zurich, and the number of measurements is rising quickly. While most samples contain less than 50 µg C at present, the gas ion source is attractive for larger samples as well, because the time-consuming graphitization step is omitted. Additionally, modern samples can now be measured down to 5 per-mill counting statistics in less than 30 min with the recently improved gas ion source. In the versatile gas handling system, a stepping-motor-driven syringe presses a mixture of helium and sample CO2 into the gas ion source, allowing continuous and stable measurements of different kinds of samples. CO2 can be provided to the versatile gas interface in four different ways. As the primary method, CO2 is delivered in glass or quartz ampoules; in this case, the CO2 is released in an automated ampoule cracker with 8 positions for individual samples. Secondly, OX-1 and blank gas in helium can be provided to the syringe by connecting gas bottles directly to the gas interface at the stage of the cracker. Thirdly, solid samples can be combusted in an elemental analyzer or in a thermo-optical OC/EC aerosol analyzer, with the produced CO2 transferred to the syringe via a zeolite trap for gas concentration. As a fourth method, CO2 is released from carbonates with phosphoric acid in septum-sealed vials and loaded onto the same trap used for the elemental analyzer. All four methods allow complete automation of the measurement, even though minor user input is presently still required. Details on the setup, versatility and applications of the gas handling system are given.
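The quoted "5 per-mill counting statistics in less than 30 min" follows from Poisson statistics, where relative precision is 1/√N. A short sketch of the arithmetic, with an assumed count rate chosen only to be consistent with the stated measurement time:

```python
from math import ceil

# Poisson counting statistics: relative precision = 1/sqrt(N).
# 5 per-mill precision therefore requires N = (1/0.005)^2 = 40,000
# detected 14C ions.
target_rel_precision = 5e-3
n_counts = ceil((1.0 / target_rel_precision) ** 2)

# Assumed (illustrative) 14C count rate for a modern sample in the
# gas ion source; picked so the result matches the stated < 30 min.
count_rate = 25.0                        # counts/s, assumption
minutes = n_counts / count_rate / 60.0
print(n_counts, f"counts -> about {minutes:.0f} min")
```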