Abstract:
Doctoral programmes: Architectural restoration and rehabilitation.-- Architectural construction, constructive tradition versus current technological innovation. The publication date is the date of the thesis defence.
Abstract:
This PhD thesis is the result of my research activity over the last three years. My main research interest was centered on the evolution of the mitochondrial genome (mtDNA) and on its usefulness as a phylogeographic and phylogenetic marker at different taxonomic levels in different taxa of Metazoa. From a methodological standpoint, my main effort was dedicated to the sequencing of complete mitochondrial genomes, with an approach to whole-genome sequencing based on Long-PCR amplification and shotgun sequencing. This research project is part of a larger project to sequence mtDNAs across many metazoan taxa, and I mostly dedicated myself to sequencing and analyzing mtDNAs in selected taxa of bivalves and hexapods (Insecta). Sequences of bivalve mtDNAs are particularly scarce, and my study contributed to extending the sampling. Moreover, I used the bivalve Musculista senhousia as a model taxon to investigate the molecular mechanisms and the evolutionary significance of its aberrant mode of mitochondrial inheritance (Doubly Uniparental Inheritance, see below). In insects, I focused my attention on the genus Bacillus (Insecta, Phasmida). A detailed phylogenetic analysis was performed to assess phylogenetic relationships within the genus and to investigate the placement of Phasmida in the phylogenetic tree of Insecta. The main goal of this part of my study was to add to the taxonomic coverage of sequenced mtDNAs in basal insects, which had only been partially analyzed.
Abstract:
The quality of an astronomical site is the first factor to be considered in obtaining the best performance from a telescope. In particular, the efficiency of large telescopes in the UV, IR, radio, etc., is critically dependent on atmospheric transparency. It is well known that the random optical effects induced on light propagation by the turbulent atmosphere also limit telescope performance. It is now clear how important it is to correlate the main atmospheric physical parameters with the optical quality achievable by large-aperture telescopes. Sky-quality evaluation has improved with the introduction of new techniques and instrumentation, and with the understanding of the link between meteorological (or synoptic) parameters and observational conditions, thanks to the application of the theories of electromagnetic wave propagation in turbulent media: what we now call astroclimatology. At present, site campaigns have evolved and are performed using the classical scheme of optical seeing properties, meteorological parameters, sky transparency, sky darkness and cloudiness. New concepts have been added, relating to geophysical properties such as seismicity, microseismicity, local climate variability, atmospheric conditions related to ground-level optical turbulence and ground wind regimes, aerosol presence, and the use of satellite data. The purpose of this project is to provide reliable methods to analyze the atmospheric properties that affect ground-based optical astronomical observations and to correlate them with the main atmospheric parameters generating turbulence and affecting photometric accuracy.
The first part of the research concerns the analysis and interpretation of long- and short-time-scale meteorological data at two of the most important astronomical sites, located in very different environments: the Paranal Observatory in the Atacama Desert (Chile) and the Observatorio del Roque de los Muchachos (ORM) in La Palma (Canary Islands, Spain). The optical properties of airborne dust at ORM have been investigated by collecting outdoor data with a ground-based dust monitor. Because of its dryness, Paranal is a suitable observatory for near-IR observations, so the extinction properties in the spectral range 1.00-2.30 µm have been investigated using an empirical method. Furthermore, this PhD research made use of several turbulence profilers during the site-selection campaign for the European Extremely Large Telescope (E-ELT). During the campaigns, the properties of the turbulence at different heights at Paranal and at sites in northern Chile and Argentina were studied. This gave the possibility to characterize the surface-layer turbulence at Paranal and its connection with local meteorological conditions.
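The link between integrated optical turbulence and the image quality achievable from the ground, mentioned above, can be sketched with the standard Fried-parameter relations; the Cn² value below is purely illustrative and is not a measurement from these site campaigns.

```python
import math

def fried_parameter(cn2_integral, wavelength=500e-9):
    """Fried parameter r0 [m] at zenith for a given integrated
    turbulence profile cn2_integral = int Cn^2(h) dh [m^(1/3)]."""
    k = 2.0 * math.pi / wavelength
    return (0.423 * k**2 * cn2_integral) ** (-3.0 / 5.0)

def seeing_fwhm_arcsec(r0, wavelength=500e-9):
    """Seeing-disc FWHM [arcsec] from the Fried parameter."""
    return 0.98 * wavelength / r0 * 206265.0

# Illustrative (assumed) integrated turbulence strength:
cn2 = 5e-13  # m^(1/3)
r0 = fried_parameter(cn2)
print(f"r0 = {r0*100:.1f} cm, seeing = {seeing_fwhm_arcsec(r0):.2f} arcsec")
```

Turbulence profilers of the kind used in the E-ELT campaigns measure Cn²(h) layer by layer; integrating the profile and feeding it into relations like these is how layer statistics translate into expected image quality.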
Abstract:
The treatment of cerebral palsy (CP) is considered the "core problem" for the whole field of pediatric rehabilitation. The primary role of this pathology can be ascribed to two main aspects. First, CP is the most frequent form of disability in childhood (one new case per 500 live births (1)); second, the functional recovery of the "spastic" child is, historically, the clinical field in which the majority of therapeutic methods and techniques (physiotherapeutic, orthotic, pharmacologic, orthopedic-surgical, neurosurgical) were first applied and tested. The currently accepted definition of CP – a group of disorders of the development of movement and posture causing activity limitation (2) – is the result of a recent update by the World Health Organization, in the language of the International Classification of Functioning, Disability and Health, of the original proposal of Ingram – a persistent but not unchangeable disorder of posture and movement – dated 1955 (3). This definition considers CP a permanent ailment, i.e. a "fixed" condition, which can nevertheless be modified both functionally and structurally by the child's spontaneous evolution and by the treatments carried out during childhood. The lesion that causes the palsy occurs in a structurally immature brain in the pre-, peri- or post-natal period (but only during the first months of life). The most frequent causes of CP are prematurity, insufficient cerebral perfusion, arterial haemorrhage, venous infarction, hypoxia of various origins (for example, from the aspiration of amniotic fluid), malnutrition, infection, and maternal or fetal poisoning. Traumas and malformations must also be included among these causes. The lesion, whether focal or diffuse over the nervous system, impairs the functioning of the Central Nervous System (CNS) as a whole.
As a consequence, it affects the construction of the adaptive functions (4), first of all postural control, locomotion and manipulation. The palsy itself does not vary over time; however, it assumes an unavoidable "evolutionary" character when, during growth, the child is required to meet new and different needs through the construction of new and different functions. It is essential to consider that, clinically, CP is not only a direct expression of the structural impairment, that is, of etiology, pathogenesis and lesion timing, but is mainly the manifestation of the path followed by the CNS to "re"-construct the adaptive functions "despite" the presence of the damage. "Palsy" is "the form of the function that is implemented by an individual whose CNS has been damaged in order to satisfy the demands coming from the environment" (4). Therefore it is only possible to establish general relations between lesion site, nature and size on the one hand, and palsy and recovery processes on the other. It is quite common to observe that children with very similar neuroimaging can have very different clinical manifestations of CP and, conversely, that children with very similar motor behaviors can have completely different lesion histories. A very clear example is represented by the hemiplegic forms, which show bilateral hemispheric lesions in a high percentage of cases. The first section of this thesis is aimed at guiding the interpretation of CP. First, the issue of the detection of the palsy is treated from a historical viewpoint. Then, an extended analysis of the current, internationally accepted definition of CP is provided. The definition is outlined first in terms of a space dimension and then of a time dimension, highlighting where it is unacceptably lacking.
The last part of the first section further stresses the importance of shifting from the traditional concept of CP as a palsy of development (defect analysis) towards the notion of the development of the palsy, i.e., as the product of the relationship that the individual nevertheless tries to build dynamically with the surrounding environment (resource semeiotics), starting and growing from a different availability of resources, needs, dreams, rights and duties (4). In the scientific and clinical community, no common classification system of CP has so far been universally accepted. Moreover, no standard operative method or technique has been acknowledged to effectively assess the different disabilities and impairments exhibited by children with CP. CP is still "an artificial concept, comprising several causes and clinical syndromes that have been grouped together for a convenience of management" (5). The lack of standard, common protocols able to effectively diagnose the palsy, and consequently to establish specific treatments and prognoses, is mainly due to the difficulty of elevating this field to a level based on scientific evidence. A solution aimed at overcoming the currently incomplete treatment of children with CP is the systematic clinical adoption of objective tools able to measure motor defects and movement impairments. The widespread application of reliable instruments and techniques able to objectively evaluate both the form of the palsy (diagnosis) and the efficacy of the treatments provided (prognosis) constitutes a valuable means of validating care protocols, establishing the efficacy of classification systems and assessing the validity of definitions. Since the 1980s, instruments specifically oriented to the analysis of human movement have been advantageously designed and applied in the context of CP with the aim of measuring motor deficits and, especially, gait deviations.
Gait analysis (GA) has been used increasingly over the years to assess, analyze and classify gait, and to support clinical decision making, allowing for a complete investigation of gait with increased temporal and spatial resolution. GA has provided a basis for improving the outcome of surgical and non-surgical treatments and for introducing a new modus operandi in the identification of defects and functional adaptations to musculoskeletal disorders. Historically, the first laboratories set up for gait analysis developed their own protocols (sets of procedures for data collection and data reduction) independently, according to the performance of the technologies available at the time. In particular, stereophotogrammetric systems, mainly based on optoelectronic technology, soon became the gold standard for motion analysis and have been applied successfully, especially for scientific purposes. Nowadays, optoelectronic systems have significantly improved their performance in terms of spatial and temporal resolution; however, many laboratories continue to use protocols designed around the technology available in the 1970s, now out of date. Furthermore, these protocols are not mutually consistent, either in their biomechanical models or in their data-collection procedures. In spite of these differences, GA data are shared, exchanged and interpreted irrespective of the adopted protocol, without full awareness of the extent to which these protocols are compatible and comparable with each other. Following the extraordinary advances in computer science and electronics, new systems for GA, no longer based on optoelectronic technology, are now becoming available. These are the Inertial and Magnetic Measurement Systems (IMMSs), based on miniature MEMS (microelectromechanical systems) inertial sensor technology.
These systems are cost-effective, wearable and fully portable motion analysis systems; these features give IMMSs the potential to be used outside specialized laboratories and to collect consecutive series of tens of gait cycles. The recognition and selection of the most representative gait cycle is then easier and more reliable, especially in children with CP, given their considerable gait-cycle variability. The second section of this thesis is focused on GA. It is firstly aimed at examining the differences among the five most representative GA protocols, in order to assess the state of the art with respect to inter-protocol variability. The design of a new protocol is then proposed and presented, with the aim of performing gait analysis on children with CP by means of an IMMS. The protocol, named 'Outwalk', contains original and innovative solutions oriented at obtaining joint kinematics with calibration procedures that are extremely comfortable for the patients. The results of a first in-vivo validation of Outwalk on healthy subjects are then provided. In particular, this study was carried out by comparing Outwalk, used in combination with an IMMS, against a reference protocol and an optoelectronic system. In order to set up a more accurate and precise comparison of the systems and protocols, ad hoc methods were designed, and an original formulation of the statistical parameter known as the coefficient of multiple correlation was developed and effectively applied. On the basis of the experimental design proposed for the validation on healthy subjects, a first assessment of Outwalk, together with an IMMS, was also carried out on children with CP. The third section of this thesis is dedicated to the treatment of walking in children with CP. Commonly prescribed treatments for addressing gait abnormalities in children with CP include physical therapy, surgery (orthopedic and rhizotomy) and orthoses.
The orthotic approach is conservative, being reversible, and is widespread in many therapeutic regimes. Orthoses are used to improve the gait of children with CP by preventing deformities, controlling joint position and offering an effective lever for the ankle joint. Orthoses are also prescribed with the aims of increasing walking speed, improving stability, preventing stumbling and decreasing muscular fatigue. Ankle-foot orthoses (AFOs) with a rigid ankle are primarily designed to prevent equinus and other foot deformities, with a positive effect also on more proximal joints. However, AFOs prevent the natural excursion of the tibio-tarsal joint during the second rocker, thus hampering the natural forward progression of the whole body under the effect of inertia (6). A new modular (submalleolar) astragalus-calcaneal orthosis, named OMAC, has recently been proposed with the intention of replacing the prescription of AFOs in those children with CP who exhibit a flat, valgus-pronated foot. The aim of this section is thus to present the mechanical and technical features of the OMAC by means of an accurate description of the device; in particular, the full document of the deposited Italian patent is provided. A preliminary validation of OMAC with respect to AFO is also reported, as it resulted from an experimental campaign on diplegic children with CP over a three-month period, aimed at quantitatively assessing the benefit provided by the two orthoses on walking and at qualitatively evaluating the changes in quality of life and motor abilities. As already stated, CP is universally considered a persistent but not unchangeable disorder of posture and movement.
Contrary to this definition, some clinicians (4) have recently pointed out that movement disorders may be primarily caused by the presence of perceptive disorders, where perception is not merely the acquisition of sensory information but an active process aimed at guiding the execution of movements through the integration of sensory information properly representing the state of one's body and of the environment. Children with perceptive impairments show an overall fear of moving and the onset of strongly unnatural walking schemes directly caused by the presence of perceptive system disorders. The fourth section of the thesis thus deals with accurately defining the perceptive impairment exhibited by diplegic children with CP. A detailed description of the clinical signs revealing the presence of the perceptive impairment, and a classification scheme of the clinical aspects of perceptual disorders, is provided. Finally, a functional reaching test is proposed as an instrumental test able to disclose the perceptive impairment. References 1. Prevalence and characteristics of children with cerebral palsy in Europe. Dev Med Child Neurol. 2002 Sep;44(9):633-640. 2. Bax M, Goldstein M, Rosenbaum P, Leviton A, Paneth N, Dan B, et al. Proposed definition and classification of cerebral palsy, April 2005. Dev Med Child Neurol. 2005 Aug;47(8):571-576. 3. Ingram TT. A study of cerebral palsy in the childhood population of Edinburgh. Arch Dis Child. 1955 Apr;30(150):85-98. 4. Ferrari A, Cioni G. The spastic forms of cerebral palsy: a guide to the assessment of adaptive functions. Milan: Springer; 2009. 5. Olney SJ, Wright MJ. Cerebral palsy. In: Campbell S, et al., editors. Physical Therapy for Children. 2nd ed. Philadelphia: Saunders; 2000. p. 533-570. 6. Desloovere K, Molenaers G, Van Gestel L, Huenaerts C, Van Campenhout A, Callewaert B, et al. How can push-off be preserved during use of an ankle foot orthosis in children with hemiplegia? A prospective controlled study. Gait Posture. 2006 Oct;24(2):142-151.
Abstract:
The field of complex systems is a growing body of knowledge that can be applied to countless different topics, from physics to computer science, biology, information theory and sociology. The main focus of this work is the use of microscopic models to study the behavior of urban mobility, whose characteristics make it a paradigmatic example of complexity. In particular, simulations are used to investigate phase changes in a finite-size open Manhattan-like urban road network under different traffic conditions, in search of parameters that identify phase transitions and equilibrium and non-equilibrium conditions. It is shown how the flow-density macroscopic fundamental diagram of the simulation exhibits, like real traffic, hysteresis in the transition from the congested phase to the free-flow phase, and how the different regimes can be identified by studying the statistics of road occupancy.
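A minimal cellular-automaton sketch can illustrate the flow-density behavior discussed above. The sketch below uses a single-lane Nagel-Schreckenberg ring road rather than the open Manhattan-like network of the thesis, and all parameter values are illustrative:

```python
import random

def nagel_schreckenberg(density, length=200, vmax=5, p_slow=0.3,
                        steps=200, seed=1):
    """Single-lane Nagel-Schreckenberg cellular automaton on a ring.

    Returns the mean flow (cells travelled per cell per step), i.e.
    one point of the flow-density fundamental diagram.
    """
    rng = random.Random(seed)
    n_cars = int(density * length)
    pos = sorted(rng.sample(range(length), n_cars))
    vel = [0] * n_cars
    moved = 0
    for _ in range(steps):
        # 1) accelerate, 2) brake to the gap ahead, 3) random slowdown
        for i in range(n_cars):
            gap = (pos[(i + 1) % n_cars] - pos[i] - 1) % length
            vel[i] = min(vel[i] + 1, vmax, gap)
            if vel[i] > 0 and rng.random() < p_slow:
                vel[i] -= 1
        # 4) parallel position update
        for i in range(n_cars):
            moved += vel[i]
            pos[i] = (pos[i] + vel[i]) % length
    return moved / (steps * length)

# Flow is high in the free-flow phase and drops in the congested phase
print(nagel_schreckenberg(0.1), nagel_schreckenberg(0.6))
```

Sweeping `density` from 0 to 1 traces out a fundamental diagram qualitatively similar to the one studied in the thesis; hysteresis effects, however, require the open-boundary network setup described in the text.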
Abstract:
Mitochondria are inherited maternally in most metazoans. However, in some bivalves, two mitochondrial lineages are present: one transmitted through eggs (F), the other through sperm (M). This is called Doubly Uniparental Inheritance (DUI). During male embryo development, spermatozoon mitochondria aggregate and end up in the primordial germ cells, while they are dispersed in female embryos. The molecular mechanisms behind these segregation patterns are still unknown. In the DUI species Ruditapes philippinarum, I examined sperm mitochondria distribution by MitoTracker, microtubule staining and TEM, and I localized germ-line determinants by immunocytochemical analysis. I also analyzed the gonad transcriptome, searching for genes involved in reproduction and sex determination. Moreover, I analyzed an M-type-specific open reading frame that could be responsible for the maintenance/degradation of M mitochondria during embryo development. These transcripts were also localized in tissues using in situ hybridization. As in Mytilus, two distribution patterns of M mitochondria were detected in R. philippinarum, supporting the idea that they are related to DUI. Moreover, the first-division midbody contributes to positioning the aggregated M mitochondria on the animal-vegetal axis of the male embryo: in organisms with spiral segmentation this zone is not involved in further cleavages, so the aggregation is maintained. Furthermore, sperm mitochondria reach the same embryonic area to which germ plasm is transferred, suggesting their contribution to male germ-line formation. The finding of reproduction and ubiquitination transcripts led to the formulation of a model in which ubiquitination genes stored in oocytes during gametogenesis would activate sex-gene expression in the early embryonic developmental stages (preformation). Only gametogenetic cells were labeled by in situ hybridization, proving specific transcription in developing gametes.
Besides having a role in sex determination, some ubiquitination factors could also be involved in mitochondrial inheritance, and their differential expression could be responsible for the different fates of sperm mitochondria in the two sexes.
Abstract:
Liquids under the influence of external fields exhibit a wide range of intriguing phenomena that can be markedly different from the behaviour of a quiescent system. This work considers two different systems – a glass-forming Yukawa system and a colloid-polymer mixture – by Molecular Dynamics (MD) computer simulations coupled to dissipative particle dynamics. The former consists of a 50-50 binary mixture of differently-sized, like-charged colloids interacting via a screened Coulomb (Yukawa) potential. Near the glass transition, the influence of an external shear field is studied; in particular, the transition from elastic response to plastic flow is of interest. At first, this model is characterised in equilibrium. Upon decreasing temperature it exhibits the typical dynamics of glass-forming liquids, i.e. the structural relaxation time τα grows strongly in a rather small temperature range. This is discussed with respect to the mode-coupling theory of the glass transition (MCT). For the simulation of bulk systems under shear, Lees-Edwards boundary conditions are applied. At constant shear rates γ̇ ≫ 1/τα the relevant time scale is given by 1/γ̇ and the system shows shear-thinning behaviour. In order to understand the pronounced differences between a quiescent system and a system under shear, the response to a suddenly commencing or terminating shear flow is studied. After the switch-on of the shear field the shear stress shows an overshoot, marking the transition from elastic to plastic deformation, which is connected to a super-diffusive increase of the mean squared displacement. Since the average static structure only depends on the value of the shear stress, it does not discriminate between those two regimes. The distribution of local stresses, in contrast, becomes broader as soon as the system starts flowing. After a switch-off of the shear field, these additional fluctuations are responsible for the fast decay of stresses, which occurs on a time scale 1/γ̇.
The stress decay after a switch-off in the elastic regime, on the other hand, happens on the much larger time scale of structural relaxation τα. While stresses decrease to zero after a switch-off for temperatures above the glass transition, they decay to a finite value for lower temperatures. The obtained results are important for advancing new theoretical approaches in the framework of mode-coupling theory. Furthermore, they suggest new experimental investigations on colloidal systems. The colloid-polymer mixture is studied in the context of the behaviour near the critical point of phase separation. For the MD simulations a new effective model with soft interaction potentials is introduced and its phase diagram is presented. Here, mainly the equilibrium properties of this model are characterised. While the self-diffusion constants of colloids and polymers do not change strongly when the critical point is approached, critical slowing down of interdiffusion is observed. The order parameter fluctuations can be determined through the long-wavelength limit of static structure factors. For this strongly asymmetric mixture it is shown how the relevant structure factor can be extracted by a diagonalisation of a matrix that contains the partial static structure factors. By presenting first results of this model under shear it is demonstrated that it is suitable for non-equilibrium simulations as well.
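The screened Coulomb interaction and the Lees-Edwards boundary convention used for the sheared bulk simulations can be sketched as follows; this is a minimal illustration with placeholder parameter names and values, not the simulation code of this work:

```python
import math

def yukawa(r, eps=1.0, kappa=1.0, d=1.0):
    """Screened Coulomb (Yukawa) pair potential,
    U(r) = eps * d * exp(-kappa * (r - d)) / r.
    Parameter names and values are illustrative placeholders.
    """
    return eps * d * math.exp(-kappa * (r - d)) / r

def lees_edwards(dx, dy, box, strain):
    """Minimum-image separation under Lees-Edwards boundary
    conditions: an image crossing the sheared (y) boundary is
    shifted along x by the accumulated strain times the box length.
    """
    while dy > box / 2:
        dy -= box
        dx -= strain * box
    while dy < -box / 2:
        dy += box
        dx += strain * box
    while dx > box / 2:
        dx -= box
    while dx < -box / 2:
        dx += box
    return dx, dy

# At contact (r = d) the potential equals eps; a pair separated across
# the sheared boundary acquires an x-offset proportional to the strain.
print(yukawa(1.0), lees_edwards(0.0, 6.0, 10.0, 0.1))
```

The x-offset of boundary-crossing images is what imposes a homogeneous shear flow on the bulk system without walls, which is why these boundary conditions are the standard choice for the constant-shear-rate simulations described above.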
Abstract:
Five different methods were critically examined to characterize the pore structure of silica monoliths. The mesopore characterization was performed using: a) the classical BJH method applied to nitrogen sorption data, which yielded overestimated mesopore size distributions and was improved by using the NLDFT method; b) the ISEC method implementing the PPM and PNM models, developed especially for monolithic silicas, which, contrary to particulate supports, show two inflection points in the ISEC curve, enabling the calculation of pore connectivity, a measure of the mass-transfer kinetics in the mesopore network; c) mercury porosimetry using newly recommended mercury contact-angle values.
The results of the characterization of the mesopores of monolithic silica columns by the three methods indicated that all methods were useful with respect to the volume-based pore size distribution, but only the ISEC method with the implemented PPM and PNM models gave the number-averaged pore size and distribution and the pore connectivity values.
The characterization of the flow-through pores was performed by two different methods: a) mercury porosimetry, which was used not only to estimate the average flow-through pore size but also to assess entrapment. It was found that mass transfer from the flow-through pores to the mesopores was not hindered in the case of small flow-through pores with a narrow distribution; b) liquid permeability, where the average flow-through pore size was obtained via existing equations and improved by additional methods developed according to Hagen-Poiseuille rules. The result was that it is not the flow-through pore size that governs the column back pressure; rather, the surface-to-volume ratio of the silica skeleton is most decisive. Thus the monolith with the lowest ratio will be the most permeable.
The flow-through pore characterization results obtained by mercury porosimetry and liquid permeability were compared with those from imaging and image analysis. All of these methods enable a reliable characterization of the flow-through pore diameters of monolithic silica columns, but special care should be taken with the chosen theoretical model.
The measured pore characterization parameters were then linked with the mass-transfer properties of monolithic silica columns. As indicated by the ISEC results, no restrictions in mass-transfer resistance were noticed in the mesopores, owing to their high connectivity. The mercury porosimetry results also gave evidence that no restrictions occur for mass transfer from the flow-through pores to the mesopores in small-scaled silica monoliths with a narrow distribution.
The optimum ranges of the pore-structural parameters for given target parameters in HPLC separations were then predicted. It was found that a low mass-transfer resistance in the mesopore volume is achieved when the nominal diameter of the number-averaged mesopore size distribution is approximately an order of magnitude larger than the molecular radius of the analyte. The effective diffusion coefficient of an analyte molecule in the mesopore volume depends strongly on the nominal pore diameter of the number-averaged pore size distribution. The mesopore size therefore has to be adapted to the molecular size of the analyte, in particular for peptides and proteins.
The study of the flow-through pores of silica monoliths demonstrated that the surface-to-volume ratio of the skeleton and the external porosity are decisive for column efficiency; the latter is independent of the flow-through pore diameter. The flow-through pore characteristics were assessed by direct and indirect approaches, and theoretical column efficiency curves were derived.
The study showed that, besides the surface-to-volume ratio, the total porosity and its distribution between the flow-through pores and mesopores have a substantial effect on the column plate number, especially as the extent of adsorption increases. Column efficiency increases with decreasing flow-through pore diameter, decreases with increasing external porosity, and increases with total porosity, although this tendency has a limit due to the heterogeneity of the studied monolithic samples. We found that the maximum efficiency of the studied monolithic research columns could be reached at a skeleton diameter of ~0.5 µm. Furthermore, when the intention is to maximize column efficiency, more homogeneous monoliths should be prepared.
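The Hagen-Poiseuille route from measured column permeability to an equivalent flow-through pore diameter can be sketched as follows; this is the simplified capillary-bundle textbook form with assumed numbers, not the improved method developed in the thesis:

```python
import math

def darcy_permeability(flow_velocity, viscosity, length, delta_p):
    """Column permeability B0 [m^2] from Darcy's law:
    B0 = u * eta * L / dP."""
    return flow_velocity * viscosity * length / delta_p

def pore_diameter_hp(permeability, external_porosity):
    """Flow-through pore diameter from a Hagen-Poiseuille capillary
    bundle picture: B0 = eps_e * d^2 / 32  =>  d = sqrt(32 B0 / eps_e).
    """
    return math.sqrt(32.0 * permeability / external_porosity)

# Illustrative (assumed) operating conditions for a monolithic column:
# u = 1 mm/s, eta = 1 mPa*s (water-like), L = 10 cm, dP = 8 bar
B0 = darcy_permeability(1e-3, 1e-3, 0.1, 8e5)
print(pore_diameter_hp(B0, 0.7))  # equivalent pore diameter in m
```

Comparing such indirect estimates with the diameters obtained from mercury porosimetry and image analysis is exactly where the choice of theoretical model, cautioned about above, matters.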
Abstract:
Deep convection above wildfires is one of the most intense forms of atmospheric convection. The extreme cloud dynamics, with high vertical wind speeds (up to 20 m/s) already at cloud base, high water-vapour supersaturations (up to 1%) and the fire-induced high number concentrations of aerosol particles (up to 100,000 cm^-3), provide a special setting for aerosol-cloud interactions. A decisive step in the microphysical development of a convective cloud is the activation of aerosol particles into cloud droplets. This activation process determines the initial number and size of the cloud droplets and can therefore influence the development of a convective cloud and its precipitation formation. The most important factors determining the initial number and size of the cloud droplets are the size and hygroscopicity of the aerosol particles available at cloud base and the vertical wind speed. To investigate the influence of these factors under pyro-convective conditions, numerical simulations were carried out with a cloud parcel model with a detailed spectral description of cloud microphysics. The results can be divided into three different regimes depending on the ratio between vertical wind speed and aerosol number concentration (w/NCN): (1) an aerosol-limited regime (high w/NCN), (2) an updraft-limited regime (low w/NCN), and (3) a transitional regime (intermediate w/NCN). The results show that the variability of the initial droplet number concentration in (pyro-)convective clouds is mainly determined by the variability of the vertical wind speed and of the aerosol concentration.
To investigate the microphysical processes within the smoky updraft region of a pyro-convective cloud with detailed spectral microphysics, the parcel model was initialised along a trajectory within the updraft region. This trajectory was calculated with three-dimensional simulations of a pyro-convective event performed with the model ATHAM. It is found that the droplet number concentration increases with increasing aerosol concentration, while the droplet size decreases with increasing aerosol concentration. The reduced broadening of the droplet spectrum agrees with measurement results and supports the concept of precipitation suppression in heavily polluted clouds. Using the model ATHAM, the dynamical and microphysical processes of pyro-convective clouds were then investigated with two- and three-dimensional simulations, building on a realistic parameterisation of aerosol activation derived from the results of the activation study. A modern two-moment microphysical scheme was implemented in ATHAM to investigate the influence of the aerosol particle number concentration on the development of idealised pyro-convective clouds in US standard atmospheres for the mid-latitudes and the tropics. The results show that the aerosol number concentration influences the formation of rain. For low aerosol concentrations, rapid rain formation occurs mainly through warm microphysical processes. For higher aerosol concentrations, the ice phase becomes more important for rain formation, leading to a delayed onset of precipitation in more polluted atmospheres. It is also shown that the composition of the ice-nucleating particles (IN) has a strong influence on the dynamical and microphysical structure of such clouds.
Bei sehr effizienten IN bildet sich Regen früher. Die Untersuchung zum Einfluss des atmosphärischen Hintergrundprofils zeigt eine geringe Auswirkung der Meteorologie auf die Sensitivität der pyro-konvektiven Wolken auf diernAerosolkonzentration. Zum Abschluss wird gezeigt, dass die durch das Feuer emittierte Hitze einen deutlichen Einfluss auf die Entwicklung und die Wolkenobergrenze von pyro-konvektive Wolken hat. Zusammenfassend kann gesagt werden, dass in dieser Dissertation die Mikrophysik von pyrokonvektiven Wolken mit Hilfe von idealisierten Simulation eines Wolkenpaketmodell mit detaillierte spektraler Mikrophysik und eines 3D Modells mit einem zweimomenten Schema im Detail untersucht wurde. Es wird gezeigt, dass die extremen Bedingungen im Bezug auf die vertikale Windgeschwindigkeiten und Aerosolkonzentrationen einen deutlichen Einfluss auf die Entwicklung von pyro-konvektiven Wolken haben.
On the inheritance and mechanism of baculovirus resistance of the codling moth, Cydia pomonella (L.)
Resumo:
The Cydia pomonella granulovirus (CpGV, Baculoviridae) has been used since the late 1980s as a highly selective and efficient biological agent for control of the codling moth in orchards. Since 2004, several codling moth populations have been observed in Europe that are resistant to the predominantly applied isolate CpGV-M. The present work investigates the inheritance and the mechanism of CpGV resistance. Single-pair crosses between a susceptible laboratory strain (CpS) and a homogeneously resistant strain (CpRR1) showed that resistance is inherited through a single dominant gene located on the Z chromosome. Mass crosses between CpS and a heterogeneously resistant field population (CpR) initially suggested an incompletely dominant autosomal mode of inheritance. Single-pair crosses between CpS and CpR, however, demonstrated that resistance in CpR is likewise inherited in a monogenic, dominant, sex-linked manner on the Z chromosome. This work also discusses the advantages and disadvantages of single-pair crosses versus mass crosses for studying modes of inheritance. The efficacy of a new CpGV isolate from Iran (CpGV-I12) against CpRR1 larvae was tested in bioassays. The results show that CpGV-I12 can break the resistance in all larval stages of CpRR1 and is almost as effective as CpGV-M against CpS larvae. CpGV-I12 is therefore suitable for controlling the codling moth in orchards where resistance has appeared. To investigate the mechanism underlying CpGV resistance, four different experiments were carried out: 1) The peritrophic membrane was degraded by adding an optical brightener to the virus-containing diet. Removing this mechanical protective barrier lining the midgut, however, did not reduce resistance in CpR larvae.
Hence the peritrophic membrane is not involved in the resistance mechanism. 2) Injection of budded virus into the hemocoel did not break the resistance. Consequently, resistance is not restricted to the midgut but is also effective during secondary infection. 3) The replication of CpGV in different tissues (midgut, hemolymph, and fat body) of CpS and CpRR1 was followed by quantitative PCR. In CpS larvae, an increase of CpGV genomes over time was detected in all three tissue types after both oral and intra-hemocoelic infection. In contrast, no virus replication was detected in the tissues of CpRR1 after either oral or intra-hemocoelic infection. This indicates that CpGV resistance is present in all cell types. 4) To investigate whether a humoral factor in the hemolymph is causally involved in resistance, hemolymph from CpRR1 larvae was injected into CpS larvae, which were then orally infected with CpGV. However, no immune reaction was observed, and no factor that could induce resistance was identified in the hemolymph. Based on these results it can be concluded that viral replication is prevented in all cell types of resistant codling moth larvae, pointing to a virus-cell incompatibility. Since no DNA replication was observed in CpRR1, CpGV resistance is probably caused by an early block of virus replication. The early expressed gene pe38 encodes a protein that is probably responsible for the resistance-breaking capacity of CpGV-I12. Interactions between the protein PE38 and proteins of CpRR1 were investigated using the yeast two-hybrid (Y2H) system.
The detected interactions have not yet been confirmed by other methods, but two candidate genes on the Z chromosome and one on chromosome 15 were found that may be involved in CpGV resistance.
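The hallmark of the Z-linked dominant inheritance described above is the asymmetry between reciprocal crosses: in Lepidoptera, males are ZZ and females are ZW, so a resistant mother passes her single resistant Z only to her sons. This can be sketched by enumerating offspring genotypes; the allele labels below are illustrative, not notation from the thesis.

```python
from itertools import product

def offspring(father, mother):
    """Enumerate offspring genotypes at a Z-linked locus in Lepidoptera
    (males ZZ, females ZW). Alleles: 'R' = dominant resistance,
    's' = susceptible; 'W' marks the female sex chromosome.
    Each parent is given as a pair of chromosomes.
    """
    results = []
    for z_dad, z_mom in product(father, mother):
        geno = tuple(sorted((z_dad, z_mom)))
        sex = "female" if "W" in geno else "male"
        resistant = "R" in geno  # resistance is dominant
        results.append((sex, geno, resistant))
    return results

# Susceptible male (Z^s Z^s) x resistant female (Z^R W):
# all sons inherit the mother's Z^R and are resistant,
# all daughters inherit W from the mother and Z^s from the father.
for sex, geno, resistant in offspring(("s", "s"), ("R", "W")):
    print(sex, geno, "resistant" if resistant else "susceptible")
```

This sex-biased outcome is exactly what distinguishes Z-linked from autosomal inheritance in single-pair crosses, and why mass crosses, which pool both reciprocal directions, can masquerade as an incompletely dominant autosomal pattern.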
Resumo:
Concern is growing in European society about the return of fascist and neo-Nazi tendencies and the spread of xenophobic and anti-Semitic ideologies, some of them fed by theses denying those tragic events of our recent history. The fight against denialist discourse has been taken beyond the social and academic sphere, with proposals to incorporate into European legal systems specific criminal offenses penalizing this kind of speech: denying, trivializing, or justifying the Holocaust or other genocides or serious crimes against humanity. This legislation, which finds its fullest expression in Framework Decision 2008/913/JHA, punishes socially repugnant speech, yet raises doubts as to its legitimacy within a system of liberties built on the pillar of pluralism characteristic of democratic states. The question thus arises of whether "new" opinion crimes may be emerging, and this is the subject of the present thesis. The specific aim of this work is to analyze this criminal policy in order to propose a configuration of the crime of denialism compatible with freedom of expression, while questioning the advisability of punishing this kind of conduct through a specific criminal offense. In particular, it seeks to answer three questions. First, should denialist discourse be protected prima facie by freedom of expression in an open, person-centered legal order, and what "rules" could serve as criteria for limiting this kind of expression? Second, how could a criminal offense specifically penalizing this kind of conduct be constructed in a manner respectful of constitutional and criminal-law principles? And finally, is a criminal policy that leads to the creation of a specific crime of denialism advisable or appropriate?
Resumo:
Verification assesses the quality of quantitative precipitation forecasts (QPF) against observations and provides indications of systematic model errors. Using the feature-based technique SAL, simulated precipitation distributions are analyzed with respect to (S)tructure, (A)mplitude, and (L)ocation. In recent years, numerical weather prediction models have been run at grid spacings that allow deep convection to be simulated without parameterization, which raises the question of whether these models deliver better forecasts. The high-resolution hourly observational dataset used in this work is a combination of radar and station measurements. First, using the German COSMO models as an example, it is shown that the models of the newest generation simulate the mean diurnal cycle better, albeit with a too-weak maximum that occurs somewhat too late; in contrast, the models of the old generation produce a maximum that is too strong and occurs considerably too early. Second, the new model achieves a better simulation of the spatial distribution of precipitation by markedly reducing the windward/lee problem. To quantify these subjective assessments, daily QPFs of four models for Germany over an eight-year period were examined with SAL as well as with classical scores. The higher-resolution models simulate more realistic precipitation distributions (better in S), but hardly any difference appears in the other components. A further aspect is that the model with the coarsest resolution (ECMWF) is rated clearly best by the RMSE, which reveals the "double penalty" problem. Combining the three SAL components yields the result that, especially in summer, the most finely resolved model (COSMO-DE) performs best.
This is mainly due to a more realistic structure, so SAL provides helpful information and confirms the subjective assessment. In 2007 the projects COPS and MAP D-PHASE took place and offered the opportunity to compare 19 models from three model categories with respect to their forecast performance in southwestern Germany for accumulation periods of 6 and 12 hours. Noteworthy results are that (i) the smaller the grid spacing of a model, the more realistic the simulated precipitation distributions; (ii) regarding precipitation amount, the high-resolution models simulate less precipitation, usually too little; and (iii) the location component is simulated worst by all models. The analysis of the forecast performance of these model types for convective situations shows clear differences. In high-pressure situations, the models without convection parameterization are unable to simulate the convection, whereas the models with convection parameterization produce the right amount but structures that are too widespread. For convective events associated with fronts, both model types are able to simulate the precipitation distribution, with the high-resolution models providing more realistic fields. This weather-regime-based investigation is then carried out more systematically using the convective timescale. A climatology compiled for Germany for the first time shows that the frequency of this timescale decays toward larger values following a power law. The SAL results differ dramatically between the two regimes: for small values of the convective timescale they are good, whereas for large values both structure and amplitude are clearly overestimated. For temporally very highly resolved precipitation forecasts, the influence of timing errors becomes increasingly important.
These timing errors can be determined by optimizing/minimizing the L component of SAL within a time window (+/-3 h) centered on the observation time. It is shown that, at the optimal time shift, the structure and amplitude of the QPFs for COSMO-DE improve, which better demonstrates the model's fundamental ability to simulate the precipitation distribution realistically.
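The amplitude component and the center-of-mass part of the location component of SAL can be sketched compactly; this is a simplified illustration following the standard SAL definitions, with the object-based structure component omitted and the toy precipitation fields invented for the example.

```python
import numpy as np

def sal_amplitude(model, obs):
    """A component of SAL: normalized difference of the domain-mean
    precipitation, bounded in [-2, 2]. A > 0 means the model
    overestimates the total amount."""
    d_mod, d_obs = model.mean(), obs.mean()
    return (d_mod - d_obs) / (0.5 * (d_mod + d_obs))

def sal_location_l1(model, obs):
    """First part of the L component: distance between the centers of
    mass of the two precipitation fields, normalized by the largest
    distance in the domain, so the value lies in [0, 1]."""
    yy, xx = np.indices(model.shape)
    def center_of_mass(field):
        w = field / field.sum()
        return np.array([(yy * w).sum(), (xx * w).sum()])
    d_max = np.hypot(model.shape[0] - 1, model.shape[1] - 1)  # domain diagonal
    return np.linalg.norm(center_of_mass(model) - center_of_mass(obs)) / d_max

# Toy example: identical total rain, but the forecast cell is displaced,
# so A is zero while L1 is nonzero -- a small-scale "double penalty" case
obs = np.zeros((50, 50)); obs[20:25, 20:25] = 1.0
mod = np.zeros((50, 50)); mod[25:30, 28:33] = 1.0
print(sal_amplitude(mod, obs))       # → 0.0
print(round(sal_location_l1(mod, obs), 3))
```

The example also shows why a point-wise score such as the RMSE punishes the displaced high-resolution forecast twice (miss plus false alarm), while SAL separates the amount error (zero here) from the displacement error.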
Resumo:
The last decade has witnessed the establishment of a Standard Cosmological Model, which is based on two fundamental assumptions: the first is the existence of a new, non-relativistic kind of particle, i.e. Dark Matter (DM), which provides the potential wells in which structures form, while the second is the presence of Dark Energy (DE), the simplest form of which is represented by the Cosmological Constant Λ, which sources the accelerated expansion of our Universe. These two features are summarized by the acronym ΛCDM, the abbreviation used to refer to the present Standard Cosmological Model. Although the Standard Cosmological Model shows remarkably successful agreement with most of the available observations, it presents some longstanding unsolved problems. A possible way to address these problems is the introduction of a dynamical Dark Energy, in the form of a scalar field ϕ. In coupled DE models, the scalar field ϕ features a direct interaction with matter in different regimes. Cosmic voids are large under-dense regions in the Universe devoid of matter. Being nearly empty of matter, their dynamics is expected to be dominated by DE, to the nature of which the properties of cosmic voids should therefore be very sensitive. This thesis work is devoted to the statistical and geometrical analysis of cosmic voids in large N-body simulations of structure formation in the context of alternative competing cosmological models. In particular, we used the ZOBOV code (see ref. Neyrinck 2008), a publicly available void-finder algorithm, to identify voids in the halo catalogues extracted from the CoDECS simulations (see ref. Baldi 2012), the largest N-body simulations to date of interacting Dark Energy (DE) models. We identify suitable criteria to produce void catalogues with the aim of comparing the properties of these objects in interacting DE scenarios to the standard ΛCDM model, at different redshifts.
This thesis work is organized as follows: in chapter 1, the Standard Cosmological Model as well as the main properties of cosmic voids are introduced. In chapter 2, we present the scalar field scenario. In chapter 3, the tools, the methods, and the criteria by which a void catalogue is created are described, while in chapter 4 we discuss the statistical properties of the cosmic voids included in our catalogues. In chapter 5, the geometrical properties of the catalogued cosmic voids are presented by means of their stacked profiles. In chapter 6, we summarize our results and propose further developments of this work.
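One basic statistic for comparing void catalogues across cosmologies, of the kind discussed in chapter 4, is the cumulative void size function n(>R). The sketch below uses synthetic lognormal radius samples as stand-ins for real catalogues; the distributions, box size, and bin choices are illustrative assumptions, not CoDECS data.

```python
import numpy as np

def cumulative_size_function(radii, volume, r_bins):
    """Cumulative void size function n(>R): comoving number density of
    voids with effective radius larger than each threshold in r_bins.
    `radii` in Mpc/h, `volume` in (Mpc/h)^3."""
    radii = np.asarray(radii)
    return np.array([(radii > r).sum() / volume for r in r_bins])

# Synthetic catalogues standing in for a LCDM run and a coupled-DE run
rng = np.random.default_rng(0)
lcdm_radii = rng.lognormal(mean=2.0, sigma=0.4, size=2000)
cde_radii = rng.lognormal(mean=2.1, sigma=0.4, size=2000)

bins = np.array([5.0, 10.0, 20.0])   # radius thresholds in Mpc/h
box_volume = 1000.0**3               # (Mpc/h)^3, e.g. a 1 Gpc/h box

n_lcdm = cumulative_size_function(lcdm_radii, box_volume, bins)
n_cde = cumulative_size_function(cde_radii, box_volume, bins)
print(n_lcdm)
print(n_cde)  # a catalogue with larger typical radii yields more large voids
```

Comparing such size functions at several redshifts is one way the abundance of large voids can discriminate between interacting DE scenarios and ΛCDM.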