923 results for variable power, cycle-run, stochastic cycling


Relevance:

30.00%

Publisher:

Abstract:

[EN] Background: Coxiella burnetii is a highly clonal microorganism that is difficult to culture, requiring BSL3 conditions for its propagation, which leads to a scarce availability of isolates worldwide. Published characterization methods have delineated up to 8 different genomic groups and 36 genotypes. However, all of these methodologies, with the exception of one that exhibited limited discriminatory power (3 genotypes), rely on performing between 10 and 20 PCR amplifications or on sequencing long fragments of DNA, which makes their direct application to clinical samples impracticable and limits the accessibility of data on the circulation of C. burnetii genotypes. Results: To assess the variability of this organism in Spain, we developed a novel method consisting of a multiplex (8-target) PCR and hybridization with specific probes that reproduces the previous classification of this organism into 8 genomic groups, and up to 16 genotypes. It allows direct characterization from clinical and environmental samples in a single run, which will help in the study of the genotypes circulating in wild and domestic cycles as well as in sporadic human cases and outbreaks. The method has been validated with reference isolates. High variability of C. burnetii was found in Spain among the 90 samples tested, with 10 different genotypes detected; adaA-negative genotypes were associated with acute Q fever cases presenting as fever of intermediate duration with liver involvement, and with chronic cases. Genotypes infecting humans were also found in sheep, goats, rats, wild boar and ticks, while the only genotype found in cattle was never found among our clinical samples. Conclusions: This newly developed methodology has demonstrated that C. burnetii is highly variable in Spain. According to the data presented here, cattle do not seem to participate in the transmission of C. burnetii to humans in the samples studied, while sheep, goats, wild boar, rats and ticks share genotypes with the human population.
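The probe-hybridization step of such a method amounts to mapping each sample's pattern of positive probes to a genomic group and genotype. A minimal sketch in Python, where the probe names and the pattern-to-genotype table are invented placeholders rather than the published scheme:

```python
# Hypothetical 8-probe panel; "adaA" presence/absence is one of the markers.
PROBES = ("p1", "p2", "p3", "p4", "p5", "p6", "p7", "adaA")

# Invented hybridization patterns (1 = probe positive) -> genotype labels.
GENOTYPE_TABLE = {
    (1, 0, 1, 0, 0, 1, 0, 1): "GG-I / GT-1",
    (1, 0, 1, 0, 0, 1, 0, 0): "GG-I / GT-2 (adaA negative)",
    (0, 1, 0, 1, 1, 0, 0, 1): "GG-IV / GT-9",
}

def classify(positive_probes):
    """Map the set of positive probes to a genotype, or None if unknown."""
    pattern = tuple(1 if p in positive_probes else 0 for p in PROBES)
    return GENOTYPE_TABLE.get(pattern)

print(classify({"p1", "p3", "p6", "adaA"}))  # -> GG-I / GT-1
```

A sample whose pattern is absent from the table returns None, flagging it as a pattern outside the known genotypes.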

Relevance:

30.00%

Publisher:

Abstract:

The application of Concurrency Theory to Systems Biology is in its earliest stage of progress. The metaphor of cells as computing systems by Regev and Shapiro opened the way to the employment of concurrent languages for the modelling of biological systems, and their peculiar characteristics led to the design of many bio-inspired formalisms that achieve higher faithfulness and specificity. In this thesis we present pi@, an extremely simple and conservative extension of the pi-calculus that represents a keystone in this respect thanks to its expressive capabilities. The pi@ calculus is obtained by adding polyadic synchronisation and priority to the pi-calculus, in order to achieve compartment semantics and atomicity of complex operations, respectively. In its direct application to biological modelling, the stochastic variant of the calculus, Spi@, is shown to be able to model consistently several phenomena, such as the formation of molecular complexes, hierarchical subdivision of the system into compartments, inter-compartment reactions, and dynamic reorganisation of the compartment structure consistent with volume variation. The pivotal role of pi@ is evidenced by its capability of encoding, in a compositional way, several bio-inspired formalisms, so that it represents the optimal core of a framework for the analysis and implementation of bio-inspired languages. In this respect, encodings of BioAmbients, Brane Calculi and a variant of P Systems in pi@ are formalised. The conciseness of their translations into pi@ allows their indirect comparison by means of their encodings; furthermore, it provides a ready-to-run implementation of minimal effort whose correctness is guaranteed by the correctness of the respective encoding functions. Further important results of general validity are stated on the expressive power of priority. Several impossibility results are described, which clearly state the superior expressiveness of prioritised languages and the problems arising in the attempt to provide a parallel implementation of them. To this end, a new setting in distributed computing (the last man standing problem) is singled out and exploited to prove the impossibility of providing a purely parallel implementation of priority by means of point-to-point or broadcast communication.
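The operational effect of priority can be illustrated independently of the calculus: among the transitions enabled at a given state, only those of maximal priority may fire. A minimal sketch (the transition names and the lower-number-is-higher-priority convention are illustrative only, not pi@'s concrete syntax):

```python
def eligible(transitions):
    """transitions: (name, priority, enabled) triples; lower number means
    higher priority here. Return the names allowed to fire."""
    enabled = [(n, p) for n, p, ok in transitions if ok]
    if not enabled:
        return []
    top = min(p for _, p in enabled)          # highest priority present
    return [n for n, p in enabled if p == top]

ts = [("bind", 0, True), ("move", 1, True), ("split", 0, False)]
print(eligible(ts))  # -> ['bind']: the lower-priority "move" is pre-empted
```

This pre-emption is exactly what makes complex operations atomic: lower-priority alternatives cannot interleave while a high-priority sequence is enabled.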

Relevance:

30.00%

Publisher:

Abstract:

Waste management is becoming more important year after year, both for the costs associated with it and for the ever-increasing volumes of waste generated. The discussion on the fate of the organic fraction of municipal solid waste (OFMSW) continually leads to new solutions, with proposed alternatives ranging from incineration to composting and anaerobic digestion. "For Biogas" is a collaborative effort between C.I.R.S.A. and the R.E.S. cooperative whose main goal is to generate "green" energy from the anaerobic co-digestion of biowaste and sludge. Specifically, the project includes a pilot plant receiving dewatered sludge from both urban and agro-industrial sewage (DS) and the organic fraction of MSW (in a 2/1 ratio), which is digested in the absence of oxygen to produce biogas and digestate. The biogas is piped to a co-generation system producing power and heat that are reused in the digestion process itself, making it independent from the national grid. The digestate undergoes mechanical separation, giving a liquid fraction, which is fed into the treatment plant, and a solid fraction, which is disposed of in landfill (in the future it will be further processed to obtain compost). This work analyzed and estimated the impacts generated by the pilot plant in its operative phase. Once the model had been characterized, on the basis of the CML 2001 methodology, a comparison was made with the current scenario assumed for OFMSW and DS. The current scenario treats the two fractions separately: the organic one is sent to a composting plant, while the sludge is sent to landfill. The results show that the most significant difference between the two scenarios is in the GWP category, as the "For Biogas" project is able to generate "zero-emission" power and heat. It also generates a smaller volume of waste for disposal. In conclusion, the analysis evaluated the performance of two alternative methods of managing OFMSW and DS, highlighting that the "For Biogas" project is to be preferred to the current scenario.

Relevance:

30.00%

Publisher:

Abstract:

This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist of assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search. Next, we face Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
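As a point of contrast with the exact methods above, integrated allocation and scheduling is often approached with greedy heuristics. The sketch below is a plain list scheduler for precedence-constrained tasks on a single finite-capacity resource, with invented task data; it is a baseline illustration, not the hybrid CP/OR algorithms developed in the thesis:

```python
# Greedy list scheduling: each task starts as early as precedence and
# resource capacity allow. Integer time steps keep the sketch simple.

def schedule(tasks, capacity):
    """tasks: {name: (duration, demand, [predecessors])} -> {name: start}."""
    start, end = {}, {}
    while len(start) < len(tasks):
        for name, (dur, dem, preds) in tasks.items():
            if name in start or any(p not in end for p in preds):
                continue
            t = max((end[p] for p in preds), default=0)  # precedence bound
            def usage(t0):  # demand of already-placed tasks overlapping [t0, t0+dur)
                return sum(d for n, (du, d, _) in tasks.items()
                           if n in end and start[n] < t0 + dur and end[n] > t0)
            while usage(t) + dem > capacity:  # delay until capacity suffices
                t += 1
            start[name], end[name] = t, t + dur
    return start

tasks = {"a": (2, 1, []), "b": (3, 1, []), "c": (2, 1, ["a", "b"])}
print(schedule(tasks, 2))  # -> {'a': 0, 'b': 0, 'c': 3}
```

With capacity 2, tasks "a" and "b" run in parallel and "c" starts once both predecessors finish; such a heuristic gives feasible but generally suboptimal schedules, which is the gap the exact methods target.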

Relevance:

30.00%

Publisher:

Abstract:

Electronic applications are nowadays converging under the umbrella of the cloud computing vision. The future ecosystem of information and communication technology is going to integrate clouds of portable clients and embedded devices exchanging information, through the internet layer, with processing clusters of servers, data centers and high-performance computing systems. Even though the whole of society is waiting to embrace this revolution, there is a flip side to the story. Portable devices require batteries to work far from power plugs, and battery capacity does not scale as their increasing power requirements do. At the other end, processing clusters such as data centers and server farms are built upon the integration of thousands of multiprocessors. For each of them, technology scaling during the last decade has produced a dramatic increase in power density, with significant spatial and temporal variability. This leads to power and temperature hot spots, which may cause non-uniform ageing and accelerated chip failure, and all the heat removed from the silicon translates into high cooling costs. Moreover, trends in the ICT carbon footprint show that the run-time power consumption of the whole spectrum of devices accounts for a significant slice of the entire world's carbon emissions. This thesis embraces the full ICT ecosystem and its dynamic power consumption concerns by describing a set of new and promising system-level resource management techniques to reduce power consumption and related issues for two corner cases: Mobile Devices and High Performance Computing.

Relevance:

30.00%

Publisher:

Abstract:

During the last few years, a great deal of interest has arisen concerning the application of stochastic methods to several biochemical and biological phenomena. Phenomena like gene expression, cellular memory and the bet-hedging strategy in bacterial growth, among many others, cannot be described by continuous stochastic models due to their intrinsic discreteness and randomness. In this thesis I have used the Chemical Master Equation (CME) technique to model some feedback cycles and analyze their properties, also in the light of experimental data. In the first part of this work, the effect of stochastic stability is discussed on a toy model of the genetic switch that triggers cellular division, whose malfunctioning is known to be one of the hallmarks of cancer. The second system I have worked on is the so-called futile cycle, a closed cycle of two enzymatic reactions that adds and removes a chemical compound, called a phosphate group, to and from a specific substrate. I have investigated how adding noise to the enzyme (which is usually present in the order of a few hundred molecules) modifies the probability of observing a specific number of phosphorylated substrate molecules, and confirmed the theoretical predictions with numerical simulations. In the third part, the results of the study of a chain of multiple phosphorylation-dephosphorylation cycles are presented. We discuss an approximation method for the exact solution in the bidimensional case and the relationship this method has with the thermodynamic properties of the system, which is an open system far from equilibrium. In the last section, the agreement between the theoretical prediction of the total protein quantity in a mouse cell population and the quantity observed via fluorescence microscopy is shown.
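The futile cycle described above lends itself to a compact stochastic simulation. The sketch below runs Gillespie's algorithm on a one-variable version (the count of phosphorylated substrate molecules), with illustrative rate constants and copy numbers rather than values from the thesis:

```python
# Gillespie SSA for a futile cycle: a kinase phosphorylates and a
# phosphatase dephosphorylates a substrate pool of n_total molecules.
import random

def futile_cycle(n_total=100, k_plus=1.0, k_minus=1.0, t_end=50.0, seed=1):
    """Return the phosphorylated count after simulating up to time t_end."""
    random.seed(seed)
    n_p, t = 0, 0.0
    while t < t_end:
        a1 = k_plus * (n_total - n_p)    # phosphorylation propensity
        a2 = k_minus * n_p               # dephosphorylation propensity
        t += random.expovariate(a1 + a2)          # waiting time to next event
        if random.random() * (a1 + a2) < a1:      # pick reaction proportionally
            n_p += 1
        else:
            n_p -= 1
    return n_p

print(futile_cycle())  # fluctuates around n_total / 2 for equal rates
```

With symmetric rates the trajectory fluctuates around half the substrate pool; changing enzyme-dependent rates (the noise source discussed above) shifts and reshapes that stationary distribution.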

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this work was to investigate the impact of different hybridization concepts and levels of hybridization on the fuel economy of a standard road vehicle, where both conventional and non-conventional hybrid architectures are treated in exactly the same way from the point of view of overall energy flow optimization. Hybrid component models were developed and presented in detail, as well as the simulation results, mainly for the NEDC cycle. The analysis was performed on four different parallel hybrid powertrain concepts: Hybrid Electric Vehicle (HEV), High Speed Flywheel Hybrid Vehicle (HSF-HV), Hydraulic Hybrid Vehicle (HHV) and Pneumatic Hybrid Vehicle (PHV). In order to perform an equitable analysis of the different hybrid systems, the comparison was also based on the same usable system energy storage capacity (i.e. 625 kJ for the HEV, HSF-HV and HHV); in the case of the pneumatic hybrid system, however, the maximal storage capacity was limited by the size of the system in order to comply with the packaging requirements of the vehicle. The simulations were performed with the IAV GmbH VeLoDyn software simulator, based on the Matlab/Simulink software package. An advanced cycle-independent control strategy (ECMS) was implemented in the hybrid supervisory control unit in order to solve the power management problem for all hybrid powertrain solutions. In order to maintain the State of Charge within the desired boundaries during different cycles, and to facilitate easy implementation and recalibration of the control strategy for very different hybrid systems, a Charge Sustaining Algorithm was added to the ECMS framework. Additionally, a Variable Shift Pattern VSP-ECMS algorithm was proposed as an extension of the ECMS capabilities, so as to include gear selection in the determination of the minimal (energy) cost function of the hybrid system. Further, a cycle-based energetic analysis was performed in all the simulated cases, and the results are reported in the corresponding chapters.
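The core idea of ECMS is to minimize, at each instant, an equivalent fuel cost that prices the electrical (or stored) power via an equivalence factor. A toy single-step sketch, with an invented engine fuel map and illustrative numbers rather than the VeLoDyn component models:

```python
# One ECMS decision step: grid-search the motor power that minimises
# equivalent cost = fuel power + s * motor power (s = equivalence factor).

def ecms_split(p_demand, p_motor_max, s=1.5, steps=21):
    """Return (engine_power, motor_power) in kW minimising equivalent cost."""
    def fuel_power(p_eng):              # crude, invented engine fuel map
        return 0.0 if p_eng <= 0 else 5.0 + 2.0 * p_eng
    best = None
    for i in range(steps):
        p_mot = -p_motor_max + 2.0 * p_motor_max * i / (steps - 1)
        p_eng = p_demand - p_mot        # engine covers the remainder
        if p_eng < 0:
            continue                    # engine cannot absorb power here
        cost = fuel_power(p_eng) + s * p_mot    # equivalent fuel cost
        if best is None or cost < best[0]:
            best = (cost, p_eng, p_mot)
    return best[1], best[2]

print(ecms_split(p_demand=30.0, p_motor_max=10.0))  # -> (20.0, 10.0)
```

With this map and s = 1.5, discharging the motor is cheaper than engine fuel at the margin, so the optimizer discharges fully; raising s above the engine's marginal fuel slope flips the decision toward recharging, which is how a charge-sustaining layer steers the State of Charge.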

Relevance:

30.00%

Publisher:

Abstract:

This dissertation analyzes the effect of market analysts' expectations of share prices (price targets) on executive compensation. It examines how well the estimated effects of price targets on compensation fit two competing views of how executive compensation is determined: the arm's-length bargaining model, which assumes that a board seeks to maximize shareholders' interests, and the managerial power model, which assumes that a board seeks to maximize managers' compensation (Bebchuk et al. 2005). The first chapter documents the pattern of CEO pay from fiscal year 1996 to 2010. The second chapter analyzes the Institutional Broker Estimate System Detail History Price Target data file, which reports analysts' price targets for firms. I show that the number of price-target announcements is positively associated with the volatility of a company's share price, that price targets are predictive of changes in the value of stocks, and that when analysts announce positive (negative) expectations of future stock prices, share prices change in the same direction in the short run. The third chapter analyzes the effect of price targets on executive compensation. I find that analysts' price targets alter the composition of executive pay between cash-based and stock-based compensation: when analysts forecast a rise (fall) in the share price of a firm, the compensation package tilts toward stock-based (cash-based) compensation. The substitution effect is stronger in companies with weaker corporate governance. The fourth chapter explores the effect of the introduction of the Sarbanes-Oxley Act (SOX) in 2002, and its reinforcement in 2006, on the option-granting process. I show that the introduction of SOX and its reinforcement eliminated the practice of backdating options but increased the "spring-loading" of option grants around price-target announcements. Overall, the dissertation shows that price targets provide insights into the determinants of executive pay in favor of the managerial power model.

Relevance:

30.00%

Publisher:

Abstract:

In cycling cells, positive stimuli like nutrients, growth factors and mitogens increase the rate of ribosome biogenesis and protein synthesis to ensure both growth and proliferation. In contrast, under stress conditions, proliferating cells negatively modulate ribosome production to reduce protein synthesis and block cell cycle progression. The main strategy used by cycling cells to coordinate cell proliferation and ribosome biogenesis is to share regulatory elements that participate directly in ribosome production and in cell cycle regulation. In fact, there is evidence that stimulation or inhibition of cell proliferation exerts a direct effect on the activity of the RNA polymerases controlling ribosome biogenesis, while several alterations of normal ribosome biogenesis cause changes in the expression and activity of the tumor suppressor p53, the main effector of the inhibition of cell cycle progression. The available data on the cross-talk between ribosome biogenesis and cell proliferation have so far been obtained in experimental models in which changes in ribosome biogenesis were induced either by reducing the activity of RNA polymerase I or by down-regulating the expression of ribosomal proteins. The molecular pathways linking the inhibition of RNA polymerase III (Pol III) activity to cell cycle progression have not yet been investigated. In eukaryotes, RNA polymerase III is responsible for the transcription of factors involved both in ribosome assembly (5S rRNA) and in rRNA processing (RNase P and MRP). Thus, the aim of this study is to characterize the effects of the down-regulation of RNA polymerase III activity, or of the specific depletion of 5S rRNA. The results obtained might lead to a deeper understanding of the molecular pathway that coordinates ribosome biogenesis and the cell cycle, and might give useful information about the possibility of targeting RNA polymerase III for cancer treatment.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this thesis is the power transient analysis of experimental devices placed within the reflector of the Jules Horowitz Reactor (JHR). Since the JHR material testing facility is designed to achieve a core thermal power of 100 MW, a large reflector hosts fissile material samples that are irradiated up to a total power of 3 MW. MADISON devices are expected to attain 130 kW, while the ADELINE nominal power is around 60 kW. In addition, MOLFI test samples are envisaged to reach 360 kW in the LEU configuration and up to 650 kW in the HEU frame. Safety issues concern shutdown transients and require particular verification of the thermal power decrease of these fissile samples with respect to the core kinetics, as far as single-device reactivity determination is concerned. A calculation model is conceived and applied in order to properly account for the different nuclear heating processes and the time-dependent features of the device transients. An innovative methodology is carried out, in which the flux shape modification during control rod insertions is investigated with regard to its impact on device power through core-reflector coupling coefficients; previous methods, which considered only nominal core-reflector parameters, are thereby improved. Moreover, the effect of delayed emissions is evaluated with respect to the spatial impact on the devices of a diffuse in-core delayed neutron source. Delayed gamma transport, related to the fission product concentration, is taken into account through evolution calculations of different fuel compositions in the equilibrium cycle. Given accurate device reactivity control, power transients are then computed for every sample according to the envisaged shutdown procedures. The results obtained in this study are aimed at design feedback and reactor management optimization by the JHR project team; moreover, the Safety Report is intended to use the present analysis for improved device characterization.
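The qualitative shape of such a post-shutdown transient can be illustrated with a sum-of-exponentials model of residual device power (delayed-neutron and decay-heat-like contributions). The group fractions and decay constants below are invented for the sketch and are not JHR design data:

```python
# Residual device power after shutdown as a sum of decaying exponentials.
import math

# (fraction of nominal power, decay constant in 1/s) -- invented groups
GROUPS = [(0.07, 0.08), (0.05, 0.9), (0.03, 3.0)]

def residual_power(p0, t):
    """Device power (kW) t seconds after shutdown, for nominal power p0 (kW)."""
    return p0 * sum(f * math.exp(-lam * t) for f, lam in GROUPS)

# Example: a 650 kW (HEU-frame-like) sample drops promptly to the summed
# group fractions, then decays group by group.
for t in (0.0, 1.0, 10.0, 60.0):
    print(f"t = {t:5.1f} s   P = {residual_power(650.0, t):7.2f} kW")
```

The fast groups dominate the first seconds while the slowest group sets the tail; in the thesis this time structure is what the core-reflector coupling coefficients and delayed-emission terms resolve properly.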

Relevance:

30.00%

Publisher:

Abstract:

In the present work, the training-induced motor memory of the fruit fly Drosophila melanogaster was investigated as the flies climbed over eight symmetrically distributed gaps on a rotating ring. The optic flow of the passing surroundings stimulated the flies to counteract this optomotor stimulus and to cross the gaps while walking. What improves with training, and is learned in the long term, is the compensatory gap crossing X+ against the rotation. The learning curve obtained from this training showed an above-average performance improvement after a single training run, which persisted for about 40 minutes and could not be retrieved from the motor memory of trained flies thereafter. After a resting phase of one to several days, the flies were tested for possible long-term learning, which was demonstrated for several intervals. Both the performance improvement during training and the learning effect after 24 h are absent in mutant rutabaga2080 and rut1 flies. The affected gene encodes adenylyl cyclase I, a key protein of the cAMP signalling cascade, which is required for olfactory and visual learning, among others. This made it possible to map the motor memory forms by partial rescue. Motor memory consolidation is sleep-dependent: as it turned out, WTB flies need only a 10 h dark phase between a first training run and a test run to achieve significant performance improvements.

In further experiments, the flies were deprived of sleep at night or during the day with an LED lamp or in a dark chamber, with an orbital shaker or a laboratory rocker. Only those flies improved their performance significantly over a first training run that were either exposed exclusively to darkness or were given the opportunity to first consolidate the memory during a natural sleep phase (9 p.m. to 7 a.m. CET). Further experiments varied the experimental conditions, either during training or during the test, that can act on a fly and thus on the motor memory consolidation made possible only by training. For this purpose, the parameters gap width, rotation direction of the gap ring, speed of the gap ring, and distribution of the eight gaps on the ring (symmetric or asymmetric) were changed during training or during memory retrieval in the test run. From the results it can be concluded that the gap width is consolidated long-term, the rotation direction is stored short-term, and the rotation speed has a motivating effect on the flies. The symmetric distribution of the gaps on the ring serves long-term consolidation and is of high importance as a training input. With the help of different paradigms, the performance improvements of the flies upon retrieval of short- or long-term memory could be examined at high resolution (transfer). The concentration with which a WTB fly performs a motor task (crossing gaps against the direction of rotation) was determined with the help of distractor stimuli. As it turned out, distractors influence the success rate of a crossing; that is, with increasing distractor strength, the probability of a gap crossing decreased.

The distracting stimuli also affected the measurement of a gap, in that either peering-like movements were performed during training or, depending on stimulus strength, only those gaps were measured that were actually to be crossed.

Relevance:

30.00%

Publisher:

Abstract:

Solid oral dosage form disintegration in the human stomach is a highly complex process dependent on physicochemical properties of the stomach contents as well as on physical variables such as hydrodynamics and mechanical stress. Understanding the role of hydrodynamics and forces in disintegration of oral solid dosage forms can help to improve in vitro disintegration testing and the predictive power of the in vitro test. The aim of this work was to obtain a deep understanding of the influence of changing hydrodynamic conditions on solid oral dosage form performance. Therefore, the hydrodynamic conditions and forces present in the compendial PhEur/USP disintegration test device were characterized using a computational fluid dynamics (CFD) approach. Furthermore, a modified device was developed and the hydrodynamic conditions present were simulated using CFD. This modified device was applied in two case studies comprising immediate release (IR) tablets and gastroretentive drug delivery systems (GRDDS). Due to the description of movement provided in the PhEur, the movement velocity of the basket-rack assembly follows a sinusoidal profile. Therefore, hydrodynamic conditions are changing continually throughout the movement cycle. CFD simulations revealed that the dosage form is exposed to a wide range of fluid velocities and shear forces during the test. The hydrodynamic conditions in the compendial device are highly variable and cannot be controlled. A new, modified disintegration test device based on computerized numerical control (CNC) technique was developed. The modified device can be moved in all three dimensions and radial movement is also possible. Simple and complex moving profiles can be developed and the influence of the hydrodynamic conditions on oral solid dosage form performance can be evaluated. Furthermore, a modified basket was designed that allows two-sided fluid flow. 
CFD simulations of the hydrodynamics and forces in the modified device revealed significant differences in the fluid flow field and forces when compared to the compendial device. Thanks to the CNC technique, moving velocity and direction can be chosen arbitrarily, and the hydrodynamics become controllable. The modified disintegration test device was utilized to examine the influence of moving velocity on the disintegration times of IR tablets, providing insights into the influence of moving speed, medium viscosity and basket design on disintegration times. An exponential relationship between the moving velocity of the modified basket and the disintegration times was established in simulated gastric fluid; the same relationship was found between the disintegration times and the CFD-predicted average shear stress on the tablet surface. Furthermore, a GRDDS was developed based on the approach of an in situ polyelectrolyte complex (PEC). Different complexes, composed of different grades of chitosan and carrageenan in different ratios, were investigated for their swelling behavior, mechanical stability and in vitro drug release. With an optimized formulation, the influence of changing hydrodynamic conditions on the swelling behavior and the drug release profile was demonstrated using the modified disintegration test device. Both swelling behavior and drug release were largely dependent on the hydrodynamic conditions. In conclusion, it has been shown within this thesis that the modified disintegration test device allows detailed insights into the influence of hydrodynamic conditions on solid oral dosage form disintegration and dissolution. By applying appropriate test conditions, the predictive power of in vitro disintegration testing can be improved using the modified disintegration test device. Furthermore, CFD has proven to be a powerful tool to examine the hydrodynamics and forces in the compendial as well as the modified disintegration test device.
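The reported exponential relationship between basket moving velocity and disintegration time, t = a·exp(−b·v), can be fitted by linear least squares on log t. The data points below are synthetic, generated from assumed parameters purely to illustrate the fit, not measured values:

```python
# Fit t = a * exp(-b * v) by ordinary least squares on log-transformed times.
import math

def fit_exponential(velocities, times):
    """Return (a, b) of the model t = a * exp(-b * v)."""
    n = len(velocities)
    y = [math.log(t) for t in times]
    vm, ym = sum(velocities) / n, sum(y) / n
    slope = (sum((v - vm) * (yi - ym) for v, yi in zip(velocities, y))
             / sum((v - vm) ** 2 for v in velocities))
    intercept = ym - slope * vm
    return math.exp(intercept), -slope

v = [10.0, 20.0, 30.0, 40.0]                    # basket speeds, hypothetical
t = [120.0 * math.exp(-0.05 * vi) for vi in v]  # synthetic, noise-free times
a, b = fit_exponential(v, t)
print(round(a, 1), round(b, 3))  # -> 120.0 0.05 (parameters recovered)
```

On noise-free data the fit recovers the generating parameters exactly; on real disintegration data the same regression on log t gives the reported exponential law, with the CFD-predicted shear stress usable in place of v.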

Relevance:

30.00%

Publisher:

Abstract:

Der atmosphärische Kreislauf reaktiver Stickstoffverbindungen beschäftigt sowohl die Naturwissenschaftler als auch die Politik. Dies ist insbesondere darauf zurückzuführen, dass reaktive Stickoxide die Bildung von bodennahem Ozon kontrollieren. Reaktive Stickstoffverbindungen spielen darüber hinaus als gasförmige Vorläufer von Feinstaubpartikeln eine wichtige Rolle und der Transport von reaktivem Stickstoff über lange Distanzen verändert den biogeochemischen Kohlenstoffkreislauf des Planeten, indem er entlegene Ökosysteme mit Stickstoff düngt. Die Messungen von stabilen Stickstoffisotopenverhältnissen (15N/14N) bietet ein Hilfsmittel, welches es erlaubt, die Quellen von reaktiven Stickstoffverbindungen zu identifizieren und die am Stickstoffkeislauf beteiligten Reaktionen mithilfe ihrer reaktionsspezifischen Isotopenfraktionierung genauer zu untersuchen. rnIn dieser Doktorarbeit demonstriere ich, dass es möglich ist, mit Hilfe von Nano-Sekundärionenmassenspektrometrie (NanoSIMS) verschiedene stickstoffhaltige Verbindungen, die üblicherweise in atmosphärischen Feinstaubpartikeln vorkommen, mit einer räumlichen Auflösung von weniger als einem Mikrometer zu analysieren und zu identifizieren. Die Unterscheidung verschiedener stickstoffhaltiger Verbindungen erfolgt anhand der relativen Signalintensitäten der positiven und negativen Sekundärionensignale, die beobachtet werden, wenn die Feinstaubproben mit einem Cs+ oder O- Primärionenstrahl beschossen werden. Die Feinstaubproben können direkt auf dem Probenahmesubstrat in das Massenspektrometer eingeführt werden, ohne chemisch oder physikalisch aufbereited zu werden. Die Methode wurde Mithilfe von Nitrat, Nitrit, Ammoniumsulfat, Harnstoff, Aminosären, biologischen Feinstaubproben (Pilzsporen) und Imidazol getestet. 
Ich habe gezeigt, dass NO2 Sekundärionen nur beim Beschuss von Nitrat und Nitrit (Salzen) mit positiven Primärionen entstehen, während NH4+ Sekundärionen nur beim Beschuss von Aminosäuren, Harnstoff und Ammoniumsalzen mit positiven Primärionen freigesetzt werden, nicht aber beim Beschuss biologischer Proben wie z.B. Pilzsporen. CN- Sekundärionen werden beim Beschuss aller stickstoffhaltigen Verbindungen mit positiven Primärionen beobachtet, da fast alle Proben oberflächennah mit Kohlenstoffspuren kontaminiert sind. Die relative Signalintensität der CN- Sekundärionen ist bei kohlenstoffhaltigen organischen Stickstoffverbindungen am höchsten.rnDarüber hinaus habe ich gezeigt, dass an reinen Nitratsalzproben (NaNO3 und KNO3), welche auf Goldfolien aufgebracht wurden speziesspezifische stabile Stickstoffisotopenverhältnisse mithilfe des 15N16O2- / 14N16O2- - Sekundärionenverhältnisses genau und richtig gemessen werden können. Die Messgenauigkeit auf Feldern mit einer Rastergröße von 5×5 µm2 wurde anhand von Langzeitmessungen an einem hausinternen NaNO3 Standard als ± 0.6 ‰ bestimmt. Die Differenz der matrixspezifischen instrumentellen Massenfraktionierung zwischen NaNO3 und KNO3 betrug 7.1 ± 0.9 ‰. 23Na12C2- Sekundärionen können eine ernst zu nehmende Interferenz darstellen wenn 15N16O2- Sekundärionen zur Messung des nitratspezifischen schweren Stickstoffs eingesetzt werden sollen und Natrium und Kohlenstoff im selben Feinstaubpartikel als interne Mischung vorliegt oder die natriumhaltige Probe auf einem kohlenstoffhaltigen Substrat abgelegt wurde. Selbst wenn, wie im Fall von KNO3, keine derartige Interferenz vorliegt, führt eine interne Mischung mit Kohlenstoff im selben Feinstaubpartikel zu einer matrixspezifischen instrumentellen Massenfraktionierung die mit der folgenden Gleichung beschrieben werden kann: 15Nbias = (101 ± 4) ∙ f − (101 ± 3) ‰, mit f = 14N16O2- / (14N16O2- + 12C14N-). 
If the 12C15N- / 12C14N- secondary ion ratio is used instead to measure the stable nitrogen isotope composition, the sample matrix does not affect the results, even when nitrogen and carbon occur in the aerosol particles at variable N/C ratios, and interferences likewise play no role. To ensure that the measurement remains specific to nitrate species, a 14N16O2- mask can be applied during data evaluation. Collecting the samples on a carbon-containing, nitrogen-free sampling substrate increases the signal intensity for pure nitrate aerosol particles.
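The matrix-bias relation reported above can be turned into a simple correction. The sketch below is illustrative only (function names are mine, not from the thesis); it evaluates the central values of the fitted equation 15Nbias = 101 ∙ f − 101 ‰ and subtracts the bias from a measured value:

```python
# Illustrative sketch of the matrix-bias relation reported for internal
# carbon-nitrate mixtures (KNO3 case). Function names are hypothetical.

def nitrate_fraction(no2_counts: float, cn_counts: float) -> float:
    """f = 14N16O2- / (14N16O2- + 12C14N-) from the two ion count rates."""
    return no2_counts / (no2_counts + cn_counts)

def delta15n_bias(f: float) -> float:
    """Matrix-specific instrumental mass fractionation in permil:
    15Nbias = 101 * f - 101 (central values; fit uncertainties are
    +-4 on the slope and +-3 on the intercept)."""
    return 101.0 * f - 101.0

def correct_delta15n(measured_permil: float, f: float) -> float:
    """Remove the matrix bias from a measured delta15N value."""
    return measured_permil - delta15n_bias(f)

# A pure nitrate particle (f = 1, no CN- signal) has zero bias:
print(delta15n_bias(1.0))   # 0.0
```

By construction the bias vanishes for pure nitrate (f = 1) and grows toward −101 ‰ as the CN- signal dominates, which matches the stated dependence on internal carbon mixing.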


Recent studies have found that soil-atmosphere coupling through soil moisture is crucial for correctly simulating the amplitude, duration, and intensity of heat waves. Moreover, soil moisture depletion in both winter and spring has been found to precede strong summer heat waves. In geophysical studies, irrigation can be regarded as an anthropogenic forcing on soil moisture, in addition to changes in land properties. In this study, irrigation was added to a hydrostatic limited-area model (BOLAM) coupled with the soil, and the response of the model to the irrigation perturbation was analyzed during a dry summer season. To identify a dry summer with overall positive temperature anomalies, an extensive climatological characterization of 2015 was carried out; the method included a statistical validation of the reference-period distribution used to calculate the anomalies. Drought conditions were observed during summer 2015 and the preceding seasons, both over the analyzed region and over the Alps, and July was characterized as an extreme event with respect to the reference distribution. The numerical experiment covered the summer season of 2015 and consisted of two runs: a control run (CTR) with soil coupling, and a perturbed run (IPR). The perturbation consists of a land-use mask created from the FAO Cropland dataset, over which an irrigation water flux of 3 mm/day was applied from 6 a.m. to 9 a.m. every day. The results show that the differences between CTR and IPR have a strong daily cycle. The main modifications concern the properties of the air masses rather than the dynamics. However, changes in the circulation at the boundaries of the Po Valley are observed, and a diagnostic spatial correlation of the variable differences shows that the soil moisture perturbation explains well the variations observed in the 2-m temperature and in the latent heat fluxes. On the other hand, it does not explain the upslope and downslope spatial shifts observed during different periods of the day.
Given these results, the irrigation process affects atmospheric properties on a scale larger than the irrigated area; it is therefore relevant for daily forecasts, particularly during hot and dry periods.
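The IPR perturbation described above can be sketched in a few lines. This is not BOLAM code; it only illustrates the stated setup (a masked cropland flux active from 6 a.m. to 9 a.m.), under the assumption that the 3 mm daily total is spread uniformly over the three-hour window:

```python
import numpy as np

# Illustrative sketch (not BOLAM code) of the IPR perturbation: a
# constant irrigation flux over masked cropland cells, active only
# between 06 and 09 local time. Uniform spreading is an assumption.

IRRIGATION_MM_PER_DAY = 3.0
START_HOUR, END_HOUR = 6, 9  # flux applied 6 a.m. - 9 a.m.

def irrigation_flux(hour: int, crop_mask: np.ndarray) -> np.ndarray:
    """Irrigation water flux (mm/hour) for each grid cell at a given hour.

    The 3 mm/day total is spread uniformly over the 3-hour window and
    applied only where the cropland mask (from the FAO dataset) is True.
    """
    if START_HOUR <= hour < END_HOUR:
        rate = IRRIGATION_MM_PER_DAY / (END_HOUR - START_HOUR)  # 1 mm/h
    else:
        rate = 0.0
    return np.where(crop_mask, rate, 0.0)

mask = np.array([[True, False], [False, True]])
print(irrigation_flux(7, mask))   # nonzero only on cropland cells
print(irrigation_flux(12, mask))  # zero outside the window
```

In the actual model the flux would be added to the soil moisture tendency of the coupled land-surface scheme at each time step; the sketch only captures the masking and timing logic.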


Cognitive impairments are currently regarded as important determinants of functional domains and are promising treatment goals in schizophrenia. Nevertheless, the exact nature of the interdependent relationship between neurocognition and social cognition, as well as the relative contribution of each of these factors to adequate functioning, remains unclear. The purpose of this article is to systematically review the findings and methodology of studies that have investigated social cognition as a mediator variable between neurocognitive performance and functional outcome in schizophrenia. Moreover, we carried out a study to evaluate this mediation hypothesis by means of structural equation modeling in a large sample of 148 schizophrenia patients. The review comprised 15 studies. All but one study provided evidence for the mediating role of social cognition, in both cross-sectional and longitudinal designs. Other variables, such as motivation and social competence, additionally mediated the relationship between social cognition and functional outcome. The mean effect size of the indirect effect was 0.20. However, social cognitive domains were differentially effective mediators. On average, 25% of the variance in functional outcome could be explained in the mediation model. The results of our own statistical analysis are in line with these conclusions: social cognition mediated a significant indirect relationship between neurocognition and functional outcome. These results suggest that research should focus on differential mediation pathways. Future studies should also consider the interaction with other prognostic factors, additional mediators, and moderators in order to increase the predictive power and to target those factors relevant for optimizing therapy effects.
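The mediation logic tested in these studies (neurocognition → social cognition → functional outcome) can be illustrated with the standard product-of-coefficients approach: the indirect effect is the product of the path from predictor to mediator (a) and the path from mediator to outcome controlling for the predictor (b). The sketch below uses synthetic data with assumed path strengths, not the study's data:

```python
import numpy as np

# Product-of-coefficients mediation sketch on synthetic data.
# neurocog -> social_cog (path a); social_cog -> outcome given
# neurocog (path b); indirect effect = a * b. Path strengths below
# are illustrative assumptions, not values from the reviewed studies.

rng = np.random.default_rng(0)
n = 148  # sample size matching the study reported above

neurocog = rng.normal(size=n)
social_cog = 0.6 * neurocog + rng.normal(scale=0.8, size=n)
outcome = 0.5 * social_cog + 0.1 * neurocog + rng.normal(scale=0.8, size=n)

def ols(y, *xs):
    """Least-squares coefficients for y ~ intercept + xs."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, slope_1, slope_2, ...]

a = ols(social_cog, neurocog)[1]            # neurocog -> mediator
b = ols(outcome, social_cog, neurocog)[1]   # mediator -> outcome | neurocog
indirect = a * b

print(f"a = {a:.2f}, b = {b:.2f}, indirect effect = {indirect:.2f}")
```

A full structural equation model (as used in the article) estimates both regressions jointly and provides standard errors for the indirect effect, e.g. via bootstrapping; the two-regression version above conveys only the core idea.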