945 results for ACCELERATING FRONTS


Relevance: 10.00%

Abstract:

[EN] We describe the coupling between upper ocean layer variability and size-fractionated phytoplankton distribution in the non-nutrient-limited Bransfield Strait region (BS) of Antarctica. For this purpose we use hydrographic and size-fractionated chlorophyll a data from a transect that crossed 2 fronts and an eddy, together with data from 3 stations located in a deeply mixed region, the Antarctic Sound (AS). In the BS transect, small phytoplankton (<20 μm equivalent spherical diameter [ESD]) accounted for 80% of total chl a and their distribution appeared to be linked to cross-frontal variability. On the deepening upper mixed layer (UML) sides of both fronts we observed a deep subducting column-like structure of small phytoplankton biomass. On the shoaling UML sides of both fronts, where there were signs of restratification, we observed a local shallow maximum of small phytoplankton biomass. We propose that this observed phytoplankton distribution may be a response to the development of frontal vertical circulation cells. In the deep, turbulent environment of the AS, larger phytoplankton (>20 μm ESD) accounted for 80% of total chl a. The proportion of large phytoplankton increases as the depth of the upper mixed layer (ZUML), and the corresponding rate of vertical mixing, increases. We hypothesize that this change in phytoplankton composition with varying ZUML is related to the competition for light, and results from modification of the light regime caused by vertical mixing.

Relevance: 10.00%

Abstract:

[EN] We have studied the short-term variability (temporal scale of days, spatial scale of 5 km) of the hydrographic field, organic and inorganic nutrients, chlorophyll and picoplanktonic abundances across a 40 km section crossing a frontal system south of Gran Canaria, where anticyclonic eddies in early stages of formation and convergent fronts have been widely reported in the past. Each cruise consisted of a section repeated daily for 3-4 days and was carried out during the same period of the year (May) in two consecutive years (2011 and 2012). The main goal of our study was to analyze the picoplankton response to short-term variability at scales not considered in regular oceanographic sampling, even in regions with complex hydrographic fields.

Relevance: 10.00%

Abstract:

[EN] Cruise tourism is a way of taking leisure time in our society. It is an activity that has become very popular since the eighties, its presence in Europe, and particularly in the Canary Islands, accelerating since the following decade. Many of the studies on cruise tourism have focused on the characteristics of the demand (the profile of tourists, spending power, the impacts that this activity causes, etc.). However, the literature on residents' perception of this tourism, which is where this study focuses, is rather scarce, particularly for the area immediately surrounding the Port of La Luz and Las Palmas.

Relevance: 10.00%

Abstract:

The miniaturization race in the hardware industry, aimed at continuously increasing transistor density on a die, no longer brings corresponding improvements in application performance. One of the most promising alternatives is to exploit the heterogeneous nature of common applications in hardware. Supported by reconfigurable computation, which has already proved its efficiency in accelerating data-intensive applications, this concept promises a breakthrough in contemporary technology development. Memory organization in such heterogeneous reconfigurable architectures becomes critical. Two primary aspects introduce a sophisticated trade-off. On the one hand, the memory subsystem should provide a well-organized distributed data structure and guarantee the required data bandwidth. On the other hand, it should hide the heterogeneous hardware structure from the end user in order to support feasible high-level programmability of the system. This thesis explores heterogeneous reconfigurable hardware architectures and presents possible solutions to the problem of memory organization and data structure. Using the MORPHEUS heterogeneous platform as an example, the discussion follows the complete design cycle, from decision making and justification to hardware realization. Particular emphasis is placed on methods to support high system performance, meet application requirements, and provide a user-friendly programmer interface. As a result, the research introduces a complete heterogeneous platform enhanced with a hierarchical memory organization, which copes with its task by separating computation from communication, supplying the reconfigurable engines with computation and configuration data, and unifying the heterogeneous computational devices by means of local storage buffers. It is distinguished from related solutions by its distributed data-flow organization, mechanisms specifically engineered to operate on data in local domains, a communication infrastructure based on a Network-on-Chip, and thorough methods to prevent computation and communication stalls. In addition, a novel technique to accelerate memory access was developed and implemented.
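
The separation of computation from communication described above is typically realized with double-buffered (ping-pong) local storage: while the engine computes on one buffer, the communication layer refills the other. The following is a minimal, hypothetical software sketch of that idea; none of the names correspond to actual MORPHEUS interfaces, and the stream length is assumed to be a multiple of the tile size.

```python
# Ping-pong (double-buffer) sketch: overlap the next data transfer with the
# current computation. Illustrative only; not a MORPHEUS API.
from threading import Thread

TILE = 4

def fetch(src, offset, buf):
    """Stand-in for a DMA transfer from shared memory into a local buffer."""
    buf[:] = src[offset:offset + TILE]

def compute(buf):
    """Stand-in for the reconfigurable engine's kernel."""
    return [2 * x for x in buf]

def process(stream):
    out = []
    buffers = [[0] * TILE, [0] * TILE]
    fetch(stream, 0, buffers[0])                      # prime the first buffer
    for i in range(0, len(stream), TILE):
        cur = buffers[(i // TILE) % 2]
        nxt = buffers[(i // TILE + 1) % 2]
        t = None
        if i + TILE < len(stream):                    # overlap next transfer...
            t = Thread(target=fetch, args=(stream, i + TILE, nxt))
            t.start()
        out.extend(compute(cur))                      # ...with current compute
        if t:
            t.join()
    return out

print(process(list(range(16))))  # doubled values 0..30
```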

Relevance: 10.00%

Abstract:

Many potential diltiazem-related L-VDCC blockers have been developed using a multidisciplinary approach. The present study investigated and compared diltiazem with the newly developed compounds in the mouse Langendorff-perfused heart, in Ca2+-transient measurements, and on recombinant L-VDCC. Twenty compounds were selected by a ligand-based virtual screening (LBVS) procedure. Of these, five (5b, M2, M7, M8 and P1) showed a potent and selective inotropic activity on guinea-pig left atria driven at 1 Hz. Further assays revealed an interesting negative inotropic effect of M2, M8, P1 and M7 on guinea-pig isolated left papillary muscle driven at 1 Hz, a relevant vasorelaxant activity of 5b, M2, M7, M8 and P1 on K+-depolarized guinea-pig ileum longitudinal smooth muscle, and a significant inhibition of contraction by 5b, M2, M8 and P1 in carbachol-stimulated ileum longitudinal smooth muscle. Wild-type human heart and rabbit lung α1 subunits were expressed (combined with the regulatory α2δ and β3 subunits) in Xenopus laevis oocytes and studied with the two-electrode voltage-clamp technique. Diltiazem is a benzothiazepine Ca2+ channel blocker used clinically for its antihypertensive and antiarrhythmic effects. Previous radioligand binding assays revealed a complex interaction of M2, M7 and M8 with the benzothiazepine binding site (Carosati E. et al., J. Med. Chem. 2006, 49, 5206). In agreement with these findings, the relative order of increased rates of contraction and relaxation at lower concentrations (≤10-6 M) in unpaced hearts was M7>M2>M8>P1. Similar increases in the Ca2+ transient were observed in cardiomyocytes. Diltiazem showed negative inotropic effects, whereas 5b had no significant effect. Diltiazem blocks the Ca2+ current in a use-dependent manner and affects the channel by accelerating inactivation and decelerating recovery from inactivation. In contrast to diltiazem, the new analogs showed no pronounced use-dependence. Application of 100 μM M8 or M2 produced ~10% tonic block; in addition, M8, M2 and P1 shifted the steady-state inactivation in the hyperpolarized direction, and the current inactivation time was significantly decreased compared with control (219.6 ± 11.5, 226 ± 14.5 and 199.28 ± 8.19 ms vs. 269 ± 12.9 ms for control). Contrary to diltiazem, recovery from block by M8 and M2 was comparable to control; only P1 showed a significantly decreased time of recovery from inactivation. All of the compounds except P1 displayed the same sensitivity on the rabbit lung α1 Ca2+ channel. Taken together, these findings suggest that M8, M2 and P1 might directly decrease the binding affinity of, or allow rapid dissociation from, the benzothiazepine binding site.

Relevance: 10.00%

Abstract:

The irrigation scheme Eduardo Mondlane, situated in Chókwè District, in the southern part of Gaza province and within the Limpopo River Basin, is the largest in the country, covering approximately 30,000 hectares of land. Built by the Portuguese colonial administration in the 1950s to exploit the agricultural potential of the area through cash-cropping, after Independence it became one of Frelimo's flagship projects aiming at the "socialization of the countryside" and at agricultural economic development through the creation of a state farm and of several cooperatives. The failure of Frelimo's economic reforms, several infrastructural constraints and local farmers' resistance to collective forms of production led the scheme to a state of severe degradation, aggravated by the floods of the year 2000. A project of technical rehabilitation initiated after the floods is currently accompanied by a strong "efficiency" discourse from the managing institution, which strongly opposes the use of irrigated land for subsistence agriculture, historically a major livelihood strategy for small farmers, particularly for women. In fact, the area has been characterized, since the end of the nineteenth century, by a stable pattern of male migration towards South African mines, which has resulted in a steady increase of women-headed households (both de jure and de facto). The relationship between land reform, agricultural development, poverty alleviation and gender equality in Southern Africa has long been debated in academic literature. Within this debate, the role of agricultural activities in irrigation schemes is particularly interesting considering that, in a drought-prone area, having access to water for irrigation means increased possibilities of improving food and livelihood security, and income levels. In the case of Chókwè, local government institutions are endorsing the development of commercial agriculture through initiatives such as partnerships with international cooperation agencies or joint ventures with private investors. While these business models can sometimes lead to positive outcomes in terms of poverty alleviation, it is important to recognize that decentralization and neoliberal reforms occur in the context of a financial and political crisis of the State, which lacks the resources to efficiently manage infrastructures such as irrigation systems. These kinds of institutional and economic reforms risk accelerating processes of social and economic marginalisation, including landlessness, in particular for poor rural women, who mainly use irrigated land for subsistence production. The study combines an analysis of the historical and geographical context with the study of relevant literature and original fieldwork. Fieldwork was conducted between February and June 2007 (when I mainly collected secondary data, maps and statistics and made a preliminary visit to Chókwè) and from October 2007 to March 2008. The fieldwork methodology was qualitative and used semi-structured interviews with central and local government officials, technical experts of the irrigation scheme, civil society organisations, international NGOs, rural extensionists, and water users from the irrigation scheme, in particular women small farmers who are members of local farmers' associations. Thanks to the collaboration with the Union of Farmers' Associations of Chókwè, I was able to participate in members' meetings and in education and training activities addressed to women farmer members of the Union, and to organize a group discussion. In the Chókwè irrigation scheme, women account for 32% of the water users of the family sector (comprising plot-holders with less than 5 hectares of land) and for just 5% of the private sector. If one considers the farmers' associations of the family sector (a legacy of Frelimo's cooperatives), women are 84% of total members. However, the security given to them by the land title they have acquired through occupation is severely endangered by the use they make of the land, which is considered "non-efficient" by the irrigation scheme authority. Due to reduced access to marketing opportunities and to inputs, training, information and credit, women in actual fact risk seeing their right to access land and water revoked because they are not able to sustain the increasing cost of the water fee. The myth of the "efficient producer" does not take into consideration the inequality and gender discrimination that characterize the neo-liberal market. Expecting small farmers, and in particular women, to be able to compete in the globalized agricultural market seems unrealistic, and can perpetuate unequal gendered access to resources such as land and water.

Relevance: 10.00%

Abstract:

Communication and coordination are two key aspects in open distributed agent systems, both being responsible for the integrity of the system's behaviour. An infrastructure capable of handling these issues, like TuCSoN, should be able to exploit the modern technologies and tools provided by fast-moving software engineering contexts. This thesis aims to demonstrate the ability of the TuCSoN infrastructure to cope with the new possibilities, in hardware and software, offered by mobile technology. The scenarios we are going to configure relate to the distributed nature of multi-agent systems, where an agent should be able to be located and run directly on a mobile device. We address the new frontiers of mobile technology represented by smartphones running Google's Android operating system. The analysis and deployment of a distributed agent-based system of this kind first runs into qualitative and quantitative considerations about the available resources. The engineering issue at the base of our research is to run TuCSoN within the reduced memory and computing capability of a smartphone, without loss of functionality, efficiency and integrity for the infrastructure. The thesis work proceeds on two fronts simultaneously: the former is the rationalization of the available hardware and software resources; the latter, totally orthogonal, is the adaptation and optimization of the TuCSoN architecture for an ad-hoc client-side release.
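
TuCSoN coordinates agents through Linda-like tuple centres. As a rough illustration of that coordination model (an illustrative sketch only, not TuCSoN's actual API), a minimal tuple space with the classic out/rd/in primitives might look like this:

```python
# Minimal Linda-style tuple space, illustrating the out/rd/in coordination
# primitives that tuple-centre infrastructures such as TuCSoN build upon.
import threading

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):                 # emit a tuple into the space
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, template):
        # None acts as a wildcard field in the template
        for tup in self._tuples:
            if len(tup) == len(template) and all(
                    t is None or t == v for t, v in zip(template, tup)):
                return tup
        return None

    def rd(self, template):             # blocking read, tuple stays
        with self._cond:
            while (tup := self._match(template)) is None:
                self._cond.wait()
            return tup

    def in_(self, template):            # blocking take, tuple is removed
        with self._cond:
            while (tup := self._match(template)) is None:
                self._cond.wait()
            self._tuples.remove(tup)
            return tup

space = TupleSpace()
space.out(("temperature", 21))
print(space.rd(("temperature", None)))   # ('temperature', 21)
```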

Relevance: 10.00%

Abstract:

Aerosol particles and water vapour are two important constituents of the atmosphere. Their interaction, i.e. the condensation of water vapour on particles, brings about the formation of cloud, fog and raindrops, drives the water cycle on the earth, and plays a role in climate change. Understanding the roles of water vapour and aerosol particles in this interaction has become an essential part of understanding the atmosphere. In this work, heterogeneous nucleation on pre-existing aerosol particles by the condensation of water vapour in the flow of a capillary nozzle was investigated, including theoretical and numerical modelling as well as experiments on this condensation process. Based on the results of the theoretical and numerical modelling, the idea of designing a new nozzle condensation nucleus counter (Nozzle-CNC), which utilises the capillary nozzle to create an expanding water-saturated air flow, was put forward, and various experiments were carried out with this Nozzle-CNC under different experimental conditions. Firstly, the air stream in the long capillary nozzle with an inner diameter of 1.0 mm was modelled as a steady, compressible and heat-conducting turbulent flow with the CFX-FLOW3D computational program. An adiabatic and isentropic cooling in the nozzle was found. A supersaturation can be created in the nozzle if the inlet flow is water saturated, and its value depends principally on the flow velocity or flow rate through the nozzle. Secondly, a model for particle condensational growth in the air stream was developed: an extended Mason diffusion growth equation, with a size correction for particles beyond the continuum regime and a correction for finite particle Reynolds number for particles in an accelerating state. The modelling results show rapid condensational growth of aerosol particles, especially fine particles, in the nozzle stream. On the one hand, this growth may induce evident 'over-sizing' and 'over-numbering' effects in aerosol measurements, since nozzle designs are widely employed for producing accelerated and focused aerosol beams in aerosol instruments like the optical particle counter (OPC) and the aerodynamic particle sizer (APS). On the other hand, it can be applied in constructing the Nozzle-CNC. Thirdly, based on the optimisation of the theoretical and numerical results, the new Nozzle-CNC was built. Experiments with this instrument were carried out under various conditions of flow rate, ambient temperature, and fraction of aerosol in the total flow. An interesting exponential relation between the saturation in the nozzle and the number concentration of atmospheric nuclei, including hygroscopic nuclei (HN), cloud condensation nuclei (CCN), and traditionally measured atmospheric condensation nuclei (CN), was found. This relation differs from the relation for the number concentration of CCN obtained by other researchers. The minimum detectable size of this Nozzle-CNC is 0.04 μm. Although further improvements are still needed, this Nozzle-CNC has several advantages in comparison with other CNCs: no condensation delay, as particles larger than the critical size grow simultaneously; low diffusion losses of particles; little water condensation on the inner wall of the instrument; adjustable saturation, and therefore a wide counting region; and no need for the calibration required with non-water condensing substances.
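
For reference, the growth model mentioned above extends Mason's diffusion growth equation. Its standard continuum-regime form, before the size and Reynolds-number corrections introduced in this work, is (quoted from standard cloud physics, not from the thesis itself):

```latex
% Mason's droplet growth equation (standard continuum form; the thesis adds
% size and Reynolds-number corrections not shown here)
r\,\frac{dr}{dt} = \frac{S-1}{F_k + F_d}, \qquad
F_k = \left(\frac{L}{R_v T} - 1\right)\frac{L\,\rho_w}{K\,T}, \qquad
F_d = \frac{\rho_w R_v T}{D\,e_s(T)}
```

where r is the droplet radius, S the saturation ratio, L the latent heat of vaporization, K the thermal conductivity of air, D the diffusivity of water vapour, e_s(T) the saturation vapour pressure, ρ_w the density of liquid water and R_v the specific gas constant of water vapour.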

Relevance: 10.00%

Abstract:

Nano(bio)science and nano(bio)technology attract growing and tremendous interest in both academia and industry. They are undergoing rapid development on many fronts, such as genomics, proteomics, systems biology, and medical applications. However, the lack of characterization tools for nano(bio)systems is currently considered a major limiting factor for the final establishment of nano(bio)technologies. Flow Field-Flow Fractionation (FlFFF) is a separation technique that is definitely emerging in the bioanalytical field, and the number of applications to nano(bio)analytes such as high-molar-mass proteins and protein complexes, sub-cellular units, viruses, and functionalized nanoparticles is constantly increasing. This can be ascribed to the intrinsic advantages of FlFFF for the separation of nano(bio)analytes. FlFFF is ideally suited to separating particles over a broad size range (1 nm-1 μm) according to their hydrodynamic radius (r_h). The fractionation is carried out in an empty channel by a flow stream of a mobile phase of any composition. For these reasons, fractionation proceeds without surface interaction of the analyte with packing or gel media, and there is no stationary phase able to induce mechanical or shear stress on nanosized analytes, which are therefore kept in their native state. Characterization of nano(bio)analytes is made possible after fractionation by interfacing the FlFFF system with detection techniques for morphological, optical or mass characterization. For instance, coupling FlFFF with multi-angle light scattering (MALS) detection allows absolute molecular weight and size determination, and mass spectrometry has brought FlFFF into the field of proteomics. The potential of coupling FlFFF with multi-detection systems is discussed in the first section of this dissertation. The second and third sections are dedicated to new methods developed for the analysis and characterization of samples of interest in the fields of diagnostics, pharmaceutics, and nanomedicine. The second section focuses on biological samples such as protein complexes and protein aggregates; in particular, it focuses on FlFFF methods developed to give new insights into a) the chemical composition and morphological features of blood serum lipoprotein classes, b) the time-dependent aggregation pattern of the amyloid protein Aβ1-42, and c) the aggregation state of antibody therapeutics in their formulation buffers. The third section is dedicated to the analysis and characterization of structured nanoparticles designed for nanomedicine applications. The results discussed indicate that FlFFF with on-line MALS and fluorescence detection (FD) may become an unparalleled methodology for the analysis and characterization of new, structured, fluorescent nanomaterials.
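
The size selectivity of FlFFF rests on the fact that retention is governed by the analyte's diffusion coefficient, which the Stokes-Einstein relation ties to the hydrodynamic radius (standard theory, not specific to this dissertation):

```latex
% Stokes-Einstein relation underlying size-based FlFFF retention
D = \frac{k_B T}{6 \pi \eta \, r_h}
```

Here D is the translational diffusion coefficient, k_B Boltzmann's constant, T the absolute temperature and η the viscosity of the mobile phase; smaller particles diffuse farther from the accumulation wall into faster flow laminae and therefore elute earlier in normal-mode FlFFF.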

Relevance: 10.00%

Abstract:

It has been almost fifty years since Harry Eckstein's classic monograph, A Theory of Stable Democracy (Princeton, 1961), where he sketched out the basic tenets of the "congruence theory", which was to become one of the most important and innovative contributions to understanding democratic rule. His next work, Division and Cohesion in Democracy (Princeton University Press, 1966), is designed to serve as a plausibility probe for this theory and is a case study of a Northern democratic system, Norway. What is more, this line of his work best exemplifies the contribution Eckstein made to the methodology of comparative politics through his seminal article, "Case Study and Theory in Political Science" (in Greenstein and Polsby, eds., Handbook of Political Science, 1975), on the importance of the case study as an approach to empirical theory. This article demonstrates the special utility of "crucial case studies" in testing theory, thereby undermining the accepted wisdom in comparative research that the larger the number of cases the better. Although not along the same lines, but shifting the case-study unit of research, I intend to take up the challenge and build upon an equally unique political system, the Swedish one. Bearing in mind the peculiarities of the Swedish political system, my unit of analysis is further restricted to the Swedish Social Democratic Party, the Svenska Arbetare Partiet (SAP). My research nevertheless stays within the methodological framework of case-study theory inasmuch as it focuses on a single political system and party. The SAP's endurance in government office and its electoral success throughout half a century (as of the 1991 election, about 56 years, more than half a century, of uninterrupted social democratic "reign" in Sweden) are undeniably a performance no other social democratic party has yet achieved under democratic conditions. It is therefore legitimate to inquire into the exceptionality of this unique combination of political power. What were the different components of this dominant power position that made the SAP's stamina in governmental office possible? I will argue here that it was the end product of a combination of multifarious factors: a key position in the party system, strong party leadership and organization, and a carefully designed strategy regarding class politics and welfare policy. My research is divided into three main parts: the historical incursion, the 'welfare' part and the 'environment' part. The first part is a historical account of the main political events and issues relevant to my case study. Chapter 2 is devoted to the historical events of the 1920-1960 period: the Saltsjöbaden Agreement, the series of workers' strikes in the 1920s and the SAP's inception. It describes the SAP's ascent to power in the mid 1930s and the party's ensuing strategies for winning and keeping political office, that is, its economic program and key economic goals. The following chapter, chapter 3, explores the next period, from the 1960s to the 1990s, and covers the party's troubled political times, its peak and the beginnings of its decline. The 1960s are relevant for the SAP's planning of a long-term economic strategy, the Rehn-Meidner model, a new way of macroeconomic steering based on the Keynesian model but adapted to the new economic realities of welfare capitalist societies. The second and third parts of this study develop several hypotheses related to the SAP's 'dominant position' (endurance in politics and in office) and then test them. Mainly, the twin issues of economics and environment are raised and their political relevance for the party analyzed. On the one hand, globalization and its spillover effects on the Swedish welfare system are important causal factors in explaining the transformative socio-economic challenges the party had to cope with. On the other hand, Europeanization and environmental change greatly influenced the SAP's foreign policy choices and its domestic electoral strategies. The implications of globalization for the Swedish welfare system are the subject of two chapters, chapters four and five, while the consequences of Europeanization are treated at length in the third part of this work, in chapters six and seven. At first sight, the link between foreign policy and electoral strategy may seem difficult to prove and odd, to say the least. In the SAP's case, however, there is a body of literature and public-opinion statistical data showing that governmental domestic policy and party politics depend tightly on foreign policy decisions and sovereignty issues. Again, these country characteristics and peculiar causal relationships are outlined in the first chapters and explained in the second and third parts. The sixth chapter explores the presupposed relationship between Europeanization and environmental policy, on the one hand, and the SAP's environmental policy formulation and simultaneous agenda-setting at the international level, on the other. This chapter describes Swedish leadership in environmental policy formulation on two simultaneous fronts and across two different time spans. The last chapter, chapter eight, develops a conclusion, explores the alternative theories plausible in explaining the outlined hypotheses, and points out the reasons why these theories do not fit as valid alternative explanations to my systemic corporatism thesis as the main causal factor determining the SAP's 'dominant position'. Among the alternative theories, I consider L. Traedgaardh's and Bo Rothstein's historical-exceptionalism thesis and the public opinion thesis, which alone are not able to explain the half-century social democratic endurance in government in the Swedish case.

Relevance: 10.00%

Abstract:

Laser-driven ion acceleration is a burgeoning field of research that has attracted a growing number of scientists since the first results, reported in 2000, obtained by irradiating thin solid foils with high-power laser pulses. The growing interest is driven by the peculiar characteristics of the produced bunches, the compactness of the whole accelerating system and the very short accelerating length of these all-optical accelerators. Fervent theoretical and experimental work has been done since then. An important part of the theoretical study is done by means of numerical simulations, and the most widely used technique exploits PIC ("Particle In Cell") codes. This thesis describes the PIC code AlaDyn, developed by our research group using innovative algorithms. My work has been devoted to the development of the code and to the investigation of laser-driven ion acceleration for different target configurations. Two target configurations for proton acceleration are presented, together with the results of 2D and 3D numerical investigations. One configuration consists of a solid foil with a low-density layer attached to the irradiated side. The nearly critical plasma of the foam layer allows a very high energy absorption by the target and an increase of the proton energy by up to a factor of 3 compared with the "pure" TNSA configuration. The differences of this regime with respect to standard TNSA are described. The case of nearly critical density targets has been investigated with 3D simulations. In this case the laser travels throughout the plasma and exits on the rear side. During the propagation, the laser drills a channel and induces a magnetic vortex that, expanding on the rear side of the target, is the source of a very intense electric field. The protons of the plasma are strongly accelerated, up to energies of 100 MeV using a 200 PW laser.
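
As background on the PIC method itself, one simulation step alternates charge deposition on a grid, a field solve, and a particle push. The sketch below is a generic 1D electrostatic textbook example, not AlaDyn's (electromagnetic, higher-order) algorithms:

```python
# Generic 1D electrostatic PIC step (periodic plasma, normalized units):
# deposit charge -> solve Poisson via FFT -> gather E -> push particles.
# A textbook illustration of the PIC scheme, unrelated to AlaDyn's code.
import numpy as np

NG, NP, L, DT = 64, 10000, 2 * np.pi, 0.1
dx = L / NG
rng = np.random.default_rng(0)
x = rng.uniform(0, L, NP)                     # electron macro-particle positions
v = rng.normal(0, 0.1, NP) + 0.2 * np.sin(x)  # thermal + perturbation velocity
q = -L / NP                                   # macro-charge (ions: fixed background)

def step(x, v):
    # 1) deposit charge with linear (cloud-in-cell) weighting
    g = (x / dx).astype(int) % NG
    w = x / dx - (x / dx).astype(int)
    rho = np.bincount(g, (1 - w) * q, NG) + np.bincount((g + 1) % NG, w * q, NG)
    rho = rho / dx + 1.0                      # add neutralizing ion background
    # 2) solve Poisson d2(phi)/dx2 = -rho in Fourier space, then E = -d(phi)/dx
    k = 2 * np.pi * np.fft.fftfreq(NG, d=dx)
    k[0] = 1.0                                # avoid division by zero (mean mode)
    phi_k = np.fft.fft(rho) / k**2
    phi_k[0] = 0.0
    E = -np.real(np.fft.ifft(1j * k * phi_k))
    # 3) gather E at particle positions and push (simple explicit update for
    #    brevity; production codes use leapfrog/Boris integrators)
    Ep = (1 - w) * E[g] + w * E[(g + 1) % NG]
    v = v - Ep * DT                           # charge/mass = -1 for electrons
    x = (x + v * DT) % L
    return x, v

for _ in range(100):
    x, v = step(x, v)
print("mean kinetic energy:", 0.5 * np.mean(v**2))
```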

Relevance: 10.00%

Abstract:

Studies on the expression of the inducible NO synthase (NOS2) document frequent expression of this enzyme in tumors of various tissues. To date, however, it has remained unclear whether NOS2 expression in tumor cells can influence their apoptotic elimination by cytotoxic T cells. In the present work, the consequences of endogenous NO synthesis for the apoptosis sensitivity of HEK293 cells were investigated. To separate primary NO effects from NO-induced secondary (compensatory) changes, an inducible vector system was used. NOS2 was first cloned in HEK293 cells under the control of an ecdysone-sensitive promoter. Regulatable NOS2 clones could be selected that express NOS2 and synthesize NO in a dose-dependent manner after ponasterone treatment. NOS2 expression was demonstrated by Western blot analysis and immunofluorescence staining, and NO production was measured with the Griess reaction. The influence of NO on CD95-mediated apoptosis was then analyzed in the NOS2-induced cells. After stimulation of the CD95 receptor, a clear correlation between the apoptosis rate and NOS2 expression was observed. Co-culture experiments with peptide-specific cytotoxic T cells showed that NO-producing target cells were eliminated more effectively. Treatment of the cells with TRAIL also resulted in a higher apoptosis rate in NO-producing cells. Further analysis of the signaling pathways affected by NO revealed an involvement of ER-stress-mediated apoptosis pathways, as shown by the upregulation of the ER stress protein Grp78 (BiP) after NOS2 expression and the cleavage of the ER-localized caspase-4. In addition, a faster loss of the mitochondrial membrane potential, dependent on NOS2 expression, could be demonstrated. Furthermore, the effect of sustained NO exposure on the apoptosis sensitivity of the cells was investigated. Even without additional CD95 stimulation, continuous NOS2 expression induced apoptosis in the EcR293-NOS2 cells within a few days. This sustained treatment led to the death of almost the entire culture. Some cells, however, survived this treatment and grew into cell clones. These NO-resistant clones could be isolated. They showed an additional resistance to CD95-mediated apoptotic signals and were better protected against attack by peptide-specific CTLs. The apoptosis resistance was maintained even after prolonged culture and appears to be based on NO-induced genotoxicity. This work shows that chronic NO treatment alone can select for apoptosis-resistant cells.

Relevance: 10.00%

Abstract:

Verification assesses the quality of quantitative precipitation forecasts (QPFs) against observations and provides indications of systematic model errors. With the feature-based technique SAL, simulated precipitation distributions are analyzed with respect to (S)tructure, (A)mplitude and (L)ocation. For some years, numerical weather prediction models have been run with grid spacings that allow deep convection to be simulated without parameterization, and the question arises whether these models deliver better forecasts. The high-resolution hourly observational dataset used in this work is a combination of radar and station measurements. Using the German COSMO models as an example, it is shown, on the one hand, that the models of the newest generation simulate the mean diurnal cycle better, albeit with too weak a maximum occurring somewhat too late; the models of the old generation, in contrast, produce too strong a maximum occurring considerably too early. On the other hand, the new type of model achieves a better simulation of the spatial distribution of precipitation through a clear reduction of the windward/leeward problem. To quantify these subjective assessments, daily QPFs from four models for Germany over an eight-year period were examined with SAL as well as with classical measures. The higher-resolution models simulate more realistic precipitation distributions (better in S), but hardly any difference appears in the other components. A further aspect is that the model with the coarsest resolution (ECMWF) is rated clearly best by the RMSE, which illustrates the 'double penalty' problem. Combining the three components of SAL yields the result that, especially in summer, the most finely resolved model (COSMO-DE) performs best, mainly owing to a more realistic structure, so that SAL provides helpful information and confirms the subjective assessment. In 2007 the projects COPS and MAP D-PHASE took place, offering the opportunity to compare 19 models from three model categories with respect to their forecast performance in southwest Germany for accumulation periods of 6 and 12 hours. The most notable results are that (i) the smaller the grid spacing of the models, the more realistic the simulated precipitation distributions; (ii) regarding precipitation amount, the high-resolution models simulate less precipitation, i.e. usually too little; and (iii) the location component is simulated worst by all models. The analysis of the forecast performance of these model types for convective situations shows clear differences. In high-pressure situations, the models without convection parameterization are not able to simulate the convection, whereas the models with convection parameterization produce the right amount but structures that are too widespread. For convective events associated with fronts, both model types are able to simulate the precipitation distribution, with the high-resolution models delivering more realistic fields. This weather-regime-based investigation is carried out more systematically using the convective time scale. A climatology compiled for Germany for the first time shows that the frequency of this time scale decreases towards larger values following a power law. The SAL results are dramatically different for the two regimes: for small values of the convective time scale they are good, whereas for large values the structure as well as the amplitude are clearly overestimated. For precipitation forecasts with very high temporal resolution, the influence of timing errors becomes increasingly important. By optimizing/minimizing the L component of SAL within a time window (+/-3 h) centred on the observation time, these errors can be determined. It is shown that, at the optimal time shift, the structure and amplitude of the QPFs for COSMO-DE improve, demonstrating more clearly the fundamental ability of the model to simulate the precipitation distribution realistically.
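
For reference, the amplitude component and the first part of the location component of SAL, as defined in the feature-based verification literature (quoted from general knowledge of the method, not from this thesis), are:

```latex
% SAL amplitude component: normalized difference of the domain-averaged
% precipitation D(R) between model (mod) and observations (obs); A in [-2, 2]
A = \frac{D(R_{\mathrm{mod}}) - D(R_{\mathrm{obs}})}
         {0.5\,\left[ D(R_{\mathrm{mod}}) + D(R_{\mathrm{obs}}) \right]}

% First part of the location component: normalized distance between the
% centres of mass x(R) of the two precipitation fields; d is the largest
% distance between two boundary points of the domain
L_1 = \frac{\left| \mathbf{x}(R_{\mathrm{mod}}) - \mathbf{x}(R_{\mathrm{obs}}) \right|}{d}
```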

Relevance: 10.00%

Abstract:

Biogenic volatile organic compounds (BVOCs) are emitted in large quantities from terrestrial ecosystems, particularly forests and grasslands, into the lower troposphere. Exchange fluxes of BVOCs are important in tropospheric chemistry because they play a significant role in ozone and aerosol formation. Nevertheless, the temporal and spatial variation of BVOC emissions and their role in the formation and growth of aerosols remain uncertain. The focus of this work is the in-situ application of proton transfer reaction mass spectrometry (PTR-MS) and the measurement of biogenic volatile organic compounds in boreal, temperate and tropical forest ecosystems during three different field campaigns. The main advantage of the PTR-MS technique is its high measurement frequency, which allows changes in the atmosphere due to transport, mixing and chemistry to be followed online. The PTR-MS measurements were performed twice from the ground and once from a research aircraft. Chapter 3 presents the PTR-MS data collected during the aircraft campaign over the tropical rainforest. This study demonstrates the importance of boundary-layer dynamics for the distribution of trace gases by means of a one-dimensional single-column chemistry and climate model (SCM). The diurnal profile of isoprene between 14:00 and 16:15 local time showed a mean of 5.4 ppbv at canopy-top height and 3.3 ppbv above 300 m altitude, indicating that both turbulent exchange and the high reactivity of isoprene with the oxidants OH and ozone play an important role. After accounting for chemical losses and entrainment (mixing of air in and out at the top of the boundary layer), a flux of 8.4 mg isoprene m-2 h-1 under partly cloudy conditions was estimated for the tropical rainforest in the Guyana region, corresponding to a daily emission flux of 28 mg isoprene per square metre. Chapter 4 discusses the measurements performed at a hilltop site at temperate latitudes in southern Germany. At this site, the boundary layer dropped below the site altitude at night, isolating the measurement location from emissions. As the boundary layer rose above the height of the site in the morning, the trapped nocturnal emissions could be observed within the surface layer. In addition, a clear increase in volatile organic compounds was measured when air masses passed over Munich or when polluted air masses were transported from the Po Valley across the Alps to Germany. Data from this campaign were used to examine the changes in the mixing ratios of volatile organic compounds associated with the passage of warm and cold fronts as well as with rain. Chapter 5 describes PTR-MS measurements from the northern boreal coniferous forest belt. Strong nocturnal inversions with low wind speed trapped the emissions from nearby pine forests and other BVOC sources, leading to high nocturnal BVOC mixing ratios. Particle events were analyzed in detail for day and night. The nocturnal particle events occurred synchronously with strong monoterpene maxima, although the second event involved nucleation and did not correlate with sulphuric acid. The monoterpene mixing ratios of over 16 ppbv were unexpectedly high for this season. Low wind speeds and the evaluation of backward trajectories point to a concentrated source near Hyytiälä. The optical stereoisomerism of the monoterpenes confirmed that the source is non-natural, since the [(+)-α-pinene]/[(−)-α-pinene] ratio is much higher than the natural ratio of the two enantiomers.
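
As background on the measurement principle (standard PTR-MS theory, not specific to this work): compounds R whose proton affinity exceeds that of water are chemically ionized by hydronium ions, and for small conversion the product-ion signal is approximately proportional to the analyte concentration:

```latex
% Proton transfer reaction; it proceeds if the proton affinity of R
% exceeds that of H2O
\mathrm{H_3O^+} + \mathrm{R} \longrightarrow \mathrm{RH^+} + \mathrm{H_2O}

% Pseudo-first-order product-ion buildup for small conversion,
% with k the reaction rate constant and t the drift (reaction) time
[\mathrm{RH^+}] \approx [\mathrm{H_3O^+}]_0 \, k \, [\mathrm{R}] \, t
```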

Relevance: 10.00%

Abstract:

Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation concerns the sizing of the pipes in the water distribution network (WDN) and/or specific parts of the network such as pumps and tanks, or attempts to analyse and optimise the reliability of a WDN. In this thesis, the author has analysed two different WDNs (the Anytown and Cabrera city networks), solving a multi-objective optimisation problem (MOOP). The two main objectives in both cases were the minimisation of energy cost (€) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose, a decision-support-system generator for multi-objective optimisation, GANetXL, developed by the Centre for Water Systems at the University of Exeter, was used. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis is required. The main algorithm used was NSGA-II, a second-generation algorithm for multi-objective optimisation, which gave us the Pareto fronts of each configuration. The first experiment carried out concerned the network of Anytown city, a large network whose pump station of four fixed-speed parallel pumps boosts the water supply. The main intervention was to replace these pumps with variable-speed-driven pumps (VSDPs), installing inverters capable of varying their speed during the day. In this way, large energy and cost savings were achieved, along with a reduction in the number of pump switches. The results of this research are thoroughly illustrated in chapter 7, with comments and a variety of graphs for the different configurations. The second experiment concerned the network of Cabrera city. This smaller WDN had a single fixed-speed (FS) pump in the system. The optimisation problem was the same: the minimisation of the energy consumption and, in parallel, the minimisation of TNps. The same optimisation tool (GANetXL) was used. The main scope was to carry out several different experiments covering a wide variety of configurations, using a different pump (but this time keeping the FS mode), different tank levels, different pipe diameters and different emitter coefficients. All these different modes produced a large number of results, which are compared in chapter 8. In conclusion, it should be said that the optimisation of WDNs is a very interesting field with a vast space of options to deal with: a large number of algorithms to choose from, different techniques and configurations, and different decision-support-system generators. The researcher has to be ready to roam between these choices until a satisfactory result shows that a good optimisation point has been reached.
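
The core of the approach is the pair of objective functions and NSGA-II's non-dominated ranking. The sketch below illustrates both on a toy 24-hour pump schedule; the tariff, the pump power and the absence of hydraulic constraints are invented placeholders, since the real study evaluates every schedule hydraulically with EPANET through GANetXL.

```python
# Toy version of the thesis's two objectives (daily energy cost, total number
# of pump switches) plus the non-dominated sorting at the heart of NSGA-II.
# Tariff and power figures are hypothetical; no hydraulic constraints applied.
import random

TARIFF = [0.05] * 7 + [0.15] * 14 + [0.05] * 3   # hypothetical EUR/kWh per hour
PUMP_KW = 30.0                                    # hypothetical pump power draw

def objectives(schedule):
    """schedule: 24 binary values, pump ON/OFF for each hour of the day."""
    cost = sum(on * PUMP_KW * price for on, price in zip(schedule, TARIFF))
    switches = sum(a != b for a, b in zip(schedule, schedule[1:]))
    return cost, switches

def dominates(f, g):
    """True if objective vector f is at least as good as g and better in one."""
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

def pareto_front(population):
    evaluated = [(s, objectives(s)) for s in population]
    return [(s, f) for s, f in evaluated
            if not any(dominates(g, f) for _, g in evaluated)]

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(24)] for _ in range(200)]
for sched, (cost, sw) in sorted(pareto_front(pop), key=lambda t: t[1]):
    print(f"cost = {cost:6.2f} EUR, switches = {sw}")
```

A real evaluation would also reject schedules that violate tank levels or demand; the point here is only how the Pareto front of (energy cost, pump switches) emerges from non-dominated sorting.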