Abstract:
This dissertation takes a step towards providing a better understanding of post-socialist welfare state development from a theoretical as well as an empirical perspective. The overall analytical goal of this thesis has been to critically assess the development of social policies in Estonia, Latvia and Lithuania, using them as illustrative examples of post-socialist welfare state development in the light of the theories, approaches and typologies that have been developed to study affluent capitalist democracies. The four studies included in this dissertation pursue this common aim in a number of specific ways. The first study places the ideal-typical welfare state models of the Baltic States within the well-known welfare state typologies. At the same time, it provides a rich overview of the main social security institutions in the three countries by comparing them with each other and with the previous structures of the Soviet period. It examines the social insurance institutions of the Baltic States (old-age pensions, unemployment insurance, short-term benefits, sickness, maternity and parental insurance and family benefits) with respect to conditions of eligibility, replacement rates, financing and contributions. The findings of this study indicate that the Latvian social security system can generally be labelled as a mix of the basic security and corporatist models. The Estonian social security system can also be characterised as a mix of the basic security and corporatist models, although it contains some weak elements of the targeted model. The institutional changes in the Lithuanian social security system appear to have led to a combination of the basic security and targeted models of the welfare state. Nevertheless, as the example of the three Baltic States shows, there is diversity in how these countries solve problems within the field of social policy. In studying the social security schemes in detail, some common features were found that could be attributed to all three countries. The critical analysis of the main social security institutions of the Baltic States in this study therefore gives strong supporting evidence in favour of identifying the post-socialist regime type that is already gaining acceptance within comparative welfare state research. Study Two compares the system of social maintenance and insurance in the Soviet Union, which was in force in the three Baltic countries before their independence, with the currently existing social security systems. The aim of the essay is to highlight the forces that have influenced the transformation of social policy from its former highly universal, albeit authoritarian, form to the less universal, social insurance-based systems of present-day Estonia, Latvia and Lithuania. This study demonstrates that the welfare-economy nexus is not the only important factor in the development of social programs. The results of this analysis revealed that people's attitudes towards distributive justice and the developmental level of civil society also play an important part in shaping social policies. The shift to individualism in people's mentality and the decline of the labour movement, or, to be more precise, the decline in trade union membership and influence, do nothing to promote the development of social rights in the Baltic countries and hinder the expansion of social policies. The legacy of the past has been another important factor in shaping social programs.
It can be concluded that social policy should be studied as embedded not only in the welfare-economy nexus, but also in the societal, historical and cultural nexus of a given society. Study Three discusses the views of the state elites on family policy within a wider theoretical setting covering family policy and social policy in a broader sense, and attempts to expand this analytical framework to include other post-socialist countries. The aim of this essay is to explore the various views of the state elites in the Baltics concerning family policy and, in particular, family benefits, as one of the possible explanations for the observed policy differences. The qualitative analyses indicate that the Baltic States differ significantly with regard to the motives behind their family policies. Lithuanian decision-makers seek to reduce poverty among families with children and enhance the parents' responsibility for bringing up their children. Latvian policy-makers act so as to increase the birth rate and create equal opportunities for children from all families. Estonian policy-makers seek to create equal opportunities for all children, and the desire to enhance gender equality is more visible in Estonia than in the other two countries. It is strongly arguable that there is a link between the underlying motives and the kinds of family benefits in a given country. This study thus indicates how intimately the attitudes of state bureaucrats, policy-makers, the political elite and researchers shape social policy. It confirms that family policy is a product of the prevailing ideology within a country, while the potential influence of globalisation and Europeanisation is also detectable. The final essay takes into account the opinions of welfare users and examines the performance of the institutionalised family benefits by relying on the recipients' opinions regarding these benefits. The opinions of the population as a whole regarding government efforts to help families are compared with those of the welfare users. Various family benefits are evaluated according to the recipients' satisfaction with those benefits, and the contemporaneous levels of subjective satisfaction with the welfare programs are related to the absolute level of expenditure on each program. The findings of this paper indicate that, in Latvia, people experience a lower level of success regarding state-run family insurance institutions than in Lithuania and Estonia. This is deemed to be because the cash benefits for families and children in Latvia are, on average, seen as only marginally influencing the overall financial situation of the families concerned. In Lithuania and Estonia, the overwhelming majority think that the family benefit systems improve the financial situation of families. It appears that recipients evaluated universal family benefits less positively than targeted benefits. Some universal benefits negatively influenced the level of general satisfaction with the family benefits system in the countries studied. This study puts forward a discussion about whether universalism is always more legitimate than targeting. In transitional economies, in which resources are highly constrained, some forms of universal benefits could turn out to be very expensive in relative terms, without being seen as useful or legitimate forms of help to families.
In sum, by closely examining the different aspects of social policy, this dissertation goes beyond the over-generalisation of Eastern European welfare state development and, instead, takes a more detailed look at what is really going on in these countries through the examples of Lithuania, Latvia and Estonia. Another important contribution made by this study is that it revives 'western' theoretical knowledge through 'eastern' empirical evidence and provides the opportunity to expand the theoretical framework to post-socialist societies.
Abstract:
Dielectric Elastomers (DE) are incompressible dielectrics which can undergo deviatoric (isochoric) finite deformations in response to applied large electric fields. Thanks to this strong electro-mechanical coupling, DE intrinsically offer great potential for conceiving novel solid-state mechatronic devices, in particular linear actuators, which are more integrated, lightweight, economical, silent, resilient and disposable than equivalent devices based on traditional technologies. Such systems may have a huge impact in applications where traditional technology cannot cope with limits on weight or bulk, or with problems involving interaction with humans or unknown environments. Fields such as medicine, domotics, entertainment, aerospace and transportation may profit. For actuation, DE are typically shaped into thin films coated with compliant electrodes on both sides and stacked one on top of the other to form a multilayered DE. DE-based Linear Actuators (DELA) are made entirely of polymeric materials, and their overall performance is strongly influenced by several interacting factors: first, the electromechanical properties of the film; second, the mechanical properties and geometry of the polymeric frame designed to support the film; and finally, the driving circuits and activation strategies. In the last decade, much effort has been devoted to the development of analytical and numerical models that can explain and predict the hyperelastic behavior of different types of DE materials. Nevertheless, at present, the use of DELA is limited. The main reasons are 1) the lack of quantitative and qualitative models of the actuator as a whole system and 2) the lack of a simple and reliable design methodology. In this thesis, a new point of view in the study of DELA is presented which takes into account the interaction between the DE film and the film-supporting frame. Hyperelastic models of the DE film are reported which are capable of modeling both the DE and the compliant electrodes. The supporting frames are analyzed and designed as compliant mechanisms using pseudo-rigid-body models and subsequent finite element analysis. A new design methodology is reported which optimizes actuator performance by allowing its inherent stiffness to be chosen specifically. As a particular case, the methodology focuses on the design of constant-force actuators. This class of actuators exemplifies how force control can be greatly simplified. Three new DE actuator concepts are proposed which demonstrate the effectiveness of the proposed method.
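For reference, the electromechanical coupling exploited in DELA is commonly summarized by the effective electrostatic pressure acting across a film with compliant electrodes (a standard relation from the DE literature, following Pelrine et al., not a result specific to this thesis):

p = \varepsilon_0 \varepsilon_r E^2 = \varepsilon_0 \varepsilon_r (V/t)^2

where \varepsilon_0 is the vacuum permittivity, \varepsilon_r the relative permittivity of the elastomer, V the applied voltage and t the current film thickness; incompressibility converts this thickness-wise compression into the in-plane expansion used for actuation.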
Abstract:
Hydrologic risk (and the closely related hydro-geologic risk) is, and has always been, a highly relevant issue, due to the severe consequences that flooding, and waters in general, may provoke in terms of human and economic losses. Floods are natural phenomena, often catastrophic, and cannot be avoided, but their damages can be reduced if they are predicted sufficiently in advance. For this reason, flood forecasting plays an essential role in hydro-geological and hydrological risk prevention. Thanks to the development of sophisticated meteorological, hydrologic and hydraulic models, flood forecasting has made significant progress in recent decades; nonetheless, models are imperfect, which means that we are still left with residual uncertainty about what will actually happen. This type of uncertainty is what is discussed and analyzed in this thesis. In operational problems, the ultimate aim of a forecasting system is not to reproduce the river's behavior: that is only a means for reducing the uncertainty about what will happen as a consequence of a precipitation event. In other words, the main objective is to assess whether or not preventive interventions should be adopted and which operational strategy may represent the best option. The main problem for a decision maker is to interpret model results and translate them into an effective intervention strategy. To make this possible, it is necessary to clearly define what is meant by uncertainty, since the literature is often confused on this issue. Therefore, the first objective of this thesis is to clarify this concept, starting with a key question: should the choice of the intervention strategy be based on an evaluation of the model prediction, that is, its ability to represent reality, or on an evaluation of what will actually happen on the basis of the information given by the model forecast? Once this idea is made unambiguous, the other main concern of this work is to develop a tool that can provide effective decision support, making objective and realistic risk evaluations possible. In particular, such a tool should provide an uncertainty assessment that is as accurate as possible. This means primarily three things: it must correctly combine all the available deterministic forecasts, it must assess the probability distribution of the predicted quantity, and it must quantify the flooding probability. Furthermore, given that the time to implement prevention strategies is often limited, the flooding probability has to be linked to the time of occurrence. For this reason, it is necessary to quantify the flooding probability within a time horizon related to the time required to implement the intervention strategy, and it is also necessary to assess the probability of the flooding time.
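To make the notion of a flooding probability within a time horizon concrete, the following is a minimal sketch (an illustration under assumed inputs, not code from the thesis; all names and parameter values are invented) that estimates, from an ensemble of forecast hydrographs, the probability of exceeding a flooding threshold within the lead time available for intervention, together with the distribution of the flooding time:

    import numpy as np

    def flood_risk(ensemble, threshold, horizon_steps):
        # ensemble: (n_members, n_steps) array of forecast discharges
        # threshold: discharge above which flooding occurs
        # horizon_steps: lead-time steps available to implement the intervention
        window = ensemble[:, :horizon_steps]
        exceeds = window > threshold
        hit = exceeds.any(axis=1)
        p_flood = hit.mean()                        # P(flooding within the horizon)
        flood_times = exceeds[hit].argmax(axis=1)   # first exceedance step per member
        return p_flood, flood_times

    # Synthetic example: a rising flood wave plus forecast uncertainty
    rng = np.random.default_rng(0)
    base = np.linspace(100.0, 400.0, 48)
    ens = base + 0.1 * rng.normal(0, 80, size=(500, 48)).cumsum(axis=1)
    p, t = flood_risk(ens, threshold=250.0, horizon_steps=24)
    print(f"P(flood within horizon) = {p:.2f}; median flooding step = {np.median(t):.0f}")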
Abstract:
In this thesis a connection between triply factorised groups and nearrings is investigated. A group G is called triply factorised by its subgroups A, B, and M if G = AM = BM = AB, where M is normal in G and the intersections of A and of B with M are trivial. There is a well-known connection between triply factorised groups and radical rings: if the adjoint group of a radical ring operates on its additive group, the semidirect product of those two groups is triply factorised. Conversely, if G = AM = BM = AB is a triply factorised group with abelian subgroups A, B, and M, then G can be constructed from a suitable radical ring, provided the intersection of A and B is trivial. In these triply factorised groups the normal subgroup M is always abelian. In this thesis the construction of triply factorised groups is generalised using nearrings instead of radical rings. Nearrings are a generalisation of rings in the sense that their additive groups need not be abelian and only one distributive law holds. Furthermore, it is shown that every triply factorised group G = AM = BM = AB can be constructed from a nearring if A and B intersect trivially. Moreover, the structure of nearrings is investigated in detail. In particular, local nearrings are studied, since they are important for the construction of triply factorised groups. Given an arbitrary p-group N, a method is presented for constructing a local nearring such that the triply factorised group obtained from this nearring contains N as a subgroup of the normal subgroup M. Finally, all local nearrings whose group of units is dihedral are classified. It turns out that these nearrings are always finite and that their order does not exceed 16.
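In symbols, the central definition and the classical radical-ring construction can be summarized as follows (a hedged sketch of standard facts, not of the thesis's generalisation): G is triply factorised by A, B, M when

G = AM = BM = AB, \quad M \trianglelefteq G, \quad A \cap M = B \cap M = 1.

For a radical ring R, the adjoint group R^\circ is R under the circle operation a \circ b = a + b + ab, and the semidirect product G = M \rtimes A with M = (R,+), A = R^\circ and the natural action m \mapsto m + ma carries such a triple factorisation, with a third "diagonal" subgroup completing it; the nearring construction of this thesis generalises exactly this picture to non-abelian M.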
Abstract:
The modern stratigraphy of clastic continental margins results from the interaction of several geological processes acting on different time scales, among which sea-level oscillations, sediment supply fluctuations and local tectonics are the main mechanisms. During the past three years my PhD focused on understanding the impact of each of these processes on the deposition of the central and northern Adriatic sedimentary successions, with the aim of reconstructing and quantifying the Late Quaternary eustatic fluctuations. In the last few decades, several authors have tried to quantify past eustatic fluctuations through the analysis of direct sea-level indicators, such as drowned barrier-island deposits or coral reefs, or through indirect methods, such as oxygen isotope ratios (δ18O) or modeling simulations. Sea-level curves obtained from direct sea-level indicators record a composite signal, formed by the contributions of the global eustatic change and of regional factors such as tectonic processes or glacial-isostatic rebound: the eustatic signal has to be obtained by removing the contributions of these other mechanisms. To obtain the most realistic sea-level reconstructions it is therefore important to quantify the tectonic regime of the central Adriatic margin. This result has been achieved by integrating a numerical approach with the analysis of high-resolution seismic profiles. In detail, the subsidence trend obtained from the geohistory analysis and backstripping of borehole PRAD1.2 (a 71 m continuous borehole drilled in 185 m water depth, south of the Mid Adriatic Deep - MAD - during the European Project PROMESS 1, Profile Across Mediterranean Sedimentary Systems, Part 1) has been confirmed by the analysis of lowstand paleoshorelines and by the benthic foraminifera associations investigated through the borehole. This work showed an evolution from an inner-shelf environment during Marine Isotopic Stage (MIS) 10 to upper-slope conditions during MIS 2. Once the tectonic regime of the central Adriatic margin had been constrained, it became possible to investigate the impact of sea-level and sediment supply fluctuations on the deposition of the Late Pleistocene-Holocene transgressive deposits. The Adriatic transgressive record (TST - Transgressive Systems Tract) is formed by three correlative sedimentary bodies deposited in less than 14 kyr since the Last Glacial Maximum (LGM): along the central Adriatic shelf and in the adjacent slope basin the TST is formed by marine units, while along the northern Adriatic shelf the TST is represented by coastal deposits in a backstepping configuration. The central Adriatic margin, characterized by a thick transgressive sedimentary succession, is the ideal site to investigate the impact of late Pleistocene climatic and eustatic fluctuations, among which Meltwater Pulses 1A and 1B and the Younger Dryas cold event. The central Adriatic TST is formed by a tripartite deposit bounded by two regional unconformities. In particular, the middle TST unit includes two prograding wedges, deposited in the interval between the two Meltwater Pulse events, as highlighted by several 14C age estimates, and likely records the Younger Dryas cold interval.
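As background for the backstripping step, the standard one-dimensional Airy relation (a textbook formula, e.g. after Steckler and Watts, not a result of this thesis) gives the water-loaded tectonic subsidence Y as

Y = S \, \frac{\rho_m - \bar{\rho}_s}{\rho_m - \rho_w} + W_d - \Delta_{SL} \, \frac{\rho_m}{\rho_m - \rho_w}

where S is the decompacted sediment thickness, \bar{\rho}_s its mean density, \rho_m and \rho_w the mantle and water densities, W_d the paleo-water depth and \Delta_{SL} the eustatic sea-level change relative to present.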
Modeling simulations, obtained with the two coupled models HydroTrend 3.0 and 2D-Sedflux 1.0C (developed by the Community Surface Dynamics Modeling System - CSDMS) and integrated with the analysis of high-resolution seismic profiles and core samples, indicate that: 1 - the prograding middle TST unit, deposited during the Younger Dryas, formed as a consequence of an increase in sediment flux, likely connected to a decline in vegetation cover in the catchment area due to the establishment of sub-glacial arid conditions; 2 - the two-stage prograding geometry was the consequence of a sea-level still-stand (or possibly a fall) during the Younger Dryas event. The northern Adriatic margin, characterized by a broad and gentle shelf (350 km wide, with a gentle 0.02° gradient to the SE), is the ideal site to quantify the timing of each step of the post-LGM sea-level rise. The modern shelf is characterized by sandy deposits of barrier-island systems in a backstepping configuration, showing younger ages at progressively shallower depths, which record the step-wise nature of the last sea-level rise. The age-depth model, obtained from dated samples of basal peat layers, is in good agreement with previously published sea-level curves and highlights the post-glacial eustatic trend. The interval corresponding to the Younger Dryas cold reversal, instead, is more complex: two coeval coastal deposits characterize the northern Adriatic shelf at very different water depths. Several explanations and different models can be attempted to explain this conundrum, but the problem remains unsolved.
Abstract:
The aim of this thesis is to investigate the effect of heterogeneities within the subducting plate on the dynamics of subduction. In particular, I study the motion of the trench for oceanic and continental subduction, first separately and then together in the same system, to understand how they interact. Understanding these features is fundamental to reconstructing the evolution of complex subduction zones, such as the Central Mediterranean. For this purpose, I developed 2D and 3D numerical models of oceanic and continental subduction in which the rheological, geometrical and compositional properties of the plates are varied. In these models, the trench and the overriding plate move self-consistently as a function of the dynamics of the system. The effect of continental subduction on trench migration is investigated at length. Results from a parametric study showed that, despite different rheological properties of the plates, all models with a uniform continental crust share the same kinematic behaviour: the trench starts to advance once the continent arrives at the subduction zone. Hence, the advancing mode in continental collision scenarios is at least partly driven by an intrinsic feature of the system. Moreover, the presence of a weak lower crust within the continental plate can lead to delamination. Indeed, by changing the viscosity of the lower crust, both delamination and slab detachment can occur. Delamination is favoured by a low viscosity of the lower crust, because this eases the mechanical decoupling between crust and lithospheric mantle. These features are observed in both 2D and 3D models, but the numerical results of the 3D models also showed that the rheology of the continental crust has a very strong effect on the dynamics of the whole system, since it influences not only the continental part of the plate but also the oceanic sides.
Abstract:
This doctoral thesis studies blood flow using a finite element code (COMSOL Multiphysics). The artery contains a Doppler catheter (in a concentric or off-centre position with respect to the axis of symmetry) or stenoses of various shapes and extents. The arteries are rigid, elastic or hyperelastic cylindrical solids, with diameters of 6 mm, 5 mm, 4 mm and 2 mm. The blood flow is laminar, in steady and transient regimes, and blood is modelled as a non-Newtonian Casson fluid, modified according to the formulation of Gonzales & Moraga. The numerical analyses are carried out in three-dimensional and two-dimensional domains, in the latter case analysing fluid-structure interaction. In the three-dimensional cases, the arteries (fluid-dynamic simulations) are infinitely rigid: once the pressure field has been obtained, a structural analysis is performed to determine the changes in cross-section and the persistence of the disturbance on the flow. In the three-dimensional cases with a catheter, the blood flow rate is determined by identifying three values (maximum, minimum and mean), while for the 2D and three-dimensional cases with stenotic arteries a pressure law reproduces the blood pulse. The mesh is triangular (2D) or tetrahedral (3D), refined at the wall and downstream of the obstacle to capture recirculation zones. Two appendices accompany the thesis, studying with CFD codes heat transfer in microchannels and the evaporation of water droplets in unconfined systems. Fluid dynamics in microchannels is analogous to haemodynamics in capillaries. The Eulerian-Lagrangian method (evaporation simulations) represents the mixed nature of blood. The microchannel part analyses the transient following the application of a time-varying heat flux, varying the inlet velocity and the microchannel dimensions. The droplet-evaporation investigation is a 3D parametric analysis, examining the weight of each parameter (external temperature, initial diameter, relative humidity, initial velocity, diffusion coefficient) to identify the one that most influences the phenomenon.
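For reference, the classical Casson law commonly adopted for blood (the Gonzales & Moraga formulation modifies its parameterization; those details belong to the thesis itself) relates shear stress \tau and shear rate \dot{\gamma} as

\sqrt{\tau} = \sqrt{\tau_y} + \sqrt{\mu_c \dot{\gamma}} \quad \text{for } \tau \ge \tau_y, \qquad \dot{\gamma} = 0 \text{ otherwise},

where \tau_y is the yield stress, reflecting red-cell aggregation at low shear rates, and \mu_c is the Casson viscosity.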
Abstract:
This thesis is devoted to the study of the properties of high-redshift galaxies in the epoch 1 < z < 3, when a substantial fraction of galaxy mass was assembled and the evolution of the star-formation rate density peaked. Following a multi-perspective approach and using the most recent and highest-quality data available (spectra, photometry and imaging), the morphologies and the star-formation properties of high-redshift galaxies were investigated. Through an accurate morphological analysis, the build-up of the Hubble sequence was placed around z ~ 2.5. High-redshift galaxies appear, in general, much more irregular and asymmetric than local ones. Moreover, the occurrence of morphological k-correction is less pronounced than in the local Universe. Different star-formation rate indicators were also studied. The comparison of ultraviolet- and optical-based estimates with the values derived from infrared luminosity showed that the traditional way of addressing dust obscuration is problematic at high redshift, and new models of dust geometry and composition are required. Finally, by means of stacking techniques applied to rest-frame ultraviolet spectra of star-forming galaxies at z ~ 2, the warm phase of galactic-scale outflows was studied. Evidence was found of escaping gas at velocities of ~ 100 km/s. By studying the correlation of interstellar absorption-line equivalent widths with galaxy physical properties, the intensity of the outflow-related spectral features was shown to depend strongly on a combination of the velocity dispersion of the gas and its geometry.
Abstract:
Population growth in urban areas is a world-wide phenomenon. According to a recent United Nations report, over half of the world's population now lives in cities. Numerous health and environmental issues arise from this unprecedented urbanization. Recent studies have demonstrated the effectiveness of urban green spaces and the role they play in improving both the aesthetics of a city and the quality of life of its residents. In particular, urban green spaces provide ecosystem services such as improvement of urban air quality by removing pollutants that can cause serious health problems, carbon storage, carbon sequestration, and climate regulation through shading and evapotranspiration. Furthermore, epidemiological studies controlling for age, sex, marital status and socio-economic status have provided evidence of a positive relationship between green space and the life expectancy of senior citizens. However, there is little information on the role of public green spaces in mid-sized cities in northern Italy. To address this need, a study was conducted to assess the ecosystem services of urban green spaces in the city of Bolzano, South Tyrol, Italy. In particular, we quantified the cooling effect of urban trees and the hourly amount of pollution removed by the urban forest. The information was gathered from field data collected through local hourly air-pollution readings, a tree inventory and simulation models. During the study we quantified pollution removal for ozone, nitrogen dioxide, carbon monoxide and particulate matter (<10 microns). We estimated the above-ground carbon stored and annually sequestered by the urban forest. Results were compared to transportation CO2 emissions to determine the CO2 offset potential of urban streetscapes. Furthermore, we assessed commonly used methods for estimating carbon stored and sequestered by urban trees in the city of Bolzano. We also quantified ecosystem disservices, such as hourly urban forest emissions of volatile organic compounds.
Abstract:
This thesis is motivated by biological questions concerning the behaviour of membrane potentials in neurons. A widely studied model for spiking neurons is the following. Between spikes, the membrane potential behaves like a diffusion process X given by the SDE dX_t = \beta(X_t) dt + \sigma(X_t) dB_t, where (B_t) denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential X crosses a certain excitation threshold S, a spike occurs, after which the potential is reset to a fixed value x_0. In applications it is sometimes possible to observe a diffusion process X between the spikes and to estimate the coefficients \beta(\cdot) and \sigma(\cdot) of the SDE. Nevertheless, the thresholds x_0 and S must be determined in order to specify the model. One way to approach this problem is to regard x_0 and S as parameters of a statistical model and to estimate them. This thesis discusses four different cases, in which the membrane potential X between spikes is assumed to be a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process or a Cox-Ingersoll-Ross process. In addition, we observe the times between successive spikes, which we interpret as i.i.d. hitting times of the threshold S by X started at x_0. The first two cases are very similar, and in each of them the maximum likelihood estimator can be given explicitly. Moreover, using LAN theory, the optimality of these estimators is shown. In the Ornstein-Uhlenbeck and Cox-Ingersoll-Ross cases we choose a minimum-distance method based on comparing the empirical and true Laplace transforms with respect to a Hilbert-space norm. We prove that all estimators are strongly consistent and asymptotically normal. In the last chapter the efficiency of the minimum-distance estimators is examined on simulated data. Furthermore, applications to real data sets and their results are discussed in detail.
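As a hedged illustration of the spike mechanism just described (the parameter values below are invented, and the thesis estimates x_0 and S rather than fixing them), the following sketch simulates inter-spike intervals as hitting times of the threshold S by an Ornstein-Uhlenbeck process started at x_0:

    import numpy as np

    def simulate_spike_times(n_spikes, x0, S, theta, mu, sigma, dt=1e-3, rng=None):
        # Between spikes: dX_t = theta*(mu - X_t) dt + sigma dB_t (OU case).
        # A spike occurs when X first reaches S; X is then reset to x0.
        rng = rng or np.random.default_rng(1)
        times = np.empty(n_spikes)
        for i in range(n_spikes):
            x, t = x0, 0.0
            while x < S:  # Euler-Maruyama steps until the threshold is hit
                x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.normal()
                t += dt
            times[i] = t
        return times

    isi = simulate_spike_times(100, x0=-70.0, S=-50.0, theta=5.0, mu=-55.0, sigma=8.0)
    print(f"mean inter-spike interval = {isi.mean():.3f}")

Such simulated hitting times play the role of the i.i.d. observations from which estimators like the minimum-distance estimators of the last chapter are computed.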
Abstract:
Proxy data are essential for the investigation of climate variability on time scales longer than the historical meteorological observation period. The potential value of a proxy depends on our ability to understand and quantify the physical processes that relate the corresponding climate parameter to the signal in the proxy archive. These processes can be explored under present-day conditions. In this thesis, both statistical and physical models are applied for their analysis, focusing on two specific types of proxies: lake sediment data and stable water isotopes.

In the first part of this work, the basis is established for statistically calibrating new proxies from lake sediments in western Germany. A comprehensive meteorological and hydrological data set is compiled and statistically analyzed. In this way, meteorological time series are identified that can be applied for the calibration of various climate proxies. A particular focus is laid on the investigation of extreme weather events, which have rarely been the objective of paleoclimate reconstructions so far. Subsequently, a concrete example of a proxy calibration is presented. Maxima in the quartz grain concentration from a lake sediment core are compared to recent windstorms. The latter are identified from the meteorological data with the help of a newly developed windstorm index combining local measurements and reanalysis data. The statistical significance of the correlation between extreme windstorms and signals in the sediment is verified with the help of a Monte Carlo method. This correlation is fundamental for employing lake sediment data as a new proxy for reconstructing windstorm records of the geological past.

The second part of this thesis deals with the analysis and simulation of stable water isotopes in atmospheric vapor on daily time scales. In this way, a better understanding of the physical processes determining these isotope ratios can be obtained, which is an important prerequisite for the interpretation of isotope data from ice cores and the reconstruction of past temperature. In particular, the focus here is on the deuterium excess and its relation to the environmental conditions during evaporation of water from the ocean. As a basis for the diagnostic analysis and for evaluating the simulations, isotope measurements from Rehovot (Israel) are used, provided by the Weizmann Institute of Science. First, a Lagrangian moisture source diagnostic is employed in order to establish quantitative linkages between the measurements and the evaporation conditions of the vapor (and thus to calibrate the isotope signal). A strong negative correlation between relative humidity in the source regions and measured deuterium excess is found. In contrast, sea surface temperature in the evaporation regions does not correlate well with deuterium excess. Although it requires confirmation by isotope data from different regions and longer time scales, this weak correlation might be of major importance for the reconstruction of moisture source temperatures from ice core data. Second, the Lagrangian source diagnostic is combined with a Craig-Gordon fractionation parameterization for the identified evaporation events in order to simulate the isotope ratios at Rehovot. In this way, the Craig-Gordon model can be directly evaluated with atmospheric isotope data, and better constraints for uncertain model parameters can be obtained. A comparison of the simulated deuterium excess with the measurements reveals that a much better agreement can be achieved using a wind-speed-independent formulation of the non-equilibrium fractionation factor instead of the classical parameterization introduced by Merlivat and Jouzel, which is widely applied in isotope GCMs. Finally, the first steps of the implementation of water isotope physics in the limited-area COSMO model are described, and an approach is outlined that allows simulated isotope ratios to be compared to measurements in an event-based manner by using a water tagging technique. The good agreement between model results from several case studies and measurements at Rehovot demonstrates the applicability of the approach. Because the model can be run at high, potentially cloud-resolving spatial resolution, and because it contains sophisticated parameterizations of many atmospheric processes, a complete implementation of isotope physics will allow detailed, process-oriented studies of the complex variability of stable isotopes in atmospheric waters in future research.
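For reference, the deuterium excess analyzed above is conventionally defined (after Dansgaard) as

d = \delta D - 8 \, \delta^{18}O,

so that deviations from the slope-8 global meteoric water line encode the non-equilibrium (kinetic) fractionation conditions during evaporation, notably the relative humidity over the moisture source.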
Abstract:
Five different methods were critically examined to characterize the pore structure of silica monoliths. The mesopore characterization was performed using: a) the classical BJH method on nitrogen sorption data, which yielded overestimated values in the mesopore distribution and was improved by using the NLDFT method; b) the ISEC method implementing the PPM and PNM models, which were developed especially for monolithic silicas, which, contrary to particulate supports, show two inflection points in the ISEC curve, enabling the calculation of pore connectivity, a measure of the mass transfer kinetics in the mesopore network; c) mercury porosimetry, using newly recommended mercury contact angle values.

The results of the characterization of the mesopores of monolithic silica columns by the three methods indicated that all methods were useful with respect to the pore size distribution by volume, but only the ISEC method with the implemented PPM and PNM models gave the number-averaged pore size and distribution and the pore connectivity values.

The characterization of the flow-through pores was performed by two different methods: a) mercury porosimetry, which was used not only to estimate the average flow-through pore size but also to assess entrapment. It was found that mass transfer from the flow-through pores to the mesopores was not hindered in the case of small flow-through pores with a narrow distribution; b) liquid penetration, where the average flow-through pore values were obtained via existing equations and improved by additional methods developed according to Hagen-Poiseuille rules. The result was that it is not the flow-through pore size that governs the column back pressure; rather, the surface-area-to-volume ratio of the silica skeleton is decisive. Thus the monolith with the lowest ratio will be the most permeable.

The flow-through pore characterization results obtained by mercury porosimetry and liquid permeability were compared with those from imaging and image analysis. All the named methods enable a reliable characterization of the flow-through pore diameters of monolithic silica columns, but special care should be taken with the choice of theoretical model.

The measured pore characterization parameters were then linked with the mass transfer properties of monolithic silica columns. As indicated by the ISEC results, no restrictions in mass transfer resistance were noticed in the mesopores, owing to their high connectivity. The mercury porosimetry results also gave evidence that no restrictions occur for mass transfer from flow-through pores to mesopores in small-scaled silica monoliths with a narrow distribution.

The prediction of the optimum regimes of the pore structural parameters for given target parameters in HPLC separations was performed. It was found that a low mass transfer resistance in the mesopore volume is achieved when the nominal diameter of the number-average size distribution of the mesopores is approximately an order of magnitude larger than the molecular radius of the analyte. The effective diffusion coefficient of an analyte molecule in the mesopore volume is strongly dependent on the value of the nominal pore diameter of the number-averaged pore size distribution. The mesopore size has to be adapted to the molecular size of the analyte, in particular for peptides and proteins.
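For reference, the Hagen-Poiseuille relation underlying the permeability analysis (a textbook formula; its adaptation to the monolith geometry is the subject of the thesis) links the pressure drop \Delta p over a cylindrical pore of radius r and length L to the volumetric flow rate Q and the viscosity \eta:

\Delta p = \frac{8 \eta L Q}{\pi r^4}

The strong r^{-4} dependence explains why the flow-through pore size would be expected to dominate the back pressure if pore size alone were decisive, which makes the finding above, that the skeleton surface-area-to-volume ratio governs permeability, a substantive result.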
The study of the flow-through pores of silica monoliths demonstrated that the skeleton surface-to-volume ratio and the external porosity are decisive for the column efficiency. The latter is independent of the flow-through pore diameter. The flow-through pore characteristics were assessed by direct and indirect approaches, and theoretical column efficiency curves were derived. The study showed that, next to the surface-to-volume ratio, the total porosity and its distribution between the flow-through pores and mesopores have a substantial effect on the column plate number, especially as the extent of adsorption increases. The column efficiency increases with decreasing flow-through pore diameter, decreases with increasing external porosity, and increases with total porosity. This tendency has a limit, however, due to the heterogeneity of the monolithic samples studied. We found that the maximum efficiency of the studied monolithic research columns could be reached at a skeleton diameter of ~ 0.5 µm. Furthermore, when the intention is to maximize the column efficiency, more homogeneous monoliths should be prepared.
Abstract:
Reliable electronic systems, namely sets of reliable electronic devices connected to each other and working correctly together for the same functionality, are an essential ingredient for the large-scale commercial implementation of any technological advancement. Microelectronics technologies and new powerful integrated circuits provide noticeable improvements in performance and cost-effectiveness and allow electronic systems to be introduced in increasingly diversified contexts. On the other hand, the opening of new fields of application leads to new, unexplored reliability issues. The development of semiconductor device and electrical models (such as the well-known SPICE models) able to describe the electrical behavior of devices and circuits is a useful means of simulating and analyzing the functionality of new electronic architectures and new technologies. Moreover, it represents an effective way to point out the reliability issues arising from the employment of advanced electronic systems in new application contexts. This thesis considers the modeling and design of both advanced reliable circuits for general-purpose applications and devices for energy efficiency. In more detail, the following activities have been carried out. First, reliability issues in terms of the security of standard communication protocols in wireless sensor networks are discussed, and a new communication protocol is introduced that increases network security. Second, a novel scheme for the on-die measurement of either clock jitter or process parameter variations is proposed; the developed scheme can be used to evaluate both jitter and process parameter variations at low cost. Then, reliability issues in the field of energy scavenging systems are analyzed, with an accurate analysis and modeling of the effects of faults affecting circuits for energy harvesting from mechanical vibrations. Finally, the problem of modeling the electrical and thermal behavior of photovoltaic (PV) cells under hot-spot conditions is addressed through the development of a coupled electrical and thermal model.
Abstract:
Atmospheric aerosol particles serving as cloud condensation nuclei (CCN) are key elements of the hydrological cycle and climate. Knowledge of the spatial and temporal distribution of CCN in the atmosphere is essential to understand and describe the effects of aerosols in meteorological models. In this study, CCN properties were measured in polluted and pristine air of different continental regions, and the results were parameterized for efficient prediction of CCN concentrations. The continuous-flow CCN counter used for size-resolved measurements of CCN efficiency spectra (activation curves) was calibrated with ammonium sulfate and sodium chloride aerosols for a wide range of water vapor supersaturations (S=0.068% to 1.27%). A comprehensive uncertainty analysis showed that the instrument calibration depends strongly on the applied particle generation techniques, Köhler model calculations, and water activity parameterizations (relative deviations in S up to 25%). Laboratory experiments and a comparison with other CCN instruments confirmed the high accuracy and precision of the calibration and measurement procedures developed and applied in this study. The mean CCN number concentrations (NCCN,S) observed in polluted mega-city air and biomass burning smoke (Beijing and Pearl River Delta, China) ranged from 1000 cm−3 at S=0.068% to 16 000 cm−3 at S=1.27%, which is about two orders of magnitude higher than in pristine air at remote continental sites (Swiss Alps, Amazonian rainforest). Effective average hygroscopicity parameters, κ, describing the influence of chemical composition on the CCN activity of aerosol particles, were derived from the measurement data. They varied in the range of 0.3±0.2, were size-dependent, and could be parameterized as a function of the organic and inorganic aerosol mass fractions. At low S (≤0.27%), substantial portions of externally mixed CCN-inactive particles with much lower hygroscopicity were observed in polluted air (fresh soot particles with κ≈0.01). Thus, the aerosol particle mixing state needs to be known for highly accurate predictions of NCCN,S. Nevertheless, the observed CCN number concentrations could be efficiently approximated using measured aerosol particle number size distributions and a simple κ-Köhler model with a single proxy for the effective average particle hygroscopicity. The relative deviations between observations and model predictions were on average less than 20% when a constant average value of κ=0.3 was used in conjunction with variable size distribution data. With a constant average size distribution, however, the deviations increased up to 100% and more. The measurement and model results demonstrate that aerosol particle number and size are the major predictors for the variability of the CCN concentration in continental boundary layer air, followed by particle composition and hygroscopicity as relatively minor modulators. Depending on the required and applicable level of detail, the measurement results and parameterizations presented in this study can be directly implemented in detailed process models as well as in large-scale atmospheric and climate models for efficient description of the CCN activity of atmospheric aerosols.
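For reference, the single-parameter κ-Köhler model referred to above describes the water-vapor saturation ratio S over an aqueous droplet of diameter D grown on a dry particle of diameter D_d as (Petters and Kreidenweis, 2007)

S(D) = \frac{D^3 - D_d^3}{D^3 - D_d^3 (1 - \kappa)} \exp\!\left( \frac{4 \sigma_{s/a} M_w}{R T \rho_w D} \right)

where \sigma_{s/a} is the surface tension of the solution-air interface, M_w and \rho_w are the molar mass and density of water, R the gas constant and T the temperature; \kappa = 0 recovers an insoluble but wettable particle, and larger \kappa means higher hygroscopicity.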
Abstract:
Aerosol particles influence the climate by scattering and absorbing radiation and by acting as nuclei for cloud droplets and ice crystals. In addition, aerosols strongly affect air pollution and public health. Gas-particle interactions are important processes because they influence the physical and chemical properties of aerosols, such as toxicity, reactivity, hygroscopicity and optical properties. Owing to a lack of experimental data and universal model formalisms, however, the mechanisms and kinetics of gas uptake and of the chemical transformation of organic aerosol particles are insufficiently characterized. Both the chemical transformation and the adverse health effects of toxic and allergenic aerosol particles, such as soot, polycyclic aromatic hydrocarbons (PAHs) and proteins, are so far not well understood.

Kinetic flux models for aerosol surface and particle bulk chemistry were developed on the basis of the Pöschl-Rudich-Ammann formalism for gas-particle interactions. First, the kinetic double-layer surface model K2-SURF was developed, which describes the degradation of PAHs on aerosol particles in the presence of ozone, nitrogen dioxide, water vapor, and hydroxyl and nitrate radicals. Competitive adsorption and chemical transformation of the surface lead to a strongly non-linear dependence of the ozone uptake on gas composition. Under atmospheric conditions, the chemical lifetime of PAHs ranges from a few minutes on soot, through several hours on organic and inorganic solids, to days on liquid particles.

Subsequently, the kinetic multi-layer model KM-SUB was developed to describe the chemical transformation of organic aerosol particles. KM-SUB is able to explicitly resolve transport processes and chemical reactions at the surface and in the bulk of aerosol particles. In contrast to earlier models, it requires no simplifying assumptions about steady-state conditions and radial mixing. In combination with literature data and new experimental results, KM-SUB was used to elucidate the effects of interfacial and bulk transport processes on the ozonolysis and nitration of protein macromolecules, oleic acid, and related organic compounds. The kinetic models developed in this study are intended to serve as a basis for the development of a detailed mechanism for aerosol chemistry and for the derivation of simplified yet realistic parameterizations for large-scale global atmospheric and climate models.

The experiments and model calculations carried out in this study provide evidence for the formation of long-lived reactive oxygen intermediates (ROIs) in the heterogeneous reaction of ozone with aerosol particles. The chemical lifetime of these intermediates exceeds 100 s, much longer than the surface residence time of molecular O3 (~10^-9 s). The ROIs explain apparent discrepancies between earlier quantum-mechanical calculations and kinetic experiments. They play a key role in the chemical transformation as well as in the adverse health effects of toxic and allergenic components of fine particulate matter, such as soot, PAHs and proteins. ROIs are presumably also involved in the decomposition of ozone on mineral dust and in the formation and growth of secondary organic aerosols. Moreover, ROIs form a link between atmospheric and biospheric multiphase processes (chemical and biological aging).

Organic compounds can occur as amorphous solids or in a semi-solid state, which influences the rate of heterogeneous reactions and multiphase processes in aerosols. Flow-tube experiments show that the ozone uptake and oxidative aging of amorphous proteins are kinetically limited by bulk diffusion. The reactive gas uptake increases markedly with increasing humidity, which can be explained by a decrease in viscosity caused by a phase transition of the amorphous organic matrix from a glassy to a semi-solid state (humidity-induced phase transition). The chemical lifetime of reactive compounds in organic particles can increase from seconds to days, because the diffusion rate in the semi-solid phase can drop by orders of magnitude at low temperature or low humidity. The results of this study show how semi-solid phases can influence the impact of organic aerosols on air quality, health and climate.
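A quantity that makes the bulk-diffusion limitation discussed above concrete is the reacto-diffusive length (a standard concept in multiphase chemical kinetics, cited here as background rather than as a result of this thesis),

l = \sqrt{D_b / k_b},

the depth to which a dissolved reactant penetrates the particle bulk before reacting, where D_b is the bulk diffusion coefficient and k_b the pseudo-first-order bulk reaction rate coefficient; when diffusivity drops by orders of magnitude in a glassy matrix, l shrinks accordingly and uptake becomes confined to near-surface layers.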