11 results for Jernström Offset

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

10.00%

Publisher:

Abstract:

In this work we study the relation between crustal heterogeneities and complexities in fault processes. The first kind of heterogeneity considered involves the concept of asperity. The presence of an asperity in the hypocentral region of the M = 6.5 earthquake of June 17, 2000 in the South Iceland Seismic Zone was invoked to explain the change of seismicity pattern before and after the mainshock: in particular, the spatial distribution of foreshock epicentres trends NW, while the strike of the main fault is N7°E and aftershocks trend accordingly; the foreshock depths were typically deeper than average aftershock depths. A model is devised which simulates the presence of an asperity in terms of a spherical inclusion within a softer elastic medium, in a transform domain with a deviatoric stress field imposed at remote distances (compressive NE-SW, tensile NW-SE). An isotropic compressive stress component is induced outside the asperity in the direction of the compressive stress axis, and a tensile component in the direction of the tensile axis; as a consequence, fluid flow is inhibited in the compressive quadrants while it is favoured in the tensile quadrants. Within the asperity the isotropic stress vanishes but the deviatoric stress increases substantially, without any significant change in the principal stress directions. Hydrofracture processes in the tensile quadrants and viscoelastic relaxation at depth may contribute to lower the effective rigidity of the medium surrounding the asperity. According to the present model, foreshocks may be interpreted as induced, close to the brittle-ductile transition, by high-pressure fluids migrating upwards within the tensile quadrants; this process increases the deviatoric stress within the asperity, which eventually fails, becoming the hypocenter of the mainshock, on the optimally oriented fault plane.

In the second part of our work we study the complexities induced in fault processes by the layered structure of the crust. In the first model proposed we study the case in which fault bending takes place in a shallow layer. The problem can be addressed in terms of a deep vertical planar crack interacting with a shallower inclined planar crack. An asymptotic study of the singular behaviour of the dislocation density at the interface reveals that the density distribution has an algebraic singularity at the interface of degree ω between -1 and 0, depending on the dip angle of the upper crack section and on the rigidity contrast between the two media. From the welded boundary condition at the interface between medium 1 and medium 2, a stress-drop discontinuity condition is obtained, which can be fulfilled if the stress drop in the upper medium is lower than required for a planar through-going surface: as a corollary, a vertically dipping strike-slip fault at depth may cross the interface with a sedimentary layer, provided that the shallower section is suitably inclined (fault "refraction"); this result has important implications for our understanding of the complexity of the fault system in the SISZ; in particular, we may understand the observed offset of secondary surface fractures with respect to the strike direction of the seismic fault. The results of this model also suggest that further fractures can develop in the opposite quadrant, and so a second model describing fault branching in the upper layer is proposed.

Like the previous model, this model can be applied only when the stress drop in the shallow layer is lower than the value prescribed for a vertical planar crack surface. Alternative solutions must be considered if the stress drop in the upper layer is higher than in the other layer, which may be the case when anelastic processes relax deviatoric stress in layer 2. In such a case one through-going crack cannot fulfil the welded boundary conditions and unwelding of the interface may take place. We have solved this problem within the theory of fracture mechanics, employing the boundary element method. The fault terminates against the interface in a T-shaped configuration, whose segments interact with each other: the lateral extent of the unwelded surface can be computed in terms of the main fault parameters, and the stress field resulting in the shallower layer can be modelled. A wide stripe of high and nearly uniform shear stress develops above the unwelded surface, whose width is controlled by the lateral extension of unwelding. Secondary shear fractures may then open within this stripe, according to the Coulomb failure criterion, and the depth of fractures opening in mixed mode may be computed and compared with the well-studied fault complexities observed in the field. In the absence of the T-shaped decollement structure, the stress concentration above the seismic fault would be difficult to reconcile with observations, being much higher and narrower.
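To make the criterion mentioned above concrete, the following is a minimal sketch of a Coulomb failure stress computation; it is not the thesis code, and the friction coefficient, stress and pore-pressure values are illustrative assumptions only.

```python
# Minimal sketch of the Coulomb failure criterion (illustrative values,
# not the thesis code).

def coulomb_failure_stress(shear_stress, normal_stress, pore_pressure, mu=0.6):
    """Coulomb failure stress (CFS) on a candidate fracture plane.

    shear_stress  : shear traction on the plane (Pa)
    normal_stress : normal traction, positive in compression (Pa)
    pore_pressure : fluid pressure reducing the effective normal stress (Pa)
    mu            : friction coefficient (assumed value)

    Failure is promoted when CFS exceeds the material cohesion.
    """
    effective_normal = normal_stress - pore_pressure
    return shear_stress - mu * effective_normal

# Example: a point inside the high-shear stripe above the unwelded interface
cfs = coulomb_failure_stress(shear_stress=5e6, normal_stress=10e6, pore_pressure=4e6)
print(f"CFS = {cfs / 1e6:.1f} MPa")  # positive values favour secondary shear fractures
```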

Relevance:

10.00%

Publisher:

Abstract:

Synchronization is a key issue in any communication system, but it becomes fundamental in navigation systems, which are entirely based on the estimation of the time delay of the signals coming from the satellites. Thus, even if synchronization has been a well-known topic for many years, the introduction of new modulations and new physical-layer techniques in modern standards makes the traditional synchronization strategies completely ineffective. For this reason, the design of advanced and innovative techniques for synchronization in modern communication systems, like DVB-SH, DVB-T2, DVB-RCS, WiMAX and LTE, and in the modern navigation system Galileo, has been the topic of this activity. Recent years have seen the consolidation of two different trends: the introduction of Orthogonal Frequency Division Multiplexing (OFDM) in communication systems, and of the Binary Offset Carrier (BOC) modulation in modern Global Navigation Satellite Systems (GNSS). Thus, particular attention has been given to the investigation of synchronization algorithms in these areas.
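As a point of reference for the BOC modulation mentioned above, the sketch below generates a sine-phased BOC baseband waveform by multiplying a ±1 spreading code with a square-wave subcarrier; the code sequence, sampling rate and chip count are arbitrary placeholders, not Galileo signal parameters.

```python
# Minimal sketch of a sine-phased BOC(1,1) baseband signal
# (illustrative only; the PRN code and rates are placeholders).
import numpy as np

def boc_signal(code, fc=1.023e6, fsc=1.023e6, fs=16.368e6, duration=1e-3):
    """Spread a +/-1 code with a square-wave subcarrier (BOC modulation)."""
    t = np.arange(0.0, duration, 1.0 / fs)
    chips = code[(np.floor(t * fc) % len(code)).astype(int)]   # code waveform
    subcarrier = np.sign(np.sin(2.0 * np.pi * fsc * t))        # square subcarrier
    return t, chips * subcarrier

rng = np.random.default_rng(0)
prn = rng.choice([-1.0, 1.0], size=1023)   # placeholder spreading sequence
t, s = boc_signal(prn)
print(s[:8])                               # split-spectrum spreading waveform
```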

Relevance:

10.00%

Publisher:

Abstract:

Satellite SAR (Synthetic Aperture Radar) interferometry represents a valid technique for digital elevation model (DEM) generation, providing metric accuracy even without ancillary data of good quality. Depending on the situation, the interferometric phase can be interpreted both as topography and as a displacement that may have occurred between the two acquisitions. Once these two components have been separated, it is possible to produce a DEM from the first one or a displacement map from the second one. InSAR DEM generation in the cryosphere is not a straightforward operation, because almost every interferometric pair also contains a displacement component which, even if small, when interpreted as topography during the phase-to-height conversion step can introduce huge errors in the final product. Considering a glacier, and assuming the linearity of its velocity flux, it is therefore necessary to differentiate at least two pairs in order to isolate the topographic residue only. In the case of an ice shelf, the displacement component in the interferometric phase is determined not only by the flux of the glacier but also by the different heights of the two tides. As a matter of fact, even if the two scenes of the interferometric pair are acquired at the same time of the day, only the main terms of the tide disappear in the interferogram, while the other, smaller ones do not cancel out completely and so correspond to displacement fringes. Given the availability of tide gauges (or, as an alternative, of an accurate tidal model) it is possible to calculate a tidal correction to be applied to the differential interferogram. It is important to be aware that the tidal correction is applicable only when the position of the grounding line is known, which is often a controversial matter. This thesis describes the methodology applied for the generation of the DEM of the Drygalski ice tongue in Northern Victoria Land, Antarctica. The displacement has been determined both in an interferometric way and by considering the coregistration offsets of the two scenes. Particular attention has been devoted to investigating the role of some parameters, such as timing annotations and orbit reliability. Results have been validated in a GIS environment by comparison with GPS displacement vectors (displacement map and InSAR DEM) and ICESat GLAS points (InSAR DEM).
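The differencing step described above can be sketched as follows, under the simplifying assumptions of a steady glacier flow, equal temporal baselines and a flat geometry; variable names and example numbers are illustrative, not the thesis processing chain, and the sign depends on the adopted phase convention.

```python
# Sketch of the double-difference logic: with steady flow and equal temporal
# baselines, subtracting two interferograms cancels the motion term and leaves
# a topographic residue scaled by the difference of perpendicular baselines.
import numpy as np

def topographic_height(phi_1, phi_2, bperp_1, bperp_2,
                       wavelength, slant_range, inc_angle):
    """Convert the differential (topography-only) phase to height.

    The sign depends on the adopted phase convention."""
    phi_topo = phi_1 - phi_2              # motion term cancels (same time span)
    bperp_eff = bperp_1 - bperp_2         # effective baseline of the residue
    k = wavelength * slant_range * np.sin(inc_angle) / (4.0 * np.pi * bperp_eff)
    return k * phi_topo                   # metres

# Illustrative, ERS-like numbers only
h = topographic_height(phi_1=2.1, phi_2=1.4, bperp_1=150.0, bperp_2=30.0,
                       wavelength=0.0566, slant_range=850e3,
                       inc_angle=np.radians(23))
print(f"topographic residue ≈ {h:.1f} m")
```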

Relevance:

10.00%

Publisher:

Abstract:

The aim of this work is to explain the paradigm of American foreign policy during the Johnson administration, especially toward Europe, within the NATO framework, and toward the USSR, in the context of the détente that emerged during the 1960s. During that period, after the death of J. F. Kennedy, President L. B. Johnson inherited a complex and ambitious world policy, which aimed to open a new phase in transatlantic relations and to share the burden of the Cold War with a reluctant Europe. Known as the grand design, it was a policy that needed the support of the allies and a clear purpose that appealed to the Europeans. At first, President Johnson saw in the problem of nuclear sharing the bargain to strike with the NATO allies. At the same time, he understood that the United States needed to reassert its leadership within the new stage of relations with the Soviet Union. Soon, the "transatlantic bargain" became something not so easy to deal with. The Federal Republic of Germany wanted a say in nuclear affairs and, possibly, a finger on the trigger of the Atlantic nuclear weapons. The USSR, on the other hand, wanted to keep Germany down. The other allies did not want to share the onus of the defense of Europe; at most they wanted responsibility for the use of the weapons and, at the very least, a part in the decision-making process. France, which wanted to detach itself from the policy of the United States and regain a world role, made this course of action harder to manage. Throughout Johnson's years in office, the diverging policies proposed by his advisers to reach the goal put American foreign policy in deep water. The withdrawal of France from the organization, but not from the Alliance, gave Washington a chance to carry out its goal. The development of a clear-cut disarmament policy led the Johnson administration to the core of the matter. The Non-Proliferation Treaty, signed in 1968, solved the problem with the allies in a business-like fashion. The question of nuclear sharing faded away as the allies accepted a deeper consultative role in nuclear affairs, the burden of the defense of Europe became more bearable through the offset agreement with the FRG, and a new doctrine, flexible response, put an end, at least formally, to the taboo of the nuclear age. Johnson's grand design proved to be different from Kennedy's, but, all things considered, it was more workable. The unexpected result was a real détente with the Soviet Union, which, we can say, was a merit of President Johnson.

Relevance:

10.00%

Publisher:

Abstract:

This work contributes to the field of spatial economics by embracing three distinct modelling approaches, belonging to different strands of the theoretical literature. In the first chapter I present a theoretical model in which the changes in an urban system's degree of functional specialisation are linked to (i) firms' organisational choices and (ii) firms' location decisions. The interplay between firms' internal communication/managing costs (between headquarters and production plants) and the cost of communicating with distant business services providers drives the transition from an "integrated" urban system, where each city hosts all the different functions, to a "functionally specialised" urban system, where each city is either a primary business center (hosting advanced business services providers), a secondary business center or a pure manufacturing city, and all these city types coexist in equilibrium. The second chapter investigates the impact of free trade on welfare in a two-country world modelled as an international Hotelling duopoly with quadratic transport costs and asymmetric countries, where a negative environmental externality is associated with the consumption of the good produced in the smaller country. Countries' relative sizes as well as the intensity of the negative environmental externality affect the potential welfare gains of trade liberalisation. The third chapter focuses on the paradox by which, contrary to theoretical predictions, empirical evidence shows that a decrease in international transport costs causes an increase in foreign direct investments (FDIs). Here we propose an explanation of this apparent puzzle by exploiting an approach which delivers a continuum of Bertrand-Nash equilibria ranging above marginal cost pricing. In our setting, two Bertrand firms, supplying a homogeneous good with a convex cost function, enter the market of a foreign country. We show that allowing for softer price competition may indeed more than offset the standard effect generated by a decrease in trade costs, thereby restoring FDI incentives.
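For reference, the displayed equation below sketches the textbook Hotelling setup with quadratic transport costs that the second chapter builds on; the notation is generic and does not necessarily match the chapter's asymmetric-country model.

```latex
% Generic Hotelling line of unit length with quadratic transport cost t:
% a consumer at x is indifferent between firm 1 (at 0, price p_1) and
% firm 2 (at 1, price p_2) when  p_1 + t x^2 = p_2 + t (1 - x)^2,
% which gives the demand split
\[
  \hat{x} = \frac{1}{2} + \frac{p_2 - p_1}{2t},
  \qquad
  D_1 = \hat{x}, \quad D_2 = 1 - \hat{x}.
\]
```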

Relevance:

10.00%

Publisher:

Abstract:

This PhD Thesis is composed of three chapters, each discussing a specific type of risk that banks face. The first chapter discusses Systemic Risk and how banks get exposed to it through the Interbank Funding Market. Exposures in the said market have Systemic Risk implications because the market creates linkages, where the failure of one party can affect the others in the market. By showing that CDS Spreads, as bank risk indicators, are positively related to banks' Net Interbank Funding Market Exposures, this chapter establishes the above Systemic Risk implications of Interbank Funding. Meanwhile, the second chapter discusses how banks may handle Illiquidity Risk, defined as the possibility of having sudden funding needs. Illiquidity Risk is embodied in this chapter through Loan Commitments, as they oblige banks to lend to their clients up to a certain amount of funds at any time. This chapter points out that using Securitization as a funding facility could allow banks to manage this Illiquidity Risk. To make this case, this chapter demonstrates empirically that banks having an increase in Loan Commitments may experience an increase in risk profile, but such an increase can be offset by an accompanying increase in Securitization Activity. Lastly, the third chapter focuses on how banks manage Credit Risk, also through Securitization. Securitization has a Credit Risk management property by allowing the offloading of risk. This chapter investigates how banks use such a property by looking at the effect of securitization on the banks' loan portfolios and overall risk and returns. The findings are that securitization is positively related to loan portfolio size and the portfolio share of risky loans, which translates to higher risk and returns. Thus, this chapter points out that Credit Risk management through Securitization may have been directed towards higher risk-taking for higher returns.
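A minimal sketch of the kind of regression that could document the positive relation described in the first chapter is given below; the variable names, controls and synthetic data are hypothetical and do not reproduce the chapter's actual specification.

```python
# Hedged sketch: OLS of bank CDS spreads on net interbank funding exposure
# plus simple controls (hypothetical columns; synthetic data for illustration).
import numpy as np
import pandas as pd
import statsmodels.api as sm

def cds_exposure_regression(df: pd.DataFrame):
    """Regress CDS spreads on net interbank exposure and controls."""
    X = sm.add_constant(df[["net_interbank_exposure", "size", "leverage"]])
    return sm.OLS(df["cds_spread"], X).fit()

# Synthetic illustration only
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "net_interbank_exposure": rng.normal(size=n),
    "size": rng.normal(size=n),
    "leverage": rng.normal(size=n),
})
df["cds_spread"] = 100 + 15 * df["net_interbank_exposure"] + rng.normal(scale=10, size=n)
print(cds_exposure_regression(df).params)
```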

Relevance:

10.00%

Publisher:

Abstract:

This Ph.D. dissertation reports on the work performed at the Wireless Communication Laboratory - University of Bologna and National Research Council - as well as, for six months, at the Fraunhofer Institute for Integrated Circuits (IIS) in Nürnberg. The work of this thesis is in the area of wireless communications, especially with regard to cooperative communications aspects in narrow-band and ultra-wideband systems, cooperative link characterization, network geometry, power allocation techniques, and synchronization between nodes. The underpinning of this work is devoted to developing a general framework for the design and analysis of wireless cooperative communication systems, which depends on the propagation environment, transmission technique, diversity method, power allocation for various scenarios, and relay positions. The optimal power allocation for minimizing the bit error probability at the destination is derived. In addition, a synchronization algorithm for master-slave communications is proposed with the aim of jointly compensating the clock drift and offset of the wireless nodes composing the network.
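As an illustration of the joint clock drift and offset compensation mentioned above, the sketch below fits a line to master-slave timestamp pairs by least squares; this is a generic estimator under the assumption of exchanged timestamps, not necessarily the algorithm proposed in the thesis.

```python
# Generic sketch of jointly estimating clock offset and drift from
# master-slave timestamp pairs via linear least squares.
import numpy as np

def estimate_offset_and_drift(master_times, slave_times):
    """Fit slave_time ≈ (1 + drift) * master_time + offset."""
    slope, offset = np.polyfit(master_times, slave_times, deg=1)
    return offset, slope - 1.0

# Synthetic illustration: 50 ppm drift, 2 ms offset, 1 µs timestamp jitter
rng = np.random.default_rng(0)
t_master = np.linspace(0.0, 10.0, 100)
t_slave = (1.0 + 50e-6) * t_master + 2e-3 + rng.normal(scale=1e-6, size=t_master.size)
offset, drift = estimate_offset_and_drift(t_master, t_slave)
print(f"offset ≈ {offset * 1e3:.3f} ms, drift ≈ {drift * 1e6:.1f} ppm")
```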

Relevance:

10.00%

Publisher:

Abstract:

Population growth in urban areas is a world-wide phenomenon. According to a recent United Nations report, over half of the world now lives in cities. Numerous health and environmental issues arise from this unprecedented urbanization. Recent studies have demonstrated the effectiveness of urban green spaces and the role they play in improving both the aesthetics and the quality of life of urban residents. In particular, urban green spaces provide ecosystem services such as urban air quality improvement by removing pollutants that can cause serious health problems, carbon storage, carbon sequestration, and climate regulation through shading and evapotranspiration. Furthermore, epidemiological studies controlling for age, sex, marital and socio-economic status have provided evidence of a positive relationship between green space and the life expectancy of senior citizens. However, there is little information on the role of public green spaces in mid-sized cities in northern Italy. To address this need, a study was conducted to assess the ecosystem services of urban green spaces in the city of Bolzano, South Tyrol, Italy. In particular, we quantified the cooling effect of urban trees and the hourly amount of pollution removed by the urban forest. The information was gathered using field data collected through local hourly air pollution readings, a tree inventory and simulation models. During the study we quantified pollution removal for ozone, nitrogen dioxide, carbon monoxide and particulate matter (<10 microns). We estimated the above-ground carbon stored and annually sequestered by the urban forest. Results have been compared to transportation CO2 emissions to determine the CO2 offset potential of urban streetscapes. Furthermore, we assessed commonly used methods for estimating carbon stored and sequestered by urban trees in the city of Bolzano. We also quantified ecosystem disservices, such as hourly urban forest volatile organic compound emissions.
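The CO2-offset comparison described above can be sketched as a back-of-the-envelope calculation: carbon sequestered by the urban forest is converted to CO2 equivalent with the 44/12 molecular-weight ratio and expressed as a share of transportation emissions; all input figures below are placeholders, not the study's results.

```python
# Back-of-the-envelope sketch of the CO2-offset comparison (placeholder inputs).

C_TO_CO2 = 44.0 / 12.0   # kg CO2 per kg C (molecular-weight ratio)

def co2_offset_share(carbon_seq_kg_per_year, transport_co2_kg_per_year):
    """Fraction of transport CO2 emissions offset by annual tree sequestration."""
    co2_seq = carbon_seq_kg_per_year * C_TO_CO2
    return co2_seq / transport_co2_kg_per_year

# Hypothetical example: 10 000 street trees sequestering 12 kg C/tree/year,
# compared with 50 000 t of transport CO2 per year
share = co2_offset_share(10_000 * 12.0, 50_000_000.0)
print(f"offset ≈ {share:.2%} of transport emissions")
```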

Relevance:

10.00%

Publisher:

Abstract:

We have modeled various soft-matter systems with molecular dynamics (MD) simulations. The first topic concerns liquid crystal (LC) biaxial nematic (Nb) phases, which could possibly be used in fast displays. We have investigated the phase organization of biaxial Gay-Berne (GB) mesogens, considering the effects of the orientation, strength and position of a molecular dipole. We have observed that for systems with a central dipole, nematic biaxial phases disappear when the dipole strength increases, while for systems characterized by an offset dipole, the Nb phase is stabilized at very low temperatures. In a second project, in view of their increasing importance as nanomaterials in LC phases, we are developing a DNA coarse-grained (CG) model, in which sugar and phosphate groups are represented with Lennard-Jones spheres, while bases are represented with GB ellipsoids. We have obtained shape, position and orientation parameters for each bead, so as to best reproduce the atomistic structure of a B-DNA helix. Starting from atomistic simulation results, we have completed a first parametrization of the force field terms, accounting for bonded (bonds, angles and dihedrals) and non-bonded interactions (H-bond and stacking). We are currently validating the model by investigating the stability and melting temperature of various sequences. Finally, in a third project, we aim to explain the mechanism of enantiomeric discrimination due to the presence of a chiral helix of poly(gamma-benzyl L-glutamate) (PBLG) in a dimethylformamide (DMF) solution, interacting with chiral or pro-chiral molecules (in our case heptyl butyrate, HEP), after properly tuning an atomistic force field (AMBER). We have observed that DMF and HEP molecules solvate the PBLG helix uniformly, but the pro-chiral solute is on average found closer to the helix than the DMF. The solvent presents an isotropic diffusion twice as fast as that of HEP, also indicating a stronger interaction of the solute with the helix.
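As a reference for the bead interactions mentioned above, the sketch below evaluates a 12-6 Lennard-Jones pair potential; the epsilon and sigma values are placeholders rather than the parametrised force-field values of the coarse-grained model.

```python
# Minimal sketch of the 12-6 Lennard-Jones pair potential used for spherical
# beads (placeholder epsilon and sigma, in reduced units).
import numpy as np

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """U(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

r = np.linspace(0.9, 3.0, 5)
print(lennard_jones(r))   # minimum of depth -epsilon at r = 2**(1/6) * sigma
```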

Relevance:

10.00%

Publisher:

Abstract:

In recent years, thanks to technological advances, electromagnetic methods for non-invasive shallow subsurface characterization have been increasingly used in many areas of environmental and geoscience applications. Among all the geophysical electromagnetic methods, Ground Penetrating Radar (GPR) has received unprecedented attention over the last few decades due to its capability to obtain high-resolution electromagnetic parameter information, both spatially and temporally, thanks to its versatility, ease of handling, non-invasive nature, high resolving power, and fast implementation. The main focus of this thesis is to perform a dielectric site characterization in an efficient and accurate way, studying in depth the physical phenomenon behind a recently developed GPR approach, the so-called early-time technique, which infers the electrical properties of the soil in the proximity of the antennas. In particular, the early-time approach is based on the amplitude analysis of the early-time portion of the GPR waveform using a fixed-offset, ground-coupled antenna configuration where the separation between the transmitting and receiving antenna is on the order of the dominant pulse wavelength. Amplitude information can be extracted from the early-time signal through complex trace analysis, computing the instantaneous-amplitude attributes over a selected time duration of the early-time signal. Basically, if the acquired GPR signal is considered to represent the real part of a complex trace, and the imaginary part is the quadrature component obtained by applying a Hilbert transform to the GPR trace, the amplitude envelope is the absolute value of the resulting complex trace (also known as the instantaneous amplitude). Analysing laboratory data, numerical simulations and natural field conditions, and summarising the overall results embodied in this thesis, the early-time GPR technique can be suggested as an effective method to estimate physical properties of the soil in a fast and non-invasive way.
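The envelope computation described above can be sketched as follows, using a Hilbert transform to build the complex trace and taking its modulus as the instantaneous amplitude; the synthetic damped wavelet is only a stand-in for real early-time GPR data.

```python
# Sketch of the instantaneous-amplitude (envelope) attribute: the recorded
# trace is the real part of the complex trace, the Hilbert transform gives
# the quadrature part, and the envelope is the modulus.
import numpy as np
from scipy.signal import hilbert

def instantaneous_amplitude(trace):
    """Return the amplitude envelope of a real-valued GPR trace."""
    analytic = hilbert(trace)     # complex trace: trace + j * Hilbert(trace)
    return np.abs(analytic)       # instantaneous amplitude

# Synthetic stand-in: a damped 1 GHz wavelet sampled every 0.1 ns
t = np.arange(0, 20e-9, 0.1e-9)
trace = np.exp(-t / 4e-9) * np.cos(2 * np.pi * 1e9 * t)
envelope = instantaneous_amplitude(trace)
print(envelope[:5])
```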

Relevance:

10.00%

Publisher:

Abstract:

Biochar is the solid C-rich matrix obtained by pyrolysis of biomasses, currently promoted as a soil amendment with the aim of offsetting anthropogenic C emissions while ameliorating soil properties and growth conditions. The benefits of biochar seem promising, although the scientific understanding of them is only beginning to be explored. In this project, I performed a suite of experiments in controlled and in field conditions with the aim of investigating the effect of biochar on: a) the interaction with minerals; b) Fe nutrition in kiwifruit; c) soil leaching, soil fertility, soil CO2 emission partitioning, soil bacterial profile and key gene expression of soil nitrification-involved bacteria; d) plant growth, nutritional status, yield and fruit quality; and e) its physical-chemical changes as affected by long-term environmental exposure. Biochar released K, P and Mg but retained Fe, Mn, Cu and Zn on its surface, which in turn hindered the Fe nutrition of kiwifruit trees. A redox reaction on the biochar surface exposed to a Fe source was elucidated. Biochar reduced the amount of leached NH4+-N but increased that of Hg, K, P, Mo, Se and Sn. Furthermore, biochar interacted synergistically with compost, increasing soil field capacity, fertility, and leaching of DOC, TDN and RSOC, suggesting a priming effect. However, in field conditions, biochar did not affect yield, nutritional status or fruit quality. Actinomadura flavalba, Saccharomonospora viridis, Thermosporomyces composti and Enterobacter spp. were peculiar to the soil amended with biochar plus compost, which exhibited the highest band richness and promoted gene expression levels of Nitrosomonas spp., Nitrobacter spp. and enzymatic-related activity. Environmental exposure reduced the C and K content, pH and water infiltration of biochar, which instead showed higher O, Si, N, Na, Al, Ca, Mn and Fe atomic percentages (at%). Oxidation occurred on the aged biochar surface; it decreased progressively with depth and induced the development of O-containing functional groups, up to a depth of 75 nm.