4 results for Vigarani, Carlo, 17th century

in CaltechTHESIS


Relevance:

100.00%

Publisher:

Abstract:

Long paleoseismic histories are necessary for understanding the full range of behavior of faults, as the most destructive events often have recurrence intervals longer than local recorded history. The Sunda megathrust, the interface along which the Australian plate subducts beneath Southeast Asia, provides an ideal natural laboratory for determining a detailed paleoseismic history over many seismic cycles. The outer-arc islands above the seismogenic portion of the megathrust cyclically rise and subside in response to processes on the underlying megathrust, providing uncommonly good illumination of megathrust behavior. Furthermore, the growth histories of coral microatolls, which record tectonic uplift and subsidence via relative sea level, can be used to investigate the detailed coseismic and interseismic deformation patterns. One particularly interesting area is the Mentawai segment of the megathrust, which has been shown to fail characteristically in a series of ruptures spread over decades rather than in a single end-to-end rupture. This behavior has been termed a seismic “supercycle.” Prior to the current rupture sequence, which began in 2007, the segment ruptured during the 14th century, during the late 16th to late 17th century, and most recently in the historical earthquakes of 1797 and 1833. In this study, we examine each of these previous supercycles in turn.

First, we expand upon previous analysis of the 1797–1833 rupture sequence with a comprehensive review of previously published coral microatoll data and the addition of a significant amount of new data. We present detailed maps of coseismic uplift during the two great earthquakes, maps of interseismic deformation during the periods 1755–1833 and 1950–1997, and models of the corresponding slip and coupling on the underlying megathrust. We derive magnitudes of Mw 8.7–9.0 for the two historical earthquakes and determine that the 1797 earthquake fundamentally changed the state of coupling on the fault for decades afterward. We conclude that while major earthquakes generally do not involve rupture of the entire Mentawai segment, they undoubtedly influence the progression of subsequent ruptures, even beyond their own rupture areas. This concept is of vital importance for monitoring and forecasting the progression of the modern rupture sequence.
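
As a point of reference for how slip models translate into the quoted magnitudes, the sketch below shows the standard conversion from a modeled slip distribution to a moment magnitude via the Hanks–Kanamori relation; the shear modulus, patch geometry, and slip values are placeholders, not the study's actual model.

```python
import numpy as np

# Hypothetical illustration (not the study's actual slip model): the seismic
# moment is M0 = sum(mu * A_i * s_i) over fault patches, and the moment
# magnitude follows the Hanks-Kanamori relation Mw = (2/3)(log10 M0 - 9.1),
# with M0 in N*m.

mu = 3.0e10                                    # crustal shear modulus, Pa (assumed)
patch_area = 20e3 * 20e3                       # patch area, m^2 (assumed 20 km x 20 km)
slip = np.random.uniform(2.0, 10.0, size=200)  # slip on each patch, m (placeholder values)

M0 = np.sum(mu * patch_area * slip)            # seismic moment, N*m
Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)        # moment magnitude

print(f"M0 = {M0:.3e} N*m, Mw = {Mw:.2f}")
```

With these placeholder values the result lands in the Mw 8.7–9.0 range quoted above, which is purely a consequence of the assumed patch sizes and slips.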

Turning our attention to the 14th century, we present evidence of a shallow slip event in approximately A.D. 1314, which preceded the “conventional” megathrust rupture sequence. We calculate a suite of slip models, slightly deeper and/or larger than the 2010 Pagai Islands earthquake, that are consistent with the large amount of subsidence recorded at our study site. Sea-level records from older coral microatolls suggest that these events occur at least once every millennium, but likely far less frequently than their great downdip neighbors. The revelation that shallow slip events are important contributors to the seismic cycle of the Mentawai segment further complicates our understanding of this subduction megathrust and our assessment of the region’s exposure to seismic and tsunami hazards.

Finally, we present an outline of the complex intervening rupture sequence that took place in the 16th and 17th centuries, which involved at least five distinct uplift events. We conclude that each of the supercycles had unique features and that all of the types of fault behavior we observe are consistent with highly heterogeneous frictional properties of the megathrust beneath the south-central Mentawai Islands. This heterogeneous distribution of asperities produces terminations and overlap zones between fault ruptures, giving rise to the seismic “supercycle” phenomenon.

Relevance:

20.00%

Publisher:

Abstract:

We investigate the 2d O(3) model with the standard action by Monte Carlo simulation at couplings β up to 2.05. We measure the energy density, mass gap and susceptibility of the model, and gather high statistics on lattices of size L ≤ 1024 using the Floating Point Systems T-series vector hypercube and the Thinking Machines Corp.'s Connection Machine 2. Asymptotic scaling does not appear to set in for this action, even at β = 2.10, where the correlation length is 420. We observe a 20% difference between our estimate m/Λ_MS-bar = 3.52(6) at this β and the recent exact analytical result of 2.943. We use the overrelaxation algorithm interleaved with Metropolis updates and show that the decorrelation time scales with the correlation length and the number of overrelaxation steps per sweep. We determine its effective dynamical critical exponent to be z' = 1.079(10); thus critical slowing down is reduced significantly for this local algorithm, which is vectorizable and parallelizable.
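
As an illustration of the update scheme described above (not the thesis's vectorized FPS/CM-2 implementation), here is a minimal sketch of microcanonical overrelaxation interleaved with Metropolis sweeps for the 2d O(3) model with the standard action; the lattice size, coupling, and sweep counts are placeholders.

```python
import numpy as np

# Minimal sketch of the 2d O(3) model with the standard action
# S = -beta * sum_<ij> s_i . s_j, updated by microcanonical overrelaxation
# sweeps interleaved with Metropolis sweeps, using a checkerboard
# decomposition so that neighboring spins are never updated together.

rng = np.random.default_rng(0)

def random_spins(shape):
    """Unit 3-vectors uniformly distributed on the sphere."""
    s = rng.normal(size=shape + (3,))
    return s / np.linalg.norm(s, axis=-1, keepdims=True)

def neighbor_sum(s):
    """Sum of the four nearest-neighbor spins (periodic boundaries)."""
    return (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
            np.roll(s, 1, 1) + np.roll(s, -1, 1))

def checkerboard(L):
    i, j = np.indices((L, L))
    return (i + j) % 2 == 0

def overrelax_sweep(s, parity):
    """Reflect each spin on one sublattice about its local field h = sum of
    neighbors; this leaves the energy exactly unchanged (microcanonical move)."""
    for mask in (parity, ~parity):
        h = neighbor_sum(s)
        proj = np.sum(s * h, axis=-1, keepdims=True) / np.sum(h * h, axis=-1, keepdims=True)
        s = np.where(mask[..., None], 2.0 * proj * h - s, s)
    return s

def metropolis_sweep(s, beta, parity):
    """Propose a fresh random direction per site on each sublattice in turn
    and accept with probability min(1, exp(-beta * dE))."""
    for mask in (parity, ~parity):
        h = neighbor_sum(s)
        trial = random_spins(s.shape[:2])
        dE = -np.sum(h * (trial - s), axis=-1)
        accept = mask & (rng.random(s.shape[:2]) < np.exp(-beta * dE))
        s = np.where(accept[..., None], trial, s)
    return s

L, beta = 64, 1.5                       # placeholder lattice size and coupling
parity = checkerboard(L)
spins = random_spins((L, L))
for sweep in range(200):
    spins = metropolis_sweep(spins, beta, parity)
    for _ in range(4):                  # several overrelaxation steps per Metropolis sweep
        spins = overrelax_sweep(spins, parity)

energy = -0.5 * np.mean(np.sum(spins * neighbor_sum(spins), axis=-1))
print(f"energy per site ~ {energy:.3f}")
```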

We also use cluster Monte Carlo algorithms, which are non-local update schemes that can greatly increase the efficiency of computer simulations of spin models. The major computational task in these algorithms is connected component labeling, which identifies clusters of connected sites on the lattice. We have devised some new SIMD component labeling algorithms and implemented them on the Connection Machine, and we investigate their performance when applied to the cluster update of the two-dimensional Ising spin model.
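
The role of component labeling in a cluster update can be illustrated with a small sketch of the Swendsen-Wang algorithm for the 2d Ising model, in which a plain union-find stands in for the SIMD labeling algorithms developed in the thesis; the lattice size and coupling are placeholders.

```python
import numpy as np

# Sketch of the Swendsen-Wang cluster update for the 2d Ising model, where the
# central step is connected-component labeling of the frozen-bond graph.

rng = np.random.default_rng(1)

def find(parent, i):
    """Find the root label of site i, with path compression."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(parent, i, j):
    ri, rj = find(parent, i), find(parent, j)
    if ri != rj:
        parent[rj] = ri

def swendsen_wang_sweep(spins, beta):
    L = spins.shape[0]
    n = L * L
    parent = np.arange(n)
    p_bond = 1.0 - np.exp(-2.0 * beta)       # freeze like-spin bonds with this probability
    for x in range(L):
        for y in range(L):
            i = x * L + y
            for dx, dy in ((1, 0), (0, 1)):  # right and down neighbors (periodic)
                xn, yn = (x + dx) % L, (y + dy) % L
                j = xn * L + yn
                if spins[x, y] == spins[xn, yn] and rng.random() < p_bond:
                    union(parent, i, j)      # same cluster
    # flip each labeled cluster as a whole with probability 1/2
    roots = np.array([find(parent, i) for i in range(n)])
    flip = rng.random(n) < 0.5
    new = np.where(flip[roots], -spins.ravel(), spins.ravel())
    return new.reshape(L, L)

L, beta = 32, 0.44                           # beta near the Ising critical point
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(50):
    spins = swendsen_wang_sweep(spins, beta)
print("magnetization per site:", spins.mean())
```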

Finally, we use a Monte Carlo Renormalization Group method to directly measure the couplings of block Hamiltonians at different blocking levels. For the usual averaging block transformation we confirm the renormalized trajectory (RT) observed by Okawa. For another, improved probabilistic block transformation we find the RT and show that it lies much closer to the standard action. We then use this block transformation to obtain the discrete β-function of the model, which we compare to the perturbative result. We do not see convergence, except when using a rescaled coupling β_E to effectively resum the series. In the latter case we see agreement for m/Λ_MS-bar at β = 2.14, 2.26, 2.38 and 2.50. To three loops, m/Λ_MS-bar = 3.047(35) at β = 2.50, which is very close to the exact value m/Λ_MS-bar = 2.943. Our last point, at β = 2.62, however, disagrees with this estimate.
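
The averaging block transformation itself is simple to sketch: each 2x2 block of spins is replaced by the normalized average of its members, halving the lattice at every blocking level. The example below is illustrative only; it starts from a random rather than an equilibrated configuration, and omits the measurement of block Hamiltonian couplings.

```python
import numpy as np

# Sketch of the "usual averaging" block-spin transformation used in the MCRG
# analysis: each 2x2 block of O(3) spins is replaced by the normalized average
# of its four members, halving the lattice size at every blocking level.

def block_average(spins):
    """spins: (L, L, 3) array of unit vectors with L even; returns (L//2, L//2, 3)."""
    L = spins.shape[0]
    blocks = spins.reshape(L // 2, 2, L // 2, 2, 3).sum(axis=(1, 3))
    return blocks / np.linalg.norm(blocks, axis=-1, keepdims=True)

# Example: a random configuration stands in for an equilibrated one.
rng = np.random.default_rng(2)
spins = rng.normal(size=(64, 64, 3))
spins /= np.linalg.norm(spins, axis=-1, keepdims=True)
for level in range(3):
    spins = block_average(spins)
    print(f"blocking level {level + 1}: lattice size {spins.shape[0]}")
```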

Relevance:

20.00%

Publisher:

Abstract:

This thesis consists of three papers studying the relationship between democratic reform, expenditure on sanitation public goods, and mortality in Britain in the second half of the nineteenth century. During this period, decisions over spending on critical public goods such as water supply and sewer systems were made by locally elected town councils, leading to extensive variation in the level of spending across the country. This dissertation uses new historical data to examine the political factors determining that variation, and the consequences for mortality rates.

The first substantive chapter describes the spread of government sanitation expenditure, and analyzes the factors that determined towns' willingness to invest. The results show the importance of towns' financial constraints, both in terms of the available tax base and access to borrowing, in limiting the level of expenditure. This suggests that greater involvement by Westminster could have been very effective in expediting sanitary investment. There is little evidence, however, that democratic reform was an important driver of greater expenditure.

Chapter 3 analyzes the effect of extending voting rights to the poor on government public goods spending. A simple model predicts that the rich and the poor will desire lower levels of public goods expenditure than the middle class, and so extensions of the right to vote to the poor will be associated with lower spending. This prediction is tested using plausibly exogenous variation in the extent of the franchise. The results strongly support the theoretical prediction: expenditure increased following relatively small extensions of the franchise, but fell once more than approximately 50% of the adult male population held the right to vote.
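
A hypothetical sketch of the kind of specification that could capture such an inverted-U relationship is given below; the variable names, fixed-effects structure, and simulated data are placeholders, not the chapter's actual model, data, or source of exogenous variation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder sketch: spending rises with small extensions of the franchise and
# falls once the enfranchised share of adult males passes roughly 50%, which a
# quadratic term with town and year fixed effects can pick up.

rng = np.random.default_rng(3)
n_towns, n_years = 50, 10
df = pd.DataFrame({
    "town": np.repeat(np.arange(n_towns), n_years),
    "year": np.tile(np.arange(1870, 1870 + n_years), n_towns),
    "franchise_share": rng.uniform(0.1, 0.9, n_towns * n_years),
})
# simulate an inverted-U effect peaking near a 50% franchise, plus noise
df["sanitation_spend"] = (2.0 * df["franchise_share"]
                          - 2.0 * df["franchise_share"] ** 2
                          + rng.normal(0, 0.1, len(df)))

# quadratic specification with town and year fixed effects
model = smf.ols("sanitation_spend ~ franchise_share + I(franchise_share**2)"
                " + C(town) + C(year)", data=df).fit()
print(model.params.filter(like="franchise"))
```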

Chapter 4 tests whether the sanitary expenditure was effective in combating the high mortality rates that followed the Industrial Revolution. The results show that increases in urban expenditure on sanitation (water supply, sewer systems, and streets) were extremely effective in reducing mortality from cholera and diarrhea.

Relevance:

20.00%

Publisher:

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system. (1) Obtaining an OCT image is not easy: it either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects to study. (2) Interpretation of an OCT image is also hard, and this challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, while pixel-level interpretation is simply unrealistic, because we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT uses infrared light for illumination to remain noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we dictate that structure at the start of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool is a combination of a thorough understanding of the signal formation process, a clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system for determining photon-mesh intersections, and parallel computation of the different A-scans that make up a full OCT image, among other programming and mathematical tricks, all of which are explained in detail later in the thesis.
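
The core of such a simulator can be sketched as a photon random walk with exponentially distributed free paths and Henyey-Greenstein scattering, two standard ingredients of tissue-optics Monte Carlo codes. The sketch below omits the importance sampling, photon splitting, voxel-mesh intersection tests, and parallel A-scan machinery described above, and its optical parameters are placeholders rather than values from the thesis.

```python
import numpy as np

# Minimal sketch of a photon random walk in scattering tissue: exponentially
# distributed free paths and Henyey-Greenstein (HG) scattering. Placeholder
# parameters; not the thesis's simulator.

rng = np.random.default_rng(4)

mu_s, g = 10.0, 0.9          # scattering coefficient (1/mm) and anisotropy (assumed)

def henyey_greenstein_cos(g):
    """Sample the cosine of the scattering angle from the HG phase function."""
    xi = rng.random()
    if g == 0.0:
        return 2.0 * xi - 1.0
    tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - tmp * tmp) / (2.0 * g)

def scatter(direction, g):
    """Rotate the unit direction by an HG-sampled polar angle and a uniform azimuth."""
    cos_t = henyey_greenstein_cos(g)
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    phi = 2.0 * np.pi * rng.random()
    # build an orthonormal frame around the current direction
    w = direction
    a = np.array([1.0, 0.0, 0.0]) if abs(w[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(w, a)
    u /= np.linalg.norm(u)
    v = np.cross(w, u)
    new_dir = sin_t * np.cos(phi) * u + sin_t * np.sin(phi) * v + cos_t * w
    return new_dir / np.linalg.norm(new_dir)

def launch_photon(max_depth=2.0, max_steps=1000):
    """Track one photon until it exits through the surface (z < 0), passes the
    region of interest, or runs out of steps; return its optical path length."""
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])    # launched straight into the tissue
    path = 0.0
    for _ in range(max_steps):
        step = -np.log(rng.random()) / mu_s  # exponential free path
        pos = pos + step * direction
        path += step
        if pos[2] < 0.0 or pos[2] > max_depth:
            return path
        direction = scatter(direction, g)
    return path

paths = [launch_photon() for _ in range(1000)]
print(f"mean optical path length: {np.mean(paths):.2f} mm")
```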

Next we turn to the inverse problem: given an OCT image, predict or reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing an effectively unlimited amount of data in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model, trained specifically for that particular structure, which predicts the lengths of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can be used to further improve the performance.
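
The two-stage prediction scheme can be illustrated with a small stand-in pipeline: a classifier first predicts the structure type, and a structure-specific regressor then predicts the layer boundaries. The scikit-learn models and the simulated data below are placeholders, not the thesis's actual architecture, features, or hyperparameters.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Stand-in sketch of the "committee of experts" idea: classify the structure
# type of a (simulated) A-scan, then route it to a regressor trained only on
# that structure type to predict layer boundaries.

rng = np.random.default_rng(5)

n_samples, n_pixels, n_types = 2000, 128, 3
X = rng.normal(size=(n_samples, n_pixels))              # simulated A-scan intensities
structure_type = rng.integers(0, n_types, n_samples)    # e.g. number of layers present
layer_depths = rng.uniform(0, 1, size=(n_samples, 4))   # ground truth from the simulator

# Stage 1: classify the structure type of the image.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, structure_type)

# Stage 2: one regressor per structure type, trained only on matching samples.
regressors = {}
for t in range(n_types):
    mask = structure_type == t
    reg = RandomForestRegressor(n_estimators=100, random_state=0)
    reg.fit(X[mask], layer_depths[mask])
    regressors[t] = reg

# Prediction: route an unseen A-scan through the classifier, then the expert.
x_new = rng.normal(size=(1, n_pixels))
t_pred = int(clf.predict(x_new)[0])
depths_pred = regressors[t_pred].predict(x_new)[0]
print(f"predicted structure type {t_pred}, layer boundaries {np.round(depths_pred, 3)}")
```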

It is worth pointing out that solving the inverse problem automatically improves the effective imaging depth, since the lower half of an OCT image (i.e., the greater depths), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals that make up the lower half of the image are weak, messy, and uninterpretable to the human eye, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is another case in which Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis represents not only a successful but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting such a task requires a large number of fully annotated OCT images (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.