839 results for Trigonometric interpolation
Abstract:
In sediment core AMK4-316 (460 cm long), a climatostratigraphy covering a time interval of about 145 ka is established on the basis of radiocarbon, oxygen-isotope, and lithological data. Factor analysis and spline interpolation applied to data on the distribution of planktic foraminifera species allowed reconstruction of average annual and seasonal temperatures and salinity at the surface and at 100 m depth. The optimum of the Last Interglaciation (5e) is characterized by maximal temperatures, low amplitudes of seasonal fluctuations, and an increased thickness of the upper homogeneous layer. The glacial hydrological regime arose here 115 ka ago. Coolings preceded the corresponding events of the global continental glaciation. Minimal average annual temperatures (4-4.5°C) are reconstructed for 47-45, 42, 36, 29-30, and 10 ka. For the 50-30 ka interval, numerous strong temperature fluctuations reflecting migrations of the polar front are established. Maximal differences in salinity between the surface and 100 m depth, showing the influence of meltwater, occurred at the beginning of the deglaciations (135 and 20 ka) and arose repeatedly in the 50-30 ka interval. The Last Glacial Maximum (18 ka) is characterized by the lowest salinity but not by a peak of low temperatures at the surface. Surface temperature remained lowered until 10 ka. The average annual surface temperature of the Holocene optimum was 2°C above the modern one and 2°C below that of the Interglaciation optimum (5e); the thickness of the upper homogeneous layer exceeded 100 m.
Abstract:
The Antarctic Pack Ice Seal (APIS) Program was initiated in 1994 to estimate the abundance of four species of Antarctic phocids: the crabeater seal Lobodon carcinophaga, Weddell seal Leptonychotes weddellii, Ross seal Ommatophoca rossii and leopard seal Hydrurga leptonyx, and to identify ecological relationships and habitat use patterns. The Atlantic sector of the Southern Ocean (the eastern sector of the Weddell Sea) was surveyed by research teams from Germany, Norway and South Africa using a range of aerial methods over five austral summers between 1996-1997 and 2000-2001. We used these observations to model densities of seals in the area, taking into account haul-out probabilities, survey-specific sighting probabilities and covariates derived from satellite-based ice concentrations and bathymetry. These models predicted the total abundance over the area bounded by the surveys (30°W and 10°E). In this sector of the coast, we estimated seal abundances of 514 (95% CI 337-886) × 10³ crabeater seals, 60.0 (43.2-94.4) × 10³ Weddell seals and 13.2 (5.50-39.7) × 10³ leopard seals. The crabeater seal densities, approximately 14,000 seals per degree longitude, are similar to estimates obtained by surveys in the Pacific and Indian sectors by other APIS researchers. Very few Ross seals were observed (24 in total), leading to a conservative estimate of 830 (119-2894) individuals over the study area. These results provide an important baseline against which to compare future changes in seal distribution and abundance.
Abstract:
A first-principles method is applied to find the intra- and intervalley n-type carrier scattering rates for substitutional carbon in silicon. The method builds on a previously developed first-principles approach, introducing an interpolation technique to determine the intravalley scattering rates. Intravalley scattering is found to be the dominant alloy-scattering process in Si1-xCx, followed by g-type intervalley scattering. Mobility calculations show that alloy scattering due to substitutional C alone cannot account for the experimentally observed degradation of the mobility. We show that incorporating additional charged-impurity scattering due to electrically active interstitial C complexes models this residual resistivity well.
Abstract:
The full-scale base-isolated structure studied in this dissertation is the only base-isolated building in the South Island of New Zealand. It sustained hundreds of earthquake ground motions from September 2010 well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by a GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art in methods for evaluating the performance of seismically isolated structures and the effects of soil-structure interaction, by developing new data-processing methodologies to overcome current limitations and by implementing advanced numerical modeling in OpenSees for direct analysis of soil-structure interaction.
This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error methods are required. The method requires a mass matrix, or at least an estimate of the floor masses. A stiffness matrix may be used, but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly-dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise and extracts principal components from the singular value decomposition of this large matrix of linearly-dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. This interpolation step can make use of a reduced-order stiffness matrix, a backward difference matrix or a central difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by a mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil structure interaction model.
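The minimum-curvature interpolation step can be sketched in a simplified 1D form. This is an illustrative reconstruction, not the dissertation's code: the function name, the use of a central second-difference matrix as the curvature operator, and the least-squares solve for the unmeasured floors are assumptions.

```python
import numpy as np

def min_curvature_interp(n, measured_idx, measured_vals):
    """Fill in unmeasured floor values by minimizing the curvature
    energy ||D2 @ u||^2, where D2 is the central second-difference
    matrix, subject to u[measured_idx] = measured_vals."""
    # Central second-difference operator, shape (n-2, n).
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]

    free_idx = [i for i in range(n) if i not in set(measured_idx)]
    # Split the columns of D2 into measured and free parts; minimizing
    # ||D2_m @ y + D2_f @ x||^2 over the free values x is a linear
    # least-squares problem.
    rhs = -D2[:, measured_idx] @ np.asarray(measured_vals, float)
    x, *_ = np.linalg.lstsq(D2[:, free_idx], rhs, rcond=None)

    u = np.empty(n)
    u[measured_idx] = measured_vals
    u[free_idx] = x
    return u
```

Because the curvature operator annihilates linear profiles, measurements lying on a straight line are reproduced exactly; in the dissertation's more general setting the second-difference matrix could be swapped for a reduced-order stiffness, backward-difference, or central-difference matrix.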
Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of a complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions, respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc-Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extracting and interpolating the shear-wave velocity profile from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model of the hospital is checked by comparing the peak floor responses and the force-displacement relations within the isolation system obtained from the OpenSees simulations to the recorded measurements. General explanations and implications, supported by drifts, floor acceleration and displacement responses, and force-displacement relations, are presented to address the effects of soil-structure interaction.
Abstract:
Dynamics of biomolecules over various spatial and time scales are essential for biological functions such as molecular recognition, catalysis and signaling. However, reconstructing biomolecular dynamics from experimental observables requires determining a conformational probability distribution. Unfortunately, these distributions cannot be fully constrained by the limited information available from experiments, making the problem ill-posed in the terminology of Hadamard: it has no unique solution, and multiple or even infinitely many solutions may exist. To overcome this ill-posedness, the problem must be regularized by making assumptions, which inevitably introduce biases into the result.
Here, I present two continuous probability density function approaches to solve an important inverse problem, the RDC trigonometric moment problem. By focusing on interdomain orientations, we reduced the problem to determining a distribution on the 3D rotational space from residual dipolar couplings (RDCs). We derived an analytical equation that relates the alignment tensors of adjacent domains, which serves as the foundation of the two methods. In the first approach, the ill-posed nature of the problem is avoided by introducing a continuous distribution model with a smoothness assumption. To find the optimal distribution, we also designed an efficient branch-and-bound algorithm that exploits the mathematical structure of the analytical solutions; the algorithm is guaranteed to find the distribution that best satisfies the analytical relationship. The method performed well when tested under various levels of experimental noise and when applied to two protein systems. The second approach avoids any distribution model by employing maximum-entropy principles. This 'model-free' approach delivers the least biased result consistent with our state of knowledge. In this approach, the solution is an exponential function of Lagrange multipliers; to determine the multipliers, a convex objective function is constructed, so the maximum-entropy solution can be found easily by gradient-descent methods. Both algorithms can be applied to biomolecular RDC data in general, including data from RNA and DNA molecules.
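The maximum-entropy construction described above (an exponential family in the Lagrange multipliers, fitted by gradient descent on a convex dual objective) can be illustrated on a discrete angular grid. This is a generic sketch, not the thesis code: the feature choice (first trigonometric moments), grid size, learning rate, and function names are all assumptions.

```python
import numpy as np

def maxent_distribution(F, c, lr=0.5, steps=2000):
    """Maximum-entropy distribution on a discrete grid.
    F: (m, k) values of the k feature functions at each grid point.
    c: (k,) target moments E_p[f_k] = c_k.
    The solution has the form p = exp(F @ lam) / Z; the multipliers
    lam minimize the convex dual  log Z(lam) - lam . c,
    whose gradient is E_lam[f] - c."""
    lam = np.zeros(F.shape[1])
    for _ in range(steps):
        w = np.exp(F @ lam)
        p = w / w.sum()
        grad = F.T @ p - c      # E_lam[f] - c
        lam -= lr * grad
    return p, lam

# Example: distribution on the circle constrained by E[cos] = 0.3,
# E[sin] = 0; the maximum-entropy answer is a von Mises-like density.
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
F = np.column_stack([np.cos(theta), np.sin(theta)])
p, lam = maxent_distribution(F, np.array([0.3, 0.0]))
```

The dual is convex because its Hessian is the covariance matrix of the features under p, so plain gradient descent suffices here; the fitted p reproduces the target moments while making no further assumptions.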
Abstract:
Uncertainty quantification (UQ) is both an old and a new concept. The current novelty lies in the interaction and synthesis of mathematical models, computer experiments, statistics, field/real experiments, and probability theory, with particular emphasis on large-scale simulations by computer models. The challenges come not only from the complexity of the scientific questions but also from the sheer size of the information. The focus of this thesis is to provide statistical models that are scalable to the massive data produced in computer experiments and real experiments, through fast and robust statistical inference.
Chapter 2 provides a practical approach for simultaneously emulating/approximating a massive number of functions, with an application to hazard quantification for the Soufrière Hills volcano on the island of Montserrat. Chapter 3 addresses another massive-data problem, in which the number of observations of a function is large: an exact algorithm that runs in linear time is developed for the interpolation of methylation levels. Chapters 4 and 5 both concern robust inference for the models. Chapter 4 proposes a new robustness criterion for parameter estimation, and several inference methods are shown to satisfy it. Chapter 5 develops a new prior that satisfies additional criteria and is therefore proposed for practical use.
Abstract:
This dissertation is the first comprehensive and synthetic study of the Irish presentation and legends of Longinus. Longinus was the soldier at the crucifixion who pierced Christ with a spear, who believed and, according to some texts, was healed of his blindness by the blood and water issuing from the wound, and who later was martyred for his belief. In my thesis I survey the knowledge and use of the legend of Longinus in Ireland across genres and over time. Sources used for the analyses include iconographic representations of the spear-bearer in manuscripts, metalwork and stone, and textual representations of the figure of Longinus ranging over the history of Irish literature from the early medieval to the early modern period, in both Irish and Hiberno-Latin texts. The thesis consists of four core chapters: analyses of the presentations of Longinus in early-medieval Irish texts and in the iconographic tradition (I, II); editions of the extant Irish and the earliest surviving Latin texts of the Passion of Longinus and of a little-known short tract describing the healing of Longinus from the Leabhar Breac (III); and a discussion of the later medieval Irish popular traditions (IV). Particular attention is given to two intriguing peculiarities of the Irish tradition. Most early Irish Gospel books feature an interpolation of the episode of the spear-thrust at Matthew 27:49, directly preceding the death of Christ, implying its reading as the immediate cause of death. The image of Longinus as 'iugulator Christi' ('killer of Christ') appears to have been crucial for the development of the legend. Also, the blindness motif, which rarely features in other European popular traditions until the twelfth century, is attested as early as the eighth century in Ireland, which has led some scholars to suggest a potential Irish origin.
Abstract:
In the engineering design of structural shapes, flat-plate analysis results can be generalized to predict the behavior of complete structural shapes. Accordingly, the purpose of this project is to analyze a thin flat plate under conductive heat transfer and to simulate the temperature distribution, thermal stresses, total displacements, and buckling deformations. The current approach in such cases has been the Finite Element Method (FEM), whose basis is the construction of a conforming mesh. In contrast, this project uses the mesh-free Scan Solve Method, which eliminates the meshing limitation by using a non-conforming mesh. I implemented this modeling process by developing numerical algorithms and software tools to model thermally induced buckling. In addition, a convergence analysis was performed, and the results were compared with FEM. In conclusion, the results demonstrate that the method yields solutions similar in quality to those of FEM, but is less computationally time consuming.
Abstract:
Daily records of nine meteorological variables covering the interval 1961-2013 were used to create a state-of-the-art homogenized climatic dataset for Romania at a spatial resolution of 0.1°. All meteorological stations with complete data records, as well as stations with up to 30% missing data, were used for the following variables: air pressure (150 stations); minimum, maximum, and average air temperature (150 stations); soil temperature (127 stations); precipitation (188 stations); sunshine hours (135 stations); cloud cover (104 stations); and relative humidity (150 stations). For each parameter, the data series were first homogenized with the software MASH (Multiple Analysis of Series for Homogenization); the series were then gridded by means of the software MISH (Meteorological Interpolation based on Surface Homogenized Data).
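To illustrate the gridding step, here is a minimal inverse-distance-weighting sketch. This is a generic stand-in, not MISH itself (MISH additionally exploits long-term station statistics); the function name, distance power, and example values are all hypothetical.

```python
import numpy as np

def idw_grid(st_lon, st_lat, st_val, grid_lon, grid_lat, power=2.0):
    """Inverse-distance-weighted gridding of station values onto a
    regular lon/lat grid (an illustrative stand-in for MISH)."""
    glon, glat = np.meshgrid(grid_lon, grid_lat)   # shape (nlat, nlon)
    out = np.empty(glon.shape)
    for i in np.ndindex(glon.shape):
        d2 = (st_lon - glon[i]) ** 2 + (st_lat - glat[i]) ** 2
        if d2.min() < 1e-12:
            # Grid node coincides with a station: take its value.
            out[i] = st_val[np.argmin(d2)]
        else:
            w = d2 ** (-power / 2.0)
            out[i] = (w @ st_val) / w.sum()
    return out
```

Because the output is a convex combination of station values, the gridded field always stays within the observed range, one reason the series are homogenized before gridding rather than after.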
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The highly dynamic nature of some sandy shores, with continuous morphological change, requires the development of efficient and accurate methodological strategies for coastal hazard assessment and morphodynamic characterisation. During the past decades, the general methodological approach to establishing coastal monitoring programmes was based on photogrammetry or classical geodetic techniques. With the advent of new space-based and airborne geodetic techniques, new methodologies were introduced into coastal monitoring programmes. This paper describes the development of a monitoring prototype based on the global positioning system (GPS). The prototype has a GPS multi-antenna mounted on a fast surveying platform, a land vehicle suitable for driving on sand (a four-wheel quad). The system was conceived to survey a network of shore profiles along sandy shore stretches (subaerial beach) extending for several kilometres, from which high-precision digital elevation models can be generated. An analysis of the accuracy and precision of several differential GPS kinematic methodologies is presented. The development of an adequate survey methodology is the first step in morphodynamic shore characterisation and in coastal hazard assessment. The sampling method and the computational interpolation procedures are important steps in producing reliable three-dimensional surface maps that are as realistic as possible. The quality of several interpolation methods used to generate grids was tested in areas with data gaps. The results allow us to conclude that, with the developed survey methodology, it is possible to survey sandy shore stretches over spatial scales of kilometres with a vertical accuracy of better than 0.10 m in the final digital elevation models.
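The gap-testing idea, withholding data and scoring each interpolation method against the withheld values, can be sketched for a single cross-shore profile. This is a generic illustration, not the paper's procedure: the profile shape, gap location, and the two methods compared (linear vs nearest-neighbour) are assumptions.

```python
import numpy as np

def gap_rmse(x, z, gap_mask, method):
    """RMSE of an interpolation method on deliberately withheld points.
    x, z: full profile; gap_mask: True where points are withheld."""
    xk, zk = x[~gap_mask], z[~gap_mask]        # points kept for fitting
    if method == "linear":
        zhat = np.interp(x[gap_mask], xk, zk)
    elif method == "nearest":
        idx = np.abs(x[gap_mask][:, None] - xk[None, :]).argmin(axis=1)
        zhat = zk[idx]
    else:
        raise ValueError(method)
    return float(np.sqrt(np.mean((zhat - z[gap_mask]) ** 2)))

# Hypothetical smooth beach profile with a 20 m data gap.
x = np.linspace(0.0, 100.0, 201)
z = 3.0 * np.exp(-x / 40.0)
gap = (x > 40.0) & (x < 60.0)
```

On a smooth profile, linear interpolation across the gap stays well under the paper's 0.10 m tolerance while nearest-neighbour does not, which is the kind of comparison the gap test is designed to expose.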
Abstract:
Optical communication systems with advanced modulation formats are currently one of the most important research topics in optical communications. This research is driven by the demand for higher data transmission rates. In this thesis, we investigate efficient techniques for advanced modulation with coherent detection, and for orthogonal frequency-division multiplexing (OFDM) and discrete multitone (DMT) with direct and coherent detection, in order to improve the performance of optical networks. In the first part, we examine digital filter backpropagation (DFBP) as a simple technique for mitigating semiconductor optical amplifier (SOA) nonlinearity in coherent detection systems. For the first time, we experimentally demonstrate the effectiveness of DFBP in compensating SOA-induced nonlinearities in a single-carrier 16-QAM coherent detection system. We compare the performance of DFBP with the fourth-order Runge-Kutta method, and we examine the sensitivity of DFBP performance to its parameters. We then propose a new parameter-estimation method for DFBP. Finally, we demonstrate the transmission of 16-QAM signals at 22 Gbaud over 80 km of optical fiber using the proposed parameter-estimation technique for DFBP. In the second part, we focus on techniques to improve the performance of optical OFDM systems, examining both coherent optical OFDM (CO-OFDM) and direct-detection optical OFDM (DDO-OFDM). First, we propose a combination of clipping and predistortion to compensate for the nonlinear distortions of the CO-OFDM transmitter. We use piecewise-linear interpolation (PLI) to characterize the transmitter nonlinearity.
At the transmitter, we use the inverse of the PLI estimate to compensate for the nonlinearities induced at the CO-OFDM transmitter. Second, we design optimized irregular constellations for short-reach DDO-OFDM systems, considering two channel noise models. We experimentally demonstrate 100 Gb/s+ OFDM/DMT with direct detection using the optimized QAM constellations. In the third part, we propose a passive optical network (PON) architecture with DDO-OFDM for the downlink and CO-OFDM for the uplink. We examine two scenarios for the frequency allocation and modulation format of the signals. We identify the main limiting impairment of the bidirectional PON and offer solutions to minimize its effects.
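The PLI-based predistortion idea can be sketched as follows: characterize the transmitter nonlinearity by piecewise-linear interpolation on a knot grid, then, since the response is monotonic, invert it by swapping the axes of the knot table. The tanh response, knot count, and drive range below are hypothetical stand-ins for the measured device.

```python
import numpy as np

# Hypothetical saturating transmitter nonlinearity (a stand-in for
# the measured CO-OFDM transmitter response).
def g(x):
    return np.tanh(1.5 * np.asarray(x, float))

# PLI characterization of g on a knot grid covering the drive range.
knots = np.linspace(-1.0, 1.0, 21)
g_knots = g(knots)              # monotonically increasing

def predistort(x):
    """Inverse of the PLI estimate: because g is monotonic, swapping
    the axes of the knot table gives a piecewise-linear inverse, so
    g(predistort(x)) is approximately the identity."""
    return np.interp(x, g_knots, knots)
```

Cascading the predistorter with the nonlinearity linearizes the chain over the characterized range; targets outside [g(-1), g(1)] would have to be clipped first, which is where the clipping-plus-predistortion combination in the thesis comes in.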
Abstract:
In this paper, we propose an orthogonal chirp division multiplexing (OCDM) technique for coherent optical communication. OCDM is the principle of orthogonally multiplexing a group of linearly chirped waveforms for high-speed data communication, achieving the maximum spectral efficiency (SE) for chirp spread spectrum in a similar way as orthogonal frequency division multiplexing (OFDM) does for frequency division multiplexing. In coherent optical OCDM (CO-OCDM), the Fresnel transform formulates the synthesis of the orthogonal chirps, and the discrete Fresnel transform (DFnT) realizes CO-OCDM in the digital domain. Because both the Fresnel and Fourier transforms are trigonometric transforms, CO-OCDM can be easily integrated into existing CO-OFDM systems. Analyses and numerical results are provided to investigate the transmission of CO-OCDM signals over optical fibers. Moreover, experiments with a 36-Gbit/s CO-OCDM signal are carried out to validate its feasibility and confirm the analyses. It is shown that CO-OCDM can effectively compensate the dispersion and is more resilient to fading and noise impairments than OFDM.
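A minimal sketch of the DFnT and its orthogonal chirps, assuming one common even-N convention for the transform kernel (the paper's exact normalization may differ):

```python
import numpy as np

def dfnt(N):
    """Discrete Fresnel transform matrix (assumed even-N convention):
    F[m, n] = exp(-j*pi/4)/sqrt(N) * exp(j*pi*(m - n)**2 / N).
    Each row is a linear chirp, and the rows are mutually orthogonal,
    so the matrix is unitary."""
    m = np.arange(N)
    return (np.exp(-1j * np.pi / 4) / np.sqrt(N)
            * np.exp(1j * np.pi * (m[:, None] - m[None, :]) ** 2 / N))

# OCDM multiplexing/demultiplexing: data symbols are spread over the
# orthogonal chirps by the inverse (conjugate-transpose) transform and
# recovered exactly by the forward transform.
N = 64
F = dfnt(N)
rng = np.random.default_rng(0)
sym = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)  # QPSK
tx = F.conj().T @ sym       # synthesis: sum of weighted chirps
rx = F @ tx                 # analysis: recover the symbols
```

Unitarity of the DFnT is what makes the chirps orthogonal at full spectral efficiency, mirroring the role of the DFT in OFDM.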