866 results for Ordinary Least Squares Method


Relevance:

100.00%

Publisher:

Abstract:

In this thesis, some multivariate spectroscopic methods for the analysis of solutions are proposed. Spectroscopy and multivariate data analysis form a powerful combination for obtaining both quantitative and qualitative information, and it is shown how spectroscopic techniques in combination with chemometric data evaluation can be used to obtain rapid, simple and efficient analytical methods. These methods, consisting of spectroscopic analysis, a high level of automation and chemometric data evaluation, can lead to analytical methods with a high analytical capacity, for which the term high-capacity analysis (HCA) is suggested. It is further shown how chemometric evaluation of the multivariate data in chromatographic analyses decreases the need for baseline separation. The thesis is based on six papers, and the chemometric tools used are experimental design, principal component analysis (PCA), soft independent modelling of class analogy (SIMCA), partial least squares regression (PLS) and parallel factor analysis (PARAFAC). The analytical techniques utilised are scanning ultraviolet-visible (UV-Vis) spectroscopy, diode array detection (DAD) used in non-column chromatographic diode array UV spectroscopy, high-performance liquid chromatography with diode array detection (HPLC-DAD) and fluorescence spectroscopy. The proposed methods are exemplified in the analysis of pharmaceutical solutions and serum proteins. In Paper I a method is proposed for determining the content and identity of the active compound in pharmaceutical solutions by means of UV-Vis spectroscopy, orthogonal signal correction and multivariate calibration with PLS and SIMCA classification. Paper II proposes a new method for the rapid determination of pharmaceutical solutions by non-column chromatographic diode array UV spectroscopy, i.e. a conventional HPLC-DAD system without any chromatographic column connected. Paper III investigates the ability of a control sample of known content and identity to diagnose and correct errors in multivariate predictions, which, together with the use of multivariate residuals, can make it possible to use the same calibration model over time. In Paper IV a method is proposed for the simultaneous determination of serum proteins with fluorescence spectroscopy and multivariate calibration. Paper V proposes a method for determining chromatographic peak purity by means of PCA of HPLC-DAD data. In Paper VI PARAFAC is applied to decompose DAD data of some partially separated peaks into pure chromatographic, spectral and concentration profiles.
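As a hedged illustration of the PLS-based multivariate calibration step used throughout these papers, the sketch below fits a PLS model to simulated UV-Vis-like spectra with scikit-learn; the wavelength grid, concentrations and number of components are hypothetical and not taken from the thesis.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
wavelengths = np.linspace(220, 400, 181)                   # hypothetical UV-Vis grid (nm)
pure = np.exp(-0.5 * ((wavelengths - 270.0) / 15.0) ** 2)  # hypothetical pure-component spectrum
conc = rng.uniform(0.5, 1.5, size=60)                      # "true" analyte concentrations
X = np.outer(conc, pure) + 0.01 * rng.normal(size=(60, pure.size))  # noisy mixture spectra

pls = PLSRegression(n_components=2)        # number of latent variables chosen by the analyst/CV
y_cv = cross_val_predict(pls, X, conc, cv=5).ravel()
rmsecv = np.sqrt(np.mean((y_cv - conc) ** 2))
print(f"RMSECV = {rmsecv:.4f}")
```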

Relevance:

100.00%

Publisher:

Abstract:

Understanding the complex relationships between the quantities measured by volcanic monitoring networks and shallow magma processes is a crucial step towards the comprehension of volcanic processes and a more realistic evaluation of the associated hazard. This question is particularly relevant at Campi Flegrei, a quiescent volcanic caldera immediately north-west of Napoli (Italy). The system's activity shows strong fumarolic release and periodic slow ground movement (bradyseism) accompanied by high seismicity. This activity, together with the high population density and the presence of military and industrial buildings, makes Campi Flegrei one of the areas with the highest volcanic hazard in the world. In this context, my thesis has focused on the magma dynamics due to the refilling of shallow magma chambers, and on the geophysical signals detectable by seismic, deformation and gravimetric monitoring networks that are associated with these phenomena. Indeed, the refilling of magma chambers is a process that frequently occurs just before a volcanic eruption; therefore, the ability to identify these dynamics by means of recorded signal analysis is important for evaluating the short-term volcanic hazard. The space-time evolution of the dynamics due to the injection of new magma into the magma chamber has been studied by performing numerical simulations with, and implementing additional features in, the code GALES (Longo et al., 2006), recently developed and still being upgraded at the Istituto Nazionale di Geofisica e Vulcanologia in Pisa (Italy). GALES is a finite element code based on a two-dimensional, transient physico-mathematical model able to treat fluids as multiphase homogeneous mixtures, from compressible to incompressible. The fundamental equations of mass, momentum and energy balance are discretised in both time and space using the Galerkin Least-Squares and discontinuity-capturing stabilisation technique. The physical properties of the mixture are computed as a function of the local conditions of magma composition, pressure and temperature. The model features make it possible to study a broad range of phenomena characterising pre- and syn-eruptive magma dynamics in a wide domain, from the volcanic crater to deep magma feeding zones. The study of the displacement field associated with the simulated fluid dynamics has been carried out with a numerical code developed by the geophysical group at University College Dublin (O'Brien and Bean, 2004b), with whom we started a very profitable collaboration. In this code, seismic wave propagation in heterogeneous media with a free surface (e.g. the Earth's surface) is simulated using a discrete elastic lattice in which particle interactions are governed by Hooke's law. This method makes it possible to account for medium heterogeneities and complex topography. The initial and boundary conditions for the simulations have been defined within a coordinated project (INGV-DPC 2004-06 V3_2 "Research on active volcanoes, precursors, scenarios, hazard and risk - Campi Flegrei"), to which this thesis contributes and in which many researchers with expertise on Campi Flegrei in the volcanological, seismic, petrological and geochemical fields collaborate. Numerical simulations of magma and rock dynamics have been coupled as described in the thesis. The first part of the thesis consists of a parametric study aimed at understanding the effect of the presence of carbon dioxide in magma on the convection dynamics. Indeed, the presence of this volatile was relevant in many Campi Flegrei eruptions, including some commonly considered as references for future activity of this volcano. A set of simulations has been performed considering an elliptical, compositionally uniform magma chamber refilled from below by a magma with a volatile content equal to or different from that of the resident magma. To do this, a multicomponent non-ideal magma saturation model (Papale et al., 2006), which considers the simultaneous presence of CO2 and H2O, has been implemented in GALES. Results show that the presence of CO2 in the incoming magma increases its buoyancy, promoting convection and mixing. The simulated dynamics produce pressure transients with frequencies and amplitudes within the sensitivity range of modern geophysical monitoring networks such as the one installed at Campi Flegrei. In the second part, simulations more closely related to the Campi Flegrei volcanic system have been performed. The simulated system has been defined on the basis of conditions consistent with the bulk of knowledge of Campi Flegrei, and in particular of the Agnano-Monte Spina eruption (4100 B.P.), commonly considered as a reference for a future high-intensity eruption in this area. The magmatic system has been modelled as a long dyke refilling a small shallow magma chamber; magmas with trachytic and phonolitic compositions and variable H2O and CO2 contents have been considered. The simulations have been carried out changing the conditions of magma injection, the system configuration (magma chamber geometry, dyke size) and the composition and volatile content of the resident and refilling magmas, in order to study the influence of these factors on the simulated dynamics. The simulation results make it possible to follow each step of the ascent of the gas-rich magma into the denser magma, highlighting the details of magma convection and mixing. In particular, the presence of more CO2 in the deep magma results in more efficient and faster dynamics. Through these simulations the variation of the gravimetric field has been determined. Afterwards, the space-time distribution of stress resulting from the numerical simulations has been used as a boundary condition for simulations of the displacement field imposed by the magmatic dynamics on the rocks. The properties of the simulated domain (rock density, P and S wave velocities) have been based on literature data from active and passive tomographic experiments, obtained through a collaboration with A. Zollo at the Department of Physics of the Federico II University in Napoli. The elasto-dynamic simulations make it possible to determine the variations of the space-time distribution of deformation and the seismic signal associated with the studied magmatic dynamics. In particular, the results show that these dynamics induce deformations similar to those measured at Campi Flegrei and seismic signals with energy concentrated in the frequency bands typically observed in volcanic areas. The present work shows that an approach based on the solution of the equations describing the physics of processes within a magmatic fluid and the surrounding rock system is able to recognise and describe the relationships between geophysical signals detectable at the surface and deep magma dynamics. Therefore, the results suggest that the combined study of geophysical data and information from numerical simulations may, in the near future, allow a more efficient evaluation of the short-term volcanic hazard.
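The discrete elastic lattice mentioned above can be illustrated with a minimal one-dimensional sketch: point masses coupled by Hooke's-law springs, advanced with an explicit time stepper. All parameter values are hypothetical, and the actual code of O'Brien and Bean handles 2-D/3-D heterogeneous media and topography.

```python
import numpy as np

# Toy 1-D elastic lattice: point masses coupled by Hooke's-law springs,
# advanced with a simple explicit (semi-implicit Euler) time stepper.
n, dt, steps = 200, 1e-3, 2000           # hypothetical lattice size, time step, step count
k = np.full(n - 1, 100.0)                # spring stiffnesses; varying them models heterogeneity
m = np.ones(n)                           # particle masses
u = np.zeros(n)                          # displacements
v = np.zeros(n)                          # velocities
u[90:110] = np.hanning(20)               # initial displacement pulse ("source")

for _ in range(steps):
    f = np.zeros(n)
    s = k * (u[1:] - u[:-1])             # Hooke's law: force proportional to relative displacement
    f[:-1] += s
    f[1:] -= s                           # chain ends are force-free, a crude "free surface"
    v += dt * f / m
    u += dt * v                          # the pulse splits and propagates along the chain
```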

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a finite element method for pollutant transport with several pollutant sources. An Eulerian convection-diffusion-reaction model is used to simulate the pollutant dispersion. The discretization of the different sources makes it possible to impose the emissions as boundary conditions. The Eulerian description can deal with the coupling of several plumes. An adaptive stabilized finite element formulation, specifically least-squares, with Crank-Nicolson temporal integration is proposed to solve the problem. A splitting scheme has been used to treat the transport and the reaction separately. A mass-consistent model has been used to compute the wind field of the problem…
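As a rough illustration of the Crank-Nicolson temporal integration, the sketch below uses a simple 1-D finite-difference discretization rather than the paper's adaptive least-squares finite element formulation; the wind speed, diffusivity and grid are made up.

```python
import numpy as np

# Simplified 1-D convection-diffusion solved with Crank-Nicolson time integration.
# Finite differences are used here purely for brevity; all parameters are illustrative.
nx, L = 200, 1.0
dx = L / (nx - 1)
u_adv, D, dt = 1.0, 5e-3, 2e-3                 # wind speed, diffusivity, time step
x = np.linspace(0.0, L, nx)
c = np.exp(-((x - 0.2) / 0.05) ** 2)           # initial pollutant plume

# Spatial operator A: A @ c approximates -u dc/dx + D d2c/dx2 (central differences).
main = -2.0 * D / dx**2 * np.ones(nx)
upper = (D / dx**2 - u_adv / (2 * dx)) * np.ones(nx - 1)
lower = (D / dx**2 + u_adv / (2 * dx)) * np.ones(nx - 1)
A = np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)
A[0, :] = 0.0
A[-1, :] = 0.0                                 # keep boundary values fixed (Dirichlet)

I = np.eye(nx)
M_left, M_right = I - 0.5 * dt * A, I + 0.5 * dt * A   # (I - dt/2 A) c_new = (I + dt/2 A) c_old
for _ in range(200):
    c = np.linalg.solve(M_left, M_right @ c)
```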

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, two physical flow experiments on nonwoven fabrics are investigated, which are intended to identify unknown hydraulic parameters of the material, such as the diffusivity or conductivity function, from measurement data. The physical and mathematical modelling of these experiments leads to a Cauchy-Dirichlet problem with a free boundary for the degenerate parabolic Richards equation in its saturation formulation, the so-called direct problem. From knowledge of the free boundary of this problem, the nonlinear diffusivity coefficient of the differential equation is to be reconstructed. For this inverse problem we set up an output least-squares functional and, to minimise it, use iterative regularisation methods such as the Levenberg-Marquardt method and the IRGN method, based on a parametrisation of the coefficient space by quadratic B-splines. For the direct problem we prove, among other things, the existence and uniqueness of the solution of the Cauchy-Dirichlet problem as well as the existence of the free boundary. We then formally reduce the derivative of the free boundary with respect to the coefficient, which is needed for the numerical reconstruction method, to a linear degenerate parabolic boundary value problem. We describe the numerical realisation and implementation of our reconstruction method and finally present reconstruction results for synthetic data.
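A minimal sketch of the output-least-squares idea, assuming a toy scalar forward model in place of the free-boundary Richards-equation problem: SciPy's Levenberg-Marquardt solver recovers a diffusivity-like parameter from noisy synthetic observations.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy output-least-squares identification: recover a diffusivity-like parameter "a"
# from noisy observations of a simple model output. The forward model below is
# hypothetical and stands in for the thesis's free-boundary setting.
rng = np.random.default_rng(0)
t = np.linspace(0.1, 2.0, 40)

def model_output(a, t):
    return np.sqrt(a * t)                      # e.g. a wetting front advancing like sqrt(a*t)

a_true = 0.8
data = model_output(a_true, t) + 0.01 * rng.normal(size=t.size)

def residuals(p):
    return model_output(p[0], t) - data        # output-least-squares residual

fit = least_squares(residuals, x0=[0.3], method="lm")   # Levenberg-Marquardt
print("recovered a =", fit.x[0])
```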

Relevance:

100.00%

Publisher:

Abstract:

The basic principles and possibilities of surface characterisation by means of ToF-SIMS (time-of-flight secondary ion mass spectrometry) are presented using selected examples from an ongoing collaborative research project funded by the BMBF (funding code 03N8022A) on the nanofunctionalisation of interfaces. One focus within the project is on non-closed layer systems, which provide novel functional surfaces either via domain structures or via a defined single functionalisation. Using the highly surface-sensitive ToF-SIMS method, together with the possibility of graphically mapping lateral molecular-ion distributions on functionalised surfaces, information about the structure and coverage density of the functional layer can be obtained. The combination of the ToF-SIMS experiment with a multivariate algorithm (partial least squares, PLS) offers an interesting possibility for the quantitative and simultaneous determination of surface properties (elemental and molecular concentrations as well as contact-angle values). Since the ToF-SIMS spectrum of a plasma-functionalised surface generally contains a large number of different fragment signals, a simple one-dimensional correlation (e.g. CF3 fragment intensity ↔ CF3 concentration) leaves most of the information contained in the spectrum unused. Given the large number of atomic and molecular signals that are representative of the chemical structure of the analysed surfaces, it makes sense to use this wealth of information for the quantification of surface properties (elemental concentrations, contact angles, etc.). In addition, this method allows a quantitative determination of surface properties on regions only a few µm in size. This is advantageous for investigations of chemically structured surfaces, since for many applications the size of the structuring lies in the range of several µm. The successful use of multivariate models is demonstrated with an example from the biomedical field. In this experiment, human connective-tissue cells (fibroblasts) and pancreatic cells were cultivated on plasma-functionalised surfaces in order to investigate the influence of the functionalisation on cell growth. The plasma-treated surfaces were chemically structured using TEM grids with µm-sized openings, and the growth behaviour of the cells was observed. With the help of the multivariate models, quantitative values (concentrations and contact-angle values) can be assigned to each of these µm-sized regions, contributing to the interpretation of the cell growth behaviour.

Relevance:

100.00%

Publisher:

Abstract:

This thesis addresses the problem of scaling reinforcement learning to high-dimensional and complex tasks. Reinforcement learning here denotes a class of learning methods based on approximate dynamic programming that finds application particularly in artificial intelligence and can be used for the autonomous control of simulated agents or real hardware robots in dynamic and uncertain environments. To this end, a function is determined from samples by regression that solves an "optimality equation" (Bellman) and from which approximately optimal decisions can be derived. A major hurdle is the dimensionality of the state space, which is often high and therefore not easily accessible to traditional grid-based approximation schemes. The goal of this thesis is to make reinforcement learning applicable to, in principle, arbitrarily high-dimensional problems by means of non-parametric function approximation (more precisely, regularisation networks). Regularisation networks are a generalisation of ordinary basis-function networks in which the sought solution is parametrised by the data, so that the explicit choice of nodes/basis functions is avoided and the "curse of dimensionality" can be circumvented for high-dimensional inputs. At the same time, regularisation networks are also linear approximators, which are technically easy to handle and for which the existing convergence results for reinforcement learning remain valid (unlike, for example, feed-forward neural networks). Against all these theoretical advantages, however, stands a very practical problem: the computational cost of using regularisation networks inherently scales as O(n^3), where n is the number of data points. This is particularly problematic because in reinforcement learning the learning process takes place online -- the samples are generated by an agent/robot while it interacts with the environment. Adjustments to the solution must therefore be made immediately and with little computational effort. The contribution of this thesis is accordingly divided into two parts: In the first part we formulate an efficient learning algorithm for regularisation networks for solving general regression tasks, tailored specifically to the requirements of online learning. Our approach is based on recursive least-squares, but can insert not only new data but also new basis functions into the existing model at constant cost per step. This is made possible by the "subset of regressors" approximation, whereby the kernel is approximated using a strongly reduced selection of the training data, and by a greedy selection procedure that picks these basis elements directly from the data stream at run time. In the second part we transfer this algorithm to approximate policy evaluation via least-squares-based temporal-difference learning and integrate this building block into a complete system for the autonomous learning of optimal behaviour. Altogether, we develop a highly data-efficient method that is particularly suitable for learning problems from robotics with continuous, high-dimensional state spaces and stochastic state transitions. In doing so, we do not rely on a model of the environment, operate largely independently of the dimension of the state space, achieve convergence after relatively few agent-environment interactions, and, thanks to the efficient online algorithm, can also operate in time-critical real-time applications. We demonstrate the performance of our approach on two realistic and complex application examples: the RoboCup keepaway problem and the control of a (simulated) octopus tentacle.
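A hedged sketch of the recursive least-squares building block the first part relies on, in its textbook form with a fixed feature dimension (the thesis's subset-of-regressors kernel variant and online basis selection are not shown); all values are illustrative.

```python
import numpy as np

# Textbook recursive least-squares (RLS): each new sample updates the weights in
# O(d^2) time, d being the (fixed) feature dimension.
d = 5
lam = 1.0                        # assumed regularisation strength
w = np.zeros(d)                  # weight vector
P = np.eye(d) / lam              # inverse of the regularised Gram matrix

def rls_update(w, P, phi, y):
    """One RLS step for a target y with feature vector phi."""
    Pphi = P @ phi
    k = Pphi / (1.0 + phi @ Pphi)        # gain vector
    w = w + k * (y - w @ phi)            # correct the prediction error
    P = P - np.outer(k, Pphi)            # rank-one correction of the inverse Gram matrix
    return w, P

rng = np.random.default_rng(0)
w_true = rng.normal(size=d)
for _ in range(500):                     # simulated online data stream
    phi = rng.normal(size=d)
    y = w_true @ phi + 0.01 * rng.normal()
    w, P = rls_update(w, P, phi, y)
print("weights recovered:", np.allclose(w, w_true, atol=0.05))
```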

Relevance:

100.00%

Publisher:

Abstract:

Rural tourism is a relatively new product in the process of diversification of the rural economy in the Republic of Macedonia. This study used desk research and life-story interviews of rural tourism entrepreneurs as a qualitative research method to identify the prevalent factors influencing success. A further quantitative analysis was applied in order to measure the strength of influence of the identified success factors. The primary data for the quantitative research were gathered using a telephone questionnaire composed of 37 questions with a 5-point Likert scale. The data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM) in SmartPLS 3.1.6. Results indicated that human capital, social capital, entrepreneurial personality and the external business environment are the predominant influential success factors. However, human capital has a non-significant direct effect on success (p = 0.493); the effect is instead indirect, with a high level of partial mediation through entrepreneurial personality as mediator (VAF = 73%). The personality of the entrepreneur, social capital and the business environment have direct positive effects on entrepreneurial success (p = 0.001, 0.003 and 0.045, respectively). Personality also mediates the positive effect of social capital on entrepreneurial success (VAF = 28%). Contrary to theory, the data showed no interaction effect of social and human capital on entrepreneurial success. This research suggests that rural tourism accommodation entrepreneurs could be more successful if there were increased support for the development of social capital in the form of conservation of cultural heritage and natural attractions. Priority should be given to finding ways to encourage and support the establishment of formal and informal associations of entrepreneurs in order to improve the conditions for management and marketing of the sector. Special support for family businesses in the early stages of operation would have a particularly positive impact on the success of rural tourism. Local infrastructure, access to financial instruments, destination marketing and entrepreneurial personality have a positive effect on success.

Relevance:

100.00%

Publisher:

Abstract:

In this work we study a model for breast image reconstruction in Digital Tomosynthesis, a non-invasive and non-destructive method for the three-dimensional visualization of the inner structures of an object, in which data acquisition consists of measuring a limited number of low-dose two-dimensional projections of the object by moving a detector and an X-ray tube around the object within a limited angular range. The problem of reconstructing 3D images from the projections provided by Digital Tomosynthesis is an ill-posed inverse problem, which leads to a minimization problem with an objective function that contains a data-fitting term and a regularization term. The contribution of this thesis is to use techniques from compressed sensing, in particular replacing the standard least squares data-fitting problem with the problem of minimizing the 1-norm of the residuals, and using the Total Variation (TV) as regularization term. We tested two different algorithms: a new alternating minimization algorithm (ADM) and a version of the more standard scaled projected gradient algorithm (SGP) that involves the 1-norm. We performed some experiments and analysed the performance of the two methods, comparing relative errors, numbers of iterations, computation times and the quality of the reconstructed images. In conclusion, we found that the 1-norm and the Total Variation are valid tools in the formulation of the minimization problem for image reconstruction in Digital Tomosynthesis, and that the new ADM algorithm reached a relative error comparable to that of a version of the classic SGP algorithm while proving faster and showing earlier appearance of the structures representing the masses.
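A minimal sketch, in one dimension and with made-up sizes, of the minimization problem described above (1-norm data fit plus total variation); smooth surrogates for the absolute value are used so a generic solver applies, and this is not the ADM or SGP algorithm of the thesis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, m, lam, eps = 60, 40, 0.05, 1e-6
A = rng.normal(size=(m, n)) / np.sqrt(m)                 # toy projection operator
x_true = np.zeros(n)
x_true[20:35] = 1.0                                      # piecewise-constant "object"
b = A @ x_true + 0.02 * rng.standard_t(df=2, size=m)     # projections with heavy-tailed noise

def smooth_abs(t):
    return np.sqrt(t * t + eps)                          # differentiable surrogate for |t|

def objective(x):
    data_fit = np.sum(smooth_abs(A @ x - b))             # ~ ||Ax - b||_1
    tv = np.sum(smooth_abs(np.diff(x)))                  # ~ total variation in 1-D
    return data_fit + lam * tv

res = minimize(objective, np.zeros(n), method="L-BFGS-B")
print("relative error:", np.linalg.norm(res.x - x_true) / np.linalg.norm(x_true))
```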

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes methods to circumvent the need to attach physical markers to bones for anatomical referencing in computer-assisted orthopedic surgery. Using ultrasound, a bone can be referenced non-invasively, so the problem is formulated as one of dynamic registration. A method for correspondence establishment is presented, and the matching step is based on three least-squares algorithms: two that are typically used in registration methods such as ICP, and a third that is a form of the Unscented Kalman Filter adapted to work in this context. A simulation was developed in order to reliably evaluate and compare the dynamic registration methods.
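For context, the classical least-squares matching step used inside ICP-style registration has a closed-form SVD solution (Kabsch/Umeyama); the sketch below, on synthetic points, is a generic illustration and not the paper's specific algorithms.

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) minimising sum ||R @ p_i + t - q_i||^2.

    Standard SVD (Kabsch/Umeyama) solution for known correspondences, i.e. the kind
    of matching step used inside ICP-style registration; P and Q are (n, 3) arrays.
    """
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Hypothetical check on synthetic "bone surface" points.
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_align(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```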

Relevance:

100.00%

Publisher:

Abstract:

The cultivation of dessert apples has to meet consumers' increasing demand for high fruit quality and sustainable, largely residue-free production, while ensuring competitive agricultural productivity. It is therefore of great interest to know the impact of different cultivation methods on fruit quality and chemical composition. Previous studies have demonstrated the feasibility of High Resolution Magic Angle Spinning (HR-MAS) NMR spectroscopy performed directly on apple tissue as an analytical tool for metabonomic studies. In this study, HR-MAS NMR spectroscopy is applied to apple tissue to analyze the metabolic profiles of apples grown under three different cultivation methods. Golden Delicious apples were grown applying organic (Bio), integrated (IP) and low-input (LI) plant protection strategies. A total of 70 1H HR-MAS NMR spectra were analyzed by means of principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA). Apples from Bio production could be well separated from those of the two other cultivation methods using both PCA and PLS-DA. Apples obtained from integrated (IP) and low-input (LI) production were discriminated when the third PLS component was taken into account. The identified chemical composition and the compounds responsible for the separation, i.e. the PLS loadings, are discussed. The results are compared with fruit quality parameters assessed by conventional methods. The present study demonstrates the potential of HR-MAS NMR spectroscopy of fruit tissue as an analytical tool for finding markers of specific fruit production conditions such as the cultivation method.
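A hedged sketch of the PCA / PLS-DA workflow on hypothetical spectral data, with PLS-DA implemented in the common way as PLS regression onto one-hot class labels; the data and component counts are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_per_class, n_vars = 24, 300                          # hypothetical spectra: 3 x 24 samples, 300 bins
labels = np.repeat(["Bio", "IP", "LI"], n_per_class)
X = rng.normal(size=(labels.size, n_vars))
X[labels == "Bio", :20] += 0.8                         # small class-specific spectral offset

scores = PCA(n_components=2).fit_transform(X)          # unsupervised overview (PCA scores)

classes = np.array(["Bio", "IP", "LI"])
Y = (labels[:, None] == classes[None, :]).astype(float)    # one-hot class membership
plsda = PLSRegression(n_components=3).fit(X, Y)
pred = classes[plsda.predict(X).argmax(axis=1)]             # assign class with the largest response
print("training accuracy:", (pred == labels).mean())
```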

Relevance:

100.00%

Publisher:

Abstract:

The direct Bayesian admissible region approach is an a priori state-free measurement association and initial orbit determination technique for optical tracks. In this paper, we test a hybrid approach that appends a least squares estimator to the direct Bayesian method on measurements taken at the Zimmerwald Observatory of the Astronomical Institute at the University of Bern. Over half of the association pairs agreed with conventional geometric track correlation and least squares techniques. The remaining pairs cast light on the fundamental limits of conducting tracklet association based solely on dynamical and geometrical information.

Relevance:

100.00%

Publisher:

Abstract:

It is widely acknowledged in the theoretical and empirical literature that social relationships, comprising structural measures (social networks) and functional measures (perceived social support), have an undeniable effect on health outcomes. However, the actual mechanism of this effect has yet to be clearly understood or explicated. In addition, comorbidity is found to adversely affect social relationships and health-related quality of life (HRQoL), a valued outcome measure in cancer patients and survivors. This cross-sectional study uses selected baseline data (N=3088) from the Women's Healthy Eating and Living (WHEL) study. LISREL 8.72 was used for the latent variable structural equation modeling. Due to the ordinal nature of the data, the Weighted Least Squares (WLS) method of estimation using asymptotic distribution-free covariance matrices was chosen for this analysis. The primary exogenous predictor variables are social networks and comorbidity; perceived social support is the endogenous predictor variable. Three dimensions of HRQoL (physical, mental and satisfaction with current quality of life) were the outcome variables. This study hypothesizes and tests the mechanism and pathways between comorbidity, social relationships and HRQoL using latent variable structural equation modeling. After testing the measurement models of social networks and perceived social support, a structural model hypothesizing associations between the latent exogenous and endogenous variables was tested. The results of the study after listwise deletion (N=2131) mostly confirmed the hypothesized relationships (TLI, CFI > 0.95, RMSEA = 0.05, p = 0.15). Comorbidity was adversely associated with all three HRQoL outcomes. Strong ties were negatively associated with perceived social support; social network had a strong positive association with perceived social support, which served as a mediator between social networks and HRQoL. Mental health quality of life was the most adversely affected by the predictor variables. This study is a preliminary look at the integration of structural and functional measures of social relationships, comorbidity and three HRQoL indicators using LVSEM. Developing stronger social networks and forming supportive relationships is beneficial for health outcomes such as the HRQoL of cancer survivors. Thus, the medical community treating cancer survivors, as well as the survivors' social networks, need to be informed and cognizant of these possible relationships.

Relevance:

100.00%

Publisher:

Abstract:

A new technique for the harmonic analysis of current observations is described. It consists of applying a linear band-pass filter which separates the various species and removes the contribution of non-tidal effects at intertidal frequencies. The tidal constituents are then evaluated by the method of least squares. In spite of the narrowness of the filter, only three days of data are lost through the filtering procedure, and the only requirement on the data is that the time interval between samples be an integer fraction of one day. This technique is illustrated through the analysis of a few French current observations from the English Channel within the framework of INOUT. The characteristics of the main tidal constituents are given.
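A minimal sketch of the least-squares evaluation of tidal constituents, assuming standard periods for the M2 and S2 constituents and a synthetic hourly current record (the paper's band-pass pre-filtering is omitted):

```python
import numpy as np

# Least-squares harmonic analysis: estimate amplitude and phase of known tidal
# constituents from an hourly record. M2 and S2 periods are standard values;
# the synthetic "observed" series below is made up.
periods_h = {"M2": 12.4206012, "S2": 12.0}              # constituent periods in hours
t = np.arange(0.0, 29 * 24.0, 1.0)                      # 29 days, hourly samples

rng = np.random.default_rng(0)
truth = 0.8 * np.cos(2 * np.pi * t / periods_h["M2"] - 1.0) \
      + 0.3 * np.cos(2 * np.pi * t / periods_h["S2"] - 0.4)
u = truth + 0.05 * rng.normal(size=t.size)              # observed current component

# Design matrix: a mean term plus cosine and sine columns per constituent.
cols = [np.ones_like(t)]
for p in periods_h.values():
    w = 2 * np.pi / p
    cols += [np.cos(w * t), np.sin(w * t)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, u, rcond=None)

for i, name in enumerate(periods_h):
    a, b = coef[1 + 2 * i], coef[2 + 2 * i]
    print(name, "amplitude =", np.hypot(a, b), "phase =", np.arctan2(b, a))
```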

Relevance:

100.00%

Publisher:

Abstract:

High-resolution palynological analysis on annually laminated sediments of Sihailongwan Maar Lake (SHL) provides new insights into the Holocene vegetation and climate dynamics of NE China. The robust chronology of the presented record is based on varve counting and AMS radiocarbon dates from terrestrial plant macro-remains. In addition to the qualitative interpretation of the pollen data, we provide quantitative reconstructions of vegetation and climate based on the method of biomization and weighted averaging partial least squares regression (WA-PLS) technique, respectively. Power spectra were computed to investigate the frequency domain distribution of proxy signals and potential natural periodicities. Pollen assemblages, pollen-derived biome scores and climate variables as well as the cyclicity pattern indicate that NE China experienced significant changes in temperature and moisture conditions during the Holocene. Within the earliest phase of the Holocene, a large-scale reorganization of vegetation occurred, reflecting the reconstructed shift towards higher temperatures and precipitation values and the initial Holocene strengthening and northward expansion of the East Asian summer monsoon (EASM). Afterwards, summer temperatures remain at a high level, whereas the reconstructed precipitation shows an increasing trend until approximately 4000 cal. yr BP. Since 3500 cal. yr BP, temperature and precipitation values decline, indicating moderate cooling and weakening of the EASM. A distinct periodicity of 550-600 years and evidence of a Mid-Holocene transition from a temperature-triggered to a predominantly moisture-triggered climate regime are derived from the power spectra analysis. The results obtained from SHL are largely consistent with other palaeoenvironmental records from NE China, substantiating the regional nature of the reconstructed vegetation and climate patterns. However, the reconstructed climate changes contrast with the moisture evolution recorded in S China and the mid-latitude (semi-)arid regions of N China. Whereas a clear insolation-related trend of monsoon intensity over the Holocene is lacking from the SHL record, variations in the coupled atmosphere-Pacific Ocean system can largely explain the reconstructed changes in NE China.

Relevance:

100.00%

Publisher:

Abstract:

Detailed information about sediment properties and microstructure can be provided through the analysis of digital ultrasonic P wave seismograms recorded automatically during full waveform core logging. The physical parameter which predominantly affects elastic wave propagation in water-saturated sediments is the P wave attenuation coefficient. The related sedimentological parameter is the grain size distribution. A set of high-resolution ultrasonic transmission seismograms (ca. 50-500 kHz), which indicate downcore variations in grain size by their signal shape and frequency content, are presented. Layers of coarse-grained foraminiferal ooze can be identified by highly attenuated P waves, whereas almost unattenuated waves are recorded in fine-grained areas of nannofossil ooze. Color-encoded pixel graphics of the seismograms and instantaneous frequencies present full waveform images of the lithology and attenuation. A modified spectral difference method is introduced to determine the attenuation coefficient and its power law a = k·f^n. Applied to synthetic seismograms derived using a "constant Q" model, even low attenuation coefficients can be quantified. A downcore analysis gives an attenuation log which ranges from ca. 700 dB/m at 400 kHz and a power of n = 1-2 in coarse-grained sands to a few decibels per meter and n ≈ 0.5 in fine-grained clays. A least squares fit of a second-degree polynomial describes the mutual relationship between the mean grain size and the attenuation coefficient. When it is used to predict the mean grain size, an almost perfect agreement with the values derived from sedimentological measurements is achieved.
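A minimal sketch of the final step, assuming hypothetical attenuation and grain-size values and an illustrative log-attenuation predictor: a second-degree polynomial is fitted by least squares and then used to predict mean grain size.

```python
import numpy as np

# Second-degree least-squares polynomial relating attenuation and mean grain size,
# then used for prediction. The values below are made up for illustration, and the
# use of log10(attenuation) as predictor is an assumption, not taken from the paper.
atten_db_m = np.array([700.0, 450.0, 260.0, 120.0, 40.0, 8.0, 3.0])    # at 400 kHz
grain_size_phi = np.array([1.5, 2.2, 3.0, 4.1, 5.3, 7.0, 8.2])         # mean grain size (phi units)

coeffs = np.polyfit(np.log10(atten_db_m), grain_size_phi, deg=2)       # least-squares fit
predict = np.poly1d(coeffs)
print("predicted mean grain size at 150 dB/m:", predict(np.log10(150.0)))
```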