966 results for Decomposition analysis


Relevance: 30.00%

Abstract:

In this thesis, numerical methods for determining the eigenfunctions, their adjoints and the corresponding eigenvalues of the two-group neutron diffusion equations representing any heterogeneous system are investigated. First, the classical power iteration method is modified so that modes higher than the fundamental one can be calculated. Thereafter, the Explicitly Restarted Arnoldi method, belonging to the class of Krylov subspace methods, is considered. Although the modified power iteration method is computationally expensive, its main advantage is its robustness: the method always converges to the desired eigenfunctions without requiring the user to set any parameters. The Arnoldi method, on the other hand, which does require some user-defined parameters, is a very efficient method for calculating eigenfunctions of large sparse systems of equations with minimal computational effort. These methods are then used for off-line analysis of the stability of Boiling Water Reactors. Since several oscillation modes are usually excited (global and regional oscillations) when unstable conditions are encountered, characterizing the stability of the reactor with, for instance, the Decay Ratio as a stability indicator may be difficult if the contributions from the individual modes are not separated from each other. Such a modal decomposition is applied to a stability test performed at the Swedish Ringhals-1 unit in September 2002, after using the Arnoldi method to pre-calculate the different eigenmodes of the neutron flux throughout the reactor. The modal decomposition clearly demonstrates the excitation of both the global and regional oscillations. Furthermore, these oscillations are found to be intermittent, with a time-varying phase shift between the first and second azimuthal modes.
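
Higher eigenmodes of this kind can be computed with off-the-shelf Krylov eigensolvers. The sketch below is a minimal, hypothetical illustration in Python: it assumes the two-group operators have already been discretized into sparse matrices M (leakage/removal/scattering) and F (fission production), and it uses SciPy's ARPACK wrapper, an implicitly restarted Arnoldi iteration, as a stand-in for the explicitly restarted variant discussed in the thesis. The toy one-group slab problem at the bottom exists only to make the snippet runnable.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def dominant_modes(M, F, n_modes=4, tol=1e-10):
    """Return the n_modes largest-|k| eigenpairs of the generalized problem
    F * phi = k * M * phi, with M and F sparse. ARPACK's implicitly restarted
    Arnoldi iteration is applied matrix-free to the operator M^{-1} F."""
    M_solve = spla.factorized(M.tocsc())           # sparse LU, reused at every Arnoldi step
    A = spla.LinearOperator(M.shape, matvec=lambda v: M_solve(F @ v), dtype=float)
    k, phi = spla.eigs(A, k=n_modes, which="LM", tol=tol)
    order = np.argsort(-np.abs(k))                 # fundamental mode first
    return k[order].real, phi[:, order].real

if __name__ == "__main__":
    # Toy 1-group, 1-D slab stand-in for the two-group operators (illustration only).
    n = 200
    D, sig_a, nu_sig_f, h = 1.0, 0.02, 0.025, 1.0
    lap = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)) * D / h**2
    M = (lap + sig_a * sp.eye(n)).tocsc()
    F = (nu_sig_f * sp.eye(n)).tocsc()
    k, phi = dominant_modes(M, F, n_modes=3)
    print("k-eigenvalues:", np.round(k, 5))
```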

Relevance: 30.00%

Abstract:

Introduction: Nocturnal frontal lobe epilepsy (NFLE) is a distinct syndrome of partial epilepsy whose clinical features comprise a spectrum of paroxysmal motor manifestations of variable duration and complexity arising from sleep. Cardiovascular changes during NFLE seizures have previously been observed; however, the extent of these modifications and their relationship with seizure onset have not been analyzed in detail. Objective: The aim of the present study is to evaluate NFLE seizure-related changes in heart rate (HR) and in sympathetic/parasympathetic balance through wavelet analysis of HR variability (HRV). Methods: We evaluated the whole-night digitally recorded video-polysomnography (VPSG) of 9 patients diagnosed with NFLE, with no history of cardiac disorders and normal cardiac examinations. Events with features of NFLE seizures were selected independently by three examiners and included in the study only if a consensus was reached. Heart rate was evaluated by measuring the interval between two consecutive R-waves of the QRS complexes (RRi). RRi series were calculated for a period of 20 minutes including each seizure and resampled at 10 Hz using cubic spline interpolation. A multiresolution analysis was performed (Daubechies-16 wavelet), and the squared level-specific amplitude coefficients were summed across the appropriate decomposition levels to compute total band powers in the bands of interest (LF: 0.039062-0.156248 Hz, HF: 0.156248-0.624992 Hz). A general linear model was then applied to estimate changes in RRi, LF and HF powers during three different periods: a basal period (30 s, at least 30 s before seizure onset, during which no movements occurred and autonomic conditions were stationary), a pre-seizure period (preSP; the 10 s preceding seizure onset) and the seizure period (SP), corresponding to the clinical manifestations. For one patient (patient 9), three seizures associated with ictal asystole (IA) were recorded; he was therefore analyzed separately. Results: Group analysis of 8 patients (41 seizures) showed that RRi remained unchanged during the preSP, while a significant tachycardia was observed in the SP. A significant increase in the LF component was instead observed during both the preSP and the SP (p<0.001), while the HF component decreased only in the SP (p<0.001). For patient 9, a significant tachycardia associated with increased sympathetic activity (increased LF absolute values and LF%) was observed during the preSP and the first part of the SP. In the second part of the SP a progressive decrease in HR occurred before IA, with HR gradually falling below basal values. Bradycardia was associated with an increase in parasympathetic activity (increased HF absolute values and HF%), counteracted by a further increase in LF until the occurrence of IA. Conclusions: These data suggest that changes in autonomic balance toward a sympathetic prevalence always preceded clinical seizure onset in NFLE, even when HR changes were not yet evident, confirming that wavelet analysis is a sensitive technique for detecting sudden variations of autonomic balance during transient phenomena. Finally, we demonstrated that epileptic asystole is associated with parasympathetic hypertonus counteracted by a marked sympathetic activation.
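
As a rough illustration of the band-power computation described above, the following Python sketch resamples an RR-interval series at 10 Hz with a cubic spline, applies a discrete wavelet decomposition, and sums squared detail coefficients over the dyadic levels that cover the LF and HF bands. It is a hypothetical re-implementation, not the study's code: PyWavelets' 'db8' (a 16-tap Daubechies filter) stands in for the "Daubechies-16" form named in the abstract, and the level-to-band mapping simply follows from the 10 Hz sampling rate.

```python
import numpy as np
import pywt
from scipy.interpolate import CubicSpline

FS = 10.0  # Hz, resampling rate used in the abstract

def band_powers(r_peak_times):
    """Wavelet-based LF/HF powers from R-peak times (seconds)."""
    rri = np.diff(r_peak_times)                        # RR intervals
    t = r_peak_times[1:]                               # time of each interval
    grid = np.arange(t[0], t[-1], 1.0 / FS)
    rri_even = CubicSpline(t, rri)(grid)               # cubic-spline resampling at 10 Hz
    coeffs = pywt.wavedec(rri_even - rri_even.mean(), "db8", level=7)
    # coeffs = [a7, d7, d6, d5, d4, d3, d2, d1]; detail j spans [FS/2**(j+1), FS/2**j] Hz
    details = {7: coeffs[1], 6: coeffs[2], 5: coeffs[3], 4: coeffs[4]}
    lf = sum(np.sum(details[j] ** 2) for j in (6, 7))  # ~0.039-0.156 Hz
    hf = sum(np.sum(details[j] ** 2) for j in (4, 5))  # ~0.156-0.625 Hz
    return lf, hf

if __name__ == "__main__":
    # Synthetic RR series: 0.8 s mean with small LF (0.1 Hz) and HF (0.25 Hz) modulation.
    idx = np.arange(1500) * 0.8
    beat_times = np.cumsum(0.8 + 0.02 * np.sin(2 * np.pi * 0.1 * idx)
                               + 0.01 * np.sin(2 * np.pi * 0.25 * idx))
    print(band_powers(beat_times))
```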

Relevance: 30.00%

Abstract:

This research work focuses on a novel multiphase-multilevel AC motor drive system well suited to low-voltage, high-current power applications. Specifically, a six-phase asymmetrical induction motor with an open-end stator winding configuration is fed by four standard two-level three-phase voltage source inverters (VSIs). The proposed synchronous reference frame control algorithm shares the total DC source power among the four VSIs in each switching cycle, with three degrees of freedom. The first degree of freedom concerns current sharing between the two three-phase stator windings. The second and third degrees of freedom, implemented through a modified multilevel space vector pulse width modulation, share the voltage between the two VSIs feeding each three-phase stator winding while preserving proper multilevel output waveforms. A complete model of the AC motor drive, based on a three-phase space vector decomposition approach, was developed in PLECS, a numerical simulation tool running in the MATLAB environment. The proposed synchronous reference frame control algorithm was implemented in MATLAB together with the modified multilevel space vector pulse width modulator. The effectiveness of the entire drive system was tested, and detailed simulation results are given for both symmetrical and asymmetrical power-sharing conditions. Furthermore, the three degrees of freedom are exploited to investigate fault-tolerant capability in post-fault conditions; a complete set of simulation results is provided for the cases in which one, two and three VSIs are faulty. A hardware prototype of the quad-inverter was implemented with two passive three-phase open-winding loads, using two TMS320F2812 DSP controllers. A McBSP (multi-channel buffered serial port) communication algorithm was developed to control the four VSIs for PWM communication and synchronization. An open-loop control scheme based on an inverse three-phase decomposition approach was developed to control the entire quad-inverter configuration and was tested under balanced and unbalanced operating conditions with simplified PWM techniques. Simulation and experimental results are in good agreement with the theoretical developments.

Relevance: 30.00%

Abstract:

This thesis proposes an integrated, holistic approach to the study of neuromuscular fatigue, encompassing both the causes and the consequences underlying the phenomenon. Starting from the metabolic processes occurring at the cellular level, the reader is guided toward the physiological changes at the motoneuron and motor unit level and from there to the more general biomechanical alterations. Chapter 1 reports a list of the various definitions of fatigue spanning several contexts. In Chapter 2, the electrophysiological changes in terms of motor unit behavior and descending neural drive to the muscle are examined extensively, as are the biomechanical adaptations they induce. Chapter 3 reports a study based on temporal features extracted from sEMG signals, which highlights the need for a more robust and reliable indicator during fatiguing tasks. Therefore, in Chapter 4, a novel bi-dimensional parameter is proposed. The study of sEMG-based indicators also opened a scenario on the neurophysiological mechanisms underlying fatigue. For this purpose, Chapter 5 presents a protocol designed for the analysis of motor-unit-related parameters during prolonged fatiguing contractions. In particular, two methodologies are applied to multichannel sEMG recordings of isometric contractions of the Tibialis Anterior muscle: the state-of-the-art technique for sEMG decomposition and a coherence analysis of MU spike trains. The importance of a multi-scale approach is finally highlighted in the context of the evaluation of cycling performance, where fatigue is one of the limiting factors. The last chapter of this thesis can be considered a paradigm: physiological, metabolic, environmental, psychological and biomechanical factors influence the performance of a cyclist, and only when all of these are considered together in a novel, integrative way is it possible to derive a clear model and make correct assessments.

Relevance: 30.00%

Abstract:

In this work, the Generalized Beam Theory (GBT) is used as the main tool to analyze the mechanics of thin-walled beams. After an introduction to the subject and a quick review of some of the best-known approaches to describing the behaviour of thin-walled beams, a novel formulation of the GBT is presented. This formulation contains the classic shear-deformable GBT available in the literature and adds a description of cross-section warping that varies along the wall thickness as well as along the wall midline. Shear deformation is introduced in such a way that the classical shear strain components of the Timoshenko beam theory are recovered exactly. Consistent with the proposed kinematics, a revised cross-section analysis procedure is devised, based on a unique modal decomposition. Later, a procedure for the a posteriori reconstruction of all three-dimensional stress components in the finite element analysis of thin-walled beams using the GBT is presented. The reconstruction is simple and based on the use of three-dimensional equilibrium equations and of the RCP procedure. Once the stress reconstruction procedure is presented, several open issues concerning the constitutive relations in the GBT are studied. Specifically, a constitutive law based on mirroring the kinematic constraints of the GBT model into a specific stress field assumption is proposed. It is shown that this method is equally valid for isotropic and orthotropic beams and coincides with the conventional GBT approach available in the literature. An analogous procedure is then presented for the case of laminated beams. Lastly, to improve the inherently poor description of shear deformability in the GBT, the introduction of shear correction factors is proposed. Throughout this work, numerous examples are provided to demonstrate the validity of the proposed contributions to the field.

Relevance: 30.00%

Abstract:

Holding the major share of stellar mass in galaxies and being old and passively evolving, early-type galaxies (ETGs) are the primary probes for investigating galaxy evolution scenarios, as well as a useful means of gaining insight into cosmological parameters. In this thesis I focused specifically on ETGs and on their capability to constrain galaxy formation and evolution; in particular, the principal aims were to derive some of the ETG evolutionary parameters, such as age, metallicity and star formation history (SFH), and to study their age-redshift and mass-age relations. In order to infer the galaxy physical parameters, I used the public code STARLIGHT: this program provides a best fit to the observed spectrum from a combination of many theoretical models defined in user-made libraries. When the fitting procedure is tested on spectra with known input parameters, the comparison between output and input light-weighted ages shows good agreement for SNRs of ∼10 or higher, with a bias of ∼2.2% and a dispersion of ∼3%; metallicities and SFHs are also well reproduced. In the second part of the thesis I performed an analysis of real data, starting from Sloan Digital Sky Survey (SDSS) spectra. I found that galaxies get older with cosmic time and with increasing mass (for a fixed redshift bin); absolute light-weighted ages, moreover, turn out to be independent of the fitting parameters and of the synthetic models used. Metallicities are very similar to each other and clearly consistent with those derived from the Lick indices. The predicted SFH indicates the presence of a double burst of star formation. Velocity dispersions and extinctions are also well constrained, following the expected behaviours. As a further step, I also fitted single SDSS spectra (with SNR ∼ 20) to verify that stacked spectra give the same results without introducing any bias: this is an important check if one wants to apply the method at higher z, where stacked spectra are necessary to increase the SNR. Our upcoming aim is to apply this approach to galaxy spectra obtained from higher-redshift surveys, such as BOSS (z ∼ 0.5), zCOSMOS (z ∼ 1), K20 (z ∼ 1), GMASS (z ∼ 1.5) and, eventually, Euclid (z ∼ 2). Indeed, I am currently carrying out a preliminary study to establish the applicability of the method to lower-resolution, as well as higher-redshift (z ∼ 2), spectra such as those expected from Euclid.
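
The core idea behind a STARLIGHT-like fit, expressing an observed spectrum as a non-negative combination of simple stellar population (SSP) templates and reading off a light-weighted age from the resulting population vector, can be sketched in a few lines. The Python snippet below is only a schematic stand-in under that assumption: it uses a plain non-negative least-squares fit on fake templates, whereas STARLIGHT additionally fits extinction and kinematics, masks emission lines and uses a more elaborate stochastic search; all names and numbers here are placeholders.

```python
import numpy as np
from scipy.optimize import nnls

def light_weighted_age(obs_flux, ssp_fluxes, ssp_ages):
    """Fit obs_flux (n_wave,) as a non-negative combination of SSP templates
    (n_ssp, n_wave) and return the light-weighted age plus light fractions.
    A bare-bones stand-in for what a full-spectrum fitting code does."""
    weights, _ = nnls(ssp_fluxes.T, obs_flux)          # population vector
    light_frac = weights / weights.sum()
    return np.sum(light_frac * ssp_ages), light_frac

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_wave = 500
    ssp_ages = np.array([0.5, 2.0, 10.0])              # Gyr, hypothetical template grid
    ssp_fluxes = np.abs(rng.normal(1.0, 0.3, size=(3, n_wave)))   # fake templates
    truth = np.array([0.1, 0.3, 0.6])                  # input light fractions
    obs = truth @ ssp_fluxes + rng.normal(0, 0.02, n_wave)        # toy noisy spectrum
    age, frac = light_weighted_age(obs, ssp_fluxes, ssp_ages)
    print("recovered light-weighted age [Gyr]:", round(age, 2), "fractions:", np.round(frac, 2))
```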

Relevance: 30.00%

Abstract:

The atmospheric cycle of reactive nitrogen compounds concerns both natural scientists and policymakers, chiefly because reactive nitrogen oxides control the formation of ground-level ozone. Reactive nitrogen compounds also play an important role as gaseous precursors of fine particulate matter, and the long-range transport of reactive nitrogen alters the planet's biogeochemical carbon cycle by fertilizing remote ecosystems with nitrogen. Measurements of stable nitrogen isotope ratios (15N/14N) provide a tool for identifying the sources of reactive nitrogen compounds and for investigating the reactions involved in the nitrogen cycle by means of their reaction-specific isotope fractionation.
In this doctoral thesis I demonstrate that nano-scale secondary ion mass spectrometry (NanoSIMS) makes it possible to analyze and identify various nitrogen-containing compounds commonly found in atmospheric aerosol particles at a spatial resolution of less than one micrometer. The different nitrogen compounds are distinguished on the basis of the relative intensities of the positive and negative secondary ion signals observed when the aerosol samples are bombarded with a Cs+ or O- primary ion beam. The aerosol samples can be introduced into the mass spectrometer directly on the sampling substrate, without any chemical or physical pre-treatment. The method was tested on nitrate, nitrite, ammonium sulfate, urea, amino acids, biological aerosol samples (fungal spores) and imidazole. I showed that NO2- secondary ions are produced only when nitrate and nitrite (salts) are bombarded with positive primary ions, whereas NH4+ secondary ions are released only when amino acids, urea and ammonium salts are bombarded with positive primary ions, but not when biological samples such as fungal spores are bombarded. CN- secondary ions are observed when any nitrogen-containing compound is bombarded with positive primary ions, because almost all samples carry traces of carbon contamination near the surface; the relative signal intensity of the CN- secondary ions is highest for carbon-containing organic nitrogen compounds.
Furthermore, I showed that species-specific stable nitrogen isotope ratios can be measured accurately and precisely on pure nitrate salt samples (NaNO3 and KNO3) deposited on gold foils, using the 15N16O2- / 14N16O2- secondary ion ratio. The measurement precision on fields with a raster size of 5×5 µm2 was determined to be ±0.6 ‰ from long-term measurements of an in-house NaNO3 standard. The difference in matrix-specific instrumental mass fractionation between NaNO3 and KNO3 was 7.1 ± 0.9 ‰. 23Na12C2- secondary ions can constitute a serious interference when 15N16O2- secondary ions are used to measure nitrate-specific heavy nitrogen and sodium and carbon occur as an internal mixture in the same aerosol particle, or when the sodium-containing sample is deposited on a carbon-containing substrate.
Even when no such interference is present, as in the case of KNO3, an internal mixture with carbon in the same aerosol particle leads to a matrix-specific instrumental mass fractionation that can be described by the following equation: 15Nbias = (101 ± 4) · f − (101 ± 3) ‰, with f = 14N16O2- / (14N16O2- + 12C14N-). If the 12C15N- / 12C14N- secondary ion ratio is used to measure the stable nitrogen isotope composition, the sample matrix does not affect the results, even when nitrogen and carbon are present in the aerosol particles at variable N/C ratios, and interferences play no role either. To ensure that the measurement remains specific to nitrate species, a 14N16O2- mask can be applied during data evaluation. Collecting the samples on a carbon-containing, nitrogen-free sampling substrate increases the signal intensity for pure nitrate particles.
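
For illustration, the reported carbon-matrix calibration can be wrapped in a small helper that converts the measured ion-signal fraction f into the expected instrumental 15N bias. This is a hypothetical convenience function built only from the regression quoted above; the uncertainties on the two coefficients are not propagated, and how the correction is applied in practice follows the thesis, not this sketch.

```python
def n15_matrix_bias(i_no2, i_cn, a=101.0, b=101.0):
    """Instrumental 15N bias (per mil) from carbon in the particle matrix,
    using the regression quoted above: bias = a*f - b, with
    f = I(14N16O2-) / (I(14N16O2-) + I(12C14N-)).
    a and b carry +/-4 and +/-3 per mil uncertainties (not propagated here)."""
    f = i_no2 / (i_no2 + i_cn)
    return a * f - b

if __name__ == "__main__":
    # Example: a nitrate particle where carbon contributes 30 % of the summed signal (f = 0.7).
    bias = n15_matrix_bias(i_no2=7e5, i_cn=3e5)
    print(f"f = 0.70  ->  15N bias = {bias:+.1f} per mil")   # about -30 per mil
```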

Relevance: 30.00%

Abstract:

Computing the weighted geometric mean of large sparse matrices is an operation that rapidly becomes intractable as the size of the matrices involved grows. However, if we are not interested in the matrix function itself but only in its product with a vector, the problem becomes simpler and there is a chance to solve it even when the matrix mean itself would be impossible to compute. Our interest is motivated by the fact that this calculation has practical applications related to the preconditioning of operators arising in the domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. We then focus on matrix powers and examine how well-known techniques can be adapted to the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and assess how convergence speed and execution time are influenced by certain characteristics of the input matrices. Our results suggest that a few elements have some bearing on the performance and that, although there is no best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
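
As a point of reference, for symmetric positive definite A and B the weighted geometric mean satisfies A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2} = A (A^{-1}B)^t, so (A #_t B) v is exactly a product f(A\B) v with f(x) = x^t, up to one extra multiplication by A. The Python sketch below forms this quantity densely for small matrices; this is precisely the explicit route that becomes infeasible at scale and that the thesis replaces with quadrature and Krylov approximations, so treat it as an illustrative baseline rather than the thesis's algorithms.

```python
import numpy as np
from scipy.linalg import sqrtm, fractional_matrix_power, solve

def geometric_mean_times_vector(A, B, v, t=0.5):
    """Dense reference computation of (A #_t B) v for SPD A, B via
    A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}."""
    A_half = sqrtm(A)
    A_half_inv = np.linalg.inv(A_half)
    M = A_half_inv @ B @ A_half_inv
    return A_half @ (fractional_matrix_power(M, t) @ (A_half @ v))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 50
    X, Y = rng.normal(size=(n, n)), rng.normal(size=(n, n))
    A = X @ X.T + n * np.eye(n)                 # two SPD test matrices
    B = Y @ Y.T + n * np.eye(n)
    v = rng.normal(size=n)
    w = geometric_mean_times_vector(A, B, v, t=0.5)
    # sanity check: the same vector from the "pencil" form A (A^{-1} B)^t v
    w2 = A @ (fractional_matrix_power(solve(A, B), 0.5) @ v)
    print("forms agree:", np.allclose(w, w2, rtol=1e-6, atol=1e-6))
```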

Relevance: 30.00%

Abstract:

With the outlook of improving seismic vulnerability assessment for the city of Bishkek (Kyrgyzstan), the global dynamic behaviour of four nine-storey reinforced-concrete large-panel buildings is studied in the elastic regime. The four buildings were built during the Soviet era within a serial production system; since they all belong to the same series, they have very similar geometries both in plan and in height. First, ambient vibration measurements are performed in the four buildings. The data analysis, composed of discrete Fourier transform, modal analysis (frequency domain decomposition) and deconvolution interferometry, yields the modal characteristics and an estimate of the linear impulse response function for the structure of each of the four buildings. Then, finite element models are set up for all four buildings and the results of the numerical modal analysis are compared with the experimental ones. The numerical models are finally calibrated considering the first three global modes, and their results match the experimental ones with an error of less than 20%.
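
Frequency domain decomposition itself is compact enough to sketch: estimate the cross-power spectral density matrix of the multichannel ambient records, take its singular value decomposition at every frequency line, and read natural frequencies from the peaks of the first singular value, with the corresponding singular vectors as operational mode shapes. The Python snippet below is a generic, hypothetical illustration of that procedure on synthetic two-channel data, not the processing actually applied to the Bishkek measurements.

```python
import numpy as np
from scipy.signal import csd, find_peaks

def fdd(acc, fs, nperseg=2048):
    """Frequency Domain Decomposition of ambient vibration records.
    acc: (n_channels, n_samples) accelerations; returns the frequency axis,
    first singular value spectrum, peak frequencies and mode shapes."""
    n_ch = acc.shape[0]
    f, _ = csd(acc[0], acc[0], fs=fs, nperseg=nperseg)
    G = np.empty((f.size, n_ch, n_ch), dtype=complex)   # CPSD matrix G(f)
    for i in range(n_ch):
        for j in range(n_ch):
            G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=nperseg)[1]
    U, S, _ = np.linalg.svd(G)                           # SVD at every frequency line
    s1 = S[:, 0]
    peaks, _ = find_peaks(s1, prominence=s1.max() * 0.1)
    modes = U[peaks, :, 0]                               # first singular vectors = mode shapes
    return f, s1, f[peaks], modes

if __name__ == "__main__":
    # Two-channel synthetic record with modes near 2.0 and 6.5 Hz (illustration only).
    fs, T = 100.0, 600.0
    t = np.arange(0, T, 1 / fs)
    rng = np.random.default_rng(2)
    m1, m2 = np.sin(2 * np.pi * 2.0 * t), np.sin(2 * np.pi * 6.5 * t)
    acc = np.vstack([m1 + 0.5 * m2, 0.8 * m1 - 0.6 * m2]) + 0.2 * rng.normal(size=(2, t.size))
    f, s1, f_peaks, modes = fdd(acc, fs)
    print("identified frequencies [Hz]:", np.round(f_peaks, 2))
```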

Relevance: 30.00%

Abstract:

In environmental epidemiology, exposure X and health outcome Y vary in space and time. We present a method to diagnose the possible influence of unmeasured confounders U on the estimated effect of X on Y and propose several approaches to robust estimation. The idea is to use space and time as proxy measures for the unmeasured factors U. We start with the time series case where X and Y are continuous variables at equally spaced times and assume a linear model. We define matching estimators b(u) that correspond to pairs of observations a specific lag u apart. Controlling for a smooth function of time, St, using a kernel estimator is roughly equivalent to estimating the association with a linear combination of the b(u) whose weights involve two components: the assumptions about the smoothness of St and the normalized variogram of the X process. When an unmeasured confounder U exists, but the model otherwise correctly controls for measured confounders, excess variation in the b(u) is evidence of confounding by U. We use the plot of b(u) versus lag u, the lagged-estimator plot (LEP), to diagnose the influence of U on the effect of X on Y. We use appropriate linear combinations of the b(u), or extrapolation to b(0), to obtain novel estimators that are more robust to the influence of a smooth U. The methods are extended to time series log-linear models and to spatial analyses. The LEP gives a direct view of the magnitude of the estimators at each lag u and provides evidence when models do not adequately describe the data.
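
One natural reading of the matching estimator b(u) is the slope of the outcome differences on the exposure differences over all pairs of observations a lag u apart: differencing over a short lag removes any confounder that varies smoothly over scales longer than u. The Python sketch below implements that reading and builds the ingredients of a lagged-estimator plot on simulated data with a smooth unmeasured confounder; the exact weighting used in the paper may differ, so this is an illustrative assumption rather than the authors' estimator.

```python
import numpy as np

def lagged_estimators(x, y, max_lag=30):
    """Matching estimators b(u): slope of the pairwise differences
    (Y_t - Y_{t-u}) on (X_t - X_{t-u}) for each lag u."""
    lags = np.arange(1, max_lag + 1)
    b = np.empty(lags.size)
    for k, u in enumerate(lags):
        dx, dy = x[u:] - x[:-u], y[u:] - y[:-u]
        b[k] = np.sum(dx * dy) / np.sum(dx * dx)
    return lags, b

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n, beta = 2000, 0.5
    t = np.arange(n)
    u_conf = np.sin(2 * np.pi * t / 365)                 # smooth unmeasured confounder
    x = 0.8 * u_conf + rng.normal(size=n)
    y = beta * x + 2.0 * u_conf + rng.normal(size=n)
    lags, b = lagged_estimators(x, y)
    # short lags difference the smooth confounder away; longer lags drift away from beta
    print("b(1..3):", np.round(b[:3], 2), " b(28..30):", np.round(b[-3:], 2), " true beta:", beta)
```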

Relevance: 30.00%

Abstract:

Statistical shape analysis techniques commonly employed in the medical imaging community, such as active shape models or active appearance models, rely on principal component analysis (PCA) to decompose shape variability into a reduced set of interpretable components. In this paper we propose principal factor analysis (PFA) as an alternative and complementary tool to PCA, providing a decomposition into modes of variation that can be more easily interpreted, while still being an efficient linear technique that performs dimensionality reduction (as opposed to independent component analysis, ICA). The key difference between PFA and PCA is that PFA models the covariance between variables rather than the total variance in the data. The added value of PFA is illustrated on 2D landmark data of corpora callosa outlines. Then, a study of the 3D shape variability of the human left femur is performed. Finally, we report results on vector-valued 3D deformation fields resulting from the non-rigid registration of ventricles in MRI of the brain.
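
The distinction the paper draws, that PCA explains total variance while factor analysis explains only the covariance shared across variables and treats the remainder as per-variable noise, is easy to demonstrate on toy data. The Python snippet below does so with scikit-learn; note that sklearn's FactorAnalysis is fitted by maximum likelihood rather than by the principal-factor method used in the paper, so it is a stand-in chosen for convenience, and the "landmark" data are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(4)

# Toy "landmark" data: 100 shapes x 40 coordinates, driven by 2 latent modes
# plus coordinate-specific noise of unequal variance (what FA models explicitly).
n_shapes, n_coords, n_latent = 100, 40, 2
loadings = rng.normal(size=(n_latent, n_coords))
latent = rng.normal(size=(n_shapes, n_latent))
noise = rng.normal(size=(n_shapes, n_coords)) * rng.uniform(0.1, 1.0, n_coords)
X = latent @ loadings + noise

pca = PCA(n_components=2).fit(X)             # models total variance
fa = FactorAnalysis(n_components=2).fit(X)   # models shared covariance + per-variable noise

print("PCA explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
print("FA estimated noise variances (first 5):", np.round(fa.noise_variance_[:5], 2))
```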

Relevance: 30.00%

Abstract:

Campylobacter, a major zoonotic pathogen, displays seasonality in poultry and in humans. In order to identify temporal patterns in the prevalence of thermophilic Campylobacter spp. in a voluntary monitoring programme in broiler flocks in Germany and in the reported human incidence, time series methods were used. The data were collected between May 2004 and June 2007. Using seasonal decomposition together with autocorrelation and cross-correlation functions, it could be shown that an annual seasonality is present. However, the peak month differs between sample submission, prevalence in broilers and human incidence. Strikingly, the peak in human campylobacterioses preceded the peak in broiler prevalence in Lower Saxony rather than occurring after it. Significant cross-correlations were identified between monthly temperature and prevalence in broilers, as well as between human incidence and monthly temperature, rainfall and wind force. The results highlight the necessity of quantifying the transmission of Campylobacter from broilers to humans and of including climatic factors in order to gain further insight into the epidemiology of this zoonotic disease.
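
The workflow named here, classical seasonal decomposition of a monthly series followed by cross-correlation against climatic covariates, maps directly onto standard time-series tooling. The Python sketch below shows the two steps with statsmodels on entirely synthetic stand-in data covering the same May 2004 to June 2007 window; it illustrates the method only and has no connection to the actual monitoring data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.stattools import ccf

rng = np.random.default_rng(5)
months = pd.date_range("2004-05-01", "2007-06-01", freq="MS")
n = len(months)

# Synthetic stand-ins: monthly temperature, and a broiler prevalence that
# follows temperature with a one-month lag (illustration only).
temperature = 10 + 10 * np.sin(2 * np.pi * (np.arange(n) - 4) / 12) + rng.normal(0, 1, n)
prevalence = 0.3 + 0.02 * np.roll(temperature, 1) + rng.normal(0, 0.03, n)
prevalence = pd.Series(prevalence, index=months)

decomp = seasonal_decompose(prevalence, model="additive", period=12)
print("seasonal component, first year:\n", decomp.seasonal.iloc[:12].round(3))

# Cross-correlation of prevalence with temperature over the first few monthly lags.
xcorr = ccf(np.asarray(prevalence) - prevalence.mean(), temperature - temperature.mean())[:7]
print("cross-correlation (lags 0-6):", np.round(xcorr, 2))
```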

Relevance: 30.00%

Abstract:

Frequency-transformed resting EEG data have been widely used to describe normal and abnormal brain functional states as a function of the spectral power in different frequency bands, and this has yielded a series of clinically relevant findings. However, by transforming the EEG into the frequency domain, the initially excellent time resolution of time-domain EEG is lost. The topographic time-frequency decomposition is a novel computerized EEG analysis method that combines previously available techniques from time-domain spatial EEG analysis and time-frequency decomposition of single-channel time series. It yields a new, physiologically and statistically plausible topographic time-frequency representation of human multichannel EEG. The original EEG is accounted for by the coefficients of a large set of user-defined, EEG-like time series, which are optimized for maximal spatial smoothness and minimal norm. These coefficients are then reduced to a small number of model scalp field configurations, which vary in intensity as a function of time and frequency. The result is thus a small number of EEG field configurations, each with a corresponding time-frequency (Wigner) plot. The method has several advantages: it does not assume that the data are composed of orthogonal elements, it does not assume stationarity, it produces topographic maps, and it allows the inclusion of user-defined, specific EEG elements such as spike-and-wave patterns. After a formal introduction of the method, several examples are given, which include artificial data and multichannel EEG recorded during different physiological and pathological conditions.

Relevance: 30.00%

Abstract:

Given a reproducing kernel Hilbert space (H, ⟨·,·⟩) of real-valued functions and a suitable measure μ over the source space D ⊂ R, we decompose H as the sum of a subspace of functions centered with respect to μ and its orthogonal complement in H. This decomposition leads to a special case of ANOVA kernels, for which the functional ANOVA representation of the best predictor can be elegantly derived, either in an interpolation or in a regularization framework. The proposed kernels appear to be particularly convenient for analyzing the effect of each (group of) variable(s) and for computing sensitivity indices without recursion.
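
One standard way to realize such a decomposition numerically is to build, from a base kernel k and the measure μ, the kernel of the subspace of μ-centered functions, k0(x, y) = k(x, y) − ∫k(x, s)dμ(s) ∫k(y, s)dμ(s) / ∬k(s, t)dμ(s)dμ(t). The Python sketch below evaluates this construction by quadrature for a uniform measure on [0, 1]; it follows a common construction of centered/ANOVA kernels, and it is an assumption on my part that this matches the abstract's exact formulation.

```python
import numpy as np

def centered_kernel(k, n_quad=200):
    """k0(x, y) = k(x, y) - Int k(x,s)dmu * Int k(y,s)dmu / Int Int k(s,t)dmu dmu:
    the reproducing kernel of the subspace of mu-centered functions
    (mu uniform on [0, 1]; integrals approximated by midpoint quadrature)."""
    s = (np.arange(n_quad) + 0.5) / n_quad                        # quadrature nodes
    w = 1.0 / n_quad                                              # uniform weights
    kxs = lambda x: k(np.asarray(x, float)[..., None], s).sum(axis=-1) * w
    kss = k(s[:, None], s[None, :]).sum() * w * w                 # double integral
    return lambda x, y: k(x, y) - kxs(x) * kxs(y) / kss

if __name__ == "__main__":
    gauss = lambda x, y: np.exp(-0.5 * ((x - y) / 0.3) ** 2)
    k0 = centered_kernel(gauss)
    s = (np.arange(200) + 0.5) / 200
    # functions k0(x, .) integrate to ~0 against mu, as the decomposition requires
    print("mean of k0(0.37, .) over mu:", float(np.round(np.mean(k0(0.37, s)), 8)))
```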

Relevance: 30.00%

Abstract:

A. Continental slope sediments off Spanish-Sahara and Senegal contain up to 4 % organic carbon and up to 0.4 % total nitrogen. The highest concentrations were found in sediments from water depths between 1000 and 2000 m. The regional and vertical distributions of organic matter differ significantly: off Spanish-Sahara the organic matter content of sediments deposited during glacial times (Wuerm, Late Riss) is high, whereas sediments deposited during interglacial times (Recent, Eem) are low in organic matter; the opposite distribution was found in sediments off Senegal. The sediments contain 30 to 130 ppm of fixed nitrogen. In most sediments this corresponds to 2-8 % of the total nitrogen; only in sediments deposited during interglacial times off Spanish-Sahara is up to 20 % of the total nitrogen present as inorganically bound nitrogen. Positive correlations of the fixed nitrogen concentrations with the amounts of clay, alumina and potassium suggest that it is primarily fixed to illites. The amino acid nitrogen and hexosamine nitrogen account for 17 to 26 % and 1.3 to 2.4 %, respectively, of the total nitrogen content of the sediments. The concentrations vary between 200 and 850 ppm amino acid nitrogen and 20 to 70 ppm hexosamine nitrogen; both parallel the fluctuations of organic matter in the sediment. Fulvic acids, humic acids and the total organic matter of the sediments may be clearly differentiated from one another by their amino acid and hexosamine contents and their amino acid composition:

a) Fulvic acids contain only half as many amino acids as humic acids.

b) The molar amino acid/hexosamine ratios of the fulvic acids are half those of the humic acids and of the total organic matter of the sediment.

c) The amino acid spectra of fulvic acids are characterized by an enrichment of aspartic acid, alanine and methionine sulfoxide and a depletion of glycine, valine, isoleucine, leucine, tyrosine, phenylalanine, lysine and arginine compared to the spectra of the humic acids and those of the total organic matter fraction of the sediment.

d) The amino acid spectra of the humic acids and those of the total organic matter fraction of the sediments are about the same, with the exception that arginine is clearly enriched in the total organic matter.

In general, as indicated by the amino compounds, humic acids resemble the total organic matter composition more closely than the low-molecular fulvic acids do. This supports the general idea that, during the course of diagenesis in reducing sediments, organic matter stabilizes from a fulvic-like structure to a humic-like structure and finally to kerogen. The decomposition rates of individual amino acids differ significantly from one another. Generally, amino acids that are preferentially contained in humic acids and in the total organic matter fraction show a smaller loss with time; this is well documented in the case of the basic amino acids lysine and arginine, which, although thermally unstable, are the most stable amino acids in the sediments. A favoured incorporation of these compounds into high-molecular substances as well as into clay minerals may explain their relatively high "stability" in the sediment. The nitrogen loss from the sediments due to the activity of sulphate-reducing bacteria amounts to 20-40 % of the total organic nitrogen now present. At least 40 % of the organic nitrogen liberated by sulphate-reducing bacteria can be explained by the decomposition of amino acids alone.

B. Deep-sea sediments from the Central Pacific. The deep-sea sediments contain 1 to 2 orders of magnitude less organic matter than the continental slope sediments off NW Africa, i.e. 0.04 to 0.3 % organic carbon. The fixed nitrogen content of the deep-sea sediments ranges from 60 to 270 ppm, or from 20 to 45 % of the total nitrogen content. While ammonia is the prevailing inorganic nitrogen compound in anoxic pore waters, nitrate predominates in the oxic environment of the deep-sea sediments. Near the sediment/water interface, interstitial nitrate concentrations of around 30 µg-at. N/l were recorded; these generally increase with sediment depth by 10 to 15 µg-at. NO3-N/l, which suggests the presence of free oxygen and the activity of nitrifying bacteria in the interstitial waters. The ammonia content of the interstitial water of the oxic deep-sea sediments ranges from 2 to 60 µg-at. N/l and is thus several orders of magnitude lower than in anoxic sediments. In contrast to the recorded nitrate gradients towards the sediment/water interface, there are no ammonia concentration gradients; however, ammonia concentrations appear to be characteristic of certain regional areas. It is suggested that this regional differentiation is caused by ion exchange reactions involving potassium and ammonium ions rather than by different decomposition rates of organic matter.

C. C/N ratios. The estimated C/N ratios of surface sediments vary between 3 and 9 in the deep sea and at the continental margin, respectively. Whereas the C/N ratios generally increase with depth in the sediment cores off NW Africa, they decrease in the deep-sea cores. The lowest values, around 1.3, were found in the deeper sections of the deep-sea cores; the highest, around 10, in the sediments off NW Africa. The wide range of the C/N ratios, as well as their opposite behaviour with increasing sediment depth in the deep-sea and continental margin sediment cores, can be attributed mainly to the combination of the following three factors:

1. Inorganic and organic substances bound within the lattices of clay minerals tend to decrease the C/N ratios.

2. Organic matter not protected by adsorption on the clay minerals tends to increase the C/N ratios.

3. Diagenetic alteration of organic matter by micro-organisms tends to increase the C/N ratios through the preferential loss of nitrogen.

The diagenetic alteration of the microbially decomposable organic matter results, in both oxic and anoxic environments, in a preferential loss of nitrogen and hence in higher C/N ratios of the organic fraction. This holds true for most of the continental margin sediments off NW Africa, which contain relatively high amounts of organic matter, so that factors 2 and 3 predominate there. The relatively low C/N ratios of the sediments deposited during interglacial times off Spanish-Sahara, which are low in organic carbon, show the increasing influence of factor 1, the nitrogen-rich organic substances bound to clay minerals. In the deep-sea sediments from the Central Pacific this factor completely predominates, so that the C/N ratios of the sediments approach that of the substances adsorbed to clay minerals as the organic matter content decreases. In the deeper core sections the unprotected organic matter has been completely destroyed, so that the C/N ratios of the total sediments eventually fall into the same range as those of the pure clay mineral fraction.