932 results for physical layer network coding


Relevance: 30.00%

Abstract:

The Tara Oceans Expedition (2009-2013) sampled the world ocean on board a 36 m schooner, collecting environmental data and organisms from viruses to planktonic metazoans for later analyses using modern sequencing and state-of-the-art imaging technologies. Tara Oceans data are particularly suited to studying the genetic, morphological and functional diversity of plankton. The present data set includes properties of seawater, particulate matter and dissolved matter from physical, optical and imaging sensors mounted on a vertical sampling system (Rosette) used during the 2009-2013 expedition. The system comprised two pairs of conductivity and temperature sensors (Sea-Bird components) and a complete set of WET Labs optical sensors, including chlorophyll and CDOM fluorometers, a 25 cm transmissometer, and a single-wavelength backscatter meter. In addition, a SATLANTIC ISUS nitrate sensor and a Hydroptic Underwater Vision Profiler (UVP) were mounted on the Rosette. In the Arctic Ocean and Arctic seas (2013), a second oxygen sensor (SBE43) and a four-frequency Aquascat acoustic profiler were added. The system was powered by dedicated Li-ion batteries, and data were self-recorded at 24 Hz. All sensors were factory calibrated before, during and after the four-year program. Oxygen data were validated against climatologies (WOA09), nitrate and fluorescence data were adjusted with discrete measurements from Niskin bottles mounted on the Rosette, and optical dark readings were taken monthly on board. A total of 839 quality-checked vertical profiles were collected during the 2009-2013 expedition.


Relevance: 30.00%

Abstract:

The Ice Station POLarstern (ISPOL) cruise revisited the western Weddell Sea in late 2004 and obtained a comprehensive set of conductivity-temperature-depth (CTD) data. This study describes the thermohaline structure and diapycnal mixing environment observed in 2004 and compares them with conditions observed more than a decade earlier. Hydrographic conditions on the central western Weddell Sea continental slope, off the Larsen C Ice Shelf, in late winter/early spring of 2004/2005 can be described as a well-stratified environment whose upper layers show relict structures from intense winter near-surface vertical fluxes, an intermediate-depth temperature maximum, and a cold near-bottom layer marked by patchy property distributions. A well-developed surface mixed layer, isolated from the underlying Warm Deep Water (WDW) by a pronounced pycnocline and characterized by a lack of warming and by minimal sea-ice basal melting, supports the assumption that upper ocean winter conditions persisted during most of the ISPOL experiment. Much of the western Weddell Sea water column has remained essentially unchanged since 1992; however, significant differences were observed in two of the regional water masses. The first, Modified Weddell Deep Water (MWDW), which comprises the permanent pycnocline, was less saline than a decade earlier, whereas Weddell Sea Bottom Water (WSBW) was horizontally patchier and colder. Near-bottom temperatures observed in 2004 were the coldest on record for the western Weddell Sea continental slope, with minimum temperatures ~0.4 °C and ~0.3 °C colder, respectively, than those observed during 1992-1993. The 2004 near-bottom temperature/salinity characteristics revealed the presence of two different WSBW types, in which a warm, fresh layer overlies a colder, saltier layer (both formed in the western Weddell Sea). The deeper layer may have formed locally from high salinity shelf water (HSSW) that flowed intermittently down the continental slope, consistent with the observed horizontal patchiness. This patchiness can be linked to the near-bottom variability found in Powell Basin, with consequences for the deep water outflow from the Weddell Sea.

Relevance: 30.00%

Abstract:

Erasure control coding has been exploited in communication networks with the aim of improving the end-to-end performance of data delivery across the network. To address concerns over the strengths and constraints of erasure coding schemes in this application, we examine the performance limits of two erasure control coding strategies: forward erasure recovery and adaptive erasure recovery. Our investigation shows that the throughput of a network using an (n, k) forward erasure control code is capped by r = k/n when the packet loss rate p ≤ t_e/n, and by k(1-p)/(n-t_e) when p > t_e/n, where t_e is the erasure control capability of the code. It also shows that the lower bound of the residual loss rate of such a network is (np - t_e)/(n - t_e) for t_e/n < p ≤ 1. In particular, if the code used is maximum distance separable, the Shannon capacity of the erasure channel, i.e. 1 - p, can be achieved, and the residual loss rate is lower bounded by (p + r - 1)/r for 1 - r < p ≤ 1. To address the requirements of real-time applications, we also investigate the service completion time of different schemes. The latency of the forward erasure recovery scheme is fractionally higher than that of a scheme without erasure control coding or retransmission mechanisms (using UDP), but much lower than that of the adaptive erasure scheme when the packet loss rate is high. Comparisons between the two erasure control schemes exhibit their advantages as well as disadvantages in delivering end-to-end services. To show the impact of the derived bounds on the end-to-end performance of a TCP/IP network, a case study demonstrates how erasure control coding can be used to maximize the performance of practical systems. © 2010 IEEE.
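To make these bounds concrete, the following is a minimal Python sketch that evaluates the stated throughput cap and residual-loss lower bound; the function names are ours, and the relation t_e = n - k used in the example holds only under the maximum-distance-separable assumption.

```python
# Sketch (ours, not the paper's code) of the quoted bounds for an (n, k)
# forward erasure control code with erasure control capability t_e.

def throughput_cap(n: int, k: int, t_e: int, p: float) -> float:
    """Upper bound on normalized throughput at packet loss rate p."""
    if p <= t_e / n:
        return k / n                   # the code rate r = k/n is the cap
    return k * (1 - p) / (n - t_e)     # losses beyond t_e/n reduce throughput

def residual_loss_lower_bound(n: int, k: int, t_e: int, p: float) -> float:
    """Paper's lower bound on residual loss for t_e/n < p <= 1; 0 otherwise."""
    if p <= t_e / n:
        return 0.0
    return (n * p - t_e) / (n - t_e)

# Example with an MDS code, for which t_e = n - k and the residual-loss
# bound reduces to (p + r - 1)/r with r = k/n, as stated above.
n, k = 255, 223
t_e = n - k
for p in (0.05, 0.15, 0.30):
    print(f"p={p}: throughput <= {throughput_cap(n, k, t_e, p):.3f}, "
          f"residual loss >= {residual_loss_lower_bound(n, k, t_e, p):.3f}")
```

For the MDS example, the cap evaluates to exactly 1 - p once p exceeds t_e/n, matching the Shannon capacity claim above.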

Relevance: 30.00%

Abstract:

The roles of long non-coding RNAs (lncRNAs) in regulating cancer and stem cells are increasingly appreciated. Their diverse mechanisms give the regulatory network a larger repertoire with which to increase complexity. Here we report a novel lncRNA, Lnc34a, that is enriched in colon cancer stem cells (CCSCs) and initiates asymmetric division by directly targeting the microRNA miR-34a to cause its spatial imbalance. Lnc34a recruits Dnmt3a via PHB2 and HDAC1 to methylate and deacetylate the miR-34a promoter simultaneously, hence epigenetically silencing miR-34a expression independently of its upstream regulator, p53. Lnc34a levels affect CCSC self-renewal and colorectal cancer (CRC) growth in xenograft models. Lnc34a is upregulated in late-stage CRCs, contributing to epigenetic miR-34a silencing and CRC proliferation. The fact that an lncRNA targets a microRNA highlights the regulatory complexity of non-coding RNAs (ncRNAs), which occupy the bulk of the genome.

Relevance: 30.00%

Abstract:

This dissertation focuses on two vital challenges in relation to whale acoustic signals: detection and classification.

In detection, we evaluated the influence of the uncertain ocean environment on the spectrogram-based detector and derived the likelihood ratio of the proposed Short-Time Fourier Transform (STFT) detector. Experimental results showed that the proposed detector outperforms spectrogram-based detectors; because it retains phase information, it is also more sensitive to environmental changes.

In classification, our focus is on finding a robust and sparse representation of whale vocalizations. Because whale vocalizations can be modeled as polynomial phase signals, we can represent whale calls by their polynomial phase coefficients. In this dissertation, we used the Weyl transform to capture chirp-rate information and used a two-dimensional feature set to represent whale vocalizations globally. Experimental results showed that our Weyl feature set outperforms chirplet coefficients and MFCCs (Mel-frequency cepstral coefficients) when applied to our collected data.
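As a simple, hedged illustration of the polynomial phase representation (this dechirping example is ours and is not the Weyl transform used in the dissertation): the phase of a noiseless linear chirp can be unwrapped and fit with a polynomial, and the fitted coefficients recover the start frequency and chirp rate.

```python
# Minimal illustration of representing a chirp-like call by polynomial
# phase coefficients. All signal parameters below are illustrative.
import numpy as np

fs = 4000.0                            # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
f0, c = 300.0, 150.0                   # start frequency (Hz) and chirp rate (Hz/s)
phase_true = 2 * np.pi * (f0 * t + 0.5 * c * t**2)
x = np.exp(1j * phase_true)            # analytic polynomial phase signal

phase = np.unwrap(np.angle(x))         # recover the continuous phase
coeffs = np.polyfit(t, phase, deg=2)   # quadratic phase -> three coefficients

chirp_rate_est = coeffs[0] / np.pi     # phase = pi*c*t^2 + 2*pi*f0*t => c = a2/pi
f0_est = coeffs[1] / (2 * np.pi)       # => f0 = a1/(2*pi)
print(f"chirp rate ~ {chirp_rate_est:.1f} Hz/s, f0 ~ {f0_est:.1f} Hz")
```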

Since whale vocalizations can be represented by polynomial phase coefficients, it is plausible that the signals lie on a manifold parameterized by these coefficients. We therefore studied the intrinsic structure of high-dimensional whale data by exploiting its geometry. Experimental results showed that nonlinear mappings such as the Laplacian eigenmap and ISOMAP outperform linear mappings such as PCA and MDS, suggesting that the whale acoustic data has nonlinear structure.
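A minimal sketch of such an embedding comparison using scikit-learn; the swiss roll below is a synthetic stand-in for the whale acoustic features, which are not available here.

```python
# Compare linear and nonlinear 2D embeddings of data lying on a curved
# manifold; a swiss roll stands in for the whale feature vectors.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, SpectralEmbedding  # Laplacian eigenmap

X, color = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

embeddings = {
    "PCA (linear)": PCA(n_components=2).fit_transform(X),
    "Isomap (nonlinear)": Isomap(n_neighbors=10, n_components=2).fit_transform(X),
    "Laplacian eigenmap": SpectralEmbedding(n_components=2,
                                            n_neighbors=10).fit_transform(X),
}
for name, Y in embeddings.items():
    print(name, Y.shape)   # the nonlinear maps should "unroll" the manifold
```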

We also explored deep learning algorithms on whale acoustic data. We built each layer as convolutions with either a PCA filter bank (PCANet) or a DCT filter bank (DCTNet). With the DCT filter bank, each layer has a different time-frequency scale representation, from which one can extract different physical information. Experimental results showed that our PCANet and DCTNet achieve a high classification rate on the whale vocalization data set. The word error rate of DCTNet features is similar to that of MFSC features in speech recognition tasks, suggesting that the convolutional network is able to reveal the acoustic content of speech signals.

Relevance: 30.00%

Abstract:

This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of the optics, such as aperture size, detector pixel count, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager; increasing sensitivity in any one dimension can significantly compromise the others.

This research applies coding strategies to optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed strategies combine hardware modification and signal processing to extract more bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing-process model, and reconstruction algorithm of each sensing system.

Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire the extra dimensions at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging instead multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging with minimal penalties in spatial resolution and noise while maintaining or improving temporal resolution. Experimental results show that appropriate coding strategies can increase sensing capacity by a factor of hundreds.
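The multiplexing-plus-reconstruction idea can be sketched as a toy compressed sensing problem; the dimensions, random sensing matrix, and ISTA solver below are illustrative assumptions rather than the dissertation's actual forward models.

```python
# Toy compressive measurement and sparse reconstruction: a random matrix
# multiplexes a sparse signal into few measurements, and ISTA (iterative
# soft thresholding) recovers it.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                  # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                  # multiplexed measurements

# ISTA: x <- soft(x + step * A^T (y - A x), step * lam)
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / spectral norm squared
lam, x = 0.01, np.zeros(n)
for _ in range(500):
    z = x + step * A.T @ (y - A @ x)
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```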

The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information in a noisy environment. Accomplishing the same task through engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor can localize multiple speakers in both stationary and dynamic auditory scenes, and can distinguish mixed conversations from independent sources with a high audio recognition rate.

Relevance: 30.00%

Abstract:

'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption.

This dissertation explores systems and methods for efficiently improving the sensitivity and performance of image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance situational awareness in a variety of dynamic applications.

Video cameras and camcorders sample the video volume (x, y, t) at fixed intervals to capture the volume's temporal evolution. Conventionally, one must reduce spatial resolution to increase the frame rate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x, y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, one of a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level.
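A toy numerical sketch of this coded-snapshot idea follows; the sizes and the random binary mask are illustrative, and shifting the mask in software stands in for the camera's physical mask translation.

```python
# CACTI-style temporal coding, in miniature: each frame of a short video is
# modulated by a shifted binary mask, and the coded frames are summed into a
# single 2D snapshot that encodes the temporal dimension.
import numpy as np

rng = np.random.default_rng(1)
T, H, W = 8, 64, 64                        # frames per snapshot, frame size
video = rng.random((T, H, W))              # stand-in for the (x, y, t) volume
mask = (rng.random((H, W)) > 0.5).astype(float)   # coded aperture pattern

snapshot = np.zeros((H, W))
for t in range(T):
    shifted = np.roll(mask, shift=t, axis=0)      # mask translation over time
    snapshot += shifted * video[t]                # temporal coding + integration

print(snapshot.shape)   # one 2D measurement encoding T temporal channels
```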

Since video cameras nominally integrate the remaining image volume dimensions (e.g., spectrum and focus) at capture time, spectral (x, y, t, λ) and focal (x, y, t, z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images invites exploration of other information within that video, namely focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI, compressive extended depth of field and compressive spectral-temporal imaging, which extend temporal coding to improve sensing performance in these other dimensions.

Geometrical-optics tradeoffs, such as the classic tension between wide field of view and high resolution in photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research and engineering challenges. One significant challenge is managing the focal volume (x, y, z) over wide fields of view and high resolutions. The fourth chapter presents advances in focus and image-quality assessment for a class of multiscale gigapixel cameras developed at Duke.

Along the same lines, we have explored methods for dynamic and adaptive focus control via point spread function engineering. We demonstrate another form of temporal coding, physical translation of the image plane from its nominal focal position, and show that this technique can generate arbitrary point spread functions.

Relevance: 30.00%

Abstract:

Backscatter communication is an emerging wireless technology that has recently gained increasing attention in both academic and industry circles. The key innovation of the technology is the ability of ultra-low-power devices to communicate by reusing nearby existing radio signals. Because such devices need not generate their own energetic radio signal, they benefit from simple designs, are very inexpensive, and are extremely energy efficient compared with traditional wireless devices. These benefits have made backscatter communication a desirable candidate for distributed wireless sensor network applications with energy constraints.

The backscatter channel presents a unique set of challenges. Unlike a conventional one-way channel (in which the information source is also the energy source), the backscatter channel experiences strong self-interference and spread-Doppler clutter that mask the information-bearing (modulated) signal scattered from the device. Both sources of interference arise from the scattering of the transmitted signal off objects, both stationary and moving, in the environment. Additionally, measurement of the backscatter device's location is degraded by both the clutter and the modulation of the signal return.

This work proposes a channel coding framework for the backscatter channel consisting of a bi-static transmitter/receiver pair and a quasi-cooperative transponder. It proposes run-length limited (RLL) coding to mitigate the background self-interference and spread-Doppler clutter at only a small cost in communication rate. The proposed method applies to both binary phase-shift keying (BPSK) and quadrature amplitude modulation (QAM) schemes and increases the rate by up to a factor of two compared with previous methods.
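As a hedged illustration of why transition-rich codes help, the sketch below uses Manchester (bi-phase) coding, a simple stand-in for the run-length limited codes of this work, and shows that it moves modulated energy away from DC, where self-interference and slow clutter concentrate.

```python
# Compare the low-frequency energy of uncoded NRZ/BPSK chips against
# Manchester (bi-phase) coded chips; the latter has a spectral null at DC.
import numpy as np

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, 4096)

nrz = np.repeat(2 * bits - 1, 2)                     # uncoded +/-1 chips
manchester = np.concatenate([(1, -1) if b else (-1, 1) for b in bits])

for name, s in (("NRZ", nrz), ("Manchester", manchester)):
    S = np.abs(np.fft.rfft(s)) ** 2
    dc_fraction = S[: len(S) // 16].sum() / S.sum()  # energy near DC
    print(f"{name}: {dc_fraction:.3f} of energy in the lowest 1/16 band")
```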

Additionally, this work analyzes the use of frequency modulation and bi-phase waveform coding in the transmitted (interrogating) waveform for high-precision estimation of the transponder's range. Compared with previous methods, optimally low range sidelobes are achieved. Moreover, since both the interrogating waveform coding and the transponder's communication coding instantaneously phase-modulate the signal, cross-interference between the localization and communication tasks exists. A phase-discrimination algorithm is proposed that separates the waveform coding from the communication coding upon reception, achieving localization with up to 3 dB more signal energy than previously reported results.

The joint communication-localization framework also enables a low-complexity receiver design because the same radio is used both for localization and communication.

Simulations comparing the performance of different codes corroborate the theoretical results and expose trade-offs between information rate and clutter mitigation, as well as among choices of waveform/channel coding pairs. Experimental results from a brassboard microwave system in an indoor environment are also presented and discussed.

Relevance: 30.00%

Abstract:

While molecular and cellular processes are often modeled as stochastic processes, for example Brownian motion, chemical reaction networks, and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement a stochastic process. DNA has been used as a substrate for programming molecular interactions, but its applications have been restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents, and difficult readout limit them to proof-of-concept purposes. To date, it has remained unknown whether a molecular process exists that can be programmed to implement stochastic processes for practical applications.

In this dissertation, a fully specified resonance energy transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which maps directly onto the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons, which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications in photonics and optoelectronics. Different approaches to using RET networks exist, with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on random samples, such as (1) fluorescent taggants and (2) stochastic computing.
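The CTMC model can be sketched with a Gillespie-style sampler; the three-state generator matrix below is a made-up example, not a fitted chromophore network, and the total time to absorption follows a phase-type distribution as described above.

```python
# Sample "photon emission times" from a small CTMC: simulate exponential
# holding times and random jumps until the absorbing state is reached.
import numpy as np

rng = np.random.default_rng(3)
# States 0 and 1 are transient (chromophores); state 2 absorbs (emission).
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.5, -4.0,  2.5],
              [ 0.0,  0.0,  0.0]])

def sample_emission_time() -> float:
    """Gillespie-style simulation until absorption; returns total dwell time."""
    state, t = 0, 0.0
    while Q[state, state] != 0.0:                 # until the absorbing state
        rate = -Q[state, state]
        t += rng.exponential(1.0 / rate)          # exponential holding time
        probs = Q[state].clip(min=0.0) / rate     # jump distribution
        state = rng.choice(len(Q), p=probs)
    return t

times = [sample_emission_time() for _ in range(10000)]
print("mean emission time:", np.mean(times))      # phase-type distributed
```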

By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by the number of resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime-coded fluorescent taggants. The taggant detection process also becomes highly efficient, and maximum likelihood estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.

Meanwhile, RET-based sampling units (RSUs) can be constructed to accelerate probabilistic algorithms with wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware of traditional computers, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor/GPU as a specialized functional unit or organized as a discrete accelerator, bringing substantial speedups and power savings.

Relevance: 30.00%

Abstract:

We calculate net community production (NCP) during summer 2005-2006 and spring 2006 in the Ross Sea using multiple approaches to determine the magnitude and consistency of rates. Water column carbon and nutrient inventories and surface ocean O2/Ar data are compared with satellite-derived primary productivity (PP) estimates and 14C uptake experiments. In spring, NCP was related to stratification proximal to upper ocean fronts. In summer, the most intense C drawdown was in shallow mixed layers affected by ice melt; depth-integrated C drawdown, however, increased with mixing depth. ΔO2/Ar-based methods, relying on gas exchange reconstructions, underestimate NCP due to seasonal variations in surface ΔO2/Ar and NCP rates. Mixed layer ΔO2/Ar requires approximately 60 days to reach steady state, starting from early spring. Additionally, cold temperatures prolong the sensitivity of gas exchange reconstructions to past NCP variability. Complex vertical structure, in addition to the seasonal cycle, affects interpretations of surface-based observations, including those made from satellites. During both spring and summer, substantial fractions of NCP were below the mixed layer. Satellite-derived estimates tended to overestimate PP relative to 14C-based estimates, most severely in locations of stronger upper water column stratification. Biases notwithstanding, NCP-PP comparisons indicated that community respiration was of similar magnitude to NCP. We observed that a substantial portion of NCP remained as suspended particulate matter in the upper water column, demonstrating a lag between production and export. Resolving the dynamic physical processes that structure variance in NCP and its fate will enhance understanding of carbon cycling in highly productive Antarctic environments.
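For readers unfamiliar with the O2/Ar method, the following is a minimal sketch of the steady-state flux calculation it rests on; all parameter values are illustrative, and the abstract's point is precisely that the steady-state assumption can fail.

```python
# Hedged sketch of the steady-state O2/Ar method: biological oxygen flux
# (a proxy for NCP) ~ k * [O2]_sat * Delta(O2/Ar). All values are assumed,
# not taken from the study.

k_o2 = 3.0          # gas transfer velocity, m/day (assumed)
o2_sat = 350.0      # O2 saturation concentration, mmol/m^3 (assumed)
o2_ar_meas = 1.025  # measured O2/Ar relative to equilibrium (assumed)
delta_o2ar = o2_ar_meas - 1.0     # biological supersaturation

ncp = k_o2 * o2_sat * delta_o2ar  # mmol O2 / m^2 / day
print(f"NCP ~ {ncp:.1f} mmol O2 m^-2 d^-1 (steady state assumed)")
```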

Relevance: 30.00%

Abstract:

Recent studies have suggested that the marine contribution of methane from shallow regions and melting marine-terminating glaciers may have been underestimated. Here we report on methane sources and potential sinks associated with methane seeps in Cumberland Bay, South Georgia's largest fjord system. The average organic carbon content in the upper 8 meters of sediment is around 0.65 wt.%; this observation, combined with Parasound data, suggests that the methane gas accumulations probably originate from peat-bearing sediments currently located several tens of meters below the seafloor. Only one of our cores indicates upward advection; in the rest, most of the methane is transported by diffusion. Sulfate and methane flux estimates indicate that a large fraction of the methane is consumed by anaerobic oxidation of methane (AOM). Carbon cycling at the sulfate-methane transition (SMT) results in a marked fractionation of δ13C-CH4, from an estimated source value of -65 ‰ to values as low as -96 ‰ just below the SMT. Methane concentrations in the sediments are high, especially close to the seepage sites (~40 mM); however, concentrations in the water column are relatively low (max. 58 nM) and are observed only close to the seafloor. Methane is trapped in the lowermost water mass; however, measured microbial oxidation rates reveal very low activity, with an average turnover time of 3.1 years. We therefore infer that methane must be transported out of the bay in the bottom water layer. A mean sea-air flux of only 0.005 nmol/m²/s confirms that almost no methane reaches the atmosphere.
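The diffusion-dominated transport inference rests on Fick's first law for pore water; below is a minimal sketch with illustrative parameter values (not the paper's).

```python
# Hedged sketch of a diffusive methane flux estimate via Fick's first law,
# J = phi * D * dC/dz. All values are illustrative assumptions.

phi = 0.8          # sediment porosity (assumed)
D = 1.0e-9         # CH4 diffusion coefficient in pore water, m^2/s (assumed)
c_deep = 40.0      # CH4 concentration at depth, mol/m^3 (~40 mM, as above)
c_smt = 0.0        # CH4 consumed by AOM at the sulfate-methane transition
dz = 2.0           # depth separating the two concentrations, m (assumed)

J = phi * D * (c_deep - c_smt) / dz   # upward diffusive flux, mol/m^2/s
print(f"J ~ {J:.2e} mol CH4 m^-2 s^-1")
```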