871 results for Data detection
Abstract:
Systems for the identification and registration of cattle have gradually been receiving attention for use in syndromic surveillance, a relatively recent approach for the early detection of infectious disease outbreaks. Real or near real-time monitoring of deaths or stillbirths reported to these systems offers an opportunity to detect temporal or spatial clusters of increased mortality that could be caused by an infectious disease epidemic. In Switzerland, such data are recorded in the "Tierverkehrsdatenbank" (TVD). To investigate the potential of the Swiss TVD for syndromic surveillance, 3 years of data (2009-2011) were assessed in terms of data quality, including timeliness of reporting and completeness of geographic data. Two time series consisting of reported on-farm deaths and stillbirths were retrospectively analysed to define and quantify the temporal patterns that result from non-health-related factors. Geographic data were almost always present in the TVD data, often at different spatial scales. On-farm deaths were reported to the database by farmers in a timely fashion; reporting of stillbirths was less timely. Timeliness and geographic coverage are two important features of disease surveillance systems, highlighting the suitability of the TVD for use in a syndromic surveillance system. The two time series exhibited different temporal patterns, both associated with non-health-related factors. To avoid false positive signals, these patterns need to be removed from the data or otherwise accounted for before aberration detection algorithms are applied in real time. Evaluating mortality data reported to systems for the identification and registration of cattle is of value for comparing national data systems and as a first step towards a Europe-wide early detection system for emerging and re-emerging cattle diseases.
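For illustration only (not the paper's implementation), the sketch below shows one way such non-health-related temporal patterns could be modeled and removed before aberration detection, using a harmonic regression on simulated weekly mortality counts; the data, covariates and alarm threshold are all assumptions.

```python
# Illustrative sketch only (not the paper's method): removing a seasonal
# pattern from a weekly on-farm death count series before aberration
# detection, using a simple harmonic regression. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
weeks = np.arange(156)                       # three years of weekly counts
seasonal = 5 * np.sin(2 * np.pi * weeks / 52)
counts = rng.poisson(lam=50 + seasonal)      # simulated baseline mortality

# Design matrix with an intercept and annual sine/cosine terms
X = np.column_stack([
    np.ones_like(weeks, dtype=float),
    np.sin(2 * np.pi * weeks / 52),
    np.cos(2 * np.pi * weeks / 52),
])
beta, *_ = np.linalg.lstsq(X, counts, rcond=None)
expected = X @ beta

# Residuals (observed minus expected) are what an aberration-detection
# algorithm would monitor; a simple z-score threshold flags excess deaths.
residuals = counts - expected
z = residuals / residuals.std()
alarms = weeks[z > 3]
print("weeks flagged:", alarms)
```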
Abstract:
Syndromic surveillance (SyS) systems currently exploit various sources of health-related data, most of which are collected for purposes other than surveillance (e.g. economic). Several European SyS systems use data collected during meat inspection for syndromic surveillance of animal health, as some diseases may be more easily detected post-mortem than at their point of origin or during the ante-mortem inspection upon arrival at the slaughterhouse. In this paper we use simulation to evaluate the performance of a quasi-Poisson regression (also known as an improved Farrington) algorithm for the detection of disease outbreaks during post-mortem inspection of slaughtered animals. When the algorithm was parameterized based on the retrospective analysis of 6 years of historic data, the probability of detection was satisfactory for large (range 83-445 cases) outbreaks but poor for small (range 20-177 cases) outbreaks. Varying the amount of historical data used to fit the algorithm can help increase the probability of detection for small outbreaks. However, while the use of a 0·975 quantile generated a low false-positive rate, in most cases more than 50% of outbreak cases had already occurred at the time of detection. The high variance observed in the whole-carcass condemnation time series, and the lack of flexibility in the temporal distribution of simulated outbreaks resulting from the low (monthly) reporting frequency, constitute major challenges for early detection of outbreaks in the livestock population based on meat inspection data. Reporting frequency should be increased in the future to improve the timeliness of the SyS system, while increased sensitivity may be achieved by integrating meat inspection data into a multivariate system that simultaneously evaluates multiple sources of data on livestock health.
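As a hedged illustration of the kind of algorithm described (not the exact improved-Farrington implementation evaluated in the paper), the sketch below fits a quasi-Poisson baseline to simulated monthly counts and raises an alarm when the current month exceeds a 0·975-quantile threshold; the data, covariates and threshold formula are simplifying assumptions.

```python
# Simplified illustration (not the exact improved-Farrington implementation):
# a quasi-Poisson baseline model for monthly condemnation counts, with an
# upper detection threshold at the 0.975 quantile. Data are simulated.
import numpy as np
import scipy.stats as st
import statsmodels.api as sm

rng = np.random.default_rng(1)
months = np.arange(72)                               # 6 years of history
baseline = 40 + 0.1 * months + 8 * np.sin(2 * np.pi * months / 12)
y = rng.poisson(baseline)

X = sm.add_constant(np.column_stack([
    months,
    np.sin(2 * np.pi * months / 12),
    np.cos(2 * np.pi * months / 12),
]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")  # quasi-Poisson

# Predict the current month and derive an alarm threshold from the
# overdispersed variance (normal approximation to the predictive distribution).
x_now = sm.add_constant(np.array([[72, np.sin(2 * np.pi * 72 / 12),
                                   np.cos(2 * np.pi * 72 / 12)]]), has_constant="add")
mu = fit.predict(x_now)[0]
threshold = mu + st.norm.ppf(0.975) * np.sqrt(fit.scale * mu)
observed = 95                                        # hypothetical new count
print("alarm" if observed > threshold else "no alarm", round(threshold, 1))
```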
Abstract:
The State of Connecticut owns a Light Detection and Ranging (LIDAR) data set that was collected in 2000 as part of the State’s periodic aerial reconnaissance missions. Although collected eight years ago, these data are only now being made available to the public. The data constitute a massive “point cloud”: a long list of east-north-up triplets in the State Plane Coordinate System Zone 0600 (SPCS83 0600), with orthometric heights (NAVD 88) in US survey feet. Unfortunately, point clouds have no structure or organization, and consequently they are not as useful as Triangulated Irregular Networks (TINs), digital elevation models (DEMs), contour maps, slope and aspect layers, curvature layers, and other derived products. The goal of this project was to provide the computational infrastructure to create a first cut of these products and to serve them to the public via the World Wide Web. The products are available at http://clear.uconn.edu/data/ct_lidar/index.htm.
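A minimal sketch, assuming simulated points rather than the Connecticut data, of how an unstructured point cloud can be turned into a TIN-based DEM and a slope layer with scipy; the grid resolution and surface are illustrative only, not the project's production pipeline.

```python
# Turn an unstructured point cloud into a gridded DEM by TIN-based linear
# interpolation, then derive a slope layer. Points are simulated here; real
# data would be the SPCS83 0600 east/north/up triplets.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
east = rng.uniform(0, 1000, 5000)      # feet
north = rng.uniform(0, 1000, 5000)
up = 100 + 0.02 * east + 5 * np.sin(north / 100) + rng.normal(0, 0.3, 5000)

# Regular grid at 10 ft resolution; griddata triangulates the points (a TIN)
# and interpolates linearly within each triangle.
xi = np.arange(0, 1000, 10.0)
yi = np.arange(0, 1000, 10.0)
gx, gy = np.meshgrid(xi, yi)
dem = griddata((east, north), up, (gx, gy), method="linear")

# Slope (in degrees) from the elevation gradients.
dzdy, dzdx = np.gradient(dem, 10.0)
slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
print("mean slope (deg):", np.nanmean(slope).round(2))
```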
Abstract:
Metabolic Syndrome (MetS) is a clustering of cardiovascular (CV) risk factors that includes obesity, dyslipidemia, hyperglycemia, and elevated blood pressure. Applying the criteria for MetS can serve as a clinically feasible tool for identifying patients at high risk for CV morbidity and mortality, particularly those who do not fall into traditional risk categories. The objective of this study was to examine the association between MetS and CV mortality among 10,940 American hypertensive adults, ages 30-69 years, participating in a large randomized controlled trial of hypertension treatment (HDFP 1973-1983). MetS was defined as the presence of hypertension and at least two of the following risk factors: obesity, dyslipidemia, or hyperglycemia. Of the 10,763 individuals with sufficient data available for analysis, 33.2% met criteria for MetS at baseline. The baseline prevalence of MetS was significantly higher among women (46%) than men (22%) and among non-blacks (37%) than blacks (30%). All-cause and CV mortality were assessed for the 10,763 individuals. Over a median follow-up of 7.8 years, 1,425 deaths were observed; approximately 53% of these deaths were attributed to CV causes. Compared to individuals without MetS at baseline, those with MetS had higher rates of all-cause mortality (14.5% vs. 12.6%) and CV mortality (8.2% vs. 6.4%). The unadjusted risk of CV mortality among those with MetS was 1.31 (95% confidence interval [CI], 1.12-1.52) times that for those without MetS at baseline. After adjustment for the traditional risk factors of age, race, gender, history of cardiovascular disease (CVD), and smoking status, individuals with MetS were 1.42 (95% CI, 1.20-1.67) times more likely to die of CV causes than those without MetS. Of the individual components of MetS, hyperglycemia/diabetes conferred the strongest risk of CV mortality (OR 1.73; 95% CI, 1.39-2.15). The results of the present study suggest that MetS, defined as the presence of hypertension and two additional cardiometabolic risk factors (obesity, dyslipidemia, or hyperglycemia/diabetes), can be used with some success to predict CV mortality in middle-aged hypertensive adults. Ongoing and future prospective studies are vital to examine the association between MetS and cardiovascular morbidity and mortality in select high-risk subpopulations, and to continue evaluating the public health impact of aggressive, targeted screening, prevention, and treatment efforts to prevent future cardiovascular disability and death.
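The MetS definition used here is simple enough to state as code; the following sketch encodes it directly, with hypothetical field names (this is not the study's analysis code).

```python
# Illustrative encoding (not the HDFP analysis code) of the MetS definition
# from the abstract: hypertension plus at least two of obesity, dyslipidemia,
# or hyperglycemia/diabetes. Parameter names are hypothetical.
def has_metabolic_syndrome(hypertension: bool, obesity: bool,
                           dyslipidemia: bool, hyperglycemia: bool) -> bool:
    # All participants in this cohort are hypertensive, but the flag is kept
    # explicit so the rule stands on its own.
    additional = sum([obesity, dyslipidemia, hyperglycemia])
    return hypertension and additional >= 2

# Example: a hypertensive participant with obesity and hyperglycemia meets
# the criteria; one with only dyslipidemia does not.
print(has_metabolic_syndrome(True, True, False, True))    # True
print(has_metabolic_syndrome(True, False, True, False))   # False
```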
Abstract:
Introduction. Despite the ban of lead-containing gasoline and paint, childhood lead poisoning remains a public health issue. Furthermore, a Medicaid-eligible child is 8 times more likely to have an elevated blood lead level (EBLL) than a non-Medicaid child, which is the primary reason for the early-detection lead screening mandate at ages 12 and 24 months among the Medicaid population. Based on field observations, there was evidence that suggested a screening compliance issue. Objective. The purpose of this study was to analyze blood lead screening compliance in previously lead-poisoned Medicaid children and to test for an association between timely lead screening and timely childhood immunizations. The mean number of months between follow-up tests was also examined for a significant difference between non-compliant and compliant lead-screened children. Methods. Access to the surveillance data of all childhood lead poisoning cases in Bexar County was granted by the San Antonio Metropolitan Health District. A database was constructed and analyzed using descriptive statistics, logistic regression methods and non-parametric tests. Lead screening at 12 months of age was analyzed separately from lead screening at 24 months. The small portion of the population who were related to one another were included in one analysis and removed from a second analysis to check for significance. Gender, ethnicity, age of home, and having a sibling with an EBLL were ruled out as confounders for the association tests, but ethnicity and age of home were adjusted for in the non-parametric tests. Results. There was a strong, significant association between lead screening compliance at 12 months and childhood immunization compliance, with or without including related children (p<0.00). However, there was no significant association between the two variables at the age of 24 months. Furthermore, there was no significant difference in the median of the mean months between follow-up blood tests among the non-compliant and compliant lead-screened population for the 12-month screening group, but there was a significant difference for the 24-month screening group (p<0.01). Discussion. Descriptive statistics showed that 61% and 56% of the previously lead-poisoned Medicaid population did not receive their 12- and 24-month mandated lead screening on time, respectively. This suggests that their elevated blood lead levels might have been detected earlier in childhood had screening been timely. Furthermore, a child who was compliant with lead screening at 12 months of age was 2.36 times more likely to also receive childhood immunizations on time compared to a child who was not compliant with the 12-month screening. Even though no statistically significant association was found for the 24-month group, the public health significance of a screening compliance issue is no less important. The Texas Medicaid program needs to enforce lead screening compliance, because it is evident that there has been no monitoring system in place. Further recommendations include an increased focus on educating parents about the importance of taking their children to wellness exams on time.
Abstract:
The relationship between degree of diastolic blood pressure (DBP) reduction and mortality was examined among hypertensives, ages 30-69, in the Hypertension Detection and Follow-up Program (HDFP). The HDFP was a multi-center community-based trial, which followed 10,940 hypertensive participants for five years. One-year survival was required for inclusion in this investigation, since the one-year annual visit was the first occasion on which change in blood pressure could be measured for all participants. During the subsequent four years of follow-up of 10,052 participants, 568 deaths occurred. The crude mortality rate was calculated for levels of change in DBP and for categories of variables related to mortality. Time-dependent life tables were also calculated so as to utilize the available blood pressure data over time. In addition, the Cox life table regression model, extended to take into account both time-constant and time-dependent covariates, was used to examine the relationship between change in blood pressure over time and mortality. The results of the time-dependent life table and time-dependent Cox life table regression analyses supported the existence of a quadratic function modeling the relationship between DBP reduction and mortality, even after adjusting for other risk factors. The minimum mortality hazard ratio, based on a particular model, occurred at a DBP reduction of 22.6 mm Hg (standard error = 10.6) in the whole population and 8.5 mm Hg (standard error = 4.6) in the baseline DBP stratum 90-104. Beyond this reduction, there was a small increase in the risk of death. There was no evidence of a quadratic function when the same model was fitted using systolic blood pressure. Methodologic issues involved in studying a particular degree of blood pressure reduction were considered. The confidence interval around the change corresponding to the minimum hazard ratio was wide, and the obtained blood pressure level should not be interpreted as a goal for treatment. Blood pressure reduction was attributed not only to pharmacologic therapy, but also to regression to the mean and to other unknown factors unrelated to treatment. Therefore, the surprising results of this study do not provide direct implications for treatment, but they strongly suggest replication in other populations.
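A hedged sketch of the general modeling idea follows: a Cox model with linear and quadratic terms for DBP reduction, here time-fixed and fitted to simulated data rather than the HDFP cohort or its time-dependent extension, showing how the reduction at which the fitted hazard is minimal can be recovered from the coefficients. The lifelines package is an assumed dependency.

```python
# Hedged sketch (not the HDFP analysis): Cox regression with a quadratic
# term for diastolic blood pressure reduction; the minimum-hazard reduction
# is -b1 / (2 * b2). Data are simulated.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 2000
dbp_drop = rng.normal(15, 10, n)                   # mm Hg reduction at year 1
age = rng.uniform(30, 69, n)
# Simulated hazard that is lowest around a 20 mm Hg reduction (quadratic).
risk = 0.002 * (dbp_drop - 20) ** 2 + 0.03 * (age - 50)
time = rng.exponential(scale=10 * np.exp(-risk))
event = (time < 4).astype(int)                     # 4 years of follow-up
time = np.minimum(time, 4)

df = pd.DataFrame({"T": time, "E": event, "dbp_drop": dbp_drop,
                   "dbp_drop_sq": dbp_drop ** 2, "age": age})
cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")

b1 = cph.params_["dbp_drop"]
b2 = cph.params_["dbp_drop_sq"]
print("reduction with minimum hazard (mm Hg):", round(-b1 / (2 * b2), 1))
```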
Abstract:
Next-generation sequencing (NGS) technology has become a prominent tool in biological and biomedical research. However, NGS data analysis, such as de novo assembly, mapping and variant detection, is far from maturity, and the high sequencing error rate is one of the major problems. To minimize the impact of sequencing errors, we developed a highly robust and efficient method, MTM, to correct the errors in NGS reads. We demonstrated the effectiveness of MTM on both single-cell data with highly non-uniform coverage and normal data with uniformly high coverage, showing that MTM’s performance does not rely on the coverage of the sequencing reads. MTM was also compared with Hammer and Quake, the best methods for correcting non-uniform and uniform data, respectively. For non-uniform data, MTM outperformed both Hammer and Quake. For uniform data, MTM showed better performance than Quake and comparable results to Hammer. By making better error corrections with MTM, the quality of downstream analysis, such as mapping and SNP detection, was improved. SNP calling is a major application of NGS technologies. However, the existence of sequencing errors complicates this process, especially for the low coverage (
Abstract:
Collisional and post-collisional volcanic rocks in the Ulubey (Ordu) area at the western edge of the Eastern Pontide Tertiary Volcanic Province (EPTVP) in NE Turkey are divided into four suites: the Middle Eocene (49.4-44.6 Ma) Andesite-Trachyandesite (AT), Trachyandesite-Trachydacite-Rhyolite (TTR) and Trachydacite-Dacite (TD) suites, and the Middle Miocene (15.1 Ma) Trachybasalt (TB) suite. The local stratigraphy in the Ulubey area starts with shallow-marine sediments of Paleocene-Eocene age and then continues extensively with sub-aerial andesitic to rhyolitic and rare basaltic volcanism during Eocene and Miocene times. Petrographically, the volcanic rocks are composed primarily of andesites/trachyandesites, with minor trachydacites/rhyolites, basalts/trachybasalts and pyroclastics, and show porphyritic, hyalo-microlitic porphyritic and rarely glomeroporphyritic, intersertal, intergranular, fluidal and sieve textures. The Ulubey (Ordu) volcanic rocks indicate magma evolution from tholeiitic-alkaline to calc-alkaline with medium-K contents. Primitive mantle-normalized trace element and chondrite-normalized rare earth element (REE) patterns show that the volcanic rocks have moderate light rare earth element (LREE)/heavy rare earth element (HREE) ratios relative to E-type MORB and depletion in Nb, Ta and Ti. High Th/Yb ratios indicate parental magma(s) derived from an enriched source formed by mixing of slab and asthenospheric melts previously modified by fluids and sediments from a subduction zone. All of the volcanic rocks share similar incompatible element ratios (e.g., La/Sm, Zr/Nb, La/Nb) and chondrite-normalized REE patterns, indicating that the basic to acidic rocks originated from the same source. The volcanic rocks were produced by slab dehydration-induced melting of an existing metasomatized mantle source, and the fluids from slab dehydration introduced significant large ion lithophile element (LILE) and LREE to the source, masking its inherent HFSE-enriched characteristics. The initial 87Sr/86Sr ratios (0.7044-0.7050) and eNd values (-0.3 to +3.4) of the volcanics suggest that they originated from an enriched lithospheric mantle source with low Sm/Nd ratios. Integration of the geochemical, petrological and isotopic data with regional and local geological data suggests that the Tertiary volcanic rocks from the Ulubey (Ordu) area were derived from an enriched mantle, previously metasomatized by fluids from the subducted slab, during Eocene to Miocene time in a collisional and post-collisional extension-related geodynamic setting following the Late Mesozoic continental collision between the Eurasian plate and the Tauride-Anatolide platform.
Abstract:
This paper assesses the along-strike variation of active bedrock fault scarps using long-range terrestrial laser scanning (t-LiDAR) data in order to determine the distribution behaviour of scarp height and to subsequently calculate long-term throw-rates. Five faults on Crete which display spectacular limestone fault scarps have been studied using high-resolution digital elevation model (HRDEM) data. We scanned several hundred square metres of the fault system including the footwall, fault scarp and hanging wall of the investigated fault segment. The vertical displacement and the dip of the scarp were extracted every metre along the strike of the detected fault segment based on the processed HRDEM. The scarp variability was analysed using statistical and morphological methods. The analysis was done in a geographical information system (GIS) environment. Results show a normal distribution for the scanned fault scarp's vertical displacement. Based on this, the mean value of height was chosen to define the authentic vertical displacement. Consequently, the scarp can be divided into sections above, below and within the range of the mean (within one standard deviation), and the modifications of vertical displacement can be quantified. The fault segment can therefore be subdivided into areas which are influenced by external modification such as erosion and sedimentation processes. Moreover, to describe and measure the variability of vertical displacement along the strike of the fault, the semi-variance was calculated with the variogram method. This method is used to determine how much influence the external processes have had on the vertical displacement. By combining morphological and statistical results, the fault can be subdivided into areas with high external influences and areas with authentic fault scarps, which have little or no external influences. This subdivision is necessary for long-term throw-rate calculations, because without this differentiation the calculated rates would be misleading and the activity of a fault would be incorrectly assessed, with significant implications for seismic hazard assessment since fault slip-rate data govern the earthquake recurrence. Furthermore, by using this workflow, areas with minimal external influences can be determined, not only for throw-rate calculations, but also for determining sample sites for absolute dating techniques such as cosmogenic nuclide dating. The main outcomes of this study include: i) there is no direct correlation between the fault's mean vertical displacement and dip (R² less than 0.31); ii) without subdividing the scanned scarp into areas with differing amounts of external influences, the along-strike variability of vertical displacement is ±35%; iii) when the scanned scarp is subdivided, the variation of the vertical displacement of the authentic scarp (exposed by earthquakes only) is in a range of ±6% (this varies depending on the fault, from 7 to 12%); iv) the long-term throw-rate (since 13 ka) calculated for four scarps in Crete using the authentic vertical displacement is 0.35 ± 0.04 mm/yr at Kastelli 1, 0.31 ± 0.01 mm/yr at Kastelli 2, 0.85 ± 0.06 mm/yr at the Asomatos fault (Sellia) and 0.55 ± 0.05 mm/yr at the Lastros fault.
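As an illustration of the variogram step (not the authors' GIS workflow), the sketch below computes an empirical semi-variogram for a simulated along-strike displacement profile sampled every metre, the quantity used above to judge how much external processes modified the scarp.

```python
# Minimal sketch of an empirical semi-variogram for vertical displacement
# sampled every metre along strike. The displacement profile is simulated.
import numpy as np

rng = np.random.default_rng(4)
x = np.arange(500)                                   # position along strike (m)
throw = 8 + 0.5 * np.sin(x / 40) + rng.normal(0, 0.2, x.size)  # scarp height (m)

def empirical_semivariogram(values, max_lag):
    """gamma(h) = mean of 0.5 * (z(x) - z(x+h))^2 for each lag h."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((values[h:] - values[:-h]) ** 2)
                      for h in lags])
    return lags, gamma

lags, gamma = empirical_semivariogram(throw, max_lag=100)
print("semivariance at 10 m and 100 m lag:", gamma[9].round(3), gamma[99].round(3))
```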
Abstract:
The mid-Cretaceous is widely considered the archetypal ice-free greenhouse interval in Earth history, with a thermal maximum around Cenomanian-Turonian boundary time (ca. 90 Ma). However, contemporaneous glaciations have been hypothesized based on sequence stratigraphic evidence for rapid sea-level oscillation and oxygen isotope excursions in records generated from carbonates of questionable preservation and/or of low resolution. We present new oxygen isotope records for the mid-Cenomanian Demerara Rise that are of much higher resolution than previously available, taken from both planktic and benthic foraminifers, and utilizing only extremely well preserved glassy foraminifers. Our records show no evidence of glaciation, calling into question the hypothesized ice sheets and rendering the origin of inferred rapid sea-level oscillations enigmatic. Simple mass-balance calculations demonstrate that this Cretaceous sea-level paradox is unlikely to be explained by hidden ice sheets existing below the limit of d18O detection.
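A back-of-the-envelope version of such a mass balance, with generic assumed values rather than the paper's, is sketched below: it estimates the shift in mean seawater d18O implied by storing enough ice to produce a given sea-level fall.

```python
# Back-of-the-envelope illustration (values are generic assumptions, not the
# paper's): the shift in mean seawater d18O implied by storing enough ice to
# lower sea level by a given amount, via a simple isotopic mass balance.
mean_ocean_depth_m = 3800.0      # approximate mean ocean depth
sea_level_fall_m = 25.0          # hypothetical glacio-eustatic fall
d18o_ice_permil = -40.0          # assumed mean d18O of the ice sheet
d18o_seawater_permil = 0.0       # reference mean ocean value

# Removing a layer of water 'sea_level_fall_m' thick and storing it as
# isotopically light ice enriches the remaining ocean:
fraction_removed = sea_level_fall_m / mean_ocean_depth_m
delta_d18o_sw = fraction_removed * (d18o_seawater_permil - d18o_ice_permil)
print(f"implied seawater d18O shift: +{delta_d18o_sw:.2f} per mil")
# A shift of this size is what high-resolution foraminiferal d18O records
# would need to resolve in order to detect the hypothesized ice sheets.
```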
Abstract:
Three long-term temperature data series measured in Portugal were studied to detect and correct non-climatic homogeneity breaks, and are now available for future studies of climate variability. Series of monthly minimum (Tmin) and maximum (Tmax) temperatures measured at the three Portuguese meteorological stations of Lisbon (from 1856 to 2008), Coimbra (from 1865 to 2005) and Porto (from 1888 to 2001), together with the monthly series of average temperature (Taver) and temperature range (DTR) derived from them, were tested for homogeneity breaks using, firstly, metadata, secondly, a visual analysis and, thirdly, four widely used homogeneity tests: the von Neumann ratio test, the Buishand test, the standard normal homogeneity test and the Pettitt test. The homogeneity tests were used in absolute mode (using the temperature series themselves) and in relative mode (using sea-surface temperature anomaly series obtained from HadISST2 close to the Portuguese coast, or already corrected temperature series, as reference series). We considered the Tmin, Tmax and DTR series most informative for the detection of homogeneity breaks, because Tmin and Tmax can respond differently to changes in the position of a thermometer or other changes in the instrument's environment; the Taver series were used mainly as a control. The homogeneity tests show strong inhomogeneity of the original data series, which could have both internal climatic and non-climatic origins. Homogeneity breaks identified by the last three homogeneity tests were compared with the available metadata, which document instrument changes, changes in station location and environment, observing procedures, etc. Significant homogeneity breaks (significance 95% or more) that coincide with known dates of instrumental changes have been corrected using standard procedures. It was also noted that some significant homogeneity breaks, which could not be connected to known dates of any changes in the instrumentation or in station location and environment, could be caused by large volcanic eruptions. The corrected series were again tested for homogeneity: they were considered free of non-climatic breaks when the tests on most of the monthly series showed no significant (significance 95% or more) homogeneity breaks coinciding with dates of known instrument changes. The corrected series are now available within the framework of the ERA-CLIM FP7 project for future studies of climate variability.
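For concreteness, the sketch below implements one of the four tests named above, the Pettitt test, on a simulated monthly series with an artificial break; it is an illustration, not the processing code used for the Portuguese series.

```python
# Pettitt test for a single change point in a monthly temperature series;
# the series here is simulated with an artificial inhomogeneity.
import numpy as np

rng = np.random.default_rng(5)
n = 240                                          # 20 years of monthly values
temps = rng.normal(15.0, 1.0, n)
temps[120:] += 0.8                               # artificial break

def pettitt_test(x):
    """Return the most likely break index and its approximate p-value."""
    n = len(x)
    # U_k = sum over i <= k, j > k of sign(x_i - x_j)
    u = np.array([np.sign(x[:k + 1, None] - x[None, k + 1:]).sum()
                  for k in range(n - 1)])
    k_max = np.argmax(np.abs(u))
    K = np.abs(u[k_max])
    p = 2.0 * np.exp(-6.0 * K**2 / (n**3 + n**2))
    return k_max + 1, min(p, 1.0)

break_at, p_value = pettitt_test(temps)
print(f"break after observation {break_at}, p = {p_value:.4f}")
```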
Abstract:
Documenting changes in distribution is necessary for understanding species' responses to environmental changes, but data on species distributions are heterogeneous in accuracy and resolution. Combining different data sources and methodological approaches can fill gaps in knowledge about the dynamic processes driving changes in species-rich, but data-poor, regions. We combined recent bird survey data from the Neotropical Biodiversity Mapping Initiative (NeoMaps) with historical distribution records to estimate potential changes in the distribution of eight species of Amazon parrots in Venezuela. Using environmental covariates and presence-only data from museum collections and the literature, we first used maximum likelihood to fit a species distribution model (SDM) estimating a historical maximum probability of occurrence for each species. We then used recent NeoMaps survey data to build single-season occupancy models (OM) with the same environmental covariates, as well as with time- and effort-dependent detectability, resulting in estimates of the current probability of occurrence. We finally calculated the disagreement between predictions as a matrix of probability of change in the state of occurrence. Our results suggested negative changes for the only restricted, threatened species, Amazona barbadensis, which has been independently confirmed with field studies. Two of the three remaining widespread species that were detected, Amazona amazonica and Amazona ochrocephala, also had a high probability of negative changes in northern Venezuela, but results were not conclusive for Amazona farinosa. The four remaining species were undetected in recent field surveys; three of these were most probably absent from the survey locations (Amazona autumnalis, Amazona mercenaria and Amazona festiva), while a fourth (Amazona dufresniana) requires more intensive targeted sampling to estimate its current status. Our approach is unique in taking full advantage of available, but limited, data and in detecting a high probability of change even for rare and patchily distributed species. However, it is presently limited to species meeting the strong assumptions required for maximum-likelihood estimation with presence-only data, including very high detectability and representative sampling of the historical distribution.
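One simple way to turn the historical (SDM) and current (OM) probability surfaces into a probability-of-change map is sketched below; this particular formulation is an assumption for illustration, not necessarily the paper's exact definition.

```python
# Combine a historical probability of occurrence (SDM) with a current
# probability (occupancy model) into per-cell probabilities of loss and gain.
import numpy as np

rng = np.random.default_rng(6)
p_hist = rng.uniform(0, 1, size=(50, 50))   # SDM: historical P(occurrence)
p_curr = rng.uniform(0, 1, size=(50, 50))   # OM: current P(occurrence)

# Treating the two states as independent given the covariates:
p_loss = p_hist * (1.0 - p_curr)            # occupied then, absent now
p_gain = (1.0 - p_hist) * p_curr            # absent then, occupied now
p_change = p_loss + p_gain

print("cells with >50% probability of negative change:",
      int((p_loss > 0.5).sum()))
```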
Abstract:
ENVISAT ASAR WSM images with a pixel size of 150 × 150 m, acquired in different meteorological, oceanographic and sea ice conditions, were used to detect icebergs in the Amundsen Sea (Antarctica). An object-based method for automatic iceberg detection from SAR data has been developed and applied. The object identification is based on spectral and spatial parameters on 5 scale levels, and was verified against manual classification in four polygon areas chosen to represent varying environmental conditions. The algorithm works comparatively well in the freezing temperatures and strong wind conditions prevailing in the Amundsen Sea during the year. The detection rate was 96%, which corresponds to 94% of the area (counting icebergs larger than 0.03 km²), for all seasons. The presented algorithm tends to generate errors in the form of false alarms, mainly caused by the presence of ice floes, rather than misses. This affects the reliability, since false alarms were manually corrected after the analysis.
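For contrast with the object-based approach, a much simpler pixel-based sketch is shown below: thresholding backscatter and labelling connected components as iceberg candidates, filtered by the 0.03 km² minimum area; the image, threshold and method are illustrative assumptions, not the paper's algorithm.

```python
# Simple pixel-based iceberg candidate detection on a simulated SAR scene:
# threshold bright targets, label connected components, filter by area.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)
backscatter = rng.normal(-18.0, 2.0, size=(200, 200))   # sea clutter (dB)
backscatter[50:55, 80:88] = -5.0                         # bright target 1
backscatter[120:123, 30:33] = -6.0                       # bright target 2

pixel_area_km2 = 0.15 * 0.15                  # 150 m x 150 m WSM pixels
candidates = backscatter > -10.0              # assumed brightness threshold
labels, n = ndimage.label(candidates)
sizes = ndimage.sum(candidates, labels, index=np.arange(1, n + 1))

min_area_km2 = 0.03                           # smallest iceberg of interest
keep = np.where(sizes * pixel_area_km2 >= min_area_km2)[0] + 1
print(f"{len(keep)} iceberg candidates larger than {min_area_km2} km^2")
```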
Abstract:
This data set contains four time series of particulate and dissolved soil nitrogen measurements from the main experiment plots of a large grassland biodiversity experiment (the Jena Experiment; see further details below). In the main experiment, 82 grassland plots of 20 x 20 m were established from a pool of 60 species belonging to four functional groups (grasses, legumes, tall and small herbs). In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 4, 8, 16 and 60 species) and functional richness (1, 2, 3, 4 functional groups). Plots were maintained by bi-annual weeding and mowing.
1. Total nitrogen from solid phase: Stratified soil sampling was performed every two years, starting before sowing in April 2002 and repeated in April 2004, 2006 and 2008, to a depth of 30 cm, segmented to a depth resolution of 5 cm giving six depth subsamples per core. In 2002, five samples per plot were taken and analyzed independently; averaged values per depth layer are reported. In later years, three samples per plot were taken, pooled in the field, and measured as a combined sample. Sampling locations were less than 30 cm apart from sampling locations in other years. All soil samples were passed through a sieve with a mesh size of 2 mm in 2002; in later years samples were further sieved to 1 mm. No additional mineral particles were removed by this procedure. Total nitrogen concentration was analyzed on ball-milled subsamples (time 4 min, frequency 30 s-1) by an elemental analyzer at 1150°C (Elementaranalysator vario Max CN; Elementar Analysensysteme GmbH, Hanau, Germany).
2. Total nitrogen from solid phase (high-intensity sampling): In block 2 of the Jena Experiment, soil samples were taken to a depth of 1 m (segmented to a depth resolution of 5 cm giving 20 depth subsamples per core) with three replicates per block every 5 years, starting before sowing in April 2002. Samples were processed as for the more frequent sampling but were always analyzed independently and never pooled.
3. Mineral nitrogen from KCl extractions: Five soil cores (diameter 0.01 m) were taken at a depth of 0 to 0.15 m (and between 2002 and 2004 also at a depth of 0.15 to 0.3 m) of the mineral soil from each of the experimental plots at various times over the years. In addition, plots of the management experiment, which altered mowing frequency and fertilized subplots (see further details below), were also sampled in some later years. Samples of the soil cores per plot (subplots in the case of the management experiment) were pooled during each sampling campaign. NO3-N and NH4-N concentrations were determined by extraction of soil samples with 1 M KCl solution and were measured in the soil extract with a Continuous Flow Analyzer (CFA, 2003-2005: Skalar, Breda, Netherlands; 2006-2007: AutoAnalyzer, Seal, Burgess Hill, United Kingdom).
4. Dissolved nitrogen in soil solution: Glass suction plates with a diameter of 12 cm, 1 cm thickness and a pore size of 1-1.6 µm (UMS GmbH, Munich, Germany) were installed in April 2002 at depths of 10, 20, 30 and 60 cm to collect soil solution. The sampling bottles were continuously evacuated to a negative pressure between 50 and 350 mbar, such that the suction pressure was about 50 mbar above the actual soil water tension. Thus, only the soil leachate was collected.
Cumulative soil solution was sampled biweekly and analyzed for nitrate (NO3-), ammonium (NH4+) and total dissolved nitrogen (TDN) concentrations with a continuous flow analyzer (CFA, Skalar, Breda, The Netherlands). Nitrate was analyzed photometrically after reduction to NO2- and reaction with sulfanilamide and naphthylethylenediamine-dihydrochloride to form an azo dye. Our NO3- concentrations contained an unknown contribution of NO2- that is expected to be small. Simultaneously with the NO3- analysis, NH4+ was determined photometrically as 5-aminosalicylate after a modified Berthelot reaction. The detection limits for NO3- and NH4+ were 0.02 and 0.03 mg N L-1, respectively. Total dissolved N in soil solution was analyzed by oxidation with K2S2O8 followed by reduction to NO2- as described above for NO3-. Dissolved organic N (DON) concentrations in soil solution were calculated as the difference between TDN and the sum of mineral N (NO3- + NH4+).
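A small sketch of the DON calculation described above (DON = TDN - (NO3-N + NH4-N)), with invented concentrations and one common convention for handling values below the stated detection limits.

```python
# DON = TDN - (NO3-N + NH4-N); values below the detection limits are treated
# as censored. Concentrations are in mg N per litre; numbers are invented.
import numpy as np

NO3_DL, NH4_DL = 0.02, 0.03          # detection limits from the text

tdn = np.array([1.20, 0.85, 0.40])   # total dissolved N
no3 = np.array([0.90, 0.01, 0.25])   # NO3-N (second value below DL)
nh4 = np.array([0.05, 0.02, 0.03])   # NH4-N (second value below DL)

# One common convention: set values below the detection limit to half of it.
no3 = np.where(no3 < NO3_DL, NO3_DL / 2, no3)
nh4 = np.where(nh4 < NH4_DL, NH4_DL / 2, nh4)

don = tdn - (no3 + nh4)
print("DON (mg N/L):", np.round(don, 3))
```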
Abstract:
Bathymetry based on data recorded during cruise POS317-3 between 19.09.2004 and 13.10.2004 in the Black Sea. This cruise concentrated on bathymetric mapping, on the mapping of gas seeps by hydro-acoustic detection of gas flares in the water column, and on the quantification of microbial turnover in gassy sediments and microbial mats. The major objective during POS317-3 was the characterization and identification of microorganisms involved in anaerobic methane oxidation in the sediment and in microbial mats. As part of these investigations, characteristic organic molecules will be identified that can be used as biomarkers for anaerobic methane-oxidizing microorganisms.