981 results for Intussusception, Recurrence Rate, Pathologic Lead Point, Operative Reduction, Barium Enema Reduction


Relevance:

30.00%

Publisher:

Abstract:

Since Dymond et al. (1992, doi:10.1029/92PA00181) proposed the paleoproductivity algorithm based on "Bio-Ba", which relies on a strong correlation between Ba and organic carbon fluxes in sediment traps, this proxy has been applied in many paleoproductivity studies. Barite, the main carrier of particulate barium in the water column and the phase associated with carbon export, has also been suggested as a reliable paleoproductivity proxy in some locations. We demonstrate that Ba(excess) (total barium minus the fraction associated with terrigenous material) frequently overestimates Ba(barite) (barium associated with the mineral barite), most likely due to the inclusion of barium from phases other than barite and terrigenous silicates (e.g., carbonate, organic matter, opal, and Fe-Mn oxides and hydroxides). A comparison between overlying oceanic carbon export and carbon export derived from Ba(excess) shows that the Dymond et al. (1992) algorithm frequently underestimates carbon export, but it remains a useful carbon export indicator if all caveats are considered before the algorithm is applied. Ba(barite) accumulation rates from a wide range of core-top sediments from different oceanic settings correlate strongly with surface ocean 14C and chlorophyll a measurements of primary production. This relationship varies by ocean basin, but with the application of the appropriate f-ratio to the 14C and chlorophyll a primary production estimates, the Ba(barite) accumulation-carbon export relationships for the equatorial Pacific, Atlantic, and Southern Ocean converge to a single global relationship that can be used to reconstruct paleo carbon export.
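For reference, Ba(excess) is conventionally computed by normalizing total Ba to aluminium with an assumed terrigenous Ba/Al ratio; this is a standard formulation, and the specific detrital ratio adopted varies between studies:

\[
\mathrm{Ba_{excess}} \;=\; \mathrm{Ba_{total}} \;-\; \mathrm{Al_{total}} \times \left(\mathrm{Ba/Al}\right)_{\mathrm{terrigenous}}
\]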

Relevance:

30.00%

Publisher:

Abstract:

The present data compilation includes dinoflagellate growth rates, grazing rates, and gross growth efficiencies determined either in the field or in laboratory experiments. From the existing literature, we synthesized all data that we could find on dinoflagellates. Some sources might be missing, but none were purposefully ignored. We did not include autotrophic dinoflagellates in the database, but mixotrophic organisms may have been included, owing to the large uncertainty about which taxa are mixotrophic, heterotrophic, or symbiont-bearing. Field data on microzooplankton grazing consist mostly of grazing rates obtained with the dilution technique using a 24-h incubation period. Laboratory grazing and growth data focus on pelagic ciliates and heterotrophic dinoflagellates. The experiments measured grazing or growth as a function of prey concentration or at saturating prey concentration (maximal grazing rate). Counting every single data point available (each measured rate for a defined predator-prey pair at a given prey concentration), and counting experiments that measured growth and grazing simultaneously as one data point, there are a total of 801 data points for the dinoflagellates.
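As background for the field rates in this compilation, the dilution technique estimates grazing by regressing the apparent prey growth rate against the fraction of unfiltered seawater, following the classic formulation k(D) = mu - g*D (Landry and Hassett, 1982). The sketch below is illustrative only; the function name and input values are hypothetical.

```python
# Dilution-technique regression: fit k(D) = mu - g * D, where D is the
# fraction of unfiltered seawater and k is the apparent prey growth rate
# over a 24 h incubation. Intercept = mu (prey growth), slope = -g (grazing).
import numpy as np

def dilution_rates(dilution_fraction, apparent_growth):
    D = np.asarray(dilution_fraction, dtype=float)
    k = np.asarray(apparent_growth, dtype=float)
    slope, intercept = np.polyfit(D, k, 1)  # least-squares line
    return intercept, -slope  # (mu, g), both per day

# Hypothetical example: five dilution levels and their apparent growth rates
mu, g = dilution_rates([0.2, 0.4, 0.6, 0.8, 1.0],
                       [0.55, 0.45, 0.38, 0.27, 0.18])
print(f"mu = {mu:.2f} per day, g = {g:.2f} per day")
```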

Relevance:

30.00%

Publisher:

Abstract:

The present data compilation includes ciliate growth rates, grazing rates, and gross growth efficiencies determined either in the field or in laboratory experiments. From the existing literature, we synthesized all data that we could find on ciliates. Some sources might be missing, but none were purposefully ignored. Field data on microzooplankton grazing consist mostly of grazing rates obtained with the dilution technique using a 24-h incubation period. Laboratory grazing and growth data focus on pelagic ciliates and heterotrophic dinoflagellates. The experiments measured grazing or growth as a function of prey concentration or at saturating prey concentration (maximal grazing rate). Counting every single data point available (each measured rate for a defined predator-prey pair at a given prey concentration), and counting experiments that measured growth and grazing simultaneously as one data point, there are a total of 1485 data points for the ciliates.

Relevance:

30.00%

Publisher:

Abstract:

On the basis of two sedimentary records from the central Sea of Okhotsk, we reconstruct the closely coupled glacial/interglacial changes in terrigenous flux, marine productivity, and sea ice coverage over the past 1.1 Myr. The correspondence of our sedimentary records to the China loess grain size record (China loess particle timescale, CHILOPARTS) suggests that environmental changes in both the Sea of Okhotsk area and SE Asia were closely related via the Siberian atmospheric high-pressure cell. During full glacial times our records point to a strong Siberian High causing northerly wind directions, extended sea ice cover, and reduced Amur River discharge. Deglacial maxima of terrigenous flux were followed by, or synchronous with, high-productivity events. Marine productivity was strengthened during glacial terminations because of effective nutrient utilization at times of enhanced water column stratification and high nutrient supply from fluvial runoff and sea ice thawing. During interglacials, SE monsoonal winds prevailed, analogous to today's summer situation of a pronounced Mongolian Heat Low and a strong Hawaiian High. Strong freshwater discharge induced by high precipitation rates in the Amur drainage area, together with a seasonally reduced and mobile sea ice cover, favored marine productivity (although it remained considerably lower than during the terminations) and lowered the flux of ice-rafted detritus.

Relevance:

30.00%

Publisher:

Abstract:

We have analyzed the major, trace, and rare earth element composition of surface sediments collected from a transect across the Equator at 135°W longitude in the Pacific Ocean. Comparing the behavior of this suite of elements to the CaCO3, opal, and Corg fluxes (which record sharp maxima at the Equator, previously documented at the same sampling stations) enables us to assess the relative significance of the various pathways by which trace elements are transported to the equatorial Pacific seafloor. Elemental transport is affiliated with two dominant fluxes: (1) the high biogenic source at the Equator, associated with equatorial divergence of surface water and upwelling of nutrient-rich water, and (2) the high aluminosilicate flux at 4°N, associated with increased terrigenous input from elevated rainfall at the Intertropical Convergence Zone (ITCZ) of the trade winds. The biogenic flux at the Equator transports Ca and Sr structurally bound to carbonate tests and Mn primarily as an adsorbed component. Trace elements such as Cr, As, Pb, and the REEs are also influenced by the biogenic flux at the Equator, although this affiliation is not regionally dominant. Normative calculations suggest that the extremely large fluxes of Ba and P at the Equator are carried by only small proportions of barite and apatite phases. The high terrigenous flux at the ITCZ has a profound effect on chemical transport to the seafloor, with elemental fluxes increasing tremendously and in parallel with Ti. Normative calculations, however, indicate that these fluxes are far in excess of what can be supplied by lattice-bound terrigenous phases. The accumulation of Ba beneath the ITCZ is greater than that affiliated with biogenic transport at the Equator, while the P flux at the ITCZ is only 10% less than at the Equator. This challenges the common view that Ba and P are essentially exclusively associated with biogenic fluxes. Many other elements (including Mn, Pb, As, and REEs) also record greater accumulation beneath the ITCZ than at the Equator. Thus, adsorptive scavenging by terrigenous particulate matter, or by phases intimately associated with it, appears to be an extremely important process regulating elemental transport to the equatorial Pacific seafloor. These findings emphasize the role of vertical transport to the sediment and provide additional constraints on the paleochemical use of trace elements to track biogenic and terrigenous fluxes.

Relevance:

30.00%

Publisher:

Abstract:

Geochemical barrier zones play an important role in determining various physical systems and characteristics of oceans, e.g. hydrodynamics, salinity, temperature, and light. In the book, each of more than 30 barrier zones is illustrated and defined by physical, chemical, and biological parameters. Among the topics discussed are the processes of inflow, transformation, and precipitation of the sedimentary layer in the open oceans and in more restricted areas such as the Baltic, Black, and Mediterranean Seas.

Relevance:

30.00%

Publisher:

Abstract:

The effects of eutrophication on short-term changes in the microbial community were investigated using high-resolution lipid biomarker and trace metal data for sediments from the eutrophic Lake Rotsee (Switzerland). The lake has been strongly influenced by sewage input since the 1850s and is an ideal site for studying an anthropogenically altered ecosystem. Historical remediation measures have had direct implications for productivity and microbial biota, leading to community composition changes and abundance shifts. The higher sewage and nutrient input resulted in a productivity increase, which led predominantly to a radiation of diatoms, primary producers, and methanogens between about 1918 and 1921, but also affected all microorganism groups and macrophytes between about 1958 and 1972. Bacterial biomass increased in 1933, which may have been related to the construction of a mechanical sewage treatment plant. Biomarkers also allowed tracing of fossil organic matter/biodegraded oil contamination in the lake. Stephanodiscus parvus, Cyclotella radiosa and Asterionella formosa were the dominant sources of specific diatom biomarkers. Since the 1850s, the cell density of methanogenic Archaea (Methanosaeta spp.) has ranged within ca. 0.5-1.8 × 10^9 cells/g dry sediment, and the average lipid content of Rotsee Archaea was ca. 2.2 fg iGDGTs/cell. An altered BIT index (BITCH), indicating changes in terrestrial organic matter supply to the lake, is proposed.
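For context, the conventional BIT index of Hopmans et al. (2004), on which the altered version is presumably based (its exact definition is not given in this abstract), is the ratio of branched GDGTs to branched GDGTs plus crenarchaeol:

\[
\mathrm{BIT} \;=\; \frac{[\mathrm{GDGT\text{-}I}] + [\mathrm{GDGT\text{-}II}] + [\mathrm{GDGT\text{-}III}]}{[\mathrm{GDGT\text{-}I}] + [\mathrm{GDGT\text{-}II}] + [\mathrm{GDGT\text{-}III}] + [\mathrm{crenarchaeol}]}
\]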

Relevance:

30.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced-rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridges existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
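For concreteness, the latent structure (latent class) factorization referred to here writes the probability mass function of p categorical variables as a nonnegative PARAFAC decomposition with k components:

\[
\Pr(Y_1 = c_1, \ldots, Y_p = c_p) \;=\; \sum_{h=1}^{k} \pi_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j},
\qquad \pi_h \ge 0, \quad \sum_{h=1}^{k} \pi_h = 1,
\]

where each \lambda^{(j)}_{h\,\cdot} is a probability vector over the categories of variable j; the minimal such k is the nonnegative rank of the probability tensor.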

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data are frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
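A standard fact underlying such approximations, sketched here under the assumption that optimality is measured by the divergence from the exact posterior p to the approximation q: the KL-optimal Gaussian is the moment-matched one,

\[
\operatorname*{arg\,min}_{q = \mathcal{N}(m,\,\Sigma)} \mathrm{KL}\left(p \,\middle\|\, q\right) \;=\; \mathcal{N}\big(\mathbb{E}_p[\theta],\; \mathrm{Cov}_p(\theta)\big).
\]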

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
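A minimal sketch of one approximating kernel of the kind analyzed there — a Metropolis-Hastings step whose log-likelihood is estimated from a random subset of the data — follows. This is an illustrative toy under a made-up Gaussian model with a flat prior, not the framework of Chapter 6.

```python
# Approximate Metropolis-Hastings with a subsampled log-likelihood:
# the full-data log-likelihood is replaced by (n/m) times the
# log-likelihood of a random subset of size m.
import numpy as np

rng = np.random.default_rng(0)

def log_lik(theta, x):
    # toy model: x_i ~ N(theta, 1), flat prior on theta
    return -0.5 * np.sum((x - theta) ** 2)

def approx_mh_step(theta, data, m, step=0.01):
    n = len(data)
    idx = rng.choice(n, size=m, replace=False)
    prop = theta + step * rng.standard_normal()
    log_alpha = (n / m) * (log_lik(prop, data[idx]) - log_lik(theta, data[idx]))
    return prop if np.log(rng.random()) < log_alpha else theta

data = rng.normal(1.0, 1.0, size=100_000)
theta = 0.0
for _ in range(2000):
    theta = approx_mh_step(theta, data, m=500)
print(theta)  # wanders near the sample mean, ~1.0
```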

Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. In contrast, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
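For reference, a minimal sketch of the Polya-Gamma data augmentation sampler for Bayesian logistic regression (Polson, Scott and Windle, 2013) is given below. It assumes the third-party pypolyagamma package and an isotropic Gaussian prior; the code illustrates the augmentation scheme, not the thesis's implementation.

```python
# Polya-Gamma Gibbs sampler for logit models: alternate
#   omega_i | beta ~ PG(1, x_i' beta)
#   beta | omega   ~ N(V X' kappa, V),  V = (X' Omega X + B^{-1})^{-1},
# where kappa_i = y_i - 1/2 and B is the prior covariance.
import numpy as np
import pypolyagamma  # assumed available: pip install pypolyagamma

def pg_gibbs(X, y, n_iter=1000, prior_var=100.0, seed=1):
    rng = np.random.default_rng(seed)
    pg = pypolyagamma.PyPolyaGamma(seed)
    n, p = X.shape
    B_inv = np.eye(p) / prior_var
    kappa = y - 0.5
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        psi = X @ beta
        omega = np.array([pg.pgdraw(1.0, c) for c in psi])
        V = np.linalg.inv(X.T @ (X * omega[:, None]) + B_inv)
        m = V @ (X.T @ kappa)
        beta = rng.multivariate_normal(m, V)
        draws[t] = beta
    return draws
```

In the rare-events regime studied in Chapter 7 (large n, few successes), chains of this form exhibit the slow mixing described above.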

Relevance:

30.00%

Publisher:

Abstract:

Alzheimer's disease and other dementias are among the most challenging illnesses confronting countries with ageing populations. Treatment options for dementia are limited, and the costs are significant. There is a growing need to develop new treatments for dementia, especially for the elderly. There is also growing evidence that centrally acting angiotensin converting enzyme (ACE) inhibitors, which cross the blood-brain barrier, are associated with a reduced rate of cognitive and functional decline in dementia, especially in Alzheimer's disease (AD). The aim of this research is to investigate the effects of centrally acting ACE inhibitors (CACE-Is) on the rate of cognitive and functional decline in dementia, using a three-phase knowledge discovery in databases (KDD) process. KDD, as a systematic way to process and analyse clinical data, is used to find useful insights in a variety of clinical databases. The data used are from three clinical databases: the Geriatric Assessment Tool (GAT), the Doxycycline and Rifampin for Alzheimer's Disease (DARAD), and the Qmci validation databases, which were derived from several different geriatric clinics in Canada. This research involves patients diagnosed with AD, vascular or mixed dementia only. Patients were included if baseline and end-point (at least six months apart) Standardised Mini-Mental State Examination (SMMSE), Quick Mild Cognitive Impairment (Qmci) or Activities of Daily Living (ADL) scores were available. The rates of change were compared between patients taking CACE-Is and those not currently treated with CACE-Is. The results suggest that there is a statistically significant difference in the rate of decline in cognitive and functional scores between CACE-I and NoCACE-I patients. This research also validates that the Qmci, a new short assessment test, has the potential to replace the current popular screening tests for cognition in clinics and clinical trials.
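A hedged sketch of the core comparison described above: annualized rates of score change per patient, compared between treatment groups. The column names and the use of Welch's t-test are illustrative assumptions; the actual analysis pipeline is not specified in this abstract.

```python
# Compare annualized cognitive-score decline between CACE-I and
# NoCACE-I patients. DataFrame columns here are hypothetical.
import pandas as pd
from scipy import stats

def annualized_rates(df):
    years = (df["end_date"] - df["baseline_date"]).dt.days / 365.25
    return (df["end_score"] - df["baseline_score"]) / years

def compare_groups(df):
    rates = annualized_rates(df)
    on_drug = rates[df["on_cace_i"]]
    off_drug = rates[~df["on_cace_i"]]
    return stats.ttest_ind(on_drug, off_drug, equal_var=False)  # Welch's t-test
```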

Relevance:

30.00%

Publisher:

Abstract:

Lead isotopic compositions and Pb and Ba concentrations have been measured in ice cores from Law Dome, East Antarctica, covering the past 6500 years. 'Natural' background concentrations of Pb (ca. 0.4 pg/g) and Ba (ca. 1.3 pg/g) are observed until 1884 AD, after which increased Pb concentrations and lowered 206Pb/207Pb ratios indicate the influence of anthropogenic Pb. The isotopic composition of 'natural' Pb varies within the range 206Pb/207Pb = 1.20-1.25 and 208Pb/207Pb = 2.46-2.50, with an average rock and soil dust Pb contribution of 8-12%. A major pollution event is observed at Law Dome between 1884 and 1908 AD, elevating the Pb concentration four-fold and changing 206Pb/207Pb ratios in the ice to ca. 1.12. Based on Pb isotopic systematics and Pb emission statistics, this is attributed to Pb mined at Broken Hill and smelted at Broken Hill and Port Pirie, Australia. Anthropogenic Pb inputs are at their greatest from 1900 to 1910 and from ca. 1960 to ca. 1980. During the 20th century, Ba concentrations are consistently higher than 'natural' levels and are attributed to increased dust production, suggesting the influence of climate change and/or changes in land coverage with vegetation.
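The stated 8-12% dust contribution is the kind of figure obtained from two-endmember isotopic mixing; a generic first-order formulation (the endmember ratios used in the study are not reproduced here) is:

\[
f_{\mathrm{dust}} \;\approx\; \frac{\left(^{206}\mathrm{Pb}/^{207}\mathrm{Pb}\right)_{\mathrm{sample}} - \left(^{206}\mathrm{Pb}/^{207}\mathrm{Pb}\right)_{\mathrm{other}}}{\left(^{206}\mathrm{Pb}/^{207}\mathrm{Pb}\right)_{\mathrm{dust}} - \left(^{206}\mathrm{Pb}/^{207}\mathrm{Pb}\right)_{\mathrm{other}}}
\]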

Relevance:

30.00%

Publisher:

Abstract:

Particle reactive elements are scavenged to a higher degree at ocean margins than in the open ocean due to higher fluxes of biogenic and terrigenous particles. In order to determine the influence of these processes on the depositional fluxes of 10Be and barium, we have performed high-resolution measurements on sediment core GeoB1008-3 from the Congo Fan. Because the core is dominated by terrigenous matter supplied by the Congo River, it has a high average mass accumulation rate of 6.5 cm/kyr. Biogenic 10Be and Ba concentrations were calculated from total concentrations by subtracting the terrigenous components of 10Be and Ba, which are assumed to be proportional to the flux of Al2O3. The mean Ba/Al weight ratio of the terrigenous component was determined to be 0.0045. The unusually high terrigenous 10Be concentrations of 9.1 × 10^9 atoms/g Al2O3 are either due to input of particles with high 10Be content by the Congo River or due to scavenging of oceanic 10Be by riverine particles. The maxima of biogenic 10Be and Ba concentrations coincide with maxima of the paleoproductivity rates. Time series analysis of the 10Be and Ba concentration profiles reveals a strong dominance of the precessional period of 24 kyr, which also controls the rates of paleoproductivity in this core. During the maxima of productivity the flux of biogenic Ba is enhanced to a larger extent than that of biogenic 10Be. Applying a model for coastal scavenging, we ascribe the observed higher sensitivity of Ba to biogenic particle fluxes to the fact that the ocean residence time of Ba is approximately 10 times longer than that of 10Be.
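Written out with the values stated above (a formulation consistent with the description, with Al taken from the terrigenous Al2O3 fraction):

\[
\mathrm{Ba_{bio}} \;=\; \mathrm{Ba_{total}} - 0.0045 \times \mathrm{Al},
\qquad
{}^{10}\mathrm{Be_{bio}} \;=\; {}^{10}\mathrm{Be_{total}} - 9.1 \times 10^{9}\,\mathrm{atoms\,g^{-1}\,Al_2O_3} \times \mathrm{Al_2O_3}.
\]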

Relevance:

30.00%

Publisher:

Abstract:

This thesis was written within the framework of a project called 'seepage water prognosis', funded by the German Federal Ministry of Education and Research (BMBF). Forty-one German institutions, among them university research institutes, public authorities, and engineering companies, were each funded for three years. The aim was to work out the scientific basis needed to carry out a seepage water prognosis (Oberacker and Eberle, 2002). According to the Federal German Soil Protection Act (Federal Bulletin, 1998), a seepage water prognosis is required in order to avoid future soil impacts from the application of recycling products. The participants focused on the development either of methods to determine the source strength of the materials investigated, defined as the total mass flow caused by natural leaching, or of models to predict contaminant transport through the underlying soil. Annual meetings of all participants as well as separate meetings of the two subprojects were held. The Department of Geosciences in Bremen participated with two subprojects. The aim of the subproject that resulted in this thesis was the development of easily applicable, valid, and generally accepted laboratory methods for determining the source strength. Within the second subproject, my colleague Veith Becker developed a computer model for the transport prognosis, with the source strength as the main input parameter.