933 results for Random Pore Model
Abstract:
In all European Union countries, chemical residues are required to be routinely monitored in meat. Good farming and veterinary practice can prevent the contamination of meat with pharmaceutical substances, resulting in a low detection of drug residues through random sampling. An alternative approach is to target-monitor farms suspected of treating their animals with antimicrobials. The objective of this project was to assess, using a stochastic model, the efficiency of these two sampling strategies. The model integrated data on Swiss livestock as well as expert opinion and results from studies conducted in Switzerland. Risk-based sampling showed an increase in detection efficiency of up to 100% depending on the prevalence of contaminated herds. Sensitivity analysis of this model showed the importance of the accuracy of prior assumptions for conducting risk-based sampling. The resources gained by changing from random to risk-based sampling should be transferred to improving the quality of prior information.
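The comparison of random versus risk-based sampling described above can be sketched with a small Monte Carlo simulation. This is an illustrative toy, not the study's stochastic model: the two prevalence values, the 50/50 split between high- and low-risk herds, and the sample size are all invented for the example.

```python
import random

def detections(prev_high, prev_low, n_samples, risk_based, seed=0):
    """Count detected contaminated herds among n_samples sampled herds.

    Half the herds are 'high-risk' (prevalence prev_high) and half
    'low-risk' (prev_low). Random sampling draws from both strata with
    equal probability; risk-based sampling targets only the high-risk
    stratum. All parameter values are illustrative, not the Swiss data.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        if risk_based:
            p = prev_high  # sample only farms suspected of treatment
        else:
            p = prev_high if rng.random() < 0.5 else prev_low
        if rng.random() < p:
            hits += 1
    return hits

random_hits = detections(0.04, 0.002, 20000, risk_based=False)
targeted_hits = detections(0.04, 0.002, 20000, risk_based=True)
```

With these made-up prevalences the targeted strategy detects roughly twice as many contaminated herds for the same sampling effort, mirroring the "up to 100%" efficiency gain reported in the abstract; the gain shrinks as the prior identification of high-risk farms becomes less accurate.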
Abstract:
In this paper, we propose a fully automatic, robust approach for segmenting the proximal femur in conventional X-ray images. Our method is based on hierarchical landmark detection by random forest regression, where the detection results of 22 global landmarks are used for spatial normalization, and the detection results of 59 local landmarks serve as the image cue for instantiation of a statistical shape model of the proximal femur. To detect landmarks at both levels, we use multi-resolution HoG (Histogram of Oriented Gradients) features, which achieve better accuracy and robustness. The efficacy of the present method is demonstrated by experiments conducted on 150 clinical X-ray images. It was found that the present method could achieve an average point-to-curve error of 2.0 mm and that it was robust to low image contrast, noise, and occlusions caused by implants.
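The core of an HoG feature is a histogram of gradient orientations weighted by gradient magnitude. The toy function below computes a single global orientation histogram in pure Python; it is only a stand-in for the multi-resolution, cell-and-block HoG used in the paper, and the bin count and test image are arbitrary.

```python
import math

def hog_descriptor(img, n_bins=8):
    """Gradient-orientation histogram of a grayscale image (list of rows).

    Finite-difference gradients are accumulated, weighted by magnitude,
    into unsigned-orientation bins. Real HoG pipelines pool histograms
    over cells and normalize over blocks; this shows only the core idea.
    """
    h, w = len(img), len(img[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi  # unsigned orientation
            b = min(int(ang / math.pi * n_bins), n_bins - 1)
            hist[b] += mag
    s = sum(hist) or 1.0
    return [v / s for v in hist]  # L1-normalized

# A vertical edge yields purely horizontal gradients, i.e. bin 0.
edge = [[0, 0, 10, 10]] * 5
desc = hog_descriptor(edge)
```

In a landmark-detection pipeline such descriptors, computed at several resolutions around a candidate point, would form the input vector for the random forest regressor that predicts the offset to each landmark.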
Abstract:
Perceptual learning is a training-induced improvement in performance. Mechanisms underlying the perceptual learning of depth discrimination in dynamic random-dot stereograms were examined by assessing stereothresholds as a function of decorrelation. The inflection point of the decorrelation function was defined as the level of decorrelation corresponding to 1.4 times the threshold at 0% decorrelation. In general, stereothresholds increased with increasing decorrelation. Following training, stereothresholds and standard errors of measurement decreased systematically for all tested decorrelation values. Post-training decorrelation functions were reduced by a multiplicative constant (approximately 5), exhibiting changes in stereothresholds without changes in the inflection points. Disparity energy model simulations indicate that a post-training reduction in neuronal noise is sufficient to account for the perceptual learning effects. In two subjects, learning effects were retained over a period of six months, which may have application for training stereo-deficient subjects.
Abstract:
The use of group-randomized trials is particularly widespread in the evaluation of health care, educational, and screening strategies. Group-randomized trials represent a subset of a larger class of designs, often labeled nested, hierarchical, or multilevel, and are characterized by the randomization of intact social units or groups rather than individuals. The application of random effects models to group-randomized trials requires the specification of fixed and random components of the model. The underlying assumption is usually that these random components are normally distributed. This research is intended to determine whether the Type I error rate and power are affected when the assumption of normality for the random component representing the group effect is violated.

In this study, simulated data are used to examine the Type I error rate, power, bias, and mean squared error of the estimates of the fixed effect and the observed intraclass correlation coefficient (ICC) when the random component representing the group effect possesses a distribution with non-normal characteristics, such as heavy tails or severe skewness. The simulated data are generated with various characteristics (e.g. number of schools per condition, number of students per school, and several within-school ICCs) observed in most small, school-based, group-randomized trials. The analysis is carried out using SAS PROC MIXED, Version 6.12, with random effects specified in a RANDOM statement and restricted maximum likelihood (REML) estimation. The results from the non-normally distributed data are compared to results obtained from the analysis of data with similar design characteristics but normally distributed random effects.

The results suggest that violation of the normality assumption for the group component by a skewed or heavy-tailed distribution does not appear to influence the estimation of the fixed effect, the Type I error rate, or power.
Negative biases were detected when estimating the sample ICC, and these increased dramatically in magnitude as the true ICC increased. The biases were not as pronounced when the true ICC was within the range observed in most group-randomized trials (i.e. 0.00 to 0.05). The normally distributed group effect also resulted in biased ICC estimates when the true ICC was greater than 0.05; however, this may be a result of higher correlation within the data.
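The simulation design above can be sketched with a one-way ANOVA estimator of the ICC. This is a simplified illustration, not the study's SAS PROC MIXED/REML analysis: group counts, group sizes, and the shifted-exponential skewed effect are all invented, and total variance is fixed at 1 so the group variance equals the target ICC.

```python
import random

def simulate_icc(true_icc, n_groups=50, n_per=20, skewed=False, seed=1):
    """One-way ANOVA ICC estimate from simulated grouped data.

    Group effects are drawn from a normal distribution or from a shifted
    exponential with the same variance, mimicking the skewness scenario
    in the abstract.
    """
    rng = random.Random(seed)
    sg = true_icc ** 0.5        # group-effect SD
    sw = (1 - true_icc) ** 0.5  # within-group SD
    data = []
    for _ in range(n_groups):
        if skewed:
            g = sg * (rng.expovariate(1.0) - 1.0)  # mean 0, SD sg, skewed
        else:
            g = rng.gauss(0.0, sg)
        data.append([g + rng.gauss(0.0, sw) for _ in range(n_per)])
    grand = sum(sum(r) for r in data) / (n_groups * n_per)
    msb = n_per * sum((sum(r) / n_per - grand) ** 2 for r in data) / (n_groups - 1)
    msw = sum((x - sum(r) / n_per) ** 2 for r in data for x in r) / (n_groups * (n_per - 1))
    return (msb - msw) / (msb + (n_per - 1) * msw)

icc_norm = simulate_icc(0.05, skewed=False)
icc_skew = simulate_icc(0.05, skewed=True)
```

Repeating such runs over many seeds and comparing the distribution of estimates under the two group-effect distributions is the essence of the bias comparison reported in the abstract.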
Abstract:
Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains, on a detailed mathematical level, the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron's ISI distribution and its input current. Approximation quality depends on the frequency spectrum of the current and improves as the voltage baseline is raised towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimates of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov chain Monte Carlo or message-passing methods.
Finally, we explain how spike-based random sampling relates to existing computational theories about UP states during slow wave sleep and present possible extensions of the model in the context of spike-frequency adaptation.
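A minimal simulation of the EIF neuron shows how a spike train yields a stream of ISI "samples". This sketch uses generic textbook parameter values (resting potential, threshold, slope factor, noise level), not those of the study, and plain Euler-Maruyama integration.

```python
import math
import random

def eif_isis(i_base=8.0, sigma=2.0, t_max=5000.0, dt=0.05, seed=2):
    """Interspike intervals of an exponential integrate-and-fire neuron.

    Integrates dV/dt = (-(V-EL) + dT*exp((V-VT)/dT) + I)/tau plus white
    noise; a spike is recorded when V crosses Vcut, after which V is
    reset. Each recorded ISI is one 'sample' in the abstract's sense.
    """
    EL, VT, dT, tau = 0.0, 10.0, 2.0, 10.0  # mV, mV, mV, ms
    Vcut, Vreset = 30.0, 0.0
    rng = random.Random(seed)
    v, t, last, isis = EL, 0.0, 0.0, []
    while t < t_max:
        drift = (-(v - EL) + dT * math.exp((v - VT) / dT) + i_base) / tau
        v += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
        if v >= Vcut:
            isis.append(t - last)  # analog value carried by this spike
            last, v = t, Vreset
    return isis

isis = eif_isis()
```

Under the theory summarized above, the empirical distribution of `isis` approximates a distribution determined by the input current, and raising the baseline (here, `i_base`) towards threshold tightens that approximation.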
Abstract:
Argillaceous rocks are considered to be a suitable geological barrier for the long-term containment of wastes. Their efficiency at retarding contaminant migration is assessed using reactive-transport experiments and modeling, the latter requiring a sound understanding of pore-water chemistry. The building of a pore-water model, which is mandatory for laboratory experiments mimicking in situ conditions, requires a detailed knowledge of the rock mineralogy and of minerals at equilibrium with present-day pore waters. Using a combination of petrological, mineralogical, and isotopic studies, the present study focused on the reduced Opalinus Clay formation (Fm) of the Benken borehole (30 km north of Zurich) which is intended for nuclear-waste disposal in Switzerland. A diagenetic sequence is proposed, which serves as a basis for determining the minerals stable in the formation and their textural relationships. Early cementation of dominant calcite, rare dolomite, and pyrite formed by bacterial sulfate reduction, was followed by formation of iron-rich calcite, ankerite, siderite, glauconite, (Ba, Sr) sulfates, and traces of sphalerite and galena. The distribution and abundance of siderite depends heavily on the depositional environment (and consequently on the water column). Benken sediment deposition during Aalenian times corresponds to an offshore environment with the early formation of siderite concretions at the water/sediment interface at the fluctuating boundary between the suboxic iron reduction and the sulfate reduction zones. Diagenetic minerals (carbonates except dolomite, sulfates, silicates) remained stable from their formation to the present. Based on these mineralogical and geochemical data, the mineral assemblage previously used for the geochemical model of the pore waters at Mont Terri may be applied to Benken without significant changes. 
These further investigations demonstrate the need for detailed mineralogical and geochemical study to refine the model of pore-water chemistry in a clay formation.
Abstract:
High-pressure mechanical squeezing was applied to sample pore waters from a sequence of highly indurated and overconsolidated sedimentary rocks in a drillcore from a deep borehole in NE Switzerland. The rocks are generally rich in clay minerals (28–71 wt.%), with low water contents of 3.5–5.6 wt.%, resulting in extremely low hydraulic conductivities of 10⁻¹⁴–10⁻¹³ m/s. First pore-water samples could generally be taken at 200 MPa, and further aliquots were obtained at 300, 400 and 500 MPa. Chemical and isotopic compositions of squeezed waters evolve with increasing pressure. Decreasing concentrations of Cl⁻, Br⁻, Na⁺ and K⁺ are explained by ion filtration due to the collapse of the pore space during squeezing. Increasing concentrations of Ca²⁺ and Mg²⁺ are considered to be a consequence of pressure-dependent solubilities of carbonate minerals in combination with sorption/desorption reactions. The pressure dependence was studied by model calculations considering equilibrium with carbonate minerals and the exchanger population on clay surfaces, and the trends observed in the experiments could be confirmed. The compositions of the squeezed waters were compared with results of independent methods, such as aqueous extraction and in-situ sampling of ground and pore waters. On this basis, it is concluded that the chemical and isotopic composition of pore water squeezed at the lowest pressure of 200 MPa closely represents that of the in-situ pore water. The feasibility of sampling pore waters with water contents down to 3.5 wt.% and possibly less opens new perspectives for studies targeted at palaeo-hydrogeological investigations using pore-water compositions in aquitards as geochemical archives.
Abstract:
Many biological processes depend on the sequential assembly of protein complexes. However, studying the kinetics of such processes by direct methods is often not feasible. As an important class of such protein complexes, pore-forming toxins start their journey as soluble monomeric proteins, and oligomerize into transmembrane complexes to eventually form pores in the target cell membrane. Here, we monitored pore formation kinetics for the well-characterized bacterial pore-forming toxin aerolysin in single cells in real time to determine the lag times leading to the formation of the first functional pores per cell. Probabilistic modeling of these lag times revealed that one slow and seven equally fast rate-limiting reactions best explain the overall pore formation kinetics. The model predicted that monomer activation is the rate-limiting step for the entire pore formation process. We hypothesized that this could be through release of a propeptide and indeed found that peptide removal abolished these steps. This study illustrates how stochasticity in the kinetics of a complex process can be exploited to identify rate-limiting mechanisms underlying multistep biomolecular assembly pathways.
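The best-fitting kinetic model above, one slow plus seven equally fast rate-limiting reactions in series, implies that each cell's lag time is a sum of independent exponential waiting times (a hypoexponential distribution). The sketch below samples such lag times; the two rate constants are invented for illustration, not fitted values from the study.

```python
import random

def lag_times(k_slow=0.2, k_fast=2.0, n_fast=7, n_cells=20000, seed=3):
    """Sample per-cell lag times to the first functional pore.

    Each lag is the sum of one slow exponential step (rate k_slow) and
    n_fast fast exponential steps (rate k_fast), in series.
    """
    rng = random.Random(seed)
    lags = []
    for _ in range(n_cells):
        t = rng.expovariate(k_slow)  # slow step, e.g. monomer activation
        t += sum(rng.expovariate(k_fast) for _ in range(n_fast))
        lags.append(t)
    return lags

lags = lag_times()
mean_lag = sum(lags) / len(lags)
expected = 1 / 0.2 + 7 / 2.0  # theoretical mean: sum of 1/rate over steps
```

Fitting such a model to measured single-cell lag-time distributions, and asking how many steps of which rates best explain their shape, is the kind of probabilistic modeling the abstract describes; the dominant 1/k_slow term is why a single slow step (monomer activation) controls the overall kinetics.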
Abstract:
We present a framework for fitting multiple random walks to animal movement paths consisting of ordered sets of step lengths and turning angles. Each step and turn is assigned to one of a number of random walks, each characteristic of a different behavioral state. Behavioral state assignments may be inferred purely from movement data or may include the habitat type in which the animals are located. Switching between different behavioral states may be modeled explicitly using a state transition matrix estimated directly from data, or switching probabilities may take into account the proximity of animals to landscape features. Model fitting is undertaken within a Bayesian framework using the WinBUGS software. These methods allow for identification of different movement states using several properties of observed paths and lead naturally to the formulation of movement models. Analysis of relocation data from elk released in east-central Ontario, Canada, suggests a biphasic movement behavior: elk are either in an "encamped" state, in which step lengths are small and turning angles are high, or in an "exploratory" state, in which daily step lengths are several kilometers and turning angles are small. Animals encamp in open habitat (agricultural fields and open forest), but the exploratory state is not associated with any particular habitat type.
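The two-state switching structure described above can be simulated in a few lines. This is an illustrative generative sketch, not the paper's Bayesian WinBUGS fit: the transition probabilities, mean step lengths, and exponential step-length distributions are all invented.

```python
import random

def simulate_path(n_steps=5000, seed=4):
    """Two-state movement path: state 0 'encamped' (short steps) and
    state 1 'exploratory' (long steps), switching via a transition matrix.
    Step lengths (km) are exponential with a state-specific mean.
    """
    p_stay = {0: 0.9, 1: 0.8}     # P(remain in current state)
    step_mean = {0: 0.1, 1: 3.0}  # km: encamped vs exploratory
    rng = random.Random(seed)
    state, steps, states = 0, [], []
    for _ in range(n_steps):
        if rng.random() > p_stay[state]:
            state = 1 - state
        steps.append(rng.expovariate(1 / step_mean[state]))
        states.append(state)
    return steps, states

steps, states = simulate_path()
# Classify each step with a simple length threshold and compare to truth.
guess = [1 if s > 1.0 else 0 for s in steps]
accuracy = sum(g == t for g, t in zip(guess, states)) / len(states)
```

Even this crude threshold recovers most state assignments because the two step-length regimes barely overlap; the full model additionally uses turning angles, habitat covariates, and the transition matrix itself to resolve the ambiguous steps.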
Abstract:
This study investigates a theoretical model in which a longitudinal process (a stationary Markov chain) and a Weibull survival process share a bivariate random effect. A quality-of-life-adjusted survival is calculated as the weighted sum of survival time. Theoretical values of the population mean adjusted survival of the described model are computed numerically. The parameters of the bivariate random effect significantly affect these theoretical values. Maximum likelihood and Bayesian methods are applied to simulated data to estimate the model parameters. Based on the parameter estimates, the predicted population mean adjusted survival can then be calculated numerically and compared with the theoretical values. The Bayesian and maximum likelihood methods provide parameter estimates and population mean predictions of comparable accuracy; however, the Bayesian method suffers from poor convergence due to autocorrelation and inter-variable correlation.
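The joint structure described above can be sketched as a simulation: correlated random effects link a Markov quality-of-life process to a Weibull survival time, and the adjusted survival is the utility-weighted time lived. Every parameter here (effect correlation, utilities, Weibull shape and scale, transition probabilities) is invented for illustration.

```python
import math
import random

def mean_qaly(rho=0.5, n=5000, seed=6):
    """Mean quality-adjusted survival under a shared bivariate random effect.

    Per subject: correlated normal effects (b1, b2); b2 scales the Weibull
    survival time; b1 shifts the chance of occupying the 'good' state
    (utility 1.0) versus the 'poor' state (utility 0.5) of a two-state
    Markov chain evaluated in yearly cycles.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        b1 = rng.gauss(0.0, 1.0)
        b2 = rho * b1 + (1 - rho ** 2) ** 0.5 * rng.gauss(0.0, 1.0)
        surv = rng.weibullvariate(5.0 * math.exp(0.3 * b2), 1.5)  # years
        p_good = 1 / (1 + math.exp(-(0.5 + b1)))  # good-state probability
        state, qaly = 1, 0.0
        for _ in range(int(surv)):  # yearly cycles until death
            p = p_good if state == 1 else p_good * 0.8  # sticky poor state
            state = 1 if rng.random() < p else 0
            qaly += 1.0 if state == 1 else 0.5
        total += qaly
    return total / n

qaly = mean_qaly()
```

Comparing such a simulated population mean against the model's numerically computed theoretical value, across values of the random-effect parameters, is the kind of check the abstract describes.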
Abstract:
Objective. To measure the demand for primary care and its associated factors by building and estimating a demand model of primary care in urban settings. Data source. Secondary data from the 2005 California Health Interview Survey (CHIS 2005), a population-based random-digit-dial telephone survey conducted by the UCLA Center for Health Policy Research in collaboration with the California Department of Health Services and the Public Health Institute between July 2005 and April 2006. Study design. A literature review was done to specify the demand model by identifying relevant predictors and indicators. CHIS 2005 data were utilized for demand estimation. Analytical methods. Probit regression was used to estimate the use/non-use equation, and negative binomial regression was applied to the utilization equation with its non-negative integer dependent variable. Results. The model included two equations: the use/non-use equation explained the probability of making a doctor visit in the past twelve months, and the utilization equation estimated the demand for primary care conditional on at least one visit. Among the independent variables, wage rate and income did not affect primary care demand, whereas age had a negative effect on demand. People with college and graduate educational levels were associated with 1.03 (p < 0.05) and 1.58 (p < 0.01) more visits, respectively, compared to those with no formal education. Insurance was significantly and positively related to the demand for primary care (p < 0.01). Need-for-care variables exhibited positive effects on demand (p < 0.01): existence of chronic disease was associated with 0.63 more visits, disability status was associated with 1.05 more visits, and people with poor health status had 4.24 more visits than those with excellent health status. Conclusions. The average probability of visiting doctors in the past twelve months was 85% and the average number of visits was 3.45.
The study emphasized the importance of need variables in explaining healthcare utilization, as well as the impact of insurance, employment, and education on demand. The two-equation model of decision-making, estimated with probit and negative binomial regression, proved a useful approach to demand estimation for primary care in urban settings.
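The two-equation structure can be illustrated as a generative model: a probit-style use/non-use decision, then a negative-binomial visit count (Poisson-gamma mixture) for users. The 85% use probability and mean of about 3.45 visits among users are taken from the abstract; the probit cutoff and gamma parameters are invented to reproduce them, and no covariates are modeled.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for the small rates used here)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def two_part_sample(n=20000, seed=5):
    """Simulate visit counts from a probit use decision followed by a
    shifted negative-binomial count for users (so users have >= 1 visit).
    """
    rng = random.Random(seed)
    visits = []
    for _ in range(n):
        if rng.gauss(0.0, 1.0) < 1.036:        # P(use) = Phi(1.036) ~ 0.85
            lam = rng.gammavariate(2.0, 1.225)  # gamma-mixed Poisson rate
            visits.append(1 + poisson(lam, rng))
        else:
            visits.append(0)
    return visits

visits = two_part_sample()
p_use = sum(v > 0 for v in visits) / len(visits)
users = [v for v in visits if v > 0]
mean_user_visits = sum(users) / len(users)
```

Estimation runs this logic in reverse: a probit likelihood for the zero/non-zero outcome and a (truncated) negative binomial likelihood for the positive counts, each with its own covariates.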
Abstract:
A Bayesian approach to estimation of the regression coefficients of a multinomial logit model with ordinal-scale response categories is presented. A Monte Carlo method is used to construct the posterior distribution of the link function, which is treated as an arbitrary scalar function. The Gauss-Markov theorem is then used to determine a function of the link that produces a random vector of coefficients, and the posterior distribution of this random vector is used to estimate the regression coefficients. The method described is referred to as Bayesian generalized least squares (BGLS) analysis. Two cases involving multinomial logit models are described: Case I involves a cumulative logit model and Case II involves a proportional-odds model. All inferences about the coefficients for both cases are described in terms of the posterior distribution of the regression coefficients. The results from the BGLS method are compared to maximum likelihood estimates of the regression coefficients. The BGLS method avoids the nonlinear problems encountered when estimating the regression coefficients of a generalized linear model, is neither complex nor computationally intensive, and offers several advantages over other Bayesian approaches.
Abstract:
Early diagenetic dolomite beds were sampled during the Ocean Drilling Programme (ODP) Leg 201 at four reoccupied ODP Leg 112 sites on the Peru continental margin (Sites 1227/684, 1228/680, 1229/681 and 1230/685) and analysed for petrography, mineralogy, δ13C, δ18O and 87Sr/86Sr values. The results are compared with the chemistry, and δ13C and 87Sr/86Sr values of the associated porewater. Petrographic relationships indicate that dolomite forms as a primary precipitate in porous diatom ooze and siliciclastic sediment and is not replacing the small amounts of precursor carbonate. Dolomite precipitation often pre-dates the formation of framboidal pyrite. Most dolomite layers show 87Sr/86Sr ratios similar to the composition of Quaternary seawater and do not indicate a contribution from the hypersaline brine, which is present at a greater burial depth. Also, the δ13C values of the dolomite are not in equilibrium with the δ13C values of the dissolved inorganic carbon in the associated modern porewater. Both petrography and 87Sr/86Sr ratios suggest a shallow depth of dolomite formation in the uppermost sediment (<30 m below the seafloor). A significant depletion in the dissolved Mg and Ca in the porewater constrains the present site of dolomite precipitation, which co-occurs with a sharp increase in alkalinity and microbial cell concentration at the sulphate-methane interface. It has been hypothesized that microbial 'hot-spots', such as the sulphate-methane interface, may act as focused sites of dolomite precipitation. Varying δ13C values from −15‰ to +15‰ for the dolomite are consistent with precipitation at a dynamic sulphate-methane interface, where the δ13C of the dissolved inorganic carbon would likewise be variable. A dynamic deep biosphere with upward and downward migration of the sulphate-methane interface can be simulated using a simple numerical diffusion model for sulphate concentration in a sedimentary sequence with variable input of organic matter.
Thus, the study of dolomite layers in ancient organic carbon-rich sedimentary sequences can provide a useful window into the palaeo-dynamics of the deep biosphere.
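A simple diffusion-reaction model of the kind mentioned above can be sketched with an explicit finite-difference scheme: sulphate diffuses down from seawater and is consumed at a first-order rate standing in for organic-matter input. All values (diffusivity, rates, grid, the 1 mM cutoff locating the sulphate-methane interface) are generic illustrations, not calibrated to the Peru margin sites.

```python
def smi_depth(k, n=60, dz=0.5, D=0.01, c_top=28.0, dt=5.0, n_steps=10000):
    """Depth (m) at which sulphate falls below 1 mM, for a first-order
    consumption rate k (1/yr), diffusivity D (m^2/yr), seawater boundary
    c_top (mM). Explicit diffusion-reaction scheme run to near steady state.
    """
    c = [c_top] * n
    for _ in range(n_steps):
        new = c[:]
        for i in range(1, n - 1):
            diff = D * (c[i - 1] - 2 * c[i] + c[i + 1]) / dz ** 2
            new[i] = c[i] + dt * (diff - k * c[i])
        new[0] = c_top      # fixed seawater boundary at the seafloor
        new[-1] = new[-2]   # no-flux bottom boundary
        c = new
    for i, v in enumerate(c):
        if v < 1.0:
            return i * dz
    return n * dz

shallow = smi_depth(k=0.05)   # high organic input: interface migrates up
deep = smi_depth(k=0.005)     # low organic input: interface migrates down
```

Alternating between high- and low-consumption intervals in a sedimentary sequence moves the modeled sulphate-methane interface up and down through the column, which is the mechanism invoked for the variable dolomite δ13C values.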
Abstract:
We analyzed Nd and Sr isotopic compositions of Neogene fossil fish teeth from two sites in the Pacific in order to determine the effect of cleaning protocols and burial diagenesis on the preservation of seawater isotopic values. Sr is incorporated into the teeth at the time of growth; thus Sr isotopes are potentially valuable for chemostratigraphy. Nd isotopes are potential conservative tracers of paleocirculation; however, Nd is incorporated post-mortem, and may record diagenetic pore waters rather than seawater. We evaluated samples from two sites (Site 807A, Ontong Java Plateau and Site 786A, Izu-Bonin Arc) that were exposed to similar bottom waters, but have distinct lithologies and pore water chemistries. The Sr isotopic values of the fish teeth appear to accurately reflect contemporaneous seawater at both sites. The excellent correlation between the Nd isotopic values of teeth from the two sites suggests that the Nd is incorporated while the teeth are in chemical equilibrium with seawater, and that the signal is preserved over geologic timescales and subsequent burial. These data also corroborate paleoseawater Nd isotopic compositions derived from Pacific ferromanganese crusts that were recovered from similar water depths (Ling et al., 1997; doi:10.1016/S0012-821X(96)00224-5). This corroboration strongly suggests that both materials preserve seawater Nd isotope values. Variations in Pacific deepwater εNd values are consistent with predictions for the shoaling of the Isthmus of Panama and the subsequent initiation of nonradiogenic North Atlantic Deep Water that entered the Pacific via the Antarctic Circumpolar Current.
Abstract:
Anaerobic methane oxidation (AMO) was characterized in sediment cores from the Blake Ridge collected during Ocean Drilling Program (ODP) Leg 164. Three independent lines of evidence support the occurrence and scale of AMO at Sites 994 and 995. First, concentration depth profiles of methane from Hole 995B exhibit a region of upward concavity suggestive of methane consumption. Diagenetic modeling of the concentration profile indicates a 1.85-m-thick zone of AMO centered at 21.22 mbsf, with a peak rate of 12.4 nM/d. Second, subsurface maxima in tracer-based sulfate reduction rates from Holes 994B and 995B were observed at depths that coincide with the model-predicted AMO zone. The subsurface zone of sulfate reduction was 2 m thick and had a depth-integrated rate that compared favorably to that of AMO (1.3 vs. 1.1 nmol/cm²/d, respectively). These features suggest close coupling of AMO and sulfate reduction in the Blake Ridge sediments. Third, measured δ13C-CH4 values are lightest at the point of peak model-predicted methane oxidation and become increasingly 13C-enriched with decreasing sediment depth, consistent with kinetic isotope fractionation during bacterially mediated methane oxidation. The isotopic data predict a somewhat (60 cm) shallower maximum depth of methane oxidation than do the model and sulfate reduction data.