47 results for Multi-Point Method


Relevance:

30.00%

Publisher:

Abstract:

Clays and claystones are used as backfill and barrier materials in the design of waste repositories because they act as hydraulic barriers and retain contaminants. Transport through such barriers occurs mainly by molecular diffusion, so there is an interest in relating the diffusion properties of clays to their structural properties. In previous work, we developed a concept for up-scaling pore-scale molecular diffusion coefficients using a grid-based model of the sample pore structure. Here we present an operational algorithm that can generate such model pore structures for polymineral materials. The obtained pore maps match the rock’s mineralogical components and its macroscopic properties such as porosity, grain size and pore size distributions. Representative ensembles of grains in 2D or 3D are created by a lattice Monte Carlo (MC) method, which minimizes the interfacial energy of grains starting from an initial grain distribution; pores are generated at grain boundaries and/or within grains. The method is general and can generate anisotropic structures with grains of approximately predetermined shapes, or with mixtures of different grain types. A specific focus of this study was the simulation of clay-like materials. The generated clay pore maps were then used to derive upscaled effective diffusion coefficients for non-sorbing tracers using a homogenization technique. The large number of generated maps allowed us to check the relations between micro-structural features of clays and their effective transport parameters, as is required to explain and extrapolate experimental diffusion results. As examples, we present a set of 2D and 3D simulations and investigate the effects of nanopores within particles (interlayer pores) and micropores between particles. Archie’s simple power law is followed in systems with only micropores. When nanopores are present, additional parameters are required; the data reveal that effective diffusion coefficients can be described by a sum of two power functions related to the micro- and nanoporosity. We further used the model to investigate the relationships between particle orientation and the effective transport properties of the sample.
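
For illustration, a minimal sketch (not the authors' code; the lattice size, number of grain labels, temperature and 4-neighbour stencil are our assumptions) of a Potts-type lattice Monte Carlo sweep that lowers grain interfacial energy, in the spirit of the grain-map generation described above:

```python
# Potts-model-style grain coarsening: each site carries a grain label, the
# interfacial energy is the count of unlike neighbours, and Metropolis moves
# relabel sites so that grains grow from a random initial distribution.
import numpy as np

rng = np.random.default_rng(0)
N, Q, T = 64, 8, 0.2                      # lattice size, grain labels, "temperature"
grid = rng.integers(0, Q, size=(N, N))    # random initial grain distribution

def unlike_neighbours(grid, i, j, label):
    """Interfacial energy contribution: 4-neighbours carrying a different label."""
    nbrs = [grid[(i - 1) % N, j], grid[(i + 1) % N, j],
            grid[i, (j - 1) % N], grid[i, (j + 1) % N]]
    return sum(n != label for n in nbrs)

def mc_sweep(grid):
    for _ in range(N * N):
        i, j = rng.integers(0, N, size=2)
        old = grid[i, j]
        new = rng.integers(0, Q)          # proposed new grain label
        dE = unlike_neighbours(grid, i, j, new) - unlike_neighbours(grid, i, j, old)
        if dE <= 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance
            grid[i, j] = new
    return grid

for sweep in range(50):                   # repeated sweeps coarsen the grains
    mc_sweep(grid)
```

Repeated sweeps coarsen the label map into compact grains; pores would then be seeded along the resulting grain boundaries or within grains. On this reading, the reported two-porosity behaviour would correspond to an effective diffusivity of the form D_eff/D_0 ≈ a·φ_micro^m + b·φ_nano^n.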

Relevance:

30.00%

Publisher:

Abstract:

An accurate and coherent chronological framework is essential for the interpretation of climatic and environmental records obtained from deep polar ice cores. Until now, one common ice core age scale had been developed based on an inverse dating method (Datice), combining glaciological modelling with absolute and stratigraphic markers between 4 ice cores covering the last 50 ka (thousands of years before present) (Lemieux-Dudon et al., 2010). In this paper, together with the companion paper of Veres et al. (2013), we present an extension of this work back to 800 ka for the NGRIP, TALDICE, EDML, Vostok and EDC ice cores using an improved version of the Datice tool. The AICC2012 (Antarctic Ice Core Chronology 2012) chronology includes numerous new gas and ice stratigraphic links as well as an improved evaluation of the background and associated variance scenarios. This paper concentrates on the long timescales between 120 and 800 ka. In this framework, new measurements of δ18Oatm over Marine Isotope Stage (MIS) 11–12 on EDC and a complete δ18Oatm record for the TALDICE ice core permit us to derive additional orbital gas age constraints. The coherence of the different orbitally deduced ages (from δ18Oatm, δO2/N2 and air content) was verified before implementation in AICC2012. The new chronology is now independent of other archives and shows only small differences, most of the time within the original uncertainty range calculated by Datice, when compared with the previous ice core reference age scale EDC3, the Dome F chronology, or a speleothem–methane comparison. For instance, the largest deviation between AICC2012 and EDC3 (5.4 ka) is obtained around MIS 12. Despite significant modifications of the chronological constraints around MIS 5, which are now independent of speleothem records in AICC2012, the date of Termination II is very close to the EDC3 one.
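
Datice-type tools are variational inverse methods; as a sketch (our notation, assuming Gaussian errors, not the exact formulation of the paper), they minimize a cost function of the form

```latex
J(\mathbf{z}) \;=\; (\mathbf{z}-\mathbf{z}_b)^{\mathsf T}\,\mathbf{B}^{-1}\,(\mathbf{z}-\mathbf{z}_b)
\;+\; \sum_i \frac{\bigl(H_i(\mathbf{z})-y_i\bigr)^2}{\sigma_i^2},
```

where z collects corrections to the glaciological background scenario z_b (accumulation rate, thinning function, lock-in depth) with covariance B, and each y_i is an absolute or stratigraphic marker with uncertainty σ_i, mapped to the model quantities by H_i.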

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability there is a need for methods that match different PET/CT systems by eliminating this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into the image of the same object as it would have been seen by a different tomograph. The proposed method, termed Transconvolution, compensates for the differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials. METHODS To solve the problem of image normalization, the theory of Transconvolution was mathematically established together with new methods to handle the point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows a Transconvolution function to be determined that converts one image into the other. This function is calculated by convolving one point spread function with the inverse of the other; when certain boundary conditions are respected, such as the use of linear acquisition and image reconstruction methods, this is a numerically accessible operation. For reliable measurement of the point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating (68)Ge/(68)Ga-filled spheres was developed. To iteratively determine and represent these point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties onto which the real PET systems were to be matched. A Hann window served as the modulation transfer function of the virtual PET; its apodization properties suppress spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system. RESULTS The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The largest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume; Transconvolution reduced this difference to 1.6%. In addition to re-establishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs.
CONCLUSIONS By matching different tomographs to a virtual standardized imaging system, Transconvolution provides a new, comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible and defined partial volume effect.
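
As a 1-D illustration (our sketch, not the published implementation; the Gaussian PSF width, grid, cutoff frequency and regularization threshold are assumptions), the core Fourier-domain operation looks like this:

```python
# Transconvolution idea in 1-D: re-image data from a real system A as the
# virtual system V would see it, by multiplying the spectrum by OTF_V / OTF_A.
import numpy as np

n, dx = 256, 1.0                       # samples, mm per sample
f = np.fft.fftfreq(n, d=dx)            # spatial frequencies

# Real system A: Gaussian PSF -> Gaussian optical transfer function
sigma_a = 2.5                          # mm, assumed resolution of system A
otf_a = np.exp(-2 * (np.pi * f * sigma_a) ** 2)

# Virtual system V: Hann-window MTF, zero above a critical frequency fc
fc = 0.15                              # cycles/mm, assumed cutoff
otf_v = np.where(np.abs(f) < fc, 0.5 * (1 + np.cos(np.pi * f / fc)), 0.0)

def transconvolve(img_a, otf_a, otf_v, eps=1e-3):
    """Map an image from system A onto virtual system V in the Fourier domain."""
    spec = np.fft.fft(img_a)
    ratio = np.where(np.abs(otf_a) > eps, otf_v / otf_a, 0.0)  # guard the deconvolution
    return np.real(np.fft.ifft(spec * ratio))

# Example: a 10 mm "sphere" profile imaged by A, then matched onto V
truth = (np.abs(np.arange(n) * dx - n * dx / 2) < 5.0).astype(float)
img_a = np.real(np.fft.ifft(np.fft.fft(truth) * otf_a))
img_v = transconvolve(img_a, otf_a, otf_v)
```

Because the Hann window vanishes above fc, the division by the (everywhere smaller) real OTF stays well conditioned inside the pass band, which is the role of the boundary conditions mentioned above.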

Relevance:

30.00%

Publisher:

Abstract:

Methane is a strong greenhouse gas, and large uncertainties exist concerning the future evolution of its atmospheric abundance. Analyzing methane atmospheric mixing ratios and stable isotope ratios in air trapped in polar ice sheets helps in reconstructing the evolution of its sources and sinks in the past. This is important for improving predictions of atmospheric CH4 mixing ratios under the influence of a changing climate. The aim of this study is to assess whether past atmospheric δ13C(CH4) variations can be reliably reconstructed from firn air measurements. Isotope reconstructions obtained with a state-of-the-art firn model from different individual sites show unexpectedly large discrepancies and are mutually inconsistent. We show that small changes in the diffusivity profiles at individual sites lead to strong differences in the firn fractionation, which can explain a large part of these discrepancies. Using slightly modified diffusivities for some sites, and neglecting samples for which the firn fractionation signals are strongest, a combined multi-site inversion can be performed that returns an isotope reconstruction consistent with the firn data. However, the isotope trends are lower than what has been concluded from Southern Hemisphere (SH) archived air samples and high-accumulation ice core data. We conclude that, with the current datasets and understanding of firn air transport, a high-precision reconstruction of δ13C of CH4 from firn air samples is not possible, because the reconstructed atmospheric trends over the last 50 yr of 0.3–1.5 ‰ are of the same magnitude as the inherent uncertainties in the method: the firn fractionation correction (up to ~2 ‰ at individual sites), the Kr isobaric interference (up to ~0.8 ‰, system dependent), inter-laboratory calibration offsets (~0.2 ‰) and uncertainties in past CH4 levels (~0.5 ‰).
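
For orientation, if the quoted error terms were independent and combined in quadrature at a site where each takes its maximal value (our arithmetic, not a result from the paper):

```latex
\sigma_{\mathrm{tot}} \;\approx\; \sqrt{2.0^2 + 0.8^2 + 0.2^2 + 0.5^2}\ \text{‰} \;\approx\; 2.2\ \text{‰},
```

which indeed exceeds the 0.3–1.5 ‰ atmospheric trends to be reconstructed.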

Relevance:

30.00%

Publisher:

Abstract:

We derive explicit lower and upper bounds for the probability generating functional of a stationary locally stable Gibbs point process, which can be applied to summary statistics such as the F function. For pairwise interaction processes we obtain further estimates for the G and K functions, the intensity, and higher-order correlation functions. The proof of the main result is based on Stein's method for Poisson point process approximation.
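
For context, the standard definitions involved (textbook facts, not the paper's bounds): for a point process Ξ with probability generating functional G and empty space function F,

```latex
G[v] \;=\; \mathbb{E}\Bigl[\,\prod_{x\in\Xi} v(x)\Bigr],
\qquad
F(r) \;=\; 1-\mathbb{P}\bigl(\Xi\cap B(o,r)=\emptyset\bigr)
       \;=\; 1-G\bigl[\,1-\mathbf{1}_{B(o,r)}\,\bigr],
```

so two-sided bounds on the probability generating functional translate directly into bounds on F.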

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose novel methodologies for the automatic segmentation and recognition of multi-food images. The proposed methods implement the first modules of a carbohydrate counting and insulin advisory system for type 1 diabetic patients. Initially, the plate is segmented using pyramidal mean-shift filtering and a region growing algorithm. Each of the resulting segments is then described by both color and texture features and classified by a support vector machine into one of six major food classes. Finally, we propose a modified version of the Huang and Dom evaluation index that addresses the particular needs of the food segmentation problem. The experimental results demonstrate the effectiveness of the proposed method, which achieves a segmentation accuracy of 88.5% and a recognition rate of 87%.
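
A minimal sketch (our illustration; the file name, the mean-shift radii sp/sr, the seed grid and the flood-fill tolerances are assumptions, not the paper's settings) of the first two stages described above, pyramidal mean-shift filtering followed by a simple flood-fill style region growing:

```python
import cv2
import numpy as np

img = cv2.imread("plate.jpg")                            # hypothetical input image
smoothed = cv2.pyrMeanShiftFiltering(img, sp=21, sr=30)  # spatial / colour radii

# Region growing via flood fill: grow a region around each unvisited seed whose
# colour stays within loDiff/upDiff of the seed pixel.
h, w = smoothed.shape[:2]
labels = np.zeros((h, w), np.int32)
mask = np.zeros((h + 2, w + 2), np.uint8)   # floodFill requires a padded mask
next_label = 0
for y in range(0, h, 4):                    # coarse seed grid for speed
    for x in range(0, w, 4):
        if labels[y, x] == 0:
            next_label += 1
            mask[:] = 0
            cv2.floodFill(smoothed, mask, (x, y), 0,
                          loDiff=(8, 8, 8), upDiff=(8, 8, 8),
                          flags=4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8))
            labels[mask[1:-1, 1:-1] > 0] = next_label
```

Each resulting labelled segment would then be summarized by colour and texture features and passed to the SVM classifier described above.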

Relevance:

30.00%

Publisher:

Abstract:

In order to study further the long-range correlations ("ridge") observed recently in p+Pb collisions at sqrt(s_NN) = 5.02 TeV, the second-order azimuthal anisotropy parameter of charged particles, v_2, has been measured with the cumulant method using the ATLAS detector at the LHC. In a data sample corresponding to an integrated luminosity of approximately 1 microb^(-1), the parameter v_2 has been obtained using two- and four-particle cumulants over the pseudorapidity range |eta| < 2.5. The results are presented as a function of transverse momentum and of the event activity, defined in terms of the transverse energy summed over 3.1 < eta < 4.9 in the direction of the Pb beam. They are compared to the results obtained with two-particle correlation methods, and to predictions from hydrodynamic models of p+Pb collisions. Despite the small transverse spatial extent of the p+Pb collision system, the large magnitude of v_2 and its similarity to hydrodynamic predictions provide additional evidence for the importance of final-state effects in p+Pb reactions.
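
For reference, the standard two- and four-particle cumulant estimators used in such analyses (textbook definitions, not formulas quoted from the paper), with double angle brackets denoting averages over particles and then over events:

```latex
c_2\{2\} = \langle\langle e^{\,i2(\varphi_1-\varphi_2)} \rangle\rangle,
\qquad v_2\{2\} = \sqrt{c_2\{2\}}, \\
c_2\{4\} = \langle\langle e^{\,i2(\varphi_1+\varphi_2-\varphi_3-\varphi_4)} \rangle\rangle
         - 2\,\langle\langle e^{\,i2(\varphi_1-\varphi_2)} \rangle\rangle^{2},
\qquad v_2\{4\} = \sqrt[4]{-\,c_2\{4\}}.
```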

Relevance:

30.00%

Publisher:

Abstract:

Many extensions of the Standard Model posit the existence of heavy particles with long lifetimes. In this Letter, results are presented of a search for events containing one or more such particles, which decay at a significant distance from their production point, using a final state containing charged hadrons and an associated muon. This analysis uses a data sample of proton-proton collisions at sqrt(s) = 7 TeV corresponding to an integrated luminosity of 4.4 fb^(-1) collected in 2011 by the ATLAS detector operating at the Large Hadron Collider. Results are interpreted in the context of R-parity-violating supersymmetric scenarios. No events in the signal region are observed, and limits are set on the production cross section for pair production of supersymmetric particles, multiplied by the square of the branching fraction for a neutralino to decay to charged hadrons and a muon, based on the scenario where both of the produced supersymmetric particles give rise to neutralinos that decay in this way. However, since the search strategy is based on triggering on and reconstructing the decay products of individual long-lived particles, irrespective of the rest of the event, these limits can easily be reinterpreted in scenarios with different numbers of long-lived particles per event. The limits are presented as a function of neutralino lifetime, and for a range of squark and neutralino masses.

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE Currently, the diagnosis of pedicle screw (PS) loosening is based on a subjectively assessed halo sign, that is, a radiolucent line wider than 1 mm around the implant in plain radiographs. We aimed to develop and validate a quantitative method for diagnosing PS loosening on radiographs. METHODS Between 11/2004 and 1/2010, 36 consecutive patients treated with thoraco-lumbar spine fusion with PS instrumentation without PS loosening were compared with 37 other patients who developed clinically manifest PS loosening. Three different angles were measured and compared regarding their capability to discriminate the loosened PS over the postoperative course. Inter-observer invariance was tested and a receiver operating characteristic (ROC) curve analysis was performed. RESULTS The angle measured between the PS axis and the cranial endplate was significantly different between the early and all later postoperative images. The Spearman correlation coefficient for the measurements of two observers at each postoperative time point ranged from 0.89 at 2 weeks to 0.94 at 2 months and 1 year postoperatively. An angle change of 1.9° between the immediate and the 6-month postoperative images was 75% sensitive and 89% specific for the identification of loosened screws (AUC = 0.82). DISCUSSION The angle between the PS axis and the cranial endplate changed reliably in the presence of PS loosening. A change of this angle of at least 2° had a relatively high sensitivity and specificity for diagnosing screw loosening.
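
A minimal sketch (our illustration with invented angle changes, not patient data) of the ROC analysis described above, using the angle change between early and 6-month radiographs as a classifier for screw loosening:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical angle changes in degrees (1 = loosened screw, 0 = stable)
delta_angle = np.array([0.4, 0.8, 1.2, 2.5, 3.1, 0.6, 2.2, 1.8, 0.3, 2.9])
loosened    = np.array([0,   0,   0,   1,   1,   0,   1,   1,   0,   1  ])

auc = roc_auc_score(loosened, delta_angle)          # area under the ROC curve
fpr, tpr, thresholds = roc_curve(loosened, delta_angle)

# Sensitivity/specificity at a fixed cutoff, e.g. the reported ~1.9 degrees
cutoff = 1.9
pred = delta_angle >= cutoff
sensitivity = (pred & (loosened == 1)).sum() / (loosened == 1).sum()
specificity = (~pred & (loosened == 0)).sum() / (loosened == 0).sum()
```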

Relevance:

30.00%

Publisher:

Abstract:

ATLAS measurements of the azimuthal anisotropy in lead–lead collisions at √sNN = 2.76 TeV are shown using a dataset of approximately 7 μb−1 collected at the LHC in 2010. The measurements are performed for charged particles with transverse momenta 0.5 < pT < 20 GeV and in the pseudorapidity range |η| < 2.5. The anisotropy is characterized by the Fourier coefficients, vn, of the charged-particle azimuthal angle distribution for n = 2–4. The Fourier coefficients are evaluated using multi-particle cumulants calculated with the generating function method. Results on the transverse momentum, pseudorapidity and centrality dependence of the vn coefficients are presented. The elliptic flow, v2, is obtained from the two-, four-, six- and eight-particle cumulants, while the higher-order coefficients, v3 and v4, are determined with two- and four-particle cumulants. Flow harmonics vn measured with four-particle cumulants are significantly reduced compared to the measurement involving two-particle cumulants. A comparison to vn measurements obtained using different analysis methods and previously reported by the LHC experiments is also shown. Results of measurements of flow fluctuations evaluated with multi-particle cumulants are shown as a function of transverse momentum and collision centrality. Models of the initial spatial geometry and its fluctuations fail to describe the flow fluctuation measurements.
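
A standard piece of cumulant algebra behind such fluctuation measurements (valid when the fluctuations are small compared with the mean; a general textbook relation, not a formula quoted from the paper):

```latex
v_2\{2\}^2 \simeq \langle v_2\rangle^2 + \sigma_{v_2}^2,
\qquad
v_2\{4\}^2 \simeq \langle v_2\rangle^2 - \sigma_{v_2}^2
\quad\Longrightarrow\quad
\frac{\sigma_{v_2}}{\langle v_2\rangle} \simeq
\sqrt{\frac{v_2\{2\}^2 - v_2\{4\}^2}{\,v_2\{2\}^2 + v_2\{4\}^2\,}},
```

which is also why the four-particle cumulant values lie below the two-particle ones.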

Relevance:

30.00%

Publisher:

Abstract:

The nematode Caenorhabditis elegans is a well-known model organism used to investigate fundamental questions in biology. Motility assays of this small roundworm are designed to study the relationships between genes and behavior. Commonly, motility analysis is used to classify nematode movements and characterize them quantitatively. Over the past years, C. elegans’ motility has been studied across a wide range of environments, including crawling on substrates, swimming in fluids, and locomoting through microfluidic substrates. However, each environment often requires customized image processing tools relying on heuristic parameter tuning. In the present study, we propose a novel Multi-Environment Model Estimation (MEME) framework for automated image segmentation that is versatile across various environments. The MEME platform is built around Mixture of Gaussians (MOG) models, in which statistical models for both the background environment and the nematode appearance are explicitly learned and used to accurately segment a target nematode. Our method is designed to reduce the burden imposed on users; only a single image of a nematode in its environment must be provided for model learning. In addition, our platform enables the extraction of nematode ‘skeletons’ for straightforward motility quantification. We test our algorithm in various locomotive environments and compare its performance with an intensity-based thresholding method. Overall, MEME outperforms the threshold-based approach in the overwhelming majority of cases examined. Ultimately, MEME provides researchers with an attractive platform for C. elegans segmentation and ‘skeletonizing’ across a wide range of motility assays.
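
A minimal sketch (our illustration, not the published MEME code) of the core idea: learn one Gaussian-mixture appearance model for the background and one for the worm from a single annotated image, then label each pixel by comparing likelihoods. The feature choice (intensity plus local standard deviation) and the component count are assumptions:

```python
import numpy as np
from scipy.ndimage import generic_filter
from sklearn.mixture import GaussianMixture

def pixel_features(img):
    """Per-pixel features: grey value and local standard deviation (3x3)."""
    local_std = generic_filter(img.astype(float), np.std, size=3)
    return np.stack([img.ravel(), local_std.ravel()], axis=1)

def fit_models(img, worm_mask, n_components=3):
    """Fit separate mixture models to worm and background pixels of one image."""
    feats = pixel_features(img)
    gmm_worm = GaussianMixture(n_components).fit(feats[worm_mask.ravel()])
    gmm_bg = GaussianMixture(n_components).fit(feats[~worm_mask.ravel()])
    return gmm_worm, gmm_bg

def segment(img, gmm_worm, gmm_bg):
    """Label a pixel as worm where its log-likelihood under the worm model wins."""
    feats = pixel_features(img)
    return (gmm_worm.score_samples(feats) >
            gmm_bg.score_samples(feats)).reshape(img.shape)
```

The resulting binary mask could then be thinned into a skeleton for the motility quantification mentioned above.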

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses two major topics concerning the role of expectations in the formation of reference points. First, we show that when expectations are present, they have a significant impact on reference point formation. Second, we find that decision-makers employ expected values when forming reference points (an integrated mechanism) rather than single possible outcomes (a segregated mechanism). Despite the importance of reference points in prospect theory, to date there is no standard method for examining them. We develop a new experimental design that employs an indirect approach and extends an existing direct approach. Our findings are consistent across the two approaches.
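
To make the two mechanisms concrete (our toy example, not the paper's experimental design): for a lottery paying x_1 with probability p and x_2 otherwise,

```latex
r_{\text{integrated}} = p\,x_1 + (1-p)\,x_2,
\qquad
r_{\text{segregated}} \in \{x_1,\; x_2\},
```

so a 50–50 lottery over 100 and 0 yields an integrated reference point of 50, whereas a segregated mechanism would anchor on 100 or on 0.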

Relevance:

30.00%

Publisher:

Abstract:

We explore a method developed in statistical physics, which has been argued to have exponentially small finite-volume effects, in order to determine the critical temperature Tc of pure SU(3) gauge theory close to the continuum limit. The method allows us to estimate the critical coupling βc of the Wilson action for temporal extents up to Nτ ∼ 20 with ≲ 0.1% uncertainties. Making use of the scale-setting parameters r0 and √t0 in the same range of β values, these results lead to the independent continuum extrapolations Tc r0 = 0.7457(45) and Tc √t0 = 0.2489(14), with the latter originating from a more convincing fit. Inserting a conversion of r0 from the literature (unfortunately with much larger errors) yields Tc/Λ_MS̄ = 1.24(10).
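
A minimal sketch of such a continuum extrapolation (invented illustrative data points; the assumption that the leading lattice artefacts scale as a² ∼ 1/Nτ² is standard for this setup but ours):

```python
# Fit Tc*r0 measured at several temporal extents N_tau linearly in
# a^2 ~ 1/N_tau^2 and read off the continuum value at the intercept.
import numpy as np

n_tau = np.array([8, 10, 12, 16, 20])
tc_r0 = np.array([0.7530, 0.7504, 0.7489, 0.7472, 0.7464])   # hypothetical
x = 1.0 / n_tau**2                                           # ~ (a * Tc)^2

slope, intercept = np.polyfit(x, tc_r0, 1)   # leading O(a^2) fit
print(f"continuum limit Tc*r0 ≈ {intercept:.4f}")
```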

Relevance:

30.00%

Publisher:

Abstract:

We used multiple sets of simulations, at both the atomistic and the coarse-grained level of resolution, to investigate the interaction and binding of the α-tocopherol transfer protein (α-TTP) with phosphatidylinositol phosphate lipids (PIPs). Our calculations indicate that enrichment of membranes with such lipids facilitates membrane anchoring. Atomistic models suggest that PIP can be incorporated into the binding cavity of α-TTP, confirming that the protein can work as a lipid exchanger between the endosome and the plasma membrane. Comparison of the atomistic models of the α-TTP–PIPs complex with membrane-bound α-TTP revealed different roles for the various basic residues composing the basic patch that is key for the protein/ligand interaction. These residues are of critical importance, as several point mutations at their positions lead to severe forms of ataxia with vitamin E deficiency (AVED) phenotypes. Specifically, R221 is the main residue responsible for the stabilization of the complex. R68 and R192 form strong interactions only in the protein complex or only in the membrane complex, respectively, suggesting that the two residues alternate in contact formation, thus facilitating lipid flipping from the membrane into the protein cavity during the lipid exchange process. Finally, R59 shows weaker interactions with PIPs, albeit with a clear preference for specific phosphorylation positions, hinting at a role in early membrane selectivity for the protein. Altogether, our simulations reveal significant atomistic-scale aspects of the interactions of α-TTP with the plasma membrane and with PIP, clarifying the mechanism of intracellular vitamin E trafficking and helping to establish the role of key residues for the functionality of α-TTP.

Relevance:

30.00%

Publisher:

Abstract:

Taking Carnap’s classic exposition as a starting point, this paper develops a pragmatic account of the method of explication, defends it against a range of challenges and proposes a detailed recipe for the practice of explicating. It is then argued that confusions are involved in characterizing explications as definitions, and in advocating precising definitions as an alternative to explications. Explication is better characterized as conceptual re-engineering for theoretical purposes, in contrast to conceptual re-engineering for other purposes and improving exactness for purely practical reasons. Finally, three limitations which call for further development of the method of explication are discussed.