886 results for Gaussian complexities


Relevance: 10.00%

Abstract:

The low-temperature isothermal magnetization curves, M(H), of SmCo4 and Fe3Tb thin films are studied within the two-dimensional correlated spin-glass model of Chudnovsky. We have calculated the magnetization law in the approach to saturation and shown that the M(H) data fit the theory well at both high and low fields. In the fit procedure we used three different correlation functions; the Gaussian decay correlation function fits the experimental data well for both samples.
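As a rough illustration of the kind of correlation functions compared in such fits (a sketch only, with a hypothetical correlation length rf; this is not the authors' fitting code):

```python
import numpy as np

def gaussian_corr(r, rf):
    """Gaussian decay correlation function C(r) = exp(-(r/rf)**2)."""
    return np.exp(-(r / rf) ** 2)

def exponential_corr(r, rf):
    """Simple exponential decay C(r) = exp(-r/rf), for comparison."""
    return np.exp(-r / rf)

rf = 1.0                              # hypothetical correlation length
r = np.linspace(0.0, 5.0, 51)         # distances in units of rf
cg = gaussian_corr(r, rf)
ce = exponential_corr(r, rf)
# Beyond r = rf the Gaussian form decays faster than the exponential one,
# which changes the shape of the predicted approach to saturation.
```

The choice of correlation function only alters the tail behavior; both forms equal 1 at r = 0 by construction.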

Relevance: 10.00%

Abstract:

We present an analytical scheme, easily implemented numerically, for generating synthetic Gaussian turbulent flows using a linear Langevin equation in which the noise term acts as a stochastic stirring force. The characteristic parameters of the velocity field are introduced explicitly, in particular the kinematic viscosity and the energy spectrum. As an application, the diffusion of a passive scalar is studied for two different energy spectra. Numerical results compare favorably with analytical calculations.
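A one-dimensional sketch of such a linear Langevin generator, integrated with the Euler-Maruyama method (parameter values are illustrative and not taken from the paper; the full scheme works on a velocity field with a prescribed energy spectrum):

```python
import numpy as np

# Euler-Maruyama integration of the linear Langevin equation
#   du/dt = -u / tau + sqrt(2 * sigma**2 / tau) * xi(t),
# where xi is Gaussian white noise acting as a stochastic stirring force.
# The stationary variance of u is sigma**2.
rng = np.random.default_rng(0)
tau, sigma, dt, n_steps = 1.0, 1.0, 0.01, 200_000

u = np.empty(n_steps)
u[0] = 0.0
noise_amp = np.sqrt(2.0 * sigma**2 / tau * dt)
for i in range(1, n_steps):
    u[i] = u[i - 1] - (u[i - 1] / tau) * dt + noise_amp * rng.normal()

# Discard the initial transient before estimating the stationary variance.
var_est = u[20_000:].var()
```

With these settings the estimated variance should sit close to sigma**2 = 1, confirming that the noise amplitude and damping are consistently paired.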

Relevance: 10.00%

Abstract:

A nonlinear calculation of the dynamics of transient pattern formation in the Fréedericksz transition is presented. A Gaussian decoupling is used to calculate the time dependence of the structure factor. The calculation confirms the range of validity of linear calculations argued in earlier work. In addition, it describes the decay of the transient pattern.

Relevance: 10.00%

Abstract:

The ab initio periodic unrestricted Hartree-Fock method has been applied to investigate the ground-state structural, electronic, and magnetic properties of the rutile-type compounds MF2 (M = Mn, Fe, Co, and Ni). All-electron Gaussian basis sets have been used. The systems turn out to be large-band-gap antiferromagnetic insulators, and the optimized geometrical parameters are in good agreement with experiment. The most stable calculated electronic state shows antiferromagnetic order, in agreement with that derived from neutron scattering experiments. The magnetic coupling constants between nearest-neighbor magnetic ions along the [001], [111], and [100] (or [010]) directions have been calculated using several supercells. The resulting ab initio magnetic coupling constants agree reasonably well with the available experimental data. The importance of the Jahn-Teller effect in FeF2 and CoF2 is also discussed.

Relevance: 10.00%

Abstract:

Except for the first 2 years after July 29, 1968, Arenal volcano has continuously erupted compositionally monotonous, phenocryst-rich (~35%) basaltic andesites composed of plagioclase (plag), orthopyroxene (opx), clinopyroxene (cpx), spinel, and olivine. Detailed textural and compositional analyses of phenocrysts, mineral inclusions, and microlites reveal comparable complexities in any given sample and identify mineral components that require a minimum of four crystallization environments. We suggest that three distinct crystallization environments crystallized low-Mg# (<78) silicate phases from andesitic magma, but at different physical conditions, such as variable pressure of crystallization and water content. The dominant environment, i.e., the one that accounts for the majority of minerals and overprinted all other assemblages near the rims of phenocrysts, cocrystallized clinopyroxene (Mg# ~71-78), orthopyroxene (Mg# ~71-78), titanomagnetite, and plagioclase (An60 to An85). The second environment cocrystallized clinopyroxene (Mg# 71-78), olivine (<Fo78), titanomagnetite, and very high-An (~90) plagioclase, while the third cocrystallized clinopyroxene (Mg# 71-78) with high (>7) Al/Ti and high (>4 wt.%) Al2O3, titanomagnetite with considerable Al2O3 (10-18 wt.%), and possibly olivine, but appears to lack plagioclase. A fourth crystallization environment is characterized by clinopyroxene (e.g., Mg# ~78-85; Cr2O3 = 0.15-0.7 wt.%), Al-, Cr-rich spinel, and olivine (~Fo80), and in some circumstances high-An (>80) plagioclase. This assemblage seems to record mafic inputs into the Arenal system and crystallization at high to low pressures. Single crystals cannot be classified wholesale as xenocrysts, antecrysts (cognate crystals), or phenocrysts, because they often contain different parts, each representing a different crystallization environment, and thus belong to different categories.
Bulk compositions are mostly too mafic to have crystallized the bulk of the ferromagnesian minerals and thus likely do not represent liquid compositions. Instead, they are the cumulative products of multiple mixing events assembling melts and minerals from a variety of sources. The driving force for this multistage mixing evolution that generates the erupting basaltic andesites is thought to be the ascent of mafic magma from lower crustal levels to subvolcanic depths, during which the magma may also undergo compositional modification by fractionation and assimilation of country rocks. Thus, mafic magmas become basaltic andesite through mixing, fractionation, and assimilation by the time they arrive at subvolcanic depths. We infer that new increments of basaltic andesite are supplied nearly continuously to the subvolcanic reservoir, concurrently with the current eruption, and that these new increments are blended into the resident subvolcanic magma. Thus, the compositional monotony is mostly the product of the repeated production of very similar basaltic andesite. Furthermore, we propose that this quasi-constant supply of small increments of magma is the fundamental cause of small-scale, decades-long continuous volcanic activity; that is, the current eruption of Arenal is flux-controlled by inputs of mantle magmas.

Relevance: 10.00%

Abstract:

PURPOSE: In the radiopharmaceutical therapy approach to the fight against cancer, in particular when it comes to translating laboratory results to the clinical setting, modeling has served as an invaluable tool for guidance and for understanding the processes operating at the cellular level and how these relate to macroscopic observables. Tumor control probability (TCP) is the dosimetric end point quantity of choice which relates to experimental and clinical data: it requires knowledge of individual cellular absorbed doses, since it depends on the assessment of the treatment's ability to kill each and every cell. Macroscopic tumors, seen in both clinical and experimental studies, contain too many cells to be modeled individually in Monte Carlo simulation; yet, in particular for low ratios of decays to cells, a cell-based model that does not smooth away the statistical considerations associated with low activity is a necessity. The authors present here an adaptation of the simple sphere-based model from which cellular-level dosimetry for macroscopic tumors and their end point quantities, such as TCP, may be extrapolated more reliably. METHODS: Ten homogeneous spheres representing tumors of different sizes were constructed in GEANT4. The radionuclide 131I was randomly allowed to decay for each model size and for seven different ratios of number of decays to number of cells, Nr: 1000, 500, 200, 100, 50, 20, and 10 decays per cell. The deposited energy was collected in radial bins and divided by the bin mass to obtain the average bin absorbed dose. To simulate a cellular model, the number of cells present in each bin was calculated and an absorbed dose attributed to each cell equal to the bin average absorbed dose with a randomly determined adjustment based on a Gaussian probability distribution with a width equal to the statistical uncertainty consistent with the ratio of decays to cells, i.e., equal to Nr^(-1/2).
From dose volume histograms, the surviving fraction of cells, the equivalent uniform dose (EUD), and the TCP were calculated for the different scenarios. Comparably sized spherical models containing individual spherical cells (15 μm diameter) in hexagonal lattices were constructed, and Monte Carlo simulations were executed for all the same scenarios. The dosimetric quantities were calculated and compared to the adjusted simple-sphere model results. The model was then applied to the Bortezomib-induced enzyme-targeted radiotherapy (BETR) strategy of targeting Epstein-Barr virus (EBV)-expressing cancers. RESULTS: The TCP values agreed to within 2% between the adjusted simple-sphere and full cellular models. Additionally, models were generated for a nonuniform distribution of activity, and the results from the adjusted spherical and cellular models showed similar agreement. The TCP values predicted for macroscopic tumors were consistent with the experimental observations for BETR-treated 1 g EBV-expressing lymphoma tumors in mice. CONCLUSIONS: The adjusted spherical model presented here provides more accurate TCP values than simple spheres, on par with full cellular Monte Carlo simulations, while maintaining the simplicity of the simple-sphere model. This model provides a basis for complementing and understanding laboratory and clinical results pertaining to radiopharmaceutical therapy.
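The Gaussian dose-adjustment step described in the Methods can be sketched as follows (the bin dose, cell count, and decays-per-cell values are illustrative; this is not the authors' GEANT4 pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)

def cell_doses(bin_mean_dose, n_cells, decays_per_cell):
    """Assign each cell the bin-average absorbed dose plus a Gaussian
    adjustment whose relative width is Nr**(-1/2), the statistical
    uncertainty implied by the ratio of decays to cells."""
    rel_width = decays_per_cell ** -0.5
    return rng.normal(loc=bin_mean_dose,
                      scale=rel_width * bin_mean_dose,
                      size=n_cells)

# Example: a bin with average dose 2.0 Gy, 100,000 cells, 100 decays/cell,
# so the per-cell doses scatter with ~10% relative width around the mean.
doses = cell_doses(bin_mean_dose=2.0, n_cells=100_000, decays_per_cell=100)
```

Sampling the per-cell doses this way preserves the low-activity statistics that a uniform bin-average dose would smooth away.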

Relevance: 10.00%

Abstract:

BACKGROUND: Genotypes obtained with commercial SNP arrays have been extensively used in many large case-control or population-based cohorts for SNP-based genome-wide association studies for a multitude of traits. Yet, these genotypes capture only a small fraction of the variance of the studied traits. Genomic structural variants (GSV) such as Copy Number Variation (CNV) may account for part of the missing heritability, but their comprehensive detection requires either next-generation arrays or sequencing. Sophisticated algorithms that infer CNVs by combining the intensities from SNP-probes for the two alleles can already be used to extract a partial view of such GSV from existing data sets. RESULTS: Here we present several advances to facilitate the latter approach. First, we introduce a novel CNV detection method based on a Gaussian Mixture Model. Second, we propose a new algorithm, PCA merge, for combining copy-number profiles from many individuals into consensus regions. We applied both our new methods as well as existing ones to data from 5612 individuals from the CoLaus study who were genotyped on Affymetrix 500K arrays. We developed a number of procedures in order to evaluate the performance of the different methods. This includes comparison with previously published CNVs as well as using a replication sample of 239 individuals, genotyped with Illumina 550K arrays. We also established a new evaluation procedure that employs the fact that related individuals are expected to share their CNVs more frequently than randomly selected individuals. The ability to detect both rare and common CNVs provides a valuable resource that will facilitate association studies exploring potential phenotypic associations with CNVs. 
CONCLUSION: Our new methodologies for CNV detection, together with their evaluation, will help in extracting additional information from the large amount of SNP-genotyping data on various cohorts and in using it to explore structural variants and their impact on complex traits.
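The kind of Gaussian mixture model underlying such a detection method can be fitted with a short expectation-maximization loop; the sketch below is a generic one-dimensional, two-component version on synthetic intensities, not the paper's CNV pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 1D intensities drawn from two well-separated copy-number states.
x = np.concatenate([rng.normal(0.0, 0.5, 500),
                    rng.normal(3.0, 0.5, 500)])

# EM for a two-component 1D Gaussian mixture.
mu = np.array([-1.0, 4.0])     # initial means (deliberately off)
var = np.array([1.0, 1.0])     # initial variances
pi = np.array([0.5, 0.5])      # mixing weights
for _ in range(50):
    # E-step: responsibility of each component for each point.
    dens = (pi / np.sqrt(2 * np.pi * var)
            * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and variances from responsibilities.
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

mu_sorted = np.sort(mu)  # recovered component means
```

On well-separated states the recovered means land close to the generating values; real CNV intensity data would require the full model of the paper.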

Relevance: 10.00%

Abstract:

This research reviews the analysis and modeling of Swiss franc interest rate curves (IRC) using unsupervised (SOM, Gaussian mixtures) and supervised (MLP) machine learning algorithms. IRC are considered as objects embedded into different feature spaces: maturities; maturity-date; parameters of the Nelson-Siegel model (NSM). Analysis of the NSM parameters and their temporal and clustering structures helps in understanding the relevance of the model and its potential use for forecasting. A mapping of IRC in the maturity-date feature space is presented and analyzed for visualization and forecasting purposes.
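The Nelson-Siegel model mentioned above has a standard closed form; a minimal sketch follows (the parameter values are illustrative, not fitted to Swiss franc data):

```python
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield at maturity tau (in years).
    beta0: long-term level; beta1: short-term slope component;
    beta2: medium-term curvature; lam: exponential decay parameter."""
    x = tau / lam
    term = (1.0 - np.exp(-x)) / x
    return beta0 + beta1 * term + beta2 * (term - np.exp(-x))

# Illustrative parameters for an upward-sloping curve.
maturities = np.array([0.25, 1.0, 2.0, 5.0, 10.0, 30.0])
curve = nelson_siegel(maturities, beta0=0.03, beta1=-0.02, beta2=0.01, lam=1.5)
```

The four parameters (beta0, beta1, beta2, lam) are exactly the NSM coordinates in which the curves can be clustered and forecast.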

Relevance: 10.00%

Abstract:

Expanded abstract: The Iowa Department of Transportation (IA DOT) is finalizing research to streamline field inventory and inspection of culverts by Maintenance and Construction staff while maximizing the use of tablet technologies. The project began in 2011 to develop new best practices for field staff to assist in the inventory, inspection, and maintenance of assets along the roadway. The team spent the past year working through the complexities of identifying the most appropriate tablet hardware for field data collection. A small-scale deployment of tablets occurred in spring 2013 to collect several safety-related assets (culverts, signs, guardrail, and incidents). Data can be collected in disconnected or connected modes, and there is an associated desktop environment where data can be viewed and queried after being synced into the master database. The development of a deployment plan and related workflow processes is underway; these will eventually feed information into IA DOT's larger asset management system and make the information available for decision making. The team is also working with the IA DOT Design Office on Computer Aided Drafting (CAD) data processing and with the IA DOT Construction Office on a new digital as-built plan process, to leverage the complete data life cycle so information can be developed once and leveraged by Maintenance staff farther along in the process.

Relevance: 10.00%

Abstract:

Predictive groundwater modeling requires accurate information about aquifer characteristics. Geophysical imaging is a powerful tool for delineating aquifer properties at an appropriate scale and resolution, but it suffers from problems of ambiguity. One way to overcome such limitations is to adopt a simultaneous multitechnique inversion strategy. We have developed a methodology for aquifer characterization based on structural joint inversion of multiple geophysical data sets followed by clustering to form zones and subsequent inversion for zonal parameters. Joint inversions based on cross-gradient structural constraints require less restrictive assumptions than, say, applying predefined petrophysical relationships and generally yield superior results. This approach has, for the first time, been applied to three geophysical data types in three dimensions. A classification scheme using maximum likelihood estimation is used to determine the parameters of a Gaussian mixture model that defines zonal geometries from joint-inversion tomograms. The resulting zones are used to estimate representative geophysical parameters of each zone, which are then used for field-scale petrophysical analysis. A synthetic study demonstrated how joint inversion of seismic and radar traveltimes and electrical resistance tomography (ERT) data greatly reduces misclassification of zones (down from 21.3% to 3.7%) and improves the accuracy of retrieved zonal parameters (from 1.8% to 0.3%) compared to individual inversions. We applied our scheme to a data set collected in northeastern Switzerland to delineate lithologic subunits within a gravel aquifer. The inversion models resolve three principal subhorizontal units along with some important 3D heterogeneity. Petrophysical analysis of the zonal parameters indicated approximately 30% variation in porosity within the gravel aquifer and an increasing fraction of finer sediments with depth.

Relevance: 10.00%

Abstract:

We develop several results on hitting probabilities of random fields which highlight the role of the dimension of the parameter space. This yields upper and lower bounds in terms of Hausdorff measure and Bessel-Riesz capacity, respectively. We apply these results to a system of stochastic wave equations in spatial dimension k ≥ 1 driven by a d-dimensional spatially homogeneous additive Gaussian noise that is white in time and colored in space.

Relevance: 10.00%

Abstract:

The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that must be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but was not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering, and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase goes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography, and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset.
A bottom-to-top approach of increasing method complexity was adopted, and the results were analyzed together in order to find common definitions of the continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests of data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations allowed the use of multi-Gaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Within the classification methods, probabilistic neural networks (PNN) proved better adapted for modeling high-threshold categorization and for automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions. In general, it was concluded that no particular prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to accomplish efficient indoor radon decision making.
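K Nearest Neighbors, listed among the exploratory tools, reduces to a few lines for spatial interpolation; the coordinates and radon values below are made up for illustration and are not drawn from the Swiss dataset:

```python
import numpy as np

def knn_predict(coords, values, query, k=3):
    """Predict the value at a query point as the mean of the k nearest
    measured values (Euclidean distance in coordinate space)."""
    d = np.linalg.norm(coords - query, axis=1)
    nearest = np.argsort(d)[:k]
    return values[nearest].mean()

# Hypothetical measurement locations (km) and indoor radon levels (Bq/m3).
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
values = np.array([100.0, 120.0, 110.0, 400.0])
pred = knn_predict(coords, values, query=np.array([0.2, 0.2]), k=3)
# The three nearby stations dominate; the distant high reading is ignored.
```

The choice of k plays the role of the neighborhood parameter discussed above: small k tracks local variability, large k smooths toward the regional mean.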

Relevance: 10.00%

Abstract:

In this paper, we present an efficient numerical scheme for the recently introduced geodesic active fields (GAF) framework for geometric image registration. This framework considers the registration task as a weighted minimal surface problem. Hence, the data term and the regularization term are combined through multiplication in a single, parametrization-invariant and geometric cost functional. The multiplicative coupling provides an intrinsic, spatially varying and data-dependent tuning of the regularization strength, and the parametrization invariance allows working with images of nonflat geometry, generally defined on any smoothly parametrizable manifold. The resulting energy-minimizing flow, however, has poor numerical properties. Here, we provide an efficient numerical scheme that uses a splitting approach: data and regularity terms are optimized over two distinct deformation fields that are constrained to be equal via an augmented Lagrangian approach. Our approach is more flexible than standard Gaussian regularization, since one can interpolate freely between isotropic Gaussian and anisotropic TV-like smoothing. In this paper, we compare the geodesic active fields method with the popular Demons method and three more recent state-of-the-art algorithms: NL-optical flow, MRF image registration, and landmark-enhanced large displacement optical flow. Thus, we can show the advantages of the proposed FastGAF method. It compares favorably against Demons, both in terms of registration speed and quality. Over the range of example applications, it also consistently produces results not far from more dedicated state-of-the-art methods, illustrating the flexibility of the proposed framework.

Relevance: 10.00%

Abstract:

Duchenne muscular dystrophy (DMD) is an X-linked genetic disease, caused by the absence of the dystrophin protein. Although many novel therapies are under development for DMD, there is currently no cure and affected individuals are often confined to a wheelchair by their teens and die in their twenties/thirties. DMD is a rare disease (prevalence <5/10,000). Even the largest countries do not have enough affected patients to rigorously assess novel therapies, unravel genetic complexities, and determine patient outcomes. TREAT-NMD is a worldwide network for neuromuscular diseases that provides an infrastructure to support the delivery of promising new therapies for patients. The harmonized implementation of national and ultimately global patient registries has been central to the success of TREAT-NMD. For the DMD registries within TREAT-NMD, individual countries have chosen to collect patient information in the form of standardized patient registries to increase the overall patient population on which clinical outcomes and new technologies can be assessed. The registries comprise more than 13,500 patients from 31 different countries. Here, we describe how the TREAT-NMD national patient registries for DMD were established. We look at their continued growth and assess how successful they have been at fostering collaboration between academia, patient organizations, and industry.