165 results for Gaussian assumption
Abstract:
Numerous sources of evidence point to the fact that heterogeneity within the Earth's deep crystalline crust is complex and hence may be best described through stochastic rather than deterministic approaches. As seismic reflection imaging arguably offers the best means of sampling deep crustal rocks in situ, much interest has been expressed in using such data to characterize the stochastic nature of crustal heterogeneity. Previous work on this problem has shown that the spatial statistics of seismic reflection data are indeed related to those of the underlying heterogeneous seismic velocity distribution. As yet, however, the nature of this relationship has remained elusive because most of the work was either strictly empirical or based on incorrect methodological approaches. Here, we introduce a conceptual model, based on the assumption of weak scattering, that allows us to quantitatively link the second-order statistics of a 2-D seismic velocity distribution with those of the corresponding processed and depth-migrated seismic reflection image. We then perform a sensitivity study to investigate what information regarding the stochastic model parameters describing crustal velocity heterogeneity might potentially be recovered from the statistics of a seismic reflection image using this model. Finally, we present a Monte Carlo inversion strategy to estimate these parameters, and we show examples of its application at two different source frequencies and using two different sets of prior information. Our results indicate that the inverse problem is inherently non-unique and that many different combinations of the vertical and lateral correlation lengths describing the velocity heterogeneity can yield seismic images with the same 2-D autocorrelation structure. The ratio of vertical to lateral correlation length across all of these possible combinations, however, remains roughly constant, which indicates that, without additional prior information, the aspect ratio is the only parameter describing the stochastic seismic velocity structure that can be reliably recovered.
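As a hedged illustration of the statistic at the heart of this approach, the sketch below (assumed names, grid sizes and correlation lengths; not the authors' code) estimates the 2-D autocorrelation of an image via the Wiener-Khinchin theorem, applied here to a synthetic anisotropic random field whose lateral correlation length is five times its vertical one.

```python
# Minimal sketch: 2-D autocorrelation of an image via the FFT (Wiener-Khinchin).
# The synthetic field and its correlation lengths are illustrative assumptions.
import numpy as np

def autocorrelation_2d(image):
    """Normalized 2-D autocorrelation of a zero-mean image (FFT-based)."""
    img = image - image.mean()
    power = np.abs(np.fft.fft2(img)) ** 2      # power spectrum
    acf = np.fft.ifft2(power).real             # Wiener-Khinchin theorem
    acf = np.fft.fftshift(acf)                 # put zero lag at the centre
    return acf / acf.max()

# Toy example: anisotropic Gaussian random field with a 5:1 aspect ratio
rng = np.random.default_rng(0)
nz, nx = 256, 256
kz = np.fft.fftfreq(nz)[:, None]
kx = np.fft.fftfreq(nx)[None, :]
az, ax = 5.0, 25.0                              # vertical/lateral correlation lengths (samples)
spectrum = np.exp(-((kz * az) ** 2 + (kx * ax) ** 2))
field = np.fft.ifft2(np.sqrt(spectrum) * np.fft.fft2(rng.standard_normal((nz, nx)))).real
acf = autocorrelation_2d(field)
print(acf.shape, acf[nz // 2, nx // 2])         # peak value 1.0 at zero lag
```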
Abstract:
For the general practitioner to be able to prescribe optimal therapy to his individual hypertensive patients, he needs accurate information on the therapeutic agents he is going to administer and practical treatment strategies. The information on drugs and drug combinations has to be applicable to the treatment of individual patients and not just patient study groups. A basic requirement is knowledge of the dose-response relationship for each compound in order to choose the optimal therapeutic dose. Contrary to general assumption, this key information is difficult to obtain and often not available to the physician for many years after marketing of a drug. As a consequence, excessive doses are often used. Furthermore, the physician needs comparative data on the various antihypertensive drugs that are applicable to the treatment of individual patients. In order to minimize potential side effects due to unnecessary combinations of compounds, the strategy of sequential monotherapy is proposed, with the goal of treating as many patients as possible with monotherapy at optimal doses. More drug trials of a crossover design and more individualized analyses of the results are badly needed to provide the physician with information that he can use in his daily practice. In this time of continuous intensive development of new antihypertensive agents, much could be gained in enhanced efficacy and reduced incidence of side effects by taking a closer look at the drugs already available and using them more appropriately in individual patients.
Abstract:
A family with a new oculo-auricular syndrome, called Schorderet-Munier syndrome, was identified. This disease is characterised by a deformation of the ear lobule and by several ophthalmic abnormalities, including microphthalmia, cataract, coloboma and retinal degeneration. The gene that causes this syndrome is NKX5-3, coding for a transcription factor containing a homeodomain. In the affected patients, the defect consists of a deletion of 26 nucleotides, probably producing a premature stop codon. This gene is only expressed in a few organs, such as the testis and the superior cervical ganglia, as well as in the organs affected by this syndrome, namely the ear pinna and the eye, mainly during embryonic development. In the retina, NKX5-3 is present in the inner nuclear layer and in the ganglion cell layer. It is expressed along a gradient running from the temporal to the nasal retina and from the ventral to the dorsal part. Its in vitro expression is regulated by Sp1, a transcription factor expressed during murine eye development. NKX5-3 itself appears to inhibit the expression of SHH and EPHA6, two genes that are each implicated, in their own way, in the axon guidance of retinal ganglion cells. Taken together, these results allow us to propose a potential role for NKX5-3 in this process.
Abstract:
It is well established that interactions between CD4(+) T cells and major histocompatibility complex class II (MHCII) positive antigen-presenting cells (APCs) of hematopoietic origin play key roles in both the maintenance of tolerance and the initiation and development of autoimmune and inflammatory disorders. In sharp contrast, despite nearly three decades of intensive research, the functional relevance of MHCII expression by non-hematopoietic tissue-resident cells has remained obscure. The widespread assumption that MHCII expression by non-hematopoietic APCs has an impact on autoimmune and inflammatory diseases has in most instances neither been confirmed nor excluded by indisputable in vivo data. Here we review and put into perspective conflicting in vitro and in vivo results on the putative impact of MHCII expression by non-hematopoietic APCs, in both target organs and secondary lymphoid tissues, on the initiation and development of representative autoimmune and inflammatory disorders. Emphasis will be placed on the gaps in our knowledge in this field. We also discuss new mouse models, developed on the basis of our understanding of the molecular mechanisms that regulate MHCII expression, which constitute valuable tools for filling the severe gaps in our knowledge of the functions of non-hematopoietic APCs in inflammatory conditions.
Abstract:
PURPOSE: In the radiopharmaceutical therapy approach to the fight against cancer, in particular when it comes to translating laboratory results to the clinical setting, modeling has served as an invaluable tool for guidance and for understanding the processes operating at the cellular level and how these relate to macroscopic observables. Tumor control probability (TCP) is the dosimetric end point quantity of choice which relates to experimental and clinical data: it requires knowledge of individual cellular absorbed doses since it depends on the assessment of the treatment's ability to kill each and every cell. Macroscopic tumors, seen in both clinical and experimental studies, contain too many cells to be modeled individually in Monte Carlo simulation; yet, in particular for low ratios of decays to cells, a cell-based model that does not smooth away statistical considerations associated with low activity is a necessity. The authors present here an adaptation of the simple sphere-based model from which cellular level dosimetry for macroscopic tumors and their end point quantities, such as TCP, may be extrapolated more reliably. METHODS: Ten homogeneous spheres representing tumors of different sizes were constructed in GEANT4. The radionuclide 131I was randomly allowed to decay for each model size and for seven different ratios of number of decays to number of cells, N_r: 1000, 500, 200, 100, 50, 20, and 10 decays per cell. The deposited energy was collected in radial bins and divided by the bin mass to obtain the average bin absorbed dose. To simulate a cellular model, the number of cells present in each bin was calculated and an absorbed dose attributed to each cell equal to the bin average absorbed dose with a randomly determined adjustment based on a Gaussian probability distribution whose width equals the statistical uncertainty consistent with the ratio of decays to cells, i.e., N_r^(-1/2). From dose volume histograms the surviving fraction of cells, equivalent uniform dose (EUD), and TCP for the different scenarios were calculated. Comparably sized spherical models containing individual spherical cells (15 μm diameter) in hexagonal lattices were constructed, and Monte Carlo simulations were executed for all the same previous scenarios. The dosimetric quantities were calculated and compared to the adjusted simple sphere model results. The model was then applied to the Bortezomib-induced enzyme-targeted radiotherapy (BETR) strategy of targeting Epstein-Barr virus (EBV)-expressing cancers. RESULTS: The TCP values were comparable to within 2% between the adjusted simple sphere and full cellular models. Additionally, models were generated for a nonuniform distribution of activity, and results were compared between the adjusted spherical and cellular models with similar comparability. The TCP values predicted for macroscopic tumors were consistent with the experimental observations for BETR-treated 1 g EBV-expressing lymphoma tumors in mice. CONCLUSIONS: The adjusted spherical model presented here provides more accurate TCP values than simple spheres, on par with full cellular Monte Carlo simulations, while maintaining the simplicity of the simple sphere model. This model provides a basis for complementing and understanding laboratory and clinical results pertaining to radiopharmaceutical therapy.
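The following sketch illustrates, under assumed radiobiological constants and toy bin sizes (not the study's values), the adjustment described above: each cell receives its bin-average dose perturbed by Gaussian noise of relative width N_r^(-1/2), from which a surviving fraction and a Poisson-type TCP can be computed.

```python
# Hedged sketch of the per-cell dose adjustment: bin-average dose plus Gaussian
# noise of relative width 1/sqrt(Nr). Alpha, beta, doses and cell counts are
# illustrative assumptions, not the study's parameters.
import numpy as np

rng = np.random.default_rng(42)

def cell_doses(bin_mean_dose_gy, n_cells_in_bin, decays_per_cell):
    """Per-cell doses from a bin-average dose, with statistical spread ~ Nr^(-1/2)."""
    sigma = bin_mean_dose_gy / np.sqrt(decays_per_cell)
    return rng.normal(bin_mean_dose_gy, sigma, size=n_cells_in_bin).clip(min=0.0)

def surviving_fraction(dose_gy, alpha=0.35, beta=0.035):
    """Linear-quadratic survival per cell (alpha, beta assumed for illustration)."""
    return np.exp(-alpha * dose_gy - beta * dose_gy ** 2)

# Toy tumour: three radial bins with assumed mean doses and cell counts
bin_doses = [25.0, 20.0, 15.0]          # Gy
bin_cells = [50_000, 80_000, 120_000]
Nr = 20                                  # decays per cell

expected_survivors = 0.0
for d, n in zip(bin_doses, bin_cells):
    expected_survivors += surviving_fraction(cell_doses(d, n, Nr)).sum()

tcp = np.exp(-expected_survivors)        # Poisson TCP from expected surviving cells
print(f"expected surviving cells: {expected_survivors:.2f}, TCP: {tcp:.3f}")
```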
Abstract:
BACKGROUND: Genotypes obtained with commercial SNP arrays have been extensively used in many large case-control or population-based cohorts for SNP-based genome-wide association studies for a multitude of traits. Yet, these genotypes capture only a small fraction of the variance of the studied traits. Genomic structural variants (GSV) such as copy number variation (CNV) may account for part of the missing heritability, but their comprehensive detection requires either next-generation arrays or sequencing. Sophisticated algorithms that infer CNVs by combining the intensities from SNP probes for the two alleles can already be used to extract a partial view of such GSV from existing data sets. RESULTS: Here we present several advances to facilitate the latter approach. First, we introduce a novel CNV detection method based on a Gaussian mixture model. Second, we propose a new algorithm, PCA merge, for combining copy-number profiles from many individuals into consensus regions. We applied both our new methods and existing ones to data from 5612 individuals from the CoLaus study who were genotyped on Affymetrix 500K arrays. We developed a number of procedures to evaluate the performance of the different methods. These include comparison with previously published CNVs as well as the use of a replication sample of 239 individuals genotyped with Illumina 550K arrays. We also established a new evaluation procedure that exploits the fact that related individuals are expected to share their CNVs more frequently than randomly selected individuals. The ability to detect both rare and common CNVs provides a valuable resource that will facilitate association studies exploring potential phenotypic associations with CNVs. CONCLUSION: Our new methodologies for CNV detection and their evaluation will help to extract additional information from the large amount of SNP-genotyping data on various cohorts and to use it to explore structural variants and their impact on complex traits.
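As a rough illustration of the Gaussian-mixture idea (this is not the paper's algorithm, and the intensity values below are synthetic), the sketch fits a three-component mixture to log intensity ratios and maps the ordered components to loss, neutral, and gain copy-number states.

```python
# Illustrative sketch: copy-number state calling from probe intensities with a
# Gaussian mixture model. Component count and data are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Synthetic log2 intensity ratios: mostly 2 copies, plus a deletion and a duplication
log_ratio = np.concatenate([
    rng.normal(0.0, 0.15, 800),    # copy number 2
    rng.normal(-0.7, 0.15, 100),   # copy number 1 (deletion)
    rng.normal(0.4, 0.15, 100),    # copy number 3 (duplication)
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, n_init=5, random_state=0).fit(log_ratio)
states = gmm.predict(log_ratio)

# Order components by mean so labels map to copy-number classes
order = np.argsort(gmm.means_.ravel())
label = {order[0]: "loss", order[1]: "neutral", order[2]: "gain"}
for k in order:
    print(f"{label[k]:>7s}: mean={gmm.means_[k, 0]:+.2f}, n={np.sum(states == k)}")
```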
Abstract:
The present research reviews the analysis and modeling of Swiss franc interest rate curves (IRC) using unsupervised (SOM, Gaussian mixtures) and supervised (MLP) machine learning algorithms. IRC are considered as objects embedded in different feature spaces: maturities; maturity-date; and the parameters of the Nelson-Siegel model (NSM). Analysis of the NSM parameters and of their temporal and clustering structures helps to assess the relevance of the model and its potential use for forecasting. Mapping of IRC in the maturity-date feature space is presented and analyzed for visualization and forecasting purposes.
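A minimal sketch of the Nelson-Siegel parametrization follows, with illustrative maturities and yields standing in for actual Swiss franc data; the fitted parameter vector is the kind of feature-space representation that could be fed to SOM, Gaussian mixture, or MLP models.

```python
# Sketch: fit the Nelson-Siegel curve to an assumed set of yields by least squares.
# The maturities and rates are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield for maturity tau (years)."""
    x = tau / lam
    slope = (1 - np.exp(-x)) / x
    return beta0 + beta1 * slope + beta2 * (slope - np.exp(-x))

maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])                 # years (assumed grid)
yields = np.array([0.2, 0.3, 0.45, 0.7, 0.9, 1.2, 1.4, 1.6, 1.9, 2.0])        # percent (illustrative)

params, _ = curve_fit(nelson_siegel, maturities, yields, p0=[2.0, -1.5, 1.0, 2.0])
beta0, beta1, beta2, lam = params
print(f"level={beta0:.2f}, slope={beta1:.2f}, curvature={beta2:.2f}, lambda={lam:.2f}")
# Stacking such (beta0, beta1, beta2, lam) vectors over time yields one curve per row,
# i.e. the NSM feature space mentioned in the abstract.
```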
Abstract:
Aim: Climatic niche modelling of species and community distributions implicitly assumes strong and constant climatic determinism across geographic space. This assumption had, however, never been tested until now. We tested it by assessing how stacked species distribution models (S-SDMs) perform in predicting plant species assemblages along elevation. Location: Western Swiss Alps. Methods: Using robust presence-absence data, we first assessed the ability of topo-climatic S-SDMs to predict plant assemblages in a study area encompassing a 2800 m wide elevation gradient. We then assessed the relationships among several evaluation metrics and trait-based tests of community assembly rules. Results: The standard errors of individual SDMs decreased significantly towards higher elevations. Overall, the S-SDMs overpredicted richness far more than they underpredicted it and could not reproduce the humpback richness curve along elevation. Overprediction was greater at low and mid-range elevations in absolute values but greater at high elevations when standardised by the actual richness. Looking at species composition, the evaluation metrics accounting for both the presence and absence of species (overall prediction success and kappa) or focusing on correctly predicted absences (specificity) increased with increasing elevation, while the metrics focusing on correctly predicted presences (Jaccard index and sensitivity) decreased. The best overall evaluation, as driven by specificity, occurred at high elevation, where species assemblages were shown to be under significant environmental filtering of small plants. In contrast, the decreased overall accuracy in the lowlands was associated with functional patterns representing any type of assembly rule (environmental filtering, limiting similarity or null assembly). Main Conclusions: Our study reveals interesting patterns of change in S-SDM errors with changes in assembly rules along elevation. Yet, significant levels of assemblage prediction error occurred throughout the gradient, calling for further improvement of SDMs, e.g., by adding key environmental filters that act at fine scales and by developing approaches that account for variations in the influence of predictors along environmental gradients.
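For concreteness, the sketch below (with made-up presence-absence vectors) computes the assemblage evaluation metrics named above (overall prediction success, kappa, sensitivity, specificity, and the Jaccard index) for a single plot.

```python
# Hedged sketch of per-plot assemblage evaluation metrics; the example
# observed/predicted vectors are invented for illustration.
import numpy as np

def assemblage_metrics(obs, pred):
    obs, pred = np.asarray(obs, bool), np.asarray(pred, bool)
    a = np.sum(obs & pred)          # correctly predicted presences
    b = np.sum(~obs & pred)         # overpredictions (false presences)
    c = np.sum(obs & ~pred)         # underpredictions (false absences)
    d = np.sum(~obs & ~pred)        # correctly predicted absences
    n = a + b + c + d
    po = (a + d) / n                                        # overall prediction success
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # chance agreement for kappa
    return {
        "success": po,
        "kappa": (po - pe) / (1 - pe),
        "sensitivity": a / (a + c),
        "specificity": d / (b + d),
        "jaccard": a / (a + b + c),
    }

obs  = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
pred = [1, 0, 0, 1, 1, 0, 0, 1, 1, 0]
print(assemblage_metrics(obs, pred))
```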
Abstract:
Predictive groundwater modeling requires accurate information about aquifer characteristics. Geophysical imaging is a powerful tool for delineating aquifer properties at an appropriate scale and resolution, but it suffers from problems of ambiguity. One way to overcome such limitations is to adopt a simultaneous multitechnique inversion strategy. We have developed a methodology for aquifer characterization based on structural joint inversion of multiple geophysical data sets, followed by clustering to form zones and subsequent inversion for zonal parameters. Joint inversions based on cross-gradient structural constraints require less restrictive assumptions than, say, applying predefined petrophysical relationships, and they generally yield superior results. This approach has, for the first time, been applied to three geophysical data types in three dimensions. A classification scheme using maximum likelihood estimation is used to determine the parameters of a Gaussian mixture model that defines zonal geometries from the joint-inversion tomograms. The resulting zones are used to estimate representative geophysical parameters for each zone, which are then used for field-scale petrophysical analysis. A synthetic study demonstrated how joint inversion of seismic and radar traveltimes and electrical resistance tomography (ERT) data greatly reduces the misclassification of zones (down from 21.3% to 3.7%) and improves the accuracy of the retrieved zonal parameters (errors reduced from 1.8% to 0.3%) compared with individual inversions. We applied our scheme to a data set collected in northeastern Switzerland to delineate lithologic subunits within a gravel aquifer. The inversion models resolve three principal subhorizontal units along with some important 3D heterogeneity. Petrophysical analysis of the zonal parameters indicated approximately 30% variation in porosity within the gravel aquifer and an increasing fraction of finer sediments with depth.
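A hedged sketch of the zonation step follows: co-located cells from three tomograms (synthetic values, assumed parameter names, not the authors' workflow) are classified into zones by fitting a multivariate Gaussian mixture, whose means then serve as zonal parameters.

```python
# Illustrative sketch: zonation of co-located tomogram values with a
# multivariate Gaussian mixture. All values below are synthetic assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

# Synthetic co-located values per cell: [seismic slowness, radar slowness, log-resistivity]
zone_a = rng.multivariate_normal([0.55, 8.0, 2.2], np.diag([1e-4, 0.04, 0.01]), 4000)
zone_b = rng.multivariate_normal([0.50, 7.4, 2.6], np.diag([1e-4, 0.04, 0.01]), 3000)
zone_c = rng.multivariate_normal([0.60, 8.6, 1.9], np.diag([1e-4, 0.04, 0.01]), 3000)
cells = np.vstack([zone_a, zone_b, zone_c])

gmm = GaussianMixture(n_components=3, covariance_type="full", n_init=5, random_state=0)
zones = gmm.fit_predict(cells)

# Zonal parameters = the mixture means; these would feed the petrophysical analysis
for k, mean in enumerate(gmm.means_):
    print(f"zone {k}: n={np.sum(zones == k):5d}, mean parameters = {np.round(mean, 3)}")
```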
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, as it is closely associated with indoor radon. This association was indeed observed for the Swiss data but was not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving windows methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing methods of declustering were revised and applied in an attempt to approach the global histogram parameters.

The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools.

In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top approach to method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests of data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions.

The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) proved better adapted for modeling high-threshold categorization and for automation, whereas support vector machines (SVM) performed well under balanced category conditions.
In general, it was concluded that no single prediction or estimation method is superior under all conditions of scale and neighborhood definition. Simulations should form the basis, while the other methods can provide complementary information to support efficient decision making on indoor radon.
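As one small, assumption-laden example of the exploratory tools listed above, the sketch below applies distance-weighted k-nearest-neighbour interpolation to synthetic indoor radon measurements on a coordinate grid; none of the coordinates or concentrations are real data.

```python
# Minimal sketch: KNN spatial interpolation of (synthetic) indoor radon values.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(3)

# Synthetic measurement locations (km) and log radon concentrations
xy = rng.uniform(0, 100, size=(500, 2))
log_radon = 4.0 + 0.02 * xy[:, 0] - 0.01 * xy[:, 1] + rng.normal(0, 0.5, 500)

# Distance-weighted KNN as a simple spatial predictor (neighbourhood size assumed)
knn = KNeighborsRegressor(n_neighbors=10, weights="distance").fit(xy, log_radon)

grid_x, grid_y = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([grid_x.ravel(), grid_y.ravel()])
prediction = knn.predict(grid).reshape(grid_x.shape)
print(prediction.shape, prediction.mean().round(2))
```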
Abstract:
The general public seems to be convinced that juvenile delinquency has increased massively over the last decades. However, this assumption is much less popular among academics and in some media, where doubts about the reality of this trend are often expressed. In the present paper, trends are followed using conviction statistics over 50 years, police and victimization data since the 1980s, and self-report data collected since 1992. All sources consistently point to a massive increase in offending among juveniles, particularly for violent offences during the 1990s. Given that trends were similar in most European countries, explanations should be sought at the European rather than the national level. The available evidence points to possible effects of increased opportunities for property offences since 1950 and, although causality remains hard to prove, of increased exposure to extreme media violence since 1985.
Abstract:
BACKGROUND AND PURPOSE: Most of the neuropathological studies in brain aging were based on the assumption of a symmetrical right-left hemisphere distribution of both Alzheimer disease and vascular pathology. To explore the impact of asymmetrical lesion formation on cognition, we performed a clinicopathological analysis of 153 cases with mixed pathology except macroinfarcts. METHODS: Cognitive status was assessed prospectively using the Clinical Dementia Rating scale; neuropathological evaluation included assessment of Braak neurofibrillary tangle and Aβ deposition staging, microvascular pathology, and lacunes. The right-left hemisphere differences in neuropathological scores were evaluated using the Wilcoxon signed rank test. The relationship between the interhemispheric distribution of lesions and Clinical Dementia Rating scores was assessed using ordered logistic regression. RESULTS: Unlike Braak neurofibrillary tangle and Aβ deposition staging, vascular scores were significantly higher in the left hemisphere for all Clinical Dementia Rating scores. A negative relationship was found between Braak neurofibrillary tangle staging, but not Aβ staging, and vascular scores in cases with moderate to severe dementia. In both hemispheres, Braak neurofibrillary tangle staging was the main determinant of cognitive decline, followed by vascular scores and Aβ deposition staging. The concomitant predominance of Alzheimer disease and vascular pathology in the right hemisphere was associated with significantly higher Clinical Dementia Rating scores. CONCLUSIONS: Our data show that the cognitive impact of Alzheimer disease and vascular lesions in mixed cases may be assessed unilaterally without major information loss. However, interhemispheric differences and, in particular, increased vascular and Alzheimer disease burden in the right hemisphere may increase the risk for dementia in this group.
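A minimal sketch of the paired interhemispheric comparison, using scipy's Wilcoxon signed-rank test on made-up ordinal vascular scores (the real scores are not reproduced here), is given below.

```python
# Hedged sketch: Wilcoxon signed-rank test on paired left/right hemisphere
# vascular scores. Scores are simulated for illustration only.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(11)
n_cases = 153

# Illustrative ordinal vascular scores (0-3) with a slight left-hemisphere excess
right = rng.integers(0, 4, n_cases)
left = np.clip(right + rng.choice([0, 0, 0, 1], n_cases), 0, 3)

stat, p_value = wilcoxon(left, right, zero_method="wilcox")
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")
```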
Abstract:
In this paper, we present an efficient numerical scheme for the recently introduced geodesic active fields (GAF) framework for geometric image registration. This framework considers the registration task as a weighted minimal surface problem. Hence, the data term and the regularization term are combined through multiplication in a single, parametrization-invariant and geometric cost functional. The multiplicative coupling provides an intrinsic, spatially varying and data-dependent tuning of the regularization strength, and the parametrization invariance allows working with images of nonflat geometry, generally defined on any smoothly parametrizable manifold. The resulting energy-minimizing flow, however, has poor numerical properties. Here, we provide an efficient numerical scheme that uses a splitting approach: the data and regularity terms are optimized over two distinct deformation fields that are constrained to be equal via an augmented Lagrangian approach. Our approach is more flexible than standard Gaussian regularization, since one can interpolate freely between isotropic Gaussian and anisotropic TV-like smoothing. In this paper, we compare the geodesic active fields method with the popular Demons method and three more recent state-of-the-art algorithms: NL-optical flow, MRF image registration, and landmark-enhanced large displacement optical flow. This allows us to demonstrate the advantages of the proposed FastGAF method. It compares favorably against Demons in terms of both registration speed and quality. Over the range of example applications, it also consistently produces results not far from those of more dedicated state-of-the-art methods, illustrating the flexibility of the proposed framework.
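For contrast with the GAF approach, the sketch below shows the isotropic Gaussian regularization baseline mentioned above, applied to a noisy synthetic 2-D deformation field; the field size and smoothing width are assumptions, and this is not the FastGAF code.

```python
# Minimal sketch: isotropic Gaussian regularization of a 2-D deformation field,
# the Demons-style baseline that GAF generalizes. Values are synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)

# Noisy 2-D displacement field (two components on a 128x128 grid), in pixels
field = rng.normal(0.0, 1.0, size=(2, 128, 128))

def gaussian_regularize(displacement, sigma=2.0):
    """Smooth each displacement component with an isotropic Gaussian kernel."""
    return np.stack([gaussian_filter(component, sigma) for component in displacement])

smoothed = gaussian_regularize(field, sigma=2.0)
print(field.std().round(3), smoothed.std().round(3))   # variance drops after smoothing
# GAF instead couples data and regularity multiplicatively and, via the splitting
# scheme, lets the effective smoothing vary between this isotropic Gaussian case
# and anisotropic TV-like behaviour.
```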
Abstract:
This chapter explores the institutional environments in which standards for the service sector are expected to support the rise of a global knowledge-based economy. The analysis relies on global political economy approaches to extend to the area of services standards the assumption that the process of globalisation does not oppose states and markets but is a joint expression of both, including new patterns and agents of structural change through formal and informal power and regulatory practices. It analyses how services standards gain authority in the institutional environments of Europe and the United States and the extent to which this authority is recognised at the transnational level. In contrast to conventional views opposing the European and American standardisation systems, the chapter shows that institutional developments in services standards are likely to face trade-offs and compromises across those systems.
Abstract:
Estimation of the spatial statistics of subsurface velocity heterogeneity from surface-based geophysical reflection survey data is a problem of significant interest in seismic and ground-penetrating radar (GPR) research. A method to effectively address this problem has been recently presented, but our knowledge regarding the resolution of the estimated parameters is still inadequate. Here we examine this issue using an analytical approach that is based on the realistic assumption that the subsurface velocity structure can be characterized as a band-limited scale-invariant medium. Our work importantly confirms recent numerical findings that the inversion of seismic or GPR reflection data for the geostatistical properties of the probed subsurface region is sensitive to the aspect ratio of the velocity heterogeneity and to the decay of its power spectrum, but not to the individual values of the horizontal and vertical correlation lengths.
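The sketch below generates a band-limited, scale-invariant 2-D medium with a prescribed aspect ratio (the spectral exponent, grid and normalization are assumptions, not the paper's derivation); it illustrates the kind of model for which only the aspect ratio and the spectral decay, rather than the individual correlation lengths, are constrained by the reflection data.

```python
# Illustrative sketch: band-limited scale-invariant 2-D velocity perturbation
# with a power-law spectrum whose anisotropy is set by the aspect ratio.
import numpy as np

def scale_invariant_medium(nz, nx, aspect_ratio, hurst=0.3, seed=0):
    """Band-limited self-affine random field; only the aspect ratio is imposed."""
    rng = np.random.default_rng(seed)
    kz = np.fft.fftfreq(nz)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    k = np.sqrt((aspect_ratio * kx) ** 2 + kz ** 2)   # anisotropic radial wavenumber
    k[0, 0] = np.inf                                   # remove the zero-wavenumber (mean) term
    power = k ** (-(hurst + 1.0))                      # power-law (scale-invariant) decay
    noise = np.fft.fft2(rng.standard_normal((nz, nx)))
    field = np.fft.ifft2(np.sqrt(power) * noise).real
    return (field - field.mean()) / field.std()

medium = scale_invariant_medium(nz=256, nx=512, aspect_ratio=5.0)
print(medium.shape, medium.std().round(2))
```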