957 results for Points distribution in high dimensional space
Abstract:
Although the Standard Model of particle physics (SM) provides an extremely successful description of ordinary matter, it is known from astronomical observations that ordinary matter accounts for only around 5% of the total energy density of the Universe, whereas around 30% is contributed by dark matter. Motivated by anomalies in cosmic-ray observations and by attempts to resolve open questions of the SM such as the (g-2)_mu discrepancy, proposed U(1) extensions of the SM gauge group have attracted attention in recent years. In the considered U(1) extensions a new, light messenger particle, the hidden photon, couples to the hidden sector as well as to the electromagnetic current of the SM via kinetic mixing. This allows a search for the particle in laboratory experiments probing the electromagnetic interaction. Various experimental programs have been started to search for hidden photons, for example in electron-scattering experiments, which are a versatile tool for exploring a wide range of physics phenomena. One approach is the dedicated search in fixed-target experiments at modest energies, as performed at MAMI or at JLAB. In these experiments the scattering of an electron beam off a hadronic target, e+(A,Z)->e+(A,Z)+l^+l^-, is investigated, and a search for a very narrow resonance in the invariant mass distribution of the lepton pair is performed. This requires an accurate understanding of the theoretical basis of the underlying processes. For this purpose, the first part of this work demonstrates how the hidden photon can be motivated by existing puzzles encountered at the precision frontier of the SM. The main part of this thesis deals with the theoretical framework for electron-scattering fixed-target experiments searching for hidden photons. As a first step, the cross section for bremsstrahlung emission of hidden photons in such experiments is studied. Based on these results, the applicability of the Weizsäcker-Williams approximation, which is widely used to design such experimental setups, to the calculation of the signal cross section is investigated. In the next step, the reaction e+(A,Z)->e+(A,Z)+l^+l^- is analyzed as signal and background process in order to describe existing data obtained by the A1 experiment at MAMI, with the aim of giving accurate predictions of exclusion limits for the hidden photon parameter space. Finally, the derived methods are used to make predictions for future experiments, e.g., at MESA or at JLAB, allowing for a comprehensive study of the discovery potential of these complementary experiments. In the last part, a feasibility study for probing the hidden photon model with rare kaon decays is performed. For this purpose, invisible as well as visible decays of the hidden photon are considered within different classes of models. This allows one to derive bounds on the parameter space from existing data and to estimate the reach of future experiments.
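For orientation, the kinetic mixing referred to above is commonly parametrized by a small mixing parameter epsilon in the Lagrangian; the display below follows a standard textbook convention and is not taken from the thesis itself (conventions differ by factors of 2):

```latex
% Kinetic mixing between the SM photon field strength F_{\mu\nu} and the
% hidden-photon field strength F'_{\mu\nu}; \epsilon is the mixing parameter
% and m_{A'} the hidden-photon mass.
\mathcal{L} \supset
  -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}
  -\tfrac{1}{4} F'_{\mu\nu} F'^{\mu\nu}
  -\tfrac{\epsilon}{2}\, F_{\mu\nu} F'^{\mu\nu}
  +\tfrac{1}{2} m_{A'}^{2}\, A'_{\mu} A'^{\mu}
```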
Abstract:
With recent advances in mass spectrometry techniques, it is now possible to investigate proteins over a wide range of molecular weights in small biological specimens. This advance has generated data-analytic challenges in proteomics, similar to those created by microarray technologies in genetics, namely, discovery of "signature" protein profiles specific to each pathologic state (e.g., normal vs. cancer) or differential profiles between experimental conditions (e.g., treated by a drug of interest vs. untreated) from high-dimensional data. We propose a data analytic strategy for discovering protein biomarkers based on such high-dimensional mass-spectrometry data. A real biomarker-discovery project on prostate cancer is taken as a concrete example throughout the paper: the project aims to identify proteins in serum that distinguish cancer, benign hyperplasia, and normal states of prostate using the Surface Enhanced Laser Desorption/Ionization (SELDI) technology, a recently developed mass spectrometry technique. Our data analytic strategy takes properties of the SELDI mass-spectrometer into account: the SELDI output of a specimen contains about 48,000 (x, y) points where x is the protein mass divided by the number of charges introduced by ionization and y is the protein intensity of the corresponding mass per charge value, x, in that specimen. Given high coefficients of variation and other characteristics of protein intensity measures (y values), we reduce the measures of protein intensities to a set of binary variables that indicate peaks in the y-axis direction in the nearest neighborhoods of each mass per charge point in the x-axis direction. We then account for a shifting (measurement error) problem of the x-axis in SELDI output. After these pre-analysis processing of data, we combine the binary predictors to generate classification rules for cancer, benign hyperplasia, and normal states of prostate. Our approach is to apply the boosting algorithm to select binary predictors and construct a summary classifier. We empirically evaluate sensitivity and specificity of the resulting summary classifiers with a test dataset that is independent from the training dataset used to construct the summary classifiers. The proposed method performed nearly perfectly in distinguishing cancer and benign hyperplasia from normal. In the classification of cancer vs. benign hyperplasia, however, an appreciable proportion of the benign specimens were classified incorrectly as cancer. We discuss practical issues associated with our proposed approach to the analysis of SELDI output and its application in cancer biomarker discovery.
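As a rough illustration of the peak-indicator reduction described above, a point can be flagged as a peak if its intensity is the maximum within a local neighborhood on the mass-per-charge axis. This is a minimal sketch; the window width and relative-height threshold are assumptions for illustration, not values from the paper:

```python
import numpy as np

def peak_indicators(mz, intensity, window=0.002, min_rel_height=1.05):
    """Reduce a spectrum of (mz, intensity) points to binary peak indicators.

    A point is flagged as a peak if its intensity is the largest within a
    symmetric neighborhood of relative width `window` on the m/z axis and
    exceeds the neighborhood mean by the factor `min_rel_height`. Both
    parameters are illustrative choices, not those of the original analysis.
    """
    mz = np.asarray(mz, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    flags = np.zeros(mz.shape, dtype=int)
    for i, (x, y) in enumerate(zip(mz, intensity)):
        # Neighborhood in the x (m/z) direction, proportional to m/z.
        mask = np.abs(mz - x) <= window * x
        neighborhood = intensity[mask]
        if y >= neighborhood.max() and y >= min_rel_height * neighborhood.mean():
            flags[i] = 1
    return flags

# Example usage on a hypothetical spectrum array of shape (n_points, 2):
# flags = peak_indicators(spectrum[:, 0], spectrum[:, 1])
```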
Abstract:
In biostatistical applications, interest often focuses on the estimation of the distribution of the time T between two consecutive events. If the initial event time is observed and the subsequent event time is only known to be larger or smaller than an observed monitoring time C, then the data are described by the well-known singly censored current status model, also known as interval-censored data, case I. We extend this current status model by allowing the presence of a time-dependent covariate process, which is partly observed, and by allowing C to depend on T through the observed part of this process. Because of the high dimension of the covariate process, no globally efficient estimators exist with good practical performance at moderate sample sizes. We follow the approach of Robins and Rotnitzky (1992) by modeling the censoring variable, given the time variable and the covariate process, i.e., the missingness process, under the restriction that it satisfies coarsening at random. We propose a generalization of the simple current status estimator of the distribution of T and of smooth functionals of the distribution of T, which is based on an estimate of the missingness process. In this estimator the covariates enter only through the estimate of the missingness process. Due to the coarsening at random assumption, the estimator has the interesting property that estimating the missingness process more nonparametrically improves its efficiency. We show that by local estimation of an optimal model or optimal function of the covariates for the missingness process, the generalized current status estimator of smooth functionals becomes locally efficient, meaning that it is efficient if the right model or covariate is consistently estimated, and it is consistent and asymptotically normal in general. Estimation of the optimal model requires estimation of the conditional distribution of T given the covariates. Any (prior) knowledge of this conditional distribution can be used at this stage without any risk of losing root-n consistency. We also propose locally efficient one-step estimators. Finally, we present some simulation results.
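To make the current status data structure concrete, the observed data in the simple (covariate-free) model consist of the monitoring time C and the indicator Delta = 1{T <= C}; T itself is never observed. The following is a minimal simulation sketch with assumed exponential event times and uniform monitoring times; the distributions, bandwidth, and naive plug-in estimate are illustrative assumptions and not the estimator proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical event times T (exponential) and monitoring times C (uniform).
T = rng.exponential(scale=2.0, size=n)
C = rng.uniform(0.0, 5.0, size=n)

# Current status observation: only C and the indicator of whether the event
# has already occurred by the monitoring time are seen.
delta = (T <= C).astype(int)

# Naive local plug-in estimate of F(t) = P(T <= t) at t = 2.0 using only
# monitoring times near t (a crude smoother for illustration only).
t0, h = 2.0, 0.25
near = np.abs(C - t0) <= h
F_hat = delta[near].mean() if near.any() else np.nan
print(f"Estimated F({t0}) ~ {F_hat:.3f}  (true value {1 - np.exp(-t0 / 2.0):.3f})")
```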
Abstract:
Water flow and solute transport through soils are strongly influenced by the spatial arrangement of soil materials with different hydraulic and chemical properties. Knowing the specific or statistical arrangement of these materials is considered a key toward improved predictions of solute transport. Our aim was to obtain two-dimensional material maps from photographs of exposed profiles. We developed a segmentation and classification procedure and applied it to images of a very heterogeneous sand tank, which was used for a series of flow and transport experiments. The segmentation was based on thresholds of soil color, estimated from local median gray values, and of soil texture, estimated from local coefficients of variation of gray values. Important steps were the correction of inhomogeneous illumination and reflection, and the incorporation of prior knowledge into the filters used to extract the image features and to smooth the results morphologically. We could check and confirm the success of our mapping by comparing the estimated sand distribution with the designed sand distribution in the tank. The resulting material map was later used as input for modeling flow and transport through the sand tank. Similar segmentation procedures may be applied to any high-density raster data, including photographs or spectral scans of field profiles.
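A minimal sketch of the two-feature thresholding idea described above (local median gray value as a color proxy, local coefficient of variation as a texture proxy) could look as follows. The window size and thresholds are assumptions for illustration, the input is assumed to be a grayscale array scaled to [0, 1], and no illumination correction or morphological smoothing is included:

```python
import numpy as np
from scipy import ndimage

def segment_materials(gray, win=15, color_thresh=0.5, cv_thresh=0.15):
    """Rough two-feature segmentation of a grayscale profile image.

    'Color' feature: local median gray value. 'Texture' feature: local
    coefficient of variation. Thresholding both features yields integer
    class labels 0..3. All parameter values are illustrative assumptions.
    """
    gray = np.asarray(gray, dtype=float)

    # Local median gray value (proxy for soil color / brightness).
    median = ndimage.median_filter(gray, size=win)

    # Local coefficient of variation (proxy for soil texture).
    mean = ndimage.uniform_filter(gray, size=win)
    mean_sq = ndimage.uniform_filter(gray ** 2, size=win)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    cv = std / np.maximum(mean, 1e-9)

    # Combine the two thresholded features into class labels.
    return (median > color_thresh).astype(int) + 2 * (cv > cv_thresh).astype(int)
```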
Abstract:
The field of molecule-based magnets is a relatively new branch of chemistry, which involves the design and study of molecular compounds that exhibit spontaneous magnetic ordering below a critical temperature, Tc. One major goal is the design of materials with tuneable Tc's for specific applications in memory storage devices. Molecule-based magnets with high magnetic ordering temperatures have recently been obtained from bimetallic and mixed-valence transition metal μ-cyanide complexes of the Prussian blue family. Since the μ-cyanide linkages permit an interaction between paramagnetic metal ions, cyanometalate building blocks have found useful applications in the field of molecule-based magnets. Our work involves the use of octacyanometalate building blocks for the self-assembly of two new classes of magnetic materials, namely high-spin molecular clusters, which exhibit both ferromagnetic intra- and intercluster coupling, and specific extended network topologies, which show long-range ferromagnetic ordering.
Abstract:
From 1978 to 1981, intensive sedimentological investigations were carried out on the North Frisian intertidal shoals between the small island of Gröde and Nordstrand Island as part of an interdisciplinary research project. The objective of this sedimentological study was to reveal long- and short-term tendencies in sedimentation and erosion in this environment. The study concentrated mainly on surface mapping of the tidal flats, based on more than 5000 sediment samples. The relative amounts of the grain-size fractions <0.063 mm and >0.125 mm are presented on maps. The predominant sediment types are well-sorted fine sands ("Wattsand") and muddy sands ("Schlicksand"), with pure muds covering only small areas. The fine-grained deposits are found either on the lee side of an island, in elongated nearshore bays with a low water depth during high tide, or near exposed "Klei" outcrops, as well as sporadically at the edges of tidal rills. Together with standardized field observations of biological and physical properties, the maps indicate a slight erosive tendency in large sections of the investigated area.