935 results for restriction of parameter space
Abstract:
BACKGROUND: The Life-Space Assessment (LSA), developed in the USA, is an instrument focusing on mobility with respect to reaching different areas defined as life-spaces, extending from the room where the person sleeps to mobility outside one's hometown. A newly translated Swedish version of the LSA (LSA-S) has been tested for test-retest reliability, but its validity remains to be tested. The purpose of the present study was to examine the concurrent validity of the LSA-S by comparing and correlating the LSA scores with other measures of mobility. METHOD: The LSA was included in a population-based study of health, functioning and mobility among older persons in Sweden, and the present analysis comprised 312 community-dwelling participants. To test the concurrent validity, the LSA scores were compared to a number of other mobility-related variables, including the Short Physical Performance Battery (SPPB) as well as "stair climbing", "transfers", "transportation", "food shopping", "travel for pleasure" and "community activities". The LSA total mean scores for different levels of the other mobility-related variables were calculated, together with measures of correlation. RESULTS: Higher LSA total mean scores were observed with higher levels of all the other mobility-related variables. Most of the correlations between the LSA and the other mobility variables were large (r = 0.5-1.0) and significant at the 0.01 level. The LSA total score, as well as the independent life-space and assistive life-space scores, correlated with transportation (0.63, 0.66, 0.64) and food shopping (0.55, 0.58, 0.55). Assistive life-space also correlated with the SPPB (0.47). With respect to maximal life-space, the correlations with the mobility-related variables were generally lower (below 0.5), probably because this aspect of life-space mobility is highly influenced by social support and less dependent on the individual's own physical function.
CONCLUSION: The LSA was shown to be a valid measure of mobility when using the LSA total, independent life-space or assistive life-space scores.
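As a minimal illustration of the concurrent-validity computation, the sketch below calculates a Pearson correlation between hypothetical LSA totals and an ordinal transportation score. The values are invented, not the study's data, and the abstract does not state which correlation coefficient was used, so Pearson is an assumption.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical LSA total scores and transportation levels (illustrative only)
lsa_total = [52, 68, 75, 40, 90, 61]
transport = [2, 3, 3, 1, 4, 2]
r = pearson_r(lsa_total, transport)
```

A correlation above 0.5 on real data would count as "large" in the study's terms.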
Abstract:
This thesis is concerned with change point analysis for time series, i.e. with the detection of structural breaks in time-ordered, random data. This long-standing research field has regained popularity over the last few years and, like statistical analysis in general, is still undergoing a transformation to high-dimensional problems. We focus on the fundamental »change in the mean« problem and provide extensions of the classical non-parametric Darling-Erdős-type cumulative sum (CUSUM) testing and estimation theory within high-dimensional Hilbert space settings. In the first part we contribute to (long run) principal component based testing methods for Hilbert space valued time series under a rather broad (abrupt, epidemic, gradual, multiple) change setting and under dependence. For the dependence structure we consider either traditional m-dependence assumptions or more recently developed m-approximability conditions which cover, e.g., MA, AR and ARCH models. We derive Gumbel and Brownian bridge type approximations of the distribution of the test statistic under the null hypothesis of no change, and consistency conditions under the alternative. A new formulation of the test statistic using projections on subspaces allows us to simplify the standard proof techniques and to weaken common assumptions on the covariance structure. Furthermore, we propose to adjust the principal components by an implicit estimation of a (possible) change direction. This approach adds flexibility to projection based methods, weakens typical technical conditions and provides better consistency properties under the alternative. In the second part we contribute to estimation methods for common changes in the means of panels of Hilbert space valued time series. We analyze weighted CUSUM estimates within a recently proposed »high-dimensional low sample size (HDLSS)« framework, where the sample size is fixed but the number of panels increases.
We derive sharp conditions on »pointwise asymptotic accuracy« or »uniform asymptotic accuracy« of those estimates in terms of the weighting function. In particular, we prove that a covariance-based correction of Darling-Erdős-type CUSUM estimates is required to guarantee uniform asymptotic accuracy under moderate dependence conditions within panels and that these conditions are fulfilled, e.g., by any MA(1) time series. As a counterexample we show that for AR(1) time series, close to the non-stationary case, the dependence is too strong and uniform asymptotic accuracy cannot be ensured. Finally, we conduct simulations to demonstrate that our results are practically applicable and that our methodological suggestions are advantageous.
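A univariate special case of the CUSUM change-in-mean estimation discussed above can be sketched as follows. The thesis works in Hilbert space settings under dependence; this scalar version on simulated independent data is only illustrative.

```python
import numpy as np

def cusum_changepoint(x):
    """CUSUM statistic profile and estimated change location for a single
    change in the mean of a univariate series (scalar toy version of the
    Hilbert-space setting above)."""
    n = len(x)
    s = np.cumsum(x - x.mean())            # partial sums about the global mean
    stat = np.abs(s[:-1]) / np.sqrt(n)     # CUSUM profile at k = 1, ..., n-1
    k_hat = int(np.argmax(stat)) + 1       # argmax estimates the change point
    return stat, k_hat

# simulate a mean shift of size 2 at index 100
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 100)])
stat, k_hat = cusum_changepoint(x)
```

For a shift of this size the argmax typically lands within a few observations of the true change point.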
Abstract:
Dynamical models of stellar systems represent a powerful tool to study their internal structure and dynamics, to interpret the observed morphological and kinematical fields, and also to support numerical simulations of their evolution. We present a method especially designed to build axisymmetric Jeans models of galaxies, assumed to be stationary and collisionless stellar systems. The aim is the development of a rigorous and flexible modelling procedure for multicomponent galaxies, composed of different stellar and dark matter distributions, and a central supermassive black hole. The stellar components, in particular, are intended to represent different galaxy structures, such as discs, bulges and halos, and can then have different structural (density profile, flattening, mass, scale-length), dynamical (rotation, velocity dispersion anisotropy), and population (age, metallicity, initial mass function, mass-to-light ratio) properties. The theoretical framework supporting the modelling procedure is presented, with the introduction of a suitable nomenclature, and its numerical implementation is discussed, with particular reference to the numerical code JASMINE2, developed for this purpose. We propose an approach for efficiently scaling the contributions in mass, luminosity, and rotational support of the different matter components, allowing for fast and flexible explorations of the model parameter space. We also present different methods for the computation of the gravitational potentials associated with the density components, especially convenient for their numerical tractability. A few galaxy models are studied, showing the internal and projected structural and dynamical properties of multicomponent galaxies, with a focus on axisymmetric early-type galaxies with complex kinematical morphologies.
The application of galaxy models to the study of initial conditions for hydrodynamical and $N$-body simulations of galaxy evolution is also addressed, allowing in particular the investigation of the large number of interesting combinations of the parameters that determine the structure and dynamics of complex multicomponent stellar systems.
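The Jeans machinery above can be illustrated in its simplest form. The sketch below (a toy check, not the JASMINE2 code) integrates the isotropic spherical Jeans equation for a Plummer sphere with G = M = a = 1 and compares the result with the known analytic solution sigma^2(r) = 1/(6*sqrt(1 + r^2)).

```python
import numpy as np

def rho(r):                     # Plummer density profile (G = M = a = 1)
    return 3.0 / (4.0 * np.pi) * (1.0 + r ** 2) ** -2.5

def mass(r):                    # enclosed mass of the Plummer sphere
    return r ** 3 / (1.0 + r ** 2) ** 1.5

def sigma2(r0, r_max=200.0, n=200_000):
    """Velocity dispersion from the Jeans equation:
    rho*sigma^2(r0) = integral_{r0}^{inf} rho(s) * M(s) / s^2 ds."""
    s = np.linspace(r0, r_max, n)
    y = rho(s) * mass(s) / s ** 2
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(s))  # trapezoid rule
    return integral / rho(r0)

numeric = sigma2(1.0)
analytic = 1.0 / (6.0 * np.sqrt(2.0))   # exact value at r = 1
```

Agreement of the numerical integral with the closed form is a standard sanity check before moving to multicomponent, axisymmetric models.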
Abstract:
Cool giant and supergiant stars are among the brightest populations in any stellar system and they are easily observable out to large distances, especially at infrared wavelengths. These stars also dominate the integrated light of star clusters over a wide range of ages, making them powerful tracers of stellar populations in more distant galaxies. High-resolution near-IR spectroscopy is a key tool for quantitatively investigating their kinematic, evolutionary and chemical properties. However, the systematic exploration and calibration of the NIR spectral diagnostics to study these cool stellar populations based on high-resolution spectroscopy is still in its pioneering stage. Any effort to make progress in the field is innovative and of impact on stellar archaeology and stellar evolution. This PhD project takes on the challenge of exploring that new parameter space and characterizing the physical properties, the chemical content and the kinematics of cool giants and supergiants in selected disc fields and clusters of our Galaxy, with the ultimate goal of tracing their past and recent star formation and chemical enrichment history. By using optical HARPS-N and near-infrared GIANO-B high-resolution stellar spectra in the context of the large program “SPA - Stellar Population Astrophysics: the detailed, age-resolved chemistry of the Milky Way disk” (PI L. Origlia), an extensive study of Arcturus, a standard calibrator for red giant stars, has been performed. New diagnostics of stellar parameters as well as optimal line lists for chemical analysis have been provided. These diagnostics have then been used to determine evolutionary properties, detailed chemical abundances of almost 30 different elements and mixing processes for a homogeneous sample of red supergiant stars in the Perseus complex.
Abstract:
The navigation of deep space spacecraft requires accurate measurement of the probe's state and attitude with respect to a body whose ephemerides may not be known with good accuracy. The heliocentric state of the spacecraft is estimated through radiometric techniques (ranging, Doppler, and Delta-DOR), while optical observables can be introduced to reduce the uncertainty in the relative position and attitude with respect to the target body. In this study, we analyze how simulated optical observables affect the estimation of parameters in an orbit determination problem, considering the case of ESA's Hera mission towards the binary asteroid system composed of Didymos and Dimorphos. To this end, a shape model and a photometric function are used to create synthetic onboard camera images. Then, using a stereophotoclinometry technique on some of the simulated images, we create a database of maplets that describe the 3D geometry of the surface around a set of landmarks. The matching of maplets with the simulated images provides the optical observables, expressed as pixel coordinates in the camera frame, which are fed to an orbit determination filter to estimate a certain number of solve-for parameters. The noise introduced into the optical observables by the image processing can be quantified using the quality of the residuals as a metric, which is then used to fine-tune the maplet-matching parameters. In particular, the best results are obtained when using small maplets with high correlation coefficients and occupation factors.
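The optical observables above are pixel coordinates of surface landmarks in the camera frame. A minimal pinhole-camera sketch of that projection step is shown below; the focal length, principal point, and geometry are hypothetical placeholders, not Hera's camera model.

```python
import numpy as np

def project_landmark(p_body, r_cam, R_cb, f, c):
    """Project a body-fixed landmark into pixel coordinates (u, v) with a
    pinhole model: rotate into the camera frame, then divide by depth.
    f is the focal length in pixels, c the principal point."""
    p_cam = R_cb @ (p_body - r_cam)        # landmark position in camera frame
    u = f * p_cam[0] / p_cam[2] + c[0]
    v = f * p_cam[1] / p_cam[2] + c[1]
    return np.array([u, v])

# Toy example: landmark 10 km away, exactly on the camera boresight
landmark = np.array([0.0, 0.0, 10_000.0])  # metres, body-fixed frame
cam_pos = np.zeros(3)
R = np.eye(3)                              # camera frame aligned with body frame
px = project_landmark(landmark, cam_pos, R, f=1000.0, c=(512.0, 512.0))
```

In the full pipeline, the residuals between such predicted pixels and the maplet-matched pixels drive the orbit determination filter.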
Abstract:
The aim of this study is to test the feasibility and reproducibility of diffusion-weighted magnetic resonance imaging (DW-MRI) evaluations of the fetal brain in cases of twin-twin transfusion syndrome (TTTS). From May 2011 to June 2012, 24 patients with severe TTTS underwent MRI scans for evaluation of the fetal brains. Datasets were analyzed offline on axial DW images and apparent diffusion coefficient (ADC) maps by two radiologists. The subjective evaluation was described as the absence or presence of water diffusion restriction. The objective evaluation was performed by the placement of 20-mm² circular regions of interest on the DW images and ADC maps. Subjective interobserver agreement was assessed by the kappa correlation coefficient. Objective intraobserver and interobserver agreements were assessed by proportionate Bland-Altman tests. Seventy-four DW-MRI scans were performed, sixty of which (81.1%) were considered to be of good quality. Agreement between the radiologists was 100% for the absence or presence of diffusion restriction of water. For both intraobserver and interobserver agreement of ADC measurements, proportionate Bland-Altman tests showed average percentage differences of less than 1.5% and 95% CIs of less than 18% for all sites evaluated. Our data demonstrate that DW-MRI evaluation of the fetal brain in TTTS is feasible and reproducible.
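The proportionate Bland-Altman analysis used above can be sketched as follows: differences between paired readings are expressed as percentages of the pair means, and limits of agreement are taken as the mean difference ± 1.96 SD. The paired ADC readings below are hypothetical, not the study's data.

```python
import statistics

def proportionate_bland_altman(a, b):
    """Percentage differences 200*(x - y)/(x + y) for paired readings, with
    their mean and 95% limits of agreement (mean +/- 1.96 SD)."""
    diffs = [200.0 * (x - y) / (x + y) for x, y in zip(a, b)]
    m = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return m, (m - 1.96 * sd, m + 1.96 * sd)

# Hypothetical paired ADC readings (x1e-6 mm^2/s) from two observers
obs1 = [1100, 1180, 1210, 1150, 1300, 1250]
obs2 = [1120, 1170, 1200, 1160, 1310, 1240]
mean_pct, limits = proportionate_bland_altman(obs1, obs2)
```

A mean percentage difference near zero with narrow limits, as reported in the study, indicates good observer agreement.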
Abstract:
We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: 14 <= r <= 21 (85.2%) and r >= 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT, and Ball et al. We find that our FT classifier is comparable to or better in completeness over the full magnitude range 15 <= r <= 21, with much lower contamination than all but the Ball et al. classifier. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (> 80%) while simultaneously achieving low contamination (~2.5%). We also examine the SDSS parametric classifier (psfMag - modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We find that currently stars in close pairs are often misclassified as galaxies, and suggest a new cut to improve the classifier. Finally, we apply our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects in the magnitude range 14 <= r <= 21.
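The parametric cut and the completeness metric discussed above can be sketched in a few lines. The 0.145 mag value is the commonly quoted SDSS dividing line and is used here only as an illustration; the paper's point is precisely that this cut can be improved.

```python
def sdss_parametric_class(psf_mag, model_mag, cut=0.145):
    """Star/galaxy separation by the psfMag - modelMag concentration:
    point-like objects (small difference) are classified as stars."""
    return "STAR" if psf_mag - model_mag < cut else "GALAXY"

def completeness(predicted, truth, cls):
    """Fraction of true members of class `cls` recovered by the classifier,
    the efficiency measure used in the study."""
    hits = sum(1 for p, t in zip(predicted, truth) if t == cls and p == cls)
    total = sum(1 for t in truth if t == cls)
    return hits / total
```

On the spectroscopic training set, completeness computed this way per magnitude bin is what ranks the 13 decision tree algorithms.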
Abstract:
The study of displaced vertices containing two b-jets may provide a double discovery at the Large Hadron Collider (LHC): we show how it may not only reveal evidence for supersymmetry, but also provide a way to uncover the Higgs boson necessary in the formulation of the electroweak theory in a large region of the parameter space. We quantify this explicitly using the simplest minimal supergravity model with bilinear breaking of R-parity, which accounts for the observed pattern of neutrino masses and mixings seen in neutrino oscillation experiments.
Abstract:
We investigate a neutrino mass model in which the neutrino data is accounted for by bilinear R-parity violating supersymmetry with anomaly mediated supersymmetry breaking. We focus on the CERN Large Hadron Collider (LHC) phenomenology, studying the reach of generic supersymmetry search channels with leptons, missing energy and jets. A special feature of this model is the existence of long-lived neutralinos and charginos which decay inside the detector leading to detached vertices. We demonstrate that the largest reach is obtained in the displaced vertices channel and that practically all of the reasonable parameter space will be covered with an integrated luminosity of 10 fb⁻¹. We also compare the displaced vertex reaches of the LHC and Tevatron.
Abstract:
We examine the possibility that a new strong interaction is accessible to the Tevatron and the LHC. In an effective theory approach, we consider a scenario with a new color-octet interaction with strong couplings to the top quark, as well as the presence of a strongly coupled fourth generation which could be responsible for electroweak symmetry breaking. We apply several constraints, including the ones from flavor physics. We study the phenomenology of the resulting parameter space at the Tevatron, focusing on the forward-backward asymmetry in top pair production, as well as in the production of the fourth-generation quarks. We show that if the excess in the top production asymmetry is indeed the result of this new interaction, the Tevatron could see the first hints of the strongly coupled fourth-generation quarks. Finally, we show that the LHC with √s = 7 TeV and 1 fb⁻¹ integrated luminosity should observe the production of fourth-generation quarks at a level at least 1 order of magnitude above the QCD prediction for the production of these states.
Abstract:
We use the recent results on dark matter searches of the 22-string IceCube detector to probe the remaining allowed window for strongly interacting dark matter in the mass range 10⁴ < m_X < 10¹⁵ GeV. We calculate the expected signal in the 22-string IceCube detector from the annihilation of such particles captured in the Sun and compare it to the detected background. As a result, the remaining allowed region in the mass versus cross section parameter space is ruled out. We also show the expected sensitivity of the complete IceCube detector with 86 strings.
Abstract:
This work deals with the problem of minimizing the waste of space that occurs in a rotational placement of a set of irregular two-dimensional items inside a two-dimensional container. This problem is approached with a heuristic based on Simulated Annealing (SA) with adaptive neighborhood. The objective function is evaluated in a constructive approach, where the items are placed sequentially. The placement is governed by three different types of parameters: the sequence of placement, the rotation angle and the translation. The rotation and the translation applied to each polygon are cyclic continuous parameters, while the sequence of placement defines a combinatorial problem. This way, it is necessary to control both cyclic continuous and discrete parameters. The approaches described in the literature deal with only one type of parameter (sequence of placement or translation). In the proposed SA algorithm, the sensitivity of each continuous parameter is evaluated at each iteration, increasing the number of accepted solutions. The sensitivity of each parameter is associated with its probability distribution in the definition of the next candidate.
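The adaptive-neighborhood idea can be sketched for a single continuous parameter: the step size is widened when candidates are accepted and narrowed when they are rejected, which is a stand-in for the sensitivity mechanism described above. The placement heuristic itself additionally handles cyclic and combinatorial parameters, which this toy one-dimensional example omits.

```python
import math
import random

def simulated_annealing(cost, x0, step0=1.0, t0=10.0, alpha=0.95, iters=2000):
    """Minimal SA sketch with an adaptive neighborhood size for one
    continuous parameter (illustrative, not the paper's algorithm)."""
    random.seed(1)
    x, fx, step, t = x0, cost(x0), step0, t0
    for _ in range(iters):
        y = x + random.uniform(-step, step)   # candidate in the neighborhood
        fy = cost(y)
        if fy < fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            step *= 1.05                      # accepted: widen the neighborhood
        else:
            step *= 0.95                      # rejected: narrow it
        t *= alpha                            # geometric cooling schedule
    return x, fx

# minimize a simple quadratic with minimum at x = 3
best_x, best_f = simulated_annealing(lambda v: (v - 3.0) ** 2, x0=10.0)
```

Tying the step size to the acceptance history keeps the acceptance rate useful throughout the cooling schedule, which is the motivation for the adaptive neighborhood.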
Abstract:
Three main models of parameter setting have been proposed: the Variational model proposed by Yang (2002; 2004), the Structured Acquisition model endorsed by Baker (2001; 2005), and the Very Early Parameter Setting (VEPS) model advanced by Wexler (1998). The VEPS model contends that parameters are set early. The Variational model supposes that children employ statistical learning mechanisms to decide among competing parameter values, so this model anticipates delays in parameter setting when critical input is sparse, and gradual setting of parameters. On the Structured Acquisition model, delays occur because parameters form a hierarchy, with higher-level parameters set before lower-level parameters. Assuming that children freely choose the initial value, they will sometimes mis-set parameters. However, when that happens, the input is expected to trigger a precipitous rise in one parameter value and a corresponding decline in the other value. We will point to the kind of child language data that is needed in order to adjudicate among these competing models.
Abstract:
Nearest-neighbour balance is considered a desirable property for an experiment to possess in situations where experimental units are influenced by their neighbours. This paper introduces a measure of the degree of nearest-neighbour balance of a design. The measure is used in an algorithm which generates nearest-neighbour balanced designs and is readily modified to obtain designs with various types of nearest-neighbour balance. Nearest-neighbour balanced designs are produced for a wide class of parameter settings, and in particular for those settings for which such designs cannot be found by existing direct combinatorial methods. In addition, designs with unequal row and column sizes, and designs with border plots, are constructed using the approach presented here.
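One simple way to quantify nearest-neighbour balance is to count how often each unordered pair of distinct treatments occurs in adjacent plots and measure the spread of those counts. This is an illustrative measure of our own, not necessarily the paper's exact definition.

```python
from itertools import combinations
from collections import Counter

def nn_imbalance(rows, treatments):
    """Spread (max - min) of adjacency counts over all unordered pairs of
    distinct treatments; 0 means every pair appears equally often as
    neighbours within rows (an illustrative balance measure)."""
    counts = Counter()
    for row in rows:
        for a, b in zip(row, row[1:]):
            if a != b:
                counts[frozenset((a, b))] += 1
    vals = [counts[frozenset(p)] for p in combinations(treatments, 2)]
    return max(vals) - min(vals)

# A cyclic 3x3 layout: each pair of treatments is adjacent equally often
design = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]
imb = nn_imbalance(design, ["A", "B", "C"])
```

A search algorithm of the kind described above would iteratively modify a design to drive such a measure toward zero.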
Abstract:
Silicic volcanic eruptions are typically accompanied by repetitive Long-Period (LP) seismicity that originates from a small region of the upper conduit. These signals have the capability to advance eruption prediction, since they commonly precede a change in the eruption vigour. Shear bands forming along the conduit wall, where the shear stresses are highest, have been linked to providing the seismic trigger. However, existing computational models using simple magma strength models are unable to generate shear bands at the depths where the LP signals originate. Presented here is a model in which the magma strength is determined from a constitutive relationship dependent upon crystallinity and pressure. This results in a depth-dependent magma strength, analogous to planetary lithospheres. Hence, in shallow highly-crystalline regions a macroscopically discontinuous brittle type of deformation will prevail, whilst in deeper crystal-poor regions there will be a macroscopically continuous plastic deformation mechanism. This defines a depth at which the brittle-ductile transition occurs, and here shear bands disconnected from the free surface may develop. We utilize the Finite Element Method in axisymmetric coordinates to model magma flow as a viscoplastic material, simulating quasi-static shear bands along the walls of a volcanic conduit. Model results constrained to the Soufrière Hills Volcano, Montserrat, show the generation of two types of shear bands: upper-conduit shear bands that form between the free surface and a few hundred metres below it, and discrete shear bands that form at the depths where LP seismicity is measured to occur, corresponding to the brittle-ductile transition and the plastic shear region. It is beyond the scope of the model to simulate a seismic event, although the modelled viscosity within the discrete shear bands suggests a failure and healing cycle time that supports the observed LP seismicity repeat times.
However, due to the paucity of data and the large available parameter space, these results can only be considered qualitative rather than quantitative at this stage.
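The lithosphere analogy above can be illustrated with a yield-strength-envelope-style sketch: the brittle-ductile transition depth is where a pressure-dependent brittle strength overtakes a strength that weakens with depth as crystallinity drops. All parameter values below are hypothetical placeholders, not the paper's constitutive law.

```python
import numpy as np

def brittle_strength(z, mu=0.6, rho=2300.0, g=9.81):
    """Frictional (brittle) strength in Pa, growing with lithostatic
    pressure at depth z (m); mu, rho are illustrative values."""
    return mu * rho * g * z

def ductile_strength(z, s0=40e6, k=1.5e-3):
    """Illustrative ductile strength in Pa, decaying with depth as the
    magma becomes crystal-poor (hypothetical exponential form)."""
    return s0 * np.exp(-k * z)

# The transition sits where the two strength curves cross
z = np.linspace(1.0, 2000.0, 2000)
bdt_depth = z[np.argmin(np.abs(brittle_strength(z) - ductile_strength(z)))]
```

Above the crossing depth the brittle mechanism is weaker and so controls deformation; below it the plastic mechanism takes over, which is where the disconnected shear bands of the model develop.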