990 results for threshold random variable
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator, with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate: model-based estimators are justified by the assumption of random (interchangeable) area effects, whereas in practice areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: a) those whose weights involve area-specific estimates of bias and variance; and b) those whose weights involve a common variance and a common squared-bias estimate for all the areas. We assess their precision and discuss alternatives for optimizing composite estimation in applications.
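In generic notation (assumed here; not necessarily the paper's), a composite estimator for area a takes the form

    \hat{\theta}_a^{C} = w_a \, \hat{\theta}_a^{dir} + (1 - w_a) \, \hat{\theta}_a^{ind}, \qquad 0 \le w_a \le 1,

and, if the two components have uncorrelated errors, the MSE-minimizing weight is

    w_a^{*} = \frac{\mathrm{MSE}(\hat{\theta}_a^{ind})}{\mathrm{MSE}(\hat{\theta}_a^{dir}) + \mathrm{MSE}(\hat{\theta}_a^{ind})},

which is where the area-specific (type a) or pooled (type b) bias and variance estimates enter.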
Abstract:
We present an exact test for whether two random variables that have known bounds on their support are negatively correlated. The alternative hypothesis is that they are not negatively correlated. No assumptions are made on the underlying distributions. We show by example that the Spearman rank correlation test, as the competing exact test of correlation in nonparametric settings, rests on an additional assumption on the data-generating process without which it is not valid as a test for correlation. We then show how to test for the significance of the slope in a linear regression analysis that involves a single independent variable and where outcomes of the dependent variable belong to a known bounded set.
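For reference, the competing Spearman test discussed above can be run in a few lines; this is an illustrative sketch using scipy on made-up bounded data, not the authors' proposed exact test:

    import numpy as np
    from scipy.stats import spearmanr

    # Illustrative data on a known bounded support [0, 1] (made up for the demo).
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 30)
    y = 1 - x + rng.uniform(-0.2, 0.2, 30)  # negatively associated by construction

    # One-sided Spearman test: alternative='less' looks for negative correlation.
    rho, p = spearmanr(x, y, alternative='less')
    print(rho, p)
    # As the abstract notes, interpreting this as a test *for correlation*
    # requires an extra assumption on the data-generating process.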
Abstract:
We propose a novel compressed sensing technique to accelerate the magnetic resonance imaging (MRI) acquisition process. The method, coined spread spectrum MRI, or simply s²MRI, consists of premodulating the signal of interest by a linear chirp before random k-space under-sampling, and then reconstructing the signal with nonlinear algorithms that promote sparsity. The effectiveness of the procedure is theoretically underpinned by the optimization of the coherence between the sparsity and sensing bases. The proposed technique is thoroughly studied by means of numerical simulations, as well as phantom and in vivo experiments on a 7T scanner. Our results suggest that s²MRI performs better than state-of-the-art variable-density k-space under-sampling approaches.
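A minimal 1-D sketch of this acquisition model (signal size, sparsity level, chirp rate, and sampling ratio are all assumed for illustration; the paper works on 2-D k-space data):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 256
    x = np.zeros(n)
    x[rng.choice(n, 8, replace=False)] = 1.0       # signal, sparse in the Dirac basis
    w = 0.5                                        # linear chirp rate (assumed)
    chirp = np.exp(1j * np.pi * w * (np.arange(n) - n / 2) ** 2 / n)
    mask = rng.random(n) < 0.25                    # random 25% k-space under-sampling
    y = np.fft.fft(chirp * x)[mask] / np.sqrt(n)   # measured Fourier coefficients
    # Reconstruction then promotes sparsity, e.g. min ||z||_1 s.t. A z = y with
    # A z = (F(chirp * z)) restricted to mask; any standard l1 solver applies.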
Abstract:
This paper discusses inference in self-exciting threshold autoregressive (SETAR) models. Of main interest is inference for the threshold parameter. It is well known that the asymptotics of the corresponding estimator depend upon whether the SETAR model is continuous or not. In the continuous case, the limiting distribution is normal and standard inference is possible. In the discontinuous case, the limiting distribution is non-normal and cannot be estimated consistently. We show that valid inference can be drawn by the use of the subsampling method. Moreover, the method can even be extended to situations where the (dis)continuity of the model is unknown. In this case, inference for the regression parameters of the model also becomes difficult, and subsampling can be used advantageously there as well. In addition, we consider a hypothesis test for the continuity of the SETAR model. A simulation study examines small-sample performance.
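For concreteness, a two-regime SETAR model with one lag can be simulated in a few lines (parameter values here are illustrative assumptions, not the paper's):

    import numpy as np

    # y_t follows regime 1 when y_{t-1} <= r and regime 2 otherwise.
    rng = np.random.default_rng(0)
    r, (a1, b1), (a2, b2) = 0.0, (0.5, 0.6), (-0.5, 0.3)
    y = np.zeros(500)
    for t in range(1, 500):
        a, b = (a1, b1) if y[t - 1] <= r else (a2, b2)
        y[t] = a + b * y[t - 1] + rng.normal()
    # The model is continuous iff the two regime lines meet at the threshold,
    # i.e. a1 + b1 * r == a2 + b2 * r; here they do not, so the threshold
    # estimator has a non-normal limit law and subsampling-based inference applies.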
Abstract:
Most facility location decision models ignore the fact that for a facility to survive it needs a minimum demand level to cover costs. In this paper we present a decision model for a firm that wishes to enter a spatial market where there are several competitors already located. This market is such that for each outlet there is a demand threshold level that has to be achieved in order to survive. The firm wishes to know where to locate its outlets so as to maximize its market share, taking into account the threshold level. It may happen that, due to this new entrance, some competitors will not be able to meet the threshold and will therefore disappear. A formulation is presented together with a heuristic solution method and computational experience.
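A hypothetical sketch of the survival mechanism (the nearest-outlet demand allocation and all names are assumptions, not the paper's formulation): demand points patronize their nearest open outlet, outlets capturing less than the threshold T close, and demand is reallocated until the configuration stabilizes.

    import numpy as np

    def surviving_share(my_sites, comp_sites, pts, demand, T):
        # pts: (n, 2) array of demand points; sites: lists of (x, y) tuples.
        # Iterate closures: each demand point goes to its nearest open outlet;
        # any outlet capturing less than T closes and its demand is reassigned.
        mine, comp = list(my_sites), list(comp_sites)
        while True:
            outlets = mine + comp
            if not outlets:
                return 0.0
            d = np.linalg.norm(pts[:, None, :] - np.array(outlets)[None, :, :], axis=2)
            captured = np.bincount(d.argmin(axis=1), weights=demand,
                                   minlength=len(outlets))
            dead = {o for o, c in zip(outlets, captured) if c < T}
            if not dead:
                return captured[:len(mine)].sum()  # market share of the entrant
            mine = [o for o in mine if o not in dead]
            comp = [o for o in comp if o not in dead]

Plugging this evaluator into an enumeration or greedy search over candidate site sets gives a simple heuristic baseline for the location problem described above.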
Abstract:
This paper presents a new framework for studying irreversible (dis)investment when a market follows a random number of random-length cycles (such as a high-tech product market). It is assumed that a firm facing such market evolution is always unsure about whether the current cycle is the last one, although it can update its beliefs about the probability of facing a permanent decline by observing that no further growth phase arrives. We show that the existence of regime shifts in fluctuating markets suffices for an option value of waiting to (dis)invest to arise, and we provide a marginal interpretation of the optimal (dis)investment policies, absent in the real options literature. The paper also shows that, even though the stochastic process of the underlying variable has a continuous sample path, the discreteness of the regime changes implies that the sample path of the firm's value experiences jumps whenever the regime switches suddenly, irrespective of whether the firm is active or not.
Abstract:
This paper proposes a common and tractable framework for analyzing different definitions of fixed and random effects in a constant-slope variable-intercept model. It is shown that, regardless of whether effects (i) are treated as parameters or as an error term, (ii) are estimated in different stages of a hierarchical model, or whether (iii) correlation between effects and regressors is allowed, when the same information on effects is introduced into all estimation methods, the resulting slope estimator is also the same across methods. If different methods produce different results, it is ultimately because different information is being used for each method.
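In standard notation (assumed here, not necessarily the paper's), the constant-slope variable-intercept model is

    y_{ij} = \alpha_i + \beta x_{ij} + \varepsilon_{ij}, \qquad i = 1, \dots, N \ (\text{groups}), \quad j = 1, \dots, n_i,

where the fixed-effects view treats the \alpha_i as unknown parameters and the random-effects view treats them as draws \alpha_i \sim (\alpha, \sigma^2_\alpha); the paper's point is that the slope estimator \hat{\beta} coincides across these treatments whenever the same information about the \alpha_i is used.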
Abstract:
The aim of the present study was to investigate to what extent interstitial lung disease (ILD) in common variable immunodeficiency disorder (CVID)-associated granulomatous disease (GD) is similar to pulmonary sarcoidosis. Twenty patients with CVID/GD were included in a retrospective study conducted by the Groupe Sarcoïdose Francophone. Medical records were centralised. Patients were compared with 60 controls with sarcoidosis. Clinical examination showed more frequent crackles in patients than controls (45% versus 1.7%, respectively; p<0.001). On thoracic computed tomography scans, nodules (often multiple and with smooth margins), air bronchograms and halo signs were more frequent in patients than controls (80% versus 42%, respectively; p=0.004), as was bronchiectasis (65% versus 23%, respectively; p<0.001). The micronodule distribution was perilymphatic in 100% of controls and in 42% of patients (p<0.001). Bronchoalveolar lavage analysis showed lower T-cell CD4/CD8 ratios in patients than in controls (mean±sd 1.6±1.1 versus 5.3±4, respectively; p<0.01). On pathological analysis, nodules and consolidations corresponded to granulomatous lesions with or without lymphocytic disorders in most cases. Mortality was higher in patients than controls (30% versus 0%, respectively) and resulted from common variable immunodeficiency complications. ILD in CVID/GD presents a specific clinical picture and evolution that are markedly different from those of sarcoidosis.
Abstract:
PURPOSE: To characterize the clinical, psychophysical, and electrophysiological phenotypes in a five-generation Swiss family with dominantly inherited retinitis pigmentosa caused by a T494M mutation in the precursor mRNA-processing factor 3 (PRPF3) gene, and to relate the phenotype to the underlying genetic mutation. METHODS: Eleven affected patients were ascertained for phenotypic and genotypic characterization. Ophthalmologic evaluations included color vision testing, Goldmann perimetry, and digital fundus photography. Some patients had autofluorescence imaging, optical coherence tomography, and ISCEV-standard full-field electroretinography. All affected patients had genetic testing. RESULTS: The age of onset of night blindness and the severity of the progression of the disease varied between members of the family. Some patients reported early onset of night blindness at age three, with subsequent severe deterioration of visual acuity, which was 0.4 in the better eye after age 50. A second group of patients had a later onset of night blindness, in the mid-twenties, with a milder disease progression and a visual acuity of 0.8 at age 70. Fundus autofluorescence imaging and electrophysiological and visual field abnormalities also showed some degree of phenotypic variation. The autofluorescence imaging showed a large high-density ring bilaterally. Myopia (range: -0.75 to -8) was found in 10/11 affected subjects. Fundus findings showed areas of atrophy along the arcades. A T494M change was found in exon 11 of the PRPF3 gene. The change segregates with the disease in the family. CONCLUSIONS: A mutation in the PRPF3 gene is rare compared to other genes causing autosomal dominant retinitis pigmentosa (ADRP). Although a T494M change has been reported, the family in our study is the first with variable expressivity. Mutations in the PRPF3 gene can cause a variable ADRP phenotype, unlike in the previously described Danish, English, and Japanese families. Our report, based on one of the largest affected pedigrees, provides a better understanding of the phenotype/genotype relationship in ADRP caused by a PRPF3 mutation.
Abstract:
In many research areas (such as public health, environmental contamination, and others) one deals with the necessity of using data to infer whether some proportion (%) of a population of interest is (or one wants it to be) below and/or above some threshold, through the computation of a tolerance interval. The idea is, once a threshold is given, to compute the tolerance interval or limit (which might be one- or two-sided) and then to check whether it satisfies the given threshold. Since in this work we deal with the computation of one-sided tolerance intervals, for the two-sided case we recommend, for instance, Krishnamoorthy and Mathew [5]. Krishnamoorthy and Mathew [4] performed the computation of upper tolerance limits in balanced and unbalanced one-way random-effects models, whereas Fonseca et al. [3] did so based on similar ideas but in a two-way nested mixed- or random-effects model. In the random-effects case, Fonseca et al. [3] computed such intervals only for balanced data, whereas in the mixed-effects case they did so only for unbalanced data. For the computation of two-sided tolerance intervals in models with mixed and/or random effects we recommend, for instance, Sharma and Mathew [7]. The purpose of this paper is the computation of upper and lower tolerance intervals in a two-way nested mixed-effects model with balanced data. For the case of unbalanced data, as mentioned above, Fonseca et al. [3] have already computed the upper tolerance interval. Hence, using the notions presented in Fonseca et al. [3] and Krishnamoorthy and Mathew [4], we present some results on the construction of one-sided tolerance intervals for the balanced case. We first perform the construction for the upper limit and then for the lower limit.
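As background (the simplest i.i.d. normal case, not the nested model treated in the paper), a one-sided upper (p, 1-\alpha) tolerance limit computed from the mean \bar{x} and standard deviation s of a sample of size n is

    U = \bar{x} + k s, \qquad k = \frac{1}{\sqrt{n}} \, t_{n-1,\,1-\alpha}\!\left(z_p \sqrt{n}\right),

where t_{\nu,\,q}(\delta) denotes the q-quantile of the noncentral t distribution with \nu degrees of freedom and noncentrality \delta, and z_p is the standard normal p-quantile; U then exceeds the p-th population quantile with confidence 1-\alpha. The mixed- and random-effects constructions replace s by a suitable combination of the mean squares of the nested design.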
Abstract:
Recently, several anonymization algorithms have appeared for privacy preservation on graphs. Some of them are based on randomization techniques and some on k-anonymity concepts; both can be used to obtain an anonymized graph with a given k-anonymity value. In this paper we compare algorithms based on the two techniques for obtaining an anonymized graph with a desired k-anonymity value, analyzing the complexity of these methods in generating anonymized graphs and the quality of the resulting graphs.
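As a concrete instance of the k-anonymity concepts involved (degree-based anonymity, one common notion for graphs; the paper's exact definition may differ), a graph is k-degree-anonymous when every occurring degree value is shared by at least k vertices:

    from collections import Counter

    def is_k_degree_anonymous(adj, k):
        # adj: adjacency lists, vertex -> list of neighbours.
        # Every degree value occurring in the graph must be shared by >= k vertices.
        degree_counts = Counter(len(neigh) for neigh in adj.values())
        return all(count >= k for count in degree_counts.values())

    # Example: a 4-cycle is 4-degree-anonymous (every vertex has degree 2).
    cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    print(is_k_degree_anonymous(cycle, 4))  # True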
Abstract:
The study of organism movement is essential to understanding how ecosystems function. In the case of exploited marine ecosystems, this leads to an interest in the spatial strategies of fishers. One of the most widely used approaches for modelling the movement of top predators is the Lévy random walk. A random walk is a mathematical model composed of random displacements. In the Lévy case, the displacement lengths follow a Lévy stable law. In this case as well, the lengths, as they tend to infinity (in practice, when they are large, large relative to the median or the third quartile, for example), follow a power law characteristic of the type of Lévy random walk (Cauchy, Brownian, or strictly Lévy). In practice, besides the fact that this property is used in the converse direction without theoretical foundation, the distribution tails, a notion that is itself imprecise, are modelled by power laws without discussion of the sensitivity of the results to the definition of the distribution tail, or of the relevance of the goodness-of-fit tests and model-selection criteria. In this work, on the observed movements of three Peruvian anchovy fishing vessels, several models of distribution tails (log-normal, exponential, truncated exponential, power law, and truncated power law) were compared, along with two possible definitions of the distribution tail (from the median to infinity, or from the third quartile to infinity). In terms of the statistical criteria and tests used, the truncated laws (exponential and power) emerged as the best. They also incorporate the fact that, in practice, vessels do not exceed a certain displacement-length limit. The model choice proved sensitive to the choice of the start of the distribution tail: for the same vessel, the choice of one truncated model or the other depends on the interval of variable values over which the model is fitted. Finally, we discuss the ecological implications of the results of this work.
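A sketch of this model-comparison step (synthetic step lengths and all parameter ranges are assumptions; the study used observed vessel trajectories): fit a truncated power law and a truncated exponential to the tail by maximum likelihood and compare AIC.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Hypothetical tail data: step lengths above the chosen tail start,
    # truncated at an upper bound xmax.
    rng = np.random.default_rng(1)
    x = rng.pareto(1.5, 400) + 1.0
    xmin, xmax = 1.0, 50.0
    x = x[x <= xmax]

    def nll_power(mu):
        # Truncated power law on [xmin, xmax]: f(x) proportional to x**(-mu).
        norm = (xmax**(1 - mu) - xmin**(1 - mu)) / (1 - mu)
        return -(np.sum(-mu * np.log(x)) - len(x) * np.log(norm))

    def nll_exp(lam):
        # Truncated exponential on [xmin, xmax]: f(x) proportional to exp(-lam*x).
        norm = (np.exp(-lam * xmin) - np.exp(-lam * xmax)) / lam
        return -(np.sum(-lam * x) - len(x) * np.log(norm))

    aic = {}
    for name, nll, bounds in [("power", nll_power, (1.01, 5.0)),
                              ("exp", nll_exp, (1e-3, 5.0))]:
        res = minimize_scalar(nll, bounds=bounds, method="bounded")
        aic[name] = 2 * res.fun + 2  # one free parameter each
    print(min(aic, key=aic.get), aic)

As the work above stresses, rerunning such a fit with the tail starting at the median versus the third quartile can change which truncated model wins.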
Abstract:
The life history of the fruit fly (Drosophila melanogaster) is well understood, but fitness components are rarely measured by following single individuals over their lifetime, thereby limiting insights into lifetime reproductive success, reproductive senescence and post-reproductive lifespan. Moreover, most studies have examined long-established laboratory strains rather than freshly caught individuals and may thus be confounded by adaptation to laboratory culture, inbreeding or mutation accumulation. Here, we have followed the life histories of individual females from three recently caught, non-laboratory-adapted wild populations of D. melanogaster. Populations varied in a number of life-history traits, including ovariole number, fecundity, hatchability and lifespan. To describe individual patterns of age-specific fecundity, we developed a new model that allowed us to distinguish four phases during a female's life: a phase of reproductive maturation, followed by a period of linear and then exponential decline in fecundity and, finally, a post-ovipository period. Individual females exhibited clear-cut fecundity peaks, which contrasts with previous analyses, and post-peak levels of fecundity declined independently of how long females lived. Notably, females had a pronounced post-reproductive lifespan, which on average made up 40% of total lifespan. Post-reproductive lifespan did not differ among populations and was not correlated with reproductive fitness components, supporting the hypothesis that this period is a highly variable, random 'add-on' at the end of reproductive life rather than a correlate of selection on reproductive fitness. Most life-history traits were positively correlated, a pattern that might be due to genotype by environment interactions when wild flies are brought into a novel laboratory environment but that is unlikely explained by inbreeding or positive mutational covariance caused by mutation accumulation.
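One way to write the four-phase fecundity model sketched above (an illustrative parameterization; the abstract does not give the functional forms) is a piecewise age-specific fecundity

    f(t) = \begin{cases}
      r(t), & 0 \le t < t_1 \ (\text{maturation, rising}) \\
      f_1 - c\,(t - t_1), & t_1 \le t < t_2 \ (\text{linear decline}) \\
      f_2\, e^{-\lambda (t - t_2)}, & t_2 \le t < t_3 \ (\text{exponential decline}) \\
      0, & t \ge t_3 \ (\text{post-ovipository})
    \end{cases}

with continuity constraints f_1 = r(t_1) and f_2 = f_1 - c\,(t_2 - t_1); the post-reproductive lifespan is then total lifespan minus t_3.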
Abstract:
BACKGROUND: The objectives of the present study were to evaluate AIDS prevention in drug users attending low-threshold centres providing sterile injection equipment in Switzerland, to identify the characteristics of these users, and to monitor the progress of indicators of drug-related harm. METHODS: This paper presents results from a cross-sectional survey carried out in 1994. RESULTS: The mean age of attenders was 28 years, and women represented 27% of the sample. 75% of attenders used a combination of hard drugs (heroin and cocaine). Mean duration of heroin consumption was 8 years, and of cocaine 7 years; 76% of attenders had a fixed abode, but only 34% had stable employment; 45% were being treated with methadone; 9% had shared their injection material in the last 6 months; 24% always used condoms in a stable relationship, and 71% in casual relationships. In a cluster analysis constructed on the basis of multiple correspondence analysis, two distinct profiles of users emerged: highly marginalised users with a high level of consumption (21%), and irregular users, better integrated socially, the majority of whom are under methadone treatment (79%). CONCLUSION: These centres play a major role in AIDS prevention. Nevertheless, efforts to improve the hygiene conditions of drug injection in Switzerland should be pursued and extended. At the same time, prevention of sexual transmission of HIV should be reinforced.
Abstract:
We present deep I-band CCD exposures of the fields of galactic-plane radio variables. An optical counterpart, based on positional coincidence, has been found for 15 of the 27 observed program objects. The Johnson I magnitudes of the identified sources are in the range 18-21.