11 results for Sample selection
Abstract:
This paper discusses some of the challenges that midwifery researchers may encounter when conducting research where the setting is familiar or the study participants are known to the researchers. The paper identifies key challenges that should be considered, such as researching in a familiar culture, the perception of participants, sample selection, finding space in the setting, and interview dynamics. Examples are provided from three previous qualitative research projects conducted by the authors in educational and clinical settings with both pre-registration and post-registration midwives. Each of the key issues is discussed, highlighting the specific considerations relevant to each, with further attention to how these issues may affect the progress of the project, the data collected, and the subsequent findings. Finally, these are drawn together with recommendations for future research conducted by midwives or where the setting or participants are known to the researchers. Although the paper focuses on midwifery research, the issues raised may be relevant in other areas where the setting or participants are known to the researchers.
Abstract:
Over the last 15 years, the supernova community has endeavoured to directly identify progenitor stars for core-collapse supernovae discovered in nearby galaxies. These precursors are often visible as resolved stars in high-resolution images from space- and ground-based telescopes. The discovery rate of progenitor stars is limited by the local supernova rate and the availability and depth of archive images of galaxies, with 18 detections of precursor objects and 27 upper limits. This review compiles these results (from 1999 to 2013) in a distance-limited sample and discusses the implications of the findings. The vast majority of the detections of progenitor stars are of type II-P, II-L, or IIb, with one type Ib progenitor system detected and many more upper limits for progenitors of Ibc supernovae (14 in all). The data for these 45 supernova progenitors illustrate a remarkable deficit of high-luminosity stars above an apparent limit of log L/L⊙ ≃ 5.1 dex. For a typical Salpeter initial mass function, one would expect to have found 13 high-luminosity and high-mass progenitors by now. There is, possibly, only one object in this time- and volume-limited sample that is unambiguously high-mass (the progenitor of SN2009ip), although the nature of that supernova is still debated. The possible biases due to the influence of circumstellar dust, the luminosity analysis, and sample selection methods are reviewed. It does not appear likely that these can explain the missing high-mass progenitor stars. This review concludes that the community's work to date shows that the observed populations of supernovae in the local Universe are not, on the whole, produced by high-mass (M ≳ 18 M⊙) stars. Theoretical explosions of model stars also predict that black hole formation and failed supernovae tend to occur above an initial mass of M ≃ 18 M⊙. The models also suggest there is no simple single mass division for neutron star or black hole formation and that there are islands of explodability for stars in the 8-120 M⊙ range. The observational constraints are quite consistent with the bulk of stars above M ≃ 18 M⊙ collapsing to form black holes with no visible supernovae.
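As a back-of-the-envelope check on the Salpeter expectation quoted above, the sketch below integrates a power-law IMF over an assumed 8-150 M⊙ progenitor mass range (the mass limits are an illustrative choice, not taken from the review) and counts the expected number of progenitors above 18 M⊙ in a sample of 45. It returns roughly 13-15 such stars, in line with the figure cited.

```python
# Minimal sketch of the Salpeter IMF expectation; mass limits are assumptions.
ALPHA = 2.35              # Salpeter IMF slope: dN/dM ~ M**-ALPHA
M_LO, M_HI = 8.0, 150.0   # assumed core-collapse progenitor mass range (Msun)
M_CUT = 18.0              # high-mass threshold discussed in the review
N_SAMPLE = 45             # detections + upper limits in the compiled sample

def imf_number(m1, m2, alpha=ALPHA):
    """Integral of M**-alpha from m1 to m2 (relative number of stars)."""
    p = 1.0 - alpha
    return (m2**p - m1**p) / p

frac_high = imf_number(M_CUT, M_HI) / imf_number(M_LO, M_HI)
print(f"fraction of progenitors above {M_CUT} Msun: {frac_high:.2f}")
print(f"expected high-mass progenitors in {N_SAMPLE}: {frac_high * N_SAMPLE:.1f}")
```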
Abstract:
We investigate the use of type Ic superluminous supernovae (SLSN Ic) as standardizable candles and distance indicators. Their appeal as cosmological probes stems from their remarkable peak luminosities, hot blackbody temperatures, and bright rest-frame ultraviolet emission. We present a sample of 16 published SLSN, from redshifts 0.1 to 1.2, and calculate accurate K-corrections to determine uniform magnitudes in two synthetic rest-frame filter bandpasses with central wavelengths at 400 nm and 520 nm. At 400 nm, we find an encouragingly low scatter in their uncorrected, raw mean magnitudes, with M(400) = -21.86 ± 0.35 mag for the full sample of 16 objects. We investigate the correlation between their decline rates and peak magnitudes and find that the brighter events appear to decline more slowly. In a manner similar to the Phillips relation for type Ia SNe (SNe Ia), we define a ΔM20 decline relation, which correlates peak magnitude with the decline over 20 days and can reduce the scatter in standardized peak magnitudes to ±0.22 mag. We further show that M(400) appears to have a strong color dependence: redder objects are fainter and also become redder faster. Using this peak magnitude-color evolution relation, a surprisingly low scatter of between ±0.08 mag and ±0.13 mag can be found in peak magnitudes, depending on sample selection. However, we caution that only 8 to 10 objects currently have enough data to test this peak magnitude-color evolution relation. We conclude that SLSN Ic are promising distance indicators in the high-redshift universe in regimes beyond those possible with SNe Ia. Although the empirical relationships are encouraging, the unknown progenitor systems, how they may evolve with redshift, and the uncertain explosion physics are of some concern. The two major measurement uncertainties are the limited number of low-redshift, well-studied objects available to test these relationships and internal dust extinction in the host galaxies.
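The ΔM20 standardization described above is, in spirit, a linear decline-rate correction. The following sketch shows the mechanics under that assumption; the magnitudes and decline rates are invented for illustration and are not the paper's sample.

```python
# Hedged sketch of a Phillips-style decline-rate standardization.
import numpy as np

m_peak = np.array([-21.5, -21.7, -22.1, -21.9, -22.3])  # hypothetical M(400) peaks
dm20   = np.array([ 1.10,  0.95,  0.60,  0.80,  0.45])  # hypothetical 20-day declines

# Linear fit m_peak = a + b * dm20 (brighter events decline more slowly)
b, a = np.polyfit(dm20, m_peak, 1)

# Standardized magnitudes: remove the fitted decline-rate dependence
m_std = m_peak - b * (dm20 - dm20.mean())
print(f"raw scatter:          {m_peak.std(ddof=1):.3f} mag")
print(f"standardized scatter: {m_std.std(ddof=1):.3f} mag")
```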
Abstract:
This paper provides an empirical test of the child quantity–quality (QQ) trade-off predicted by unified growth theory. Using individual census returns from the 1911 Irish census, we examine whether children who attended school were from smaller families, as predicted by a standard QQ model. To measure causal effects, we use a selection of models robust to endogeneity concerns, which we validate for this application using an Empirical Monte Carlo analysis. Our results show that a child remaining in school between the ages of 14 and 16 caused up to a 27% reduction in fertility. Our results are robust to alternative estimation techniques with different modeling assumptions, sample selection, and alternative definitions of fertility. These findings highlight the importance of the demographic transition as a mechanism that underpinned the expansion in human capital witnessed in Western economies during the twentieth century.
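The Empirical Monte Carlo validation mentioned above amounts to re-running an estimator on simulated data that mimics the real sample but has a known (here zero) treatment effect. The sketch below illustrates the idea with hypothetical variable names and a deliberately confounded data-generating process; it is not the paper's specification.

```python
# Empirical Monte Carlo sketch: the true schooling effect is zero by
# construction, so any nonzero average estimate is estimator bias.
import numpy as np

rng = np.random.default_rng(0)
n, n_sims = 2000, 500
naive = []
for _ in range(n_sims):
    ability = rng.standard_normal(n)
    # placebo "treatment": schooling selected on the unobservable
    schooling = (ability + rng.standard_normal(n) > 0.3).astype(int)
    # outcome generated with NO schooling effect, driven by the unobservable
    fertility = rng.poisson(np.exp(1.0 - 0.3 * ability))
    naive.append(fertility[schooling == 1].mean() - fertility[schooling == 0].mean())

print(f"mean naive estimate (true effect = 0): {np.mean(naive):+.3f}")
# A selection-robust estimator should recover ~0 in this check;
# the naive difference in means does not.
```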
Abstract:
Purpose: The selection of suitable outcomes and sample size calculation are critical factors in the design of a randomised controlled trial (RCT). The goal of this study was to identify the range of outcomes and information on sample size calculation in RCTs on geographic atrophy (GA). Methods: We carried out a systematic review of age-related macular degeneration (AMD) RCTs. We searched MEDLINE, EMBASE, Scopus, the Cochrane Library, www.controlled-trials.com, and www.ClinicalTrials.gov. Two independent reviewers screened records. One reviewer collected data and the second reviewer appraised 10% of the collected data. We scanned the reference lists of selected papers to include other relevant RCTs. Results: The literature and registry search identified 3816 abstracts of journal articles and 493 records from trial registries. From a total of 177 RCTs on all types of AMD, 23 RCTs on GA were included. Eighty-one clinical outcomes were identified. Visual acuity (VA) was the most frequently used outcome, presented in 18 out of 23 RCTs, followed by measures of lesion area. For the sample size analysis, 8 GA RCTs were included. None of them provided sufficient information on sample size calculations. Conclusions: This systematic review illustrates a lack of standardisation in outcome reporting in GA trials and issues regarding sample size calculation. These limitations significantly hamper attempts to compare outcomes across studies and to perform meta-analyses.
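For readers unfamiliar with the kind of sample size calculation the review found under-reported, the sketch below shows a standard two-arm calculation for a visual acuity outcome using statsmodels. The effect size, standard deviation, alpha, and power are illustrative assumptions, not values from any of the reviewed trials.

```python
# Hedged sketch of a two-arm RCT sample size calculation; all inputs assumed.
from statsmodels.stats.power import TTestIndPower

delta_letters = 5.0   # assumed clinically relevant VA difference (ETDRS letters)
sd_letters = 15.0     # assumed common standard deviation
effect_size = delta_letters / sd_letters   # Cohen's d of about 0.33

n_per_arm = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided")
print(f"required participants per arm: {n_per_arm:.0f}")  # ~140+ under these assumptions
```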
Abstract:
This study investigates face recognition with partial occlusion, illumination variation, and their combination, assuming no prior information about the mismatch and limited training data for each person. The authors extend their previous posterior union model (PUM) to give a new method capable of dealing with all these problems. PUM is an approach for selecting the optimal local image features for recognition to improve robustness to partial occlusion. The extension is in two stages. First, the authors extend PUM from a probability-based formulation to a similarity-based formulation, so that it operates with as little as a single training sample to offer robustness to partial occlusion. Second, they extend this new formulation to make it robust to illumination variation, and to combined illumination variation and partial occlusion, through a novel combination of multicondition relighting and optimal feature selection. To evaluate the new methods, a number of databases with various simulated and realistic occlusion/illumination mismatches were used. The results demonstrate the improved robustness of the new methods.
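The core idea, scoring a probe face by its best-matching local regions so that occluded blocks can be discounted, can be sketched as follows. This is a simplified stand-in for the authors' similarity-based PUM, not their exact formulation; the block size and the fraction of blocks kept are arbitrary choices here.

```python
# Simplified union-style local matching sketch (not the published PUM).
import numpy as np

def block_similarities(probe, gallery, block=16):
    """Cosine similarity of corresponding local blocks of two face images."""
    sims = []
    h, w = probe.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            p = probe[i:i+block, j:j+block].ravel().astype(float)
            g = gallery[i:i+block, j:j+block].ravel().astype(float)
            denom = np.linalg.norm(p) * np.linalg.norm(g)
            sims.append(p @ g / denom if denom > 0 else 0.0)
    return np.array(sims)

def union_score(probe, gallery, keep=0.6):
    """Average the top `keep` fraction of block similarities, discarding the
    worst blocks (assumed occluded or badly lit)."""
    sims = np.sort(block_similarities(probe, gallery))[::-1]
    k = max(1, int(keep * len(sims)))
    return sims[:k].mean()
```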
Abstract:
Antibodies are very important materials for diagnostics. A rapid and simple hybridoma screening method will help in delivering specific monoclonal antibodies. In this study, we systematically developed the first antibody array to screen for bacteria-specific monoclonal antibodies, using Listeria monocytogenes as a bacterial model. The antibody array was developed to expedite the hybridoma screening process by printing hybridoma supernatants on a glass slide coated with an antigen of interest. This screening method is based on the binding ability of supernatants to the coated antigen. The bound supernatants were detected with a fluorescently labeled anti-mouse immunoglobulin. Conditions (slide types, coating, spotting, and blocking buffers) for antibody array construction were optimized. To demonstrate its usefulness, the antibody array was used to screen a sample set of 96 hybridoma supernatants in comparison to ELISA. Most of the positive results identified by the ELISA and antibody array methods were in agreement, except for those with low signals that were undetectable by the antibody array. Hybridoma supernatants were further characterized with surface plasmon resonance to obtain additional data on the characteristics of each selected clone. While the antibody array was slightly less sensitive than ELISA, it offers a much faster and lower-cost procedure for screening clones against multiple antigens.
Abstract:
Model selection between competing models is a key consideration in the discovery of prognostic multigene signatures. The use of appropriate statistical performance measures, as well as verification of the biological significance of the signatures, is imperative to maximise the chance of external validation of the generated signatures. Current approaches in time-to-event studies often use only a single measure of performance in model selection, such as log-rank test p-values, or dichotomise the follow-up times at some phase of the study to facilitate signature discovery. In this study we improve the prognostic signature discovery process through the application of the multivariate partial Cox model combined with the concordance index, the hazard ratio of predictions, independence from available clinical covariates, and biological enrichment as measures of signature performance. The proposed framework was applied to discover prognostic multigene signatures from early breast cancer data. The partial Cox model, combined with the multiple performance measures, was used both to guide the selection of the optimal panel of prognostic genes and to predict risk within cross-validation, without dichotomising the follow-up times at any stage. The signatures were successfully externally cross-validated in independent breast cancer datasets, yielding a hazard ratio of 2.55 [1.44, 4.51] for the top-ranking signature.
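A minimal sketch of the model-evaluation step described above, assuming the `lifelines` package, a ridge-penalized Cox fit as a stand-in for the partial Cox model, and placeholder column names (`time`, `event`, and a list of gene columns):

```python
# Hedged sketch: fit a Cox model on a candidate gene panel and score it with
# the concordance index on held-out data. Column names are placeholders.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

def score_panel(train, test, genes):
    """Fit a penalized Cox PH model on `genes`; return held-out c-index."""
    cph = CoxPHFitter(penalizer=0.1)  # ridge penalty as a partial-Cox stand-in
    cph.fit(train[genes + ["time", "event"]],
            duration_col="time", event_col="event")
    risk = cph.predict_partial_hazard(test[genes])
    # higher predicted hazard should mean shorter survival, hence the minus sign
    return concordance_index(test["time"], -risk, test["event"])
```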
Abstract:
The paper addresses the choice of bandwidth in the semiparametric estimation of the long memory parameter of a univariate time series process. The focus is on the properties of forecasts from the long memory model. A variety of cross-validation methods based on out-of-sample forecasting performance are proposed. These procedures are used for the choice of bandwidth and subsequent model selection. Simulation evidence is presented that demonstrates the advantage of the proposed new methodology.
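For context, the long memory parameter d is commonly estimated semiparametrically by the Geweke-Porter-Hudak (GPH) log-periodogram regression, whose result depends on the bandwidth m (the number of low frequencies used). The sketch below shows that sensitivity on a toy series; the paper's forecast-based cross-validation for choosing m is not reproduced here.

```python
# GPH log-periodogram estimate of d for several bandwidths m.
import numpy as np

def gph_estimate(x, m):
    """Geweke-Porter-Hudak estimate of d from the first m Fourier frequencies."""
    n = len(x)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    pgram = np.abs(dft) ** 2 / (2 * np.pi * n)   # periodogram ordinates
    X = np.log(4 * np.sin(freqs / 2) ** 2)       # GPH regressor
    slope = np.polyfit(X, np.log(pgram), 1)[0]
    return -slope                                # d is minus the slope

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)                    # toy series (no true long memory)
for m in (16, 32, 64, 128):
    print(f"m = {m:4d}  d_hat = {gph_estimate(x, m):+.3f}")
```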
Abstract:
We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time series in four Pan-STARRS1 photometric bands: gP1, rP1, iP1, and zP1. We use three deterministic light-curve models to fit BL transients (a Gaussian, a Gamma distribution, and an analytic supernova (SN) model) and one stochastic light-curve model, the Ornstein-Uhlenbeck process, to fit the variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm to these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SVs and BL transients occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
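The final classification step, K-means clustering on per-source goodness-of-fit statistics, can be sketched as below. The feature matrix here is a placeholder (invented AICc values for a burst model versus the OU model), not PS1 MDS data.

```python
# Hedged sketch: K-means (K = 2) on model-comparison statistics to separate
# burst-like (BL) sources from stochastic variables (SVs).
import numpy as np
from sklearn.cluster import KMeans

# columns: AICc of the best burst-like model, AICc of the OU model (invented)
aicc = np.array([
    [120.0, 180.0],   # burst model wins: likely BL
    [115.0, 170.0],
    [210.0, 140.0],   # OU model wins: likely SV
    [205.0, 150.0],
    [118.0, 175.0],
])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(aicc)

# label the cluster whose centre has the lower burst-model AICc as "BL"
bl_cluster = np.argmin(km.cluster_centers_[:, 0])
classes = np.where(km.labels_ == bl_cluster, "BL", "SV")
print(classes)
```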
Abstract:
Background: Selection bias in HIV prevalence estimates occurs if non-participation in testing is correlated with HIV status. Longitudinal data suggest that individuals who know or suspect they are HIV positive are less likely to participate in testing in HIV surveys, in which case methods to correct for missing data which are based on imputation and observed characteristics will produce biased results. Methods: The identity of the HIV survey interviewer is typically associated with HIV testing participation but is unlikely to be correlated with HIV status. Interviewer identity can thus be used as a selection variable, allowing the estimation of Heckman-type selection models. These models produce asymptotically unbiased HIV prevalence estimates even when non-participation is correlated with unobserved characteristics, such as knowledge of HIV status. We introduce a new random effects method to these selection models which overcomes non-convergence caused by collinearity, small sample bias, and incorrect inference in existing approaches. Our method is easy to implement in standard statistical software and allows the construction of bootstrapped standard errors which adjust for the fact that the relationship between testing and HIV status is uncertain and needs to be estimated. Results: Using nationally representative data from the Demographic and Health Surveys, we illustrate our approach with new point estimates and confidence intervals (CIs) for HIV prevalence among men in Ghana (2003) and Zambia (2007). In Ghana, we find little evidence of selection bias, as our selection model gives an HIV prevalence estimate of 1.4% (95% CI 1.2%-1.6%), compared to 1.6% among those with a valid HIV test. In Zambia, our selection model gives an HIV prevalence estimate of 16.3% (95% CI 11.0%-18.4%), compared to 12.1% among those with a valid HIV test. Therefore, those who decline to test in Zambia are found to be more likely to be HIV positive. Conclusions: Our approach corrects for selection bias in HIV prevalence estimates, is possible to implement even when HIV prevalence or non-participation is very high or very low, and provides a practical solution to account for both sampling and parameter uncertainty in the estimation of confidence intervals. The wide confidence intervals estimated in an example with high HIV prevalence indicate that it is difficult to correct statistically for the bias that may occur when a large proportion of people refuse to test.
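A heavily simplified sketch of the Heckman-type two-step logic described above: a probit participation equation including interviewer dummies as the selection variable, followed by an outcome regression augmented with the inverse Mills ratio. The paper's actual estimator is a random-effects selection model, and a linear second stage for a binary HIV outcome is only an illustration; all column names are hypothetical.

```python
# Two-step Heckman-style sketch; NOT the paper's random-effects estimator.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(df):
    """Expects columns: tested (0/1), hiv (0/1, observed when tested == 1),
    age, urban, interviewer (categorical). All names are hypothetical."""
    # Stage 1: probit of test participation; interviewer dummies are the
    # selection variable (assumed to affect participation, not HIV status)
    ivw = pd.get_dummies(df["interviewer"], prefix="ivw",
                         drop_first=True, dtype=float)
    Z = sm.add_constant(pd.concat([df[["age", "urban"]], ivw], axis=1))
    probit = sm.Probit(df["tested"], Z).fit(disp=0)
    xb = Z.dot(probit.params)            # linear index
    imr = norm.pdf(xb) / norm.cdf(xb)    # inverse Mills ratio

    # Stage 2: outcome regression on the tested subsample; a significant IMR
    # coefficient signals selection on unobservables (e.g. known status)
    tested = df["tested"] == 1
    X = sm.add_constant(
        df.loc[tested, ["age", "urban"]].assign(imr=imr[tested]))
    return sm.OLS(df.loc[tested, "hiv"], X).fit()
```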