930 results for selection methods


Relevance:

30.00%

Publisher:

Abstract:

Background: Postal and electronic questionnaires are widely used for data collection in epidemiological studies, but non-response reduces the effective sample size and can introduce bias. Finding ways to increase response to postal and electronic questionnaires would improve the quality of health research. Objectives: To identify effective strategies to increase response to postal and electronic questionnaires. Search strategy: We searched 14 electronic databases to February 2008 and manually searched the reference lists of relevant trials and reviews, and all issues of two journals. We contacted the authors of all trials or reviews to ask about unpublished trials. Where necessary, we also contacted authors to confirm methods of allocation used and to clarify results presented. We assessed the eligibility of each trial using pre-defined criteria. Selection criteria: Randomised controlled trials of methods to increase response to postal or electronic questionnaires. Data collection and analysis: We extracted data on the trial participants, the intervention, the number randomised to intervention and comparison groups, and allocation concealment. For each strategy, we estimated pooled odds ratios (OR) and 95% confidence intervals (CI) in a random-effects model. We assessed evidence for selection bias using Egger's weighted regression method, Begg's rank correlation test and funnel plots. We assessed heterogeneity among trial odds ratios using a Chi² test, and the degree of inconsistency between trial results was quantified using the I² statistic. Main results: Postal: We found 481 eligible trials. The trials evaluated 110 different ways of increasing response to postal questionnaires. We found substantial heterogeneity among trial results in half of the strategies. The odds of response were approximately doubled using monetary incentives (odds ratio 1.87; 95% CI 1.73 to 2.04; heterogeneity P < 0.00001, I² = 84%), recorded delivery (1.76; 95% CI 1.43 to 2.18; P = 0.0001, I² = 71%), a teaser on the envelope - e.g. a comment suggesting to participants that they may benefit if they open it (3.08; 95% CI 1.27 to 7.44) - and a more interesting questionnaire topic (2.00; 95% CI 1.32 to 3.04; P = 0.06, I² = 80%). The odds of response were substantially higher with pre-notification (1.45; 95% CI 1.29 to 1.63; P < 0.00001, I² = 89%), follow-up contact (1.35; 95% CI 1.18 to 1.55; P < 0.00001, I² = 76%), unconditional incentives (1.61; 95% CI 1.36 to 1.89; P < 0.00001, I² = 88%), shorter questionnaires (1.64; 95% CI 1.43 to 1.87; P < 0.00001, I² = 91%), providing a second copy of the questionnaire at follow-up (1.46; 95% CI 1.13 to 1.90; P < 0.00001, I² = 82%), mentioning an obligation to respond (1.61; 95% CI 1.16 to 2.22; P = 0.98, I² = 0%) and university sponsorship (1.32; 95% CI 1.13 to 1.54; P < 0.00001, I² = 83%). The odds of response were also increased with non-monetary incentives (1.15; 95% CI 1.08 to 1.22; P < 0.00001, I² = 79%), personalised questionnaires (1.14; 95% CI 1.07 to 1.22; P < 0.00001, I² = 63%), use of hand-written addresses (1.25; 95% CI 1.08 to 1.45; P = 0.32, I² = 14%), use of stamped return envelopes as opposed to franked return envelopes (1.24; 95% CI 1.14 to 1.35; P < 0.00001, I² = 69%), an assurance of confidentiality (1.33; 95% CI 1.24 to 1.42) and first class outward mailing (1.11; 95% CI 1.02 to 1.21; P = 0.78, I² = 0%). The odds of response were reduced when the questionnaire included questions of a sensitive nature (0.94; 95% CI 0.88 to 1.00; P = 0.51, I² = 0%).
Electronic: We found 32 eligible trials. The trials evaluated 27 different ways of increasing response to electronic questionnaires. We found substantial heterogeneity among trial results in half of the strategies. The odds of response were increased by more than half using non-monetary incentives (1.72; 95% CI 1.09 to 2.72; heterogeneity P < 0.00001, I² = 95%), shorter e-questionnaires (1.73; 95% CI 1.40 to 2.13; P = 0.08, I² = 68%), including a statement that others had responded (1.52; 95% CI 1.36 to 1.70), and a more interesting topic (1.85; 95% CI 1.52 to 2.26). The odds of response increased by a third using a lottery with immediate notification of results (1.37; 95% CI 1.13 to 1.65), an offer of survey results (1.36; 95% CI 1.15 to 1.61), and using a white background (1.31; 95% CI 1.10 to 1.56). The odds of response were also increased with personalised e-questionnaires (1.24; 95% CI 1.17 to 1.32; P = 0.07, I² = 41%), using a simple header (1.23; 95% CI 1.03 to 1.48), using textual representation of response categories (1.19; 95% CI 1.05 to 1.36), and giving a deadline (1.18; 95% CI 1.03 to 1.34). The odds of response tripled when a picture was included in an e-mail (3.05; 95% CI 1.84 to 5.06; P = 0.27, I² = 19%). The odds of response were reduced when "Survey" was mentioned in the e-mail subject line (0.81; 95% CI 0.67 to 0.97; P = 0.33, I² = 0%), and when the e-mail included a male signature (0.55; 95% CI 0.38 to 0.80; P = 0.96, I² = 0%). Authors' conclusions: Health researchers using postal and electronic questionnaires can increase response using the strategies shown to be effective in this systematic review. Copyright © 2009 The Cochrane Collaboration. Published by John Wiley & Sons, Ltd.
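
As a rough, self-contained illustration of the pooling this abstract describes, the sketch below implements a DerSimonian-Laird random-effects combination of trial odds ratios, together with Cochran's Q test for heterogeneity and the I² statistic. The trial numbers in the example are invented for demonstration and are not taken from the review.

```python
# Minimal sketch of random-effects pooling of odds ratios (DerSimonian-Laird).
import numpy as np
from scipy import stats

def pool_odds_ratios(or_values, ci_lower, ci_upper):
    """Pool trial odds ratios with a DerSimonian-Laird random-effects model."""
    y = np.log(or_values)                                     # log odds ratios
    se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)   # SE from 95% CI
    w = 1.0 / se**2                                           # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                        # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)                               # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0  # inconsistency, %
    return {
        "OR": np.exp(y_re),
        "CI": (np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)),
        "I2_percent": i2,
        "heterogeneity_p": stats.chi2.sf(q, k - 1),
    }

# Illustrative (made-up) odds ratios and 95% CIs from three hypothetical trials:
print(pool_odds_ratios(np.array([1.6, 2.1, 1.8]),
                       np.array([1.2, 1.5, 1.1]),
                       np.array([2.1, 2.9, 2.9])))
```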



Relevance:

30.00%

Publisher:

Abstract:

Antibodies are very important materials for diagnostics. A rapid and simple hybridoma screening method will help in delivering specific monoclonal antibodies. In this study, we systematically developed the first antibody array to screen for bacteria-specific monoclonal antibodies, using Listeria monocytogenes as a model bacterium. The antibody array was developed to expedite the hybridoma screening process by printing hybridoma supernatants on a glass slide coated with an antigen of interest. This screening method is based on the binding ability of the supernatants to the coated antigen. The bound supernatants were detected by a fluorescently labeled anti-mouse immunoglobulin. Conditions (slide types, coating, spotting, and blocking buffers) for antibody array construction were optimized. To demonstrate its usefulness, the antibody array was used to screen a sample set of 96 hybridoma supernatants in comparison to ELISA. Most of the positive results identified by the ELISA and antibody array methods were in agreement, except for those with low signals that were undetectable by the antibody array. Hybridoma supernatants were further characterized with surface plasmon resonance to obtain additional data on the characteristics of each selected clone. While the antibody array was slightly less sensitive than ELISA, it offers a much faster and lower-cost procedure for screening clones against multiple antigens. (C) 2011 Elsevier Inc. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE The appropriate selection of patients for early clinical trials presents a major challenge. Previous analyses focusing on this problem were limited by small size and by interpractice heterogeneity. This study aims to define prognostic factors to guide risk-benefit assessments by using a large patient database from multiple phase I trials. PATIENTS AND METHODS Data were collected from 2,182 eligible patients treated in phase I trials between 2005 and 2007 in 14 European institutions. We derived and validated independent prognostic factors for 90-day mortality by using multivariate logistic regression analysis. RESULTS The 90-day mortality was 16.5%, with a drug-related death rate of 0.4%. Trial discontinuation within 3 weeks occurred in 14% of patients, primarily because of disease progression. Eight different prognostic variables for 90-day mortality were validated: performance status (PS), albumin, lactate dehydrogenase, alkaline phosphatase, number of metastatic sites, clinical tumor growth rate, lymphocytes, and WBC. Two different models of prognostic scores for 90-day mortality were generated by using these factors, including or excluding PS; both achieved specificities of more than 85% and sensitivities of approximately 50% when using a score cutoff of 5 or higher. These models were not superior to the previously published Royal Marsden Hospital score in their ability to predict 90-day mortality. CONCLUSION Patient selection using any of these prognostic scores would reduce non-drug-related 90-day mortality among patients enrolled in phase I trials by 50%. However, this could be achieved only through an overall 20% reduction in recruitment to phase I studies, and more than half of the excluded patients would in fact have survived beyond 90 days.
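
The sketch below illustrates, on synthetic data, the generic workflow this abstract describes: fit a multivariate logistic regression for 90-day mortality, convert the coefficients into a simple integer risk score, and evaluate sensitivity and specificity at a score cutoff. The predictors, weights and cutoff here are illustrative assumptions, not the study's validated model.

```python
# Minimal sketch: logistic regression -> integer risk score -> sens/spec at a cutoff.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical standardized predictors (stand-ins for albumin, LDH, etc.)
X = rng.normal(size=(n, 4))
true_beta = np.array([1.0, 0.8, 0.6, 0.4])
p = 1 / (1 + np.exp(-(X @ true_beta - 1.5)))
y = rng.binomial(1, p)                      # 1 = death within 90 days (simulated)

model = LogisticRegression().fit(X, y)
weights = np.round(2 * model.coef_[0]).astype(int)  # crude integer point weights
points = (X > 0).astype(int) @ weights              # points for "adverse" factors
cutoff = 3                                          # illustrative cutoff
pred_high_risk = points >= cutoff
sens = np.mean(pred_high_risk[y == 1])
spec = np.mean(~pred_high_risk[y == 0])
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```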

Relevance:

30.00%

Publisher:

Abstract:

Ecological coherence is a multifaceted conservation objective that includes some potentially conflicting concepts. These concepts include the extent to which the network maximises diversity (including genetic diversity) and the extent to which protected areas interact with non-reserve locations. To examine the consequences of different selection criteria, the preferred location to complement protected sites was examined using samples taken from four locations around each of two marine protected areas: Strangford Lough and Lough Hyne, Ireland. Three different measures of genetic distance were used: FST, Dest and a measure of allelic dissimilarity, along with a direct assessment of the total number of alleles in different candidate networks. Standardized site scores were used for comparisons across methods and selection criteria. The average score for Castlehaven, a site relatively close to Lough Hyne, was highest, implying that this site would capture the most genetic diversity while ensuring the highest degree of interaction between protected and unprotected sites. Patterns around Strangford Lough were more ambiguous, potentially reflecting the weaker genetic structure around this protected area in comparison to Lough Hyne. Similar patterns were found across species with different dispersal capacities, indicating that methods based on genetic distance could be used to help maximise ecological coherence in reserve networks.

Highlights:
• Ecological coherence is a key component of marine protected area network design.
• Coherence contains a number of competing concepts.
• Genetic information from field populations can help guide assessments of coherence.
• Average choice across different concepts of coherence was consistent among species.
• Measures can be combined to compare the coherence of different network designs.
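
A minimal sketch of the standardized-site-score idea used above: each candidate site is scored on several genetic distance measures, the measures are standardized so they are comparable, and the averaged score identifies the preferred complementary site. Site names and values here are invented for illustration.

```python
# Minimal sketch: combine several genetic distance measures via z-scores.
import numpy as np

sites = ["SiteA", "SiteB", "SiteC", "SiteD"]
# Rows: candidate sites; columns: measures (e.g. FST, Dest, allelic dissimilarity)
measures = np.array([
    [0.02, 0.05, 0.30],
    [0.08, 0.12, 0.55],
    [0.04, 0.07, 0.40],
    [0.10, 0.15, 0.60],
])

# Standardize each measure to zero mean, unit variance so they are comparable
z = (measures - measures.mean(axis=0)) / measures.std(axis=0)
avg_score = z.mean(axis=1)                 # average standardized score per site
best = sites[int(np.argmax(avg_score))]    # most genetically distinct candidate
print(dict(zip(sites, avg_score.round(2))), "-> preferred:", best)
```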

Relevance:

30.00%

Publisher:

Abstract:

Several products for surface treatment are available on the market to enhance the durability characteristics of concrete. For each of these materials a certain level of protection is claimed. However, there is no commonly accepted procedure to assess the effectiveness of these treatments. The inherent generic properties may be of use to the manufacturers and those responsible for specifications; practising engineers, however, are interested in knowing how they improve the performance of their structures. Thus, in this review an attempt is made to assess the engineering aspects of the various surface treatments so that a procedure for their selection can be proposed. (C) 1997 Elsevier Science Ltd.

Relevance:

30.00%

Publisher:

Abstract:

In many situations, the number of data points is fixed, and the asymptotic convergence results of popular model selection tools may not be useful. A new algorithm for model selection, RIVAL (removing irrelevant variables amidst Lasso iterations), is presented and shown to be particularly effective for a large but fixed number of data points. The algorithm is motivated by an application in nuclear material detection where all unknown parameters must be non-negative. Thus, positive Lasso and its variants are analyzed. RIVAL is then proposed and shown to have desirable properties, namely that the number of data points needed for convergence is smaller than for existing methods.
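
A minimal sketch of the non-negative ("positive") Lasso that RIVAL builds on, using scikit-learn's positive=True constraint. The single drop-and-refit step shown here only gestures at the idea of removing irrelevant variables between Lasso iterations; it is not the RIVAL algorithm itself, whose removal rule is defined in the paper.

```python
# Minimal sketch: positive Lasso plus one naive variable-removal step.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 20))
beta = np.zeros(20)
beta[:3] = [1.5, 0.8, 0.3]                  # sparse, non-negative ground truth
y = X @ beta + 0.1 * rng.normal(size=50)

lasso = Lasso(alpha=0.05, positive=True).fit(X, y)   # constrain coefficients >= 0
keep = np.flatnonzero(lasso.coef_ > 1e-6)            # drop "irrelevant" variables
refit = Lasso(alpha=0.01, positive=True).fit(X[:, keep], y)
print("selected:", keep, "coefficients:", refit.coef_.round(2))
```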

Relevance:

30.00%

Publisher:

Abstract:

High-quality data from appropriate archives are needed for the continuing improvement of radiocarbon calibration curves. We discuss here the basic assumptions behind 14C dating that necessitate calibration and the relative strengths and weaknesses of archives from which calibration data are obtained. We also highlight the procedures, problems and uncertainties involved in determining atmospheric and surface ocean 14C/12C in these archives, including a discussion of the various methods used to derive an independent absolute timescale and uncertainty. The types of data required for the current IntCal database and calibration curve model are tabulated with examples.
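
As a one-line illustration of why calibration is needed: a conventional radiocarbon age assumes a constant atmospheric 14C/12C ratio and uses the Libby mean life of 8033 years, so it must then be mapped onto calendar years with a calibration curve such as IntCal, built from the archives discussed above.

```python
# Minimal sketch: the conventional radiocarbon age (Stuiver & Polach convention).
import math

def conventional_radiocarbon_age(f14c: float) -> float:
    """Conventional 14C age (yr BP) from the normalized 14C/12C ratio F14C."""
    return -8033.0 * math.log(f14c)

# One Libby half-life of decay corresponds to ~5568 yr BP:
print(conventional_radiocarbon_age(0.5))
```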

Relevance:

30.00%

Publisher:

Abstract:

High-dimensional gene expression data provide a rich source of information because they capture the expression level of genes in dynamic states that reflect the biological functioning of a cell. For this reason, such data are suitable for revealing systems-related properties inside a cell, e.g., in order to elucidate molecular mechanisms of complex diseases like breast or prostate cancer. However, this is strongly dependent not only on the sample size and the correlation structure of a data set, but also on the statistical hypotheses tested. Many different approaches have been developed over the years to analyze gene expression data to (I) identify changes in single genes, (II) identify changes in gene sets or pathways, and (III) identify changes in the correlation structure in pathways. In this paper, we review statistical methods for all three types of approaches, including subtypes, in the context of cancer data, provide links to software implementations and tools, and also address the general problem of multiple hypothesis testing. Further, we provide recommendations for the selection of such analysis methods.
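
A minimal sketch of approach (I) above on simulated data: a per-gene two-sample t-test followed by Benjamini-Hochberg control of the false discovery rate, which is one standard answer to the multiple hypothesis testing problem the abstract mentions.

```python
# Minimal sketch: per-gene t-tests with Benjamini-Hochberg FDR correction.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_genes, n_per_group = 1000, 20
control = rng.normal(0, 1, size=(n_genes, n_per_group))
treated = rng.normal(0, 1, size=(n_genes, n_per_group))
treated[:50] += 1.0                          # 50 truly differential genes

_, pvals = stats.ttest_ind(control, treated, axis=1)
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} genes significant at FDR 5%")
```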

Relevance:

30.00%

Publisher:

Abstract:

The paper addresses the issue of the choice of bandwidth in the application of semiparametric estimation of the long memory parameter in a univariate time series process. The focus is on the properties of forecasts from the long memory model. A variety of cross-validation methods based on out-of-sample forecasting properties are proposed. These procedures are used for the choice of bandwidth and subsequent model selection. Simulation evidence is presented that demonstrates the advantage of the proposed new methodology.
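
A minimal sketch of the semiparametric estimator in question: the GPH log-periodogram regression estimate of the long memory parameter d, whose bandwidth m (the number of Fourier frequencies used) is exactly the tuning choice the paper selects by forecast-based cross-validation. The CV loop itself is omitted; the example simply shows how the estimate varies with m.

```python
# Minimal sketch: GPH log-periodogram estimate of d at several bandwidths m.
import numpy as np

def gph_estimate(x, m):
    """GPH log-periodogram regression estimate of d using m frequencies."""
    n = len(x)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    periodogram = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    regressor = -2 * np.log(2 * np.sin(freqs / 2))   # slope on this estimates d
    return np.polyfit(regressor, np.log(periodogram), 1)[0]

rng = np.random.default_rng(0)
# A persistent toy series (not a calibrated long memory process):
x = 0.05 * np.cumsum(rng.normal(size=500)) + rng.normal(size=500)
for m in (20, 40, 80):                               # candidate bandwidths
    print(f"m={m}: d_hat={gph_estimate(x, m):.3f}")
```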

Relevance:

30.00%

Publisher:

Abstract:

Correctly modelling and reasoning with uncertain information from heterogeneous sources in large-scale systems is critical when the reliability is unknown and we still want to derive adequate conclusions. To this end, context-dependent merging strategies have been proposed in the literature. In this paper we investigate how one such context-dependent merging strategy (originally defined for possibility theory), called largely partially maximal consistent subsets (LPMCS), can be adapted to Dempster-Shafer (DS) theory. We identify those measures for the degree of uncertainty and internal conflict that are available in DS theory and show how they can be used for guiding LPMCS merging. A simplified real-world power distribution scenario illustrates our framework. We also briefly discuss how our approach can be incorporated into a multi-agent programming language, thus leading to better plan selection and decision making.
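
A minimal sketch of the Dempster-Shafer machinery such merging strategies operate on: Dempster's rule of combination for two mass functions, returning the fused masses and the conflict mass K that context-dependent strategies can use to judge whether sources are consistent enough to merge. The frame of discernment and mass values are illustrative.

```python
# Minimal sketch: Dempster's rule of combination with explicit conflict mass.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) by Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                 # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: v / (1 - conflict) for s, v in combined.items()}, conflict

A, B = frozenset("a"), frozenset("b")
AB = A | B
m1 = {A: 0.6, AB: 0.4}
m2 = {B: 0.5, AB: 0.5}
fused, k = dempster_combine(m1, m2)
print(fused, "conflict K =", k)
```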

Relevance:

30.00%

Publisher:

Abstract:

Virtual metrology (VM) aims to predict metrology values using sensor data from production equipment and physical metrology values of preceding samples. VM is a promising technology for the semiconductor manufacturing industry as it can reduce the frequency of in-line metrology operations and provide supportive information for other operations such as fault detection, predictive maintenance and run-to-run control. The prediction models for VM can be from a large variety of linear and nonlinear regression methods and the selection of a proper regression method for a specific VM problem is not straightforward, especially when the candidate predictor set is of high dimension, correlated and noisy. Using process data from a benchmark semiconductor manufacturing process, this paper evaluates the performance of four typical regression methods for VM: multiple linear regression (MLR), least absolute shrinkage and selection operator (LASSO), neural networks (NN) and Gaussian process regression (GPR). It is observed that GPR performs the best among the four methods and that, remarkably, the performance of linear regression approaches that of GPR as the subset of selected input variables is increased. The observed competitiveness of high-dimensional linear regression models, which does not hold true in general, is explained in the context of extreme learning machines and functional link neural networks.
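
A minimal sketch of the comparison described above, using scikit-learn implementations of the four regression families and cross-validated RMSE on a synthetic high-dimensional stand-in for the benchmark process data (the actual data set is not reproduced here).

```python
# Minimal sketch: cross-validated comparison of MLR, LASSO, NN and GPR.
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))                 # high-dimensional, noisy "sensors"
y = X[:, :5] @ rng.normal(size=5) + 0.5 * rng.normal(size=200)

models = {
    "MLR": LinearRegression(),
    "LASSO": LassoCV(),
    "NN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "GPR": GaussianProcessRegressor(),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error")
    print(f"{name}: RMSE = {-score.mean():.3f}")
```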

Relevance:

30.00%

Publisher:

Abstract:

Dynamic economic load dispatch (DELD) is one of the most important steps in power system operation. Various optimisation algorithms for solving the problem have been developed; however, due to the non-convex characteristics and large dimensionality of the problem, it is necessary to explore new methods to further improve the dispatch results and minimise the costs. This article proposes a hybrid differential evolution (DE) algorithm, namely clonal selection-based differential evolution (CSDE), to solve the problem. CSDE is an artificial intelligence technique that can be applied to complex optimisation problems that are, for example, nonlinear, large-scale, non-convex and discontinuous. This hybrid algorithm combines the clonal selection algorithm (CSA) as the local search technique to update the best individual in the population, which enhances the diversity of the solutions and prevents premature convergence in DE. Furthermore, we investigate four mutation operations which are used in CSA as the hyper-mutation operations. Finally, an efficient solution repair method is designed for DELD to satisfy the complicated equality and inequality constraints of the power system to guarantee the feasibility of the solutions. Two benchmark power systems are used to evaluate the performance of the proposed method. The experimental results show that the proposed CSDE/best/1 approach significantly outperforms nine other variants of CSDE and DE, as well as most other published methods, in terms of the quality of the solution and the convergence characteristics.
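
A minimal sketch of the DE/best/1 mutation and binomial crossover step at the core of a "/best/1" DE variant like the one above, applied to a toy quadratic objective; the clonal-selection local search and the dispatch constraint-repair method are omitted.

```python
# Minimal sketch: one generation of DE/best/1/bin on a toy minimisation problem.
import numpy as np

def objective(x):                                    # toy stand-in for fuel cost
    return float(np.sum(x**2))

def de_best_1_step(pop, fitness, rng, F=0.5, CR=0.9):
    """One generation of DE/best/1 with binomial crossover and greedy selection."""
    n, d = pop.shape
    best = pop[np.argmin(fitness)]
    new_pop = pop.copy()
    for i in range(n):
        r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        mutant = best + F * (pop[r1] - pop[r2])      # DE/best/1 mutation
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True                # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        if objective(trial) < fitness[i]:            # greedy one-to-one selection
            new_pop[i] = trial
    return new_pop

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 4))
for _ in range(50):
    fit = np.array([objective(p) for p in pop])
    pop = de_best_1_step(pop, fit, rng)
print("best cost:", min(objective(p) for p in pop))
```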

Relevance:

30.00%

Publisher:

Abstract:

We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time-series in four Pan-STARRS1 photometric bands gP1, rP1, iP1, and zP1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model, and one stochastic light-curve model, the Ornstein-Uhlenbeck process, in order to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm on these statistics, to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets, to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
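
A minimal sketch of two ingredients of the pipeline above: the corrected Akaike information criterion (AICc) for a fitted light-curve model, and K-means clustering of per-source fit statistics. The feature values below are synthetic; the real pipeline clusters statistics derived from the actual model fits in each band.

```python
# Minimal sketch: AICc for a fitted model, and K-means on fit statistics.
import numpy as np
from sklearn.cluster import KMeans

def aicc(log_likelihood, k_params, n_points):
    """Corrected AIC: AIC plus the small-sample penalty term."""
    aic = 2 * k_params - 2 * log_likelihood
    return aic + 2 * k_params * (k_params + 1) / (n_points - k_params - 1)

# Imagine each source summarized by two fit statistics (e.g. a BL-vs-SV AICc
# difference and a cross-validation likelihood ratio); two clusters should
# then separate burst-like from stochastic sources.
rng = np.random.default_rng(0)
stats_bl = rng.normal([-5.0, 2.0], 1.0, size=(100, 2))
stats_sv = rng.normal([5.0, -2.0], 1.0, size=(100, 2))
features = np.vstack([stats_bl, stats_sv])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print("cluster sizes:", np.bincount(labels))
```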

Relevance:

30.00%

Publisher:

Abstract:

In this study, a comparison of different methods to predict drug-polymer solubility was carried out on binary systems consisting of five model drugs (paracetamol, chloramphenicol, celecoxib, indomethacin, and felodipine) and polyvinylpyrrolidone/vinyl acetate copolymers (PVP/VA) of different monomer weight ratios. The drug-polymer solubility at 25 °C was predicted using the Flory-Huggins model, from data obtained at elevated temperature using thermal analysis methods based on the recrystallization of a supersaturated amorphous solid dispersion and two variations of the melting point depression method. These predictions were compared with the solubility in the low molecular weight liquid analogues of the PVP/VA copolymer (N-vinylpyrrolidone and vinyl acetate). The predicted solubilities at 25 °C varied considerably depending on the method used. However, the three thermal analysis methods ranked the predicted solubilities in the same order, except for the felodipine-PVP system. Furthermore, the magnitude of the predicted solubilities from the recrystallization method and the melting point depression method correlated well with the estimates based on the solubility in the liquid analogues, which suggests that this method can be used as an initial screening tool if a liquid analogue is available. The findings of this comparative study provide general guidance for the selection of the most suitable method(s) for the screening of drug-polymer solubility.
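
A minimal sketch of the Flory-Huggins melting point depression relation that such thermal methods fit in order to extract the interaction parameter χ and extrapolate drug-polymer solubility; the parameter values below are illustrative, not measured values from the study.

```python
# Minimal sketch: Flory-Huggins melting point depression of a drug in a polymer.
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def depressed_melting_point(phi_drug, tm_pure, dh_fus, m, chi):
    """Tm of the mixture from 1/Tm(mix) - 1/Tm(pure) = -(R/dHfus) * FH terms."""
    phi_poly = 1 - phi_drug
    inv_tm = (1 / tm_pure) - (R / dh_fus) * (
        np.log(phi_drug) + (1 - 1 / m) * phi_poly + chi * phi_poly**2
    )
    return 1 / inv_tm

# Illustrative, roughly drug-like numbers (Tm in K, enthalpy of fusion in J/mol,
# m = polymer/drug molar volume ratio, chi = interaction parameter):
print(depressed_melting_point(phi_drug=0.8, tm_pure=430.0,
                              dh_fus=30_000.0, m=100, chi=-1.0))
```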

Relevance:

30.00%

Publisher:

Abstract:

Purpose The aim of this work was to examine, for amorphous solid dispersions, how the thermal analysis method selected impacts on the construction of thermodynamic phase diagrams, and to assess the predictive value of such phase diagrams in the selection of optimal, physically stable API-polymer compositions. Methods Thermodynamic phase diagrams for two API/polymer systems (naproxen/HPMC AS LF and naproxen/Kollidon 17 PF) were constructed from data collected using two different thermal analysis methods. The "dynamic" method involved heating the physical mixture at a rate of 1 °C/minute. In the "static" approach, samples were held at a temperature above the polymer Tg for prolonged periods, prior to scanning at 10 °C/minute. Subsequent to construction of phase diagrams, solid dispersions consisting of API-polymer compositions representative of different zones in the phase diagrams were spray dried and characterised using DSC, pXRD, TGA, FTIR, DVS and SEM. The stability of these systems was investigated under the following conditions: 25 °C, desiccated; 25 °C, 60% RH; 40 °C, desiccated; 40 °C, 60% RH. Results Endset depression occurred with increasing polymer volume fraction (Figure 1a). In conjunction with this data, Flory-Huggins and Gordon-Taylor theory were applied to construct thermodynamic phase diagrams (Figure 1b). The Flory-Huggins interaction parameter (χ) for naproxen and HPMC AS LF was +0.80 and +0.72 for the dynamic and static methods, respectively. For naproxen and Kollidon 17 PF, the dynamic data resulted in an interaction parameter of -1.1 and the isothermal data produced a value of -2.2. For both systems, the API appeared to be less soluble in the polymer when the dynamic approach was used. Stability studies of spray dried solid dispersions could be used as a means of validating the thermodynamic phase diagrams. Conclusion The thermal analysis method used to collate data has a deterministic effect on the phase diagram produced. This effect should be considered when constructing thermodynamic phase diagrams, as they can be a useful tool in predicting the stability of amorphous solid dispersions.
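
A minimal sketch of the Gordon-Taylor relation applied above to estimate the glass transition temperature of an API-polymer mixture; the inputs are illustrative, not the naproxen/polymer values from the study.

```python
# Minimal sketch: Gordon-Taylor estimate of the Tg of a binary mixture.
def gordon_taylor(w1, tg1, tg2, k):
    """Tg of a binary mixture: (w1*Tg1 + K*w2*Tg2) / (w1 + K*w2)."""
    w2 = 1 - w1
    return (w1 * tg1 + k * w2 * tg2) / (w1 + k * w2)

# Illustrative weight fraction, component Tg values (K) and constant K,
# which in practice is often estimated from component densities and Tg values:
print(gordon_taylor(w1=0.3, tg1=320.0, tg2=400.0, k=0.4))
```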