954 results for Sample selection


Relevance:

30.00%

Publisher:

Abstract:

In this study, novel methodologies for the determination of antioxidative compounds in herbs and beverages were developed. Antioxidants are compounds that can reduce, delay or inhibit oxidative events. They are a part of the human defense system and are obtained through the diet. Antioxidants are naturally present in several types of foods, e.g. in fruits, beverages, vegetables and herbs. Antioxidants can also be added to foods during manufacturing to suppress lipid oxidation and formation of free radicals under conditions of cooking or storage and to reduce the concentration of free radicals in vivo after food ingestion. There is growing interest in natural antioxidants, and effective compounds have already been identified from antioxidant classes such as carotenoids, essential oils, flavonoids and phenolic acids. The wide variety of sample matrices and analytes presents quite a challenge for the development of analytical techniques. Growing demands have been placed on sample pretreatment. In this study, three novel extraction techniques, namely supercritical fluid extraction (SFE), pressurised hot water extraction (PHWE) and dynamic sonication-assisted extraction (DSAE), were studied. SFE was used for the extraction of lycopene from tomato skins and PHWE was used in the extraction of phenolic compounds from sage. DSAE was applied to the extraction of phenolic acids from Lamiaceae herbs. In the development of extraction methodologies, the main parameters of the extraction were studied and the recoveries were compared to those achieved by conventional extraction techniques. In addition, the stability of lycopene was followed under different storage conditions. For the separation of the antioxidative compounds in the extracts, liquid chromatography (LC) methods were utilised.
Two novel LC techniques, namely ultra performance liquid chromatography (UPLC) and comprehensive two-dimensional liquid chromatography (LCxLC), were studied and compared with conventional high performance liquid chromatography (HPLC) for the separation of antioxidants in beverages and Lamiaceae herbs. In LCxLC, the selection of LC mode, column dimensions and flow rates was studied and optimised to obtain efficient separation of the target compounds. In addition, the separation powers of HPLC, UPLC, HPLCxHPLC and HPLCxUPLC were compared. To exploit the benefits of an integrated system, in which sample preparation and final separation are performed in a closed unit, dynamic sonication-assisted extraction was coupled on-line to a liquid chromatograph via a solid-phase trap. The increased sensitivity was utilised in the extraction of phenolic acids from Lamiaceae herbs. The results were compared to those achieved by the LCxLC system.

Relevance:

30.00%

Publisher:

Abstract:

This paper reports a measurement of the cross section for the pair production of top quarks in ppbar collisions at sqrt(s) = 1.96 TeV at the Fermilab Tevatron. The data were collected with the CDF II detector in a set of runs with a total integrated luminosity of 1.1 fb^{-1}. The cross section is measured in the dilepton channel, the subset of ttbar events in which both top quarks decay through t -> Wb -> l nu b, where l = e, mu, or tau. The lepton pair is reconstructed as one identified electron or muon and one isolated track. The use of an isolated track to identify the second lepton increases the ttbar acceptance, particularly for the case in which one W decays as W -> tau nu. The purity of the sample may be further improved, at the cost of a reduction in the number of signal events, by requiring an identified b-jet. We present the results of measurements performed with and without the requirement of an identified b-jet. The former is the first published CDF result for which a b-jet requirement is added to the dilepton selection. In the CDF data there are 129 pretag lepton + track candidate events, of which 69 are tagged. With the tagging information, the sample is divided into tagged and untagged sub-samples, and a combined cross section is calculated by maximizing a likelihood. The result is sigma_{ttbar} = 9.6 +/- 1.2 (stat.) -0.5 +0.6 (sys.) +/- 0.6 (lum.) pb, assuming a branching ratio of BR(W -> ell nu) = 10.8% and a top mass of m_t = 175 GeV/c^2.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a methodology for selecting the location of a static VAR compensator based on static voltage stability analysis of power systems. The analysis presented here uses the L-index of load buses, which incorporates voltage stability information from a normal load flow and lies in the range of 0 (no system load) to 1 (voltage collapse). An approach is presented to select a suitable size and location of a static VAR compensator in an EHV network for system voltage stability improvement. The proposed approach has been tested under simulated conditions on a few power systems, and the results for a sample radial network and a 24-node equivalent EHV power network of a practical system are presented for illustration purposes. © 2000 Published by Elsevier Science S.A. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Single receive antenna selection (AS) is a popular method for obtaining diversity benefits without the additional costs of multiple radio receiver chains. Since only one antenna receives at any time, the transmitter sends a pilot multiple times to enable the receiver to estimate the channel gains of its N antennas to the transmitter and select an antenna. In time-varying channels, the channel estimates of different antennas are outdated to different extents. We analyze the symbol error probability (SEP) in time-varying channels of the N-pilot and (N+1)-pilot AS training schemes. In the former, the transmitter sends one pilot for each receive antenna. In the latter, the transmitter sends one additional pilot that helps sample the channel fading process of the selected antenna twice. We present several new results about the SEP, optimal energy allocation across pilots and data, and optimal selection rule in time-varying channels for the two schemes. We show that due to the unique nature of AS, the (N+1)-pilot scheme, despite its longer training duration, is much more energy-efficient than the conventional N-pilot scheme. An extension to a practical scenario where all data symbols of a packet are received by the same antenna is also investigated.
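The diversity benefit of picking the strongest of N receive antennas can be sketched with a small Monte Carlo simulation. This is an illustrative sketch only: it assumes i.i.d. Rayleigh fading and perfect channel estimates, and does not model the outdated estimates or pilot energy allocation analysed in the paper; all names and parameters are my own.

```python
import random

def rayleigh_gain(rng):
    # |h|^2 for a unit-variance complex Gaussian channel is Exponential(1)
    return rng.expovariate(1.0)

def average_selected_gain(n_antennas, trials=20000, seed=1):
    """Average post-selection channel gain when the receiver picks,
    from its pilot estimates, the strongest of n_antennas branches
    (ideal selection, i.e. perfectly fresh estimates)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        gains = [rayleigh_gain(rng) for _ in range(n_antennas)]
        total += max(gains)  # select the best antenna
    return total / trials

# For i.i.d. Rayleigh fading, selection over N antennas boosts the
# mean gain from 1.0 to the harmonic number sum_{k=1}^{N} 1/k.
print(average_selected_gain(1))  # ~1.0
print(average_selected_gain(4))  # ~1 + 1/2 + 1/3 + 1/4 ≈ 2.08
```

Outdated estimates, as discussed above, shrink this gain, which is why the extra pilot of the (N+1)-pilot scheme pays off.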

Relevance:

30.00%

Publisher:

Abstract:

H.264/Advanced Video Coding (AVC) surveillance video encoders use the Skip mode specified by the standard to reduce bandwidth. They also use multiple frames as reference for motion-compensated prediction. In this paper, we propose two techniques to reduce the bandwidth and computational cost of static camera surveillance video encoders without affecting detection and recognition performance. A spatial sampler is proposed to sample pixels that are segmented using a Gaussian mixture model. Modified weight updates are derived for the parameters of the mixture model to reduce floating point computations. The storage pattern of the parameters in memory is also modified to improve cache performance. Skip selection is performed using the segmentation results of the sampled pixels. The second contribution is a low computational cost algorithm to choose the reference frames. The proposed reference frame selection algorithm reduces the cost of coding uncovered background regions. We also study the number of reference frames required to achieve good coding efficiency. Distortion over foreground pixels is measured to quantify the performance of the proposed techniques. Experimental results show bit rate savings of up to 94.5% over methods proposed in the literature on video surveillance data sets. The proposed techniques also provide up to 74.5% reduction in compression complexity without increasing the distortion over the foreground regions in the video sequence.
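The per-pixel Gaussian mixture segmentation mentioned above rests on a simple weight update: the component that matches the current pixel is reinforced, the others decay. The sketch below shows the standard Stauffer-Grimson style update; the paper's modified, floating-point-light updates are not reproduced, and the learning rate is a hypothetical value.

```python
def update_mixture_weights(weights, matched_index, alpha=0.05):
    """One Stauffer-Grimson style weight update for a per-pixel
    Gaussian mixture: the matched component's weight grows by alpha,
    all weights decay by (1 - alpha), and the result is renormalised.
    Illustrative sketch, not the paper's modified update."""
    new_w = [(1.0 - alpha) * w for w in weights]
    new_w[matched_index] += alpha
    s = sum(new_w)
    return [w / s for w in new_w]

# Three components per pixel; component 0 (background) matches.
w = [0.7, 0.2, 0.1]
w = update_mixture_weights(w, matched_index=0)
print(w)  # matched component gains weight, others decay
```

Components with persistently high weight are treated as background, which is what drives the Skip-mode decision for the sampled pixels.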

Relevance:

30.00%

Publisher:

Abstract:

We aimed to study the selective pressures acting on SLC45A2 to investigate the interplay between selection and susceptibility to disease. Thus, we enrolled 500 volunteers from a geographically limited population (Basques from the North of Spain) and, by resequencing the whole coding region and intron 5 of the 34 most and the 34 least pigmented individuals according to the reflectance distribution, we observed that the polymorphism Leu374Phe (L374F, rs16891982) was statistically associated with skin color variability within this sample. In particular, allele 374F was significantly more frequent among the individuals with lighter skin. Genotyping an independent set of 558 individuals from a geographically wider population with known ancestry in the Spanish population further revealed that the frequency of L374F was significantly correlated with the incident UV radiation intensity. Selection tests suggest that allele 374F is being positively selected in South Europeans, thus indicating that depigmentation is an adaptive process. Interestingly, by genotyping 119 melanoma samples, we show that this variant is also associated with an increased susceptibility to melanoma in our populations. The ultimate driving force for this adaptation is unknown, but it is compatible with the vitamin D hypothesis. This shows that molecular evolution analysis can be used as a useful tool to predict phenotypic and biomedical consequences in humans.
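A frequency comparison of this kind can be sketched with a two-proportion z-test on allele counts. The counts below are hypothetical, chosen only to illustrate the calculation; they are not the study's data, and the study's actual association testing may differ.

```python
import math

def two_proportion_z_test(k1, n1, k2, n2):
    """Two-sided z-test for equality of two allele frequencies:
    k successes out of n chromosomes in each group. Returns (z, p)."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                      # pooled frequency
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # normal CDF via the error function: Phi(x) = (1 + erf(x/sqrt(2))) / 2
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical 374F counts in lighter- vs darker-skinned groups
z, p = two_proportion_z_test(50, 68, 28, 68)
print(round(z, 2), round(p, 4))
```

A large positive z with a small p would mirror the reported excess of 374F among lighter-skinned individuals.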

Relevance:

30.00%

Publisher:

Abstract:

In chemistry, variable selection is a key step in the chemical analysis of multi-component samples and in quantitative structure-activity/property relationship (QSAR/QSPR) studies. In this study, comparisons between different methods were performed. These include three classical methods (forward selection, backward elimination and stepwise regression), as well as orthogonal descriptors, leaps-and-bounds regression and a genetic algorithm. Thirty-five nitrobenzenes were taken as the data set. From these structures, quantum chemical parameters, topological indices and an indicator variable were extracted as the descriptors for the comparison of variable selection methods. Interesting results were obtained. (C) 2001 Elsevier Science B.V. All rights reserved.
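Forward selection, the first of the classical methods listed, can be sketched as a greedy loop: repeatedly add the descriptor most correlated with the current residual, then remove its univariate fit from the residual. This matching-pursuit flavoured sketch is my own simplification, not the paper's exact procedure, and the toy data are hypothetical.

```python
def forward_select(X, y, k):
    """Greedy forward selection of k descriptors: at each step, pick
    the unused column with the largest |<x_j, r>| / ||x_j|| against
    the residual r, then deflate the residual by its univariate fit."""
    p = len(X[0])
    residual = list(y)
    chosen = []
    for _ in range(k):
        best_j, best_score = None, -1.0
        for j in range(p):
            if j in chosen:
                continue
            xj = [row[j] for row in X]
            num = sum(a * b for a, b in zip(xj, residual))
            den = sum(a * a for a in xj) ** 0.5
            score = abs(num) / den if den else 0.0
            if score > best_score:
                best_j, best_score = j, score
        chosen.append(best_j)
        xj = [row[best_j] for row in X]
        beta = sum(a * b for a, b in zip(xj, residual)) / sum(a * a for a in xj)
        residual = [r - beta * a for r, a in zip(residual, xj)]
    return chosen

# Toy descriptor matrix: the response depends only on column 0
X = [[1.0, 0.3], [2.0, -0.1], [3.0, 0.2], [4.0, 0.05]]
y = [2.0, 4.1, 5.9, 8.0]
print(forward_select(X, y, 1))  # column 0 is picked first
```

Backward elimination and stepwise regression modify this loop by removing, or reconsidering, descriptors at each step.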

Relevance:

30.00%

Publisher:

Abstract:

This study investigates face recognition with partial occlusion, illumination variation and their combination, assuming no prior information about the mismatch and limited training data for each person. The authors extend their previous posterior union model (PUM) to give a new method capable of dealing with all these problems. PUM is an approach for selecting the optimal local image features for recognition to improve robustness to partial occlusion. The extension is in two stages. First, the authors extend PUM from a probability-based formulation to a similarity-based formulation, so that it operates with as little as a single training sample to offer robustness to partial occlusion. Second, they extend this new formulation to make it robust to illumination variation, and to combined illumination variation and partial occlusion, by a novel combination of multicondition relighting and optimal feature selection. To evaluate the new methods, a number of databases with various simulated and realistic occlusion/illumination mismatches have been used. The results have demonstrated the improved robustness of the new methods.

Relevance:

30.00%

Publisher:

Abstract:

Antibodies are very important materials for diagnostics. A rapid and simple hybridoma screening method will help in delivering specific monoclonal antibodies. In this study, we systematically developed the first antibody array to screen for bacteria-specific monoclonal antibodies, using Listeria monocytogenes as a bacteria model. The antibody array was developed to expedite the hybridoma screening process by printing hybridoma supernatants on a glass slide coated with an antigen of interest. This screening method is based on the binding ability of supernatants to the coated antigen. The bound supernatants were detected by a fluorescently labeled anti-mouse immunoglobulin. Conditions (slide types, coating, spotting, and blocking buffers) for antibody array construction were optimized. To demonstrate its usefulness, the antibody array was used to screen a sample set of 96 hybridoma supernatants in comparison to ELISA. Most of the positive results identified by the ELISA and antibody array methods were in agreement, except for those with low signals that were undetectable by the antibody array. Hybridoma supernatants were further characterized with surface plasmon resonance to obtain additional data on the characteristics of each selected clone. While the antibody array was slightly less sensitive than ELISA, a much faster and lower cost procedure to screen clones against multiple antigens has been demonstrated. (C) 2011 Elsevier Inc. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Model selection between competing models is a key consideration in the discovery of prognostic multigene signatures. The use of appropriate statistical performance measures, as well as verification of the biological significance of the signatures, is imperative to maximise the chance of external validation of the generated signatures. Current approaches in time-to-event studies often use only a single measure of performance in model selection, such as log-rank test p-values, or dichotomise the follow-up times at some phase of the study to facilitate signature discovery. In this study we improve the prognostic signature discovery process through the application of the multivariate partial Cox model combined with the concordance index, hazard ratio of predictions, independence from available clinical covariates and biological enrichment as measures of signature performance. The proposed framework was applied to discover prognostic multigene signatures from early breast cancer data. The partial Cox model, combined with the multiple performance measures, was used both in guiding the selection of the optimal panel of prognostic genes and in the prediction of risk within cross validation, without dichotomising the follow-up times at any stage. The signatures were successfully externally cross validated in independent breast cancer datasets, yielding a hazard ratio of 2.55 [1.44, 4.51] for the top ranking signature.
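The concordance index used as one of the performance measures above can be computed directly from follow-up times, event indicators and risk scores. Below is a minimal sketch of Harrell's C for right-censored data on hypothetical values, not the cross-validated pipeline of the study.

```python
def concordance_index(times, events, risk_scores):
    """Harrell's concordance index: the fraction of comparable pairs
    in which the subject with the higher risk score is observed to
    fail earlier. A pair (i, j) is comparable when subject i has an
    observed event before time j; ties in score count as 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

times  = [2.0, 4.0, 6.0, 8.0]   # follow-up times (hypothetical)
events = [1,   1,   0,   1]     # 1 = event observed, 0 = censored
risk   = [3.1, 2.5, 1.0, 0.4]   # higher score = predicted higher risk
print(concordance_index(times, events, risk))  # 1.0: perfectly ranked
```

A C of 0.5 corresponds to random ranking, 1.0 to perfect ranking, which is why it complements the hazard ratio as a selection criterion without dichotomising follow-up times.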

Relevance:

30.00%

Publisher:

Abstract:

The paper addresses the issue of the choice of bandwidth in the application of semiparametric estimation of the long memory parameter in a univariate time series process. The focus is on the properties of forecasts from the long memory model. A variety of cross-validation methods based on out-of-sample forecasting properties are proposed. These procedures are used for the choice of bandwidth and subsequent model selection. Simulation evidence is presented that demonstrates the advantage of the proposed new methodology.
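The out-of-sample principle behind such cross-validation can be sketched with a rolling-origin evaluation: fit on the observed history, score the one-step-ahead forecast, roll forward. The forecasters below are generic stand-ins to show the mechanism, not the long-memory models or bandwidth choices of the paper.

```python
def rolling_forecast_mse(series, forecaster, min_train=5):
    """Rolling-origin evaluation: at each step t, forecast series[t]
    from series[:t] and record the squared error; return the mean.
    Candidate models (or bandwidths) are then ranked by this score."""
    errors = []
    for t in range(min_train, len(series)):
        pred = forecaster(series[:t])
        errors.append((series[t] - pred) ** 2)
    return sum(errors) / len(errors)

mean_forecast = lambda h: sum(h) / len(h)   # long-run mean
naive_forecast = lambda h: h[-1]            # last observation

# A slowly wandering (persistent) series favours the naive rule
series = [0.0, 0.2, 0.5, 0.9, 1.4, 1.8, 2.1, 2.5, 2.8, 3.2, 3.5]
print(rolling_forecast_mse(series, mean_forecast))
print(rolling_forecast_mse(series, naive_forecast))
```

In the paper's setting, the candidates being ranked are bandwidths for the semiparametric long-memory estimator rather than whole forecasting rules.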

Relevance:

30.00%

Publisher:

Abstract:

We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time-series in four Pan-STARRS1 photometric bands gP1, rP1, iP1, and zP1. We use three deterministic light-curve models to fit BL transients (a Gaussian, a Gamma distribution, and an analytic supernova (SN) model), and one stochastic light-curve model, the Ornstein-Uhlenbeck process, to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm on these statistics, to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SVs and BL transients occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs, and 90.97% for SNe based on our verification sets.
We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets, to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.

Relevance:

30.00%

Publisher:

Abstract:

Background: Selection bias in HIV prevalence estimates occurs if non-participation in testing is correlated with HIV status. Longitudinal data suggest that individuals who know or suspect they are HIV positive are less likely to participate in testing in HIV surveys, in which case methods to correct for missing data which are based on imputation and observed characteristics will produce biased results. Methods: The identity of the HIV survey interviewer is typically associated with HIV testing participation, but is unlikely to be correlated with HIV status. Interviewer identity can thus be used as a selection variable allowing estimation of Heckman-type selection models. These models produce asymptotically unbiased HIV prevalence estimates, even when non-participation is correlated with unobserved characteristics, such as knowledge of HIV status. We introduce a new random effects method to these selection models which overcomes non-convergence caused by collinearity, small sample bias, and incorrect inference in existing approaches. Our method is easy to implement in standard statistical software, and allows the construction of bootstrapped standard errors which adjust for the fact that the relationship between testing and HIV status is uncertain and needs to be estimated. Results: Using nationally representative data from the Demographic and Health Surveys, we illustrate our approach with new point estimates and confidence intervals (CI) for HIV prevalence among men in Ghana (2003) and Zambia (2007). In Ghana, we find little evidence of selection bias as our selection model gives an HIV prevalence estimate of 1.4% (95% CI 1.2% – 1.6%), compared to 1.6% among those with a valid HIV test. In Zambia, our selection model gives an HIV prevalence estimate of 16.3% (95% CI 11.0% – 18.4%), compared to 12.1% among those with a valid HIV test. Those who decline to test in Zambia are therefore found to be more likely to be HIV positive.
Conclusions: Our approach corrects for selection bias in HIV prevalence estimates, is possible to implement even when HIV prevalence or non-participation is very high or very low, and provides a practical solution to account for both sampling and parameter uncertainty in the estimation of confidence intervals. The wide confidence intervals estimated in an example with high HIV prevalence indicate that it is difficult to correct statistically for the bias that may occur when a large proportion of people refuse to test.
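A core ingredient of Heckman-type selection models is the inverse Mills ratio, the correction term appended to the outcome equation to absorb selection on unobservables. The sketch below computes just that ingredient; the probit selection equation, the interviewer selection variable, and the paper's random-effects machinery are not reproduced.

```python
import math

def normal_pdf(x):
    # standard normal density phi(x)
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def normal_cdf(x):
    # standard normal CDF Phi(x) via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def inverse_mills_ratio(z):
    """lambda(z) = phi(z) / Phi(z), evaluated at the fitted probit
    index z of the selection (testing-participation) equation."""
    return normal_pdf(z) / normal_cdf(z)

# At z = 0 (a 50% predicted testing probability) the ratio is 2*phi(0)
print(round(inverse_mills_ratio(0.0), 4))  # 0.7979
```

In a two-step Heckman estimator this ratio enters the prevalence equation as an extra regressor; its coefficient captures how strongly non-participation is related to HIV status.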

Relevance:

30.00%

Publisher:

Abstract:

Building on the instrumental model of group conflict (IMGC), the present experiment investigates support for discriminatory and meritocratic selection methods at university in a sample of local and immigrant students. Results showed that local students supported a selection method that favors them over immigrants in a larger proportion than a method that consists in selecting the best applicants regardless of origin. Supporting the assumption of the IMGC, this effect was stronger for locals who perceived immigrants as competing for resources. Immigrant students supported the meritocratic selection method more strongly than the one that discriminated against them. However, contrasting with the assumption of the IMGC, this effect was only present in students who perceived immigrants as weakly competing for locals' resources. Results demonstrate that selection methods used at university can be perceived differently depending on students' origin. Further, they suggest that the mechanisms underlying the perception of discriminatory and meritocratic selection methods differ between local and immigrant students. Hence, the present experiment makes a theoretical contribution to the IMGC by delimiting its assumptions to the ingroup facing a competitive situation with a relevant outgroup. Practical implications for university recruitment policies are discussed.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we study several tests for the equality of two unknown distributions. Two are based on empirical distribution functions, three others on nonparametric probability density estimates, and the last ones on differences between sample moments. We suggest controlling the size of such tests (under nonparametric assumptions) by using permutational versions of the tests jointly with the method of Monte Carlo tests properly adjusted to deal with discrete distributions. We also propose a combined test procedure, whose level is again perfectly controlled through the Monte Carlo test technique and has better power properties than the individual tests that are combined. Finally, in a simulation experiment, we show that the technique suggested provides perfect control of test size and that the new tests proposed can yield sizeable power improvements.
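The permutational idea described above can be sketched for the simplest case, a two-sample test on the difference of sample means: under the null of equal distributions, relabelling the pooled observations leaves the statistic's distribution unchanged. This is a simplified illustration with one statistic and hypothetical data; the paper's Monte Carlo adjustment for discrete distributions and its combined test procedure are not reproduced.

```python
import random

def permutation_test_mean_diff(x, y, n_perm=2000, seed=42):
    """Two-sample permutation test on |mean(x) - mean(y)|: shuffle
    the pooled sample, resplit, and count permuted statistics at
    least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    n_x = len(x)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = abs(sum(pooled[:n_x]) / n_x
                   - sum(pooled[n_x:]) / (len(pooled) - n_x))
        if stat >= observed:
            extreme += 1
    # add-one rule keeps the estimated p-value strictly positive
    return (extreme + 1) / (n_perm + 1)

a = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2]
b = [6.0, 6.3, 5.9, 6.1, 6.4, 6.2]
print(permutation_test_mean_diff(a, b))  # small p: distributions differ
```

Repeating this for each of the statistics considered in the paper, and combining them, is what the combined Monte Carlo procedure controls at an exact overall level.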