998 results for CEO selection


Relevance: 20.00%

Abstract:

We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time series in four Pan-STARRS1 photometric bands: gP1, rP1, iP1, and zP1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model; and one stochastic light-curve model, the Ornstein-Uhlenbeck process, to fit the variability characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm to these statistics to determine the source classification in each band. The final source classification is derived by combining the individual filter classifications, yielding two measures of classification quality obtained by averaging, across the photometric filters, (1) the classifications determined from the closest K-means cluster centers and (2) the squared distances from the cluster centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL sources occupy distinct regions in the plane defined by these measures. We use our clustering method to characterize 4361 extragalactic image-difference-detected sources in the first 2.5 yr of the PS1 MDS into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe, based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
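The band-wise classification step lends itself to a compact illustration. The following is a minimal sketch, not the authors' pipeline: it assumes the corrected AICc and leave-one-out cross-validation log-likelihoods of the best burst-like fit and of the Ornstein-Uhlenbeck fit are already computed for every source in one band (the array names aicc_bl, aicc_sv, loocv_bl and loocv_sv are placeholders), and uses scikit-learn's K-means to split the sources into two clusters, returning the label of the nearest centre and the squared distance to it, i.e. the two quality measures described above.

import numpy as np
from sklearn.cluster import KMeans

def classify_band(aicc_bl, aicc_sv, loocv_bl, loocv_sv):
    """Cluster the sources of one band into two classes (BL vs SV).

    All four inputs are 1-D arrays with one entry per source (placeholder
    names, not the paper's notation). Returns the cluster label of the
    closest K-means centre and the squared distance to that centre.
    """
    # Differences of the fit statistics between the BL and SV model families
    X = np.column_stack([aicc_bl - aicc_sv, loocv_bl - loocv_sv])
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    sq_dist = np.min(km.transform(X) ** 2, axis=1)  # squared distance to nearest centre
    return km.labels_, sq_dist

# The final classification would then average the labels and squared
# distances over the gP1, rP1, iP1 and zP1 bands.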

Relevance: 20.00%

Abstract:

Purpose: The selection of suitable outcomes and sample size calculation are critical factors in the design of a randomised controlled trial (RCT). The goal of this study was to identify the range of outcomes and the information on sample size calculation in RCTs on geographic atrophy (GA). Methods: We carried out a systematic review of age-related macular degeneration (AMD) RCTs. We searched MEDLINE, EMBASE, Scopus, the Cochrane Library, www.controlled-trials.com, and www.ClinicalTrials.gov. Two independent reviewers screened records. One reviewer collected data and the second reviewer appraised 10% of the collected data. We scanned the reference lists of selected papers to include other relevant RCTs. Results: The literature and registry searches identified 3816 abstracts of journal articles and 493 records from trial registries. From a total of 177 RCTs on all types of AMD, 23 RCTs on GA were included. Eighty-one clinical outcomes were identified. Visual acuity (VA) was the most frequently used outcome, reported in 18 of the 23 RCTs, followed by measures of lesion area. For the sample size analysis, 8 GA RCTs were included. None of them provided sufficient information on sample size calculations. Conclusions: This systematic review illustrates a lack of standardisation in outcome reporting in GA trials and issues regarding sample size calculation. These limitations significantly hamper attempts to compare outcomes across studies and to perform meta-analyses.
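As an illustration of the kind of sample size statement the review found missing, the snippet below computes a per-arm sample size for a two-arm trial with a continuous outcome using the standard normal-approximation formula; the effect size and standard deviation are invented for the example and are not taken from any of the included RCTs.

import math
from scipy.stats import norm

def sample_size_per_arm(delta, sd, alpha=0.05, power=0.80):
    """n per arm to detect a mean difference `delta` given a common SD `sd`."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance level
    z_beta = norm.ppf(power)            # target power
    return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

# hypothetical example: a 0.5 mm²/year difference in lesion growth, SD 1.2 mm²/year
print(math.ceil(sample_size_per_arm(0.5, 1.2)))  # -> 91 participants per arm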

Relevance: 20.00%

Abstract:

The rationale for identifying drug targets within helminth neuromuscular signalling systems is based on the premise that adequate nerve and muscle function is essential for many of the key behavioural determinants of helminth parasitism, including sensory perception/host location, invasion, locomotion/orientation, attachment, feeding and reproduction. This premise is validated by the tendency of current anthelmintics to act on classical neurotransmitter-gated ion channels present on helminth nerve and/or muscle, yielding therapeutic endpoints associated with paralysis and/or death. Supplementary to classical neurotransmitters, helminth nervous systems are peptide-rich and encompass associated biosynthetic and signal transduction components - putative drug targets that remain to be exploited by anthelmintic chemotherapy. At this time, no neuropeptide system-targeting lead compounds have been reported, and given that our basic knowledge of neuropeptide biology in parasitic helminths remains inadequate, the short-term prospects for such drugs remain poor. Here, we review current knowledge of neuropeptide signalling in Nematoda and Platyhelminthes, and highlight a suite of 19 protein families that yield deleterious phenotypes in helminth reverse genetics screens. We suggest that orthologues of some of these peptidergic signalling components represent appealing therapeutic targets in parasitic helminths.

Relevance: 20.00%

Abstract:

In this paper, we consider the variable selection problem for a nonlinear non-parametric system. Two approaches are proposed: a top-down approach and a bottom-up approach. The top-down algorithm selects a variable by detecting whether the corresponding partial derivative is zero or not at the point of interest. The algorithm is shown to possess not only parameter convergence but also set convergence. This is critical because the variable selection problem is binary: a variable is either selected or not. The bottom-up approach is based on forward/backward stepwise selection and is designed to work when the data length is limited. Both approaches determine the most important variables locally, allowing the unknown non-parametric nonlinear system to have different local dimensions at different points of interest. Finally, two potential applications, along with numerical simulations, are provided to illustrate the usefulness of the proposed algorithms.
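The bottom-up route can be sketched with a generic forward stepwise loop. This is not the authors' algorithm: the k-NN smoother and cross-validated R² below merely stand in for the local non-parametric estimator and the stopping rule used in the paper.

import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

def forward_select(X, y, max_vars=None, tol=1e-3):
    """Greedy forward selection: add the regressor that most improves CV R^2."""
    n, p = X.shape
    selected, best_score = [], -np.inf
    while len(selected) < (max_vars or p):
        scores = {}
        for j in range(p):
            if j in selected:
                continue
            cols = selected + [j]
            model = KNeighborsRegressor(n_neighbors=10)
            scores[j] = cross_val_score(model, X[:, cols], y, cv=5).mean()
        j_best = max(scores, key=scores.get)
        if scores[j_best] - best_score < tol:
            break                      # no meaningful improvement: stop
        selected.append(j_best)
        best_score = scores[j_best]
    return selected

A backward pass that tries to drop previously selected variables can be wrapped around the same scoring function.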

Relevance: 20.00%

Abstract:

This paper investigates the gene selection problem for microarray data with small samples and variant correlation. Most existing algorithms require considerable computational effort, especially when thousands of genes are involved. The main objective of this paper is to effectively select the most informative genes from microarray data while keeping the computational expense affordable. This is achieved by proposing a novel forward gene selection algorithm (FGSA). To overcome the small-sample problem, an augmented-data technique is first employed to produce an augmented data set. Taking inspiration from other gene selection methods, the L2-norm penalty is then introduced into the recently proposed fast regression algorithm to achieve group selection ability. Finally, by defining a proper regression context, the proposed method can be implemented efficiently in software, which significantly reduces the computational burden. Both the computational complexity analysis and the simulation results confirm the effectiveness of the proposed algorithm in comparison with other approaches.
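One standard way to fold an L2-norm penalty into a fast least-squares routine is to rewrite the penalised problem as ordinary least squares on an augmented data set. Whether this corresponds to the augmentation step of FGSA is not stated in the abstract, so the sketch below should be read only as a demonstration of the general equivalence, with invented data.

import numpy as np

def ridge_via_augmentation(X, y, lam):
    """Solve min ||y - Xb||^2 + lam*||b||^2 by plain least squares on
    augmented data: append sqrt(lam)*I rows to X and zeros to y."""
    n, p = X.shape
    X_aug = np.vstack([X, np.sqrt(lam) * np.eye(p)])
    y_aug = np.concatenate([y, np.zeros(p)])
    beta, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
    return beta

# sanity check against the closed-form ridge solution
rng = np.random.default_rng(0)
X, y, lam = rng.normal(size=(20, 5)), rng.normal(size=20), 0.5
closed_form = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
assert np.allclose(ridge_via_augmentation(X, y, lam), closed_form)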

Relevance: 20.00%

Abstract:

The aim of this article was to construct a T–ϕ phase diagram for a model drug (felodipine, FD) and an amorphous polymer (Eudragit® EPO) and to use this information to understand how the temperature–composition coordinates chosen for processing influence the final properties of the extrudate. Defining process boundaries and understanding drug solubility in polymeric carriers is of utmost importance and will help in the successful manufacture of new delivery platforms for BCS class II drugs. Physically mixed felodipine (FD)–Eudragit® EPO (EPO) binary mixtures with pre-determined weight fractions were analysed using DSC to measure the endset of melting and the glass transition temperature. Extrudates of 10 wt% FD–EPO were processed at temperatures (110°C, 126°C, 140°C and 150°C) selected from the temperature–composition (T–ϕ) phase diagram and at screw speeds of 20, 100 and 200 rpm. Extrudates were characterised using powder X-ray diffraction (PXRD), optical, polarised light and Raman microscopy. To ensure formation of a binary amorphous drug dispersion (ADD) at a specific composition, hot-melt extrusion (HME) processing temperatures should at least equal, or exceed, the corresponding temperature on the liquid–solid curve in a F–H T–ϕ phase diagram. If extrusion is carried out between the spinodal and the liquid–solid curve, the lack of a thermodynamic driving force to attain complete drug amorphisation may be compensated for by an increased screw speed. Constructing F–H T–ϕ phase diagrams is valuable not only for understanding drug–polymer miscibility behaviour but also for rationalising the selection of important HME processing parameters to ensure miscibility of drug and polymer.
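For context, the liquid–solid curve of such an F–H T–ϕ diagram is usually obtained from the Flory–Huggins melting-point depression relation; the form below is the standard one, and the symbols follow common usage rather than the article's own notation:

\frac{1}{T_m^{\mathrm{mix}}} - \frac{1}{T_m^{\mathrm{pure}}}
  = -\frac{R}{\Delta H_{\mathrm{fus}}}
    \left[\ln\phi_d + \left(1 - \frac{1}{m}\right)(1 - \phi_d) + \chi\,(1 - \phi_d)^2\right]

where φ_d is the drug volume fraction, m the ratio of the polymer to drug molar volumes, ΔH_fus the heat of fusion of the crystalline drug, and χ the Flory–Huggins interaction parameter. Solving for T at each composition traces the liquid–solid curve, while the spinodal follows from setting the second derivative of the free energy of mixing with respect to composition to zero.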

Relevance: 20.00%

Abstract:

Purpose: The aim of this work was to examine, for amorphous solid dispersions, how the thermal analysis method selected impacts the construction of thermodynamic phase diagrams, and to assess the predictive value of such phase diagrams in the selection of optimal, physically stable API–polymer compositions. Methods: Thermodynamic phase diagrams for two API/polymer systems (naproxen/HPMC AS LF and naproxen/Kollidon 17 PF) were constructed from data collected using two different thermal analysis methods. The "dynamic" method involved heating the physical mixture at a rate of 1 °C/minute. In the "static" approach, samples were held at a temperature above the polymer Tg for prolonged periods prior to scanning at 10 °C/minute. Following construction of the phase diagrams, solid dispersions consisting of API–polymer compositions representative of different zones of the phase diagrams were spray dried and characterised using DSC, pXRD, TGA, FTIR, DVS and SEM. The stability of these systems was investigated under the following conditions: 25 °C, desiccated; 25 °C, 60% RH; 40 °C, desiccated; 40 °C, 60% RH. Results: Endset depression occurred with increasing polymer volume fraction (Figure 1a). In conjunction with these data, Flory-Huggins and Gordon-Taylor theory were applied to construct thermodynamic phase diagrams (Figure 1b). The Flory-Huggins interaction parameter (χ) for naproxen and HPMC AS LF was +0.80 and +0.72 for the dynamic and static methods, respectively. For naproxen and Kollidon 17 PF, the dynamic data gave an interaction parameter of −1.1 and the isothermal data a value of −2.2. For both systems, the API appeared to be less soluble in the polymer when the dynamic approach was used. Stability studies of the spray-dried solid dispersions could be used as a means of validating the thermodynamic phase diagrams. Conclusion: The thermal analysis method used to collect the data has a marked effect on the phase diagram produced. This effect should be considered when constructing thermodynamic phase diagrams, as they can be a useful tool in predicting the stability of amorphous solid dispersions.
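The Gordon-Taylor contribution referred to above places the glass-transition boundary on the same diagram. The standard form of the equation, with K often approximated by the Simha-Boyer rule, is given below; this is the textbook expression rather than necessarily the exact parametrisation used in the study:

T_g^{\mathrm{mix}} = \frac{w_1 T_{g1} + K\, w_2 T_{g2}}{w_1 + K\, w_2},
\qquad K \approx \frac{\rho_1 T_{g1}}{\rho_2 T_{g2}}

where w_i, ρ_i and T_{gi} are the weight fractions, densities and glass transition temperatures of the API and the polymer.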

Relevance: 20.00%

Abstract:

An overview of research and the public policy debate on academic selection in Northern Ireland. The chapter examines the outcomes of the major investigation into the effects of the selective system of secondary education, published in 2000, including a consideration of comparative evidence collected in Scotland. It then outlines the debate which followed the publication of the Burns Report and presents the current state of play in policy and practice.

Relevance: 20.00%

Abstract:

Background: Selection bias in HIV prevalence estimates occurs if non-participation in testing is correlated with HIV status. Longitudinal data suggest that individuals who know or suspect they are HIV positive are less likely to participate in testing in HIV surveys, in which case methods to correct for missing data that are based on imputation and observed characteristics will produce biased results. Methods: The identity of the HIV survey interviewer is typically associated with participation in HIV testing, but is unlikely to be correlated with HIV status. Interviewer identity can thus be used as a selection variable, allowing estimation of Heckman-type selection models. These models produce asymptotically unbiased HIV prevalence estimates even when non-participation is correlated with unobserved characteristics, such as knowledge of HIV status. We introduce a new random-effects method for these selection models which overcomes the non-convergence caused by collinearity, the small-sample bias, and the incorrect inference of existing approaches. Our method is easy to implement in standard statistical software and allows the construction of bootstrapped standard errors which adjust for the fact that the relationship between testing and HIV status is uncertain and needs to be estimated. Results: Using nationally representative data from the Demographic and Health Surveys, we illustrate our approach with new point estimates and confidence intervals (CI) for HIV prevalence among men in Ghana (2003) and Zambia (2007). In Ghana, we find little evidence of selection bias, as our selection model gives an HIV prevalence estimate of 1.4% (95% CI 1.2%–1.6%), compared to 1.6% among those with a valid HIV test. In Zambia, our selection model gives an HIV prevalence estimate of 16.3% (95% CI 11.0%–18.4%), compared to 12.1% among those with a valid HIV test; those who decline to test in Zambia are therefore found to be more likely to be HIV positive. Conclusions: Our approach corrects for selection bias in HIV prevalence estimates, is possible to implement even when HIV prevalence or non-participation is very high or very low, and provides a practical solution for accounting for both sampling and parameter uncertainty in the estimation of confidence intervals. The wide confidence intervals estimated in an example with high HIV prevalence indicate that it is difficult to correct statistically for the bias that may occur when a large proportion of people refuse to test.
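To make the role of the interviewer-identity selection variable concrete, the sketch below implements the classic two-step Heckman correction with statsmodels: interviewer dummies enter the selection (testing) equation but are excluded from the outcome equation. This is the textbook estimator for a continuous outcome and only illustrates the exclusion-restriction idea; the study itself estimates a maximum-likelihood selection model for a binary HIV outcome with interviewer random effects and bootstrapped standard errors, which is not reproduced here. All column names are invented.

import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(df, y_col, x_cols, z_cols, tested_col):
    # Step 1: probit of testing participation on covariates plus interviewer
    # dummies (the interviewer-identity column is assumed to be categorical)
    Z = sm.add_constant(pd.get_dummies(df[z_cols], drop_first=True, dtype=float))
    probit = sm.Probit(df[tested_col], Z).fit(disp=False)
    xb = probit.fittedvalues                                        # linear index
    mills = pd.Series(norm.pdf(xb) / norm.cdf(xb), index=df.index)  # inverse Mills ratio

    # Step 2: outcome equation on the tested subsample, adding the Mills ratio
    tested = df[tested_col] == 1
    X = sm.add_constant(df.loc[tested, x_cols].assign(mills=mills[tested]))
    return sm.OLS(df.loc[tested, y_col], X).fit()

# usage (hypothetical column names):
# res = heckman_two_step(dhs, "hiv", ["age", "urban"],
#                        ["age", "urban", "interviewer_id"], "tested")
# a significant coefficient on `mills` indicates selection on unobservables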