964 results for statistic
Abstract:
Some WWW image engines allow the user to form a query in terms of text keywords. To build the image index, keywords are extracted heuristically from HTML documents containing each image, and/or from the image URL and file headers. Unfortunately, text-based image engines have merely retro-fitted standard SQL database query methods, and it is difficult to include image cues within such a framework. On the other hand, visual statistics (e.g., color histograms) are often insufficient for helping users find desired images in a vast WWW index. By truly unifying textual and visual statistics, one would expect to get better results than either used separately. In this paper, we propose an approach that allows the combination of visual statistics with textual statistics in the vector space representation commonly used in query by image content systems. Text statistics are captured in vector form using latent semantic indexing (LSI). The LSI index for an HTML document is then associated with each of the images contained therein. Visual statistics (e.g., color, orientedness) are also computed for each image. The LSI and visual statistic vectors are then combined into a single index vector that can be used for content-based search of the resulting image database. By using an integrated approach, we are able to take advantage of possible statistical couplings between the topic of the document (latent semantic content) and the contents of images (visual statistics). This allows improved performance in conducting content-based search. This approach has been implemented in a WWW image search engine prototype.
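The unified indexing scheme described in this abstract can be illustrated with a minimal sketch: L2-normalise the LSI vector and the visual-statistic vector, scale them by a mixing weight, and concatenate them, so that a single cosine-similarity search covers both modalities. The function names and the alpha-weighting scheme below are illustrative assumptions, not details taken from the paper.

```python
import math

def combine_index(lsi_vec, visual_vec, alpha=0.5):
    """Concatenate L2-normalised textual (LSI) and visual feature
    vectors into a single index vector. alpha weights the textual
    part against the visual part; this weighting is an assumption
    for illustration, not the paper's scheme."""
    def normalise(v):
        n = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / n for x in v]
    text = [alpha * x for x in normalise(lsi_vec)]
    visual = [(1.0 - alpha) * x for x in normalise(visual_vec)]
    return text + visual

def cosine(u, v):
    """Cosine similarity, the usual ranking function in vector
    space content-based search."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

With per-modality normalisation, a document whose text matches strongly cannot be swamped by raw visual feature magnitudes, which is one plausible reading of the "truly unifying" claim above.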
Abstract:
Background: We conducted a survival analysis of all the confirmed cases of adult tuberculosis (TB) patients treated in Cork City, Ireland. The aim of this study was to estimate survival time (ST), including median survival time, and to assess the association and impact of covariates (TB risk factors) on event status and ST. The outcome of the survival analysis is reported in this paper. Methods: We used a retrospective cohort study design to review data on 647 bacteriologically confirmed TB patients from the medical records of two teaching hospitals. Mean age was 49 years (range 18–112). We collected information on potential risk factors for all confirmed cases of TB treated between 2008 and 2012. For the survival analysis, the outcome of interest was 'treatment failure' or 'death' (whichever came first). A univariate descriptive analysis was conducted using a non-parametric procedure, the Kaplan-Meier (KM) method, to estimate overall survival (OS), while the Cox proportional hazards model was used for the multivariate analysis to determine possible associations of predictor variables and to obtain adjusted hazard ratios. The P value was set at <0.05, the log likelihood ratio test at >0.10. Data were analysed using SPSS version 15.0. Results: There was no significant difference in the survival curves of male and female patients (log-rank statistic = 0.194, df = 1, p = 0.66) or among different age groups (log-rank statistic = 1.337, df = 3, p = 0.72). The mean overall survival (OS) was 209 days (95% CI: 92–346) while the median was 51 days (95% CI: 35.7–66). The mean ST for women was 385 days (95% CI: 76.6–694) and for men was 69 days (95% CI: 48.8–88.5). Multivariate Cox regression showed that patients with a history of drug misuse had 2.2 times the hazard of those without. Smokers and alcohol drinkers had a hazard of 1.8, while patients born in a country of high endemicity (BICHE) had a hazard of 6.3 and HIV co-infection a hazard of 1.2.
Conclusion: There was no significant difference in the survival curves of males and females or among age groups. Women had a longer ST than men, but men had a higher hazard rate than women. Anti-TNF and immunosuppressive medication and diabetes were found to be associated with longer ST, while alcohol, smoking, RICHE and BICHE were associated with shorter ST.
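The Kaplan-Meier estimator used in this study follows the standard product-limit recipe, which can be sketched in plain Python: at each observed event time, multiply the running survival probability by the fraction of at-risk patients who survive that time, and read the median off as the first time the curve drops to 0.5 or below. This is a generic illustration of the method, not the authors' SPSS analysis.

```python
def kaplan_meier(observations):
    """Kaplan-Meier survival curve from (time, event) pairs, where
    event is 1 for death/treatment failure and 0 for censoring.
    Returns a list of (time, S(t)) at each distinct event time."""
    event_times = sorted(set(t for t, e in observations if e == 1))
    survival, curve = 1.0, []
    for t in event_times:
        at_risk = sum(1 for ti, _ in observations if ti >= t)
        deaths = sum(1 for ti, e in observations if ti == t and e == 1)
        survival *= (at_risk - deaths) / at_risk
        curve.append((t, survival))
    return curve

def median_survival(curve):
    """Smallest event time at which S(t) falls to 0.5 or below;
    None if the median is never reached."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None
```

Censored subjects drop out of the risk set after their censoring time but never count as deaths, which is exactly why KM medians can differ sharply between subgroups, as in the male/female STs reported above.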
Abstract:
© 2015, Institute of Mathematical Statistics. All rights reserved. In order to use persistence diagrams as a true statistical tool, it would be very useful to have a good notion of mean and variance for a set of diagrams. In [23], Mileyko and his collaborators made the first study of the properties of the Fréchet mean in (D
Abstract:
Belying the spectacular success of solid organ transplantation and improvements in immunosuppressive therapy is the reality that long-term graft survival rates remain relatively unchanged, in large part due to chronic and insidious alloantibody-mediated graft injury. Half of heart transplant recipients develop chronic rejection within 10 years - a daunting statistic, particularly for young patients expecting to achieve longevity by enduring the rigors of a transplant. The current immunosuppressive pharmacopeia is relatively ineffective in preventing late alloantibody-associated chronic rejection. In this issue of the JCI, Kelishadi et al. report that preemptive deletion of B cells prior to heart transplantation in cynomolgus monkeys, in addition to conventional posttransplant immunosuppressive therapy with cyclosporine, markedly attenuated not only acute graft rejection but also alloantibody elaboration and chronic graft rejection. The success of this preemptive strike implies a central role for B cells in graft rejection, and this approach may help to delay or prevent chronic rejection after solid organ transplantation.
Abstract:
BACKGROUND: The National Comprehensive Cancer Network and the American Society of Clinical Oncology have established guidelines for the treatment and surveillance of colorectal cancer (CRC), respectively. Considering these guidelines, an accurate and efficient method is needed to measure receipt of care. METHODS: The accuracy and completeness of Veterans Health Administration (VA) administrative data were assessed by comparing them with data manually abstracted during the Colorectal Cancer Care Collaborative (C4) quality improvement initiative for 618 patients with stage I-III CRC. RESULTS: The VA administrative data contained gender, marital, and birth information for all patients, but race information was missing for 62.1% of patients. The percent agreement for demographic variables ranged from 98.1% to 100%. The kappa statistic for receipt of treatments ranged from 0.21 to 0.60, and there was 96.9% agreement for the date of surgical resection. The percentage of post-diagnosis surveillance events in C4 that were also in VA administrative data was 76.0% for colonoscopy, 84.6% for physician visits, and 26.3% for carcinoembryonic antigen (CEA) tests. CONCLUSIONS: VA administrative data are accurate and complete for non-race demographic variables, receipt of CRC treatment, colonoscopy, and physician visits; but alternative data sources may be necessary to capture patient race and receipt of CEA tests.
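The kappa statistic reported above for receipt of treatments is Cohen's chance-corrected agreement measure; a minimal sketch of the unweighted two-rater version (a generic illustration, not the study's code) follows.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    (here, two data sources) over the same items. Categories may be
    any hashable labels; assumes the raters are not in perfect
    chance agreement (p_exp < 1)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed proportion of exact agreement.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance from each rater's marginals.
    p_exp = sum((list(rater_a).count(c) / n) * (list(rater_b).count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1.0 - p_exp)
```

A kappa of 0.21-0.60, as in the treatment variables above, is conventionally read as fair-to-moderate agreement, which is why raw percent agreement alone can overstate data quality.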
Abstract:
CONCLUSION Radiation dose reduction while preserving image quality could easily be implemented with this approach. Furthermore, the availability of a dosimetric data archive provides immediate feedback on the implemented optimization strategies. Background JCI Standards and European Legislation (EURATOM 59/2013) require the implementation of patient radiation protection programs in diagnostic radiology. The aim of this study is to demonstrate the possibility of reducing patients' radiation exposure without decreasing image quality, through a multidisciplinary team (MT) that analyzes dosimetric data from diagnostic examinations. Evaluation Data from CT examinations performed with two different scanners (Siemens DefinitionTM and GE LightSpeed UltraTM) between November and December 2013 are considered. The CT scanners are configured to automatically send images to DoseWatch© software, which stores output parameters (e.g. kVp, mAs, pitch) and exposure data (e.g. CTDIvol, DLP, SSDE). Data are analyzed and discussed by an MT composed of Medical Physicists and Radiologists, to identify protocols that show critical dosimetric values and to suggest possible improvement actions to be implemented. Furthermore, the large amount of data available allows monitoring of the diagnostic protocols currently in use and identification of different statistical populations for each of them. Discussion We identified critical values of average CTDIvol for head and facial bones examinations (61.8 mGy over 151 scans and 61.6 mGy over 72 scans, respectively), performed with the GE LightSpeed CTTM. Statistical analysis allowed us to identify two different populations for head scans, one of which comprised only 10% of the total number of scans and corresponded to lower exposure values. The MT adopted this protocol as standard.
Moreover, constant monitoring of the output parameters allowed us to identify unusual values in facial bones exams, caused by changes made during maintenance service, which the team promptly suggested correcting. This resulted in substantial dose savings in average CTDIvol values of approximately 15% and 50% for head and facial bones exams, respectively. Diagnostic image quality was deemed suitable for clinical use by the radiologists.
Abstract:
In attempts to conserve the species diversity of trees in tropical forests, monitoring of diversity in inventories is essential. For effective monitoring it is crucial to be able to make meaningful comparisons between different regions, or comparisons of the diversity of a region at different times. Many species diversity measures have been defined, including the well-known abundance and entropy measures. All such measures share a number of problems in their effective practical use. However, probably the most problematic is that they cannot be used to meaningfully assess changes, since they are only concerned with the number of species or the proportions of the population/sample which they constitute. A natural (though simplistic) model of a species frequency distribution is the multinomial distribution. It is shown that the likelihood analysis of samples from such a distribution is closely related to a number of entropy-type measures of diversity. Hence a comparison of the species distributions on two plots, using the multinomial model and likelihood methods, leads to generalised cross-entropy as the LRT statistic of the null hypothesis that the species distributions are the same. Data from 30 contiguous plots in a forest in Sumatra are analysed using these methods. Significance tests between all pairs of plots yield extremely low p-values, indicating strongly that it ought to be "obvious" that the observed species distributions are different on different plots. In terms of how different the plots are, and how these differences vary over the whole study site, a display of the degrees of freedom of the test (equivalent to the number of shared species) seems to be the most revealing indicator, as well as the simplest.
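For two plots, the likelihood-ratio test of equal multinomial species distributions described above reduces to the familiar G statistic on a 2 x k species-by-plot contingency table. A minimal sketch (a generic illustration under that reading, not the authors' code):

```python
import math

def g_statistic(counts_a, counts_b):
    """Likelihood-ratio (G) statistic for the null that two plots
    share the same multinomial species distribution, given
    per-species counts. Zero cells contribute nothing, using the
    convention 0 * ln 0 = 0."""
    table = [counts_a, counts_b]
    n = sum(counts_a) + sum(counts_b)
    col_totals = [a + b for a, b in zip(counts_a, counts_b)]
    row_totals = [sum(counts_a), sum(counts_b)]
    g = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            if observed > 0:
                expected = row_totals[i] * col_totals[j] / n
                g += observed * math.log(observed / expected)
    return 2.0 * g
```

Under the null, G is asymptotically chi-squared; the degrees of freedom depend on the shared species support, which is why the paper finds the df display itself an informative summary of between-plot difference.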
Abstract:
Gross Motor Function Classification System (GMFCS) level was reported by three independent assessors in a population of children with cerebral palsy (CP) aged between 4 and 18 years (n=184; 112 males, 72 females; mean age 10y 10mo [SD 3y 7mo]). A software algorithm also provided a computed GMFCS level from a regional CP registry. Participants had clinical diagnoses of unilateral (n=94) and bilateral (n=84) spastic CP, ataxia (n=4), dyskinesia (n=1), and hypotonia (n=1), and could walk independently with or without the use of an aid (GMFCS Levels I-IV). Research physiotherapist (n=184) and parent/guardian data (n=178) were collected in a research environment. Data from the child's community physiotherapist (n=143) were obtained by postal questionnaire. Results, using the kappa statistic with linear weighting (κ1w), showed good agreement between the parent/guardian and research physiotherapist (κ1w=0.75) with more moderate levels of agreement between the clinical physiotherapist and researcher (κ1w=0.64) and the clinical physiotherapist and parent/guardian (κ1w=0.57). Agreement was consistently better for older children (>2y). This study has shown that agreement with parent report increases with therapists' experience of the GMFCS and knowledge of the child at the time of grading. Substantial agreement between a computed GMFCS and an experienced therapist (κ1w=0.74) also demonstrates the potential for extrapolation of GMFCS rating from an existing CP registry, providing the latter has sufficient data on locomotor ability.
Abstract:
Modeling of on-body propagation channels is of paramount importance to those wishing to evaluate radio channel performance for wearable devices in body area networks (BANs). Difficulties in modeling arise due to the highly variable channel conditions related to changes in the user's state and local environment. This study characterizes these influences by using time-series analysis to examine and model signal characteristics for on-body radio channels in user stationary and mobile scenarios in four different locations: anechoic chamber, open office area, hallway, and outdoor environment. Autocorrelation and cross-correlation functions are reported and shown to be dependent on body state and surroundings. Autoregressive (AR) transfer functions are used to perform time-series analysis and develop models for fading in various on-body links. Due to the non-Gaussian nature of the logarithmically transformed observed signal envelope in the majority of mobile user states, a simple method for reproducing the fading based on lognormal and Nakagami statistics is proposed. The validity of the AR models is evaluated using hypothesis testing based on the Ljung-Box statistic, and the estimated distributional parameters of the simulator output are compared with those from experimental results.
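The Ljung-Box statistic used above to validate the AR models is a portmanteau test on residual autocorrelations: if the fitted model has captured the channel dynamics, the residuals should be white noise and the statistic small. A minimal sketch (lag-by-lag, without the degrees-of-freedom correction for fitted AR parameters, which the real test would subtract):

```python
import math

def autocorr(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    denom = sum((v - mean) ** 2 for v in x)
    num = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return num / denom

def ljung_box(x, h):
    """Ljung-Box Q statistic over lags 1..h. Under the null of
    white-noise residuals, Q is approximately chi-squared with h
    degrees of freedom (minus the number of fitted AR parameters)."""
    n = len(x)
    return n * (n + 2) * sum(autocorr(x, k) ** 2 / (n - k)
                             for k in range(1, h + 1))
```

A strongly alternating series, for instance, has near-unity lag-1 autocorrelation magnitude and so yields a very large Q, correctly flagging that an AR fit would be needed.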
Abstract:
Motivation: Recently, many univariate and several multivariate approaches have been suggested for testing differential expression of gene sets between different phenotypes. However, despite a wealth of literature studying their performance on simulated and real biological data, there is still a need to quantify their relative performance when they test different null hypotheses.
Results: In this article, we compare the performance of univariate and multivariate tests on both simulated and biological data. In the simulation study we demonstrate that high correlations equally affect the power of both univariate and multivariate tests. In addition, for most of them the power is similarly affected by the dimensionality of the gene set and by the percentage of genes in the set for which expression changes between the two phenotypes. The application of different test statistics to biological data reveals that three statistics (sum of squared t-tests, Hotelling's T², N-statistic), testing different null hypotheses, find some common but also some complementing differentially expressed gene sets under specific settings. This demonstrates that, due to the complementing null hypotheses, each test projects onto different aspects of the data, and for the analysis of biological data it is beneficial to use all three tests simultaneously instead of focusing exclusively on just one.
Abstract:
In this paper, we extend the heterogeneous panel data stationarity test of Hadri [Econometrics Journal, Vol. 3 (2000) pp. 148–161] to the cases where breaks are taken into account. Four models with different patterns of breaks under the null hypothesis are specified. Two of the models have already been proposed by Carrion-i-Silvestre et al. [Econometrics Journal, Vol. 8 (2005) pp. 159–175]. The moments of the statistics corresponding to the four models are derived in closed form via characteristic functions. We also provide the exact moments of a modified statistic that does not asymptotically depend on the location of the break point under the null hypothesis. The cases where the break point is unknown are also considered. For the model with breaks in the level and no time trend and for the model with breaks in the level and in the time trend, Carrion-i-Silvestre et al. [Econometrics Journal, Vol. 8 (2005) pp. 159–175] showed that the number of breaks and their positions may be allowed to differ across individuals for cases with known and unknown breaks. Their results can easily be extended to the proposed modified statistic. The asymptotic distributions of all the statistics proposed are derived under the null hypothesis and are shown to be normally distributed. We show by simulations that our suggested tests have in general good performance in finite samples, except the modified test. In an empirical application to the consumer prices of 22 OECD countries during the period from 1953 to 2003, we found evidence of stationarity once a structural break and cross-sectional dependence are accommodated.
Abstract:
This paper investigates the performance of the tests proposed by Hadri and by Hadri and Larsson for testing for stationarity in heterogeneous panel data under model misspecification. The panel tests are based on the well-known KPSS test (cf. Kwiatkowski et al.), which considers two models: stationarity around a deterministic level and stationarity around a deterministic trend. There is no study, as far as we know, of the statistical properties of the test when the wrong model is used. We also consider the case of the simultaneous presence of the two types of models in a panel. We employ two asymptotics: joint asymptotics, T, N → ∞ simultaneously, and T fixed with N allowed to grow indefinitely. We use Monte Carlo experiments to investigate the effects of misspecification in sample sizes usually used in practice. The results indicate that the assumption that T is fixed rather than asymptotic leads to tests that have fewer size distortions, particularly for relatively small T with large N panels (micro-panels), than the tests derived under the joint asymptotics. We also find that choosing a deterministic trend when a deterministic level is true does not significantly affect the properties of the test. But choosing a deterministic level when a deterministic trend is true leads to extreme over-rejections. Therefore, when unsure about which model has generated the data, it is suggested to use the model with a trend. We also propose a new statistic for testing for stationarity in mixed panel data where the mixture is known. The performance of this new test is very good for both cases of T asymptotic and T fixed. The statistic for T asymptotic is slightly undersized when T is very small (
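The KPSS statistic underlying these panel tests is built from partial sums of demeaned (or detrended) residuals. A minimal sketch of the level-stationarity version, using the simplest lag-0 long-run variance estimate rather than a kernel-corrected one, and illustrating the single-series case rather than the panel extensions proposed in the paper:

```python
def kpss_level(x):
    """KPSS statistic for stationarity around a deterministic level:
    sum of squared partial sums of demeaned residuals, scaled by
    n^2 times the lag-0 variance estimate. Larger values are
    evidence against the stationarity null. Assumes a non-constant
    series (otherwise the variance estimate is zero)."""
    n = len(x)
    mean = sum(x) / n
    residuals = [v - mean for v in x]
    partial, sum_sq = 0.0, 0.0
    for r in residuals:
        partial += r           # running partial sum S_t
        sum_sq += partial ** 2
    variance = sum(r * r for r in residuals) / n
    return sum_sq / (n ** 2 * variance)
```

A level-stationary series keeps its partial sums near zero and yields a small statistic, while a trending series accumulates large partial sums, which is exactly the over-rejection mechanism the paper studies when the level model is wrongly imposed on trending data.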
Abstract:
Background: Postal and electronic questionnaires are widely used for data collection in epidemiological studies, but non-response reduces the effective sample size and can introduce bias. Finding ways to increase response to postal and electronic questionnaires would improve the quality of health research. Objectives: To identify effective strategies to increase response to postal and electronic questionnaires. Search strategy: We searched 14 electronic databases to February 2008 and manually searched the reference lists of relevant trials and reviews, and all issues of two journals. We contacted the authors of all trials or reviews to ask about unpublished trials. Where necessary, we also contacted authors to confirm methods of allocation used and to clarify results presented. We assessed the eligibility of each trial using pre-defined criteria. Selection criteria: Randomised controlled trials of methods to increase response to postal or electronic questionnaires. Data collection and analysis: We extracted data on the trial participants, the intervention, the number randomised to intervention and comparison groups, and allocation concealment. For each strategy, we estimated pooled odds ratios (OR) and 95% confidence intervals (CI) in a random-effects model. We assessed evidence for selection bias using Egger's weighted regression method, Begg's rank correlation test and funnel plots. We assessed heterogeneity among trial odds ratios using a Chi² test, and the degree of inconsistency between trial results was quantified using the I² statistic. Main results: Postal: We found 481 eligible trials. The trials evaluated 110 different ways of increasing response to postal questionnaires. We found substantial heterogeneity among trial results in half of the strategies.
The odds of response were at least doubled using monetary incentives (odds ratio 1.87; 95% CI 1.73 to 2.04; heterogeneity P < 0.00001, I² = 84%), recorded delivery (1.76; 95% CI 1.43 to 2.18; P = 0.0001, I² = 71%), a teaser on the envelope (e.g. a comment suggesting to participants that they may benefit if they open it) (3.08; 95% CI 1.27 to 7.44) and a more interesting questionnaire topic (2.00; 95% CI 1.32 to 3.04; P = 0.06, I² = 80%). The odds of response were substantially higher with pre-notification (1.45; 95% CI 1.29 to 1.63; P < 0.00001, I² = 89%), follow-up contact (1.35; 95% CI 1.18 to 1.55; P < 0.00001, I² = 76%), unconditional incentives (1.61; 95% CI 1.36 to 1.89; P < 0.00001, I² = 88%), shorter questionnaires (1.64; 95% CI 1.43 to 1.87; P < 0.00001, I² = 91%), providing a second copy of the questionnaire at follow up (1.46; 95% CI 1.13 to 1.90; P < 0.00001, I² = 82%), mentioning an obligation to respond (1.61; 95% CI 1.16 to 2.22; P = 0.98, I² = 0%) and university sponsorship (1.32; 95% CI 1.13 to 1.54; P < 0.00001, I² = 83%). The odds of response were also increased with non-monetary incentives (1.15; 95% CI 1.08 to 1.22; P < 0.00001, I² = 79%), personalised questionnaires (1.14; 95% CI 1.07 to 1.22; P < 0.00001, I² = 63%), use of hand-written addresses (1.25; 95% CI 1.08 to 1.45; P = 0.32, I² = 14%), use of stamped return envelopes as opposed to franked return envelopes (1.24; 95% CI 1.14 to 1.35; P < 0.00001, I² = 69%), an assurance of confidentiality (1.33; 95% CI 1.24 to 1.42) and first class outward mailing (1.11; 95% CI 1.02 to 1.21; P = 0.78, I² = 0%). The odds of response were reduced when the questionnaire included questions of a sensitive nature (0.94; 95% CI 0.88 to 1.00; P = 0.51, I² = 0%). Electronic: We found 32 eligible trials. The trials evaluated 27 different ways of increasing response to electronic questionnaires. We found substantial heterogeneity among trial results in half of the strategies.
The odds of response were increased by more than a half using non-monetary incentives (1.72; 95% CI 1.09 to 2.72; heterogeneity P < 0.00001, I² = 95%), shorter e-questionnaires (1.73; 95% CI 1.40 to 2.13; P = 0.08, I² = 68%), including a statement that others had responded (1.52; 95% CI 1.36 to 1.70), and a more interesting topic (1.85; 95% CI 1.52 to 2.26). The odds of response increased by a third using a lottery with immediate notification of results (1.37; 95% CI 1.13 to 1.65), an offer of survey results (1.36; 95% CI 1.15 to 1.61), and using a white background (1.31; 95% CI 1.10 to 1.56). The odds of response were also increased with personalised e-questionnaires (1.24; 95% CI 1.17 to 1.32; P = 0.07, I² = 41%), using a simple header (1.23; 95% CI 1.03 to 1.48), using textual representation of response categories (1.19; 95% CI 1.05 to 1.36), and giving a deadline (1.18; 95% CI 1.03 to 1.34). The odds of response tripled when a picture was included in an e-mail (3.05; 95% CI 1.84 to 5.06; P = 0.27, I² = 19%). The odds of response were reduced when "Survey" was mentioned in the e-mail subject line (0.81; 95% CI 0.67 to 0.97; P = 0.33, I² = 0%), and when the e-mail included a male signature (0.55; 95% CI 0.38 to 0.80; P = 0.96, I² = 0%). Authors' conclusions: Health researchers using postal and electronic questionnaires can increase response using the strategies shown to be effective in this systematic review. Copyright © 2009 The Cochrane Collaboration. Published by John Wiley & Sons, Ltd.
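The I² values quoted throughout this review are derived from Cochran's Q over the trial-level effect estimates: I² is the percentage of Q in excess of its degrees of freedom, i.e. the share of variability attributable to between-trial heterogeneity rather than chance. A minimal sketch on log odds ratios, assuming fixed-effect inverse-variance weights (a generic illustration, not the review's software):

```python
def cochran_q_i2(log_ors, variances):
    """Cochran's Q and the I² inconsistency statistic for a set of
    trial log odds ratios with their variances. Weights are
    inverse-variance (w = 1/v); I² is floored at 0%."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))
    df = len(log_ors) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2
```

I² near 0% (as for the "sensitive questions" strategy above) means the trials tell one consistent story, while I² above 80% (as for monetary incentives) warns that the pooled OR averages over genuinely different trial effects.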
--------------------------------------------------------------------------------
Reaxys Database Information
--------------------------------------------------------------------------------
Abstract:
The aim of this paper was to confirm the factor structure of the 20-item Beck Hopelessness Scale in a non-clinical population. Previous research has highlighted a lack of clarity in its construct validity with regards to this population.
Based on previous factor analytic findings from both clinical and non-clinical studies, 13 separate confirmatory factor models were specified and estimated using LISREL 8.72 to test the one-, two- and three-factor models.
Psychology and medical students at Queen's University, Belfast (n = 581) completed both the BHS and the Beck Depression Inventory (BDI).
All models showed reasonable fit, but only one, a four-item single-factor model, demonstrated a nonsignificant chi-squared statistic. These four items can be used to derive a Short-Form BHS (SBHS) in which increasing scores (0-4) correspond with increasing scores on the BDI. The four items were also drawn from all three of Beck's proposed triad, and included both positively and negatively scored items.
This study in a UK undergraduate non-clinical population suggests that the BHS best measures a one-factor model of hopelessness. It appears that a shorter four-item scale can also measure this one-factor model. © 2011 Elsevier Ltd. All rights reserved.
Abstract:
BACKGROUND: Inappropriate prescribing is a well-documented problem in older people. The new screening tools, STOPP (Screening Tool of Older Peoples' Prescriptions) and START (Screening Tool to Alert doctors to Right Treatment), have been formulated to identify potentially inappropriate medications (PIMs) and potential errors of omission (PEOs) in older patients. Consistent, reliable application of STOPP and START is essential for the screening tools to be used effectively by pharmacists. OBJECTIVE: To determine the interrater reliability among a group of clinical pharmacists in applying the STOPP and START criteria to elderly patients' records. METHODS: Ten pharmacists (5 hospital pharmacists, 5 community pharmacists) were given 20 patient profiles containing details including the patients' age and sex, current medications, current diagnoses, relevant medical histories, biochemical data, and estimated glomerular filtration rate. Each pharmacist applied the STOPP and START criteria to each patient record. The PIMs and PEOs identified by each pharmacist were compared with those of 2 academic pharmacists who were highly familiar with the application of STOPP and START. An interrater reliability analysis using the κ statistic (a chance-corrected measure of agreement) was performed to determine consistency between pharmacists. RESULTS: The median κ coefficients for hospital pharmacists and community pharmacists compared with the academic pharmacists for STOPP were 0.89 and 0.88, respectively, while those for START were 0.91 and 0.90, respectively. CONCLUSIONS: Interrater reliability of the STOPP and START tools between pharmacists working in different sectors is good. Pharmacists working in both hospitals and in the community can use STOPP and START reliably during their everyday practice to identify PIMs and PEOs in older patients.