828 results for rank testing
Abstract:
This paper develops an approach to rank testing that nests all existing rank tests and simplifies their asymptotics. The approach is based on the fact that implicit in every rank test there are estimators of the null spaces of the matrix in question. The approach yields many new insights about the behavior of rank testing statistics under the null as well as local and global alternatives in both the standard and the cointegration setting. The approach also suggests many new rank tests based on alternative estimates of the null spaces as well as the new fixed-b theory. A brief Monte Carlo study illustrates the results.
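As a rough numerical companion to the question a rank test asks (not the paper's statistic, which is built on null-space estimators), the sketch below counts how many singular values of a noisily estimated matrix stand clear of a tolerance; the matrix, noise level and cutoff are illustrative assumptions.

```python
import numpy as np

# Toy illustration only: estimate the rank of a noisily observed matrix by
# thresholding its singular values. This is NOT the paper's rank test; it
# merely conveys the underlying question "how many singular values are
# effectively nonzero?". Matrix, noise level and cutoff are assumptions.
rng = np.random.default_rng(0)
true = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 6))   # rank-2 target
noisy = true + 0.01 * rng.normal(size=true.shape)          # sampling noise

singular_values = np.linalg.svd(noisy, compute_uv=False)
estimated_rank = int(np.sum(singular_values > 0.1))        # ad hoc cutoff
print(singular_values.round(3), estimated_rank)
```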
Abstract:
Objective: To characterize articular and systemic inflammatory activity in juvenile idiopathic arthritis (JIA), identifying remission status with and without medication. Methods: A total of 165 JIA cases, followed for a mean period of 3.6 years, were reviewed in order to characterize episodes of inactivity and clinical remission on and off medication. The resulting data were analyzed by means of descriptive statistics, survival analysis with comparison of Kaplan-Meier curves and log-rank testing, and binary logistic regression analysis to identify predictive factors for remission or persistent activity. Results: One hundred and eight of the cases reviewed fulfilled the inclusion criteria: 57 patients (52.7%) exhibited a total of 71 episodes of inactivity, with a mean of 2.9 years per episode; 36 inactivity episodes (50.7%) resulted in clinical remission off medication, 35% of which were of the persistent oligoarticular subtype. The probability of clinical remission on medication over 2 years was 81, 82, 97 and 83% for cases of persistent oligoarticular, extended oligoarticular, polyarticular and systemic JIA, respectively. The probability of clinical remission off medication 5 years after onset of remission was 40 and 67% for patients with persistent oligoarticular and systemic JIA, respectively. Persistent disease activity was significantly associated with the use of an anti-rheumatic drug combination. Age at JIA onset was the only factor that predicted clinical remission (p = 0.002). Conclusions: In this cohort, the probability of JIA progressing to clinical remission was greater for the persistent oligoarticular and systemic subtypes, when compared with polyarticular cases.
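For readers unfamiliar with the survival machinery cited in this abstract, a minimal sketch of a Kaplan-Meier comparison with a log-rank test, using the lifelines package on hypothetical data (column names and values are invented stand-ins, not the study's), could look like this:

```python
# Minimal sketch of a Kaplan-Meier comparison with a log-rank test, in the
# spirit of the analysis described above. The DataFrame, column names and
# subtype grouping are hypothetical, not the study's data.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "years_to_remission": [1.2, 3.4, 2.0, 5.1, 0.8, 4.3, 2.7, 1.5, 3.9],
    "remission_observed": [1, 0, 1, 1, 1, 0, 1, 1, 1],   # 1 = remission, 0 = censored
    "subtype": ["oligo"] * 3 + ["systemic"] * 3 + ["poly"] * 3,
})

kmf = KaplanMeierFitter()
for name, grp in df.groupby("subtype"):
    kmf.fit(grp["years_to_remission"], grp["remission_observed"], label=name)
    print(name, "median time:", kmf.median_survival_time_)

a = df[df["subtype"] == "oligo"]
b = df[df["subtype"] == "systemic"]
result = logrank_test(a["years_to_remission"], b["years_to_remission"],
                      a["remission_observed"], b["remission_observed"])
print("log-rank p =", result.p_value)
```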
Abstract:
OBJECTIVE Texture analysis is an alternative method to quantitatively assess MR images. In this study, we introduce dynamic texture parameter analysis (DTPA), a novel technique to investigate the temporal evolution of texture parameters using dynamic susceptibility contrast enhanced (DSCE) imaging. Here, we aim to introduce the method and its application to enhancing lesions (EL), non-enhancing lesions (NEL) and normal appearing white matter (NAWM) in multiple sclerosis (MS). METHODS We investigated 18 patients with MS or clinically isolated syndrome (CIS), diagnosed according to the 2010 McDonald criteria, using DSCE imaging at different field strengths (1.5 and 3 Tesla). Tissues of interest (TOIs) were defined within 27 EL, 29 NEL and 37 NAWM areas after normalization, and eight histogram-based texture parameter maps (TPMs) were computed. TPMs quantify the heterogeneity of the TOI. For every TOI, the average, variance, skewness, kurtosis and variance-of-the-variance statistical parameters were calculated. These TOI parameters were further analyzed using one-way ANOVA followed by multiple Wilcoxon rank-sum tests corrected for multiple comparisons. RESULTS Tissue- and time-dependent differences were observed in the dynamics of the computed texture parameters. Sixteen parameters discriminated between EL, NEL and NAWM (pAVG = 0.0005). Significant differences in the DTPA texture maps were found during inflow (52 parameters), outflow (40 parameters) and reperfusion (62 parameters). The strongest discriminators among the TPMs were the variance-related parameters, while the skewness and kurtosis TPMs were in general less sensitive in detecting differences between the tissues. CONCLUSION DTPA of DSCE image time series revealed characteristic time responses for ELs, NELs and NAWM. This may be further used for a refined quantitative grading of MS lesions during their evolution from the acute to the chronic state. DTPA discriminates lesions beyond features of enhancement or T2 hypersignal, on a numeric scale that allows for a more subtle grading of MS lesions.
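A minimal sketch of the reported statistical pipeline (one-way ANOVA across the three tissue classes, followed by pairwise Wilcoxon rank-sum tests with a Bonferroni-style correction), run on made-up texture values rather than the study's TOI parameters:

```python
# Sketch of the reported pipeline: one-way ANOVA across the three tissue
# classes, then pairwise Wilcoxon rank-sum tests with a Bonferroni correction.
# The texture values below are fabricated for illustration.
from itertools import combinations
from scipy import stats

tissues = {
    "EL":   [2.1, 2.4, 2.6, 2.2, 2.8],
    "NEL":  [1.7, 1.9, 1.8, 2.0, 1.6],
    "NAWM": [1.1, 1.3, 1.2, 1.0, 1.4],
}

f_stat, p_anova = stats.f_oneway(*tissues.values())
print("ANOVA p =", p_anova)

pairs = list(combinations(tissues, 2))
for a, b in pairs:
    stat, p = stats.ranksums(tissues[a], tissues[b])
    print(a, "vs", b, "Bonferroni-corrected p =", min(1.0, p * len(pairs)))
```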
Abstract:
OBJECTIVES Molecular subclassification of non-small-cell lung cancer (NSCLC) is essential to improve clinical outcome. This study assessed the prognostic and predictive value of circulating micro-RNA (miRNA) in patients with non-squamous NSCLC enrolled in the phase II SAKK (Swiss Group for Clinical Cancer Research) trial 19/05, receiving uniform treatment with first-line bevacizumab and erlotinib followed by platinum-based chemotherapy at progression. MATERIALS AND METHODS Fifty patients with baseline and 24 h blood samples were included from SAKK 19/05. The primary study endpoint was to identify miRNAs prognostic for overall survival (OS). Patient samples were analyzed with Agilent human miRNA 8x60K microarrays, each glass slide formatted with eight high-definition 60K arrays. Each array contained 40 probes targeting each of the 1347 miRNAs. Data preprocessing included quantile normalization using the robust multi-array average (RMA) algorithm. Prognostic and predictive miRNA expression profiles were identified by Spearman's rank correlation test (for percentage tumor shrinkage) or log-rank testing (for time-to-event endpoints). RESULTS Data preprocessing kept 49 patients and 424 miRNAs for further analysis. Ten miRNAs were significantly associated with OS, with hsa-miR-29a being the strongest prognostic marker (HR=6.44, 95%-CI 2.39-17.33). Patients with high hsa-miR-29a expression had significantly lower survival at 10 months compared to patients with low expression (54% versus 83%). Six of the 10 miRNAs (hsa-miR-29a, hsa-miR-542-5p, hsa-miR-502-3p, hsa-miR-376a, hsa-miR-500a, hsa-miR-424) were insensitive to perturbations according to jackknife cross-validation on their HR for OS. The respective principal component analysis (PCA) defined a meta-miRNA signature including the same 6 miRNAs, resulting in a HR of 0.66 (95%-CI 0.53-0.82). CONCLUSION Cell-free circulating miRNA profiling successfully identified a highly prognostic 6-gene signature in patients with advanced non-squamous NSCLC. Circulating miRNA profiling should be further validated in external cohorts for the selection and monitoring of systemic treatment in patients with advanced NSCLC.
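The jackknife stability check mentioned for the six robust miRNAs can be illustrated with a leave-one-out loop over a Cox model for a single marker; the sketch below uses the lifelines package with fabricated expression and survival values, not the trial data:

```python
# Sketch of a leave-one-out (jackknife) check on the hazard ratio of one
# circulating marker, in the spirit of the stability check described above.
# Expression values, survival times and events are fabricated for illustration.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "expr":  [0.5, 1.8, 0.3, 2.2, 1.1, 0.9, 2.5, 1.6],
    "time":  [30, 8, 7, 6, 14, 20, 5, 28],   # months of overall survival
    "event": [1, 1, 1, 1, 0, 1, 1, 0],       # 1 = death observed
})

hrs = []
for i in range(len(df)):
    cph = CoxPHFitter()
    cph.fit(df.drop(index=i), duration_col="time", event_col="event")
    hrs.append(float(np.exp(cph.params_["expr"])))   # hazard ratio for "expr"

print("jackknife HR range:", min(hrs), "-", max(hrs))
```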
Abstract:
BACKGROUND To investigate the impact of perioperative chemo(radio)therapy in advanced primary urethral carcinoma (PUC). PATIENTS AND METHODS A series of 124 patients (86 men, 38 women) were diagnosed with and underwent surgery for PUC in 10 referral centers between 1993 and 2012. Kaplan-Meier analysis with log-rank testing was used to investigate the impact of perioperative chemo(radio)therapy on overall survival (OS). The median follow-up was 21 months (mean: 32 months; interquartile range: 5-48). RESULTS Neoadjuvant chemotherapy (NAC), neoadjuvant chemoradiotherapy (N-CRT) plus adjuvant chemotherapy (ACH), and ACH alone were delivered in 12 (31%), 6 (15%) and 21 (54%) of these patients, respectively. Receipt of NAC/N-CRT was associated with clinically node-positive disease (cN+; P = 0.033) and lower utilization of cystectomy at surgery (P = 0.015). The objective response rate to NAC and N-CRT was 25% and 33%, respectively. The 3-year OS for patients with an objective response to neoadjuvant treatment (complete/partial response) was 100%, versus 58.3% for those with stable or progressive disease (P = 0.30). Of the 26 patients staged with ≥cT3 and/or cN+ disease, 16 (62%) received perioperative chemo(radio)therapy and 10 (38%) underwent upfront surgery without perioperative chemotherapy. The 3-year OS for this locally advanced subset of patients (≥cT3 and/or cN+) who received NAC (N = 5), N-CRT (N = 3), surgery only (N = 10) and surgery plus ACH (N = 8) was 100%, 100%, 50% and 20%, respectively (P = 0.016). Among these 26 patients, receipt of neoadjuvant treatment was significantly associated with improved 3-year relapse-free survival (RFS) (P = 0.022) and OS (P = 0.022). Proximal tumor location correlated with inferior 3-year RFS and OS (P = 0.056 and P = 0.005, respectively). CONCLUSION In this series, patients who received NAC/N-CRT for ≥cT3 and/or cN+ PUC appeared to demonstrate improved survival compared with those who underwent upfront surgery with or without ACH.
Abstract:
Expected utility theory (EUT) has been challenged as a descriptive theory in many contexts. The medical decision analysis context is not an exception. Several researchers have suggested that rank-dependent utility theory (RDUT) may accurately describe how people evaluate alternative medical treatments. Recent research in this domain has addressed a relevant feature of RDU models, probability weighting, but to date no direct test of this theory has been made. This paper provides a test of the main axiomatic difference between EUT and RDUT when health profiles are used as outcomes of risky treatments. Overall, EU best described the data. However, evidence of the editing and cancellation operation hypothesized in Prospect Theory and Cumulative Prospect Theory was apparent in our study: we found that RDU outperformed EU in the presentation of the risky treatment pairs in which the common outcome was not obvious. The influence of framing effects on the performance of RDU and their importance as a topic for future research is discussed.
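A toy numerical illustration of the rank-dependent evaluation this abstract refers to: decision weights are built from transformed decumulative probabilities, so the identity weighting recovers expected utility. The utilities, probabilities and Prelec weighting parameter below are invented for illustration, not taken from the paper:

```python
# Toy illustration of how rank-dependent utility (RDU) weights outcomes:
# each decision weight is a difference of a transformed decumulative
# probability, so the identity weighting collapses RDU to expected utility.
# Utilities, probabilities and the weighting parameter are made-up assumptions.
import math

def rdu(utilities, probs, w):
    # Sort outcomes from worst to best; the weight of outcome i is
    # w(P(at least as good as i)) - w(P(strictly better than i)).
    pairs = sorted(zip(utilities, probs))
    value = 0.0
    for i, (u, _) in enumerate(pairs):
        tail = sum(p for _, p in pairs[i:])
        tail_better = sum(p for _, p in pairs[i + 1:])
        value += (w(tail) - w(tail_better)) * u
    return value

prelec = lambda p, a=0.65: math.exp(-(-math.log(p)) ** a) if p > 0 else 0.0
identity = lambda p: p

utils, probs = [0.2, 0.9], [0.3, 0.7]        # utilities of two health profiles
print("EU :", rdu(utils, probs, identity))   # 0.3*0.2 + 0.7*0.9 = 0.69
print("RDU:", rdu(utils, probs, prelec))     # worse outcome gets extra weight here
```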
Abstract:
This paper analyzes the measure of systemic importance ΔCoVaR proposed by Adrian and Brunnermeier (2009, 2010) within the context of a similar class of risk measures used in the risk management literature. In addition, we develop a series of testing procedures, based on ΔCoVaR, to identify and rank the systemically important institutions. We stress the importance of statistical testing in interpreting the measure of systemic importance. An empirical application illustrates the testing procedures, using equity data for three European banks.
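A simplified two-step quantile-regression estimate of ΔCoVaR in the spirit of Adrian and Brunnermeier, on simulated returns; this sketches the measure itself only, not the paper's testing and ranking procedures:

```python
# Minimal sketch of a quantile-regression CoVaR estimate: regress system
# returns on one institution's returns at the 5% quantile, then compare the
# fitted quantile at the institution's VaR versus at its median. Data are
# simulated; this is not the paper's testing procedure.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
inst = rng.standard_t(df=5, size=1000) * 0.02              # institution returns
system = 0.5 * inst + rng.standard_t(df=5, size=1000) * 0.01

X = sm.add_constant(inst)
q05 = sm.QuantReg(system, X).fit(q=0.05)
b0, b1 = np.asarray(q05.params)

var_inst = np.quantile(inst, 0.05)                         # institution's 5% VaR
med_inst = np.quantile(inst, 0.50)
delta_covar = (b0 + b1 * var_inst) - (b0 + b1 * med_inst)
print("Delta CoVaR:", delta_covar)
```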
Abstract:
This paper investigates whether a multivariate cointegrated process with structural change can describe the Brazilian term structure of interest rate data from 1995 to 2006. In this work the break points and the number of cointegrating vectors are assumed to be known. The estimated model has four regimes, only three of which are statistically different: the first runs from the beginning of the sample until September 1997, the second from October 1997 until December 1998, and the third from January 1999 until the end of the sample. Monthly data are used. Models that allow for some similarities across the regimes are also estimated and tested. The models are estimated using the Generalized Reduced-Rank Regressions developed by Hansen (2003). All imposed restrictions can be tested using likelihood ratio tests with standard asymptotic chi-squared distributions. The results of the paper show evidence in favor of the long-run implications of the expectations hypothesis for Brazil.
Abstract:
It is well known that cointegration between the levels of two variables (labeled Yt and yt in this paper) is a necessary condition to assess the empirical validity of a present-value model (PV and PVM, respectively, hereafter) linking them. The work on cointegration has been so prevalent that it is often overlooked that another necessary condition for the PVM to hold is that the forecast error entailed by the model is orthogonal to the past. The basis of this result is the use of rational expectations in forecasting future values of variables in the PVM. If this condition fails, the present-value equation will not be valid, since it will contain an additional term capturing the (non-zero) conditional expected value of future error terms. Our article has a few novel contributions, but two stand out. First, in testing for PVMs, we advise splitting the restrictions implied by PV relationships into orthogonality conditions (or reduced rank restrictions) before additional tests on the value of parameters. We show that PV relationships entail a weak-form common feature relationship as in Hecq, Palm, and Urbain (2006) and in Athanasopoulos, Guillén, Issler and Vahid (2011), and also a polynomial serial-correlation common feature relationship as in Cubadda and Hecq (2001), which represent restrictions on dynamic models that allow several tests for the existence of PV relationships to be used. Because these relationships occur mostly with financial data, we propose tests based on generalized method of moments (GMM) estimates, where it is straightforward to propose robust tests in the presence of heteroskedasticity. We also propose a robust Wald test to investigate the presence of reduced rank models. Their performance is evaluated in a Monte Carlo exercise. Second, in the context of asset pricing, we propose applying a permanent-transitory (PT) decomposition based on Beveridge and Nelson (1981), which focuses on extracting the long-run component of asset prices, a key concept in modern financial theory as discussed in Alvarez and Jermann (2005), Hansen and Scheinkman (2009), and Nieuwerburgh, Lustig, and Verdelhan (2010). Here again we can exploit the results developed in the common cycle literature to easily extract permanent and transitory components under both long- and short-run restrictions. The techniques discussed herein are applied to long-span annual data on long- and short-term interest rates and on prices and dividends for the U.S. economy. In both applications we do not reject the existence of a common cyclical feature vector linking these two series. Extracting the long-run component shows the usefulness of our approach and highlights the presence of asset-pricing bubbles.
Abstract:
This paper introduces the concept of common deterministic shifts (CDS). This concept is simple, intuitive and relates to the common structure of shifts or policy interventions. We propose a reduced rank technique to investigate the presence of CDS. The proposed testing procedure has standard asymptotics and good small-sample properties. We further link the concept of CDS to that of super-exogeneity. It is shown that CDS tests can be constructed that allow testing for super-exogeneity. The Monte Carlo evidence indicates that the CDS test for super-exogeneity dominates testing procedures proposed in the literature.
Abstract:
Objectives. The null hypothesis was that mechanical testing systems used to determine polymerization stress (σpol) would rank a series of composites similarly. Methods. Two series of composites were tested in the following systems: universal testing machine (UTM) using glass rods as the bonding substrate, UTM/acrylic rods, "low compliance device", and single cantilever device ("Bioman"). One series had five experimental composites containing BisGMA:TEGDMA in equimolar concentrations and 60, 65, 70, 75 or 80 wt% of filler. The other series had five commercial composites: Filtek Z250 (3M ESPE), Filtek A110 (3M ESPE), Tetric Ceram (Ivoclar), Heliomolar (Ivoclar) and Point 4 (Kerr). Specimen geometry, dimensions and curing conditions were similar in all systems. σpol was monitored for 10 min. Volumetric shrinkage (VS) was measured in a mercury dilatometer and elastic modulus (E) was determined by three-point bending. Shrinkage rate was used as a measure of reaction kinetics. An ANOVA/Tukey test was performed for each variable, separately for each series. Results. For the experimental composites, σpol decreased with filler content in all systems, following the variation in VS. For the commercial materials, σpol did not vary in the UTM/acrylic system and showed very few similarities in rankings in the other test systems. Also, no clear relationships were observed between σpol and VS or E. Significance. The testing systems showed good agreement for the experimental composites, but very few similarities for the commercial composites. Therefore, comparison of polymerization stress results from different devices must be done carefully. (c) 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
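The ANOVA/Tukey comparison named in the abstract can be sketched as follows, on made-up polymerization stress readings for three hypothetical composites rather than the study's measurements:

```python
# Sketch of an ANOVA/Tukey comparison on fabricated polymerization stress
# readings (MPa) for three hypothetical composites A, B and C.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

stress = np.array([4.1, 4.3, 4.0, 3.2, 3.4, 3.1, 2.5, 2.6, 2.4])  # MPa
group = np.array(["A"] * 3 + ["B"] * 3 + ["C"] * 3)

print(stats.f_oneway(stress[group == "A"], stress[group == "B"],
                     stress[group == "C"]))          # one-way ANOVA
print(pairwise_tukeyhsd(stress, group))              # Tukey HSD pairwise table
```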
Abstract:
Studies of consumer-resource interactions suggest that individual diet specialisation is empirically widespread and theoretically important to the organisation and dynamics of populations and communities. We used weighted networks to analyse resource use by sea otters, testing three alternative models for how individual diet specialisation may arise. As expected, individual specialisation was absent when otter density was low, but increased at high otter density. A high-density emergence of nested resource-use networks was consistent with the model assuming individuals share preference ranks. However, a density-dependent emergence of a non-nested modular network for core resources was more consistent with the competitive refuge model. Individuals from different diet modules showed predictable variation in rank-order prey preferences and handling times of core resources, further supporting the competitive refuge model. Our findings support a hierarchical organisation of diet specialisation and suggest that individual use of core and marginal resources may be driven by different selective pressures.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
We have undertaken two-dimensional gel electrophoresis proteomic profiling on a series of cell lines with different recombinant antibody production rates. Due to the nature of gel-based experiments, not all protein spots are detected across all samples in an experiment, and hence datasets are invariably incomplete. New approaches are therefore required for the analysis of such graduated datasets. We approached this problem in two ways. Firstly, we applied a missing value imputation technique to calculate missing data points. Secondly, we combined singular value decomposition based hierarchical clustering with the expression variability test to identify protein spots whose expression correlates with increased antibody production. The results have shown that, while imputation of missing data was a useful method to improve the statistical analysis of such data sets, it was of limited use in differentiating between the samples investigated, and highlighted a small number of candidate proteins for further investigation. (c) 2006 Elsevier B.V. All rights reserved.
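The two analysis ideas in this abstract, missing-value imputation followed by SVD-based hierarchical clustering, can be sketched as below; the matrix size, choice of imputer, number of retained components and linkage method are illustrative assumptions, not the study's pipeline:

```python
# Sketch of the two steps described above: impute missing spot intensities,
# then cluster spots on their leading SVD components. The 8x6 intensity
# matrix and all tuning choices are illustrative assumptions.
import numpy as np
from sklearn.impute import KNNImputer
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
spots = rng.normal(size=(8, 6))                  # 8 protein spots x 6 cell lines
spots[rng.random(spots.shape) < 0.15] = np.nan   # spots missing on some gels

filled = KNNImputer(n_neighbors=2).fit_transform(spots)

u, s, vt = np.linalg.svd(filled - filled.mean(axis=0), full_matrices=False)
scores = u[:, :2] * s[:2]                        # spot scores on first 2 components
labels = fcluster(linkage(scores, method="ward"), t=3, criterion="maxclust")
print(labels)                                    # cluster label per protein spot
```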
Abstract:
Predicting the various responses of different species to changes in landscape structure is a formidable challenge to landscape ecology. Based on expert knowledge and landscape ecological theory, we develop five competing a priori models for predicting the presence/absence of the Koala (Phascolarctos cinereus) in Noosa Shire, south-east Queensland (Australia). A priori predictions were nested within three levels of ecological organization: in situ (site-level) habitat (< 1 ha), patch level (100 ha) and landscape level (100-1000 ha). To test the models, Koala surveys and habitat surveys (n = 245) were conducted across the habitat mosaic. After taking into account tree species preferences, the patch and landscape context, and the neighbourhood effect of adjacent present sites, we applied logistic regression and hierarchical partitioning analyses to rank the alternative models and the explanatory variables. The strongest support was for a multilevel model, with Koala presence best predicted by the proportion of the landscape occupied by high-quality habitat, the neighbourhood effect, the mean nearest-neighbour distance between forest patches, the density of forest patches and the density of sealed roads. When tested against independent data (n = 105) using a receiver operating characteristic curve, the multilevel model performed moderately well. The study is consistent with recent assertions that habitat loss is the major driver of population decline; however, landscape configuration and roads have an important effect that needs to be incorporated into Koala conservation strategies.
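A minimal sketch of the model-evaluation step described here (logistic regression for presence/absence, scored on independent data with a receiver operating characteristic metric), using hypothetical stand-ins for the landscape predictors:

```python
# Sketch of fitting a presence/absence logistic regression and scoring it on
# held-out data with ROC AUC. Predictor names, the "true" relationship and
# the simulated surveys are hypothetical stand-ins for the study's variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
habitat = rng.uniform(0, 1, 200)                     # proportion of high-quality habitat
roads = rng.uniform(0, 5, 200)                       # sealed-road density
X = np.column_stack([habitat, roads])
p = 1 / (1 + np.exp(-(3 * habitat - 0.6 * roads)))   # assumed underlying relationship
y = rng.random(200) < p                              # simulated presence/absence

model = LogisticRegression().fit(X[:140], y[:140])                 # "survey" data
auc = roc_auc_score(y[140:], model.predict_proba(X[140:])[:, 1])   # independent data
print("AUC:", round(auc, 2))
```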