12 results for information criteria
Abstract:
The problem of model selection for a univariate long-memory time series is investigated once a semiparametric estimator for the long-memory parameter has been used. Standard information criteria are not consistent in this case. A Modified Information Criterion (MIC) that overcomes these difficulties is introduced, and proofs establishing its asymptotic validity are provided. The results are general and cover a wide range of short-memory processes. Simulation evidence compares the new and existing methodologies, and empirical applications to monthly inflation and daily realized volatility are presented.
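As context for the two-step setting this abstract describes, here is a minimal sketch of the standard pipeline: estimate the long-memory parameter d semiparametrically (a GPH-style log-periodogram regression), fractionally difference the series by the estimate, then select the short-memory (AR) order with an information criterion. The MIC's penalty is not given in the abstract, so plain AIC stands in for it; the function names, bandwidth rule and simulated example are illustrative assumptions.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Semiparametric (GPH) log-periodogram estimate of the memory parameter d."""
    n = len(x)
    m = m or int(n ** 0.5)                      # rule-of-thumb bandwidth (assumption)
    lam = 2 * np.pi * np.arange(1, m + 1) / n   # first m Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    X = np.column_stack([np.ones(m), -2 * np.log(2 * np.sin(lam / 2))])
    beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return beta[1]                              # regression slope estimates d

def frac_diff(x, d):
    """Apply (1 - L)^d using the binomial expansion of the difference operator."""
    w = np.ones(len(x))
    for k in range(1, len(x)):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

def select_ar_order(u, p_max=8):
    """Choose the AR order by AIC (a stand-in for the paper's MIC)."""
    n, best_p, best_ic = len(u), 1, np.inf
    for p in range(1, p_max + 1):
        Y = u[p:]
        X = np.column_stack([u[p - j:n - j] for j in range(1, p + 1)])
        b, *_ = np.linalg.lstsq(X, Y, rcond=None)
        ic = np.log(np.mean((Y - X @ b) ** 2)) + 2 * p / (n - p)
        if ic < best_ic:
            best_p, best_ic = p, ic
    return best_p

# illustrative run on a simulated pure long-memory (d = 0.3) series
rng = np.random.default_rng(0)
x = frac_diff(rng.standard_normal(1000), -0.3)  # (1 - L)^(-0.3) on white noise
d_hat = gph_estimate(x)
u = frac_diff(x, d_hat)                         # strip the estimated long memory
print(d_hat, select_ar_order(u))
```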
Abstract:
BACKGROUND: Whilst multimorbidity is more prevalent with increasing age, approximately 30% of middle-aged adults (45-64 years) are also affected. Several prescribing criteria have been developed to optimise medication use in older people (≥65 years) with little focus on potentially inappropriate prescribing (PIP) in middle-aged adults. We have developed a set of explicit prescribing criteria called PROMPT (PRescribing Optimally in Middle-aged People's Treatments) which may be applied to prescribing datasets to determine the prevalence of PIP in this age-group.
METHODS: A literature search was conducted to identify published prescribing criteria for all age groups, with the Project Steering Group (convened for this study) adding further criteria for consideration, all of which were reviewed for relevance to middle-aged adults. These criteria underwent a two-round Delphi process, using an expert panel consisting of general practitioners, pharmacists and clinical pharmacologists from the United Kingdom and Republic of Ireland. Using web-based questionnaires, 17 panellists were asked to indicate their level of agreement with each criterion via a 5-point Likert scale (1 = Strongly Disagree, 5 = Strongly Agree) to assess the applicability to middle-aged adults in the absence of clinical information. Criteria were accepted/rejected/revised depending on the panel's level of agreement, using the median response/interquartile range and additional comments.
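As an illustration of the consensus rule just described (the exact median and interquartile-range cut-offs are not stated in the abstract, so the thresholds below are assumptions), a minimal sketch:

```python
import numpy as np

def delphi_decision(ratings, accept_median=4, max_iqr=1):
    """Accept a criterion when the panel's median Likert rating is high and
    the interquartile range indicates consensus; thresholds are illustrative."""
    ratings = np.asarray(ratings)
    median = np.median(ratings)
    q1, q3 = np.percentile(ratings, [25, 75])
    if median >= accept_median and q3 - q1 <= max_iqr:
        return "accept"
    if median <= 2 and q3 - q1 <= max_iqr:
        return "reject"
    return "revise"

# example: 17 panellists rating one criterion on the 1-5 Likert scale
print(delphi_decision([5, 4, 4, 5, 4, 3, 4, 5, 4, 4, 5, 4, 4, 3, 4, 5, 4]))
```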
RESULTS: Thirty-four criteria were rated in the first round of this exercise and consensus was achieved on 17 criteria which were accepted into the PROMPT criteria. Consensus was not reached on the remaining 17, and six criteria were removed following a review of the additional comments. The second round of this exercise focused on the remaining 11 criteria, some of which were revised following the first exercise. Five criteria were accepted from the second round, providing a final list of 22 criteria [gastro-intestinal system (n = 3), cardiovascular system (n = 4), respiratory system (n = 4), central nervous system (n = 6), infections (n = 1), endocrine system (n = 1), musculoskeletal system (n = 2), duplicates (n = 1)].
CONCLUSIONS: PROMPT is the first set of prescribing criteria developed for use in middle-aged adults. The utility of these criteria will be tested in future studies using prescribing datasets.
Abstract:
This paper examines the finite sample properties of three testing regimes for the null hypothesis of a panel unit root against stationary alternatives in the presence of cross-sectional correlation. The regimes of Bai and Ng (2004), Moon and Perron (2004) and Pesaran (2007) are assessed in the presence of multiple factors and other non-standard situations. The behaviour of some information criteria used to determine the number of factors in a panel is examined, and new information criteria with improved properties in small-N panels are proposed. An application to the efficient markets hypothesis is also provided. The null hypothesis of a panel random walk is not rejected by any of the tests, supporting the efficient markets hypothesis in the financial services sector of the Australian Stock Exchange.
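For readers unfamiliar with factor-number selection in panels, the following sketch shows the standard Bai-Ng-type criterion that such papers modify: estimate k factors by principal components and minimise log V(k) plus a penalty in k. The small-N improvements the abstract mentions are not spelled out, so the common ICp2 penalty is used here as an assumption.

```python
import numpy as np

def select_num_factors(X, k_max=8):
    """Pick the number of factors in a T x N panel by minimising
    log V(k) + k * penalty, with V(k) the mean squared PCA residual
    (an ICp2-style criterion; the paper's modified penalties differ)."""
    T, N = X.shape
    X = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    penalty = (N + T) / (N * T) * np.log(min(N, T))
    best_k, best_ic = 1, np.inf
    for k in range(1, k_max + 1):
        F = U[:, :k] * s[:k]              # estimated factors (T x k)
        V = np.mean((X - F @ Vt[:k]) ** 2)
        ic = np.log(V) + k * penalty
        if ic < best_ic:
            best_k, best_ic = k, ic
    return best_k

# illustrative panel: two common factors plus idiosyncratic noise
rng = np.random.default_rng(0)
F, L = rng.standard_normal((200, 2)), rng.standard_normal((2, 30))
print(select_num_factors(F @ L + 0.5 * rng.standard_normal((200, 30))))
```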
Abstract:
Nonlinear models constructed from radial basis function (RBF) networks can easily be over-fitted due to noise in the data. While information criteria, such as the final prediction error (FPE), can provide a trade-off between training error and network complexity, the tunable parameters that penalise a large network model are hard to determine and are usually network dependent. This article introduces a new locally regularised, two-stage stepwise construction algorithm for RBF networks. The main objective is to produce a parsimonious network that generalises well over unseen data. This is achieved by utilising Bayesian learning within a two-stage stepwise construction procedure to penalise centres that mainly fit the noise.
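As background to the FPE trade-off the abstract mentions, here is a minimal sketch of selecting the number of RBF centres by the final prediction error; the centre placement, kernel width and data are illustrative assumptions and do not reproduce the paper's locally regularised two-stage algorithm.

```python
import numpy as np

def fpe(y, y_hat, k):
    """Akaike's final prediction error: residual variance inflated by a
    complexity factor in the number of parameters k."""
    n = len(y)
    return np.mean((y - y_hat) ** 2) * (n + k) / (n - k)

def rbf_design(x, centres, width):
    """Gaussian RBF design matrix for 1-D inputs."""
    return np.exp(-(x[:, None] - centres[None, :]) ** 2 / (2 * width ** 2))

# choose the network size that minimises FPE on noisy 1-D data
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(100)
scores = {}
for k in range(2, 15):
    Phi = rbf_design(x, np.linspace(0, 1, k), width=0.15)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    scores[k] = fpe(y, Phi @ w, k)
print(min(scores, key=scores.get))
```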
Abstract:
Quantitative scaling relationships among body mass, temperature and metabolic rate of organisms are still controversial, and resolution may be further complicated by the use of different, and possibly inappropriate, approaches to statistical analysis. We propose a modelling strategy based on Akaike's information criterion and non-linear model fitting (nlm). Accordingly, we collated and modelled available intraspecific data on the individual standard metabolic rate of Antarctic microarthropods as a function of body mass (M), temperature (T), species identity (S) and the high-rank taxa to which species belong (G), and tested predictions from metabolic scaling theory (mass-metabolism allometric exponent b = 0.75, activation energy range 0.2-1.2 eV). We also performed allometric analysis based on logarithmic transformations (lm). Conclusions from the lm and nlm approaches differed. Best-supported models from lm incorporated T, M and S. The estimate of the allometric scaling exponent linking body mass and metabolic rate was 0.696 ± 0.105 (mean ± 95% CI). In contrast, the four best-supported nlm models suggested that both the scaling exponent and the activation energy vary significantly across the high-rank taxa (Collembola, Cryptostigmata, Mesostigmata and Prostigmata) to which species belong, with mean values of b ranging from about 0.6 to 0.8. We therefore reached two conclusions: (1) published analyses of arthropod metabolism based on logarithmic data may be biased by data transformation; (2) non-linear models applied to Antarctic microarthropod metabolic rates suggest that intraspecific scaling of standard metabolic rate is highly variable and can be characterised by scaling exponents that vary greatly within taxa, which may have biased previous interspecific comparisons that neglected intraspecific variability.
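To make the nlm-plus-AIC strategy concrete, here is a sketch that fits the usual metabolic-theory model B = b0 * M^b * exp(-(E/kB)(1/T - 1/T0)) by non-linear least squares and scores it with AIC; the data are synthetic stand-ins, and the temperature centring at T0 is a common convention, not a detail taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617e-5   # Boltzmann constant, eV / K
T0 = 278.0       # centring temperature in K (assumption)

def mte(X, b0, b, E):
    """Mass-temperature model: B = b0 * M^b * exp(-(E/kB)(1/T - 1/T0))."""
    M, T = X
    return b0 * M ** b * np.exp(-(E / K_B) * (1.0 / T - 1.0 / T0))

# synthetic data standing in for microarthropod respirometry measurements
rng = np.random.default_rng(1)
M = rng.uniform(0.01, 1.0, 60)            # body mass
T = rng.uniform(268, 288, 60)             # temperature, K
B = mte((M, T), 0.5, 0.75, 0.65) * (1 + 0.1 * rng.standard_normal(60))

params, _ = curve_fit(mte, (M, T), B, p0=[1.0, 0.7, 0.6])
rss = np.sum((B - mte((M, T), *params)) ** 2)
n, k = len(B), 3
aic = n * np.log(rss / n) + 2 * k         # AIC up to an additive constant
print(np.round(params, 3), round(aic, 1))
```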
Abstract:
We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time series in four Pan-STARRS1 photometric bands: gP1, rP1, iP1, and zP1. We use three deterministic light-curve models to fit BL transients (a Gaussian, a Gamma distribution, and an analytic supernova (SN) model) and one stochastic light-curve model, the Ornstein-Uhlenbeck process, to fit variability characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm to these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, yielding two measures of classification quality derived from averages across the photometric filters: (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the cluster centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
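The classification step pairs per-band goodness-of-fit statistics with K-means clustering. A minimal sketch of that idea follows; the AICc formula is standard, but the two synthetic statistics and the cluster setup are illustrative assumptions rather than the paper's actual feature set.

```python
import numpy as np
from sklearn.cluster import KMeans

def aicc(rss, n, k):
    """Corrected AIC for a least-squares fit with n points and k parameters."""
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

# stand-in per-source statistics in one band, e.g. the AICc difference
# between the best burst-like model and the Ornstein-Uhlenbeck model,
# paired with a cross-validation likelihood ratio
rng = np.random.default_rng(2)
stats = np.vstack([rng.normal(-5, 2, (300, 2)),   # burst-like-favoured sources
                   rng.normal(+5, 2, (300, 2))])  # stochastic-favoured sources
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(stats)
print(np.bincount(labels))
```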
Abstract:
We consider two celebrated criteria for defining the nonclassicality of bipartite bosonic quantum systems, the first stemming from information-theoretic concepts and the second from physical constraints on the quantum phase space. Consequently, two sets of allegedly classical states are singled out: (i) the set C composed of the so-called classical-classical (CC) states—separable states that are locally distinguishable and do not possess quantum discord; (ii) the set P of states endowed with a positive P representation (P-classical states)—mixtures of Glauber coherent states that, e.g., fail to show negativity of their Wigner function. By showing that C and P are almost disjoint, we prove that the two defining criteria are maximally inequivalent. Thus, the notions of classicality that they put forward are radically different. In particular, generic CC states show quantumness in their P representation and, vice versa, almost all P-classical states have positive quantum discord and hence are not CC. This inequivalence is further elucidated by considering different applications of P-classical and CC states. Our results suggest that there are quantum correlations in nature other than those revealed by entanglement and quantum discord.
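For reference, the two classicality notions being contrasted can be written in standard form (textbook definitions, not notation taken from the paper itself):

```latex
% Classical-classical (CC) states: diagonal in a product of local
% orthonormal bases, hence with zero quantum discord on both sides:
\rho_{\mathrm{CC}} = \sum_{i,j} p_{ij}\,
    |i\rangle\langle i| \otimes |j\rangle\langle j|
% P-classical states: mixtures of Glauber coherent states, i.e. states
% whose Glauber-Sudarshan P function is a genuine probability density:
\rho_{P} = \int P(\alpha,\beta)\,
    |\alpha\rangle\langle\alpha| \otimes |\beta\rangle\langle\beta|\,
    \mathrm{d}^{2}\alpha\,\mathrm{d}^{2}\beta,
\qquad P(\alpha,\beta) \ge 0 .
```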
Abstract:
Background
A European screening tool (STOPP/START) has been formulated to identify the prescribing of potentially inappropriate medicines (PIMs) and potential prescribing omissions (PPOs). Pharmacists working in community pharmacies could use STOPP/START as a guide to conducting medication use reviews; however, community pharmacists do not routinely have access to patients' clinical records.
Objective
To compare the PIM and PPO detection rates from application of the STOPP/START criteria to patients' medication details alone with the detection rates from application of STOPP/START to information on patients' medications combined with clinical information.
Setting
Community pharmacy.
Method
Three pharmacists applied STOPP/START to 250 patient medication lists, containing information regarding dose, frequency and duration of treatment. The PIMs and PPOs identified by each pharmacist were compared with those identified by consensus agreement of two other pharmacists, who applied STOPP/START criteria using patients' full clinical records.
Main outcome measure
The main outcome measures were: (1) PIM and PPO detection rates among pharmacists with access to patients' clinical information compared to those among pharmacists using patients' medication information only, and (2) the levels of agreement (calculated using Cohen's kappa statistic (k)) for the three most commonly identified PIMs and PPOs.
Results
Pharmacists with access to patients' clinical records identified significantly fewer PIMs than pharmacists without (p = 0.002). The three most commonly identified PIMs were benzodiazepines, proton pump inhibitors and duplicate drug classes, with kappa (k) agreement ranges of 0.87-0.97, 0.60-0.68 and 0.39-0.85 respectively. PPOs were identified more often (p
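Since the agreement figures above rest on Cohen's kappa, a minimal sketch of the statistic for two raters' binary PIM flags (the example data are hypothetical):

```python
def cohen_kappa(a, b):
    """Cohen's kappa for two raters' binary flags:
    kappa = (p_o - p_e) / (1 - p_e), with p_o the observed agreement
    and p_e the agreement expected by chance."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    pa, pb = sum(a) / n, sum(b) / n
    p_e = pa * pb + (1 - pa) * (1 - pb)
    return (p_o - p_e) / (1 - p_e)

# hypothetical flags: did each pharmacist mark the prescription as a PIM?
print(cohen_kappa([1, 1, 0, 1, 0, 0, 1, 0], [1, 1, 0, 0, 0, 0, 1, 1]))  # 0.5
```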
Abstract:
Three issues are usually associated with threat prevention intelligent surveillance systems. First, the fusion and interpretation of large-scale, incomplete, heterogeneous information; second, the need to effectively predict suspects' intentions and rank the potential threats posed by each suspect; third, strategies for allocating limited security resources (e.g., the dispatch of security teams) to prevent a suspect's further actions towards critical assets. However, in the literature, these three issues are seldom considered together in a sensor-network-based intelligent surveillance framework. To address this problem, in this paper we propose a multi-level decision support framework for in-time reaction in intelligent surveillance. More specifically, based on a multi-criteria event modeling framework, we design a method to predict the most plausible intention of a suspect. Following this, a decision support model is proposed to rank each suspect based on their threat severity and to determine resource allocation strategies. Finally, formal properties are discussed to justify our framework.
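The abstract does not detail the ranking or allocation models, so purely as a sketch of that final step, here is a greedy stand-in that ranks suspects by a fused threat score and dispatches a limited number of security teams to the top-ranked ones:

```python
def allocate_teams(threat_scores, n_teams):
    """Rank suspects by threat severity and assign the limited number of
    security teams to the highest-ranked ones (a greedy illustrative
    stand-in, not the paper's decision support model)."""
    ranked = sorted(threat_scores, key=threat_scores.get, reverse=True)
    return ranked[:n_teams]

print(allocate_teams({"s1": 0.9, "s2": 0.4, "s3": 0.7}, n_teams=2))  # ['s1', 's3']
```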
Abstract:
In many CCTV and sensor-network-based intelligent surveillance systems, a number of attributes or criteria are used to individually evaluate the degree of potential threat posed by a suspect. The outcomes for these attributes generally come from analytical algorithms operating on data that are often pervaded with uncertainty and incompleteness. As a result, such individual threat evaluations are often inconsistent, and they can change as time elapses. Integrating heterogeneous threat evaluations with temporal influence to obtain a better overall evaluation is therefore a challenging issue. So far, this issue has rarely been considered by existing frameworks for event reasoning under uncertainty in sensor-network-based surveillance. In this paper, we first propose a weighted aggregation operator based on a set of principles that constrain the fusion of individual threat evaluations. Then, we propose a method to integrate the temporal influence on threat-evaluation changes. Finally, we demonstrate the usefulness of our system with a decision support event modeling framework using an airport security surveillance scenario.
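As one concrete reading of "weighted aggregation with temporal influence", the sketch below fuses attribute-level threat evaluations with an exponentially decaying weight on older evaluations; the paper's operator is principle-driven, so this particular decay form and the example numbers are assumptions.

```python
import numpy as np

def fuse_threat(evals, weights, timestamps, now, half_life=60.0):
    """Weighted fusion of per-attribute threat evaluations, discounting
    older evaluations with an exponential (half-life) decay."""
    decay = 0.5 ** ((now - np.asarray(timestamps, float)) / half_life)
    w = np.asarray(weights) * decay
    return float(np.dot(w, evals) / w.sum())

# three attribute-level evaluations; the oldest is discounted the most
print(fuse_threat([0.9, 0.4, 0.7], [0.5, 0.2, 0.3], [0, 90, 120], now=120))
```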
Abstract:
Purpose
The study contributes to the literature on public value and performance by examining politicians' and managers' perspectives, investigating the importance they attach to the different facets of performance information (i.e. budgetary, accrual-based and non-financial information (NFI)).
Design/methodology/approach
We survey politicians and managers in all Italian municipalities of at least 80,000 inhabitants.
Findings
Overall, NFI is more appreciated than financial information (FI). Moreover, budgetary accounting is preferred to accrual accounting. Politicians’ and managers’ preferences are generally aligned.
Research limitations/implications
NFI as a measure of public value is not an alternative to FI but rather complementary to it. The latter remains a fundamental element of public sector accounting due to its role in resource allocation and control.
Practical implications
The preference for NFI over FI and of budgetary over accruals accounting suggests that the current predominant emphasis on (accrual-based) financial reporting might be misplaced.
Originality/value
Public value and performance are multi-faceted concepts. They can be captured by different types of information and evaluated according to different criteria, which also depend on the category of stakeholders or users who assess public performance. So far, most literature has treated the financial and non-financial facets of performance as virtually separate. Similarly, in practice, financial management tends to be decoupled from non-financial performance management. However, this research shows that only by considering their joint interactions can we achieve an accurate representation of what public value really is.