939 results for Measurement error models
Abstract:
Many public organisations have been under great pressure in recent years to increase the efficiency and transparency of outputs, to rationalise the use of public resources, and to increase the quality of service delivery. In this context, public organisations were encouraged to introduce New Public Management reforms with the goal of improving organisational efficiency and effectiveness. This new public management model is based on the measurement of outputs and outcomes, a clear definition of responsibilities, the transparency and accountability of governmental activities, and greater value for citizens. What types of performance measurement systems are used in police services? Based on the literature, we see that multidimensional models, such as the Balanced Scorecard, are important in many public organisations, such as municipalities, universities, and hospitals. Police services are characterised by complex and diverse objectives and stakeholders. Therefore, performance measurement of these public services calls for a specific analysis. Based on a nationwide survey of all police chiefs of the Portuguese police force, we find that employee performance measurement is the main form of measurement. We also propose a strategic map for the Portuguese police service.
Abstract:
A growing number of corporate failure prediction models has emerged since the 1960s. The economic and social consequences of business failure can be dramatic, so it is not surprising that the issue has attracted growing interest in academic research as well as in business practice. The main purpose of this study is to compare the predictive ability of five models based on three statistical techniques (Discriminant Analysis, Logit and Probit) and two based on Artificial Intelligence (Neural Networks and Rough Sets). The five models were applied to a dataset of 420 non-bankrupt firms and 125 bankrupt firms belonging to the textile and clothing industry, over the period 2003–09. Results show that all the models performed well, with an overall correct classification rate higher than 90% and a type II error always below 2%. The type I error increases as we move away from the year prior to failure. Our models contribute to the discussion of the causes of corporate financial distress. Moreover, they can be used to assist the decisions of creditors, investors and auditors. Additionally, this research can be of great value to devisers of national economic policies that aim to reduce industrial unemployment.
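The type I and type II error rates reported above can be computed directly from a confusion matrix. A minimal sketch in Python, with toy labels rather than the study's textile-industry sample:

```python
def error_rates(actual, predicted, failed=1, healthy=0):
    """Type I error: a failed firm classified as healthy.
    Type II error: a healthy firm classified as failed."""
    type1 = sum(1 for a, p in zip(actual, predicted) if a == failed and p == healthy)
    type2 = sum(1 for a, p in zip(actual, predicted) if a == healthy and p == failed)
    n_failed = sum(1 for a in actual if a == failed)
    n_healthy = sum(1 for a in actual if a == healthy)
    correct = sum(1 for a, p in zip(actual, predicted) if a == p)
    return {
        "type_I": type1 / n_failed,
        "type_II": type2 / n_healthy,
        "accuracy": correct / len(actual),
    }

# Toy data: 1 = bankrupt, 0 = healthy
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
rates = error_rates(actual, predicted)
```

In the paper's terms, the overall correct classification level is `accuracy`, and the asymmetry between `type_I` and `type_II` is what degrades as the prediction horizon moves away from the failure year.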
Abstract:
Recent literature has proved that many classical pricing models (Black and Scholes, Heston, etc.) and risk measures (VaR, CVaR, etc.) may lead to "pathological meaningless situations", since traders can build sequences of portfolios whose risk level tends to −∞ and whose expected return tends to +∞, i.e., (risk = −∞, return = +∞). Such a sequence of strategies may be called a "good deal". This paper focuses on the risk measures VaR and CVaR and analyzes this caveat in a discrete-time complete pricing model. Under quite general conditions the explicit expression of a good deal is given, and its sensitivity with respect to some possible measurement errors is provided as well. We point out that a critical property is the absence of short sales. In such a case we first construct a "shadow riskless asset" (SRA) without short sales, and then the good deal is given by borrowing more and more money so as to invest in the SRA. It is also shown that the SRA is of interest in itself, even if there are short-selling restrictions.
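The two risk measures in question can be illustrated on an empirical loss sample. This is a generic historical-simulation sketch (sample values invented), not the discrete-time pricing model of the paper:

```python
import math

def var_cvar(losses, alpha=0.95):
    """Empirical Value-at-Risk and Conditional Value-at-Risk at level alpha.
    VaR is the empirical alpha-quantile of the loss distribution;
    CVaR is the average loss at or beyond VaR."""
    s = sorted(losses)
    k = math.ceil(alpha * len(s)) - 1   # index of the alpha-quantile
    var = s[k]
    tail = s[k:]
    cvar = sum(tail) / len(tail)
    return var, cvar

# Toy portfolio losses (positive = loss)
losses = [-2.0, -1.0, 0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 10.0]
var, cvar = var_cvar(losses, alpha=0.9)
```

By construction CVaR ≥ VaR, which is why CVaR is the more conservative of the two measures the paper analyzes.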
Abstract:
A new data set of daily gridded observations of precipitation, computed from over 400 stations in Portugal, is used to assess the performance of 12 regional climate models at 25 km resolution, from the ENSEMBLES set, all forced by ERA-40 boundary conditions, for the 1961-2000 period. Standard point error statistics, calculated from grid point and basin aggregated data, and precipitation related climate indices are used to analyze the performance of the different models in representing the main spatial and temporal features of the regional climate, and its extreme events. As a whole, the ENSEMBLES models are found to achieve a good representation of those features, with good spatial correlations with observations. There is a small but relevant negative bias in precipitation, especially in the driest months, leading to systematic errors in related climate indices. The underprediction of precipitation occurs in most percentiles, although this deficiency is partially corrected at the basin level. Interestingly, some of the conclusions concerning the performance of the models are different from what has been found for the contiguous territory of Spain; in particular, ENSEMBLES models appear too dry over Portugal and too wet over Spain. Finally, models behave quite differently in the simulation of some important aspects of local climate, from the mean climatology to high precipitation regimes in localized mountain ranges and in the subsequent drier regions.
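The standard point error statistics mentioned (bias and RMSE between simulated and observed precipitation) can be sketched as follows; the grid values are toy numbers, not ENSEMBLES or station data:

```python
def bias(model, obs):
    """Mean error over grid points: negative means the model is too dry."""
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    """Root-mean-square error over grid points."""
    return (sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs)) ** 0.5

obs   = [1.0, 2.0, 3.0, 4.0]   # observed precipitation (mm/day)
model = [0.5, 1.5, 3.0, 4.0]   # simulated precipitation (mm/day)

b = bias(model, obs)    # negative here: the model underpredicts
r = rmse(model, obs)
```

A systematic negative `b`, as reported for Portugal, signals the dry bias even when spatial correlations are good.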
Abstract:
Introduction: Standard Uptake Value (SUV) is a measurement of the uptake in a tumour normalized on the basis of a distribution volume, used to quantify 18F-Fluorodeoxyglucose (FDG) uptake in tumours such as primary lung tumours. Several sources of error can affect its accuracy. Normalization can be based on body weight, body surface area (BSA) and lean body mass (LBM). The aim of this study is to compare the influence of three normalization volumes on the calculation of SUV: body weight (SUVW), BSA (SUVBSA) and LBM (SUVLBM), with and without glucose correction, in patients with known primary lung tumours. The correlation between SUV and weight, height, blood glucose level, injected activity and time between injection and image acquisition is evaluated. Methods: The sample included 30 subjects (8 female and 22 male) with primary lung tumours and clinical indication for 18F-FDG Positron Emission Tomography (PET). Images were acquired on a Siemens Biograph according to the department's protocol. Maximum pixel SUVW was obtained for each abnormal uptake focus through a semiautomatic VOI with 3D isocontour quantification (threshold 2.5). The concentration of radioactivity (kBq/ml) was obtained from SUVW, and SUVBSA, SUVLBM and the glucose-corrected SUVs were then derived mathematically. Results: Statistically significant differences between SUVW, SUVBSA and SUVLBM and between SUVWgluc, SUVBSAgluc and SUVLBMgluc were observed (p<0.001). The blood glucose level showed significant positive correlations with SUVW (r=0.371; p=0.043) and SUVLBM (r=0.389; p=0.034). SUVBSA showed independence of variations with the blood glucose level. Conclusion: The measurement of radiopharmaceutical tumour uptake still varies with the distribution volume used for normalization. Further investigation on this subject is recommended.
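The three normalization volumes can be sketched in code. The Du Bois BSA and James LBM formulas below are common choices, but the study does not state which variants it used, so treat them (and the glucose reference level) as assumptions:

```python
def suv_bw(c_tissue_kbq_ml, injected_kbq, weight_kg):
    """Body-weight SUV: tissue concentration / (dose per gram of body mass).
    Assumes 1 g/ml tissue density, so weight in kg is converted to g."""
    return c_tissue_kbq_ml / (injected_kbq / (weight_kg * 1000.0))

def bsa_dubois(weight_kg, height_cm):
    """Du Bois body surface area (m^2) -- an assumed variant."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def lbm_james(weight_kg, height_cm, male=True):
    """James lean body mass (kg) -- an assumed variant."""
    if male:
        return 1.10 * weight_kg - 128.0 * (weight_kg / height_cm) ** 2
    return 1.07 * weight_kg - 148.0 * (weight_kg / height_cm) ** 2

def glucose_corrected(suv, glycemia_mg_dl, reference=100.0):
    """Scale an SUV by blood glucose relative to an assumed reference level."""
    return suv * glycemia_mg_dl / reference

# Toy patient: 10 kBq/ml lesion, 370 MBq injected, 74 kg
suv_w = suv_bw(c_tissue_kbq_ml=10.0, injected_kbq=370000.0, weight_kg=74.0)
```

SUVBSA and SUVLBM substitute `bsa_dubois(...)` (with a unit conversion) or `lbm_james(...)` for body weight in the denominator, which is exactly where the inter-method variability discussed in the conclusion arises.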
Abstract:
Composite materials have a complex behavior, which is difficult to predict under different types of loads. In the course of this dissertation a methodology was developed to predict failure and damage propagation of composite material specimens. This methodology uses finite element numerical models created with the Ansys and Matlab software packages. The methodology is able to perform an incremental-iterative analysis, which gradually increases the load applied to the specimen. Several structural failure phenomena are considered, such as fiber and/or matrix failure, delamination or shear plasticity. Failure criteria based on element stresses were implemented, and a procedure to reduce the stiffness of the failed elements was prepared. The material used in this dissertation consists of a spread tow carbon fabric with a 0°/90° arrangement, and the main numerical model analyzed is a 26-ply specimen under compression loads. Numerical results were compared with the results of specimens tested experimentally, whose mechanical properties are unknown, only the geometry of the specimen being known. The material properties of the numerical model were adjusted in the course of this dissertation in order to find the lowest difference between the numerical and experimental results, with an error lower than 5% (i.e., numerical model identification was performed based on the experimental results).
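The incremental-iterative procedure described (increase the load, check a stress-based failure criterion, knock down the stiffness of failed elements) can be sketched schematically. The stress-sharing rule, strength threshold and degradation factor below are illustrative placeholders, not the dissertation's actual criteria:

```python
def incremental_failure_analysis(elements, load_step=10.0, max_load=100.0,
                                 strength=30.0, degradation=0.1):
    """Increase the load step by step; elements whose stress exceeds
    'strength' fail once and have their stiffness knocked down, so they
    attract little load at later steps. 'elements' maps element id ->
    stiffness (arbitrary units); stress is shared by stiffness fraction."""
    load = 0.0
    failed = set()
    history = []
    while load < max_load:
        load += load_step
        total_k = sum(elements.values())
        for e in list(elements):
            if e in failed:
                continue
            stress = load * elements[e] / total_k   # crude stress share
            if stress > strength:
                failed.add(e)
                elements[e] *= degradation          # stiffness knockdown
        history.append((load, len(failed)))
    return history, failed

# Three toy elements; the stiffest one fails first and sheds load
history, failed = incremental_failure_analysis({1: 1.0, 2: 1.0, 3: 2.0})
```

The `history` list records how damage accumulates with load, which is the kind of damage-propagation curve the methodology compares against experiments.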
Abstract:
The tt¯ production cross-section dependence on jet multiplicity and jet transverse momentum is reported for proton-proton collisions at a centre-of-mass energy of 7 TeV in the single-lepton channel. The data were collected with the ATLAS detector at the CERN Large Hadron Collider and comprise the full 2011 data sample corresponding to an integrated luminosity of 4.6 fb−1. Differential cross-sections are presented as a function of the jet multiplicity for up to eight jets using jet transverse momentum thresholds of 25, 40, 60, and 80 GeV, and as a function of jet transverse momentum up to the fifth jet. The results are shown after background subtraction and corrections for all detector effects, within a kinematic range closely matched to the experimental acceptance. Several QCD-based Monte Carlo models are compared with the results. Sensitivity to the parton shower modelling is found at the higher jet multiplicities, at high transverse momentum of the leading jet and in the transverse momentum spectrum of the fifth leading jet. The MC@NLO+HERWIG MC is found to predict too few events at higher jet multiplicities.
Abstract:
The distribution and orientation of energy inside jets is predicted to be an experimental handle on colour connections between the hard-scatter quarks and gluons initiating the jets. This Letter presents a measurement of the distribution of one such variable, the jet pull angle. The pull angle is measured for jets produced in tt¯ events with one W boson decaying leptonically and the other decaying to jets using 20.3 fb−1 of data recorded with the ATLAS detector at a centre-of-mass energy of √s = 8 TeV at the LHC. The jet pull angle distribution is corrected for detector resolution and acceptance effects and is compared to various models.
Abstract:
Correlations between the elliptic or triangular flow coefficients vm (m=2 or 3) and other flow harmonics vn (n=2 to 5) are measured using √sNN = 2.76 TeV Pb+Pb collision data collected in 2010 by the ATLAS experiment at the LHC, corresponding to an integrated luminosity of 7 μb−1. The vm-vn correlations are measured at midrapidity as a function of centrality, and, for events within the same centrality interval, as a function of event ellipticity or triangularity defined in a forward rapidity region. For events within the same centrality interval, v3 is found to be anticorrelated with v2, and this anticorrelation is consistent with similar anticorrelations between the corresponding eccentricities ϵ2 and ϵ3. On the other hand, it is observed that v4 increases strongly with v2, and v5 increases strongly with both v2 and v3. The trend and strength of the vm-vn correlations for n=4 and 5 are found to disagree with the ϵm-ϵn correlations predicted by initial-geometry models. Instead, these correlations are found to be consistent with the combined effects of a linear contribution to vn and a nonlinear term that is a function of v2^2 or of v2v3, as predicted by hydrodynamic models. A simple two-component fit is used to separate these two contributions. The extracted linear and nonlinear contributions to v4 and v5 are found to be consistent with previously measured event-plane correlations.
Abstract:
Doctoral Programme in Electronics and Computer Engineering
Abstract:
The need to reduce human error has been growing in every field of study, and medicine is no exception. Through the implementation of technologies it is possible to support clinicians in the decision-making process and thereby reduce the difficulties they typically face. This study focuses on easing some of those difficulties by presenting real-time data mining models capable of predicting whether a monitored patient, typically admitted to intensive care, will need to take vasopressors. Data mining models were induced using clinical variables such as vital signs and laboratory analyses, among others. The best model presented a sensitivity of 94.94%. With this model it is possible to reduce the misuse of vasopressors, acting as prevention. At the same time, better care is offered to patients by anticipating their treatment with vasopressors.
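Sensitivity, the metric reported for the best model, is the fraction of patients who actually needed vasopressors that the model flags. A minimal sketch with invented labels:

```python
def sensitivity(actual, predicted, positive=1):
    """True positive rate: TP / (TP + FN)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted) if a == positive and p != positive)
    return tp / (tp + fn)

# Toy example: 1 = patient required vasopressors
actual    = [1, 1, 1, 1, 0, 0]
predicted = [1, 1, 1, 0, 0, 1]
s = sensitivity(actual, predicted)   # 3 of 4 positives caught
```

High sensitivity is the right target here: a missed positive (false negative) is a patient whose vasopressor need goes unanticipated.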
Abstract:
This project proposes to extend and generalize the estimation and inference procedures of multivariate generalized additive models for non-Gaussian random variables, which describe the behaviour of biological and social phenomena and whose representations give rise to longitudinal series and clustered data. Its immediate applied objective is the development of modelling methodology for understanding biological, environmental and social processes in the areas of Health and the Social Sciences, conditioned by the presence of specific phenomena such as disease. The proposed plan thus seeks to strengthen the relationship between Applied Mathematics, from an approach under uncertainty, and the Biological and Social Sciences in general, generating new tools to analyse and explain many problems for which ever more experimental and/or observational information is available. We propose, sequentially, beginning with discrete random variables (Yi, with variance function smaller than an even power of the expected value E(Y)), to generate a unified class of generalized additive models (parametric and nonparametric) which contains as particular cases generalized linear models, generalized nonlinear models, generalized additive models, and generalized marginal mean models (the GEE1 approach of Liang and Zeger, 1986, and GEE2 of Zhao and Prentice, 1990; Zeger and Qaqish, 1992; Yan and Fine, 2004), initiating a connection with generalized linear mixed models for latent variables (GLLAMM; Skrondal and Rabe-Hesketh, 2004), starting from correlated data structures.
This will make it possible to define conditional distributions of the responses given the covariates and the latent variables (LVs), and to estimate structural equations for the LVs, including regressions of LVs on covariates, regressions of LVs on other LVs, and specific models to account for recognized hierarchies of variation. How to define models that incorporate spatial or temporal structures while allowing hierarchical factors, fixed or random, measured with error, as in the situations that arise in the Social Sciences and in Epidemiology, is a statistical challenge. This sequential construction of both estimation and inference methodology is planned to begin with Poisson and Bernoulli random variables, covering the existing GLMs up to current hierarchical generalized models and connecting with GLLAMM, starting from correlated data structures. This family of models will be developed for structures of variables/vectors, covariates and hierarchical random components that describe phenomena in the Social Sciences and Epidemiology.
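As a minimal concrete instance of the model class this proposal starts from, a Poisson log-linear GLM with canonical link can be fitted by Newton-Raphson (iteratively reweighted least squares). This single-covariate sketch with toy counts is only an illustration of the GLM base case, not the proposal's multivariate methodology:

```python
import math

def fit_poisson_glm(x, y, iters=25):
    """Fit log E[y] = b0 + b1*x by Newton-Raphson on the Poisson
    log-likelihood with canonical log link (two-parameter IRLS)."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # Score vector and Fisher information for (b0, b1)
        g0 = sum(yi - mi for yi, mi in zip(y, mu))
        g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        h00 = sum(mu)
        h01 = sum(mi * xi for mi, xi in zip(mu, x))
        h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det   # solve the 2x2 Newton step
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Toy count data roughly following log mu = b0 + b1*x
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [2, 2, 3, 4, 5]
b0, b1 = fit_poisson_glm(x, y)
```

A useful check of the canonical-link MLE is that the fitted means sum to the observed counts, a property that also underlies the moment (GEE) extensions mentioned above.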
Abstract:
This comment corrects the errors in the estimation process that appear in Martins (2001). The first error is in the parametric probit estimation, as the previously presented results do not maximize the log-likelihood function. In the global maximum more variables become significant. As for the semiparametric estimation method, the kernel function used in Martins (2001) can take on both positive and negative values, which implies that the participation probability estimates may be outside the interval [0,1]. We have solved the problem by applying local smoothing in the kernel estimation, as suggested by Klein and Spady (1993).
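The fix described, replacing a kernel that takes negative values with smoothing that keeps probability estimates in [0,1], can be illustrated with a Nadaraya-Watson estimator using a Gaussian (everywhere nonnegative) kernel. This is a generic sketch, not the exact estimator of the comment:

```python
import math

def nw_probability(x0, x, y, bandwidth=1.0):
    """Nadaraya-Watson estimate of P(y=1 | x=x0) with a Gaussian kernel.
    Because the kernel weights are nonnegative and y is binary, the
    estimate is a convex combination of 0s and 1s, hence always in [0,1]."""
    weights = [math.exp(-0.5 * ((x0 - xi) / bandwidth) ** 2) for xi in x]
    return sum(w * yi for w, yi in zip(weights, y)) / sum(weights)

x = [-2.0, -1.0, 0.0, 1.0, 2.0]
y = [0, 0, 1, 1, 1]   # binary participation outcomes
p = nw_probability(0.5, x, y, bandwidth=0.75)
```

With a higher-order kernel that takes negative values, the weighted average above can leave [0,1], which is precisely the problem the comment reports in Martins (2001).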
Abstract:
This paper develops the link between poverty and inequality by focussing on a class of poverty indices (some of them well-known) which aggregate normative concerns for absolute and relative deprivation. The indices are distinguished by a parameter that captures the ethical sensitivity of poverty measurement to "exclusion" or "relative-deprivation" aversion. We also show how the indices can be readily used to predict the impact of growth on poverty. An illustration using LIS data finds that the United States shows more relative deprivation than Denmark and Belgium whatever the percentiles considered, but that overall deprivation comparisons of the four countries considered will generally depend on the intensity of the ethical concern for relative deprivation. The impact of growth on poverty is also seen to depend on the presence of, and on the attention granted to, concerns over relative deprivation.
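The idea of a parametric family whose single parameter tunes ethical sensitivity can be illustrated with the classic Foster-Greer-Thorbecke (FGT) indices. This is a simpler relative of the exclusion-aversion class the paper studies, not the authors' own index:

```python
def fgt_index(incomes, poverty_line, alpha=2.0):
    """Foster-Greer-Thorbecke poverty index: mean of normalized poverty
    gaps raised to the power alpha. alpha=0 gives the headcount ratio,
    alpha=1 the poverty gap, and larger alpha weights the poorest more."""
    gaps = [max(poverty_line - yi, 0.0) / poverty_line for yi in incomes]
    # only poor individuals contribute (guards against 0**0 == 1 in Python)
    return sum(g ** alpha for g in gaps if g > 0) / len(incomes)

incomes = [200.0, 400.0, 600.0, 900.0, 1500.0]
line = 800.0
headcount = fgt_index(incomes, line, alpha=0.0)
gap       = fgt_index(incomes, line, alpha=1.0)
severity  = fgt_index(incomes, line, alpha=2.0)
```

Raising `alpha` here plays the same formal role as the exclusion-aversion parameter in the paper: rankings between distributions can change as the ethical weight on the most deprived increases.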
Abstract:
Among the largest resources for biological sequence data are the expressed sequence tags (ESTs) available in public and proprietary databases. ESTs provide information on transcripts, but for technical reasons they often contain sequencing errors. Therefore, when analyzing EST sequences computationally, such errors must be taken into account. Earlier attempts to model error-prone coding regions have shown good performance in detecting and predicting these regions while correcting sequencing errors using codon usage frequencies. In the research presented here, we improve the detection of translation start and stop sites by integrating a more complex mRNA model with codon-usage-bias-based error correction into one hidden Markov model (HMM), thus generalizing this error correction approach to more complex HMMs. We show that our method maintains the performance in detecting coding sequences.
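The decoding step of an HMM-based gene finder can be sketched with the Viterbi algorithm. The two-state model below (coding vs. non-coding, with made-up GC-biased emission probabilities) is a toy, not the authors' mRNA/error-correction model:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden-state path for an observation sequence,
    computed in log space to avoid numerical underflow."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            best = max(states, key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = (V[t - 1][best] + math.log(trans_p[best][s])
                       + math.log(emit_p[s][obs[t]]))
            back[t][s] = best
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

states = ("coding", "noncoding")
start_p = {"coding": 0.5, "noncoding": 0.5}
trans_p = {"coding":    {"coding": 0.9, "noncoding": 0.1},
           "noncoding": {"coding": 0.1, "noncoding": 0.9}}
# Toy emissions: G/C slightly favoured in coding regions
emit_p = {"coding":    {"A": 0.15, "C": 0.35, "G": 0.35, "T": 0.15},
          "noncoding": {"A": 0.30, "C": 0.20, "G": 0.20, "T": 0.30}}

path = viterbi("GCGCATAT", states, start_p, trans_p, emit_p)
```

The research described above enriches the state space (start/stop sites, codon structure) and adds error-correcting transitions, but the decoding machinery is the same.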