942 results for Classification error rate
Abstract:
Frame rate upconversion (FRUC) is an important post-processing technique to enhance the visual quality of low frame rate video. A major recent advance in this area is FRUC based on trilateral filtering, whose novelty mainly derives from the combination of an edge-based motion estimation block matching criterion with the trilateral filter. However, there is still room for improvement, notably towards reducing the size of the uncovered regions in the initial estimated frame, that is, the estimated frame before trilateral filtering. In this context, an improved motion estimation block matching criterion is proposed in which a combined luminance and edge error metric is weighted according to the motion vector components, notably to regularise the motion field. Experimental results confirm that significant improvements are achieved for the final interpolated frames, reaching average PSNR gains of up to 2.73 dB over recent alternative solutions, for video content with varied motion characteristics.
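As a hedged illustration of a block matching criterion of this kind, the sketch below (Python) combines a luminance SAD and an edge-map SAD and scales the result by the motion vector magnitude to regularise the motion field; the weights, the edge maps and the exact form of the proposed metric are assumptions, not the values used in the letter.

    import numpy as np

    def block_matching_cost(cur_block, ref_block, cur_edges, ref_edges, mv, lam=0.5, gamma=0.01):
        """Illustrative cost: combined luminance and edge SAD, weighted by the
        motion vector components to penalise large, irregular motion.
        lam and gamma are placeholder weights, not those of the paper."""
        sad_luma = np.abs(cur_block.astype(float) - ref_block.astype(float)).sum()
        sad_edge = np.abs(cur_edges.astype(float) - ref_edges.astype(float)).sum()
        combined = (1 - lam) * sad_luma + lam * sad_edge
        return combined * (1.0 + gamma * (mv[0] ** 2 + mv[1] ** 2))

    def best_motion_vector(cur_block, cur_edges, ref_frame, ref_edges, top_left, search=8):
        """Exhaustive search over a (2*search+1)^2 window of candidate vectors."""
        y0, x0 = top_left
        h, w = cur_block.shape
        best_mv, best_cost = (0, 0), float("inf")
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = y0 + dy, x0 + dx
                if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                    continue
                cost = block_matching_cost(cur_block, ref_frame[y:y + h, x:x + w],
                                           cur_edges, ref_edges[y:y + h, x:x + w], (dy, dx))
                if cost < best_cost:
                    best_cost, best_mv = cost, (dy, dx)
        return best_mv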
Abstract:
MultiBand OFDM (MB-OFDM) UWB [1] is a promising short-range wireless technology for high data rate communications of up to 480 Mbps. In this paper, we have designed and implemented on a Virtex-6 FPGA an MB-OFDM UWB receiver for the highest data rate of 480 Mbps. To test the system, we have also implemented an MB-OFDM transmitter and an AWGN generator in VHDL and determined the bit error rates at the receiver running on the FPGA.
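The bit error rate measurement itself can be illustrated with a short behavioural model; the sketch below (Python, not the VHDL design) assumes a plain Gray-mapped QPSK constellation over AWGN and simply compares transmitted and received bits, ignoring the OFDM modulation, coding and frequency hopping of the actual MB-OFDM chain.

    import numpy as np

    def qpsk_ber_over_awgn(n_bits=1_000_000, ebn0_db=6.0, seed=0):
        """Count bit errors of Gray-mapped QPSK over an AWGN channel, mirroring
        the transmitter / AWGN generator / receiver test setup in spirit only."""
        rng = np.random.default_rng(seed)
        bits = rng.integers(0, 2, n_bits)
        i = (1 - 2 * bits[0::2]) / np.sqrt(2)          # bit 0 -> +1/sqrt(2), bit 1 -> -1/sqrt(2)
        q = (1 - 2 * bits[1::2]) / np.sqrt(2)
        symbols = i + 1j * q                           # unit-energy QPSK symbols
        ebn0 = 10 ** (ebn0_db / 10)
        n0 = 1 / (2 * ebn0)                            # Es = 1 and Es/N0 = 2 * Eb/N0
        noise = np.sqrt(n0 / 2) * (rng.standard_normal(symbols.size)
                                   + 1j * rng.standard_normal(symbols.size))
        rx = symbols + noise
        rx_bits = np.empty(n_bits, dtype=int)
        rx_bits[0::2] = (rx.real < 0).astype(int)
        rx_bits[1::2] = (rx.imag < 0).astype(int)
        return np.mean(bits != rx_bits)

    print(qpsk_ber_over_awgn())                        # about 2.4e-3 at Eb/N0 = 6 dB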
Abstract:
OBJECTIVE: To develop a Charlson-like comorbidity index based on clinical conditions and weights of the original Charlson comorbidity index. METHODS: Clinical conditions and weights were adapted from the International Classification of Diseases, 10th revision, and applied to a single hospital admission diagnosis. The study included 3,733 patients over 18 years of age who were admitted to a public general hospital in the city of Rio de Janeiro, southeast Brazil, between Jan 2001 and Jan 2003. The index distribution was analyzed by gender, type of admission, blood transfusion, intensive care unit admission, age and length of hospital stay. Two logistic regression models were developed to predict in-hospital mortality, including: a) the aforementioned variables and the risk-adjustment index (full model); and b) the risk-adjustment index and patient age (reduced model). RESULTS: Of all patients analyzed, 22.3% had risk scores >1; the overall in-hospital mortality rate was 4.5%, and 66.0% of the patients who died had scores >1. Except for gender and type of admission, all variables were retained in the logistic regression. The models including the developed risk index had an area under the receiver operating characteristic curve of 0.86 (full model) and 0.76 (reduced model). Each unit increase in the risk score was associated with nearly a 50% increase in the odds of in-hospital death. CONCLUSIONS: The risk index developed was able to effectively discriminate the odds of in-hospital death, which can be useful when limited information is available from hospital databases.
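A minimal sketch of how such an index can be scored from a single admission diagnosis and entered into a logistic model is given below; the ICD-10 prefixes and weights are a small illustrative subset of the original Charlson weights rather than the adapted index of the study, the data are toy values, and scikit-learn is assumed to be available.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Illustrative subset of Charlson-style weights keyed by ICD-10 prefix
    # (placeholder mapping, not the study's adapted index).
    WEIGHTS = {
        "I21": 1,   # acute myocardial infarction
        "I50": 1,   # heart failure
        "E11": 1,   # type 2 diabetes
        "N18": 2,   # chronic kidney disease
        "C78": 6,   # metastatic malignant neoplasm
    }

    def risk_score(admission_icd10):
        """Score a single admission diagnosis by its ICD-10 prefix (0 if unlisted)."""
        return next((w for prefix, w in WEIGHTS.items()
                     if admission_icd10.startswith(prefix)), 0)

    # Reduced model of the abstract: in-hospital death ~ risk index + age (toy data).
    X = np.array([[risk_score(d), a] for d, a in
                  [("I21.0", 72), ("E11.9", 55), ("Z38.0", 24), ("C78.7", 68)]])
    y = np.array([1, 0, 0, 1])
    model = LogisticRegression().fit(X, y)
    # exp(coefficient) on the score column is the odds ratio per unit increase in
    # the index (the quantity behind the ~50% figure; toy data will not reproduce it).
    print(np.exp(model.coef_))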
Abstract:
Crossed classification models are applied in many investigations either taking into consideration the existence of interaction between all factors or, alternatively, excluding all interactions, in which case only the main effects and the error term are considered. In this work we use commutative Jordan algebras in the study of the algebraic structure of these designs, and we use them to obtain similar designs in which only some of the interactions are considered. We finish by presenting the expressions of the variance components estimators.
Abstract:
In the present paper we assess the performance of information-theoretic inspired risk functionals in multilayer perceptrons with reference to the two most popular ones, Mean Square Error and Cross-Entropy. The recently proposed information-theoretic inspired risks are: HS and HR2, respectively the Shannon and quadratic Rényi entropies of the error; ZED, a risk reflecting the error density at zero error; and EXP, a generalized exponential risk able to mimic a wide variety of risk functionals, including the information-theoretic ones. The experiments were carried out with multilayer perceptrons on 35 public real-world datasets, all performed according to the same protocol. The statistical tests applied to the experimental results showed that the ubiquitous mean square error was the least interesting risk functional to be used by multilayer perceptrons; namely, mean square error never achieved a significantly better classification performance than competing risks. Cross-entropy and EXP were the risks found by several tests to be significantly better than their competitors. Counts of significantly better and worse risks have also shown the usefulness of HS and HR2 for some datasets.
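For concreteness, a hedged sketch of three of the risk functionals compared above follows, computed for a batch of targets and network outputs; the Shannon error-entropy (HS) estimate uses a Gaussian Parzen window with an arbitrary bandwidth, and the paper's exact estimators, as well as the ZED and EXP definitions, are not reproduced here.

    import numpy as np

    def mse_risk(y_true, y_pred):
        """Mean square error between targets and network outputs."""
        return np.mean((y_true - y_pred) ** 2)

    def cross_entropy_risk(y_true, y_pred, eps=1e-12):
        """Binary cross-entropy for outputs in (0, 1)."""
        p = np.clip(y_pred, eps, 1 - eps)
        return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

    def shannon_error_entropy(y_true, y_pred, h=0.25):
        """HS-style risk: Shannon entropy of the error e = y_true - y_pred, with the
        density estimated by a Gaussian Parzen window of (arbitrary) bandwidth h."""
        e = y_true - y_pred
        diffs = e[:, None] - e[None, :]
        dens = np.mean(np.exp(-0.5 * (diffs / h) ** 2) / (h * np.sqrt(2 * np.pi)), axis=1)
        return -np.mean(np.log(dens))

    y_true = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
    y_pred = np.array([0.9, 0.2, 0.7, 0.6, 0.1])
    print(mse_risk(y_true, y_pred), cross_entropy_risk(y_true, y_pred),
          shannon_error_entropy(y_true, y_pred))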
Abstract:
Thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the subject of Electrical and Computer Engineering
Abstract:
We determine the optimal combination of a universal benefit, B, and a categorical benefit, C, for an economy in which individuals differ both in their ability to work - modelled as an exogenous zero quantity constraint on labour supply - and, conditional on being able to work, in their productivity at work. C is targeted at those unable to work and is conditioned in two dimensions: ex ante an individual must be unable to work and be awarded the benefit, whilst ex post a recipient must not subsequently work. However, the ex-ante conditionality may be imperfectly enforced due to Type I (false rejection) and Type II (false award) classification errors, whilst, in addition, the ex-post conditionality may be imperfectly enforced. If there are no classification errors - and thus no enforcement issues - it is always optimal to set C>0, whilst B=0 only if the benefit budget is sufficiently small. However, when classification errors occur, B=0 only if there are no Type I errors and the benefit budget is sufficiently small, while the conditions under which C>0 depend on the enforcement of the ex-post conditionality. We consider two discrete alternatives. Under No Enforcement, C>0 only if the test administering C has some discriminatory power. In addition, social welfare is decreasing in the propensity to make each type of error. However, under Full Enforcement, C>0 for all levels of discriminatory power. Furthermore, whilst social welfare is decreasing in the propensity to make Type I errors, there are certain conditions under which it is increasing in the propensity to make Type II errors. This implies that there may be conditions under which it would be welfare enhancing to lower the chosen eligibility threshold, supporting the suggestion by Goodin (1985) to "err on the side of kindness".
Abstract:
Objective: Non-operative management (NOM) of blunt splenic injuries (BSI) is nowadays considered the standard treatment. The study aimed to determine the criteria applied for NOM and to identify risk factors for its failure. Methods: Review of all adult patients with BSI treated at the University Hospital Bern, Switzerland, between 2000 and 2008. Results: There were 206 patients (146 men, 70·9%) with a mean age of 38·2 ± 19·1 years and an Injury Severity Score of 30·9 ± 11·6. The American Association for the Surgery of Trauma classification of the splenic injury was: grade I, n=43 (20·9%); grade II, n=52 (25·2%); grade III, n=60 (29·1%); grade IV, n=42 (20·4%) and grade V, n=9 (4·4%). 47 patients (22·8%) required immediate surgery. Five or more units of red cell transfusions (P<0·001), Glasgow Coma Scale <11 (P=0·009) and age ≥55 years (P=0·038) were associated with primary operative management (OM). 159 patients (77·2%) qualified for NOM, which was successful in 89·9% (143/159). The overall splenic salvage rate was 69·4% (143/206). Multivariate analysis found age ≥40 years to be the only factor independently related to the failure of NOM (P=0·001). Conclusion: Advanced age is associated with an increased failure rate of NOM in patients with BSI.
Abstract:
Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
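A much simplified, schematic sketch of complexity-penalised selection in this spirit follows: the sample is split in two, a finite candidate set stands in for the empirical cover of each class, the second half picks the best candidate per class by empirical risk, and the winner minimises empirical risk plus a generic penalty. Both the bootstrap "cover" and the sqrt(log(size)/n) penalty are placeholders, not the construction or bound of the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 200)
    y = np.sin(3 * x) + 0.3 * rng.standard_normal(200)

    # First half builds the candidate sets, second half measures empirical risk.
    x1, y1, x2, y2 = x[:100], y[:100], x[100:], y[100:]

    def empirical_risk(coefs, xs, ys):
        return np.mean((np.polyval(coefs, xs) - ys) ** 2)

    selected, best_score = None, np.inf
    for degree in range(10):                      # nested model classes F_1, F_2, ...
        # Crude stand-in for an empirical cover: refits on bootstrap resamples.
        cover = []
        for _ in range(20):
            idx = rng.integers(0, len(x1), len(x1))
            cover.append(np.polyfit(x1[idx], y1[idx], degree))
        cand = min(cover, key=lambda c: empirical_risk(c, x2, y2))
        penalty = np.sqrt(np.log(len(cover) + degree + 1) / len(x2))   # generic penalty
        score = empirical_risk(cand, x2, y2) + penalty
        if score < best_score:
            best_score, selected = score, (degree, cand)

    print("selected degree:", selected[0])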
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical {\sc vc} dimension, empirical {\sc vc} entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
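The label-flipping computation of the maximal discrepancy mentioned above can be sketched as follows for binary labels: flipping the labels on one half of the training data turns maximisation of the error difference between the halves into an ordinary empirical risk minimisation, which is approximated here by a depth-limited decision tree (exact ERM over a class is generally intractable, so the tree and the direction of the difference are assumptions of this sketch; scikit-learn is assumed available).

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def maximal_discrepancy(X, y, depth=3, seed=0):
        """Estimate max_f [err(first half) - err(second half)] for labels in {0, 1}:
        minimising empirical risk after flipping the first-half labels maximises
        that difference; a shallow decision tree stands in for exact ERM."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(y))
        half = len(y) // 2
        a, b = idx[:half], idx[half:2 * half]
        y_mod = y.copy()
        y_mod[a] = 1 - y_mod[a]                              # flip labels on the first half
        f = DecisionTreeClassifier(max_depth=depth).fit(X[np.r_[a, b]], y_mod[np.r_[a, b]])
        err_a = np.mean(f.predict(X[a]) != y[a])             # error on original labels, half 1
        err_b = np.mean(f.predict(X[b]) != y[b])             # error on original labels, half 2
        return err_a - err_b                                 # data-based penalty candidate

    rng = np.random.default_rng(1)
    X = rng.standard_normal((400, 5))
    y = (X[:, 0] + 0.5 * rng.standard_normal(400) > 0).astype(int)
    print(maximal_discrepancy(X, y))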
Abstract:
Introduction: Non-operative management (NOM) of blunt splenic injuries in hemodynamically stable patients is nowadays considered the standard treatment. Material and Methods: The aim was to clarify the criteria used for primary operative management (OM) and planned NOM. Furthermore, the study aimed to identify risk factors for failure of NOM. All adult patients with blunt splenic injuries treated from 2000 to 2008 were reviewed and a logistic regression analysis employed. Results: There were 206 patients (146 men, 70.9%). Mean age was 38.2 ± 19.1 years. The mean Injury Severity Score (ISS) was 30.9 ± 11.6. The American Association for the Surgery of Trauma (AAST) classification of the splenic injury was: grade I, n = 43 (20.9%); grade II, n = 52 (25.2%); grade III, n = 60 (29.1%); grade IV, n = 42 (20.4%) and grade V, n = 9 (4.4%). 47 patients (22.8%) required immediate surgery (OM). More than 5 units of red cell transfusions (odds ratio [OR] 13.72, P < 0.001), a Glasgow Coma Scale < 11 (OR 9.88, P = 0.009) and age ≥ 55 years (OR 3.29, P = 0.038) were associated with primary OM. 159 patients (77.2%) qualified for a non-surgical approach (NOM), which was successful in 89.9% (143/159). The overall splenic salvage rate amounted to 69.4% (143/206). Multiple logistic regression analysis found age ≥ 40 years to be the only factor significantly and independently related to the failure of NOM (OR 13.58, P = 0.001). Conclusion: Advanced age is associated with an increased failure rate of NOM in patients with blunt splenic injuries.
Abstract:
BACKGROUND: To compare the prognostic relevance of the Masaoka and Müller-Hermelink classifications. METHODS: We treated 71 patients with thymic tumors at our institution between 1980 and 1997. Complete follow-up was achieved in 69 patients (97%) with a mean follow-up time of 8.3 years (range, 9 months to 17 years). RESULTS: Masaoka stage I was found in 31 patients (44.9%), stage II in 17 (24.6%), stage III in 19 (27.6%), and stage IV in 2 (2.9%). The 10-year overall survival rate was 83.5% for stage I, 100% for stage IIa, 58% for stage IIb, 44% for stage III, and 0% for stage IV. The disease-free survival rates were 100%, 70%, 40%, 38%, and 0%, respectively. Histologic classification according to Müller-Hermelink found medullary tumors in 7 patients (10.1%), mixed in 18 (26.1%), organoid in 14 (20.3%), cortical in 11 (15.9%), well-differentiated thymic carcinoma in 14 (20.3%), and endocrine carcinoma in 5 (7.3%), with 10-year overall survival rates of 100%, 75%, 92%, 87.5%, 30%, and 0%, respectively, and 10-year disease-free survival rates of 100%, 100%, 77%, 75%, 37%, and 0%, respectively. Medullary, mixed, and well-differentiated organoid tumors were correlated with stage I and II, and well-differentiated thymic carcinoma and endocrine carcinoma with stage III and IV (p < 0.001). Multivariate analysis showed age, gender, myasthenia gravis, and postoperative adjuvant therapy not to be significant predictors of overall and disease-free survival after complete resection, whereas the Müller-Hermelink and Masaoka classifications were independent significant predictors for overall (p < 0.05) and disease-free survival (p < 0.004; p < 0.0001). CONCLUSIONS: The consideration of staging and histology in thymic tumors has the potential to improve recurrence prediction and patient selection for combined treatment modalities.
Abstract:
We present a heuristic method for learning error-correcting output code matrices based on a hierarchical partition of the class space that maximizes a discriminative criterion. To achieve this goal, optimal codeword separation is sacrificed in favor of maximum class discrimination in the partitions. The creation of the hierarchical partition set is performed using a binary tree. As a result, a compact matrix with high discrimination power is obtained. Our method is validated using the UCI database and applied to a real problem, the classification of traffic sign images.
Abstract:
A common way to model multiclass classification problems is by means of Error-Correcting Output Codes (ECOCs). Given a multiclass problem, the ECOC technique designs a code word for each class, where each position of the code identifies the membership of the class for a given binary problem. A classification decision is obtained by assigning the label of the class with the closest code. One of the main requirements of the ECOC design is that the base classifier is capable of splitting each subgroup of classes of each binary problem. However, we cannot guarantee that a linear classifier can model convex regions. Furthermore, nonlinear classifiers also fail to manage some types of surfaces. In this paper, we present a novel strategy to model multiclass classification problems using subclass information in the ECOC framework. Complex problems are solved by splitting the original set of classes into subclasses and embedding the binary problems in a problem-dependent ECOC design. Experimental results show that the proposed splitting procedure yields a better performance when the class overlap or the distribution of the training objects conceals the decision boundaries for the base classifier. The results are even more significant when one has a sufficiently large training size.
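For concreteness, the decoding rule shared by the two ECOC abstracts above can be sketched as follows: each class has a codeword of ±1 entries over the binary problems, each base classifier votes on its binary problem, and the label of the class with the closest codeword (Hamming distance here) is returned. The code matrix and classifier outputs below are made up for illustration and are not produced by the hierarchical or subclass designs of the papers.

    import numpy as np

    # Code matrix for 4 classes over 5 binary problems (illustrative only).
    M = np.array([
        [+1, +1, +1, -1, -1],   # class 0
        [+1, -1, -1, +1, -1],   # class 1
        [-1, +1, -1, -1, +1],   # class 2
        [-1, -1, +1, +1, +1],   # class 3
    ])

    def ecoc_decode(binary_outputs, code_matrix):
        """Assign the label of the class whose codeword is closest (in Hamming
        distance) to the vector of signed base-classifier outputs."""
        outputs = np.sign(binary_outputs)
        distances = np.sum(code_matrix != outputs, axis=1)
        return int(np.argmin(distances))

    # One test sample: the five base classifiers produced these signed outputs.
    print(ecoc_decode(np.array([+1, -1, -1, +1, +1]), M))    # closest codeword: class 1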
Abstract:
The Iowa D.O.T. has a classification system designed to rate coarse aggregates as to their skid resistant characteristics. Aggregates have been classified into five functional types, with Type 1 being the most skid resistant. A complete description of the classification system can be found in the Office of Materials Instructional Memorandum T-203. Due to the variability of ledges within any given quarry, the classification of individual ledges becomes necessary. The type of aggregate is then specified for each asphaltic concrete surface course. As various aggregates come into use in a.c. paving, there is a continuing process of evaluating the frictional properties of the pavement surface. It is primarily through an effort of this sort that information on aggregate sources and individual ledges becomes more refined. This study is being conducted to provide the up-to-date information needed to monitor the aggregate classification system.