985 results for JEL classification codes: L15
Abstract:
Error-correcting codes and matroids have been widely used in the study of ordinary secret sharing schemes. In this paper, the connections between codes, matroids, and a special class of secret sharing schemes, namely, multiplicative linear secret sharing schemes (LSSSs), are studied. Such schemes are known to enable multiparty computation protocols secure against general (nonthreshold) adversaries. Two open problems related to the complexity of multiplicative LSSSs are considered in this paper. The first one deals with strongly multiplicative LSSSs. As opposed to the case of multiplicative LSSSs, it is not known whether there is an efficient method to transform an LSSS into a strongly multiplicative LSSS for the same access structure with a polynomial increase of the complexity. A property of strongly multiplicative LSSSs that could be useful in solving this problem is proved. Namely, using a suitable generalization of the well-known Berlekamp–Welch decoder, it is shown that all strongly multiplicative LSSSs enable efficient reconstruction of a shared secret in the presence of malicious faults. The second one is to characterize the access structures of ideal multiplicative LSSSs. Specifically, the open problem considered is whether every self-dual vector space access structure is of this kind. By the aforementioned connection, this is in fact an open problem in matroid theory, since it can be restated in terms of representability of identically self-dual matroids by self-dual codes. A new concept, the flat-partition, is introduced; it provides a useful classification of identically self-dual matroids. Uniform identically self-dual matroids, which are known to be representable by self-dual codes, form one of the classes. It is proved that this property also holds for the family of matroids that, in a natural way, is the next class in the above classification: the identically self-dual bipartite matroids.
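The fault-tolerant reconstruction claim has a familiar concrete instance in the threshold case: Shamir's scheme, where Berlekamp–Welch decoding of the underlying Reed–Solomon codeword recovers the secret despite corrupted shares. Below is a minimal, self-contained sketch of that special case only (not the paper's generalization to arbitrary strongly multiplicative LSSSs); the field modulus, parameters, and function names are chosen purely for the demo.

```python
import random

P = 2_147_483_647  # prime field modulus, an arbitrary choice for this demo

def share(secret, k, n):
    """Split `secret` into n Shamir shares; any k honest shares determine it."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def solve_mod(A, b):
    """Gaussian elimination over GF(P); returns one solution of A x = b."""
    rows, cols = len(A), len(A[0])
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] % P), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], P - 2, P)
        M[r] = [v * inv % P for v in M[r]]
        for i in range(rows):
            if i != r and M[i][c]:
                M[i] = [(u - M[i][c] * v) % P for u, v in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    x = [0] * cols                      # free variables set to zero
    for i, c in enumerate(pivots):
        x[c] = M[i][cols]
    return x

def poly_div(num, den):
    """Exact division of polynomials over GF(P), coefficients low-to-high."""
    num, out = num[:], [0] * (len(num) - len(den) + 1)
    for i in reversed(range(len(out))):
        out[i] = num[i + len(den) - 1] * pow(den[-1], P - 2, P) % P
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - out[i] * d) % P
    return out

def recover(points, k, e):
    """Berlekamp-Welch: recover f(0) from n >= k + 2e points, <= e of them wrong."""
    m = k + e
    # Find Q (deg < m) and monic E (deg e) with Q(x) = y * E(x) at every point:
    #   sum_j q_j x^j  -  y * sum_{j<e} e_j x^j  =  y * x^e   (mod P)
    A = [[pow(x, j, P) for j in range(m)] + [-y * pow(x, j, P) % P for j in range(e)]
         for x, y in points]
    b = [y * pow(x, e, P) % P for x, y in points]
    sol = solve_mod(A, b)
    Q, E = sol[:m], sol[m:] + [1]       # append leading 1: E is monic
    return poly_div(Q, E)[0]            # f = Q / E; the secret is f(0)

secret, k, e = 123_456_789, 3, 2
shares = share(secret, k, k + 2 * e)                  # 7 shares tolerate 2 faults
shares[0] = (shares[0][0], (shares[0][1] + 1) % P)    # adversary tampers with
shares[4] = (shares[4][0], (shares[4][1] + 7) % P)    # two of the shares
assert recover(shares, k, e) == secret
```

The key fact used here is the standard Berlekamp–Welch lemma: any solution (Q, E) of the linear system satisfies Q = f·E exactly, so the division always recovers the shared polynomial as long as at most e shares are corrupted.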
Abstract:
The first AO comprehensive pediatric long-bone fracture classification system has been proposed following a structured path of development and validation with experienced pediatric surgeons. A Web-based multicenter agreement study involving 70 surgeons in 15 clinics and 5 countries was conducted to assess the reliability and accuracy of this classification when used by a wide range of surgeons with various levels of experience. Training was provided at each clinic before the session. Using the Internet, participants could log in at any time and classify 275 supracondylar, radius, and tibia fractures at their own pace. The fracture diagnosis was made following the hierarchy of the classification system using both clinical terminology and codes. Kappa coefficients for the single-surgeon diagnosis of epiphyseal, metaphyseal, or diaphyseal fracture type were 0.66, 0.80, and 0.91, respectively. Median accuracy estimates for each bone and type were all greater than 80%. Surgeons varied greatly in their ability to classify fractures, depending on their experience and specialization. Pediatric training and at least 2 years of experience were associated with significant improvement in reliability and accuracy. Kappa coefficients for diagnosis of specific child patterns were 0.51, 0.63, and 0.48 for epiphyseal, metaphyseal, and diaphyseal fractures, respectively. Identified reasons for coding discrepancies were related to different understandings of terminology and definitions, as well as poor-quality radiographic images. Results supported some minor adjustments in the coding of fracture type and child patterns. This classification system received wide acceptance and support among the surgeons involved. Provided appropriate training was given, the classification system was reliable, especially among surgeons with a minimum of 2 years of clinical experience. We encourage broad-based consultation among international surgeons' societies and the use of this classification system in clinical practice as well as prospectively in clinical studies.
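For readers unfamiliar with the agreement statistic reported throughout, a minimal sketch of Cohen's kappa for two raters follows; the ratings in the example are made up, and the study's coefficients were of course computed from its own data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[lbl] * cb[lbl] for lbl in ca) / n ** 2       # expected by chance
    return (p_o - p_e) / (1 - p_e)

# Two raters classify five fractures (hypothetical labels):
print(cohens_kappa(list("AABBC"), list("AABCC")))             # ~0.71
```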
Abstract:
We present a heuristic method for learning Error-Correcting Output Code (ECOC) matrices based on a hierarchical partition of the class space that maximizes a discriminative criterion. To achieve this goal, optimal codeword separation is sacrificed in favor of maximum class discrimination in the partitions. The hierarchical partition set is created using a binary tree. As a result, a compact matrix with high discriminative power is obtained. Our method is validated on datasets from the UCI repository and applied to a real problem, the classification of traffic sign images.
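A toy sketch of the core construction, under the assumption that each internal node of the binary tree splits a set of classes in two and contributes one column of the code matrix, with classes outside the node's set coded 0 (the splits below are invented, not taken from the paper):

```python
def tree_to_code_matrix(splits, n_classes):
    """Each (left, right) split of a class set becomes one ternary ECOC column."""
    matrix = [[0] * len(splits) for _ in range(n_classes)]
    for col, (left, right) in enumerate(splits):
        for c in left:
            matrix[c][col] = +1
        for c in right:
            matrix[c][col] = -1
    return matrix

# Tree over 4 classes: root splits {0,1} from {2,3}; children split each pair.
for row in tree_to_code_matrix([({0, 1}, {2, 3}), ({0}, {1}), ({2}, {3})], 4):
    print(row)
```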
Abstract:
A common way to model multiclass classification problems is by means of Error-Correcting Output Codes (ECOCs). Given a multiclass problem, the ECOC technique designs a code word for each class, where each position of the code identifies the membership of the class for a given binary problem. A classification decision is obtained by assigning the label of the class with the closest code. One of the main requirements of the ECOC design is that the base classifier is capable of splitting each subgroup of classes in each binary problem. However, we cannot guarantee that a linear classifier can model convex regions. Furthermore, nonlinear classifiers also fail on some types of decision surfaces. In this paper, we present a novel strategy to model multiclass classification problems using subclass information in the ECOC framework. Complex problems are solved by splitting the original set of classes into subclasses and embedding the binary problems in a problem-dependent ECOC design. Experimental results show that the proposed splitting procedure yields a better performance when the class overlap or the distribution of the training objects conceals the decision boundaries from the base classifier. The results are even more significant when the training set is sufficiently large.
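A minimal sketch of the standard ECOC pipeline the paragraph describes, with plain Hamming-distance decoding and an off-the-shelf binary learner (this is the baseline technique, not the paper's subclass-splitting design; the 3x3 code matrix is an arbitrary valid choice for the demo):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Code matrix: row c is the +/-1 codeword of class c; each column defines
# one binary problem.
code = np.array([[+1, +1, -1],
                 [-1, +1, +1],
                 [+1, -1, +1]])

# One binary learner per column, trained on the relabelled problem.
learners = [LogisticRegression(max_iter=1000).fit(X, code[y, col])
            for col in range(code.shape[1])]

# Decode: assign each sample the class whose codeword is nearest (Hamming).
preds = np.column_stack([clf.predict(X) for clf in learners])
dists = (preds[:, None, :] != code[None, :, :]).sum(axis=2)
print("training accuracy:", (dists.argmin(axis=1) == y).mean())
```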
Abstract:
BACKGROUND: Surveillance of multiple congenital anomalies is considered to be more sensitive for the detection of new teratogens than surveillance of all or isolated congenital anomalies. Current literature proposes the manual review of all cases for classification into isolated or multiple congenital anomalies. METHODS: Multiple anomalies were defined as two or more major congenital anomalies, excluding sequences and syndromes. A computer algorithm for classification of major congenital anomaly cases in the EUROCAT database according to International Classification of Diseases, 10th revision (ICD-10) codes was programmed, further developed, and implemented for 1 year's data (2004) from 25 registries. The group of cases classified with potential multiple congenital anomalies was manually reviewed by three geneticists to reach final agreement on classification as "multiple congenital anomaly" cases. RESULTS: A total of 17,733 cases with major congenital anomalies were reported, giving an overall prevalence of major congenital anomalies of 2.17%. The computer algorithm classified 10.5% of all cases as "potentially multiple congenital anomalies". After manual review of these cases, 7% were agreed to have true multiple congenital anomalies. Furthermore, the algorithm classified 15% of all cases as having chromosomal anomalies, 2% as monogenic syndromes, and 76% as isolated congenital anomalies. The proportion of multiple anomalies varies by congenital anomaly subgroup, reaching 35% among cases with bilateral renal agenesis. CONCLUSIONS: The implementation of the EUROCAT computer algorithm is a feasible, efficient, and transparent way to improve classification of congenital anomalies for surveillance and research.
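A toy sketch of the flagging logic described above. The real EUROCAT algorithm works from detailed ICD-10 code lists; the prefixes and the "major anomaly" rule below are illustrative stand-ins only.

```python
# Chromosomal abnormalities occupy Q90-Q99 in ICD-10; the monogenic list here
# is a hypothetical stand-in for the algorithm's real syndrome code lists.
CHROMOSOMAL = tuple(f"Q9{d}" for d in range(10))
MONOGENIC = ("Q87",)

def classify_case(icd10_codes):
    """Toy version of the flagging step: returns the surveillance category."""
    majors = [c for c in icd10_codes if c.startswith("Q")]  # toy 'major anomaly' rule
    if any(c.startswith(CHROMOSOMAL) for c in majors):
        return "chromosomal"
    if any(c.startswith(MONOGENIC) for c in majors):
        return "monogenic syndrome"
    # Two or more major anomalies -> flag for manual review by geneticists.
    return "potential multiple" if len(majors) >= 2 else "isolated"

print(classify_case(["Q21.0", "Q60.1"]))   # -> potential multiple
```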
Abstract:
The World Health Organization (WHO) plans to submit the 11th revision of the International Classification of Diseases (ICD) to the World Health Assembly in 2018. The WHO is working toward a revised classification system with an enhanced ability to capture health concepts in a manner that reflects current scientific evidence and is compatible with contemporary information systems. In this paper, we present recommendations made to the WHO by the ICD revision's Quality and Safety Topic Advisory Group (Q&S TAG) for a new conceptual approach to capturing healthcare-related harms and injuries in ICD-coded data. The Q&S TAG has grouped causes of healthcare-related harm and injuries into four categories that relate to the source of the event: (a) medications and substances, (b) procedures, (c) devices and (d) other aspects of care. Under the proposed multiple-coding approach, one of these sources of harm must be coded as part of a cluster of three codes that depict, respectively, the healthcare activity that was the 'source' of harm, the 'mode or mechanism' of harm, and the consequence of the event (i.e. the injury or harm). Use of this framework depends on the implementation of a new and potentially powerful code-clustering mechanism in ICD-11. This new framework for coding healthcare-related harm has great potential to improve the clinical detail of adverse event descriptions and the overall quality of coded health data.
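The proposed three-part cluster is easy to picture as a record type. A sketch with invented example values follows; actual ICD-11 code identifiers are deliberately omitted, since the revision was still in progress.

```python
from dataclasses import dataclass

@dataclass
class HarmCluster:
    source: str   # (a)-(d): medication/substance, procedure, device, other care
    mode: str     # mode or mechanism of the harm
    harm: str     # the resulting injury or harm

# Hypothetical adverse event expressed as one cluster of three linked codes:
event = HarmCluster(source="medication: anticoagulant",
                    mode="unintentional overdose",
                    harm="gastrointestinal haemorrhage")
print(event)
```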
Abstract:
There are numerous text documents available in electronic form, and more become available every day. Such documents represent a massive amount of information that is easily accessible. Seeking value in this huge collection requires organization; much of the work of organizing documents can be automated through text classification. The accuracy and our understanding of such systems greatly influence their usefulness. In this paper, we seek 1) to advance the understanding of commonly used text classification techniques, and 2) through that understanding, to improve the tools available for text classification. We begin by clarifying the assumptions made in the derivation of Naive Bayes, noting basic properties and proposing ways for its extension and improvement. Next, we investigate the quality of Naive Bayes parameter estimates and their impact on classification. Our analysis leads to a theorem that explains the improvements obtainable in multiclass classification with Naive Bayes using Error-Correcting Output Codes. We use experimental evidence on two commonly used data sets to exhibit an application of the theorem. Finally, we show fundamental flaws in a commonly used feature selection algorithm and develop a statistics-based framework for text feature selection. Greater understanding of Naive Bayes and the properties of text allows us to make better use of it in text classification.
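A minimal sketch of the multinomial Naive Bayes model the paper analyzes, with bag-of-words counts and Laplace smoothing (this illustrates the standard formulation only, not the paper's extensions; the toy corpus is invented):

```python
import numpy as np

def train_nb(X, y, n_classes, alpha=1.0):
    """X: (n_docs, n_words) count matrix. Returns log priors and log likelihoods."""
    log_prior = np.log(np.bincount(y, minlength=n_classes) / len(y))
    log_lik = np.empty((n_classes, X.shape[1]))
    for c in range(n_classes):
        counts = X[y == c].sum(axis=0) + alpha        # Laplace smoothing
        log_lik[c] = np.log(counts / counts.sum())
    return log_prior, log_lik

def predict_nb(X, log_prior, log_lik):
    """Pick the class maximizing log P(class) + sum of word log likelihoods."""
    return (X @ log_lik.T + log_prior).argmax(axis=1)

# Toy corpus: 3 documents over a 4-word vocabulary, 2 classes.
X = np.array([[2, 1, 0, 0], [1, 2, 0, 0], [0, 0, 3, 1]])
y = np.array([0, 0, 1])
print(predict_nb(X, *train_nb(X, y, 2)))              # -> [0 0 1]
```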
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Zones of mixing between shallow groundwaters of different composition were unravelled by two-way regionalized classification, a technique based on correspondence analysis (CA), cluster analysis (ClA) and discriminant analysis (DA), aided by gridding, map-overlay and contouring tools. The shallow groundwaters are from a granitoid plutonite in the Fundão region (central Portugal). Correspondence analysis detected three natural clusters in the working dataset: 1, weathering; 2, domestic effluents; 3, fertilizers. Cluster analysis produced an alternative assignment of the samples to the three clusters. Group memberships obtained by correspondence analysis and by cluster analysis were optimized by discriminant analysis and then gridded, with codes assigned as follows: codes 1, 2 or 3 were used when classification by correspondence analysis and cluster analysis produced the same result; code 0 when the grid node was first assigned to cluster 1 and then to cluster 2, or vice versa (mixing between weathering and effluents); code 4 in the remaining cases (mixing between agriculture and the other influences). Code-3 areas were systematically surrounded by code-4 areas, an observation attributed to hydrodynamic dispersion. Accordingly, the extent of code-4 areas in two orthogonal directions was assumed proportional to the longitudinal and transverse dispersivities of local soils. The results (0.7-16.8 m and 0.4-4.3 m, respectively) are acceptable at the macroscopic scale. The ratios between longitudinal and transverse dispersivities (1.2-11.1) are also in agreement with results obtained in other studies.
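The map-overlay coding rule is compact enough to state as code. A sketch, assuming `ca_cluster` and `cla_cluster` are the DA-optimized memberships (1-3) assigned to a grid node by correspondence analysis and cluster analysis respectively (the function name is invented):

```python
def overlay_code(ca_cluster, cla_cluster):
    """Grid-node code from the CA and ClA memberships (both in {1, 2, 3})."""
    if ca_cluster == cla_cluster:
        return ca_cluster                      # 1, 2 or 3: the two methods agree
    if {ca_cluster, cla_cluster} == {1, 2}:
        return 0                               # mixing: weathering vs effluents
    return 4                                   # mixing involving the fertilizer cluster

assert overlay_code(2, 2) == 2 and overlay_code(1, 2) == 0 and overlay_code(3, 1) == 4
```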
Abstract:
Objective: To demonstrate properties of the International Classification of the External Cause of Injury (ICECI) as a tool for use in injury prevention research. Methods: The Childhood Injury Prevention Study (CHIPS) is a prospective longitudinal follow-up study of a cohort of 871 children aged 5-12 years, with a nested case-crossover component. The ICECI is the latest tool in the International Classification of Diseases (ICD) family and has been designed to improve the precision of coding injury events. The details of all injury events recorded in the study, as well as all measured injury-related exposures, were coded using the ICECI. This paper reports a substudy on the utility and practicability of using the ICECI in the CHIPS to record exposures. Interrater reliability was quantified for a sample of injured participants using the kappa statistic to measure concordance between codes independently assigned by two research staff. Results: There were 767 diaries collected at baseline, with event details recorded for 563 injuries and exposure details for the injury crossover periods. There were no event, location, or activity details that could not be coded using the ICECI. Kappa statistics for concordance between raters within each of the dimensions ranged from 0.31 to 0.93 for the injury events, and were 0.94 and 0.97 for activity and location in the control periods. Discussion: This study represents the first detailed account of the properties of the ICECI revealed by its use in a primary analytic epidemiological study of injury prevention. The results provide considerable support for the ICECI and its further use.
Abstract:
Supported by COMBSTRU Research Training Network HPRN-CT-2002-00278 and the Bulgarian National Science Foundation under Grant MM-1304/03.
Abstract:
Partially supported by the Technical University of Gabrovo under Grant C-801/2008
Abstract:
2000 Mathematics Subject Classification: 94B05, 94B15.