870 results for Classification of sciences
Abstract:
A technology for the classification of electronic documents, based on the perturbation theory of pseudoinverse matrices, is proposed.
Abstract:
This paper deals with the classification of news items in ePaper, a prototype system for a future personalized-newspaper service on a mobile reading device. The ePaper system aggregates news items from various news providers and delivers to each subscribed user (reader) a personalized electronic newspaper, using content-based and collaborative filtering methods. ePaper can also provide users with "standard" (i.e., not personalized) editions of selected newspapers, as well as browsing capabilities in the repository of news items. This paper concentrates on the automatic classification of incoming news using a hierarchical news ontology. Based on this classification on the one hand, and on the users' profiles on the other, the personalization engine of the system is able to deliver a personalized paper to each user's mobile reading device.
Abstract:
A major drawback of artificial neural networks is their black-box character. Rule extraction algorithms are therefore becoming increasingly important for explaining what a trained network has learned. In this paper, we use a method for symbolic knowledge extraction from neural networks once they have been trained to the desired function. The basis of this method is the weights of the trained neural network. The method allows knowledge extraction from neural networks with continuous inputs and outputs, as well as rule extraction. An application example is shown, based on extracting the average load demand of a power plant.
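As a rough illustration of the weight-based idea described in this abstract (a toy sketch, not the paper's actual algorithm), one can prune a trained neuron's small-magnitude weights and read the remaining weight signs as an IF-THEN rule. The input names, weight values, and significance threshold below are invented for illustration:

```python
import math

# Hypothetical weights of a single trained sigmoid neuron.
weights = {"temperature": 2.1, "humidity": -1.8, "hour_of_day": 0.05}
bias = -0.3

def neuron(inputs):
    """Sigmoid neuron output for a dict of input values."""
    z = bias + sum(weights[k] * inputs[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

def extract_rule(threshold=0.5):
    """Keep only inputs whose weight magnitude exceeds the threshold,
    turning the sign of each surviving weight into a rule antecedent."""
    terms = [(k, "high" if w > 0 else "low")
             for k, w in weights.items() if abs(w) >= threshold]
    return "IF " + " AND ".join(f"{k} is {d}" for k, d in terms) + \
           " THEN output is high"

print(extract_rule())  # hour_of_day is pruned: |0.05| < 0.5
```

Real rule-extraction methods are considerably more careful (they account for weight interactions and activation thresholds), but the sketch shows how a symbolic rule can be grounded directly in the weights.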
Abstract:
This article describes and classifies various approaches to solving the global illumination problem. The classification aims to show the similarities between different types of algorithms. We introduce the concept of a Light Manager as a central element and mediator between illumination algorithms in a heterogeneous environment of a graphical system. We present results and an analysis of the implementation of the described ideas.
Abstract:
Short text messages, a.k.a. Microposts (e.g., Tweets), have proven to be an effective channel for revealing information about trends and events, ranging from disasters (e.g., Hurricane Sandy) to violence (e.g., the Egyptian revolution). Being informed about such events as they occur can be extremely important to authorities and emergency professionals, allowing such parties to respond immediately. In this work we study the problem of topic classification (TC) of Microposts, which aims to automatically classify short messages based on the subject(s) discussed in them. Accurate TC of Microposts is a challenging task, however, since the limited number of tokens in a post often implies a lack of sufficient contextual information. To provide contextual information to Microposts, we present and evaluate several graph structures surrounding concepts present in linked knowledge sources (KSs). Traditional TC techniques enrich the content of Microposts with features extracted only from the Micropost content. In contrast, our approach relies on the generation of different weighted semantic meta-graphs extracted from linked KSs. We introduce a new semantic graph, called the category meta-graph. This novel meta-graph provides a finer-grained categorisation of concepts, yielding a set of novel semantic features. Our findings show that such category meta-graph features effectively improve the performance of a Micropost topic classifier. Furthermore, our goal is also to understand which semantic features contribute to the performance of a topic classifier. For this reason we propose an approach for automatically estimating the accuracy loss of a topic classifier on new, unseen Microposts. We introduce and evaluate novel topic similarity measures, which capture the similarity between KS documents and Microposts at a conceptual level, considering the enriched representation of these documents.
Extensive evaluation in the context of Emergency Response (ER) and Violence Detection (VD) revealed that our approach outperforms, by up to 31.4% in terms of F1 measure, previous approaches that use a single KS without linked data or that use Twitter data only. Our main findings indicate that the new category graph contains useful information for TC and achieves results comparable to previously used semantic graphs. Furthermore, our results indicate that the accuracy of a topic classifier can be accurately predicted using the enhanced text representation, outperforming previous approaches based on content-based similarity measures. © 2014 Elsevier B.V. All rights reserved.
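Since the evaluation in this abstract is reported in terms of the F1 measure, here is a minimal reminder of how F1 is computed from a classifier's confusion counts; the counts below are hypothetical, not the paper's results:

```python
def f1_score(tp, fp, fn):
    """F1 is the harmonic mean of precision and recall,
    computed here from raw true-positive / false-positive /
    false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion counts for a Micropost topic classifier:
print(f1_score(tp=80, fp=20, fn=40))  # precision 0.8, recall ~0.667
```

Because F1 is a harmonic mean, it penalizes an imbalance between precision and recall more heavily than a simple average would.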
Abstract:
Ilinka A. Dimitrova, Tsvetelina N. Mladenova - The monoid PTn of all partial transformations on an n-element set under the operation of composition of transformations has been studied in various aspects by a number of authors. A partial transformation α is called order-preserving if x ≤ y implies xα ≤ yα for all x, y in the domain of α. The object of study in the present work is the monoid POn, consisting of all partial order-preserving transformations. Obviously POn is a submonoid of PTn. A complete classification of the maximal subsemigroups of the monoid POn is given. It is proved that there are five distinct types of maximal subsemigroups of the monoid under consideration. The number of all maximal subsemigroups of POn is exactly 2^n + 2n − 2.
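The definitions in this abstract can be checked mechanically for small n. The sketch below (an illustration of the definitions only, not of the classification proof) enumerates all order-preserving partial transformations on {1..n}, verifies that they are closed under composition, and evaluates the stated count 2^n + 2n − 2 of maximal subsemigroups:

```python
from itertools import product

def po_n(n):
    """All order-preserving partial transformations on {1..n},
    each encoded as a tuple of images (None = undefined at that point)."""
    maps = []
    for images in product([None] + list(range(1, n + 1)), repeat=n):
        defined = [y for y in images if y is not None]
        # Domain points are generated in increasing order, so it suffices
        # to check that consecutive defined images are non-decreasing.
        if all(a <= b for a, b in zip(defined, defined[1:])):
            maps.append(images)
    return maps

def compose(f, g, n):
    """(f after g): defined where g is defined and f is defined at g(x)."""
    return tuple(f[g[x] - 1] if g[x] is not None else None for x in range(n))

n = 3
PO = set(po_n(n))
# Closure under composition: PO_n is a monoid (a submonoid of PT_n).
assert all(compose(f, g, n) in PO for f in PO for g in PO)
print(len(PO), "elements; maximal subsemigroups:", 2**n + 2 * n - 2)
```

For n = 3 the stated formula gives 2^3 + 2·3 − 2 = 12 maximal subsemigroups; actually exhibiting them requires the paper's classification into five types.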
Abstract:
The article discusses the digitization practice for retro-converting the mathematical periodicals published by the Institute of Mathematics and Informatics, Bulgarian Academy of Sciences (IMI-BAS), and the resulting benefits for long-term preservation of and open access to these materials.
Abstract:
The thesis presented an overlapping analysis of private law institutions, in response to arguments that law must be separated into discrete categories. The basis of this overlapping approach was the realist perspective, which emphasises the role of facts and outcomes, as opposed to legal principle or doctrine, as the starting point for legal analysis.
Abstract:
Research has found that, as a result of their particularities, different countries have established partly different accounting frameworks. Studies with inductive approaches typically encompass a wide range of regulatory issues, but are based on a limited number of factors only. In the case of Statements of Cash Flows, most studies have so far only examined the existence of rules governing the presentation of the statement, without an in-depth analysis of the details. Therefore, these studies found only relatively minor differences in this field. The author's research shows that many differences exist in the details of national Cash Flow Statement regulations, which makes it possible to classify the countries into groups using hierarchical clustering.
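The hierarchical clustering step described in this abstract can be sketched as follows. This is a toy single-linkage agglomerative clustering over invented binary regulation features (e.g., "is a direct-method statement required?"), not the author's data or exact method:

```python
# Hypothetical countries described by binary answers to four detailed
# cash-flow-statement regulation questions (all values invented).
features = {
    "A": (1, 1, 0, 1),
    "B": (1, 1, 0, 0),
    "C": (0, 0, 1, 1),
    "D": (0, 0, 1, 0),
}

def hamming(u, v):
    """Number of regulation questions on which two countries differ."""
    return sum(a != b for a, b in zip(u, v))

def single_linkage(items):
    """Agglomerative clustering: repeatedly merge the closest pair of
    clusters, where cluster distance = minimum pairwise member distance."""
    clusters = [frozenset([k]) for k in items]
    merges = []
    while len(clusters) > 1:
        best = min(
            ((c1, c2) for i, c1 in enumerate(clusters) for c2 in clusters[i + 1:]),
            key=lambda p: min(hamming(items[a], items[b])
                              for a in p[0] for b in p[1]),
        )
        clusters = [c for c in clusters if c not in best] + [best[0] | best[1]]
        merges.append(best[0] | best[1])
    return merges

print(single_linkage(features))
```

With these invented features, the most similar regulation pairs ({A, B} and {C, D}) merge first, mirroring how similar national frameworks would group before the full dendrogram forms.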
Abstract:
In recent decades the need to evaluate public sector organizations has emerged more and more often, and many new methods have appeared, raising the need for their classification both in practice and in research. Based on the classification attempts found in the literature and the perspectives of the evaluation field, the author proposes a classification framework for the evaluation methods of public sector organizations. The dimensions of the classification include the position of the evaluator, the role of the evaluation, and the method of inquiry. The author illustrates the content of the framework with examples, indicating the applicability of the model in practice. At the same time, the framework can also be useful in determining the focus or scope of research projects.
Abstract:
Press release from Florida International University's Office of Media Relations announcing the appointment of Dr. John Rock as the first dean of academic affairs at Florida International University's College of Medicine.
Abstract:
This dissertation develops a new figure of merit to measure the similarity (or dissimilarity) of Gaussian distributions through a novel concept that relates the Fisher distance to the percentage of data overlap. The derivations are expanded to provide a generalized mathematical platform for determining an optimal separating boundary of Gaussian distributions in multiple dimensions. Real-world data used for implementation and feasibility studies were provided by Beckman-Coulter. Although the data used are flow cytometric in nature, the mathematics is general in its derivation and applies to other types of data as long as their statistical behavior approximates a Gaussian distribution. Because this new figure of merit is heavily based on the statistical nature of the data, a new filtering technique is introduced to accommodate the accumulation process involved with histogram data. When data are accumulated into a frequency histogram, they are inherently smoothed in a linear fashion, since an averaging effect takes place as the histogram is generated. This new filtering scheme addresses data accumulated in the uneven resolution of the channels of the frequency histogram. The qualitative interpretation of flow cytometric data is currently a time-consuming and imprecise method for evaluating histogram data. The proposed method offers a broader spectrum of capabilities in the analysis of histograms, since the figure of merit derived in this dissertation integrates within its mathematics both a measure of similarity and the percentage of overlap between the distributions under analysis.
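For the special case of two equal-variance one-dimensional Gaussians (a much simpler setting than the dissertation's multi-dimensional treatment), the optimal separating boundary and the overlap percentage can be computed in closed form; the distribution parameters below are illustrative:

```python
import math

def norm_cdf(x, mu, sigma):
    """Normal CDF expressed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Two equal-variance 1-D Gaussians: the optimal (equal-density) separating
# boundary lies at the midpoint of the means, and the overlap is the
# probability mass each distribution places on the wrong side of it.
mu1, mu2, sigma = 0.0, 2.0, 1.0
boundary = (mu1 + mu2) / 2.0
overlap = (1.0 - norm_cdf(boundary, mu1, sigma)) + norm_cdf(boundary, mu2, sigma)
print(f"boundary = {boundary}, overlap = {overlap:.4f}")
```

With means two standard deviations apart, the overlap is about 31.7% of the total mass; the general multi-dimensional, unequal-covariance case requires the machinery the dissertation develops.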
Abstract:
Flow cytometry analyzers have become trusted companions due to their ability to perform fast and accurate analyses of human blood. The aim of these analyses is to determine the possible existence of abnormalities in the blood that have been correlated with serious disease states, such as infectious mononucleosis, leukemia, and various cancers. Though these analyzers provide important feedback, improving the accuracy of the results is always desirable, as evidenced by the misclassifications reported by some users of these devices. It is advantageous to provide a pattern-interpretation framework able to deliver better classification than is currently available. Toward this end, the purpose of this dissertation was to establish a feature extraction and pattern classification framework capable of providing improved accuracy for detecting specific hematological abnormalities in flow cytometric blood data. This involved extracting a unique and powerful set of shift-invariant statistical features from the multi-dimensional flow cytometry data and then using these features as inputs to a pattern classification engine composed of an artificial neural network (ANN). The contribution of this method consisted of developing a descriptor matrix that can be used to reliably assess whether a donor's blood pattern exhibits a clinically abnormal level of variant lymphocytes, which are blood cells potentially indicative of disorders such as leukemia and infectious mononucleosis. This study showed that the set of shift-and-rotation-invariant statistical features extracted from the eigensystem of the flow cytometric data pattern performs better than other commonly used features in this type of disease detection, exhibiting an accuracy of 80.7%, a sensitivity of 72.3%, and a specificity of 89.2%. This performance represents a major improvement for this type of hematological classifier, which has historically been plagued by poor performance, with accuracies as low as 60% in some cases. This research ultimately shows that an improved feature space was developed that can deliver improved performance for the detection of variant lymphocytes in human blood, thus providing significant utility in the realm of suspect-flagging algorithms for the detection of blood-related diseases.
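A toy two-dimensional analogue of the eigensystem idea in this abstract (not the dissertation's actual flow-cytometry descriptor): the eigenvalues of a point cloud's covariance matrix are unchanged when the cloud is shifted and rotated, which makes them shift-and-rotation-invariant features. The sample cloud and transform below are invented:

```python
import math

def cov_eigenvalues(points):
    """Eigenvalues of the 2x2 covariance matrix of a 2-D point cloud.
    Mean-centering removes shifts; rotations conjugate the covariance
    matrix and therefore leave its eigenvalues unchanged."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    t, d = sxx + syy, sxx * syy - sxy ** 2  # trace and determinant
    r = math.sqrt(max(t * t / 4 - d, 0.0))
    return (t / 2 + r, t / 2 - r)

def transform(points, angle, dx, dy):
    """Rotate the cloud by `angle` radians, then translate by (dx, dy)."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in points]

cloud = [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]
e1 = cov_eigenvalues(cloud)
e2 = cov_eigenvalues(transform(cloud, 0.7, 5.0, -3.0))
print(e1, e2)  # the two eigenvalue pairs agree up to floating-point error
```

The same principle underlies eigensystem-based descriptors in higher dimensions: geometry-independent spectral quantities survive repositioning of the data pattern, which is what makes them useful classifier inputs.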