938 results for Ontology Visualization


Relevance: 20.00%

Abstract:

Da Costa's conception of being modifies that of Quine to incorporate relativization to non-classical logics. A naturalistic view of this conception is discussed. This view tries to extend to logic some ideas of Maddy's naturalism concerning mathematics.

Relevance: 20.00%

Abstract:

Visual data mining (VDM) tools employ information visualization techniques in order to represent large amounts of high-dimensional data graphically and to involve the user in exploring data at different levels of detail. The users are looking for outliers, patterns and models – in the form of clusters, classes, trends, and relationships – in different categories of data, e.g., financial and business information. The focus of this thesis is the evaluation of multidimensional visualization techniques, especially from the business user’s perspective. We address three research problems. The first problem is the evaluation of projection-based visualizations with respect to their effectiveness in preserving the original distances between data points and the clustering structure of the data. In this respect, we propose the use of existing clustering validity measures. We illustrate their usefulness in evaluating five visualization techniques: Principal Components Analysis (PCA), Sammon’s Mapping, Self-Organizing Map (SOM), Radial Coordinate Visualization and Star Coordinates. The second problem is concerned with evaluating different visualization techniques as to their effectiveness in visual data mining of business data. For this purpose, we propose an inquiry evaluation technique and conduct the evaluation of nine visualization techniques. The visualizations under evaluation are Multiple Line Graphs, Permutation Matrix, Survey Plot, Scatter Plot Matrix, Parallel Coordinates, Treemap, PCA, Sammon’s Mapping and the SOM. The third problem is the evaluation of quality of use of VDM tools. We provide a conceptual framework for evaluating the quality of use of VDM tools and apply it to the evaluation of the SOM. In the evaluation, we use an inquiry technique for which we developed a questionnaire based on the proposed framework. The contributions of the thesis consist of three new evaluation techniques and the results obtained by applying these evaluation techniques.
The thesis provides a systematic approach to the evaluation of various visualization techniques. First, we performed and described the evaluations systematically, highlighting the evaluation activities and their inputs and outputs. Second, we integrated the evaluation studies into the broad framework of usability evaluation. The results of the evaluations are intended to help developers and researchers of visualization systems select appropriate visualization techniques in specific situations. The results also contribute to the understanding of the strengths and limitations of the visualization techniques evaluated, and further to the improvement of these techniques.
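The abstract proposes reusing clustering validity measures to judge how well a projection preserves the structure of the original data. As a minimal, hypothetical sketch (not the thesis's actual method), one simple distance-preservation score is the Pearson correlation between all pairwise distances before and after projection; the toy projection below is simply dropping the last coordinate:

```python
import math
from itertools import combinations

def pairwise_distances(points):
    """Euclidean distance for every unordered pair of points."""
    return [math.dist(a, b) for a, b in combinations(points, 2)]

def distance_preservation(original, projected):
    """Pearson correlation between original and projected pairwise
    distances: a value near 1.0 means the projection preserves the
    original distance structure well."""
    d1, d2 = pairwise_distances(original), pairwise_distances(projected)
    n = len(d1)
    m1, m2 = sum(d1) / n, sum(d2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(d1, d2))
    s1 = math.sqrt(sum((a - m1) ** 2 for a in d1))
    s2 = math.sqrt(sum((b - m2) ** 2 for b in d2))
    return cov / (s1 * s2)

# Toy 3-D data "projected" to 2-D by dropping the last coordinate.
data = [(0, 0, 0), (1, 0, 0), (0, 3, 0), (4, 4, 1)]
proj = [(x, y) for x, y, _ in data]
score = distance_preservation(data, proj)
```

The same score can be computed for any of the five projections named above by substituting their output for `proj`.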

Relevance: 20.00%

Abstract:

Software plays an important role in our society and economy. Software development is an intricate process comprising many different tasks: gathering requirements, designing new solutions that fulfill these requirements, and implementing these designs in a programming language to produce a working system. As a consequence, the development of high-quality software is a core problem in software engineering. This thesis focuses on the validation of software designs. The analysis of designs is of great importance, since errors originating in designs may appear in the final system, and it is considered economical to rectify problems as early in the software development process as possible. Practitioners often create and visualize designs using modeling languages, one of the more popular being the Unified Modeling Language (UML). The analysis of the designs can be done manually, but for large systems mechanisms that automatically analyze the designs are needed. In this thesis, we propose an automatic approach to analyzing UML-based designs using logic reasoners. The approach first translates the UML-based designs into a language understandable by reasoners, in the form of logic facts, and then shows how to use the logic reasoners to infer the logical consequences of these facts. We have implemented the proposed translations in the form of a tool that can be used with any standard-compliant UML modeling tool. Moreover, we validate the proposed approach by automatically analyzing hundreds of UML-based designs, consisting of thousands of model elements, available in an online model repository. The proposed approach is limited in scope, but is fully automatic and does not require any expertise in logic languages from the user. We exemplify the proposed approach with two applications: the validation of domain-specific languages and the validation of web service interfaces.
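The first step the abstract describes, translating UML designs into logic facts, can be sketched in miniature. The predicate names and the input format below are invented for illustration and are not the thesis's actual encoding:

```python
def uml_to_facts(classes, generalizations):
    """Translate a toy UML class model into Prolog-style logic facts.
    `classes` maps each class name to its attribute names;
    `generalizations` lists (subclass, superclass) pairs."""
    facts = []
    for cls, attrs in classes.items():
        facts.append(f"class({cls.lower()}).")
        for attr in attrs:
            facts.append(f"attribute({cls.lower()}, {attr}).")
    for sub, sup in generalizations:
        facts.append(f"generalization({sub.lower()}, {sup.lower()}).")
    return facts

# A two-class model: Employee specializes Person.
model = {"Person": ["name"], "Employee": ["salary"]}
facts = uml_to_facts(model, [("Employee", "Person")])
```

A reasoner fed these facts, plus rules such as inheritance of attributes along `generalization`, could then derive consequences like `attribute(employee, name)` and flag contradictions in the design.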

Relevance: 20.00%

Abstract:

A growing concern for organisations is how they should deal with increasing amounts of collected data. With fierce competition and smaller margins, organisations that are able to fully realize the potential in the data they collect can gain an advantage over their competitors. It is almost impossible to avoid imprecision when processing large amounts of data. Still, many of the available information systems are not capable of handling imprecise data, even though doing so can offer various advantages. Expert knowledge stored as linguistic expressions is a good example of imprecise but valuable data, i.e. data that is hard to pinpoint to a definitive value. There is an obvious concern among organisations about how this problem should be handled; finding new methods for processing and storing imprecise data is therefore a key issue. Additionally, it is equally important to show that tacit knowledge and imprecise data can be used with success, which encourages organisations to analyse their imprecise data. The objective of the research conducted was therefore to explore how fuzzy ontologies could facilitate the exploitation and mobilisation of tacit knowledge and imprecise data in organisational and operational decision making processes. The thesis introduces both practical and theoretical advances on how fuzzy logic, ontologies (fuzzy ontologies) and OWA operators can be utilized for different decision making problems. It is demonstrated how a fuzzy ontology can model tacit knowledge collected from wine connoisseurs. The approach can be generalised and applied to other practically important problems, such as intrusion detection. Additionally, a fuzzy ontology is applied in a novel consensus model for group decision making. By combining the fuzzy ontology with Semantic Web affiliated techniques, novel applications have been designed. These applications show how the mobilisation of knowledge can successfully utilize imprecise data as well.
An important part of decision making processes is undeniably aggregation, which in combination with a fuzzy ontology provides a promising basis for demonstrating the benefits of handling imprecise data. The new aggregation operators defined in the thesis provide new possibilities for handling imprecision and expert opinions. This is demonstrated through both theoretical examples and practical implementations. This thesis shows the benefits of utilizing all the available data one possesses, including imprecise data. By combining the concept of fuzzy ontology with the Semantic Web movement, it aspires to show the corporate world and industry the benefits of embracing fuzzy ontologies and imprecision.
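The OWA (Ordered Weighted Averaging) operators mentioned above have a simple standard form due to Yager: the weights are applied to the arguments after sorting them in descending order. The sketch below illustrates that standard operator only, not the new operators defined in the thesis:

```python
def owa(weights, values):
    """Ordered Weighted Averaging: weights are applied to the values
    sorted in descending order, so weights[0] always weights the
    largest argument regardless of which source supplied it."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("OWA weights must sum to 1")
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Three expert scores for the same alternative. Equal weights recover
# the arithmetic mean; top-heavy weights give an "optimistic" aggregate
# between the max ([1, 0, 0]) and the min ([0, 0, 1]).
scores = [0.9, 0.4, 0.7]
mean_like = owa([1/3, 1/3, 1/3], scores)
optimistic = owa([0.7, 0.2, 0.1], scores)
```

Choosing the weight vector is thus a way of encoding a decision attitude (optimistic, pessimistic, consensus-seeking) independently of which expert produced which score.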

Relevance: 20.00%

Abstract:

This study examines information security as a process (information securing) in terms of what it does, especially beyond its obvious role of protector. It investigates concepts related to ‘ontology of becoming’, and examines what it is that information securing produces. The research is theory driven and draws upon three fields: sociology (especially actor-network theory), philosophy (especially Gilles Deleuze and Félix Guattari’s concept of ‘machine’, ‘territory’ and ‘becoming’, and Michel Serres’s concept of ‘parasite’), and information systems science (the subject of information security). Social engineering (used here in the sense of breaking into systems through non-technical means) and software cracker groups (groups which remove copy protection systems from software) are analysed as examples of breaches of information security. Firstly, the study finds that information securing is always interruptive: every entity (regardless of whether or not it is malicious) that becomes connected to information security is interrupted. Furthermore, every entity changes, becomes different, as it makes a connection with information security (ontology of becoming). Moreover, information security organizes entities into different territories. However, the territories – the insides and outsides of information systems – are ontologically similar; the only difference is in the order of the territories, not in the ontological status of entities that inhabit the territories. In other words, malicious software is ontologically similar to benign software; they both are users in terms of a system. The difference is based on the order of the system and users: who uses the system and what the system is used for. Secondly, the research shows that information security is always external (in the terms of this study it is a ‘parasite’) to the information system that it protects. 
Information securing creates and maintains order while simultaneously disrupting the existing order of the system that it protects. For example, in terms of software itself, the implementation of a copy protection system is an entirely external addition. In fact, this parasitic addition makes software different. Thus, information security disrupts that which it is supposed to defend from disruption. Finally, it is asserted that, in its interruption, information security is a connector that creates passages; it connects users to systems while also creating its own threats. For example, copy protection systems invite crackers and information security policies entice social engineers to use and exploit information security techniques in a novel manner.

Relevance: 20.00%

Abstract:

The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data is rapidly growing, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses this need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially data obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is a method commonly used to make the original incomplete data complete, making it easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Second, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on 8 publicly available microarray data sets. It was observed that missing value imputation is indeed a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods. On most data sets, the simple and fast k-NN imputation was good enough, but there was also a need for more advanced imputation methods, such as Bayesian Principal Component Analysis (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are examples of the outcome of multiple biological experiments, such as those using gene microarray techniques.
Such networks are typically very large and highly connected, so there is a need for fast algorithms that produce visually pleasing layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed. The algorithm uses multilevel optimization within a regular force-directed graph layout algorithm.
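The k-NN imputation used as a baseline above can be sketched in a few lines: a missing entry is replaced by the mean, over the k rows nearest in the observed columns, of the values in the missing column. This is a simplified illustration under the assumption that at least k complete rows exist, not the thesis's implementation:

```python
import math

def knn_impute(rows, k=2):
    """Fill None entries: for each incomplete row, find the k nearest
    complete rows (Euclidean distance over the observed columns) and
    use the mean of their values in each missing column."""
    complete = [r for r in rows if None not in r]
    out = []
    for row in rows:
        if None not in row:
            out.append(list(row))
            continue
        obs = [i for i, v in enumerate(row) if v is not None]
        nearest = sorted(
            complete,
            key=lambda c: math.sqrt(sum((row[i] - c[i]) ** 2 for i in obs)),
        )[:k]
        filled = [
            v if v is not None else sum(c[i] for c in nearest) / k
            for i, v in enumerate(row)
        ]
        out.append(filled)
    return out

# The last "gene" is missing its second expression value; its two
# nearest neighbours in the first column supply the estimate.
data = [[1.0, 2.0], [1.1, 2.2], [5.0, 9.0], [1.05, None]]
imputed = knn_impute(data, k=2)
```

The thesis's refinement, guiding the neighbour choice with curated external biological information, would replace the purely geometric distance above with a biologically informed one.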

Relevance: 20.00%

Abstract:

In this study, biomarkers and transcription factor motifs were identified in order to investigate the etiology and phenotypic severity of Down syndrome. GSE1281, GSE1611, and GSE5390 were downloaded from the Gene Expression Omnibus (GEO). A robust multi-array average (RMA) algorithm was applied to detect differentially expressed genes (DEGs). In order to screen for biological pathways and to interrogate the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway database, the Database for Annotation, Visualization and Integrated Discovery (DAVID) was used to carry out a Gene Ontology (GO) function enrichment for the DEGs. Finally, a transcriptional regulatory network was constructed, and a hypergeometric distribution test was applied to select significantly enriched transcription factor motifs. CBR1, DYRK1A, HMGN1, ITSN1, RCAN1, SON, TMEM50B, and TTC3 were each up-regulated two-fold in Down syndrome samples compared to normal samples; of these, SON and TTC3 were newly reported. CBR1, DYRK1A, HMGN1, ITSN1, RCAN1, SON, TMEM50B, and TTC3 are located on human chromosome 21 (mouse chromosome 16). The DEGs were significantly enriched in macromolecular complex subunit organization and focal adhesion pathways. Eleven significantly enriched transcription factor motifs (PAX5, EGR1, XBP1, SREBP1, OLF1, MZF1, NFY, NFKAPPAB, MYCMAX, NFE2, and RP58) were identified. The DEGs and transcription factor motifs identified in our study provide biomarkers for understanding Down syndrome pathogenesis and progression.
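The hypergeometric test mentioned above asks how surprising it is to see x or more annotated genes in a selection of n genes, given that K of the N genes in the background carry the annotation. A minimal sketch of that tail probability (the numbers below are illustrative, not the study's data):

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, x):
    """P(X >= x) for X ~ Hypergeometric(N, K, n): the chance of drawing
    at least x annotated genes when n genes are sampled without
    replacement from N genes of which K carry the motif/annotation."""
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(x, min(K, n) + 1)
    ) / comb(N, n)

# Hypothetical example: 8 of 50 selected DEGs carry a motif that is
# present in 100 of 2000 background genes (expected count is 2.5).
p = hypergeom_enrichment_p(N=2000, K=100, n=50, x=8)
```

A small p-value marks the motif as enriched; in practice the p-values for all candidate motifs would also be corrected for multiple testing.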

Relevance: 20.00%

Abstract:

Ontology matching is an important task when data from multiple data sources is integrated. Problems of ontology matching have been studied widely in the research literature, and many different solutions and approaches have been proposed, also in commercial software tools. In this survey, well-known approaches to ontology matching, and its subtype schema matching, are reviewed and compared. The aim of this report is to summarize the knowledge about state-of-the-art solutions from the research literature, discuss how the methods work in different application domains, and analyze the pros and cons of different open source and academic tools in the commercial world.
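A hypothetical illustration of the simplest building block found in many of the surveyed matchers: element-level string matching, which pairs concepts from two ontologies by label similarity. Real systems layer structural and semantic matchers on top of this; the ontologies and threshold below are invented:

```python
from difflib import SequenceMatcher

def match_concepts(onto_a, onto_b, threshold=0.7):
    """Greedy element-level matcher: pair each concept label in onto_a
    with the most similar label in onto_b, if the normalized string
    similarity exceeds the threshold."""
    pairs = []
    for a in onto_a:
        best, score = None, threshold
        for b in onto_b:
            s = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if s > score:
                best, score = b, s
        if best is not None:
            pairs.append((a, best, round(score, 2)))
    return pairs

matches = match_concepts(
    ["Author", "Paper", "Conference"],
    ["author", "article", "ConferenceEvent"],
)
```

Note that "Paper" and "article" are synonyms yet share almost no characters, which is exactly why string matching alone is insufficient and the literature adds thesaurus-based and structure-based techniques.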

Relevance: 20.00%

Abstract:

The purpose of this thesis was to develop a program that can illustrate the thermal-hydraulic node dimensions used in SMABRE simulations. The created node illustrations are used to verify the correctness of the designed simulation model, and they can also be included in scientific reports. This thesis includes theory about SMABRE and the relevant programs that were used to achieve the end results. It gives explanations of the different modules that were created and used in the finished program, and it presents the different problems encountered along with their solutions. The most important objective of this thesis is to display the results for generic VVER-1000 node dimensions and to verify the correctness of the displayed part. The finished program was written in the Python programming language.

Relevance: 20.00%

Abstract:

The text examines Sergej Nikolajevič Bulgakov's description of the philosopheme as thoroughly "immanent" (viz., the immanence of man qua being, such that ontology in Bulgakov becomes a conceptual analogue for immanence) and the corollary that such immanence necessarily excludes the problematic of the "creation of the world." Because of this resolute immanence, and the notion that the creation of the world in the form of creatio ex nihilo requires a non-immanent or non-ontological thought and concept, the problematic for Bulgakov is approached only by a theologeme. Appropriating this argument as material for a cursory philosopheme, the text attempts to transform Bulgakov's theologeme into a philosopheme through an elision of God and dogma that overdetermines the theologeme. This philosopheme (nascent within Bulgakov's work itself, in both his hesitation toward the overdetermination of immanence and the commitment to the problem of creation) would be a thoroughly non-ontological philosopheme, one that allows for the treatment of the problematic of "creation" or singular ontogenesis, yet with the corollary that this philosopheme must rely on an "ontological zero". Such a philosopheme qua ontologically empty formula nevertheless remains ontologically significant insofar as it is to evince the limit of ontology, in the ontological zero's non-relationality to ontology.

Relevance: 20.00%

Abstract:

Abstract: Nietzsche's Will-to-Power Ontology: An Interpretation of Beyond Good and Evil § 36. By: Mark Minuk. Will-to-power is the central component of Nietzsche's philosophy, and passage 36 of Beyond Good and Evil is essential to coming to an understanding of it. I argue for and defend the thesis that will-to-power constitutes Nietzsche's ontology, and offer a new understanding of what that means. Nietzsche's ontology can be talked about as though it were a traditional substance ontology (i.e., a world made up of forces; a duality of conflicting forces described as 'towards which' and 'away from which'). However, I argue that what defines this ontology is an understanding of valuation as ontologically fundamental: the basis of interpretation, and that from which a substance ontology emerges. In the second chapter, I explain Nietzsche's ontology, as reflected in this passage, through a discussion of Heidegger's two ontological categories in Being and Time (readiness-to-hand and present-at-hand). In a nutshell, it means that the world of our desires and passions (the most basic of which is for power) is ontologically more fundamental than the material world, or any other interpretation, which is to say, the material world emerges out of a world of our desires and passions. In the first chapter, I address the problematic form of the passage reflected in the first sentence. The passage is in a hypothetical style, makes no claim to positive knowledge or truth, and, superficially, looks like a Schopenhauerian position on the metaphysics of the will, which Nietzsche rejects. I argue that the hypothetical form of the passage is a matter of style, namely, the style of a free-spirit for whom the question of truth is reframed as a question of values. In the third and final chapter, I address the charge that Nietzsche's interpretation is a conscious anthropomorphic projection. I suggest that the charge rests on a distinction (between nature and man) that Nietzsche rejects.
I also address the problem of the causality of the will for Nietzsche, by suggesting that an alternative, perspectival form of causality is possible.

Relevance: 20.00%

Abstract:

In studying affect within the realm of student-teacher relationships, my thesis project uses the concept of “affect” as developed by Baruch Spinoza (1992, 2007). I focus specifically on how Deleuze (1988) interprets and implements the term within his own philosophy, as well as on Antonio Negri’s (2011, 1991) work on Spinoza, including his and Michael Hardt’s (2000, 2004, 2009) more recent works. This thesis explores Spinoza’s affect within the discourse of Affective Pedagogy and Critical Pedagogy while remaining committed to a Spinozist ontology as outlined by Deleuze (1988). I used artefacts from my past experiences as a student and teacher to produce evocative writing pieces which act as affective continuances of my past experiences as a student, student-teacher, and teacher, and of the relationships of affect that composed them. This project used these artefacts and the writings they produced as sites of intensity that are carried through from traces, to evocative thresholds, to concepts, and finally into analysis.

Relevance: 20.00%

Abstract:

Affiliation: Département de Biochimie, Faculté de médecine, Université de Montréal

Relevance: 20.00%

Abstract:

Anticipating the growth of video information in the future, archiving of news is an important activity in the visual media industry. As the volume of archives increases, it will be difficult for journalists to find the appropriate content using current search tools. This paper provides the details of the study we conducted on the news extraction systems used in different news channels in Kerala. Semantic web technologies can be used effectively here, since news archiving shares many of the characteristics and problems of the WWW. Since visual news archives of different media resources follow different metadata standards, interoperability between the resources is also an issue. The World Wide Web Consortium (W3C) has proposed a draft ontology framework for media resources which addresses these interoperability issues. In this paper, the W3C-proposed framework and its drawbacks are also discussed.
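The interoperability problem the paper raises can be illustrated with a toy sketch: archives using different in-house metadata schemas can only be searched together after their fields are mapped onto one shared vocabulary. All archive and field names below are invented for illustration; the actual W3C Ontology for Media Resources defines such mappings normatively:

```python
# Hypothetical field mappings from two archives' in-house metadata
# schemas to a shared vocabulary (loosely in the spirit of the W3C
# media resource ontology; every name here is invented).
MAPPINGS = {
    "archive_a": {"headline": "title", "aired": "date", "reporter": "creator"},
    "archive_b": {"caption": "title", "broadcast_on": "date", "byline": "creator"},
}

def to_common(archive, record):
    """Rename a record's fields into the shared vocabulary, dropping
    any fields the mapping does not cover."""
    mapping = MAPPINGS[archive]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

# The same story, described differently by two archives, becomes
# comparable once both records use the shared field names.
a = to_common("archive_a", {"headline": "Flood in Kochi", "aired": "2011-06-02"})
b = to_common("archive_b", {"caption": "Flood in Kochi", "byline": "A. Nair"})
```

A federated search over both archives can then query the shared `title`, `date`, and `creator` fields without knowing either in-house schema.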