980 results for Meta Data, Semantic Web, Software Maintenance, Software Metrics
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Semantic Web: Software agents on the Semantic Web may use a commonly agreed service language, which enables coordination between agents and proactive delivery of learning materials in the context of actual problems. The vision is that each user has his or her own personalized agent that communicates with other agents.
Semantic web approach for dealing with administrative boundary revisions: a case study of Dhaka City
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
As enterprises grow constantly and the need to share information across departments and business areas becomes more critical, companies are turning to integration to provide a method for interconnecting heterogeneous, distributed and autonomous systems. Whether the sales application needs to interface with the inventory application, or the procurement application needs to connect to an auction site, it seems that any application can be made better by integrating it with other applications. Integration between applications can face several problems because applications may not have been designed and implemented with integration in mind. Regarding integration issues, two-tier software systems, composed of the database tier and the "front-end" tier (interface), have shown some limitations. As a solution to overcome these limitations, three-tier systems were proposed in the literature. By adding a middle tier (referred to as middleware) between the database tier and the "front-end" tier (or simply the application), three main benefits emerge. The first benefit is that dividing software systems into three tiers enables increased integration capabilities with other systems. The second benefit is that modifications to an individual tier may be carried out without necessarily affecting the other tiers and integrated systems, and the third benefit, a consequence of the others, is that fewer maintenance tasks are required in the software system and in all integrated systems. Concerning software development in three tiers, this dissertation focuses on two emerging technologies, the Semantic Web and Service Oriented Architecture, combined with middleware. Blending these two technologies with middleware resulted in the development of the Swoat framework (Service and Semantic Web Oriented ArchiTecture) and leads to the following four synergistic advantages: (1) it allows the creation of loosely coupled systems, decoupling the database from the "front-end" tier and therefore reducing maintenance; (2) the database schema is transparent to "front-end" tiers, which are aware only of the information model (or domain model) that describes what data is accessible; (3) integration with other heterogeneous systems is enabled through services provided by the middleware; (4) a service request by the "front-end" tier focuses on 'what' data is needed and not on 'where' and 'how' it is stored, thereby reducing application development time.
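As a minimal sketch of the 'what, not where/how' idea (all names below are hypothetical and not part of Swoat itself): a middleware layer maps domain-model concepts to storage details so that front-end tiers never see the database schema.

```python
import sqlite3

# Hypothetical concept-to-schema mapping: the front-end only knows
# domain-model concepts ("Customer", "name"), never table or column names.
SCHEMA_MAP = {
    "Customer": {"table": "tbl_cust_v2",
                 "fields": {"name": "cust_nm", "city": "cust_city"}},
}

def request_service(concept, fields, db_path="example.db"):
    """Resolve a domain-level request ('what') into storage details
    ('where'/'how') hidden behind the middleware."""
    mapping = SCHEMA_MAP[concept]
    columns = [mapping["fields"][f] for f in fields]
    sql = f"SELECT {', '.join(columns)} FROM {mapping['table']}"
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(sql).fetchall()
    # Return results keyed by domain-model field names, not column names.
    return [dict(zip(fields, row)) for row in rows]

# Front-end call: asks for 'what' data, unaware of the schema.
# customers = request_service("Customer", ["name", "city"])
```

If the schema changes, only SCHEMA_MAP in the middleware needs updating; the front-end request stays the same, which is the maintenance benefit the abstract describes.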
Abstract:
Background: The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data is available, biologists are faced with the task of extracting (new) knowledge associated with the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single analysis activity. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only the access to heterogeneous data sources but also the definition of transformation rules on exchanged data. Results: We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. We have then defined a number of activities and associated guidelines that prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data. Conclusions: The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and nontrivial processing requirements, thus ensuring accurate data exchange and information interpretation from exchanged data.
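As a rough illustration of the connector idea (not taken from the work itself; the tool names, file layout and field names are invented), the following sketch adapts data produced by one hypothetical tool into the format expected by another, applying a transformation rule on the exchanged values:

```python
import csv
import math

# Hypothetical connector: adapts a tab-separated expression matrix from
# "ToolA" into the record format expected by "ToolB", applying a
# transformation rule (log2 of raw intensities) during the exchange.
def connector_toolA_to_toolB(in_path, out_path):
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src, delimiter="\t")
        writer = csv.DictWriter(dst, fieldnames=["gene", "log2_expression"])
        writer.writeheader()
        for row in reader:
            # Transformation rule: ToolB expects log2-scaled values.
            writer.writerow({
                "gene": row["gene_id"],
                "log2_expression": math.log2(float(row["raw_intensity"])),
            })
```

In an ontology-based setting, the mapping between "gene_id" and "gene" would be justified by both fields referring to the same concept in the reference ontology, rather than being hard-coded by convention.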
Abstract:
Nowadays, when a user is planning a tourist route, it is very difficult to find out which are the best places to visit. The user has to choose according to his/her preferences, given the great quantity of information available on the web, and must make the selection within a short time, because the time available for a trip is limited. In the Itiner@ project, we aim to combine Semantic Web technology with Geographic Information Systems in order to offer personalized tourist routes around a region based on user preferences and the time available. Using ontologies it is possible to link, structure and share data, and to obtain results suited to the user's preferences and actual situation faster and more precisely than without ontologies. To achieve these objectives we propose a web page combining a GIS server and a tourist ontology. As a step further, we also study how to extend this technology to mobile devices, given the rising interest in and technological progress of these devices and location-based services, which allow the user to have all the route information at hand during a tourist trip. We design a small application in order to apply the combination of GIS and the Semantic Web on a mobile device.
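For illustration only (the namespace, data file and properties below are invented, not taken from Itiner@), a preference-driven query over a tourist ontology using the rdflib library might look like this:

```python
from rdflib import Graph

g = Graph()
g.parse("itinera_pois.ttl", format="turtle")  # hypothetical dataset

query = """
PREFIX it: <http://example.org/itinera#>
SELECT ?poi ?name WHERE {
    ?poi a it:PointOfInterest ;
         it:name ?name ;
         it:category it:Museum ;        # user preference
         it:visitMinutes ?minutes .
    FILTER (?minutes <= 90)             # time the user has available
}
"""
for row in g.query(query):
    print(row.poi, row.name)
```

The same query could be issued from a mobile client, with the GIS server then computing a route through the returned points of interest.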
Abstract:
This thesis looks for a correlation between the results obtained through software measurement and the defects found in the software. Already existing software products are used as the test group. The work investigates whether, by using software metrics, it would have been possible to locate the problem areas of the software and thus gain valuable information for software development. Measurement could be used for better allocation of resources in code reviews, code integration, system testing and scheduling; with its help, these tasks would have more information available for targeting resources. Various software products are used as the test group. Common to all of these products are their successive releases: when a new release is made, the previous release is used as the base on top of which new source code is developed. For this reason, software measurement must be able to separate the source code of the previous release from the new source code. The software metrics used in this work are common and widely used in software engineering to measure various properties of source code that are assumed to affect defect-proneness. The purpose of this work is to study the usability of these metrics in the software environments that serve as the test group. The practical part of the work succeeded in finding a correlation between some of the software metrics and defects, while other metrics did not give convincing results. By using software metrics it appears to be possible to identify the defect-prone parts of a program and thus improve the efficiency of software development. The use of software metrics in product development is justifiable, and with their help it could be possible to influence software quality in future releases.
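As an illustration of the kind of analysis described (the sample data below is invented), one can test for a monotonic association between a source-code metric and per-module defect counts with a rank correlation:

```python
from scipy.stats import spearmanr

# Illustrative data: one metric value and one defect count per module.
complexity = [3, 12, 7, 25, 4, 18, 9]   # e.g. cyclomatic complexity
defects    = [0,  4, 1,  9, 0,  5, 2]   # defects reported per module

rho, p_value = spearmanr(complexity, defects)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A high rho with a small p-value would suggest the metric helps
# locate defect-prone modules.
```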
Abstract:
Taking maximum advantage of technological innovations, and of the investment in them, is of key importance for businesses. The IT industry offers a wide range of innovative high-technology solutions for managing information processing and distribution. However, it is challenging for end-user businesses to make informed decisions in this area. The aim of this research is to identify the key differences between the principal solutions and what the selection criteria should be for those involved. Existing methodologies for software development are classified, and some key criteria are described to help IT system developers and users determine the most important factors in system selection, development and deployment. Statistical data is researched and analysed, a theoretical basis is developed and reviewed, and key issues from case studies are identified, generalized and presented along with the conclusions of the current study. The results give a good basis for corporate consideration and provide overall support for the key decisions in developing web-based software. The conclusion is that new web developments should be considered by stakeholders as an evolution of existing business systems, but stakeholders should then pay particular attention to the new advantages that web-based software offers in terms of standardised interfaces and procedures, universal deployment opportunities, and a range of other benefits the study highlights.
Abstract:
Semantic Web Mining aims at combining the two fast-developing research areas Semantic Web and Web Mining. This survey analyzes the convergence of trends from both areas: Growing numbers of researchers work on improving the results of Web Mining by exploiting semantic structures in the Web, and they use Web Mining techniques for building the Semantic Web. Last but not least, these techniques can be used for mining the Semantic Web itself. The second aim of this paper is to use these concepts to circumscribe what Web space is, what it represents and how it can be represented and analyzed. This is used to sketch the role that Semantic Web Mining and the software agents and human agents involved in it can play in the evolution of Web space.
Abstract:
Increasingly, distributed systems are being used to host all manner of applications. While these platforms provide a relatively cheap and effective means of executing applications, so far there has been little work on developing tools and utilities that can help application developers understand problems with the supporting software or the executing applications. To fully understand why an application executing on a distributed system is not behaving as expected, it is important that not only the application, but also the underlying middleware and the operating system, are analysed too; otherwise issues could be missed, and certainly overall performance profiling and fault diagnosis would be harder to understand. We believe that one approach to profiling and analysing distributed systems and the associated applications is via the plethora of log files generated at runtime. In this paper we report on a system (Slogger) that utilises various emerging Semantic Web technologies to gather the heterogeneous log files generated by the various layers in a distributed system and unify them in a common data store. Once unified, the log data can be queried and visualised in order to highlight potential problems or issues that may be occurring in the supporting software or the application itself.
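A rough sketch of the general idea, not Slogger's actual implementation (the vocabulary below is hypothetical): log entries from different layers are lifted into RDF so they can be queried together in one store.

```python
from rdflib import Graph, Literal, Namespace, RDF

LOG = Namespace("http://example.org/log#")  # hypothetical log vocabulary
g = Graph()

def add_entry(entry_id, layer, level, message):
    """Lift one parsed log line into RDF triples in the unified store."""
    entry = LOG[entry_id]
    g.add((entry, RDF.type, LOG.LogEntry))
    g.add((entry, LOG.layer, Literal(layer)))   # e.g. "os", "middleware"
    g.add((entry, LOG.level, Literal(level)))
    g.add((entry, LOG.message, Literal(message)))

add_entry("e1", "os", "WARN", "disk nearly full")
add_entry("e2", "middleware", "ERROR", "broker timeout")

# Query the unified store for problems across all layers at once.
for row in g.query(
    "SELECT ?layer ?msg WHERE { "
    "?e <http://example.org/log#level> 'ERROR' ; "
    "<http://example.org/log#layer> ?layer ; "
    "<http://example.org/log#message> ?msg }"
):
    print(row.layer, row.msg)
```

The payoff of the unification step is exactly this cross-layer query: a single SPARQL query spans operating-system, middleware and application logs without per-format parsing logic at query time.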
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
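For instance, a minimal POS-tagging step using the NLTK library (one real option among many; the tokenizer and tagger models must be downloaded once):

```python
import nltk

# One-time setup, uncomment on first run:
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

tokens = nltk.word_tokenize("Linguistic annotation tools are important assets.")
print(nltk.pos_tag(tokens))
# e.g. [('Linguistic', 'JJ'), ('annotation', 'NN'), ('tools', 'NNS'), ...]
```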
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower, i.e. morphosyntactic, level) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
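As a toy illustration of combining several tools' annotations for a common level in order to correct individual errors (the aligned tag sequences below are invented), a simple majority vote over the outputs:

```python
from collections import Counter

def combine_by_vote(*tag_sequences):
    """Pick, for each token position, the tag most taggers agree on."""
    return [Counter(tags).most_common(1)[0][0] for tags in zip(*tag_sequences)]

tagger_a = ["DET", "NOUN", "VERB"]
tagger_b = ["DET", "NOUN", "NOUN"]   # one disagreement
tagger_c = ["DET", "NOUN", "VERB"]

print(combine_by_vote(tagger_a, tagger_b, tagger_c))
# ['DET', 'NOUN', 'VERB'] -- the outlier tag is outvoted
```

Note that this presupposes exactly what limitation (3) obstructs: the three tag sets must first be mapped onto a common annotation schema before their outputs can be aligned and compared.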
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Recently, the Semantic Web has experienced significant advancements in standards and techniques, as well as in the amount of semantic information available online. Nevertheless, mechanisms are still needed to automatically reconcile information when it is expressed in different natural languages on the Web of Data, in order to improve the access to semantic information across language barriers. In this context several challenges arise [1], such as: (i) ontology translation/localization, (ii) cross-lingual ontology mappings, (iii) representation of multilingual lexical information, and (iv) cross-lingual access and querying of linked data. In the following we will focus on the second challenge, which is the necessity of establishing, representing and storing cross-lingual links among semantic information on the Web. In fact, in a “truly” multilingual Semantic Web, semantic data with lexical representations in one natural language would be mapped to equivalent or related information in other languages, thus making navigation across multilingual information possible for software agents.
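As a small illustration of representing and storing such a cross-lingual link (the choice of skos:exactMatch is just one plausible vocabulary; owl:sameAs is another option), using rdflib:

```python
from rdflib import Graph, URIRef
from rdflib.namespace import SKOS

g = Graph()
# Assert that an English-labelled resource and its Spanish counterpart
# describe the same concept, so agents can cross the language barrier.
g.add((
    URIRef("http://dbpedia.org/resource/Water"),      # English resource
    SKOS.exactMatch,
    URIRef("http://es.dbpedia.org/resource/Agua"),    # Spanish resource
))
print(g.serialize(format="turtle"))
```

Once such links are stored, a query arriving against the English resource can be forwarded to, or federated with, the equivalent Spanish data, which is the navigation-across-languages scenario the abstract envisions.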