994 results for automated semantic integration


Relevance: 20.00%

Abstract:

The objective of this Master's thesis is to identify best practices for IT service management (ITSM) integration. Integration in this context means process integration between an IT organization and an integration partner. The objective is examined from two perspectives: process and technology. The thesis consists of theory, framework, implementation, and analysis parts. The first part introduces common methodology of IT service management and enterprise integration. The second part presents an integration framework for ITSM integration. The third part illustrates how the framework is used, and the last part analyses the framework. The major results of this thesis were the framework architecture, the framework tools, the implementation model, the testing model, and the deployment model for ITSM integration. As a fundamental best practice, the framework is structured into four divisions: architecture, process, data, and technology. This structure provides a baseline for ITSM integration design, implementation, and testing.

Relevance: 20.00%

Abstract:

Frontier and emerging economies have implemented policies aimed at liberalizing their equity markets. Equity market liberalization opens the domestic equity market to foreign investors and also paves the way for domestic investors to invest in foreign equity securities. Among other things, equity market liberalization brings diversification benefits. It also lowers the cost of equity capital, since investors require a lower rate of return, and it allows foreign and local investors to share potential risks. Liberalized equity markets also become more liquid, since there are more investors to trade. Equity market liberalization leads to financial integration, which explains the co-movement of two markets. In crisis periods, increased volatility and co-movement between two markets may result in what is termed contagion effects. In Africa, major moves toward financial liberalization generally started in the late 1980s, with South Africa as the pioneer. Over the years, researchers have studied the impact of financial liberalization on Africa's economic development with diverse results: some positive, others negative, and still others mixed. The objective of this study is to establish whether African stock markets are integrated with the United States (US) and World markets, and to examine whether there are international linkages between the African, US, and World markets. A bivariate VAR-GARCH-BEKK model is employed. The effect of thin trading is removed through a series of econometric data purification steps, because thin trading, also known as non-trading or inconsistency of trading, is a main feature of African markets and may produce inconsistent and biased results. The study confirms the widely established result that the South African and Egyptian stock markets are highly integrated with the US and World markets. Interestingly, the study adds to knowledge in this research area by establishing that the Kenyan market is also highly integrated with the US and World markets and that it both receives and transmits past innovations and shocks to and from these markets.
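The abstract does not spell out the model equations. As a rough illustration only, the conditional covariance recursion of a bivariate BEKK(1,1) specification, the variance part of the VAR-GARCH-BEKK setup named above, can be sketched as follows; the parameter values and series are hypothetical, not estimates from the study.

    import numpy as np

    def bekk_covariances(residuals, C, A, B):
        # Conditional covariances of a bivariate BEKK(1,1) model:
        #   H_t = C'C + A' e_{t-1} e_{t-1}' A + B' H_{t-1} B
        # residuals: (T, 2) array of VAR residuals, e.g. an African market
        # return series paired with the US or World market return series.
        T = residuals.shape[0]
        H = np.empty((T, 2, 2))
        H[0] = np.cov(residuals.T)          # initialise with the sample covariance
        for t in range(1, T):
            e = residuals[t - 1].reshape(2, 1)
            H[t] = C.T @ C + A.T @ (e @ e.T) @ A + B.T @ H[t - 1] @ B
        return H

    # Hypothetical parameter matrices and simulated residuals, for illustration only
    C = np.array([[0.10, 0.00], [0.02, 0.08]])
    A = np.array([[0.30, 0.05], [0.05, 0.25]])
    B = np.array([[0.90, 0.02], [0.02, 0.92]])
    r = np.random.default_rng(0).normal(size=(500, 2)) * 0.01
    print(bekk_covariances(r, C, A, B)[-1])

The off-diagonal elements of H_t trace the time-varying covariance between the two markets, which is what the integration and spillover analysis in such studies is built on.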

Relevance: 20.00%

Abstract:

Torrefaction is one of the pretreatment technologies used to enhance the fuel characteristics of biomass. Efficient and continuous operation of a torrefaction reactor at commercial scale demands a secure biomass supply in addition to an adequate source of heat. Biorefinery plants and biomass-fuelled steam power plants have the potential to be integrated with the torrefaction reactor to exchange heat and mass, using available infrastructure and energy sources. The technical feasibility of this integration is examined in this study. A new model for the torrefaction process is introduced and verified against the available experimental data. The torrefaction model is then integrated with different steam power plants to simulate possible mass and energy exchange between the reactor and the plants. The performance of the integrated plant is investigated for different configurations and the results are compared.
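The thesis's own torrefaction model is not reproduced in the abstract. As a minimal sketch of the kind of mass and energy bookkeeping such an integration study rests on, the standard dry-basis mass-yield and energy-yield relations can be written out as below; the numerical values are hypothetical and chosen only for illustration.

    def torrefaction_yields(m_in_kg, mass_yield, lhv_raw, lhv_torr):
        # Simple dry-basis mass and energy balance over a torrefaction step.
        # mass_yield: fraction of dry solids retained in the torrefied product.
        # Energy yield = mass_yield * LHV_torrefied / LHV_raw; the remaining
        # energy leaves with the torrefaction gases (volatiles), which an
        # integrated steam plant could burn to help cover the reactor's heat demand.
        m_out_kg = m_in_kg * mass_yield
        energy_in = m_in_kg * lhv_raw        # MJ
        energy_out = m_out_kg * lhv_torr     # MJ
        return m_out_kg, energy_out / energy_in, energy_in - energy_out

    # Hypothetical figures: 1000 kg of dry biomass, 70 % mass yield,
    # lower heating values of 17 MJ/kg (raw) and 21 MJ/kg (torrefied).
    solids, energy_yield, volatiles_energy = torrefaction_yields(1000.0, 0.70, 17.0, 21.0)
    print(solids, round(energy_yield, 3), volatiles_energy)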

Relevance: 20.00%

Abstract:

A field trial was conducted with the aim of using allelopathic crop residues to reduce the use of synthetic herbicides in broad bean (Vicia faba) fields. Sunflower residue at 600 and 1,400 g m⁻² and Treflan (trifluralin) at 50, 75 and 100% of the recommended dose were incorporated into the soil, alone or in combination. Untreated plots were maintained as a control. Herbicide application in plots amended with sunflower residue gave the lowest total weed count and biomass, lower even than the herbicide used alone. Combining the recommended dose of Treflan with sunflower residue at 1,400 g m⁻² produced the maximum aboveground biomass of broad bean (987.5 g m⁻²), which was 74% and 36% higher than the control and the recommended herbicide dose applied alone, respectively. The combination of herbicide and sunflower residue also enhanced pod number and yield per unit area more than the herbicide alone. Applying 50% of the Treflan dose in plots amended with sunflower residue gave a yield advantage similar to that of the 100% herbicide dose. Chromatographic analysis of residue-amended field soil indicated the presence of several phytotoxic compounds of a phenolic nature. Periodic data revealed that maximum suppression of weed density and dry weight coincided with the peak phytotoxin levels observed 4 weeks after incorporation of the sunflower residues. Integrating sunflower residues with lower herbicide rates can thus provide effective weed suppression without compromising yield, and is a feasible and environmentally sound approach in broad bean fields.
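A quick back-calculation from the reported percentages (assuming "74% and 36% higher" are relative increases over the respective comparison treatments) gives approximate aboveground biomass figures for those treatments:

\[
B_{\text{control}} \approx \frac{987.5\ \text{g m}^{-2}}{1.74} \approx 567.5\ \text{g m}^{-2},
\qquad
B_{\text{herbicide alone}} \approx \frac{987.5\ \text{g m}^{-2}}{1.36} \approx 726.1\ \text{g m}^{-2}.
\]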

Relevance: 20.00%

Abstract:

A web service is a software system that provides a machine-processable interface to other machines over the network using different Internet protocols. Web services are increasingly used in industry to automate tasks and to offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke the CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource is independent of the previous ones, which facilitates scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer to create dependable and stateful REST web services.

The main contribution of this thesis is a novel model-driven methodology for designing behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on what methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature, continuously evolving tools. We use the UML class diagram and the UML state machine diagram, with additional design constraints, to provide the resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The design models also contain information about the time and domain requirements of the service, which supports requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and in other elements of the software development environment by tracing unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; this is required for authenticating service requests by authorized actors, since not all types of users have access to all resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant.

The second contribution of this thesis is consistency analysis of the behavioral REST interfaces. To overcome inconsistency problems and design errors in our service models, we use semantic technologies. The REST interfaces are represented in the Web Ontology Language (OWL 2), so that they can be part of the semantic web, and are analyzed with OWL 2 reasoners to check for unsatisfiable concepts, which would otherwise result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners.
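The thesis's UML-to-OWL translation tool is not reproduced here. As a rough illustration of the consistency-checking step itself, an OWL 2 ontology can be checked for unsatisfiable classes with an off-the-shelf reasoner, for example through the owlready2 Python bindings; the ontology file name is hypothetical.

    from owlready2 import get_ontology, sync_reasoner, default_world

    # Hypothetical ontology produced by translating a behavioral REST interface
    # (resource classes, methods, pre- and post-conditions) into OWL 2.
    onto = get_ontology("file://rest_interface.owl").load()

    with onto:
        sync_reasoner()    # runs the bundled HermiT reasoner (requires Java)

    # Classes inferred to be equivalent to owl:Nothing are unsatisfiable; in this
    # setting they would correspond to interface states or methods that no correct
    # implementation could ever realise.
    for cls in default_world.inconsistent_classes():
        print("Unsatisfiable concept:", cls)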
The third contribution of this thesis is the verification and validation of REST web services. We use model checking with the UPPAAL model checker for this purpose. Timed automata are generated from the UML-based service design models with our transformation tool and are verified for basic properties such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach: test cases are generated from the UPPAAL timed automata and, using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace unfulfilled service goals back to faults in the design models.

A final contribution of the thesis is the implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The preconditions constrain the client to invoke the stateful REST service under the right conditions, and the postconditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required; we do not target complete automation, because we focus only on the interface aspects of the web service.

The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former presents a simple explanation of the approach, and the latter shows how stateful and timed web services that offer complex scenarios and involve other web services can be constructed using our approach.
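The generated skeletons themselves are not shown in the abstract. A minimal, hand-written sketch of what a stateful REST method guarded by pre- and post-conditions might look like is given below, loosely following the hotel room booking example; the state names, method name and URI are hypothetical, not the thesis's generated code.

    class RoomBookingResource:
        # Sketch of a stateful REST resource whose method carries explicit
        # pre- and post-conditions (assumed lifecycle: AVAILABLE -> BOOKED -> PAID).

        def __init__(self):
            self.state = "AVAILABLE"
            self.guest = None

        def put_booking(self, guest):
            # Precondition: a booking may only be created while the room is available.
            assert self.state == "AVAILABLE", "precondition violated: room not available"
            self.guest = guest
            self.state = "BOOKED"
            # Postcondition: the resource must now be in the BOOKED state.
            assert self.state == "BOOKED", "postcondition violated: booking not recorded"
            return {"status": 201, "location": "/rooms/42/booking"}

    booking = RoomBookingResource()
    print(booking.put_booking("Alice"))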

Relevance: 20.00%

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance: 20.00%

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance: 20.00%

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance: 20.00%

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance: 20.00%

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance: 20.00%

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance: 20.00%

Abstract:

Biomedical natural language processing (BioNLP) is a subfield of natural language processing, an area of computational linguistics concerned with developing programs that work with natural language: written texts and speech. Biomedical relation extraction concerns the detection of semantic relations, such as protein-protein interactions (PPI), from scientific texts. The aim is to enhance information retrieval by detecting relations between concepts, not just individual concepts as in a keyword search.

In recent years, events have been proposed as a more detailed alternative to simple pairwise PPI relations. Events provide a systematic, structural representation for annotating the content of natural language texts. Events are characterized by annotated trigger words, directed and typed arguments, and the ability to nest other events. For example, the sentence "Protein A causes protein B to bind protein C" can be annotated with the nested event structure CAUSE(A, BIND(B, C)). Converted to such formal representations, the information in natural language texts can be used by computational applications. Biomedical event annotations were introduced by the BioInfer and GENIA corpora, and event extraction was popularized by the BioNLP'09 Shared Task on Event Extraction.

In this thesis we present a method for automated event extraction, implemented as the Turku Event Extraction System (TEES). A unified graph format is defined for representing event annotations, and the problem of extracting complex event structures is decomposed into a number of independent classification tasks. These classification tasks are solved using SVM and RLS classifiers, utilizing rich feature representations built from full dependency parsing. Building on earlier work on pairwise relation extraction and using a generalized graph representation, the resulting TEES system is capable of detecting binary relations as well as complex event structures. We show that this event extraction system performs well, reaching first place in the BioNLP'09 Shared Task on Event Extraction. Subsequently, TEES has achieved several first ranks in the BioNLP'11 and BioNLP'13 Shared Tasks, and has shown competitive performance in the binary-relation Drug-Drug Interaction Extraction 2011 and 2013 shared tasks.

The Turku Event Extraction System is published as a freely available open-source project, documenting the research in detail and making the method available for practical applications. In particular, this thesis describes the application of the event extraction method to PubMed-scale text mining, showing that the developed approach not only performs well but is also generalizable and applicable to large-scale, real-world text mining projects. Finally, we discuss related literature, summarize the contributions of the work and present some thoughts on future directions for biomedical event extraction.

This thesis includes and builds on six original research publications. The first of these introduces the analysis of dependency parses that led to the development of TEES. The entries in the three BioNLP Shared Tasks, as well as in the DDIExtraction 2011 task, are covered in four publications, and the sixth demonstrates the application of the system to PubMed-scale text mining.
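The nested event from the example sentence above can be made concrete as a small data structure. The sketch below is only an illustration of trigger words, typed arguments and event nesting; the class and field names are hypothetical and do not reproduce the actual TEES graph format.

    from dataclasses import dataclass, field
    from typing import List, Tuple, Union

    @dataclass
    class Protein:
        name: str

    @dataclass
    class Event:
        event_type: str                  # e.g. "CAUSE", "BIND"
        trigger: str                     # trigger word annotated in the sentence
        # directed, typed arguments: (role, filler); a filler may itself be an event
        arguments: List[Tuple[str, Union["Event", Protein]]] = field(default_factory=list)

    # "Protein A causes protein B to bind protein C"  ->  CAUSE(A, BIND(B, C))
    bind = Event("BIND", trigger="bind",
                 arguments=[("Theme", Protein("B")), ("Theme", Protein("C"))])
    cause = Event("CAUSE", trigger="causes",
                  arguments=[("Cause", Protein("A")), ("Theme", bind)])
    print(cause)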