45 results for Ontology generation


Relevance:

20.00%

Publisher:

Abstract:

Presentation at the Nordic Perspectives on Open Access and Open Science seminar, Helsinki, October 15, 2013

Relevance:

20.00%

Publisher:

Abstract:

Kristiina Hormia-Poutanen's presentation at the CBUC conference in Barcelona, April 12, 2013.

Relevance:

20.00%

Publisher:

Abstract:

New challenges have arisen in the modern work environment, as the workforce is more diverse than ever in terms of generations. Demand for Generation Y employees will grow as baby boomer employees retire at an accelerating rate. The purpose of this study is to investigate Generation Y-specific characteristics and to identify motivational systems that enhance performance. The research questions are: 1. What are Generation Y characteristics? 2. What motivational systems can organizations form to motivate Generation Y employees and, in turn, create better performance? The Generation Y-specific characteristics identified from the literature include being achievement oriented, confident, educated, multitasking, needing feedback, needing management support, sociable, and tech savvy. The proposed motivational systems can be found in four areas of the organization: HRM, training and development, communication, and decision-making policies. Three focus groups were held to investigate what would motivate Generation Y employees to achieve better performance. Two of these focus groups consisted of Finnish natives and the third of international students. The HRM systems included flexibility and a culture of fun. It was concluded that flexibility within the workplace and role was a great source of motivation. A culture of fun was not received as favorably, although most focus group participants rated enjoyableness as one of their top motivating factors. Training and development systems include training programs and mentoring as sources of potential motivation. Training programs were viewed as a way to gain a better position and not necessarily as motivational systems. Mentoring programs were not found to have a significant effect on motivation. Communication systems included keeping up with technology, clarity and goals, and feedback. Keeping up with technology was seen as an ineffective tool for motivation.
Clarity and goal setting were seen as very important for being able to perform, but not necessarily motivating. Feedback had a highly motivating effect on these focus groups. Decision-making policies included collaboration and teamwork as well as ownership. Teams were familiar, met the social needs of Generation Y employees, and were found motivating. Ownership was equated with trust and responsibility and was highly valued as well as motivating to these focus group participants.

Relevance:

20.00%

Publisher:

Abstract:

Software plays an important role in our society and economy. Software development is an intricate process comprising many different tasks: gathering requirements, designing new solutions that fulfill these requirements, and implementing these designs in a programming language as a working system. As a consequence, the development of high-quality software is a core problem in software engineering. This thesis focuses on the validation of software designs. The analysis of designs is of great importance, since errors originating in designs may appear in the final system. It is considered economical to rectify problems as early in the software development process as possible. Practitioners often create and visualize designs using modeling languages, one of the more popular being the Unified Modeling Language (UML). The analysis of designs can be done manually, but for large systems the need arises for mechanisms that analyze these designs automatically. In this thesis, we propose an automatic approach to analyzing UML-based designs using logic reasoners. The approach first translates UML-based designs into a language understandable by reasoners, in the form of logic facts, and then shows how to use logic reasoners to infer the logical consequences of these facts. We have implemented the proposed translations as a tool that can be used with any standard-compliant UML modeling tool. Moreover, we validate the proposed approach by automatically analyzing hundreds of UML-based designs, consisting of thousands of model elements, available in an online model repository. The proposed approach is limited in scope, but it is fully automatic and does not require any expertise in logic languages from the user. We exemplify the approach with two applications: the validation of domain-specific languages and the validation of web service interfaces.
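The abstract above describes translating UML designs into logic facts and letting a reasoner infer their consequences. As a minimal illustrative sketch (not the thesis tool itself; the class names and the simple disjointness check are hypothetical), the fragment below encodes generalizations from a small, deliberately flawed class diagram as facts, computes the transitive closure of the subclass relation, and flags classes that can have no instances:

```python
def transitive_closure(pairs):
    """Least fixed point of the subclass relation under transitivity."""
    closure = set(pairs)
    while True:
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if new <= closure:
            return closure
        closure |= new

def unsatisfiable_classes(subclass_of, disjoint):
    """A class is unsatisfiable if it is (reflexively or transitively)
    a subclass of two classes declared disjoint."""
    classes = {c for pair in subclass_of for c in pair}
    closure = transitive_closure(subclass_of) | {(c, c) for c in classes}
    return {a for a in classes
            for (d1, d2) in disjoint
            if (a, d1) in closure and (a, d2) in closure}

# Logic facts extracted from a small (flawed) UML class diagram.
subclass_of = {("Car", "Vehicle"), ("Bicycle", "Vehicle"), ("Car", "Bicycle")}
disjoint = {("Car", "Bicycle")}

print(unsatisfiable_classes(subclass_of, disjoint))  # {'Car'}
```

A real OWL reasoner performs far richer inference, but the principle is the same: the design becomes facts, and unsatisfiable concepts signal design errors.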

Relevance:

20.00%

Publisher:

Abstract:

A growing concern for organisations is how they should deal with increasing amounts of collected data. With fierce competition and smaller margins, organisations that are able to fully realize the potential in the data they collect can gain an advantage over their competitors. It is almost impossible to avoid imprecision when processing large amounts of data. Still, many of the available information systems are not capable of handling imprecise data, even though handling it can offer various advantages. Expert knowledge stored as linguistic expressions is a good example of imprecise but valuable data, i.e. data that is hard to pinpoint to a definitive value. There is an obvious concern among organisations about how this problem should be handled; finding new methods for processing and storing imprecise data is therefore a key issue. Additionally, it is equally important to show that tacit knowledge and imprecise data can be used with success, which encourages organisations to analyse their imprecise data. The objective of the research conducted was therefore to explore how fuzzy ontologies could facilitate the exploitation and mobilisation of tacit knowledge and imprecise data in organisational and operational decision-making processes. The thesis introduces both practical and theoretical advances in how fuzzy logic, ontologies (fuzzy ontologies) and OWA operators can be utilized for different decision-making problems. It is demonstrated how a fuzzy ontology can model tacit knowledge collected from wine connoisseurs. The approach can be generalised and applied to other practically important problems, such as intrusion detection. Additionally, a fuzzy ontology is applied in a novel consensus model for group decision making. By combining the fuzzy ontology with Semantic Web affiliated techniques, novel applications have been designed. These applications show how the mobilisation of knowledge can successfully utilize imprecise data as well.
An important part of decision-making processes is undeniably aggregation, which in combination with a fuzzy ontology provides a promising basis for demonstrating the benefits one can gain from handling imprecise data. The new aggregation operators defined in the thesis often provide new possibilities to handle imprecision and expert opinions. This is demonstrated through both theoretical examples and practical implementations. This thesis shows the benefits of utilizing all the available data one possesses, including imprecise data. By combining the concept of fuzzy ontology with the Semantic Web movement, it aspires to show the corporate world and industry the benefits of embracing fuzzy ontologies and imprecision.
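The OWA (Ordered Weighted Averaging) operators named in the abstract are standard aggregation operators; a minimal sketch follows, with hypothetical example values (e.g., imprecise expert scores for a wine in [0, 1]). The thesis defines new operators beyond this basic form:

```python
def owa(values, weights):
    """OWA: weights are attached to ranked positions, not to sources,
    so the same operator family spans min, max and the arithmetic mean."""
    if len(values) != len(weights) or abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must match values in length and sum to 1")
    ordered = sorted(values, reverse=True)  # rank arguments, largest first
    return sum(w * v for w, v in zip(weights, ordered))

scores = [0.6, 0.9, 0.3]                   # imprecise expert evaluations
print(owa(scores, [1.0, 0.0, 0.0]))        # 0.9  -> behaves like max
print(owa(scores, [1/3, 1/3, 1/3]))        # ~0.6 -> arithmetic mean
print(owa(scores, [0.5, 0.3, 0.2]))        # ~0.69 -> in between
```

Because the weights apply by rank, a decision maker can tune how optimistic the aggregation is without changing the inputs themselves.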

Relevance:

20.00%

Publisher:

Abstract:

A web service is a software system that provides a machine-processable interface to other machines over the network using different Internet protocols. Web services are increasingly used in industry to automate different tasks and offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke a CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource is independent of the previous ones, facilitating scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests that must be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable, stateful REST web services. The main contribution of this thesis is a novel model-driven methodology to design behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on what methods can be invoked on a service and the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature tools that are continuously evolving.
We have used the UML class diagram and the UML state machine diagram, with additional design constraints, to provide resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which helps in requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and in other elements of the software development environment by tracing the unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; it is required for authenticating service requests by authorized actors, since not all types of users have access to all resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is consistency analysis of the behavioral REST interfaces. To overcome the inconsistency problem and design errors in our service models, we have used semantic technologies. The REST interfaces are represented in the Web Ontology Language, OWL 2, and can thus be part of the Semantic Web. These interfaces are used with OWL 2 reasoners to check for unsatisfiable concepts, which result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners. The third contribution of this thesis is the verification and validation of REST web services. We have used model checking techniques with the UPPAAL model checker for this purpose.
Timed automata are generated from the UML-based service design models with our transformation tool and verified for basic characteristics such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach. Test cases are generated from the UPPAAL timed automata, and using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace unfulfilled service goals back to faults in the design models. A final contribution of the thesis is an implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The preconditions of methods constrain the user to invoke the stateful REST service under the right conditions, and the postconditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former example presents a simple explanation of the approach, and the latter shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
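The generated code skeletons described above guard each method of a stateful REST resource with pre- and post-conditions. A hand-written Python sketch of that idea, using a hypothetical room-booking resource (the thesis produces generated skeletons, not this exact code):

```python
class ContractViolation(Exception):
    """Raised when a method's pre- or post-condition does not hold."""

class RoomBookingResource:
    """Stateful REST resource: a room is either 'available' or 'reserved'."""

    def __init__(self):
        self.state = "available"

    def _require(self, condition, message):
        if not condition:
            raise ContractViolation(message)

    def put_reservation(self):
        # Precondition: callers may only reserve an available room.
        self._require(self.state == "available", "precondition: room not available")
        self.state = "reserved"  # the developer fills in the real logic here
        # Postcondition: the implementation must leave the room reserved.
        self._require(self.state == "reserved", "postcondition: room not reserved")

    def delete_reservation(self):
        # Precondition: only an existing reservation can be cancelled.
        self._require(self.state == "reserved", "precondition: nothing to cancel")
        self.state = "available"
        self._require(self.state == "available", "postcondition: room not released")
```

Invoking `put_reservation` twice in a row raises `ContractViolation`: the precondition enforces the legal sequence of requests, which is exactly how a behavioral interface constrains a stateful service.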

Relevance:

20.00%

Publisher:

Abstract:

Innovations diffuse at different speeds among the members of a social system through various communication channels. The group of early adopters can be seen as the most influential reference group on which the majority of people base their innovation adoption decisions. Thus, early adopters can often accelerate the diffusion of innovations. The purpose of this research is to discover means of diffusing an innovative product in the Finnish market through the influential early adopters, with respect to the characteristics of the case product. The purpose of the research can be achieved through the following sub-objectives: Who are the potential early adopters for the case product, and why? How should the potential early adopters of the case product be communicated with? What are the expectations, preferences, and experiences of the early adopters of the case product? The case product examined in this research is a new board game called Rock Science, which is considered an incremental innovation bringing board gaming and hard rock music together in a new way. The research was conducted in two parts, using both qualitative and quantitative research methods. This mixed-method research began with expert interviews of six music industry experts. The information gathered from the interviews enabled the researcher to compose the questionnaire for the quantitative part of the study. An Internet survey resulted in a sample of 97 responses from the targeted population. The key findings of the study suggest that (1) the potential early adopters for the case product are more likely to be young adults from the capital city area with a great interest in rock music, (2) the early adopters can be reached effectively through credible online sources of information, and (3) respondents' overall product feedback was highly positive, except regarding the quality-price ratio of the product.
This research indicates that more effective diffusion of Rock Science board game in Finland can be reached through (1) strategic alliances with music industry and media partnerships, (2) pricing adjustments, (3) use of supporting game formats, and (4) innovative use of various social media channels.

Relevance:

20.00%

Publisher:

Abstract:

This study examines information security as a process (information securing) in terms of what it does, especially beyond its obvious role of protector. It investigates concepts related to an 'ontology of becoming', and examines what it is that information securing produces. The research is theory driven and draws upon three fields: sociology (especially actor-network theory), philosophy (especially Gilles Deleuze and Félix Guattari's concepts of 'machine', 'territory' and 'becoming', and Michel Serres's concept of the 'parasite'), and information systems science (the subject of information security). Social engineering (used here in the sense of breaking into systems through non-technical means) and software cracker groups (groups which remove copy protection systems from software) are analysed as examples of breaches of information security. Firstly, the study finds that information securing is always interruptive: every entity (regardless of whether or not it is malicious) that becomes connected to information security is interrupted. Furthermore, every entity changes, becomes different, as it makes a connection with information security (ontology of becoming). Moreover, information security organizes entities into different territories. However, the territories, the insides and outsides of information systems, are ontologically similar; the only difference is in the order of the territories, not in the ontological status of the entities that inhabit them. In other words, malicious software is ontologically similar to benign software; both are users of a system. The difference is based on the order of the system and its users: who uses the system and what the system is used for. Secondly, the research shows that information security is always external (in terms of this study, a 'parasite') to the information system that it protects.
Information securing creates and maintains order while simultaneously disrupting the existing order of the system that it protects. For example, in terms of software itself, the implementation of a copy protection system is an entirely external addition. In fact, this parasitic addition makes software different. Thus, information security disrupts that which it is supposed to defend from disruption. Finally, it is asserted that, in its interruption, information security is a connector that creates passages; it connects users to systems while also creating its own threats. For example, copy protection systems invite crackers and information security policies entice social engineers to use and exploit information security techniques in a novel manner.

Relevance:

20.00%

Publisher:

Abstract:

Ontology matching is an important task when data from multiple data sources is integrated. Problems of ontology matching have been studied widely in the research literature, and many different solutions and approaches have been proposed, also in commercial software tools. In this survey, well-known approaches to ontology matching, and to its subtype schema matching, are reviewed and compared. The aim of this report is to summarize the knowledge about state-of-the-art solutions from the research literature, discuss how the methods work in different application domains, and analyze the pros and cons of different open source and academic tools in the commercial world.
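One baseline technique covered in such surveys is element-level, name-based matching: comparing the labels of schema or ontology elements with a string similarity measure. A minimal sketch (the schemas, labels and threshold here are illustrative, not from the survey):

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Normalized string similarity in [0, 1] between two element names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_elements(source, target, threshold=0.7):
    """Greedily pair each source element with its most similar target
    element, keeping only pairs above the similarity threshold."""
    matches = []
    for s in source:
        best = max(target, key=lambda t: name_similarity(s, t))
        score = name_similarity(s, best)
        if score >= threshold:
            matches.append((s, best, round(score, 2)))
    return matches

source_schema = ["authorName", "publicationYear", "isbn"]
target_schema = ["author_name", "year_published", "isbn_13"]
print(match_elements(source_schema, target_schema))
```

Real matchers combine several such measures (linguistic, structural, instance-based) and reconcile their results; the threshold trades precision against recall.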