11 results for user requirements

in Universitätsbibliothek Kassel, Universität Kassel, Germany


Relevance:

30.00%

Publisher:

Abstract:

In recent years, progress in mobile telecommunications has changed our way of life, in the private as well as the business domain. Mobile and wireless networks offer ever increasing bit rates, mobile network operators provide more and more services, and at the same time the costs of mobile services and bit rates are decreasing. However, mobile services today still lack functions that seamlessly integrate into users' everyday life: service attributes such as context-awareness and personalisation are often proprietary, limited or not available at all. To overcome this deficiency, telecommunications companies are heavily engaged in the research and development of service platforms for networks beyond 3G that support such service attributes for the provisioning of innovative mobile services. Service platforms provide basic service-independent functions such as billing, identity management, context management and user profile management. Instead of developing their own solutions, developers of end-user services such as innovative messaging services or location-based services can use these platform-side functions for their own purposes, which removes complexity, development time and development costs from service development. Context-awareness and personalisation are two of the most important aspects of service platforms in telecommunications environments; their combination can be described as situation-dependent personalisation of services. Supporting this feature requires several processing steps. The focus of this doctoral thesis is on the step in which the user's current context is matched against situation-dependent user preferences in order to find those that apply to the user's current situation. This requires a user profile management system and corresponding functionality, which are also covered by this thesis. Altogether, the thesis provides the following contributions. The first part of the contribution is mainly architecture-oriented: a user profile management system that addresses the specific requirements of service platforms in telecommunications environments, in particular situation-specific user preferences and user information for various services. To structure the user information, we also propose a user profile structure and a corresponding user profile ontology as part of an ontology infrastructure in a service platform. The second part of the contribution is the selection mechanism for finding the situation-dependent user preferences that match, provided as a sub-module of the user profile management system. Contrary to existing solutions, our selection mechanism is based on ontology reasoning. This mechanism is evaluated in terms of runtime performance and supported functionality compared to other approaches. The results of the evaluation show the benefits and drawbacks of ontology modelling and ontology reasoning in practical applications.
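
The matching step described here, finding the situation-dependent preferences that apply to the user's current context, can be illustrated with a small sketch. The following C++ fragment is a minimal, attribute-based approximation and deliberately omits the ontology reasoning the thesis actually uses; all type, attribute and setting names are hypothetical.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical types: a context is a set of attribute/value pairs; a
// preference carries a condition (the situation it applies to) and
// the settings to apply when that situation holds.
struct Preference {
    std::map<std::string, std::string> condition; // e.g. {"location","office"}
    std::string settings;                         // e.g. "ringtone=silent"
};

// A preference matches if every condition attribute is present in the
// current context with the same value.
bool matches(const std::map<std::string, std::string>& context,
             const Preference& pref) {
    for (const auto& [attr, value] : pref.condition) {
        auto it = context.find(attr);
        if (it == context.end() || it->second != value) return false;
    }
    return true;
}

// Among all matching preferences, prefer the most specific one, i.e.
// the one whose condition constrains the most attributes.
const Preference* select(const std::map<std::string, std::string>& context,
                         const std::vector<Preference>& prefs) {
    const Preference* best = nullptr;
    for (const auto& p : prefs) {
        if (matches(context, p) &&
            (!best || p.condition.size() > best->condition.size()))
            best = &p;
    }
    return best;
}

int main() {
    std::vector<Preference> prefs = {
        {{{"location", "office"}}, "ringtone=silent"},
        {{{"location", "office"}, {"activity", "meeting"}}, "calls=voicemail"}};
    std::map<std::string, std::string> context = {
        {"location", "office"}, {"activity", "meeting"}};
    if (const Preference* p = select(context, prefs))
        std::cout << p->settings << "\n"; // prints: calls=voicemail
}
```

In the sketch, the more specific preference wins for the meeting situation; the thesis replaces this flat attribute comparison with ontology reasoning over the user profile ontology.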

Relevance:

20.00%

Publisher:

Abstract:

A stand-alone power system is an autonomous system that supplies electricity to the user load without being connected to the electric grid. This kind of decentralized system is frequently located in remote and inaccessible areas. It is essential for about one third of the world's population, who live in developing or isolated regions and have no access to an electricity utility grid. Most of these people live in remote and rural areas with low population density, lacking even basic infrastructure. Extending the utility grid to these locations is not cost effective and sometimes technically not feasible. The purpose of this thesis is the modelling and simulation of a stand-alone hybrid power system, referred to as a "hydrogen Photovoltaic-Fuel Cell (PVFC) hybrid system". It couples a photovoltaic generator (PV), an alkaline water electrolyser, a gas storage tank, a proton exchange membrane fuel cell (PEMFC), and power conditioning units (PCU) to give different system topologies. The system is intended to be an environmentally friendly solution, since it tries to maximise the use of a renewable energy source. Electricity is produced by a PV generator to meet the requirements of a user load. Whenever there is enough solar radiation, the user load can be powered entirely by the PV electricity. During periods of low solar radiation, auxiliary electricity is required. An alkaline high-pressure water electrolyser is powered by the excess energy from the PV generator to produce hydrogen and oxygen at a pressure of up to 30 bar. The gases are stored without compression for short-term (hourly or daily) and long-term (seasonal) use. A proton exchange membrane (PEM) fuel cell keeps the system's reliability at the same level as a conventional system while decreasing the environmental impact of the whole system. The PEM fuel cell consumes the gases produced by the electrolyser to meet the user load demand when the PV generator's energy is insufficient, thus working as an auxiliary generator. Power conditioning units convert and dispatch the energy between the components of the system. No batteries are used in this system, since they represent the weakest component in PV systems due to their need for sophisticated control and their short lifetime. The model library, ISET Alternative Power Library (ISET-APL), was designed by the Institute of Solar Energy Supply Technology (ISET) and is used for the simulation of the hybrid system. The physical, analytical and/or empirical equations of each component are programmed and implemented separately in this library for the simulation software Simplorer in C++. The model parameters are derived from manufacturers' performance data sheets or from measurements reported in the literature. The major hydrogen PVFC hybrid system component models are identified and validated against measured data of the components, from the manufacturer's data sheet or from actual system operation. The overall system is then simulated at intervals of one hour, using solar radiation as the primary energy input and hydrogen as energy storage, for one year of operation. A comparison between different topologies, such as DC- or AC-coupled systems, is carried out from an energy point of view at two locations with different geographical latitudes: Kassel, Germany (Europe) and Cairo, Egypt (North Africa).
The main conclusion of this work is that the simulation method used to study the system under different conditions can successfully visualize and compare the overall performance of these topologies. The operational performance of the system depends not only on component efficiency but also on system design and consumption behaviour. The weak point of this system is the low efficiency of the storage subsystem consisting of the electrolyser, the gas storage tank, and the fuel cell, which is around 25-34% in Cairo and 29-37% in Kassel. Therefore, research on this system should concentrate on the development of the storage subsystem components, especially the fuel cell.
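
The hourly dispatch logic described above (PV covers the load; surplus power drives the electrolyser; deficits are met by the fuel cell from the hydrogen store) can be sketched in a few lines of C++. This is a minimal illustration, not the ISET-APL component models; the efficiency and capacity figures are placeholders, chosen only to be roughly consistent with the 25-37% storage efficiency reported above.

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

// Placeholder component parameters; the thesis derives the real ones
// from manufacturer data sheets and measurements.
const double ETA_ELECTROLYSER = 0.65;  // electric energy -> H2 energy
const double ETA_FUEL_CELL    = 0.50;  // H2 energy -> electric energy
const double TANK_CAP         = 500.0; // tank capacity [kWh of H2]

double tank = 0.0; // currently stored H2 energy [kWh]

// One hourly step: PV covers the load directly; surplus drives the
// electrolyser, a deficit is met by the fuel cell from the tank.
// Returns the load that could not be covered in this hour [kWh].
double step(double pv, double load) {
    double balance = pv - load;
    if (balance >= 0.0) { // surplus -> hydrogen production
        tank = std::min(TANK_CAP, tank + balance * ETA_ELECTROLYSER);
        return 0.0;
    }
    double neededH2 = -balance / ETA_FUEL_CELL; // H2 to cover deficit
    double drawn = std::min(tank, neededH2);
    tank -= drawn;
    return -balance - drawn * ETA_FUEL_CELL;
}

int main() {
    // Toy one-day profiles [kWh per hour]; the thesis simulates a
    // full year of measured solar radiation.
    std::vector<double> pv   = {0, 0, 2, 6, 8, 5, 1, 0};
    std::vector<double> load = {1, 1, 1, 2, 2, 2, 2, 2};
    double unmet = 0.0;
    for (std::size_t h = 0; h < pv.size(); ++h)
        unmet += step(pv[h], load[h]);
    std::cout << "unmet load: " << unmet << " kWh, tank: "
              << tank << " kWh\n";
}
```

With these placeholder values, the electrolyser-tank-fuel-cell path returns 0.65 x 0.50, roughly 33%, of every surplus kilowatt-hour, in line with the storage subsystem efficiencies reported above.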

Relevance:

20.00%

Publisher:

Abstract:

The dissertation deals with the introduction of complex software systems which, consisting of a combination of parameterised standard software paired with custom software components that secure competitive advantages, no longer represent software engineering projects in the classical sense, but instead require a strategy-oriented design of business processes and their implementation in software systems. Of particular importance here is the problem of adequately balancing a TCO-optimising introduction against complete support of the company's critical success factors. The use of integrated standard business software, with its potential for reducing TCO but also the risk of losing unique selling points in the market through tendencies towards standardisation, is an essential problem to be solved in introduction projects in order to avoid suboptimal outcomes. The use of process models that are often oriented towards classical software development projects, or that represent simplified phase models for project management, leads to a lack of situational adequacy in the detailed situations of the sub-projects of a complex introduction project. The generic process model developed in this work for the strategy-oriented and participatory introduction of complex software systems in the business application domain turns a software introduction project into a strategic, competitiveness-strengthening element of the company, owing to its specifically elaborated approaches to a strategy-oriented introduction and development of such systems, and to its situationally adequate procedural strategies within the organisation of sub-projects. The considerations discussed in the dissertation favour an approach that implies a close fusion of the organisational optimisation project with the software implementation process. A key result of the work is the prioritisation of business processes, with the aim of avoiding an organisational suboptimum for processes of high competitive priority while still carrying out the organisational design and system implementation process quickly and with minimal resources. In addition, excluding further processes from the introduction initially leads to a productive system that covers the company in its essential aspects, but which can be extended in later project steps into a system with comprehensive functionality. This opens up possibilities to meet the strategic demands on a modern information system, which must consistently support the critical success factors of a company, and at the same time to carry out a project that conserves resources as far as possible by exploiting the cost reduction aspects of a standard solution. A further essential aspect is situationally adequate model instantiation, i.e. the project-specific adaptation of the process model as well as the situationally adequate choice of procedures in sub-projects, thereby exploiting the advantages of the various procedural strategies in concrete project management. The necessity of developing a project organisation for prototyping-oriented approaches is also taken into account in this context.
The need for companies to stand out in the market with strong differentiation potential on the one hand, and to pursue cost optimisation in the face of constantly shrinking margins on the other, suggests that the developed model will remain successful in the future. In addition, the trend towards best-of-breed approaches and component-based systems in software selection will increasingly demand a highly differentiated approach in projects. The prototyping approaches integrated into the developed model address the need for user integration, which will continue to gain importance.

Relevance:

20.00%

Publisher:

Abstract:

The process of developing software that takes advantage of multiple processors is commonly referred to as parallel programming. For various reasons, this process is much harder than the sequential case. For decades, parallel programming was a problem for a small niche only: engineers parallelizing mostly numerical applications in High Performance Computing. This has changed with the advent of multi-core processors in mainstream computer architectures: parallel programming is now a problem for a much larger group of developers. The main objective of this thesis was to find ways to make parallel programming easier for them. Several aims were identified in order to reach this objective: research the state of the art of parallel programming today, improve the education of software developers on the topic, and provide programmers with powerful abstractions to make their work easier. To reach these aims, several key steps were taken. To start with, a survey was conducted among parallel programmers to establish the state of the art. More than 250 people participated, yielding results about the parallel programming systems and languages in use, as well as about common problems with these systems. Furthermore, a study was conducted in university classes on parallel programming. It resulted in a list of frequently made mistakes that were analyzed and used to create a programmers' checklist for avoiding them in the future. For programmers' education, an online resource was set up to collect experiences and knowledge in the field of parallel programming, called the Parawiki. Another key step in this direction was the creation of the Thinking Parallel weblog, where more than 50,000 readers to date have read essays on the topic. For the third aim (powerful abstractions), it was decided to concentrate on one parallel programming system: OpenMP. Its ease of use and high level of abstraction were the most important reasons for this decision. Two research directions were pursued. The first resulted in a parallel library called AthenaMP. It contains so-called generic components, derived from design patterns for parallel programming. These include functionality to enhance the locks provided by OpenMP, to perform operations on large amounts of data (data-parallel programming), and to enable the implementation of irregular algorithms using task pools. AthenaMP itself serves a triple role: the components are well documented and can be used directly in programs, developers can study the source code and learn from it, and compiler writers can use it as a testing ground for their OpenMP compilers. The second research direction was targeted at changing the OpenMP specification to make the system more powerful. The main contributions here were a proposal to enable thread cancellation and a proposal to avoid busy waiting. Both were implemented in a research compiler, shown to be useful in example applications, and proposed to the OpenMP Language Committee.
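
One of the lock enhancements mentioned above can be illustrated with a common C++ idiom: wrapping OpenMP's raw omp_lock_t in a scoped guard so the lock is always released. This is a hedged sketch of the idea, not code from AthenaMP itself:

```cpp
#include <omp.h>
#include <cstdio>

// A scoped guard around omp_lock_t: the lock is acquired in the
// constructor and released automatically when the guard leaves
// scope, even on an early return.
class ScopedOmpLock {
    omp_lock_t& lock_;
public:
    explicit ScopedOmpLock(omp_lock_t& l) : lock_(l) { omp_set_lock(&lock_); }
    ~ScopedOmpLock() { omp_unset_lock(&lock_); }
    ScopedOmpLock(const ScopedOmpLock&) = delete;
    ScopedOmpLock& operator=(const ScopedOmpLock&) = delete;
};

int main() {
    omp_lock_t lock;
    omp_init_lock(&lock);
    long counter = 0;

    #pragma omp parallel for
    for (int i = 0; i < 100000; ++i) {
        ScopedOmpLock guard(lock);   // acquired here ...
        ++counter;                   // ... protected section ...
    }                                // ... released at end of scope

    omp_destroy_lock(&lock);
    std::printf("counter = %ld\n", counter); // always 100000
    return 0;
}
```

Compiled with an OpenMP-enabled compiler (e.g. g++ -fopenmp), the guard removes a whole class of the frequently made mistakes mentioned above, namely forgetting to release a lock on some exit path.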

Relevance:

20.00%

Publisher:

Abstract:

The principal objective of this paper is to develop a methodology for the formulation of a master plan for renewable-energy-based electricity generation in The Gambia, Africa. Such a master plan aims to develop and promote renewable sources of energy as an alternative to conventional forms of energy for generating electricity in the country. A tailor-made methodology for the preparation of a 20-year renewable energy master plan focussed on electricity generation is proposed, followed and verified throughout the present dissertation as it is applied to The Gambia. The main inputs for the proposed master plan are (i) an energy demand analysis and forecast over 20 years and (ii) a resource assessment for different renewable energy alternatives, including their related power supply options. The energy demand forecast is based on a mix of top-down and bottom-up methodologies, and its results are important data on future requirements for (primary) energy sources. The electricity forecast is separated into projections at sent-out level and at end-user level. On the supply side, solar, wind and biomass are investigated as sources of energy in terms of technical potential and economic benefits for The Gambia. Other criteria, i.e. environmental and social ones, are not considered in the evaluation. Diverse supply options are proposed and technically designed based on the assessed renewable energy potential. This process includes the evaluation of the different available conversion technologies and finishes with the dimensioning of power supply solutions, taking into consideration technologies which are applicable and appropriate under the special conditions of The Gambia. The balance of demand and supply gives a quantitative indication of the substitution potential of renewable energy generation alternatives in primarily fossil-fuel-based electricity generation systems, as well as of the fuel savings due to the deployment of renewable resources. Afterwards, the identified renewable energy supply options are ranked according to the outcomes of an economic analysis. Based on this ranking, and other considerations, a 20-year investment plan, broken down into five-year investment periods, is prepared; it consists of individual renewable energy projects for electricity generation, basically on-grid renewable energy applications. Finally, a priority project from the master plan portfolio is selected for deeper analysis. Since solar PV is the most relevant proposed technology, a PV power plant integrated into the fossil-fuel-powered main electrical system in The Gambia is chosen as the priority project. This project is analysed for economic competitiveness under current conditions, complemented by a sensitivity analysis with regard to future oil and new-technology market conditions.
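
The ranking step can be made concrete with a standard economic metric. The abstract does not specify the exact method of the economic analysis, so the following C++ sketch assumes a levelised cost of electricity (LCOE) comparison over the plan's 20-year horizon; all cost figures and the 8% discount rate are placeholders.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// One candidate supply option with hypothetical cost figures.
struct Option {
    const char* name;
    double capex;         // investment cost [EUR]
    double opexPerYear;   // annual O&M cost [EUR/a]
    double energyPerYear; // annual generation [kWh/a]
};

// Levelised cost of electricity: annualise the investment with the
// capital recovery factor and divide total annual cost by generation.
double lcoe(const Option& o, double rate, int years) {
    double crf = rate * std::pow(1 + rate, years) /
                 (std::pow(1 + rate, years) - 1);
    return (o.capex * crf + o.opexPerYear) / o.energyPerYear;
}

int main() {
    std::vector<Option> options = { // all numbers are placeholders
        {"solar PV plant", 1.2e6, 2.0e4, 1.6e6},
        {"wind park",      1.8e6, 5.0e4, 2.4e6},
        {"biomass plant",  0.9e6, 8.0e4, 1.9e6}};
    // Rank the options from cheapest to most expensive energy.
    std::sort(options.begin(), options.end(),
              [](const Option& a, const Option& b) {
                  return lcoe(a, 0.08, 20) < lcoe(b, 0.08, 20);
              });
    for (const Option& o : options)
        std::printf("%-16s %.3f EUR/kWh\n", o.name, lcoe(o, 0.08, 20));
}
```

The resulting ordering is what a ranked project portfolio would be built from; the dissertation additionally folds in "other considerations" before fixing the five-year investment periods.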

Relevance:

20.00%

Publisher:

Abstract:

The dissertation investigates which global education policy measures are required to enable people who have so far been excluded to acquire the competencies needed to develop a positive user experience in user-generated digital learning environments, so that they can participate in modern world society in a self-determined way. For this purpose, Castells' 'network society' and Csikszentmihalyi's 'theory of optimal experience' were used as analytical foundations for classifying social network activities. Drawing on current learning theories, competence debates, economic analyses of the education system and user experience research, this made it possible to derive individual and societal preconditions for surviving constructively in the network society. With regard to the different socio-cultural conditions for personal flow in the 'space of flows', differentiated flow criteria could then be developed, which served as the basis for operationalisation within a real-time Delphi (RTD) study with an international panel of experts. The aim was to find education policy starting points for offering the hitherto excluded initial framework conditions by the year 2020, so that they can potentially participate in shaping the future network society. The results of the expert survey were reflected upon with reference to current global and educational governance studies and the potential influence of civil society on the digital divide. Against this background, four education policy scenarios were finally drafted that could make it possible to at least somewhat close the gap to the globally excluded by 2020.

Relevance:

20.00%

Publisher:

Abstract:

In Mark Weiser's vision of ubiquitous computing, computers disappear from the focus of the users and seamlessly interact with other computers and users in order to provide information and services. This shift away from direct computer interaction requires another way for applications to interact without bothering the user. Context is the information which can be used to characterize the situation of persons, locations, or other objects relevant to the applications. Context-aware applications are capable of monitoring and exploiting knowledge about external operating conditions. They can adapt their behaviour based on the retrieved information and thus replace (at least to a certain extent) the missing user interactions. Context awareness can therefore be assumed to be an important ingredient for applications in ubiquitous computing environments. However, context management in ubiquitous computing environments must reflect the specific characteristics of these environments, for example distribution, mobility, resource-constrained devices, and the heterogeneity of context sources. Modern mobile devices are equipped with fast processors, sufficient memory, and several sensors, such as a Global Positioning System (GPS) receiver, a light sensor, or an accelerometer. Since many applications in ubiquitous computing environments can exploit context information to enhance their service to the user, these devices are highly useful for context-aware applications in such environments. Additionally, context reasoners and external context providers can be incorporated. Several context sensors, reasoners and context providers may offer the same type of information; however, the providers can differ in the quality level (e.g. accuracy), the representation (e.g. a position represented as coordinates or as an address), and the cost (such as battery consumption) of the offered information. In order to simplify the development of context-aware applications, developers should be able to access context information transparently, without bothering with the underlying context accessing techniques and distribution aspects. They should rather be able to express which kind of information they require, which quality criteria this information should fulfil, and how much the provision of this information may cost (not only monetary cost but also energy or performance usage). For this purpose, application developers as well as developers of context providers need a common language and vocabulary to specify which information they require or provide, respectively. These offers and requests then have to be matched, and it is likely that a transformation of the provided information is needed to fulfil the criteria of the context-aware application. As more than one provider may fulfil the criteria, a selection process is required, in which the system trades off the provided quality of context and the costs of the context provider against the quality of context requested by the context consumer. This selection makes it possible to turn on context sources only when required. Explicitly selecting context services, and thereby dynamically activating and deactivating local context providers, has the advantage that resource consumption is reduced, since unused context sensors in particular are deactivated.
One promising solution is a middleware providing appropriate support based on the principles of service-oriented computing, such as loose coupling, abstraction, reusability, and discoverability of context providers. This allows us to abstract context sensors, context reasoners and also external context providers as context services. In this thesis we present our solution, consisting of a context model and ontology, a context offer and query language, a comprehensive matching and mediation process, and a selection service. Especially the matching and mediation process and the selection service differ from existing work. The matching and mediation process allows the autonomous establishment of mediation processes in order to transfer information from an offered representation into a requested representation. In contrast to other approaches, the selection service selects not just one service per service request; it rather selects a set of services in order to fulfil all requests, which also facilitates the sharing of services. The approach is extensively reviewed with regard to the different requirements, and a set of demonstrators shows its usability in real-world scenarios.
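
The offer/query matching and cost-aware selection described above can be illustrated with a small sketch. The following C++ fragment is a deliberately simplified, single-request version: it omits the ontology-based mediation between representations and the set-oriented selection of the thesis, and all type and attribute names are hypothetical.

```cpp
#include <cstdio>
#include <optional>
#include <string>
#include <vector>

// Hypothetical context offer: what a provider delivers and at what cost.
struct ContextOffer {
    std::string provider;
    std::string type;   // e.g. "position"
    double accuracy;    // quality of context, higher is better
    double cost;        // e.g. battery drain per query
};

// Hypothetical context query: what a consumer requires.
struct ContextQuery {
    std::string type;
    double minAccuracy; // quality criterion to fulfil
};

// Select the matching offer with the lowest cost, so that only the
// selected context source needs to be switched on.
std::optional<ContextOffer> select(const std::vector<ContextOffer>& offers,
                                   const ContextQuery& q) {
    std::optional<ContextOffer> best;
    for (const auto& o : offers) {
        if (o.type != q.type || o.accuracy < q.minAccuracy) continue;
        if (!best || o.cost < best->cost) best = o;
    }
    return best;
}

int main() {
    std::vector<ContextOffer> offers = {
        {"gps",  "position", 0.95, 8.0},  // accurate but power-hungry
        {"wifi", "position", 0.70, 1.5}}; // coarse but cheap
    ContextQuery q{"position", 0.60};
    if (auto o = select(offers, q))
        std::printf("use %s (cost %.1f)\n", o->provider.c_str(), o->cost);
}
```

In the sketch, the cheap Wi-Fi-based provider is chosen because it already satisfies the accuracy criterion, so the power-hungry GPS sensor can stay switched off; the actual selection service generalises this to sets of queries and shared providers.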

Relevance:

20.00%

Publisher:

Abstract:

Cassava (Manihot esculenta) is one of the most important export crops in Thailand, yet its nitrogen requirement is unknown and not considered by growers and producers. Cassava requirements for N were determined in field experiments over a period of four years at four sites on the Satuk (Suk), Don Chedi (Dc), Pak Chong (Pc), and Ban Beung (BBg) soil series at the Lopburi, Supanburi, Nakhon Ratchasima, and Chonburi sites, respectively. The fertilizer treatment structure comprised 0, 62.5, 125, 187.5, 250 and 312.5 kg N ha^(-1) as urea. At each site cassava was harvested at nine months, and yield parameters and the minimum datasets were recorded. The fertilizer rate which resulted in maximum yield ranged from 187.5 kg N ha^(-1) in Supanburi and Chonburi (fresh weight yields of 47,500 and 30,000 kg ha^(-1), respectively) to 250 kg N ha^(-1) in Lopburi and Nakhon Ratchasima (fresh weight yields of 64,100 and 46,700 kg ha^(-1), respectively). Yield appeared to decrease at the higher fertilizer N rates of 312.5 kg ha^(-1) at Supanburi and Lopburi and 250 kg ha^(-1) at Chonburi. Net revenue at the Lopburi and Nakhon Ratchasima sites was 70.4 and 72.9% higher than where no N was applied. Net revenue at the Supanburi and Chonburi sites was 53.8 and 211.0% higher than where no N was applied. This study suggests that at all sites improved cassava production and net revenue could be obtained with the judicious application of higher quantities of N. The results provide needed guidance on nitrogen fertilization of cassava, an important industrial crop in Thailand.

Relevance:

20.00%

Publisher:

Abstract:

With the help of context prediction, services within a ubiquitous environment can, for example, be proactively adapted to the needs of their users. For this reason, context prediction has a significant role within ubiquitous computing. To the best of our knowledge, current approaches to context prediction use only the context history of the user whose contexts are to be predicted as their data basis. If a user unexpectedly changes his or her usual behaviour, the user's context history contains no suitable information to guarantee a reliable context prediction. As a consequence, prediction approaches that rely solely on the context history of the user whose contexts are to be predicted may fail. To close the gap of missing context information in the user's context history, we introduce the collaborative context prediction (CCP) approach. CCP exploits the direct and indirect relations that may exist between the context histories of different users. It is based on higher-order singular value decomposition, which has already been applied successfully in recommender systems. To assess the prediction accuracy of the CCP approach, it is evaluated in three different experiments. The achieved prediction accuracies are compared with those of three well-known context prediction approaches: the Alignment approach, the StatePredictor, and the ActiveLeZi prediction approach. All three experiments use collaborative data sets as their evaluation basis. Subsequently, the CCP approach is applied to a real collaborative use case, the proactive protection of pedestrians: using collaborative context prediction, pedestrians who potentially run the risk of colliding with an approaching car are detected early. Real movement contexts of the pedestrians, collected with smartphones carried in their trouser pockets, serve as the collaborative data basis. Because context prediction approaches primarily use personal contexts, such as location data or behaviour patterns of the users, as the data basis for prediction, legal evaluation criteria are derived from the user's right to informational self-determination. Based on these criteria, the CCP approach and other well-known context prediction approaches are examined with respect to their legal compatibility. The evaluation results show the legal compatibility of the examined prediction approaches with regard to the user's right to informational self-determination. Finally, the dissertation presents an approach for the distributed and collaborative prediction of contexts, which shows a way to counteract the legal problems identified in context prediction, and especially in collaborative context prediction.
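
To convey the core idea of consulting other users' context histories, here is a deliberately simplified C++ sketch. It replaces the HOSVD-based CCP mechanism with a plain frequency vote over the successor contexts observed in collaborative histories, so it illustrates only the intuition, not the actual approach; all context labels are made up.

```cpp
#include <cstddef>
#include <cstdio>
#include <map>
#include <string>
#include <vector>

using History = std::vector<std::string>;

// Predict the next context of a user from the histories of *other*
// users: find every occurrence of the user's current context in the
// collaborative histories and vote over the successor contexts.
std::string predictNext(const std::string& current,
                        const std::vector<History>& otherHistories) {
    std::map<std::string, int> votes;
    for (const History& h : otherHistories)
        for (std::size_t i = 0; i + 1 < h.size(); ++i)
            if (h[i] == current) ++votes[h[i + 1]];
    std::string best = "unknown";
    int bestCount = 0;
    for (const auto& [ctx, n] : votes)
        if (n > bestCount) { best = ctx; bestCount = n; }
    return best;
}

int main() {
    std::vector<History> others = {
        {"home", "sidewalk", "crossing", "bus"},
        {"office", "sidewalk", "crossing", "tram"},
        {"home", "sidewalk", "park"}};
    // The user's own history gives no clue about what follows
    // "sidewalk", so the collaborative histories are consulted.
    std::printf("predicted: %s\n", predictNext("sidewalk", others).c_str());
}
```

Here "crossing" wins the vote; the HOSVD used by CCP additionally captures the indirect relations between users that such a direct lookup cannot exploit.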

Relevance:

20.00%

Publisher:

Abstract:

Self-adaptive software provides a profound solution for adapting applications to changing contexts in dynamic and heterogeneous environments. Having emerged from Autonomic Computing, it incorporates fully autonomous decision making based on predefined structural and behavioural models. The most common approach to architectural runtime adaptation is the MAPE-K adaptation loop, implementing an external adaptation manager without manual user control. However, it has turned out that adaptation behaviour lacks acceptance if it does not correspond to a user's expectations, particularly in Ubiquitous Computing scenarios with user interaction. Adaptations can be irritating and distracting if they are not appropriate for a certain situation. In general, uncertainty during development and at run-time causes problems when users are outside the adaptation loop. In a literature study, we analyse publications on self-adaptive software research. The results show a discrepancy between the motivated application domains, the maturity of examples, and the quality of evaluations on the one hand and the provided solutions on the other. Only a few publications analysed the impact of their work on the user, but many employ user-oriented examples for motivation and demonstration. To incorporate the user within the adaptation loop and to deal with uncertainty, our proposed solutions enable user participation for interactive self-adaptive software while at the same time maintaining the benefits of intelligent autonomous behaviour. We define three dimensions of user participation, namely temporal, behavioural, and structural user participation. This dissertation contributes solutions for user participation in the temporal and behavioural dimensions. The temporal dimension addresses the moment of adaptation, which is classically determined by the self-adaptive system. We provide mechanisms allowing users to influence or to define the moment of adaptation. With our solution, users can have full control over the moment of adaptation, or the self-adaptive software considers the user's situation more appropriately. The behavioural dimension addresses the actual adaptation logic and the resulting run-time behaviour. Application behaviour is established during development and does not necessarily match run-time expectations. Our contributions are three distinct solutions which allow users to make changes to the application's runtime behaviour: dynamic utility functions, fuzzy-based reasoning, and learning-based reasoning. The foundation of our work is a notification and feedback solution that improves the intelligibility and controllability of self-adaptive applications by implementing bi-directional communication between the self-adaptive software and the user. The different mechanisms from the temporal and behavioural participation dimensions require the notification and feedback solution to inform users about adaptation actions and to provide a mechanism for influencing adaptations. Case studies show the feasibility of the developed solutions. Moreover, an extensive user study with 62 participants was conducted to evaluate the impact of notifications before and after adaptations. Although the study revealed that there is no preference for a particular notification design, participants clearly appreciated intelligibility and controllability over autonomous adaptations.
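
One plausible reading of the dynamic utility functions mentioned above is a utility whose weights can be shifted by user feedback at run time instead of being fixed at development time. The following C++ sketch is an illustration under that assumption; the quality dimensions, weights and option names are all hypothetical.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// One adaptation option scored on two hypothetical quality dimensions.
struct Option {
    const char* name;
    double performance;   // normalised 0..1
    double batterySaving; // normalised 0..1
};

// A utility function whose weights can change at run time: user
// feedback on the behavioural dimension shifts the weighting.
struct DynamicUtility {
    double wPerf = 0.7, wBattery = 0.3;
    double score(const Option& o) const {
        return wPerf * o.performance + wBattery * o.batterySaving;
    }
    // e.g. the user signals "save more battery": shift weight over.
    void feedbackPreferBattery(double delta) {
        wBattery = std::min(1.0, wBattery + delta);
        wPerf = 1.0 - wBattery;
    }
};

// The adaptation manager picks the option with the highest utility.
const Option& decide(const std::vector<Option>& opts,
                     const DynamicUtility& u) {
    return *std::max_element(opts.begin(), opts.end(),
        [&](const Option& a, const Option& b) {
            return u.score(a) < u.score(b);
        });
}

int main() {
    std::vector<Option> opts = {{"full quality", 0.9, 0.2},
                                {"power saver",  0.4, 0.9}};
    DynamicUtility u;
    std::printf("before feedback: %s\n", decide(opts, u).name);
    u.feedbackPreferBattery(0.4); // user adjusts the trade-off at run time
    std::printf("after feedback:  %s\n", decide(opts, u).name);
}
```

After the feedback call the decision flips from "full quality" to "power saver": the same adaptation logic now reflects the user's expectations instead of a weighting frozen at development time.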

Relevance:

20.00%

Publisher:

Abstract:

Web services from different partners can be combined into applications that realize a more complex business goal. Such applications, built as Web service compositions, define how interactions between Web services take place in order to implement the business logic. Web service compositions not only have to provide the desired functionality but also have to comply with certain Quality of Service (QoS) levels. Maximizing the users' satisfaction, also reflected as Quality of Experience (QoE), is a primary goal to be achieved in a Service-Oriented Architecture (SOA). Unfortunately, in a dynamic environment like SOA, unforeseen situations might appear, like services not being available or not responding in the desired time frame. In such situations, appropriate actions need to be triggered in order to avoid the violation of QoS and QoE constraints. In this thesis, proper solutions are developed to manage Web services and Web service compositions with regard to QoS and QoE requirements. The Business Process Rules Language (BPRules) was developed to manage Web service compositions when undesired QoS or QoE values are detected. BPRules provides a rich set of management actions that may be triggered to control the service composition and improve its quality behavior. Regarding the quality properties, BPRules distinguishes between the QoS values as promised by the service providers, the QoE values assigned by end-users, the monitored QoS as measured by our BPR framework, and the predicted QoS and QoE values. BPRules facilitates the specification of certain user groups characterized by different context properties and allows triggering a personalized, context-aware service selection tailored to the specified user groups. In a service market where a multitude of services with the same functionality but different quality values are available, the right services need to be selected to realize the service composition. We developed new and efficient heuristic algorithms that are applied to choose high-quality services for the composition. BPRules offers the possibility to integrate multiple service selection algorithms. The selection algorithms are also applicable to non-linear objective functions and constraints. The BPR framework includes new approaches for context-aware service selection and quality property prediction. We consider the location information of users and services as a context dimension for the prediction of response time and throughput. The BPR framework combines all new features and contributions into a comprehensive management solution. Furthermore, it facilitates flexible monitoring of QoS properties without having to modify the description of the service composition. We show how the different modules of the BPR framework work together in order to execute the management rules. We evaluate how our selection algorithms outperform a genetic algorithm from related research. The evaluation reveals how context data can be used for a personalized prediction of response time and throughput.
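
To give a feel for QoS-driven service selection, here is a small C++ sketch of one greedy heuristic: pick the highest-quality candidate per task, then downgrade where it saves the most response time until an end-to-end constraint holds. This is a simplified stand-in for illustration, not one of the thesis's algorithms, and the service names and figures are invented.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// One candidate service for a task of the composition.
struct Service {
    const char* name;
    double responseTime; // promised QoS value [ms]
    double quality;      // aggregated quality score, higher is better
};

// Greedy selection: start from the highest-quality candidate per
// task; while the end-to-end response-time constraint is violated,
// downgrade the task where switching to its fastest candidate saves
// the most time.
std::vector<std::size_t> selectServices(
        const std::vector<std::vector<Service>>& tasks, double maxTotal) {
    std::vector<std::size_t> pick(tasks.size(), 0);
    for (std::size_t t = 0; t < tasks.size(); ++t)
        for (std::size_t s = 1; s < tasks[t].size(); ++s)
            if (tasks[t][s].quality > tasks[t][pick[t]].quality) pick[t] = s;

    auto total = [&] {
        double sum = 0;
        for (std::size_t t = 0; t < tasks.size(); ++t)
            sum += tasks[t][pick[t]].responseTime;
        return sum;
    };
    while (total() > maxTotal) {
        std::size_t bestTask = 0, bestIdx = 0;
        double bestSave = 0;
        for (std::size_t t = 0; t < tasks.size(); ++t) {
            std::size_t fastest = 0;
            for (std::size_t s = 1; s < tasks[t].size(); ++s)
                if (tasks[t][s].responseTime < tasks[t][fastest].responseTime)
                    fastest = s;
            double save = tasks[t][pick[t]].responseTime -
                          tasks[t][fastest].responseTime;
            if (save > bestSave) {
                bestSave = save; bestTask = t; bestIdx = fastest;
            }
        }
        if (bestSave <= 0) break; // constraint cannot be met by downgrading
        pick[bestTask] = bestIdx;
    }
    return pick;
}

int main() {
    std::vector<std::vector<Service>> tasks = {
        {{"payA", 400, 0.9}, {"payB", 150, 0.6}},
        {{"shipA", 500, 0.8}, {"shipB", 200, 0.7}}};
    std::vector<std::size_t> pick = selectServices(tasks, 700.0);
    for (std::size_t t = 0; t < tasks.size(); ++t)
        std::printf("task %zu -> %s\n", t, tasks[t][pick[t]].name);
}
```

With the 700 ms budget the heuristic keeps the high-quality payment service and downgrades only the shipping task, where the switch saves the most time; the actual algorithms of the thesis additionally handle non-linear objective functions and constraints.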