100 results for Optimisation of methods
Abstract:
Samples of various industrial or pilot-plant spray-dried materials were obtained from manufacturers together with details of drying conditions and feed concentrations. The samples were subjected to qualitative and semi-quantitative examination to identify structural and morphological features. The results were related to measured bulk physical properties and to drying conditions. Single particles were produced in a convective drying process analogous to spray drying, in which different solids or mixtures of solids were dried from solutions, slurries or pastes as single suspended droplets. The localised chemical and physical structures were analysed and in some cases the retention of volatiles monitored. The results were related to experimental conditions, viz. air temperature, initial solids concentration and the degree of feed aeration. Three distinct categories of particle morphology were identified: crystalline, skin-forming and agglomerate. Each category is evidence of a characteristic drying behaviour which is dependent on initial solids concentration, the degree of feed aeration, and drying temperature. Powder flowability, particle and bulk density, particle size, particle friability, and the retention of volatiles bear a direct relationship to morphological structure. Morphologies of multicomponent mixtures were complex, but the respective migration rates of the solutes were dependent on drying temperature. Gas-film heat and mass transfer coefficients of single pure liquid droplets were also measured over a temperature range of 50°C to 200°C under forced convection. Enhanced transfer rates were obtained, attributed to droplet instability or oscillation within the airflow, as demonstrated in associated work with single free-flight droplets. The results are of relevance to drier optimisation and to the optimisation of product characteristics, e.g. particle strength and essential volatiles retention, in convective drying.
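As background for the transfer measurements described above, the classical Ranz-Marshall correlations give the baseline gas-film coefficients for a single spherical droplet in forced convection; they are quoted here only as a point of reference (the abstract does not state which correlations were used), and oscillating or unstable droplets would be expected to exceed them:

    \mathrm{Nu} = \frac{h\,d}{k_g} = 2 + 0.6\,\mathrm{Re}^{1/2}\,\mathrm{Pr}^{1/3},
    \qquad
    \mathrm{Sh} = \frac{k_c\,d}{D_v} = 2 + 0.6\,\mathrm{Re}^{1/2}\,\mathrm{Sc}^{1/3}

where h and k_c are the gas-film heat and mass transfer coefficients, d the droplet diameter, k_g the gas thermal conductivity and D_v the vapour diffusivity.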
Abstract:
A large number of compounds containing quinonoid or hindered phenol functions were examined for their roles as antifatigue agents. Among the evaluated quinones and phenols expected to have macroalkyl radical scavenging ability, BQ, αTOC, γTOC and GM showed relatively good fatigue resistance (although their performance was slightly less effective than the commercial aromatic amine antioxidants, IPPD and 6PPD). The compounds shown, via calculated reactivity indices, to have higher reactivity with alkyl radicals gave better fatigue resistance. This supports the suggestion that strong alkyl radical scavengers should also be effective antifatigue agents. Evidence based on the calculated reactivity indices suggests that the quinones examined react with alkyl radicals at the meta position of the quinone ring, producing phenoxyl radicals. The phenoxyl radicals are expected either to disproportionate, to recombine with a further alkyl radical, or to abstract a hydrogen from another alkyl radical, producing an olefine. Regeneration of the quinones and formation of the corresponding phenols are expected to occur during antifatigue activity. The phenol antioxidant HBA is expected to produce a quinonoid compound, which should function in a similar way to the other quinones. Another phenol, GM, which is also known to scavenge alkyl radicals, showed good antifatigue performance. Tocopherols had effective antifatigue activity and are expected to act by antifatigue mechanisms different from those of the other quinones; αTOC was therefore examined by HPLC analysis to probe its mechanism during rubber fatigue. Trimers of αTOC produced during vulcanisation are suggested to contribute to the antifatigue activity observed. The evidence suggests that the trimers regenerate αTOC, and a mechanism was proposed. Although the antifatigue agents evaluated showed antifatigue activity, most of them had poor thermoxidative resistance, hence it was necessary to compensate for this by combining antioxidants with the antifatigue agents. Reactive antioxidants which have the potential to graft onto the polymer chains during reactive processing were used for this purpose. APMA was the most effective of the reactive antioxidants evaluated. Although a high degree of grafting was achieved after optimisation of the grafting conditions, it is suggested that this was achieved through long branches of APMA arising from a large extent of polymerisation. This is expected to cause maldistribution of APMA, reducing its CB-D activity (while CB-A activity showed clear advantages for grafting). Further optimisation of the grafting conditions is required in order to use APMA more effectively. Moreover, although synergistic effects between APMA and the antifatigue agents were expected, none of the evaluated antifatigue agents, BQ, αTOC, γTOC and TMQ, showed significant synergism in either fatigue or thermoxidative resistance; they performed merely as additives.
The effective use of implicit parallelism through the use of an object-oriented programming language
Abstract:
This thesis explores translating well-written sequential programs in a subset of the Eiffel programming language - without syntactic or semantic extensions - into parallelised programs for execution on a distributed architecture. The main focus is on constructing two object-oriented models: a theoretical, self-contained model of concurrency which enables a simplified second model for implementing the compilation process. There is a further presentation of principles that, if followed, maximise the potential level of parallelism. Model of Concurrency. The concurrency model is designed to be a straightforward target onto which sequential programs are mapped, thus making them parallel. It aids the compilation process by providing a high level of abstraction, including a useful model of parallel behaviour which enables easy incorporation of message interchange, locking, and synchronisation of objects. Further, the model is complete enough that a compiler can be, and has been, practically built. Model of Compilation. The compilation model's structure is based upon an object-oriented view of grammar descriptions and capitalises on both a recursive-descent style of processing and abstract syntax trees to perform the parsing. A composite-object view with an attribute-grammar style of processing is used to extract sufficient semantic information for the parallelisation (i.e. code-generation) phase. Programming Principles. The set of principles presented is based upon information hiding, sharing and containment of objects and the division of methods on the basis of a command/query split. When followed, the level of potential parallelism within the presented concurrency model is maximised. Further, these principles arise naturally from good programming practice. Summary. In summary, this thesis shows that it is possible to compile well-written programs, written in a subset of Eiffel, into parallel programs without any syntactic additions or semantic alterations to Eiffel: i.e. no parallel primitives are added, and the parallel program is modelled to execute with semantics equivalent to the sequential version. If the programming principles are followed, a parallelised program achieves the maximum level of potential parallelisation within the concurrency model.
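A minimal sketch of the command/query division underlying these principles, written here in Python for brevity with a hypothetical class (the thesis itself works in Eiffel): queries report state without side effects, commands change state without returning a result, so calls to queries on different objects can safely be scheduled in parallel.

    # Hypothetical illustration of the command/query division (not code from the
    # thesis, and in Python rather than Eiffel): queries never mutate, commands
    # never return values.
    class Account:
        def __init__(self) -> None:
            self._balance = 0

        def balance(self) -> int:
            # Query: reports state, has no side effects.
            return self._balance

        def deposit(self, amount: int) -> None:
            # Command: changes state, returns nothing.
            self._balance += amount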
Abstract:
The main aim of this thesis is to investigate the application of methods of differential geometry to the constraint analysis of relativistic high spin field theories. As a starting point the coordinate dependent descriptions of the Lagrangian and Dirac-Bergmann constraint algorithms are reviewed for general second order systems. These two algorithms are then respectively employed to analyse the constraint structure of the massive spin-1 Proca field from the Lagrangian and Hamiltonian viewpoints. As an example of a coupled field theoretic system the constraint analysis of the massive Rarita-Schwinger spin-3/2 field coupled to an external electromagnetic field is then reviewed in terms of the coordinate dependent Dirac-Bergmann algorithm for first order systems. The standard Velo-Zwanziger and Johnson-Sudarshan inconsistencies that this coupled system seemingly suffers from are then discussed in light of this full constraint analysis and it is found that both these pathologies degenerate to a field-induced loss of degrees of freedom. A description of the geometrical version of the Dirac-Bergmann algorithm developed by Gotay, Nester and Hinds begins the geometrical examination of high spin field theories. This geometric constraint algorithm is then applied to the free Proca field and to two Proca field couplings; the first of which is the minimal coupling to an external electromagnetic field whilst the second is the coupling to an external symmetric tensor field. The onset of acausality in this latter coupled case is then considered in relation to the geometric constraint algorithm.
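For orientation, the constraint structure of the free Proca field referred to above can be summarised as follows (standard textbook results, quoted with a conventional choice of metric signature rather than taken from the thesis):

    \mathcal{L} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu} + \tfrac{1}{2} m^{2} A_{\mu} A^{\mu},
    \qquad F_{\mu\nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}

    \pi^{0} \approx 0 \quad \text{(primary constraint)}, \qquad
    \partial_{i}\pi^{i} + m^{2} A_{0} \approx 0 \quad \text{(secondary constraint)}

Both constraints are second class, so the eight-dimensional phase space per point is reduced to six, leaving the three physical polarisations of a massive spin-1 field.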
Abstract:
This thesis looks at two issues. Firstly, statistical work was undertaken examining profit margins, labour productivity and total factor productivity in telecommunications in ten member states of the European Union (EU) over a 21-year period (not all member states could be included owing to data inadequacy). Also, three non-members, namely Switzerland, Japan and the US, were included for comparison. This research was intended to provide an understanding of how telecoms in the EU have developed. There are two propositions in this part of the thesis: (i) privatisation and market liberalisation improve performance; (ii) countries that liberalised their telecoms sectors first show better productivity growth than countries that liberalised later. In sum, a mixed picture is revealed. Some countries performed better than others over time, but there is no apparent relationship between productivity performance and the two propositions. Some of the results from this part of the thesis were published in Dabler et al. (2002). Secondly, the remainder of the thesis tests the proposition that the telecoms directives of the European Commission created harmonised regulatory systems in the member states of the EU. By undertaking explanatory research, this thesis not only seeks to establish whether harmonisation has been achieved, but also tries to find an explanation as to why this is so. To accomplish this, as a first stage a questionnaire survey was administered to the fifteen telecoms regulators in the EU. The purpose of the survey was to provide knowledge of the methods, rationales and approaches adopted by the regulatory offices across the EU. This allowed a decision as to whether harmonisation in telecoms regulation has been achieved. Stemming from the results of the questionnaire analysis, follow-up case studies with four telecoms regulators were undertaken in a second stage of this research. The objective of these case studies was to take into account the country-specific circumstances of telecoms regulation in the EU. To undertake the case studies, several sources of evidence were combined. More specifically, the annual Implementation Reports of the European Commission were reviewed, alongside the findings from the questionnaire. Then, interviews with senior members of staff in the four regulatory authorities were conducted. Finally, the evidence from the questionnaire survey and from the case studies was corroborated to provide an explanation as to why telecoms regulation in the EU has or has not reached a state of harmonisation. In addition to testing whether harmonisation has been achieved and why, this research has found evidence of different approaches to control over telecoms regulators and to the market intervention administered by telecoms regulators within the EU. Regarding regulatory control, it was found that some member states have adopted a mainly proceduralist model, some have implemented more of a substantive model, and others have adopted a mix of the two. Some findings from the second stage of the research were published in Dabler and Parker (2004). Similarly, regarding market intervention by regulatory authorities, different member states treat market intervention differently, namely according to market-driven or non-market-driven models, or a mix of both approaches.
Abstract:
The concept of a task is fundamental to the discipline of ergonomics. Approaches to the analysis of tasks began in the early 1900s. These approaches have evolved and developed to the present day, when there is a vast array of methods available. Some of these methods are specific to particular contexts or applications; others are more general. However, whilst many of these analyses allow tasks to be examined in detail, they do not act as tools to aid the design process or the designer. The present thesis examines the use of task analysis in a process control context, and in particular the use of task analysis to specify operator information and display requirements in such systems. The first part of the thesis examines the theoretical aspects of task analysis and presents a review of the methods, issues and concepts relating to task analysis. A review of over 80 methods of task analysis was carried out to form a basis for the development of a task analysis method to specify operator information requirements in industrial process control contexts. Of the methods reviewed, Hierarchical Task Analysis was selected to provide such a basis and was developed to meet the criteria outlined for such a method of task analysis. The second section outlines the practical application and evolution of the developed task analysis method. Four case studies were used to examine the method in an empirical context. The case studies represent a range of plant contexts and types: complex and simple, batch and continuous, and high-risk and low-risk processes. The theoretical and empirical issues are drawn together and a method developed to provide a task analysis technique to specify operator information requirements and to provide the first stages of a tool to aid the design of VDU displays for process control.
Abstract:
African Caribbean Owned Businesses (ACOBs) have been postulated as having performance-related problems, especially when compared with other ethnic minority groups in Britain. This research investigates whether ACOBs may be performing less well than similar firms in the population and why this may be so. The aspiration behind this study is therefore to verify the existence of performance differentials between ACOBs and White and Asian Owned Businesses (WAOBs), using a triangulation of methods and matched-pair analysis. Every ACOB was matched along the firm-specific characteristics of age, size, legal form and industry (sector) with a similar WAOB. Findings support the hypothesis that ACOBs are more likely to perform less well than WAOBs; WAOBs out-performed ACOBs in both the objective and subjective assessments. Some differences were also found between the two groups in the entrepreneurs' characteristics and in the emphases of their strategic orientation within overall business strategy. The most likely drivers of performance differentials were found in firm activities and operations. ACOBs tended to have brands that were not as popular in the mainstream, with most of their manufactured goods being seen as 'exotic' while those of WAOBs were perceived as 'traditional'. Moreover, ACOBs had a higher proportion of clients consisting of individuals rather than business organisations, while WAOBs had a higher proportion consisting of business organisations.
Abstract:
This work describes the fabrication of nanospheres from a range of novel polyhydroxyalkanoates, supplied by Monsanto, St Louis, Missouri, USA, for the delivery of selected actives of both pharmaceutical and agricultural interest. Initial evaluation of established microsphere and nanosphere fabrication techniques resulted in the adoption and optimisation of a double sonication solvent evaporation method involving the Synperonic surfactant F68. Nanospheres could be consistently generated with this method. Studies on the incorporation and release of the surrogate protein Bovine Serum Albumin V (BSA) demonstrated that nanospheres could be loaded with between 10-40% w/w BSA without destabilisation. BSA release from nanospheres into Hanks' Balanced Salt Solution, pH 7.4, could be monitored for up to 28 days at 37°C. The incorporation and release of the Monsanto actives - the insecticide Admire® (1-[(6-chloro-3-pyridinyl)methyl]-N-nitro-2-imidazolidinimine) and the potassium salt of the plant growth hormone gibberellic acid (GA3K) - from physico-chemically characterised polymer nanospheres was monitored for up to 37 days and 28 days respectively, at both 4°C and 23°C. Release data were subsequently fitted to established kinetic models to elaborate the possible mechanisms of release of actives from the nanospheres. The exposure of unloaded nanospheres to a range of physiological media and rural rainwater was used to investigate the role that polymer biodegradation by enzymatic and chemical means might play in the in vivo release of actives and in agricultural applications. The potential environmental biodegradation of the Monsanto polymers was investigated using a composting study (International Standard ISO/FDIS 14855) in which the ultimate aerobic biodegradation of the polymers was monitored by analysis of evolved carbon dioxide. These studies demonstrated the potential of the polymers for use in the environment, for example as a pesticide delivery system.
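The abstract does not name the kinetic models used; as an illustration only, the sketch below fits hypothetical release data to the Korsmeyer-Peppas equation, one model commonly applied to sphere-based delivery systems.

    # Hypothetical sketch: fitting fractional release data to the Korsmeyer-Peppas
    # model Mt/Minf = k * t**n. The data points are invented for illustration and
    # are not the thesis results.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([1, 3, 7, 14, 21, 28], dtype=float)          # time in days
    release = np.array([0.12, 0.25, 0.41, 0.58, 0.69, 0.75])  # fraction released

    def korsmeyer_peppas(t, k, n):
        return k * t**n

    (k, n), _ = curve_fit(korsmeyer_peppas, t, release, p0=(0.1, 0.5))
    print(f"k = {k:.3f}, n = {n:.3f}")  # for spheres, n <= 0.43 suggests Fickian diffusion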
Abstract:
Development of accurate and sensitive analytical methods to measure the level of biomarkers, such as 8-oxo-guanine or its corresponding nucleoside, 8-oxo-2'-deoxyguanosine, has become imperative in the study of DNA oxidative damage in vivo. Among the most promising techniques, HPLC-MS/MS has many attractive advantages. Like any method that employs MS, its accuracy depends on the use of multiply isotopically-labelled internal standards. This project is aimed at making such internal standards available. The first task was to synthesise the multiply isotopically-labelled bases (M+4) guanine and (M+4) 8-oxo-guanine. Synthetic routes for both (M+4) guanine and (M+4) 8-oxo-guanine were designed and validated using the unlabelled compounds. The reaction conditions were also optimised during these "dry runs". The amination of 4-hydroxy-2,6-dichloropyrimidine appeared to be very sensitive to the purity of the commercial [15N]benzylamine reagent. Having failed, after several attempts, to obtain the pure reagent from commercial suppliers, [15N]benzylamine was successfully synthesised in our laboratory and used in the first synthesis of (M+4) guanine. Although (M+4) bases can be, and indeed have been, used as internal standards in the quantitative analysis of oxidative damage, they cannot account for the errors that may occur during the early sample preparation stages. Therefore, internal standards in the form of nucleosides and DNA oligomers are more desirable. After evaluating a number of methods, an enzymatic transglycosylation technique was adopted for the transfer of the labelled bases to give their corresponding nucleosides. Both (M+4) 2'-deoxyguanosine and (M+4) 8-oxo-2'-deoxyguanosine can be purified on a micro scale by HPLC. The challenge came from the purification of larger-scale (>50 mg) syntheses of the nucleosides. A gel filtration method was successfully developed, which resulted in excellent separation of (M+4) 2'-deoxyguanosine from the incubation mixture. The (M+4) 2'-deoxyguanosine was then fully protected in three steps and successfully incorporated, by solid-supported synthesis, into a DNA oligomer containing 18 residues. Thus, synthesis of 8-oxo-2'-deoxyguanosine on a larger scale for its future incorporation into DNA oligomers is now a possibility resulting from this thesis work. We believe that these internal standards can be used to develop procedures that make the measurement of oxidative DNA damage more accurate and sensitive.
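The quantification principle these internal standards support is straightforward isotope dilution; the sketch below is a hypothetical illustration (invented numbers, equal MS response assumed for labelled and unlabelled forms), not a procedure taken from the thesis.

    # Hypothetical isotope-dilution calculation with an (M+4) internal standard:
    # a known amount of the labelled nucleoside is spiked into the sample and the
    # analyte amount is read off the HPLC-MS/MS peak-area ratio.
    spiked_standard_pmol = 50.0   # (M+4) 8-oxo-2'-deoxyguanosine added (assumed value)
    area_analyte = 1.8e5          # peak area of unlabelled 8-oxo-2'-deoxyguanosine
    area_standard = 3.6e5         # peak area of the (M+4) internal standard

    analyte_pmol = spiked_standard_pmol * (area_analyte / area_standard)
    print(f"8-oxo-dG in sample: {analyte_pmol:.1f} pmol")  # 25.0 pmol with these numbers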
Abstract:
In this project, antigen-containing microspheres were produced from a range of biodegradable polymers by single and double emulsion solvent evaporation and spray drying techniques. The proteins used in this study were mainly BSA, tetanus toxoid, the Y. pestis subunit vaccine antigens F1 and V, and the cytokine interferon-gamma. The polymer chosen for use in the vaccine preparation will directly determine the characteristics of the formulation. Full in vitro analysis of the preparations was carried out, including surface hydrophobicity and drug release profiles. The influence of the surfactants employed on microsphere surface hydrophobicity was demonstrated. Preparations produced with polyhydroxybutyrate and poly(DTH carbonate) polymers were also shown to be more hydrophobic than PLA microspheres, which may enhance particle uptake by antigen-presenting cells and Peyer's patches. Systemic immunisation with microspheres with a range of properties showed differences in the time course and extent of the immune response generated, which would allow optimisation of the dosing schedule to provide maximal response in a single-dose preparation. Both systemic and mucosal responses were induced following oral delivery of microencapsulated tetanus toxoid, indicating that encapsulation of the antigen into a microsphere preparation provides protection in the gut and allows targeting of the mucosal-associated lymphoid tissue. Co-encapsulation of adjuvants for further enhancement of the immune response was also carried out and the effect on loading and release pattern assessed. Co-encapsulated F1 and interferon-gamma were administered i.p. and the immune responses compared with those to singly encapsulated and free subunit antigen.
Abstract:
This research focused on the formation of particulate delivery systems for the sub-unit fusion protein Ag85B-ESAT-6, a promising tuberculosis (TB) vaccine candidate. Initial work concentrated on formulating and characterising, both physico-chemically and immunologically, cationic liposomes based on the potent adjuvant dimethyl dioctadecyl ammonium (DDA). These studies demonstrated that addition of the immunomodulatory trehalose dibehenate (TDB) enhanced the physical stability of the system whilst also adding further adjuvanticity. Indeed, this formulation was effective in stimulating both a cell-mediated and a humoral immune response. In order to investigate an alternative to the DDA-TDB system, microspheres based on poly(DL-lactide-co-glycolide) (PLGA) incorporating the adjuvants DDA and TDB, either alone or in combination, were first optimised in terms of physico-chemical characteristics and then analysed immunologically. The formulation incorporating PLGA and DDA emerged as the lead candidate, with promising protection data against TB. Subsequent optimisation of the lead microsphere formulation investigated the effect of several variables in the formulation process on the physico-chemical and immunological characteristics of the particles produced. Further, freeze-drying studies were carried out with both sugar-based and amino acid-based cryoprotectants, in order to formulate a stable freeze-dried product. Finally, environmental scanning electron microscopy (ESEM) was investigated as a potential alternative to conventional SEM for the morphological investigation of microsphere formulations. Results revealed that the DDA-TDB liposome system proved to be the most immunologically efficient delivery vehicle studied, with high levels of antibody and cytokine production, particularly gamma-interferon (IFN-γ), considered the key cytokine marker for anti-mycobacterial immunity. Of the microsphere systems investigated, PLGA in combination with DDA showed the most promise, with an ability to initiate a broad spectrum of cytokine production, as well as antigen-specific spleen cell proliferation comparable to that of the DDA-TDB formulation.
Abstract:
Objectives: We explored the perceptions, views and experiences of diabetes education in people with type 2 diabetes who were participating in a UK randomized controlled trial of methods of education. The intervention arm of the trial was based on DESMOND, a structured programme of group education sessions aimed at enabling self-management of diabetes, while the standard arm was usual care from general practices. Methods: Individual semi-structured interviews were conducted with 36 adult patients, of whom 19 had attended DESMOND education sessions and 17 had been randomized to receive usual care. Data analysis was based on the constant comparative method. Results: Four principal orientations towards diabetes and its management were identified: 'resisters', 'identity resisters, consequence accepters', 'identity accepters, consequence resisters' and 'accepters'. Participants offered varying accounts of the degree of personal responsibility that needed to be assumed in response to the diagnosis. Preferences for different styles of education were also expressed, with many reporting that they enjoyed and benefited from group education, although some reported ambivalence or disappointment with their experiences of education. It was difficult to identify striking thematic differences between the accounts of people on different arms of the trial, although there was some very tentative evidence that those who attended DESMOND were more accepting of a changed identity and its implications for their management of diabetes. Discussion: No single approach to education is likely to suit all people newly diagnosed with diabetes, although structured group education may suit many. This paper identifies varying orientations and preferences of people with diabetes towards forms of both education and self-management, which should be taken into account when planning approaches to education.
Abstract:
This thesis presents the results of an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are necessarily extremely sensitive. The unfortunate side effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high cost and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
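As one concrete example of the dynamical-systems viewpoint applied to single-channel, unaveraged data, the sketch below performs a simple delay embedding (state-space reconstruction) of a one-dimensional signal; the signal, delay and embedding dimension are hypothetical stand-ins, not parameters from the thesis.

    # Hypothetical sketch of Takens-style delay embedding of a single-channel
    # time series; in practice the delay and dimension would be estimated from
    # the data (e.g. via mutual information and false nearest neighbours).
    import numpy as np

    def delay_embed(x, dim, tau):
        """Return the delay-embedded trajectory matrix of a 1-D signal."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

    x = np.sin(np.linspace(0, 60, 3000)) + 0.1 * np.random.randn(3000)  # stand-in signal
    trajectory = delay_embed(x, dim=5, tau=10)
    print(trajectory.shape)  # (2960, 5): each row is one reconstructed state vector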
Abstract:
The occurrence of spalling is a major factor in determining the fire resistance of concrete constructions. The apparently random occurrence of spalling has limited the development and application of fire resistance modelling for concrete structures. This thesis describes an experimental investigation into the spalling of concrete on exposure to elevated temperatures. It has been shown that spalling may be categorised into four distinct types: aggregate spalling, corner spalling, surface spalling and explosive spalling. Aggregate spalling has been found to be a form of shear failure of aggregates local to the heated surface. The susceptibility of any particular concrete to aggregate spalling can be quantified from parameters which include the coefficients of thermal expansion of both the aggregate and the surrounding mortar, the size and thermal diffusivity of the aggregate, and the rate of heating. Corner spalling, which is particularly significant for the fire resistance of concrete columns, is a result of concrete losing its tensile strength at elevated temperatures. Surface spalling is the result of excessive pore pressures within heated concrete. An empirical model has been developed to allow quantification of the pore pressures, and a material failure model proposed. The dominant parameters are the rate of heating, pore saturation and concrete permeability. Surface spalling may be alleviated by limiting pore pressure development, and a number of methods to this end have been evaluated. Explosive spalling involves the catastrophic failure of a concrete element and may be caused by either of two distinct mechanisms. In the first instance, excessive pore pressures can cause explosive spalling, although the effect is limited principally to unloaded or relatively small specimens. The second cause of explosive spalling is the superimposition of thermally induced stresses on applied load stresses exceeding the concrete's strength.
Abstract:
Service-based systems that are dynamically composed at run time to provide complex, adaptive functionality are currently one of the main development paradigms in software engineering. However, the Quality of Service (QoS) delivered by these systems remains an important concern, and needs to be managed in an equally adaptive and predictable way. To address this need, we introduce a novel, tool-supported framework for the development of adaptive service-based systems called QoSMOS (QoS Management and Optimisation of Service-based systems). QoSMOS can be used to develop service-based systems that achieve their QoS requirements through dynamically adapting to changes in the system state, environment and workload. QoSMOS service-based systems translate high-level QoS requirements specified by their administrators into probabilistic temporal logic formulae, which are then formally and automatically analysed to identify and enforce optimal system configurations. The QoSMOS self-adaptation mechanism can handle reliability- and performance-related QoS requirements, and can be integrated into newly developed solutions or legacy systems. The effectiveness and scalability of the approach are validated using simulations and a set of experiments based on an implementation of an adaptive service-based system for remote medical assistance.
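As an illustration of the kind of requirement translation described above (the formula, bound and atomic proposition below are hypothetical, not taken from the QoSMOS paper), a high-level requirement such as "a request for assistance must be handled within 30 time units with probability at least 0.98" could be expressed as the probabilistic temporal logic formula

    P_{\geq 0.98}\,[\, \Diamond^{\leq 30}\ \mathit{handled} \,]

which a probabilistic model checker can then verify against a Markov model of each candidate system configuration.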