48 results for Thread safe parallel run-time


Relevance:

100.00%

Publisher:

Abstract:

Uncertainty can be defined as the difference between the information that is represented in an executing system and the information that is both measurable and available about the system at a certain point in its lifetime. A software system can be exposed to multiple sources of uncertainty produced by, for example, ambiguous requirements and unpredictable execution environments. A runtime model is a dynamic knowledge base that abstracts useful information about the system, its operational context and the extent to which the system meets its stakeholders' needs. A software system can successfully operate in multiple dynamic contexts by using runtime models that augment information available at design time with information monitored at runtime. This chapter explores the role of runtime models as a means to cope with uncertainty. To this end, we introduce well-suited terminology for models, runtime models and uncertainty, and present a state-of-the-art summary of model-based techniques for addressing uncertainty both at development time and at runtime. Using a case study about robot systems, we discuss how current techniques and the MAPE-K loop can be used together to tackle uncertainty. Furthermore, we propose possible extensions of the MAPE-K loop architecture with runtime models to further handle uncertainty at runtime. The chapter concludes by identifying key challenges and enabling technologies for using runtime models to address uncertainty, and by identifying closely related research communities that can foster ideas for resolving the challenges raised. © 2014 Springer International Publishing.
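
As a rough illustration of how a runtime model can sit at the centre of a MAPE-K loop, the following Python sketch keeps a small knowledge base that the monitor step updates and that analysis, planning and execution consult. It is a minimal sketch only: the names (RuntimeModel, monitor, analyse, plan, execute), the latency goal and the replica-based adaptation are illustrative assumptions, not taken from the chapter.

```python
import random

class RuntimeModel:
    """Toy knowledge base: design-time assumptions augmented with monitored runtime values."""
    def __init__(self, latency_goal_ms=100.0):
        self.latency_goal_ms = latency_goal_ms   # design-time requirement
        self.observed_latency_ms = None          # filled in by monitoring at runtime
        self.replicas = 1                        # current configuration

def monitor(model):
    # Stand-in for real sensing; a deployed system would query its own probes here.
    model.observed_latency_ms = random.uniform(50, 200) / model.replicas

def analyse(model):
    # Crude uncertainty margin: only react to clear violations of the latency goal.
    return model.observed_latency_ms > 1.2 * model.latency_goal_ms

def plan(model):
    return {"replicas": model.replicas + 1}      # hypothetical adaptation tactic

def execute(model, adaptation):
    model.replicas = adaptation["replicas"]

model = RuntimeModel()
for _ in range(10):                              # the MAPE-K loop, driven by the runtime model
    monitor(model)
    if analyse(model):
        execute(model, plan(model))
print("final configuration:", model.replicas, "replica(s)")
```

The point of the sketch is only that the knowledge base augments a design-time requirement (the latency goal) with monitored runtime information (the observed latency), and that the margin in analyse() is one crude way of not overreacting to uncertain measurements.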

Relevance:

100.00%

Publisher:

Abstract:

Contemporary software systems are becoming increasingly large, heterogeneous, and decentralised. They operate in dynamic environments, and their architectures exhibit complex trade-offs across the dimensions of goals, time, and interaction, which emerge internally from the systems and externally from their environment. This gives rise to the vision of self-aware architecture, in which design decisions and execution strategies for these concerns are dynamically analysed and seamlessly managed at run-time. Drawing on the concept of self-awareness from psychology, this paper extends the foundation of software architecture styles for self-adaptive systems to arrive at a new principled approach for architecting self-aware systems. We demonstrate the added value and applicability of the approach in the context of service provisioning to cloud-reliant service-based applications.

Relevance:

100.00%

Publisher:

Abstract:

The behaviour of self-adaptive systems can be emergent, which means that the system's behaviour may be seen as unexpected by its customers and its developers. Therefore, a self-adaptive system needs to garner confidence from its customers, and it also needs to resolve any surprise on the part of the developer during testing and maintenance. We believe that these two functions can only be achieved if a self-adaptive system is also capable of self-explanation. We argue that a self-adaptive system's behaviour needs to be explained in terms of the satisfaction of its requirements. Since self-adaptive system requirements may themselves be emergent, we propose the use of goal-based requirements models at runtime to offer self-explanation of how a system is meeting its requirements. We demonstrate the analysis of run-time requirements models to yield a self-explanation codified in a domain-specific language, and discuss possible future work.
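
As a hedged sketch of this idea, the snippet below walks a runtime goal model and produces a simple textual self-explanation of which requirements are or are not being satisfied. The Goal structure, the example goals and the wording of the output are hypothetical; they do not reflect the paper's domain-specific language.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    name: str
    satisfied: bool = True
    subgoals: List["Goal"] = field(default_factory=list)

def explain(goal, indent=""):
    # Walk the goal tree and emit a plain-text trace of requirement satisfaction.
    status = "satisfied" if goal.satisfied else "NOT satisfied"
    lines = [f"{indent}Goal '{goal.name}' is {status}."]
    for sub in goal.subgoals:
        lines += explain(sub, indent + "  ")
        if not sub.satisfied:
            lines.append(f"{indent}  -> '{goal.name}' is affected because '{sub.name}' is not met.")
    return lines

# Hypothetical runtime goal model with satisfaction flags set from monitored data.
root = Goal("Deliver item to destination", subgoals=[
    Goal("Navigate to destination", satisfied=False),
    Goal("Avoid obstacles"),
])
root.satisfied = all(g.satisfied for g in root.subgoals)
print("\n".join(explain(root)))
```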

Relevance:

40.00%

Publisher:

Abstract:

Wireless sensor networks have been identified as one of the key technologies for the 21st century. In order to overcome their limitations in areas such as fault tolerance and energy conservation, we propose a middleware solution, In-Motes. In-Motes is a fault-tolerant platform for deploying and monitoring applications in real time. It offers the end user a number of possibilities, including the freedom to experiment with various parameters, so that the deployed applications run in an energy-efficient manner inside the network. The proposed scheme is evaluated through In-Motes EYE, an agent-based real-time In-Motes application developed for sensing acceleration variations in an environment, in order to test its merits under real-time conditions. The application was tested for a period of four months in a prototype, road-like area.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents results from the first use of neural networks for the real-time feedback control of high temperature plasmas in a Tokamak fusion experiment. The Tokamak is currently the principal experimental device for research into the magnetic confinement approach to controlled fusion. In the Tokamak, hydrogen plasmas, at temperatures of up to 100 million K, are confined by strong magnetic fields. Accurate control of the position and shape of the plasma boundary requires real-time feedback control of the magnetic field structure on a time-scale of a few tens of microseconds. Software simulations have demonstrated that a neural network approach can give significantly better performance than the linear technique currently used on most Tokamak experiments. The practical application of the neural network approach requires high-speed hardware, for which a fully parallel implementation of the multi-layer perceptron, using a hybrid of digital and analogue technology, has been developed.
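
For context, the computation the control hardware has to repeat every few tens of microseconds is a single forward pass through a multi-layer perceptron. The sketch below shows such a forward pass in Python with assumed layer sizes and untrained placeholder weights; the actual network architecture, its training and its hybrid digital/analogue realisation are described in the paper, not here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder sizes: magnetic diagnostic signals in, shaping-control signals out.
n_in, n_hidden, n_out = 32, 16, 4
W1 = rng.normal(0, 0.1, (n_hidden, n_in))   # in practice these weights come from training
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_out, n_hidden))
b2 = np.zeros(n_out)

def control_signals(measurements):
    # One forward pass is all the feedback loop needs, which is why a fully
    # parallel hardware implementation of the perceptron is attractive.
    hidden = np.tanh(W1 @ measurements + b1)
    return W2 @ hidden + b2

print(control_signals(rng.normal(size=n_in)))
```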

Relevance:

30.00%

Publisher:

Abstract:

Background & Aims: Current models of visceral pain processing derived from metabolic brain imaging techniques fail to differentiate between exogenous (stimulus-dependent) and endogenous (non-stimulus-specific) neural activity. The aim of this study was to determine the spatiotemporal correlates of exogenous neural activity evoked by painful esophageal stimulation. Methods: In 16 healthy subjects (8 men; mean age, 30.2 ± 2.2 years), we recorded magnetoencephalographic responses to 2 runs of 50 painful esophageal electrical stimuli and localized the evoked neural activity to 8 brain subregions. Subsequently, 11 subjects (6 men; mean age, 31.2 ± 1.8 years) had esophageal cortical evoked potentials recorded on a separate occasion using similar experimental parameters. Results: The earliest cortical activity (P1) was recorded in parallel in the primary/secondary somatosensory cortex and posterior insula (∼85 ms). Significantly later activity was seen in the anterior insula (∼103 ms) and cingulate cortex (∼106 ms; P = .0001). There was no difference between the P1 latency for magnetoencephalography and the cortical evoked potential (P = .16); however, neural activity recorded with the cortical evoked potential was longer than with magnetoencephalography (P = .001). No sex differences were seen for psychophysical or neurophysiological measures. Conclusions: This study shows that exogenous cortical neural activity evoked by experimental esophageal pain is processed simultaneously in the somatosensory and posterior insula regions. Activity in the anterior insula and cingulate, brain regions that process the affective aspects of esophageal pain, occurs significantly later than in the somatosensory regions, and no sex differences were observed with this experimental paradigm. The cortical evoked potential reflects the summation of cortical activity from these brain regions and has sufficient temporal resolution to separate exogenous and endogenous neural activity. © 2005 by the American Gastroenterological Association.

Relevance:

30.00%

Publisher:

Abstract:

Very large spatially referenced datasets, for example those derived from satellite-based sensors that sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over short time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful in less time-critical applications, for example when interacting directly with the data for exploratory analysis, if the algorithms respond within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly when maximum likelihood methods are used: although the storage requirements of the data scale only linearly with the number of observations, the computational complexity of maximum likelihood estimation scales quadratically in memory and cubically in time. Most modern commodity hardware has at least 2 processor cores, if not more, and other mechanisms for parallel computation, such as Grid-based systems, are also becoming increasingly available. However, there currently seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms, we show that computational time can be significantly reduced. We demonstrate this with both sparsely sampled data and densely sampled data on a variety of architectures, ranging from the common dual-core processor found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets, and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
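
One natural parallelism for densely sampled data is to approximate the Gaussian log-likelihood as a sum of independent block likelihoods and evaluate the blocks on separate cores. The sketch below illustrates that idea with a process pool and a one-dimensional exponential covariance; it is a simplified stand-in for the authors' algorithms (which extend Vecchia [1988] and Tresp [2000]) under assumed covariance parameters, not a reimplementation of them.

```python
import numpy as np
from multiprocessing import Pool

def exp_cov(x, variance=1.0, length=0.3, nugget=1e-6):
    # Exponential covariance matrix for 1-D locations x.
    d = np.abs(x[:, None] - x[None, :])
    return variance * np.exp(-d / length) + nugget * np.eye(len(x))

def block_loglik(args):
    # Exact Gaussian log-likelihood of one block, treated as independent of the others.
    xb, yb, params = args
    L = np.linalg.cholesky(exp_cov(xb, *params))
    alpha = np.linalg.solve(L, yb)
    return -0.5 * (alpha @ alpha) - np.log(np.diag(L)).sum() - 0.5 * len(yb) * np.log(2 * np.pi)

def approx_loglik(x, y, params, n_blocks, pool):
    # Block-independent approximation: cross-block covariance is ignored so each
    # block can be evaluated by a different worker process.
    jobs = list(zip(np.array_split(x, n_blocks), np.array_split(y, n_blocks),
                    [params] * n_blocks))
    return sum(pool.map(block_loglik, jobs))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 1, 2000))
    y = rng.multivariate_normal(np.zeros(len(x)), exp_cov(x))
    with Pool(4) as pool:
        print(approx_loglik(x, y, (1.0, 0.3), n_blocks=8, pool=pool))
```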

Relevance:

30.00%

Publisher:

Abstract:

The sudden loss of plasma magnetic confinement, known as a disruption, is one of the major issues in a nuclear fusion machine such as JET (Joint European Torus). Disruptions pose very serious problems for the safety of the machine: the energy stored in the plasma is released to the machine structure in a few milliseconds, resulting in forces that at JET reach several meganewtons. The problem is even more severe in a nuclear fusion power station, where the forces are in the order of one hundred meganewtons. The events that occur during a disruption are still not well understood, even though some mechanisms that can lead to a disruption have been identified and can be used to predict them. Unfortunately, it is always a combination of these events that generates a disruption, and therefore it is not possible to use simple algorithms to predict it. This thesis analyses the possibility of using neural network algorithms to predict plasma disruptions in real time. This involves the determination of plasma parameters every few milliseconds. A plasma boundary reconstruction algorithm, XLOC, capable of determining the plasma/wall distance every 2 milliseconds, has been developed in collaboration with Dr. D. O'Brien and Dr. J. Ellis. The XLOC output has been used to develop a multilayer perceptron network that determines plasma parameters such as ℓi and qψ, with which a machine operational space has been experimentally defined. If the limits of this operational space are breached, the disruption probability increases considerably. Another approach to predicting disruptions is to use neural network classification methods to define the JET operational space. Two methods have been studied. The first method uses a multilayer perceptron network with a softmax activation function for the output layer. This method can be used to classify the input patterns into various classes; in this case the plasma input patterns have been divided into disrupting and safe patterns, giving the possibility of assigning a disruption probability to every plasma input pattern. The second method determines the novelty of an input pattern by estimating the probability density of successful plasma patterns that have been run at JET. The density is represented as a mixture distribution, and its parameters are determined using the Expectation-Maximisation method. If the dataset used to determine the distribution parameters covers the machine operational space sufficiently well, then the patterns flagged as novel can be regarded as patterns belonging to a disrupting plasma. Together with these methods, a network has been designed to predict the vertical forces that a disruption can cause, in order to avoid running plasma configurations that are too dangerous. This network can be run before the pulse using the pre-programmed plasma configuration, or on-line, becoming a tool that allows dangerous plasma configurations to be stopped. All these methods have been implemented in real time on a dual Pentium Pro based machine. The Disruption Prediction and Prevention System has shown that internal plasma parameters can be determined on-line with good accuracy. The disruption detection algorithms also showed promising results, considering the fact that JET is an experimental machine where new plasma configurations are constantly being tested in order to improve its performance.
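
The novelty-detection approach described, fitting a mixture density to safe pulses with Expectation-Maximisation and flagging low-density patterns as potentially disruptive, can be sketched roughly as follows. The stand-in features, the number of mixture components, the percentile threshold and the use of scikit-learn's GaussianMixture are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Stand-in "safe pulse" feature vectors (e.g. two plasma parameters per time slice);
# the thesis would use XLOC-derived quantities instead.
safe = rng.normal(loc=[1.0, 3.0], scale=[0.1, 0.3], size=(5000, 2))

# Mixture density of the safe operational space, fitted by Expectation-Maximisation.
gm = GaussianMixture(n_components=5, covariance_type="full", random_state=0).fit(safe)

# Flag patterns whose log-density falls below the 1st percentile of the training
# log-densities as "novel", i.e. outside the well-covered operational space.
threshold = np.percentile(gm.score_samples(safe), 1.0)

def is_novel(pattern):
    return gm.score_samples(np.atleast_2d(pattern))[0] < threshold

print(is_novel([1.0, 3.1]), is_novel([2.5, 0.5]))   # expected: False True
```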

Relevance:

30.00%

Publisher:

Abstract:

The trend in modal extraction algorithms is to use all the available frequency response function data to obtain a global estimate of the natural frequencies, damping ratios and mode shapes. Improvements in transducer and signal processing technology allow the simultaneous measurement of many hundreds of channels of response data. The quantity of data available and the complexity of the extraction algorithms make considerable demands on the available computer power and require a powerful computer or dedicated workstation to perform satisfactorily. An alternative to waiting for faster sequential processors is to implement the algorithm in parallel, for example on a network of Transputers. Parallel architectures are a cost-effective means of increasing computational power, and a larger number of response channels would simply require more processors. This thesis considers how two typical modal extraction algorithms, the Rational Fraction Polynomial method and the Ibrahim Time Domain method, may be implemented on a network of Transputers. The Rational Fraction Polynomial method is a well-known and robust frequency-domain 'curve-fitting' algorithm. The Ibrahim Time Domain method is an efficient algorithm that 'curve fits' in the time domain. This thesis reviews the algorithms, considers the problems involved in a parallel implementation, and shows how they were implemented on a real Transputer network.
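
The coarsest form of the parallelism involved, assigning response channels to separate processors, can be sketched as below. The per-channel fit shown is only a crude single-mode half-power estimate standing in for the Rational Fraction Polynomial or Ibrahim Time Domain fits, and the Python process pool is merely an analogue of the Transputer network; all names and numbers are assumptions.

```python
import numpy as np
from multiprocessing import Pool

def sdof_receptance(w, wn, zeta, a):
    # Receptance FRF of a single-degree-of-freedom system.
    return a / (wn**2 - w**2 + 2j * zeta * wn * w)

def fit_channel(frf):
    # Crude single-mode estimate for one channel: natural frequency from the
    # magnitude peak, damping ratio from the half-power bandwidth.
    w, H = frf
    k = np.argmax(np.abs(H))
    half = np.abs(H) >= np.abs(H[k]) / np.sqrt(2)
    bandwidth = w[half][-1] - w[half][0]
    return w[k], bandwidth / (2 * w[k])          # (natural frequency, damping ratio)

if __name__ == "__main__":
    w = np.linspace(1, 100, 2000)
    # Synthetic FRFs for 64 channels sharing one mode at 40 rad/s with 2% damping.
    frfs = [(w, sdof_receptance(w, 40.0, 0.02, a)) for a in np.linspace(0.5, 2.0, 64)]
    with Pool() as pool:                         # one channel per worker
        print(pool.map(fit_channel, frfs)[:3])
```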

Relevance:

30.00%

Publisher:

Abstract:

Purpose: To evaluate changes in tear metrics and ocular signs induced by six months of silicone-hydrogel contact lens wear, and the difference in baseline characteristics between those who successfully continued in contact lens wear and those who did not. Methods: Non-invasive Keratograph, Tearscope and fluorescein tear break-up times (TBUTs), tear meniscus height, bulbar and limbal hyperaemia, lid-parallel conjunctival folds (LIPCOF), phenol red thread, fluorescein and lissamine-green staining, and lid wiper epitheliopathy were measured on 60 new contact lens wearers fitted with monthly silicone-hydrogels (average age 36 ± 14 years, 40 females). Symptoms were evaluated by the Ocular Surface Disease Index (OSDI). After six months of full-time contact lens wear the above metrics were re-measured on those patients still in contact lens wear (n = 33). The initial measurements were also compared between the group still wearing lenses after six months and those who had ceased lens wear (n = 27). Results: There were significant changes in tear meniscus height (p = 0.031), bulbar hyperaemia (p = 0.011), fluorescein TBUT (p = 0.027), corneal (p = 0.007) and conjunctival (p = 0.009) staining, LIPCOF (p = 0.011) and lid wiper epitheliopathy (p = 0.002) after six months of silicone-hydrogel wear. Successful wearers had a higher non-invasive (17.0 ± 8.2 s vs 12.0 ± 5.6 s; p = 0.001) and fluorescein (10.7 ± 6.4 s vs 7.5 ± 4.7 s; p = 0.001) TBUT than drop-outs, although OSDI (cut-off 4.2) was also a strong predictor of success. Conclusion: Silicone-hydrogel lenses induced significant changes in the tear film and ocular surface as well as lid margin staining. Wettability of the ocular surface is the main factor affecting contact lens drop-out. © 2013 British Contact Lens Association.

Relevance:

30.00%

Publisher:

Abstract:

The dynamics of the non-equilibrium Ising model with parallel updates is investigated using a generalized mean field approximation that incorporates multiple two-site correlations at any two time steps, which can be obtained recursively. The proposed method shows significant improvement in predicting local system properties compared to other mean field approximation techniques, particularly in systems with symmetric interactions. Results are also evaluated against those obtained from Monte Carlo simulations. The method is also employed to obtain parameter values for the kinetic inverse Ising modeling problem, where couplings and local field values of a fully connected spin system are inferred from data. © 2014 IOP Publishing Ltd and SISSA Medialab srl.
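
For orientation, the sketch below contrasts synchronous (parallel-update) Glauber dynamics of a fully connected Ising model with the naive first-order mean-field update of the magnetisations, m_i(t+1) = tanh(sum_j J_ij m_j(t) + h_i). The generalized approximation in the paper additionally tracks two-site correlations across time steps, which this sketch does not attempt; the couplings, fields, system size and unit inverse temperature are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 200, 2000
J = rng.normal(0, 1 / np.sqrt(N), size=(N, N))
J = (J + J.T) / 2                      # symmetric couplings, the paper's best-behaved case
np.fill_diagonal(J, 0.0)
h = rng.normal(0, 0.5, N)              # local fields

# Synchronous (parallel) Glauber dynamics: every spin is updated at once,
# using the fields computed from the previous time step (inverse temperature 1).
s = rng.choice([-1.0, 1.0], size=N)
samples = np.zeros((T, N))
for t in range(T):
    p_up = 1.0 / (1.0 + np.exp(-2.0 * (J @ s + h)))
    s = np.where(rng.random(N) < p_up, 1.0, -1.0)
    samples[t] = s

# Naive mean-field fixed point of the same parallel dynamics.
m = np.zeros(N)
for _ in range(200):
    m = np.tanh(J @ m + h)

mc = samples[T // 2:].mean(axis=0)     # Monte Carlo estimate of the local magnetisations
print("mean absolute error of the naive mean-field prediction:", np.abs(mc - m).mean())
```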

Relevance:

30.00%

Publisher:

Abstract:

Presented is a study on a single-drive dual-parallel Mach-Zehnder modulator implementation as a single sideband suppressed carrier generator. High values of both extinction ratio and sidemode suppression ratio were obtained at different modulation frequencies over the C-band. In addition, a stabilisation loop was developed to preserve the single sideband generation over time. © The Institution of Engineering and Technology 2013.

Relevance:

30.00%

Publisher:

Abstract:

If humans monitor streams of rapidly presented (approximately 100-ms intervals) visual stimuli, which are typically specific single letters of the alphabet, for two targets (T1 and T2), they often miss T2 if it follows T1 within an interval of 200-500 ms. If T2 follows T1 directly (within 100 ms; described as occurring at 'Lag 1'), however, performance is often excellent: the so-called 'Lag-1 sparing' phenomenon. Lag-1 sparing might result from the integration of the two targets into the same 'event representation', which fits with the observation that sparing is often accompanied by a loss of T1-T2 order information. Alternatively, this might point to competition between the two targets (implying a trade-off between performance on T1 and T2) and Lag-1 sparing might solely emerge from conditional data analysis (i.e. T2 performance given T1 correct). We investigated the neural correlates of Lag-1 sparing by carrying out magnetoencephalography (MEG) recordings during an attentional blink (AB) task, by presenting two targets with a temporal lag of either 1 or 2 and, in the case of Lag 2, with a nontarget or a blank intervening between T1 and T2. In contrast to Lag 2, where two distinct neural responses were observed, at Lag 1 the two targets produced one common neural response in the left temporo-parieto-frontal (TPF) area but not in the right TPF or prefrontal areas. We discuss the implications of this result with respect to competition and integration hypotheses, and with respect to the different functional roles of the cortical areas considered. We suggest that more than one target can be identified in parallel in left TPF, at least in the absence of intervening nontarget information (i.e. masks), yet identified targets are processed and consolidated as two separate events by other cortical areas (right TPF and PFC, respectively).

Relevance:

30.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to investigate an underexplored aspect of outsourcing involving a mixed strategy in which parallel production is continued in-house at the same time as outsourcing occurs. Design/methodology/approach – The study applied a multiple case study approach and drew on qualitative data collected through in-depth interviews with wood product manufacturing companies. Findings – The paper posits that there should be a variety of mixed strategies between the two governance forms of “make” or “buy.” In order to address how companies should consider the extent to which they outsource, the analysis was structured around two ends of a continuum: in-house dominance or outsourcing dominance. With an in-house-dominant strategy, outsourcing complements an organization's own production to optimize capacity utilization and outsource less cost-efficient production, or is used as a tool to learn how to outsource. With an outsourcing-dominant strategy, in-house production helps maintain complementary competencies and avoids lock-in risk. Research limitations/implications – This paper takes initial steps toward an exploration of different mixed strategies. Additional research is required to understand the costs of different mixed strategies compared with insourcing and outsourcing, and to study parallel production from a supplier viewpoint. Practical implications – This paper suggests that managers should think twice before rushing to a “me too” outsourcing strategy in which in-house capacities are completely closed. It is important to take a dynamic view of outsourcing that maintains a mixed strategy as an option, particularly in situations that involve an underdeveloped supplier market and/or as a way to develop resources over the long term. Originality/value – The concept of combining both “make” and “buy” is not new. However, little if any research has focussed explicitly on exploring the variety of different types of mixed strategies that exist on the continuum between insourcing and outsourcing.