25 results for temporal-logic model
in Aston University Research Archive
Abstract:
This research is concerned with the development of distributed real-time systems, in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes, with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols, applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol that provide fault-tolerant and real-time behaviour were developed. Petri nets are used for the design of the distributed controllers, and to embed the commit protocol models within these controller designs. This composition of controller and protocol model allows the analysis of the complete system in a unified manner. A common problem for Petri net based techniques is that of state space explosion; a modular approach to both design and analysis helps cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered, and a hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events allows the management of state space. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. This hybrid approach is applied to the Petri net based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
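The atomic coordination described here builds on the standard two-phase commit protocol. Below is a minimal Python sketch of that baseline protocol only; the `Participant` and `two_phase_commit` names are invented for illustration, and the thesis's extended protocols add the timeout and failure handling that real-time control requires, which this sketch omits.

```python
# Minimal sketch of standard two-phase commit (the protocol the thesis
# extends). All names are illustrative; real implementations add durable
# logging, timeouts and recovery for processor/communication failures.

class Participant:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "INIT"

    def prepare(self):
        # Phase 1: vote YES only if the local action can be completed.
        self.state = "READY" if self.can_commit else "ABORTED"
        return self.can_commit

    def finish(self, commit):
        # Phase 2: apply the coordinator's global decision.
        self.state = "COMMITTED" if commit else "ABORTED"


def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]      # phase 1: collect votes
    decision = all(votes)                            # unanimous YES to commit
    for p in participants:                           # phase 2: broadcast
        p.finish(decision)
    return decision                                  # atomic: all or none


processes = [Participant("valve"), Participant("conveyor", can_commit=False)]
print(two_phase_commit(processes))  # False: a single NO vote aborts the action
```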
Abstract:
Hard real-time systems are a class of computer control systems that must react to demands of their environment by providing 'correct' and timely responses. Since these systems are increasingly being used in systems with safety implications, it is crucial that they are designed and developed to operate in a correct manner. This thesis is concerned with developing formal techniques that allow the specification, verification and design of hard real-time systems. Formal techniques for hard real-time systems must be capable of capturing the system's functional and performance requirements, and previous work has proposed a number of techniques which range from the mathematically intensive to those with some mathematical content. This thesis develops formal techniques that contain both an informal and a formal component, because the informality provides ease of understanding and the formality allows precise specification and verification. Specifically, the combination of Petri nets and temporal logic is considered for the specification and verification of hard real-time systems. Approaches that combine Petri nets and temporal logic by allowing a consistent translation between each formalism are examined. Previously, such techniques have been applied to the formal analysis of concurrent systems. This thesis adapts these techniques for use in the modelling, design and formal analysis of hard real-time systems. The techniques are applied to the problem of specifying a controller for a high-speed manufacturing system. It is shown that they can be used to prove liveness and safety properties, including qualitative aspects of system performance. The problem of verifying quantitative real-time properties is addressed by developing a further technique which combines the formalisms of timed Petri nets and real-time temporal logic. A unifying feature of these techniques is the common temporal description of the Petri net. A common problem with Petri net based techniques is the complexity associated with generating the reachability graph. This thesis addresses the problem by using concurrency sets to generate a partial reachability graph pertaining to a particular state. These sets also allow each state to be checked for the presence of inconsistencies and hazards. The problem of designing a controller for the high-speed manufacturing system is also considered. The approach adopted involves the use of a model-based controller: this type of controller uses the Petri net models developed, thus preserving the properties already proven of the controller. It also contains a model of the physical system which is synchronised to the real application to provide timely responses. The various ways of forming the synchronisation between these processes are considered, and the resulting nets are analysed using concurrency sets.
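The state-space problem that concurrency sets are used to tame is the growth of the reachability graph. The sketch below is illustrative Python, not the thesis's concurrency-set technique: it shows the exhaustive breadth-first construction, on an invented two-place net, whose cost motivates generating only a partial graph.

```python
# Exhaustive reachability-graph construction for a toy Petri net. This is
# the structure whose exponential growth causes state space explosion.
from collections import deque

# Each transition maps a name to (tokens consumed, tokens produced) per place.
transitions = {
    "start": ({"idle": 1}, {"busy": 1}),
    "done":  ({"busy": 1}, {"idle": 1}),
}

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return {p: n for p, n in m.items() if n}     # drop emptied places

def reachability_graph(initial):
    key = lambda m: tuple(sorted(m.items()))     # hashable marking
    seen, edges, queue = {key(initial)}, [], deque([initial])
    while queue:                                 # breadth-first search
        m = queue.popleft()
        for name, (pre, post) in transitions.items():
            if enabled(m, pre):
                m2 = fire(m, pre, post)
                edges.append((m, name, m2))
                if key(m2) not in seen:
                    seen.add(key(m2))
                    queue.append(m2)
    return edges

for src, t, dst in reachability_graph({"idle": 1}):
    print(src, f"--{t}-->", dst)
```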
Abstract:
Service-based systems that are dynamically composed at run time to provide complex, adaptive functionality are currently one of the main development paradigms in software engineering. However, the Quality of Service (QoS) delivered by these systems remains an important concern, and needs to be managed in an equally adaptive and predictable way. To address this need, we introduce a novel, tool-supported framework for the development of adaptive service-based systems called QoSMOS (QoS Management and Optimisation of Service-based systems). QoSMOS can be used to develop service-based systems that achieve their QoS requirements through dynamically adapting to changes in the system state, environment and workload. QoSMOS service-based systems translate high-level QoS requirements specified by their administrators into probabilistic temporal logic formulae, which are then formally and automatically analysed to identify and enforce optimal system configurations. The QoSMOS self-adaptation mechanism can handle reliability- and performance-related QoS requirements, and can be integrated into newly developed solutions or legacy systems. The effectiveness and scalability of the approach are validated using simulations and a set of experiments based on an implementation of an adaptive service-based system for remote medical assistance.
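The kind of analysis QoSMOS automates can be illustrated with a toy example: checking a probabilistic temporal logic requirement of the form P>=0.98 [ F success ] against a Markov-chain model of a service operation. The three-state model, the 0.98 bound and all numbers below are invented for this sketch; QoSMOS itself hands such formulae to a probabilistic model checker rather than solving them by hand.

```python
# Toy check of a probabilistic temporal logic requirement on a service model.
import numpy as np

# States: 0 = trying, 1 = success (absorbing), 2 = failed (absorbing).
P = np.array([[0.10, 0.85, 0.05],
              [0.00, 1.00, 0.00],
              [0.00, 0.00, 1.00]])

# Probability of eventually reaching "success" from each transient state:
# solve x = P_tt x + P_ts (standard absorbing Markov chain equations).
transient, target = [0], 1
A = np.eye(len(transient)) - P[np.ix_(transient, transient)]
b = P[transient, target]
x = np.linalg.solve(A, b)
print(x[0], x[0] >= 0.98)   # 0.944..., so this configuration fails the bound
```

A self-adaptive system would react to such a failed check by switching to a configuration (for example a different service binding) whose model satisfies the requirement.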
Abstract:
Cloud computing is a new technological paradigm offering computing infrastructure, software and platforms as a pay-as-you-go, subscription-based service. Many potential customers of cloud services require essential cost assessments to be undertaken before transitioning to the cloud. Current assessment techniques are imprecise as they rely on simplified specifications of resource requirements that fail to account for probabilistic variations in usage. In this paper, we address these problems and propose a new probabilistic pattern modelling (PPM) approach to cloud costing and resource usage verification. Our approach is based on a concise expression of probabilistic resource usage patterns translated to Markov decision processes (MDPs). Key costing and usage queries are identified and expressed in a probabilistic variant of temporal logic and calculated to a high degree of precision using quantitative verification techniques. The PPM cost assessment approach has been implemented as a Java library and validated with a case study and scalability experiments. © 2012 Springer-Verlag Berlin Heidelberg.
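A minimal sketch of the kind of costing query the paper verifies: the minimum expected cost of a resource-provisioning policy when usage varies probabilistically. The VM prices, demand probability and penalty below are invented; the paper's PPM approach expresses such queries in probabilistic temporal logic over an MDP and solves them with quantitative verification rather than directly as here.

```python
# Hedged sketch: minimum expected cost of a 24-hour provisioning policy
# under probabilistic hourly demand. All numbers are invented.
VM_COST = {"small": 1.0, "large": 3.0}   # cost per hour of each action
OVERFLOW_PENALTY = 5.0                   # on-demand top-up if demand is high
P_HIGH = 0.3                             # probability of high demand per hour

def min_expected_cost(hours):
    # Demand is i.i.d. per hour here, so the optimal action is the same
    # each hour and the MDP's value iteration collapses to a per-hour
    # minimisation over the two actions.
    c_small = VM_COST["small"] + P_HIGH * OVERFLOW_PENALTY
    c_large = VM_COST["large"]
    return hours * min(c_small, c_large)

print(min_expected_cost(24))   # 60.0: reserving small VMs is cheaper here
```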
Abstract:
Processes of European integration and growing consumer scrutiny of public services have served to place the spotlight on the traditional French model of public/private interaction in the urban services domain. This article discusses recent debates within France over the institutionalised approach to local public/private partnership, and presents case study evidence from three urban agglomerations of a possible divergence from this approach. Drawing on the work of the French academic Dominique Lorrain, whose historical institutionalist accounts of the French model are perhaps the most comprehensive and best known, the article develops two hypotheses of institutional change, one from the historical institutionalist perspective of institutional stability and persistence, and the other from an explicitly sociological perspective, which emphasises the legitimating benefits of following appropriate rules of conduct. It argues that further studying the French model as an institution offers valuable empirical insight into processes of institutional change and persistence. © 2004 Taylor & Francis Ltd.
Abstract:
In this paper we propose a data envelopment analysis (DEA) based method for assessing the comparative efficiencies of units operating production processes where input-output levels are inter-temporally dependent. One cause of inter-temporal dependence between input and output levels is capital stock, which influences output levels over many production periods. Such units cannot be assessed by traditional or 'static' DEA, which assumes input-output correspondences are contemporaneous in the sense that the output levels observed in a time period are the product solely of the input levels observed during that same period. The method developed in the paper overcomes the problem of inter-temporal input-output dependence by using input-output 'paths' mapped out by operating units over time as the basis of assessing them. As an application we compare the results of the dynamic and static models for a set of UK universities. The paper suggests that the dynamic model captures efficiency better than the static model. © 2003 Elsevier Inc. All rights reserved.
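For readers unfamiliar with DEA, the sketch below sets up the classical 'static' input-oriented CCR efficiency model as a linear programme, with invented data for three units, one input and one output; the paper's dynamic method replaces these single-period input-output columns with paths traced over time.

```python
# Hedged sketch of static (CCR, input-oriented) DEA as a linear programme.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 3.0]])   # inputs:  rows = inputs,  cols = units
Y = np.array([[1.0, 2.0, 2.5]])   # outputs: rows = outputs, cols = units

def ccr_efficiency(unit):
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]                   # minimise theta over [theta, lambda]
    A_in = np.c_[-X[:, [unit]], X]                # X @ lam <= theta * x0
    A_out = np.c_[np.zeros((Y.shape[0], 1)), -Y]  # Y @ lam >= y0
    A = np.vstack([A_in, A_out])
    b = np.r_[np.zeros(X.shape[0]), -Y[:, unit]]
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                                # efficiency score theta*

for u in range(3):
    print(f"unit {u}: efficiency = {ccr_efficiency(u):.3f}")  # 0.6, 0.6, 1.0
```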
Abstract:
A new general linear model (GLM) beamformer method is described for processing magnetoencephalography (MEG) data. A standard nonlinear beamformer is used to determine the time course of neuronal activation for each point in a predefined source space. A Hilbert transform gives the envelope of oscillatory activity at each location in any chosen frequency band (not necessary in the case of sustained (DC) fields), enabling the general linear model to be applied and a volumetric T statistic image to be determined. The new method is illustrated by a two-source simulation (sustained field and 20 Hz) and is shown to provide accurate localization. The method is also shown to locate accurately the increasing and decreasing gamma activities to the temporal and frontal lobes, respectively, in the case of a scintillating scotoma. The new method brings the advantages of the general linear model to the analysis of MEG data and should prove useful for the localization of changing patterns of activity across all frequency ranges including DC (sustained fields). © 2004 Elsevier Inc. All rights reserved.
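The envelope-plus-GLM step can be sketched numerically. The fragment below simulates one source-space time course, takes the Hilbert envelope, and fits a general linear model to obtain a T statistic for that location; the beamformer source-reconstruction stage is omitted and all signal parameters are simplified inventions, not MEG data.

```python
# Sketch of the Hilbert-envelope + GLM step on a simulated source trace.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs = 250
t = np.arange(0, 10, 1 / fs)
box = (np.sin(2 * np.pi * 0.5 * t) > 0).astype(float)   # on/off task design
signal = box * np.sin(2 * np.pi * 20 * t)               # 20 Hz bursts when 'on'
data = signal + 0.5 * rng.standard_normal(t.size)       # noisy source trace

envelope = np.abs(hilbert(data))             # envelope of oscillatory activity
X = np.c_[box, np.ones_like(box)]            # GLM design: regressor + mean
beta, *_ = np.linalg.lstsq(X, envelope, rcond=None)
resid = envelope - X @ beta
dof = X.shape[0] - X.shape[1]
se = np.sqrt(resid @ resid / dof * np.linalg.inv(X.T @ X)[0, 0])
print("T statistic at this source-space point:", beta[0] / se)
```

Repeating this at every point in the predefined source space yields the volumetric T statistic image described above.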
Abstract:
How does the brain combine spatio-temporal signals from the two eyes? We quantified binocular summation as the improvement in 2AFC contrast sensitivity for flickering gratings seen by two eyes compared with one. Binocular gratings in-phase showed sensitivity up to 1.8 times higher, suggesting nearly linear summation of contrasts. The binocular advantage decreased to 1.4 at lower spatial and higher temporal frequencies (0.25 cycle deg⁻¹, 30 Hz). Dichoptic, antiphase gratings showed only a small binocular advantage, by a factor of 1.1 to 1.2, but no evidence of cancellation. We present a signal-processing model to account for the contrast-sensitivity functions and the pattern of binocular summation. It has linear sustained and transient temporal filters, nonlinear transduction, and half-wave rectification that creates ON and OFF channels. Binocular summation occurs separately within ON and OFF channels, thus explaining the phase-specific binocular advantage. The model also accounts for earlier findings on detection of brief antiphase flashes and the surprising finding that dichoptic antiphase flicker is seen as frequency-doubled (Cavonius et al, 1992, Ophthalmic and Physiological Optics 12, 153-156). [Supported by EPSRC project GR/S74515/01].
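The phase specificity falls out of summing within rectified channels, which a toy calculation makes concrete. In the sketch below (invented waveforms, with the model's filtering and transduction stages omitted), in-phase signals double the within-channel drive while antiphase signals do not, giving the 2x versus 1x bounds that noise and nonlinearity pull towards the observed 1.8 and 1.1-1.2.

```python
# Toy demonstration: half-wave rectification creates ON and OFF channels,
# and summing the two eyes within each channel is phase-specific.
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
left = np.sin(2 * np.pi * 4 * t)               # 4 Hz flicker, left eye

def channel_response(l, r):
    on = np.clip(l, 0, None) + np.clip(r, 0, None)     # ON channel: sum eyes
    off = np.clip(-l, 0, None) + np.clip(-r, 0, None)  # OFF channel: sum eyes
    return max(on.max(), off.max())                    # peak channel drive

mono = channel_response(left, np.zeros_like(left))
in_phase = channel_response(left, left)        # same phase in both eyes
antiphase = channel_response(left, -left)      # opposite phase (dichoptic)
print(in_phase / mono, antiphase / mono)       # ~2.0 vs ~1.0
```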
Abstract:
Blurred edges appear sharper in motion than when they are stationary. We proposed a model of this motion sharpening that invokes a local, nonlinear contrast transducer function (Hammett et al, 1998 Vision Research 38 2099-2108). Response saturation in the transducer compresses or 'clips' the input spatial waveform, rendering the edges as sharper. To explain the increasing distortion of drifting edges at higher speeds, the degree of nonlinearity must increase with speed or temporal frequency. A dynamic contrast gain control before the transducer can account for both the speed dependence and approximate contrast invariance of motion sharpening (Hammett et al, 2003 Vision Research, in press). We show here that this model also predicts perceived sharpening of briefly flashed and flickering edges, and we show that the model can account fairly well for experimental data from all three modes of presentation (motion, flash, and flicker). At moderate durations and lower temporal frequencies the gain control attenuates the input signal, thus protecting it from later compression by the transducer. The gain control is somewhat sluggish, and so it suffers both a slow onset, and loss of power at high temporal frequencies. Consequently, brief presentations and high temporal frequencies of drift and flicker are less protected from distortion, and show greater perceptual sharpening.
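The clipping intuition can be shown in a few lines: pass a blurred edge profile through a saturating transducer and the transition through zero steepens. The tanh edge and the particular saturating function below are illustrative stand-ins, not the model's fitted components, and the gain-control stage that makes the distortion speed-dependent is omitted.

```python
# Toy demonstration of edge sharpening by a saturating contrast transducer.
import numpy as np

x = np.linspace(-1, 1, 201)
edge = np.tanh(x / 0.4)            # blurred edge profile (signed contrast)

def transducer(c, s=0.1):
    # Saturating nonlinearity: compresses ('clips') high contrasts while
    # leaving a steep transition through zero, so the edge looks sharper.
    return c / (np.abs(c) + s)

sharpened = transducer(edge)
# Gradient at the edge centre rises sharply after transduction:
print(np.gradient(edge, x)[100], np.gradient(sharpened, x)[100])
```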
Abstract:
A well-known property of orientation-tuned neurons in the visual cortex is that they are suppressed by the superposition of an orthogonal mask. This phenomenon has been explained in terms of physiological constraints (synaptic depression), engineering solutions for components with poor dynamic range (contrast normalization) and fundamental coding strategies for natural images (redundancy reduction). A common but often tacit assumption is that the suppressive process is equally potent at different spatial and temporal scales of analysis. To determine whether it is so, we measured psychophysical cross-orientation masking (XOM) functions for flickering horizontal Gabor stimuli over wide ranges of spatio-temporal frequency and contrast. We found that orthogonal masks raised contrast detection thresholds substantially at low spatial frequencies and high temporal frequencies (high speeds), and that small and unexpected levels of facilitation were evident elsewhere. The data were well fit by a functional model of contrast gain control, where (i) the weight of suppression increased with the ratio of temporal to spatial frequency and (ii) the weight of facilitatory modulation was the same for all conditions, but outcompeted by suppression at higher contrasts. These results (i) provide new constraints for models of primary visual cortex, (ii) associate XOM and facilitation with the transient magno- and sustained parvostreams, respectively, and (iii) reconcile earlier conflicting psychophysical reports on XOM.
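A hedged sketch of the general gain-control form used in this literature: the orthogonal mask contributes only to the suppressive denominator, so detection thresholds rise with mask contrast. All parameter values below are invented, and the paper's key ingredient, a suppressive weight that grows with the ratio of temporal to spatial frequency, is frozen here at a single value.

```python
# Cross-orientation masking in a generic contrast gain-control equation.
import numpy as np

def response(c_target, c_mask, p=2.4, q=2.0, z=0.01, w=1.0):
    # Excitation over a suppressive pool; the orthogonal mask enters only
    # the denominator. In the paper's model w varies with the
    # temporal/spatial frequency ratio; here it is fixed (invented value).
    return c_target ** p / (z + c_target ** q + w * c_mask ** q)

criterion = 0.5                       # response level defining 'threshold'
for mask in (0.0, 0.16, 0.32):
    cs = np.linspace(1e-4, 1.0, 100_000)
    thr = cs[np.argmax(response(cs, mask) >= criterion)]
    print(f"mask contrast {mask:.2f}: detection threshold ~ {thr:.3f}")
```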
Abstract:
The size frequency distributions of diffuse, primitive and cored senile plaques (SP) were studied in single sections of the temporal lobe from 10 patients with Alzheimer’s disease (AD). The size distribution curves were unimodal and positively skewed. The size distribution curve of the diffuse plaques was shifted towards larger plaques while those of the neuritic and cored plaques were shifted towards smaller plaques. The neuritic/diffuse plaque ratio was maximal in the 11–30 micron size class and the cored/diffuse plaque ratio in the 21–30 micron size class. The size distribution curves of the three types of plaque deviated significantly from a log-normal distribution. Distributions expressed on a logarithmic scale were ‘leptokurtic’, i.e. with an excess of observations near the mean. These results suggest that SP in AD grow to within a more restricted size range than predicted from a log-normal model. In addition, there appear to be differences in the patterns of growth of diffuse, primitive and cored plaques. If neuritic and cored plaques develop from earlier diffuse plaques, then smaller diffuse plaques are more likely to be converted to mature plaques.
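The statistical check described, deviation of log-transformed sizes from normality together with positive excess kurtosis, is easy to reproduce on simulated data. The Laplace-distributed log sizes below are a stand-in chosen only because they are leptokurtic; they are not the study's measurements.

```python
# Testing log-normality and leptokurtosis of (simulated) plaque sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated diameters in microns whose logs follow a Laplace (leptokurtic)
# law, standing in for measured plaque sizes.
sizes = np.exp(rng.laplace(loc=np.log(25), scale=0.3, size=500))

log_sizes = np.log(sizes)
print("excess kurtosis of log sizes:", stats.kurtosis(log_sizes))  # > 0
stat, p = stats.shapiro(log_sizes)    # normality of logs = log-normal fit
print("Shapiro-Wilk on log sizes: W =", round(float(stat), 3), "p =", p)
```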
Abstract:
Masking, adaptation, and summation paradigms have been used to investigate the characteristics of early spatio-temporal vision. Each has been taken to provide evidence for (i) oriented and (ii) nonoriented spatial-filtering mechanisms. However, subsequent findings suggest that the evidence for nonoriented mechanisms has been misinterpreted: those experiments might have revealed the characteristics of suppression (eg, gain control), not excitation, or merely the isotropic subunits of the oriented detecting mechanisms. To shed light on this, we used all three paradigms to focus on the ‘high-speed’ corner of spatio-temporal vision (low spatial frequency, high temporal frequency), where cross-oriented achromatic effects are greatest. We used flickering Gabor patches as targets and a 2IFC procedure for monocular, binocular, and dichoptic stimulus presentations. To account for our results, we devised a simple model involving an isotropic monocular filter-stage feeding orientation-tuned binocular filters. Both filter stages are adaptable, and their outputs are available to the decision stage following nonlinear contrast transduction. However, the monocular isotropic filters (i) adapt only to high-speed stimuli—consistent with a magnocellular subcortical substrate—and (ii) benefit decision making only for high-speed stimuli (ie, isotropic monocular outputs are available only for high-speed stimuli). According to this model, the visual processes revealed by masking, adaptation, and summation are related but not identical.
Abstract:
We introduce a continuum model describing data losses in a single node of a packet-switched network (like the Internet) which preserves the discrete nature of the data loss process. By construction, the model has critical behavior with a sharp transition from exponentially small to finite losses with increasing data arrival rate. We show that such a model exhibits strong fluctuations in the loss rate at the critical point and non-Markovian power-law correlations in time, in spite of the Markovian character of the data arrival process. The continuum model allows for rather general incoming data packet distributions and can be naturally generalized to consider the buffer server idleness statistics.
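The critical behaviour has a familiar discrete counterpart: a finite buffer served at a fixed rate loses almost nothing below the critical arrival rate and a finite fraction of packets above it. The simulation below is that discrete analogue, not the paper's continuum model, with invented buffer and rate parameters.

```python
# Discrete single-node loss simulation: finite buffer, Bernoulli arrivals
# and services per tick. Losses turn on sharply near arrival = service rate.
import random

def loss_rate(arrival_p, service_p=0.5, buffer_size=20,
              steps=200_000, seed=0):
    rng = random.Random(seed)
    queue, lost, arrived = 0, 0, 0
    for _ in range(steps):
        if rng.random() < arrival_p:        # packet arrives this tick
            arrived += 1
            if queue < buffer_size:
                queue += 1
            else:
                lost += 1                   # buffer full: packet dropped
        if queue and rng.random() < service_p:
            queue -= 1                      # packet served this tick
    return lost / max(arrived, 1)

for p in (0.3, 0.45, 0.5, 0.55, 0.7):
    print(f"arrival rate {p:.2f}: loss fraction {loss_rate(p):.4f}")
```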
Abstract:
We measured the properties of interocular suppression in strabismic amblyopes and compared these to dichoptic masking in binocularly normal observers. We used a dichoptic version of the well-established probed-sinewave paradigm that measured sensitivity to a brief target stimulus (one of four letters to be discriminated) in the amblyopic eye at different times relative to a suppression-inducing mask in the fixing eye. This was done using both sinusoidal steady state and transient approaches. The suppression-inducing masks were either modulations of luminance or contrast (full field, just overlaying the target, or just surrounding the target). Our results were interpreted using a descriptive model that included contrast gain control and spatio-temporal filtering prior to excitatory binocular combination. The suppression we measured, other than in magnitude, was not fundamentally different from normal dichoptic masking: lowpass spatio-temporal properties with similar contributions from both surround and overlay suppression.
Abstract:
This article develops a model of practice-driven institutional change, that is, change that originates in the everyday work of individuals but results in a shift in field-level logic. In demonstrating how improvisations at work can generate institutional change, we attend to the earliest moments of change, which extant research has neglected, and we contrast our account with existing ones that focus on active entrepreneurship and the contested nature of change. We outline the specific mechanisms by which change emerges from everyday work, becomes justified, and diffuses within an organization and field, as well as the precipitating and enabling dynamics that trigger and condition these mechanisms. © Academy of Management Journal.