24 results for Early Warning System
in Aston University Research Archive
Abstract:
Self-adaptation is emerging as an increasingly important capability for many applications, particularly those deployed in dynamically changing environments, such as ecosystem monitoring and disaster management. One key challenge posed by Dynamically Adaptive Systems (DASs) is the need to handle changes to the requirements and corresponding behavior of a DAS in response to varying environmental conditions. Berry et al. previously identified four levels of requirements engineering (RE) that should be performed for a DAS. In this paper, we propose the Levels of RE for Modeling, which reify the original levels to describe the RE modeling work done by DAS developers. Specifically, we identify four types of developers: the system developer, the adaptation scenario developer, the adaptation infrastructure developer, and the DAS research community. Each level corresponds to the work of a different type of developer to construct goal model(s) specifying their requirements. We then leverage the Levels of RE for Modeling to propose two complementary processes for performing RE for a DAS. We describe our experiences with applying this approach to GridStix, an adaptive flood warning system deployed to monitor the River Ribble in Yorkshire, England.
Abstract:
Poly(methyl methacrylate) (PMMA) based polymer optical fiber Bragg gratings have been used for measuring the water activity of aviation fuel. Jet A-1 samples with water content ranging from 100% ERH (wet fuel) to 10 ppm (dried fuel) have been conditioned and calibrated for measurement. The PMMA based optical fiber grating exhibits a consistent response and a good sensitivity of 59 ± 3 pm/ppm (water content in mass). This water activity measurement allows PMMA based optical fiber gratings to detect very small amounts of water in fuels that have a low water saturation point, potentially giving early warning of unsafe operation of a fuel system. © 2014 SPIE.
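As a rough illustration of how the reported sensitivity might be used, the sketch below converts a measured Bragg wavelength shift into an estimated water content. The 59 ± 3 pm/ppm figure comes from the abstract; the function names, baseline handling and example reading are illustrative assumptions.

```python
# Minimal sketch (not from the paper): converting a measured Bragg wavelength
# shift into an estimated water content, using the reported sensitivity of
# 59 +/- 3 pm per ppm of water (by mass). Names and the reference-wavelength
# handling are illustrative assumptions.

SENSITIVITY_PM_PER_PPM = 59.0      # reported sensitivity
SENSITIVITY_UNCERTAINTY = 3.0      # reported +/- uncertainty

def water_content_ppm(shift_pm: float) -> tuple[float, float]:
    """Estimate water content (ppm by mass) and its uncertainty
    from a Bragg wavelength shift measured in picometres."""
    estimate = shift_pm / SENSITIVITY_PM_PER_PPM
    # Propagate only the sensitivity uncertainty (measurement noise ignored).
    uncertainty = estimate * SENSITIVITY_UNCERTAINTY / SENSITIVITY_PM_PER_PPM
    return estimate, uncertainty

if __name__ == "__main__":
    shift = 590.0  # pm, hypothetical reading relative to a dry-fuel baseline
    ppm, err = water_content_ppm(shift)
    print(f"Estimated water content: {ppm:.1f} +/- {err:.1f} ppm")
```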
Abstract:
Hospitals can experience difficulty in detecting and responding to early signs of patient deterioration, leading to late intensive care referrals, excess mortality and morbidity, and increased hospital costs. Our study aims to explore potential indicators of physiological deterioration through the analysis of vital signs. The dataset used comprises heart rate (HR) measurements from the MIMIC II waveform database, taken from six patients admitted to the Intensive Care Unit (ICU) and diagnosed with severe sepsis. Different indicators were considered: 1) generic early warning indicators used in ecosystems analysis (autocorrelation at lag 1 (ACF1), standard deviation (SD), skewness, kurtosis and heteroskedasticity) and 2) entropy analysis (kernel entropy and multiscale entropy). Our preliminary findings suggest that when a critical transition is approaching, the equilibrium state changes, which is visible in the ACF1 and SD values but also in the entropy analysis. Entropy makes it possible to characterize the complexity of the time series during the hospital stay and can be used as an indicator of regime shifts in a patient's condition. One of the main problems is its dependence on the scale used. Our results demonstrate that different entropy scales should be used depending on the level of entropy observed.
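The generic early-warning indicators named above can be computed over sliding windows of the heart-rate series; rising lag-1 autocorrelation and standard deviation are the classic signatures of an approaching critical transition. The sketch below is a minimal illustration, assuming NumPy, an evenly sampled HR array and an arbitrary window length; it is not the study's analysis pipeline.

```python
# Minimal sketch of two generic early-warning indicators over sliding windows:
# lag-1 autocorrelation (ACF1) and standard deviation (SD).
import numpy as np

def acf1(x: np.ndarray) -> float:
    """Lag-1 autocorrelation of a 1-D signal."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return float(np.dot(x[:-1], x[1:]) / denom) if denom > 0 else 0.0

def sliding_indicators(hr: np.ndarray, window: int = 300):
    """Yield (ACF1, SD) for each sliding window of the heart-rate series."""
    for start in range(0, len(hr) - window + 1):
        seg = hr[start:start + window]
        yield acf1(seg), float(seg.std())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hr = 80 + np.cumsum(rng.normal(0, 0.1, 5000))  # synthetic drifting HR series
    acf_vals, sd_vals = zip(*sliding_indicators(hr))
    print(f"ACF1 range: {min(acf_vals):.2f}..{max(acf_vals):.2f}, "
          f"SD range: {min(sd_vals):.2f}..{max(sd_vals):.2f}")
```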
Abstract:
Describes the impact of the English Landlord and Tenant (Covenants) Act 1995, reforming liability in the context of new leases, extending the 'touching and concerning' requirement so all covenants 'run with the land' (with some exceptions), and abolishing the enduring liability of the original tenants and landlords. Explains that landlords will have more freedom to prescribe in advance the circumstances in which they consent to an assignment, referring also to changes in default notices requiring an 'early warning' to defaulters, and overriding leases, with a remedy for former tenants. Expects future leases to be shorter as landlords realize they cannot hold original tenants liable any more.
Abstract:
Engineering adaptive software is an increasingly complex task. Here, we demonstrate Genie, a tool that supports the modelling, generation, and operation of highly reconfigurable, component-based systems. We showcase how Genie is used in two case-studies: i) the development and operation of an adaptive flood warning system, and ii) a service discovery application. In this context, adaptation is enabled by the Gridkit reflective middleware platform.
Abstract:
Large-scale disasters are constantly occurring around the world, and in many cases evacuation of regions of a city is needed. 'Operational Research/Management Science' (OR/MS) has been widely used in emergency planning for over five decades. Warning dissemination, evacuee transportation and shelter management are three 'Evacuation Support Functions' (ESF) generic to many hazards. This thesis has adopted a case study approach to illustrate the importance of an integrated approach to evacuation planning, and particularly the role of OR/MS models. In the warning dissemination phase, uncertainty in households' behaviour as 'warning informants' has been investigated along with uncertainties in the warning system. An agent-based model (ABM) was developed for ESF-1 with households as agents and 'warning informant' behaviour as the agent behaviour. The model was used to study warning dissemination effectiveness under various conditions of the official channel. In the transportation phase, uncertainties in households' behaviour such as departure time (a function of ESF-1), means of transport and destination have been investigated. Households could evacuate as pedestrians, by car or by evacuation buses. An ABM was developed to study the evacuation performance (measured in evacuation travel time). In this thesis, a holistic approach for planning public evacuation shelters, called the 'Shelter Information Management System' (SIMS), has been developed. A generic allocation framework was developed to match available shelter capacity to shelter demand while considering evacuation travel time; this was formulated using integer programming. In the sheltering phase, the uncertainty in household shelter choices (nearest/allocated/convenient) has been studied for its impact on allocation policies using sensitivity analyses. Using analyses from the models and a detailed examination of household states from 'warning to safety', it was found that the three ESFs, though sequential in time, have many interdependencies from the perspective of evacuation planning. This thesis has illustrated an OR/MS-based integrated approach that includes and goes beyond single-ESF preparedness. The developed approach will help in understanding the inter-linkages of the three evacuation phases and in preparing multi-agency-based evacuation plans.
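As context for the integer-programming formulation mentioned above, the sketch below poses a toy shelter-allocation problem: assign evacuees from demand zones to shelters so that total travel time is minimised subject to capacity. The data, variable names and objective are illustrative assumptions (using the PuLP package), not the SIMS formulation from the thesis.

```python
# Minimal integer-programming sketch of allocating evacuation demand zones to
# shelters so that total travel time is minimised, subject to shelter capacity.
# Data, names and the exact objective are illustrative assumptions. Requires PuLP.
import pulp

demand = {"zone_a": 120, "zone_b": 80}            # evacuees per zone
capacity = {"shelter_1": 150, "shelter_2": 100}   # places per shelter
travel_time = {                                   # minutes from zone to shelter
    ("zone_a", "shelter_1"): 10, ("zone_a", "shelter_2"): 25,
    ("zone_b", "shelter_1"): 20, ("zone_b", "shelter_2"): 15,
}

prob = pulp.LpProblem("shelter_allocation", pulp.LpMinimize)
x = {  # number of evacuees from each zone sent to each shelter
    (z, s): pulp.LpVariable(f"x_{z}_{s}", lowBound=0, cat="Integer")
    for z in demand for s in capacity
}
# Objective: minimise total evacuation travel time.
prob += pulp.lpSum(travel_time[z, s] * x[z, s] for z in demand for s in capacity)
for z in demand:    # every evacuee must be assigned to some shelter
    prob += pulp.lpSum(x[z, s] for s in capacity) == demand[z]
for s in capacity:  # shelters cannot exceed capacity
    prob += pulp.lpSum(x[z, s] for z in demand) <= capacity[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (z, s), var in x.items():
    if var.value():
        print(f"{z} -> {s}: {int(var.value())} evacuees")
```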
Abstract:
The objective was to identify evidence to support the use of specific harms for the development of a children and young people's safety thermometer (CYPST). We searched PubMed, Web of Knowledge, and the Cochrane Library post-1999 for studies in pediatric settings about pain, skin integrity, extravasation injury, and use of pediatric early warning scores (PEWS). Following screening, nine relevant articles were included. Convergent synthesis methods were used, drawing on thematic analysis to combine findings from studies using a range of methods (qualitative, quantitative, and mixed methods). A review of PEWS was identified, so other studies on this issue were excluded. No relevant studies about extravasation injury were identified. The synthesized results therefore focused on pain and skin integrity. Measurement and perception of pain were complex and not always carried out according to best practice. Skin abrasions were common and mostly associated with device-related injuries. The findings demonstrate a need for further work on perceptions of pain and effective communication of concerns about pain between parents and nursing staff. Strategies for reducing device-related injuries warrant further research focusing on prevention. Together with the review of PEWS, these synthesized findings support the inclusion of pain, skin integrity, and PEWS in the CYPST.
Abstract:
OBJECTIVE: The aim of this study was to devise a scoring system that could aid in predicting neurologic outcome at the onset of neonatal seizures. METHODS: A total of 106 newborns who had neonatal seizures and were consecutively admitted to the NICU of the University of Parma from January 1999 through December 2004 were prospectively followed up, and neurologic outcome was assessed at 24 months' postconceptional age. We conducted a retrospective analysis on this cohort to identify variables that were significantly related to adverse outcome and to develop a scoring system that could provide early prognostic indications. RESULTS: A total of 70 (66%) of 106 infants had an adverse neurologic outcome. Six variables were identified as the most important independent risk factors for adverse outcome and were used to construct a scoring system: birth weight, Apgar score at 1 minute, neurologic examination at seizure onset, cerebral ultrasound, efficacy of anticonvulsant therapy, and presence of neonatal status epilepticus. Each variable was scored from 0 to 3 to represent the range from "normal" to "severely abnormal." A total composite score was computed by addition of the raw scores of the 6 variables. This score ranged from 0 to 12. A cutoff score of ≥4 provided the greatest sensitivity and specificity. CONCLUSIONS: This scoring system may offer an easy, rapid, and reliable prognostic indicator of neurologic outcome after the onset of neonatal seizures. A final assessment of the validity of this score in routine clinical practice will require independent validation in other centers.
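A minimal sketch of how the composite score and the reported cutoff could be applied. The six variable names come from the abstract; the example raw scores are hypothetical, and no clinical scoring rules beyond the 0 (normal) to 3 (severely abnormal) range are assumed.

```python
# Minimal sketch of computing the composite score and applying the reported cutoff.
# The example raw scores are hypothetical, illustrative values only.

CUTOFF = 4  # a composite score >= 4 gave the greatest sensitivity and specificity

def composite_score(raw_scores: dict[str, int]) -> int:
    """Sum the raw scores (each 0-3) of the six prognostic variables."""
    for name, value in raw_scores.items():
        if not 0 <= value <= 3:
            raise ValueError(f"{name} must be scored between 0 and 3")
    return sum(raw_scores.values())

if __name__ == "__main__":
    example = {                # hypothetical infant, illustrative values only
        "birth_weight": 1,
        "apgar_1min": 0,
        "neurologic_exam_at_onset": 2,
        "cerebral_ultrasound": 1,
        "anticonvulsant_efficacy": 0,
        "status_epilepticus": 0,
    }
    score = composite_score(example)
    print(f"Composite score: {score}; "
          f"{'adverse outcome predicted' if score >= CUTOFF else 'below cutoff'}")
```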
Abstract:
We describe a template model for perception of edge blur and identify a crucial early nonlinearity in this process. The main principle is to spatially filter the edge image to produce a 'signature', and then find which of a set of templates best fits that signature. Psychophysical blur-matching data strongly support the use of a second-derivative signature, coupled to Gaussian first-derivative templates. The spatial scale of the best-fitting template signals the edge blur. This model predicts blur-matching data accurately for a wide variety of Gaussian and non-Gaussian edges, but it suffers a bias when edges of opposite sign come close together in sine-wave gratings and other periodic images. This anomaly suggests a second general principle: the region of an image that 'belongs' to a given edge should have a consistent sign or direction of luminance gradient. Segmentation of the gradient profile into regions of common sign is achieved by implementing the second-derivative 'signature' operator as two first-derivative operators separated by a half-wave rectifier. This multiscale system of nonlinear filters predicts perceived blur accurately for periodic and aperiodic waveforms. We also outline its extension to 2-D images and infer the 2-D shape of the receptive fields.
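A minimal 1-D sketch of the nonlinear signature stage described above: a Gaussian first-derivative operator, a half-wave rectifier, then a second first-derivative operator, followed by picking the best-fitting Gaussian first-derivative template. The filter scales, the template-matching criterion and the test edge are illustrative assumptions rather than the paper's exact parameters.

```python
# Minimal 1-D sketch: two first-derivative operators separated by a half-wave
# rectifier, applied to a blurred edge, then template matching over scales.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.special import erf

def signature(profile: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Two first-derivative operators separated by a half-wave rectifier."""
    d1 = gaussian_filter1d(profile, sigma, order=1)      # first derivative
    rectified = np.maximum(d1, 0.0)                      # half-wave rectification
    return gaussian_filter1d(rectified, sigma, order=1)  # second derivative stage

def best_template_scale(sig: np.ndarray, scales=(1, 2, 4, 8, 16)) -> float:
    """Pick the Gaussian first-derivative template that best matches the signature."""
    x = np.arange(len(sig)) - len(sig) / 2
    best, best_corr = scales[0], -np.inf
    for s in scales:
        template = -x * np.exp(-x**2 / (2 * s**2))       # Gaussian first derivative
        corr = np.dot(sig, template) / (np.linalg.norm(sig) * np.linalg.norm(template))
        if corr > best_corr:
            best, best_corr = s, corr
    return best

if __name__ == "__main__":
    x = np.arange(-128, 128)
    edge = 0.5 * (1 + erf(x / (8 * np.sqrt(2))))         # Gaussian-blurred edge
    print("Best-fitting template scale:", best_template_scale(signature(edge)))
```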
Abstract:
How does nearby motion affect the perceived speed of a target region? When a central drifting Gabor patch is surrounded by translating noise, its speed can be misperceived over a fourfold range. Typically, when a surround moves in the same direction, perceived centre speed is reduced; for opposite-direction surrounds it increases. Measuring this illusion for a variety of surround properties reveals that the motion context effects are a saturating function of surround speed (Experiment I) and contrast (Experiment II). Our analyses indicate that the effects are consistent with a subtractive process, rather than with speed being averaged over area. In Experiment III we exploit known properties of the motion system to ask where these surround effects impact. Using 2D plaid stimuli, we find that surround-induced shifts in perceived speed of one plaid component produce substantial shifts in perceived plaid direction. This indicates that surrounds exert their influence early in processing, before pattern motion direction is computed. These findings relate to ongoing investigations of surround suppression for direction discrimination, and are consistent with single-cell findings of direction-tuned suppressive and facilitatory interactions in primary visual cortex (V1).
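To make the contrast between the two candidate accounts concrete, the toy sketch below compares a subtractive surround influence with simple averaging of centre and surround speeds; the gain value and speeds are illustrative assumptions only, not fitted to the data.

```python
# Tiny numerical sketch contrasting a subtractive surround influence with
# spatial averaging of centre and surround speed. Illustrative values only.

def perceived_speed_subtractive(centre: float, surround: float, gain: float = 0.5) -> float:
    """Signed surround speed is subtracted: same-direction surrounds slow the
    centre, opposite-direction (negative) surrounds speed it up."""
    return max(centre - gain * surround, 0.0)

def perceived_speed_averaged(centre: float, surround: float) -> float:
    """Centre and surround speeds are simply averaged over area."""
    return (centre + surround) / 2.0

if __name__ == "__main__":
    centre = 4.0                      # deg/s
    for surround in (4.0, -4.0):      # same-direction vs opposite-direction
        print(f"surround {surround:+.0f} deg/s -> "
              f"subtractive {perceived_speed_subtractive(centre, surround):.1f}, "
              f"averaged {perceived_speed_averaged(centre, surround):.1f}")
```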
Abstract:
Development of the cerebral cortex is influenced by sensory experience during distinct phases of postnatal development known as critical periods. Disruption of experience during a critical period produces neurons that lack specificity for particular stimulus features, such as location in the somatosensory system. Synaptic plasticity is the agent by which sensory experience affects cortical development. Here, we describe, in mice, a developmental critical period that affects plasticity itself. Transient neonatal disruption of signaling via the C-terminal domain of "disrupted in schizophrenia 1" (DISC1)-a molecule implicated in psychiatric disorders-resulted in a lack of long-term potentiation (LTP) (persistent strengthening of synapses) and experience-dependent potentiation in adulthood. Long-term depression (LTD) (selective weakening of specific sets of synapses) and reversal of LTD were present, although impaired, in adolescence and absent in adulthood. These changes may form the basis for the cognitive deficits associated with mutations in DISC1 and the delayed onset of a range of psychiatric symptoms in late adolescence.
Abstract:
The aim of this thesis is to examine the specific contextual factors affecting the applicability and development of the planning, programming, budgeting system (P.P.B.S.) as a systems approach to public sector budgeting. The concept of P.P.B.S. as a systems approach to public sector budgeting will first be developed, and the preliminary hypothesis that general contextual factors may be classified under political, structural and cognitive headings will be put forward. This preliminary hypothesis will be developed and refined using American and early British experience. The refined hypothesis will then be tested in detail in the case of the English health and personal social services (H.P.S.S.). The reasons for this focus are that it is the most recent, the sole remaining, and the most significant example in British central government outside of defence, and is fairly representative of non-defence government programme areas. The method of data collection relies on the examination of unpublished and difficult-to-obtain central government, health and local authority documents, and interviews with senior civil servants and public officials. The conclusion will be that the political constraints on, or factors affecting, P.P.B.S. vary with product characteristics and cultural imperatives on pluralistic decision-making; that structural constraints vary with the degree of coincidence of programme and organisation structure and with the degree of controllability of the organisation; and finally, that cognitive constraints vary according to product characteristics, organisational responsibilities, and analytical effort.
Abstract:
This thesis examines the reasons for Cadburys' move from a city centre site to a greenfield site in Bournville in 1879 and the subsequent development of the factory and the Bournville community. The founding of the Bournville Village Trust by George Cadbury is discussed in relation to the Garden City movement. The welfare and personnel management policies which Cadburys adopted in the 1900s are considered in relation to welfarism in general, especially in the United States. The extent to which the idea of a 'Quaker employer' can explain Cadburys' policies is questioned both methodologically and empirically. The early use of scientific management at Bournville is described and related to Edward Cadbury's writings on the subject. Finally, the institution of a Works Council Scheme in 1918 is described and its uses are discussed. It is concluded that Cadburys instituted a new factory system in this period which consisted of a synthesis of ideas borrowed from elsewhere, and that for a variety of reasons Cadburys was an appropriate site for their implementation.
Abstract:
Faced with a future of rising energy costs, there is a need for industry to manage energy more carefully in order to meet its economic objectives. A problem besetting the growth of energy conservation in the UK is that a large proportion of energy consumption is used in a low-intensity manner in organisations where responsibility for energy efficiency is spread over a large number of personnel who each see only small energy costs. In relation to this problem in the non-energy-intensive industrial sector, an application of an energy management technique known as monitoring and targeting (M & T) has been installed at the Whetstone site of the General Electric Company Limited in an attempt to prove it as a means of motivating line management and personnel to save energy. The energy saving objective for which the M & T was devised is very specific. During early energy conservation work at the site there had been a change from continuous to intermittent heating, but the maintenance of the strategy was receiving a poor level of commitment from line management and performance was some 5%-10% less than expected. The M & T is therefore concerned with heat for space heating, for which a heat metering system was required. Metering of the site's high-pressure hot water system posed technical difficulties and expenditure was also limited. This led to an 'in-house' design being installed for a price less than the commercial equivalent. The timespan of work to achieve an operational heat metering system was 3 years, which meant that energy saving results from the scheme were not observed during the study. If successful, the replication potential lies in the larger non-energy-intensive sites, from which some 30 PT savings could be expected in the UK.
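Monitoring and targeting schemes typically compare metered consumption against an expected (target) value and track the cumulative difference. The sketch below is a generic illustration of that idea using a degree-day regression and a CUSUM, with made-up data; it is not the scheme installed at the Whetstone site.

```python
# Generic illustration of a monitoring-and-targeting (M & T) calculation:
# fit expected heat consumption against heating degree days, then track the
# cumulative difference (CUSUM) between metered and expected consumption.
# Data and the degree-day target model are illustrative assumptions.
import numpy as np

degree_days = np.array([210, 180, 150, 90, 60, 40])       # per period
metered_heat = np.array([520, 470, 410, 280, 215, 170])   # e.g. GJ per period

# Target model: heat = base_load + slope * degree_days (least-squares fit).
slope, base_load = np.polyfit(degree_days, metered_heat, 1)
expected = base_load + slope * degree_days

# CUSUM of (actual - expected): a sustained upward drift flags worsening
# performance and provides an early warning to line management.
cusum = np.cumsum(metered_heat - expected)
for period, value in enumerate(cusum, start=1):
    print(f"period {period}: CUSUM {value:+.1f}")
```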
Abstract:
This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG encompasses both the methods for measuring minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - by directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta and other bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis will be to prove that much more effective and powerful analysis of MEG can be achieved if one were to assume the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in the MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
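One of the dynamical-systems tools relevant to single-channel, unaveraged recordings is a time-delay (Takens) embedding, which reconstructs a state-space trajectory from a scalar time series. The sketch below is a minimal illustration with a synthetic signal; the delay and embedding dimension are illustrative assumptions.

```python
# Minimal sketch of a time-delay (Takens) embedding of a single-channel,
# unaveraged recording. The delay, embedding dimension and synthetic signal
# are illustrative assumptions; real MEG data would be loaded in its place.
import numpy as np

def delay_embed(signal: np.ndarray, dim: int = 3, delay: int = 10) -> np.ndarray:
    """Reconstruct a state-space trajectory from a scalar time series.
    Returns an array of shape (n_points, dim)."""
    n_points = len(signal) - (dim - 1) * delay
    return np.column_stack([signal[i * delay:i * delay + n_points] for i in range(dim)])

if __name__ == "__main__":
    t = np.linspace(0, 20 * np.pi, 5000)
    # Synthetic stand-in for a single MEG channel: oscillations plus dynamic noise.
    x = np.sin(t) + 0.3 * np.sin(2.7 * t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
    traj = delay_embed(x, dim=3, delay=25)
    print("Embedded trajectory shape:", traj.shape)
```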