Abstract:
Remote sensing observations often have correlated errors, but the correlations are typically ignored in data assimilation for numerical weather prediction. The assumption of zero correlations is often used with data thinning methods, resulting in a loss of information. As operational centres move towards higher-resolution forecasting, there is a requirement to retain data providing detail on appropriate scales. Thus an alternative approach to dealing with observation error correlations is needed. In this article, we consider several approaches to approximating observation error correlation matrices: diagonal approximations, eigendecomposition approximations and Markov matrices. These approximations are applied in incremental variational assimilation experiments with a 1-D shallow water model using synthetic observations. Our experiments quantify analysis accuracy in comparison with a reference or ‘truth’ trajectory, as well as with analyses using the ‘true’ observation error covariance matrix. We show that it is often better to include an approximate correlation structure in the observation error covariance matrix than to incorrectly assume error independence. Furthermore, by choosing a suitable matrix approximation, it is feasible and computationally cheap to include error correlation structure in a variational data assimilation algorithm.
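The three approximations named above can be sketched for a toy 1-D problem. This is an illustrative sketch only: the grid size, correlation length-scale, truncation level and Gaussian-shaped "true" correlation below are assumptions, not values from the paper's experiments.

```python
# Three approximations of a dense observation-error correlation matrix
# C_true on a 1-D observation grid (all parameters are illustrative).
import numpy as np

n = 20                                    # number of observations (assumed)
x = np.arange(n)
L = 3.0                                   # correlation length-scale (assumed)
C_true = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / L**2)

# 1) Diagonal approximation: discard all correlations.
C_diag = np.eye(n)

# 2) Eigendecomposition approximation: keep the k leading eigenpairs.
k = 5
w, V = np.linalg.eigh(C_true)             # eigenvalues in ascending order
C_eig = (V[:, -k:] * w[-k:]) @ V[:, -k:].T

# 3) Markov matrix: first-order auto-regressive correlations rho**|i-j|,
#    whose inverse is tridiagonal and hence cheap to apply.
rho = np.exp(-1.0 / L)
C_markov = rho ** np.abs(x[:, None] - x[None, :])

for name, C in [("diag", C_diag), ("eig", C_eig), ("markov", C_markov)]:
    err = np.linalg.norm(C - C_true) / np.linalg.norm(C_true)
    print(f"{name}: relative error {err:.3f}")
```

The Markov form is attractive in practice precisely because its inverse is sparse, which is what a variational cost function actually needs.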
The capability-affordance model: a method for analysis and modelling of capabilities and affordances
Abstract:
Existing capability models lack qualitative and quantitative means to compare business capabilities. This paper extends previous work and uses affordance theories to consistently model and analyse capabilities. We use the concept of objective and subjective affordances to model capability as a tuple of a set of resource affordance system mechanisms and action paths, dependent on one or more critical affordance factors. We identify an affordance chain of subjective affordances by which affordances work together to enable an action, and an affordance path that links action affordances to create a capability system. We define the mechanism and path underlying capability. We show how affordance modelling notation (AMN) can represent the affordances comprising a capability. We propose a method to quantitatively and qualitatively compare capabilities using efficiency, effectiveness and quality metrics. The method is demonstrated by a medical example comparing the capability of syringe and needleless anaesthetic systems.
Abstract:
Dynamical downscaling is frequently used to investigate the dynamical variables of extra-tropical cyclones, for example precipitation, using very high-resolution models nested within coarser-resolution models to understand the processes that lead to intense precipitation. It is also used in climate change studies, using long time series to investigate trends in precipitation, or to look at the small-scale dynamical processes for specific case studies. This study investigates some of the problems associated with dynamical downscaling and looks at the optimum configuration to obtain the distribution and intensity of a precipitation field to match observations. This study uses the Met Office Unified Model run in limited-area mode with grid spacings of 12, 4 and 1.5 km, driven by boundary conditions provided by the ECMWF Operational Analysis, to produce high-resolution simulations for the summer 2007 UK flooding events. The numerical weather prediction model is initiated at varying times before the peak precipitation is observed to test the importance of the initialisation and boundary conditions, and how long the simulation can be run for. The results are compared to rain gauge data as verification and show that the model intensities are most similar to observations when the model is initialised 12 hours before the peak precipitation is observed. It was also shown that using non-gridded datasets makes verification more difficult, with the density of observations also affecting the intensities observed. It is concluded that the simulations are able to produce realistic precipitation intensities when driven by the coarser-resolution data.
Abstract:
The study of workarounds (WA) has increased in importance due to their impact on patient safety and efficiency. However, there are no adequate theories to explain the motivation to create and use a workaround in a healthcare setting. Although theories of technology acceptance help to explain the reasons to accept or reject a technology, they fail to explain the drivers for alternatives. Workarounds also involve creators and performers who have different motivations. Models such as the Theory of Planned Behaviour (TPB) or the Theory of Reasoned Action (TRA) can help to explain the role of workaround users, but lack an explanation of workaround creators' dynamics. Our aim is to develop a theoretical foundation for explaining workaround motivation, combining behaviour models with norms that relate to sanctions, to provide an integrated Workaround Motivation Model (WAMM). The development of the WAMM model is explained in this paper based on workaround cases, as part of further research to establish the model.
Abstract:
Many human behaviours and pathologies have been attributed to the putative mirror neuron system, a neural system that is active during both the observation and execution of actions. While there are now a very large number of papers on the mirror neuron system, variations in the methods and analyses employed by researchers mean that the basic characteristics of the mirror response are not clear. This review focuses on three important aspects of the mirror response, as measured by modulations in corticospinal excitability: (1) muscle specificity, (2) direction, and (3) timing of modulation. We focus mainly on electromyographic (EMG) data gathered following single-pulse transcranial magnetic stimulation (TMS), because this method provides precise information regarding these three aspects of the response. Data from paired-pulse TMS paradigms and peripheral nerve stimulation (PNS) are also considered when we discuss the possible mechanisms underlying the mirror response. In this systematic review of the literature, we examine the findings of 85 TMS and PNS studies of the human mirror response, and consider the limitations and advantages of the different methodological approaches these have adopted in relation to discrepancies between their findings. We conclude by proposing a testable model of how action observation modulates corticospinal excitability in humans. Specifically, we propose that action observation elicits an early, non-specific facilitation of corticospinal excitability (at around 90 ms from action onset), followed by a later modulation of activity specific to the muscles involved in the observed action (from around 200 ms). Testing this model will greatly advance our understanding of the mirror mechanism and provide a more stable grounding on which to base inferences about its role in human behaviour.
Abstract:
The climate over the Arctic has undergone changes in recent decades. In order to evaluate the coupled response of the Arctic system to external and internal forcing, our study focuses on the estimation of regional climate variability and its dependence on large-scale atmospheric and regional ocean circulations. A global ocean–sea ice model with regionally high horizontal resolution is coupled to an atmospheric regional model and a global terrestrial hydrology model. This way of coupling divides the global ocean model setup into two different domains: one coupled, where the ocean and the atmosphere are interacting, and one uncoupled, where the ocean model is driven by prescribed atmospheric forcing and runs in a so-called stand-alone mode. Therefore, selecting a specific area for the regional atmosphere implies that the ocean–atmosphere system can develop ‘freely’ in that area, whereas for the rest of the global ocean, the circulation is driven by prescribed atmospheric forcing without any feedbacks. Five different coupled setups are chosen for ensemble simulations. The coupled domains were chosen to estimate the influences of the Subtropical Atlantic, Eurasian and North Pacific regions on northern North Atlantic and Arctic climate. Our simulations show that the regional coupled ocean–atmosphere model is sensitive to the choice of the modelled area. The model configurations differ in how well they reproduce both the mean climate and its variability. Only two out of five model setups were able to reproduce the Arctic climate as observed under recent climate conditions (ERA-40 Reanalysis). Evidence is found that the main source of uncertainty for Arctic climate variability and its predictability is the North Pacific. The prescription of North Pacific conditions in the regional model leads to significant correlation with observations, even if the whole North Atlantic is within the coupled model domain. However, the inclusion of the North Pacific area into the coupled system drastically changes the Arctic climate variability, to the point where the Arctic Oscillation becomes an ‘internal mode’ of variability and correlations of year-to-year variability with observational data vanish. In line with previous studies, our simulations provide evidence that Arctic sea ice export is mainly due to ‘internal variability’ within the Arctic region. We conclude that the choice of model domains should be based on physical knowledge of the atmospheric and oceanic processes and not on ‘geographic’ reasons. This is particularly the case for areas like the Arctic, which has very complex feedbacks between components of the regional climate system.
Abstract:
Spatially dense observations of gust speeds are necessary for various applications, but their availability is limited in space and time. This work presents an approach to help to overcome this problem. The main objective is the generation of synthetic wind gust velocities. With this aim, theoretical wind and gust distributions are estimated from 10 yr of hourly observations collected at 123 synoptic weather stations provided by the German Weather Service. As pre-processing, an exposure correction is applied on measurements of the mean wind velocity to reduce the influence of local urban and topographic effects. The wind gust model is built as a transfer function between distribution parameters of wind and gust velocities. The aim of this procedure is to estimate the parameters of gusts at stations where only wind speed data is available. These parameters can be used to generate synthetic gusts, which can improve the accuracy of return periods at test sites with a lack of observations. The second objective is to determine return periods much longer than the nominal length of the original time series by considering extreme value statistics. Estimates for both local maximum return periods and average return periods for single historical events are provided. The comparison of maximum and average return periods shows that even storms with short average return periods may lead to local wind gusts with return periods of several decades. Despite uncertainties caused by the short length of the observational records, the method leads to consistent results, enabling a wide range of possible applications.
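The two central ideas of the abstract, a parameter transfer function between wind and gust distribution fits, and a return period derived from extreme value statistics, can be sketched as follows. Every number and coefficient here is an invented placeholder (the paper estimates its transfer function from the 123 stations, and the distribution families it fits are not specified in the abstract; Weibull winds and Gumbel annual maxima are common choices assumed for illustration).

```python
# Hypothetical transfer function between wind and gust Weibull
# parameters, plus a Gumbel-based return period (all values invented).
import numpy as np

# Weibull parameters (shape k, scale lam in m/s) fitted to hourly winds
# at a station where gust records are missing (assumed values).
k_wind, lam_wind = 1.8, 6.0

# Hypothetical linear transfer mapping wind parameters to gust
# parameters; the coefficients a, b would be fitted to station pairs.
def transfer(k, lam, a=(0.95, 0.1), b=(1.6, 0.5)):
    return a[0] * k + a[1], b[0] * lam + b[1]

k_gust, lam_gust = transfer(k_wind, lam_wind)

# Return period from a Gumbel fit to annual gust maxima:
# T(v) = 1 / (1 - F(v)),  F(v) = exp(-exp(-(v - mu) / beta)).
mu, beta = 22.0, 3.5          # Gumbel location/scale in m/s (assumed)
def return_period_years(v):
    F = np.exp(-np.exp(-(v - mu) / beta))
    return 1.0 / (1.0 - F)

print(f"gust Weibull: k = {k_gust:.2f}, lam = {lam_gust:.2f} m/s")
print(f"30 m/s gust: return period ~ {return_period_years(30.0):.0f} yr")
```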
Abstract:
We present an analysis of a cusp ion step, observed by the Defense Meteorological Satellite Program (DMSP) F10 spacecraft, between two poleward-moving events of enhanced ionospheric electron temperature, observed by the European Incoherent Scatter (EISCAT) radar. From the ions detected by the satellite, the variation of the reconnection rate is computed for assumed distances along the open-closed field line separatrix from the satellite to the X line, d_o. Comparison with the onset times of the associated ionospheric events allows this distance to be estimated, but with an uncertainty due to the determination of the low-energy cutoff of the ion velocity distribution function, f(v). Nevertheless, the reconnection site is shown to be on the dayside magnetopause, consistent with the reconnection model of the cusp during southward interplanetary magnetic field (IMF). Analysis of the time series of distribution function at constant energies, f(t_s), shows that the best estimate of the distance d_o is 14.5 ± 2 R_E. This is consistent with various magnetopause observations of the signatures of reconnection for southward IMF. The ion precipitation is used to reconstruct the field-parallel part of the Cowley D ion distribution function injected into the open low-latitude boundary layer in the vicinity of the X line. From this reconstruction, the field-aligned component of the magnetosheath flow is found to be only −55 ± 65 km s^-1 near the X line, which means either that the reconnection X line is near the stagnation region at the nose of the magnetosphere, or that it is closely aligned with the magnetosheath flow streamline which is orthogonal to the magnetosheath field, or both. In addition, the sheath Alfvén speed at the X line is found to be 220 ± 45 km s^-1, and the speed with which newly opened field lines are ejected from the X line is 165 ± 30 km s^-1. We show that the inferred magnetic field, plasma density, and temperature of the sheath near the X line are consistent with a near-subsolar reconnection site and confirm that the magnetosheath field makes a large angle (>58°) with the X line.
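The quoted Alfvén speed can be checked for order of magnitude from the usual relation v_A = B / sqrt(mu0 * rho). The field strength and density below are illustrative near-magnetopause magnetosheath values assumed for this check, not the quantities inferred in the paper.

```python
# Order-of-magnitude check of a magnetosheath Alfvén speed
# v_A = B / sqrt(mu0 * rho), with assumed (not inferred) B and n.
import numpy as np

mu0 = 4e-7 * np.pi      # vacuum permeability, H/m
m_p = 1.67e-27          # proton mass, kg
B = 4e-8                # 40 nT field strength (assumed)
n = 2e7                 # 20 protons per cm^3 (assumed), in m^-3

vA = B / np.sqrt(mu0 * n * m_p)   # m/s
print(f"v_A ~ {vA / 1e3:.0f} km/s")
```

Values in this range sit comfortably near the 220 ± 45 km s^-1 quoted above, which is the point of the consistency argument.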
Abstract:
Observations of the amplitudes and Doppler shifts of received HF radio waves are compared with model predictions made using a two-dimensional ray-tracing program. The signals are propagated over a sub-auroral path, which is shown to lie along the latitudes of the mid-latitude trough at times of low geomagnetic activity. Generalizing the predictions to include a simple model of the trough in the density and height of the F2 peak enables the explanation of the anomalous observed diurnal variations. The behavior of received amplitude, Doppler shift, and signal-to-noise ratio as a function of the Kp index value, the time of day, and the season (in 17 months of continuous recording) is found to agree closely with that predicted using the statistical position of the trough as deduced from 8 years of Alouette satellite soundings. The variation in the times of the observation of large signal amplitudes with the Kp value and the complete absence of such amplitudes when it exceeds 2.75 are two features that implicate the trough in these effects.
Abstract:
Purpose of the study: Reduced subjective experience of reward (anhedonia) is a key symptom of major depression. We have developed a human model of reward processing to investigate the neural correlates of anhedonia. Methods: We report the data from studies that examined reward processing using functional magnetic resonance imaging (fMRI) in those vulnerable to depression. We also report the effects of antidepressant medications on our neural model of reward processing and on the resting state in healthy volunteers. Results: Our results thus far indicate that deficits in reward processing are apparent in those vulnerable to depression, and also that antidepressant medication modulates reward processing and resting state functional connectivity in parts of the brain consistent with serotonin and catecholamine transmitter pathways in healthy volunteers. Conclusions: We conclude that this type of human model of reward processing might be useful in detecting biomarkers for depression and also in illuminating why antidepressant medications may not be very effective in treating anhedonia.
Abstract:
This paper details a strategy for modifying the source code of a complex model so that the model may be used in a data assimilation context, and gives the standards for implementing a data assimilation code to use such a model. The strategy relies on keeping the model separate from any data assimilation code, and coupling the two through the use of Message Passing Interface (MPI) functionality. This strategy limits the changes necessary to the model and as such is rapid to program, at the expense of ultimate performance. The implementation technique is applied in different models with state dimension up to $2.7 \times 10^8$. The overheads added by using this implementation strategy in a coupled ocean-atmosphere climate model are shown to be an order of magnitude smaller than the addition of correlated stochastic random errors necessary for some nonlinear data assimilation techniques.
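The coupling pattern described above can be sketched in miniature: the model passes its state vector to the assimilation code and receives an analysis back, touching only a thin send/receive layer. This is a single-process stand-in, not the paper's implementation; a trivial mailbox class replaces a real MPI intercommunicator (in practice this would be mpi4py or MPI_Send/MPI_Recv calls), and the model and analysis steps are toy placeholders.

```python
# Single-process sketch of model/DA coupling via message passing.
import numpy as np

class FakeComm:
    """Stand-in for an MPI intercommunicator: a one-slot mailbox."""
    def __init__(self):
        self._buf = None
    def Send(self, arr, dest=0, tag=0):
        self._buf = arr.copy()
    def Recv(self, arr, source=0, tag=0):
        arr[:] = self._buf

def model_step(state):
    return 0.99 * state + 0.1           # toy model dynamics (assumed)

def assimilate(state, obs):
    return state + 0.5 * (obs - state)  # toy analysis update (assumed)

comm = FakeComm()
state = np.zeros(4)
obs = np.ones(4)

for _ in range(3):
    state = model_step(state)
    comm.Send(state)                    # model -> DA: forecast state
    analysis = np.empty_like(state)
    comm.Recv(analysis)                 # DA side receives the forecast...
    analysis = assimilate(analysis, obs)
    comm.Send(analysis)                 # ...and sends the analysis back
    comm.Recv(state)                    # model resumes from the analysis
print("final state:", state)
```

The model code only ever sees the Send/Recv calls, which is what keeps the required source changes small.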
Abstract:
Burst suppression in the electroencephalogram (EEG) is a well-described phenomenon that occurs during deep anesthesia, as well as in a variety of congenital and acquired brain insults. Classically it is thought of as spatially synchronous, quasi-periodic bursts of high amplitude EEG separated by low amplitude activity. However, its characterization as a “global brain state” has been challenged by recent results obtained with intracranial electrocorticography. Not only does it appear that burst suppression activity is highly asynchronous across cortex, but also that it may occur in isolated regions of circumscribed spatial extent. Here we outline a realistic neural field model for burst suppression by adding a slow process of synaptic resource depletion and recovery, which is able to reproduce qualitatively the empirically observed features during general anesthesia at the whole cortex level. Simulations reveal heterogeneous bursting over the model cortex and complex spatiotemporal dynamics during simulated anesthetic action, and provide forward predictions of neuroimaging signals for subsequent empirical comparisons and more detailed characterization. Because burst suppression corresponds to a dynamical end-point of brain activity, theoretically accounting for its spatiotemporal emergence will vitally contribute to efforts aimed at clarifying whether a common physiological trajectory is induced by the actions of general anesthetic agents. We have taken a first step in this direction by showing that a neural field model can qualitatively match recent experimental data that indicate spatial differentiation of burst suppression activity across cortex.
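The core mechanism, fast firing-rate dynamics gated by a slow synaptic resource that is depleted by activity and recovers slowly, can be caricatured at a single point rather than over a full field. This zero-dimensional sketch uses invented parameters and a generic sigmoid rate function; it illustrates the depletion/recovery separation of timescales, not the paper's neural field equations.

```python
# Single-node caricature of rate dynamics with slow synaptic
# resource depletion and recovery (all parameters illustrative).
import numpy as np

def f(h, theta=1.0, sigma=0.25):
    """Sigmoid firing rate as a function of mean membrane state h."""
    return 1.0 / (1.0 + np.exp(-(h - theta) / sigma))

def simulate(T=2000, dt=0.05, w=8.0, I=0.2,
             tau_h=1.0, tau_s=100.0, alpha=2.0):
    h, s = 0.0, 1.0               # state h, synaptic resource s in [0, 1]
    trace = np.empty(T)
    for t in range(T):
        dh = (-h + w * s * f(h) + I) / tau_h      # fast activity
        ds = ((1.0 - s) - alpha * s * f(h)) / tau_s  # slow depletion
        h += dt * dh
        s += dt * ds
        trace[t] = f(h)
    return trace, s

rate, s_final = simulate()
print(f"final resource {s_final:.2f}, mean rate {rate.mean():.2f}")
```

Because tau_s is two orders of magnitude slower than tau_h, the resource acts as a slowly varying gain on the recurrent coupling; in the full field model this is what lets high-amplitude bursts terminate and recur.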
Abstract:
The detection of physiological signals from the motor system (electromyographic signals) is being used in clinical practice to guide the therapist towards a more precise and accurate diagnosis of motor disorders. In this context, the decomposition of EMG (electromyographic) signals, which includes the identification and classification of the MUAPs (Motor Unit Action Potentials) in an EMG signal, is very important in helping the therapist to evaluate motor disorders. EMG decomposition is a complex task because EMG features depend on the electrode type (needle or surface), its placement relative to the muscle, the contraction level and the health of the neuromuscular system. To date, most research on EMG decomposition has used EMG signals acquired by needle electrodes, due to their advantages in processing this type of signal. Relatively little research, however, has been conducted using surface EMG signals. This article therefore aims to contribute to clinical practice by presenting a technique that permits the decomposition of surface EMG signals via Hidden Markov Models, supported by differential evolution and spectral clustering techniques. The developed system presented coherent results in: (1) identification of the number of Motor Units active in the EMG signal; (2) presentation of the morphological patterns of the MUAPs in the EMG signal; (3) identification of the firing sequence of the Motor Units. The model proposed in this work is an advance in the research area of surface EMG signal decomposition.
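One step of any decomposition pipeline, assigning detected MUAP waveforms to motor units, can be sketched with simple nearest-template classification on synthetic data. This is a stand-in for illustration only; the paper's actual machinery uses Hidden Markov Models with differential evolution and spectral clustering, and the templates below are invented waveforms.

```python
# Nearest-template classification of synthetic MUAP waveforms,
# standing in for the HMM + spectral clustering pipeline.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 40)
templates = np.stack([np.sin(2 * np.pi * t) * np.exp(-4 * t),
                      np.sin(4 * np.pi * t) * np.exp(-2 * t)])  # 2 motor units

# Synthetic detections: noisy copies of the two templates.
labels_true = rng.integers(0, 2, size=30)
spikes = templates[labels_true] + 0.05 * rng.normal(size=(30, 40))

# Assign each detected waveform to its nearest template (squared error).
d = ((spikes[:, None, :] - templates[None, :, :])**2).sum(-1)
labels_hat = d.argmin(axis=1)
accuracy = (labels_hat == labels_true).mean()
print(f"classification accuracy: {accuracy:.2f}")
```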
Abstract:
For general home monitoring, a system should automatically interpret people’s actions. The system should be non-intrusive and able to deal with a cluttered background and loose clothing. An approach based on spatio-temporal local features and a Bag-of-Words (BoW) model is proposed for single-person action recognition from combined intensity and depth images. To restore the temporal structure lost in the traditional BoW method, a dynamic time alignment technique with temporal binning is applied in this work, which has not previously been implemented in the literature for human action recognition on depth imagery. A novel human action dataset with depth data has been created using two Microsoft Kinect sensors. The ReadingAct dataset contains 20 subjects and 19 actions for a total of 2340 videos. To investigate the effect of using depth images and the proposed method, testing was conducted on three depth datasets, and the proposed method was compared to traditional Bag-of-Words methods. Results showed that the proposed method improves recognition accuracy when depth is added to the conventional intensity data, and has advantages when dealing with long actions.
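The temporal-binning idea can be sketched as follows: instead of one Bag-of-Words histogram per video, the video is split into a few temporal bins and a visual-word histogram is built per bin, preserving coarse temporal order before any time alignment is applied. The codebook, descriptors and bin count below are random placeholders, not the paper's features.

```python
# BoW descriptor with temporal binning (synthetic codebook/features).
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(50, 16))         # 50 visual words, 16-D (assumed)
features = rng.normal(size=(300, 16))        # local spatio-temporal descriptors
times = np.sort(rng.uniform(0, 1, size=300)) # normalised timestamps

def bow_temporal(features, times, codebook, n_bins=4):
    # Assign each descriptor to its nearest codeword.
    d = ((features[:, None, :] - codebook[None, :, :])**2).sum(-1)
    words = d.argmin(axis=1)
    # One word histogram per temporal bin, concatenated in order.
    bins = np.minimum((times * n_bins).astype(int), n_bins - 1)
    hists = [np.bincount(words[bins == b], minlength=len(codebook))
             for b in range(n_bins)]
    v = np.concatenate(hists).astype(float)
    return v / v.sum()                       # L1-normalised descriptor

desc = bow_temporal(features, times, codebook)
print("descriptor length:", desc.size)       # n_bins * vocabulary size
```

Two videos of the same action performed at different speeds produce bin sequences that are shifted relative to one another, which is what the dynamic time alignment step then compensates for.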
Abstract:
We systematically compare the performance of ETKF-4DVAR, 4DVAR-BEN and 4DENVAR with respect to two traditional methods (4DVAR and ETKF) and an ensemble transform Kalman smoother (ETKS) on the Lorenz 1963 model. We specifically investigate this performance with increasing nonlinearity, using a quasi-static variational assimilation algorithm as a comparison. Using the analysis root mean square error (RMSE) as a metric, these methods have been compared considering (1) assimilation window length and observation interval size and (2) ensemble size, to investigate the influence of hybrid background error covariance matrices and nonlinearity on the performance of the methods. For short assimilation windows with close-to-linear dynamics, all hybrid methods show an improvement in RMSE compared to the traditional methods. For long assimilation window lengths, in which nonlinear dynamics are substantial, the variational framework can have difficulties finding the global minimum of the cost function, so we explore a quasi-static variational assimilation (QSVA) framework. Under certain parameters, hybrid methods that do not use a climatological background error covariance do not need QSVA to perform accurately. Generally, results show that the ETKS, and hybrid methods that do not use a climatological background error covariance matrix, with QSVA outperform all other methods due to the full flow dependency of the background error covariance matrix, which also allows for the most nonlinearity.
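The test bed and metric used above are simple to reproduce in outline: integrate the Lorenz 1963 system for a truth trajectory and score an analysis against it by RMSE. In this sketch the "analysis" is just the truth plus noise, standing in for the output of any of the compared schemes; the step size, trajectory length and noise level are illustrative.

```python
# Lorenz 1963 truth trajectory (RK4) and analysis RMSE against it.
import numpy as np

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz 1963 equations."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def rk4(x, dt=0.01):
    k1 = lorenz63(x)
    k2 = lorenz63(x + 0.5 * dt * k1)
    k3 = lorenz63(x + 0.5 * dt * k2)
    k4 = lorenz63(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

truth = np.empty((500, 3))
truth[0] = [1.0, 1.0, 1.0]
for t in range(1, 500):
    truth[t] = rk4(truth[t - 1])

rng = np.random.default_rng(1)
analysis = truth + rng.normal(scale=0.5, size=truth.shape)  # placeholder
rmse = np.sqrt(((analysis - truth)**2).mean())
print(f"analysis RMSE: {rmse:.3f}")
```

Lengthening the assimilation window in the real experiments corresponds to scoring over longer stretches of this chaotic trajectory, which is where the nonlinearity effects discussed above appear.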