908 results for Capability Maturity Model for Software
Abstract:
Predictability of the western North Pacific (WNP) summer climate associated with different El Niño–Southern Oscillation (ENSO) phases is investigated in this study based on the 1-month lead retrospective forecasts of five state-of-the-art coupled models from ENSEMBLES. During the period from 1960 to 2005, the models capture the WNP summer climate anomalies well during most years in the different ENSO phases, except for the La Niña decaying summers. In the El Niño developing, El Niño decaying and La Niña developing summers, the prediction skills are high for the WNP summer monsoon index (WNPMI), with prediction correlations larger than 0.7. The high prediction skills of the lower-tropospheric circulation during these phases are found mainly over the tropical western Pacific Ocean, the South China Sea and the subtropical WNP. These good predictions correspond to the close teleconnection of these regions with ENSO and to the high prediction skills for tropical SSTs. By contrast, for the La Niña decaying summers, the prediction skills are considerably lower, with the prediction correlation for the WNPMI near zero and low prediction skills around the Philippines and the subtropical WNP. These poor predictions relate to the weak WNPMI summer anomalies during the La Niña decaying years and to the absence of significant connections, in observations, between the WNP lower-tropospheric circulation anomalies and the SSTs over the tropical central and eastern Pacific Ocean. However, the models tend to predict an apparent anomalous cyclone over the WNP during the La Niña decaying years, indicating a linearity of the circulation response over the WNP in the model predictions in comparison with that during the El Niño decaying years, which differs from observations. In addition, the models show considerable capability in describing the WNP summer anomalies during the ENSO neutral summers. These anomalies are related to the positive feedback between the WNP lower-tropospheric circulation and the local SSTs. The models capture this positive feedback, though with some uncertainty across the different ensemble members during the ENSO neutral summers.
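As a point of reference for the skill scores quoted above, the sketch below (an illustration with synthetic numbers, not data from the study) shows how a prediction correlation for an index such as the WNPMI is typically computed from hindcast and observed summer values:

    # Illustrative sketch (not from the paper): computing the prediction
    # correlation for an index such as the WNPMI from hindcasts and observations.
    import numpy as np

    def prediction_correlation(forecast, observed):
        """Pearson correlation between predicted and observed index anomalies."""
        f = np.asarray(forecast, dtype=float)
        o = np.asarray(observed, dtype=float)
        f_anom = f - f.mean()
        o_anom = o - o.mean()
        return float(np.sum(f_anom * o_anom) /
                     np.sqrt(np.sum(f_anom**2) * np.sum(o_anom**2)))

    # Synthetic example: 46 summers (1960-2005) of a hypothetical index
    rng = np.random.default_rng(0)
    obs = rng.standard_normal(46)
    fcst = 0.8 * obs + 0.4 * rng.standard_normal(46)   # a skilful hindcast
    print(f"prediction correlation: {prediction_correlation(fcst, obs):.2f}")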
Abstract:
The Environmental Data Abstraction Library (EDAL) provides a modular data management library for bringing new and diverse data types together for visualisation within numerous software packages, including the ncWMS viewing service, which already has very wide international uptake. The structure of EDAL is presented, along with examples of its use to compare satellite, model and in situ data types within the same visualisation framework. We emphasise the value of this capability for cross-calibration of datasets and evaluation of model products against observations, including preparation for data assimilation.
Abstract:
The 2008-2009 financial crisis and related organizational and economic failures have meant that financial organizations face a ‘tsunami’ of new regulatory obligations. This environment presents new managerial challenges, as organizations are forced to engage in complex and costly remediation projects with short deadlines. Drawing on a longitudinal study conducted with nine financial institutions over twelve years, this paper identifies nine IS capabilities which underpin activities for managing regulatory-themed governance, risk and compliance efforts. The research shows that many firms are now focused on meeting regulators’ deadlines at the expense of developing a strategic, enterprise-wide, connected approach to compliance. Consequently, executives are in danger of implementing siloed compliance solutions within business functions. By evaluating the maturity of their IS capabilities which underpin regulatory adherence, managers have an opportunity to develop robust operational architectures and are thus better positioned to face the challenges arising from shifting regulatory landscapes.
Abstract:
Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. We investigate the reasons for this for one model, INCA-P, testing model output against 18 months of daily data in a small Scottish catchment. We examine key model processes and provide recommendations for model improvement and simplification. Improvements to the particulate phosphorus simulation are especially needed. The model evaluation procedure is then generalised to provide a checklist for identifying why model performance may be poor or unreliable, incorporating calibration, data, structural and conceptual challenges. There needs to be greater recognition that current models struggle to produce positive Nash–Sutcliffe statistics in agricultural catchments when evaluated against daily data. Phosphorus modelling is difficult, but models are not as useless as this might suggest. We found that a combination of correlation coefficients, bias, a comparison of distributions and a visual assessment of the time series provided a better means of identifying realistic simulations.
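For readers unfamiliar with the evaluation statistics mentioned here, the following sketch (synthetic data, not the INCA-P results) illustrates how the Nash–Sutcliffe efficiency, percent bias and correlation coefficient are computed for a daily simulated-versus-observed series:

    # Illustrative sketch (assumed data, not from the INCA-P study): the skill
    # scores mentioned in the abstract, computed for daily simulated vs observed
    # phosphorus loads.
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def percent_bias(obs, sim):
        """PBIAS: average tendency of the simulation to over/under-estimate, in %."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * np.sum(sim - obs) / np.sum(obs)

    # Synthetic 18 months of daily data (~548 values), purely for illustration
    rng = np.random.default_rng(1)
    obs = rng.lognormal(mean=-1.0, sigma=1.0, size=548)
    sim = obs * rng.lognormal(mean=0.0, sigma=0.5, size=548)  # noisy "model"
    print(f"NSE   = {nash_sutcliffe(obs, sim):.2f}")
    print(f"PBIAS = {percent_bias(obs, sim):.1f}%")
    print(f"r     = {np.corrcoef(obs, sim)[0, 1]:.2f}")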
Abstract:
This paper details a strategy for modifying the source code of a complex model so that the model may be used in a data assimilation context, and gives the standards for implementing a data assimilation code to use such a model. The strategy relies on keeping the model separate from any data assimilation code and coupling the two through Message Passing Interface (MPI) functionality. This strategy limits the changes necessary to the model and as such is rapid to program, at the expense of ultimate performance. The implementation technique is applied in different models with state dimension up to $2.7 \times 10^8$. The overheads added by using this implementation strategy in a coupled ocean-atmosphere climate model are shown to be an order of magnitude smaller than the overhead of adding the correlated stochastic random errors necessary for some nonlinear data assimilation techniques.
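A minimal sketch of this coupling strategy, assuming an mpi4py environment and placeholder model and assimilation updates (none of the names or numbers below come from the paper), might look as follows, with the model and the data assimilation code running on separate MPI ranks and exchanging the state vector at each step:

    # Minimal sketch (an assumption, not the paper's code) of keeping a model and
    # a data-assimilation (DA) component separate and exchanging the state vector
    # through MPI. Run with e.g.:  mpiexec -n 2 python coupled_da_sketch.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    STATE_DIM = 1000          # stand-in for a state of up to ~2.7e8 in practice
    MODEL, DA = 0, 1          # rank 0 runs the model, rank 1 the DA code

    if rank == MODEL:
        state = np.zeros(STATE_DIM)
        for step in range(3):                      # a few model time steps
            state += 1.0                           # placeholder model dynamics
            comm.Send(state, dest=DA, tag=step)    # hand the state to the DA code
            comm.Recv(state, source=DA, tag=step)  # receive the analysis back
        print("model finished with mean state", state.mean())

    elif rank == DA:
        state = np.empty(STATE_DIM)
        for step in range(3):
            comm.Recv(state, source=MODEL, tag=step)
            state += 0.1 * (0.0 - state)           # placeholder assimilation update
            comm.Send(state, dest=MODEL, tag=step)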
Abstract:
Operational forecasting centres are currently developing data assimilation systems for coupled atmosphere-ocean models. Strongly coupled assimilation, in which a single assimilation system is applied to a coupled model, presents significant technical and scientific challenges. Hence weakly coupled assimilation systems are being developed as a first step, in which the coupled model is used to compare the current state estimate with observations, but corrections to the atmosphere and ocean initial conditions are then calculated independently. In this paper we provide a comprehensive description of the different coupled assimilation methodologies in the context of four-dimensional variational assimilation (4D-Var) and use an idealised framework to assess the expected benefits of moving towards coupled data assimilation. We implement an incremental 4D-Var system within an idealised single-column atmosphere-ocean model. The system has the capability to run strongly coupled, weakly coupled and uncoupled atmosphere-only or ocean-only assimilations, thus allowing a systematic comparison of the different strategies for treating the coupled data assimilation problem. We present results from a series of identical twin experiments devised to investigate the behaviour and sensitivities of the different approaches. Overall, our study demonstrates the potential benefits that may be expected from coupled data assimilation. When compared to uncoupled initialisation, coupled assimilation is able to produce more balanced initial analysis fields, thus reducing initialisation shock and its impact on the subsequent forecast. Single-observation experiments demonstrate how coupled assimilation systems are able to pass information between the atmosphere and ocean and therefore use near-surface data to greater effect. We show that much of this benefit may also be gained from a weakly coupled assimilation system, but that this can be sensitive to the parameters used in the assimilation.
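The benefit of coupled initialisation can be illustrated with a toy analysis equation (a generic sketch, not the paper's incremental 4D-Var system): a single near-surface atmospheric observation updates the ocean only when the background-error covariance contains atmosphere-ocean cross-covariances, which is the essential difference between the two treatments compared below:

    # Toy sketch (not the paper's system): how a single near-surface atmosphere
    # observation updates a 2-variable [atmosphere, ocean] state when the
    # background-error covariance B couples the two fluids, versus a
    # block-diagonal B that mimics an uncoupled analysis.
    import numpy as np

    def analysis(xb, B, H, R, y):
        """Standard linear analysis: xa = xb + B H^T (H B H^T + R)^-1 (y - H xb)."""
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        return xb + K @ (y - H @ xb)

    xb = np.array([0.0, 0.0])            # background: [near-surface air T, SST]
    H  = np.array([[1.0, 0.0]])          # observe only the atmospheric variable
    R  = np.array([[0.1]])               # observation-error variance
    y  = np.array([1.0])                 # observed near-surface air temperature

    B_coupled   = np.array([[1.0, 0.6],  # cross-covariance lets the ocean feel
                            [0.6, 1.0]]) # the atmospheric observation
    B_uncoupled = np.array([[1.0, 0.0],
                            [0.0, 1.0]])

    print("coupled B   ->", analysis(xb, B_coupled, H, R, y))
    print("uncoupled B ->", analysis(xb, B_uncoupled, H, R, y))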
Abstract:
Customers will not continue to pay for a service if it is perceived to be of poor quality and/or of no value. With a paradigm shift towards business dependence on service-orientated IS solutions [1], it is critical that alignment exists between service definition, delivery, and customer expectation if businesses are to ensure customer satisfaction. Services, and micro-service development, offer businesses a flexible structure for solution innovation; however, constant change in technology, business and societal expectations means that an iterative analysis solution is required to i) determine whether provider services adequately meet customer segment needs and expectations, and ii) help guide business service innovation and development. In this paper, by incorporating multiple models, we propose a series of steps to help identify and prioritise service gaps. Moreover, the authors propose the Dual Semiosis Analysis Model, a tool that highlights where, within the symbiotic customer/provider semiosis process, requirements misinterpretation and/or service provision deficiencies occur. This paper offers the reader a powerful customer-centric tool, designed to help business managers highlight both which services are critical to customer quality perception and where future innovation efforts should be focused.
Abstract:
Background
In the UK, occupational therapy pre-discharge home visits are routinely carried out as a means of facilitating safe transfer from hospital to home. Whilst they are an integral part of practice, there is little evidence to demonstrate that they have a positive impact on the discharge process. Current issues for patients are the speed of home visits and the lack of shared decision making in the process, resulting in less than 50 % of the specialist equipment installed actually being used by patients on follow-up. To improve practice there is an urgent need to examine other ways of conducting home visits to facilitate safe discharge. We believe that Computerised 3D Interior Design Applications (CIDAs) could be a means to support more efficient, effective and collaborative practice. A previous study explored practitioners' perceptions of using CIDAs; however, it is important to ascertain older adults' views about the usability of the technology and to compare findings. This study explores the perceptions of community-dwelling older adults with regard to adopting and using CIDAs as an assistive tool for the home adaptations process.
Methods
Ten community-dwelling older adults participated in individual interactive task-focused usability sessions with a customised CIDA, utilising the think-aloud protocol and individual semi-structured interviews. Template analysis was used to carry out both deductive and inductive analysis of the think-aloud and interview data. Initially, a deductive stance was adopted, using the three pre-determined high-level themes of the technology acceptance model (TAM): Perceived Usefulness (PU), Perceived Ease of Use (PEOU) and Actual Use (AU). Inductive template analysis was then carried out on the data within these themes, from which a number of sub-themes emerged.
Results
Regarding PU, participants believed CIDAs served as a useful visual tool and saw clear potential to facilitate shared understanding and partnership in care delivery. For PEOU, participants were able to create 3D home environments; however, a number of usability issues must still be addressed. The AU theme revealed that the most likely usage scenario would be collaborative, involving both patient and practitioner, as many participants did not feel confident or see sufficient value in using the application autonomously.
Conclusions
This research found that older adults perceived CIDAs as a valuable tool likely to facilitate and enhance patient/practitioner collaboration and empowerment. Older adults also suggested a redesign of the interface so that less sophisticated dexterity and motor functions are required. However, older adults were not confident, or did not see sufficient value, in using the application autonomously. Future research is needed to further customise the CIDA software, in line with the outcomes of this study, and to explore the potential of collaborative patient/practitioner-based deployment of the application.
Abstract:
In this paper, we consider the problem of estimating the number of times an air quality standard is exceeded in a given period of time. A non-homogeneous Poisson model is proposed to analyse this issue. The rate at which the Poisson events occur is given by a rate function lambda(t), t >= 0. This rate function also depends on some parameters that need to be estimated. Two forms of lambda(t), t >= 0 are considered: one of the Weibull form and the other of the exponentiated-Weibull form. Parameter estimation is carried out using a Bayesian formulation based on the Gibbs sampling algorithm. The prior distributions for the parameters are assigned in two stages: in the first stage, non-informative prior distributions are considered; using the information provided by the first stage, more informative prior distributions are used in the second. The theoretical development is applied to data provided by the monitoring network of Mexico City. The rate function that best fits the data varies according to the region of the city and/or the threshold considered: in some cases the best fit is the Weibull form and in others the exponentiated-Weibull form. Copyright (C) 2007 John Wiley & Sons, Ltd.
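To make the model concrete, the sketch below (with assumed parameter values, not those fitted to the Mexico City data) uses the Weibull form of the rate function, its mean function giving the expected number of exceedances in a period, and a thinning algorithm to simulate exceedance times:

    # Illustrative sketch (assumed parameters, not the paper's fits):
    # a non-homogeneous Poisson process with a Weibull-type rate
    #   lambda(t) = (beta/sigma) * (t/sigma)**(beta - 1),
    # its mean function m(T) = (T/sigma)**beta (the expected number of
    # exceedances in [0, T]), and simulation of exceedance times by thinning.
    import numpy as np

    def weibull_rate(t, beta, sigma):
        return (beta / sigma) * (t / sigma) ** (beta - 1.0)

    def expected_exceedances(T, beta, sigma):
        return (T / sigma) ** beta

    def simulate_nhpp(T, beta, sigma, rng):
        """Lewis-Shedler thinning with a constant majorising rate (for beta >= 1)."""
        grid = np.linspace(1e-6, T, 1000)
        lam_max = weibull_rate(grid, beta, sigma).max()
        times, t = [], 0.0
        while True:
            t += rng.exponential(1.0 / lam_max)
            if t > T:
                return np.array(times)
            if rng.uniform() < weibull_rate(t, beta, sigma) / lam_max:
                times.append(t)

    rng = np.random.default_rng(2)
    beta, sigma, T = 1.5, 120.0, 365.0          # hypothetical values, in days
    print("expected exceedances in a year:", expected_exceedances(T, beta, sigma))
    print("one simulated count:", simulate_nhpp(T, beta, sigma, rng).size)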
Abstract:
Security administrators face the challenge of designing, deploying and maintaining a variety of configuration files related to security systems, especially in large-scale networks. These files have heterogeneous syntaxes and follow differing semantic concepts. Nevertheless, they are interdependent, because security services have to cooperate and their configurations have to be consistent with each other so that global security policies are completely and correctly enforced. To tackle this problem, our approach supports a comfortable definition of an abstract high-level security policy and provides an automated derivation of the desired configuration files. It is an extension of policy-based management and policy hierarchies, combining model-based management (MBM) with system modularization. MBM employs an object-oriented model of the managed system to obtain the details needed for automated policy refinement. The modularization into abstract subsystems (ASs) segments the system, and the model, into units which more closely encapsulate related system components and provide focused abstract views. As a result, scalability is achieved and even comprehensive IT systems can be modelled in a unified manner. The associated tool MoBaSeC (Model-Based-Service-Configuration) supports interactive graphical modelling, automated model analysis and policy refinement with the derivation of configuration files. We describe the MBM and AS approaches, outline the tool functions and exemplify their application and the results obtained. Copyright (C) 2010 John Wiley & Sons, Ltd.
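The refinement idea can be illustrated with a deliberately simplified sketch (hypothetical classes and rules, not MoBaSeC itself): one abstract policy, stated against an object model of the managed system, is refined into configuration fragments with heterogeneous syntaxes:

    # Conceptual sketch (hypothetical names, not MoBaSeC itself): refining one
    # abstract, high-level policy into heterogeneous per-device configuration
    # snippets via an object model of the managed system.
    from dataclasses import dataclass

    @dataclass
    class Service:
        name: str
        host: str
        port: int

    @dataclass
    class AbstractPolicy:
        """High-level statement: which services may be reached from outside."""
        allowed_services: list

    def refine_to_firewall(policy, services):
        """Derive packet-filter rules from the abstract policy."""
        rules = [f"allow tcp any -> {s.host}:{s.port}"
                 for s in services if s.name in policy.allowed_services]
        return rules + ["deny all"]

    def refine_to_proxy(policy, services):
        """Derive a reverse-proxy config fragment from the same policy."""
        return [f"upstream {s.name} {{ server {s.host}:{s.port}; }}"
                for s in services if s.name in policy.allowed_services]

    services = [Service("web", "10.0.0.10", 443), Service("db", "10.0.0.20", 5432)]
    policy = AbstractPolicy(allowed_services=["web"])
    print("\n".join(refine_to_firewall(policy, services)))
    print("\n".join(refine_to_proxy(policy, services)))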
Abstract:
Architectures based on Coordinated Atomic action (CA action) concepts have been used to build concurrent fault-tolerant systems. This conceptual model combines concurrent exception handling with action nesting to provide a general mechanism for both enclosing interactions among system components and coordinating forward error recovery measures. This article presents an architectural model to guide the formal specification of concurrent fault-tolerant systems. This architecture provides built-in Communicating Sequential Processes (CSPs) and predefined channels to coordinate exception handling of the user-defined components. Hence some safety properties concerning action scoping and concurrent exception handling can be proved by using the FDR (Failure Divergence Refinement) verification tool. As a result, a formal and general architecture supporting software fault tolerance is ready to be used and proved as users define components with normal and exceptional behaviors. (C) 2010 Elsevier B.V. All rights reserved.
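The coordination idea behind CA actions can be sketched as follows (a simplified thread-based interpretation, not the CSP architecture or FDR models described in the article): participants execute their normal behaviour concurrently, and an exception raised by any participant is propagated so that all of them take part in the handling before the action completes:

    # Simplified sketch (an interpretation of the CA-action idea, not the
    # article's CSP architecture): participants run concurrently inside an
    # atomic action; if one raises, the exception is propagated to every
    # participant so that all of them run a coordinated handler.
    import threading

    class CAAction:
        def __init__(self, n_participants):
            self.barrier = threading.Barrier(n_participants)
            self.lock = threading.Lock()
            self.exception = None

        def run(self, role, body, handler):
            try:
                body(role)
            except Exception as exc:            # record the first raised exception
                with self.lock:
                    if self.exception is None:
                        self.exception = exc
            self.barrier.wait()                 # synchronise at the action boundary
            if self.exception is not None:
                handler(role, self.exception)   # coordinated (forward) recovery

    def body(role):
        if role == "payment":
            raise RuntimeError("payment service unavailable")
        print(f"{role}: normal behaviour done")

    def handler(role, exc):
        print(f"{role}: handling '{exc}' cooperatively")

    action = CAAction(n_participants=2)
    threads = [threading.Thread(target=action.run, args=(r, body, handler))
               for r in ("order", "payment")]
    for t in threads: t.start()
    for t in threads: t.join()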
Abstract:
In this paper we study the Lyapunov stability and Hopf bifurcation in a biological system which models the biological control of parasites of orange plantations. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Introduction
Performance in cross-country skiing is influenced by the skier’s ability to continuously produce propelling forces and by the force magnitude in relation to the net external forces. A surrogate indicator of the “power supply” in cross-country skiing would be a physiological variable that reflects an important performance-related capability, whereas body mass itself is an indicator of the “power demand” experienced by the skier. To adequately evaluate an elite skier’s performance capability, it is essential to establish the optimal ratio between the physiological variable and body mass. The overall aim of this doctoral thesis was to investigate the importance of body-mass exponent optimization for the evaluation of performance capability in cross-country skiing.
Methods
In total, 83 elite cross-country skiers (56 men and 27 women) volunteered to participate in the four studies. The physiological variables of maximal oxygen uptake (V̇O2max) and oxygen uptake corresponding to a blood-lactate concentration of 4 mmol∙l^-1 (V̇O2obla) were determined during treadmill roller skiing using the diagonal-stride technique; mean oxygen uptake (V̇O2dp) and upper-body power output (Ẇ) were determined during double-poling tests using a ski-ergometer. Competitive performance data for elite male skiers were collected from two 15-km classical-technique skiing competitions and a 1.25-km sprint prologue; additionally, a 2-km roller-skiing time trial using the double-poling technique was used as an indicator of upper-body performance capability among elite male and female junior skiers. Power-function modelling was used to explain the race and time-trial speeds based on the physiological variables and body mass.
Results
The optimal V̇O2max-to-mass ratios for explaining 15-km race speed were V̇O2max divided by body mass raised to the 0.48 and 0.53 power, and these models explained 68% and 69% of the variance in mean skiing speed, respectively; moreover, the 95% confidence intervals (CI) for the body-mass exponents did not include either 0 or 1. For the modelling of race speed in the sprint prologue, body mass failed to contribute to the models based on V̇O2max, V̇O2obla, and V̇O2dp. The upper-body power-output-to-body-mass ratio that optimally explained time-trial speed was Ẇ ∙ m^-0.57, and this model explained 63% of the variance in speed.
Conclusions
The results in this thesis suggest that V̇O2max divided by the square root of body mass should be used as an indicator of performance in 15-km classical-technique races among elite male skiers, rather than the absolute or simple ratio-standard scaled expression. To optimally explain an elite male skier’s performance capability in sprint prologues, power-function models based on oxygen-uptake variables expressed in absolute terms are recommended. Moreover, to evaluate elite junior skiers’ performance capabilities in 2-km double-poling roller-skiing time trials, it is recommended that Ẇ divided by the square root of body mass be used rather than the absolute or simple ratio-standard scaled expression of power output.
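The power-function modelling referred to above can be illustrated with a short sketch (synthetic data and made-up coefficients, not the thesis results): race speed is modelled as speed = a · V̇O2max^k1 · mass^k2, fitted on the log scale, and the implied optimal body-mass exponent for a ratio index V̇O2max/mass^b is b = -k2/k1:

    # Illustrative sketch (synthetic data, not the thesis data): power-function
    # modelling of race speed from VO2max and body mass,
    #   speed = a * VO2max**k1 * mass**k2,
    # fitted by ordinary least squares on the log scale.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 24                                     # hypothetical group of skiers
    mass = rng.normal(75, 6, n)                # kg
    vo2max = rng.normal(5.2, 0.4, n)           # l/min (absolute)
    speed = 2.0 * vo2max**1.0 * mass**-0.5 * rng.lognormal(0.0, 0.03, n)

    # log(speed) = log(a) + k1*log(vo2max) + k2*log(mass)
    X = np.column_stack([np.ones(n), np.log(vo2max), np.log(mass)])
    coef, *_ = np.linalg.lstsq(X, np.log(speed), rcond=None)
    log_a, k1, k2 = coef
    print(f"k1 = {k1:.2f}, k2 = {k2:.2f}, implied body-mass exponent b = {-k2/k1:.2f}")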
Abstract:
The specification of Quality of Service (QoS) constraints over software design requires measures that ensure such requirements are met by the delivered product. Achieving this goal is non-trivial, as it involves, at least, identifying how QoS constraint specifications should be checked at runtime. In this paper we present an implementation of a Model Driven Architecture (MDA) based framework for the runtime monitoring of QoS properties. We incorporate the UML2 superstructure and the UML profile for Quality of Service to provide abstract descriptions of component-and-connector systems. We then define transformations that refine the UML2 models to conform to the Distributed Management Task Force (DMTF) Common Information Model (CIM) (Distributed Management Task Force Inc. 2006), a schema standard for management and instrumentation of hardware and software. Finally, we provide a mapping from the CIM metamodel to a .NET-based metamodel for implementation of the monitoring infrastructure, utilising various .NET features including the Windows Management Instrumentation (WMI) interface.
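As an illustration of the transformation chain (hypothetical class and property names; the paper's actual mapping rules are not reproduced here), a UML-profile-style QoS annotation can be refined into a CIM-like instance that a runtime monitor could then instrument:

    # Conceptual sketch (hypothetical names; not the paper's transformation rules):
    # refining a UML-profile-style QoS annotation on a connector into a CIM-like
    # class instance for a runtime monitor.
    from dataclasses import dataclass, field

    @dataclass
    class QoSAnnotation:               # stands in for a UML QoS-profile stereotype
        element: str                   # the component/connector it decorates
        max_latency_ms: float

    @dataclass
    class CIMInstance:                 # minimal stand-in for a CIM class instance
        class_name: str
        properties: dict = field(default_factory=dict)

    def uml_to_cim(annotation: QoSAnnotation) -> CIMInstance:
        """Transformation rule: QoS latency constraint -> metric-threshold instance."""
        return CIMInstance(
            class_name="Acme_LatencyThreshold",          # hypothetical CIM class
            properties={
                "ManagedElement": annotation.element,
                "ThresholdValue": annotation.max_latency_ms,
                "Units": "milliseconds",
            },
        )

    annotated = QoSAnnotation(element="OrderService.connector", max_latency_ms=250.0)
    print(uml_to_cim(annotated))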