812 results for multi-class queueing systems
Abstract:
Medium-range flood forecasting activities, driven by meteorological forecasts ranging from high-resolution deterministic forecasts to low-spatial-resolution ensemble prediction systems, share a major challenge in the appropriateness and design of performance measures. In this paper, possible limitations of some traditional hydrological and meteorological prediction quality and verification measures are identified. Simple modifications are applied to circumvent the problem of autocorrelation dominating river discharge time series and to create a benchmark model that enables decision makers to evaluate both forecast quality and model quality. Although the performance period is quite short, the advantage of a simple cost-loss function as a measure of forecast quality can be demonstrated.
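As a minimal illustration of the cost-loss idea mentioned above (an assumption-laden sketch, not the paper's implementation), the snippet below computes the relative economic value of a binary flood warning for a user who pays a protection cost C whenever a warning is issued and suffers a loss L for each unwarned flood.

```python
"""Minimal cost-loss sketch for a yes/no flood warning.  The warning/event
series and the cost and loss values are invented for illustration."""
import numpy as np

def relative_economic_value(warnings, events, cost, loss):
    """Relative value V = (E_climatology - E_forecast) / (E_climatology - E_perfect).

    warnings, events: boolean arrays (warning issued / flood occurred)."""
    warnings = np.asarray(warnings, dtype=bool)
    events = np.asarray(events, dtype=bool)
    n = len(events)

    # Expense actually incurred when acting on the forecast.
    e_forecast = (cost * warnings.sum() + loss * (~warnings & events).sum()) / n
    # Best fixed strategy without a forecast: always protect or never protect.
    base_rate = events.mean()
    e_climatology = min(cost, loss * base_rate)
    # Perfect forecast: protect only when a flood occurs.
    e_perfect = cost * base_rate

    denom = e_climatology - e_perfect
    return (e_climatology - e_forecast) / denom if denom > 0 else 0.0

# Example: 3 floods in 10 days, two of them warned.
warn  = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
flood = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
print(relative_economic_value(warn, flood, cost=1.0, loss=10.0))
```

A value of 1 corresponds to a perfect forecast, 0 to no improvement over the best fixed strategy, and negative values to a warning system that costs the user more than acting on climatology alone.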
Abstract:
Smart healthcare is a complex domain for systems integration due to the human and technical factors and heterogeneous data sources involved. As part of a smart city, it is a complex area in which clinical functions require smart multi-system collaboration for effective communication among departments, and radiology is one of the areas that relies most heavily on intelligent information integration and communication. It therefore faces many challenges regarding integration and interoperability, such as information collision, heterogeneous data sources, policy obstacles, and procedure mismanagement. The purpose of this study is to conduct an analysis of the data, semantic, and pragmatic interoperability of systems integration in a radiology department, and to develop a pragmatic interoperability framework for guiding the integration. We select an on-going project at a local hospital for our case study. The project is to achieve data sharing and interoperability among Radiology Information Systems (RIS), Electronic Patient Record (EPR), and Picture Archiving and Communication Systems (PACS). Qualitative data collection and analysis methods are used. The data sources consisted of documentation including publications and internal working papers, one year of non-participant observation, and 37 interviews with radiologists, clinicians, directors of IT services, referring clinicians, radiographers, receptionists and a secretary. We identified four primary phases of the data analysis process for the case study: requirements and barriers identification, integration approach, interoperability measurements, and knowledge foundations. Each phase is discussed and supported by qualitative data. Through the analysis we also develop a pragmatic interoperability framework that summarises the empirical findings and proposes recommendations for guiding the integration in the radiology context.
Abstract:
Distributed generation plays a key role in reducing CO2 emissions and losses in power transmission. However, due to the nature of renewable resources, distributed generation requires suitable control strategies to assure reliability and optimality for the grid. Multi-agent systems are ideal candidates for providing distributed control of distributed generation stations as well as reliability and flexibility for grid integration. The proposed multi-agent energy management system consists of single-type agents that each control one or more grid entities, which are represented as generic sub-agent elements. The agent applies one control algorithm across all elements and uses a cost function to evaluate the suitability of an element as a supplier. The behaviour set by the agent's user defines which parameters of an element carry greater weight in the cost function, which allows the user to specify a preference among suppliers dynamically. This study shows the ability of the multi-agent energy management system to select suppliers according to the selection behaviour given by the user. The optimality of the supplier for the required demand is ensured by the cost function based on the parameters of the element.
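As a rough sketch of how such a behaviour-weighted cost function might work (the element parameters, weights and capacities below are hypothetical, not the paper's), the snippet ranks candidate supplier elements and picks the cheapest one able to cover the requested demand.

```python
"""Illustrative supplier selection via a behaviour-weighted cost function.
Lower cost is better; the behaviour dictionary encodes user preferences."""

def element_cost(params, weights):
    """Weighted sum of normalised element parameters."""
    return sum(weights.get(name, 0.0) * value for name, value in params.items())

def select_supplier(elements, weights, demand_kw):
    """Pick the lowest-cost element that can cover the requested demand."""
    feasible = [e for e in elements if e["capacity_kw"] >= demand_kw]
    return min(feasible, key=lambda e: element_cost(e["params"], weights), default=None)

# Hypothetical elements with parameters normalised to [0, 1].
elements = [
    {"name": "diesel_genset", "capacity_kw": 50,
     "params": {"price": 0.8, "emissions": 0.9, "availability": 0.1}},
    {"name": "pv_array", "capacity_kw": 30,
     "params": {"price": 0.2, "emissions": 0.0, "availability": 0.6}},
]

# A "green" behaviour weights emissions heavily; a "cheap" behaviour would weight price.
green_behaviour = {"price": 0.2, "emissions": 0.7, "availability": 0.1}
print(select_supplier(elements, green_behaviour, demand_kw=25)["name"])  # -> pv_array
```

Changing the behaviour weights at run time changes the ranking without touching the control algorithm, which is the dynamic preference mechanism the abstract describes.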
Abstract:
The integration of ecological principles into agricultural systems presents major opportunities for spreading risk at the crop and farm scale. This paper presents mechanisms by which diversity at several scales within the farming system can increase the stability of production. Diversity of above- and below-ground biota, as well as genetic and phenotypic diversity within crops, has an essential role in safeguarding farm production. Novel mixtures of legume-grass leys have been shown to provide potentially significant benefits for pollinator and decomposer ecosystem services, but realising the greatest improvements requires carefully tailored farm management, such as the timing of mowing or grazing and the type and depth of cultivation. Complex farmland landscapes such as agroforestry systems have the potential to support pollinator abundance and diversity and to spread risk across production enterprises. At the crop level, early results indicate that the vulnerability of pollen development, flowering and early grain set to abiotic stress can be ameliorated by managing flowering time through genotypic selection, and through the buffering effects of pollinators. Finally, the risk of sub-optimal quality in cereals can be mitigated through the integration of near-isogenic lines selected to escape specific abiotic stress events. We conclude that genotypic, phenotypic and community diversity can all be increased at multiple scales to enhance resilience in agricultural systems.
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures gives users options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and in some cases floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on a core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
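As a rough illustration of this benchmark-driven approach (not the actual model for shallow), the sketch below interpolates invented kernel and halo-exchange timings over local problem size to predict the per-step time of an untested domain decomposition; the benchmark numbers, grid sizes and one-cell halo are all assumptions.

```python
"""Interpolate measured benchmark times to predict runtime for a given
decomposition.  All timing numbers here are hypothetical."""
import numpy as np

# Hypothetical benchmark results: local grid points -> time per step (seconds).
compute_bench = {1_000: 0.8e-3, 10_000: 7.5e-3, 100_000: 80e-3}
halo_bench    = {100: 0.05e-3, 1_000: 0.3e-3, 10_000: 2.5e-3}   # halo points -> time

def interp(bench, x):
    """Linear interpolation over the benchmarked sizes."""
    xs = sorted(bench)
    ys = [bench[k] for k in xs]
    return float(np.interp(x, xs, ys))

def predict_step_time(nx, ny, px, py):
    """Predict time per step for an nx*ny grid split over a px*py task grid."""
    local_points = (nx // px) * (ny // py)
    halo_points = 2 * (nx // px) + 2 * (ny // py)   # one-cell-wide halo assumed
    return interp(compute_bench, local_points) + interp(halo_bench, halo_points)

print(predict_step_time(1024, 1024, px=4, py=8))
```

The same structure extends naturally to per-scenario benchmarks (core affinity, node population) by swapping in the benchmark table measured under each deployment scenario.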
Abstract:
Most current state-of-the-art haptic devices render only a single force; however, almost all human grasps are characterised by multiple forces and torques applied by the fingers and palms of the hand to the object. In this chapter we begin by considering the different types of grasp and then consider the physics of rigid objects that is needed for correct haptic rendering. We then describe an algorithm to represent the forces associated with grasp in a natural manner. The power of the algorithm is that it considers only the capabilities of the haptic device and requires no model of the hand, and thus applies to most practical grasp types. The technique is sufficiently general that it would also apply to multi-hand interactions, and hence to collaborative interactions where several people interact with the same rigid object. Key concepts in friction and rigid body dynamics are discussed and applied to the problem of rendering multiple forces, allowing the person to choose their grasp on a virtual object and perceive the resulting movement via the forces in a natural way. The algorithm also generalises well to support the computation of multi-body physics.
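As a minimal illustration of the rigid-body bookkeeping any multi-force grasp renderer must perform (an assumed sketch, not the chapter's algorithm), the snippet below accumulates several finger contact forces into a net force and torque about the object's centre of mass.

```python
"""Accumulate multiple contact forces on a rigid object into a net wrench.
Contact points and forces are invented for illustration."""
import numpy as np

def net_wrench(contacts, com):
    """contacts: list of (point, force) pairs in world coordinates.
    Returns (net_force, net_torque) about the centre of mass `com`."""
    f_net = np.zeros(3)
    tau_net = np.zeros(3)
    for point, force in contacts:
        f_net += force
        tau_net += np.cross(np.asarray(point) - com, force)
    return f_net, tau_net

# Two opposing finger forces slightly offset along z produce a pure torque.
contacts = [
    (np.array([0.05, 0.0, 0.01]), np.array([-1.0, 0.0, 0.0])),
    (np.array([-0.05, 0.0, -0.01]), np.array([1.0, 0.0, 0.0])),
]
force, torque = net_wrench(contacts, com=np.zeros(3))
print(force, torque)   # ~zero net force, non-zero torque about the y axis
```

Integrating the resulting wrench through the rigid-body equations of motion is what lets the user feel the object translate or rotate in response to their chosen grasp.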
Abstract:
Awareness of emerging situations in the dynamic operational environment of a robotic assistive device is an essential capability of such a cognitive system, based on its effective and efficient assessment of the prevailing situation. This allows the system to interact with the environment in a sensible, (semi-)autonomous and pro-active manner without the need for frequent interventions from a supervisor. In this paper, we report a novel generic Situation Assessment Architecture for robotic systems that directly assist humans, developed in the CORBYS project, and present its application in proof-of-concept Demonstrators developed and validated within the project: a robotic human follower and a mobile gait rehabilitation robotic system. We present an overview of the structure and functionality of the Situation Assessment Architecture, together with results and observations collected from initial validation on the two CORBYS Demonstrators.
Abstract:
This paper reviews the literature concerning the practice of using Online Analytical Processing (OLAP) systems to recall information stored by Online Transactional Processing (OLTP) systems. Such a review provides a basis for discussing the need for information recalled through OLAP systems to maintain the contexts of transactions captured by the respective OLTP system. The paper observes an industry trend in which OLTP systems process information into data that are then stored in databases without the business rules that were used to process them. This necessitates a practice whereby sets of business rules are used to extract, cleanse, transform and load data from disparate OLTP systems into OLAP databases to support complex reporting and analytics. These sets of business rules are usually not the same as the business rules used to capture data in particular OLTP systems. The paper argues that differences between the business rules used to interpret the same data sets risk creating gaps in semantics between information captured by OLTP systems and information recalled through OLAP systems. Literature concerning the modelling of business transaction information as facts with context, as part of the modelling of information systems, was reviewed to identify design trends that contribute to the design quality of OLTP and OLAP systems. The paper then argues that the quality of OLTP and OLAP systems design depends critically on the capture of facts with associated context, the encoding of facts with context into data with business rules, the storage and sourcing of data with business rules, the decoding of data with business rules back into facts with context, and the recall of facts with associated context. The paper proposes UBIRQ, a design model to aid the co-design of data and business-rules storage for OLTP and OLAP purposes. The proposed design model provides the opportunity to implement and use multi-purpose databases and business-rules stores for OLTP and OLAP systems. Such implementations would enable OLTP systems to record and store data together with the executions of business rules, allowing OLTP and OLAP systems to query data alongside the business rules used to capture them, thereby ensuring that information recalled via OLAP systems preserves the contexts of transactions as per the data captured by the respective OLTP system.
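As a loose illustration of the co-design idea (the schema and field names below are hypothetical and not part of the published UBIRQ model), the sketch stores each transactional fact together with references to the business-rule versions under which it was captured, so that later OLAP recall can carry that context.

```python
"""Hypothetical sketch: persist facts with references to the business rules
applied at capture time, rather than re-deriving context at load time."""
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class BusinessRule:
    rule_id: str
    version: str
    definition: str          # e.g. a validation or derivation expression

@dataclass(frozen=True)
class Fact:
    fact_id: str
    captured_at: datetime
    payload: dict            # the transactional data
    rule_refs: tuple         # (rule_id, version) pairs applied at capture time

discount_rule = BusinessRule("DISC-7", "2.1", "order_total > 100 => discount = 5%")
order_fact = Fact(
    fact_id="ORD-42",
    captured_at=datetime(2024, 3, 1, 9, 30),
    payload={"order_total": 120.0, "discount": 6.0},
    rule_refs=(("DISC-7", "2.1"),),
)
# An OLAP load can now carry rule_refs alongside the fact instead of applying
# a separate, possibly divergent, set of transformation rules.
print(order_fact.rule_refs)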
Abstract:
Cover crops are sown to provide a number of ecosystem services, including nutrient management, mitigation of diffuse pollution, improvement of soil structure and organic matter content, weed suppression, nitrogen fixation and provision of resources for biodiversity. Although the decision to sow a cover crop may be driven by a desire to achieve just one of these objectives, the diversity of cover crop species and mixtures available means that there is potential to combine a number of ecosystem services within the same crop and growing season. Designing multi-functional cover crops would potentially help to reconcile the often conflicting agronomic and environmental agendas and contribute to the optimal use of land. We present a framework for integrating multiple ecosystem services delivered by cover crops that aims to design a mixture of species with complementary growth habit and functionality. The optimal number and identity of species will depend on the services included in the analysis, the functional space represented by the available species pool and the community dynamics of the crop in terms of dominance and co-existence. Experience from a project that applied the framework to fertility-building leys in organic systems demonstrated its potential and emphasised the importance of the initial choice of species to include in the analysis.
Abstract:
Understanding the origin of the properties of metal-supported metal thin films is important for the rational design of bimetallic catalysts and other applications, but it is generally difficult to separate effects related to strain from those arising from interface interactions. Here we use density functional theory (DFT) to examine the structure and electronic behavior of few-layer palladium films on the rhenium (0001) surface, where there is negligible interfacial strain and other effects can therefore be isolated. Our DFT calculations predict stacking sequences and interlayer separations in excellent agreement with quantitative low-energy electron diffraction experiments. By theoretically simulating the Pd core-level X-ray photoemission spectra (XPS) of the films, we are able to interpret and assign the basic features of both low-resolution and high-resolution XPS measurements. The core levels at the interface shift to more negative energies, rigidly following the shifts in the same direction of the valence d-band center. We demonstrate that the valence band shift at the interface is caused by charge transfer from Re to Pd, which occurs mainly to valence states of hybridized s-p character rather than to the Pd d-band. Since the d-band filling is roughly constant, there is a correlation between the d-band center shift and its bandwidth. The resulting effect of this charge transfer on the valence d-band is thus analogous to the application of a lateral compressive strain on the adlayers. Our analysis suggests that charge transfer should be considered when describing the origin of core and valence band shifts in other metal/metal adlayer systems.
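For readers unfamiliar with the d-band quantities discussed above, the sketch below shows how a d-band center and d-band filling can be computed as moments of a d-projected density of states; the DOS used here is synthetic and is not taken from the paper's calculations.

```python
"""Sketch (not the paper's workflow) of extracting the d-band center and
filling from a d-projected density of states on a uniform energy grid."""
import numpy as np

def d_band_center_and_filling(energies, dos_d, e_fermi=0.0):
    """First moment of the d-projected DOS (the d-band center) and the
    occupied fraction of d states.  Energies are in eV relative to E_F."""
    center = np.sum(energies * dos_d) / np.sum(dos_d)
    filling = np.sum(dos_d[energies <= e_fermi]) / np.sum(dos_d)
    return center, filling

# Synthetic Gaussian d-band centered 1.8 eV below the Fermi level.
e = np.linspace(-8.0, 4.0, 600)
dos = np.exp(-((e + 1.8) ** 2) / (2.0 * 1.0 ** 2))
center, filling = d_band_center_and_filling(e, dos)
print(f"d-band center: {center:.2f} eV, filling: {filling:.2f}")
```

A downward (more negative) shift of this center at roughly constant filling, as described above, implies a broader band, which is the signature the paper attributes to charge transfer rather than strain.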
Abstract:
The concentrations of sulfate, black carbon (BC) and other aerosols in the Arctic are characterized by high values in late winter and spring (so-called Arctic Haze) and low values in summer. Models have long been struggling to capture this seasonality and especially the high concentrations associated with Arctic Haze. In this study, we evaluate sulfate and BC concentrations from eleven different models driven with the same emission inventory against a comprehensive pan-Arctic measurement data set over a time period of 2 years (2008–2009). The set of models consisted of one Lagrangian particle dispersion model, four chemistry transport models (CTMs), one atmospheric chemistry-weather forecast model and five chemistry climate models (CCMs), of which two were nudged to meteorological analyses and three were running freely. The measurement data set consisted of surface measurements of equivalent BC (eBC) from five stations (Alert, Barrow, Pallas, Tiksi and Zeppelin), elemental carbon (EC) from Station Nord and Alert and aircraft measurements of refractory BC (rBC) from six different campaigns. We find that the models generally captured the measured eBC or rBC and sulfate concentrations quite well, better than in previous comparisons. However, the aerosol seasonality at the surface is still too weak in most models. Concentrations of eBC and sulfate averaged over three surface sites are underestimated in winter/spring in all but one model (model means for January–March underestimated by 59 and 37 % for BC and sulfate, respectively), whereas concentrations in summer are overestimated in the model mean (by 88 and 44 % for July–September), but with overestimates as well as underestimates present in individual models. The most pronounced eBC underestimates, not included in the above multi-site average, are found for the station Tiksi in Siberia, where the measured annual mean eBC concentration is 3 times higher than the average annual mean for all other stations. This suggests an underestimate of BC sources in Russia in the emission inventory used. Based on the campaign data, biomass burning was identified as another cause of the modeling problems. For sulfate, very large differences were found in the model ensemble, with an apparent anti-correlation between modeled surface concentrations and total atmospheric columns. There is a strong correlation between observed sulfate and eBC concentrations, with consistent sulfate/eBC slopes found for all Arctic stations, indicating that the sources contributing to sulfate and BC are similar throughout the Arctic and that the aerosols are internally mixed and undergo similar removal. However, only three models reproduced this finding, whereas sulfate and BC are weakly correlated in the other models. Overall, no class of models (e.g., CTMs, CCMs) performed better than the others, and differences are independent of model resolution.
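As an illustration of two of the diagnostics used above, the sketch below computes a percentage bias of modelled concentrations against observations and the sulfate-to-eBC slope from an ordinary least-squares fit; all concentration values are invented.

```python
"""Illustrative model-evaluation diagnostics with made-up concentrations."""
import numpy as np

def percent_bias(model, obs):
    """Mean bias of the model relative to observations, in percent."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return 100.0 * (model.mean() - obs.mean()) / obs.mean()

def sulfate_ebc_slope(sulfate, ebc):
    """Slope of sulfate vs. eBC from a least-squares linear fit."""
    slope, _intercept = np.polyfit(ebc, sulfate, deg=1)
    return slope

obs_ebc = np.array([80, 95, 110, 70, 60])        # ng m-3, hypothetical winter values
mod_ebc = np.array([35, 40, 45, 30, 25])
sulfate = np.array([700, 830, 960, 620, 540])    # ng m-3, hypothetical
print(percent_bias(mod_ebc, obs_ebc))            # large winter underestimate
print(sulfate_ebc_slope(sulfate, obs_ebc))       # sulfate/eBC slope at one station
```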
Abstract:
We present a data-driven mathematical model of a key initiating step in platelet activation, a central process in the prevention of bleeding following injury. In vascular disease this process is activated inappropriately and causes thrombosis, heart attacks and stroke. The collagen receptor GPVI is the primary trigger for platelet activation at sites of injury. Understanding the complex molecular mechanisms initiated by this receptor is important for the development of more effective antithrombotic medicines. In this work we developed a series of nonlinear ordinary differential equation models that are direct representations of biological hypotheses surrounding the initial steps in GPVI-stimulated signal transduction. At each stage, model simulations were compared to our own quantitative, high-temporal-resolution experimental data, which guided further experimental design, data collection and model refinement. Much is known about the linear forward reactions within platelet signalling pathways, but the roles of putative reverse reactions are poorly understood. An initial model that included a simple constitutively active phosphatase was unable to explain the experimental data. Model revisions incorporating a more complex pathway of interactions (and specifically the phosphatase TULA-2) provided a good description of the experimental data, both for observations of phosphorylation in samples from one donor and for those from a wider population. Our model was used to investigate the levels of proteins involved in regulating the pathway and the effect of the low GPVI levels that have been associated with disease. Results indicate a clear separation between healthy and GPVI-deficient states with respect to the signalling cascade dynamics associated with Syk tyrosine phosphorylation and activation. Our approach reveals the central importance of this negative feedback pathway, which results in the temporal regulation of a specific class of protein tyrosine phosphatases controlling the rate, and therefore the extent, of GPVI-stimulated platelet activation.
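As a toy illustration of the negative-feedback motif described above (not the paper's fitted model), the sketch below integrates a two-variable ODE in which GPVI stimulation drives Syk phosphorylation and phosphorylated Syk up-regulates a TULA-2-like phosphatase that removes the phosphorylation; the rate constants are arbitrary.

```python
"""Toy ODE sketch of a Syk-phosphatase negative feedback loop."""
import numpy as np
from scipy.integrate import solve_ivp

def feedback_model(t, y, stim=1.0, k_phos=2.0, k_dephos=1.5, k_up=0.8, k_down=0.3):
    syk_p, phosphatase = y
    # Phosphorylation driven by stimulation; removal scaled by phosphatase level.
    d_syk_p = k_phos * stim * (1.0 - syk_p) - k_dephos * phosphatase * syk_p
    # Phosphatase is produced/activated in proportion to phosphorylated Syk.
    d_phosphatase = k_up * syk_p - k_down * phosphatase
    return [d_syk_p, d_phosphatase]

sol = solve_ivp(feedback_model, (0.0, 20.0), y0=[0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 20.0, 5)
print(sol.sol(t)[0])   # Syk phosphorylation rises, then relaxes as feedback engages
```

The qualitative behaviour, a rapid rise followed by feedback-limited relaxation, mirrors the temporal regulation the abstract attributes to the phosphatase pathway; the paper's actual models are larger and fitted to quantitative phosphorylation data.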
Abstract:
The paper explores pollination from a multi-level policy perspective and analyses the institutional fit and interplay of multi-faceted pollination-related policies. First, it asks what the major policies are that frame pollination at the EU level. Second, it explores the relationship between the EU policies and localised ways of understanding pollination. Third, it addresses how the concept of ecosystem services can aid in understanding the various ways of framing and governing the situation. The results show that the policy systems affecting pollination are abundant and that these systems create different kinds of pressure on stakeholders at several levels of society. The local-level concerns are more about the loss of pollination services than about the loss of pollinators. This points to a problem of fit between local activity driven by economic reasoning and biodiversity-driven EU policies. Here we see the concept of ecosystem services having some potential, since its operationalisation can combine economic and environmental considerations. Furthermore, the analysis shows how, instead of formal institutions, it seems that social norms, habits and motivation are the key to understanding and developing effective and attractive governance measures.
Abstract:
Floods are the most frequent of natural disasters, affecting millions of people across the globe every year. The anticipation and forecasting of floods at the global scale is crucial to preparing for severe events and providing early awareness where local flood models and warning services may not exist. As numerical weather prediction models continue to improve, operational centres are increasingly using the meteorological output from these to drive hydrological models, creating hydrometeorological systems capable of forecasting river flow and flood events at much longer lead times than has previously been possible. Furthermore, developments in, for example, modelling capabilities, data and resources in recent years have made it possible to produce global scale flood forecasting systems. In this paper, the current state of operational large scale flood forecasting is discussed, including probabilistic forecasting of floods using ensemble prediction systems. Six state-of-the-art operational large scale flood forecasting systems are reviewed, describing similarities and differences in their approaches to forecasting floods at the global and continental scale. Currently, operational systems have the capability to produce coarse-scale discharge forecasts in the medium-range and disseminate forecasts and, in some cases, early warning products, in real time across the globe, in support of national forecasting capabilities. With improvements in seasonal weather forecasting, future advances may include more seamless hydrological forecasting at the global scale, alongside a move towards multi-model forecasts and grand ensemble techniques, responding to the requirement of developing multi-hazard early warning systems for disaster risk reduction.
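As a minimal illustration of how an ensemble prediction system supports probabilistic flood forecasting, the sketch below converts hypothetical ensemble discharge forecasts into exceedance probabilities for a warning threshold; the discharge values and threshold are invented.

```python
"""Exceedance probabilities from an ensemble of discharge forecasts."""
import numpy as np

def exceedance_probability(ensemble_discharge, threshold):
    """Fraction of ensemble members with forecast discharge above threshold.

    ensemble_discharge: array of shape (members, lead_times), in m3/s.
    Returns one probability per lead time."""
    ensemble_discharge = np.asarray(ensemble_discharge)
    return (ensemble_discharge > threshold).mean(axis=0)

# 5 hypothetical members, 3 lead times (days).
ens = np.array([
    [120, 180, 260],
    [110, 150, 240],
    [130, 210, 300],
    [100, 140, 190],
    [125, 200, 280],
])
print(exceedance_probability(ens, threshold=200.0))   # -> [0.  0.2 0.8]
```

Operational systems typically compare such probabilities against return-period thresholds derived from long model climatologies before issuing early warning products.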
Abstract:
The first multi-model study to estimate the predictability of a boreal Sudden Stratospheric Warming (SSW) is performed using five NWP systems. During the 2012-2013 boreal winter, anomalous upward-propagating planetary wave activity was observed towards the end of December; this was followed by a rapid deceleration of the westerly circulation around 2 January 2013, and on 7 January 2013 the zonal mean zonal wind at 60°N and 10 hPa reversed to easterly. This stratospheric dynamical activity was followed by an equatorward shift of the tropospheric jet stream and by a high pressure anomaly over the North Atlantic, which resulted in severe cold conditions in the UK and Northern Europe. In most of the five models, the SSW event was predicted 10 days in advance. However, only some ensemble members in most of the models predicted the weakening of the westerly wind when the models were initialized 15 days in advance of the SSW. Further dynamical analysis shows that this event was characterized by anomalous planetary wave-1 amplification followed by anomalous wave-2 amplification in the stratosphere, which resulted in a split vortex between 6 and 8 January 2013. The models have some success in reproducing wave-1 activity when initialized 15 days in advance, but they generally failed to produce the wave-2 activity during the final days of the event. Detailed analysis shows that the models have reasonably good skill in forecasting the tropospheric blocking features that stimulate wave-2 amplification in the troposphere, but they have limited skill in reproducing wave-2 amplification in the stratosphere.
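As an illustration of the planetary-wave diagnostic referred to above, the sketch below extracts zonal wave-1 and wave-2 amplitudes from a synthetic height field by Fourier decomposition along longitude; the field values, grid and phases are invented.

```python
"""Zonal wavenumber amplitudes via FFT along longitude for a synthetic field."""
import numpy as np

def zonal_wave_amplitudes(field, max_wavenumber=3):
    """Amplitudes of zonal wavenumbers 1..max_wavenumber for a 1-D field
    sampled on a regular longitude grid."""
    n = field.size
    spectrum = np.fft.rfft(field)
    # For a real field, the amplitude of wavenumber k >= 1 is 2|c_k| / n.
    return {k: 2.0 * np.abs(spectrum[k]) / n for k in range(1, max_wavenumber + 1)}

lons = np.deg2rad(np.arange(0.0, 360.0, 2.5))
# Synthetic height field: a strong wave-1 plus a wave-2 component.
z = 30400.0 + 250.0 * np.cos(lons - 1.0) + 180.0 * np.cos(2.0 * lons + 0.4)
print(zonal_wave_amplitudes(z))   # ~{1: 250, 2: 180, 3: ~0}
```

Applied to, say, 10 hPa geopotential height at 60°N from each forecast member, the same decomposition yields the wave-1 and wave-2 amplification time series whose prediction the study evaluates.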