81 results for Model-driven Architecture, Goal-Oriented design, usability
Abstract:
A simple polynya flux model driven by standard atmospheric forcing is used to investigate the ice formation that took place during an exceptionally strong and consistent western New Siberian (WNS) polynya event in 2004 in the Laptev Sea. Whether formation rates are high enough to erode the stratification of the water column beneath is examined by adding the brine released during the 2004 polynya event to the average winter density stratification of the water body, preconditioned by summers with a cyclonic atmospheric forcing (a comparatively weakly stratified water column). Beforehand, the model performance is tested through a simulation of a well-documented event in April 2008. Neglecting the replenishment of water masses by advection into the polynya area, we find the probability for the occurrence of density-driven convection down to the bottom to be low. Our findings can be explained by the distinct vertical density gradient that characterizes the area of the WNS polynya and the apparent lack of extreme events in the eastern Laptev Sea. The simple approach is expected to be sufficiently rigorous, since the simulated event is exceptionally strong and consistent, the ice production and salt rejection rates are likely to be overestimated, and the amount of salt rejected is distributed over a comparatively weakly stratified water column. We conclude that the observed erosion of the halocline and formation of vertically mixed water layers during a WNS polynya event is therefore predominantly related to wind- and tidally driven turbulent mixing processes.
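A minimal sketch of the bookkeeping such a flux model implies, assuming illustrative parameter values that are not those of the study: salt rejected by a given amount of ice growth is mixed into the surface layer, and the resulting density increase is compared with the surface-to-bottom density difference to judge whether convection could reach the bottom.

```python
# Illustrative sketch only: all values below are assumptions, not the study's numbers.
RHO_ICE = 920.0      # kg m^-3, sea-ice density
RHO_W = 1025.0       # kg m^-3, reference seawater density
BETA = 0.8           # kg m^-3 per psu, approximate haline contraction coefficient

def salt_rejected(ice_growth_m, s_water=32.0, s_ice=5.0):
    """Salt mass (kg per m^2 of polynya area) released by ice_growth_m metres of ice."""
    return RHO_ICE * ice_growth_m * (s_water - s_ice) / 1000.0

def mixed_layer_salinity_increase(ice_growth_m, mixed_layer_depth_m, s_water=32.0):
    """Salinity increase (psu) if the rejected salt is mixed over the surface layer."""
    return salt_rejected(ice_growth_m, s_water) * 1000.0 / (RHO_W * mixed_layer_depth_m)

# Example: 0.5 m of ice growth mixed over a 10 m surface layer.
ds = mixed_layer_salinity_increase(ice_growth_m=0.5, mixed_layer_depth_m=10.0)
density_increase = BETA * ds
print(f"Surface density increase ~{density_increase:.2f} kg m^-3")
# Bottom-reaching convection is only plausible if this exceeds the observed
# surface-to-bottom density difference in the WNS polynya area.
```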
Abstract:
Three supramolecular complexes of Co(II) with SCN⁻/SeCN⁻ in combination with 4,4'-dipyridyl-N,N'-dioxide (dpyo), namely {[Co(SCN)₂(dpyo)₂]·(dpyo)}ₙ (1), {[Co(SCN)₂(dpyo)(H₂O)₂]·(H₂O)}ₙ (2) and {[Co(SeCN)₂(dpyo)(H₂O)₂]·(H₂O)}ₙ (3), have been synthesized and characterized by single-crystal X-ray analysis. Complex 1 is a rare example of a dpyo-bridged two-dimensional (2D) coordination polymer, and π-stacked dpyo supramolecular rods are generated by the lattice dpyo, passing through the rhombic grid of stacked layers and resulting in a three-dimensional (3D) superstructure. Complexes 2 and 3 are isomorphous one-dimensional (1D) coordination polymers [-Co-dpyo-Co-] that undergo self-assembly leading to a bilayer architecture derived through an R₂²(8) H-bonding synthon between coordinated water and dpyo oxygen. A reinvestigation of the coordination polymers [Mn(SCN)₂(dpyo)(H₂O)(MeOH)]ₙ (4) and {[Fe(SCN)₂(dpyo)(H₂O)₂]·(H₂O)}ₙ (5), reported recently by our group [Manna et al. Indian J. Chem. 2006, 45A, 1813], reveals a brick-wall topology rather than a bilayer architecture; this difference is due to the decisive role of S···S/Se···Se interactions in determining the helical nature of the chains in 4 and 5, as compared to the zigzag polymeric chains in 2 and 3, although the same R₂²(8) synthon is responsible for the supramolecular assembly in these complexes.
Abstract:
This paper discusses the problems inherent in traditional supply chain management's forecast and inventory management processes that arise when tackling a demand-driven supply chain. A demand-driven supply chain management architecture developed by Orchestr8 Ltd., U.K. is described to demonstrate its advantages over traditional supply chain management. Within this architecture, a metrics reporting system is designed using business intelligence technology that supports users in decision making and in planning supply activities based on supply chain health.
Abstract:
In this chapter we described how the inclusion of a model of a human arm, combined with the measurement of its neural input and a predictor, can provide robustness under time delay to a previously proposed teleoperator design. Our trials gave clear indications of the superiority of the NPT scheme over traditional architectures as well as the modified Yokokohji and Yoshikawa architectures. Its fundamental advantages are the time-lead of the slave, the more efficient and more natural-feeling manipulation it provides, and the fact that incorporating an operator arm model leads to more credible stability results. Finally, its simplicity allows local control techniques that are less likely to fail to be employed. However, a significant advantage of the enhanced Yokokohji and Yoshikawa architecture results from the very fact that it is a conservative modification of current designs. Under large prediction errors, it can provide robustness by directing the master and slave states to their means and, since it relies on the passivity of the mechanical part of the system, it would not confuse the operator. An experimental implementation of the techniques will provide further evidence for the performance of the proposed architectures. The employment of neural networks and fuzzy logic, which will provide an adaptive model of the human arm and robustifying control terms, is scheduled for the near future.
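As a rough, hypothetical illustration of the predictor idea described above (the second-order arm model and its parameters are assumptions, not the authors' formulation), the measured operator input can be used to integrate an arm model forward across the communication delay so that the predicted state leads the delayed measurement:

```python
import numpy as np

# Hypothetical mass-spring-damper stand-in for the operator's arm:
#   m*x'' + b*x' + k*x = u,  with u the measured (neural/force) input.
def predict_arm_state(x, v, delayed_inputs, dt, m=1.5, b=8.0, k=30.0):
    """Integrate the assumed arm model forward over the delay horizon
    (one recorded input sample per time step) to predict the current state."""
    for u in delayed_inputs:
        a = (u - b * v - k * x) / m   # acceleration from the arm model
        v = v + a * dt
        x = x + v * dt
    return x, v

# Example: 200 ms round-trip delay sampled at 1 kHz, constant input over the delay.
dt, delay_steps = 1e-3, 200
inputs = np.full(delay_steps, 2.0)
x_pred, v_pred = predict_arm_state(0.0, 0.0, inputs, dt)
print(f"predicted position {x_pred:.4f} m, velocity {v_pred:.4f} m/s")
```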
Abstract:
Current methods and techniques used in designing organisational performance measurement systems do not consider the multiple aspects of business processes or the semantics of data generated during the lifecycle of a product. In this paper, we propose an organisational performance measurement systems design model that is based on the semantics of an organisation, its business processes and the product lifecycle. Organisational performance measurement is examined from both academic and practice perspectives. This multi-discipline approach is used as a research tool to explore the weaknesses of current models used to design organisational performance measurement systems. This helped in identifying the gaps in research and practice concerning the issues and challenges in designing information systems for measuring the performance of an organisation. The knowledge sources investigated include on-going and completed research project reports, scientific and management literature, and practitioners' magazines.
Abstract:
The complexity of current and emerging high performance architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven performance modelling approach is outlined that is appropriate for modern multicore architectures. The approach is demonstrated by constructing a model of a simple shallow water code on a Cray XE6 system, from application-specific benchmarks that illustrate precisely how architectural characteristics impact performance. The model is found to recreate observed scaling behaviour up to 16K cores, and is used to predict optimal rank-core affinity strategies, exemplifying the type of problem such a model can be used for.
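A hedged sketch of how such a benchmark-driven model can compose per-rank compute and halo-exchange costs to compare deployment choices; the functional forms and constants below are placeholders, not measured values from the paper.

```python
# Hypothetical sketch: combine benchmark-derived terms into a per-timestep estimate.
def compute_time(cells_per_core, t_per_cell=2.5e-8):
    """Loop-based array updates: assumed linear in the local grid size."""
    return cells_per_core * t_per_cell

def halo_time(halo_bytes, latency=1.5e-6, inv_bandwidth=2.0e-10):
    """Nearest-neighbour halo exchange: assumed latency + bandwidth cost."""
    return latency + halo_bytes * inv_bandwidth

def predicted_step_time(nx, ny, ranks_x, ranks_y, bytes_per_cell=8):
    """Per-timestep estimate for one decomposition (contention ignored)."""
    lx, ly = nx // ranks_x, ny // ranks_y          # local sub-domain size
    halo_bytes = 2 * (lx + ly) * bytes_per_cell    # one-cell halo on four faces
    return compute_time(lx * ly) + halo_time(halo_bytes)

# Compare two decompositions of a 4096 x 4096 grid on 1024 cores.
for rx, ry in [(32, 32), (64, 16)]:
    print(rx, ry, f"{predicted_step_time(4096, 4096, rx, ry):.6f} s")
```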
Abstract:
The idea of buildings in harmony with nature can be traced back to ancient times. Increasing concern about sustainability-oriented buildings has added new challenges to architectural design and called for new design responses. Sustainable design integrates and balances human geometries and natural ones. Since fractal geometry is often described as the language of nature, it is natural to assume that it could play a role in developing new forms of aesthetics and sustainable architectural design. This paper gives a brief description of fractal geometry theory and presents its current status and recent developments through an illustrative review of fractal case studies in architectural design, thereby providing a bridge between fractal geometry and architectural design.
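For readers unfamiliar with the underlying measure, a minimal box-counting sketch (a standard estimator of fractal dimension, not code from the paper) shows how the dimension of a facade outline or plan could be estimated from a binary image:

```python
import numpy as np

def box_counting_dimension(image, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a binary 2-D array:
    count occupied boxes at several scales and fit log N(s) ~ -D log s."""
    counts = []
    for s in box_sizes:
        h, w = image.shape
        occupied = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if image[i:i + s, j:j + s].any():
                    occupied += 1
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# Toy example: a diagonal line has dimension close to 1.
img = np.eye(64, dtype=bool)
print(f"estimated dimension ~{box_counting_dimension(img):.2f}")
```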
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on a core's location within the system. Heterogeneity further increases with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, with interpolation between results as necessary.
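A hedged sketch of the final step described above, interpolating between measured benchmark timings to estimate the cost of a deployment scenario that falls between measured problem sizes; the numbers are invented placeholders, not the paper's measurements.

```python
import numpy as np

# Hypothetical benchmark table: local cells per core -> measured update time (s).
bench_cells = np.array([1e4, 5e4, 1e5, 5e5, 1e6])
bench_times = np.array([2.6e-4, 1.3e-3, 2.7e-3, 1.4e-2, 2.9e-2])

def interpolated_compute_time(cells_per_core):
    """Linear interpolation between benchmarked problem sizes, as the model
    does when a deployment scenario lies between measured points."""
    return np.interp(cells_per_core, bench_cells, bench_times)

# Estimate the per-step cost for a decomposition giving 2.5e5 cells per core.
print(f"~{interpolated_compute_time(2.5e5):.4f} s per step")
```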
Abstract:
Background In the UK, occupational therapy pre-discharge home visits are routinely carried out as a means of facilitating safe transfer from the hospital to home. Whilst they are an integral part of practice, there is little evidence to demonstrate that they have a positive outcome on the discharge process. Current issues for patients are around the speed of home visits and the lack of shared decision making in the process, resulting in less than 50 % of the specialist equipment installed actually being used by patients on follow-up. To improve practice there is an urgent need to examine other ways of conducting home visits to facilitate safe discharge. We believe that Computerised 3D Interior Design Applications (CIDAs) could be a means to support more efficient, effective and collaborative practice. A previous study explored practitioners' perceptions of using CIDAs; however, it is important to ascertain older adults' views about the usability of the technology and to compare findings. This study explores the perceptions of community-dwelling older adults with regards to adopting and using CIDAs as an assistive tool for the home adaptations process. Methods Ten community-dwelling older adults participated in individual interactive task-focused usability sessions with a customised CIDA, utilising the think-aloud protocol and individual semi-structured interviews. Template analysis was used to carry out both deductive and inductive analysis of the think-aloud and interview data. Initially, a deductive stance was adopted, using the three pre-determined high-level themes of the technology acceptance model (TAM): Perceived Usefulness (PU), Perceived Ease of Use (PEOU) and Actual Use (AU). Inductive template analysis was then carried out on the data within these themes, from which a number of sub-themes emerged. Results Regarding PU, participants believed CIDAs served as a useful visual tool and saw clear potential to facilitate shared understanding and partnership in care delivery. For PEOU, participants were able to create 3D home environments; however, a number of usability issues must still be addressed. The AU theme revealed that the most likely usage scenario would be collaborative, involving both patient and practitioner, as many participants did not feel confident or see sufficient value in using the application autonomously. Conclusions This research found that older adults perceived CIDAs as likely to serve as a valuable tool that facilitates and enhances levels of patient/practitioner collaboration and empowerment. Older adults also suggested a redesign of the interface so that less sophisticated dexterity and motor functions are required. However, older adults were not confident in, or did not see sufficient value in, using the application autonomously. Future research is needed to further customise the CIDA software, in line with the outcomes of this study, and to explore the potential of collaborative patient/practitioner-based deployment of the application.
Abstract:
This paper is an initial work towards developing an e-Government benchmarking model that is user-centric. To achieve this goal, public service delivery is discussed first, including the transition to online public service delivery and the need to provide public services using electronic media. Two major e-Government benchmarking methods are critically discussed, and the need to develop a standardized benchmarking model that is user-centric is presented. To properly articulate user requirements in service provision, an organizational semiotic method is suggested.
Abstract:
The formulation of a new process-based crop model, the general large-area model (GLAM) for annual crops, is presented. The model has been designed to operate on spatial scales commensurate with those of global and regional climate models. It aims to simulate the impact of climate on crop yield. Procedures for model parameter determination and optimisation are described, and demonstrated for the prediction of groundnut (i.e. peanut; Arachis hypogaea L.) yields across India for the period 1966-1989. Optimal parameters (e.g. extinction coefficient, transpiration efficiency, rate of change of harvest index) were stable over space and time, provided the estimate of the yield technology trend was based on the full 24-year period. The model has two location-specific parameters: the planting date and the yield gap parameter. The latter varies spatially and is determined by calibration. The optimal value varies slightly when different input data are used. The model was tested using a historical data set on a 2.5° x 2.5° grid to simulate yields. Three sites are examined in detail: grid cells from Gujarat in the west, Andhra Pradesh towards the south, and Uttar Pradesh in the north. Agreement between observed and modelled yield was variable, with correlation coefficients of 0.74, 0.42 and 0, respectively. Skill was highest where the climate signal was greatest, and correlations were comparable to or greater than correlations with seasonal mean rainfall. Yields from all 35 cells were aggregated to simulate all-India yield. The correlation coefficient between observed and simulated yields was 0.76, and the root mean square error was 8.4% of the mean yield. The model can be easily extended to any annual crop for the investigation of the impacts of climate variability (or change) on crop yield over large areas. (C) 2004 Elsevier B.V. All rights reserved.
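A brief sketch of the evaluation statistics quoted above (correlation between observed and simulated yields, and root mean square error expressed as a percentage of the mean yield); the yield values here are invented for illustration, not the GLAM/India results.

```python
import numpy as np

# Illustrative observed vs. simulated yields (t/ha); values are invented.
observed = np.array([0.95, 1.10, 0.80, 1.25, 1.05, 0.90])
simulated = np.array([1.00, 1.05, 0.85, 1.20, 1.10, 0.95])

r = np.corrcoef(observed, simulated)[0, 1]                 # correlation coefficient
rmse = np.sqrt(np.mean((observed - simulated) ** 2))       # root mean square error
rmse_pct = 100.0 * rmse / observed.mean()                  # as % of mean observed yield
print(f"r = {r:.2f}, RMSE = {rmse_pct:.1f}% of mean observed yield")
```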
Abstract:
Presented herein is an experimental design that allows the effects of several radiative forcing factors on climate to be estimated as precisely as possible from a limited suite of atmosphere-only general circulation model (GCM) integrations. The forcings include the combined effect of observed changes in sea surface temperatures, sea ice extent, stratospheric (volcanic) aerosols, and solar output, plus the individual effects of several anthropogenic forcings. A single linear statistical model is used to estimate the forcing effects, each of which is represented by its global mean radiative forcing. The strong collinearity in time between the various anthropogenic forcings poses a technical problem that is overcome through the design of the experiment. This design uses every combination of the anthropogenic forcings rather than a few highly replicated ensembles, as is more commonly used in climate studies. Not only is this design highly efficient for a given number of integrations, but it also allows the estimation of (non-additive) interactions between pairs of anthropogenic forcings. The simulated land surface air temperature changes since 1871 have been analyzed. The changes in natural and oceanic forcing, which itself contains some forcing from anthropogenic and natural influences, have the most influence. For the global mean, increasing greenhouse gases and the indirect aerosol effect had the largest anthropogenic effects. It was also found that an interaction between these two anthropogenic effects exists in the atmosphere-only GCM. This interaction is similar in magnitude to the individual effects of changing tropospheric and stratospheric ozone concentrations or to the direct (sulfate) aerosol effect. Various diagnostics are used to evaluate the fit of the statistical model. For the global mean, these show that the land temperature response is proportional to the global mean radiative forcing, reinforcing the use of radiative forcing as a measure of climate change. The diagnostic tests also show that the linear model was suitable for analyses of land surface air temperature at each GCM grid point. Therefore, the linear model provides precise estimates of the space-time signals for all forcing factors under consideration. For simulated 50-hPa temperatures, results show that tropospheric ozone increases have contributed to stratospheric cooling over the twentieth century almost as much as changes in well-mixed greenhouse gases.
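The statistical idea can be illustrated in toy form (the synthetic data and coefficients below are assumptions, not the paper's values): with every combination of two anthropogenic forcings switched on or off, ordinary least squares on a factorial design separates the main effects from their non-additive interaction.

```python
import numpy as np

# Toy 2-factor factorial design: each row is one integration with greenhouse-gas
# forcing (g) and indirect-aerosol forcing (a) either on (1) or off (0).
g = np.array([0, 0, 1, 1, 0, 0, 1, 1])
a = np.array([0, 1, 0, 1, 0, 1, 0, 1])
rng = np.random.default_rng(0)
# Synthetic temperature response: two main effects plus a non-additive interaction.
y = 0.8 * g - 0.5 * a - 0.2 * g * a + rng.normal(0.0, 0.05, g.size)

# Design matrix: intercept, two main effects, and their interaction term.
X = np.column_stack([np.ones_like(g), g, a, g * a]).astype(float)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, GHG, aerosol, interaction:", np.round(coef, 2))
```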