974 results for user requirements


Relevance:

60.00%

Publisher:

Abstract:

Healthcare information systems have the potential to enhance productivity, lower costs, and reduce medication errors by automating business processes. However, issues such as system complexity, system capabilities in relation to user requirements, and rapid changes in business needs affect the use of these systems. In many cases, the failure of a system to meet business process needs has pushed users to develop alternative work processes (workarounds) to fill this gap. Some research has been undertaken on why users are motivated to perform and create workarounds, but very little research has assessed the consequences for patient safety. Moreover, the impact of these workarounds on the organisation, and how to quantify their risks and benefits, is not well analysed. In general, there is a lack of rigorous understanding and of qualitative and quantitative studies on healthcare IS workarounds and their outcomes. This project applies a Normative Approach for Modelling Workarounds to develop a Model of Motivation, Constraints, and Consequences. It aims to understand the phenomenon in depth and to provide guidelines to organisations on how to deal with workarounds. Finally, the method is demonstrated on a case study example and its relative merits are discussed.

Relevance:

60.00%

Publisher:

Abstract:

The NeuroHub project aims to develop a research information system for neuroscientists at three partner institutions: Oxford, Reading and Southampton. Each research group has different working practices, research methodologies and user requirements, which have led to the development of a system that supports a wide variety of tasks in the neuroscience research life cycle. In this paper, we present how these user requirements have been translated into a research information environment that supports a community of over 70 researchers using the system for day-to-day research tasks.

Relevance:

60.00%

Publisher:

Abstract:

As new buildings are constructed in response to changes in technology or user requirements, the value of the existing stock will decline in relative terms. This is termed economic depreciation, and it may be influenced by the age and quality of buildings, the amount and timing of expenditure, and wider market and economic conditions. This study examines why individual assets experience different depreciation rates, applying panel regression techniques to 375 UK office and industrial assets. Results suggest that rental value depreciation rates decline as buildings age, and that a composite measure of age and quality explains more of the variation in depreciation than age alone. Furthermore, economic and local real estate market conditions are significant in explaining how depreciation rates change over time.
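As a rough illustration of the kind of panel regression described above (the study's actual specification, variables and data are not reproduced here), the sketch below fits a fixed-effects regression of a synthetic depreciation rate on age, a quality proxy and a market-conditions indicator; all variable names and values are invented.

```python
# Illustrative fixed-effects panel regression on synthetic data (not the
# study's model or data): depreciation rate regressed on age, a composite
# quality proxy and a market-conditions indicator, with asset and year
# fixed effects absorbing time-invariant and market-wide differences.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_assets, n_years = 50, 10

panel = pd.DataFrame({
    "asset": np.repeat(np.arange(n_assets), n_years),
    "year": np.tile(np.arange(2005, 2005 + n_years), n_assets),
})
n = len(panel)
panel["age"] = rng.integers(1, 40, n)          # building age in years
panel["quality"] = rng.normal(0, 1, n)          # composite quality proxy
panel["market"] = rng.normal(0, 1, n)           # local market conditions
# Synthetic depreciation rate: declines with age, varies with quality/market.
panel["dep_rate"] = (0.03 - 0.0004 * panel["age"]
                     - 0.005 * panel["quality"]
                     - 0.004 * panel["market"]
                     + rng.normal(0, 0.005, n))

model = smf.ols("dep_rate ~ age + quality + market + C(asset) + C(year)",
                data=panel).fit()
print(model.params[["age", "quality", "market"]])
```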

Relevance:

60.00%

Publisher:

Abstract:

The World Wide Web has brought many challenges, such as ever-expanding content, resource diversity, and the maintenance and updating of content. The Web-based database (WBDB) is one answer to these challenges. Currently, the most commonly used WBDB architecture is the three-tier architecture, which still lacks the flexibility to adapt to frequently changing user requirements. In this paper, we propose a hybrid interactive architecture for WBDB based on reactive system concepts. In this architecture, sensors capture users' frequently changing requirements, and a decision-making manager agent processes them and generates SQL commands dynamically. Efficiency and flexibility are thus gained from this architecture, and the performance of the WBDB is enhanced accordingly.
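A minimal sketch of this sensor / decision-making-manager idea follows; the class names and the SQL-generation logic are illustrative assumptions, not the architecture proposed in the paper.

```python
# Illustrative sketch (hypothetical names, not the paper's code): a "sensor"
# observes a changed user requirement and the manager agent turns it into a
# parameterised SQL command at run time.
import sqlite3
from dataclasses import dataclass

@dataclass
class RequirementEvent:
    """A detected change in what the user wants to query."""
    table: str
    columns: list
    filters: dict

class RequirementSensor:
    """Watches the interaction layer and emits requirement events."""
    def detect(self, user_input: dict) -> RequirementEvent:
        return RequirementEvent(
            table=user_input["table"],
            columns=user_input.get("columns", ["*"]),
            filters=user_input.get("filters", {}),
        )

class DecisionMakingManager:
    """Turns requirement events into parameterised SQL, keeping the
    generation policy out of the presentation and data tiers."""
    def to_sql(self, event: RequirementEvent):
        cols = ", ".join(event.columns)
        where = " AND ".join(f"{k} = ?" for k in event.filters)
        sql = f"SELECT {cols} FROM {event.table}"
        if where:
            sql += f" WHERE {where}"
        return sql, tuple(event.filters.values())

# Usage: the sensor reacts to a changed requirement, the manager builds the SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'open'), (2, 'closed')")

event = RequirementSensor().detect({"table": "orders",
                                    "filters": {"status": "open"}})
sql, params = DecisionMakingManager().to_sql(event)
print(sql, conn.execute(sql, params).fetchall())
```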

Relevance:

60.00%

Publisher:

Abstract:

A distributed database system is subject to site failures and link failures. This paper presents a reactive system approach to achieving fault tolerance in such a system. Reactive system concepts are an attractive paradigm for system design, development and maintenance because they separate policies from mechanisms. In the paper we give a solution that uses different reactive modules to implement the fault-tolerance policies and the failure-detection mechanisms. The solution shows that the two can be separated without impact on each other; thus the system can adapt to constant changes in environments and user requirements.
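The sketch below illustrates the kind of policy/mechanism separation described, assuming hypothetical module names and a probe-based detector rather than the paper's implementation: a failure-detection mechanism reports site failures as events, and an interchangeable policy object decides how the system reacts.

```python
# Illustrative policy/mechanism separation for fault tolerance (hypothetical
# names): the detector is the mechanism, the recovery policy is pluggable.
from typing import Callable, Optional, Protocol

class FailureEvent:
    def __init__(self, kind: str, target: str):
        self.kind = kind      # "site" or "link"
        self.target = target  # e.g. a node or connection identifier

class RecoveryPolicy(Protocol):
    def react(self, event: FailureEvent) -> str: ...

class FailoverToReplica:
    """One possible policy: redirect transactions to a replica site."""
    def react(self, event: FailureEvent) -> str:
        return f"redirect traffic for {event.target} to its replica"

class BlockUntilRecovered:
    """An alternative policy: queue transactions until the failure clears."""
    def react(self, event: FailureEvent) -> str:
        return f"queue transactions touching {event.target}"

class FailureDetector:
    """The mechanism: detects failures (here via an injected liveness probe)
    and hands events to whatever policy is currently installed."""
    def __init__(self, policy: RecoveryPolicy):
        self.policy = policy

    def probe(self, site: str, is_alive: Callable[[str], bool]) -> Optional[str]:
        if not is_alive(site):
            return self.policy.react(FailureEvent("site", site))
        return None

# The policy can be swapped without touching the detection mechanism.
detector = FailureDetector(FailoverToReplica())
print(detector.probe("site-3", lambda s: False))
detector.policy = BlockUntilRecovered()
print(detector.probe("site-3", lambda s: False))
```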

Relevance:

60.00%

Publisher:

Abstract:

A distributed database system is subject to site failures and link failures. This paper presents a reactive system approach to achieving fault tolerance in such a system. Reactive system concepts are an attractive paradigm for system design, development and maintenance because they separate policies from mechanisms. In the paper we give a solution that uses different reactive modules to implement the fault-tolerant policies and the failure-detection mechanisms. The solution shows that the two can be separated without impact on each other; thus the system can adapt to constant changes in user requirements.

Relevance:

60.00%

Publisher:

Abstract:

Most fault-tolerant application programs cannot cope with constant changes in their environments and user requirements because they embed policies and mechanisms together, so that if either the policies or the mechanisms change, the whole program has to be changed as well. This paper presents a reactive system approach to overcoming this limitation. Reactive system concepts are an attractive paradigm for system design, development and maintenance because they separate policies from mechanisms. In the paper we propose a generic reactive system architecture and use group communication primitives to model it. We then implement it as a generic package that can be applied in any distributed application. Performance results show that it can be used effectively in a distributed environment.
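As an illustration of the generic sensor / decision-maker / actuator pattern, the sketch below uses a toy in-process "group multicast" primitive in place of a real group communication layer; the names and the alarm example are assumptions, not the paper's package.

```python
# Rough sketch of a generic reactive architecture modelled on group
# communication (hypothetical example): sensors multicast observations to a
# decision-maker group, whose replaceable policy multicasts commands to actuators.
from collections import defaultdict
from typing import Callable, Dict, List

class Group:
    """Toy group-communication primitive: members join a named group and
    multicast() delivers a message to every member of that group."""
    _members: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    @classmethod
    def join(cls, name: str, handler: Callable[[dict], None]) -> None:
        cls._members[name].append(handler)

    @classmethod
    def multicast(cls, name: str, message: dict) -> None:
        for handler in cls._members[name]:
            handler(message)

# Sensor: observes the environment and reports to the decision-maker group.
def temperature_sensor(reading: float) -> None:
    Group.multicast("decision-makers", {"type": "reading", "value": reading})

# Decision-maker: applies the (replaceable) policy and commands the actuators.
def threshold_policy(message: dict) -> None:
    if message["value"] > 75.0:
        Group.multicast("actuators", {"action": "raise-alarm"})

# Actuator: carries out the mechanism selected by the policy.
def alarm_actuator(command: dict) -> None:
    print("actuator executing:", command["action"])

Group.join("decision-makers", threshold_policy)
Group.join("actuators", alarm_actuator)
temperature_sensor(80.0)   # -> actuator executing: raise-alarm
```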

Relevance:

60.00%

Publisher:

Abstract:

Presents a case study of design management within an Australian design-construct organization on a large residential apartment project, with the purpose of identifying and analysing issues associated with the organization, responsibilities and stages of development in a typical design-construct project. Discusses the nature of introspection in the Australian construction industry, the shift in procurement methods, the design-and-build approach, whole-life issues, the need for a design manager, and the role of the facilities manager. Profiles the case study organization and its contracts and procurement methods, before focusing on weaknesses in the company, the role of the project design development manager in leading the design team, managing the design consultants, and interacting with and advising the developer on design decisions. Suggests from the exercise that: the project manager should remain the overall project leader, manager and interface between design, cost, programme, buildability, construction and user requirements; the design manager should be responsible for issuing all documentation; and the design cost manager, in conjunction with the design manager, should be responsible for verifying that the developed design accords with project budgets, the project brief and quality requirements.

Relevance:

60.00%

Publisher:

Abstract:

The adoption of simulation as a powerful enabling method for knowledge management is hampered by the relatively high cost of model construction and maintenance. A two-step procedure, based on a divide-and-conquer strategy, is proposed in this paper. First, a simulation program is partitioned based on a reinterpretation of the model-view-controller architecture. The individual parts are then connected through abstractions to guard against possible changes resulting from shifting user requirements. We explore the applicability of these design principles through a detailed discussion of an industry case study. The knowledge-based perspective guides the design of the architecture to accommodate the need for emulation without compromising the integrity of the simulation program. The synergy between simulation and a knowledge management perspective, as shown in the case study, has the potential to achieve rapid development of models at low maintenance cost. This could, in turn, facilitate an extension of the use of simulation in the knowledge management domain.
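The sketch below illustrates one way such a partitioning can look; it is not the paper's design. The simulation logic, the reporting layer and the driver communicate only through abstract interfaces, so switching from pure simulation to emulation against a real-time clock touches only one part.

```python
# Illustrative model-view-controller partitioning of a simulation program
# (hypothetical classes): swapping the clock abstraction switches between
# simulation and emulation without changing the model or the view.
from abc import ABC, abstractmethod
import time

class Clock(ABC):
    @abstractmethod
    def now(self) -> float: ...
    @abstractmethod
    def advance(self, dt: float) -> None: ...

class SimulatedClock(Clock):
    def __init__(self): self._t = 0.0
    def now(self): return self._t
    def advance(self, dt): self._t += dt           # jump instantly

class EmulationClock(Clock):
    def __init__(self): self._t0 = time.time()
    def now(self): return time.time() - self._t0
    def advance(self, dt): time.sleep(dt)          # wait in real time

class QueueModel:
    """The 'model': minimal process logic, unaware of how time passes."""
    def __init__(self): self.jobs_done = 0
    def step(self): self.jobs_done += 1

class ConsoleView:
    """The 'view': reporting, unaware of the simulation internals."""
    def report(self, t: float, model: QueueModel):
        print(f"t={t:.1f}s jobs_done={model.jobs_done}")

def controller(model: QueueModel, view: ConsoleView, clock: Clock, steps: int):
    """The 'controller': drives the run through the abstract Clock interface."""
    for _ in range(steps):
        model.step()
        clock.advance(1.0)
        view.report(clock.now(), model)

controller(QueueModel(), ConsoleView(), SimulatedClock(), steps=3)
```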

Relevance:

60.00%

Publisher:

Abstract:

Parameter-Driven Systems (PDS) are widely used in commerce for large-scale applications. Reusability is achieved in a PDS design by moving implicit control structures out of the software and storing them as explicit data in database files. This approach can accommodate varied user requirements without tedious modification of the software. In order to specify appropriate parameters in a system, knowledge of both the business activities and the system behaviour is required. For large, complex software packages this task becomes time consuming and requires specialist knowledge, and even then consistency and correctness cannot be guaranteed. My research studied the types of knowledge required and the agents involved in PDS customisation, and identified the associated problems and constraints. A solution is proposed and implemented as an Intelligent Assistant prototype rather than as a manual approach. Three areas of achievement are highlighted. 1. The characteristics and problems of maintaining parameter instances in a PDS are defined. It is found that verification cannot be completed with technical/structural knowledge alone; a context is needed to provide semantic information and the related business activities (and thus the implemented parameters), so that mainline functions can be related to each other. 2. A knowledge-based modelling approach is proposed and demonstrated via a practical implementation. A Specification Language was designed which can model the various types of knowledge in a PDS and encapsulate their relationships. The Knowledge-Based System (KBS) developed verifies parameters against the interpreted model of a given context. 3. The performance of the Intelligent Assistant prototype was well received by the domain specialist from the participating organisation. The modelling and KBS approach developed in my research offers considerable promise for solving practical problems in the software industry.
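As a small illustration of knowledge-based parameter verification (not the thesis's Specification Language or KBS), the sketch below checks parameter instances against both structural rules and a business-activity context; all rule, parameter and activity names are hypothetical.

```python
# Illustrative knowledge-based check of PDS parameter instances: rules may
# refer to structural constraints and to the business-activity "context".
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Context:
    """Business activities enabled for a given installation."""
    activities: set = field(default_factory=set)

@dataclass
class Rule:
    description: str
    check: Callable[[Dict, Context], bool]

def verify(params: Dict, context: Context, rules: List[Rule]) -> List[str]:
    """Return descriptions of every violated rule."""
    return [r.description for r in rules if not r.check(params, context)]

rules = [
    Rule("credit_limit must be positive",
         lambda p, c: p.get("credit_limit", 0) > 0),
    Rule("backorder handling requires the 'warehousing' activity",
         lambda p, c: not p.get("allow_backorders")
                      or "warehousing" in c.activities),
]

params = {"credit_limit": 5000, "allow_backorders": True}
ctx = Context(activities={"invoicing"})        # no warehousing configured
print(verify(params, ctx, rules))
# -> ["backorder handling requires the 'warehousing' activity"]
```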

Relevance:

60.00%

Publisher:

Abstract:

The development of fault-tolerant computing systems is a very difficult task. Two reasons contributing to this difficulty can be described as follows. The first is that, in normal practice, fault-tolerant computing policies and mechanisms are deeply embedded in most application programs, so that these programs cannot cope with changes in environments, policies and mechanisms. These factors may change frequently in a distributed environment, especially a heterogeneous one. Therefore, in order to develop better fault-tolerant systems that can cope with constant changes in environments and user requirements, it is essential to separate the fault-tolerant computing policies and mechanisms in application programs. The second is that, although a number of techniques have been proposed for the construction of reliable and fault-tolerant computing systems, and many computer systems are being developed to tolerate various hardware and software failures, most of these systems are intended for specific application areas, since it is extremely difficult to develop systems that can be used for general-purpose fault-tolerant computing. The motivation of this thesis is based on these two aspects. The focus of the thesis is on developing a model based on reactive system concepts for building better fault-tolerant computing applications. Reactive system concepts are an attractive paradigm for system design, development and maintenance because they separate policies from mechanisms. The emphasis of the model is on providing a flexible system architecture for general-purpose fault-tolerant application development, and the model can be applied in many specific applications. With this reactive system model, we can separate fault-tolerant computing policies and mechanisms in applications, so that the development and maintenance of fault-tolerant computing systems is made easier.

Relevance:

60.00%

Publisher:

Abstract:

Background: Internet websites and smartphone apps have become a popular resource to guide parents in their children’s feeding and nutrition. Given the diverse range of websites and apps on infant feeding, the quality of information in these resources should be assessed to identify whether consumers have access to credible and reliable information.

Objective: This systematic analysis provides perspectives on the information available about infant feeding on websites and smartphone apps.

Methods: A systematic analysis was conducted to assess the quality, comprehensibility, suitability, and readability of websites and apps on infant feeding using a purpose-developed tool. Google and Bing were used to search for websites from Australia, while the App Store for iOS and Google Play for Android were used to search for apps. Specified key words, including baby feeding, breast feeding, formula feeding and introducing solids, were used to identify websites and apps addressing feeding advice. Criteria for assessing the accuracy of the content were developed using the Australian Infant Feeding Guidelines.

Results: A total of 600 websites and 2884 apps were screened; 44 websites and 46 apps met the selection criteria and were analyzed. Most of the websites (26/44) and apps (43/46) were noncommercial, some websites (10/44) and 1 app were commercial, 8 websites were government sites, and 2 apps had university endorsement. The majority of the websites and apps were rated as poor quality. Two websites had 100% coverage of the assessed information, whereas those rated fair or poor had low coverage. Two-thirds of the websites (65%) and almost half of the apps (47%) had a readability level above the 8th grade level.

Conclusions: The findings of this unique analysis highlight the potential for website and app developers to merge user requirements with evidence-based content to ensure that information on infant feeding is of high quality. There are currently no apps available to consumers that address a variety of infant feeding topics. To keep up with the rapid turnover of evolving technology, health professionals need to consider developing an app that will provide consumers with a credible and reliable source of information about infant feeding, using quality assessment tools and evidence-based content.
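The abstract does not state which readability formula was used; purely as an illustration of how such a grade level is commonly estimated, the sketch below computes a Flesch-Kincaid grade (0.39 × words/sentence + 11.8 × syllables/word − 15.59) with a crude syllable heuristic and flags text above the 8th-grade threshold mentioned in the results.

```python
# Illustrative readability estimate (Flesch-Kincaid grade level) with a rough
# vowel-group syllable heuristic; not the assessment tool used in the study.
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels, minimum of one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

sample = ("Offer your baby a variety of soft foods. "
          "Continue breastfeeding while introducing solids.")
grade = fk_grade(sample)
print(f"estimated grade level: {grade:.1f}",
      "(above 8th grade)" if grade > 8 else "")
```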

Relevance:

60.00%

Publisher:

Abstract:

Today, third-generation networks are a consolidated reality, and user expectations for new applications and services keep rising. Therefore, new systems and technologies are necessary to meet market needs and user requirements. This has driven the development of fourth-generation networks. "Wireless networks for the fourth generation" is the expression used to describe the next step in wireless communications. There is no formal definition of what these fourth-generation networks are; however, we can say that next-generation networks will be based on the coexistence of heterogeneous networks, on integration with existing radio access networks (e.g. GPRS, UMTS, WiFi, ...), and, in particular, on new emerging architectures that are gaining more and more relevance, such as Wireless Ad Hoc and Sensor Networks (WASN). Thanks to these characteristics, fourth-generation wireless systems will be able to offer custom-made solutions and applications personalised to user requirements; they will offer all types of services at an affordable cost, with solutions characterised by flexibility, scalability and reconfigurability. This PhD work has focused on WASNs: self-configuring, infrastructure-less networks in which devices have to generate the network automatically in an initial phase and maintain it through reconfiguration procedures (when node mobility, energy drain, etc. cause disconnections). The main part of the PhD activity has been an analytical study of connectivity models for wireless ad hoc and sensor networks; a smaller part of the work was experimental. Both the theoretical and the experimental activities shared a common aim: the performance evaluation of WASNs. Concerning the theoretical analysis, the objective of the connectivity studies has been the evaluation of models for interference estimation, since interference is the most important cause of performance degradation in WASNs. It is therefore very important to find an accurate model that allows its investigation, and the aim has been to obtain a model that is as realistic and general as possible, in particular for evaluating the interference coming from bounded interfering areas (e.g. a WiFi hot spot, a wireless-covered research laboratory, ...). The experimental activity, on the other hand, has led to throughput and Packet Error Rate measurements on a real IEEE 802.15.4 Wireless Sensor Network.
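As a simple illustration of interference estimation from a bounded interfering area (not the analytical model developed in the thesis), the sketch below uses Monte Carlo sampling of interferers dropped uniformly in a disc, combined with a power-law path-loss model; all parameter values are invented.

```python
# Illustrative Monte Carlo estimate of the aggregate interference received at
# the origin from interferers placed uniformly in a bounded disc of radius R
# centred at distance D, with a power-law path loss P_rx = P_tx * d**-alpha.
import numpy as np

rng = np.random.default_rng(42)

def mean_interference(n_interferers=10, p_tx=1.0, alpha=3.5,
                      centre=(20.0, 0.0), radius=5.0, trials=20_000):
    totals = np.empty(trials)
    for t in range(trials):
        # Uniform points in a disc: sqrt-radius trick avoids central clustering.
        r = radius * np.sqrt(rng.random(n_interferers))
        theta = 2 * np.pi * rng.random(n_interferers)
        x = centre[0] + r * np.cos(theta)
        y = centre[1] + r * np.sin(theta)
        d = np.hypot(x, y)                     # distance to the receiver at (0,0)
        totals[t] = np.sum(p_tx * d ** (-alpha))
    return totals.mean()

# Mean interference from a bounded area (e.g. a WiFi hot spot) 20 m away.
print(f"mean aggregate interference: {mean_interference():.3e} (linear units)")
```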

Relevance:

60.00%

Publisher:

Abstract:

Currently, observations of space debris are primarily performed with ground-based sensors. These sensors have a detection limit of a few centimetres in diameter for objects in Low Earth Orbit (LEO) and of about two decimetres in diameter for objects in Geostationary Orbit (GEO). The few space-based debris observations stem mainly from in-situ measurements and from the analysis of returned spacecraft surfaces. Both provide information about mostly sub-millimetre-sized debris particles. As a consequence, the population of centimetre- and millimetre-sized debris objects remains poorly understood. The development, validation and improvement of debris reference models drive the need for measurements covering the whole diameter range. In 2003 the European Space Agency (ESA) initiated a study entitled “Space-Based Optical Observation of Space Debris”. The first tasks of the study were to define user requirements and to develop an observation strategy for a space-based instrument capable of observing uncatalogued millimetre-sized debris objects. Only passive optical observations were considered, focussing on mission concepts for the LEO and GEO regions, respectively. Starting from the requirements and the observation strategy, an instrument system architecture and an associated operations concept have been elaborated. The instrument system architecture covers the telescope, camera and onboard processing electronics. The proposed telescope is a folded Schmidt design, characterised by a 20 cm aperture and a large field of view of 6°. The camera design is based on either a frame-transfer charge-coupled device (CCD) or a cooled hybrid sensor with fast read-out. A four-megapixel sensor is foreseen. For the onboard processing, a scalable architecture has been selected. Performance simulations have been executed for the system as designed, focussing on the orbit determination of observed debris particles and on the analysis of the object detection algorithms. In this paper we present some of the main results of the study. A short overview of the user requirements and observation strategy is given. The architectural design of the instrument is discussed, and the main trade-offs are outlined. An insight into the results of the performance simulations is provided.
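A back-of-the-envelope check of the quoted instrument figures (6° field of view, 20 cm aperture, four-megapixel sensor) is sketched below; the square 2048 × 2048 pixel layout and the 550 nm reference wavelength are assumptions made only for illustration.

```python
# Back-of-the-envelope optics check based on the figures quoted in the
# abstract; the 2048 x 2048 layout and 550 nm wavelength are assumptions.
import math

fov_deg = 6.0
pixels_per_side = 2048          # assumed square 4-megapixel sensor
aperture_m = 0.20
wavelength_m = 550e-9           # assumed visible-band reference wavelength

# Angular size of one pixel on the sky.
pixel_scale_arcsec = fov_deg * 3600 / pixels_per_side

# Diffraction limit of the 20 cm aperture (Rayleigh criterion, 1.22 lambda/D).
diffraction_arcsec = math.degrees(1.22 * wavelength_m / aperture_m) * 3600

print(f"pixel scale      : {pixel_scale_arcsec:.1f} arcsec/pixel")   # ~10.5
print(f"diffraction limit: {diffraction_arcsec:.2f} arcsec")         # ~0.69
# The pixel scale dwarfs the diffraction limit: the design trades angular
# resolution for the wide field needed to detect faint, fast-moving debris.
```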

Relevance:

60.00%

Publisher:

Abstract:

In 2005, the International Ocean Colour Coordinating Group (IOCCG) convened a working group to examine the state of the art in ocean colour data merging, which showed that the research techniques had matured sufficiently for creating long multi-sensor datasets (IOCCG, 2007). As a result, ESA initiated and funded the DUE GlobColour project (http://www.globcolour.info/) to develop a satellite-based ocean colour data set to support global carbon-cycle research. It aims to satisfy the scientific requirement for a long (10+ year) time-series of consistently calibrated global ocean colour information with the best possible spatial coverage. This has been achieved by merging data from the three most capable sensors: SeaWiFS on GeoEye's Orbview-2 mission, MODIS on NASA's Aqua mission and MERIS on ESA's ENVISAT mission. In setting up the GlobColour project, three user organisations were invited to help. Their roles are to specify the detailed user requirements, act as a channel to the broader end user community, and provide feedback and assessment of the results. The International Ocean Carbon Coordination Project (IOCCP), based at UNESCO in Paris, provides direct access to the carbon cycle modelling community's requirements and to the modellers themselves who will use the final products. The UK Met Office's National Centre for Ocean Forecasting (NCOF) in Exeter, UK, provides an understanding of the requirements of oceanography users, and the IOCCG brings its understanding of global user needs and valuable advice on best practice within the ocean colour science community. The three-year project kicked off in November 2005 under the leadership of ACRI-ST (France). The first year was a feasibility demonstration phase that was successfully concluded at a user consultation workshop organised by the Laboratoire d'Océanographie de Villefranche, France, in December 2006. Error statistics and inter-sensor biases were quantified by comparison with in-situ measurements from moored optical buoys and ship-based campaigns, and used as an input to the merging. The second year was dedicated to the production of the time series. In total, more than 25 Tb of input (level 2) data have been ingested and 14 Tb of intermediate and output products created, with 4 Tb of data distributed to the user community. Quality control (QC) is provided through the Diagnostic Data Sets (DDS), which are extracted sub-areas covering locations of in-situ data collection or interesting oceanographic phenomena. This Full Product Set (FPS) covers global daily merged ocean colour products for the period 1997-2006 and is freely available for use by the worldwide science community at http://www.globcolour.info/data_access_full_prod_set.html. The GlobColour service distributes global daily, 8-day and monthly data sets at 4.6 km resolution for chlorophyll-a concentration, normalised water-leaving radiances (412, 443, 490, 510, 531, 555, 620, 670, 681 and 709 nm), diffuse attenuation coefficient, coloured dissolved and detrital organic materials, total suspended matter or particulate backscattering coefficient, turbidity index, cloud fraction and quality indicators. Error statistics from the initial sensor characterisation are used as an input to the merging methods and propagate through the merging process to provide error estimates for the output merged products.
These error estimates are a key component of GlobColour, as they are invaluable to users, particularly the modellers who need them in order to assimilate the ocean colour data into ocean simulations. An intensive phase of validation has been undertaken to assess the quality of the data set. In addition, inter-comparisons between the different merged datasets will help in further refining the techniques used. Both the final products and the quality assessment were presented at a second user consultation, organised by the Norwegian Institute for Water Research (NIVA) in Oslo on 20-22 November 2007; the presentations are available on the GlobColour WWW site. At the request of the ESA Technical Officer for the GlobColour project, the FPS data set was mirrored in the PANGAEA data library.
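The GlobColour merging algorithms themselves are not described here; as a generic illustration of bias-corrected, error-weighted merging with propagated uncertainty, the sketch below combines chlorophyll-a estimates from the three sensors using inverse-variance weights (all numbers are invented).

```python
# Generic illustration (not the GlobColour algorithm): merge per-sensor
# chlorophyll-a estimates for one bin after removing a known relative bias,
# weighting by inverse variance and propagating the merged uncertainty.
import numpy as np

# Estimate (mg m^-3), relative 1-sigma uncertainty, and relative bias per sensor.
sensors = {
    "SeaWiFS": {"chl": 0.42, "rel_err": 0.25, "bias": 1.00},
    "MODIS":   {"chl": 0.47, "rel_err": 0.30, "bias": 0.95},
    "MERIS":   {"chl": 0.39, "rel_err": 0.35, "bias": 1.05},
}

values, variances = [], []
for s in sensors.values():
    corrected = s["chl"] / s["bias"]            # remove the inter-sensor bias
    sigma = corrected * s["rel_err"]            # absolute 1-sigma uncertainty
    values.append(corrected)
    variances.append(sigma ** 2)

values, variances = np.array(values), np.array(variances)
weights = 1.0 / variances
merged = np.sum(weights * values) / np.sum(weights)
merged_sigma = np.sqrt(1.0 / np.sum(weights))   # propagated error of the merge

print(f"merged chl-a: {merged:.3f} +/- {merged_sigma:.3f} mg m^-3")
```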