930 results for Design Management


Relevance:

30.00%

Publisher:

Abstract:

The latest buzz phrase to enter the world of design research is “Design Thinking”. But is this anything new, and does it really have any practical or theoretical relevance to the design world? Many sceptics believe the term has more to do with business strategy and little to do with the complex process of designing products, services and systems. Moreover, many view the term as misleading and a cheap attempt to piggyback the world of business management onto design. This paper asks whether design thinking is anything new. Several authors have explicitly or implicitly articulated the term “Design Thinking” before, such as Peter Rowe’s seminal book “Design Thinking” [1], first published in 1987, and Herbert Simon’s “The Sciences of the Artificial” [2], first published in 1969. In Tim Brown’s “Change by Design” [3], design thinking is conceived as a system of three overlapping spaces rather than a sequence of orderly steps: inspiration – the problem or opportunity that motivates the search for solutions; ideation – the process of generating, developing and testing ideas; and implementation – the path that leads from the design studio, lab and factory to the market. This paper examines and critically analyses the tenets of this new design thinking manifesto against three case studies of modern design practice. As such, the paper compares design thinking theory with the reality of design in practice.

Relevance:

30.00%

Publisher:

Abstract:

Objective: To develop sedation, pain, and agitation quality measures using process control methodology and evaluate their properties in clinical practice. Design: A Sedation Quality Assessment Tool was developed and validated to capture data for 12-hour periods of nursing care. Domains included pain/discomfort and sedation-agitation behaviors; sedative, analgesic, and neuromuscular blocking drug administration; ventilation status; and conditions potentially justifying deep sedation. Predefined sedation-related adverse events were recorded daily. Using an iterative process, algorithms were developed to describe the proportion of care periods with poor limb relaxation, poor ventilator synchronization, unnecessary deep sedation, agitation, and an overall optimum sedation metric. Proportion charts described processes over time (2-monthly intervals) for each ICU. The numbers of patients treated between sedation-related adverse events were described with G charts. Automated algorithms generated charts for 12 months of sequential data. Mean values for each process were calculated, and variation within and between ICUs was explored qualitatively. Setting: Eight Scottish ICUs over a 12-month period. Patients: Mechanically ventilated patients. Interventions: None. Measurements and Main Results: The Sedation Quality Assessment Tool agitation-sedation domains correlated with the Richmond Agitation-Sedation Scale score (Spearman ρ = 0.75) and were reliable in clinician-clinician (weighted κ = 0.66) and clinician-researcher (κ = 0.82) comparisons. The limb movement domain had fair correlation with the Behavioral Pain Scale (ρ = 0.24) and was reliable in clinician-clinician (κ = 0.58) and clinician-researcher (κ = 0.45) comparisons. Ventilator synchronization correlated with the Behavioral Pain Scale (ρ = 0.54), and reliability in clinician-clinician (κ = 0.29) and clinician-researcher (κ = 0.42) comparisons was fair to moderate. Eight hundred twenty-five patients were enrolled (range, 59-235 across ICUs), providing 12,385 care periods for evaluation (range, 655-3,481 across ICUs). The mean proportion of care periods with each quality metric varied between ICUs: excessive sedation 12-38%; agitation 4-17%; poor relaxation 13-21%; poor ventilator synchronization 8-17%; and overall optimum sedation 45-70%. Mean adverse event intervals ranged from 1.5 to 10.3 patients treated. The quality measures appeared relatively stable during the observation period. Conclusions: Process control methodology can be used to simultaneously monitor multiple aspects of pain-sedation-agitation management within ICUs. Variation within and between ICUs could be used as a trigger to explore practice variation, improve quality, and monitor it over time.
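The proportion charts described above follow standard statistical process control; a minimal sketch of a p-chart's centre line and 3-sigma control limits (names and sample data are illustrative, since the study's exact charting rules are not given in this abstract):

```python
import math

def p_chart_limits(event_counts, sample_sizes):
    """Centre line and 3-sigma limits for a proportion (p) chart.

    Hypothetical usage: each sample is one ICU's 2-monthly batch of care
    periods, and an "event" is a care period failing a quality metric.
    """
    p_bar = sum(event_counts) / sum(sample_sizes)  # centre line: pooled proportion
    limits = []
    for n in sample_sizes:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - 3 * sigma),
                       min(1.0, p_bar + 3 * sigma)))
    return p_bar, limits

# Six 2-monthly samples: care periods with excessive sedation / total care periods
p_bar, limits = p_chart_limits([30, 25, 40, 28, 35, 22],
                               [200, 180, 220, 190, 210, 170])
```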

Relevance:

30.00%

Publisher:

Abstract:

For a given TCP flow, exogenous losses are those occurring on links other than the flow's bottleneck link. Exogenous losses are typically viewed as introducing undesirable "noise" into TCP's feedback control loop, leading to inefficient network utilization and potentially severe global unfairness. This has prompted much research on mechanisms for hiding such losses from end-points. In this paper, we show through analysis and simulations that low levels of exogenous losses are surprisingly beneficial in that they improve stability and convergence without sacrificing efficiency. Based on this, we argue that exogenous-loss awareness should be taken into account in any AQM design that aims to achieve global fairness. To that end, we propose an exogenous-loss-aware Queue Management scheme (XQM) that actively accounts for and leverages exogenous losses. We use an equation-based approach to derive the quiescent loss rate for a connection based on the connection's profile and its global fair share. In contrast to other queue management techniques, XQM ensures that a connection sees its quiescent loss rate, not only by complementing already existing exogenous losses, but also by actively hiding exogenous losses, if necessary, to achieve global fairness. We establish the advantages of exogenous-loss awareness using extensive simulations in which we contrast the performance of XQM with that of a host of traditional exogenous-loss-unaware AQM techniques.
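The "equation-based approach" is not detailed in this abstract; one plausible illustration uses the classic square-root TCP throughput model of Mathis et al. (an assumption here, not necessarily the exact model XQM employs), which gives throughput ≈ (MSS/RTT)·sqrt(3/(2p)) and can be inverted to find the loss rate that holds a connection at its global fair share:

```python
def quiescent_loss_rate(mss_bytes: int, rtt_s: float, fair_share_bps: float) -> float:
    """Loss rate p at which a TCP flow's steady-state throughput equals its
    global fair share, under the Mathis et al. square-root model:
        throughput = (MSS / RTT) * sqrt(3 / (2 * p))
    Function and parameter names are illustrative, not from the paper."""
    mss_bits = mss_bytes * 8
    # Invert the model: p = (3/2) * (MSS / (RTT * fair_share))^2
    return 1.5 * (mss_bits / (rtt_s * fair_share_bps)) ** 2

# Example: 1460-byte segments, 100 ms RTT, 2 Mb/s fair share -> p ~= 0.5%
p = quiescent_loss_rate(1460, 0.1, 2e6)
```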

Relevance:

30.00%

Publisher:

Abstract:

The exploding demand for services like the World Wide Web reflects the potential presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem on four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, and so on. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must work at the network layer, which can provide the basic guarantees of bandwidth, latency, and reliability. Therefore, the third area is a set of new techniques in network service and protocol design. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, and so on, and models must be tailored to represent the best tradeoff for a particular setting.
This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
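Level (4) lends itself to a brief sketch of what such an abstract class interface might look like; every class and method name below is hypothetical, since the paper's actual Resource Management Interface is not reproduced in this abstract:

```python
from abc import ABC, abstractmethod

class ResourceManager(ABC):
    """Hypothetical abstract interface in the spirit of the framework: concrete
    subclasses supply the model best suited to a setting (real-time,
    fault-tolerant, probabilistic, ...) behind one shared interface."""

    @abstractmethod
    def register_resource(self, resource_id: str, capacity: float) -> None: ...

    @abstractmethod
    def register_task(self, task_id: str, demand: float, deadline_s: float) -> None: ...

    @abstractmethod
    def schedule(self) -> dict:
        """Return a task -> resource assignment meeting the model's constraints."""

class LeastLoadedManager(ResourceManager):
    """One concretization: assign tasks, earliest deadline first, to the
    relatively least-loaded resource."""

    def __init__(self) -> None:
        self.resources, self.tasks = {}, {}

    def register_resource(self, resource_id, capacity):
        self.resources[resource_id] = capacity

    def register_task(self, task_id, demand, deadline_s):
        self.tasks[task_id] = (demand, deadline_s)

    def schedule(self):
        load = {r: 0.0 for r in self.resources}
        plan = {}
        for task_id, (demand, _) in sorted(self.tasks.items(), key=lambda kv: kv[1][1]):
            target = min(load, key=lambda r: load[r] / self.resources[r])
            load[target] += demand
            plan[task_id] = target
        return plan
```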

Relevance:

30.00%

Publisher:

Abstract:

The proliferation of inexpensive workstations and networks has created a new era in distributed computing. At the same time, non-traditional applications such as computer-aided design (CAD), computer-aided software engineering (CASE), geographic-information systems (GIS), and office-information systems (OIS) have placed increased demands for high-performance transaction processing on database systems. The combination of these factors gives rise to significant challenges in the design of modern database systems. In this thesis, we propose novel techniques whose aim is to improve the performance and scalability of these new database systems. These techniques exploit client resources through client-based transaction management. Client-based transaction management is realized by providing logging facilities locally even when data is shared in a global environment. This thesis presents several recovery algorithms which utilize client disks for storing recovery related information (i.e., log records). Our algorithms work with both coarse and fine-granularity locking and they do not require the merging of client logs at any time. Moreover, our algorithms support fine-granularity locking with multiple clients permitted to concurrently update different portions of the same database page. The database state is recovered correctly when there is a complex crash as well as when the updates performed by different clients on a page are not present on the disk version of the page, even though some of the updating transactions have committed. This thesis also presents the implementation of the proposed algorithms in a memory-mapped storage manager as well as a detailed performance study of these algorithms using the OO1 database benchmark. The performance results show that client-based logging is superior to traditional server-based logging. This is because client-based logging is an effective way to reduce dependencies on server CPU and disk resources and, thus, prevents the server from becoming a performance bottleneck as quickly when the number of clients accessing the database increases.
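A minimal sketch of the write-ahead discipline underlying client-based logging (illustrative only, with hypothetical names; the thesis's actual algorithms additionally handle fine-granularity locking, concurrent page updates, and crash recovery, which this sketch omits):

```python
from dataclasses import dataclass, field

@dataclass
class LogRecord:
    lsn: int        # log sequence number, local to this client
    page_id: int
    before: bytes   # undo image of the updated portion of the page
    after: bytes    # redo image

@dataclass
class ClientLog:
    """Each client keeps its own log on its local disk; client logs are
    never merged, matching the property claimed in the abstract."""
    records: list = field(default_factory=list)
    flushed_to: int = -1

    def append(self, rec: LogRecord) -> None:
        self.records.append(rec)

    def flush(self, up_to_lsn: int) -> None:
        # Stand-in for an fsync of the client's local log file.
        self.flushed_to = max(self.flushed_to, up_to_lsn)

    def commit(self, last_lsn: int) -> None:
        # Write-ahead rule: all of a transaction's log records must reach
        # stable storage before the transaction is reported committed.
        self.flush(last_lsn)
```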

Relevance:

30.00%

Publisher:

Abstract:

The pervasiveness of personal computing platforms offers an unprecedented opportunity to deploy large-scale services that are distributed over wide physical spaces. Two major challenges face the deployment of such services: the often resource-limited nature of these platforms, and the necessity of preserving the autonomy of the owners of these devices. These challenges preclude using centralized control and preclude considering services that are subject to performance guarantees. To that end, this thesis advances a number of new distributed resource management techniques that are shown to be effective in such settings, focusing on two application domains: distributed Field Monitoring Applications (FMAs) and Message Delivery Applications (MDAs). In the context of FMAs, this thesis presents two techniques that are well suited to the fairly limited storage and power resources of autonomously mobile sensor nodes. The first technique relies on amorphous placement of sensory data through the use of novel storage management and sample diffusion techniques. The second relies on an information-theoretic framework to optimize local resource management decisions. Both approaches are proactive in that they aim to provide nodes with a view of the monitored field that reflects the characteristics of queries over that field, enabling them to handle more queries locally and thus reduce communication overheads. This thesis then recognizes node mobility as a resource to be leveraged, and in that respect proposes novel mobility coordination techniques for FMAs and MDAs. Assuming that node mobility is governed by a spatio-temporal schedule featuring some slack, this thesis presents novel algorithms of various computational complexities to orchestrate the use of this slack to improve the performance of supported applications. The findings in this thesis, which are supported by analysis and extensive simulations, highlight the importance of two general design principles for distributed systems. First, a priori knowledge (e.g., about the target phenomena of FMAs and/or the workload of either FMAs or MDAs) can be used effectively for local resource management. Second, judicious leverage and coordination of node mobility can lead to significant performance gains for distributed applications deployed over resource-impoverished infrastructures.

Relevance:

30.00%

Publisher:

Abstract:

Statistical Rate Monotonic Scheduling (SRMS) is a generalization of the classical RMS results of Liu and Layland [LL73] for periodic tasks with highly variable execution times and statistical QoS requirements. The main tenet of SRMS is that the variability in task resource requirements can be smoothed through aggregation to yield guaranteed QoS. This aggregation is done over time for a given task and across multiple tasks for a given period of time. Like RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The SRMS feasibility test ensures that it is possible for a given periodic task set to share a given resource without violating any of the statistical QoS constraints imposed on each task in the set. The SRMS scheduling algorithm consists of two parts: a job admission controller and a scheduler. The SRMS scheduler is a simple, preemptive, fixed-priority scheduler. The SRMS job admission controller manages the QoS delivered to the various tasks through admit/reject and priority assignment decisions. In particular, it ensures the important property of task isolation, whereby tasks do not infringe on each other. In this paper we present the design and implementation of SRMS within the KURT Linux operating system [HSPN98, SPH98, Sri98]. KURT Linux supports conventional tasks as well as real-time tasks. It provides a mechanism for transitioning from normal Linux scheduling to a mixed scheduling of conventional and real-time tasks, and to a focused mode where only real-time tasks are scheduled. We give an overview of the technical issues we had to overcome in order to integrate SRMS into KURT Linux, and present the API we have developed for scheduling periodic real-time tasks using SRMS.
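For reference, the classical RMS feasibility test of Liu and Layland that SRMS generalizes can be stated in a few lines; this is the standard utilization-bound test, not the SRMS statistical test itself:

```python
def rms_feasible(tasks):
    """Liu-Layland sufficient condition: a set of n periodic tasks is
    schedulable under RMS if total utilization <= n * (2**(1/n) - 1).
    tasks: list of (worst_case_execution_time, period) pairs."""
    n = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

# Three tasks with (C, P): U = 0.25 + 0.20 + 0.20 = 0.65 <= 0.7798 -> feasible
print(rms_feasible([(1, 4), (1, 5), (2, 10)]))  # True
```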

Relevance:

30.00%

Publisher:

Abstract:

In [previous papers] we presented the design, specification and proof of correctness of a fully distributed location management scheme for PCS networks, and argued that fully replicating location information is both appropriate and efficient for small PCS networks. In this paper, we analyze the performance of this scheme. We then extend the scheme to a hierarchical environment so that it scales to large PCS networks. Through extensive numerical results, we show the superiority of our scheme compared to the current IS-41 standard.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a wireless sensor network mote hardware design and implementation are introduced for a building deployment application. The core of the mote design is based on an 8-bit AVR microcontroller, the ATmega1281, and a 2.4 GHz wireless communication chip, the CC2420. The module PCB is fabricated using stackable technology, providing powerful configuration capability. Three main layers of size 25 mm² are stacked to form the mote: the RF, sensor and power layers. The sensors were selected carefully to meet both the building monitoring and design requirements. Besides the sensing capability, actuation and interfacing to external meters/sensors are provided to perform different management control and data recording tasks. Experiments show that the developed mote works effectively, giving stable data acquisition and good communication and power performance.

Relevance:

30.00%

Publisher:

Abstract:

Buildings consume 40% of Ireland's total annual energy, translating to €3.5 billion (2004). The EPBD directive (effective January 2003) places an onus on all member states to rate the energy performance of all buildings in excess of 50 m². Energy and environmental performance management systems do not exist for residential buildings, and for non-residential buildings they consist of an ad-hoc integration of wired building management systems and Monitoring & Targeting systems. These systems are unsophisticated and do not easily lend themselves to cost-effective retrofit or integration with other enterprise management systems. It is commonly agreed that a 15-40% reduction in building energy consumption is achievable by operating buildings efficiently compared with typical practice. Existing research has identified that the level of information available to building managers from existing Building Management Systems and Environmental Monitoring Systems (BMS/EMS) is insufficient to perform the required performance-based building assessment. The cost of installing additional sensors and meters is extremely high, primarily due to the cost of wiring and the labour required. From this perspective, wireless sensor technology provides the capability to deliver reliable sensor data at the temporal and spatial granularity required for building energy management. In this paper, a wireless sensor network mote hardware design and implementation are presented for a building energy management application. Appropriate sensors were selected and interfaced with the developed system based on user requirements, to meet both the building monitoring and metering requirements. Besides the sensing capability, actuation and interfacing to external meters/sensors are provided to perform different management control and data recording tasks associated with minimising energy consumption in the built environment and with developing appropriate Building Information Models (BIM) to enable the design and development of energy-efficient spaces.

Relevance:

30.00%

Publisher:

Abstract:

Comfort is, in essence, satisfaction with the environment, and with respect to the indoor environment it is primarily satisfaction with the thermal conditions and air quality. Improving comfort has social, health and economic benefits, and is more financially significant than any other building cost. Despite this, comfort is not strictly managed throughout the building life-cycle. This is mainly due to the lack of an appropriate system to adequately manage comfort knowledge through the construction process into operation. Previous proposals to improve knowledge management have not been successfully adopted by the construction industry. To address this, the BabySteps approach was devised. BabySteps is an approach, proposed by this research, which states that for an innovation to be adopted into the industry it must be implementable through a number of small changes. This research proposes that improving the management of comfort knowledge will improve comfort. ComMet is a new methodology proposed by this research that manages comfort knowledge. It enables comfort knowledge to be captured, stored and accessed throughout the building life-cycle, allowing it to be re-used in future stages of the building project and in future projects. It does this using the following: Comfort Performances – simplified numerical representations of the comfort of the indoor environment, which quantify the comfort at each stage of the building life-cycle using standard comfort metrics; Comfort Ratings – a means of classifying the comfort conditions of the indoor environment according to an appropriate standard, generated by comparing different Comfort Performances and providing additional information about the comfort conditions that is not readily determined from the individual Comfort Performances; and Comfort History – a continuous descriptive record of the comfort throughout the project, focused on documenting the items and activities, proposed and implemented, which could potentially affect comfort, with each aspect linked to the relevant comfort entity it references. These three components create a comprehensive record of the comfort throughout the building life-cycle. They are stored and made available in a common format in a central location, which allows them to be re-used ad infinitum. The LCMS system was developed to implement the ComMet methodology. It uses current and emerging technologies to capture, store and allow easy access to comfort knowledge as specified by ComMet. LCMS is an IT system combining six components: building standards; modelling & simulation; physical measurement through the specially developed Egg-Whisk (Wireless Sensor) Network; data manipulation; information recording; and knowledge storage and access. Results from a test-case application of the LCMS system – an existing office room at a research facility – highlighted that while some aspects of comfort were being maintained, the building’s environment was not in compliance with the acceptable levels stipulated by the relevant building standards. The implementation of ComMet, through LCMS, demonstrates how comfort, typically only considered during early design, can be measured and managed appropriately through systematic application of the methodology as a means of ensuring a healthy internal environment in the building.
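One way to picture the three ComMet components in code; all class and field names below are hypothetical illustrations, not the thesis's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ComfortPerformance:
    """Simplified numerical comfort representation at one life-cycle stage."""
    stage: str      # e.g. "design", "construction", "operation"
    metric: str     # a standard comfort metric, e.g. operative temperature
    value: float

@dataclass
class ComfortRating:
    """Classification of comfort conditions against a chosen standard."""
    standard: str
    grade: str      # e.g. "compliant" / "non-compliant"

@dataclass
class ComfortHistory:
    """Continuous descriptive record of items and activities affecting comfort."""
    entries: list = field(default_factory=list)

def rate(perf: ComfortPerformance, limit: float, standard: str) -> ComfortRating:
    """Derive a Comfort Rating by comparing a Comfort Performance to a limit
    taken from a building standard (comparison rule is illustrative)."""
    grade = "compliant" if perf.value <= limit else "non-compliant"
    return ComfortRating(standard=standard, grade=grade)
```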

Relevance:

30.00%

Publisher:

Abstract:

Wireless sensor networks (WSN) are becoming widely adopted for many applications, including complicated tasks like building energy management. However, one major concern for WSN technologies is their short lifetime and high maintenance cost due to limited battery energy. One solution is to scavenge ambient energy, which is then rectified to power the WSN. The objective of this thesis was to investigate the feasibility of an ultra-low-energy-consumption power management system suitable for harvesting sub-mW photovoltaic and thermoelectric energy to power WSNs. To achieve this goal, energy harvesting system architectures were analyzed. Detailed analysis of energy storage units (ESU) led to an innovative ESU solution for the target applications. A battery-less, long-lifetime ESU and its associated power management circuitry, including a fast-charge circuit, self-start circuit, output voltage regulation circuit and a hybrid ESU using a combination of a super-capacitor and a thin-film battery, were developed to achieve continuous operation of the energy harvester. Low start-up voltage DC/DC converters were developed for 1 mW level thermoelectric energy harvesting. The novel method of altering the thermoelectric generator (TEG) configuration in order to match impedance was verified in this work. Novel maximum power point tracking (MPPT) circuits, exploiting the fractional open-circuit voltage method, were developed specifically to suit sub-1 mW photovoltaic energy harvesting applications. The MPPT energy model was developed and verified against both SPICE simulation and implemented prototypes. Both the indoor-light and thermoelectric energy harvesting methods proposed in this thesis have been implemented in prototype devices. The improved indoor-light energy harvester prototype demonstrates 81% MPPT conversion efficiency with 0.5 mW input power. This important improvement makes light energy harvesting from small energy sources (e.g., a credit-card-sized solar panel under 500 lux indoor lighting) a feasible approach. The 50 mm × 54 mm thermoelectric energy harvester prototype generates 0.95 mW when placed on a 60 °C heat source, with 28% conversion efficiency. Both prototypes can continuously power a WSN for building energy management applications in a typical office building environment. In addition to the hardware development, a comprehensive system energy model was developed. This model can not only predict the available and consumed energy based on real-world ambient conditions, but can also be employed to optimize the system design and configuration. The model has been verified by indoor photovoltaic energy harvesting system prototypes in long-term deployment experiments.
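The fractional open-circuit voltage method mentioned above is a standard low-power MPPT technique: the maximum-power-point voltage is approximated as a fixed fraction k of the periodically sampled open-circuit voltage, with k typically around 0.7-0.8 for silicon cells. A minimal sketch of one control iteration, assuming a boost-type converter where increasing the duty cycle draws more current (names, k, and the step size are illustrative, not the thesis's actual circuit values):

```python
def fractional_voc_mppt(v_oc: float, v_panel: float, duty: float,
                        k: float = 0.76, step: float = 0.01) -> float:
    """One iteration of fractional open-circuit-voltage MPPT.

    v_oc:    open-circuit voltage, sampled by briefly disconnecting the panel
    v_panel: present panel operating voltage
    duty:    present converter duty cycle (0..1)
    Returns an updated duty cycle steering v_panel toward k * v_oc.
    """
    v_target = k * v_oc
    if v_panel > v_target:
        duty += step   # draw more current, pulling the panel voltage down
    elif v_panel < v_target:
        duty -= step   # draw less current, letting the panel voltage rise
    return min(max(duty, 0.0), 1.0)
```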

Relevance:

30.00%

Publisher:

Abstract:

Modern information systems (ISs) are becoming increasingly complex. Simultaneously, organizational changes are occurring more often and more rapidly. Therefore, emergent behavior and organic adaptivity are key advantages for ISs. In this paper, a design science research (DSR) question for design-oriented information systems research (DISR) is proposed: can the application of biomimetic principles to IS design result in the creation of value by innovation? Accordingly, the properties of biological information systems are analyzed, and these insights are crystallized into a theoretical framework addressing the three major aspects of biomimetic ISs: user experience, information processing, and management cybernetics. On this basis, the research question is elaborated, together with a starting point for a research methodology in biomimetic information systems.

Relevance:

30.00%

Publisher:

Abstract:

Our research follows a design science approach to develop a method that supports the initialization of ES implementation projects – the chartering phase. This project phase is highly relevant for implementation success, but is understudied in IS research. In this paper, we derive design principles for a chartering method based on a systematic review of ES implementation literature and semi-structured expert interviews. Our analysis identifies differences in the importance of certain success factors depending on the system type. The proposed design principles are built on these factors and are linked to chartering key activities. We specifically consider system-type-specific chartering aspects for process-centric Business Intelligence & Analytics (BI&A) systems, which are an emerging class of systems at the intersection of BI&A and business process management. In summary, this paper proposes design principles for a chartering method – considering specifics of process-centric BI&A.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES: Side-effects of standard pain medications can limit their use. Therefore, nonpharmacologic pain relief techniques such as auriculotherapy may play an important role in pain management. Our aim was to conduct a systematic review and meta-analysis of studies evaluating auriculotherapy for pain management. DESIGN: MEDLINE®, ISI Web of Science, CINAHL, AMED, and the Cochrane Library were searched through December 2008. Randomized trials comparing auriculotherapy to sham, placebo, or standard-of-care control were included that measured outcomes of pain or medication use and were published in English. Two (2) reviewers independently assessed trial eligibility, quality, and abstracted data to a standardized form. Standardized mean differences (SMD) were calculated for studies using a pain score or analgesic requirement as a primary outcome. RESULTS: Seventeen (17) studies met inclusion criteria (8 perioperative, 4 acute, and 5 chronic pain). Auriculotherapy was superior to controls for studies evaluating pain intensity (SMD, 1.56 [95% confidence interval (CI): 0.85, 2.26]; 8 studies). For perioperative pain, auriculotherapy reduced analgesic use (SMD, 0.54 [95% CI: 0.30, 0.77]; 5 studies). For acute pain and chronic pain, auriculotherapy reduced pain intensity (SMD for acute pain, 1.35 [95% CI: 0.08, 2.64], 2 studies; SMD for chronic pain, 1.84 [95% CI: 0.60, 3.07], 5 studies). Removal of poor quality studies did not alter the conclusions. Significant heterogeneity existed among studies of acute and chronic pain, but not perioperative pain. CONCLUSIONS: Auriculotherapy may be effective for the treatment of a variety of types of pain, especially postoperative pain. However, a more accurate estimate of the effect will require further large, well-designed trials.
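The standardized mean differences reported above follow the usual effect-size construction (difference in means divided by the pooled standard deviation); a minimal sketch of computing one study's SMD and a generic inverse-variance pooled estimate (a standard fixed-effect pooling shown for illustration, not necessarily the review's exact model; data are hypothetical):

```python
import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference: (treatment - control) / pooled SD."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def pooled_smd(effects):
    """Fixed-effect inverse-variance pooling of (smd, variance) pairs."""
    weights = [1.0 / var for _, var in effects]
    return sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)

# Example: pool three hypothetical studies' (SMD, variance) estimates
print(pooled_smd([(1.2, 0.10), (0.8, 0.05), (1.6, 0.20)]))
```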