932 results for system design


Relevance:

60.00%

Publisher:

Abstract:

Data deduplication describes a class of approaches that reduce the storage capacity needed to store data or the amount of data that has to be transferred over a network. These approaches detect coarse-grained redundancies within a data set, e.g. a file system, and remove them.

One of the most important applications of data deduplication is backup storage, where these approaches can reduce the storage requirements to a small fraction of the logical backup data size. This thesis introduces multiple new extensions of so-called fingerprinting-based data deduplication. It starts with the presentation of a novel system design that allows a cluster of servers to perform exact data deduplication with small chunks in a scalable way.

Afterwards, a combination of compression approaches for an important but often overlooked data structure in data deduplication systems, so-called block and file recipes, is introduced. Using these compression approaches, which exploit unique properties of data deduplication systems, the size of these recipes can be reduced by more than 92% in all investigated data sets. As file recipes can occupy a significant fraction of the overall storage capacity of data deduplication systems, this compression enables significant savings.

A technique to increase the write throughput of data deduplication systems, based on the aforementioned block and file recipes, is introduced next. The novel Block Locality Caching (BLC) uses properties of block and file recipes to overcome the chunk-lookup disk bottleneck of data deduplication systems, which otherwise limits either their scalability or their throughput. The presented BLC overcomes the disk bottleneck more efficiently than existing approaches. Furthermore, it is shown to be less prone to aging effects.

Finally, it is investigated whether large HPC storage systems exhibit redundancies that can be found by fingerprinting-based data deduplication. Over 3 PB of HPC storage data from different data sets have been analyzed. In most data sets, between 20% and 30% of the data can be classified as redundant. According to these results, future work should further investigate how data deduplication can be integrated into future HPC storage systems.

This thesis presents important novel work in different areas of data deduplication research.
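The fingerprinting-based approach the thesis extends can be illustrated with a minimal sketch: split the data into chunks, fingerprint each chunk with a cryptographic hash, store only unseen chunks, and keep an ordered fingerprint list (the file recipe) from which the data can be reconstructed. The fixed-size chunking, SHA-256, and in-memory index below are illustrative simplifications, not the thesis's actual cluster design.

```python
import hashlib

def deduplicate(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks, fingerprint each with SHA-256,
    and keep only one copy of each unique chunk. The ordered fingerprint
    list is the 'file recipe' needed to reconstruct the data."""
    store = {}   # fingerprint -> chunk bytes (unique chunks only)
    recipe = []  # ordered fingerprints (the file recipe)
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)      # store only on first occurrence
        recipe.append(fp)
    return store, recipe

def reconstruct(store, recipe):
    """Rebuild the original data from its recipe."""
    return b"".join(store[fp] for fp in recipe)

data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # redundant input
store, recipe = deduplicate(data)
assert reconstruct(store, recipe) == data
```

Here the 16 KiB input yields a four-entry recipe but only two stored chunks, which is exactly the kind of coarse-grained redundancy removal the thesis builds on.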

Relevance:

60.00%

Publisher:

Abstract:

The rapid development in the field of lighting and illumination allows low energy consumption and has driven rapid growth in the use and development of solid-state sources. As the efficiency of these devices increases and their cost decreases, they are predicted to become the dominant source for general illumination in the short term. The objective of this thesis is to study, through extensive simulations in realistic scenarios, the feasibility and exploitation of visible light communication (VLC) for vehicular ad hoc network (VANET) applications. A brief introduction presents the new scenario of smart cities, in which visible light communication will become a fundamental enabling technology for future communication systems. Specifically, this thesis focuses on the acquisition of several, frequent, and small data packets from vehicles, exploited as sensors of the environment. The use of vehicles as sensors is a new paradigm enabling efficient environment monitoring and improved traffic management. In most cases, the sensed information must be collected at a remote control centre, and one of the most challenging aspects is the uplink acquisition of data from vehicles. This thesis discusses the opportunity to take advantage of short-range vehicle-to-vehicle (V2V) and vehicle-to-roadside (V2R) communications to offload the cellular networks. More specifically, it discusses the system design and assesses the obtainable cellular resource saving, considering the impact of the percentage of vehicles equipped with short-range communication devices, of the number of deployed roadside units, and of the adopted routing protocol. For short-range communications, WAVE/IEEE 802.11p is considered as the standard for VANETs. Its use together with VLC is considered in urban vehicular scenarios to let vehicles communicate without involving the cellular network.
The study is conducted by simulation, considering both SHINE (simulation platform for heterogeneous interworking networks), developed within the Wireless Communication Laboratory (WiLab) of the University of Bologna and CNR, and the network simulator ns-3, trying to realistically represent all the wireless network communication aspects. Specifically, the vehicular system was simulated in ns-3 by creating a new module for the simulator, which will help to study VLC applications in VANETs. The final observations should encourage further research in the area and help optimize the performance of future VLC system applications.

Relevance:

60.00%

Publisher:

Abstract:

This work compares a numerical finite element model (FEM) against experimental tests of a new-generation wind turbine blade designed by TPI Composites Inc., called BSDS (Blade System Design Study). The research focuses on finite element (FE) analysis of the BSDS blade and its comparison with experimental data from static and dynamic investigations. The goal of the research is to create a general procedure, based on a finite element model, for producing an accurate digital copy of any kind of blade. The blade prototype was created in SolidWorks, where the blade of Sandia National Laboratories' Blade System Design Study was accurately reproduced. The SolidWorks model was then imported into Ansys Mechanical APDL, where the shell geometry was created and modal, static, and fatigue analyses were carried out. The outcomes of the FEM analysis were compared with real tests on the BSDS blade carried out at the Clarkson University laboratory using a new procedure called the Blade Test Facility, which includes different methods for both static and dynamic testing of wind turbine blades. The outcomes of the FEM analysis reproduce the real behavior of the blade subjected to static loads in a very satisfactory way. A more detailed study of the material properties could further improve the accuracy of the analysis.

Relevance:

60.00%

Publisher:

Abstract:

This thesis explores system performance for reconfigurable distributed systems and provides an analytical model for determining the throughput of theoretical systems based on the OpenSPARC FPGA Board and the SIRC Communication Framework. This model was developed by studying a small set of variables that together determine a system's throughput. The importance of this model is in assisting system designers in deciding whether or not to commit to designing a reconfigurable distributed system, based on the estimated performance and hardware costs. Because custom hardware design and distributed system design are both time-consuming and costly, it is important for designers to make decisions regarding system feasibility early in the development cycle. Based on experimental data, the model presented here shows a close fit, with less than 10% experimental error on average. The model is limited to a certain range of problems, but it can still be used within those limitations, and it also provides a foundation for further work on modeling reconfigurable distributed systems.
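An analytical throughput model of this kind can be sketched as a first-order latency budget: a job is sent to the board, processed, and returned, and throughput is job size over total latency. The function, parameters, and numbers below are hypothetical placeholders, not the variables or constants from this thesis.

```python
def estimated_throughput(job_bytes, link_bw_bps, fpga_bytes_per_s, overhead_s):
    """First-order model of a host-to-FPGA round trip:
    send the job over the link, process it on the board, return the result.
    Returns estimated throughput in bytes per second."""
    t_comm = 2 * job_bytes * 8 / link_bw_bps   # send + receive over the link
    t_comp = job_bytes / fpga_bytes_per_s      # on-board processing time
    t_total = t_comm + t_comp + overhead_s     # plus fixed framework overhead
    return job_bytes / t_total

# hypothetical numbers: 1 MB job, 1 Gb/s link, 500 MB/s accelerator, 1 ms overhead
tput = estimated_throughput(1_000_000, 1e9, 500e6, 1e-3)
```

A model like this makes the feasibility question concrete early on: if the communication terms dominate the compute term, offloading to the board cannot pay off regardless of the hardware design effort.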

Relevance:

60.00%

Publisher:

Abstract:

This thesis develops high-performance real-time signal processing modules for direction of arrival (DOA) estimation in localization systems. It proposes highly parallel algorithms for performing subspace decomposition and polynomial rooting, which are traditionally implemented using sequential algorithms. The proposed algorithms address the emerging need for real-time localization in a wide range of applications. As the antenna array size increases, the complexity of the signal processing algorithms grows, making it increasingly difficult to satisfy real-time constraints. This thesis addresses real-time implementation by proposing parallel algorithms that offer considerable improvement over traditional algorithms, especially for systems with a larger number of antenna array elements. Singular value decomposition (SVD) and polynomial rooting are two computationally complex steps and act as the bottleneck to achieving real-time performance. The proposed algorithms are suitable for implementation on field-programmable gate arrays (FPGAs), single instruction multiple data (SIMD) hardware, or application-specific integrated circuits (ASICs), which offer a large number of processing elements that can be exploited for parallel processing. The designs proposed in this thesis are modular, easily expandable, and easy to implement. First, this thesis proposes a fast-converging SVD algorithm. The proposed method reduces the number of iterations needed to converge to the correct singular values, thus coming closer to real-time performance. A general algorithm and a modular system design are provided, making it easy for designers to replicate and extend the design to larger matrix sizes. Moreover, the method is highly parallel, which can be exploited on the hardware platforms mentioned earlier. A fixed-point implementation of the proposed SVD algorithm is presented.
The FPGA design is pipelined to the maximum extent to increase the maximum achievable frequency of operation. The system was developed with the objective of achieving high throughput, and various modern cores available in FPGAs were used to maximize performance; these modules are described in detail. Finally, a parallel polynomial rooting technique based on Newton's method, applicable exclusively to root-MUSIC polynomials, is proposed. Unique characteristics of the root-MUSIC polynomial's complex dynamics were exploited to derive this rooting method. The technique exhibits parallelism and converges to the desired root within a fixed number of iterations, making it suitable for rooting polynomials of large degree. We believe this is the first time that the complex dynamics of the root-MUSIC polynomial have been analyzed to propose an algorithm. In all, the thesis addresses two major bottlenecks in a direction of arrival estimation system by providing simple, high-throughput, parallel algorithms.
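The Newton-based rooting idea can be illustrated on a generic complex polynomial. This is only a minimal single-root sketch: the thesis's method is specialized to root-MUSIC polynomials and exploits their dynamics to run many such iterations in parallel, one per root candidate, which this sketch does not capture.

```python
import numpy as np

def newton_root(coeffs_desc, z0, iters=50, tol=1e-12):
    """Newton's iteration z <- z - p(z)/p'(z) on a polynomial given
    with its highest-degree coefficient first, from start point z0."""
    p = np.polynomial.polynomial.Polynomial(coeffs_desc[::-1])
    dp = p.deriv()
    z = complex(z0)
    for _ in range(iters):
        step = p(z) / dp(z)
        z = z - step
        if abs(step) < tol:   # converged within a fixed iteration budget
            break
    return z

# p(z) = z^2 - 2z + 2 has roots 1 ± 1j; start near the upper root
root = newton_root([1.0, -2.0, 2.0], 1.0 + 1.5j)
```

The fixed iteration budget mirrors the property highlighted in the abstract: convergence within a bounded number of iterations is what makes the method amenable to a pipelined hardware implementation.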

Relevance:

60.00%

Publisher:

Abstract:

As engineers, we are trained to use logical, rational problem solving to ensure our mines operate at maximum efficiency. We tend to use the same technical approach to design safety into all mining systems. This works well for machines, but less so for the human component. Recent insights in the field of behavioral economics provide useful ideas for addressing the fact that we are driven by emotions more often than by rational thought. Understanding the nonrational aspect of human behavior is an important piece of any safety system design.

Relevance:

60.00%

Publisher:

Abstract:

In this paper we analyze a dynamic agency problem where contracting parties do not know the agent's future productivity at the beginning of the relationship. We consider a two-period model where both the agent and the principal observe the agent's second-period productivity at the end of the first period. This observation is assumed to be non-verifiable information. We compare long-term contracts with short-term contracts with respect to their suitability to motivate effort in both periods. On the one hand, short-term contracts allow for a better fine-tuning of second-period incentives as they can be aligned with the agent's second-period productivity. On the other hand, in short-term contracts first-period effort incentives might be distorted as contracts have to be sequentially optimal. Hence, the difference between long-term and short-term contracts is characterized by a trade-off between inducing effort in the first and in the second period. We analyze the determinants of this trade-off and demonstrate its implications for performance measurement and information system design.

Relevance:

60.00%

Publisher:

Abstract:

Energy efficiency and performance of materials handling systems have increasingly come into the focus of operators and manufacturers in recent years. In a study at the Fraunhofer Institute for Material Flow and Logistics IML, these two aspects were investigated for magnetically excited and piezo-excited vibratory conveyors. A system comparison presents the essential advantages and disadvantages of each drive type and supports both the selection and the optimization of transport systems in logistics environments.

Relevance:

60.00%

Publisher:

Abstract:

Utilizing advanced information technology, Intensive Care Unit (ICU) remote monitoring allows highly trained specialists to oversee a large number of patients at multiple sites on a continuous basis. In the current research, we conducted a time-motion study of registered nurses’ work in an ICU remote monitoring facility. Data were collected on seven nurses through 40 hours of observation. The results showed that nurses’ essential tasks were centered on three themes: monitoring patients, maintaining patients’ health records, and managing technology use. In monitoring patients, nurses spent 52% of the time assimilating information embedded in a clinical information system and 15% on monitoring live vitals. System-generated alerts frequently interrupted nurses in their task performance and redirected them to manage suddenly appearing events. These findings provide insight into nurses’ workflow in a new, technology-driven critical care setting and have important implications for system design, work engineering, and personnel selection and training.

Relevance:

60.00%

Publisher:

Abstract:

Background. Excess weight and obesity are at epidemic proportions in the United States and place individuals at increased risk for a variety of chronic conditions. Rates of diabetes, high blood pressure, coronary artery disease, stroke, cancer, and arthritis are all influenced by the presence of obesity. Small reductions in excess weight can produce significant positive clinical outcomes. Healthcare organizations have a vital role to play in the identification and management of obesity. Currently, healthcare providers do not adequately diagnose and manage excess weight in patients. Lack of skill, time, and knowledge are commonly cited as reasons for non-adherence to recommended standards of care. The Chronic Care Model offers an approach to healthcare organizations for chronic disease management. The model consists of six elements that work together to empower both providers and patients to have more productive interactions: the community, the health system itself, self-management support, delivery system design, decision support, and clinical information systems. The model and its elements may offer a framework through which healthcare organizations can adapt to support, educate, and empower providers and patients in the management of excess weight and obesity. Successful management of excess weight will reduce morbidity and mortality of many chronic conditions. Purpose. The purpose of this review is to synthesize existing research on the effectiveness of the Chronic Care Model and its elements as they relate to weight management and behaviors associated with maintaining a healthy weight. Methods: A narrative review of the literature between November 1998 and November 2008 was conducted. The review focused on clinical trials, systematic reviews, and reports related to the chronic care model or its elements and weight management, physical activity, nutrition, or diabetes. Fifty-nine articles are included in the review. Results. 
This review highlights uses of the Chronic Care Model and its elements that can result in improved quality of care and clinical outcomes related to weight management, physical activity, nutrition, and diabetes. Conclusions. Healthcare organizations can use the Chronic Care Model framework to implement changes within their systems to successfully address overweight and obesity in their patient populations. Specific recommendations for operationalizing the Chronic Care Model elements for weight management are presented.

Relevance:

60.00%

Publisher:

Abstract:

Background: The failure rate of health information systems is high, partially due to fragmented, incomplete, or incorrect identification and description of specific and critical domain requirements. In order to systematically transform work requirements into a real information system, an explicit conceptual framework is essential to summarize the requirements and guide system design. Recently, Butler, Zhang, and colleagues proposed a conceptual framework called Work Domain Ontology (WDO) to formally represent users' work. This WDO approach has been successfully demonstrated in a real-world design project on aircraft scheduling. However, as a top-level conceptual framework, WDO has not defined an explicit and well-specified schema (WDOS), and it does not have a generalizable, operationalized procedure that can easily be applied to develop a WDO. Moreover, WDO has not been developed for any concrete healthcare domain. These limitations hinder the utility of WDO in real-world information systems in general and in health information systems in particular. Objective: The objective of this research is to formalize the WDOS, operationalize a procedure to develop WDO, and evaluate the WDO approach using the Self-Nutrition Management (SNM) work domain. Method: Concept analysis was used to formalize the WDOS. Focus group interviews were conducted to capture concepts in the SNM work domain. Ontology engineering methods were adopted to model the SNM WDO. A subset of the concepts under the primary goal "staying healthy" for SNM was selected and transformed into a semi-structured survey to evaluate the acceptance, explicitness, completeness, consistency, and experience dependency of the SNM WDO. Result: Four concepts, "goal, operation, object, and constraint", were identified and formally modeled in the WDOS with definitions and attributes. 72 SNM WDO concepts under the primary goal were selected and transformed into semi-structured survey questions.
The evaluation indicated that the major concepts of the SNM WDO were accepted by 41 overweight subjects. The SNM WDO is generally independent of user domain experience but partially dependent on SNM application experience. 23 of 41 paired concepts had significant correlations. Two concepts were identified as ambiguous, and eight additional concepts were recommended to improve the completeness of the SNM WDO. Conclusion: The preliminary WDOS is ready, with an operationalized procedure. The SNM WDO has been developed to guide future SNM application design. This research is an essential step towards Work-Centered Design (WCD).
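The four WDOS concepts named above (goal, operation, object, and constraint) could be represented as a simple machine-readable schema. The class names, attributes, and the Self-Nutrition Management fragment below are illustrative assumptions for the sketch, not the formal WDOS definitions from this research.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WorkObject:
    name: str                  # thing the work acts on, e.g. "meal"

@dataclass
class Constraint:
    description: str           # condition the operation must respect

@dataclass
class Operation:
    name: str
    objects: List[WorkObject] = field(default_factory=list)
    constraints: List[Constraint] = field(default_factory=list)

@dataclass
class Goal:
    name: str
    operations: List[Operation] = field(default_factory=list)

# Hypothetical fragment of an SNM work domain under "staying healthy"
staying_healthy = Goal(
    name="staying healthy",
    operations=[Operation(
        name="plan meals",
        objects=[WorkObject("meal")],
        constraints=[Constraint("daily calories within target range")],
    )],
)
```

Encoding the ontology this way is one plausible step from a conceptual WDO toward the application design the conclusion anticipates, since the same structure can drive forms, validation, and survey generation.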

Relevance:

60.00%

Publisher:

Abstract:

Metamodels have proven to be very useful when it comes to reducing the computational requirements of evolutionary algorithm-based optimization, by acting as quick-solving surrogates for slow-solving fitness functions. The relationship between metamodel scope and objective function varies between applications; in some cases the metamodel acts as a surrogate for the whole fitness function, whereas in others it replaces only a component of it. This paper presents a formalized qualitative process for evaluating a fitness function to determine the most suitable metamodel scope, so as to increase the likelihood of calibrating a high-fidelity metamodel and hence obtain good optimization results in a reasonable amount of time. The process is applied to the risk-based optimization of water distribution systems, a very computationally intensive problem for real-world systems. The process is validated on a simple case study (modified New York Tunnels), and the power of metamodelling is demonstrated on a real-world case study (Pacific City), with a computational speed-up of several orders of magnitude.
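The surrogate idea can be sketched in a few lines: calibrate a cheap model from a handful of expensive evaluations, then let the optimizer query the surrogate instead of the true fitness. The quadratic test function, the three sample points, and the (1+1) evolution strategy below are illustrative stand-ins for the paper's hydraulic simulations and evolutionary algorithm.

```python
import random
import numpy as np

def expensive_fitness(x):
    """Stand-in for a slow evaluation (e.g. a hydraulic solver run)."""
    return (x - 3.0) ** 2

# Calibrate a cheap quadratic surrogate from three expensive samples.
xs = [0.0, 2.0, 5.0]
A = np.array([[x**2, x, 1.0] for x in xs])
y = np.array([expensive_fitness(x) for x in xs])
a, b, c = np.linalg.solve(A, y)            # exact fit for a quadratic
surrogate = lambda x: a * x**2 + b * x + c

# Drive a simple (1+1) evolution strategy with the surrogate only.
random.seed(0)
best = 0.0
for _ in range(200):
    cand = best + random.gauss(0.0, 0.5)   # mutate
    if surrogate(cand) < surrogate(best):  # select on the cheap model
        best = cand
```

In this toy setting the surrogate fits the true fitness exactly, so the optimizer converges to the true optimum near x = 3; the paper's contribution is precisely about choosing the metamodel scope so that real, imperfect surrogates stay faithful enough for the optimizer to do the same.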

Relevance:

60.00%

Publisher:

Abstract:

Because the metro network market is very cost sensitive, directly modulated schemes appear attractive. In this paper a CWDM (Coarse Wavelength Division Multiplexing) system is studied in detail by means of an optical communication system design software package; a detailed study of the modulated current shape (exponential, sine, and Gaussian) for 2.5 Gb/s CWDM metropolitan area networks is performed to evaluate its tolerance to linear impairments such as signal-to-noise-ratio degradation and dispersion. Point-to-point links are investigated and optimum design parameters are obtained. Through extensive sets of simulation results, it is shown that some of these pulse shapes are more tolerant to dispersion than conventional Gaussian pulses. In order to achieve a low bit error rate (BER), different types of optical transmitters are considered, including strongly adiabatic and transient-chirp-dominated directly modulated lasers (DMLs). We have used fibers with different dispersion characteristics, showing that system performance depends strongly on the chosen DML-fiber pair.
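The three drive-current shapes compared can be sketched as normalized one-bit pulses. The exact parameterizations below (Gaussian width, exponential rise constant) are illustrative assumptions, not the ones used in the paper's simulations.

```python
import numpy as np

BIT_RATE = 2.5e9        # 2.5 Gb/s line rate
T = 1.0 / BIT_RATE      # bit period

def drive_pulse(shape, t):
    """Normalized laser drive-current pulse over one bit period [0, T]."""
    if shape == "gaussian":
        sigma = T / 6.0                        # assumed pulse width
        return np.exp(-((t - T / 2) ** 2) / (2 * sigma ** 2))
    if shape == "sine":
        return np.sin(np.pi * t / T)           # half-sine pulse
    if shape == "exponential":
        return 1.0 - np.exp(-5.0 * t / T)      # assumed rise constant
    raise ValueError(f"unknown shape: {shape}")

t = np.linspace(0.0, T, 65)
pulses = {s: drive_pulse(s, t) for s in ("gaussian", "sine", "exponential")}
```

Each shape trades rise time against spectral width, which is why, as the paper reports, dispersion tolerance differs between them once the DML chirp is taken into account.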

Relevance:

60.00%

Publisher:

Abstract:

Modern FPGAs with run-time reconfiguration allow the implementation of complex systems offering the flexibility of software-based solutions combined with the performance of hardware. This combination of characteristics, together with the development of new specific methodologies, makes it feasible to reach new points of the system design space, and embedded systems built on these platforms are acquiring more and more importance. However, the practical exploitation of this technique in fields that have traditionally relied on resource-restricted embedded systems is mainly limited by strict power consumption requirements, cost, and the strong dependence of DPR techniques on the specific features of the underlying device technology. In this work, we tackle these problems by designing a reconfigurable platform based on the low-cost, low-power Spartan-6 FPGA family. The full process of developing the platform from scratch is detailed in the paper. In addition, the implementation of the reconfiguration mechanism, including two profiles, is reported. The first profile is a low-area, low-speed reconfiguration engine based mainly on software functions running on the embedded processor, while the other is a hardware version of the same engine, implemented in the FPGA logic. This reconfiguration hardware block was originally designed for the Virtex-5 family, and its porting process is also described in this work, addressing the interoperability problem among different families.

Relevance:

60.00%

Publisher:

Abstract:

Current nanometer technologies are subject to several adverse effects that seriously impact the yield and performance of integrated circuits, such as within-die parameter uncertainties, varying workload conditions, aging, and temperature. Monitoring, calibration, and dynamic adaptation have emerged as promising solutions to these issues, and many kinds of monitors have been presented recently. In this scenario, where systems with hundreds of monitors of different types have been proposed, the need for lightweight monitoring networks has become essential. In this work we present a lightweight network architecture based on sharing the digitization resources of nodes that require a time-to-digital conversion. Our proposal employs a single-wire interface, shared among all the nodes in the network, and quantizes the time domain to perform access multiplexing and transmit the information. It achieves a 16% improvement in area and power consumption compared to traditional approaches.
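The time-quantized, single-wire access scheme can be caricatured as follows: each node owns a time slot on the shared wire, and the offset of its pulse within that slot encodes its digitized measurement. The slot length and one-tick resolution below are arbitrary illustrative values, not parameters of the actual architecture.

```python
SLOT_TICKS = 100   # assumed quantized time ticks per node slot

def encode_pulse_time(node_id, value):
    """A node signals by pulsing the shared wire at an offset within
    its own time slot; the offset encodes the digitized value."""
    assert 0 <= value < SLOT_TICKS
    return node_id * SLOT_TICKS + value

def decode_pulse_time(pulse_tick):
    """The receiver recovers (node, value) from the pulse arrival time."""
    node_id, value = divmod(pulse_tick, SLOT_TICKS)
    return node_id, value

tick = encode_pulse_time(3, 42)
node, value = decode_pulse_time(tick)
```

A single arrival time thus carries both the node's identity and its reading, which is how one wire can serve hundreds of monitors without per-node data buses.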