971 results for Parallel design multicenter
Abstract:
Wireless sensor networks (WSNs) have shown wide applicability to many fields, including monitoring of environmental, civil, and industrial settings. WSNs, however, are resource constrained by many competing factors that span their hardware, software, and networking. One of the central resource constraints is the charge consumption of WSN nodes. With finite energy supplies, low charge consumption is needed to ensure long lifetimes and the success of WSNs. This thesis details the design of a power system to support long-term operation of WSNs. The power system's development occurred in parallel with a custom WSN from the Queen's MEMS Lab (QML-WSN), with the goal of supporting a 1+ year lifetime without sacrificing functionality. The final power system design utilizes a TPS62740 DC-DC converter with AA alkaline batteries to efficiently supply the nodes while providing battery monitoring functionality and an expansion slot for future development. Testing tools for measuring current draw and charge consumption were created, along with analysis and processing software. Through their use, the charge consumption of the power system was drastically lowered, and issues in the QML-WSN were identified and resolved, including improper shutdown of the accelerometers and an incorrect microcontroller unit (MCU) power pin connection. Controlled current profiling revealed unexpected behaviour of nodes and detailed current-voltage relationships. These relationships were utilized with a lifetime projection model to estimate a lifetime between 521 and 551 days, depending on the mode of operation. The power system and QML-WSN were tested over a long-term trial lasting 272+ days in an industrial testbed to monitor an air compressor pump. Environmental factors were found to influence the behaviour of nodes, leading to increased charge consumption, while a node in an office setting was still operating at the conclusion of the trial. This agrees with the lifetime projection and gives a strong indication that a 1+ year lifetime is achievable. Additionally, a lightweight charge consumption model was developed which allows the charge consumption of nodes in a distributed WSN to be monitored. This model was tested in a laboratory setting, demonstrating over 95% accuracy for high packet reception rate WSNs across varying data rates, battery supply capacities, and runtimes up to full battery depletion.
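The abstract does not give the projection model itself; the following is a minimal sketch, assuming the simplest form such a model can take: dividing usable battery charge by a mode-averaged current draw. The capacity and per-mode currents are hypothetical, back-calculated only to be consistent with the quoted 521-551 day range.

```python
# Minimal lifetime-projection sketch (not the thesis's actual model).
# All numbers are hypothetical, chosen to reproduce the 521-551 day range.

AA_CAPACITY_MAH = 2500.0  # assumed usable capacity of the AA alkaline supply

MODE_CURRENT_MA = {       # hypothetical mode-averaged current draws
    "low_rate_mode": 0.189,   # -> ~551 days
    "high_rate_mode": 0.200,  # -> ~521 days
}

def projected_lifetime_days(capacity_mah: float, avg_current_ma: float) -> float:
    """Days until battery depletion at a constant average current draw."""
    return capacity_mah / avg_current_ma / 24.0

for mode, i_avg in MODE_CURRENT_MA.items():
    print(f"{mode}: {projected_lifetime_days(AA_CAPACITY_MAH, i_avg):.0f} days")
```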
Abstract:
BACKGROUND One-lung ventilation during thoracic surgery is associated with hypoxia-reoxygenation injury in the deflated and subsequently reventilated lung. Numerous studies have reported volatile anesthesia-induced attenuation of inflammatory responses in such scenarios. Whether the effect also extends to clinical outcome is as yet undetermined. We hypothesized that volatile anesthesia is superior to intravenous anesthesia regarding postoperative complications. METHODS Five centers in Switzerland participated in this randomized controlled trial. Patients scheduled for lung surgery with one-lung ventilation were randomly assigned to one of two parallel arms to receive either propofol or desflurane as the general anesthetic. Patients and surgeons were blinded to group allocation. Time to occurrence of the first major complication according to the Clavien-Dindo score was defined as the primary (during hospitalization) or secondary (6-month follow-up) endpoint. Cox regression models were used with adjustment for prestratification variables and age. RESULTS Of 767 screened patients, 460 were randomized and analyzed (n = 230 for each arm). Demographics, disease, and intraoperative characteristics were comparable in both groups. Incidence of major complications during hospitalization was 16.5% in the propofol group and 13.0% in the desflurane group (hazard ratio for desflurane vs. propofol, 0.75; 95% CI, 0.46 to 1.22; P = 0.24). Incidence of major complications within 6 months of surgery was 40.4% in the propofol group and 39.6% in the desflurane group (hazard ratio for desflurane vs. propofol, 0.95; 95% CI, 0.71 to 1.28; P = 0.71). CONCLUSIONS This is the first multicenter randomized controlled trial addressing the effect of volatile versus intravenous anesthetics on major complications after lung surgery. No difference between the two anesthesia regimens was evident.
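As a sketch of the primary analysis described in METHODS, the following shows a Cox proportional-hazards model of time to first major complication, adjusted for age (prestratification covariates would be added the same way). The column names, file, and use of the lifelines library are assumptions, not details from the trial.

```python
# Hedged sketch of the trial's Cox regression, with hypothetical data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("trial_data.csv")  # hypothetical file with one row per patient
# Assumed numeric columns: time_to_complication (days), complication
# (1 = event observed, 0 = censored), arm (1 = desflurane, 0 = propofol), age.
cph = CoxPHFitter()
cph.fit(
    df[["time_to_complication", "complication", "arm", "age"]],
    duration_col="time_to_complication",
    event_col="complication",
)
cph.print_summary()  # exp(coef) for 'arm' is the desflurane-vs.-propofol hazard ratio
```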
Abstract:
Thesis (M. S.)--University of Illinois at Urbana-Champaign.
Abstract:
French & English text in parallel columns.
Abstract:
Crime Prevention Through Environmental Design (CPTED) has considerable support among the built environment professions. Yet the underlying assumptions on which it is based have rarely been evaluated to assess their effectiveness or efficacy. This paper reports the development and use of a scale that measured the actual levels of incidental CPTED in two residential areas on the Gold Coast, Australia. The scale was administered in parallel with a victimization and social attitude survey. Analysis based on the combination of the two suggests that CPTED measures may have some effect on reducing victimization, particularly the kind of CPTED measures that apply to a group of dwellings on a single street, but the effect on fear of crime is surprisingly limited. It also indicates that there is potential in the application of such a scale in a wider assessment of the effectiveness of operationalizing CPTED design measures.
Abstract:
A new transceive system for chest imaging in MRI applications is presented. A focused, eight-element transceive torso phased array coil is designed to investigate transmitting a focused radiofrequency field deep within the torso and to enhance signal homogeneity in the heart region. The system is used in conjunction with the SENSE reconstruction technique to enable focused parallel imaging. A hybrid finite-difference time-domain/method-of-moments method is used to accurately predict the radiofrequency behavior inside the human torso. The simulation results reported herein demonstrate the feasibility of the design concept, showing that radiofrequency field focusing with SENSE reconstruction is theoretically achievable. (c) 2005 Wiley-Liss, Inc.
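The paper's reconstruction details are not given here, but the core SENSE unfolding step it relies on can be sketched: for a reduction factor R, each aliased pixel is a coil-weighted superposition of R true pixels, recovered per pixel by least squares. Dimensions and sensitivity maps below are illustrative assumptions.

```python
# Sketch of per-pixel SENSE unfolding (illustrative, not the paper's code).
import numpy as np

def sense_unfold(aliased, sens, R):
    """aliased: (n_coils, ny // R, nx) aliased coil images.
    sens: (n_coils, ny, nx) complex coil sensitivity maps.
    Returns the (ny, nx) unfolded image."""
    n_coils, ny_r, nx = aliased.shape
    img = np.zeros((ny_r * R, nx), dtype=complex)
    for y in range(ny_r):
        for x in range(nx):
            S = sens[:, y::ny_r, x]  # sensitivities of the R pixels folded onto (y, x)
            a = aliased[:, y, x]     # measured aliased value in each coil
            rho, *_ = np.linalg.lstsq(S, a, rcond=None)  # least-squares unfold
            img[y::ny_r, x] = rho
    return img
```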
Abstract:
This paper presents a finite-difference time-domain (FDTD) simulator for electromagnetic analysis and design applications in MRI. It is intended to be a complete FDTD model of an MRI system, including all RF and low-frequency field-generating units and electrical models of the patient. The program has been constructed in an object-oriented framework. The design procedure is detailed, and the numerical solver has been verified against analytical solutions for simple cases and also applied to various field calculation problems. In particular, the simulator is demonstrated for inverse RF coil design, optimized source profile generation, and parallel imaging in high-frequency situations. The examples show new developments enabled by the simulator and demonstrate that the proposed FDTD framework can be used to analyze large-scale computational electromagnetic problems in modern MRI engineering. (C) 2004 Elsevier Inc. All rights reserved.
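The simulator itself is large, but the kernel of any FDTD solver is the leapfrog Yee update. A minimal one-dimensional sketch in normalized units is given below; the MRI-scale version adds 3-D grids, tissue models, sources, and boundary conditions on top of this core.

```python
# 1-D FDTD (Yee) core: staggered E and H fields advanced in leapfrog steps.
import numpy as np

nz, nt = 200, 500
ez = np.zeros(nz)       # electric field samples
hy = np.zeros(nz - 1)   # magnetic field, staggered half a cell
c = 0.5                 # Courant number (c0*dt/dz); must be <= 1 for stability

for t in range(nt):
    hy += c * (ez[1:] - ez[:-1])                   # update H from the curl of E
    ez[1:-1] += c * (hy[1:] - hy[:-1])             # update E from the curl of H
    ez[nz // 2] += np.exp(-((t - 30) / 10) ** 2)   # soft Gaussian source
```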
Abstract:
A new method for ameliorating high-field image distortion caused by radiofrequency/tissue interaction is presented and modeled. The proposed method uses, but is not restricted to, a shielded four-element transceive phased array coil and involves performing two separate scans of the same slice, with each scan using different excitations during transmission. By optimizing the amplitudes and phases for each scan, antipodal signal profiles can be obtained, and by combining both images together, the image distortion can be reduced several-fold. A hybrid finite-difference time-domain/method-of-moments method is used to theoretically demonstrate the method and also to predict the radiofrequency behavior inside the human head. In addition, the proposed method is used in conjunction with the GRAPPA reconstruction technique to enable rapid imaging. Simulation results reported herein for 11T (470 MHz) brain imaging applications demonstrate the feasibility of the concept, where multiple acquisitions using parallel imaging elements with GRAPPA reconstruction result in improved image quality. (c) 2006 Wiley Periodicals, Inc.
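The exact weighting used to merge the two acquisitions is not stated in this abstract; a plausible minimal combination, assuming the antipodal profiles mean each scan is bright where the other is dark, is a per-pixel sum of squares:

```python
# Hedged sketch: combine two scans with complementary (antipodal) shading
# so signal voids in one are filled by the other. Sum-of-squares is an
# assumption; the paper's actual combination may differ.
import numpy as np

def combine_antipodal(img1, img2):
    """Per-pixel magnitude combination of two complex images."""
    return np.sqrt(np.abs(img1) ** 2 + np.abs(img2) ** 2)
```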
Abstract:
Cold roll forming is an extremely important but little studied sheet metal forming process. In this thesis, the process of cold roll forming is introduced and it is seen that form roll design is central to the cold roll forming process. The conventional design and manufacture of form rolls is discussed, and it is observed that the design process is surrounded by a number of activities which, although peripheral, are time consuming and a possible source of error. A CAD/CAM system is described which alleviates many of the problems traditional to form roll design. New techniques for the calculation of strip length and for controlling the means of forming bends are detailed. The CAD/CAM system's advantages and limitations are discussed and, whilst the system has numerous significant advantages, its principal limitation can be said to be the need to manufacture form rolls and test them on a mill before a design can be declared satisfactory. A survey of the previous theoretical and experimental analysis of cold roll forming is presented and is found to be limited. By considering the previous work, a method of numerical analysis of the cold roll forming process is proposed, based on a minimum energy approach. In parallel with the numerical analysis, a comprehensive range of software has been developed to enhance the designer's visualisation of the effects of his form roll design. A complementary approach to the analysis of form roll designs is their generation; a method for the partial generation of designs is described. It is suggested that the two approaches should continue in parallel and that the limitation of each approach is knowledge of the cold roll forming process. Hence, an initial experimental investigation of the rolling of channel sections is described. Finally, areas of potential future work are discussed.
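The thesis's minimum energy formulation is not reproduced here, but the flavour of the approach can be sketched: choose the fold-angle progression across roll stations that minimizes a simple strain energy measure, subject to reaching the final section angle. The energy measure and station count below are illustrative assumptions.

```python
# Illustrative minimum-energy sketch (not the thesis's formulation).
import numpy as np
from scipy.optimize import minimize

n_stations = 6
theta_final = np.radians(90.0)  # final flange angle of a channel section

def bending_energy(theta):
    # Penalize large fold-angle increments between consecutive stations,
    # a crude stand-in for the strain energy of each forming increment.
    steps = np.diff(np.concatenate(([0.0], theta)))
    return np.sum(steps ** 2)

cons = {"type": "eq", "fun": lambda th: th[-1] - theta_final}
res = minimize(bending_energy, x0=np.linspace(0.1, theta_final, n_stations),
               constraints=[cons])
print(np.degrees(res.x))  # a near-uniform angle progression minimizes this energy
```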
Abstract:
Image segmentation is one of the most computationally intensive operations in image processing and computer vision. This is because a large volume of data is involved and many different features have to be extracted from the image data. This thesis is concerned with the investigation of practical issues related to the implementation of several classes of image segmentation algorithms on parallel architectures. The Transputer is used as the basic building block of hardware architectures and Occam is used as the programming language. The segmentation methods chosen for implementation are convolution, for edge-based segmentation; the Split and Merge algorithm for segmenting non-textured regions; and the Granlund method for segmentation of textured images. Three different convolution methods have been implemented. The direct method of convolution, carried out in the spatial domain, uses the array architecture. The other two methods, based on convolution in the frequency domain, require the use of the two-dimensional Fourier transform. Parallel implementations of two different Fast Fourier Transform algorithms have been developed, incorporating original solutions. For the Row-Column method the array architecture has been adopted, and for the Vector-Radix method, the pyramid architecture. The texture segmentation algorithm, for which a system-level design is given, demonstrates a further application of the Vector-Radix Fourier transform. A novel concurrent version of the quad-tree based Split and Merge algorithm has been implemented on the pyramid architecture. The performance of the developed parallel implementations is analysed. Many of the obtained speed-up and efficiency measures show values close to their respective theoretical maxima. Where appropriate, comparisons are drawn between different implementations. The thesis concludes with comments on general issues related to the use of the Transputer system as a development tool for image processing applications, and on the issues related to the engineering of concurrent image processing applications.
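Of the methods listed, the Split and Merge algorithm is the easiest to sketch compactly. The serial recursion below captures the quad-tree splitting step (the thesis maps this onto a pyramid of Transputers in Occam); the merge pass, which joins similar regions across quad-tree boundaries, is omitted for brevity, and the homogeneity test is an illustrative assumption.

```python
# Quad-tree splitting core of Split and Merge (serial Python sketch).
import numpy as np

def homogeneous(block, tol=10.0):
    """Illustrative homogeneity test: intensity range within tolerance."""
    return block.size <= 1 or block.max() - block.min() <= tol

def split(img, x=0, y=0, size=None, tol=10.0):
    """Return (x, y, size) squares covering img, each homogeneous.
    Assumes a square image whose side is a power of two."""
    if size is None:
        size = img.shape[0]
    block = img[y:y + size, x:x + size]
    if homogeneous(block, tol):
        return [(x, y, size)]
    h = size // 2
    regions = []
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        regions += split(img, x + dx, y + dy, h, tol)
    return regions
```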
Abstract:
The present scarcity of operational knowledge-based systems (KBS) has been attributed, in part, to the inadequate consideration given to user interface design during development. From a human factors perspective the problem has stemmed from an overall lack of user-centred design principles. Consequently the integration of human factors principles and techniques is seen as a necessary and important precursor to ensuring the implementation of KBS which are useful to, and usable by, the end-users for whom they are intended. Focussing upon KBS work taking place within commercial and industrial environments, this research set out to assess both the extent to which human factors support was presently being utilised within development, and the future path for human factors integration. The assessment consisted of interviews conducted with a number of commercial and industrial organisations involved in KBS development, and a set of three detailed case studies of individual KBS projects. Two of the studies were carried out within a collaborative Alvey project, involving the Interdisciplinary Higher Degrees Scheme (IHD) at the University of Aston in Birmingham, BIS Applied Systems Ltd (BIS), and the British Steel Corporation. This project, which had provided the initial basis and funding for the research, was concerned with the application of KBS to the design of commercial data processing (DP) systems. The third study stemmed from involvement on a KBS project being carried out by the Technology Division of the Trustee Savings Bank Group plc. The preliminary research highlighted poor human factors integration. In particular, there was a lack of early consideration of end-user requirements definition and user-centred evaluation. Instead concentration was given to the construction of the knowledge base and prototype evaluation with the expert(s). In response to this identified problem, a set of methods was developed that was aimed at encouraging developers to consider user interface requirements early on in a project. These methods were then applied in the two further projects, and their uptake within the overall development process was monitored. Experience from the two studies demonstrated that early consideration of user interface requirements was both feasible, and instructive for guiding future development work. In particular, it was shown that a user interface prototype could be used as a basis for capturing requirements at the functional (task) level, and at the interface dialogue level. Extrapolating from this experience, a KBS life-cycle model is proposed which incorporates user interface design (and within that, user evaluation) as a largely parallel, rather than subsequent, activity to knowledge base construction. Further to this, there is a discussion of several key elements which can be seen as inhibiting the integration of human factors within KBS development. These elements stem from characteristics of present KBS development practice; from constraints within the commercial and industrial development environments; and from the state of existing human factors support.
Abstract:
The focus of this study is the development of parallelised versions of severely sequential and iterative numerical algorithms on a multi-threaded parallel platform such as a graphics processing unit. This requires the design and development of a platform-specific numerical solution that can benefit from the parallel capabilities of the chosen platform. A graphics processing unit was chosen as the parallel platform for the design and development of a numerical solution for a specific physical model in non-linear optics. This problem appears in describing ultra-short pulse propagation in bulk transparent media, which has recently been the subject of several theoretical and numerical studies. The mathematical model describing this phenomenon is a challenging and complex problem, and its numerical modeling is limited on current workstations. Numerical modeling of this problem requires parallelisation of essentially serial algorithms and the elimination of numerical bottlenecks. The main challenge to overcome is parallelisation of the globally non-local mathematical model. This thesis presents a numerical solution that eliminates the numerical bottleneck associated with the non-local nature of the mathematical model. The accuracy and performance of the parallel code are verified by back-to-back testing against a similar serial version.
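The abstract does not say how the non-local bottleneck was eliminated; one common pattern for such terms, shown here only as an illustration, is to evaluate a convolution-type non-local operator spectrally, turning a globally coupled sum into independent per-frequency multiplications that map naturally onto GPU threads.

```python
# Illustrative pattern only (not the thesis's solver): a non-local
# convolution-type term evaluated via the FFT. Each frequency component
# is independent, so the work parallelises across GPU threads.
import numpy as np

def nonlocal_term(field, kernel_hat):
    """field: 1-D complex samples; kernel_hat: transfer function of the
    non-local kernel, sampled on the same frequency grid."""
    return np.fft.ifft(kernel_hat * np.fft.fft(field))
```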
Abstract:
Purpose – The purpose of this paper is to investigate an underexplored aspect of outsourcing involving a mixed strategy in which parallel production is continued in-house at the same time as outsourcing occurs. Design/methodology/approach – The study applied a multiple case study approach and drew on qualitative data collected through in-depth interviews with wood product manufacturing companies. Findings – The paper posits that there should be a variety of mixed strategies between the two governance forms of “make” or “buy.” In order to address how companies should consider the extent to which they outsource, the analysis was structured around two ends of a continuum: in-house dominance or outsourcing dominance. With an in-house-dominant strategy, outsourcing complements an organization's own production to optimize capacity utilization and outsource less cost-efficient production, or is used as a tool to learn how to outsource. With an outsourcing-dominant strategy, in-house production helps maintain complementary competencies and avoids lock-in risk. Research limitations/implications – This paper takes initial steps toward an exploration of different mixed strategies. Additional research is required to understand the costs of different mixed strategies compared with insourcing and outsourcing, and to study parallel production from a supplier viewpoint. Practical implications – This paper suggests that managers should think twice before rushing to a “me too” outsourcing strategy in which in-house capacities are completely closed. It is important to take a dynamic view of outsourcing that maintains a mixed strategy as an option, particularly in situations that involve an underdeveloped supplier market and/or as a way to develop resources over the long term. Originality/value – The concept of combining both “make” and “buy” is not new. However, little if any research has focussed explicitly on exploring the variety of different types of mixed strategies that exist on the continuum between insourcing and outsourcing.
Abstract:
In the field of Transition P systems implementation, it has been determined that it is very important to know in advance how long the application of evolution rules in membranes takes. Moreover, having time estimations of rule application in membranes makes it possible to take important decisions related to hardware/software architecture design. The work presented here introduces an algorithm for applying active evolution rules in Transition P systems which is based on active rules elimination. The algorithm meets the requirements of being nondeterministic and massively parallel, and, more importantly, it is time delimited because it depends only on the number of membrane evolution rules.
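The paper's algorithm is only summarized above; the following hedged sketch shows the shape such an active-rules-elimination loop can take: each iteration applies one randomly chosen rule a random number of times and then eliminates it, so the loop runs at most once per rule. The rule encoding and the maximal application of the last rule are assumptions inferred from the abstract.

```python
# Hedged sketch of active rules elimination (details are assumptions).
import random

def max_apps(lhs, ms):
    """Maximal applicability of a rule's left-hand side over multiset ms."""
    return min(ms.get(s, 0) // n for s, n in lhs.items())

def apply_active_rules(ms, rules):
    """ms: multiset as a dict, e.g. {'a': 5}. rules: list of (lhs, rhs)
    multiset pairs, e.g. ({'a': 2}, {'b': 1})."""
    active = [r for r in rules if max_apps(r[0], ms) > 0]
    while active:  # at most len(rules) iterations: one rule eliminated per pass
        lhs, rhs = active.pop(random.randrange(len(active)))
        k = max_apps(lhs, ms)
        times = k if not active else random.randint(0, k)  # last rule: maximal
        for s, n in lhs.items():
            ms[s] -= n * times
        for s, n in rhs.items():
            ms[s] = ms.get(s, 0) + n * times
        active = [r for r in active if max_apps(r[0], ms) > 0]
    return ms
```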
Abstract:
This work was partially supported by the Bulgarian National Science Fund under Contract No MM 1405. Part of the results were announced at the Fifth International Workshop on Optimal Codes and Related Topics (OCRT), White Lagoon, June 2007, Bulgaria