857 results for Design tool
Abstract:
Software-based control of life-critical embedded systems has become increasingly complex, and to a large extent now determines the safety of the people who depend on it. For example, implantable cardiac pacemakers have over 80,000 lines of code which are responsible for maintaining the heart within safe operating limits. As firmware-related recalls accounted for over 41% of the 600,000 devices recalled in the last decade, there is a need for rigorous model-driven design tools to generate verified code from verified software models. To this effect, we have developed the UPP2SF model-translation tool, which facilitates automatic conversion of verified models (in UPPAAL) to models that may be simulated and tested (in Simulink/Stateflow). We describe the translation rules that ensure correct model conversion, applicable to a large class of models. We demonstrate how UPP2SF is used in the model-driven design of a pacemaker whose model is (a) designed and verified in UPPAAL (using timed automata), (b) automatically translated to Stateflow for simulation-based testing, and then (c) used to automatically generate modular code for hardware-level integration testing of timing-related errors. In addition, we show how UPP2SF may be used for worst-case execution time estimation early in the design stage. Using UPP2SF, we demonstrate the value of an integrated end-to-end modeling, verification, code-generation and testing process for complex software-controlled embedded systems. © 2014 ACM.
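The translation rules themselves are defined in the paper; purely as a hypothetical illustration of the kind of mapping such a rule performs (the data structures and names below are invented for this sketch and are not taken from UPP2SF), a timed-automaton location with a clock invariant can be rendered as a Stateflow-like state description in a few lines of Python:

# Hypothetical sketch only: the real UPP2SF translation rules are defined in
# the paper. This merely illustrates mapping one UPPAAL-style location to a
# Stateflow-like state record.

def translate_location(location):
    """Map a timed-automaton location to a Stateflow-like state dict."""
    state = {
        "name": location["name"],
        # Clock invariants (e.g. "t <= 400") bound how long the state may stay active.
        "during": list(location.get("invariants", [])),
        "transitions": [],
    }
    for edge in location.get("edges", []):
        state["transitions"].append({
            "target": edge["target"],
            "condition": edge.get("guard", "true"),  # clock guards become transition conditions
            "action": "; ".join(f"{c} = 0" for c in edge.get("resets", [])),  # clock resets become actions
        })
    return state

# Invented, pacemaker-flavoured example location.
loc = {
    "name": "WaitVentricular",
    "invariants": ["t <= 400"],
    "edges": [{"target": "PaceVentricle", "guard": "t >= 400", "resets": ["t"]}],
}
print(translate_location(loc))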
Abstract:
Purpose – To present key challenges associated with the evolution of system-in-package (SiP) technologies and to present technical work in reliability modelling and embedded test that contributes to addressing these challenges. Design/methodology/approach – Key challenges have been identified from the electronics and integrated MEMS industrial sectors. Solutions for optimising the reliability of a typical assembly process and reducing the cost of production test have been studied through simulation and modelling studies based on technology data released by NXP and in collaboration with EDA tool vendors Coventor and Flomerics. Findings – Characterised models that deliver special and material-dependent reliability data that can be used to optimise the robustness of SiP assemblies, together with results that indicate the relative contributions of various structural variables; an initial analytical model for solder ball reliability; and a solution for embedding a low-cost test for a capacitive RF-MEMS switch, identified as an SiP component presenting a key test challenge. Research limitations/implications – Results will contribute to the further development of NXP wafer-level system-in-package technology. Limitations are the lack of feedback on the implementation of the recommendations and of physical characterisation of the embedded test solution. Originality/value – Both the methodology and the associated studies on the structural reliability of an industrial SiP technology are unique. The analytical model for solder ball life is new, as is the embedded test solution for the RF-MEMS switch.
Abstract:
The intrinsically independent features of the optimal codebook-cube searching process in fractal video compression systems are examined and exploited, and a suitable parallel algorithm reflecting this independence is designed. The Message Passing Interface (MPI) is chosen as the communication tool for implementing the parallel algorithm on distributed-memory parallel computers. Experimental results show that the parallel algorithm reduces the compression time and achieves a high speedup without changing the compression ratio or the quality of the decompressed image. A scalability test was also performed, and the results show that the parallel algorithm is scalable.
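As a minimal sketch of how such an independent search can be distributed with MPI (using mpi4py and placeholder data purely for illustration; the paper's implementation is not reproduced here), each rank can evaluate its own slice of the codebook cubes and the best match can then be collected on one rank:

# Minimal mpi4py sketch of distributing a best-match search over codebook
# cubes. Illustrative only: the distortion measure and data layout are
# placeholders, not the paper's implementation.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(rank)
target = np.zeros((4, 4, 4))                 # range cube to be matched
codebook = rng.random((1000, 4, 4, 4))       # this rank's slice of the domain cubes

# Each rank searches only its own slice of the codebook (the independent part).
errors = ((codebook - target) ** 2).sum(axis=(1, 2, 3))
local_best = (float(errors.min()), rank, int(errors.argmin()))

# Collect the per-rank winners and keep the smallest error overall.
all_best = comm.gather(local_best, root=0)
if rank == 0:
    print("best error %.4f found on rank %d at index %d" % min(all_best))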
Abstract:
With appropriate planning and design, Olympic urban development has the potential to leave positive environmental legacies to the host city and contribute to environmental sustainability. This book explains how a modern Olympic games can successfully develop a more sustainable design approach by learning from the lessons of the past and by taking account of the latest developments. It offers an assessment tool that can be tailored to individual circumstances – a tool which emerges from the analysis of previous summer games host cities and from techniques in environmental analysis and assessment.
Abstract:
This paper uses a case study approach to consider the effectiveness of the electronic survey as a research tool to measure the learner voice about experiences of e-learning in a particular institutional case. Two large-scale electronic surveys were carried out for the Student Experience of e-Learning (SEEL) project at the University of Greenwich in 2007 and 2008, funded by the UK Higher Education Academy (HEA). The paper considers this case to argue that, although the electronic web-based survey is a convenient method of quantitative and qualitative data collection, enabling higher education institutions to capture swiftly multiple views of large numbers of students regarding experiences of e-learning, for more robust analysis, electronic survey research is best combined with other methods of in-depth qualitative data collection. The advantages and disadvantages of the electronic survey as a research method to capture student experiences of e-learning are the focus of analysis in this short paper, which reports an overview of large-scale data collection (1,000+ responses) from two electronic surveys administered to students using SurveyMonkey as a web-based survey tool as part of the SEEL research project. Advantages of web-based electronic survey design include flexibility, ease of design, a high degree of designer control, convenience, low costs, data security, ease of access and a guarantee of confidentiality, combined with the researcher's ability to identify users through email addresses. Disadvantages of electronic survey design include the self-selecting nature of web-enabled respondent participation, which tends to skew data collection towards students who respond effectively to email invitations. The relative inadequacy of electronic surveys to capture in-depth qualitative views of students is discussed with regard to prior recommendations from the JISC-funded Learners' Experiences of e-Learning (LEX) project, in consideration of the results from SEEL in-depth interviews with students. The paper considers the literature on web-based and email electronic survey design, summing up the relative advantages and disadvantages of electronic surveys as a tool for student experience of e-learning research. The paper concludes with a range of recommendations for designing future electronic surveys to capture the learner voice on e-learning, contributing to evidence-based learning technology research development in higher education.
Abstract:
Purpose – The purpose of this paper is to develop a quality control tool based on rheological test methods for solder paste and flux media. Design/methodology/approach – The rheological characterisation of solder pastes and flux media was carried out using creep-recovery, thixotropy and viscosity test methods. A rheometer with a parallel-plate measuring geometry of 40 mm diameter and a gap height of 1 mm was used to characterise the pastes and associated flux media. Findings – The results from the study showed that the creep-recovery test can be used to study the deformation and recovery of the pastes, which can in turn be used to understand slump behaviour in solder pastes. In contrast, the thixotropy and viscosity tests were unsuccessful in discriminating the rheological flow behaviour of the solder paste and flux medium samples. Research limitations/implications – More extensive rheological and printing testing is needed in order to correlate the findings from this study with the printing performance of the pastes. Practical implications – The rheological test method presented in the paper will provide important information for research and development, quality control and production staff to facilitate the manufacture of solder pastes and flux media. Originality/value – The paper explains how the rheological test can be used as a quality control tool to assess the suitability of a developmental solder paste and flux media for the printing process.
Abstract:
Fisheries closures are rapidly being developed to protect vulnerable marine ecosystems worldwide. Satellite monitoring of fishing vessel activity indicates that these closures can work effectively with good compliance by international fleets even in remote areas. Here we summarise how remote fisheries closures were designed to protect Lophelia pertusa habitat in a region of the NE Atlantic that straddles the EU fishing zone and the high seas. We show how scientific records, fishers' knowledge and surveillance data on fishing activity can be combined to provide a powerful tool for the design of Marine Protected Areas.
Abstract:
In this paper, we propose for the first time an analytical model for short-channel effects in nanoscale source/drain extension region engineered double gate (DG) SOI MOSFETs. The impact of (i) lateral source/drain doping gradient (d), (ii) spacer width (s), (iii) spacer-to-doping-gradient ratio (s/d) and (iv) silicon film thickness (T_si) on short-channel effects, namely threshold voltage (V_th) and subthreshold slope (S), and on the on-current (I_on), off-current (I_off) and I_on/I_off ratio, is extensively analysed using the analytical model and 2D device simulations. The results of the analytical model agree well with simulated data over the entire range of spacer widths, doping gradients and effective channel lengths. Results show that the lateral source/drain doping gradient, together with the spacer width, can not only effectively control short-channel effects, thus giving low off-current, but can also be optimised to achieve high on-currents. The present work provides valuable design insights into the performance of nanoscale DG SOI devices with optimal source/drain engineering and serves as a tool to optimise important device and technological parameters for the 65 nm technology node and below. © 2006 Elsevier Ltd. All rights reserved.
Abstract:
In this study, we used optical coherence tomography (OCT) to extensively investigate, for the first time, the effect that microneedle (MN) geometry (MN height and MN interspacing) and force of application have upon the penetration characteristics of soluble poly(methylvinylether-co-maleic anhydride) (PMVE/MA) MN arrays into neonatal porcine skin in vitro. The results from the OCT investigations were then used to design optimal and suboptimal MN-based drug delivery systems and evaluate their drug delivery profiles across full-thickness and dermatomed neonatal porcine skin in vitro. It was found that increasing the force used for MN application resulted in a significant increase in the depth of penetration achieved within neonatal porcine skin. For example, MN of 600 µm height penetrated to a depth of 330 µm when inserted at a force of 4.4 N/array, while the penetration increased significantly to a depth of 520 µm when the force of application was increased to 16.4 N/array. At an application force of 11.0 N/array it was found that, in each case, increasing MN height from 350 µm to 600 µm to 900 µm led to a significant increase in the depth of MN penetration achieved. Moreover, alteration of MN interspacing had no effect upon the depth of penetration achieved at a constant MN height and force of application. With respect to MN dissolution, an approximate 34% reduction in MN height occurred in the first 15 min, with only 17% of the MN height remaining after a 3-hour period. Across both skin models, there was a significantly greater cumulative amount of theophylline delivered after 24 h from an MN array of 900 µm height (292.23 ± 16.77 µg) than from an MN array of 350 µm height (242.62 ± 14.81 µg) (p < 0.001). Employing full-thickness skin significantly reduced drug permeation in both cases. Importantly, this study has highlighted the effect that MN geometry and application force have upon the depth of penetration into skin. While it has been shown that MN height has an important role in the extent of drug delivered across neonatal porcine skin from a soluble MN array, further studies to evaluate the full significance of MN geometry on MN-mediated drug delivery are now underway. The successful use of OCT in this study could prove to be a key development for polymeric MN research, accelerating their commercial exploitation.
Abstract:
Traditional static analysis fails to auto-parallelize programs with complex control and data flow. Furthermore, thread-level parallelism in such programs is often restricted to pipeline parallelism, which can be hard for a programmer to discover. In this paper we propose a tool that, based on profiling information, helps the programmer to discover parallelism. The programmer hand-picks the code transformations from among the proposed candidates, which are then applied by automatic code transformation techniques.
This paper contributes to the literature by presenting a profiling tool for discovering thread-level parallelism. We track dependencies at the whole-data-structure level rather than at the element or byte level in order to limit the profiling overhead. We perform a thorough analysis of the needs and costs of this technique. Furthermore, we present and validate the belief that programs with complex control and data flow contain significant amounts of exploitable coarse-grain pipeline parallelism in their outer loops. This observation validates our approach to whole-data-structure dependencies. As state-of-the-art compilers focus on loops iterating over data structure members, it also explains why our approach finds coarse-grain pipeline parallelism in cases that have remained out of reach for state-of-the-art compilers. In cases where traditional compilation techniques do find parallelism, our approach allows higher degrees of parallelism to be discovered, yielding a 40% speedup over traditional compilation techniques. Moreover, we demonstrate real speedups on multiple hardware platforms.
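As a hypothetical illustration of the coarse-grain pipeline parallelism targeted here (a toy sketch, not the tool's output), an outer loop whose iterations hand whole data structures from one stage to the next can be split across threads connected by a queue:

# Toy sketch of coarse-grain pipeline parallelism over an outer loop:
# stage 1 produces whole records, stage 2 consumes them concurrently.
import queue
import threading

SENTINEL = object()

def stage1(out_q, n_items):
    # Outer-loop producer: builds one whole data structure per iteration.
    for i in range(n_items):
        out_q.put({"id": i, "payload": list(range(i % 10))})
    out_q.put(SENTINEL)

def stage2(in_q, results):
    # Downstream pipeline stage: consumes records as they become available.
    while True:
        record = in_q.get()
        if record is SENTINEL:
            break
        results.append((record["id"], sum(record["payload"])))

q, results = queue.Queue(maxsize=8), []
t1 = threading.Thread(target=stage1, args=(q, 100))
t2 = threading.Thread(target=stage2, args=(q, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(results), "records processed")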
Abstract:
This paper presents the design of a single-chip adaptive beamformer which contains 5 million transistors and can perform 50 GigaFlops. The core processor of the adaptive beamformer is a QR-array processor implemented on a fully efficient linear systolic architecture. The paper highlights a number of rapid design techniques that have been used to realize the design. These include an architecture synthesis tool for quickly developing the circuit architecture and a library of parameterizable silicon intellectual property (IP) cores used to rapidly develop the circuit layouts.
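The core computation of such a QR-array is typically recursive QR updating by Givens rotations; the following plain NumPy sketch of a single update step is illustrative only and does not reflect the systolic implementation described in the paper:

# Illustrative NumPy sketch of the QR update a QR-array performs: a new data
# row x is annihilated into the triangular factor R by Givens rotations,
# mimicking what the boundary and internal cells of the array compute.
import numpy as np

def qr_update(R, x):
    """Rotate row x into the upper-triangular factor R and return the result."""
    R, x = R.copy(), x.astype(float).copy()
    for k in range(R.shape[0]):
        r = np.hypot(R[k, k], x[k])
        if r == 0.0:
            continue
        c, s = R[k, k] / r, x[k] / r          # rotation computed by a boundary cell
        Rk, xk = R[k, k:].copy(), x[k:].copy()
        R[k, k:] = c * Rk + s * xk            # rotation applied by internal cells
        x[k:] = -s * Rk + c * xk              # x[k] is driven to zero
    return R

rng = np.random.default_rng(0)
R = np.triu(rng.random((4, 4)))
print(qr_update(R, rng.random(4)))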
Abstract:
Dual-rail encoding, return-to-spacer protocol, and hazard-free logic can be used to resist power analysis attacks by making energy consumed per clock cycle independent of processed data. Standard dual-rail logic uses a protocol with a single spacer, e.g., all-zeros, which gives rise to energy balancing problems. We address these problems by incorporating two spacers; the spacers alternate between adjacent clock cycles. This guarantees that all gates switch in every clock cycle regardless of the transmitted data values. To generate these dual-rail circuits, an automated tool has been developed. It is capable of converting synchronous netlists into dual-rail circuits and it is interfaced to industry CAD tools. Dual-rail and single-rail benchmarks based upon the advanced encryption standard (AES) have been simulated and compared in order to evaluate the method and the tool.
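A toy software model of the alternating-spacer idea (a sketch for illustration, not the tool's netlist transformation) makes the balancing property easy to see: each rail of every dual-rail pair makes exactly one transition per clock cycle, whatever the data:

# Toy model of the alternating-spacer dual-rail protocol (illustrative only).
# A bit b is sent as the code word (not b, b); code words alternate with two
# spacers, all-zeros and all-ones, on successive clock cycles.

SPACERS = [(0, 0), (1, 1)]

def encode(bits):
    """Interleave alternating spacers with dual-rail code words."""
    stream = []
    for cycle, b in enumerate(bits):
        stream.append(SPACERS[cycle % 2])   # spacer phase of this cycle
        stream.append((1 - b, b))           # code-word phase of this cycle
    return stream

def rail_transitions_per_cycle(stream):
    """Count, per clock cycle (spacer -> code word -> next spacer), how many
    times each rail switches. With alternating spacers this is exactly one
    transition per rail, independent of the transmitted data."""
    counts = []
    for i in range(0, len(stream) - 2, 2):
        sp, cw, nxt = stream[i], stream[i + 1], stream[i + 2]
        counts.append(tuple(
            int(sp[r] != cw[r]) + int(cw[r] != nxt[r]) for r in range(2)))
    return counts

enc = encode([0, 1, 1, 0, 1])
print(enc)
print(rail_transitions_per_cycle(enc))      # every cycle: (1, 1)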
Abstract:
Background: Clinical supervision takes place once the newly qualified nurse is employed in clinical practice. However, the variety and diversity of nursing jobs can result in a hit-and-miss delivery of supervision training. By introducing training uniformly at the undergraduate stage, a more seamless transition may occur (McColgan and Rice, 2012).
There is increased interest in higher education in the use of online learning resources for students. As part of the completion of a DNP, an app for training students in clinical supervision was developed.
Aim: To create a clinical supervision training app for use in undergraduate nursing.
Objectives:
•To develop a teaching tool that is up to date and easily accessible to students.
•To introduce supervision training for undergraduate nursing students.
•To motivate the undergraduate nursing student to identify examples from their clinical experience to encourage change and promote professional development.
Approach:
Stage 1
In 2010/11, informal inquiries were made with senior nurses regarding the introduction of supervision training in undergraduate nursing.
Stage 2
A review of UK supervision training.
Stage 3
Template production of teaching tool.
Stage 4
Collaboration with a computer technician to transfer multimedia outputs onto an app.
Stage 5
App piloted with lecturers (n=4) and post-registration students (n=20).
Stage 6
Minor alterations made to the app design template.
Stage 7
App included in an experimental study comparing online learning with blended learning, June 2013 (n=61, n=63).
Conclusion: A collaborative approach to the development of any educational programme is essential to ensure the success of the final teaching product (McCutcheon, 2013). The end result is that this app could be:
•Made available to nurses in the UK.
•Adapted to suit other healthcare professionals and students.
•Used as a prototype for other healthcare related subjects.
McColgan, K. and Rice, C. (2012) An online training resource for clinical supervision. Nursing Standard, 26(24), 35-39.
McCutcheon, K. (2013) Development of a multi-media book for clinical supervision training in an undergraduate nursing programme. Journal of Nursing Education and Practice, 3(5), 31-38.
Abstract:
Hardware designers and engineers typically need to explore a multi-parametric design space in order to find the best configuration for their designs, using simulations that can take weeks to months to complete. For example, designers of special-purpose chips need to explore parameters such as the optimal bitwidth and data representation. This is the case for the development of complex algorithms such as Low-Density Parity-Check (LDPC) decoders used in modern communication systems. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to graphics processing units (GPUs) and FPGAs. Depending on the simulation requirements, the ideal architecture to use can vary. In this paper we propose a new design flow based on OpenCL, a unified multiplatform programming model, which accelerates LDPC decoding simulations, thereby significantly reducing architectural exploration and design time. OpenCL-based parallel kernels are used without modifications or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, for mapping the simulations onto FPGAs. To the best of our knowledge, this is the first time that a single, unmodified OpenCL code has been used to target these three different platforms. We show that, depending on the design parameters to be explored in the simulation and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, providing different acceleration factors. For example, although simulations can typically execute more than 3x faster on FPGAs than on GPUs, the overhead of circuit synthesis often outweighs the benefits of FPGA-accelerated execution.
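As a minimal sketch of the single-source idea (using pyopencl purely for illustration; the paper's LDPC kernels and the SOpenCL flow are not reproduced), the same kernel string can be built and run on whichever OpenCL device the runtime exposes:

# Minimal pyopencl sketch: one unmodified kernel string, built for whatever
# OpenCL device (CPU or GPU) is available. Illustrative only; the paper's
# LDPC decoding kernels are far more involved.
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void clip(__global const float *x, __global float *y) {
    int i = get_global_id(0);              // one work-item per element
    y[i] = fmin(x[i], 1.0f);               // placeholder element-wise op
}
"""

ctx = cl.create_some_context()             # picks an available CPU/GPU device
queue = cl.CommandQueue(ctx)
prg = cl.Program(ctx, KERNEL_SRC).build()  # same source, any platform

x = np.random.rand(1 << 16).astype(np.float32) * 2.0
mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.WRITE_ONLY, x.nbytes)

prg.clip(queue, x.shape, None, x_buf, y_buf)
y = np.empty_like(x)
cl.enqueue_copy(queue, y, y_buf)
print(float(y.max()))                      # <= 1.0 on any backend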
Abstract:
Continuous research endeavors on hard turning (HT), covering both machine tools and cutting tools, have made previously reported daunting limits easily attainable in the modern scenario. This presents an opportunity for a systematic investigation into the currently attainable limits of hard turning using a CNC turret lathe. Accordingly, this study aims to contribute to the existing literature by providing the latest experimental results of hard turning of AISI 4340 steel (69 HRC) using a CBN cutting tool. An orthogonal array was developed using a set of judiciously chosen cutting parameters, and longitudinal turning trials were carried out in accordance with a well-designed full-factorial-based Taguchi matrix. The speculation indeed proved correct, as a mirror-finished, optical-quality machined surface (an average surface roughness of 45 nm) was achieved by this conventional cutting method. Furthermore, signal-to-noise (S/N) ratio analysis, analysis of variance (ANOVA) and multiple regression analysis were carried out on the experimental datasets to assess the dominance of each machining variable in dictating the machined surface roughness and to optimize the machining parameters. One of the key findings was that when the feed rate during hard turning becomes very low (about 0.02 mm/rev), it alone can be the most significant (99.16%) parameter influencing the machined surface roughness (Ra). However, it was also shown that a low feed rate results in high tool wear, so the selection of machining parameters for hard turning must be governed by a trade-off between cost and quality considerations.
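The abstract does not state which S/N formulation was used; the standard Taguchi smaller-the-better signal-to-noise ratio, the usual choice when minimising surface roughness, is

S/N = -10 \log_{10}\left( \frac{1}{n} \sum_{i=1}^{n} y_i^{2} \right)

where the y_i are the measured roughness (Ra) values for a trial and n is the number of repetitions; the parameter level with the highest S/N then gives the lowest expected roughness.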