94 results for Basic problematization units
Abstract:
This is the editorial opening paper of a special issue of the International Journal of Training and Development focusing on basic skills and employability.
Abstract:
This paper investigates sub-integer implementations of the adaptive Gaussian mixture model (GMM) for background/foreground segmentation, to allow the deployment of the method on low-cost/low-power processors that lack a floating-point unit (FPU). We propose two novel integer computer arithmetic techniques to update the Gaussian parameters. Specifically, the mean and variance of each Gaussian are updated by a redefined and generalised "round" operation that emulates the original updating rules for a large set of learning rates. Weights are represented by counters that are updated following stochastic rules to allow a wider range of learning rates, and the weight trend is approximated by a line or a staircase. We demonstrate that the memory footprint and computational cost of the GMM are significantly reduced, without noticeably affecting the performance of background/foreground segmentation.
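As a rough illustration of the flavour of such integer updates (a minimal sketch under assumed conventions, not the paper's actual generalised "round" operation or counter rules), consider a power-of-two learning rate alpha = 2**-k: the floating-point mean update mu += alpha * (x - mu) can then be emulated with shifts, and the real-valued weight replaced by a stochastically updated counter.

```python
import random

def update_mean_int(mu: int, x: int, k: int) -> int:
    """Emulate mu += alpha * (x - mu) with alpha = 2**-k (k >= 1)
    using only integer operations: adding half of the divisor before
    the arithmetic shift implements round-to-nearest."""
    diff = x - mu
    half = 1 << (k - 1)
    if diff >= 0:
        step = (diff + half) >> k
    else:
        step = -((-diff + half) >> k)
    return mu + step

def update_weight_counter(counter: int, matched: bool, alpha: float) -> int:
    """Hypothetical stand-in for the real-valued weight: with
    probability alpha the counter moves one step toward its target,
    so its expected trend approximates the exponential weight update
    of the original GMM."""
    if random.random() < alpha:
        counter += 1 if matched else -1
    return max(counter, 0)
```

For example, with k = 4 (alpha = 1/16), update_mean_int(100, 132, 4) returns 102, matching the exact floating-point update 100 + 32/16.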
Abstract:
In this paper, we explore various arithmetic units for possible use in high-speed, high-yield ALUs operated at a scaled supply voltage with adaptive clock stretching. We demonstrate that careful logic optimization of existing arithmetic units (to create hybrid units) makes them more amenable to supply-voltage scaling. Such hybrid units result from mixing the right amount of fast arithmetic into slower units. Simulations of different hybrid adders and multipliers in BPTM 70 nm technology show 18%-50% improvements in power compared to standard adders, with only a 2%-8% increase in die area at iso-yield. These optimized datapath units can be used to construct voltage-scalable, robust ALUs that operate at a high clock frequency with minimal performance degradation due to occasional clock stretching.
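To make the trade-off concrete, here is a deliberately crude first-order model (all constants are illustrative assumptions, not figures from the paper): replacing a few slow ripple stages with fast carry-lookahead logic shortens the critical path at a modest area cost, and that timing slack is what can be traded for a lower supply voltage.

```python
RIPPLE_DELAY_PER_BIT = 1.0   # slow, small, low-power carry stage (arbitrary units)
CLA_DELAY_PER_BIT    = 0.4   # fast carry-lookahead stage (arbitrary units)
RIPPLE_AREA_PER_BIT  = 1.0
CLA_AREA_PER_BIT     = 1.6   # fast logic costs extra area

def hybrid_adder(width, fast_bits):
    """Worst-case carry-path delay and total area of an adder in which
    `fast_bits` stages use carry-lookahead while the rest ripple."""
    slow_bits = width - fast_bits
    delay = fast_bits * CLA_DELAY_PER_BIT + slow_bits * RIPPLE_DELAY_PER_BIT
    area = fast_bits * CLA_AREA_PER_BIT + slow_bits * RIPPLE_AREA_PER_BIT
    return delay, area

# Sweeping the amount of fast logic exposes the delay/area trade-off:
for fb in (0, 8, 16, 32):
    delay, area = hybrid_adder(32, fb)
    print(f"fast_bits={fb:2d}  delay={delay:5.1f}  area={area:5.1f}")
```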
Abstract:
In this paper we propose a design methodology for low-power, high-performance, process-variation-tolerant arithmetic units. The novelty of our approach lies in the fact that possible delay failures due to process variations and/or voltage scaling are predicted in advance and addressed by employing an elastic clocking technique. The prediction mechanism exploits the dependence of the delay of arithmetic units on input data patterns and identifies the specific inputs that activate the critical path. Under iso-yield conditions, the proposed design operates at a lower, scaled-down Vdd without any performance degradation, while it ensures superior yield under a design style employing nominal supply and transistor threshold voltages. Simulation results show power savings of up to 29%, energy-per-computation savings of up to 25.5% and yield enhancement of up to 11.1% compared to conventional adders and multipliers implemented in the 70 nm BPTM technology. We incorporated the proposed modules into the execution unit of a five-stage DLX pipeline to measure performance using SPEC2000 benchmarks [9]. The maximum area and throughput penalties were 10% and 3%, respectively.
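A minimal sketch of this kind of input-dependent delay prediction (the ripple-carry model, adder width and threshold below are assumptions for illustration, not the paper's predictor): in a ripple-carry adder the slow operand pairs are those with long runs of carry-propagate bits, so a cheap pre-check of the operands can decide whether the upcoming operation needs a stretched clock cycle.

```python
def longest_carry_chain(a: int, b: int, width: int = 32) -> int:
    """Length of the longest run of carry-propagate bits (a XOR b).
    A ripple-carry adder's latency grows with this value, so it
    serves as a cheap, conservative delay predictor."""
    p = (a ^ b) & ((1 << width) - 1)
    longest = run = 0
    for i in range(width):
        run = run + 1 if (p >> i) & 1 else 0
        longest = max(longest, run)
    return longest

def needs_clock_stretch(a: int, b: int, threshold: int = 24) -> bool:
    """Elastic-clocking decision: stretch the clock only for the rare
    operand pairs predicted to activate the critical path, letting the
    common case run at a scaled-down supply voltage."""
    return longest_carry_chain(a, b) >= threshold
```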
Abstract:
Background
The power of the randomised controlled trial depends upon its capacity to operate in a closed system, whereby the intervention is the only causal force acting upon the experimental group and is absent in the control group, permitting a valid assessment of intervention efficacy. Conversely, clinical arenas are open systems in which factors relating to context, resources, and the interpretation and actions of individuals affect the implementation and effectiveness of interventions. Consequently, the comparator (usual care) can be difficult to define and variable across centres in multi-centre trials. Hence, outcomes cannot be understood without considering usual care and the factors that may affect the implementation and impact of the intervention.
Methods
Using a fieldwork approach, we describe the PICU context, 'usual' practice in sedation and weaning from mechanical ventilation, and factors affecting implementation, prior to designing a trial involving a sedation and ventilation-weaning intervention. We collected data from 23 UK PICUs between June and November 2014 using observation and individual and multi-disciplinary group interviews with staff.
Results
Pain and sedation practices were broadly similar in terms of drug usage and assessment tools. Sedation protocols linking assessment to the appropriate titration of sedatives, and sedation holds, were rarely used (9% and 4% of PICUs respectively). Ventilator weaning was primarily a medically led process, with 39% of PICUs engaging senior nurses in the process; weaning protocols were rarely used (9% of PICUs). Weaning methods varied according to clinician preference. No formal criteria or spontaneous breathing trials were used to test readiness to wean. Seventeen PICUs (74%) had prior engagement in multi-centre trials, but research nurse availability was limited. Barriers to previous trial implementation were intervention complexity, lack of belief in the evidence and inadequate training. Facilitating factors were senior staff buy-in and dedicated research nurse provision.
Conclusions
We examined and identified contextual and organisational factors that may impact the implementation of our intervention. We found usual practice relating to sedation, analgesia and ventilator weaning to be broadly similar across sites, yet distinctly different from our proposed intervention, providing assurance of our ability to evaluate intervention effects. The data will enable us to develop an implementation plan; by considering these factors, we can more fully understand their impact on study outcomes.
Sedation and weaning practices in paediatric intensive care units (PICUs) in the United Kingdom (UK)
Abstract:
We show that if $E$ is an atomic Banach lattice with an order-continuous norm, $A, B \in L^r(E)$ and $M_{A,B}$ is the operator on $L^r(E)$ defined by $M_{A,B}(T) = ATB$, then $\|M_{A,B}\|_r = \|A\|_r \|B\|_r$, but that there is no real $\alpha > 0$ such that $\|M_{A,B}\| \geq \alpha \|A\|_r \|B\|_r$.
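(A standard fact that makes the two halves of this statement compatible: the operator norm of a regular operator is dominated by its regular norm, $\|T\| \leq \|T\|_r$, so equality of the regular norms places no lower bound on the plain operator norm $\|M_{A,B}\|$.)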
Abstract:
Energy efficiency is an essential requirement for all contemporary computing systems. We thus need tools to measure the energy consumption of computing systems and to understand how workloads affect it. Significant recent research effort has targeted direct power measurement on production computing systems using on-board sensors or external instruments. These direct methods have in turn guided studies of software techniques that reduce energy consumption via workload allocation and scaling. Unfortunately, direct energy measurements are hampered by the low sampling frequency of power sensors. This coarse granularity of power sensing limits our understanding of how power is allocated within a system and our ability to optimize energy efficiency via workload allocation.
We present ALEA, a tool to measure power and energy consumption at the granularity of basic blocks, using a probabilistic approach. ALEA provides fine-grained energy profiling via statistical sampling, which overcomes the limitations of power-sensing instruments. Compared to state-of-the-art energy measurement tools, ALEA provides finer granularity without sacrificing accuracy. ALEA achieves low-overhead energy measurements with mean error rates between 1.4% and 3.5% in 14 sequential and parallel benchmarks tested on both Intel and ARM platforms. The sampling method caps execution-time overhead at approximately 1%, making ALEA suitable for online energy monitoring and optimization. Finally, ALEA is a user-space tool with a portable, machine-independent sampling method. We demonstrate two use cases of ALEA, in which we reduce the energy consumption of a k-means computational kernel by 37% and of an ocean modelling code by 33%, compared to high-performance execution baselines, by varying the power optimization strategy between basic blocks.
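A highly simplified sketch of the statistical-sampling idea (not ALEA's actual implementation; the sampler, block identifiers and power source below are hypothetical stubs): at random intervals, record the currently executing basic block together with an instantaneous power reading, then attribute energy to each block in proportion to the power-weighted time it was observed.

```python
import random
from collections import defaultdict

def profile_energy(current_block, read_power, duration_s, mean_period_s=0.001):
    """Probabilistic energy attribution at basic-block granularity.

    current_block() -> identifier of the basic block executing right
    now (stub for a real sampler that inspects the program counter);
    read_power() -> instantaneous power draw in watts. Sampling at
    random, exponentially distributed intervals keeps the estimate
    unbiased with respect to periodic program behaviour.
    """
    energy = defaultdict(float)
    t = 0.0
    while t < duration_s:
        dt = random.expovariate(1.0 / mean_period_s)  # random gap
        t += dt
        block = current_block()
        energy[block] += read_power() * dt   # energy = power * time
    return dict(energy)
```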