108 results for set based design


Relevance: 40.00%

Publisher:

Abstract:

Following the UK Medical Research Council’s (MRC) guidelines for the development and evaluation of complex interventions, this study aimed to design, develop and optimise an educational intervention about young men and unintended teenage pregnancy based around an interactive film. The process involved identification of the relevant evidence base, development of a theoretical understanding of the phenomenon of unintended teenage pregnancy in relation to young men, and exploratory mixed-methods research. The result was an evidence-based, theory-informed, user-endorsed intervention designed to meet the much-neglected pregnancy education needs of teenage men and intended to increase both boys’ and girls’ intentions to avoid an unplanned pregnancy during adolescence. In prioritising the development phase, this paper addresses a gap in the literature on the processes of research-informed intervention design. It illustrates the application of the MRC guidelines in practice while offering a critique and additional guidance to programme developers on the MRC-prescribed processes of developing interventions. Key lessons learned were: 1) know and engage the target population and engage gatekeepers in addressing contextual complexities; 2) know the targeted behaviours and model a process of change; and 3) look beyond development to evaluation and implementation.

Relevance: 40.00%

Publisher:

Abstract:

The paper presents IPPro, a high-performance, scalable soft-core processor targeted at image processing applications. It is based on the Xilinx DSP48E1 architecture, uses the Zynq Field Programmable Gate Array, and is a scalar 16-bit RISC processor that operates at 526 MHz, giving 526 MIPS of performance. Each IPPro core uses one DSP48, one Block RAM and 330 Kintex-7 slice registers, making the processor as compact as possible whilst maintaining flexibility and programmability. A key aspect of the approach is reducing application design time and implementation effort by using multiple IPPro processors in a SIMD mode. For different applications, this allows us to exploit different levels of parallelism and mapping for the specified processing architecture with the supported instruction set. In this context, a Traffic Sign Recognition (TSR) algorithm has been prototyped on a Zedboard, with the colour and morphology operations accelerated using multiple IPPros. Simulation and experimental results demonstrate that the processing platform achieves speedups of 15 and 33 times for colour filtering and morphology operations, respectively, with reduced design effort and time.
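
The SIMD mapping of a pixel-wise operation onto several identical scalar cores can be illustrated with a short sketch. Below is a hedged, minimal Python/numpy model (not the IPPro toolchain; erode3x3, simd_map and the stripe/halo sizes are hypothetical) of how a 3x3 morphological erosion could be decomposed into horizontal stripes, each processed by the same scalar kernel.

```python
import numpy as np

def erode3x3(tile):
    """3x3 grey-scale erosion: each interior pixel becomes the minimum
    of its 3x3 neighbourhood; border pixels are passed through."""
    out = tile.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out[1:-1, 1:-1] = np.minimum(
                out[1:-1, 1:-1],
                tile[1 + dy:tile.shape[0] - 1 + dy,
                     1 + dx:tile.shape[1] - 1 + dx])
    return out

def simd_map(image, n_cores=8, halo=1):
    """Split the image into horizontal stripes (one per core), run the
    same scalar kernel on every stripe, and stitch the results.
    Assumes n_cores <= number of image rows."""
    out = np.empty_like(image)
    for rows in np.array_split(np.arange(image.shape[0]), n_cores):
        lo = max(rows[0] - halo, 0)            # overlap so borders are valid
        hi = min(rows[-1] + halo + 1, image.shape[0])
        tile = erode3x3(image[lo:hi])
        out[rows[0]:rows[-1] + 1] = tile[rows[0] - lo:rows[0] - lo + len(rows)]
    return out

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
assert np.array_equal(simd_map(img), erode3x3(img))   # stripes == full image
```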

Relevance: 40.00%

Publisher:

Abstract:

Hardware designers and engineers typically need to explore a multi-parametric design space in order to find the best configuration for their designs, using simulations that can take weeks to months to complete. For example, designers of special-purpose chips need to explore parameters such as the optimal bitwidth and data representation. This is the case for the development of complex algorithms such as the Low-Density Parity-Check (LDPC) decoders used in modern communication systems. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to graphics processing units (GPUs) and FPGAs. Depending on the simulation requirements, the ideal architecture to use can vary. In this paper we propose a new design flow based on OpenCL, a unified multiplatform programming model, which accelerates LDPC decoding simulations, thereby significantly reducing architectural exploration and design time. OpenCL-based parallel kernels are used without modifications or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, to map the simulations onto FPGAs. To the best of our knowledge, this is the first time that a single, unmodified OpenCL code has been used to target these three different platforms. We show that, depending on the design parameters to be explored in the simulation and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, providing different acceleration factors. For example, although simulations can typically execute more than 3x faster on FPGAs than on GPUs, the overhead of circuit synthesis often outweighs the benefits of FPGA-accelerated execution.
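
As a rough illustration of the kind of kernel such simulations execute in parallel, the sketch below implements a generic min-sum check-node update in Python/numpy. This is a textbook formulation under our own assumptions, not the authors' OpenCL kernel.

```python
import numpy as np

def check_node_update(llrs):
    """Min-sum check-node update for one parity check (assumes no
    zero-valued LLRs). For each edge i the outgoing message is the
    product of the signs and the minimum magnitude over all OTHER edges."""
    signs = np.sign(llrs)
    total_sign = np.prod(signs)
    mags = np.abs(llrs)
    idx = np.argsort(mags)
    min1, min2 = mags[idx[0]], mags[idx[1]]   # two smallest magnitudes
    # Edge holding the overall minimum receives the second minimum.
    out = np.where(np.arange(len(llrs)) == idx[0], min2, min1)
    return total_sign * signs * out            # divides out each edge's own sign

msgs = np.array([-1.2, 0.4, 2.3, -0.7])
print(check_node_update(msgs))                 # -> [-0.4  0.7  0.4 -0.4]
```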

Relevance: 40.00%

Publisher:

Abstract:

In this paper, we propose a system-level design approach considering voltage over-scaling (VOS) that achieves error resiliency using unequal error protection of different computation elements, while incurring only minor quality degradation. Depending on user specifications and the severity of process variations/channel noise, the degree of VOS in each block of the system is adaptively tuned to ensure minimum system power while providing "just-the-right" amount of quality and robustness. This is achieved by taking into consideration block-level interactions and ensuring that, under any change of operating conditions, only the "less-crucial" computations, which contribute less to block/system output quality, are affected. The proposed approach applies unequal error protection to various blocks of a system (logic and memory) and spans multiple layers of the design hierarchy (algorithm, architecture and circuit). The design methodology, when applied to a multimedia subsystem, shows large power benefits (up to 69% improvement in power consumption) at reasonable image quality while tolerating errors introduced due to VOS, process variations, and channel noise.
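
A toy numpy experiment (entirely hypothetical values; not the authors' methodology) can illustrate why confining errors to less-crucial computations preserves quality: injecting the same raw error rate into low-order versus high-order bits of an image yields very different PSNR.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128), dtype=np.uint8)

def inject(image, bits, p):
    """Flip each of the given bit positions with probability p,
    mimicking errors from an over-scaled (low-voltage) datapath."""
    out = image.copy()
    for b in bits:
        flips = rng.random(image.shape) < p
        out ^= (flips * (1 << b)).astype(np.uint8)
    return out

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255 ** 2 / mse)

# Same raw error rate, very different output quality:
print("errors in LSBs (less crucial):", psnr(img, inject(img, [0, 1], 0.05)))
print("errors in MSBs (crucial)     :", psnr(img, inject(img, [6, 7], 0.05)))
```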

Relevance: 40.00%

Publisher:

Abstract:

In this paper the tracking system used to perform a scaled vehicle-barrier crash test is reported. The scaled crash test was performed as part of a wider project aimed at designing a new safety barrier making use of natural building materials. The scaled crash test was designed and performed as a proof of concept of the new mass-based safety barriers, and the study was composed of two parts: the scaling technique and a series of scaled crash tests. The scaling method was used for 1) setting the scaled test impact velocity so that the energy dissipation and momentum transfer from the car to the barrier could be reproduced, and 2) predicting the acceleration, velocity and displacement values occurring in the full-scale impact from the results obtained in a scaled test. To achieve this goal, the vehicle and barrier displacements had to be recorded together with the vehicle accelerations and angular velocities. These quantities were measured during the tests using acceleration sensors and a tracking system. The tracking system was composed of a high-speed camera and a set of targets used to measure the vehicle's linear and angular velocities. A code was developed to extract the target velocities from the videos, and the velocities obtained were then compared with those obtained by integrating the accelerations provided by the sensors to check the reliability of the method.
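
The consistency check between the two measurement chains can be sketched as follows. This is a hedged Python illustration with synthetic data (the crash pulse, frame rate, impact speed and pixel scale are all assumed), not the authors' code.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Hypothetical synchronized records (SI units) standing in for test data.
fps = 1000.0                              # high-speed camera frame rate [Hz]
t = np.arange(0.0, 0.2, 1.0 / fps)
v0 = 8.0                                  # assumed impact speed [m/s]
a_true = -120.0 * np.exp(-t / 0.05)       # an assumed crash pulse [m/s^2]

# "Accelerometer" channel: integrate acceleration to obtain velocity.
v_from_acc = v0 + cumulative_trapezoid(a_true, t, initial=0.0)

# "Tracking" channel: target position from the video (here simulated),
# converted from pixels to metres and differentiated to velocity.
mm_per_px = 0.8                           # scale from a calibration target
x_m = cumulative_trapezoid(v_from_acc, t, initial=0.0)
x_px = np.round(x_m * 1000.0 / mm_per_px)            # quantised pixel track
v_from_cam = np.gradient(x_px * mm_per_px / 1000.0, t)

# Consistency check between the two measurement chains.
rms = np.sqrt(np.mean((v_from_cam - v_from_acc) ** 2))
print(f"RMS velocity discrepancy: {rms:.3f} m/s")
```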

Relevance: 40.00%

Publisher:

Abstract:

Continuous research endeavors on hard turning (HT), on both machine tools and cutting tools, have made the previously reported daunting limits easily attainable in the modern scenario. This presents an opportunity for a systematic investigation to find the currently attainable limits of hard turning using a CNC turret lathe. Accordingly, this study aims to contribute to the existing literature by providing the latest experimental results of hard turning of AISI 4340 steel (69 HRC) using a CBN cutting tool. An orthogonal array was developed using a set of judiciously chosen cutting parameters. Subsequently, the longitudinal turning trials were carried out in accordance with a well-designed full-factorial-based Taguchi matrix. The speculation indeed proved correct, as a mirror-finished, optical-quality machined surface (an average surface roughness value of 45 nm) was achieved by the conventional cutting method. Furthermore, signal-to-noise (S/N) ratio analysis, analysis of variance (ANOVA), and multiple regression analysis were carried out on the experimental datasets to assert the dominance of each machining variable in dictating the machined surface roughness and to optimize the machining parameters. One of the key findings was that when the feed rate during hard turning becomes very low (about 0.02 mm/rev), it alone can be the most significant parameter (99.16% contribution) influencing the machined surface roughness (Ra). However, it was also shown that a low feed rate results in high tool wear, so the selection of machining parameters for hard turning must be governed by a trade-off between cost and quality considerations.
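
The S/N ratio used for a smaller-the-better response such as Ra follows the standard Taguchi definition, SN = -10*log10(mean(y^2)). The short sketch below computes it for a few hypothetical orthogonal-array runs (the Ra values and factor labels are made up for illustration).

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi signal-to-noise ratio for a smaller-the-better response
    (e.g., surface roughness Ra): SN = -10 * log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical Ra replicates [um] for four runs of an orthogonal array:
runs = {
    "v1-f1": [0.09, 0.11],
    "v1-f2": [0.32, 0.29],
    "v2-f1": [0.05, 0.06],
    "v2-f2": [0.21, 0.24],
}
for name, ra in runs.items():
    print(f"{name}: SN = {sn_smaller_is_better(ra):6.2f} dB")
# The run with the highest SN ratio gives the most robust (lowest) Ra.
```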

Relevance: 40.00%

Publisher:

Abstract:

Best concrete research paper by a student - Research has shown that the cost of managing structures puts a high strain on the infrastructure budget, with estimates of over 50% of the European construction budget being dedicated to repair and maintenance. If reinforced concrete structures are not suitably designed and adequately maintained, their service life is compromised and the full economic value of the investment is not realised. The issue is more prevalent in coastal structures as a result of combinations of aggressive actions, such as those caused by chlorides, sulphates and cyclic freezing and thawing.

It is common practice nowadays to ensure the durability of reinforced concrete structures by specifying a concrete mix and a nominal cover at the design stage to cater for the exposure environment. In theory, this should produce the performance required to achieve a specified service life. Although the European Standard EN 206-1 specifies variations in the exposure environment, it does not take into account the macro and micro climates surrounding structures, which have a significant influence on their performance and service life. Therefore, in order to construct structures which will perform satisfactorily in different exposure environments, the following two aspects need to be developed: a performance-based specification to supplement EN 206-1, which will outline the expected performance of the structure in a given environment; and a simple yet transferable procedure for assessing the performance of structures in service, termed KPI Theory. This will allow asset managers not only to design structures for the intended service life, but also to take informed maintenance decisions should the performance in service fall short of what was specified. This paper aims to discuss this further.
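
As one illustration of what designing for an intended service life can involve in a chloride exposure environment, a common textbook estimate (not necessarily the procedure proposed in the paper) uses the error-function solution of Fick's second law, C(x,t) = C_s * (1 - erf(x / (2*sqrt(D*t)))). The sketch below applies it with assumed values.

```python
import numpy as np
from scipy.special import erf

def chloride_at_depth(x_mm, t_years, D_mm2_per_year, C_s):
    """Error-function solution of Fick's second law, a common textbook
    estimate of chloride ingress into saturated concrete."""
    return C_s * (1.0 - erf(x_mm / (2.0 * np.sqrt(D_mm2_per_year * t_years))))

# Illustrative (assumed) values: 50 mm cover, surface chloride 0.4% by
# mass of concrete, apparent diffusion coefficient 30 mm^2/year.
for t in (10, 25, 50, 100):
    c = chloride_at_depth(50.0, t, 30.0, 0.4)
    print(f"t = {t:3d} years: chloride at cover depth = {c:.3f} %")
# Corrosion risk rises once this exceeds the reinforcement's threshold.
```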

Relevance: 40.00%

Publisher:

Abstract:

This paper presents a surrogate-model-based optimization of a doubly-fed induction generator (DFIG) machine winding design for maximizing power yield. Based on site-specific wind profile data and the machine’s previous operational performance, the DFIG’s stator and rotor windings are optimized for rewinding purposes so that maximum efficiency coincides with the actual operating conditions. Particle swarm optimization (PSO)-based surrogate optimization techniques are used in conjunction with the finite element method (FEM) to optimize the machine design, utilizing the limited available information on the site-specific wind profile and generator operating conditions. A response surface method is developed within the surrogate model to formulate the design objectives and constraints. In addition, the machine tests and efficiency calculations follow IEEE Standard 112-B. Numerical and experimental results validate the effectiveness of the proposed techniques.
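
A minimal sketch of PSO-driven surrogate optimization is shown below, assuming a simple quadratic response surface in place of the FEM-derived objective; the swarm parameters, bounds and coefficients are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def surrogate_loss(x):
    """Quadratic response surface standing in for the FEM-based objective
    (e.g., negative efficiency vs. two normalised winding variables)."""
    return (x[..., 0] - 0.3) ** 2 + 2.0 * (x[..., 1] + 0.1) ** 2

# Minimal global-best particle swarm optimiser.
n, iters, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (n, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = surrogate_loss(pbest)
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -1, 1)              # respect design bounds
    f = surrogate_loss(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("optimum (expected ~[0.3, -0.1]):", gbest)
```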

Relevance: 40.00%

Publisher:

Abstract:

Heat sinks are widely used for cooling electronic devices and systems. Their thermal performance is usually determined by the material, shape, and size of the heat sink. With the assistance of computational fluid dynamics (CFD) and surrogate-based optimization, heat sinks can be designed and optimized to achieve a high level of performance. In this paper, the design and optimization of a plate-fin heat sink cooled by an impingement jet is presented. The flow and thermal fields are simulated using CFD, and the thermal resistance of the heat sink is then estimated. A Kriging surrogate model is developed to approximate the objective function (thermal resistance) as a function of the design variables. Surrogate-based optimization is implemented by adaptively adding infill points based on an integrated strategy combining the minimum-value, maximum mean square error, and expected improvement approaches. The results show the influence of the design variables on the thermal resistance and yield the optimal heat sink with the lowest thermal resistance for the given jet impingement conditions.
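
A hedged sketch of the Kriging-plus-infill loop is given below, using scikit-learn's GaussianProcessRegressor and the standard expected-improvement criterion; the 1-D toy objective thermal_resistance stands in for the CFD evaluation and is purely illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def thermal_resistance(x):
    """Toy 1-D objective standing in for the CFD-evaluated thermal
    resistance as a function of one normalised fin dimension."""
    return 0.3 + 0.1 * np.sin(6 * x) + (x - 0.6) ** 2

X = np.array([[0.05], [0.35], [0.65], [0.95]])      # initial design points
y = thermal_resistance(X.ravel())
grid = np.linspace(0, 1, 401).reshape(-1, 1)

for _ in range(10):                                  # adaptive infill loop
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-8,
                                  normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_new = grid[np.argmax(ei)]
    X = np.vstack([X, x_new])
    y = np.append(y, thermal_resistance(x_new[0]))

print("best design:", X[np.argmin(y)].ravel(), "R_th:", y.min())
```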

Relevance: 40.00%

Publisher:

Abstract:

The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error-correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). The exploitation of diverse target architectures is typically associated with developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs, and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, in order to introduce FPGAs as a potential platform for efficiently executing simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes ranging from short/medium-length (e.g., 8,000-bit) to long (e.g., 64,800-bit) DVB-S2 codes. We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, thus providing different acceleration factors over conventional multicore CPUs.
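
The Monte Carlo workload being retargeted can be sketched in a few lines: decode many noisy received words per design point and count residual errors. The Python sketch below uses a toy parity-check matrix and a textbook hard-decision bit-flipping decoder, purely as an assumed stand-in for the authors' LDPC kernels.

```python
import numpy as np

rng = np.random.default_rng(2)
# Tiny parity-check matrix of a toy code; real runs would use, e.g.,
# DVB-S2 matrices with tens of thousands of columns.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def bit_flip_decode(r, H, max_iter=20):
    """Gallager hard-decision bit flipping: repeatedly flip the bits
    involved in the largest number of failed parity checks."""
    c = r.copy()
    for _ in range(max_iter):
        syndrome = H @ c % 2
        if not syndrome.any():
            break
        counts = syndrome @ H                 # failed checks per bit
        c[counts == counts.max()] ^= 1
    return c

def monte_carlo_ber(p_flip, trials=20000):
    """One BER point: transmit the all-zero codeword over a BSC(p)."""
    errors = 0
    for _ in range(trials):
        r = (rng.random(H.shape[1]) < p_flip).astype(np.uint8)
        errors += int(bit_flip_decode(r, H).sum())
    return errors / (trials * H.shape[1])

for p in (0.02, 0.05, 0.1):
    print(f"p = {p}: BER after decoding ~ {monte_carlo_ber(p):.4g}")
```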

Relevance: 40.00%

Publisher:

Abstract:

Despite the popularity of the Theory of Planned Behaviour (TPB), little research has assessed the efficacy of the model in understanding the health behaviour of children, and the studies that have been conducted report problems with questionnaire formulation and low-to-moderate internal consistencies for TPB constructs. The aim of this study was to develop and test a TPB-based measure suitable for use with primary school children aged 9 to 10 years. A mixed-method sequential design was employed. In Stage 1, seven semi-structured focus group discussions (N=56) were conducted to elicit the underlying beliefs specific to tooth brushing. Using thematic content analysis, the beliefs were identified and a TPB measure was developed. A repeated-measures design was employed in Stage 2, using test-retest reliability analysis to assess its psychometric properties. In all, 184 children completed the questionnaire. Test-retest reliabilities support the validity and reliability of the TPB measure for assessing the tooth brushing beliefs of children. Pearson’s product-moment correlations were calculated for all of the TPB beliefs, achieving substantial to almost perfect agreement levels. Specifically, a significant relationship between all 10 of the direct and indirect TPB constructs was achieved at the 0.01 level. This paper discusses the design and development of the measure so that it can serve as a guide to fellow researchers and health psychologists interested in using theoretical models to investigate the health and well-being of children.
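
The test-retest analysis described can be sketched as follows, with synthetic scores standing in for the children's responses (the construct, scale and noise level are assumed); Pearson's product-moment correlation is computed with scipy.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n = 184                                   # children completing both rounds

# Synthetic stand-in scores for one TPB construct (e.g., attitude),
# measured at test and retest on a 5-point scale.
test = rng.integers(1, 6, n).astype(float)
retest = np.clip(test + rng.normal(0, 0.6, n), 1, 5)

r, p = pearsonr(test, retest)
print(f"test-retest reliability: r = {r:.2f}, p = {p:.3g}")
# By the commonly cited Landis-and-Koch-style benchmarks, 0.61-0.80 is
# 'substantial' and 0.81-1.00 'almost perfect' agreement.
```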

Relevance: 40.00%

Publisher:

Abstract:

The design, fabrication, and measured results are presented for a reconfigurable reflectarray antenna based on liquid crystals (LCs) which operates above 100 GHz. The antenna has been designed to provide beam-scanning capabilities over a wide angular range, a large bandwidth, and a reduced side-lobe level (SLL). Measured radiation patterns are in good agreement with simulations, and show that the antenna generates an electronically steerable beam in one plane over an angular range of 55° in the frequency band from 96 to 104 GHz. The SLL is lower than −13 dB for all scan angles, and −18 dB is obtained over 16% of the scan range. The measured performance is significantly better than previously published results for this class of electronically tunable antenna and, moreover, verifies the accuracy of the proposed procedure for LC modelling and antenna design.
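
Extracting the side-lobe level from a pattern cut is a small computation that can be sketched as follows; the pattern here is an idealised synthetic array factor (an assumption, not the measured data), and the roughly −13 dB figure it reproduces is the classic uniform-aperture value.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic stand-in for a measured cut of the radiation pattern [dB].
theta = np.linspace(-90, 90, 721)
u = np.sinc(np.sin(np.radians(theta)) * 8)        # idealised array factor
pattern_db = 20 * np.log10(np.abs(u) + 1e-9)      # avoid log(0) at nulls
pattern_db -= pattern_db.max()                    # normalise peak to 0 dB

peaks, _ = find_peaks(pattern_db)                 # all local maxima (lobes)
main = np.argmax(pattern_db)                      # main-lobe sample index
side_lobes = pattern_db[peaks[peaks != main]]
print(f"SLL = {side_lobes.max():.1f} dB")         # sinc pattern: about -13 dB
```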

Relevance: 40.00%

Publisher:

Abstract:

Objective: To establish an international patient-reported outcome measures (PROMs) study among prostate cancer survivors, up to 18 years postdiagnosis, in two countries with different healthcare systems and ethical frameworks. Design: A cross-sectional, postal survey of prostate cancer survivors sampled and recruited via two population-based cancer registries. Healthcare professionals (HCPs) evaluated patients for eligibility to participate. Questionnaires contained validated instruments to assess health-related quality of life and psychological well-being, including the QLQ-C30, QLQ-PR25, EQ-5D-5L, 21-question Depression, Anxiety and Stress Scale (DASS-21) and the Decisional Regret Scale. Setting: Republic of Ireland (RoI) and Northern Ireland (NI). Primary outcome measures: Registration completeness, predictors of eligibility and response, data missingness, and unweighted and weighted PROMs. Results: Prostate cancer registration was 80% (95% CI 75% to 84%) and 91% (95% CI 89% to 93%) complete 2 years postdiagnosis in NI and RoI, respectively. Of 12 322 survivors sampled from registries, 53% (n=6559) were classified as eligible following HCP screening. In the multivariate analysis, significant predictors of eligibility were: being ≤59 years of age at diagnosis (p<0.001), being a short-term survivor (<5 years postdiagnosis; p<0.001) and being from RoI (p<0.001). In total, 3348 survivors completed the questionnaire, yielding a 54% adjusted response rate. 13% of men or their families called the study freephone line with queries, for assistance with questionnaire completion or to talk about their experience. Significant predictors of response in the multivariate analysis were: being ≤59 years at diagnosis (p<0.001) and being from RoI (p=0.016). The mean number of missing questions in the validated instruments ranged from 0.12 (SD 0.71; EQ-5D-5L) to 3.72 (SD 6.30; QLQ-PR25). Weighted and unweighted mean EQ-5D-5L, QLQ-C30 and QLQ-PR25 scores were similar, as were the weighted and unweighted prevalences of depression, anxiety and distress. Conclusions: It was feasible to perform PROMs studies across jurisdictions, using cancer registries as sampling frames, and we amassed one of the largest international, population-based datasets of prostate cancer survivors. We highlight improvements which could inform future PROMs studies, including utilising general practitioners to assess eligibility and providing a freephone service.

Relevance: 40.00%

Publisher:

Abstract:

In this paper, a multiloop robust control strategy is proposed based on H∞ control and a partial least squares (PLS) model (H∞_PLS) for multivariable chemical processes. It is developed especially for multivariable systems in ill-conditioned plants and for non-square systems. The advantage of PLS is that it extracts the strongest relationship between the input and the output variables in the reduced space of the latent-variable model rather than in the original space of the highly dimensional variables. Without conventional decouplers, the dynamic PLS framework automatically decomposes the MIMO process into multiple single-loop systems in the PLS subspace, so the controller design can be simplified. Since plant/model mismatch is almost inevitable in practical applications, the controllers are designed in the PLS latent subspace based on the H∞ mixed-sensitivity problem in order to enhance the robustness of the control system. The feasibility and effectiveness of the proposed approach are illustrated by simulation results for a distillation column and a mixing-tank process. Comparisons between H∞_PLS control and conventional individual control (either H∞ control or PLS control only) are also made.
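
The decomposition idea can be sketched with a static toy example using scikit-learn's PLSRegression (an assumed stand-in for the paper's dynamic PLS model, and without the H∞ synthesis step): projecting coupled plant data onto latent variables yields score pairs that are nearly decoupled and can each be treated as a SISO loop.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)

# Toy, strongly coupled 2x2 static plant standing in for an
# ill-conditioned process: y = G u + noise.
G = np.array([[1.0, 0.9],
              [0.9, 1.0]])            # nearly singular -> ill-conditioned
U = rng.normal(size=(500, 2))
Y = U @ G.T + 0.01 * rng.normal(size=(500, 2))

# PLS extracts latent input/output directions ordered by covariance.
pls = PLSRegression(n_components=2).fit(U, Y)
T = pls.x_scores_                     # latent inputs (one per "loop")
S = pls.y_scores_                     # latent outputs

# In the latent space the loops are (nearly) decoupled: cross-correlations
# between score pairs are small, so each t_i -> s_i channel can be given
# its own SISO (here: H-infinity) controller.
corr = np.corrcoef(T.T, S.T)[:2, 2:]
print(np.round(corr, 3))              # near-diagonal correlation matrix
```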