43 results for D-optimal design


Relevance:

40.00%

Abstract:

In this paper, we propose, for the first time, an analytical model for short-channel effects in nanoscale source/drain extension region engineered double-gate (DG) SOI MOSFETs. The impact of (i) lateral source/drain doping gradient (d), (ii) spacer width (s), (iii) spacer-to-doping-gradient ratio (s/d) and (iv) silicon film thickness (T-si) on short-channel effects - threshold voltage (V-th) and subthreshold slope (S) - as well as on on-current (I-on), off-current (I-off) and the I-on/I-off ratio is extensively analysed using the analytical model and 2D device simulations. The results of the analytical model agree well with simulated data over the entire range of spacer widths, doping gradients and effective channel lengths. Results show that the lateral source/drain doping gradient, together with the spacer width, not only effectively controls short-channel effects, thus giving low off-current, but can also be optimised to achieve high on-current. The present work provides valuable design insights into the performance of nanoscale DG SOI devices with optimal source/drain engineering and serves as a tool for optimising important device and technological parameters at the 65 nm technology node and below.
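
As an illustration of the design-space exploration described above, the following Python sketch sweeps spacer width and doping gradient and picks the point with the best Ion/Ioff ratio. The device model here is a placeholder with qualitative trends only, and the sweep ranges are assumptions; it is not the paper's analytical model:

    import numpy as np

    def device_metrics(s_nm, d_nm_per_dec):
        # placeholder trends only (not the paper's model): wider spacers
        # suppress off-current but also cost some on-current
        i_off = 1e-9 * np.exp(-s_nm / 10.0) * np.exp(d_nm_per_dec / 5.0)
        i_on = 1e-4 * np.exp(-s_nm / 50.0)
        return i_on, i_off

    spacers = np.linspace(5, 30, 6)        # nm, assumed sweep range
    gradients = np.linspace(1, 10, 10)     # nm/decade, assumed sweep range
    best = max(((s, d) for s in spacers for d in gradients),
               key=lambda p: device_metrics(*p)[0] / device_metrics(*p)[1])
    i_on, i_off = device_metrics(*best)
    print(f"best (s, d) = {best}, Ion/Ioff = {i_on / i_off:.2e}")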

Relevance:

40.00%

Abstract:

OBJECTIVE
To assess the relationship between glycemic control, pre-eclampsia, and gestational hypertension in women with type 1 diabetes.

RESEARCH DESIGN AND METHODS
Pregnancy outcome (pre-eclampsia or gestational hypertension) was assessed prospectively in 749 women from the randomized controlled Diabetes and Pre-eclampsia Intervention Trial (DAPIT). HbA1c (A1C) values were available up to 6 months before pregnancy (n = 542), at the first antenatal visit (median 9 weeks) (n = 721), at 26 weeks’ gestation (n = 592), and at 34 weeks’ gestation (n = 519) and were categorized as optimal (<6.1%; referent), good (6.1–6.9%), moderate (7.0–7.9%), and poor (≥8.0%) glycemic control, respectively.

RESULTS
Pre-eclampsia and gestational hypertension developed in 17% and 11% of pregnancies, respectively. Women who developed pre-eclampsia had significantly higher A1C values before and during pregnancy compared with women who did not develop pre-eclampsia (P < 0.05, respectively). In early pregnancy, A1C ≥8.0% was associated with a significantly increased risk of pre-eclampsia (odds ratio 3.68 [95% CI 1.17–11.6]) compared with optimal control. At 26 weeks’ gestation, A1C values ≥6.1% (good: 2.09 [1.03–4.21]; moderate: 3.20 [1.47–7.00]; and poor: 3.81 [1.30–11.1]) and at 34 weeks’ gestation A1C values ≥7.0% (moderate: 3.27 [1.31–8.20] and poor: 8.01 [2.04–31.5]) significantly increased the risk of pre-eclampsia compared with optimal control. The adjusted odds ratios for pre-eclampsia for each 1% decrement in A1C before pregnancy, at the first antenatal visit, at 26 weeks’ gestation, and at 34 weeks’ gestation were 0.88 (0.75–1.03), 0.75 (0.64–0.88), 0.57 (0.42–0.78), and 0.47 (0.31–0.70), respectively. Glycemic control was not significantly associated with gestational hypertension.

CONCLUSIONS
Women who developed pre-eclampsia had significantly higher A1C values before and during pregnancy. These data suggest that optimal glycemic control both early and throughout pregnancy may reduce the risk of pre-eclampsia in women with type 1 diabetes.
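
The adjusted odds ratios per 1% decrement quoted above come from logistic regression. A toy Python sketch on synthetic data (cohort size roughly matching DAPIT's; the effect size is an assumption, not the trial's) showing how such an odds ratio is obtained:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 750                                    # about the DAPIT cohort size
    a1c = rng.normal(6.8, 1.0, n)              # synthetic A1C values (%)
    p = 1 / (1 + np.exp(-(-8.0 + 0.9 * a1c)))  # assumed true log-odds model
    y = (rng.random(n) < p).astype(int)        # pre-eclampsia indicator

    fit = sm.Logit(y, sm.add_constant(a1c)).fit(disp=False)
    beta = fit.params[1]                       # log-odds per 1% A1C increase
    print("OR per 1% A1C decrement:", np.exp(-beta))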

Relevance:

40.00%

Abstract:

2'-Beta-D-arabinouridine (AraU), the uridine analogue of the anticancer agent AraC, was synthesized and evaluated for antiviral activity and cytotoxicity. In addition, a series of AraU monophosphate prodrugs in the form of triester phosphoramidates (ProTides) were also synthesized and tested against a range of viruses, leukaemia and solid tumour cell lines. Unfortunately, neither the parent compound (AraU) nor any of its ProTides showed antiviral activity or potent inhibitory activity against any of the cancer cell lines. Therefore, the metabolism of AraU phosphoramidates to release AraU monophosphate was investigated. The results showed that carboxypeptidase Y, hog liver esterase and crude CEM tumour cell extracts hydrolyse the ester motif of the phosphoramidates, with subsequent loss of the aryl group, while molecular modelling studies suggested that the AraU L-alanine aminoacyl phosphate derivative might not be a good substrate for the phosphoramidase enzyme Hint-1. These findings are in agreement with the observed disappearance of intact prodrug and the concomitant appearance of the corresponding phosphoramidate intermediate in CEM cell extracts without measurable formation of AraU monophosphate, and they may explain the poor antiviral/cytostatic potential of the prodrugs.

Relevance:

40.00%

Abstract:

In essence, optimal software engineering means creating the right product, through the right process, to the overall satisfaction of everyone involved. Adopting the agile approach to software development appears to have helped many companies make substantial progress towards that goal. The purpose of this paper is to clarify that contribution from comparative survey information gathered in 2010 and 2012. The surveys were undertaken in software development companies across Northern Ireland. The paper describes the design of the surveys and discusses optimality in relation to the results obtained. Both surveys aimed to achieve comprehensive coverage of a single region rather than rely on a voluntary sample. The main outcome from the work is a collection of insights into the nature and advantages of agile development, suggesting how further progress towards optimality might be achieved.

Relevance:

40.00%

Abstract:

In this paper, we propose a novel finite impulse response (FIR) filter design methodology that reduces the number of operations, with the motivation of reducing power consumption and enhancing performance. The novelty of our approach lies in generating filter coefficients that conform to a given low-power architecture while meeting the given filter specifications. The proposed algorithm is formulated as a mixed integer linear programming problem that minimizes the Chebyshev error and synthesizes coefficients drawn from pre-specified alphabets. The modified coefficients can be used for low-power VLSI implementation of vector scaling operations, such as FIR filtering, using a computation sharing multiplier (CSHM). Simulations in 0.25 µm technology show that the CSHM FIR filter architecture can result in 55% power and 34% speed improvement compared to carry-save multiplier (CSAM) based filters.
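
A minimal sketch of this kind of formulation in Python with PuLP: minimize the minimax (Chebyshev) error of a linear-phase lowpass filter on a frequency grid while forcing each coefficient to be a signed sum of terms from a pre-specified alphabet. The filter specification, alphabet and coefficient structure below are illustrative assumptions, not the paper's exact formulation:

    import math
    import pulp

    N = 11                                # filter length (odd, symmetric taps)
    M = (N - 1) // 2                      # free cosine-basis coefficients
    alphabet = [1/2, 1/4, 1/8, 1/16]      # assumed pre-specified alphabet
    wc = 0.4 * math.pi                    # assumed lowpass cutoff
    grid = [i * math.pi / 128 for i in range(129)]
    band = [(w, 1.0 if w <= wc else 0.0) for w in grid
            if abs(w - wc) > 0.1 * math.pi]   # skip the transition band

    prob = pulp.LpProblem("fir_milp", pulp.LpMinimize)
    delta = pulp.LpVariable("delta", lowBound=0)     # Chebyshev error
    # sp/sm select +alphabet[j] / -alphabet[j] terms for coefficient k
    idx = (range(M + 1), range(len(alphabet)))
    sp = pulp.LpVariable.dicts("sp", idx, cat="Binary")
    sm_ = pulp.LpVariable.dicts("sm", idx, cat="Binary")

    def coeff(k):
        return pulp.lpSum((sp[k][j] - sm_[k][j]) * alphabet[j]
                          for j in range(len(alphabet)))

    for w, d in band:
        # amplitude of a symmetric (type I) FIR: c0 + 2*sum(ck*cos(kw))
        amp = pulp.lpSum(coeff(k) * math.cos(k * w) * (1 if k == 0 else 2)
                         for k in range(M + 1))
        prob += amp - d <= delta
        prob += d - amp <= delta
    prob += delta                         # objective: minimax error
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("minimax error:", pulp.value(delta))
    print("half-filter coefficients:",
          [pulp.value(coeff(k)) for k in range(M + 1)])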

Relevance:

40.00%

Abstract:

Hardware designers and engineers typically need to explore a multi-parametric design space in order to find the best configuration for their designs, using simulations that can take weeks to months to complete. For example, designers of special-purpose chips need to explore parameters such as the optimal bitwidth and data representation. This is the case for the development of complex algorithms such as the Low-Density Parity-Check (LDPC) decoders used in modern communication systems. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to graphics processing units (GPUs) and FPGAs. Depending on the simulation requirements, the ideal architecture to use can vary. In this paper, we propose a new design flow based on OpenCL, a unified multiplatform programming model, which accelerates LDPC decoding simulations, thereby significantly reducing architectural exploration and design time. OpenCL-based parallel kernels are used without modifications or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, for mapping the simulations onto FPGAs. To the best of our knowledge, this is the first time that a single, unmodified OpenCL code is used to target these three different platforms. We show that, depending on the design parameters to be explored in the simulation and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, providing different acceleration factors. For example, although simulations can typically execute more than 3x faster on FPGAs than on GPUs, the overhead of circuit synthesis often outweighs the benefits of FPGA-accelerated execution.
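
The single-source idea can be illustrated with a toy kernel (hypothetical, not the paper's LDPC code) run through pyopencl: the same unmodified OpenCL source executes on whatever CPU or GPU device is available, while FPGA mapping via SOpenCL is outside this sketch:

    import numpy as np
    import pyopencl as cl

    KERNEL = """
    __kernel void saxpy(const float a,
                        __global const float *x,
                        __global float *y) {
        int i = get_global_id(0);
        y[i] = a * x[i] + y[i];
    }
    """

    ctx = cl.create_some_context()           # picks an available CPU/GPU device
    queue = cl.CommandQueue(ctx)
    prog = cl.Program(ctx, KERNEL).build()   # same source for every device

    x = np.arange(16, dtype=np.float32)
    y = np.ones(16, dtype=np.float32)
    mf = cl.mem_flags
    x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
    y_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=y)

    prog.saxpy(queue, x.shape, None, np.float32(2.0), x_buf, y_buf)
    cl.enqueue_copy(queue, y, y_buf)
    print(y)                                 # 2*x + 1, device-independent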

Relevance:

40.00%

Abstract:

The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special purpose VLSI implementations often need to explore parameters, such as optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). The exploitation of diverse target architectures is typically associated with developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs, and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL in order to introduce FPGAs as a potential platform to efficiently execute simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes that range from short/medium (e.g., 8,000 bit) to long length (e.g., 64,800 bit) DVB-S2 codes. We observe that, depending on the design parameters to be simulated, on the dimension and phase of the design, the GPU or FPGA may suit different purposes more conveniently, thus providing different acceleration factors over conventional multicore CPUs.
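
A toy sketch of the Monte Carlo loop such simulations repeat at every design point: send the all-zeros codeword over a binary symmetric channel, decode, and count frame errors. The tiny parity-check matrix and the hard-decision bit-flipping decoder below are illustrative stand-ins for the DVB-S2 codes and decoding kernels actually studied:

    import numpy as np

    H = np.array([[1, 1, 0, 1, 0, 0],        # toy parity-check matrix
                  [0, 1, 1, 0, 1, 0],        # (assumed, not a DVB-S2 code)
                  [1, 0, 0, 0, 1, 1],
                  [0, 0, 1, 1, 0, 1]], dtype=np.uint8)

    def bit_flip_decode(r, max_iters=20):
        x = r.copy()
        for _ in range(max_iters):
            syndrome = (H @ x) % 2
            if not syndrome.any():
                return x, True               # all parity checks satisfied
            votes = syndrome @ H             # unsatisfied checks per bit
            x[np.argmax(votes)] ^= 1         # flip the worst bit
        return x, False

    rng = np.random.default_rng(0)
    p, trials, frame_errors = 0.05, 2000, 0
    for _ in range(trials):
        received = (rng.random(H.shape[1]) < p).astype(np.uint8)
        decoded, ok = bit_flip_decode(received)   # all-zeros codeword sent
        frame_errors += not (ok and not decoded.any())
    print("FER at p = 0.05:", frame_errors / trials)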

Relevance:

40.00%

Abstract:

The design optimization of a cold-formed steel portal frame building is considered in this paper. The proposed genetic algorithm (GA) optimizer considers both the topology (i.e., frame spacing and pitch) and the cross-sectional sizes of the main structural members as the decision variables. Previous GAs in the literature were characterized by poor convergence, including slow progress, which usually results in excessive computation times and/or frequent failure to achieve an optimal or near-optimal solution. This is the main issue addressed in this paper. In an effort to improve the performance of the conventional GA, a niching strategy is presented that is shown to be an effective means of enhancing the dissimilarity of the solutions in each generation of the GA. Thus, population diversity is maintained and premature convergence is reduced significantly. Through benchmark examples, it is shown that the efficient GA proposed generates optimal solutions more consistently. A parametric study was carried out, and the results are included. They show significant variation in the optimal topology, in terms of pitch and frame spacing, for a range of typical column heights. They also show that the optimized design achieved large savings based on the cost of the main structural elements; the inclusion of knee braces at the eaves yields further significant savings in cost.
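
A generic sketch of the niching idea in Python: an additive crowding penalty (one simple variant of fitness sharing; the paper's exact niching strategy may differ) keeps selection from concentrating on near-identical designs. The two-variable cost function is a stand-in for the frame cost:

    import numpy as np

    rng = np.random.default_rng(1)

    def cost(x):                        # stand-in for the frame cost (minimize)
        return np.sum(x**2, axis=-1) + 10 * np.sin(3 * x).sum(axis=-1)

    def crowding_penalty(pop, sigma=0.5):
        # how crowded each individual's neighbourhood is
        d = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
        return np.where(d < sigma, 1 - d / sigma, 0.0).sum(axis=1)

    pop = rng.uniform(-3, 3, size=(40, 2))
    for gen in range(100):
        fit = cost(pop) + crowding_penalty(pop)   # penalize similar designs
        i, j = rng.integers(0, len(pop), (2, len(pop)))
        parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
        mask = rng.random(parents.shape) < 0.5    # uniform crossover
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        children += rng.normal(0, 0.1, children.shape)   # Gaussian mutation
        pop = children
    best = pop[np.argmin(cost(pop))]
    print("best design:", best, "cost:", cost(best))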

Relevance:

40.00%

Abstract:

Biodegradable polymers, such as PLA (polylactide), come from renewable resources like corn starch and, if disposed of correctly, degrade and become harmless to the ecosystem, making them attractive alternatives to petroleum-based polymers. PLA in particular is used in a variety of applications including medical devices, food packaging and waste disposal packaging. However, the industry faces challenges in melt processing of PLA due to its poor thermal stability, which is influenced by processing temperatures and shearing.
Identification and control of suitable processing conditions is extremely challenging, usually relying on trial and error, and often sensitive to batch-to-batch variations. Off-line assessment in a lab environment can result in high scrap rates, long lead times and lengthy, expensive process development. Scrap rates are typically in the region of 25-30% for medical-grade PLA costing €2000-€5000/kg.
Additives are used to enhance material properties, such as mechanical properties, and may also have a therapeutic role in the case of bioresorbable medical devices; for example, the release of calcium from orthopaedic implants such as fixation screws promotes healing. Additives can also reduce costs, as less of the polymer resin is required.
This study investigates the scope for monitoring, modelling and optimising processing conditions for twin-screw extrusion of PLA and PLA with calcium carbonate to achieve desired material properties. A DAQ system has been constructed to gather data from a bespoke measurement die, comprising melt temperature, pressure drop along the length of the die, and UV-Vis spectral data, which is shown to correlate with filler dispersion. Trials were carried out under a range of processing conditions using a Design of Experiments approach, and samples were tested for mechanical properties, degradation rate and the release rate of calcium. Relationships between the recorded process data and material characterisation results are explored.
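
For context, a minimal sketch of this style of trial plan in Python (the factors and levels here are illustrative assumptions, not the study's): a full factorial design enumerates every combination of processing conditions, and each run is then paired with the in-line die measurements and off-line material tests:

    from itertools import product

    factors = {                          # assumed factors and levels
        "screw_speed_rpm": [50, 100, 150],
        "barrel_temp_C": [180, 190, 200],
        "caco3_wt_pct": [0, 10, 20],
    }
    runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
    print(len(runs), "runs")             # 3^3 = 27 runs
    for run in runs[:3]:
        print(run)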

Relevance:

40.00%

Abstract:

A structural design optimisation has been carried out allowing for asymmetry and fully tapered portal frames. The additional weight of an asymmetric structural shape was found to be 5–13% on average, with additional photovoltaic (PV) loading having a negligible effect on the optimum design. It was also shown that fabricated and tapered frames achieved average weight reductions of 9% and 11%, respectively, compared with equivalent hot-rolled steel frames. When the deflection limits recommended by the Steel Construction Institute were used, the frames were shown to be deflection controlled, with industrial limits yielding savings of up to 40%.

Relevance:

40.00%

Abstract:

The conversion of biomass for the production of liquid fuels can help reduce the greenhouse gas (GHG) emissions that are predominantly generated by the combustion of fossil fuels. Oxymethylene ethers (OMEs) are a series of liquid fuel additives that can be obtained from syngas, which is produced from the gasification of biomass. The blending of OMEs in conventional diesel fuel can reduce soot formation during combustion in a diesel engine. In this research, a process for the production of OMEs from woody biomass has been simulated. The process consists of several unit operations including biomass gasification, syngas cleanup, methanol production, and conversion of methanol to OMEs. The methodology involved the development of process models, the identification of the key process parameters affecting OME production based on the process model, and the development of an optimal process design for high OME yields. It was found that up to 9.02 tonnes day⁻¹ of OME3, OME4, and OME5 (which are suitable as diesel additives) can be produced from 277.3 tonnes day⁻¹ of wet woody biomass. Furthermore, an optimal combination of the parameters, which was generated from the developed model, can greatly enhance OME production and thermodynamic efficiency. This model can further be used in a techno-economic assessment of the whole biomass conversion chain to produce OMEs. The results of this study can be helpful for petroleum-based fuel producers and policy makers in determining the most attractive pathways of converting bio-resources into liquid fuels.
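
A quick sanity check of the headline figures in Python (illustrative arithmetic only, using the numbers quoted above):

    wet_biomass_t_per_day = 277.3
    ome_t_per_day = 9.02                 # OME3 + OME4 + OME5 combined
    yield_frac = ome_t_per_day / wet_biomass_t_per_day
    print(f"overall mass yield: {yield_frac:.1%}")   # about 3.3%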

Relevance:

40.00%

Abstract:

This paper proposes a JPEG-2000 compliant architecture capable of computing the 2-D Inverse Discrete Wavelet Transform. The proposed architecture uses a single processor and a row-based schedule to minimize control and routing complexity and to ensure that processor utilization is kept at 100%. The design incorporates the handling of borders through the use of symmetric extension. The architecture has been implemented on the Xilinx Virtex-II FPGA.
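
What the architecture computes can be sketched in Python with PyWavelets as a software stand-in. The "bior2.2" wavelet is an assumption for illustration (not necessarily the exact JPEG-2000 filter bank), while mode="symmetric" mirrors the symmetric border extension described:

    import numpy as np
    import pywt

    img = np.arange(64, dtype=float).reshape(8, 8)
    # forward 2-D DWT, just to obtain coefficients to invert
    coeffs = pywt.dwt2(img, "bior2.2", mode="symmetric")
    # inverse 2-D DWT with the same symmetric border extension
    rec = pywt.idwt2(coeffs, "bior2.2", mode="symmetric")
    print(np.allclose(img, rec[:8, :8]))   # True: perfect reconstruction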