917 results for Statistical mixture-design optimization


Relevance:

30.00%

Publisher:

Abstract:

There is considerable interest in creating embedded speech recognition hardware using the weighted finite-state transducer (WFST) technique, but performance and memory-usage challenges remain. Two system optimization techniques are presented to address them: the first improves token propagation by removing the WFST epsilon input arcs; the second is a one-pass, adaptive pruning algorithm that gives a dramatic reduction in the number of active nodes to be computed. Memory and bandwidth results are given for a 5,000-word vocabulary, showing better practical performance than a conventional WFST; this is then exploited in the adaptive pruning algorithm, which reduces the active nodes from 30,000 to 4,000 with only a 2 percent sacrifice in speech recognition accuracy. Together, these optimizations lead to a simpler design with deterministic performance.
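
The adaptive pruning algorithm itself is not detailed in this abstract. As a loose illustration of the general idea, the hypothetical Python sketch below prunes a token set against a beam and then adapts the beam width toward an active-node budget; the adaptation rule and all names are assumptions, not the authors' method.

```python
# Hypothetical sketch of one-pass adaptive beam pruning for token passing.
# The adaptation rule (tighten/relax the beam toward a node budget) is an
# assumption for illustration; it is not the algorithm from the paper.

def adaptive_prune(active, beam, target_nodes, tighten=0.9, relax=1.1):
    """Prune tokens outside the beam, then adapt the beam width.

    active: dict mapping node id -> best path score (higher is better)
    beam:   current beam width (score window below the best token)
    """
    best = max(active.values())
    survivors = {n: s for n, s in active.items() if s >= best - beam}
    # One-pass adaptation: shrink the beam when too many nodes survive,
    # widen it when we are comfortably under budget.
    if len(survivors) > target_nodes:
        beam *= tighten
    else:
        beam *= relax
    return survivors, beam
```

Called once per decoding frame, a rule of this shape lets the active set converge toward a budget (e.g., 4,000 nodes) without a second pass over the lattice.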

Relevance:

30.00%

Publisher:

Abstract:

Efficiently exploring exponential-size architectural design spaces with many interacting parameters remains an open problem: the sheer number of experiments required renders detailed simulation intractable. We attack this via an automated approach that builds accurate predictive models. We simulate sampled points and use the results to teach our models the function describing the relationships among design parameters. The models can be queried and are very fast, enabling efficient discovery of design tradeoffs. We validate our approach via two uniprocessor sensitivity studies, predicting IPC with only 1–2% error. In an experimental study using the approach, training on 1% of a 250K-point CMP design space allows our models to predict performance with only 4–5% error. Our predictive modeling combines well with techniques that reduce the time taken by each simulation experiment, achieving net time savings of three to four orders of magnitude.
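
The abstract does not name the model family used. The sketch below, a minimal stand-in built on scikit-learn, shows the overall workflow: simulate a small sample of design points, fit a regression model, then query it cheaply across the whole space. The simulate_ipc function and all parameter names are invented for illustration.

```python
# Illustrative workflow: learn a predictor of simulated IPC from a ~1% sample
# of a toy design space, then query it for every remaining point. The
# simulator and the choice of RandomForestRegressor are stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def simulate_ipc(point):
    # Placeholder for one expensive detailed-simulation run.
    cache_kb, issue_width, rob_size = point
    return issue_width * 0.4 + np.log2(cache_kb) * 0.1 - 64.0 / rob_size

# Enumerate a toy design space and sample ~1% of it for training.
space = np.array([(c, w, r) for c in (16, 32, 64, 128, 256)
                            for w in (1, 2, 4, 8)
                            for r in range(32, 257, 2)])
train_idx = rng.choice(len(space), size=len(space) // 100, replace=False)
X_train = space[train_idx]
y_train = np.array([simulate_ipc(p) for p in X_train])

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The fitted model is queried for the full space at negligible cost.
pred = model.predict(space)
```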

Relevance:

30.00%

Publisher:

Abstract:

This study investigated whether the electronic portal imaging (EPI) acquisition process could be optimized and, as a result, whether tolerance and action levels could be set for the PIPSPro QC-3V phantom image quality assessment. The aim of the optimization was to reduce the dose delivered to the patient while maintaining clinically acceptable image quality. This is of interest when images are acquired in addition to the planned patient treatment, rather than with the treatment field during the treatment itself. A series of phantoms was used to assess image quality at different acquisition settings relative to the baseline values obtained at acceptance testing. Eight Varian aS500 EPID systems on four matched Varian 600C/D linacs and four matched Varian 2100C/D linacs were compared for consistency of performance, and images were acquired at the four main orthogonal gantry angles. Images were acquired using a 6 MV beam operating at 100 MU min(-1) in the low-dose acquisition mode. Doses used in the comparison were measured with a Farmer ionization chamber placed at d(max) in solid water. The results demonstrated that the number of reset frames had no influence on image contrast, but the number of frame averages did. The expected increase in noise, with a corresponding decrease in contrast, was also observed when the number of frame averages was reduced. The optimal settings for the low-dose acquisition mode with respect to image quality and dose were one reset frame and three frame averages. All patients at the Northern Ireland Cancer Centre are now imaged using one reset frame and three frame averages in the 6 MV 100 MU min(-1) low-dose acquisition mode. Routine EPID QC contrast tolerance (+/-10) and action (+/-20) levels using the PIPSPro phantom, based around expected values of 190 (Varian 600C/D) and 225 (Varian 2100C/D), have been introduced. The dose at d(max) from electronic portal imaging has been reduced by approximately 28%; while image quality has been reduced, the images produced are still clinically acceptable.
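
As a trivial illustration of how the quoted tolerance (+/-10) and action (+/-20) levels around the expected contrast values could be applied in routine QC, the following snippet classifies a measured contrast reading; it simply restates the numbers above and is not code from the study.

```python
# Minimal QC classification using the tolerance/action levels quoted above.
# Expected PIPSPro contrast values: 190 (Varian 600C/D), 225 (Varian 2100C/D).
EXPECTED = {"600C/D": 190, "2100C/D": 225}

def classify_contrast(linac, measured, tolerance=10, action=20):
    """Return 'pass', 'tolerance' (investigate), or 'action' (intervene)."""
    deviation = abs(measured - EXPECTED[linac])
    if deviation <= tolerance:
        return "pass"
    return "tolerance" if deviation <= action else "action"

print(classify_contrast("2100C/D", 212))  # deviation 13 -> 'tolerance'
```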

Relevance:

30.00%

Publisher:

Abstract:

In this research, a preliminary study was first carried out to establish the initial parameter window for obtaining a fully penetrated NiTi weldment. An L27 Taguchi experiment was then performed to study statistically the effects of the welding parameters, and their possible interactions, on the weld bead aspect ratio (penetration over fusion-zone width), and to determine the optimized parameter settings for producing a fully penetrated weldment with a desirable aspect ratio. The statistical results of the Taguchi experiment showed the laser mode to be the most important factor affecting the aspect ratio, and revealed a strong interaction between the power and the focus position. The optimized weldment was mainly of columnar dendritic structure in the weld zone (WZ), while the heat-affected zone (HAZ) exhibited an equiaxed grain structure. The XRD and DSC results showed that the WZ retained the B2 austenite structure without any precipitates, but with a significant decrease in phase transformation temperatures. The micro-hardness and tensile tests indicated that the mechanical properties of NiTi were degraded to a certain extent after fibre laser welding.
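
No response data are given in this abstract. The hypothetical sketch below illustrates how main effects are ranked in a Taguchi analysis: compute a larger-is-better signal-to-noise ratio per run, average it by factor level, and rank factors by the spread between levels. The data and factor names are invented, and only a fragment of an L27 array is shown.

```python
# Generic Taguchi main-effects sketch (hypothetical data, not the paper's).
# Larger-is-better S/N ratio: SN = -10*log10(mean(1/y^2)).
import numpy as np
import pandas as pd

runs = pd.DataFrame({
    "laser_mode": ["cw", "pulsed", "cw", "pulsed"],
    "power_W":    [400, 400, 550, 550],
    "aspect":     [0.62, 0.71, 0.80, 0.95],   # response: aspect ratio
})
runs["sn"] = -10 * np.log10(1.0 / runs["aspect"] ** 2)

# Average S/N by level for each factor; the factor with the largest
# level-to-level spread (delta) is ranked most influential.
for factor in ["laser_mode", "power_W"]:
    by_level = runs.groupby(factor)["sn"].mean()
    print(factor, "delta =", by_level.max() - by_level.min())
```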

Relevance:

30.00%

Publisher:

Abstract:

The inference of gene regulatory networks has gained considerable interest in the biology and biomedical community in recent years. The purpose of this paper is to investigate the influence that environmental conditions can exert on the performance of network inference algorithms. Specifically, we study five network inference methods (Aracne, BC3NET, CLR, C3NET and MRNET) and compare the results for three different conditions: (I) observational gene expression data under a normal environmental condition; (II) interventional gene expression data from growth in rich media; (III) interventional gene expression data from a normal environmental condition interrupted by a positive spike-in stimulation. Overall, we find that the different statistical inference methods lead to comparable, but condition-specific, results. Further, our results suggest that non-steady-state data enhance the inferability of regulatory networks.
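
The five methods studied are published tools (CLR and MRNET, for example, ship in the R package minet). Purely to illustrate the flavour of one of them, the sketch below implements the background-correction step of a CLR-like score over a precomputed association matrix; the use of absolute correlation as a stand-in for mutual information is an assumption for brevity.

```python
# Minimal sketch of the CLR (Context Likelihood of Relatedness) scoring step,
# given a precomputed gene-by-gene association matrix `mi`. Real analyses
# should use the published implementations; this only shows the idea of
# z-scoring each pairwise value against the two genes' backgrounds.
import numpy as np

def clr_scores(mi):
    mu = mi.mean(axis=1, keepdims=True)
    sd = mi.std(axis=1, keepdims=True) + 1e-12
    z = np.maximum(0.0, (mi - mu) / sd)   # z-score vs. each gene's background
    return np.sqrt(z ** 2 + z.T ** 2)     # symmetric CLR-style score

# Stand-in for a mutual-information matrix: |correlation| of toy data.
data = np.random.default_rng(1).normal(size=(10, 50))   # 10 genes, 50 samples
print(clr_scores(np.abs(np.corrcoef(data))).shape)      # (10, 10)
```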

Relevance:

30.00%

Publisher:

Abstract:

The design of hot-rolled steel portal frames can be sensitive to serviceability deflection limits. In such cases, in order to reduce frame deflections, practitioners increase the size of the eaves haunch and/or the sizes of the steel sections used for the column and rafter members of the frame. This paper investigates the effect of such deflection limits using a real-coded niching genetic algorithm (RC-NGA) that optimizes frame weight, taking into account both the ultimate and the serviceability limit states. The results show that the proposed GA is efficient and reliable. Two different sets of serviceability deflection limits are then considered: the deflection limits recommended by the Steel Construction Institute (SCI), which are based on controlling differential deflections, and alternative limits based on suggestions by industry. Parametric studies are carried out on frames with spans ranging from 15 m to 50 m and column heights from 5 m to 10 m. It is demonstrated that for a 50 m span frame, use of the SCI-recommended deflection limits can lead to frame weights around twice those of designs without these limits.
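
The RC-NGA is not reproduced here. As a hedged sketch of the general shape of such an optimizer, the toy real-coded GA below minimizes a placeholder frame weight with a penalty for violating a deflection limit; frame_weight, max_deflection and all constants are invented, and the niching mechanism is omitted.

```python
# Skeleton of a real-coded GA minimizing weight subject to a deflection limit.
# frame_weight(), max_deflection() and the limit are placeholders; the paper's
# RC-NGA additionally uses niching, which is omitted here for brevity.
import numpy as np

rng = np.random.default_rng(0)

def frame_weight(x):     # placeholder: weight grows with section sizes
    return x.sum()

def max_deflection(x):   # placeholder: deflection falls as sections grow
    return 100.0 / x.sum()

def fitness(x, limit=1.0, penalty=1e3):
    violation = max(0.0, max_deflection(x) - limit)
    return frame_weight(x) + penalty * violation   # lower is better

pop = rng.uniform(10.0, 100.0, size=(40, 3))       # 3 continuous sizing variables
for gen in range(200):
    f = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(f)[:20]]              # truncation selection
    children = parents[rng.integers(0, 20, 20)] + rng.normal(0, 2.0, (20, 3))
    pop = np.vstack([parents, np.clip(children, 10.0, 100.0)])

best = pop[np.argmin([fitness(x) for x in pop])]
print("best design:", np.round(best, 1))
```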

Relevance:

30.00%

Publisher:

Abstract:

Lovastatin biosynthesis depends on the relative concentrations of dissolved oxygen and the carbon and nitrogen sources. Elucidating the underlying relationship would facilitate the derivation of a controller for improving lovastatin yield in bioprocesses. To this end, batch submerged cultivation experiments of lovastatin production by Aspergillus flavipes BICC 5174, using both lactose and glucose as carbon sources, were performed in a 7-liter bioreactor, and the data were used to determine how the relative concentrations of lactose, glucose, glutamine and oxygen affected lovastatin yield. A model was developed from these results, and its predictions were validated against an independent set of batch data from a 15-liter bioreactor using five statistical measures, including the Willmott index of agreement. A nonlinear controller was then designed, assuming that dissolved oxygen and lactose concentrations can be measured online and using the lactose feed rate and airflow rate as process inputs. Simulation experiments demonstrated that a practical implementation of the nonlinear controller would produce satisfactory outcomes. This is the first model that correlates lovastatin biosynthesis with the carbon-nitrogen proportion and possesses a structure suitable for implementing a lovastatin production control strategy.
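
The validated model and controller are not given in this abstract. The loosely hedged toy below shows only the general pattern of testing a feed-rate rule against a substrate/product simulation with scipy; every kinetic expression and constant is invented, and the real controller is nonlinear and model-based rather than this simple proportional rule.

```python
# Toy fed-batch simulation: lactose S drives product P formation, with a
# simple proportional lactose-feed rule. All kinetics are invented for
# illustration; this is not the paper's validated model or controller.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, feed_rate):
    S, P = y
    uptake = 0.4 * S / (0.5 + S)            # Monod-type lactose uptake
    return [feed_rate - uptake, 0.3 * uptake]

def feed(S, setpoint=2.0, gain=0.2):
    return max(0.0, gain * (setpoint - S))  # hold lactose near the setpoint

y, t, dt = np.array([5.0, 0.0]), 0.0, 0.5
while t < 48.0:                             # sampled-data control loop
    u = feed(y[0])
    sol = solve_ivp(rhs, (t, t + dt), y, args=(u,))
    y, t = sol.y[:, -1], t + dt
print("final product concentration:", round(y[1], 3))
```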

Relevance:

30.00%

Publisher:

Abstract:

A near-isothermal micro-trickle bed reactor operated under radio frequency (RF) heating was developed. The reactor bed was packed with nickel ferrite micro-particles of 110 μm diameter, which generate heat under an RF field at 180 kHz. The hydrodynamics of a co-current configuration was analysed, and heat transfer rates were determined at temperatures ranging from 55 to 100 °C. A multi-zone reactor bed with several heating and catalytic zones was proposed in order to achieve near-isothermal operation. The exact positioning, number and length of the heating zones, each composed of a mixture of nickel ferrite and a catalyst, were determined by solving a one-dimensional model of heat transfer by conduction and convection. Conductive losses contributed up to 30% of the total thermal losses from the reactor. Three heating zones were required to obtain an isothermal length of 50 mm with a temperature non-uniformity of 2 K. Good agreement between the modelling and experimental results was obtained for the temperature profiles of the reactor.
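
As a hedged illustration of the one-dimensional conduction-convection model used to place the heating zones, the sketch below solves a steady 1D equation with volumetric sources in three zones by finite differences; the geometry, properties, boundary conditions and zone positions are placeholders, not the paper's values.

```python
# Hedged sketch: steady 1D conduction-convection along the bed with volumetric
# heat sources in three heating zones, discretized by finite differences.
import numpy as np

n, L = 201, 0.1                        # nodes, bed length [m]
x = np.linspace(0.0, L, n); dx = x[1] - x[0]
k, c = 1.0, 500.0                      # conduction [W/m/K]; rho*cp*u [W/m^2/K]

q = np.zeros(n)                        # volumetric heat source [W/m^3]
for z0, z1 in [(0.010, 0.020), (0.045, 0.055), (0.080, 0.090)]:
    q[(x >= z0) & (x <= z1)] = 2.0e5   # three illustrative RF heating zones

A, b = np.zeros((n, n)), np.zeros(n)
for i in range(1, n - 1):              # -k*T'' + c*T' = q, upwind first derivative
    A[i, i - 1] = -k / dx**2 - c / dx
    A[i, i] = 2.0 * k / dx**2 + c / dx
    A[i, i + 1] = -k / dx**2
    b[i] = q[i]
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = 25.0                    # fixed end temperatures [degC], illustrative

T = np.linalg.solve(A, b)
print("peak rise above inlet:", round(T.max() - 25.0, 2), "K")
```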

Relevance:

30.00%

Publisher:

Abstract:

Aircraft design is a complex, long and iterative process that draws on various specialties and optimization tools. These tools and specialties do not, however, include manufacturing, which is often considered later in the product development process, leading to higher cost and time delays. This work focuses on the development of an automated design tool that accounts for manufacture during the design process, concentrating on early geometry definition, which in turn informs assembly planning. To accomplish this, the design process needs to be open to any variation in structural configuration while maintaining the design intent. Redefining design intent as a map that links a set of requirements to a set of functions through a numerical approach enables the design process itself to be treated as a mathematical function. This definition enables the design process to take captured design knowledge and translate it into a set of mathematical equations that design the structure. The process is articulated in this paper using the structural design and definition of an aircraft fuselage section as an exemplar.
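
As a toy rendering of "design intent as a mathematical function", the snippet below composes requirement-driven rules into a single callable that maps a requirements record to geometry parameters; everything in it is invented and far simpler than the paper's tool.

```python
# Toy illustration: design intent as a composition of requirement-driven
# rules, so the design process itself becomes one callable function.
# All rules, names and values are invented for illustration.
from functools import reduce

def frame_pitch(req):
    return dict(req, frame_pitch_m=0.5 if req["load_case"] == "high" else 0.65)

def skin_thickness(req):
    return dict(req, skin_mm=1.2 + 0.1 * req["pressure_diff_bar"])

def design_process(requirements, rules=(frame_pitch, skin_thickness)):
    # Compose the rules: requirements in, geometry parameters out.
    return reduce(lambda r, f: f(r), rules, requirements)

print(design_process({"load_case": "high", "pressure_diff_bar": 0.6}))
```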

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose a novel finite impulse response (FIR) filter design methodology that reduces the number of operations, with the aim of reducing power consumption and enhancing performance. The novelty of our approach lies in generating filter coefficients that conform to a given low-power architecture while meeting the given filter specifications. The proposed algorithm is formulated as a mixed integer linear programming (MILP) problem that minimizes the Chebyshev error and synthesizes coefficients drawn from a pre-specified alphabet. The modified coefficients can be used for low-power VLSI implementation of vector scaling operations, such as FIR filtering, using a computation sharing multiplier (CSHM). Simulations in a 0.25 μm technology show that the CSHM FIR filter architecture can achieve 55% power and 34% speed improvements over carry-save multiplier (CSAM) based filters.
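
The MILP formulation is not reproduced here. As a simple hedged stand-in, the sketch below designs a reference filter with scipy's Remez routine, rounds the coefficients to a restricted signed power-of-two alphabet, and measures the resulting Chebyshev error; unlike this after-the-fact rounding, the paper synthesizes the constrained coefficients directly in the optimization.

```python
# Illustration only: quantize FIR coefficients to a small signed power-of-two
# alphabet and measure the Chebyshev (minimax) error. The paper instead
# *synthesizes* coefficients via MILP so they meet the spec in that alphabet.
import numpy as np
from scipy.signal import remez, freqz

h = remez(25, [0.0, 0.2, 0.3, 0.5], [1.0, 0.0])   # reference lowpass design

alphabet = np.array([0.0] + [s * 2.0**-k for s in (1, -1) for k in range(1, 9)])
hq = alphabet[np.abs(alphabet[None, :] - h[:, None]).argmin(axis=1)]

w, H = freqz(h, worN=1024)
_, Hq = freqz(hq, worN=1024)
print("max |H - Hq| =", np.max(np.abs(H - Hq)))    # Chebyshev-style error
```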

Relevance:

30.00%

Publisher:

Abstract:

Objectives

A P-value <0.05 is one metric used to evaluate the results of a randomized controlled trial (RCT). We wondered how often the statistically significant results of RCTs might be lost with small changes in the numbers of outcome events.

Study Design and Setting

A review of RCTs in high-impact medical journals that reported a statistically significant result for at least one dichotomous or time-to-event outcome in the abstract. In the group with the smallest number of events, we changed the status of patients without an event to an event until the P-value exceeded 0.05. We labeled this number the Fragility Index; smaller numbers indicated a more fragile result.

Results

The 399 eligible trials had a median sample size of 682 patients (range: 15-112,604) and a median of 112 events (range: 8-5,142); 53% reported a P-value <0.01. The median Fragility Index was 8 (range: 0-109); 25% had a Fragility Index of 3 or less. In 53% of trials, the Fragility Index was less than the number of patients lost to follow-up.

Conclusion

The statistically significant results of many RCTs hinge on small numbers of events. The Fragility Index complements the P-value and helps identify less robust results. 
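
The Fragility Index described above is straightforward to compute. The sketch below implements it for a two-arm trial with a dichotomous outcome using a two-sided Fisher's exact test, a common choice for this calculation, though the trials' own analyses may differ.

```python
# Fragility Index for a 2-arm trial with a dichotomous outcome: flip
# non-events to events in the arm with fewer events until the two-sided
# Fisher's exact P-value exceeds 0.05, counting the flips required.
from scipy.stats import fisher_exact

def fragility_index(e1, n1, e2, n2, alpha=0.05):
    if e2 < e1:                       # work on the arm with fewer events
        e1, n1, e2, n2 = e2, n2, e1, n1
    flips = 0
    while e1 < n1:
        p = fisher_exact([[e1, n1 - e1], [e2, n2 - e2]])[1]
        if p > alpha:
            return flips              # result lost after this many flips
        e1 += 1
        flips += 1
    return flips

print(fragility_index(10, 100, 25, 100))  # events/size per arm (toy numbers)
```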

Relevance:

30.00%

Publisher:

Abstract:

The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as the optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error-correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. High-performance computing currently offers a wide range of acceleration options, from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). Exploiting diverse target architectures typically entails developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modification or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, to introduce FPGAs as a platform for efficiently executing simulations coded in OpenCL, with LDPC decoding simulations as a case study. Experimental results were obtained for a variety of regular and irregular LDPC codes, ranging from short/medium (e.g., 8,000-bit) to long-length (e.g., 64,800-bit) DVB-S2 codes. We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, the GPU or the FPGA may be the more convenient choice, providing different acceleration factors over conventional multicore CPUs.
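
Neither SOpenCL nor the LDPC kernels are shown in this abstract. As a minimal illustration of the single-source idea, the sketch below runs one trivial OpenCL kernel through pyopencl on whatever device is available; the saxpy kernel is a placeholder, not an LDPC decoder.

```python
# Minimal 'write once, run on CPU/GPU (and, via tools like SOpenCL, FPGA)'
# example using pyopencl; the kernel is a placeholder, not an LDPC decoder.
import numpy as np
import pyopencl as cl

src = """
__kernel void saxpy(__global const float *x, __global float *y, float a) {
    int i = get_global_id(0);
    y[i] = a * x[i] + y[i];
}
"""

ctx = cl.create_some_context()        # picks any available platform/device
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, src).build()

x = np.arange(16, dtype=np.float32)
y = np.ones(16, dtype=np.float32)
mf = cl.mem_flags
xb = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
yb = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=y)

prog.saxpy(queue, x.shape, None, xb, yb, np.float32(2.0))
cl.enqueue_copy(queue, y, yb)
print(y[:4])                          # [1. 3. 5. 7.]
```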

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a stressed-skin diaphragm approach to the optimal design of the internal frame of a cold-formed steel portal framing system, taking into account the effect of semi-rigid joints. Both ultimate and serviceability limit states are considered, and wind load combinations are included. The designs are optimized using a real-coded niching genetic algorithm, in which both discrete and continuous decision variables are processed. For a building with two internal frames, it is shown that the material cost of the internal frame can be reduced by as much as 53% compared with a design that ignores stressed-skin action.
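
The niching mechanism is only named in this abstract. The snippet below shows one generic form of niching, fitness sharing, which de-rates individuals in crowded regions of the search space so that multiple good designs survive; the distance metric, sharing function and radius are assumptions, not the RC-NGA's actual scheme.

```python
# Generic fitness sharing (one common niching scheme), illustrative only.
# Assumes larger raw fitness is better and a Euclidean genotype distance;
# the paper's RC-NGA, with mixed discrete/continuous variables, differs.
import numpy as np

def shared_fitness(pop, raw, radius=0.1):
    d = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    sh = np.where(d < radius, 1.0 - d / radius, 0.0)   # triangular sharing
    return raw / sh.sum(axis=1)                        # de-rate crowded points
```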

Relevance:

30.00%

Publisher:

Abstract:

Several one-dimensional design methods have been used to predict the off-design performance of three modern centrifugal compressors for automotive turbocharging. The three methods are a single-zone model, a two-zone model, and a more recent statistical method. The predicted results from each method are compared against empirical data taken from standard hot gas stand tests for each turbocharger. The turbochargers considered in this study have notably different geometries and serve different applications. Because of the non-adiabatic test conditions, the empirical data have been corrected for the effect of heat transfer to ensure comparability with the 1D models. Each method is evaluated for usability and for accuracy in both pressure ratio and efficiency prediction. The paper presents an insight into the limitations of each model when applied to one-dimensional automotive turbocharger design, and proposes that a corrected single-zone modelling approach has the greatest potential for further development, whilst the statistical method could be introduced immediately into a design process where design variations are limited.
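
As a small aside on the heat-transfer correction mentioned above: a non-adiabatic gas stand inflates the measured outlet temperature and thus understates compressor efficiency. The snippet below applies the textbook total-to-total isentropic efficiency relation; it is a standard formula, not the paper's correction method.

```python
# Textbook total-to-total isentropic efficiency from gas-stand measurements.
# A non-adiabatic test inflates T2 and thus understates efficiency, which is
# why the study corrects the data for heat transfer before comparison.
def isentropic_efficiency(pr, t1_K, t2_K, gamma=1.4):
    return (pr ** ((gamma - 1.0) / gamma) - 1.0) / (t2_K / t1_K - 1.0)

print(round(isentropic_efficiency(2.5, 298.0, 408.0), 3))  # ~0.81
```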