964 results for D-optimal design


Relevance:

80.00%

Publisher:

Abstract:

The depth of focus (DOF) can be defined as the variation in image distance of a lens or an optical system that can be tolerated without incurring an objectionable lack of sharpness of focus. The DOF of the human eye serves as a mechanism of blur tolerance: as long as the target image remains within the depth of focus in the image space, the eye will still perceive the image as being clear. A large DOF is especially important for presbyopic patients with partial or complete loss of accommodation (presbyopia), since it helps them to obtain an acceptable retinal image when viewing a target moving through a range of near to intermediate distances. The aim of this research was to investigate the DOF of the human eye and its association with the natural wavefront aberrations, and how higher order aberrations (HOAs) can be used to expand the DOF, in particular by inducing spherical aberrations (Z_4^0 and Z_6^0). The depth of focus of the human eye can be measured using a variety of subjective and objective methods. Subjective measurements based on a Badal optical system, through which the retinal image size can be kept constant, have been widely adopted; in such measurements the subject's tested eye is normally cyclopleged. Objective methods without the need for cycloplegia are also used, in which the eye's accommodative response is continuously monitored. Generally, the DOF measured by subjective methods is slightly larger than that measured objectively. In recent years, methods have also been developed to estimate DOF from retinal image quality metrics (IQMs) derived from the ocular wavefront aberrations. In such methods, the DOF is defined as the range of defocus error that degrades the retinal image quality, calculated from the IQMs, to a certain level of the possible maximum value. In this study, the effect of different amounts of HOAs on the DOF was theoretically evaluated by modelling and comparing the DOF of subjects from four different clinical groups: young emmetropes (20 subjects), young myopes (19 subjects), presbyopes (32 subjects) and keratoconics (35 subjects). A novel IQM-based through-focus algorithm was developed to theoretically predict the DOF of subjects with their natural HOAs. Additional primary spherical aberration (Z_4^0) was also induced in the wavefronts of myopes and presbyopes to simulate the effect of myopic refractive correction (e.g. LASIK) and presbyopic correction (e.g. progressive power IOL) on the subject's DOF. Larger amounts of HOAs were found to lead to greater values of predicted DOF. The introduction of primary spherical aberration was found to provide a moderate increase in DOF while slightly deteriorating the image quality at the same time. The predicted DOF was also affected by the IQM and the threshold level adopted. We then investigated the influence of the chosen threshold level of the IQM on the predicted DOF, and how it relates to the subjectively measured DOF. The subjective DOF was measured in a group of 17 normal subjects, and the through-focus visual Strehl ratio based on the optical transfer function (VSOTF), derived from their wavefront aberrations, was used as the IQM to estimate the DOF. The results allowed comparison of the subjective DOF with the estimated DOF and determination of a threshold level for DOF estimation. A significant correlation was found between the subject's estimated threshold level for the estimated DOF and HOA RMS (Pearson's r = 0.88, p < 0.001).
This linear correlation can be used to estimate the threshold level for each individual subject, leading to a method for estimating an individual's DOF from a single measurement of their wavefront aberrations. A subsequent study was conducted to investigate the DOF of keratoconic subjects. Significant increases in the level of HOAs, including spherical aberration, coma and trefoil, can be observed in keratoconic eyes, so this population provides an opportunity to study the influence of these HOAs on DOF. It was also expected that the asymmetric aberrations (coma and trefoil) in the keratoconic eye could interact with defocus to cause regional blur of the target. A dual-Badal-channel optical system with a star-pattern target was used to measure the subjective DOF in 10 keratoconic eyes, which was compared to that of a group of 10 normal subjects. The DOF measured in keratoconic eyes was significantly larger than that in normal eyes; however, there was no strong correlation between the large amount of HOA RMS and DOF in keratoconic eyes. Among all HOA terms, spherical aberration was found to be the only HOA that significantly increased the DOF in the studied keratoconic subjects. Through the first three studies, a comprehensive understanding of DOF and its association with the HOAs of the human eye was achieved. An adaptive optics (AO) system was then designed and constructed. The system was capable of measuring and altering the wavefront aberrations in the subject's eye and measuring the resulting DOF under the influence of different combinations of HOAs. Using the AO system, we investigated the concept of extending the DOF through optimized combinations of Z_4^0 and Z_6^0. Systematic introduction of a targeted amount of both Z_4^0 and Z_6^0 was found to significantly improve the DOF of healthy subjects, and wavefront combinations of Z_4^0 and Z_6^0 with opposite signs expanded the DOF further than using Z_4^0 or Z_6^0 alone. The optimal wavefront combinations to expand the DOF were estimated using the ratio of the increase in DOF to the loss of retinal image quality defined by the VSOTF. In the experiment, the optimal combinations of Z_4^0 and Z_6^0 were found to provide a better balance of DOF expansion against relatively small decreases in visual acuity. Therefore, the optimal combinations of Z_4^0 and Z_6^0 provide a more efficient method to expand the DOF than Z_4^0 or Z_6^0 alone. This PhD research has shown that there is a positive correlation between the DOF and the eye's wavefront aberrations: more aberrated eyes generally have a larger DOF. The association between DOF and the natural HOAs in normal subjects can be quantified, which allows the estimation of DOF directly from the ocular wavefront aberration. Among the Zernike HOA terms, spherical aberrations (Z_4^0 and Z_6^0) were found to improve the DOF. Certain combinations of Z_4^0 and Z_6^0 provide a more effective method to expand DOF than using Z_4^0 or Z_6^0 alone, and this could be useful in the optimal design of presbyopic optical corrections such as multifocal contact lenses, intraocular lenses and laser corneal surgeries.
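
As a small illustration of the metric-threshold definition of DOF used in this work, the sketch below takes a through-focus image quality curve and returns the defocus range over which the metric stays above a chosen fraction of its maximum; the VSOTF values and the 50% threshold are placeholders, not the study's data.

```python
# Minimal sketch: DOF as the width of the defocus interval around the IQM peak
# where the metric stays above threshold_fraction * max(IQM).
import numpy as np

def estimate_dof(defocus_d, iqm, threshold_fraction=0.5):
    """Return the DOF (in dioptres) for a through-focus IQM curve."""
    iqm = np.asarray(iqm, dtype=float)
    defocus_d = np.asarray(defocus_d, dtype=float)
    level = threshold_fraction * iqm.max()
    peak = int(iqm.argmax())
    lo = peak
    while lo > 0 and iqm[lo - 1] >= level:
        lo -= 1
    hi = peak
    while hi < len(iqm) - 1 and iqm[hi + 1] >= level:
        hi += 1
    return defocus_d[hi] - defocus_d[lo]

# Illustrative through-focus curve (Gaussian-like fall-off with defocus).
defocus = np.linspace(-2.0, 2.0, 201)        # dioptres
vsotf = np.exp(-(defocus / 0.8) ** 2)        # placeholder VSOTF values
print(f"Estimated DOF: {estimate_dof(defocus, vsotf, 0.5):.2f} D")
```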

Relevance:

80.00%

Publisher:

Abstract:

We consider the problem of how to efficiently and safely design dose finding studies. Both current and novel utility functions are explored using Bayesian adaptive design methodology for the estimation of a maximum tolerated dose (MTD). In particular, we explore widely adopted approaches such as the continual reassessment method and minimizing the variance of the estimate of an MTD. New utility functions are constructed in the Bayesian framework and are evaluated against current approaches. To reduce computing time, importance sampling is implemented to re-weight posterior samples, thus avoiding the need to draw samples using Markov chain Monte Carlo techniques. Further, as such studies are generally first-in-man, the safety of patients is paramount. We therefore explore methods for the incorporation of safety considerations into utility functions to ensure that only safe and well-predicted doses are administered. The amalgamation of Bayesian methodology, adaptive design and compound utility functions is termed adaptive Bayesian compound design (ABCD). The performance of this amalgamation of methodology is investigated via the simulation of dose finding studies. The paper concludes with a discussion of results and extensions that could be included in our approach.
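
The importance-sampling idea can be illustrated with a small sketch: prior draws of a dose-toxicity parameter are re-weighted by the likelihood of the accumulated data, and the re-weighted samples give an updated MTD estimate without MCMC. The one-parameter power model and all numbers below are illustrative assumptions, not the utility functions or data of the paper.

```python
# Sketch of importance-sampling updates in an adaptive dose-finding design.
import numpy as np

rng = np.random.default_rng(1)
skeleton = np.array([0.05, 0.10, 0.20, 0.35, 0.50])   # prior toxicity skeleton (assumed)
target = 0.30                                          # target toxicity rate

prior_a = rng.normal(0.0, 1.34, size=20000)            # samples of 'a' from the prior

# Accumulated data so far: (dose index, toxicity outcome 0/1) -- illustrative only.
data = [(0, 0), (1, 0), (1, 1), (2, 0), (2, 0), (2, 1)]

def log_likelihood(a):
    """Power-model likelihood p(dose) = skeleton**exp(a), a common CRM choice."""
    ll = np.zeros_like(a)
    for d, y in data:
        p = skeleton[d] ** np.exp(a)
        ll += y * np.log(p) + (1 - y) * np.log(1 - p)
    return ll

w = np.exp(log_likelihood(prior_a))
w /= w.sum()                                           # importance weights

# Weighted posterior toxicity at each dose and the resulting MTD recommendation.
post_tox = np.array([(skeleton[d] ** np.exp(prior_a) * w).sum()
                     for d in range(len(skeleton))])
mtd = int(np.argmin(np.abs(post_tox - target)))
print("Posterior toxicity estimates:", np.round(post_tox, 3))
print("Recommended dose (MTD estimate): level", mtd + 1)
```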

Relevance:

80.00%

Publisher:

Abstract:

Monitoring stream networks through time provides important ecological information. The sampling design problem is to choose the locations where measurements are taken so as to maximise the information gathered about physicochemical and biological variables on the stream network. This paper uses a pseudo-Bayesian approach, averaging a utility function over a prior distribution, to find a design that maximises the average utility. We use models for correlations of observations on the stream network that are based on stream network distances and described by moving average error models. The utility functions used reflect the needs of the experimenter, such as prediction of location values or estimation of parameters. We propose an algorithmic approach to design in which the mean utility of a design is estimated using Monte Carlo techniques and an exchange algorithm is used to search for optimal sampling designs. In particular, we focus on the problem of finding an optimal design from a set of fixed designs and finding an optimal subset of a given set of sampling locations. As there are many different variables to measure, such as chemical, physical and biological measurements at each location, designs are derived from models based on different types of response variables: continuous, counts and proportions. We apply the methodology to a synthetic example and to the Lake Eacham stream network on the Atherton Tablelands in Queensland, Australia. We show that the optimal designs depend very much on the choice of utility function, varying from space-filling to clustered designs and mixtures of these, but that, given the utility function, designs are relatively robust to the type of response variable.
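
A minimal sketch of the Monte Carlo exchange search is given below: the expected utility of a candidate subset of sites is approximated by averaging over prior draws, and single-site exchanges are accepted while the estimate improves. The Euclidean exponential covariance and prediction-variance utility are placeholders for the stream-distance moving average models and utilities used in the paper.

```python
# Exchange-algorithm search driven by a Monte Carlo utility estimate.
# The same prior draws ('thetas') are reused for every evaluation so that
# comparisons are consistent and the search terminates.
import numpy as np

rng = np.random.default_rng(0)
candidates = rng.uniform(0, 10, size=(40, 2))      # candidate sampling locations
n_design = 8                                       # number of sites to select
thetas = rng.gamma(2.0, 1.0, size=50)              # prior draws of the range parameter

def mc_utility(idx):
    """Monte Carlo estimate of a placeholder utility: the negative average
    kriging-type prediction variance over all candidate sites."""
    pts = candidates[idx]
    u = 0.0
    for theta in thetas:
        d = np.linalg.norm(pts[:, None, :] - candidates[None, :, :], axis=2)
        c = np.exp(-d / theta)                     # design-to-candidate correlations
        dd = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        K = np.exp(-dd / theta) + 1e-6 * np.eye(len(idx))
        pred_var = 1.0 - np.einsum('ij,ji->i', c.T, np.linalg.solve(K, c))
        u -= pred_var.mean()
    return u / len(thetas)

design = [int(i) for i in rng.choice(len(candidates), n_design, replace=False)]
best = mc_utility(design)
improved = True
while improved:                                    # exchange algorithm
    improved = False
    for i in range(n_design):
        for s in range(len(candidates)):
            if s in design:
                continue
            trial = design[:i] + [s] + design[i + 1:]
            u = mc_utility(trial)
            if u > best:
                design, best = trial, u
                improved = True
print("Selected sites:", sorted(design), "estimated utility:", round(best, 4))
```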

Relevance:

80.00%

Publisher:

Abstract:

This study proposes an optimized design approach in which a model of a specially shaped composite tank for spacecraft is built using finite element analysis. The composite layers are preliminarily designed by combining the quasi-network design method with numerical simulation, which determines the ratio between the angle and the thickness of the layers as the initial values for the optimization. By adopting an adaptive simulated annealing algorithm, the angles and the numbers of layers at each angle are optimized to minimize the weight of the structure. On this basis, the stacking sequence of the composite layers is formulated from the number of layers in the optimized structure by applying an enumeration method combined with the general design parameters. Numerical simulation is finally used to calculate the buckling limits of tanks produced by the different design methods. A composite tank with a cone-shaped cylinder body is taken as an example, and its ellipsoidal head section and outer wall plate are selected to validate the method. The results show that the quasi-network design method can improve the design quality of the composite layup in tanks under complex loading conditions at the preliminary design stage. The adaptive simulated annealing algorithm reduces the initial design weight by 30%, effectively probing for the globally optimal solution. It can therefore be concluded that this optimization method is capable of designing and optimizing specially shaped composite tanks under complex loading conditions.
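
The simulated annealing step can be sketched as follows, with the laminate weight and feasibility check reduced to crude placeholders (the actual study evaluates candidate layups with finite element analysis); only the annealing mechanics of perturbation, acceptance and cooling are illustrated.

```python
# Simulated-annealing sketch over ply counts per angle; all models are placeholders.
import math
import random

random.seed(3)
ANGLES = [0, 45, -45, 90]                 # candidate ply angles
PLY_WEIGHT = 0.12                         # assumed weight per ply (kg), illustrative

def weight(counts):
    return PLY_WEIGHT * sum(counts.values())

def feasible(counts):
    """Placeholder strength/buckling check: at least 2 plies per angle, 20 overall."""
    return all(c >= 2 for c in counts.values()) and sum(counts.values()) >= 20

def neighbour(counts):
    new = dict(counts)
    a = random.choice(ANGLES)
    new[a] = max(0, new[a] + random.choice([-1, 1]))
    return new

# Initial (conservative) layup from the preliminary design step.
current = {0: 12, 45: 10, -45: 10, 90: 8}
best = dict(current)
T = 5.0
while T > 0.01:
    cand = neighbour(current)
    if feasible(cand):
        dw = weight(cand) - weight(current)
        if dw < 0 or random.random() < math.exp(-dw / T):
            current = cand
            if weight(current) < weight(best):
                best = dict(current)
    T *= 0.98                              # cooling schedule
print("Best layup (plies per angle):", best, "weight:", round(weight(best), 2), "kg")
```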

Relevance:

80.00%

Publisher:

Abstract:

In the global construction context, the Best Value or Most Economically Advantageous Tender is becoming a widespread approach to contractor selection, as an alternative to traditional awarding criteria such as the Lowest Price. In these multi-attribute tenders, the owner or auctioneer solicits proposals containing both a price bid and additional technical features. Once the proposals are received, each bidder's price bid is given an economic score according to a scoring rule, generally called an Economic Scoring Formula (ESF), and a technical score according to pre-specified criteria. Eventually, the contract is awarded to the bidder with the highest weighted overall score (economic + technical). However, the selection of the Economic Scoring Formula by auctioneers is, paradoxically, a highly intuitive process in practice, involving few theoretical or empirical considerations, despite traditionally and mistakenly being regarded as objective because of its mathematical nature. This paper provides a taxonomic classification of a wide variety of ESFs and Abnormally Low Bid Criteria (ALBC) gathered from several countries with different tendering approaches. Practical implications concern the optimal design of price scoring rules in construction contract tenders, as well as future analyses of the effects of the ESF and ALBC on competitive bidding behaviour.
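
For illustration, the sketch below implements one common linear-interpolation form of Economic Scoring Formula together with a simple mean-based Abnormally Low Bid Criterion; both rules are examples of the kinds of formulas the taxonomy classifies, not rules proposed by the paper, and the bid values are invented.

```python
# One common ESF family: maximum economic score to the lowest bid, zero to the
# highest, with a simple ALBC flag for bids far below the mean (illustrative).
def economic_scores(bids, max_score=60.0, albc_fraction=0.80):
    """Score each price bid; flag bids below albc_fraction * mean as abnormally low."""
    mean_bid = sum(bids) / len(bids)
    low, high = min(bids), max(bids)
    scores, flags = [], []
    for b in bids:
        if high == low:
            scores.append(max_score)
        else:
            scores.append(max_score * (high - b) / (high - low))
        flags.append(b < albc_fraction * mean_bid)   # abnormally low bid?
    return scores, flags

bids = [1_000_000, 920_000, 870_000, 640_000]        # illustrative price bids
scores, flags = economic_scores(bids)
for b, s, f in zip(bids, scores, flags):
    print(f"bid {b:>9,d}  economic score {s:5.1f}  abnormally low: {f}")
```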

Relevance:

80.00%

Publisher:

Abstract:

Rail joints are provided with a gap to accommodate thermal movement and to maintain electrical insulation for the control of signals and/or broken rail detection circuits. The gap in the rail joint is regarded as a source of significant problems for the rail industry: it leads to a very short service life compared with other track components, owing to various and difficult-to-predict failure modes, and thus increases the risk to train operations. Many attempts to improve the life of rail joints have led to a large number of patents around the world; notable attempts include strengthening through larger joint bars, an increased number of bolts and the use of high yield materials. Unfortunately, no design to date has shown the ability to prolong the life of rail joints to values close to those for continuously welded rail (CWR). This paper reports the results of a fundamental study which has revealed that wheel contact at the free edge of the railhead is a major problem, since it generates a singularity in the contact pressure and the railhead stresses. A design was therefore developed, using an optimisation framework, that prevents wheel contact at the railhead edge. Finite element modelling of the design has shown that the contact pressure and railhead stress singularities are eliminated, increasing the potential for the joint to perform as effectively as CWR, which has no geometric gap. An experimental validation of the finite element results is presented through an innovative non-contact measurement of strains. Some practical issues related to grinding rails to the optimal design are also discussed.

Relevance:

80.00%

Publisher:

Abstract:

A mathematical model is developed to simulate oxygen consumption, heat generation and cell growth in solid state fermentation (SSF). Fungal growth on the solid substrate particles increases the thickness of the cell film around the particles. The model incorporates this increase in biofilm size, which leads to a decrease in the porosity of the substrate bed and in the diffusivity of oxygen within the bed. The model also takes into account the effect of steric hindrance limitations in SSF. The growth of cells around a single particle, and the resulting expansion of the biofilm around it, is analyzed for simplified zero- and first-order oxygen consumption kinetics. Under zero-order kinetics, the model predicts an upper limit on cell density. Model simulations for a packed bed of solid particles in a tray bioreactor show distinct limitations on growth due to the simultaneous heat and mass transport phenomena accompanying the solid state fermentation process. The extent of the limitation due to heat and/or mass transport is analyzed during different stages of fermentation. It is expected that the model will lead to a better understanding of the transport processes in SSF and will therefore assist in the optimal design of bioreactors for SSF.
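
Two ingredients of such a model can be sketched under simplified assumptions: the classical zero-order oxygen penetration depth, which caps the actively growing film thickness, and the loss of bed porosity as the film thickens around spherical particles. All parameter values below are illustrative, not the paper's.

```python
# Simplified sketch: zero-order O2 penetration depth delta = sqrt(2*De*Cs/q0),
# Euler integration of oxygen-limited film growth, and the resulting bed porosity.
import math

De = 1.0e-9        # effective O2 diffusivity in the biofilm, m^2/s (assumed)
Cs = 0.25          # dissolved O2 at the film surface, mol/m^3 (assumed)
q0 = 0.05          # zero-order volumetric O2 uptake, mol/(m^3 s) (assumed)
Rp = 1.0e-3        # substrate particle radius, m (assumed)
eps0 = 0.45        # initial bed porosity (assumed)
mu = 0.10 / 3600   # specific film growth rate, 1/s (assumed)

delta_max = math.sqrt(2 * De * Cs / q0)    # O2 penetration depth (m)
print(f"Oxygen penetration depth: {delta_max*1e6:.1f} micrometres")

# Euler integration of film thickness: only the oxygenated part grows.
dt, t_end = 60.0, 48 * 3600                # 1-min steps over 48 h
delta, t = 1.0e-6, 0.0
while t < t_end:
    active = min(delta, delta_max)         # oxygen-limited active thickness
    delta += mu * active * dt
    t += dt

# Bed porosity after growth: solids fraction scales with (particle + film)^3.
eps = 1.0 - (1.0 - eps0) * ((Rp + delta) / Rp) ** 3
print(f"Film thickness after 48 h: {delta*1e6:.1f} micrometres, "
      f"bed porosity: {max(eps, 0.0):.3f}")
```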

Relevance:

80.00%

Publisher:

Abstract:

Fuel cells are emerging as alternative green power producers, both for large-scale power production and for use in automobiles. Hydrogen is seen as the best fuel option; however, hydrogen fuel cells require recirculation of unspent hydrogen. A supersonic ejector is an apt device for recirculation in the operating regimes of a hydrogen fuel cell, and ejectors have to be designed optimally to achieve the best performance. The use of the vector evaluated particle swarm optimization technique to optimize supersonic ejectors, with a focus on its application to hydrogen recirculation in fuel cells, is presented here. Two parameters, compression ratio and efficiency, are identified as the objective functions to be optimized. Their relation to the operating and design parameters of the ejector is obtained by a control-volume-based analysis using a constant-area mixing approximation. The independent parameters considered are the area ratio and the exit Mach number of the nozzle. The optimization is carried out at a particular entrainment ratio and results in a set of non-dominated solutions, the Pareto front. A set of such curves can be used for choosing the optimal design parameters of the ejector.
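
The non-dominated (Pareto) filtering that underlies such a multi-objective optimization can be sketched as below; the mapping from area ratio and nozzle exit Mach number to compression ratio and efficiency is a crude placeholder for the constant-area mixing, control-volume analysis of the paper.

```python
# Pareto (non-dominated) filtering of sampled ejector designs; the objective
# model is a placeholder, not the paper's gas-dynamic relations.
import random

random.seed(7)

def evaluate(area_ratio, mach_exit):
    """Placeholder objectives, both to be maximized."""
    compression = 1.0 + 0.8 * mach_exit / area_ratio
    efficiency = 0.9 - 0.05 * mach_exit - 0.02 * area_ratio
    return compression, efficiency

# Sample candidate designs over the two independent parameters.
designs = [(random.uniform(2.0, 10.0), random.uniform(1.2, 3.0)) for _ in range(200)]
objs = [evaluate(ar, m) for ar, m in designs]

def dominates(a, b):
    """a dominates b if it is at least as good in both objectives and not equal."""
    return a[0] >= b[0] and a[1] >= b[1] and a != b

pareto = [d for d, o in zip(designs, objs)
          if not any(dominates(o2, o) for o2 in objs)]
print(f"{len(pareto)} non-dominated designs out of {len(designs)}")
for ar, m in sorted(pareto)[:5]:
    cr, eff = evaluate(ar, m)
    print(f"area ratio {ar:4.2f}, exit Mach {m:4.2f} -> CR {cr:4.2f}, eff {eff:4.2f}")
```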

Relevance:

80.00%

Publisher:

Abstract:

A performance prediction procedure is presented for low specific speed submersible pumps, together with a review of the loss models given in the literature. Most of the loss theories discussed in this paper are one-dimensional, and improvements are made with suitable empiricism so that the prediction covers the entire operating range of low specific speed pumps. Loss correlations, particularly in the low flow range, are discussed. The predicted shapes of the efficiency-capacity and total head-capacity curves agree well with experimental results over almost the full range of operating conditions. The approach adopted in the present analysis, of estimating the losses in the individual components of a pump, provides a means for improving performance and identifying problem areas in existing pump designs. The investigation also provides a basis for selecting parameters for the optimal design of pumps in which maximum efficiency is an important design requirement. The scope for improving the prediction procedure, given the nature of the flow phenomena in the low flow region, is discussed in detail.
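
The component-loss approach can be sketched in a few lines: a simplified Euler head is reduced by friction, shock and disk-friction losses to give head-capacity and efficiency-capacity curves. The correlations and coefficients below are illustrative only; the paper's loss models, particularly in the low flow range, are considerably more detailed.

```python
# Sketch of a component-loss performance prediction for a centrifugal pump.
import numpy as np

g, rho = 9.81, 1000.0
u2 = 18.0            # impeller tip speed, m/s (assumed)
sigma = 0.75         # slip factor (assumed)
Q_design = 0.010     # design flow rate, m^3/s (assumed)
k_fric = 2.0e4       # friction loss coefficient, s^2/m^5 (assumed)
k_shock = 5.0e4      # shock (incidence) loss coefficient, s^2/m^5 (assumed)
P_disk = 200.0       # disk-friction / mechanical loss power, W (assumed)

Q = np.linspace(0.002, 0.016, 50)
H_euler = sigma * u2**2 / g * (1.0 - 0.3 * Q / Q_design)      # simplified Euler head
H = H_euler - k_fric * Q**2 - k_shock * (Q - Q_design)**2     # net delivered head
eta = (rho * g * Q * H) / (rho * g * Q * H_euler + P_disk)    # overall efficiency

i = int(np.argmax(eta))
print(f"Predicted best-efficiency point: Q = {Q[i]*1000:.1f} L/s, "
      f"H = {H[i]:.1f} m, eta = {eta[i]:.2f}")
```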

Relevance:

80.00%

Publisher:

Abstract:

Optimizing a shell-and-tube heat exchanger for a given duty is an important and relatively difficult task, and there is a need for a simple, general and reliable method for carrying it out. The authors present here one such method for optimizing single-phase shell-and-tube heat exchangers with given geometric and thermohydraulic constraints. They discuss the problem in detail and then introduce a basic algorithm for optimizing the exchanger. This algorithm is based on data from an earlier study of a large collection of feasible designs generated for different process specifications, and it ensures a near-optimal design satisfying the given heat duty and geometric constraints. The authors also provide several sub-algorithms to satisfy imposed velocity limitations, and they illustrate the usefulness of these sub-algorithms with several examples in which the exchanger weight is minimized.
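
The flavour of such a constrained search can be sketched as below: candidate geometries are enumerated, those violating the heat-duty requirement or the tube-side velocity limits are discarded, and the lightest survivor is kept. The thermal and weight relations are simplified placeholders, not the authors' correlations or algorithm.

```python
# Toy constrained search for a light shell-and-tube geometry meeting a duty
# and tube-side velocity limits; all correlations are simplified assumptions.
import math

duty_kW = 500.0          # required heat duty
lmtd = 25.0              # log-mean temperature difference, K (assumed)
U = 0.8                  # assumed overall coefficient, kW/(m^2 K)
m_dot = 12.0             # tube-side mass flow, kg/s (assumed)
rho = 995.0              # tube-side fluid density, kg/m^3
v_min, v_max = 0.9, 2.5  # imposed tube-side velocity limits, m/s
d_i, d_o = 0.016, 0.019  # tube inner/outer diameters, m
steel_kg_per_m = 1.4     # assumed tube mass per metre, kg/m

A_required = duty_kW / (U * lmtd)          # required heat-transfer area, m^2

best = None
for n_tubes in range(40, 401, 20):
    for length in (2.0, 3.0, 4.0, 5.0, 6.0):
        for n_passes in (1, 2, 4):
            area = n_tubes * math.pi * d_o * length
            if area < A_required:
                continue                   # fails the heat duty
            tubes_per_pass = n_tubes // n_passes
            v = m_dot / (rho * tubes_per_pass * math.pi * d_i**2 / 4.0)
            if not (v_min <= v <= v_max):
                continue                   # fails the velocity limits
            weight = n_tubes * length * steel_kg_per_m   # crude bundle weight
            if best is None or weight < best[0]:
                best = (weight, n_tubes, length, n_passes, v)

w, n, L, p, v = best
print(f"Near-optimal design: {n} tubes x {L} m, {p} passes, "
      f"v = {v:.2f} m/s, bundle weight ~ {w:.0f} kg")
```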

Relevance:

80.00%

Publisher:

Abstract:

We present the concept, prototypes and an optimal design method for a compliant mechanism kit, as a parallel to the kits available for rigid-body mechanisms. The kit consists of flexible beams and connectors that can be easily hand-assembled using snap fits. It enables users, drawing on their creativity and mechanics intuition, to quickly realize a compliant mechanism. Mechanisms assembled in this manner accurately capture the essential behavior of the topology, shape, size and material aspects, and can thereby lead the way to a real compliant mechanism for practical use. Also described in this paper are the design of the connector, to which flexible beams can be added in eight different directions, and the prototyping of the spring-steel connectors and beams using wire-cut electro-discharge machining. It is noted that the concept of the kit also resolves a discrepancy in the finite element (FE) modeling of beam-based compliant mechanisms, which arises when two or more beams join at one point, leading to increased stiffness. After resolving this discrepancy, the work extends topology optimization to automatically generate designs that can be assembled with the kit. Thus, the kit and the accompanying analysis and optimal synthesis procedures comprise a self-contained educational, research and practical toolset for compliant mechanisms. The paper also illustrates how human creativity finds new ways of using the kit beyond its original intended use, and how it is useful even for a novice designing compliant mechanisms.

Relevance:

80.00%

Publisher:

Abstract:

Today's feature-rich multimedia products require embedded system solutions with complex System-on-Chip (SoC) designs to meet market expectations of high performance at low cost and low energy consumption. The memory architecture of the embedded system strongly influences critical system design objectives such as area, power and performance. Hence the embedded system designer performs a complete memory architecture exploration to custom-design a memory architecture for a given set of applications. Further, the designer is interested in multiple optimal design points to address various market segments. However, tight time-to-market constraints enforce a short design cycle time. In this paper we address the multi-level, multi-objective memory architecture exploration problem through a combination of exhaustive-search-based memory exploration at the outer level and a two-step integrated data layout for SPRAM-Cache-based architectures at the inner level. In the two-step integrated approach, the first step partitions data between the SPRAM and the cache, and the second step performs a cache-conscious data layout. We formulate the cache-conscious data layout as a graph partitioning problem and show that our approach gives up to a 34% improvement over an existing approach while also optimizing the off-chip memory address space. We evaluated our approach on 3 embedded multimedia applications; it explores several hundred memory configurations for each application, yielding several optimal design points in a few hours of computation on a standard desktop.
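
The first step, data partitioning, can be illustrated with a simple greedy heuristic that places the data sections with the highest access density (accesses per byte) into the scratch-pad until its capacity is exhausted and maps the rest to the cached region; the section names, sizes and access counts below are invented, and the paper's two-step method is more sophisticated.

```python
# Greedy SPRAM/cache data partitioning by access density (illustrative only).
sections = [                      # (name, size in bytes, access count) -- invented
    ("luma_buf",   8192, 400000),
    ("chroma_buf", 4096, 150000),
    ("coeff_tbl",  1024, 120000),
    ("huff_tbl",    512,  90000),
    ("frame_hdr",   256,   5000),
    ("scratch",    2048,  60000),
]
SPRAM_SIZE = 8 * 1024             # 8 KB scratch-pad (assumed)

def partition(sections, spram_size):
    spram, cached, used = [], [], 0
    # Highest accesses-per-byte first: a knapsack-style greedy choice.
    for name, size, accesses in sorted(sections, key=lambda s: s[2] / s[1], reverse=True):
        if used + size <= spram_size:
            spram.append(name)
            used += size
        else:
            cached.append(name)
    return spram, cached, used

spram, cached, used = partition(sections, SPRAM_SIZE)
print(f"SPRAM ({used}/{SPRAM_SIZE} bytes): {spram}")
print(f"Cached region: {cached}")
```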

Relevance:

80.00%

Publisher:

Abstract:

This paper elucidates the methodology of applying an artificial neural network model (ANNM) to predict the percent swell of calcitic soil in sulphuric acid solutions, a complex phenomenon involving many parameters. The swell data required for modelling are obtained experimentally using conventional oedometer tests under a nominal surcharge. The phases of ANN modelling include the optimal design of the architecture and its operation and training. The designed optimal neural model (3-5-1) is a fully connected three-layer feed-forward network with a symmetric sigmoid activation function, trained by the back-propagation algorithm to minimize a quadratic error criterion. The model requires the duration of interaction, the calcite mineral content and the acid concentration as inputs for the prediction of swell. The strong correlation (R² = 0.9979) observed between the values determined by experiment and those predicted using the developed model demonstrates that the network can provide answers to complex problems in geotechnical engineering.
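
A minimal 3-5-1 network of the kind described, with a symmetric sigmoid (tanh) hidden layer, a linear output and plain back-propagation on a quadratic error, can be sketched as follows; the training data are synthetic placeholders rather than the oedometer measurements used in the paper.

```python
# 3-5-1 feed-forward network trained by back-propagation (synthetic data).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, normalised inputs in [0, 1]: (duration, calcite %, acid concentration).
X = rng.uniform(0.0, 1.0, size=(60, 3))
y = (4.0 * X[:, 0] + 3.0 * X[:, 1] * X[:, 2])[:, None]   # placeholder swell target

W1 = rng.normal(0.0, 0.5, size=(3, 5)); b1 = np.zeros((1, 5))
W2 = rng.normal(0.0, 0.5, size=(5, 1)); b2 = np.zeros((1, 1))
lr = 0.05

for epoch in range(5000):
    h = np.tanh(X @ W1 + b1)            # hidden layer, symmetric sigmoid
    out = h @ W2 + b2                   # linear output
    err = out - y                       # quadratic error criterion: mean(err**2)
    # Back-propagation of the error gradient.
    dW2 = h.T @ err / len(X);  db2 = err.mean(axis=0, keepdims=True)
    dh = (err @ W2.T) * (1.0 - h**2)    # tanh derivative
    dW1 = X.T @ dh / len(X);   db1 = dh.mean(axis=0, keepdims=True)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"Training MSE after 5000 epochs: {mse:.4f}")
```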

Relevance:

80.00%

Publisher:

Abstract:

Today's feature-rich multimedia products require embedded system solutions with complex System-on-Chip (SoC) designs to meet market expectations of high performance at low cost and low energy consumption. The memory architecture of the embedded system strongly influences these parameters, so the embedded system designer performs a complete memory architecture exploration. This is a multi-objective optimization problem that can be tackled as a two-level optimization problem: the outer level explores various memory architectures while the inner level explores the placement of data sections (the data layout problem) to minimize memory stalls. Further, the designer is interested in multiple optimal design points to address various market segments, yet tight time-to-market constraints enforce a short design cycle time. In this paper we address the multi-level, multi-objective memory architecture exploration problem through a combination of a multi-objective genetic algorithm (for memory architecture exploration) and an efficient heuristic data placement algorithm. At the outer level, the memory architecture exploration is performed by picking memory modules directly from an ASIC memory library. This allows the exploration to be carried out in an integrated framework, where memory allocation, memory exploration and data layout work in a tightly coupled way to yield optimal design points with respect to area, power and performance. We evaluated our approach on 3 embedded applications; it explores several thousand memory architectures for each application, yielding a few hundred optimal design points in a few hours of computation time on a standard desktop.
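
A compact sketch of the outer-level search is given below: a small multi-objective genetic algorithm picks modules from a toy memory library, scores each configuration on area, power and a stall proxy, and maintains a non-dominated archive of design points. The library entries, cost models and GA settings are all illustrative assumptions.

```python
# Toy multi-objective GA over memory module picks with a non-dominated archive.
import random

random.seed(4)

# Toy ASIC memory library: (name, size_kB, area, power, latency_cycles) -- invented.
LIBRARY = [
    ("spram_4k",   4, 1.0, 0.8, 1),
    ("spram_8k",   8, 1.8, 1.3, 1),
    ("spram_16k", 16, 3.2, 2.1, 2),
    ("cache_8k",   8, 2.4, 1.9, 1),
    ("cache_16k", 16, 4.0, 2.8, 1),
]

def evaluate(genome):
    """Map a 2-module configuration to (area, power, stalls); all minimized."""
    mods = [LIBRARY[g] for g in genome]
    area = sum(m[2] for m in mods)
    power = sum(m[3] for m in mods)
    capacity = sum(m[1] for m in mods)
    stalls = max(0.0, 64.0 - capacity) * 10 + sum(m[4] for m in mods)  # placeholder
    return area, power, stalls

def dominated(a, b):
    """True if objective vector a is dominated by b (minimization)."""
    return all(x >= y for x, y in zip(a, b)) and a != b

pop = [[random.randrange(len(LIBRARY)) for _ in range(2)] for _ in range(20)]
archive = {}
for gen in range(30):
    scored = [(evaluate(g), g) for g in pop]
    for obj, g in scored:                          # update the archive
        archive[tuple(g)] = obj
    archive = {g: o for g, o in archive.items()
               if not any(dominated(o, o2) for o2 in archive.values())}
    nxt = []                                       # dominance tournament + mutation
    while len(nxt) < len(pop):
        (oa, a), (ob, b) = random.sample(scored, 2)
        winner = list(b) if dominated(oa, ob) else list(a)
        if random.random() < 0.3:
            winner[random.randrange(2)] = random.randrange(len(LIBRARY))
        nxt.append(winner)
    pop = nxt

for g, o in sorted(archive.items()):
    print(f"modules {g}: area={o[0]:.1f}, power={o[1]:.1f}, stalls={o[2]:.1f}")
```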

Relevance:

80.00%

Publisher:

Abstract:

The thermoacoustic prime mover (TAPM) is an attractive alternative as a pressure wave generator to drive pulse tube cryocoolers (PTCs), owing to its absence of moving parts, constructional simplicity, reasonable efficiency and environmental friendliness. Decreasing the resonance frequency and improving the efficiency of the TAPM are important for driving PTCs; these are controlled by the working gas properties in addition to the dimensions of the TAPM. In this technical note, experimental studies carried out to evaluate the influence of different working fluids on the performance of a twin standing-wave TAPM at various operating pressures are compared, wherever possible, with simulation studies of the same system using DeltaEc. The reasonably good agreement between them indicates the utility of DeltaEc for the optimal design of a TAPM with the right working fluids for practical applications.
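
As a worked illustration of why the working fluid sets the resonance frequency, the sketch below evaluates the ideal half-wavelength estimate f = a/(2L), with sound speed a = sqrt(gamma*R*T/M), for a few candidate gases; the resonator length and temperature are assumed values, and a real TAPM would be modelled in DeltaEc.

```python
# Ideal half-wavelength resonance frequency for different working gases.
import math

R = 8.314       # J/(mol K)
T = 300.0       # K, gas temperature in the resonator (assumed)
L = 1.0         # m, acoustic resonator length (assumed)

gases = {
    # name: (gamma, molar mass in kg/mol)
    "helium":      (1.67, 4.003e-3),
    "argon":       (1.67, 39.95e-3),
    "nitrogen":    (1.40, 28.01e-3),
    "He-Ar 50/50": (1.67, (4.003e-3 + 39.95e-3) / 2),
}

for name, (gamma, M) in gases.items():
    a = math.sqrt(gamma * R * T / M)               # speed of sound
    print(f"{name:12s}: sound speed {a:6.1f} m/s, resonance ~ {a / (2 * L):5.1f} Hz")
```

Heavier gases and helium-argon mixtures lower the sound speed and hence the resonance frequency, which is consistent with the note's interest in matching the TAPM to the drive frequency of a pulse tube cryocooler.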