917 results for Statistical mixture-design optimization


Relevance: 30.00%

Publisher:

Abstract:

Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census in the 1940s had developed a sampling design for the Current Population Survey (CPS). A significant factor was also that digital computers became available to statisticians. In the early 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by a French scientist, P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem, both published in a memoir in 1774 which is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which he depicted by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. At the 1894 meeting of the International Statistical Institute, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples, the idea being that the sample should be a miniature of the population. This idea still prevails. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, in the beginning of the 20th century, carried out several surveys in the UK. He also developed a theory of statistical inference for finite populations, based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science: he revolutionized the theory of statistics. In addition, he introduced a new statistical inference model which is still the prevailing paradigm. Its essential ideas are that samples are drawn repeatedly from the same population and that population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory for double sampling, which gave statisticians at the U.S. Census Bureau the central idea for developing the complex survey design for the CPS. An important criterion was to have a method whose data-collection costs were acceptable and which provided approximately equal interviewer workloads, besides sufficient accuracy in estimation.

Relevance: 30.00%

Publisher:

Abstract:

Based on a method presented in detail in a previous work by the authors, similar solutions have been obtained for the steady inviscid quasi‐one‐dimensional nonreacting flow in the supersonic nozzle of a CO2–N2 gasdynamic laser system, with either H2O or He as the catalyst. It has been demonstrated how these solutions could be used to optimize the small‐signal gain coefficient on a specified vibrational‐rotational transition. Results presented for a wide range of mixture compositions include optimum values for the small‐signal gain, area ratio, reservoir temperature, and a binary scaling parameter, which is the product of reservoir pressure and nozzle shape factor.

Relevance: 30.00%

Publisher:

Abstract:

Higher-order LCL filters are essential in meeting the interconnection standard requirements for grid-connected voltage source converters. LCL filters offer better harmonic attenuation and better efficiency at a smaller size than traditional L filters. The focus of this paper is to analyze the LCL filter design procedure from the point of view of power loss and efficiency. The IEEE 1547-2008 specifications for high-frequency current ripple are used as a major constraint early in the design to ensure that all subsequent optimizations remain compliant with the standard. Power loss in each individual filter component is calculated on a per-phase basis. The total per-unit inductance of the LCL filter is varied, and the LCL parameter values which give the highest efficiency while simultaneously meeting the stringent standard requirements are identified. The power loss and harmonic output spectrum of the grid-connected LCL filter are experimentally verified, and the measurements confirm the predicted trends.
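As a rough illustration of how such a ripple constraint can be checked during a design sweep, the Python sketch below varies the split of an assumed total inductance between the converter-side and grid-side inductors and evaluates the grid-current ripple through the undamped LCL transfer function i_g/v_inv = 1/(s^3*L1*L2*Cf + s*(L1+L2)). All ratings, the capacitor value, the ripple limit and the inverter harmonic voltage are invented placeholders, not values from the paper, and the per-component loss models of the actual design procedure are omitted.

import numpy as np

# Assumed example ratings (not from the paper): 10 kVA, 400 V line-to-line grid,
# 50 Hz fundamental, 10 kHz switching, 5% filter capacitance, 0.3% ripple limit.
S, V_ll, f_grid, f_sw = 10e3, 400.0, 50.0, 10e3
Z_base = V_ll**2 / S
L_base = Z_base / (2 * np.pi * f_grid)
Cf = 0.05 / (Z_base * 2 * np.pi * f_grid)          # 5% (per-unit) filter capacitor
L_total = 0.05 * L_base                            # assumed total inductance budget
ripple_limit = 0.003 * (S / (np.sqrt(3) * V_ll))   # 0.3% of rated current (assumption)

w_sw = 2 * np.pi * f_sw
V_harm = 0.05 * V_ll                               # assumed inverter harmonic voltage at f_sw

def grid_ripple(L1, L2, Cf, w, v_h):
    """Grid-side ripple current from the undamped LCL transfer function
    i_g / v_inv = 1 / (s^3 L1 L2 Cf + s (L1 + L2)), evaluated at s = jw."""
    s = 1j * w
    return abs(v_h / (s**3 * L1 * L2 * Cf + s * (L1 + L2)))

for split in np.linspace(0.1, 0.9, 9):             # fraction of L_total on the converter side
    L1, L2 = split * L_total, (1 - split) * L_total
    i_rip = grid_ripple(L1, L2, Cf, w_sw, V_harm)
    ok = "meets limit" if i_rip <= ripple_limit else "violates limit"
    print(f"L1/L_total = {split:.1f}: ripple = {i_rip*1000:.2f} mA ({ok})")

In the paper's procedure, such a compliance check would sit alongside the per-phase loss calculation so that only standard-compliant parameter sets are compared for efficiency; the sketch shows the constraint side only.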

Relevance: 30.00%

Publisher:

Abstract:

The use of binary fluid systems in thermally driven vapour absorption and mechanically driven vapour compression refrigeration and heat-pump cycles has provided an impetus for obtaining experimental data on the caloric properties of such fluid mixtures. However, direct measurements of these properties are somewhat scarce, even though the calorimetric techniques described in the literature are quite adequate. Most of the design data are derived through calculations using theoretical models and vapour-liquid equilibrium data. This article addresses the choice of working fluids and the current status of data availability vis-à-vis engineering applications. Particular emphasis is placed on organic working fluid pairs.

Relevance: 30.00%

Publisher:

Abstract:

Heat exchanger design is a complex task involving the selection of a large number of interdependent design parameters. There are no established general techniques for optimizing the design, though a few earlier attempts provide computer software based on gradient methods, case-study methods, etc. The authors felt that, in order to devise an optimization technique, it would be useful to determine the nature of the optimal and near-optimal feasible designs. Therefore, in this article they obtain, by an exhaustive search method, a large number of feasible designs of shell-and-tube heat exchangers intended to perform a given heat duty, and they study how the capital and operating costs of these designs vary. The study reveals several interesting aspects of the dependence of capital and total costs on the various design parameters. The authors consider a typical shell-and-tube heat exchanger used in an oil refinery; its heat duty, inlet temperature and other details are given.
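The abstract does not give the search grid or the cost correlations, so the Python sketch below only illustrates the exhaustive-enumeration pattern: generate candidate shell-and-tube geometries over a coarse parameter grid, discard infeasible ones, and rank the rest by capital plus operating cost. Every correlation, bound and parameter value here is a made-up placeholder, not data from the refinery case study.

import itertools

# Placeholder cost models (assumptions, not the authors' correlations): capital cost
# grows with heat-transfer area, operating cost with pumping power, which rises as
# the exchanger is made more compact.
def capital_cost(area_m2):
    return 30000 + 750 * area_m2**0.8

def operating_cost(tube_velocity, shell_velocity):
    return 1200 * (tube_velocity**2 + shell_velocity**2)

def feasible(design):
    # stand-in feasibility check: crude velocity limits
    return 0.5 <= design["vt"] <= 2.5 and 0.3 <= design["vs"] <= 1.5

# Exhaustive enumeration over a coarse grid of design parameters (illustrative only).
tube_lengths = [3.0, 4.5, 6.0]          # m
tube_counts  = [200, 300, 400, 500]
baffle_cuts  = [0.20, 0.25, 0.30]

designs = []
for L, n, bc in itertools.product(tube_lengths, tube_counts, baffle_cuts):
    area = 3.1416 * 0.019 * L * n        # 19 mm OD tubes (assumed)
    vt = 600.0 / n                       # crude stand-ins for tube/shell velocities
    vs = 0.9 / (bc * L)
    d = {"L": L, "n": n, "bc": bc, "area": area, "vt": vt, "vs": vs}
    if feasible(d):
        d["total_cost"] = capital_cost(area) + operating_cost(vt, vs)
        designs.append(d)

for d in sorted(designs, key=lambda d: d["total_cost"])[:5]:
    print(d)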

Relevance: 30.00%

Publisher:

Abstract:

Clustered VLIW architectures solve the scalability problem associated with flat VLIW architectures by partitioning the register file and connecting only a subset of the functional units to a register file. However, inter-cluster communication in clustered architectures leads to increased leakage in functional components and a high number of register accesses. In this paper, we propose compiler scheduling algorithms targeting two previously ignored power-hungry components in clustered VLIW architectures, viz., the instruction decoder and the register file. We consider a split-decoder design and propose a new energy-aware instruction scheduling algorithm that provides a 14.5% and 17.3% benefit in decoder power consumption on average over a purely hardware-based scheme in the context of 2-clustered and 4-clustered VLIW machines, respectively. In the case of register files, we propose two new scheduling algorithms that exploit a limited register-snooping capability to reduce extra register file accesses. The proposed algorithms reduce register file power consumption on average by 6.85% and 11.90% (10.39% and 17.78%), respectively, along with performance improvements of 4.81% and 5.34% (9.39% and 11.16%) over a traditional greedy algorithm for a 2-clustered (4-clustered) VLIW machine.

Relevance: 30.00%

Publisher:

Abstract:

Random Access Scan, which addresses individual flip-flops in a design using a memory-array-like row and column decoder architecture, has recently attracted widespread attention due to its potential for lower test application time, test data volume and test power dissipation compared to traditional Serial Scan. This is because typically only a very limited number of random "care" bits in a test response need be modified to create the next test vector; unlike traditional scan, most flip-flops need not be updated. Test application efficiency can be further improved by organizing the access by word instead of by bit. In this paper we present a new decoder structure that takes advantage of basis vectors and linear algebra to optimize test application in RAS significantly further by performing the write operations on multiple bits consecutively. Simulations performed on benchmark circuits show an average 2-3 times speedup in test write time compared to conventional RAS.
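The decoder structure itself cannot be reproduced from the abstract, but the underlying linear-algebra idea can be sketched: over GF(2), a desired multi-bit update can be expressed (when it lies in the span) as an XOR of stored basis vectors, found by Gaussian elimination. The Python sketch below is a generic illustration with made-up basis words, not the paper's decoder.

import numpy as np

def gf2_solve(basis, target):
    """Find a subset of basis row-vectors (over GF(2)) whose XOR equals target.
    Returns a 0/1 selection vector, or None if target is outside the span.
    Plain Gaussian elimination on the augmented system B^T x = t (mod 2)."""
    B = np.array(basis, dtype=np.uint8) % 2             # k x n basis matrix
    t = np.array(target, dtype=np.uint8) % 2            # length-n target word
    k, n = B.shape
    A = np.concatenate([B.T, t.reshape(n, 1)], axis=1)  # n x (k+1) augmented matrix
    row, pivots = 0, []
    for col in range(k):
        pivot_rows = np.nonzero(A[row:, col])[0]
        if pivot_rows.size == 0:
            continue
        pr = pivot_rows[0] + row
        A[[row, pr]] = A[[pr, row]]                      # bring pivot into place
        for r in range(n):                               # eliminate above and below
            if r != row and A[r, col]:
                A[r] ^= A[row]
        pivots.append((row, col))
        row += 1
        if row == n:
            break
    # a row of the form [0 ... 0 | 1] means the target is not in the span
    if any(A[r, k] and not A[r, :k].any() for r in range(n)):
        return None
    x = np.zeros(k, dtype=np.uint8)
    for r, c in pivots:
        x[c] = A[r, k]
    return x

# Toy example: 3 stored basis words over 4 scan bits (all values are made up).
basis = [[1, 0, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 1, 1]]
target = [1, 0, 0, 1]                    # desired change to the scanned word
sel = gf2_solve(basis, target)
print("selected basis rows:", sel)       # expected [1 0 1]
print("check (XOR of selected rows):",
      np.bitwise_xor.reduce(np.array(basis, dtype=np.uint8)[sel == 1], axis=0))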

Relevance: 30.00%

Publisher:

Abstract:

We compute the dynamic structure factors of a dense binary liquid mixture. These describe dynamics on molecular length scales, where structural relaxation is important. We find that the presence of a few large particles in a dense fluid of small particles slows down the dynamics considerably. We also observe a deep narrowing of the spectrum for a disordered mixture composed of a nearly equal packing of the two species. In contrast, a few small particles diffuse easily in the background of a dense fluid of large particles. We expect our results to describe neutron scattering from a dense mixture.

Relevance: 30.00%

Publisher:

Abstract:

In this paper, we consider robust joint linear precoder/receive filter design for multiuser multi-input multi-output (MIMO) downlink that minimizes the sum mean square error (SMSE) in the presence of imperfect channel state information (CSI). The base station is equipped with multiple transmit antennas, and each user terminal is equipped with multiple receive antennas. The CSI is assumed to be perturbed by estimation error. The proposed transceiver design is based on jointly minimizing a modified function of the MSE, taking into account the statistics of the estimation error under a total transmit power constraint. An alternating optimization algorithm, wherein the optimization is performed with respect to the transmit precoder and the receive filter in an alternating fashion, is proposed. The robustness of the proposed algorithm to imperfections in CSI is illustrated through simulations.
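A much-simplified sketch of the alternating structure is given below: single-user MIMO, perfect CSI and no robustness term, i.e., not the robust multiuser SMSE design of the paper. For a fixed precoder the MMSE (Wiener) receive filter is closed-form; for a fixed receive filter the precoder update is a regularized solve whose Lagrange multiplier is found by bisection so that the total transmit power constraint is met. All dimensions, the channel realization and the noise level are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, Ns = 4, 4, 2          # transmit antennas, receive antennas, data streams (assumed)
P, sigma2 = 1.0, 0.1          # total transmit power and noise variance (assumed)

H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

def mse(H, F, W, sigma2):
    E = W.conj().T @ H @ F - np.eye(Ns)
    return np.real(np.trace(E @ E.conj().T) + sigma2 * np.trace(W @ W.conj().T))

def receive_filter(H, F, sigma2):
    # MMSE (Wiener) receiver for a fixed precoder F
    HF = H @ F
    return np.linalg.solve(HF @ HF.conj().T + sigma2 * np.eye(Nr), HF)

def precoder(H, W, P):
    # F(mu) = (H^H W W^H H + mu I)^-1 H^H W, with mu >= 0 chosen by bisection
    # so that tr(F F^H) <= P (complementary slackness).
    A = H.conj().T @ W @ W.conj().T @ H
    B = H.conj().T @ W
    F_of = lambda mu: np.linalg.solve(A + mu * np.eye(Nt), B)
    power = lambda mu: np.real(np.trace(F_of(mu) @ F_of(mu).conj().T))
    if power(1e-12) <= P:
        return F_of(1e-12)
    lo, hi = 1e-12, 1.0
    while power(hi) > P:
        hi *= 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if power(mid) > P else (lo, mid)
    return F_of(hi)

# Alternating optimization: each half-step solves its subproblem exactly,
# so the MSE is non-increasing over iterations.
F = np.sqrt(P / Ns) * np.linalg.qr(rng.standard_normal((Nt, Ns)))[0][:, :Ns]
for it in range(20):
    W = receive_filter(H, F, sigma2)
    F = precoder(H, W, P)
    print(f"iter {it:2d}: MSE = {mse(H, F, W, sigma2):.4f}")

The robust design in the paper instead minimizes a modified MSE function that accounts for the CSI estimation-error statistics, and it does so jointly across multiple users; the sketch only shows the alternating precoder/receive-filter pattern.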

Relevance: 30.00%

Publisher:

Abstract:

The importance of long-range prediction of rainfall pattern for devising and planning agricultural strategies cannot be overemphasized. However, the prediction of rainfall pattern remains a difficult problem and the desired level of accuracy has not been reached. The conventional methods for prediction of rainfall use either dynamical or statistical modelling. In this article we report the results of a new modelling technique using artificial neural networks. Artificial neural networks are especially useful where the dynamical processes and their interrelations for a given phenomenon are not known with sufficient accuracy. Since conventional neural networks were found to be unsuitable for simulating and predicting rainfall patterns, a generalized structure of a neural network was then explored and found to provide consistent prediction (hindcast) of all-India annual mean rainfall with good accuracy. Performance and consistency of this network are evaluated and compared with those of other (conventional) neural networks. It is shown that the generalized network can make consistently good prediction of annual mean rainfall. Immediate application and potential of such a prediction system are discussed.
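As a generic, much-simplified illustration (not the generalized network structure of the article), the sketch below fits a small feedforward network to a synthetic annual-rainfall series and hindcasts the held-out years from the preceding five values. The data, lag length and network size are arbitrary placeholders, and scikit-learn's MLPRegressor merely stands in for the authors' network.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic stand-in for an all-India annual mean rainfall series (mm); the real
# study uses observed data and a generalized network structure, not this MLP.
years = np.arange(1901, 2001)
rain = 850 + 60 * np.sin(2 * np.pi * years / 11.0) + rng.normal(0, 40, years.size)

lag = 5                                   # predict a year from the previous 5 years
X = np.array([rain[i - lag:i] for i in range(lag, rain.size)])
y = rain[lag:]

split = 80 - lag                          # hindcast: train on 1906-1980, test on 1981-2000
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

scaler = StandardScaler().fit(X_tr)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X_tr), y_tr)

pred = model.predict(scaler.transform(X_te))
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
print(f"hindcast RMSE on held-out years: {rmse:.1f} mm")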

Relevance: 30.00%

Publisher:

Abstract:

Optimizing a shell-and-tube heat exchanger for a given duty is an important and relatively difficult task. There is a need for a simple, general and reliable method for realizing this task. The authors present here one such method for optimizing single-phase shell-and-tube heat exchangers with given geometric and thermohydraulic constraints. They discuss the problem in detail and then introduce a basic algorithm for optimizing the exchanger. This algorithm is based on data from an earlier study of a large collection of feasible designs generated for different process specifications. The algorithm ensures a near-optimal design satisfying the given heat duty and geometric constraints. The authors also provide several sub-algorithms to satisfy imposed velocity limitations, and they illustrate the usefulness of these sub-algorithms with several examples in which the exchanger weight is minimized.

Relevance: 30.00%

Publisher:

Abstract:

The design of a solid electrolyte that permits the use of dissimilar gas electrodes in an electrochemical cell is presented. It consists of a functionally gradient material with spatial variation in composition. The activity of the conducting ion is fixed at each electrode using different gas species. The system chosen for demonstrating the concept consists of a solid solution between K2CO3 and K2SO4. The composition of the solid solution varies from pure K2CO3 in contact with a CO2 + O2 gas mixture at one electrode to pure K2SO4 exposed to a mixture of SO3 + SO2 + O2 at the other. Two types of composition profiles are studied, one with monotonic variation in composition and the other with extrema. The e.m.f. of the cells is studied as a function of temperature and composition of the gas mixture at each electrode. The results indicate that the e.m.f. is determined primarily by the difference in the chemical potential of potassium at the two electrodes. The diffusion potential caused by ionic concentration gradients in the electrolyte appears to be negligible when the corresponding ionic transport numbers are insignificant. Studies on the response characteristics of the cell based on the gradient electrolyte indicate that the nature of the variation in composition of the electrolyte has only a minor effect on the time evolution of e.m.f. The gradient solid electrolytes have potential application in multielement galvanic sensors at high temperatures.

Relevance: 30.00%

Publisher:

Abstract:

This study aims to determine optimal locations of dual trailing-edge flaps and blade stiffness to achieve minimum hub vibration levels in a helicopter, with low penalty in terms of required trailing-edge flap control power. An aeroelastic analysis based on finite elements in space and time is used in conjunction with an optimal control algorithm to determine the flap time history for vibration minimization. Using the aeroelastic analysis, it is found that the objective functions are highly nonlinear and polynomial response surface approximations cannot describe the objectives adequately. A neural network is then used for approximating the objective functions for optimization. Pareto-optimal points minimizing both helicopter vibration and flap power are obtained using the response surface and neural network metamodels. The two metamodels give useful improved designs, resulting in about a 27% reduction in hub vibration and about a 45% reduction in flap power. However, the design obtained using the response surface is less sensitive to small perturbations in the design variables.
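The aeroelastic metamodels themselves cannot be reproduced from the abstract, so the Python sketch below uses two invented smooth functions as stand-ins for the hub-vibration and flap-power surrogates and only illustrates the Pareto-filtering step: scoring a cloud of candidate designs and keeping the non-dominated ones.

import numpy as np

rng = np.random.default_rng(2)

# Placeholder surrogate objectives over a 2-D design vector x (e.g., flap location,
# blade stiffness scaling); in the paper these would be neural-network or
# response-surface metamodels fitted to aeroelastic analysis results.
def hub_vibration(x):
    return 1.0 + (x[0] - 0.7) ** 2 + 0.5 * (x[1] - 1.1) ** 2

def flap_power(x):
    return 0.8 + 0.6 * (x[0] - 0.9) ** 2 + (x[1] - 0.8) ** 2

# Score a cloud of candidate designs drawn from assumed bounds.
designs = rng.uniform([0.5, 0.7], [1.1, 1.3], size=(500, 2))
scores = np.array([[hub_vibration(x), flap_power(x)] for x in designs])

def pareto_mask(scores):
    """True for points not dominated by any other point (both objectives minimized)."""
    n = scores.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominated = np.all(scores <= scores[i], axis=1) & np.any(scores < scores[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

front = designs[pareto_mask(scores)]
print(f"{front.shape[0]} Pareto-optimal candidates out of {designs.shape[0]}")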

Relevance: 30.00%

Publisher:

Abstract:

In this paper, the role of melt convection on the performance of heat sinks with phase change material (PCM) is investigated numerically. The heat sink consists of aluminum plate fins embedded in PCM and is subjected to a heat flux supplied from the bottom. A single-domain enthalpy-based CFD model is developed, which is capable of simulating the phase change process and the associated melt convection. The CFD model is coupled with a genetic algorithm for carrying out the optimization. Two cases are considered, namely, one without melt convection (i.e., a conduction heat transfer analysis) and the other with convection. It is found that the geometrical optimizations of the heat sinks are different for the two cases, indicating the importance of melt convection in the design of heat sinks with PCMs. In the case of the conduction analysis, the optimum half-fin width (i.e., the sum of the half pitch and the half fin thickness) is constant, which is in good agreement with results reported in the literature. On the other hand, if melt convection is considered, the optimum half-fin width depends on the effective thermal diffusivity due to conduction and convection. With melt convection, the optimized design results in a significant improvement in operational time.
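The coupling pattern described above (a genetic algorithm repeatedly querying a thermal model to score candidate fin geometries) can be sketched as follows in Python. The objective used here is a crude algebraic stand-in for the enthalpy-based CFD model, and the fin-geometry bounds and GA settings are arbitrary; only the optimization loop structure is meant to be illustrative.

import numpy as np

rng = np.random.default_rng(3)

# Decision variables: fin thickness t and fin pitch p (metres), both bounded (assumed).
bounds = np.array([[0.5e-3, 3e-3],      # fin thickness
                   [2e-3, 15e-3]])      # fin pitch

def operational_time(x):
    """Placeholder figure of merit standing in for the coupled enthalpy/CFD model:
    rewards closer fins (better heat spreading) but penalizes the PCM volume lost
    to fins. The weighting factor 3.0 is arbitrary; the real study evaluates melting
    with convection via CFD."""
    t, p = x
    fin_fraction = t / (t + p)                 # fraction of volume taken by fins
    spreading = 1.0 - np.exp(-0.004 / p)       # crude benefit of a smaller pitch
    return spreading * (1.0 - 3.0 * fin_fraction)   # to be maximized

def ga(objective, bounds, pop_size=40, gens=60, mut=0.1):
    dim = bounds.shape[0]
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, dim))
    for _ in range(gens):
        fit = np.array([objective(x) for x in pop])
        # tournament selection of parents
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # uniform crossover + Gaussian mutation, clipped to the bounds
        mates = parents[rng.permutation(pop_size)]
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents, mates)
        children += mut * (bounds[:, 1] - bounds[:, 0]) * rng.standard_normal((pop_size, dim))
        pop = np.clip(children, bounds[:, 0], bounds[:, 1])
    fit = np.array([objective(x) for x in pop])
    return pop[np.argmax(fit)], fit.max()

best_x, best_f = ga(operational_time, bounds)
print(f"best fin thickness = {best_x[0]*1e3:.2f} mm, pitch = {best_x[1]*1e3:.2f} mm")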