995 results for criterion variables


Relevance: 20.00%

Abstract:

This work addresses the optimum design of a composite box-beam structure subject to strength constraints. Such box-beams are used as the main load-carrying members of helicopter rotor blades. A computationally efficient analytical model of the box-beam is used. Optimal ply orientation angles are sought which maximize the failure margins with respect to the applied loading. The Tsai-Wu-Hahn failure criterion is used to calculate the reserve factor for each wall and ply, and the minimum reserve factor is maximized. Ply angles are used as design variables, and various cases of initial starting design and loading are investigated. Both gradient-based and particle swarm optimization (PSO) methods are used. It is found that the optimization approach leads to the design of a box-beam with greatly improved reserve factors, which can be useful for helicopter rotor structures. While PSO yields the globally best designs, the gradient-based method can also be used, with appropriate starting designs, to obtain useful designs efficiently.
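
To make the optimization setup concrete, here is a minimal particle swarm sketch in Python that maximizes a minimum reserve factor over ply angles. The paper's analytical box-beam model and the Tsai-Wu-Hahn reserve-factor calculation are not reproduced: min_reserve_factor below is a hypothetical placeholder, and the swarm parameters are generic defaults, so only the optimization loop is illustrated.

```python
import numpy as np

# Hypothetical surrogate: the analytical box-beam model and the Tsai-Wu-Hahn
# reserve-factor evaluation are not reproduced here; this placeholder only
# mimics "minimum reserve factor over all walls/plies" as a smooth function
# of the ply angles (in degrees).
def min_reserve_factor(ply_angles_deg):
    theta = np.radians(ply_angles_deg)
    rf_walls = 1.0 + np.cos(2.0 * theta) + 0.5 * np.sin(theta)  # one surrogate value per wall
    return rf_walls.min()

def pso_maximize(func, dim, n_particles=30, iters=200, bounds=(-90.0, 90.0),
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization that maximizes func over a box domain."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions = candidate ply angles
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_val = x.copy(), np.array([func(p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()
    gbest_val = pbest_val.max()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([func(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        if vals.max() > gbest_val:
            gbest, gbest_val = x[vals.argmax()].copy(), vals.max()
    return gbest, gbest_val

best_angles, best_rf = pso_maximize(min_reserve_factor, dim=4)   # four ply angles assumed
print("best ply angles (deg):", np.round(best_angles, 1),
      "min reserve factor:", round(best_rf, 3))
```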

Relevance: 20.00%

Abstract:

Guo and Nixon proposed a feature selection method based on maximizing I(x; Y), the multidimensional mutual information between the feature vector x and the class variable Y. Because computing I(x; Y) can be difficult in practice, Guo and Nixon proposed an approximation of I(x; Y) as the criterion for feature selection. We show that Guo and Nixon's criterion originates from approximating the joint probability distributions in I(x; Y) by second-order product distributions. We remark on the limitations of the approximation and discuss computationally attractive alternatives for computing I(x; Y).
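
As an illustration of why the exact multidimensional I(x; Y) is hard to use directly and why product-form approximations are attractive, the sketch below compares the exact (joint-count) mutual information with a naive first-order sum of marginal relevances on toy discrete data. The specific second-order form of Guo and Nixon's criterion is not reproduced; the data and feature names are invented for the example.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of hashable symbols."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_information(a, b):
    """I(A; B) = H(A) + H(B) - H(A, B) for discrete sequences."""
    return entropy(a) + entropy(b) - entropy(list(zip(a, b)))

# Toy discrete data: 3 features, binary class (all invented).
rng = np.random.default_rng(1)
n = 2000
y = rng.integers(0, 2, n)
x1 = (y + rng.integers(0, 2, n)) % 2          # informative feature
x2 = x1.copy()                                 # redundant copy of x1
x3 = rng.integers(0, 3, n)                     # pure noise

# Exact multidimensional I(x; Y), tractable here only because the joint
# feature space is tiny (2 x 2 x 3 cells); it blows up as features are added.
x_joint = list(zip(x1, x2, x3))
exact = mutual_information(x_joint, y)

# First-order approximation: sum of marginal relevances I(x_i; Y). It ignores
# redundancy, so it over-counts the duplicated feature x2; a second-order
# product approximation uses pairwise distributions instead, which is the kind
# of refinement discussed above.
approx = sum(mutual_information(xi, y) for xi in (x1, x2, x3))

print(f"exact I(x;Y)  = {exact:.3f} bits")
print(f"sum_i I(xi;Y) = {approx:.3f} bits  (over-counts the redundant feature)")
```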

Relevance: 20.00%

Abstract:

The question at issue in this dissertation is the epistemic role played by ecological generalizations and models. I investigate and analyze such properties of generalizations as lawlikeness, invariance, and stability, and I ask which of these properties are relevant in the context of scientific explanations. I claim that ecology offers generalizable and reliable causal explanations through generalizations that are invariant and stable. An invariant generalization continues to hold or be valid under a special change, called an intervention, that changes the value of its variables. Whether a generalization remains invariant under such interventions is the criterion that determines whether it is explanatory. A generalization can be invariant and explanatory regardless of its lawlike status. Stability concerns a kind of generality that has to do with whether a generalization holds under possible background conditions: the more stable a generalization, the less its truth depends on background conditions. Although it is invariance rather than stability that furnishes us with explanatory generalizations, stability has an important function in this context, namely that it provides the extrapolability and reliability of scientific explanations. I also discuss non-empirical investigations of models that I call robustness and sensitivity analyses. I call sensitivity analyses those investigations in which one model is studied with regard to its stability conditions by making changes and variations to the values of the model's parameters. As a general definition of robustness analyses I propose investigations of variations in the modelling assumptions of different models of the same phenomenon, in which the focus is on whether they produce similar or convergent results. Robustness and sensitivity analyses are powerful tools for studying the conditions and assumptions under which models break down, and they are especially powerful in pointing out the reasons why they do so. They show which conditions or assumptions the results of models depend on. Key words: ecology, generalizations, invariance, lawlikeness, philosophy of science, robustness, explanation, models, stability
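
To make the distinction operational, here is a small Python sketch under invented assumptions: two toy population-growth models stand in for "different models of the same phenomenon". Varying one model's parameter values corresponds to a sensitivity analysis in the sense above, while comparing the two models on the same question corresponds to a robustness analysis; the models and parameter values are illustrative only.

```python
import numpy as np

# Two toy ecological growth models of the same phenomenon (both assumed).
def logistic_step(n, r, k):
    return n + r * n * (1 - n / k)

def ricker_step(n, r, k):
    return n * np.exp(r * (1 - n / k))

def equilibrium(step, r, k, n0=1.0, t=500):
    """Iterate a one-step population map and return the final population size."""
    n = n0
    for _ in range(t):
        n = step(n, r, k)
    return n

k = 100.0
# Sensitivity analysis: one model, varied parameter values.
for r in (0.1, 0.5, 1.0, 1.9):
    print(f"logistic, r={r}: N* = {equilibrium(logistic_step, r, k):.1f}")

# Robustness analysis: different models, same question --
# do they converge on the same qualitative result (N* close to K)?
for step in (logistic_step, ricker_step):
    print(f"{step.__name__}, r=0.5: N* = {equilibrium(step, 0.5, k):.1f}")
```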

Relevance: 20.00%

Abstract:

An analysis of eccentrically loaded short reinforced concrete columns using a variable failure strain criterion is presented. The method dispenses with the usual procedure of assuming a fixed value for the ultimate strain in concrete. The analysis is based on the use of a simple, single equation for the complete stress-strain curve of concrete and the adoption of a process of maximisation of the moment with respect to the extreme fibre concrete compressive strain. Columns of rectangular section, loaded eccentrically about one axis only, are considered in this paper. Good agreement is observed between the theoretical and experimental values for several test results.
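
A minimal numerical sketch of the moment-maximization idea follows. It assumes a particular single-equation stress-strain curve (the Desayi-Krishnan form) and invented section properties, ignores the reinforcement and the axial-force equilibrium iteration that the full analysis requires, and simply shows that the internal moment peaks at some extreme-fibre strain rather than at a preassigned ultimate value.

```python
import numpy as np

# Single-equation stress-strain curve for concrete, assumed here in the
# Desayi-Krishnan form sigma = E*eps / (1 + (eps/eps0)^2); the paper only
# states that a simple single-equation curve is used.
E = 30000.0      # MPa, initial modulus (assumed)
EPS0 = 0.002     # strain at peak stress (assumed)

def sigma(eps):
    return E * eps / (1.0 + (eps / EPS0) ** 2)

def internal_moment(eps_top, b=300.0, xn=200.0, n_fibres=200):
    """Moment (about the neutral axis) of the compressed concrete block.

    Linear strain profile: zero at the neutral axis, eps_top at the extreme
    fibre, for a rectangular section of width b (mm) with neutral-axis depth
    xn (mm). Reinforcement and axial-force equilibrium are deliberately left
    out of this sketch.
    """
    y = np.linspace(0.0, xn, n_fibres)            # distance from neutral axis
    eps = eps_top * y / xn
    return np.trapz(sigma(eps) * y, y) * b / 1e6  # kN*m

# Instead of fixing an "ultimate" strain, scan the extreme-fibre strain and
# pick the value that maximizes the internal moment.
strains = np.linspace(0.0005, 0.006, 200)
moments = np.array([internal_moment(e) for e in strains])
i = moments.argmax()
print(f"moment is maximum at extreme-fibre strain {strains[i]:.4f} "
      f"(M = {moments[i]:.1f} kN*m), not at a preassigned ultimate strain")
```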

Relevance: 20.00%

Abstract:

The paper presents simple graphical procedures for position synthesis of plane linkage mechanisms to generate functions of two independent variables. The procedures are based on point-position reduction and permit synthesis of the linkage to satisfy up to six arbitrarily selected precision positions.

Relevance: 20.00%

Abstract:

The paper presents simple graphical procedures for the position synthesis of plane linkage mechanisms with sliding inputs and output to generate functions of two independent variables. The procedures are based on point-position reduction and permit synthesis of the linkage to satisfy up to five arbitrarily selected precision positions.

Relevance: 20.00%

Abstract:

Traditionally, laminar separation bubbles have been characterised as being 'long' or 'short' on the basis of a two-parameter 'bursting' criterion involving a pressure gradient parameter and the Reynolds number at separation. In the present work we suggest a refined bursting criterion which takes into account not just the length of the bubble but also its maximum height, thereby shedding some light on the less understood phenomenon of 'bursting' in laminar separation bubbles.
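
For concreteness, the snippet below evaluates the two classical parameters at separation (a momentum-thickness Reynolds number and a Gaster-type pressure-gradient parameter) together with the bubble height-to-length ratio that a refined criterion would also use. The parameter definitions and the numerical values are assumptions for illustration; the refined criterion's actual functional form is not reproduced here.

```python
# Minimal sketch of the quantities entering a two-parameter "bursting"
# criterion at separation, plus the bubble aspect ratio that a refined
# (length-and-height) criterion would also need. Definitions follow the
# usual Gaster-type convention and should be read as assumptions.
def bursting_parameters(u_sep, theta_sep, nu, du, dx, bubble_length, bubble_height):
    re_theta = u_sep * theta_sep / nu             # Reynolds number at separation
    p_grad = (theta_sep ** 2 / nu) * (du / dx)    # pressure-gradient parameter
    aspect = bubble_height / bubble_length        # extra geometric measure
    return re_theta, p_grad, aspect

# Illustrative SI values only.
re_theta, p_grad, aspect = bursting_parameters(
    u_sep=10.0, theta_sep=4e-4, nu=1.5e-5,
    du=-2.0, dx=0.05, bubble_length=0.04, bubble_height=0.002)
print(f"Re_theta_s = {re_theta:.0f}, P = {p_grad:.4f}, h/l = {aspect:.3f}")
```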

Relevance: 20.00%

Abstract:

The problem of controlling the vibration pattern of a driven string is considered. The basic question dealt with here is to find the control forces which reduce the energy of vibration of a driven string over a prescribed portion of its length while maintaining the energy outside that length above a desired value. The criterion of keeping the response outside the region of energy reduction as close to the original response as possible is introduced as an additional constraint. The slack unconstrained minimization technique (SLUMT) has been successfully applied to solve this problem. The effect of varying the phase of the control forces (which results in a six-variable control problem) is then studied. Nonlinear programming techniques, which have been used effectively to handle problems involving many variables and constraints, therefore offer a powerful tool for the solution of vibration control problems.
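
A compact restatement of the problem as a constrained minimization is sketched below: the steady-state response of a simply supported, harmonically driven string is built by modal superposition, the energy in a chosen "quiet" region is minimized over the control-force amplitudes, and a constraint keeps the energy elsewhere above a desired fraction of its original value. The string data, force locations, and the use of SciPy's SLSQP in place of SLUMT are all assumptions made for the illustration (phases are held fixed here, so only amplitudes are optimized).

```python
import numpy as np
from scipy.optimize import minimize

L, N_MODES, OMEGA, C = 1.0, 10, 12.0, 1.0       # length, modes, drive frequency, wave speed (assumed)
X = np.linspace(0.0, L, 400)
X_PRIMARY = 0.3                                  # location of the given driving force (assumed)
X_CONTROL = np.array([0.55, 0.7, 0.85])          # three control-force locations (assumed)

def response(forces, locations):
    """Steady-state modal displacement profile w(x) for harmonic point forces."""
    w = np.zeros_like(X)
    for n in range(1, N_MODES + 1):
        wn = n * np.pi * C / L                   # natural frequency of mode n
        gain = 1.0 / (wn**2 - OMEGA**2)
        for f, xf in zip(forces, locations):
            w += f * gain * np.sin(n * np.pi * xf / L) * np.sin(n * np.pi * X / L)
    return w

def region_energy(w, lo, hi):
    mask = (X >= lo) & (X <= hi)
    return np.trapz(w[mask]**2, X[mask])

w0 = response([1.0], [X_PRIMARY])                # uncontrolled response
E_out0 = region_energy(w0, 0.0, 0.5)             # energy outside the quiet zone

def objective(fc):                               # energy in the quiet zone x in [0.5, 1]
    w = response(np.concatenate(([1.0], fc)), np.concatenate(([X_PRIMARY], X_CONTROL)))
    return region_energy(w, 0.5, 1.0)

def constraint(fc):                              # keep outside energy above 90% of original
    w = response(np.concatenate(([1.0], fc)), np.concatenate(([X_PRIMARY], X_CONTROL)))
    return region_energy(w, 0.0, 0.5) - 0.9 * E_out0   # SLSQP requires >= 0

res = minimize(objective, x0=np.zeros(3), method="SLSQP",
               constraints=[{"type": "ineq", "fun": constraint}])
print("control forces:", np.round(res.x, 3), "quiet-zone energy:", round(res.fun, 6))
```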

Relevance: 20.00%

Abstract:

In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, vector autoregressions for example, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures, or gather the information contained in the series into an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset is formed of disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step I estimate a set of common factors from the original dataset; the second step consists of formulating forecasting equations that include the factors extracted previously. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent. The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than the ones obtained in the first case. This work opens up several directions for further research. The results obtained here can be replicated for longer datasets. The non-aggregated data can be represented in an even more disaggregated form (firm level). Finally, the use of micro data, one of the major contributions of this thesis, can be useful in the imputation of missing values and in the creation of flash estimates of macroeconomic indicators (nowcasting).
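
The two-step procedure can be sketched in a few lines of Python. The example below simulates a small panel instead of the actual Finnish datasets, extracts common factors by principal components, forecasts the target from the estimated factors, and reports the mean squared forecast error relative to an AR(1) benchmark; the data-generating process, sample sizes, and number of factors are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, R, H = 200, 50, 2, 1                      # periods, series, factors, horizon (all assumed)

# Simulated large panel with a factor structure standing in for the real data.
f = rng.normal(size=(T, R))                     # latent common factors
lam = rng.normal(size=(N, R))                   # factor loadings
X = f @ lam.T + rng.normal(scale=0.5, size=(T, N))
y = f[:, 0] + 0.3 * rng.normal(size=T)          # target series driven by factor 1

def pca_factors(X, r):
    """Step 1: estimate r common factors as principal components of the standardized panel."""
    Z = (X - X.mean(0)) / X.std(0)
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ vt[:r].T

def ols_forecast(regressors, y, h):
    """Step 2: fit y_{t+h} = a + b'x_t by OLS and forecast from the last observation."""
    Xr = np.column_stack([np.ones(len(regressors) - h), regressors[:-h]])
    beta = np.linalg.lstsq(Xr, y[h:], rcond=None)[0]
    return np.concatenate(([1.0], regressors[-1])) @ beta

# Rolling pseudo-out-of-sample evaluation against an AR(1) benchmark.
errs_factor, errs_ar = [], []
for t in range(150, T - H + 1):
    F = pca_factors(X[:t], R)
    errs_factor.append(y[t + H - 1] - ols_forecast(F, y[:t], H))
    errs_ar.append(y[t + H - 1] - ols_forecast(y[:t, None], y[:t], H))
errs_factor, errs_ar = np.array(errs_factor), np.array(errs_ar)
print("relative MSFE (factor model / AR benchmark):",
      round((errs_factor**2).mean() / (errs_ar**2).mean(), 3))
```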

Relevance: 20.00%

Abstract:

Using normal mode analysis, Rayleigh-Taylor instability is investigated for a three-layer viscous stratified incompressible steady flow, where the top (third) and bottom (first) layers extend to infinity and the middle layer has a small thickness δ. The wave Reynolds number in the middle layer is assumed to be sufficiently small. A dispersion relation (a seventh-degree polynomial in the wave frequency ω), valid up to the order of the maximal value of all possible K^j (j ⩽ 0, where K is the wave number) in each coefficient of the polynomial, is obtained. A sufficient condition for instability is found for the first time by pursuing a medium-wavelength analysis. It depends on the ratios (α and β) of the coefficients of viscosity, the thickness of the middle layer δ, the surface tension ratio T, and the wave number K. This is a new analytical criterion for Rayleigh-Taylor instability of three-layer fluids, and it recovers the results of the corresponding problem for two-layer fluids. Among the results obtained, it is observed that taking the coefficients of viscosity of the second and third layers to be the same can inhibit the effect of surface tension completely. For a large wave number K, the thickness of the middle layer should be correspondingly small to keep the domain of dependence of the threshold wave number Kc constant for fixed α, β and T.
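
The normal-mode instability check itself can be sketched as follows. The actual seventh-degree dispersion polynomial derived in the paper is not reproduced; its coefficients are replaced by a hypothetical placeholder function of (K, α, β, δ, T), and the convention that disturbances grow when a root ω has a positive imaginary part is an assumption of this sketch.

```python
import numpy as np

# Placeholder for the dispersion polynomial D(omega; K, alpha, beta, delta, T):
# the paper's coefficients are not reproduced, so these are hypothetical
# numbers of the right shape (degree 7 -> 8 coefficients, highest power first).
def dispersion_coefficients(K, alpha, beta, delta, T):
    return [1.0, 0.8, -0.5 * K, 0.2 * alpha, -0.1 * beta,
            0.05 * delta, -0.02 * T * K**2, 0.01]

def unstable(K, alpha, beta, delta, T):
    """Normal-mode check: disturbances ~ exp(-i*omega*t) grow if Im(omega) > 0."""
    roots = np.roots(dispersion_coefficients(K, alpha, beta, delta, T))
    growth = roots.imag.max()                 # largest temporal growth rate
    return growth > 0.0, growth

for K in (0.5, 1.0, 2.0):
    flag, growth = unstable(K, alpha=2.0, beta=0.5, delta=0.05, T=1.2)
    print(f"K={K}: unstable={flag}, max Im(omega)={growth:.3f}")
```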

Relevance: 20.00%

Abstract:

Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census in the 1940s developed a sampling design for the Current Population Survey (CPS). A significant factor was also that digital computers became available to statisticians. In the early 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by the French scientist P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem. These were published in a memoir in 1774, which is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which was depicted by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. In 1894, at the International Statistical Institute meeting, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples. Its idea was that the sample should be a miniature of the population, a notion that is still prevailing. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, in the beginning of the 20th century, carried out several surveys in the UK. He also developed the theory of statistical inference for finite populations, based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science: he revolutionized the theory of statistics. In addition, he introduced a new statistical inference model which is still the prevailing paradigm. Its essential ideas are to draw samples repeatedly from the same population and to assume that population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory for double sampling. This gave the statisticians at the U.S. Census Bureau the central idea for developing the complex survey design for the CPS. An important criterion was to have a method in which the costs of data collection were acceptable and which provided approximately equal interviewer workloads, besides sufficient accuracy in estimation.

Relevance: 20.00%

Abstract:

A computer code is developed, as part of an ongoing project on computer-aided process modelling of forging operations, to simulate heat transfer in a die-billet system. The code, developed on a stage-by-stage technique, is based on an Alternating Direction Implicit (ADI) scheme. The experimentally validated code is used to study the effect of process specifics such as die preheat temperature, machine ascent time, rate of deformation, and dwell time on the thermal characteristics in a batch coining operation, where deformation is restricted to the surface level only.
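
The core of an Alternating Direction Implicit step can be sketched as follows for plain 2D conduction on a square grid. The actual die-billet geometry, interface heat-transfer conditions, and the stage-by-stage coupling with the forging stages are not reproduced, and the grid, material, and temperature values are assumed for illustration.

```python
import numpy as np

N, ALPHA, DX, DT = 41, 1e-5, 1e-3, 0.05        # grid points, diffusivity (m^2/s), spacing (m), time step (s) -- all assumed
R = ALPHA * DT / (2 * DX**2)

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal, d: rhs)."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(T):
    """One Peaceman-Rachford ADI step: implicit in x / explicit in y, then the reverse."""
    half = T.copy()
    a = np.full(N - 2, -R); b = np.full(N - 2, 1 + 2 * R); c = np.full(N - 2, -R)
    # sweep 1: implicit along the first (x) index, explicit along the second (y) index
    for j in range(1, N - 1):
        d = T[1:-1, j] + R * (T[1:-1, j + 1] - 2 * T[1:-1, j] + T[1:-1, j - 1])
        d[0] += R * T[0, j]; d[-1] += R * T[-1, j]    # Dirichlet boundary contributions
        half[1:-1, j] = thomas(a, b, c, d)
    # sweep 2: implicit along the second (y) index, explicit along the first (x) index
    new = half.copy()
    for i in range(1, N - 1):
        d = half[i, 1:-1] + R * (half[i + 1, 1:-1] - 2 * half[i, 1:-1] + half[i - 1, 1:-1])
        d[0] += R * half[i, 0]; d[-1] += R * half[i, -1]
        new[i, 1:-1] = thomas(a, b, c, d)
    return new

# Workpiece initially at 1000 C, boundaries held at a 250 C die preheat temperature (both assumed).
T = np.full((N, N), 1000.0)
T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 250.0
for _ in range(100):
    T = adi_step(T)
print(f"centre temperature after {100 * DT:.1f} s: {T[N // 2, N // 2]:.1f} C")
```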