996 results for CPL Criterion Function
Abstract:
Prognostic procedures can be based on ranked linear models. Ranked regression-type models are designed on the basis of feature vectors combined with a set of relations defined on selected pairs of these vectors. Feature vectors are composed of numerical results of measurements on particular objects or events. Ranked relations defined on selected pairs of feature vectors represent additional knowledge and can reflect experts' opinions about the considered objects. Ranked models have the form of linear transformations of feature vectors onto a line which preserve a given set of relations as well as possible. Ranked models can be designed through the minimization of a special type of convex and piecewise linear (CPL) criterion function. Some sets of ranked relations cannot be well represented by a single ranked model; decomposing the global model into a family of local ranked models can improve the representation. A procedure for such decomposition of ranked models is described in this paper.
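To make the construction concrete, the following is a minimal sketch of fitting such a ranked model, assuming a hinge-type CPL penalty on each ranked pair; the margin `delta`, the learning rate and all names are illustrative choices, not the authors' exact formulation:

```python
import numpy as np

def fit_ranked_model(X, pairs, delta=1.0, lr=0.01, epochs=500, seed=0):
    """Fit a linear ranked model w by subgradient descent on a CPL-type
    criterion: each ranked pair (i, j), meaning 'object i precedes object
    j on the line', contributes max(0, delta - w . (X[j] - X[i]))."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    D = np.array([X[j] - X[i] for i, j in pairs])   # pairwise difference vectors
    for _ in range(epochs):
        margins = D @ w
        violated = margins < delta                  # relations not yet preserved
        if not violated.any():                      # all relations preserved
            break
        w += lr * D[violated].sum(axis=0)           # subgradient step
    return w

# toy usage: project four 2-D feature vectors onto a line that
# preserves the ranked relations 0 < 1 < 2 < 3
X = np.array([[0.0, 1.0], [1.0, 0.5], [2.0, 2.0], [3.0, 0.0]])
pairs = [(0, 1), (1, 2), (2, 3)]
w = fit_ranked_model(X, pairs)
print("projections:", np.round(X @ w, 2))
```

Each violated relation contributes one linear piece to the criterion, so the sum remains convex and piecewise linear and the subgradient step is well defined everywhere, including at the kinks.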
Abstract:
The physical model was based on the Newton-Euler method. The model was developed using the scientific computing program Mathematica®. Several simulations were run varying the forward speeds (0.69, 1.12, 1.48, 1.82 and 2.12 m s⁻¹), soil profiles (sinusoidal, ascending and descending ramp) and profile heights (0.025 and 0.05 m) to obtain the normal force of soil reaction. After the initial simulations, the mechanism was optimized using the scientific computing program Matlab®, with the criterion (objective function) being the minimization of the normal reaction force of the profile (FN). The design variables were the lengths of the bars (L1y, L2, l3 and L4), the operating height (L7), the initial length of the spring (Lmo) and the elastic constant of the spring (kt). The lack of robustness of the mechanism with respect to the operating height was circumvented by using a spring with low stiffness and large length. The results demonstrated that the optimized mechanism showed better flotation performance than the initial mechanism.
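The optimization itself was carried out in Matlab®; the Python sketch below only illustrates the same pattern of bounded minimization over the seven design variables. The `normal_force` objective here is a purely hypothetical stand-in: in the real workflow it would run the Newton-Euler simulation over a soil profile and return the reaction force FN.

```python
import numpy as np
from scipy.optimize import minimize

def normal_force(design):
    """Hypothetical stand-in for the Newton-Euler simulation. `design`
    holds (L1y, L2, l3, L4, L7, Lmo, kt); the real objective would
    simulate the mechanism over a soil profile and return the normal
    reaction force FN. The quadratic bowl below is purely illustrative."""
    ref = np.array([0.30, 0.45, 0.25, 0.40, 0.10, 0.20, 800.0])
    return float(np.sum(((design - ref) / ref) ** 2)) + 50.0

# bars L1y, L2, l3, L4; operating height L7; spring length Lmo; stiffness kt
x0 = np.array([0.35, 0.40, 0.30, 0.35, 0.12, 0.25, 1200.0])
bounds = [(0.1, 0.6)] * 4 + [(0.05, 0.3), (0.1, 0.5), (100.0, 5000.0)]
res = minimize(normal_force, x0, bounds=bounds, method="L-BFGS-B")
print("optimized design:", np.round(res.x, 3), "FN:", round(res.fun, 2))
```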
Abstract:
Background: The inference of gene regulatory networks (GRNs) from large-scale expression profiles is currently one of the most challenging problems in systems biology. Many techniques and models have been proposed for this task. However, it is generally not possible to recover the original topology with great accuracy, mainly because the time series are short relative to the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of GRN inference methods based on entropy (mutual information), a new criterion function is proposed here. Results: In this paper we introduce the use of the generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach, and the conditional entropy is applied as criterion function. In order to assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expressions generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks, and its gene transfer functions are drawn at random from the set of possible Boolean functions, thus creating its dynamics. The DREAM time series data, on the other hand, vary in network size, and their topologies are based on real networks; the dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions: A remarkable improvement of accuracy was observed in the experimental results, the non-Shannon entropy reducing the number of false connections in the inferred topology. The best value of the free parameter of the Tsallis entropy was on average in the range 2.5 ≤ q ≤ 3.5 (hence, subextensive entropy), which opens new perspectives for GRN inference methods based on information theory and for investigation of the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/.
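A minimal sketch of the criterion, assuming binarized expression data; the Tsallis formula is standard, but the exhaustive pair search, the toy data and all names are illustrative rather than the DimReduction implementation:

```python
import numpy as np
from itertools import combinations

def tsallis_entropy(p, q=2.5):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1); Shannon in the limit q -> 1."""
    p = p[p > 0]
    if abs(q - 1.0) < 1e-12:
        return -np.sum(p * np.log(p))              # Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def conditional_tsallis(target, predictors, q=2.5):
    """Tsallis entropy of `target` conditioned on the joint states of the
    predictor columns, weighted by the observed frequency of each state."""
    states, counts = np.unique(predictors, axis=0, return_counts=True)
    h = 0.0
    for s, c in zip(states, counts):
        mask = np.all(predictors == s, axis=1)
        _, tc = np.unique(target[mask], return_counts=True)
        h += (c / len(target)) * tsallis_entropy(tc / tc.sum(), q)
    return h

# toy usage: recover the pair of regulators of a binary target gene
rng = np.random.default_rng(0)
expr = rng.integers(0, 2, size=(50, 5))            # 50 time points, 5 binary genes
target = expr[:, 0] ^ expr[:, 2]                   # target driven by genes 0 and 2
pairs = list(combinations(range(5), 2))
scores = [conditional_tsallis(target, expr[:, list(p)]) for p in pairs]
print("selected regulators:", pairs[int(np.argmin(scores))])
```

For q > 1 the criterion weights dominant conditional probabilities more heavily than Shannon entropy, which is consistent with the reported reduction of false connections.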
Abstract:
A new parametric minimum distance time-domain estimator for ARFIMA processes is introduced in this paper. The proposed estimator minimizes the sum of squared autocorrelations of the residuals obtained after filtering a series through the ARFIMA parameters. The estimator is easy to compute and is consistent and asymptotically normally distributed for fractionally integrated (FI) processes with an integration order d strictly greater than -0.75. Therefore, it can be applied to both stationary and non-stationary processes. Deterministic components are also allowed in the DGP. Furthermore, as a by-product, the estimation procedure provides an immediate check on the adequacy of the specified model: the criterion function, when evaluated at the estimated values, coincides with the Box-Pierce goodness-of-fit statistic. Empirical applications and Monte Carlo simulations supporting the analytical results and showing the good performance of the estimator in finite samples are also provided.
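For the pure fractional case ARFIMA(0, d, 0), the criterion might be sketched as follows; the lag cap m, the truncated-expansion simulation shortcut and all names are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def frac_diff(x, d):
    """Apply the fractional differencing filter (1 - L)^d via its
    binomial expansion: pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    n = len(x)
    pi = np.empty(n)
    pi[0] = 1.0
    for k in range(1, n):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    # residual_t = sum_{k <= t} pi_k * x_{t-k}
    return np.array([pi[: t + 1] @ x[t::-1] for t in range(n)])

def md_criterion(d, x, m=20):
    """Sum of squared residual autocorrelations up to lag m; up to the
    scaling factor n this is the Box-Pierce statistic of the residuals."""
    e = frac_diff(x, d)
    e = e - e.mean()
    r = np.array([e[k:] @ e[:-k] for k in range(1, m + 1)]) / (e @ e)
    return np.sum(r ** 2)

# toy usage: estimate d for a fractional noise series built with d = 0.3
rng = np.random.default_rng(1)
x = frac_diff(rng.standard_normal(500), -0.3)   # invert the filter to simulate
fit = minimize_scalar(md_criterion, bounds=(-0.5, 0.99), args=(x,), method="bounded")
print("estimated d:", round(fit.x, 2))
```

Up to the sample-size factor n, the minimized value is the Box-Pierce statistic of the filtered residuals, which is why the fitted criterion doubles as a goodness-of-fit check.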
Abstract:
The basic matrices method is suggested for the analysis of the Leontief model (LM) when some of its components are imprecisely given. The LM can be construed as a forecasting task for products' expenses and output on the basis of the known statistical information, when the values of several elements of the technological matrix, of the restriction vector and of the variables' limits are imprecisely given. Elements of the technological matrix and the right-hand sides of the restriction vector can occur as functions of some arguments; in this case a dynamic analogue of the task arises. An essential complication of the LM lies in the inclusion of restrictions on the variables and of a criterion function.
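For reference, the crisp input-output relations that this imprecise (fuzzy) analysis generalizes can be written as below; the linear-programming form on the second line is one standard way the restrictions and the criterion function enter:

```latex
% Crisp Leontief relations: A is the technological matrix, x the gross
% output vector, y the final-demand (restriction) vector:
x = A x + y
\quad\Longleftrightarrow\quad
(I - A)\,x = y, \qquad x \ge 0 .
% Adding a criterion function c and bounds on the variables turns the
% task into a linear program:
\max_{x}\; c^{\top} x
\quad\text{s.t.}\quad
(I - A)\,x \ge y,
\qquad
\underline{x} \le x \le \overline{x} .
```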
Abstract:
Information extraction or knowledge discovery from large data sets should be linked to a data aggregation process. Data aggregation can result in a new data representation with a decreased number of objects in a given set. A deterministic approach to separable data aggregation means a smaller number of objects without mixing objects from different categories. A statistical approach is less restrictive and allows for almost separable data aggregation, with a low level of mixing of objects from different categories. Layers of formal neurons can be designed for the purpose of data aggregation in both the deterministic and the statistical approach. The proposed design method is based on the minimization of convex and piecewise linear (CPL) criterion functions.
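The abstract does not spell the criterion out; a representative perceptron-type CPL criterion for designing a single formal neuron has the following form, where the margins and weights are generic placeholders:

```latex
% y_j are (augmented) feature vectors, delta_j > 0 margins, and
% alpha_j >= 0 importance weights; Phi is convex and piecewise linear
% in the neuron's weight vector w.
\varphi_j(w) = \max\bigl(0,\; \delta_j - \langle w, y_j \rangle\bigr),
\qquad
\Phi(w) = \sum_{j} \alpha_j\, \varphi_j(w) .
```

Minimizing Φ(w) positions a single neuron's hyperplane; layers of such neurons then provide the aggregated, separable or almost separable representation.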
Abstract:
The contributions of this dissertation are the development of two new interrelated approaches to video data compression: (1) a level-refined motion estimation and subband compensation method for effective motion estimation and motion compensation; (2) a shift-invariant sub-decimation decomposition method to overcome the deficiency of the decimation process in estimating motion, which stems from the shift-variant property of the wavelet transform. The enormous data generated by digital videos call for efficient video compression techniques to conserve storage space and minimize bandwidth utilization. The main idea of video compression is to reduce the inter-pixel redundancies inside and between the video frames by applying motion estimation and motion compensation (MEMC) in combination with spatial transform coding. To locate the global minimum of the matching criterion function at reasonable cost, hierarchical motion estimation with coarse-to-fine resolution refinements using the discrete wavelet transform is applied, owing to its intrinsic multiresolution and scalability. Because most of the energy is concentrated in the low-resolution subbands and decreases in the high-resolution subbands, a new approach called the level-refined motion estimation and subband compensation (LRSC) method is proposed. It identifies the possible intra-blocks in the subbands for lower-entropy coding while keeping the computational load of motion estimation as low as in the level-refined method, thus achieving both temporal compression quality and computational simplicity. Since circular convolution is applied in the wavelet transform to obtain the decomposed subframes without coefficient expansion, a symmetric-extended wavelet transform is designed for the finite-length frame signals, giving more accurate motion estimation without discontinuous boundary distortions. Although wavelet-transformed coefficients still contain spatial-domain information, motion estimation in the wavelet domain is not as straightforward as in the spatial domain because of the shift variance of the decimation process of the wavelet transform. A new approach called the sub-decimation decomposition method is proposed, which maintains the motion consistency between the original frame and the decomposed subframes, consequently improving wavelet-domain video compression through shift-invariant motion estimation and compensation.
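As a concrete reference point, here is a minimal single-level sketch of block matching with the sum of absolute differences (SAD) as the matching criterion function; the level-refined scheme would repeat such a search per decomposition level, and the frame sizes and names here are illustrative:

```python
import numpy as np

def best_match(ref, cur_block, top, left, radius=4):
    """Full search over a +/-radius window using the sum of absolute
    differences (SAD) as the matching criterion function."""
    bh, bw = cur_block.shape
    best = (np.inf, (0, 0))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > ref.shape[0] or x + bw > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            sad = np.abs(ref[y:y + bh, x:x + bw].astype(int) - cur_block.astype(int)).sum()
            if sad < best[0]:
                best = (sad, (dy, dx))
    return best  # (criterion value, motion vector)

# toy usage: recover a known shift between two frames
rng = np.random.default_rng(2)
ref = rng.integers(0, 256, size=(32, 32))
cur = np.roll(ref, shift=(2, -1), axis=(0, 1))      # current frame, shifted
sad, mv = best_match(ref, cur[8:16, 8:16], top=8, left=8)
print("motion vector:", mv, "SAD:", sad)
```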
Abstract:
The aim of this thesis is to review and augment the theory and methods of optimal experimental design. In Chapter 1 the scene is set by considering the possible aims of an experimenter prior to an experiment, the statistical methods one might use to achieve those aims and how experimental design might aid this procedure. It is indicated that, given a criterion for design, an a priori optimal design will only be possible in certain instances and that, otherwise, some form of sequential procedure would seem to be indicated. In Chapter 2 an exact experimental design problem is formulated mathematically and is compared with its continuous analogue. Motivation is provided for the solution of this continuous problem, and the remainder of the chapter concerns this problem. A necessary and sufficient condition for optimality of a design measure is given. Problems which might arise in testing this condition are discussed, in particular with respect to possible non-differentiability of the criterion function at the design being tested. Several examples are given of optimal designs which may be found analytically and which illustrate the points discussed earlier in the chapter. In Chapter 3 numerical methods of solution of the continuous optimal design problem are reviewed. A new algorithm is presented with illustrations of how it should be used in practice. It is shown that, for reasonably large sample sizes, continuously optimal designs may be approximated well by an exact design. In situations where this is not satisfactory, algorithms for the improvement of this design are reviewed. Chapter 4 consists of a discussion of sequentially designed experiments, with regard both to the underlying philosophies and to the application of the methods of statistical inference. In Chapter 5 we constructively criticise previous suggestions for fully sequential design procedures. Alternative suggestions are made, along with conjectures as to how these might improve performance. Chapter 6 presents a simulation study whose aim is to investigate the conjectures of Chapter 5; the results of this study provide empirical support for them. In Chapter 7 examples are analysed. These suggest aids to sequential experimentation by means of reduction of the dimension of the design space and the possibility of experimenting semi-sequentially. Further examples are considered which stress the importance of the use of prior information in situations of this type. Finally, we consider the design of experiments when semi-sequential experimentation is mandatory because of the necessity of taking batches of observations at the same time. In Chapter 8 we look at some of the assumptions which have been made and indicate what may go wrong where these assumptions no longer hold.
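The thesis text is not reproduced here, but the classic instance of such a necessary and sufficient condition is the Kiefer-Wolfowitz general equivalence theorem for D-optimality, sketched below for a p-parameter linear model:

```latex
% Standardized variance of the predicted response at x under the
% design measure xi, for a linear model with regression function f:
d(x, \xi) = f(x)^{\top} M(\xi)^{-1} f(x),
\qquad
M(\xi) = \int_{\mathcal{X}} f(x)\, f(x)^{\top}\, \xi(\mathrm{d}x) .
% General equivalence theorem (Kiefer-Wolfowitz):
\xi^{*} \text{ is D-optimal}
\iff
\max_{x \in \mathcal{X}} d(x, \xi^{*}) = p ,
% with the maximum attained at the support points of xi*.
```

Non-differentiability of the kind mentioned above arises for other criteria, e.g. E-optimality, where the smallest eigenvalue of M(ξ) is not a smooth function of the design when eigenvalues coincide.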
Abstract:
The paper discusses the effect of stress triaxiality on the onset and evolution of damage in ductile metals. A series of tests, including shear tests and experiments on smooth and pre-notched tension specimens, was carried out for a wide range of stress triaxialities. The underlying continuum damage model is based on a kinematic definition of damage tensors. The modular structure of the approach is accomplished by the decomposition of strain rates into elastic, plastic and damage parts. Free energy functions with respect to fictitious undamaged configurations as well as damaged ones are introduced separately, leading to elastic material laws which are affected by increasing damage. In addition, a macroscopic yield condition and a flow rule are used to adequately describe the plastic behavior. Numerical simulations of the experiments are performed and good correlation between tests and numerical results is achieved. Based on the experimental and numerical data, the damage criterion formulated in stress space is quantified. Different branches of this function are taken into account, corresponding to different damage modes depending on stress triaxiality and Lode parameter. In addition, the identification of material parameters is discussed in detail. (C) 2007 Elsevier Ltd. All rights reserved.
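In symbols, the modular decomposition mentioned above reads as follows; the triaxiality definition on the right is the usual one (mean stress over equivalent stress) and is assumed here rather than quoted from the paper:

```latex
% Additive split of the strain rate tensor into elastic, plastic and
% damage parts (the basis of the modular model structure), together
% with the stress triaxiality:
\dot{\boldsymbol{\varepsilon}}
  = \dot{\boldsymbol{\varepsilon}}^{\mathrm{el}}
  + \dot{\boldsymbol{\varepsilon}}^{\mathrm{pl}}
  + \dot{\boldsymbol{\varepsilon}}^{\mathrm{da}},
\qquad
\eta = \frac{\sigma_{\mathrm{m}}}{\sigma_{\mathrm{eq}}} .
% Branches of the damage criterion in stress space then correspond to
% different damage modes in different ranges of eta and the Lode parameter.
```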
Abstract:
Tantalum oxynitride thin films were produced by magnetron sputtering. The films were deposited using a pure Ta target and a working atmosphere with a constant N2/O2 ratio. The choice of this constant ratio limits the study concerning the influence of each reactive gas, but allows a deeper understanding of the aspects related to the affinity of Ta to the non-metallic elements, and it is economically advantageous. This work begins by analysing the data obtained directly from the film deposition stage, followed by the analysis of the morphology, composition and structure. For a better understanding of the influence of the deposition parameters, the analyses are presented using the following criterion: the films were divided into two sets, one of them produced with a grounded substrate holder and the other with a polarization of −50 V. Each one of these sets was produced with different partial pressures of the reactive gases, P(N2 + O2). All the films exhibited an O/N ratio higher than the N/O ratio in the deposition chamber atmosphere. In the case of the films produced with a grounded substrate holder, a strong increase of the O content is observed, associated with a strong decrease of the N content, when P(N2 + O2) is higher than 0.13 Pa. The higher affinity of Ta for O strongly influences the structural evolution of the films. Grazing incidence X-ray diffraction showed that the lower partial pressure films were crystalline, while X-ray reflectivity studies showed that the density of the films depended on the deposition conditions: the higher the gas pressure, the lower the density. Firstly, a dominant Ta structure is observed for low P(N2 + O2); secondly, an fcc-Ta(N,O) structure for intermediate P(N2 + O2); thirdly, the films are amorphous for the highest partial pressures. The comparison of the characteristics of both sets of produced TaNxOy films is explained in detail in the text.
Abstract:
This study presents a classification criterion for two-class Cannabis seedlings. As the cultivation of drug-type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask laboratories to determine the chemotype of cannabis plants from seized material in order to ascertain whether or not a plantation is legal. In this study, the classification analysis is based on data obtained from the relative proportions of three major leaf compounds measured by gas chromatography interfaced with mass spectrometry (GC-MS). The aim is to discriminate between drug type (illegal) and fiber type (legal) cannabis at an early stage of growth. A Bayesian procedure is proposed: a Bayes factor is computed and classification is performed on the basis of the decision maker's specifications (i.e. prior probability distributions on cannabis type and consequences of classification measured by losses). Classification rates are computed with two statistical models and the results are compared. A sensitivity analysis is then performed to analyze the robustness of the classification criteria.
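A minimal sketch of such a Bayesian decision rule is given below; the Gaussian class-conditional models, their parameters and the loss values are purely hypothetical placeholders, not the two statistical models compared in the study:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical class-conditional models for the vector of relative
# proportions of the three major leaf compounds.
drug  = multivariate_normal(mean=[0.75, 0.15, 0.10], cov=np.diag([0.010, 0.005, 0.004]))
fiber = multivariate_normal(mean=[0.10, 0.80, 0.10], cov=np.diag([0.005, 0.010, 0.004]))

def classify(x, prior_drug=0.5, loss_fp=1.0, loss_fn=1.0):
    """Bayes factor BF = p(x | drug) / p(x | fiber); decide 'drug' when
    BF exceeds the threshold implied by the prior probabilities and the
    losses of the two misclassifications (false positive / false negative)."""
    bf = drug.pdf(x) / fiber.pdf(x)
    threshold = (loss_fp / loss_fn) * (1 - prior_drug) / prior_drug
    return bf, ("drug type" if bf > threshold else "fiber type")

bf, label = classify([0.70, 0.20, 0.10])
print(f"BF = {bf:.2e} -> {label}")
```

The threshold makes the decision maker's specifications explicit: raising the loss of a false "drug" call, or lowering the prior probability of drug type, raises the evidence required to classify a seedling as illegal.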
Abstract:
A new strategy for the incremental building of multilayer feedforward neural networks is proposed in the context of the approximation of functions from R^p to R^q using noisy data. A stopping criterion based on the properties of the noise is also proposed. Experiments on both artificial and real data are carried out, and two alternatives of the proposed construction strategy are compared.
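A loose sketch of the stopping idea, assuming the noise variance is known; it retrains an off-the-shelf scikit-learn network from scratch at each size, whereas the paper's strategy grows the network incrementally, so this is only an approximation of the procedure:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_incrementally(X, y, noise_var, max_units=50, step=2, seed=0):
    """Grow a one-hidden-layer network unit by unit and stop once the
    training MSE drops to the (assumed known) noise variance; fitting
    further would only model the noise."""
    for h in range(step, max_units + 1, step):
        net = MLPRegressor(hidden_layer_sizes=(h,), max_iter=5000,
                           random_state=seed).fit(X, y)
        mse = np.mean((net.predict(X) - y) ** 2)
        if mse <= noise_var:
            return net, h, mse
    return net, h, mse

# toy usage: noisy 1-D function with noise variance sigma^2 = 0.09
rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=200)
net, units, mse = build_incrementally(X, y, noise_var=0.09)
print(f"stopped at {units} hidden units, training MSE = {mse:.4f}")
```

Stopping once the training error reaches the noise floor guards against adding units that would merely fit the noise.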
Abstract:
The hypothesis that ornaments can honestly signal quality only if their expression is condition-dependent has dominated the study of the evolution and function of colour traits. Much less interest has been devoted to the adaptive function of colour traits whose expression is not, or only weakly, sensitive to body condition and to the environment in which individuals live. The aim of the present paper is to review the current theoretical and empirical knowledge of the evolution, maintenance and adaptive function of plumage colour traits whose expression is mainly under genetic control. The finding that in many bird species the inheritance of colour morphs follows the laws of Mendel indicates that genetic colour polymorphism is frequent. Polymorphism may have evolved, or be maintained, because each colour morph facilitates the exploitation of alternative ecological niches, as suggested by the observation that individuals are not randomly distributed among habitats with respect to coloration. Consistent with the hypothesis that different colour morphs are linked to alternative strategies is the finding that in a majority of species polymorphism is associated with reproductive parameters and with behavioural, life-history and physiological traits. Experimental studies have shown that such covariations can have a genetic basis. These observations suggest that colour polymorphism has an adaptive function. Aviary and field experiments have demonstrated that colour polymorphism is used as a criterion in mate-choice decisions and dominance interactions, confirming the claim that conspecifics assess each other's colour morphs. The factors favouring the evolution and maintenance of genetic variation in coloration are reviewed, but empirical data to assess their importance are virtually lacking. Although current theory predicts that only condition-dependent traits can signal quality, the present review shows that genetically inherited morphs can reveal the same qualities. The study of genetic colour polymorphism will provide important and original insights into the adaptive function of conspicuous traits.