933 results for "time dependant cost function"


Relevance: 100.00%

Abstract:

Toward the ultimate goal of replacing field-based evaluation of seasonal growth habit, we describe the design and validation of a multiplex polymerase chain reaction (PCR) assay diagnostic for allelic status at the barley (Hordeum vulgare ssp. vulgare L.) vernalization locus VRN-H1. By assaying for the presence of all known insertion–deletion polymorphisms thought to be responsible for the difference between spring and winter alleles, this assay directly tests for functional polymorphism at VRN-H1. Four of the nine previously recognized VRN-H1 haplotypes (including both winter alleles) give unique profiles with this assay. The remaining five spring haplotypes share a single profile, indicative of function-altering deletions spanning, or adjacent to, the putative "vernalization critical" region of intron 1. When used in conjunction with a previously published PCR-based assay diagnostic for alleles at VRN-H2, it was possible to predict growth habit in all 100 contemporary UK spring and winter lines analyzed in this study. This assay is likely to find application when seasonal growth habit must be determined without the time and cost of phenotypic assessment, and during marker-assisted selection using conventional and multicross population analysis.

Relevance: 100.00%

Abstract:

We propose a new sparse model construction method aimed at maximizing a model's generalisation capability for a large class of linear-in-the-parameters models. The coordinate descent optimization algorithm is employed with a modified l1-penalized least squares cost function in order to estimate a single parameter and its regularization parameter simultaneously, based on the leave-one-out mean square error (LOOMSE). Our original contribution is to derive a closed form for the optimal LOOMSE regularization parameter of a single-term model, for which we show that the LOOMSE can be computed analytically without actually splitting the data set, leading to a very simple parameter estimation method. We then integrate the new results within the coordinate descent optimization algorithm to update model parameters one at a time for linear-in-the-parameters models. Consequently, a fully automated procedure is achieved without resorting to any other validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approach.
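The analytic leave-one-out shortcut at the heart of this idea can be illustrated for a single-regressor ridge model (a minimal sketch under that simplification; the data and regularization value are invented for illustration, and the brute-force comparison is only there to confirm the identity):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 * x + 0.3 * rng.normal(size=50)
lam = 0.5  # regularization parameter (illustrative value)

# Single-term ridge estimate: theta = x'y / (x'x + lam)
theta = (x @ y) / (x @ x + lam)
resid = y - theta * x
h = x**2 / (x @ x + lam)  # leverages (diagonal of the hat matrix)

# Analytic LOOMSE: no data splitting needed
loomse = np.mean((resid / (1.0 - h)) ** 2)

# Brute-force leave-one-out gives exactly the same value
errs = []
for i in range(len(x)):
    xi, yi = np.delete(x, i), np.delete(y, i)
    th_i = (xi @ yi) / (xi @ xi + lam)
    errs.append(y[i] - th_i * x[i])
assert np.allclose(loomse, np.mean(np.array(errs) ** 2))
```

The identity e_i = r_i / (1 - h_i) between the leave-one-out residual and the full-data residual is exact for ridge-type estimators, which is what makes the LOOMSE cheap enough to embed inside a coordinate descent loop.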

Relevance: 100.00%

Abstract:

This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated with the filter's cut-off sharpness. The goal is to achieve a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to decompose the signals optimally in the wavelet domain so that they can subsequently be used as inputs for training a neural network classifier.
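A toy version of a cut-off-sharpness cost can be written down directly from the filter's frequency response (a sketch only; the stop-band edge and the moving-average filters below are arbitrary stand-ins, not the paper's QMF prototypes):

```python
import numpy as np

def sharpness_cost(taps, n_fft=1024, stop_edge=0.3):
    """Energy of a lowpass filter's magnitude response beyond the
    stop-band edge: the smaller this is, the sharper the cut-off."""
    H = np.abs(np.fft.rfft(taps, n_fft))
    f = np.linspace(0.0, 0.5, len(H))  # normalized frequency axis
    H = H / H[0]                        # unit DC gain
    return float(np.sum(H[f >= stop_edge] ** 2))

# A longer moving average leaks less energy past the cut-off than a
# short one, so an optimizer would prefer it under this cost.
short_ma = np.ones(4) / 4
long_ma = np.ones(16) / 16
print(sharpness_cost(long_ma) < sharpness_cost(short_ma))  # True
```

In the paper's setting the optimizer would adjust the QMF lattice parameters rather than raw taps, but the cost being minimized has this same "residual energy outside the desired band" flavor.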

Relevance: 100.00%

Abstract:

Accurate and reliable rain rate estimates are important for various hydrometeorological applications. Consequently, rain sensors of different types have been deployed in many regions. In this work, measurements from different instruments, namely, rain gauge, weather radar, and microwave link, are combined for the first time to estimate with greater accuracy the spatial distribution and intensity of rainfall. The objective is to retrieve the rain rate that is consistent with all these measurements while incorporating the uncertainty associated with the different sources of information. Assuming the problem is not strongly nonlinear, a variational approach is implemented and the Gauss–Newton method is used to minimize the cost function containing proper error estimates from all sensors. Furthermore, the method can be flexibly adapted to additional data sources. The proposed approach is tested using data from 14 rain gauges and 14 operational microwave links located in the Zürich area (Switzerland) to correct the prior rain rate provided by the operational radar rain product from the Swiss meteorological service (MeteoSwiss). A cross-validation approach demonstrates the improvement of rain rate estimates when assimilating rain gauge and microwave link information.
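For a linear observation operator, a single Gauss–Newton step already minimizes the quadratic variational cost J(x) = (x - x_b)'B⁻¹(x - x_b) + (y - Hx)'R⁻¹(y - Hx) exactly. A generic toy (not the MeteoSwiss implementation; every matrix and value below is made up):

```python
import numpy as np

# Prior (background) rain rates on 3 grid cells, e.g. from radar
x_b = np.array([1.0, 2.0, 4.0])
B = np.diag([0.5, 0.5, 0.5])   # background error covariance

# Two point observations (e.g. gauges) hitting cells 0 and 2
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
y = np.array([1.4, 3.0])
R = np.diag([0.1, 0.1])        # observation error covariance

# Normal equations of the quadratic cost (one Gauss-Newton step)
A = np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H
b = np.linalg.inv(B) @ x_b + H.T @ np.linalg.inv(R) @ y
x_a = np.linalg.solve(A, b)

# The analysis is pulled from the prior toward the observations,
# weighted by the relative error covariances; the unobserved cell
# keeps its prior value.
print(x_a)
```

Adding another sensor type (e.g. microwave links) amounts to stacking more rows into H, y, and R, which is what makes the approach flexibly extensible to additional data sources.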

Relevance: 100.00%

Abstract:

A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well known that the usually large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of linear dynamics generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
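The flavor of the exact pole-gradient computation can be conveyed with a deliberately simplified first-order filter (a stand-in for the Laguerre/Kautz/GOBF filters of the paper, not their actual form): differentiating the state recursion y_k = p·y_{k-1} + (1-p)·u_{k-1} with respect to the pole p yields its own recursion, which the sketch checks against finite differences.

```python
import numpy as np

def filter_and_grad(u, p):
    """Run y_k = p*y_{k-1} + (1-p)*u_{k-1} and propagate the exact
    sensitivity s_k = dy_k/dp through a companion recursion:
    s_k = y_{k-1} + p*s_{k-1} - u_{k-1}."""
    y = np.zeros(len(u))
    s = np.zeros(len(u))
    for k in range(1, len(u)):
        y[k] = p * y[k - 1] + (1.0 - p) * u[k - 1]
        s[k] = y[k - 1] + p * s[k - 1] - u[k - 1]
    return y, s

rng = np.random.default_rng(1)
u = rng.normal(size=200)
p = 0.7

_, s = filter_and_grad(u, p)

# Validate the analytic gradient against central finite differences
eps = 1e-6
y_plus, _ = filter_and_grad(u, p + eps)
y_minus, _ = filter_and_grad(u, p - eps)
fd = (y_plus - y_minus) / (2 * eps)
assert np.allclose(s, fd, atol=1e-5)
```

This is the sense in which "the dynamic nature of the filters is fully considered": the sensitivity carries memory through the same recursion as the state, rather than being computed from a static snapshot.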

Relevance: 100.00%

Abstract:

This paper makes three contributions to the literature on the welfare cost of inflation. First, it introduces a new and sensible way of measuring this cost: that of a compensating variation in consumption or income, instead of the equivalent-variation notion that has been used extensively in empirical and theoretical research over the past fifty years. We find this new measure to be interestingly related to the proxy measure of the shopping-time welfare cost of inflation introduced by Simonsen and Cysne (2001). Second, it discusses for which money-demand functions this measure and the shopping-time measure can be evaluated in an economically meaningful way. And, last but not least, it completely orders a comprehensive set of measures of the welfare cost of inflation for these money-demand specifications. All of our results are extended to an economy in which many types of monies are present, and are illustrated with the log-log money-demand specification.
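For concreteness, under a log-log specification m(i) = A·i^(-η) with η < 1, the classical Bailey consumer-surplus benchmark against which such welfare-cost measures are usually compared works out in closed form (a standard textbook computation, not a result specific to this paper):

```latex
w(i) \;=\; \int_0^{i} m(x)\,dx \;-\; i\,m(i)
      \;=\; \frac{A\,i^{1-\eta}}{1-\eta} \;-\; A\,i^{1-\eta}
      \;=\; \frac{\eta}{1-\eta}\,A\,i^{1-\eta}.
```

The log-log form is what makes the ordering of the different measures tractable: every candidate measure ends up proportional to the same power i^(1-η), differing only in its constant.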

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

The aim of this study was to compare the exercise intensity at the minimum lactate (LACmin) with the intensities corresponding to the lactate threshold (LL) and the anaerobic threshold (LAn). Eleven male athletes took part in the study (age, 22.5 ± 3.17 years; height, 172.3 ± 8.2 cm; weight, 66.9 ± 8.2 kg; body fat, 9.8 ± 3.4%). The subjects performed two tests on an electromagnetically braked cycle ergometer (Quinton Corival 400): 1) a continuous incremental exercise test, with an initial load of 100 W and increments of 25 W every three minutes until voluntary exhaustion; and 2) a minimum lactate test, in which the subjects first pedaled twice at 425 W (± 120% max) for 30 seconds, with a one-minute interval, in order to induce lactate accumulation. After eight minutes of passive recovery, the subjects began a continuous incremental test identical to the one described above. The LL and the LAn were identified as the lowest value of the ratio blood lactate (mM) / exercise intensity (W), and as the intensity corresponding to a blood lactate concentration of 3.5 mM, respectively. The LACmin was identified as the intensity corresponding to the lowest lactate concentration during the incremental test. No significant difference was observed between the power at the LL (197.7 ± 20.7 W) and at the LACmin (201.6 ± 13.0 W), both being significantly lower than at the LAn (256.7 ± 33.3 W). Likewise, no significant differences were found for the VO2 (ml·kg-1·min-1) and HR (bpm) obtained at the LL (43.2 ± 5.01; 152.0 ± 13.0) and at the LACmin (42.1 ± 3.9; 159.0 ± 10.0), which were, however, significantly lower than those obtained at the LAn (52.2 ± 8.2; 174.0 ± 13.0, respectively). It can be concluded that, under the experimental conditions of this study, the LACmin test may underestimate the maximal lactate steady state intensity (estimated indirectly via the LAn), in agreement with other studies that determined the maximal lactate steady state directly. Further studies are therefore needed to examine the possible time-dependent component (initial intensity) of the LACmin protocol.

Relevance: 100.00%

Abstract:

We describe a numerical procedure for plotting the force-versus-time curves in elastic collisions between identical conducting balls. A system of parametric equations relating the force and the time to a dimensionless parameter is derived from the assumption of a force compatible with Hertz's theory of collision. A simple experimental arrangement consisting of a mechanical system of colliding balls and an electrical circuit containing a crystal oscillator and an electronic counter is used to measure the collision time as a function of the energy of impact. From the data we can determine the relevant parameters. The calculated results agree very well with the expected values and are consistent with the assumption that the collisions are elastic. (C) 2006 American Association of Physics Teachers.
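A force-versus-time curve of this Hertzian type can be reproduced by directly integrating the contact equation m·δ̈ = -k·δ^(3/2) for the approach of the colliding bodies (a generic sketch in an effective one-body picture; the mass, contact stiffness, and impact speed below are arbitrary illustrative values, not the paper's measured parameters):

```python
import numpy as np

m, k, v0 = 0.05, 1.0e9, 1.0   # kg, N/m^1.5, m/s (illustrative values)
dt = 1.0e-8                    # time step, s

# Integrate the compression delta(t) with semi-implicit Euler
delta, v, t = 0.0, v0, 0.0
forces = []
while v > 0 or delta > 0:
    F = k * max(delta, 0.0) ** 1.5   # Hertz contact force ~ delta^(3/2)
    v -= (F / m) * dt                 # contact force decelerates the approach
    delta += v * dt
    t += dt
    forces.append(F)

# For an elastic collision the bodies separate with the impact speed,
# and F(t) traces a single asymmetric-looking pulse of duration t.
print(f"collision time = {t:.2e} s, peak force = {max(forces):.1f} N")
```

Sweeping v0 in such a loop reproduces the Hertz prediction that the collision time decreases slowly with impact energy (τ ∝ v0^(-1/5)), which is the relationship the crystal-oscillator timing experiment probes.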

Relevance: 100.00%

Abstract:

When the X̄ chart is in use, samples are regularly taken from the process and their means are plotted on the chart. In some cases it is too expensive to obtain the X values, but not the values of a correlated variable Y. This paper presents a model for the economic design of a two-stage control chart, that is, a control chart based on both a performance variable (X) and a surrogate variable (Y). The process is monitored by the surrogate variable until it signals out-of-control behavior, and then a switch is made to the X̄ chart. The X̄ chart is built with central, warning, and action regions. If an X sample mean falls in the central region, process surveillance returns to the Ȳ chart. Otherwise, the process remains under the X̄ chart's surveillance until an X̄ sample mean falls outside the control limits. The search for an assignable cause is undertaken when the performance variable signals out-of-control behavior. In this way, the two variables are used in an alternating fashion. The assumption of an exponential distribution to describe the length of time the process remains in control allows the application of the Markov chain approach for developing the cost function. A study is performed to examine the economic advantages of using performance and surrogate variables. (C) 2003 Elsevier B.V. All rights reserved.
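The Markov-chain machinery behind such cost functions reduces to computing expected sojourn times from a transition matrix via the fundamental matrix N = (I - Q)⁻¹. A minimal sketch (the two transient states and all probabilities and costs below are invented for illustration, not the paper's model):

```python
import numpy as np

# Transient states per sampling period:
#   0 = process monitored by the cheap surrogate (Y) chart
#   1 = process monitored by the expensive performance (X) chart
# The absorbing state (signal acted upon) is implicit: row sums < 1.
Q = np.array([[0.90, 0.10],
              [0.30, 0.55]])

# Fundamental matrix: N[i, j] = expected number of periods spent in
# transient state j before absorption, starting from state i.
N = np.linalg.inv(np.eye(2) - Q)

expected_cycle_length = N.sum(axis=1)[0]   # periods until signal, from state 0
cost_per_period = np.array([1.0, 5.0])     # Y sampling cheap, X expensive
expected_cost = N[0] @ cost_per_period
print(expected_cycle_length, expected_cost)
```

The economic design then searches over chart parameters (which change Q and the per-period costs) to minimize the expected cost per cycle.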

Relevance: 100.00%

Abstract:

This paper presents an economic design of X̄ control charts with variable sample sizes, variable sampling intervals, and variable control limits. The sample size n, the sampling interval h, and the control limit coefficient k vary between minimum and maximum values, tightening or relaxing the control. The control is relaxed when an X̄ value falls close to the target and is tightened when an X̄ value falls far from the target. A cost model is constructed that involves the cost of false alarms, the cost of finding and eliminating the assignable cause, the cost associated with production in an out-of-control state, and the cost of sampling and testing. The assumption of an exponential distribution to describe the length of time the process remains in control allows the application of the Markov chain approach for developing the cost function. A comprehensive study is performed to examine the economic advantages of varying the X̄ chart parameters.

Relevance: 100.00%

Abstract:

The collapse of a trapped Bose-Einstein condensate (BEC) of atoms in states 1 and 2 was studied. When the interaction among the atoms in state i was attractive, component i of the condensate experienced collapse. When the interaction between an atom in state 1 and an atom in state 2 was attractive, both components experienced collapse. The time-dependent Gross-Pitaevskii (GP) equation was used to study the time evolution of the collapse. There was an alternate growth and decay in the number of particles experiencing collapse.
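The time-dependent GP equation is commonly integrated with a split-step Fourier scheme, alternating the potential/nonlinear part in position space with the kinetic part in momentum space. A 1-D sketch with a harmonic trap (illustrative parameters in dimensionless units; the paper's two-component setting would couple two such equations):

```python
import numpy as np

# 1-D GP equation: i dpsi/dt = [-(1/2) d2/dx2 + V(x) + g|psi|^2] psi
n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
V = 0.5 * x**2                                # harmonic trap
g = -1.0                                      # attractive interaction
dt = 1e-3

psi = np.exp(-x**2 / 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n))  # unit norm

for _ in range(500):
    # Half-step in position space (potential + nonlinearity) ...
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))
    # ... full kinetic step in momentum space ...
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))
    # ... and the second potential half-step.
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))

norm = np.sum(np.abs(psi) ** 2) * (L / n)
print(norm)   # each sub-step is unitary, so the norm stays 1
```

Modeling collapse additionally requires a loss term (making the evolution non-unitary), which is how the alternating growth and decay in particle number shows up in such simulations.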

Relevance: 100.00%

Abstract:

We develop an economic model for X̄ control charts having all design parameters varying in an adaptive way, that is, in real time, considering current sample information. In the proposed model, each of the design parameters can assume two values as a function of the most recent process information. The cost function is derived, and it provides a device for the optimal selection of the design parameters. A numerical example illustrates the savings that the developed model can provide. © 2001 Elsevier Science B.V. All rights reserved.

Relevance: 100.00%

Abstract:

This article describes the application of an artificial intelligence planner in a robotized assembly cell that can be integrated into a Flexible Manufacturing System. The objective is to allow different products to be assembled automatically on a single production line with no pre-established assembly plans. The planner's function is to generate action plans for the robot, in real time, from two inputs: the initial state (disposition of the product's parts on the line) and the final state (configuration of the assembled product). Copyright © 2007 IFAC.
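Planning from an initial state and a goal state can be sketched as state-space search: breadth-first search over assembly actions returns a shortest action plan. A toy blocks-world stand-in (the parts, precedence rule, and action names are invented for illustration, not the article's domain model):

```python
from collections import deque

# A state is the tuple of parts already mounted, in mounting order.
PARTS = ("base", "axle", "wheel", "cover")

def actions(state):
    """A part can be mounted once its predecessor is in place."""
    mounted = set(state)
    for i, part in enumerate(PARTS):
        if part not in mounted and (i == 0 or PARTS[i - 1] in mounted):
            yield f"mount {part}", state + (part,)

def plan(initial, goal):
    """Breadth-first search from the initial to the goal state."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for act, nxt in actions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [act]))
    return None  # goal unreachable

print(plan((), PARTS))
# -> ['mount base', 'mount axle', 'mount wheel', 'mount cover']
```

A real planner for this setting would use a richer action representation (e.g. STRIPS-style preconditions and effects) and heuristics, but the generate-plan-from-two-states contract is the same.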

Relevance: 100.00%

Abstract:

In this paper we present an optimization of the Optimum-Path Forest classifier training procedure, based on a theoretical relationship between the minimum spanning forest and the optimum-path forest for a specific path-cost function. Experiments on public datasets have shown that the proposed approach obtains accuracy similar to the traditional one, but with faster training. © 2012 ICPR Org Committee.