968 results for taguchi loss function


Relevance:

100.00%

Abstract:

We present a study in which the energy loss function of Ta2O5, initially derived in the optical limit for a limited region of excitation energies from reflection electron energy loss spectroscopy (REELS) measurements, was improved and extended to the whole momentum and energy excitation region through a suitable theoretical analysis using the Mermin dielectric function and requiring the fulfillment of physically motivated restrictions, such as the f- and KK-sum rules. The material stopping cross section (SCS) and energy-loss straggling measured for 300–2000 keV proton and 200–6000 keV helium ion beams by means of Rutherford backscattering spectrometry (RBS) were compared to the same quantities calculated in the dielectric framework, showing excellent agreement, which we use to judge the reliability of the Ta2O5 energy loss function. Based on this assessment, we have also predicted the inelastic mean free path and the SCS of energetic electrons in Ta2O5.

Relevance:

100.00%

Abstract:

The present research aims to evaluate the usefulness of applying Life Cycle Management in the agricultural sector, focusing on the environmental and socio-economic aspects of decision making in Colombian cocoa production. The appraisal is based on two methodological tools: Life Cycle Assessment, which considers environmental impacts throughout the life cycle of the cocoa production system, and the Taguchi Loss Function, which measures the economic impact of a process's deviation from production targets. Results show that appropriate improvements in farming practices and supply consumption can steer decision-making in the agricultural cocoa sector towards sustainability. For agri-business purposes, such a qualitative shift allows not only meeting consumer demands for environmentally friendly products but also increasing the productivity and competitiveness of cocoa production, all of which has helped Life Cycle Management gain global acceptance. Since farmers play an important role in improving social and economic indicators at the national level, more attention should be paid to upgrading their cropping practices. Finally, one fundamental aspect of national cocoa production is the institutional and governmental support available to farmers in the face of socio-economic or technological needs.
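The Taguchi Loss Function referenced above penalizes any deviation from a production target quadratically, L(y) = k(y - T)^2. A minimal sketch, where the target value and cost coefficient are illustrative and not taken from the study:

```python
def taguchi_loss(y, target, k):
    """Quadratic Taguchi loss: deviation from the target is penalized
    quadratically, with k converting squared deviation into cost."""
    return k * (y - target) ** 2

# Illustrative numbers (not from the study): target yield 500 kg/ha,
# cost coefficient k = 0.02 cost units per (kg/ha)^2.
losses = [taguchi_loss(y, target=500.0, k=0.02) for y in (480.0, 500.0, 530.0)]
```

Hitting the target exactly incurs zero loss, and the penalty grows with the square of the deviation in either direction, which is what distinguishes this view of quality from a simple in-spec/out-of-spec check.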

Relevance:

90.00%

Abstract:

We investigate on-line prediction of individual sequences. Given a class of predictors, the goal is to predict as well as the best predictor in the class, where the loss is measured by the self-information (logarithmic) loss function. The excess loss (regret) is closely related to the redundancy of the associated lossless universal code. Using Shtarkov's theorem and tools from empirical process theory, we prove a general upper bound on the best possible (minimax) regret. The bound depends on certain metric properties of the class of predictors. We apply the bound to both parametric and nonparametric classes of predictors. Finally, we point out a suboptimal behavior of the popular Bayesian weighted average algorithm.
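The self-information loss and the regret it induces can be sketched for a finite comparison class; this toy stand-in for the general predictor classes analyzed in the paper uses made-up probability assignments:

```python
import math

def log_loss(p):
    """Self-information (logarithmic) loss: -log of the probability the
    predictor assigned to the outcome that actually occurred."""
    return -math.log(p)

def regret(online_probs, expert_probs):
    """Excess cumulative log loss of an on-line predictor over the best
    predictor in a finite comparison class."""
    online = sum(log_loss(p) for p in online_probs)
    best = min(sum(log_loss(p) for p in expert) for expert in expert_probs)
    return online - best

# Probabilities each predictor assigned to the three realized outcomes.
r = regret([0.5, 0.5, 0.5], [[0.9, 0.8, 0.9], [0.4, 0.4, 0.4]])
```

The paper's minimax bound concerns the worst case of this regret over all sequences, for which the cumulative log loss of the best expert equals the code length of the corresponding universal code.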

Relevance:

90.00%

Abstract:

ABSTRACT: A model to estimate yield loss caused by Asian soybean rust (ASR) (Phakopsora pachyrhizi) was developed by collecting data from field experiments during the 2009/10 and 2010/11 growing seasons in Passo Fundo, RS. The disease intensity gradient, evaluated at the phenological stages R5.3, R5.4 and R5.5 based on leaflet incidence (LI) and the number of uredinia and lesions/cm², was generated by applying azoxystrobin 60 g a.i./ha + cyproconazole 24 g a.i./ha + 0.5% of the adjuvant Nimbus. The first application occurred when LI = 25% and the remaining ones at 10-, 15-, 20- and 25-day intervals. Harvest occurred at physiological maturity and was followed by grain drying and cleaning. Regression analysis between grain yield and the disease intensity assessment criteria generated 56 linear equations of the yield loss function. The greatest loss was observed at the earliest growth stage, and yield loss coefficients ranged from 3.41 to 9.02 kg/ha for each 1% of leaflet incidence, from 13.34 to 127.4 kg/ha per lesion/cm² for lesion density, and from 5.53 to 110.0 kg/ha per uredinium/cm² for uredinium density.
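The fitted equations are linear damage functions; applying one of the reported coefficients is then a single multiplication. A sketch using the study's upper leaflet-incidence coefficient (the function name is ours, not the paper's):

```python
def yield_loss_kg_ha(leaflet_incidence_pct, coef):
    """Linear damage function: yield loss in kg/ha given leaflet
    incidence in percent. Reported coefficients ranged from
    3.41 to 9.02 kg/ha per 1% of leaflet incidence."""
    return coef * leaflet_incidence_pct

# Worst-case coefficient evaluated at the 25% incidence action threshold.
loss = yield_loss_kg_ha(25.0, 9.02)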

Relevance:

90.00%

Abstract:

The optimum quality that can be asymptotically achieved in the estimation of a probability p using inverse binomial sampling is addressed. A general definition of quality is used in terms of the risk associated with a loss function that satisfies certain assumptions. It is shown that the limit superior of the risk for p asymptotically small has a minimum over all (possibly randomized) estimators. This minimum is achieved by certain non-randomized estimators. The model includes commonly used quality criteria as particular cases. Applications to the non-asymptotic regime are discussed considering specific loss functions, for which minimax estimators are derived.
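Inverse binomial (negative binomial) sampling observes Bernoulli trials until a fixed number of successes occurs, so the number of trials carries the information about p. A simulation sketch using the simple pooled estimator; the minimax estimators the paper derives are more refined:

```python
import random

def inverse_binomial_sample(p, rng):
    """Number of Bernoulli(p) trials needed to observe the first success."""
    n = 1
    while rng.random() >= p:
        n += 1
    return n

# Pooled estimate: total successes divided by total trials. The paper
# studies which estimators minimize the asymptotic risk for small p,
# which this sketch does not attempt to reproduce.
rng = random.Random(0)
samples = [inverse_binomial_sample(0.1, rng) for _ in range(5000)]
p_hat = len(samples) / sum(samples)
```

The key property exploited by inverse sampling is that the relative (not absolute) precision of the estimate stays controlled as p becomes small, at the cost of a random and potentially large number of trials.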

Relevance:

80.00%

Abstract:

What genotype should the scientist specify when conducting a database search to try to find the source of a low-template-DNA (lt-DNA) trace? When the scientist answers this question, he or she makes a decision. Here, we approach this decision problem from a normative point of view by defining a decision-theoretic framework for answering this question for one locus. This framework combines the probability distribution describing the uncertainty over the possible genotypes of the trace's donor with a loss function describing the scientist's preferences concerning false exclusions and false inclusions that may result from the database search. According to this approach, the scientist should choose the genotype designation that minimizes the expected loss. To illustrate the results produced by this approach, we apply it to two hypothetical cases: (1) the case of observing one peak for allele xi on a single electropherogram, and (2) the case of observing one peak for allele xi on one replicate, and a pair of peaks for alleles xi and xj, i ≠ j, on a second replicate. Given that the probabilities of allele drop-out are defined as functions of the observed peak heights, the threshold values marking the turning points at which the scientist should switch from one designation to another are derived in terms of the observed peak heights. For each case, sensitivity analyses show the impact of the model's parameters on these threshold values. The results support the conclusion that the procedure should not rely on a single threshold value for making this decision for all alleles, all loci and in all laboratories.
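The decision rule described is the standard Bayes rule: choose the designation with minimum posterior expected loss. A sketch in which the genotype labels, probabilities and loss values are all illustrative placeholders, not the paper's:

```python
def best_designation(posterior, loss):
    """Pick the designation with minimum posterior expected loss.

    posterior: dict genotype -> probability it is the donor's genotype
    loss:      dict (designation, true_genotype) -> loss incurred
    """
    designations = {d for d, _ in loss}

    def expected_loss(d):
        return sum(posterior[g] * loss[(d, g)] for g in posterior)

    return min(designations, key=expected_loss)

# Hypothetical one-locus situation after observing a single peak for
# allele x_i: either the donor is homozygous x_i/x_i, or a second
# allele dropped out. "x_i/F" stands for a wildcard designation.
posterior = {"x_i/x_i": 0.7, "x_i/drop-out": 0.3}
loss = {
    ("x_i/x_i", "x_i/x_i"): 0.0, ("x_i/x_i", "x_i/drop-out"): 1.0,
    ("x_i/F", "x_i/x_i"): 0.2, ("x_i/F", "x_i/drop-out"): 0.2,
}
choice = best_designation(posterior, loss)
```

With these numbers the safer wildcard designation wins (expected loss 0.2 versus 0.3); the paper's threshold values are exactly the peak heights at which this comparison flips.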

Relevance:

80.00%

Abstract:

This study presents a classification criterion for two-class Cannabis seedlings. As the cultivation of drug-type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask laboratories to determine the chemotype of cannabis plants from seized material in order to ascertain whether a plantation is legal or not. In this study, the classification analysis is based on data obtained from the relative proportions of three major leaf compounds measured by gas chromatography interfaced with mass spectrometry (GC-MS). The aim is to discriminate between drug-type (illegal) and fiber-type (legal) cannabis at an early stage of growth. A Bayesian procedure is proposed: a Bayes factor is computed and classification is performed on the basis of the decision maker's specifications (i.e., prior probability distributions on cannabis type and the consequences of classification measured by losses). Classification rates are computed with two statistical models and the results are compared. A sensitivity analysis is then performed to assess the robustness of the classification criterion.
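In such a Bayesian procedure, the posterior odds are the Bayes factor times the prior odds, and the specified losses set the decision threshold. A schematic sketch, with variable names and numbers of our own choosing rather than the paper's:

```python
def classify(bayes_factor, prior_odds,
             loss_call_drug_when_fiber=1.0, loss_call_fiber_when_drug=1.0):
    """Minimum-expected-loss classification of a seedling.

    bayes_factor: P(data | drug type) / P(data | fiber type)
    prior_odds:   P(drug type) / P(fiber type)
    The posterior odds are compared against the ratio of the two losses.
    """
    posterior_odds = bayes_factor * prior_odds
    threshold = loss_call_drug_when_fiber / loss_call_fiber_when_drug
    return "drug type" if posterior_odds > threshold else "fiber type"
```

Making a false "drug type" call more costly raises the threshold, so stronger evidence is demanded before the illegal chemotype is declared, which is how the decision maker's preferences enter the criterion.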

Relevance:

80.00%

Abstract:

In recent years, kernel methods have proven to be very powerful tools in many application domains in general and in remote sensing image classification in particular. The special characteristics of remote sensing images (high dimension, few labeled samples and different noise sources) are efficiently dealt with by kernel machines. In this paper, we propose the use of structured output learning to improve kernel-based remote sensing image classification. Structured output learning is concerned with the design of machine learning algorithms that not only implement the input-output mapping but also take into account the relations between output labels, thus generalizing unstructured kernel methods. We analyze the framework and introduce it to the remote sensing community. Output similarity is here encoded into SVM classifiers by modifying the model's loss function and the kernel function, either independently or jointly. Experiments on a very high resolution (VHR) image classification problem show promising results and open a wide field of research with structured output kernel methods.

Relevance:

80.00%

Abstract:

This paper proposes a new methodology to compute Value at Risk (VaR) for quantifying losses in credit portfolios. We approximate the cumulative distribution of the loss function by a finite combination of Haar wavelet basis functions and calculate the coefficients of the approximation by inverting its Laplace transform. The Wavelet Approximation (WA) method is especially suitable for non-smooth distributions, often arising in small or concentrated portfolios, when the hypotheses of the Basel II formulas are violated. To test the methodology we consider the Vasicek one-factor portfolio credit loss model as our model framework. WA is an accurate, robust and fast method, allowing VaR to be estimated much more quickly than with a Monte Carlo (MC) method at the same level of accuracy and reliability.
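The Monte Carlo baseline that WA is benchmarked against can be sketched for a homogeneous portfolio under the Vasicek one-factor model. The parameters below are illustrative, loss given default is taken as 1, and this is only the reference simulation, not the paper's wavelet method:

```python
import random
from statistics import NormalDist

def mc_var_vasicek(pd, rho, n_obligors, alpha, n_sims, seed=0):
    """Monte Carlo estimate of the alpha-quantile (VaR) of the portfolio
    loss fraction under the Vasicek one-factor model."""
    norm = NormalDist()
    rng = random.Random(seed)
    k = norm.inv_cdf(pd)  # default threshold on the latent variable
    losses = []
    for _ in range(n_sims):
        y = rng.gauss(0.0, 1.0)  # common systemic factor
        # Default probability conditional on the systemic factor.
        p_cond = norm.cdf((k - rho ** 0.5 * y) / (1.0 - rho) ** 0.5)
        defaults = sum(rng.random() < p_cond for _ in range(n_obligors))
        losses.append(defaults / n_obligors)  # loss fraction, LGD = 1
    losses.sort()
    return losses[int(alpha * n_sims)]  # empirical quantile

var_99 = mc_var_vasicek(pd=0.02, rho=0.15, n_obligors=100,
                        alpha=0.99, n_sims=2000)
```

The cost of this simulation grows with both the number of scenarios and the portfolio size, which is precisely the expense the Laplace-inverted wavelet approximation of the loss distribution avoids.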

Relevance:

80.00%

Abstract:

In a closed economy context there is common agreement that price inflation stabilization is one of the objectives of monetary policy. Moving to an open economy context gives rise to the coexistence of two measures of inflation: domestic inflation (DI) and consumer price inflation (CPI). Which of the two measures should be the target variable? This is the question addressed in this paper. In particular, I use a small open economy model to show that once sticky wages indexed to past CPI inflation are introduced, a completely inward-looking monetary policy is no longer optimal. I first derive a loss function from a second-order approximation of the utility function and then compute the fully optimal monetary policy under commitment. I then use the optimal monetary policy as a benchmark to compare the performance of different monetary policy rules. The main result is that once a positive degree of indexation is introduced in the model, the best-performing rule (among the Taylor-type rules considered) is the one targeting wage inflation and CPI inflation. Moreover, this rule delivers results very close to those obtained under the fully optimal monetary policy with commitment.
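Loss functions derived from such second-order approximations take the form of a weighted sum of squared deviations. A schematic period loss with placeholder weights; in the paper the weights follow from the model's structural parameters rather than being chosen freely:

```python
def policy_loss(pi_cpi, pi_wage, gap, w_cpi=1.0, w_wage=1.0, w_gap=0.5):
    """Schematic period loss: weighted sum of squared CPI inflation,
    wage inflation and the output gap (weights are illustrative)."""
    return w_cpi * pi_cpi ** 2 + w_wage * pi_wage ** 2 + w_gap * gap ** 2
```

Comparing rules then amounts to computing the expected value of this loss under each rule's implied equilibrium dynamics and ranking the results against the commitment benchmark.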

Relevance:

80.00%

Abstract:

We report an investigation on the optical properties of Cu3Ge thin films displaying very high conductivity, with thickness ranging from 200 to 2000 Å, deposited on Ge substrates. Reflectance, transmittance, and ellipsometric spectroscopy measurements were performed at room temperature in the 0.01-6.0, 0.01-0.6, and 1.4-5.0 eV energy range, respectively. The complex dielectric function, the optical conductivity, the energy-loss function, and the effective charge density were obtained over the whole spectral range. The low-energy free-carrier response was well fitted by using the classical Drude-Lorentz dielectric function. A simple two-band model allowed the resulting optical parameters to be interpreted coherently with those previously obtained from transport measurements, hence yielding the densities and the effective masses of electrons and holes.
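The free-carrier fit mentioned above uses the classical Drude form ε(ω) = ε∞ − ωp²/(ω² + iγω), from which the energy-loss function Im(−1/ε) follows directly. A sketch with arbitrary parameters, not the fitted values for Cu3Ge:

```python
def drude_epsilon(omega, omega_p, gamma, eps_inf=1.0):
    """Classical Drude dielectric function, all quantities in the same
    energy units (e.g. eV): eps_inf - omega_p**2 / (omega**2 + i*gamma*omega)."""
    return eps_inf - omega_p ** 2 / (omega * (omega + 1j * gamma))

def energy_loss_function(omega, omega_p, gamma, eps_inf=1.0):
    """Energy-loss function Im(-1/eps), peaked near the plasmon energy."""
    return (-1.0 / drude_epsilon(omega, omega_p, gamma, eps_inf)).imag
```

The loss function is small where |ε| is large and peaks where ε crosses zero, which is why it localizes the plasmon energy that reflectance or ellipsometry data alone do not show directly.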

Relevance:

80.00%

Abstract:

This paper applies probability and decision theory in the graphical interface of an influence diagram to study the formal requirements of rationality which justify the individualization of a person found through a database search. The decision-theoretic part of the analysis studies the parameters that a rational decision maker would use to individualize the selected person. The modeling part (in the form of an influence diagram) clarifies the relationships between this decision and the ingredients that make up the database search problem, i.e., the results of the database search and the different pairs of propositions describing whether an individual is at the source of the crime stain. These analyses evaluate the desirability associated with the decision of 'individualizing' (and 'not individualizing'). They point out that this decision is a function of (i) the probability that the individual in question is, in fact, at the source of the crime stain (i.e., the state of nature), and (ii) the decision maker's preferences among the possible consequences of the decision (i.e., the decision maker's loss function). We discuss the relevance and argumentative implications of these insights with respect to recent comments in specialized literature, which suggest points of view that are opposed to the results of our study.

Relevance:

80.00%

Abstract:

Abstract: This work is concerned with the development and application of novel unsupervised learning methods, with two target applications in mind: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and applied to the problem of forensic case data. This distance is optimized using a loss function related to the preservation of neighborhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering, which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as very large problems.

Relevance:

80.00%

Abstract:

In a recent paper, Komaki studied the second-order asymptotic properties of predictive distributions, using the Kullback-Leibler divergence as a loss function. He showed that estimative distributions with asymptotically efficient estimators can be improved by predictive distributions that do not belong to the model. The model is assumed to be a multidimensional curved exponential family. In this paper we generalize the result, assuming as a loss function any f-divergence. A relationship arises between alpha-connections and optimal predictive distributions. In particular, using an alpha-divergence to measure the goodness of a predictive distribution, the optimal shift of the estimative distribution is related to alpha-covariant derivatives. The expression that we obtain for the asymptotic risk is also useful for studying the higher-order asymptotic properties of an estimator in the mentioned class of loss functions.
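The baseline loss in Komaki's result is the Kullback-Leibler divergence, the f-divergence with f(t) = t log t; the paper extends the analysis to arbitrary f-divergences. For discrete distributions it reads:

```python
import math

def kl_divergence(p, q):
    """D(p || q) = sum_i p_i * log(p_i / q_i), the loss used in Komaki's
    original second-order analysis of predictive distributions. Assumes
    q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The divergence vanishes exactly when the predictive distribution matches the true one and is positive otherwise, which is what makes it a natural risk for comparing estimative and predictive distributions.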

Relevance:

80.00%

Abstract:

This paper develops and estimates a game-theoretical model of inflation targeting where the central banker's preferences are asymmetric around the targeted rate. In particular, positive deviations from the target can be weighted more, or less, severely than negative ones in the central banker's loss function. It is shown that some of the previous results derived under the assumption of symmetry are not robust to the generalization of preferences. Estimates of the central banker's preference parameters for Canada, Sweden, and the United Kingdom are statistically different from the ones implied by the commonly used quadratic loss function. Econometric results are robust to different forecasting models for the rate of unemployment but not to the use of measures of inflation broader than the one targeted.
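A common way to model such asymmetric central-bank preferences (the paper's exact specification may differ) is the linex loss, whose asymmetry parameter controls how much more heavily deviations of one sign are penalized:

```python
import math

def linex_loss(deviation, a):
    """Linex loss exp(a*d) - a*d - 1: for a > 0, positive deviations from
    the inflation target are penalized more heavily than negative ones;
    as a -> 0 it approaches the quadratic loss (a**2 / 2) * d**2."""
    return math.exp(a * deviation) - a * deviation - 1.0
```

Testing the symmetric quadratic loss then reduces to testing whether the estimated asymmetry parameter is statistically distinguishable from zero, which is the kind of restriction the paper's estimates reject for the three countries.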