1000 results for excess functions


Relevance: 20.00%

Abstract:

Optimal design for generalized linear models has primarily focused on univariate data. Often experiments are performed that have multiple dependent responses described by regression-type models, and it is of interest and value to design the experiment for all these responses. This requires a multivariate distribution underlying a pre-chosen model for the data. Here, we consider the design of experiments for bivariate binary data which are dependent. We explore copula functions, which provide a rich and flexible class of structures to derive joint distributions for bivariate binary data. We present methods for deriving optimal experimental designs for dependent bivariate binary data using copulas, and demonstrate that, by including the dependence between responses in the design process, more efficient parameter estimates are obtained than by the usual practice of simply designing for a single variable only. Further, we investigate the robustness of designs with respect to initial parameter estimates and the choice of copula function, and also show the performance of compound criteria within this bivariate binary setting.
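
A joint distribution for dependent bivariate binary responses can be obtained by evaluating a copula at the marginal distribution functions of the two responses. The short sketch below illustrates this with the Farlie-Gumbel-Morgenstern (FGM) copula; the copula family, parameter names and example marginals are assumptions made for illustration, not the specific construction or values used in the paper.

def fgm_copula(u, v, theta):
    """Farlie-Gumbel-Morgenstern copula C(u, v); theta in [-1, 1] controls dependence."""
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

def bivariate_bernoulli(p1, p2, theta):
    """Joint cell probabilities P(Y1 = i, Y2 = j) from marginal success probabilities."""
    # P(Y1 = 0, Y2 = 0) is the copula evaluated at the marginal failure probabilities.
    p00 = fgm_copula(1.0 - p1, 1.0 - p2, theta)
    p01 = (1.0 - p1) - p00          # P(Y1 = 0, Y2 = 1)
    p10 = (1.0 - p2) - p00          # P(Y1 = 1, Y2 = 0)
    p11 = 1.0 - p00 - p01 - p10     # P(Y1 = 1, Y2 = 1)
    return {(0, 0): p00, (0, 1): p01, (1, 0): p10, (1, 1): p11}

# Example: dependent binary responses with hypothetical marginals 0.3 and 0.6.
print(bivariate_bernoulli(0.3, 0.6, theta=0.8))

A design criterion (for example D-optimality) would then be evaluated over these joint probabilities rather than over each margin separately, which is how the dependence enters the design process.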

Relevance: 20.00%

Abstract:

Many of the classification algorithms developed in the machine learning literature, including the support vector machine and boosting, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0–1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. To study these issues, we provide a general quantitative relationship between the risk as assessed using the 0–1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial upper bounds on excess risk under the weakest possible condition on the loss function—that it satisfies a pointwise form of Fisher consistency for classification. The relationship is based on a simple variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise, and show that in this case, strictly convex loss functions lead to faster rates of convergence of the risk than would be implied by standard uniform convergence arguments. Finally, we present applications of our results to the estimation of convergence rates in function classes that are scaled convex hulls of a finite-dimensional base class, with a variety of commonly used loss functions.
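
As a rough numerical illustration of the variational transformation mentioned above, the sketch below evaluates psi~(theta) = H^-((1+theta)/2) - H((1+theta)/2) for the exponential loss, where H is the optimal conditional surrogate risk and H^- its counterpart restricted to predictions with the wrong sign; the grid-based minimisation and the choice of loss are assumptions made only for this example (the paper works with the convex closure of this quantity for general surrogate losses).

import numpy as np

def psi_tilde(theta, phi, alphas=np.linspace(-20, 20, 40001)):
    """psi~(theta) = H^-((1+theta)/2) - H((1+theta)/2) for a surrogate loss phi."""
    eta = (1.0 + theta) / 2.0
    cond_risk = eta * phi(alphas) + (1.0 - eta) * phi(-alphas)   # C_eta(alpha)
    H = cond_risk.min()                                          # unconstrained infimum
    wrong_sign = alphas * (2.0 * eta - 1.0) <= 0.0               # predictions with the wrong sign
    H_minus = cond_risk[wrong_sign].min()
    return H_minus - H

exp_loss = lambda a: np.exp(-a)
for theta in (0.1, 0.5, 0.9):
    # For the exponential loss the closed form is 1 - sqrt(1 - theta^2).
    print(theta, psi_tilde(theta, exp_loss), 1 - np.sqrt(1 - theta**2))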

Relevance: 20.00%

Abstract:

Multivariate volatility forecasts are an important input in many financial applications, in particular portfolio optimisation problems. Given the number of models available and the range of loss functions to discriminate between them, selecting the optimal forecasting model is clearly challenging. The aim of this thesis is to thoroughly investigate how effective many commonly used statistical (MSE and QLIKE) and economic (portfolio variance and portfolio utility) loss functions are at discriminating between competing multivariate volatility forecasts. An analytical investigation of the loss functions is performed to determine whether they identify the correct forecast as the best forecast. This is followed by an extensive simulation study that examines the ability of the loss functions to consistently rank forecasts, and their statistical power within tests of predictive ability. For the tests of predictive ability, the model confidence set (MCS) approach of Hansen, Lunde and Nason (2003, 2011) is employed. An empirical study then investigates whether the simulation findings hold in a realistic setting. In light of these earlier studies, a major empirical study seeks to identify the set of superior multivariate volatility forecasting models from 43 models that use either daily squared returns or realised volatility to generate forecasts. This study also assesses how the choice of volatility proxy affects the ability of the statistical loss functions to discriminate between forecasts. Analysis of the loss functions shows that QLIKE, MSE and portfolio variance can discriminate between multivariate volatility forecasts, while portfolio utility cannot. An examination of the effective loss functions shows that they can all identify the correct forecast at a point in time; however, their ability to discriminate between competing forecasts varies: QLIKE is identified as the most effective loss function, followed by portfolio variance and then MSE. The major empirical analysis reports that the optimal set of multivariate volatility forecasting models includes forecasts generated from both daily squared returns and realised volatility. Furthermore, it finds that the volatility proxy affects the statistical loss functions' ability to discriminate between forecasts in tests of predictive ability. These findings deepen our understanding of how to choose between competing multivariate volatility forecasts.
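
For concreteness, the sketch below evaluates illustrative versions of the statistical and economic losses discussed above for a forecast covariance matrix against a volatility proxy; the exact functional forms, scaling constants and example matrices are assumptions and may differ from the definitions used in the thesis.

import numpy as np

def mse_loss(sigma, h):
    """Frobenius-norm MSE between proxy Sigma and forecast H."""
    return np.sum((sigma - h) ** 2)

def qlike_loss(sigma, h):
    """Multivariate QLIKE: log|H| + tr(H^{-1} Sigma), up to an additive constant."""
    sign, logdet = np.linalg.slogdet(h)
    return logdet + np.trace(np.linalg.solve(h, sigma))

def portfolio_variance_loss(sigma, h):
    """Realised variance of the minimum-variance portfolio built from the forecast H."""
    ones = np.ones(h.shape[0])
    w = np.linalg.solve(h, ones)
    w /= w.sum()                      # global minimum-variance weights from the forecast
    return w @ sigma @ w              # evaluated against the proxy

sigma = np.array([[1.0, 0.3], [0.3, 2.0]])   # volatility proxy (e.g. realised covariance)
h     = np.array([[0.9, 0.2], [0.2, 2.2]])   # forecast covariance matrix
print(mse_loss(sigma, h), qlike_loss(sigma, h), portfolio_variance_loss(sigma, h))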

Relevance: 20.00%

Abstract:

Experimental results for a reactive non-buoyant plume of nitric oxide (NO) in a turbulent grid flow doped with ozone (O3) are presented. The Damköhler number (Nd) for the experiment is of order unity, indicating that the turbulence and the chemistry have similar timescales and both affect the chemical reaction rate. Continuous measurements of two components of velocity using hot-wire anemometry and of the two reactants using chemiluminescent analysers have been made. A spatial resolution for the reactants of four Kolmogorov scales has been possible because of the novel design of the experiment. Measurements at this resolution for a reactive plume are not found in the literature. The experiment has been conducted relatively close to the grid, in the region where self-similarity of the plume has not yet developed. Statistics of a conserved scalar, deduced from both reactive and non-reactive scalars by conserved scalar theory, are used to establish the mixing field of the plume, which is found to be consistent with theoretical considerations and with the mixing fields found by other investigators in non-reactive flows. Where appropriate, the reactive species means and higher moments, probability density functions, joint statistics and spectra are compared with their respective frozen, equilibrium and reaction-dominated limits deduced from conserved scalar theory. The theoretical limits bracket the reactive scalar statistics where conserved scalar theory requires this. Both reactants approach their equilibrium limits with greater distance downstream. In the region of measurement, the plume reactant behaves as the reactant not in excess and the ambient reactant behaves as the reactant in excess. The reactant covariance lies outside its frozen and equilibrium limits for this value of Nd. The reaction rate closure of Toor (1969) is compared with the measured reaction rate. The gradient model is used to obtain turbulent diffusivities from turbulent fluxes. The diffusivity of a non-reactive scalar is found to be close to that measured in non-reactive flows by others.
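
A minimal sketch of the conserved-scalar bookkeeping underlying these comparisons, for the single-step reaction NO + O3 -> products with 1:1 stoichiometry, is given below; the source and ambient concentrations and the normalisation are illustrative assumptions, not the experimental values.

import numpy as np

def mixture_fraction(c_no, c_o3, c_no_source, c_o3_ambient):
    """Conserved scalar: Gamma = [NO] - [O3] is unchanged by the 1:1 reaction."""
    gamma = c_no - c_o3
    return (gamma + c_o3_ambient) / (c_no_source + c_o3_ambient)

def frozen_and_equilibrium_limits(zeta, c_no_source, c_o3_ambient):
    """Reactant limits implied by conserved-scalar theory at mixture fraction zeta."""
    no_frozen = zeta * c_no_source              # frozen limit: no reaction at all
    o3_frozen = (1.0 - zeta) * c_o3_ambient
    gamma = no_frozen - o3_frozen
    no_eq = np.maximum(gamma, 0.0)              # equilibrium limit: deficient reactant vanishes
    o3_eq = np.maximum(-gamma, 0.0)
    return (no_frozen, o3_frozen), (no_eq, o3_eq)

# Hypothetical measured concentrations; the measured values should lie between the limits.
zeta = mixture_fraction(c_no=2.0, c_o3=0.5, c_no_source=10.0, c_o3_ambient=1.0)
print(zeta, frozen_and_equilibrium_limits(zeta, 10.0, 1.0))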

Relevance: 20.00%

Abstract:

In this study we set out to dissociate the developmental time course of automatic symbolic number processing and cognitive control functions in grade 1-3 British primary school children. Event-related potential (ERP) and behavioral data were collected in a physical size discrimination numerical Stroop task. Task-irrelevant numerical information was already processed automatically in grade 1. Weakening interference and strengthening facilitation indicated the parallel development of general cognitive control and automatic number processing. Relationships among the ERP and behavioral effects suggest that control functions play a larger role in younger children and that the automaticity of number processing increases from grade 1 to 3.

Relevance: 20.00%

Abstract:

This paper illustrates the damage identification and condition assessment of a three-story bookshelf structure using a new frequency response function (FRF) based damage index and Artificial Neural Networks (ANNs). A major obstacle to using measured frequency response function data is the large number of input variables presented to the ANNs. This problem is overcome by applying a data reduction technique called principal component analysis (PCA). In the proposed procedure, ANNs, with their powerful pattern recognition and classification ability, are used to extract damage information such as damage locations and severities from measured FRFs. Simple neural network models are therefore developed and trained by back-propagation (BP) to associate the FRFs with the damaged or undamaged state, the damage locations and the severity of damage to the structure. Finally, the effectiveness of the proposed method is illustrated and validated using real data provided by the Los Alamos National Laboratory, USA. The results show that the PCA-based artificial neural network method is suitable and effective for damage identification and condition assessment of building structures. In addition, it is clearly demonstrated that the accuracy of the proposed damage detection method can be improved by increasing the number of baseline datasets and the number of principal components of the baseline dataset.
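
A schematic sketch of the pipeline described above is given below: FRF measurements are compressed with PCA and the reduced features are fed to a small neural network trained by back-propagation. The array sizes, number of principal components, network architecture and the synthetic data are illustrative assumptions, not the settings used in the study.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
frfs = rng.random((200, 1024))          # 200 measurements x 1024 frequency lines (synthetic stand-in)
labels = rng.integers(0, 4, size=200)   # 4 hypothetical damage classes (location/severity)

pca = PCA(n_components=10).fit(frfs)    # data reduction: keep the leading principal components
features = pca.transform(frfs)

net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net.fit(features, labels)               # back-propagation training on the reduced features
print(net.predict(pca.transform(frfs[:5])))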

Relevance: 20.00%

Abstract:

Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, very few attempts have been made to explore structural damage using noise-polluted data, which is unavoidable in the real world. Measurement data are contaminated by noise from the test environment as well as from electronic devices, and this noise tends to produce erroneous results from structural damage identification methods. It is therefore important to investigate a method that performs better with noise-polluted data. This paper introduces a new damage index using principal component analysis (PCA) for the damage detection of building structures that is able to accept noise-polluted frequency response functions (FRFs) as input. The FRF data are obtained from the function datagen of the MATLAB program available on the website of the IASC-ASCE (International Association for Structural Control - American Society of Civil Engineers) Structural Health Monitoring (SHM) Task Group. The proposed method involves a five-stage process: calculation of FRFs; calculation of damage index values using the proposed algorithm; development of the artificial neural networks; introduction of the damage indices as input parameters; and damage detection of the structure. This paper briefly describes the methodology and the results obtained in detecting damage in all six cases of the benchmark study with different noise levels. The proposed method is applied to a benchmark problem sponsored by the IASC-ASCE Task Group on Structural Health Monitoring, which was developed to facilitate the comparison of various damage identification methods. The results show that the PCA-based algorithm is effective for structural health monitoring with noise-polluted FRFs, which are a common occurrence when dealing with industrial structures.
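
The sketch below shows one way a PCA-based damage index can be formed from noise-polluted FRFs: baseline (undamaged) FRFs define a principal subspace, and the index for a new measurement is its reconstruction residual in that subspace. The noise level, dimensions and the residual-norm definition of the index are illustrative assumptions rather than the paper's exact formulation.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
baseline = rng.random((100, 512))                                         # synthetic stand-in for undamaged FRFs
noisy_baseline = baseline + 0.05 * rng.standard_normal(baseline.shape)    # 5% measurement noise

pca = PCA(n_components=8).fit(noisy_baseline)

def damage_index(frf):
    """Norm of the part of the FRF not explained by the baseline principal subspace."""
    reconstructed = pca.inverse_transform(pca.transform(frf.reshape(1, -1)))
    return float(np.linalg.norm(frf - reconstructed.ravel()))

healthy = baseline[0] + 0.05 * rng.standard_normal(512)
damaged = baseline[0] + 0.05 * rng.standard_normal(512) + 0.5 * np.sin(np.linspace(0, 20, 512))
print(damage_index(healthy), damage_index(damaged))    # the damaged case should give a larger index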

Relevance: 20.00%

Abstract:

The Poisson distribution has often been used for count data such as accident counts. The Negative Binomial (NB) distribution has been adopted for count data to address the over-dispersion problem. However, the Poisson and NB distributions are incapable of taking into account some unobserved heterogeneities due to spatial and temporal effects in accident data. To overcome this problem, Random Effect models have been developed. Another challenge for existing traffic accident prediction models is the presence of excess zero observations in some accident data. Although the Zero-Inflated Poisson (ZIP) model is capable of handling the dual-state system in accident data with excess zero observations, it does not accommodate the within-location and between-location correlation heterogeneities that are the basic motivation for the Random Effect models. This paper proposes an effective way of fitting a ZIP model with location-specific random effects, and Bayesian analysis is recommended for model calibration and assessment.
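
A minimal sketch of a zero-inflated Poisson likelihood with a location-specific random intercept in the Poisson mean is given below; the link functions, parameter names and the way the random effect enters are assumptions made for illustration and may differ from the model proposed in the paper.

import numpy as np
from scipy.stats import poisson

def zip_loglik(y, pi, lam):
    """Log-likelihood of counts y under ZIP(pi, lam): a zero comes from the point mass
    with probability pi or from the Poisson state with probability 1 - pi."""
    y = np.asarray(y)
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))
    ll_pos = np.log(1.0 - pi) + poisson.logpmf(y, lam)
    return float(np.sum(np.where(y == 0, ll_zero, ll_pos)))

# A location-specific random effect u_i shifts the log of the Poisson mean.
beta0, u_i = 0.2, 0.4                 # hypothetical fixed intercept and one location's random effect
lam_i = np.exp(beta0 + u_i)
print(zip_loglik([0, 0, 1, 3, 0], pi=0.3, lam=lam_i))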

Relevance: 20.00%

Abstract:

The presence of a large number of single-phase distributed energy resources (DERs) can cause severe power quality problems in distribution networks. Because DERs can be installed at random locations, the generation in a particular phase may exceed the load demand in that phase, and the excess power in that phase will be fed back to the transmission network. To avoid this problem, the paper proposes the use of a distribution static compensator (DSTATCOM) connected at the first bus following a substation. When operated properly, the DSTATCOM can facilitate a balanced set of currents flowing from the substation, even when excess power is generated by the DERs. The proposals are validated through extensive digital computer simulation studies using PSCAD and MATLAB.
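
A back-of-the-envelope sketch of the balancing idea is given below: the compensator at the first bus supplies or absorbs the per-phase imbalance so that the substation sees the same (average) demand on every phase, even when one phase has excess DER generation. The numbers and the purely real-power view are illustrative assumptions; the actual scheme controls instantaneous phase currents.

load = {"a": 30.0, "b": 25.0, "c": 40.0}           # per-phase load demand (kW), hypothetical
der  = {"a": 45.0, "b": 10.0, "c":  5.0}           # per-phase single-phase DER generation (kW)

net = {ph: load[ph] - der[ph] for ph in load}      # phase "a" is negative: excess power in that phase
balanced = sum(net.values()) / 3.0                 # what each substation phase should supply

dstatcom = {ph: net[ph] - balanced for ph in net}  # compensator injections sum to ~0 (no net real power)
print(net, balanced, dstatcom)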

Relevance: 20.00%

Abstract:

The research described in this paper forms part of an in-depth investigation of safety culture in one of Australia’s largest construction companies. The research builds on a previous qualitative study with organisational safety leaders and further investigates how safety culture is perceived and experienced by organisational members, as well as how this relates to their safety behaviour and related outcomes at work. Participants were 2273 employees of the case study organisation, with 689 from the Construction function and 1584 from the Resources function. The results of several analyses revealed some interesting organisational variance on key measures. Specifically, the Construction function scored significantly higher on all key measures: safety climate, safety motivation, safety compliance, and safety participation. The results are discussed in terms of their relevance in an applied research context.

Relevance: 20.00%

Abstract:

In 1980 Alltop produced a family of cubic phase sequences that nearly meet the Welch bound for maximum non-peak correlation magnitude. This family of sequences was shown by Wootters and Fields to be useful for quantum state tomography. Alltop’s construction used a function that is not planar, but whose difference function is planar. In this paper we show that Alltop type functions cannot exist in fields of characteristic 3 and that, for a known class of planar functions, x^3 is the only Alltop type function.
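
For reference, a brief sketch of the objects involved, using standard definitions (the normalisation of the sequences and the exact phrasing of the "Alltop type" property are assumptions of this summary):

\[
  s_m(k) \;=\; \frac{1}{\sqrt{p}}\,\omega^{(k+m)^3}, \qquad \omega = e^{2\pi i/p},\quad p \ge 5 \text{ prime},
\]
is the $m$-th Alltop sequence. A function $f : \mathbb{F}_q \to \mathbb{F}_q$ ($q$ odd) is planar if, for every $a \neq 0$, the map $x \mapsto f(x+a) - f(x)$ is a bijection; $f$ is of Alltop type if $f$ is not planar but each of these difference functions is itself a planar function. For $f(x) = x^3$ in characteristic $p > 3$,
\[
  f(x+a) - f(x) \;=\; 3ax^2 + 3a^2x + a^3,
\]
which is quadratic in $x$: $f$ is therefore not planar, while every difference function is planar. In characteristic $3$ the quadratic terms vanish and the difference collapses to the constant $a^3$, consistent with the non-existence result above.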