964 results for variance component models
Abstract:
Foley [J. Opt. Soc. Am. A 11 (1994) 1710] has proposed an influential psychophysical model of masking in which mask components in a contrast gain pool are raised to an exponent before summation and divisive inhibition. We tested this summation rule in experiments in which contrast detection thresholds were measured for a vertical 1 c/deg (or 2 c/deg) sine-wave component in the presence of a 3 c/deg (or 6 c/deg) mask that had either a single component oriented at -45° or a pair of components oriented at ±45°. Contrary to the predictions of Foley's model 3, we found that for masks of moderate contrast and above, threshold elevation was predicted by linear summation of the mask components in the inhibitory stage of the contrast gain pool. We built this feature into two new models, referred to as the early adaptation model and the hybrid model. In the early adaptation model, contrast adaptation controls a threshold-like nonlinearity on the output of otherwise linear pathways that provide the excitatory and inhibitory inputs to a gain control stage. The hybrid model involves nonlinear and nonadaptable routes to the excitatory and inhibitory stages as well as an adaptable linear route. With only six free parameters, both models provide excellent fits to the masking and adaptation data of Foley and Chen [Vision Res. 37 (1997) 2779] but, unlike Foley and Chen's model, are able to do so with only one adaptation parameter. However, only the hybrid model is able to capture the features of Foley's (1994) pedestal-plus-orthogonal-fixed-mask data. We conclude (1) that linear summation of inhibitory components is a feature of contrast masking, and (2) that the main aftereffect of spatial adaptation on contrast increment thresholds can be assigned to a single site. © 2002 Elsevier Science Ltd. All rights reserved.
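As a schematic aid (the notation below is ours, not quoted from either paper): in Foley-style gain-control models each mask component is raised to an exponent before entering the inhibitory pool, whereas the summation rule supported here sums the components linearly before the output nonlinearity:

\[
  R_{\text{Foley}} = \frac{E^{p}}{Z + \sum_i (w_i C_i)^{q}}
  \qquad\text{vs.}\qquad
  R_{\text{linear}} = \frac{E^{p}}{Z + \bigl(\sum_i w_i C_i\bigr)^{q}},
\]

where \(E\) is the excitatory drive, \(C_i\) are the mask component contrasts, \(w_i\) are weights, and \(Z\) is a saturation constant.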
Abstract:
Rhizome of cassava plants (Manihot esculenta Crantz) was catalytically pyrolysed at 500 °C using the analytical pyrolysis–gas chromatography/mass spectrometry (Py–GC/MS) method in order to investigate the relative effects of various catalysts on the pyrolysis products. Selected catalysts expected to affect bio-oil properties were used in this study. These included zeolites and related materials (ZSM-5, Al-MCM-41 and Al-MSU-F types), metal oxide catalysts (zinc oxide, zirconium(IV) oxide, cerium(IV) oxide and copper chromite), proprietary commercial catalysts (Criterion-534 and alumina-stabilised ceria MI-575) and natural catalysts (slate, char, and ashes derived from char and biomass). The pyrolysis product distributions were monitored using the principal component analysis (PCA) technique. The results showed that the zeolites, the proprietary commercial catalysts, copper chromite and biomass-derived ash were selective towards the reduction of most oxygenated lignin derivatives. The use of the ZSM-5, Criterion-534 and Al-MSU-F catalysts enhanced the formation of aromatic hydrocarbons and phenols. No single catalyst was found to selectively reduce all carbonyl products. Instead, most of the carbonyl compounds containing a hydroxyl group were reduced by the zeolites and related materials, the proprietary catalysts and copper chromite. The PCA model for carboxylic acids showed that zeolite ZSM-5 and Al-MSU-F tend to produce significant amounts of acetic and formic acids.
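As a rough illustration of how such product distributions can be screened (simulated data, not the authors' code), catalyst runs can be projected onto principal components of the normalised Py–GC/MS peak-area matrix:

```python
# Illustrative sketch with simulated data: score catalyst runs by PCA on a
# matrix of Py-GC/MS peak areas (rows = catalyst runs, columns = products).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

peak_areas = np.random.rand(12, 40)   # placeholder: 12 runs x 40 product peaks
scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(peak_areas))
# Runs that cluster in the score plot yielded similar product slates;
# the loadings indicate which products drive the separation.
```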
Abstract:
In this paper we present a novel method for emulating a stochastic (random-output) computer model and show its application to a complex rabies model. The method is evaluated in terms of both accuracy and computational efficiency on synthetic data and on the rabies model. We address the issue of experimental design and provide empirical evidence on the effectiveness of utilizing replicate model evaluations compared with a space-filling design. We employ the Mahalanobis error measure to validate the heteroscedastic Gaussian-process-based emulator predictions for both the mean and (co)variance. The emulator allows efficient screening to identify important model inputs and a better understanding of the complex behaviour of the rabies model.
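A minimal sketch of the replicate-based idea (our illustration, with placeholder data and scikit-learn rather than the authors' implementation): fit one Gaussian process to the per-input replicate means and a second to the log of the replicate variances, yielding input-dependent predictive variance:

```python
# Sketch of a heteroscedastic GP emulator built from replicate runs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.random.rand(30, 2)        # placeholder: 30 design points, 2 model inputs
runs = np.random.rand(30, 10)    # placeholder: 10 stochastic replicates each

mean_gp = GaussianProcessRegressor(kernel=RBF()).fit(X, runs.mean(axis=1))
var_gp = GaussianProcessRegressor(kernel=RBF()).fit(X, np.log(runs.var(axis=1)))
# mean_gp predicts the mean response; var_gp supplies an input-dependent
# noise level, giving heteroscedastic predictive intervals at new inputs.
```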
Abstract:
Software development methodologies are becoming increasingly abstract, progressing from low-level assembly and implementation languages such as C and Ada, to component-based approaches that can be used to assemble applications using technologies such as JavaBeans and the .NET framework. Meanwhile, model-driven approaches emphasise the role of higher-level models and notations, and embody a process of automatically deriving lower-level representations and concrete software implementations. The relationship between data and software is also evolving. Modern data formats are becoming increasingly standardised, open and empowered in order to support a growing need to share data in both academia and industry. Many contemporary data formats, most notably those based on XML, are self-describing, able to specify valid data structure and content, and can also describe data manipulations and transformations. Furthermore, while applications of the past have made extensive use of data, the runtime behaviour of future applications may be driven by data, as demonstrated by the field of dynamic data-driven application systems. The combination of empowered data formats and high-level software development methodologies forms the basis of modern game development technologies, which drive software capabilities and runtime behaviour using empowered data formats describing game content. While low-level libraries provide optimised runtime execution, content data is used to drive a wide variety of interactive and immersive experiences. This thesis describes the Fluid project, which combines component-based software development and game development technologies in order to define novel component technologies for the description of data-driven component-based applications. The thesis makes explicit contributions to the fields of component-based software development and visualisation of spatiotemporal scenes, and also describes potential implications for game development technologies. The thesis also proposes a number of developments in dynamic data-driven application systems in order to further empower the role of data in this field.
Abstract:
The aim of our paper is to examine whether Exchange Traded Funds (ETFs) diversify away the private information of informed traders. We apply the spread decomposition models of Glosten and Harris (1988) and Madhavan, Richardson and Roomans (1997) to a sample of ETFs and their control securities. Our results indicate that ETFs have significantly lower adverse selection costs than their control securities. This suggests that private information is diversified away for these securities. Our results therefore offer one explanation for the rapid growth in the ETF market.
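For reference, a simplified constant-coefficient form of the Glosten and Harris decomposition (our paraphrase, with notation assumed) splits each price change into an adverse-selection component \(Z\) and a transitory order-processing component \(C\), given the trade indicator \(Q_t\) (+1 for buys, -1 for sells):

\[
  \Delta P_t = Z\,Q_t + C\,(Q_t - Q_{t-1}) + \varepsilon_t .
\]

Lower estimates of \(Z\) for ETFs relative to their control securities correspond to lower adverse selection costs.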
Abstract:
This article investigates the performance of Full-Scale Optimisation, a recently presented model used for financial investment advice. The investor's preferences for expected risk and return are entered into the model, and a recommended portfolio is produced. This model is theoretically more accurate than the mainstream investment advice model, Mean-Variance Optimisation, as it makes fewer assumptions. Compared with previous studies, our investigation covers a broader range of investor preferences and a more general set of investment types. It shows that Full-Scale Optimisation is more widely applicable than previously known.
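A rough sketch of the idea (hypothetical data and an assumed log utility; the article's exact procedure may differ): Full-Scale Optimisation searches portfolio weights by evaluating the investor's utility over every historical return scenario, rather than over mean and variance alone:

```python
# Toy Full-Scale Optimisation: maximise average utility over all scenarios.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
R = rng.normal(0.0, 0.02, size=(500, 4))   # placeholder: 500 periods x 4 assets

def neg_avg_utility(w):
    return -np.mean(np.log1p(R @ w))       # assumed log utility of wealth

res = minimize(neg_avg_utility, x0=np.full(4, 0.25),
               bounds=[(0.0, 1.0)] * 4,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
# res.x holds the recommended portfolio weights.
```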
Abstract:
The last major study of the variance in sales performance explained by salesperson attributes was by Churchill et al. (1985). They examined the effect of role, skills, motivation, personal factors, aptitude, and organizational/environmental factors on sales performance, factors that have dominated the sales performance area. About the same time, Weitz, Sujan, and Sujan (1986) introduced the concept of salespeople's knowledge structures. Considerable work on the relationship between the elements of knowledge structures and performance can be found in the literature. In this research note, we determine the degree to which sales performance can be explained by knowledge structure variables, a heretofore unexplored area. If knowledge structure variables explain more variance than traditional variables, then this paper is a call for further research in this area. Examining this research question in a retail context, we find that knowledge structure variables explain 50.2 percent of the variance in sales performance. We also find that the variance explained by knowledge structures differs significantly by gender: the impact of knowledge structures on performance was higher for men than for women. Models using education demonstrated smaller differences.
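As a toy illustration of the comparison being made (simulated data, not the study's), the variance explained by each predictor set can be read off as the R² of the corresponding regression:

```python
# Toy comparison of variance explained by two predictor sets (simulated data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
traditional = rng.random((200, 4))   # e.g. role, skills, motivation, aptitude
knowledge = rng.random((200, 3))     # knowledge-structure measures
performance = knowledge @ np.array([0.8, 0.5, 0.3]) + 0.1 * rng.standard_normal(200)

r2_trad = LinearRegression().fit(traditional, performance).score(traditional, performance)
r2_know = LinearRegression().fit(knowledge, performance).score(knowledge, performance)
# r2_know - r2_trad is the additional share of variance the knowledge
# structures account for in this simulated setting.
```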
Abstract:
This paper employs a Component GARCH-in-Mean model to show that house prices across a number of major US cities between 1987 and 2009 have displayed asset market properties in terms of both risk-return relationships and asymmetric adjustment to shocks. In addition, tests for structural breaks in the mean and variance indicate structural instability across the data range. Multiple breaks are identified across all cities, particularly in the early 1990s and during the post-2007 financial crisis, as housing has become an increasingly risky asset. Estimating the models over the individual sub-samples suggests that over the last 20 years the financial sector has increasingly failed to account for the levels of risk associated with real estate markets. This result has possible implications for the way in which financial institutions should be regulated in the future.
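One standard formulation of the Component GARCH-in-Mean specification (our statement of the textbook Engle-Lee form; the paper's exact parameterisation may differ) decomposes conditional variance into a slowly moving long-run component \(q_t\) and a transitory component, with volatility entering the return equation:

\[
\begin{aligned}
  r_t &= \mu + \lambda\,\sigma_t + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t,\\
  \sigma^2_t &= q_t + \alpha\,(\varepsilon^2_{t-1} - q_{t-1}) + \beta\,(\sigma^2_{t-1} - q_{t-1}),\\
  q_t &= \omega + \rho\,(q_{t-1} - \omega) + \phi\,(\varepsilon^2_{t-1} - \sigma^2_{t-1}).
\end{aligned}
\]

The in-mean coefficient \(\lambda\) captures the risk-return relationship, while \(q_t\) and the transitory term separate permanent from short-lived volatility shocks.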
Abstract:
With new and emerging e-business technologies available to transform business processes, it is important to understand how those technologies will affect the performance of a business. Will the overall business process be cheaper, faster and more accurate, or will a sub-optimal change have been implemented? The use of simulation to model the behaviour of business processes is well established, and it has been applied to e-business processes to understand their performance in terms of measures such as lead-time, cost and responsiveness. This paper introduces the concept of simulation components, which enable simulation models of e-business processes to be built quickly from generic e-business templates. The paper describes how these components were devised and presents results from their application in case studies.
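A hypothetical sketch of what a reusable simulation component might look like (our illustration using the SimPy library, which the paper does not claim to use): a generic order-handling template parameterised by arrival rate and processing lead-time:

```python
# Hypothetical generic e-business simulation component, sketched with SimPy.
import random
import simpy

def order_handler(env, name, arrival_rate, lead_time):
    """Generic template: orders arrive at random and take lead_time to process."""
    while True:
        yield env.timeout(random.expovariate(arrival_rate))  # next order arrives
        yield env.timeout(lead_time)                         # processing delay
        print(f"{name}: order completed at t={env.now:.2f}")

env = simpy.Environment()
env.process(order_handler(env, "web-shop", arrival_rate=1.0, lead_time=0.5))
env.run(until=10)
```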
Abstract:
This study examined the job characteristics described in the Job-Demand-Control-Support Model and the Effort-Reward-Imbalance Model with regard to depression in a sample of 265 employees. First, we used confirmatory factor analyses to test the similarities and differences of the two models. Second, the job characteristics were introduced as predictors in a path model to test their relation to depression, and we examined whether the associations were mediated by the experience of excessive demands. Our analyses showed the demand/effort component to be one common factor, while decision latitude and reward (subdivided into the three facets of job security, social recognition, and status-related reward) remained distinct components. Employees with high job demands/effort, low job security and low social recognition, but high status-related rewards, reported higher depression scores. Unexpectedly, status-related rewards were positively associated with depression, while we found no significant effects for decision latitude. The path models confirmed direct as well as mediated effects (through experienced excessive demands) between job characteristics and depression (39 % explained variance in depression). Our results could be useful for identifying possible job-related risk factors for depression.
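For readers unfamiliar with the mediation logic used here, the standard path equations (our summary, not taken from the paper) are

\[
  M = aX + e_1, \qquad Y = c'X + bM + e_2,
\]

where \(X\) is a job characteristic, \(M\) the experience of excessive demands, and \(Y\) depression; \(ab\) is the indirect (mediated) effect and \(c'\) the direct effect.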
Abstract:
Relationships among quality factors in retailed free-range, corn-fed, organic, and conventional chicken breasts (9) were modeled using chemometric approaches. Application of principal component analysis (PCA) to neutral lipid composition data explained the majority (93%) of the variance in fatty acid contents with 2 significant multivariate factors. PCA explained 88% and 75% of the variance with 3 factors for, respectively, flame ionization detection (FID) and nitrogen-phosphorus detection (NPD) components in chromatographic flavor data from cooked chicken after simultaneous distillation extraction. Relationships to tissue antioxidant contents were then modeled. Partial least squares regression interrelating the total data matrices (PLS2) provided no useful models. Using single antioxidants as Y variables in PLS1, good models (r2 values > 0.9) were obtained between the antioxidants alpha-tocopherol, glutathione, catalase, glutathione peroxidase and reductase and FID flavor components, and among the variables total mono- and polyunsaturated fatty acids and subsets of FID components, and saturated fatty acids and NPD components. Alpha-tocopherol had a modest (r2 = 0.63) relationship with neutral lipid n-3 fatty acid content. Such factors thus relate to flavor development and quality in chicken breast meat.
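A minimal sketch of the PLS1 step (simulated data; the study's matrices and preprocessing are not reproduced here): regress one antioxidant at a time on the flavour-peak matrix and inspect the r² of the fit:

```python
# Illustrative PLS1 fit: a single antioxidant (Y) against flavour peaks (X).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
X = rng.random((9, 25))   # placeholder: 9 breast samples x 25 FID flavour peaks
y = rng.random(9)         # placeholder: alpha-tocopherol content per sample

pls = PLSRegression(n_components=2).fit(X, y)
r2 = pls.score(X, y)      # coefficient of determination of the PLS1 model
```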
Abstract:
2000 Mathematics Subject Classification: 62H12, 62P99
Abstract:
Mixture experiments are typical for the chemical, food, metallurgical and other industries. The aim of these experiments is to find optimal component proportions that provide desired values of certain product performance characteristics.
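Formally (our summary of the standard setup, not a quotation), a mixture experiment constrains the q component proportions to a simplex, and responses are often modelled with a Scheffé canonical polynomial:

\[
  x_i \ge 0, \quad \sum_{i=1}^{q} x_i = 1, \qquad
  \mathbb{E}[y] = \sum_{i=1}^{q} \beta_i x_i + \sum_{i<j} \beta_{ij} x_i x_j .
\]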
Abstract:
Analysis of risk measures associated with price series movements and their prediction is of strategic importance in the financial markets, as well as to policy makers, in particular for short- and long-term planning and for setting economic growth targets. For example, oil-price risk management focuses primarily on when and how an organization can best prevent costly exposure to price risk. Value-at-Risk (VaR) is the commonly practised instrument for measuring risk and is evaluated by analysing the negative/positive tail of the probability distribution of the returns (profit or loss). In modelling applications, least-squares estimation (LSE)-based linear regression models are often employed for modelling and analysing correlated data. These linear models are optimal and perform relatively well under conditions such as errors following normal or approximately normal distributions, being free of large outliers and satisfying the Gauss-Markov assumptions. However, in practical situations the LSE-based linear regression models often fail to provide optimal results, for instance in non-Gaussian situations, especially when the errors follow distributions with fat tails and the error terms may not possess a finite variance. This is the situation in risk analysis, which involves analysing tail distributions. Thus, applications of the LSE-based regression models may be questioned for appropriateness and may have limited applicability. We have carried out a risk analysis of Iranian crude oil price data based on Lp-norm regression models and have noted that the LSE-based models do not always perform best. We discuss results from the L1, L2 and L∞-norm based linear regression models. ACM Computing Classification System (1998): B.1.2, F.1.3, F.2.3, G.3, J.2.
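For reference, the Lp-norm regression family discussed here replaces the squared-error criterion of LSE with (notation ours)

\[
  \hat{\beta}_p = \arg\min_{\beta} \sum_{i} \lvert y_i - x_i^{\top}\beta \rvert^{p},
\]

where \(p = 1\) gives least absolute deviations (robust to fat tails), \(p = 2\) recovers ordinary least squares, and the limiting \(L_\infty\) case minimises the maximum absolute residual.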
Abstract:
A major challenge of modern teams lies in coordinating the efforts not just of individuals within a team, but also of teams whose efforts are ultimately entwined with those of other teams. Despite this fact, much of the research on work teams fails to consider the external dependencies that exist in organizational teams and instead focuses on internal, within-team processes. Multi-Team Systems Theory is used as a theoretical framework for understanding teams-of-teams organizational forms (Multi-Team Systems; MTSs), and leadership teams are proposed as one remedy that enables MTS members to dedicate needed resources to intra-team activities while ensuring effective synchronization of between-team activities. Two functions of leader teams were identified, strategy development and coordination facilitation, and a model was developed delineating the effects of the two leader roles on multi-team cognitions, processes, and performance.

Three hundred eighty-four undergraduate psychology and business students participated in a laboratory simulation that modeled an MTS; each MTS comprised three two-member teams, each performing distinct but interdependent components of an F-22 battle simulation task. Two roles of leader teams supported in the literature were manipulated through training in a 2 (strategy training vs. control) × 2 (coordination training vs. control) design. Multivariate analysis of variance (MANOVA) and mediated regression analysis were used to test the study's hypotheses.

Results indicate that both training manipulations produced differences in the effectiveness of the intended form of leader behavior. The enhanced leader strategy training resulted in more accurate (but not more similar) MTS mental models, better inter-team coordination, and higher levels of multi-team (but not component team) performance. Moreover, mental model accuracy fully mediated the relationship between leader strategy and inter-team coordination, and inter-team coordination fully mediated the effect of leader strategy on multi-team performance. Leader coordination training led to better inter-team coordination, but not to higher levels of either team or multi-team performance. Mediated input-process-output (I-P-O) relationships were not supported with leader coordination; rather, leader coordination facilitation and inter-team coordination contributed uniquely to component team and multi-team performance. The implications of these findings and future research directions are also discussed.