854 results for Simulator of Performance in Error
Abstract:
Traditional psychometric theory and practice classify people according to broad ability dimensions but do not examine how these mental processes occur. Hunt and Lansman (1975) proposed a 'distributed memory' model of cognitive processes with emphasis on how to describe individual differences, based on the assumption that each individual possesses the same components. It is in the quality of these components that individual differences arise. Carroll (1974) expanded Hunt's model to include a production system (after Newell and Simon, 1973) and a response system. He developed a framework of factor analytic (FA) factors for the purpose of describing how individual differences may arise from them. This scheme is to be used in the analysis of psychometric tests. Recent advances in the field of information processing are examined and include: 1) Hunt's development of differences between subjects designated as high or low verbal; 2) Miller's pursuit of the magic number seven, plus or minus two; 3) Ferguson's examination of transfer and abilities; and 4) Brown's discoveries concerning strategy teaching and retardates. In order to examine possible sources of individual differences arising from cognitive tasks, traditional psychometric tests were searched for a suitable perceptual task which could be varied slightly and administered to gauge learning effects produced by controlling independent variables. It also had to be suitable for analysis using Carroll's framework. The Coding Task (a symbol substitution test) found in the Performance Scale of the WISC was chosen. Two experiments were devised to test the following hypotheses: 1) High verbals should be able to complete significantly more items on the Symbol Substitution Task than low verbals (Hunt & Lansman, 1975). 2) Having previous practice on a task, where strategies involved in the task may be identified, increases the amount of output on a similar task (Carroll, 1974). 3) There should be a substantial decrease in the amount of output as the load on STM is increased (Miller, 1956). 4) Repeated measures should produce an increase in output over trials, and where individual differences in previously acquired abilities are involved, these should differentiate individuals over trials (Ferguson, 1956). 5) Teaching slow learners a rehearsal strategy would improve their learning such that it would resemble that of normals on the same task (Brown, 1974). In the first experiment, 60 subjects were divided into high and low verbal groups, each further divided randomly into a practice group and a nonpractice group. Five subjects in each group were assigned randomly to work on a five-, seven- or nine-digit code throughout the experiment. The practice group was given three trials of two minutes each on the practice code (designed to eliminate transfer effects due to symbol similarity) and then three trials of two minutes each on the actual SST task. The nonpractice group was given three trials of two minutes each on the same actual SST task. Results were analyzed using a four-way analysis of variance. In the second experiment, 18 slow learners were divided randomly into two groups, one group receiving planned strategy practice, the other receiving random practice. Both groups worked on the actual code to be used later in the actual task. Within each group, subjects were randomly assigned to work on a five-, seven- or nine-digit code throughout. Both practice and actual tests consisted of three trials of two minutes each.
Results were analyzed using a three-way analysis of variance. It was found in the first experiment that 1) high or low verbal ability by itself did not produce significantly different results; however, in interaction with the other independent variables, a difference in performance was noted. 2) The previous practice variable was significant over all segments of the experiment: those who received previous practice scored significantly higher than those without it. 3) Increasing the size of the load on STM severely restricts performance. 4) The effect of repeated trials proved to be beneficial; generally, gains were made on each successive trial within each group. 5) In the second experiment, slow learners who were allowed to practice randomly performed better on the actual task than subjects who were taught the code by means of a planned strategy. Upon analysis using the Carroll scheme, individual differences were noted in the ability to develop strategies of storing, searching and retrieving items from STM, and in adopting the rehearsals necessary for retention in STM. While these strategies may benefit some individuals, it was found that for others they may be harmful. Temporal aspects and perceptual speed were also found to be sources of variance within individuals. Generally, it was found that the largest single factor influencing learning on this task was the repeated measures. What enables gains to be made varies with individuals. Environmental factors, specific abilities, strategy development, previous learning, amount of load on STM, and perceptual and temporal parameters all influence learning, and these have serious implications for educational programs.
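For illustration only, a factorial analysis of this kind could be set up as below. The data file and column names (output, verbal, practice, load, trial) are hypothetical, and for brevity the sketch treats all four factors as between-subjects rather than modelling trials as repeated measures, so it is not the thesis's exact analysis.

```python
# Minimal sketch (not the thesis's actual computation): a four-way factorial
# ANOVA on symbol-substitution output. Column names are assumptions:
# output (items completed), verbal ('high'/'low'), practice ('yes'/'no'),
# load (5, 7 or 9 digits), trial (1-3).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("sst_scores.csv")  # hypothetical data file

# Full factorial model with all interactions; the real design would treat
# trial as a repeated measure rather than a between-subjects factor.
model = ols("output ~ C(verbal) * C(practice) * C(load) * C(trial)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```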
Abstract:
This thesis examines salary structure types (hierarchical or compressed) as predictors of team performance in the National Hockey League (NHL). Additionally, an analysis of goalie statistics is completed in order to determine what, if any, performance measures relate to salary. Data in this research were collected from the 2005-06 season through the 2010-11 season. Salary inequality (the Gini coefficient) was used in a regression analysis to determine whether it was an effective predictor of team performance, measured as winning percentage (n = 178). The results indicated that a hierarchical salary structure increased team performance, although the amount of variability explained was very small. Another regression analysis was completed to determine whether any goalie performance measures (n = 245) were effective predictors of individual salary; it indicated that goalie performance measures explained 19.8% of the variance in salary. The only statistically significant variable was games played.
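As an illustration of the predictor used here, the Gini coefficient of a team payroll can be computed as follows; the function and the salary figures are a minimal sketch, not data from the study.

```python
# Gini coefficient of a team's payroll: 0 = perfectly equal salaries,
# higher values = a more hierarchical salary structure. Figures are invented.
import numpy as np

def gini(salaries):
    """Gini coefficient via the sorted-index (mean-absolute-difference) formula."""
    x = np.sort(np.asarray(salaries, dtype=float))
    n = x.size
    # G = sum_i (2i - n - 1) * x_i / (n * sum(x)), with x sorted ascending
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n * np.sum(x))

team_payroll = [9_500_000, 7_000_000, 3_200_000, 1_100_000, 850_000]
print(f"Gini = {gini(team_payroll):.3f}")  # larger => more hierarchical
```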
Abstract:
Past research has shown a positive relationship between efficacy and performance (Feltz & Lirgg, 1998). Feltz and Lirgg (1998) found a positive relationship between efficacy and sport performance in hockey players; however, they excluded goaltenders due to their unique position. The present study replicated Feltz and Lirgg (1998) with goaltenders only. Data were collected from 12 goaltenders in three Ontario hockey leagues. Efficacy was measured through an online questionnaire, and official game statistics provided the performance measures. Data were collected for 70 games, for a total of 112 responses. Results of this study revealed non-significant relationships between both self- and collective efficacy and all performance indicators. The results of the present study are not consistent with Feltz and Lirgg's (1998); however, other published research has found a non-significant relationship between efficacy and sport performance (Sitzmann & Yeo, 2013). Therefore, it is possible that efficacy is not the most influential psychological construct for goaltenders.
Abstract:
We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least squares residuals whose distribution is invariant to the unknown cross-equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared to simulation-based estimates of their expected value under the hypothesized distribution. Special cases considered include testing multivariate normal, Student t, normal mixture and stable error models. In the Gaussian case, finite-sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double and multi-stage Monte Carlo test methods. For non-Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The procedures considered are evaluated in a small simulation experiment. Finally, the tests are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995.
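To make the simulation-based logic concrete, here is a minimal sketch of a Monte Carlo test of multivariate kurtosis on a standardized residual matrix. The statistic, sample sizes, seeds and one-sided rejection rule are illustrative assumptions, not the paper's exact procedure.

```python
# Monte Carlo test sketch: compare Mardia's multivariate kurtosis of the
# standardized residuals against its simulated null distribution under
# multivariate normality. Invariance to the error covariance matrix lets
# us simulate with an identity covariance.
import numpy as np

def mardia_kurtosis(U):
    """Mardia's multivariate kurtosis for the rows of U (n x p)."""
    n, p = U.shape
    Uc = U - U.mean(axis=0)
    S = Uc.T @ Uc / n
    d = np.einsum("ij,jk,ik->i", Uc, np.linalg.inv(S), Uc)  # squared Mahalanobis
    return float(np.mean(d ** 2))

def mc_pvalue(stat_obs, n, p, n_rep=999, seed=0):
    """Simulation-based p-value counting the observed statistic among the replicates."""
    rng = np.random.default_rng(seed)
    sims = np.array([mardia_kurtosis(rng.standard_normal((n, p)))
                     for _ in range(n_rep)])
    # one-sided (excess kurtosis) for brevity; a two-sided rule is analogous
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

U = np.random.default_rng(1).standard_normal((120, 5))  # stand-in residual matrix
print(mc_pvalue(mardia_kurtosis(U), n=120, p=5))
```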
Abstract:
My doctoral thesis, In the Circle: Jazz Griots and the Mapping of African American Cultural History in Poetry, examines how the African American poets of the 1960s and 1970s Langston Hughes, David Henderson, Sonia Sanchez, and Amiri Baraka use jazz to root their poetry in the performance tradition. In doing so, each of these poets demonstrates how Black culture, by conceptualizing modes of resistance through performance, was used by peoples of African descent to counter institutionalized racism and discriminatory discourses. For the purposes of this thesis, I therefore focus on four poets engaged in poetic dialogues with the African American musicology, aesthetics, and politics of the 1960s and 1970s. These poets affirm the centrality of Black literary performativity in ensuring the survival and continuity of African Americans' collective cultural memory. Moreover, I argue that the theorization of African American art as political engagement becomes a central element in the elaboration of a performance-based Black aesthetic. My doctoral thesis thus proposes an original analysis of these four poets, who infuse their poems with references to jazz and politics in order to re-educate the generations of the 2000s about their collective memory.
Abstract:
Emma Hamilton (1765-1815) had a considerable impact at a pivotal moment in European history and art. Showing enormous resilience, she found an effective way to assert her agency and was a powerful source of inspiration for generations of women and artists in their own quests for self-expression and self-realization. This thesis demonstrates that Emma drew her particular power from her ability to negotiate different and sometimes contradictory identities: object and subject; model and sitter; artist, muse and work of art; wife, mistress and prostitute; commoner and aristocrat; socialite and ambassadress; and performer of a myriad of historical, biblical, literary and mythological characters, both male and female. Wife of the English ambassador to Naples, favourite of the Queen of Naples and lover of Admiral Horatio Nelson, she was an agent on the political stage during the revolutionary and Napoleonic era. In her dizzying social ascent, which took her from the most abject poverty to the highest ranks of the English aristocracy, she knew how to adapt, adjust and reinvent herself. She received and entertained countless writers, artists, scientists, nobles, diplomats and members of royalty. She participated in the development and dissemination of Neoclassicism at the very moment of its efflorescence. She created her Attitudes, a performance answering her era's taste for classicism, which was admired and imitated across Europe and inspired generations of female performers. She learned to dance the tarantella and introduced it into aristocratic salons. She influenced a network of women extending from Paris to Saint Petersburg and including Élisabeth Vigée-Le Brun, Germaine de Staël and Juliette Récamier. A model without peer, she inspired several artists to produce works they acknowledged as among their best. She was depicted by the greatest artists of her time, including Angelica Kauffman, Benjamin West, Élisabeth Vigée-Le Brun, George Romney, James Gillray, Joseph Nollekens, Joshua Reynolds, Thomas Lawrence and Thomas Rowlandson. She repeatedly pushed against social boundaries and mores. Nevertheless, Emma did not attempt to present a coherent, unified, polished identity. On the contrary, she was a kaleidoscope of multiple selves that she kept active and in dialogue with one another, continually rearranging her facets so as to express herself fully while simultaneously presenting to others what they wished to see.
Abstract:
This thesis consists of three articles and presents the results of research aimed at improving current techniques for using data associated with certain tasks to help train neural networks on a different task. The first two articles present new datasets created to allow better evaluation of this type of machine learning technique. The first article introduces a suite of datasets for the task of automatic recognition of handwritten digits. These datasets were generated from an existing dataset, MNIST, to which new factors of variation were added. The second article introduces a dataset for the task of automatic facial expression recognition. This dataset is composed of face images that were automatically collected from the Web and then labelled. The third and final article presents two new approaches, in the multi-task learning setting, for taking advantage of data for one task in order to improve a model's performance on a different task. The first approach is a generalization of the recently proposed Maxout units, while the second consists of applying, in a supervised setting, a technique for encouraging neurons to learn orthogonal functions that was originally proposed for use in a semi-supervised setting.
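For concreteness, here is a minimal sketch of the standard Maxout unit that the third article builds on (each output takes the maximum over k affine "pieces"); the shapes and values are illustrative, and this is not the thesis's generalization itself.

```python
# Maxout unit sketch: k affine maps of the input, combined by an
# element-wise max across the k pieces. All shapes are illustrative.
import numpy as np

def maxout(x, W, b):
    """x: (d,), W: (k, m, d), b: (k, m) -> (m,) output units."""
    z = np.einsum("kmd,d->km", W, x) + b  # k affine maps into m units
    return z.max(axis=0)                  # max over the k pieces

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W = rng.standard_normal((3, 4, 8))  # k=3 pieces, m=4 units, d=8 inputs
b = rng.standard_normal((3, 4))
print(maxout(x, W, b).shape)  # (4,)
```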
Abstract:
The Global Positioning System (GPS), with its high integrity, continuous availability and reliability, revolutionized radio-ranging navigation. With four or more GPS satellites in view, a GPS receiver can find its location anywhere over the globe with an accuracy of a few meters. Higher accuracy, within centimeters or even millimeters, is achievable by correcting the GPS signal with an external augmentation system. The use of satellites for critical applications like navigation has become a reality through the development of augmentation systems (WAAS, SDCM, EGNOS, etc.) whose primary objective is to provide the essential integrity information needed for navigation service in their respective regions. Apart from these, many countries have initiated space-based regional augmentation systems of their own, such as GAGAN and IRNSS of India, MSAS and QZSS of Japan, and COMPASS of China. In future, these regional systems will operate simultaneously and emerge as a Global Navigation Satellite System (GNSS) supporting a broad range of activities in the global navigation sector. Among the different error sources in precise GPS positioning, the propagation delay due to atmospheric refraction is a limiting factor on the achievable accuracy. Although WADGPS, which aims at accurate positioning over a large area, broadcasts corrections for the different errors involved in GPS ranging, including ionospheric and tropospheric errors, the large temporal and spatial variability of atmospheric parameters, especially in the lower atmosphere (troposphere), means that the broadcast tropospheric corrections are not sufficiently accurate. This necessitates estimating the tropospheric error from realistic values of tropospheric refractivity. Presently available methodologies for estimating tropospheric delay are mostly based on atmospheric data and GPS measurements from mid-latitude regions, where atmospheric conditions differ significantly from those over the tropics; no comparable attempts had been made over the tropics. In practice, when measured atmospheric parameters are not available, analytical models developed from mid-latitude data are the only option, and their major drawback is that they neglect the seasonal variation of atmospheric parameters at stations near the equator; in the tropics these models underestimate the delay on quite a few occasions. In this context, the present study is a first and major step towards the development of tropospheric delay models for the Indian region, a prime requisite for future space-based navigation programs (GAGAN and IRNSS). Apart from models based on measured surface parameters, a region-specific model that requires no measured atmospheric parameter as input, depending only on latitude and day of the year, was developed for the tropical region with emphasis on the Indian sector. The large variability of atmospheric water vapor content over short spatial and/or temporal scales makes its measurement rather involved and expensive. A local network of GPS receivers is an effective tool for remote sensing of water vapor over land, and this recently developed technique proves effective for measuring precipitable water (PW). The potential of using GPS to estimate atmospheric water vapor in all weather conditions and with high temporal resolution is examined; this will be useful for retrieving columnar water vapor from ground-based GPS data.
A good network of GPS receivers could be a major source of water vapor information for Numerical Weather Prediction models and could act as a surrogate for the data gap in microwave remote sensing of water vapor over land.
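To illustrate how surface parameters enter such analytical delay models, here is a minimal sketch of the classical Saastamoinen zenith tropospheric delay, one of the mid-latitude models of the kind discussed; the input values are illustrative tropical conditions, and this is not the region-specific model developed in the thesis.

```python
# Saastamoinen zenith tropospheric delay (meters) from surface pressure,
# temperature and water-vapour partial pressure. Inputs below are invented
# but typical warm, humid, sea-level tropical values.
import math

def saastamoinen_ztd(p_hpa, t_kelvin, e_hpa, lat_rad=0.0, h_km=0.0):
    """Zenith delay: 0.002277/f * (P + (1255/T + 0.05) e),
    with f a small gravity correction for latitude and height."""
    f = 1 - 0.00266 * math.cos(2 * lat_rad) - 0.00028 * h_km
    return 0.002277 / f * (p_hpa + (1255.0 / t_kelvin + 0.05) * e_hpa)

print(f"ZTD ~ {saastamoinen_ztd(1010.0, 303.0, 30.0):.2f} m")  # roughly 2.6 m
```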
Abstract:
It has become clear over the last few years that many deterministic dynamical systems described by simple but nonlinear equations with only a few variables can behave in an irregular or random fashion. This phenomenon, commonly called deterministic chaos, is essentially due to the fact that we cannot deal with infinitely precise numbers. In these systems, trajectories emerging from nearby initial conditions diverge exponentially as time evolves, and therefore any small error in the initial measurement spreads considerably with time, leading to unpredictable and chaotic behaviour. The thesis work is mainly centered on the asymptotic behaviour of nonlinear and nonintegrable dissipative dynamical systems. It is found that completely deterministic nonlinear differential equations describing such systems can exhibit random or chaotic behaviour. Theoretical studies on this chaotic behaviour can enhance our understanding of various phenomena such as turbulence, nonlinear electronic circuits, erratic behaviour of the heart and brain, fundamental molecular reactions involving DNA, meteorological phenomena, fluctuations in the cost of materials, and so on. Chaos is studied mainly under two different approaches: the nature of the onset of chaos and the statistical description of the chaotic state.
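The sensitivity to initial conditions described above can be seen in a few lines with the logistic map; this toy sketch is illustrative and not part of the thesis's analysis.

```python
# Two logistic-map trajectories starting 1e-10 apart diverge roughly
# exponentially until the separation saturates at the size of the attractor.
import math

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-10
for _ in range(40):
    x, y = logistic(x), logistic(y)
print(f"separation after 40 steps: {abs(x - y):.3e}")  # grown by many orders

# Lyapunov exponent from the average log stretching rate |r(1 - 2x)|;
# for r = 4 the exact value is ln 2.
x, lyap, r, n_iter = 0.4, 0.0, 4.0, 100_000
for _ in range(n_iter):
    lyap += math.log(abs(r * (1.0 - 2.0 * x)))
    x = logistic(x)
print(f"Lyapunov exponent ~ {lyap / n_iter:.3f}  (ln 2 = {math.log(2):.3f})")
```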
Abstract:
The results of an investigation into the limits of the random errors contained in the basic data of Physical Oceanography and their propagation through the computational procedures are presented in this thesis. It also suggests a method which increases the reliability of the derived results. The thesis is presented in eight chapters, including the introductory chapter. Chapter 2 discusses the general theory of errors relevant to the propagation of errors in Physical Oceanographic computations. The error components contained in the independent oceanographic variables, namely temperature, salinity and depth, are delineated and quantified in chapter 3. Chapter 4 discusses and derives the magnitude of errors in the computation of the dependent oceanographic variables, namely density in situ, σt, specific volume and specific volume anomaly, due to the propagation of errors contained in the independent oceanographic variables. The errors propagated into the computed values of the derived quantities, namely dynamic depth and relative currents, are estimated and presented in chapter 5. Chapter 6 reviews the existing methods for the identification of the level of no motion and suggests a method for identifying a reliable zero reference level. Chapter 7 discusses the available methods for extending the zero reference level into shallow regions of the oceans and suggests a new, more reliable method; a procedure of graphically smoothing dynamic topographies between the error limits to provide more reliable results is also suggested in this chapter. Chapter 8 deals with the computation of the geostrophic current from these smoothed values of dynamic heights, with reference to the selected zero reference level. The summary and conclusions are also presented in this chapter.
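As a minimal sketch of the error-propagation idea in chapters 3 to 5, the code below carries first-order random errors in temperature and salinity into density using a simplified linear equation of state; the coefficients and uncertainties are illustrative values, not the thesis's.

```python
# First-order (Gaussian) error propagation into density via a simplified
# linear equation of state. All coefficients below are illustrative.
ALPHA = 2.0e-4   # thermal expansion coefficient (1/K)
BETA = 7.6e-4    # haline contraction coefficient (kg/g)
RHO0 = 1025.0    # reference density (kg/m^3)

def density(T, S, T0=15.0, S0=35.0):
    """Linearized equation of state: rho = rho0 (1 - alpha dT + beta dS)."""
    return RHO0 * (1 - ALPHA * (T - T0) + BETA * (S - S0))

def density_error(sigma_T, sigma_S):
    """sigma_rho^2 = (d rho/dT)^2 sigma_T^2 + (d rho/dS)^2 sigma_S^2."""
    drho_dT = -RHO0 * ALPHA
    drho_dS = RHO0 * BETA
    return ((drho_dT * sigma_T) ** 2 + (drho_dS * sigma_S) ** 2) ** 0.5

# Hypothetical measurement uncertainties: 0.02 K in T, 0.01 g/kg in S
print(f"sigma_rho ~ {density_error(0.02, 0.01):.4f} kg/m^3")
```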
Abstract:
The paper summarizes the design and implementation of a quadratic edge detection filter, based on the Volterra series, for enhancing calcifications in mammograms. The proposed filter can account for much of the polynomial nonlinearity inherent in the input mammogram image and can replace conventional edge detectors such as the Laplacian and Gaussian. The filter gives rise to improved visualization and early detection of microcalcifications, which, if left undetected, can lead to breast cancer. The performance of the filter is analyzed and found superior to conventional spatial edge detectors.
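For readers unfamiliar with the form, here is a minimal sketch of a second-order (quadratic) Volterra filter of the general kind described, with tiny invented kernels rather than the paper's coefficients.

```python
# Quadratic Volterra filter over a 3x3 window: the output adds a bilinear
# (pairwise-product) term to the usual linear convolution term.
# y[m,n] = sum_ij h1[i,j] x[m+i,n+j]
#        + sum_ijkl h2[ij,kl] x[m+i,n+j] x[m+k,n+l]
import numpy as np

def quadratic_volterra(img, h1, h2):
    """h1: (3,3) linear kernel; h2: (9,9) quadratic kernel over the window."""
    out = np.zeros_like(img, dtype=float)
    pad = np.pad(img.astype(float), 1, mode="edge")
    for m in range(img.shape[0]):
        for n in range(img.shape[1]):
            w = pad[m:m + 3, n:n + 3]
            lin = np.sum(h1 * w)
            quad = w.ravel() @ h2 @ w.ravel()  # bilinear form in the window
            out[m, n] = lin + quad
    return out

rng = np.random.default_rng(0)
img = rng.random((32, 32))                 # stand-in image patch
h1 = np.zeros((3, 3)); h1[1, 1] = 1.0      # identity linear part
h2 = 0.01 * rng.standard_normal((9, 9))    # small illustrative quadratic kernel
edges = quadratic_volterra(img, h1, h2)
```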
Abstract:
An important feature of maintaining agricultural stability in the millennia-old mountain oases of northern Oman is the temporary abandonment of terraces. To analyse the effects of a fallow period on soil microbial performance, i.e. microbial activity and microbial biomass, samples of eight terrace soils abandoned for different periods were collected in situ, assigned to four fallow age classes and incubated for 30 days in the laboratory after rewetting. The younger fallow age classes of 1 and 5 years were based on the farmers' recollections; the two older fallow age classes of 10–20 and 25–60 years were assigned according to the increase in the D-to-L ratios of the valine and leucine enantiomers. The increase in these two ratios was in agreement with that of the D-to-L ratio of lysine. The strongest relationship was observed between the increase in the D-to-L ratio of lysine and the decrease in soil microbial biomass C. However, the closest correspondence between increasing fallow age and soil properties was revealed by the decreases in cumulative respiration and net N mineralisation rates with decreasing availability of substrate to soil microorganisms. During the 30-day incubation following rewetting, relative changes in microbial activity (respiration and net N mineralisation) and microbial biomass (C and N) indices were similar in the eight terrace soils at a fallow age-class-specific level, indicating that the same basic processes occurred in all of the sandy terrace soils investigated.
Abstract:
The overall aim of the work presented was to evaluate soil health management with a specific focus on soil-borne diseases of peas. For that purpose, field experiments were carried out from 2009 until 2013 to assess crop performance and pathogen occurrence in the rotation winter pea-maize-winter wheat, and to determine whether the application of composts can improve system performance. The winter peas were left untreated or inoculated with Phoma medicaginis, in the presence or absence of yard waste compost at a rate of 5 t dry matter ha-1. A second application of compost was made to the winter wheat. Fusarium spp. were isolated and identified from the roots of all three crops, and the Ascochyta complex pathogens from peas. Bioassays were conducted under controlled conditions to assess the susceptibility of two pea cultivars to Fusarium avenaceum, F. solani, P. medicaginis and Didymella pinodes, and of nine plant species to F. avenaceum. Effects of compost applications and temperature on pea diseases were also assessed. Application of composts overall stabilized crop performance, but it neither led to significant yield increases nor affected pathogen composition and occurrence. Phoma medicaginis dominated the pathogen complex on peas. F. graminearum, F. culmorum, F. proliferatum, Microdochium nivale, F. crookwellense, F. sambucinum, F. oxysporum, F. avenaceum and F. equiseti were frequently isolated from maize and winter wheat, with no obvious influence of the pre-crop on the Fusarium species composition. The spring pea Santana was considerably more susceptible to the pathogens tested than the winter pea EFB33 in both sterile sand and non-sterilized field soil. F. avenaceum was the most aggressive pathogen, followed by P. medicaginis, D. pinodes and F. solani. Aggressiveness of all pathogens was greatly reduced in non-sterile field soil. F. avenaceum caused severe symptoms on the roots of all nine plant species tested; especially susceptible, in addition to peas, were Trifolium repens, T. subterraneum, Brassica juncea and Sinapis alba. Reducing growing temperatures from 19/16°C day/night to 16/12°C and 13/10°C did not affect the efficacy of compost; it reduced plant growth and slightly increased disease on EFB33, whereas the highest disease severity on Santana was observed at the highest temperature, 19/16°C. Application of 20% v/v of compost reduced disease on peas due to all four pathogens, depending on pea variety, pathogen and growing medium used. Suppression was also achieved with a lower application rate of 3.5% v/v. Tests with γ-sterilized compost suggest that the suppression of disease caused by Fusarium spp. is biological in origin, whereas chemical and physical properties of compost play an additional role in the suppression of disease caused by D. pinodes and P. medicaginis.
Abstract:
Most psychophysical studies of object recognition have focussed on the recognition and representation of individual objects on which subjects had previously been explicitly trained. Correspondingly, modeling studies have often employed a 'grandmother'-type representation in which the objects to be recognized were represented by individual units. However, objects in the natural world are commonly members of a class containing a number of visually similar objects, such as faces, for which physiology studies have provided support for a representation based on a sparse population code, which permits generalization from the learned exemplars to novel objects of that class. In this paper, we present results from psychophysical and modeling studies intended to investigate object recognition in natural ('continuous') object classes. In two experiments, subjects were trained to perform subordinate-level discrimination in a continuous object class, images of computer-rendered cars, created using a 3D morphing system. By comparing the recognition performance of trained and untrained subjects we could estimate the effects of viewpoint-specific training and infer properties of the object-class-specific representation learned as a result of training. We then compared the experimental findings to simulations, building on our recently presented HMAX model of object recognition in cortex, to investigate the computational properties of a population-based object class representation as outlined above. We find experimental evidence, supported by modeling results, that training builds a viewpoint- and class-specific representation that supplements a pre-existing representation with lower shape discriminability but possibly greater viewpoint invariance.
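As a minimal sketch of the population-code idea (in the spirit of radial-basis view-tuned units, but with invented feature vectors and tuning widths), a novel morph can be represented by the graded responses of units tuned to trained exemplars rather than by a single 'grandmother' unit.

```python
# Population code sketch: each unit responds with a Gaussian (RBF) tuning
# to the similarity between the stimulus and its stored exemplar, so a
# novel morph evokes a graded, distributed activation pattern.
import numpy as np

def population_response(stimulus, prototypes, sigma=1.0):
    """Responses of units tuned to stored exemplars (rows of `prototypes`)."""
    d2 = np.sum((prototypes - stimulus) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
prototypes = rng.standard_normal((5, 10))          # 5 trained car exemplars
novel = 0.6 * prototypes[0] + 0.4 * prototypes[1]  # a morph between two cars
print(population_response(novel, prototypes).round(3))  # distributed code
```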
Abstract:
One of the key challenges in face perception lies in determining the contribution of different cues to face identification. In this study, we focus on the role of color cues. Although color appears to be a salient attribute of faces, past research has suggested that it confers little recognition advantage for identifying people. Here we report experimental results suggesting that color cues do play a role in face recognition and their contribution becomes evident when shape cues are degraded. Under such conditions, recognition performance with color images is significantly better than that with grayscale images. Our experimental results also indicate that the contribution of color may lie not so much in providing diagnostic cues to identity as in aiding low-level image-analysis processes such as segmentation.