889 results for Man-Machine Perceptual Performance.
Abstract:
This thesis presents an examination of the factors which influence the performance of eddy-current machines and the way in which they affect the optimality of those machines. After a brief introduction to the types of eddy-current machine considered, the applications to which these machines are put are examined. A list of parameters by which to assess their performance is obtained by considering the machine as part of a system. In this way an idea of what constitutes an optimal machine is obtained. The third chapter then identifies the factors which affect the performance and makes a quantitative evaluation of their effects. Here the various alternative configurations and components are compared with regard to their influence on the mechanical, electromagnetic, and thermal performance criteria of the machine. Chapter four contains a brief review of the methods of controlling eddy-current machines by electronic means, using thyristors or transistors as the final control element. Where necessary, the results of previous workers in the field of electrical machines have been extended or adapted to increase the usefulness of this thesis.
Abstract:
Keyword identification in one of two simultaneous sentences is improved when the sentences differ in F0, particularly when they are almost continuously voiced. Sentences of this kind were recorded, monotonised using PSOLA, and re-synthesised to give a range of harmonic ΔF0s (0, 1, 3, and 10 semitones). They were additionally re-synthesised by LPC with the LPC residual frequency shifted by 25% of F0, to give excitation with inharmonic but regularly spaced components. Perceptual identification of frequency-shifted sentences showed a similar large improvement with nominal ΔF0 as seen for harmonic sentences, although overall performance was about 10% poorer. We compared performance with that of two autocorrelation-based computational models comprising four stages: (i) peripheral frequency selectivity and half-wave rectification; (ii) within-channel periodicity extraction; (iii) identification of the two major peaks in the summary autocorrelation function (SACF); (iv) a template-based approach to speech recognition using dynamic time warping. One model sampled the correlogram at the target-F0 period and performed spectral matching; the other deselected channels dominated by the interferer and performed matching on the short-lag portion of the residual SACF. Both models reproduced the monotonic increase observed in human performance with increasing ΔF0 for the harmonic stimuli, but not for the frequency-shifted stimuli. A revised version of the spectral-matching model, which groups patterns of periodicity that lie on a curve in the frequency-delay plane, showed a closer match to the perceptual data for frequency-shifted sentences. The results extend the range of phenomena originally attributed to harmonic processing to grouping by common spectral pattern.
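A minimal sketch of the summary autocorrelation computation behind stages (i) to (iii) may help make the model pipeline concrete. It substitutes a crude Butterworth filterbank for the usual gammatone channels and a two-square-wave mixture for real speech; the centre frequencies, signal duration and final peak-picking are illustrative assumptions, not the published models.

```python
import numpy as np
from scipy.signal import butter, lfilter

def sacf(signal, fs, centre_freqs, max_lag_s=0.02):
    """Half-wave rectify each channel, autocorrelate within channels, sum."""
    max_lag = int(max_lag_s * fs)
    summary = np.zeros(max_lag)
    for cf in centre_freqs:
        low, high = 0.7 * cf / (fs / 2), min(1.3 * cf / (fs / 2), 0.99)
        b, a = butter(2, [low, high], btype="bandpass")
        channel = np.maximum(lfilter(b, a, signal), 0.0)   # stage (i): rectify
        ac = np.correlate(channel, channel, mode="full")   # stage (ii): periodicity
        summary += ac[len(signal) - 1 : len(signal) - 1 + max_lag]
    return summary  # major peaks sit near the two voices' F0 periods (stage iii)

fs = 16000
t = np.arange(0, 0.25, 1 / fs)
mix = np.sign(np.sin(2 * np.pi * 110 * t)) + np.sign(np.sin(2 * np.pi * 147 * t))
s = sacf(mix, fs, centre_freqs=[200, 400, 800, 1600])
best_lag = np.argmax(s[20:]) + 20                          # skip the zero-lag peak
print(f"strongest periodicity near {fs / best_lag:.1f} Hz")
```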
Abstract:
Currently over 50 million people worldwide wear contact lenses, of which over 75% wear hydrogel lenses. Significant deposition occurs in approximately 80% of hydrogel lenses, and many contact lens wearers cease wearing lenses due to problems associated with deposition. The contact lens field is not alone in encountering complications associated with interactions between the body and artificial devices. The widespread use of man-made materials to replace structures in the body has emphasised the importance of studies that examine the interactions between implantation materials and body tissues. This project used carefully controlled, randomized clinical studies to investigate the interactive effects of contact lens materials, care systems, replacement periods and patient differences. Of principal interest was the influence of these factors on material deposition and their subsequent impact on subjective performance. A range of novel and established analytical techniques was used to examine hydrogel lenses following carefully controlled clinical studies in which clinical performance was meticulously monitored. These studies enabled the inter-relationship between clinical performance and deposition to be evaluated. This project showed that significant differences exist between individuals in their propensity to deposit hydrogel lenses, with approximately 20% of subjects displaying significant deposition irrespective of the lens material. Additionally, materials traditionally categorised together show markedly different spoilation characteristics, which are wholly attributable to their detailed chemical structure. For the first time, the in vivo deposition kinetics of both protein and lipid in charged and uncharged polymers were demonstrated. In addition, the importance of care systems in the deposition process was shown, clearly demonstrating the significance of the quality rather than the quantity of deposition in influencing subjective performance.
Abstract:
We examined the relations between selection for perception and selection for action in a patient, FK, with bilateral damage to his temporal and medial frontal cortices. The task required a simple grasp response to a common object (a cup) in the presence of a distractor (another cup). The target was cued by colour or location, and FK made manual responses. We examined the effects on performance of the cued and uncued dimensions of both the target and the distractor. FK was impaired at perceptually selecting the target when it was cued by colour, specifically when the target's colour, but not its location, changed on successive trials. The effect was sensitive to the relative orientations of targets and distractors, indicating an effect of action selection on perceptual selection when perceptual selection was weakly instantiated. The dimension-specific carry-over effect on reaching was enhanced when there was a temporal delay between the cue and the response, and it disappeared when there was a between-trial delay. The results indicate that perceptual and action selection systems interact to determine the efficiency with which actions to particular objects are selected.
Abstract:
Over the past decade, several experienced Operational Researchers have advanced the view that the theoretical aspects of model building have raced ahead of the ability of people to use them. Consequently, the impact of Operational Research on commercial organisations and the public sector is limited, and many systems fail to achieve their anticipated benefits in full. The primary objective of this study is to examine a complex interactive Stock Control system, and identify the reasons for the differences between the theoretical expectations and the operational performance. The methodology used is to hypothesise all the possible factors which could cause a divergence between theory and practice, and to evaluate numerically the effect each of these factors has on two main control indices - Service Level and Average Stock Value. Both analytical and empirical methods are used, and simulation is employed extensively. The factors are divided into two main categories for analysis - theoretical imperfections in the model, and the usage of the system by Buyers. No evidence could be found in the literature of any previous attempts to place the differences between theory and practice in a system in quantitative perspective nor, more specifically, to study the effects of Buyer/computer interaction in a Stock Control system. The study reveals that, in general, the human factors influencing performance are of a much higher order of magnitude than the theoretical factors, thus providing objective evidence to support the original premise. The most important finding is that, by judicious intervention into an automatic stock control algorithm, it is possible for Buyers to produce results which not only attain but surpass the algorithmic predictions. However, the complexity and behavioural recalcitrance of these systems are such that an innately numerate, enquiring type of Buyer needs to be inducted to realise the performance potential of the overall man/computer system.
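Although the thesis predates modern scripting tools, the kind of comparison it describes is easy to convey in a short simulation sketch: a continuous-review reorder-point policy scored on the two control indices named above, Service Level and Average Stock Value. The (s, Q) policy, the uniform demand and all parameter values are illustrative assumptions, not the system studied.

```python
import random

def simulate(reorder_point, order_qty, periods=5000, lead_time=3):
    """Return (service level, average stock) for a continuous-review (s, Q) policy."""
    random.seed(1)
    stock, pipeline, served, demanded, stock_sum = 50, [], 0, 0, 0
    for t in range(periods):
        # receive any replenishment orders due this period
        stock += sum(q for due, q in pipeline if due == t)
        pipeline = [(due, q) for due, q in pipeline if due != t]
        # stochastic demand (uniform integer demand is an assumption)
        demand = random.randint(0, 10)
        demanded += demand
        served += min(stock, demand)
        stock = max(stock - demand, 0)
        # reorder when the inventory position falls to the reorder point
        position = stock + sum(q for _, q in pipeline)
        if position <= reorder_point:
            pipeline.append((t + lead_time, order_qty))
        stock_sum += stock
    return served / demanded, stock_sum / periods

sl, avg = simulate(reorder_point=30, order_qty=40)
print(f"Service Level: {sl:.2%}, Average Stock: {avg:.1f} units")
```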
Abstract:
This thesis introduces and develops a novel real-time predictive maintenance system that estimates machine system parameters from the motion current signature. Recently, motion current signature analysis has been proposed as an alternative to the use of sensors for monitoring internal faults of a motor. A maintenance system based upon the analysis of the motion current signature avoids the need to implement and maintain expensive motion-sensing technology. By developing nonlinear dynamical analysis for the motion current signature, the research described in this thesis implements a novel real-time predictive maintenance system for current and future manufacturing machine systems. A crucial concept underpinning this project is that the motion current signature contains information relating to the machine system parameters, and that this information can be extracted using nonlinear mapping techniques, such as neural networks. Towards this end, a proof-of-concept procedure is performed which substantiates this concept. A simulation model, TuneLearn, is developed to generate the large amount of training data required by the neural network approach. Statistical validation and verification of the model are performed to ascertain confidence in the simulated motion current signature. The validation experiment concludes that, although the simulation model generates a good macro-dynamical mapping of the motion current signature, it fails to accurately map the micro-dynamical structure, owing to the lack of knowledge regarding higher-order and nonlinear factors, such as backlash and compliance. The failure of the simulation model to capture the micro-dynamical structure suggests the presence of nonlinearity in the motion current signature, which motivated surrogate data testing for nonlinearity. The results confirm the presence of nonlinearity in the motion current signature, thereby motivating the use of nonlinear techniques for further analysis. The outcomes of these experiments show that nonlinear noise reduction combined with a linear reverse algorithm offers precise machine system parameter estimation from the motion current signature for the implementation of the real-time predictive maintenance system. Finally, such a linear reverse algorithm, BJEST, is developed and applied to the motion current signature to estimate the machine system parameters.
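The surrogate data test referred to above is a standard recipe, sketched below under simple assumptions: phase-randomised surrogates preserve the signal's power spectrum while destroying any nonlinear structure, and a discriminating statistic is compared across signal and surrogates. The quadratically phase-coupled toy signal stands in for a real motion current signature, and the time-reversal asymmetry statistic is a generic choice rather than the one used in the thesis.

```python
import numpy as np

def phase_randomised_surrogate(x, rng):
    """Same power spectrum as x, randomised Fourier phases (the linear null)."""
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spectrum))
    phases[0] = 0.0                                  # keep the mean intact
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(x))

def trev(x, lag=1):
    """Time-reversal asymmetry: zero in expectation for linear Gaussian data."""
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

rng = np.random.default_rng(0)
t = np.arange(4096)
signature = np.sin(0.05 * t) + 0.5 * np.sin(0.1 * t + 1.0)  # phase-coupled toy signal
stat = trev(signature)
surrogates = [trev(phase_randomised_surrogate(signature, rng)) for _ in range(99)]
rank = sum(s < stat for s in surrogates)
print(f"statistic ranks {rank}/99; an extreme rank rejects the linear null")
```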
Abstract:
We examined whether inductive reasoning development is better characterized by accounts assuming an early category bias or an early perceptual bias. We trained 264 children aged 3 to 9 years to categorize novel insects using a rule that directly pitted category membership against appearance. This was followed by an induction task with perceptual distractors at different levels of featural similarity. An additional 52 children were given the same training followed by an induction task with alternative stimuli. Categorization performance was consistently high; however, we found a gradual transition from a perceptual bias in our youngest children to a category bias at around age 6 to 7. In addition, children of all ages were equally distracted by higher levels of featural similarity. The transition is therefore unlikely to be due to an increased ability to inhibit perceptual distractors. Instead, we argue that the transition is driven by a fundamental change in children's understanding of category membership.
Abstract:
Insights from the stream of research on knowledge calibration, which refers to the correspondence between accuracy and confidence in knowledge, enable a better understanding of the consequences of managers' inaccurate perceptions. This paper examines the consequences of inaccurate managerial knowledge through the lens of knowledge calibration. Specifically, it examines the antecedent role of miscalibration of knowledge in strategy formation. It is postulated that miscalibrated managers who overestimate external factors and display high confidence in their estimates are likely to enact strategies that are relatively more evolutionary and incremental in nature, whereas miscalibrated managers who overestimate internal factors and display high confidence in their estimates are likely to enact strategies that are relatively more discontinuous and disruptive in nature. Perspectives from social cognitive theory provide support for the underlying processes. The paper, in part, explains the paradox of the prevalence of inaccurate managerial perceptions alongside efficacious performance. It also advances the literature on strategy formation through the application of the construct of knowledge calibration.
Abstract:
The low energy consumption of IEEE 802.15.4 networks makes them a strong candidate for machine-to-machine (M2M) communications. As multiple M2M applications with 802.15.4 networks may be deployed closely and independently in residential or enterprise areas, supporting reliable and timely M2M communications can be a big challenge, especially when potential hidden terminals appear. In this paper, we investigate two scenarios of 802.15.4 network-based M2M communication. An analytic model is proposed to understand the performance of uncoordinated coexisting 802.15.4 networks, taking the sleep mode operations of the networks into account. Simulations verified the analytic model. It is observed that reducing the sleep time can increase the performance of M2M communications and that, when the networks are uncoordinated, reducing the overlap ratio can effectively improve network performance.
Abstract:
The IEEE 802.15.4 standard has been proposed for low-power wireless personal area networks and can serve as an important component in machine-to-machine (M2M) networks for data collection, monitoring and control functions. With an increasing number of machine devices enabled by M2M technology and equipped with 802.15.4 radios, it is likely that multiple 802.15.4 networks will be deployed closely, for example, to collect data for smart metering in residential or enterprise areas. In such scenarios, supporting reliable communications for monitoring and control applications is a big challenge. The problem becomes more severe due to potential hidden terminals when the operations of multiple 802.15.4 networks are uncoordinated. In this paper, we investigate this problem in three typical scenarios and propose an analytic model to reveal how the performance of coexisting 802.15.4 networks may be affected by uncoordinated operations under these scenarios. Simulations are used to validate the analytic model. It is observed that uncoordinated operations may lead to a significant degradation of system performance in M2M applications. With the proposed analytic model, we also investigate the performance limits of the 802.15.4 networks and the conditions under which coordinated operations may be required to support M2M applications.
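A toy Monte Carlo sketch of the uncoordinated-coexistence effect analysed in this and the preceding abstract is given below. It reduces the analytic model to a slotted caricature in which a transmission is lost whenever a hidden node in the overlap region transmits in the same slot; the duty cycles, transmission probabilities and collision rule are all illustrative simplifications, not the paper's model.

```python
import random

def delivery_ratio(overlap, active_fraction, tx_prob, slots=100000, seed=7):
    """Fraction of network A's transmissions that survive hidden-node clashes."""
    random.seed(seed)
    ok = sent = 0
    for _ in range(slots):
        a_active = random.random() < active_fraction   # sleep-mode duty cycle
        b_active = random.random() < active_fraction
        if a_active and random.random() < tx_prob:
            sent += 1
            # clash only if B transmits in the same slot inside the overlap region
            hidden_clash = (b_active and random.random() < tx_prob
                            and random.random() < overlap)
            ok += 0 if hidden_clash else 1
    return ok / max(sent, 1)

for overlap in (0.2, 0.5, 0.9):
    ratio = delivery_ratio(overlap, active_fraction=0.3, tx_prob=0.2)
    print(f"overlap ratio {overlap}: delivery ratio {ratio:.3f}")
```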
Abstract:
Combining the results of classifiers has shown much promise in machine learning generally. However, published work on combining text categorizers suggests that, for this particular application, improvements in performance are hard to attain. Explorative research using a simple voting system is presented and discussed in the light of a probabilistic model that was originally developed for safety critical software. It was found that typical categorization approaches produce predictions which are too similar for combining them to be effective since they tend to fail on the same records. Further experiments using two less orthodox categorizers are also presented which suggest that combining text categorizers can be successful, provided the essential element of ‘difference’ is considered.
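The role of 'difference' is easy to demonstrate with the simple voting scheme described above. In the hand-crafted sketch below, three categorizers whose errors coincide gain nothing from voting, while three of similar individual accuracy whose errors fall on different records vote to a perfect score; the label vectors are illustrative only.

```python
import numpy as np

def majority_vote(predictions):
    """predictions: (n_classifiers, n_documents) array of 0/1 category labels."""
    votes = predictions.sum(axis=0)
    return (votes * 2 > predictions.shape[0]).astype(int)   # ties fall to 0

truth = np.array([1, 1, 1, 1, 0, 0, 0, 0])
similar = np.array([[1, 1, 0, 0, 0, 0, 0, 1],   # three categorizers that fail
                    [1, 1, 0, 0, 0, 0, 0, 1],   # on the same records...
                    [1, 1, 0, 1, 0, 0, 0, 1]])
diverse = np.array([[1, 1, 0, 1, 0, 0, 0, 1],   # ...versus three whose errors
                    [1, 0, 1, 1, 0, 0, 1, 0],   # fall on different records
                    [0, 1, 1, 1, 1, 0, 0, 0]])
for name, preds in (("similar", similar), ("diverse", diverse)):
    acc = (majority_vote(preds) == truth).mean()
    print(name, "vote accuracy:", acc)
```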
Abstract:
Background - The binding between peptide epitopes and major histocompatibility complex proteins (MHCs) is an important event in the cellular immune response. Accurate prediction of the binding between short peptides and the MHC molecules has long been a principal challenge for immunoinformatics. Recently, the modeling of MHC-peptide binding has come to emphasize quantitative predictions: instead of categorizing peptides as "binders" or "non-binders" or as "strong binders" and "weak binders", recent methods seek to make predictions about precise binding affinities. Results - We developed a quantitative support vector machine regression (SVR) approach, called SVRMHC, to model peptide-MHC binding affinities. As a non-linear method, SVRMHC was able to generate models that out-performed existing linear models, such as the "additive method". By adopting a new "11-factor encoding" scheme, SVRMHC takes into account similarities in the physicochemical properties of the amino acids constituting the input peptides. When applied to MHC-peptide binding data for three mouse class I MHC alleles, the SVRMHC models produced more accurate predictions than those produced previously. Furthermore, comparisons based on Receiver Operating Characteristic (ROC) analysis indicated that SVRMHC was able to out-perform several prominent methods in identifying strongly binding peptides. Conclusion - As a method with demonstrated performance in the quantitative modeling of MHC-peptide binding and in identifying strong binders, SVRMHC is a promising immunoinformatics tool with not inconsiderable future potential.
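In the same spirit as SVRMHC, a few lines of scikit-learn suffice to sketch SVR on encoded peptides. The two placeholder property scales (HYDRO, VOLUME) stand in for the published "11-factor encoding", and the peptides and affinities are synthetic, so this illustrates the workflow only, not the reported models.

```python
import numpy as np
from sklearn.svm import SVR

AA = "ACDEFGHIKLMNPQRSTVWY"
HYDRO = dict(zip(AA, np.linspace(-1, 1, 20)))    # placeholder property scales,
VOLUME = dict(zip(AA, np.linspace(0, 1, 20)))    # not real physicochemical data

def encode(peptide):
    """Concatenate per-residue property values into one feature vector."""
    return [v for aa in peptide for v in (HYDRO[aa], VOLUME[aa])]

rng = np.random.default_rng(0)
peptides = ["".join(rng.choice(list(AA), 9)) for _ in range(200)]   # synthetic 9-mers
affinities = [sum(HYDRO[aa] for aa in p) + rng.normal(0, 0.1) for p in peptides]

model = SVR(kernel="rbf", C=10.0, epsilon=0.05)                     # non-linear regression
model.fit([encode(p) for p in peptides], affinities)
print("predicted affinity:", model.predict([encode(peptides[0])])[0])
```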
Abstract:
Data fluctuation across multiple measurements in Laser-Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, for example by averaging the spectral signals or applying spectrum standardization over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation based on the statistical distribution of the measured spectral data. Through the improved segmented weighting function, information on spectral data within the normal distribution is retained in the regression model, while information on outliers is restrained or removed. Copper concentration analysis experiments on 16 certified standard brass samples were carried out. The average relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness than quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better overall performance, in terms of model robustness and convergence speed, than four previously published weighting functions.
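The reweighting idea can be sketched compactly if LS-SVM is replaced by plain iteratively reweighted ridge regression. The piecewise weight function below (full weight within about two robust standard deviations, tapering to zero by three) mimics the role of a segmented weighting function but is not the published form; the calibration data are synthetic.

```python
import numpy as np

def segmented_weights(residuals):
    """Piecewise weights from the residual distribution: keep, taper, reject."""
    s = 1.4826 * np.median(np.abs(residuals - np.median(residuals)))  # robust scale
    z = np.abs(residuals) / max(s, 1e-12)
    return np.where(z <= 2.0, 1.0, np.where(z >= 3.0, 0.0, 3.0 - z))  # taper over 2..3

def robust_ridge(X, y, lam=1e-3, iterations=5):
    """Iteratively reweighted ridge fit; outliers progressively lose influence."""
    w = np.ones(len(y))
    for _ in range(iterations):
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X + lam * np.eye(X.shape[1]), X.T @ W @ y)
        w = segmented_weights(y - X @ beta)
    return beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(60), rng.uniform(0, 10, 60)])   # intensity vs concentration
y = 0.5 + 2.0 * X[:, 1] + rng.normal(0, 0.2, 60)
y[::15] += 8.0                                               # simulated shot-to-shot outliers
print("robust fit [intercept, slope]:", robust_ridge(X, y).round(3))
```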
Abstract:
The concept of measurement-enabled production is based on integrating metrology systems into production processes and has generated significant interest in industry, due to its potential to increase process capability and accuracy, which in turn reduces production times and eliminates defective parts. One of the most promising methods of integrating metrology into production is the use of external metrology systems to compensate machine tool errors in real time. This paper describes the development and experimental performance evaluation of a low-cost, laser-tracker-assisted prototype three-axis machine tool. Real-time corrections of the machine tool's absolute volumetric error have been achieved. As a result, significant increases in static repeatability and accuracy have been demonstrated, allowing the low-cost three-axis machine tool to reliably reach static positioning accuracies below 35 μm throughout its working volume without any prior calibration or error mapping. This is a significant technical development that demonstrates the feasibility of the proposed methods and could have wide-scale industrial application by enabling low-cost machine tools of modest structural integrity, deployed flexibly as end-effectors of robotic automation, to achieve positional accuracies that were previously the preserve of large, high-precision machine tools.
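The compensation principle can be illustrated by a toy closed-loop sketch: the tracker reports the actual tool position, the volumetric error is its deviation from the commanded position, and the next command is offset to cancel it. The synthetic error field, gain and iteration count are illustrative assumptions standing in for live laser-tracker measurements.

```python
import numpy as np

def synthetic_error(p):
    """Stand-in for the machine's volumetric error field (values in mm)."""
    return np.array([0.02 * np.sin(p[0] / 50.0), 0.015 * p[1] / 300.0, -0.01])

def compensated_move(target, gain=0.8, iterations=3):
    """Iteratively offset the commanded position to cancel the measured error."""
    command = target.copy()
    for _ in range(iterations):
        actual = command + synthetic_error(command)   # laser tracker measurement
        command = command - gain * (actual - target)  # corrected command
    actual = command + synthetic_error(command)       # final verification reading
    return np.linalg.norm(actual - target)

target = np.array([120.0, 200.0, 50.0])
residual_mm = compensated_move(target)
print(f"residual volumetric error: {residual_mm * 1000:.1f} um")
```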
Abstract:
As machine tools continue to become increasingly repeatable and accurate, high-precision manufacturers may be tempted to consider how they might utilise machine tools as measurement systems. In this paper, we have explored this paradigm by attempting to repurpose state-of-the-art coordinate measuring machine Uncertainty Evaluating Software (UES) for a machine tool application. We performed live measurements on all the systems in question. Our findings highlight some gaps in UES when applied to machine tools, and we have attempted to identify the sources of variation which led to the discrepancies. The implications of this research include the need to evolve the algorithms within UES if it is to be adapted for on-machine measurement, to improve the robustness of the input parameters and, most importantly, to clarify expectations.