968 results for Performance scores
Abstract:
The current study aimed to investigate and provide further evidence of individual differences as determinants of task performance. This research focused on the effects of the personality traits Openness to Experience and Neuroticism, and of two goal orientation traits, Learning Orientation and Avoid Orientation, on task performance. The hypotheses addressed the predictive validity of the traits, the differential effects of personality and goal orientation traits, and the mediating effects of goal orientation on the relationship between personality and performance. The results were based on questionnaire responses completed by a sample of 103 students. Scores on a computerised Air Traffic Control (ATC) decision-making task were used as a measure of task performance. Learning Orientation was found to be a significant predictor of performance, whilst the effect of Neuroticism was 'approaching' significance. Results indicated strong support for the differential relationship between personality traits and corresponding goal orientation traits. The mediating relationship between Openness to Experience, Learning Orientation and performance was also found to be 'approaching' significance. Results were indicative of the influences of personality and goal orientation on consequent performance outcomes. Implications are discussed, along with suggestions for future research assessing the predictive validity of individual differences in learning contexts.
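The mediation test referred to above can be made concrete with a small sketch. The following Python fragment uses synthetic data only (the variable names and effect sizes are invented for illustration, not taken from the study) and shows the classic regression logic for testing whether Learning Orientation mediates the Openness-performance link: the direct effect of the personality trait should shrink once the mediator is controlled for.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 103  # sample size reported in the abstract

    # Synthetic data: Openness -> Learning Orientation -> performance
    openness = rng.normal(size=n)
    learning = 0.5 * openness + rng.normal(size=n)
    perf = 0.4 * learning + rng.normal(size=n)

    def coeffs(X, y):
        """OLS coefficients, with an intercept column prepended."""
        X = np.column_stack([np.ones(len(y)), X])
        return np.linalg.lstsq(X, y, rcond=None)[0]

    total = coeffs(openness[:, None], perf)[1]           # path c
    a = coeffs(openness[:, None], learning)[1]           # path a
    b, direct = coeffs(np.column_stack([learning, openness]), perf)[1:]  # b, c'

    print(f"total effect c   = {total:.3f}")
    print(f"indirect a*b     = {a * b:.3f}")
    print(f"direct effect c' = {direct:.3f}  (shrinks if mediated)")

With the simulated coefficients above, the indirect path a*b accounts for most of the total effect, which is the signature of mediation the study tested for.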
Abstract:
The performances of five different ESI sources coupled to a polystyrene-divinylbenzene monolithic column were compared in a series of LC-ESI-MS/MS analyses of Escherichia coli outer membrane proteins. The sources selected for comparison included two different modifications of the standard electrospray source, a commercial low-flow sprayer, a stainless steel nanospray needle and a coated glass PicoTip. Respective performances were judged on sensitivity and on the number and reproducibility of significant protein identifications obtained through the analysis of multiple identical samples. Data quality ranged from that of the ground silica capillary, which gave 160 total protein identifications, the lowest number of high-quality peptide hits (3012) and generally lower-intensity peaks, to that of the stainless steel nanospray needle, which gave increased precursor ion abundance, the highest-quality peptide fragmentation spectra (5414), the greatest number of total protein identifications (259) and the highest MASCOT scores (an average increase in score of 27.5% per identified protein). The data presented show that, despite increased variability in comparative ion intensity, the stainless steel nanospray needle provides the highest overall sensitivity. However, the resulting data were less reproducible in terms of proteins identified in complex mixtures, arguably owing to an increased number of high-intensity precursor ion candidates.
Abstract:
This paper analyses the mechanisms through which binding finance constraints can induce debt-constrained firms to improve technical efficiency in order to guarantee positive profits. This hypothesis is tested on a sample of firms in the Italian manufacturing sector. Technical efficiency scores are computed by estimating parametric production frontiers using the one-stage approach of Battese and Coelli [Battese, G., Coelli, T., 1995. A model for technical efficiency effects in a stochastic frontier production function for panel data. Empirical Economics 20, 325-332]. The results support the hypothesis that a restriction in the availability of financial resources can positively affect efficiency.
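The abstract names the Battese and Coelli (1995) one-stage stochastic frontier estimator; a full maximum-likelihood implementation is beyond a short sketch, but the corrected-OLS (COLS) variant below, run on synthetic data, illustrates how per-firm technical-efficiency scores are derived from a parametric production frontier. This is a simplified stand-in, not the authors' estimator.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200  # hypothetical firms

    # Cobb-Douglas data: log y = b0 + b1*log K + b2*log L - u, with u >= 0
    # a one-sided inefficiency term
    logK = rng.normal(2.0, 0.5, n)
    logL = rng.normal(1.0, 0.5, n)
    u = rng.exponential(0.3, n)
    logy = 0.5 + 0.4 * logK + 0.6 * logL - u + rng.normal(0, 0.05, n)

    # COLS: fit OLS, then shift the intercept up to the largest residual
    X = np.column_stack([np.ones(n), logK, logL])
    beta = np.linalg.lstsq(X, logy, rcond=None)[0]
    resid = logy - X @ beta
    frontier_resid = resid - resid.max()     # <= 0 by construction
    efficiency = np.exp(frontier_resid)      # TE in (0, 1], 1 = on the frontier

    print(f"mean technical efficiency: {efficiency.mean():.3f}")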
Abstract:
Visual detection performance (d') is usually an accelerating function of stimulus contrast, which could imply a smooth, threshold-like nonlinearity in the sensory response. Alternatively, Pelli (1985, Journal of the Optical Society of America A, 2, 1508-1532) developed the 'uncertainty model', in which responses were linear with contrast but the observer was uncertain about which of many noisy channels contained the signal. Such internal uncertainty effectively adds noise to weak signals, and predicts the nonlinear psychometric function. We re-examined these ideas by plotting psychometric functions (as z-scores) for two observers (SAW, PRM) with high precision. The task was to detect a single, vertical, blurred line at the fixation point, or to identify its polarity (light vs dark). Detection of a known polarity was nearly linear for SAW but very nonlinear for PRM. Randomly interleaving light and dark trials reduced performance and rendered it nonlinear for SAW, but had little effect for PRM. This occurred for both single-interval and 2AFC procedures. The whole pattern of results was well predicted by our Monte Carlo simulation of Pelli's model, with only two free parameters. SAW (highly practised) had very low uncertainty. PRM (with little prior practice) had much greater uncertainty, resulting in lower contrast sensitivity, nonlinear performance, and no effect of external (polarity) uncertainty. For SAW, identification was about √2 better than detection, implying statistically independent channels for stimuli of opposite polarity, rather than an opponent (light-dark) channel. These findings strongly suggest that noise and uncertainty, rather than sensory nonlinearity, limit visual detection.
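A Monte Carlo sketch of Pelli-style uncertainty (assumed parameter values; not the authors' code) shows how a linear transducer plus many monitored channels yields the nonlinear psychometric function described above: with M = 1 proportion correct rises smoothly with contrast, while with M = 100 weak signals are swamped by the maxima of irrelevant noise channels.

    import numpy as np

    rng = np.random.default_rng(2)

    def pc_2afc(contrast, M, trials=20000):
        """2AFC proportion correct under intrinsic uncertainty: the
        observer picks the interval whose maximum response across the
        M monitored channels is larger; only one channel in the signal
        interval actually carries the (linearly transduced) signal."""
        sig = rng.normal(size=(trials, M))
        sig[:, 0] += contrast                 # linear transducer, unit gain
        noise = rng.normal(size=(trials, M))
        return np.mean(sig.max(axis=1) > noise.max(axis=1))

    for M in (1, 100):                        # low vs high uncertainty
        print(M, [round(pc_2afc(c, M), 2) for c in (0.5, 1.0, 2.0, 4.0)])

Converting these proportions to z-scores reproduces the near-linear function of a low-uncertainty observer (SAW) and the accelerating one of a high-uncertainty observer (PRM).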
Abstract:
This study examined the extent to which students could fake responses on personality and approaches-to-studying questionnaires, and the effects of such responding on the validity of non-cognitive measures for predicting academic performance (AP). University students produced a profile of an ‘ideal’ student using the Big-Five personality taxonomy, which yielded a stereotype with low scores for Neuroticism and high scores for the other four traits. A subset of participants was allocated to a condition in which they were instructed to fake their responses as University applicants, portraying themselves as positively as possible. These participants scored higher than those in a control condition on measures of deep and strategic approaches to studying, but lower on the surface approach variable. Conscientiousness was a significant predictor of AP in both groups, but the predictive effect of the approaches-to-studying variables and Openness to Experience identified in the control group was lower in the group who faked their responses. Non-cognitive psychometric measures can be valid predictors of AP, but scores on these measures can be affected by instructional set. Further implications for psychometric measurement in educational settings are discussed.
Abstract:
The contributions of this research fall into three distinct but related areas. The work focuses on improving the efficiency of video content distribution in networks that are liable to packet loss, such as the Internet. First, the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP) are presented. Since added FEC can be used to reduce the number of retransmissions, the requirement for TCP to deal with any losses is greatly reduced. For real-time applications, delay must be kept to a minimum and retransmissions are undesirable, so a balance must be struck between additional bandwidth and delays due to retransmissions. This is followed by the proposal of a hybrid transport, specifically for H.264 encoded video, as a compromise between the delay-prone TCP and the loss-prone UDP. It is argued that the playback quality at the receiver often need not be 100% perfect, provided a certain level is assured. Reliable TCP is used to transmit and guarantee delivery of the most important packets. The delay associated with the proposal is measured, and its potential as an alternative to the conventional methods of transporting video by either TCP or UDP alone is demonstrated. Finally, a new objective measurement is investigated for assessing the playback quality of video transported using TCP. A new metric is defined to characterise the quality of playback in terms of its continuity. Using packet traces generated from real TCP connections in a lossy environment, the playback of a video can be simulated whilst buffer behaviour is monitored to calculate pause intensity values. Subjective tests are conducted to verify the effectiveness of the metric and show that the objective and subjective scores are closely correlated.
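The abstract does not give the formal definition of pause intensity, so the sketch below uses a plausible simplification: the fraction of elapsed playback time spent stalled while simulating playout from a frame-arrival trace. The function and parameter names are hypothetical.

    def pause_intensity(arrivals, frame_dur=0.04, startup=5):
        """Simulate playout of frames that arrive at `arrivals[i]`
        (seconds) and each play for `frame_dur` seconds, after an
        initial buffer of `startup` frames. Returns paused time as a
        fraction of total elapsed time -- a simplified stand-in for
        the thesis's pause-intensity metric."""
        clock = arrivals[startup - 1] if len(arrivals) >= startup else arrivals[-1]
        paused = 0.0
        for t in arrivals:
            if t > clock:            # frame not yet arrived: playback stalls
                paused += t - clock
                clock = t
            clock += frame_dur       # play the frame
        return paused / clock

    # Example: a burst of losses forces TCP retransmissions, delaying frames
    arrivals = [0.04 * i for i in range(100)]
    arrivals[50:] = [t + 1.5 for t in arrivals[50:]]   # 1.5 s retransmission gap
    print(f"pause intensity: {pause_intensity(arrivals):.3f}")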
Abstract:
A critical review of previous research revealed that visual attention tests, such as the Useful Field of View (UFOV) test, provide the best means of detecting age-related changes to the visual system that could potentially increase crash risk. However, the question was raised as to whether the UFOV, essentially a static visual attention test, could be improved by the inclusion of kinetic targets that more closely represent the driving task. A computer program was written to provide more information about the derivation of UFOV test scores. Although this investigation succeeded in providing new information, some of the commercially protected UFOV test procedures remain unknown. Two kinetic visual attention tests (DRTS1 and DRTS2), developed at Aston University to investigate the inclusion of kinetic targets in visual attention tests, were introduced. The UFOV was found to be more repeatable than either of the kinetic visual attention tests, and neither learning effects nor age influenced these findings. Determinants of static and kinetic visual attention were explored. Increasing target eccentricity led to reduced performance on the UFOV and DRTS1 tests. The DRTS2 was not affected by eccentricity, but this may have been due to the style of presentation of its targets, which might also explain why only the DRTS2 showed laterality effects (i.e. better performance for targets presented on the left-hand side of the road). Radial location, explored using the UFOV test, showed that subjects responded best to targets positioned on the horizontal meridian. Distraction had opposite effects on static and kinetic visual attention: while UFOV test performance declined with distraction, DRTS1 performance increased. Previous research had shown that this striking difference was to be expected: whereas the detection of static targets is attenuated in the presence of distracting stimuli, distracting stimuli that move in a structured flow field enhance the detection of moving targets. Subjects reacted more slowly to kinetic than to static targets, to longitudinal than to angular motion, and to increased self-motion. However, the effects of longitudinal motion, angular motion, self-motion and even target eccentricity were caused by variations in target edge speed arising from optic flow field effects. The UFOV test was better able to detect age-related changes to the visual system than either of the kinetic visual attention tests. The driving samples investigated were too limited to draw firm conclusions; nevertheless, the results showed that neither the DRTS2 nor the UFOV test is a powerful tool for identifying drivers prone to crashes or poor driving performance.
Abstract:
Research into FL/EFL macro-reading (the effect of the broader context of reading) has been sparse, in spite of its importance in FL/EFL reading programmes. This study was designed to build on previous work by examining in more depth the influence of the socio-educational reading environment in an Arab university (Al-Fateh University in Tripoli, Libya), as reported by students, upon those students' reading ability in English and Arabic (particularly the former). Certain aspects of the lecturers' reading habits, attitudes and classroom practice were also investigated. Written cloze tests in English and Arabic and self-administered questionnaires were given to 125 preliminary-year undergraduates in three faculties of Al-Fateh University, selected on the basis of their use of English as a medium of instruction (one representing the Arts stream and two representing the Science stream). Twenty-two lecturers were interviewed and observed using an inventory technique, along with twenty other preliminary-year students. Factor analysis and standard multiple regression were among the statistical methods used to analyse the main data. The findings demonstrate a significant relationship between reading ability in English and the individual and environmental reading variables as defined in the study. A combination of common and distinct sets of such predictors was found to account for the variation in the English reading tests (43% for the first-year English specialists; 48% for the combined Medicine student sample). Also found was a significant, though not very large, relationship between reading ability in Arabic and the reading environment. Non-statistical but objective analyses based on the present data also revealed an overall association between English reading performance and a substantial number of reading-environment variables: many 'poor' users of the reading environment (particularly the academic one) obtained low scores in the English cloze tests. Accepting the limitations of a single study, it is nevertheless clear that the reading environment at the University is in need of improvement and that students require better guidance and training in how to use it effectively. Suggestions are made for appropriate educational changes.
Abstract:
This thesis proposes that, despite many experimental studies of thinking and the development of models of thinking, such as Bruner's (1966) enactive, iconic and symbolic developmental modes, the imagery and inner verbal strategies used by children need further investigation to establish a coherent theoretical basis from which to create experimental curricula for the direct improvement of those strategies. Five hundred and twenty-three first-, second- and third-year comprehensive school children were tested on 'recall' imagery, using a modified Betts Imagery Test, and on dual-coding processes (Paivio, 1971, p. 179), using the P/W Visual/Verbal Questionnaire, which measures 'applied imagery' and inner verbalising. Three lines of investigation were pursued:
1. An investigation (a) of hypothetical representational strategy differences between boys and girls, and (b) of the extent to which strategies change with increasing age.
2. The second- and third-year children's use of representational processes, taken separately and compared with performance measures of perception, field independence, creativity, self-sufficiency and self-concept.
3. The second- and third-year children were categorised into four dual-coding strategy groups (see the sketch below): (a) High Visual/High Verbal; (b) Low Visual/High Verbal; (c) High Visual/Low Verbal; (d) Low Visual/Low Verbal. These groups were compared on the same performance measures.
The main result indicates that a hierarchy of dual-coding strategy use can be identified that is significantly related (.01, Binomial Test) to success or failure on the performance measures: the High Visual/High Verbal group registered the highest scores, the Low Visual/High Verbal and High Visual/Low Verbal groups intermediate scores, and the Low Visual/Low Verbal group the lowest scores. Subsidiary results indicate that boys' use of visual strategies declines, and their use of verbal strategies increases, with age; girls' use of recall imagery strategies increases with age. Educational implications of the main result are discussed, the establishment of experimental curricula is proposed, and further research is suggested.
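The four-way dual-coding categorisation can be reproduced mechanically. The sketch below uses hypothetical scores and a median split, which the thesis may not have used; it is intended only to show the 2x2 grouping on visual and verbal strategy measures.

    import numpy as np

    rng = np.random.default_rng(3)
    visual = rng.normal(50, 10, 523)   # hypothetical visual-strategy scores
    verbal = rng.normal(50, 10, 523)   # hypothetical inner-verbalising scores

    hi_vis = visual >= np.median(visual)
    hi_verb = verbal >= np.median(verbal)

    groups = {
        "High Visual/High Verbal": hi_vis & hi_verb,
        "Low Visual/High Verbal": ~hi_vis & hi_verb,
        "High Visual/Low Verbal": hi_vis & ~hi_verb,
        "Low Visual/Low Verbal": ~hi_vis & ~hi_verb,
    }
    for name, mask in groups.items():
        print(f"{name}: n = {mask.sum()}")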
Abstract:
The diagnosis and monitoring of ocular disease presents considerable clinical difficulty for two main reasons: i) the substantial physiological variation in the anatomical structure of the visual pathway and ii) constraints due to technical limitations of diagnostic hardware. These are further confounded by difficulties in detecting early loss or change in visual function due to the masking of disease effects, for example by the high degree of redundancy in nerve fibre number along the visual pathway. This thesis addresses these issues across three areas of study: 1. Factors influencing retinal thickness measures and their clinical interpretation. As the retina is the principal anatomical site of damage associated with visual loss, objective measures of retinal thickness and retinal nerve fibre layer thickness are key to the detection of pathology. In this thesis the ability of optical coherence tomography (OCT) to provide repeatable and reproducible measures of retinal structure at the macula and optic nerve head is investigated. In addition, the normal physiological variations in retinal thickness and retinal nerve fibre layer thickness are explored. Principal findings were: • Macular retinal thickness and optic nerve head measurements are repeatable and reproducible for normal subjects and diseased eyes • Macular and retinal nerve fibre layer thickness around the optic nerve correlate negatively with axial length, suggesting that larger eyes have thinner retinae, potentially making them more susceptible to damage or disease • Foveolar retinal thickness increases with age while retinal nerve fibre layer thickness around the optic nerve head decreases with age. Such findings should be considered during examination of the eye with suspected pathology or in long-term disease monitoring. 2. Impact of glucose control on retinal anatomy and function in diabetes. Diabetes is a major health concern in the UK and worldwide, and diabetic retinopathy is a major cause of blindness in the working population. Objective, quantitative measurements of retinal thickness, particularly at the macula, provide essential information regarding disease progression and the efficacy of treatment. Functional vision loss in diabetic patients is commonly observed in clinical and experimental studies and is thought to be affected by blood glucose levels. In the first study of its kind, the short-term impact of fluctuations in blood glucose levels on retinal structure and function over a 12-hour period in patients with diabetes is investigated. Principal findings were: • Acute fluctuations in blood glucose levels are greater in diabetic patients than in normal subjects • Fluctuations in blood glucose levels affect contrast sensitivity scores, SWAP visual fields, intraocular pressure and diastolic pressure. This effect is similar for type 1 and type 2 diabetic patients despite the differences in their physiological status • Long-term metabolic control in the diabetic patient is a useful predictor of fluctuation in contrast sensitivity scores • Large fluctuations in blood glucose levels and/or visual function and structure may be indicative of an increased risk of development or progression of retinopathy. 3. Structural and functional damage of the visual pathway in glaucomatous optic neuropathy. The glaucomatous eye undergoes a number of well documented pathological changes, including retinal nerve fibre loss and optic nerve head damage, which are correlated with loss of functional vision.
In experimental glaucoma there is evidence that glaucomatous damage extends from the retinal ganglion cells in the eye, along the visual pathway, to the vision centres of the brain. This thesis explores the effects of glaucoma on retinal nerve fibre layer thickness, anterior ocular anatomy and cortical structure, and their correlates with visual function in humans. Principal findings were: • In the retina, glaucomatous retinal nerve fibre layer loss is less marked with increasing distance from the optic nerve head, suggesting that RNFL examination at a greater distance than traditionally employed may provide invaluable early indicators of glaucomatous damage • Neuroretinal rim area and retrobulbar optic nerve diameter are strong indicators of visual field loss • Grey matter density decreases at a rate of 3.85% per decade; there was no clear evidence of a disease effect • Cortical activation as measured by fMRI was a strong indicator of functional damage in patients with significant neuroretinal rim loss despite relatively modest visual field defects. These investigations have shown that the effects of senescence are evident in both the anterior and the posterior visual pathway. A variety of anatomical and functional diagnostic protocols for the investigation of damage to the visual pathway in ocular disease are required to maximise understanding of the disease processes and thereby optimise patient care.
Abstract:
The rationale for carrying out this research was to address the clear lack of knowledge surrounding the measurement of public hospital performance in Ireland. The objectives were to develop a comprehensive model for measuring hospital performance and to use this model to measure the performance of public acute hospitals in Ireland in 2007. After the advantages and disadvantages of various measurement models had been assessed, the Data Envelopment Analysis (DEA) model was chosen for this research. DEA was initiated by Charnes, Cooper and Rhodes in 1978 and further developed by Färe et al. (1983) and Banker et al. (1984). The method used to choose the inputs and outputs included in the model followed that adopted by Casu et al. (2005), which included the use of focus groups. The main conclusions of the research are threefold. Firstly, it is clear that each stakeholder group has differing opinions on what constitutes good performance. It is therefore imperative that any performance measurement model be designed within parameters that are clearly understood by its intended audience. Secondly, there is a lack of publicly available qualitative information in Ireland, which inhibits detailed analysis of hospital performance. Thirdly, based on the available qualitative and quantitative data, the results indicated a high level of efficiency among public acute hospitals in Ireland in their staffing and non-pay costs, averaging 98.5%. As DEA scores are sensitive to the number of input and output variables as well as to the size of the sample, it should be borne in mind that such a high level of efficiency could be a result of using DEA with too many variables relative to the number of hospitals. No hospital was deemed scale efficient in any of the models, even though the average scale efficiency across all hospitals was relatively high at 90.3%. Arising from this research, the main recommendations are that information on medical outcomes, survival rates and patient satisfaction should be made publicly available in Ireland; that, despite the high average efficiency level, many individual hospitals need to focus on improving their technical and scale efficiencies; and that performance measurement models incorporating more qualitative data should be developed.
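As an illustration of the DEA machinery referred to above, the following sketch solves the input-oriented, constant-returns (CCR) envelopment programme for each decision-making unit with scipy's linear-programming routine. The hospital inputs and outputs shown are invented; the research's actual variable set came from focus groups. Adding the convexity constraint sum(lambda) = 1 gives the variable-returns (BCC) score, and the ratio CRS/VRS is the scale efficiency discussed in the abstract.

    import numpy as np
    from scipy.optimize import linprog

    def ccr_input_efficiency(X, Y):
        """Input-oriented CCR (constant returns) DEA efficiencies.
        X: inputs, shape (m, n); Y: outputs, shape (s, n); n = DMUs."""
        m, n = X.shape
        s, _ = Y.shape
        scores = []
        for o in range(n):
            c = np.r_[1.0, np.zeros(n)]                 # minimise theta
            A_in = np.hstack([-X[:, [o]], X])           # X@lam <= theta * x_o
            A_out = np.hstack([np.zeros((s, 1)), -Y])   # Y@lam >= y_o
            A_ub = np.vstack([A_in, A_out])
            b_ub = np.r_[np.zeros(m), -Y[:, o]]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(0, None)] * (n + 1))
            scores.append(res.x[0])
        return np.array(scores)

    # Hypothetical example: 5 hospitals, inputs = (staff, non-pay costs),
    # output = cases treated
    X = np.array([[120.0, 80, 95, 140, 60], [5.0, 3, 4, 6, 2]])
    Y = np.array([[900.0, 700, 760, 950, 430]])
    print(np.round(ccr_input_efficiency(X, Y), 3))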
Abstract:
This paper proposes a new framework for evaluating the performance of employment offices based on the non-parametric technique of data envelopment analysis (DEA). The framework is illustrated through an assessment of the technical efficiency of 82 employment offices in Tunisia operating under the direction of the National Agency for Employment and Independent Work. We further investigate the exogenous factors that may explain part of the variation in efficiency scores, using a bootstrapping approach, over the period January 2006 to December 2008. Given the specialisation of employment offices, the proposed approach is applied separately to the efficiency evaluation of graduate employment offices and multi-service employment offices.
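The second-stage analysis mentioned above, explaining efficiency variation by exogenous factors with bootstrapping, is likely in the spirit of Simar and Wilson (2007); the fragment below is a much-simplified pairs bootstrap of a slope coefficient on synthetic data, intended only to show the resampling logic. All data and names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 82                                     # offices, as in the abstract

    # Hypothetical data: efficiency scores and one exogenous factor
    factor = rng.normal(size=n)                # e.g. local unemployment rate
    eff = np.clip(0.8 - 0.05 * factor + rng.normal(0, 0.1, n), 0, 1)

    def slope(x, y):
        return np.polyfit(x, y, 1)[0]

    boots = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)            # resample offices with replacement
        boots.append(slope(factor[idx], eff[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"slope = {slope(factor, eff):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")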
Abstract:
Guest editorial. Ali Emrouznejad is a Senior Lecturer at Aston Business School in Birmingham, UK. His research interests include performance measurement and management, efficiency and productivity analysis, and data mining. He has published widely in international journals. He is an Associate Editor of the IMA Journal of Management Mathematics and Guest Editor of several special issues of journals including the Journal of the Operational Research Society, Annals of Operations Research, Journal of Medical Systems, and International Journal of Energy Sector Management. He is on the editorial board of several international journals and is co-founder of Performance Improvement Management Software. William Ho is a Senior Lecturer at Aston Business School. Before joining Aston in 2005, he worked as a Research Associate in the Department of Industrial and Systems Engineering at the Hong Kong Polytechnic University. His research interests include supply chain management, production and operations management, and operations research. He has published extensively in international journals such as Computers & Operations Research, Engineering Applications of Artificial Intelligence, European Journal of Operational Research, Expert Systems with Applications, International Journal of Production Economics, International Journal of Production Research, and Supply Chain Management: An International Journal. His first authored book was published in 2006. He is an Editorial Board member of the International Journal of Advanced Manufacturing Technology and an Associate Editor of the OR Insight Journal. Currently, he is a Scholar of the Advanced Institute of Management Research.
Uses of frontier efficiency methodologies and multi-criteria decision making for performance measurement in the energy sector
This special issue focuses on holistic, applied research on performance measurement in energy sector management, and aims to publish relevant applied research that bridges the gap between industry and academia. After a rigorous refereeing process, seven papers were included in this special issue. The volume opens with five data envelopment analysis (DEA)-based papers. Wu et al. apply the DEA-based Malmquist index to evaluate changes in the relative efficiency and total factor productivity of coal-fired electricity generation across 30 Chinese administrative regions from 1999 to 2007. Factors considered in the model include fuel consumption, labor, capital, sulphur dioxide emissions, and electricity generated. The authors find that the eastern provinces were relatively and technically more efficient, whereas the western provinces had the highest growth rate in the period studied. Ioannis E. Tsolas applies the DEA approach to assess the performance of Greek fossil-fuel-fired power stations, taking undesirable outputs such as carbon dioxide and sulphur dioxide emissions into consideration. In addition, the bootstrapping approach is deployed to address the uncertainty surrounding DEA point estimates and to provide bias-corrected estimates and confidence intervals. The author finds that, in the sample, non-lignite-fired stations are on average more efficient than lignite-fired stations. Maethee Mekaroonreung and Andrew L. Johnson compare three DEA-based measures that estimate production frontiers and evaluate the relative efficiency of 113 US petroleum refineries while considering undesirable outputs. Three inputs (capital, energy consumption, and crude oil consumption), two desirable outputs (gasoline and distillate generation), and an undesirable output (toxic release) are considered in the DEA models. The authors find that refineries in the Rocky Mountain region performed best, and that about 60 percent of the refineries in the sample could improve their efficiency further. H. Omrani, A. Azadeh, S. F. Ghaderi, and S. Abdollahzadeh present an integrated approach combining DEA, corrected ordinary least squares (COLS), and principal component analysis (PCA) to calculate the relative efficiency scores of 26 Iranian electricity distribution units from 2003 to 2006. Specifically, both DEA and COLS are used to check three internal consistency conditions, whereas PCA is used to verify and validate the final ranking results of either DEA (consistency) or DEA-COLS (non-consistency). Three inputs (network length, transformer capacity, and number of employees) and two outputs (number of customers and total electricity sales) are considered in the model. Virendra Ajodhia applies three DEA-based models to evaluate the relative performance of 20 electricity distribution firms from the UK and the Netherlands. The first model is a traditional DEA model for analyzing cost-only efficiency. The second model includes (inverse) quality by modelling total customer minutes lost as an input. The third model uses total social costs, including the firm's private costs and the interruption costs incurred by consumers, as an input. Both energy delivered and number of consumers are treated as outputs in the models. After the five DEA papers, Stelios Grafakos, Alexandros Flamos, Vlasis Oikonomou, and D. Zevgolis present a multiple-criteria weighting approach for evaluating energy and climate policy. The proposed approach is akin to the analytic hierarchy process, consisting of pairwise comparisons, consistency verification, and criteria prioritization. In the approach, stakeholders and experts in the energy policy field are incorporated into the evaluation process through an interactive means of expressing their preferences verbally, numerically, and visually. A total of 14 evaluation criteria were considered and classified under four objectives: climate change mitigation, energy effectiveness, socioeconomic impact, and competitiveness and technology. Finally, Borge Hess applies the stochastic frontier analysis approach to analyze the impact of various business strategies, including acquisitions, holding structures, and joint ventures, on firm efficiency within a sample of 47 natural gas transmission pipelines in the USA from 1996 to 2005. The author finds no significant change in a firm's efficiency following an acquisition, and only weak evidence of efficiency improvements brought about by a new shareholder. Moreover, parent companies appear not to influence a subsidiary's efficiency positively, and the analysis shows a negative impact of joint ventures on the technical efficiency of pipeline companies. To conclude, we are grateful to all the authors for their contributions, and to all the reviewers for their constructive comments, which made this special issue possible. We hope that this issue will contribute significantly to performance improvement in the energy sector.
Abstract:
There is growing peer and donor pressure on African countries to utilise available resources more efficiently in a bid to support ongoing efforts to expand coverage of health interventions, with a view to achieving the health-related Millennium Development Goals. The purpose of this study was to estimate the technical and scale efficiency of national health systems on the African continent.
Methods: The study applied the Data Envelopment Analysis approach to estimate the technical efficiency and scale efficiency of the 53 countries of the African continent.
Results: Of the 38 low-income African countries, 12 countries' national health systems had a constant-returns-to-scale technical efficiency (CRSTE) score of 100%; 15 countries had a variable-returns-to-scale technical efficiency (VRSTE) score of 100%; and 12 countries had a scale efficiency (SE) score of one. The average VRSTE score was 95% and the mean SE score was 59%, meaning that while on average the degree of pure technical inefficiency was only 5%, the magnitude of scale inefficiency was 41%. Of the 15 middle-income countries, 5, 9 and 5 countries had CRSTE, VRSTE and SE scores of 100%, respectively; 10, 6 and 10 countries had CRSTE, VRSTE and SE scores of less than 100%, respectively, and were thus deemed inefficient. The average VRSTE (i.e. pure efficiency) score was 97.6%; the average SE score was 49.9%.
Conclusion: There is a large unmet need for health and health-related services among the countries of the African continent. It would therefore not be advisable for health policy-makers to address national health system inefficiencies through reductions in excess human resources for health. Instead, it would be more prudent to leverage health promotion approaches and universal-access prepaid (tax-based, insurance-based or mixed) health financing systems to create demand for under-utilised health services and interventions, with a view to raising health outputs to efficient target levels.
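The scores reported above are linked by the standard DEA decomposition, which is worth stating explicitly since the abstract quotes the three components separately:

    \[
      \mathrm{TE}_{\mathrm{CRS}} \;=\; \mathrm{TE}_{\mathrm{VRS}} \times \mathrm{SE},
      \qquad
      \mathrm{SE} \;=\; \frac{\mathrm{TE}_{\mathrm{CRS}}}{\mathrm{TE}_{\mathrm{VRS}}} \;\le\; 1 .
    \]

The decomposition holds country by country (not for the group averages directly, since averages of products are not products of averages), so a country with a VRSTE of 100% but an SE of 59% is purely scale-inefficient: it lies on the variable-returns frontier but operates at a non-optimal size.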
Abstract:
Background - Modelling the interaction between potentially antigenic peptides and Major Histocompatibility Complex (MHC) molecules is a key step in identifying potential T-cell epitopes. For Class II MHC alleles, the binding groove is open at both ends, causing ambiguity in the positional alignment between the groove and peptide, as well as creating uncertainty as to which parts of the peptide interact with the MHC. Moreover, the antigenic peptides have variable lengths, making naive modelling methods difficult to apply. This paper introduces a kernel method that can handle variable-length peptides effectively by quantifying similarities between peptide sequences and integrating these into the kernel. Results - The kernel approach presented here shows increased prediction accuracy, with a significantly higher number of true positives and negatives, on multiple MHC class II alleles, when tested on data sets from MHCPEP [1], MHCBN [2], and MHCBench [3]. Evaluation by cross-validation, when segregating binders and non-binders, produced an average AROC of 0.824 for the MHCBench data sets (up from 0.756), and an average AROC of 0.96 for multiple alleles of the MHCPEP database. Conclusion - The method improves on the performance of existing state-of-the-art methods for MHC class II peptide binding prediction by using a custom, knowledge-based representation of peptides. Similarity scores, in contrast to a fixed-length, pocket-specific representation of amino acids, provide a flexible and powerful way of modelling MHC binding, and can easily be applied to other dynamic sequence problems.
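The paper's kernel is a bespoke, knowledge-based similarity measure; as a minimal stand-in, the sketch below implements a toy k-mer spectrum kernel, which likewise handles variable-length peptides without alignment or padding, together with the rank-sum identity used to compute AROC figures like those quoted above. The sequences and parameters are illustrative, not taken from the paper.

    from collections import Counter

    def spectrum_kernel(a, b, k=3):
        """Toy k-mer spectrum kernel: the inner product of k-mer count
        vectors. Peptides of different lengths are compared directly,
        with no alignment or fixed-length encoding required."""
        ca = Counter(a[i:i + k] for i in range(len(a) - k + 1))
        cb = Counter(b[i:i + k] for i in range(len(b) - k + 1))
        return sum(ca[m] * cb[m] for m in ca.keys() & cb.keys())

    def aroc(scores, labels):
        """Area under the ROC curve via the Mann-Whitney identity: the
        probability that a random binder outscores a random non-binder
        (ties count half)."""
        pos = [s for s, l in zip(scores, labels) if l]
        neg = [s for s, l in zip(scores, labels) if not l]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    print(spectrum_kernel("AKFVAAWTLKAAA", "KFVAAW"))      # shared 3-mers -> 4
    print(aroc([0.9, 0.8, 0.4, 0.2], [1, 1, 0, 0]))        # perfect ranking -> 1.0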