15 results for ÉEG
in Aston University Research Archive
Abstract:
The increasing prevalence, variable pathogenesis, progressive natural history, and complications of type 2 diabetes emphasise the urgent need for new treatment strategies. Long-acting (eg, once weekly) agonists of the glucagon-like-peptide-1 receptor are advanced in development, and they improve prandial insulin secretion, reduce excess glucagon production, and promote satiety. Trials of inhibitors of dipeptidyl peptidase 4, which enhance the effect of endogenous incretin hormones, are also nearing completion. Novel approaches to glycaemic regulation include use of inhibitors of the sodium-glucose cotransporter 2, which increase renal glucose elimination, and inhibitors of 11β-hydroxysteroid dehydrogenase 1, which reduce glucocorticoid effects in liver and fat. Insulin-releasing glucokinase activators and pancreatic-G-protein-coupled fatty-acid-receptor agonists, glucagon-receptor antagonists, and metabolic inhibitors of hepatic glucose output are being assessed. Early proof of principle has been shown for compounds that enhance and partly mimic insulin action and replicate some effects of bariatric surgery.
Abstract:
Our understanding of early spatial vision owes much to contrast masking and summation paradigms. In particular, the deep region of facilitation at low mask contrasts is thought to indicate a rapidly accelerating contrast transducer (eg a square-law or greater). In experiment 1, we tapped an early stage of this process by measuring monocular and binocular thresholds for patches of 1 cycle deg⁻¹ sine-wave grating. Threshold ratios were around 1.7, implying a nearly linear transducer with an exponent around 1.3. With this form of transducer, two previous models (Legge, 1984 Vision Research 24 385-394; Meese et al, 2004 Perception 33 Supplement, 41) failed to fit the monocular, binocular, and dichoptic masking functions measured in experiment 2. However, a new model with two stages of divisive gain control fits the data very well. Stage 1 incorporates nearly linear monocular transducers (to account for the high level of binocular summation and slight dichoptic facilitation), and monocular and interocular suppression (to fit the profound dichoptic masking). Stage 2 incorporates steeply accelerating transduction (to fit the deep regions of monocular and binocular facilitation), and binocular summation and suppression (to fit the monocular and binocular masking). With all model parameters fixed from the discrimination thresholds, we examined the slopes of the psychometric functions. The monocular and binocular slopes were steep (Weibull β ≈ 3-4) at very low mask contrasts and shallow (β ≈ 1.2) at all higher contrasts, as predicted by all three models. The dichoptic slopes were steep (β ≈ 3-4) at very low contrasts, and very steep (β > 5.5) at high contrasts (confirming Meese et al, loc. cit.). A crucial new result was that intermediate dichoptic mask contrasts produced shallow slopes (β ≈ 2). Only the two-stage model predicted the observed pattern of slope variation, so providing good empirical support for a two-stage process of binocular contrast transduction. [Supported by EPSRC GR/S74515/01]
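To make the two-stage idea concrete, the sketch below (not the authors' code) implements a gain-control cascade of the general form described: a nearly linear monocular stage with interocular suppression feeding a steeply accelerating binocular stage. The parameter values, detection criterion K, and function names are illustrative assumptions. The point is that with a stage-1 exponent M near 1.3, the predicted monocular:binocular threshold ratio sits above the √2 expected from a square-law transducer, tending toward 2^(1/M) ≈ 1.7 as thresholds fall well below the saturation constant.

M, S = 1.3, 1.0            # stage-1 exponent (nearly linear) and saturation constant (assumed)
P, Q, Z = 8.0, 6.5, 0.08   # stage-2 accelerating nonlinearity and semi-saturation (assumed)
K = 0.2                    # response needed for detection (arbitrary units, assumed)

def stage1(c_this_eye, c_other_eye):
    # Monocular transducer with monocular and interocular suppression
    return c_this_eye ** M / (S + c_this_eye + c_other_eye)

def response(c_left, c_right):
    # Binocular summation followed by a second, steeply accelerating stage
    b = stage1(c_left, c_right) + stage1(c_right, c_left)
    return b ** P / (Z + b ** Q)

def threshold(binocular, lo=1e-4, hi=10.0):
    # Contrast (%) at which the response reaches the criterion K (bisection)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if response(mid, mid if binocular else 0.0) > K:
            hi = mid
        else:
            lo = mid
    return hi

mono, bino = threshold(False), threshold(True)
print(f"monocular threshold {mono:.2f}%, binocular threshold {bino:.2f}%")
print(f"summation ratio {mono / bino:.2f} (low-contrast limit 2**(1/M) = {2 ** (1 / M):.2f})")

Fitting the full parameter set from discrimination thresholds, as described in the abstract, is a separate exercise not attempted here.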
Abstract:
There have been two main approaches to feature detection in human and computer vision - luminance-based and energy-based. Bars and edges might arise from peaks of luminance and luminance gradient respectively, or bars and edges might be found at peaks of local energy, where local phases are aligned across spatial frequency. This basic issue of definition is important because it guides more detailed models and interpretations of early vision. Which approach better describes the perceived positions of elements in a 3-element contour-alignment task? We used the class of 1-D images defined by Morrone and Burr in which the amplitude spectrum is that of a (partially blurred) square wave and Fourier components in a given image have a common phase. Observers judged whether the centre element (eg ±45° phase) was to the left or right of the flanking pair (eg 0° phase). Lateral offset of the centre element was varied to find the point of subjective alignment from the fitted psychometric function. This point shifted systematically to the left or right according to the sign of the centre phase, increasing with the degree of blur. These shifts were well predicted by the location of luminance peaks and other derivative-based features, but not by energy peaks, which (by design) predicted no shift at all. These results on contour alignment agree well with earlier ones from a more explicit feature-marking task, and strongly suggest that human vision does not use local energy peaks to locate basic first-order features. [Supported by the Wellcome Trust (ref: 056093)]
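The contrast between the two feature definitions can be demonstrated directly on stimuli of this kind. The sketch below (an illustration, not the study's code, and assuming numpy and scipy are available) builds a 1-D profile with the amplitude spectrum of a partially blurred square wave and a common phase for all components, then locates the luminance peak and the local-energy peak (magnitude of the analytic signal). Changing the common phase shifts the luminance peak but, by construction, leaves the energy peak fixed; the blur constant and harmonic count are arbitrary choices.

import numpy as np
from scipy.signal import hilbert

x = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
harmonics = np.arange(1, 40, 2)                               # odd harmonics of a square wave
amps = (1.0 / harmonics) * np.exp(-(0.15 * harmonics) ** 2)   # partial blur attenuates high harmonics

def profile(phase_deg):
    # Square-wave amplitude spectrum; every Fourier component shares one phase
    phi = np.deg2rad(phase_deg)
    return sum(a * np.cos(n * x + phi) for a, n in zip(amps, harmonics))

for phase in (0.0, 45.0, -45.0):
    f = profile(phase)
    energy = np.abs(hilbert(f))                               # local energy: sqrt(f**2 + Hilbert(f)**2)
    print(f"phase {phase:6.1f} deg | luminance peak at x = {x[np.argmax(f)]: .3f}"
          f" | energy peak at x = {x[np.argmax(energy)]: .3f}")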
Abstract:
Edge blur is an important perceptual cue, but how does the visual system encode the degree of blur at edges? Blur could be measured by the width of the luminance gradient profile, peak-trough separation in the 2nd derivative profile, or the ratio of 1st-to-3rd derivative magnitudes. In template models, the system would store a set of templates of different sizes and find which one best fits the 'signature' of the edge. The signature could be the luminance profile itself, or one of its spatial derivatives. I tested these possibilities in blur-matching experiments. In a 2AFC staircase procedure, observers adjusted the blur of Gaussian edges (30% contrast) to match the perceived blur of various non-Gaussian test edges. In experiment 1, test stimuli were mixtures of 2 Gaussian edges (eg 10 and 30 min of arc blur) at the same location, while in experiment 2, test stimuli were formed from a blurred edge sharpened to different extents by a compressive transformation. Predictions of the various models were tested against the blur-matching data, but only one model was strongly supported. This was the template model, in which the input signature is the 2nd derivative of the luminance profile, and the templates are applied to this signature at the zero-crossings. The templates are Gaussian derivative receptive fields that covary in width and length to form a self-similar set (ie same shape, different sizes). This naturally predicts that shorter edges should look sharper. As edge length gets shorter, responses of longer templates drop more than shorter ones, and so the response distribution shifts towards shorter (smaller) templates, signalling a sharper edge. The data confirmed this, including the scale-invariance implied by self-similarity, and a good fit was obtained from templates with a length-to-width ratio of about 1. The simultaneous analysis of edge blur and edge location may offer a new solution to the multiscale problem in edge detection.
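A reduced 1-D version of the template idea can be sketched as follows (an illustration under stated assumptions, not the author's implementation; it ignores the length dimension of the receptive fields, and the blur value, scales and sample spacing are arbitrary). The 2nd-derivative signature of a Gaussian-blurred edge is matched against a bank of unit-energy Gaussian-derivative templates of different sizes, and the best-fitting template size recovers the edge blur.

import numpy as np

dx = 0.1                                  # sample spacing (min of arc), assumed
x = np.arange(-80, 80, dx)
true_blur = 15.0                          # Gaussian blur (sigma) of the test edge, min of arc

# Luminance profile of a blurred edge: integral of a Gaussian of width true_blur
edge = np.cumsum(np.exp(-x**2 / (2 * true_blur**2))) * dx

# Signature: 2nd derivative of the luminance profile
signature = np.gradient(np.gradient(edge, dx), dx)

# Self-similar template bank: Gaussian 1st-derivative receptive fields,
# each normalised to unit energy so that size alone does not bias the match
scales = np.arange(2.0, 40.0, 0.5)
responses = []
for s in scales:
    template = -x * np.exp(-x**2 / (2 * s**2))
    template /= np.sqrt(np.sum(template**2) * dx)
    responses.append(abs(np.sum(template * signature) * dx))

best = scales[int(np.argmax(responses))]
print(f"true blur {true_blur:.1f} min arc -> best-fitting template scale {best:.1f} min arc")

With unit-energy normalisation, the peak response occurs when the template size matches the edge blur, which is what allows the best-fitting template to signal the degree of blur.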
Abstract:
It is well known that optic flow - the smooth transformation of the retinal image experienced by a moving observer - contains valuable information about the three-dimensional layout of the environment. From psychophysical and neurophysiological experiments, specialised mechanisms responsive to components of optic flow (sometimes called complex motion) such as expansion and rotation have been inferred. However, it remains unclear (a) whether the visual system has mechanisms for processing the component of deformation and (b) whether there are multiple mechanisms that function independently from each other. Here, we investigate these issues using random-dot patterns and a forced-choice subthreshold summation technique. In experiment 1, we manipulated the size of a test region that was permitted to contain signal and found substantial spatial summation for signal components of translation, expansion, rotation, and deformation embedded in noise. In experiment 2, little or no summation was found for the superposition of orthogonal pairs of complex motion patterns (eg expansion and rotation), consistent with probability summation between pairs of independent detectors. Our results suggest that optic-flow components are detected by mechanisms that are specialised for particular patterns of complex motion.
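The logic behind the experiment 2 conclusion can be made explicit with a back-of-envelope calculation (an illustration, not from the paper; the Weibull slope value is an assumption). If two superimposed flow components drive a single detector that sums them linearly, thresholds for the compound should fall by a factor of about 2; if they are detected by independent mechanisms combined only by probability summation (Quick pooling), the expected improvement is far smaller.

import math

beta = 3.5                               # assumed Weibull psychometric slope
linear_gain = 2.0                        # both components summed within one detector
prob_gain = 2 ** (1 / beta)              # Quick pooling: (s**beta + s**beta)**(1/beta) / s

print(f"linear summation:      sensitivity x{linear_gain:.2f} ({20 * math.log10(linear_gain):.1f} dB)")
print(f"probability summation: sensitivity x{prob_gain:.2f} ({20 * math.log10(prob_gain):.1f} dB)")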
Aspects of the learner's dictionary with special reference to advanced Pakistani learners of English
Abstract:
The present work is an empirical investigation into the 'reference skills' of Pakistani learners and their language needs on semantic, phonetic, lexical and pragmatic levels in the dictionary. The introductory chapter discusses the relatively problematic nature of lexis in comparison with the other aspects of EFL learning and spells out the aim of this study. Chapter two provides an analytical survey of the various types of research undertaken in different contexts of the dictionary and explains the eclectic approach adopted in the present work. Chapter three studies the 'reference skills' of this category of learners against the background of the highly sophisticated information structure of the learners' dictionaries under evaluation and suggests some measures for improvement in this context. Chapter four considers various criteria, eg pedagogic, linguistic and sociolinguistic, for determining the macro-structure of a learner's dictionary with a focus on specific L1 speakers. Chapter five is concerned with various aspects of the semantic information provided in the dictionaries, matched against the needs of Pakistani learners with regard to both comprehension and production. The type, scale and presentation of grammatical information in the dictionary are analysed in chapter six with the object of discovering their role and utility for the learner. Chapter seven explores the rationale for providing phonological information, the extent to which this guidance is vital, and the problems of the phonetic symbols employed in the dictionaries. Chapter eight brings into perspective the historical background of English-Urdu bilingual lexicography and evaluates the bilingual dictionaries currently popular among the student community, with the aim of discovering the extent to which they have taken account of the modern tenets of lexicography and investigating their validity as a useful reference tool in the learning of the English language. The final chapter draws the findings on individual aspects together in a coherent fashion to assess the viability of the original hypothesis that learners' dictionaries, if compiled with a specific set of users in mind, would be more useful.
Abstract:
This thesis criticises many psychological experiments on 'pornography' which attempt to demonstrate how 'pornography' causes and/or equals rape. It challenges simplistic definitions of 'pornography', arguing that sexually explicit materials (SEM) are constructed and interpreted in a number of different ways, and demonstrates that how, when and where materials are depicted or viewed will influence perceptions and reactions. In addition, it opposes the overreliance on male undergraduates as participants in 'porn' research. Theories of feminist psychology and reflexivity are used throughout the thesis, and provide a detailed contextual framework in a complex area. Results from a number of interlinking studies which use a variety of methodological approaches (focus groups, questionnaires and content analysis) indicate how contextual issues are omitted in much existing research on SEM. These include the views and experiences participants hold prior to completing SEM studies; their opinions about those who 'use' 'pornography'; their understanding of key terms linked with SEM (eg pornography and erotica); and discussions of sexual magazines aimed at male and female audiences. Participants' reactions to images and texts associated with SEM presented in different contexts are discussed. Three main conclusions are drawn from this thesis. Firstly, images deemed 'pornographic' differ across historical and cultural periods and political, economic and social climates, so 'experimental' approaches may not always be the most appropriate research tool. Secondly, there is not one definition, source, or factor which may be named 'pornography'; and thirdly, the context and presentation of materials influence how images are perceived and reacted to. The thesis argues that a number of factors influence views of 'pornography', suggesting SEM may be 'in the eye of the beholder'.
Abstract:
A re-examination of fundamental concepts and a formal structuring of the waveform analysis problem is presented in Part I: for example, the nature of frequency is examined, and a novel alternative to the classical methods of detection is proposed and implemented which has the advantages of speed and independence from amplitude. Waveform analysis provides the link between Parts I and II. Part II is devoted to Human Factors and the Adaptive Task Technique. The historical, technical and intellectual development of the technique is traced in a review which examines the evidence of its advantages relative to non-adaptive fixed-task methods of training, skill assessment and man-machine optimisation. A second review examines research evidence on the effect of vibration on manual control ability. Findings are presented in terms of the percentage increment or decrement in performance relative to performance without vibration in the range 0-0.6 Rms 'g'. Primary task performance was found to vary by as much as 90% between tasks at the same Rms 'g'. Differences in task difficulty accounted for this difference. Within tasks, vibration-added difficulty accounted for the effects of vibration intensity. Secondary tasks were found to be largely insensitive to vibration, except secondaries which involved fine manual adjustment of minor controls. Three experiments are reported next in which an adaptive technique was used to measure the percentage task difficulty added by vertical random and sinusoidal vibration to a critical compensatory tracking task. At vibration intensities between 0 and 0.09 Rms 'g' it was found that random vibration added (24.5 x Rms 'g')/7.4 x 100% to the difficulty of the control task. An equivalence relationship between random and sinusoidal vibration effects was established based upon added task difficulty. Waveform analyses applied to the experimental data served to validate phase-plane analysis and uncovered the development of a control strategy and possibly a vibration-isolation strategy. The submission ends with an appraisal of the subjects mentioned in the thesis title.
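As a quick numerical reading of that relation (using the formula exactly as stated above, evaluated at the top of the intensity range studied):

rms_g = 0.09                                      # vertical random vibration intensity, Rms 'g'
added_difficulty_pct = (24.5 * rms_g) / 7.4 * 100
print(f"{rms_g} Rms 'g' adds about {added_difficulty_pct:.0f}% to task difficulty")   # roughly 30%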
Abstract:
Background: Pharmacy has experienced both incomplete professionalization and deprofessionalization. Since the late 1970s, a concerted attempt has been made to re-professionalize pharmacy in the United Kingdom (UK) through role extension, a key feature of which has been a drive for greater pharmacy involvement in public health. However, the continual corporatization of the UK community pharmacy sector may reduce the professional autonomy of pharmacists and may threaten to constrain attempts at reprofessionalization.
Objectives: The objectives of the research: to examine the public health activities of community pharmacists in the UK; to explore the attitudes of community pharmacists toward recent relevant UK policy and barriers to the development of their public health function; and to investigate associations between activity, attitudes, and the type of community pharmacy worked in (eg, supermarket, chain, independent).
Methods: A self-completion postal questionnaire was sent to a random sample of practicing community pharmacists, stratified for country and sex, within Great Britain (n = 1998), with a follow-up to nonresponders 4 weeks later. Data were analyzed using SPSS (SPSS Inc., Chicago, IL, USA) (v12.0). A final response rate of 51% (n = 1023/1998) was achieved.
Results: The level of provision of emergency hormonal contraception on a patient group direction, supervised administration of medicines, and needle-exchange schemes was lower in supermarket pharmacies than in the other types of pharmacy. Respondents believed that supermarkets and the major multiple pharmacy chains held an advantageous position in terms of attracting financing for service development despite suggesting that the premises of such pharmacies may not be the most suitable for the provision of such services.
Conclusions: A mixed market in community pharmacy may be required to maintain a comprehensive range of pharmacy-based public health services and provide maximum benefit to all patients. Longitudinal monitoring is recommended to ensure that service provision is adequate across the pharmacy network.
Abstract:
The objectives are to examine rural road accident data in order to develop a method by which high accident rate locations and accident causes can be identified, and also to develop proposals for improvements at such locations and to identify measures which will improve road safety throughout the country. The problem of road safety in Iran is an important issue, because of the tragic and unnecessary loss of life, and the enormous cost of accidents in the country. The resources available to deal with the problems are limited and must be allocated on a priority basis. This study represents an initial effort to identify the extent of the problem in order to take remedial measures. A study was made of all the available road accident data collected by agencies related to road safety in Iran, and the major organisations responsible for road safety development were visited. The Vice Minister of Roads and Transportation selected for this study a 280 km rural road in South West Iran. Mainly because of the lack of suitable maps and plans of the roads, it was not possible to accurately identify the location of accidents. Accident scene data was subsequently collected by the highway police and personally by the author. The data for the study road was then analysed to identify 'high accident rate' locations, and also to determine, as far as was possible, the reasons for the accidents. The study suggests specific improvements for each of the high accident rate locations examined (eg the building of dual carriageways with central guard rails to reduce the risk of collision with oncoming vehicles, and pedestrian facilities to allow pedestrians to cross dangerous roads). In addition, recommendations are made to guide and assist the major organisations responsible for road safety in Iran. These recommendations are: (a) for improving accident data collection and storage, and (b) for subsequent analysis for taking remedial measures with a view to accident prevention.
Abstract:
Anchorage-dependent cell culture is a useful model for investigating the interface that becomes established when a synthetic polymer is placed in contact with a biological system. The primary aim of this interdisciplinary study was to systematically investigate a number of properties that were already considered to have an influence on cell behaviour and thereby establish the extent of their importance. It is envisaged that investigations such as these will not only further the understanding of the mechanisms that affect cell adhesion but may ultimately lead to the development of improved biomaterials. In this study, surface analysis of materials was carried out in parallel with culture studies using fibroblast cells. Polarity, in its ability to undergo hydrogen bonding (eg with water and proteins), had an important effect on cell behaviour, although structural arrangement and crystallinity were not found to exert any marked influence. In addition, the extent of oxidation that had occurred during the process of manufacture of substrates was also important. The treatment of polystyrene with a selected series of acids and gas plasmas confirmed the importance of polarity, structural groups and surface charge, and it was shown that this polymer was not unique among 'hydrophobic' materials in its inability to support cell adhesion. The individual water-structuring groups within hydrogel polymers were also observed to have controlling effects on cell behaviour. An overall view of the biological response to both hydrogel and non-hydrogel materials highlighted the importance of surface oxidation, polarity, water-structuring groups and surface charge. Initial steps were also taken to analyse foetal calf serum, which is widely used to supplement cell culture media. Using an array of analytical techniques, further experiments were carried out to observe any possible differences in the amounts of lipids and calcium that become deposited on tissue culture and bacteriological grade plastic under cell culture conditions.
Abstract:
Intraocular light scatter is high in certain subject groups, eg the elderly, due to increased optical media turbidity, which scatters and attenuates light travelling towards the retina. This causes reduced retinal contrast, especially in the presence of glare light. Such subjects have depressed Contrast Sensitivity Functions (CSF). Currently available clinical tests do not effectively reflect this visual disability. Intraocular light scatter may be quantified by measuring the CSF with and without glare light and calculating Light Scatter Factors (LSF). To record the CSF on clinically available equipment (Nicolet CS2000), several psychophysical measurement techniques were investigated, and the 60 s Method of Increasing Contrast was selected as the most appropriate. It was hypothesised that intraocular light scatter due to particles of different dimensions could be identified by glare sources at wide (30°) and narrow (3.5°) angles. CSFs and LSFs were determined for: (i) subjects in young, intermediate and old age groups; (ii) subjects during recovery from large amounts of induced corneal oedema; (iii) a clinical sample of contact lens (CL) wearers with a group of matched controls. The CSF was attenuated at all measured spatial frequencies in the intermediate and old groups compared to the young group. High LSF values were found only in the old group (over 60 years). It was concluded that CSF attenuation in the intermediate group was due to reduced pupil size, media absorption and/or neural factors. In the old group, the additional factor was a high level of intraocular light scatter of lenticular origin. The rate of reduction of the LSF for the 3.5° glare angle was steeper than that for the 30° angle following induced corneal oedema. This supported the hypothesis, as it was anticipated that epithelial oedema would recover more rapidly than stromal oedema. CSFs and LSFs were markedly abnormal in the CL wearers. The analytical details and the value of these investigative techniques in contact lens research are discussed.
Abstract:
The status of Science and Technology in Kuwait has been analysed in order to assess the extent of the application of Science and Technology needed for the country's development. The design and implementation of a Science and Technology Policy has been examined to identify the appropriate technology necessary to improve Kuwait's socio-economic-industrial structures. Following a general and critical review of the role of Science and Technology in developing countries, the author has reviewed the past and contemporary employment of Science and Technology for the development of various sectors and the existence, if any, of any form (explicit, implicit, or both) of a Science and Technology Policy in Kuwait. The thesis is structured to evaluate almost all of the sectors in Kuwait which utilise Science and/or Technology, the effectiveness of such practices, their policy-making process, the channels by which policies were transformed into sources of influence through Governmental action, and the impact that various policy instruments at the disposal of the Government had on the development of S & T capabilities. The author has studied the implications of the absence of a Science and Technology Policy in Kuwait by examining some specific case studies, eg the absence of a Technology Assessment Process and the negative impacts resulting from this; the ad hoc allocation of the research and development budget instead of its being based on a percentage of GNP; the limitations imposed on the development of indigenous contracting companies and consultancy and engineering design offices; and the impact of the absence of a Technology Transfer Centre, and so forth. As a consequence of the implications of the above studies, together with the negative results of the absence of an explicit Science and Technology Policy (eg research and development activities do not relate to the national development plans), the author suggests that a Science and Technology Policy-Making Body should be established to formulate, develop, monitor and correlate Science and Technology activities in Kuwait.
Abstract:
Masking, adaptation, and summation paradigms have been used to investigate the characteristics of early spatio-temporal vision. Each has been taken to provide evidence for (i) oriented and (ii) nonoriented spatial-filtering mechanisms. However, subsequent findings suggest that the evidence for nonoriented mechanisms has been misinterpreted: those experiments might have revealed the characteristics of suppression (eg, gain control), not excitation, or merely the isotropic subunits of the oriented detecting mechanisms. To shed light on this, we used all three paradigms to focus on the ‘high-speed’ corner of spatio-temporal vision (low spatial frequency, high temporal frequency), where cross-oriented achromatic effects are greatest. We used flickering Gabor patches as targets and a 2IFC procedure for monocular, binocular, and dichoptic stimulus presentations. To account for our results, we devised a simple model involving an isotropic monocular filter-stage feeding orientation-tuned binocular filters. Both filter stages are adaptable, and their outputs are available to the decision stage following nonlinear contrast transduction. However, the monocular isotropic filters (i) adapt only to high-speed stimuli—consistent with a magnocellular subcortical substrate—and (ii) benefit decision making only for high-speed stimuli (ie, isotropic monocular outputs are available only for high-speed stimuli). According to this model, the visual processes revealed by masking, adaptation, and summation are related but not identical.