843 results for Relevance feature
Abstract:
A large body of evidence supports a role of oxidative stress in Alzheimer disease (AD) and in cerebrovascular disease. A vascular component might be critical in the pathophysiology of AD, but there is a substantial lack of data regarding the simultaneous behavior of peripheral antioxidants and biomarkers of oxidative stress in AD and vascular dementia (VaD). Sixty-three AD patients, 23 VaD patients and 55 controls were included in the study. We measured plasma levels of water-soluble (vitamin C and uric acid) and lipophilic (vitamin E, vitamin A, carotenoids including lutein, zeaxanthin, β-cryptoxanthin, lycopene, α- and β-carotene) antioxidant micronutrients as well as levels of biomarkers of lipid peroxidation [malondialdehyde (MDA)] and of protein oxidation [immunoglobulin G (IgG) levels of protein carbonyls and dityrosine] in patients and controls. With the exception of β-carotene, all antioxidants were lower in demented patients as compared to controls. Furthermore, AD patients showed a significantly higher IgG dityrosine content as compared to controls. AD and VaD patients showed similar plasma levels of antioxidants and MDA as well as a similar IgG content of protein carbonyls and dityrosine. We conclude that, independent of its nature - vascular or degenerative - dementia is associated with the depletion of a large spectrum of antioxidant micronutrients and with increased protein oxidative modification. This might be relevant to the pathophysiology of dementing disorders, particularly in light of the recently suggested importance of the vascular component in AD development. Copyright © 2004 S. Karger AG, Basel.
Abstract:
This paper discusses critical findings from a two-year EU-funded research project involving four European countries: Austria, England, Slovenia and Romania. The project had two primary aims. The first of these was to develop a systematic procedure for assessing the balance between learning outcomes acquired in education and the specific needs of the labour market. The second aim was to develop and test a set of meta-level quality indicators aimed at evaluating the linkages between education and employment. The project was distinctive in that it combined different partners from Higher Education, Vocational Training, Industry and Quality Assurance. One of the key emergent themes identified in exploratory interviews was that employers and recent business graduates in all four countries want a well-rounded education which delivers a broad foundation of key business knowledge across the various disciplines. Both groups also identified the need for personal development in critical skills and competencies. Following the exploratory study, a questionnaire was designed to address five functional business areas, as well as a cluster of eight business competencies. Within the survey, questions relating to the meta-level quality indicators assessed the impact of these learning outcomes on the workplace, in terms of the following: 1) value, 2) relevance and 3) graduate ability. This paper provides an overview of the study findings from a sample of 900 business graduates and employers. Two theoretical models are proposed as tools for predicting satisfaction with work performance and satisfaction with business education. The implications of the study findings for education, employment and European public policy are discussed.
Abstract:
We present results that compare the performance of neural networks trained with two Bayesian methods, (i) the Evidence Framework of MacKay (1992) and (ii) a Markov Chain Monte Carlo method due to Neal (1996), on a task of classifying segmented outdoor images. We also investigate the use of the Automatic Relevance Determination method for input feature selection.
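To make the Automatic Relevance Determination idea concrete, here is a minimal, hedged sketch. It is not the paper's Bayesian neural network setup; as a simpler stand-in it uses scikit-learn's ARDRegression (a Bayesian linear model with one prior precision per weight) on synthetic data in which only the first two of six inputs drive the target. The data and variable names are illustrative assumptions.

```python
# Hedged sketch of Automatic Relevance Determination (ARD) for input selection.
# Stand-in model: scikit-learn's ARDRegression (Bayesian linear regression with
# one prior precision per weight), not the paper's Bayesian neural networks.
# Synthetic data: only inputs 0 and 1 actually influence the target.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                       # six candidate inputs
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=300)

model = ARDRegression()
model.fit(X, y)

# lambda_ holds the learned precision of each weight; a large precision pins
# the weight near zero, flagging that input as irrelevant.
relevance = 1.0 / model.lambda_
for i, (w, r) in enumerate(zip(model.coef_, relevance)):
    print(f"input {i}: weight {w:+.3f}, relevance {r:.4f}")
```

Inputs whose learned precision is large have their weights shrunk towards zero and can be discarded, which is the mechanism behind using ARD for input feature selection.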
Abstract:
A practical Bayesian approach for inference in neural network models has been available for ten years, and yet it is not used frequently in medical applications. In this chapter we show how both regularisation and feature selection can bring significant benefits in diagnostic tasks through two case studies: heart arrhythmia classification based on ECG data and the prognosis of lupus. In the first of these, the number of variables was reduced by two thirds without significantly affecting performance, while in the second, only the Bayesian models had an acceptable accuracy. In both tasks, neural networks outperformed other pattern recognition approaches.
Abstract:
Data visualization algorithms and feature selection techniques are both widely used in bioinformatics but as distinct analytical approaches. Until now there has been no method of measuring feature saliency while training a data visualization model. We derive a generative topographic mapping (GTM) based data visualization approach which estimates feature saliency simultaneously with the training of the visualization model. The approach not only provides a better projection by modeling irrelevant features with a separate noise model but also gives feature saliency values which help the user to assess the significance of each feature. We compare the quality of projection obtained using the new approach with the projections from traditional GTM and self-organizing maps (SOM) algorithms. The results obtained on a synthetic and a real-life chemoinformatics dataset demonstrate that the proposed approach successfully identifies feature significance and provides coherent (compact) projections. © 2006 IEEE.
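The central mechanism described above, pairing each feature with a separate shared noise model so that a saliency weight is learned during training, can be illustrated outside the GTM setting. The sketch below is a simplified stand-in using a diagonal Gaussian mixture with per-feature saliencies updated by EM; the synthetic data, initialisation and iteration count are assumptions, and this is not the authors' GTM-based algorithm.

```python
# Hedged, simplified illustration of learning per-feature saliency during
# model training: each feature is modelled as a blend of a component-specific
# density (weight rho_d, the saliency) and a shared "noise" density
# (weight 1 - rho_d). All data and settings are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features 0-1 carry two-cluster structure, features 2-3 are pure noise.
n_per, D, K = 200, 4, 2
informative = np.vstack([rng.normal([0.0, 0.0], 0.5, size=(n_per, 2)),
                         rng.normal([3.0, 3.0], 0.5, size=(n_per, 2))])
noise = rng.normal(0.0, 1.0, size=(2 * n_per, 2))
X = np.hstack([informative, noise])
N = X.shape[0]

def gauss(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Initialise: one component mean from each half of the data (simple deterministic
# init), a shared noise model from global statistics, saliencies rho at 0.5.
pi = np.full(K, 1.0 / K)
mu = X[[10, n_per + 10]].copy()                        # (K, D) component means
var = np.ones((K, D))                                  # (K, D) component variances
mu0, var0 = X.mean(axis=0), X.var(axis=0)              # (D,) shared noise model
rho = np.full(D, 0.5)                                  # (D,) feature saliencies

for _ in range(50):
    # E-step: per-feature "relevant" vs "noise" densities and responsibilities.
    a = rho * gauss(X[:, None, :], mu, var)            # (N, K, D)
    b = (1.0 - rho) * gauss(X[:, None, :], mu0, var0)  # (N, 1, D), broadcasts
    c = a + b
    r = pi * np.prod(c, axis=2)                        # (N, K) responsibilities
    r /= r.sum(axis=1, keepdims=True)
    u = r[:, :, None] * a / c                          # weight of "relevant" explanation
    v = r[:, :, None] * b / c                          # weight of "noise" explanation

    # M-step: component parameters, shared noise model and saliencies.
    pi = r.mean(axis=0)
    su = u.sum(axis=0) + 1e-12                         # (K, D)
    mu = (u * X[:, None, :]).sum(axis=0) / su
    var = (u * (X[:, None, :] - mu) ** 2).sum(axis=0) / su + 1e-6
    sv = v.sum(axis=(0, 1)) + 1e-12                    # (D,)
    mu0 = (v * X[:, None, :]).sum(axis=(0, 1)) / sv
    var0 = (v * (X[:, None, :] - mu0) ** 2).sum(axis=(0, 1)) / sv + 1e-6
    rho = u.sum(axis=(0, 1)) / N

# Saliency should come out clearly higher for the two informative features
# than for the two pure-noise ones.
print("estimated feature saliencies:", np.round(rho, 2))
```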
Abstract:
Two experiments investigated the conditions under which majority and minority sources instigate systematic processing of their messages. Both experiments crossed source status (majority vs. minority) with message quality (strong vs. weak arguments). In each experiment, message elaboration was manipulated by varying either motivational (outcome relevance, Experiment 1) or cognitive (orientating tasks, Experiment 2) factors. The results showed that when either motivational or cognitive factors encouraged low message elaboration, there was heuristic acceptance of the majority position without detailed message processing. When the level of message elaboration was intermediate, there was message processing only for the minority source. Finally, when message elaboration was high, there was message processing for both source conditions. These results show that majority and minority influence is sensitive to motivational and cognitive factors that constrain or enhance message elaboration and that both sources can lead to systematic processing under specific circumstances. © 2007 by the Society for Personality and Social Psychology, Inc.
Abstract:
There have been two main approaches to feature detection in human and computer vision - luminance-based and energy-based. Bars and edges might arise from peaks of luminance and luminance gradient respectively, or bars and edges might be found at peaks of local energy, where local phases are aligned across spatial frequency. This basic issue of definition is important because it guides more detailed models and interpretations of early vision. Which approach better describes the perceived positions of elements in a 3-element contour-alignment task? We used the class of 1-D images defined by Morrone and Burr in which the amplitude spectrum is that of a (partially blurred) square wave and Fourier components in a given image have a common phase. Observers judged whether the centre element (eg ±45° phase) was to the left or right of the flanking pair (eg 0° phase). Lateral offset of the centre element was varied to find the point of subjective alignment from the fitted psychometric function. This point shifted systematically to the left or right according to the sign of the centre phase, increasing with the degree of blur. These shifts were well predicted by the location of luminance peaks and other derivative-based features, but not by energy peaks which (by design) predicted no shift at all. These results on contour alignment agree well with earlier ones from a more explicit feature-marking task, and strongly suggest that human vision does not use local energy peaks to locate basic first-order features. [Supported by the Wellcome Trust (ref: 056093)]
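As a worked illustration of the analysis step mentioned above (deriving the point of subjective alignment from a fitted psychometric function), here is a hedged sketch that fits a cumulative Gaussian to simulated left/right judgements as a function of lateral offset. The offsets, trial counts and simulated observer parameters are invented for illustration, not taken from the study.

```python
# Hedged sketch: estimating the point of subjective alignment (PSA) by fitting
# a cumulative-Gaussian psychometric function to simulated left/right
# judgements. Offsets, trial counts and the simulated observer are invented.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(1)

offsets = np.linspace(-10, 10, 9)           # lateral offset of the centre element (arbitrary units)
n_trials = 40                               # trials per offset
true_psa, true_sigma = 2.5, 3.0             # simulated observer

p_right = norm.cdf(offsets, loc=true_psa, scale=true_sigma)
k_right = rng.binomial(n_trials, p_right)   # counts of "centre right of flanks" responses

def psychometric(x, psa, sigma):
    """Cumulative-Gaussian psychometric function; psa is its 50% point."""
    return norm.cdf(x, loc=psa, scale=sigma)

params, _ = curve_fit(psychometric, offsets, k_right / n_trials,
                      p0=[0.0, 2.0], bounds=([-10.0, 0.1], [10.0, 20.0]))
print(f"estimated PSA = {params[0]:.2f}, slope sigma = {params[1]:.2f}")
```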
Abstract:
In recent years, technologically advanced methodologies such as Translog have gained a lot of ground in translation process research. However, in this paper it will be argued that quantitative research methods can be supplemented by ethnographic qualitative ones so as to enhance our understanding of what underlies the translation process. Although translation studies scholars have sometimes applied an ethnographic approach to the study of translation, this paper offers a different perspective and considers the potential of ethnographic research methods for tapping cognitive and behavioural aspects of the translation process. A number of ethnographic principles are discussed and it is argued that process researchers aiming to understand translators’ perspectives and intentions, how these shape their behaviours, as well as how translators reflect on the situations they face and how they see themselves, would undoubtedly benefit from adopting an ethnographic framework for their studies on translation processes.
Abstract:
We present a new form of contrast masking in which the target is a patch of low spatial frequency grating (0.46 c/deg) and the mask is a dark thin ring that surrounds the centre of the target patch. In matching and detection experiments we found little or no effect for binocular presentation of mask and test stimuli. But when mask and test were presented briefly (33 or 200 ms) to different eyes (dichoptic presentation), masking was substantial. In a 'half-binocular' condition the test stimulus was presented to one eye, but the mask stimulus was presented to both eyes with zero-disparity. This produced masking effects intermediate to those found in dichoptic and full-binocular conditions. We suggest that interocular feature matching can attenuate the potency of interocular suppression, but unlike in previous work (McKee, S. P., Bravo, M. J., Taylor, D. G., & Legge, G. E. (1994) Stereo matching precedes dichoptic masking. Vision Research, 34, 1047) we do not invoke a special role for depth perception. © 2004 Elsevier Ltd. All rights reserved.
Abstract:
This article describes the history of the Human Genome Project and how the human genome was sequenced, and analyses the likely impact the results will have on the diagnosis, scientific understanding and, ultimately, treatment of ocular disease in the future.
Abstract:
A novel biosensing system based on a micromachined rectangular silicon membrane is proposed and investigated in this paper. A distributive sensing scheme is designed to monitor the dynamics of the sensing structure. An artificial neural network is used to process the measured data and to identify cell presence and density. Without specifying any particular bio-application, the investigation concentrates on testing the performance of this kind of biosensor as a general biosensing platform. The biosensing experiments on the microfabricated membranes involve seeding different cell densities onto the sensing surface of the membrane and measuring the dynamic response of each tested silicon membrane as a series of frequency response functions (FRFs). All of these experiments are carried out in cell culture medium to simulate a practical working environment. The EA.hy 926 endothelial cell line is chosen for the bio-experiments; it represents a class of biological particles with irregular shapes, non-uniform density and uncertain growth behaviour, which are difficult to monitor using traditional biosensors. The prediction results reveal that a neural-network-based algorithm for identifying cell features from distributive sensory measurements has great potential in biosensing applications.
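To illustrate the neural-network step described above (mapping FRF measurements to a cell density estimate), here is a hedged sketch using scikit-learn's MLPRegressor on synthetic FRF-like spectra. The toy forward model, dimensions and network size are assumptions, not the study's actual data or architecture.

```python
# Hedged sketch of the neural-network step: predicting cell density from
# frequency response function (FRF) measurements. The FRF vectors come from a
# toy forward model (a resonance peak that shifts with added cell mass); the
# data, dimensions and network size are assumptions, not the study's setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_samples, n_freq_bins = 120, 50
density = rng.uniform(0.0, 1.0, size=n_samples)        # normalised cell density
freqs = np.linspace(0.0, 1.0, n_freq_bins)             # normalised frequency axis

# Toy forward model: higher cell density lowers the resonance frequency;
# additive noise mimics measurement error in the FRFs.
peak = 0.6 - 0.2 * density[:, None]
frf = np.exp(-((freqs[None, :] - peak) ** 2) / 0.01)
frf += 0.05 * rng.normal(size=frf.shape)

X_train, X_test, y_train, y_test = train_test_split(frf, density, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
net.fit(X_train, y_train)
print(f"R^2 on held-out FRFs: {net.score(X_test, y_test):.2f}")
```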
Abstract:
How effective are non-government organisations (NGOs) in their response to Third World poverty? That is the question which this thesis examines. The thesis begins with an overview of the problems facing Third World communities, and notes the way in which people in Britain have responded through NGOs. A second part of the thesis sets out the issues on which the analysis of NGOs has been made. These are: - the ways in which NGOs analyse the process of development; - the use of 'improving nutrition' and 'promoting self-reliance' as special objectives by NGOs; and - the nature of rural change, and the implications for NGOs as agents of rural development. Kenya is taken as a case study. Firstly, the political and economic structure of the country is studied, and the natures of development, nutritional problems and self-reliance in the Kenyan context are noted. The study then focusses attention onto Kitui District, an area of Kenya which at the time of the study was suffering from drought. However, it is argued that the problems of Kitui District and the constraints to change there are as much a consequence of Kenya's structural underdevelopment as of reduced rainfall. Against this background the programmes of some British NGOs in the country are examined, and it is concluded that much of their work has little relevance to the principal problems which have been identified. A final part of the thesis takes a wider look at the policies and practices of NGOs. Issues such as the choice of countries in which NGOs work, how they are represented overseas, and their educational role in Britain are considered. It is concluded that while all NGOs have a concern for the conditions in which the poorest communities of the Third World live, many NGOs take a quite narrow view of development problems, giving only little recognition to the international and intranational political and economic systems which contribute to Third World poverty.