9 results for retinal images

em Helda - Digital Repository of University of Helsinki


Relevance:

20.00%

Publisher:

Abstract:

This study sets out to provide new information about the interaction between abstract religious ideas and actual acts of violence in the early crusading movement. The sources are examined to determine whether religious violence can be distinguished as an independent source of aggression at the moment of actual bloodshed. The analysis concentrates on the practitioners of sacred violence, crusaders and their mental processing of the use of violence, the concept of the violent act, and the set of values and attitudes defining this concept. The scope of the study, the early crusade movement, covers the period from the late 1080s to the crusader conquest of Jerusalem on 15 July 1099. The research has been carried out by contextual reading of relevant sources. Eyewitness reports are compared with texts that were produced by ecclesiastics in Europe. Critical reading of the texts reveals both connecting ideas and interesting differences between them. The sources share a positive attitude towards crusading and were principally written to propagate the crusade institution and find new recruits. The emphasis of the study is on the interpretation of images: the sources are not asked what really happened in chronological order, but what the crusader understanding of reality was like. Fictional material can be even more crucial for understanding the crusading mentality. Crusader sources from around the turn of the twelfth century accept violent encounters with non-Christians on the grounds of external hostility directed towards the Christian community. The enemies of Christendom can be identified as non-Christians living outside Christian society (Muslims), non-Christians living within Christian society (Jews), or Christian heretics. Western Christians are described as both victims and avengers of the surrounding forces of diabolical evil. 
Although the ideal of universal Christianity and gradual eradication of the non-Christian is present, the practical means of achieving a united Christendom are not discussed. The objective of crusader violence was thus entirely Christian: the punishment of the wicked and the restoration of Christian morals and the divine order. The means used to achieve these objectives, however, were not. Given the scarcity of written regulations concerning the use of force in bello, perceptions concerning the practical use of violence were drawn from a multitude of notions comprising an adaptable network of secular and ecclesiastical, pre-Christian and Christian traditions. Though essentially ideological and often religious in character, the early crusader concept of the practice of violence was not exclusively rooted in Christian thought. The main conclusion of the study is that a definable crusader ideology of the use of force existed by 1100. The crusader image of violence involved several levels of thought. Predominantly, violence indicates a means of achieving higher spiritual rewards: eternal salvation and immortal glory.

Relevance:

20.00%

Publisher:

Abstract:

The subject of the thesis is the mediated construction of author images in popular music. In the study, the construction of images is treated as a process in which artists, the media and the members of the audience participate. The notions of presented, mediated and compiled author images are used in explaining the mediation process and the various authorial roles of the agents involved. In order to explore the issue more closely, I analyse the author images of a group of popular music artists representing the genres of rock, pop and electronic dance music. The analysed material consists mostly of written media texts through which the artists' authorial roles and creative responsibilities are discussed. Theoretically speaking, the starting points for the examination lie in cultural studies and discourse analysis. Even though author images may be conceived as intertextual constructions, the artist is usually presented as a recognizable figure whose purpose is to give the music its public face. This study does not, then, deal with musical authors as such, but rather with their public images and mediated constructions. Because of the author-based functioning of popular music culture and the idea of the artist's individual creative power, the collective and social processes involved in the making of popular music are often superseded by the belief in a single, originating authorship. In addition to the collective practices of music making, the roles of the media and the marketing machinery complicate attempts to clarify the sharing of authorial contributions. As the case studies demonstrate, the differences between the examined author images are connected with a number of themes ranging from issues of auteurism and stardom to the use of masked imagery and the blending of authorial voices. The emergence of new music technologies has also affected not only the ways in which music is made, but also how the artist's authorial status and artistic identity are understood. 
In the study at hand, the author images of auteurs, stars, DJs and sampling artists are discussed alongside such varied topics as collective authorship, evaluative hierarchies, visual promotion and generic conventions. Taken together, the examined case studies shed light on the functioning of popular music culture and the ways in which musical authorship is (re)defined.

Relevance:

20.00%

Publisher:

Abstract:

When experts construct mental images, they do not rely only on perceptual features; they also access domain-specific knowledge and skills in long-term memory, which enables them to exceed the capacity limitations of the short-term working memory system. The central question of the present dissertation was whether the facilitating effect of long-term memory knowledge on working memory imagery tasks is primarily based on perceptual chunking or whether it relies on higher-level conceptual knowledge. Three domains of expertise were studied: chess, music, and taxi driving. The effects of skill level, stimulus surface features, and the stimulus structure on incremental construction of mental images were investigated. A method was developed to capture the chunking mechanisms that experts use in constructing images: chess pieces, street names, and visual notes were presented in a piecemeal fashion for later recall. Over 150 experts and non-experts participated in a total of 13 experiments, as reported in five publications. The results showed skill effects in all of the studied domains when experts performed memory and problem solving tasks that required mental imagery. Furthermore, only experts' construction of mental images benefited from meaningful stimuli. Manipulation of the stimulus surface features, such as replacing chess pieces with dots, did not significantly affect experts' performance in the imagery tasks. In contrast, the structure of the stimuli had a significant effect on experts' performance in every task domain. For example, taxi drivers recalled more street names from lists that formed a spatially continuous route than from alphabetically organised lists. The results suggest that the mechanisms of conceptual chunking rather than automatic perceptual pattern matching underlie expert performance, even though the tasks of the present studies required perception-like mental representations. 
The results show that experts are able to construct skilled images that surpass working memory capacity, and that their images are conceptually organised and interpreted rather than merely depictive.

Relevance:

20.00%

Publisher:

Abstract:

What can the statistical structure of natural images teach us about the human brain? Even though the visual cortex is one of the most studied parts of the brain, surprisingly little is known about how exactly images are processed to leave us with a coherent percept of the world around us, so we can recognize a friend or drive on a crowded street without any effort. By constructing probabilistic models of natural images, the goal of this thesis is to understand the structure of the stimulus that is the raison d'être for the visual system. Following the hypothesis that the optimal processing has to be matched to the structure of that stimulus, we attempt to derive computational principles, features that the visual system should compute, and properties that cells in the visual system should have. Starting from machine learning techniques such as principal component analysis and independent component analysis we construct a variety of statistical models to discover structure in natural images that can be linked to receptive field properties of neurons in primary visual cortex such as simple and complex cells. We show that by representing images with phase invariant, complex cell-like units, a better statistical description of the visual environment is obtained than with linear simple cell units, and that complex cell pooling can be learned by estimating both layers of a two-layer model of natural images. We investigate how a simplified model of the processing in the retina, where adaptation and contrast normalization take place, is connected to the natural stimulus statistics. Analyzing the effect that retinal gain control has on later cortical processing, we propose a novel method to perform gain control in a data-driven way. Finally we show how models like those presented here can be extended to capture whole visual scenes rather than just small image patches. 
By using a Markov random field approach we can model images of arbitrary size, while still being able to estimate the model parameters from the data.
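The feature-learning pipeline sketched in this abstract starts from principal and independent component analysis of image patches. A minimal illustration of the common first step, PCA whitening, is given below; the "patches" are synthetic stand-ins for patches sampled from photographs, and this is a generic sketch, not code from the thesis:

```python
import numpy as np

# Minimal sketch of the whitening step that precedes PCA/ICA-based
# feature learning on image patches. The patches here are synthetic
# stand-ins; real ones would be sampled from natural photographs.
rng = np.random.default_rng(0)

# 1000 patches of 8x8 pixels, flattened to 64-dimensional vectors.
patches = rng.standard_normal((1000, 64))

# Center the data and estimate the covariance matrix.
patches -= patches.mean(axis=0)
cov = patches.T @ patches / len(patches)

# Eigendecomposition gives the principal components; whitening rescales
# each component to unit variance, a standard assumption of ICA methods.
eigvals, eigvecs = np.linalg.eigh(cov)
whitener = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + 1e-9)) @ eigvecs.T
white = patches @ whitener

# After whitening, the sample covariance is (close to) the identity.
is_white = np.allclose(white.T @ white / len(white), np.eye(64), atol=1e-6)
```

On real image data the leading eigenvectors would be the familiar low-frequency components of natural images, and an ICA stage run on the whitened patches would yield the simple-cell-like localized, oriented filters mentioned above.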

Relevance:

20.00%

Publisher:

Abstract:

The paradigm of computational vision hypothesizes that any visual function -- such as the recognition of your grandparent -- can be replicated by computational processing of the visual input. What are these computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, where we attempt to learn the suitable computations from the natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and the constraints and objectives specified for the learning process. This thesis consists of an introduction and 7 peer-reviewed publications, where the purpose of the introduction is to illustrate the area of study to a reader who is not familiar with computational vision research. In the scope of the introduction, we will briefly overview the primary challenges to visual processing, as well as recall some of the current opinions on visual processing in the early visual systems of animals. Next, we describe the methodology we have used in our research, and discuss the presented results. We have included some additional remarks, speculations and conclusions to this discussion that were not featured in the original publications. We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories suggesting that luminance and contrast were processed separately in natural systems due to their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence. 
Further, we provide first-time reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response energy optimization. Then, we show that attempting to extract independent components of nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable for priming, i.e. the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selectable from the environment without optimizing the mechanisms themselves. In summary, this thesis explores learning visual processing on several levels. The learning can be understood as interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence showing that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms. The work also presents some predictions and ideas regarding biological visual processing.
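The sparseness objectives mentioned above belong to the projection pursuit family: search for a direction in which the projected data look maximally non-Gaussian. Below is a hypothetical one-unit sketch using a kurtosis contrast on synthetic data (not the thesis's actual objective functions or data):

```python
import numpy as np

# Hypothetical sketch of projection pursuit with a sparseness-related
# objective: find a unit direction w maximizing the kurtosis of the
# projection, which picks out sparse, super-Gaussian structure.
rng = np.random.default_rng(1)

# Mix one sparse (Laplacian) source with Gaussian noise so there is a
# genuinely sparse direction to recover.
n = 5000
sources = np.vstack([rng.laplace(size=n), rng.standard_normal(n)])
x = np.array([[2.0, 0.5], [0.5, 1.0]]) @ sources

# Whiten so every unit-norm projection has unit variance.
vals, vecs = np.linalg.eigh(x @ x.T / n)
x = vecs @ np.diag(vals ** -0.5) @ vecs.T @ x

# FastICA-style fixed-point iteration for the kurtosis contrast:
# w <- E[x (w.x)^3] - 3 w, then renormalize to unit length.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(100):
    proj = w @ x
    w = (x * proj ** 3).mean(axis=1) - 3 * w
    w /= np.linalg.norm(w)

# The recovered direction should yield clearly super-Gaussian (sparse)
# responses: excess kurtosis near 3 for a Laplacian source.
resp = w @ x
excess_kurtosis = (resp ** 4).mean() - 3.0
```

Replacing the cubic nonlinearity or the contrast function changes which structure is found; the conjunctive and subtractive units reported above arise from different, nonlinear objective functions applied in the same spirit.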

Relevance:

20.00%

Publisher:

Abstract:

Purpose: The aim of the present study was to develop and test new digital imaging equipment and methods for diagnosis and follow-up of ocular diseases. Methods: The whole material comprised 398 subjects (469 examined eyes), including 241 patients with melanocytic choroidal tumours, 56 patients with melanocytic iris tumours, 42 patients with diabetes, a 52-year-old patient with the chronic phase of VKH disease, a 30-year-old patient with an old blunt eye injury, and 57 normal healthy subjects. Digital 50° (Topcon TRC 50 IA) and 45° (Canon CR6-45NM) fundus cameras, a new handheld digital colour video camera for eye examinations (MediTell), a new subtraction method using the Topcon Image Net Program (Topcon Corporation, Tokyo, Japan), a new method we developed for digital infrared transillumination (IRT) imaging of the iris, and a Zeiss photo slit lamp with a digital camera body were used for digital imaging. Results: Digital 50° red-free imaging had a sensitivity of 97.7% and two-field 45° and 50° colour imaging a sensitivity of 88.9-94%. The specificity of the digital 45°-50° imaging modalities was 98.9-100% versus the reference standard, and 1.2-1.6% of images were ungradeable. By using the handheld digital colour video camera only, the optic disc and central fundus located inside 20° from the fovea could be recorded, with a sensitivity of 6.9% for detection of at least mild NPDR when compared with the reference standard. Comparative use of digital colour, red-free, and red light imaging showed 85.7% sensitivity, 99% specificity, and 98.2% exact agreement versus the reference standard in differentiation of small choroidal melanoma from pseudomelanoma. The new subtraction method showed growth in four of 94 melanocytic tumours (4.3%) during a mean ± SD follow-up of 23 ± 11 months. 
The new digital IRT imaging of the iris showed the sphincter muscle and radial contraction folds of Schwalbe in the pupillary zone, and radial structural folds of Schwalbe and circular contraction furrows in the ciliary zone of the iris. The 52-year-old patient with a chronic phase of VKH disease showed extensive atrophy and occasional pigment clumps in the iris stroma, detachment of the ciliary body with severe ocular hypotony, and shallow retinal detachment of the posterior pole in both eyes. Infrared transillumination imaging and fluorescein angiographic findings of the iris showed that IR translucence (p=0.53), complete masking of fluorescence (p=0.69), presence of disorganized vessels (p=0.32), and fluorescein leakage (p=1.0) at the site of the lesion did not differentiate an iris nevus from a melanoma. Conclusions: Digital 50° red-free and two-field 50° or 45° colour imaging were suitable for DR screening, whereas the handheld digital video camera did not fulfill the needs of DR screening. Comparative use of digital colour, red-free and red light imaging was a suitable method in the differentiation of small choroidal melanoma from different pseudomelanomas. The subtraction method may reveal early growth of melanocytic choroidal tumours. Digital IRT imaging may be used to study changes of the stroma and posterior surface of the iris in various diseases of the uvea. It helped reveal iris atrophy and serous detachment of the ciliary body with ocular hypotony, together with shallow retinal detachment of the posterior pole, as new findings of the chronic phase of VKH disease. Infrared translucence and angiographic findings are useful in differential diagnosis of melanocytic iris tumours, but they cannot be used to determine if the lesion is benign or malignant.
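For readers outside the field, the sensitivity and specificity figures quoted throughout this abstract are simple ratios over a 2x2 confusion table. A small sketch with invented counts (not the study's data):

```python
# How screening figures such as "85.7% sensitivity, 99% specificity" are
# computed from a 2x2 confusion table. The counts below are invented for
# illustration; they are not the study's data.

def sensitivity(tp, fn):
    """Fraction of truly diseased eyes the test flags (true positive rate)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of healthy eyes the test correctly clears (true negative rate)."""
    return tn / (tn + fp)

# Hypothetical counts: 97 of 100 diseased eyes flagged, 99 of 100 healthy
# eyes correctly cleared.
sens = sensitivity(97, 3)   # 0.97
spec = specificity(99, 1)   # 0.99
```

A screening method needs high sensitivity so that disease is rarely missed, while high specificity keeps the number of healthy patients referred unnecessarily low; the 6.9% sensitivity of the handheld video camera is why it was judged unsuitable for DR screening.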

Relevance:

20.00%

Publisher:

Abstract:

Metabolomics is a rapidly growing research field that studies the response of biological systems to environmental factors, disease states and genetic modifications. It aims at measuring the complete set of endogenous metabolites, i.e. the metabolome, in a biological sample such as plasma or cells. Because metabolites are the intermediates and end products of biochemical reactions, metabolite compositions and metabolite levels in biological samples can provide a wealth of information on ongoing processes in a living system. Due to the complexity of the metabolome, metabolomic analysis poses a challenge to analytical chemistry. Adequate sample preparation is critical to accurate and reproducible analysis, and the analytical techniques must have high resolution and sensitivity to allow detection of as many metabolites as possible. Furthermore, as the information contained in the metabolome is immense, the data set collected from metabolomic studies is very large. In order to extract the relevant information from such large data sets, efficient data processing and multivariate data analysis methods are needed. In the research presented in this thesis, metabolomics was used to study mechanisms of polymeric gene delivery to retinal pigment epithelial (RPE) cells. The aim of the study was to detect differences in metabolomic fingerprints between transfected cells and non-transfected controls, and thereafter to identify metabolites responsible for the discrimination. The plasmid pCMV-β was introduced into RPE cells using the vector polyethyleneimine (PEI). The samples were analyzed using high performance liquid chromatography (HPLC) and ultra performance liquid chromatography (UPLC) coupled to a triple quadrupole (QqQ) mass spectrometer (MS). The software MZmine was used for raw data processing and principal component analysis (PCA) was used in statistical data analysis. 
The results revealed differences in metabolomic fingerprints between transfected cells and non-transfected controls. However, reliable fingerprinting data could not be obtained because of low analysis repeatability. Therefore, no attempts were made to identify the metabolites responsible for discrimination between sample groups. Repeatability and accuracy of analyses can be improved by protocol optimization; in this study, however, optimization of the analytical methods was hindered by the very small number of samples available for analysis. In conclusion, this study demonstrates that obtaining reliable fingerprinting data is technically demanding, and protocols need to be thoroughly optimized before the goal of gaining information on the mechanisms of gene delivery can be approached.
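The PCA-based fingerprint comparison described above can be sketched as follows. The intensity matrix here is synthetic, with a group difference planted for illustration; it is not the study's MZmine output:

```python
import numpy as np

# Sketch of the fingerprint-comparison step: PCA on a samples-by-features
# intensity matrix, of the kind produced by MZmine peak picking. The data
# are synthetic and the group effect is built in for illustration only.
rng = np.random.default_rng(2)

n_features = 20
control = rng.normal(0.0, 1.0, size=(10, n_features))
transfected = rng.normal(0.0, 1.0, size=(10, n_features))
transfected[:, :5] += 3.0           # pretend a few metabolites shift

data = np.vstack([control, transfected])
data -= data.mean(axis=0)           # mean-center before PCA

# SVD of the centered matrix yields the principal components; scores are
# the projections of each sample onto them.
_, _, vt = np.linalg.svd(data, full_matrices=False)
scores = data @ vt[:2].T            # first two PC scores per sample

# With a real, repeatable effect, the two groups separate along PC1.
gap = scores[10:, 0].mean() - scores[:10, 0].mean()
```

In the actual study, low analysis repeatability meant that group separation of this kind could not be relied upon, which is why no metabolite identification was attempted.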

Relevance:

20.00%

Publisher:

Abstract:

Dhondup Gyal (Don grub rgyal, 1953–1985) was a Tibetan writer from Amdo (Qinghai, People's Republic of China). He wrote several prose works, poems, scholarly writings and other works, which were later collected into The Collected Works of Dhondup Gyal in six volumes. He had a remarkable influence on the development of modern Tibetan literature in the 1980s. Examining his works, which are characterized by rich imagery, it is possible to notice a transition from traditional to modern ways of literary expression. Imagery is found in both the poems and prose works of Dhondup Gyal. Nature imagery is especially prominent, and his writings contain images of flowers and plants, animals, water, wind and clouds, the heavenly bodies and other environmental elements. There are also images of parts of the body, as well as material and cultural images. To analyse the images, most of which are metaphors and similes, the cognitive theory of metaphor provides a good framework for making comparisons with images in traditional Tibetan literature and also with some images in Chinese, Indian and Western literary works. The analysis shows that the images have both traditional and innovative features. The source domains of the images often appear similar to those found in traditional Tibetan literature and are slow to change. However, innovative shifts occur in the way they are mapped onto their target domains, which may express new meanings and are usually secular in nature compared to the religiosity which often characterizes traditional Tibetan literature. Dhondup Gyal's poems are written in a variety of styles, ranging from traditional types of verse compositions and poems in the ornate kāvya style to modern free verse poetry. The powerful central images of his free verse poems and some other works can be viewed as structurally innovative and have been analysed with the help of the theory of conceptual blending. 
They are often ambiguous in their meaning, but can be interpreted to express ideas related to creativity, freedom and the need for change and development.