52 results for Capitation of images

at Université de Lausanne, Switzerland


Relevance:

100.00%

Publisher:

Abstract:

This article presents a global vision of images in forensic science. The proliferation of perspectives on the use of images throughout criminal investigations and the increasing demand for research on this topic seem to call for a forensic science-based analysis. In this study, the definitions of and concepts related to material traces are revisited and applied to images, and a structured approach is used to persuade the scientific community to extend and improve the use of images as traces in criminal investigations. Current research efforts focus on technical issues and evidence assessment. This article provides a sound foundation for rationalising and explaining the processes involved in the production of clues from trace images. For example, the mechanisms through which these visual traces become clues of presence or action are described. An extensive literature review of forensic image analysis emphasises the existing guidelines and knowledge available for answering investigative questions (who, what, where, when and how). However, complementary developments are still necessary to demystify many aspects of image analysis in forensic science, including how to review and select images or use them to reconstruct an event or assist intelligence efforts. The hypothetico-deductive reasoning pathway used to discover unknown elements of an event or crime can also help scientists understand the underlying processes involved in their decision making. An analysis of a single image in an investigative or probative context is used to demonstrate the highly informative potential of images as traces and/or clues. Research efforts should be directed toward formalising the extraction and combination of clues from images. An appropriate methodology is key to expanding the use of images in forensic science.

Relevance:

100.00%

Publisher:

Abstract:

Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, to highlight links between documents produced by the same modus operandi or the same source, and thus to support forensic intelligence efforts. Inspired by previous research on digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from images. Acquisition conditions have been fine-tuned in order to optimise the reproducibility and comparability of images. Different filters and comparison metrics have been evaluated, and the performance of the method has been assessed using two calibration and validation sets of documents, made up of 101 Italian driving licences and 96 Portuguese passports seized in Switzerland, among which some were known to come from common sources. Results indicate that the use of Hue and Edge filters, or their combination, to extract profiles from images, followed by the comparison of profiles with a Canberra distance-based metric, provides the most accurate classification of documents. The method also appears to be quick, efficient and inexpensive. It can easily be operated from remote locations and shared among different organisations, which makes it very convenient for future operational applications. The method could serve as a fast first-line triage step that helps target more resource-intensive profiling methods (based, for instance, on a visual, physical or chemical examination of documents). Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (part II).
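
As an illustration of the kind of comparison described above, the sketch below extracts a normalised hue histogram from a region of interest of two scanned documents and compares the resulting profiles with a Canberra distance. It is a minimal sketch only, not the authors' prototype: the file names, ROI coordinates and number of histogram bins are assumptions, and OpenCV/SciPy stand in for whatever toolchain the prototype actually uses.

```python
import cv2
import numpy as np
from scipy.spatial.distance import canberra

def hue_profile(path, roi, bins=64):
    """Normalised hue histogram of a region of interest (x, y, width, height)
    extracted from a scanned document image."""
    img = cv2.imread(path)                      # BGR image
    x, y, w, h = roi
    patch = img[y:y + h, x:x + w]
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180]).ravel()
    return hist / hist.sum()                    # comparable across scans

# Hypothetical scans of two seized documents, same ROI in both.
roi = (100, 50, 400, 250)
profile_a = hue_profile("doc_A.png", roi)
profile_b = hue_profile("doc_B.png", roi)

# A small Canberra distance suggests a candidate link between the documents.
print("Canberra distance:", canberra(profile_a, profile_b))
```

The same pattern extends to an edge-based profile and to combining several profile distances into a single comparison score.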

Relevance:

90.00%

Publisher:

Abstract:

A wide variation in patient exposure has been observed in interventional radiology and cardiology. The purpose of this study was to compare the patient dose from fluoroscopy-guided procedures performed in non-academic centres with that in academic centres. Four procedures (coronary angiography, percutaneous coronary intervention, angiography of the lower limbs and percutaneous transluminal angioplasty of the lower limbs) were evaluated. Data on the dose-area product, fluoroscopy time and number of images for 1000 procedures were obtained from 23 non-academic centres and compared with data from 5 academic centres. No differences were found for cardiology procedures performed in non-academic centres versus academic ones. However, significantly lower doses were delivered to patients for lower limb procedures when they were performed in non-academic centres. This may be due to the more complex procedures performed in academic centres. Comparison between the centres showed a large variation in patient dose for these lower limb procedures.

Relevance:

90.00%

Publisher:

Abstract:

The widespread use of digital imaging devices for surveillance (CCTV) and entertainment (e.g., mobile phones, compact cameras) has increased the number of images recorded and the opportunities to consider these images as traces or documentation of criminal activity. The forensic science literature focuses almost exclusively on technical issues and evidence assessment [1]. Earlier steps in the investigation phase have been neglected and must be considered. This article is the first comprehensive description of a methodology for event reconstruction using images. This formal methodology was conceptualised from practical experience and applied to different contexts and case studies to test and refine it. Based on this practical analysis, we propose a systematic approach that includes a preliminary analysis followed by four main steps. These steps form a sequence in which the results of each step rely on the previous one. The methodology is not linear, however, but a cyclic, iterative progression for obtaining knowledge about an event. The preliminary analysis is a pre-evaluation phase, wherein the potential relevance of images is assessed. In the first step, images are detected and collected as pertinent trace material; the second step involves organising them and assessing their quality and informative potential. The third step includes reconstruction using clues about space, time and actions. Finally, in the fourth step, the images are evaluated and selected as evidence. These steps are described and illustrated using practical examples. The paper outlines how images elicit information about persons, objects, space, time and actions throughout the investigation process to reconstruct an event step by step. We emphasise the hypothetico-deductive reasoning framework, which demonstrates the contribution of images to generating, refining or eliminating propositions or hypotheses. This methodology provides a sound basis for extending the use of images as evidence and, more generally, as clues in investigation and crime reconstruction processes.

Relevance:

90.00%

Publisher:

Abstract:

X-ray microtomography has become a new tool in the earth sciences for obtaining non-destructive 3D image data from geological objects in which variations in mineralogy, chemical composition and/or porosity create sufficient x-ray density contrasts. We present here first, preliminary results of an application to the external and internal morphology of Permian to Recent larger foraminifera. We use a SkyScan-1072 high-resolution desktop micro-CT system. The system has a conical x-ray source with a spot size of about 5 µm that runs at 20-100 kV and 0-250 µA, resulting in a maximal resolution of 5 µm. X-ray transmission images are captured by a scintillator coupled via fibre optics to a 1024x1024 pixel 12-bit CCD. The object is placed between the x-ray source and the scintillator on a stub that rotates 360° around its vertical axis in steps as small as 0.24 degrees. Sample size is limited to 2 cm because of the x-ray absorption of geological material. The transmission images are back-projected using a Feldkamp algorithm into a vertical stack of up to 1000 1Kx1K images that represent horizontal cuts of the object. This calculation takes two to several hours on a double-processor 2.4 GHz PC. The stack of images (.bmp) can be visualized with any 3D-imaging software and used to produce cuts of larger foraminifera. Among other applications, the 3D-imaging software furnished by SkyScan can produce 3D models by defining a threshold density value to distinguish "solid" from "void". Several models with variable threshold values and colors can be imbricated, rotated and cut together. The best results were obtained with microfossils devoid of chamber-filling cements (Permian, Eocene, Recent). However, even slight differences in cement mineralogy/composition can result in surprisingly good x-ray density contrasts. X-ray microtomography may develop into a powerful tool for larger microfossils with a complex internal structure, because it is non-destructive, requires no preparation of the specimens, and produces a true 3D image data set. We will use these data sets in the future to produce cuts in any direction and compare them with arbitrary cuts of complex microfossils in thin sections. Many groups of benthic and planktonic foraminifera may become more easily determinable in thin section in this way.
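
To make the reconstruction-to-model step concrete, the sketch below loads a stack of reconstructed slices, applies a single grey-value threshold to separate "solid" from "void", and extracts an isosurface. It is only a sketch under assumed file names and an illustrative threshold; scikit-image's marching cubes stands in for the SkyScan 3D software mentioned above.

```python
import glob
import numpy as np
from skimage import io, measure

# Load the stack of reconstructed horizontal cuts (one .bmp per slice; paths are illustrative).
slice_paths = sorted(glob.glob("reconstruction/slice_*.bmp"))
volume = np.stack([io.imread(p, as_gray=True) for p in slice_paths], axis=0)

# A single grey-value threshold separates "solid" (test and cement) from "void"
# (open chambers); 0.35 is purely illustrative and would be tuned per sample.
solid = volume > 0.35

# Isosurface of the binary volume, usable for 3D visualisation or virtual cuts.
verts, faces, normals, values = measure.marching_cubes(solid.astype(np.float32), level=0.5)
print(f"{len(slice_paths)} slices, surface mesh with {len(faces)} triangles")
```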

Relevance:

90.00%

Publisher:

Abstract:

OBJECT: In this study the accuracy of multislice computerized tomography (MSCT) angiography in the postoperative examination of clip-occluded intracranial aneurysms was compared with that of intraarterial digital subtraction (DS) angiography. METHODS: Forty-nine consecutive patients with 60 clipped aneurysms (41 of which had ruptured) were studied with the aid of postoperative MSCT and DS angiography. Both types of radiological studies were reviewed independently by two observers to assess the quality of the images, the artifacts left by the clips, the completeness of aneurysm occlusion, the patency of the parent vessel, and the duration and cost of the examination. RESULTS: The quality of MSCT angiography was good in 42 patients (86%). Poor-quality MSCT angiograms (14%) were a result of the late acquisition of images in three patients and the presence of clip or motion artifacts in four. Occlusion of the aneurysm on good-quality MSCT angiograms was confirmed in all but two patients, in whom a small (2-mm) remnant was confirmed on DS angiograms. In one patient, occlusion of a parent vessel was seen on DS angiograms but missed on MSCT angiograms. The sensitivity and specificity for detecting neck remnants on MSCT angiography were both 100%, and the sensitivity and specificity for evaluating vessel patency were 80 and 100%, respectively (95% confidence interval 29.2-100%). Interobserver agreements were 0.765 and 0.86, respectively. The mean duration of the examination was 13 minutes for MSCT angiography and 75 minutes for DS angiography (p < 0.05). Multislice CT angiography was highly cost effective (p < 0.01). CONCLUSIONS: Current-generation MSCT angiography is an accurate noninvasive tool for the assessment of clipped aneurysms in the anterior circulation. Its high sensitivity and low cost warrant its use for routine postoperative control examinations following clip placement on an aneurysm. Digital subtraction angiography must be performed if the interpretation of MSCT angiograms is doubtful or if the aneurysm is located in the posterior circulation.
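
For readers who want to reproduce this kind of diagnostic summary, the sketch below computes sensitivity, specificity and an exact (Clopper-Pearson) 95% confidence interval from a 2x2 table, with DS angiography taken as the reference standard. The counts are illustrative placeholders, not the study's data, and the exact-interval choice is an assumption about how such an interval could be obtained.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a proportion k/n."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# Illustrative 2x2 table for one finding (e.g. loss of parent-vessel patency);
# these counts are NOT taken from the study.
tp, fn = 4, 1     # findings detected / missed on MSCT angiography
tn, fp = 55, 0    # normal vessels correctly called / falsely flagged

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity {sensitivity:.0%}, 95% CI {clopper_pearson(tp, tp + fn)}")
print(f"specificity {specificity:.0%}, 95% CI {clopper_pearson(tn, tn + fp)}")
```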

Relevance:

90.00%

Publisher:

Abstract:

Objectives: We are interested in the numerical simulation of the anastomotic region between the outflow cannula of a left ventricular assist device (LVAD) and the aorta. Segmentation, geometry reconstruction and grid generation from patient-specific data remain an issue because of the variable quality of DICOM images, in particular CT scans (e.g. metallic noise of the device, non-aortic contrast phase). We propose a general framework to overcome this problem and create suitable grids for numerical simulations. Methods: Preliminary treatment of the images is performed by reducing the level window and enhancing the contrast of the greyscale image using contrast-limited adaptive histogram equalization. A gradient anisotropic diffusion filter is applied to reduce the noise. Then, watershed segmentation algorithms and mathematical morphology filters allow the patient geometry to be reconstructed. This is done using the InsightToolKit library (www.itk.org). Finally, the Vascular Modeling ToolKit (www.vmtk.org) and gmsh (www.geuz.org/gmsh) are used to create the meshes for the fluid (blood) and the structure (arterial wall, outflow cannula) and to identify the boundary layers a priori. The method was tested on five patients with left ventricular assist devices who underwent a CT scan. Results: The method produced good results in four patients. The anastomosis area is recovered and the generated grids are suitable for numerical simulations. In one patient the method failed to produce a good segmentation because of the small dimension of the aortic arch with respect to the image resolution. Conclusions: The described framework allows the use of data that could not otherwise be segmented by standard automatic segmentation tools. In particular, the computational grids that have been generated are suitable for simulations that take fluid-structure interactions into account. Finally, the presented method features good reproducibility and is fast to apply.
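
The preprocessing and segmentation chain described above can be sketched with SimpleITK, the Python wrapper around the InsightToolKit the authors cite. This is a hedged sketch, not the authors' pipeline: the file names, filter parameters, adaptive-histogram-equalization settings (standing in for the contrast-limited variant) and the watershed label assumed to contain the aorta are all illustrative.

```python
import SimpleITK as sitk

# Read the CT volume (file name is illustrative).
img = sitk.ReadImage("lvad_ct.mha", sitk.sitkFloat32)

# Contrast enhancement of the greyscale volume (adaptive histogram equalization,
# used here as a stand-in for the contrast-limited variant described above).
clahe = sitk.AdaptiveHistogramEqualizationImageFilter()
clahe.SetAlpha(0.5)
clahe.SetBeta(0.5)
enhanced = clahe.Execute(img)

# Edge-preserving noise reduction with gradient anisotropic diffusion.
diffusion = sitk.GradientAnisotropicDiffusionImageFilter()
diffusion.SetTimeStep(0.0625)          # stability limit for 3D images
diffusion.SetConductanceParameter(2.0)
diffusion.SetNumberOfIterations(10)
smoothed = diffusion.Execute(enhanced)

# Watershed segmentation on the gradient magnitude, then a morphological closing
# of the label assumed to contain the aorta (the label index is illustrative).
gradient = sitk.GradientMagnitude(smoothed)
labels = sitk.MorphologicalWatershed(gradient, level=1.0, markWatershedLine=False)
aorta_mask = sitk.BinaryMorphologicalClosing(labels == 1, (2, 2, 2))
sitk.WriteImage(aorta_mask, "aorta_mask.mha")
```

The resulting mask would then be passed to the Vascular Modeling ToolKit and gmsh for surface extraction and mesh generation, as described in the abstract.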

Relevance:

90.00%

Publisher:

Abstract:

The repeated presentation of simple objects as well as biologically salient objects can cause the adaptation of behavioral and neural responses during the visual categorization of these objects. Mechanisms of response adaptation during repeated food viewing are of particular interest for better understanding food intake beyond energetic needs. Here, we measured visual evoked potentials (VEPs) and conducted neural source estimations for initial and repeated presentations of high-energy and low-energy foods as well as non-food images. The results of our study show that the behavioral and neural responses to food and food-related objects are not uniformly affected by repetition. While the repetition of images displaying low-energy foods and non-food objects modulated VEPs as well as their underlying neural sources and increased behavioral categorization accuracy, the responses to high-energy food images remained largely invariant between initial and repeated encounters. Brain mechanisms engaged when viewing images of high-energy foods thus appear less susceptible to repetition effects than responses to low-energy food and non-food images. This finding is likely related to the superior reward value of high-energy foods and might be one reason why high-energy foods in particular are indulged in despite their potentially detrimental health consequences.

Relevance:

90.00%

Publisher:

Abstract:

Nowadays, the joint exploitation of images acquired daily by remote sensing instruments and of images available from archives allows detailed monitoring of the transitions occurring at the surface of the Earth. These modifications of the land cover generate spectral discrepancies that can be detected via the analysis of remote sensing images. Independently of the origin of the images and of the type of surface change, correct processing of such data implies the adoption of flexible, robust and possibly nonlinear methods, to correctly account for the complex statistical relationships characterizing the pixels of the images. This thesis deals with the development and the application of advanced statistical methods for multi-temporal optical remote sensing image processing tasks. Three different families of machine learning models have been explored and fundamental solutions for change detection problems are provided. In the first part, change detection with user supervision has been considered. In a first application, a nonlinear classifier has been applied with the intent of precisely delineating flooded regions from a pair of images. In a second case study, the spatial context of each pixel has been injected into another nonlinear classifier to obtain a precise mapping of new urban structures. In both cases, the user provides the classifier with examples of what they believe has or has not changed. In the second part, a completely automatic and unsupervised method for precise binary detection of changes has been proposed. The technique allows very accurate mapping without any user intervention, which is particularly useful when the readiness and reaction time of the system are a crucial constraint. In the third part, the problem of statistical distributions shifting between acquisitions is studied. Two approaches that transform the pair of bi-temporal images and reduce their differences unrelated to changes in land cover are studied. The methods align the distributions of the images, so that the pixel-wise comparison can be carried out with higher accuracy. Furthermore, the second method can deal with images from different sensors, regardless of the dimensionality of the data or the spectral information content. This opens the door to possible solutions for a crucial problem in the field: detecting changes when the images have been acquired by two different sensors.
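
As a point of reference for the unsupervised binary change-detection setting discussed above, the sketch below flags changed pixels in a co-registered bi-temporal pair using the change-vector magnitude and an automatic Otsu threshold. This is a deliberately simple baseline under assumed file names; it is not one of the thesis's nonlinear or kernel-based methods.

```python
import numpy as np
from skimage import io
from skimage.filters import threshold_otsu

# Two co-registered, radiometrically comparable acquisitions (bands on the last axis);
# the file names are illustrative.
t1 = io.imread("scene_before.tif").astype(np.float64)
t2 = io.imread("scene_after.tif").astype(np.float64)

# Change-vector magnitude: per-pixel Euclidean norm of the spectral difference.
magnitude = np.sqrt(((t2 - t1) ** 2).sum(axis=-1))

# Automatic binary decision: pixels above the Otsu threshold are flagged as changed.
change_map = magnitude > threshold_otsu(magnitude)
print(f"{change_map.mean():.1%} of pixels flagged as changed")
```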

Relevance:

90.00%

Publisher:

Abstract:

Purpose: To set local dose reference levels (DRLs) that allow radiologists to control stochastic and deterministic effects. Methods and materials: Dose indicators for cerebral angiographies and hepatic embolizations performed in our hospital were collected over 4 months and analyzed. The data obtained with an image amplifier were compared with those obtained with a flat panel detector, using the Mann-Whitney test. Results: For the 40 cerebral angiographies performed, the DRLs for DAP, fluoroscopy time and number of images were 166 Gy.cm2, 19 min and 600 images, respectively. The maximum DAP was 490 Gy.cm2 (fluoroscopy time: 84 min). No significant difference in fluoroscopy time or DAP was observed between the image amplifier and the flat panel detector (p = 0.88). The number of images was larger for the flat panel detector (p = 0.004). The values obtained were slightly above the currently proposed DRLs of 150 Gy.cm2, 15 min and 400 images. For the 13 hepatic embolizations, the DRLs for DAP, fluoroscopy time and number of images were 315 Gy.cm2, 25 min and 370 images. The maximum DAP delivered was 845 Gy.cm2 (fluoroscopy time of 48 min). No significant difference between the image amplifier and the flat panel detector was observed (p = 0.005). The values obtained were also slightly above the currently proposed DRLs of 300 Gy.cm2, 20 min and 200 images. Conclusion: These results show that the introduction of the flat panel detector did not lead to an increase in patient dose. A DRL for cumulative dose (which allows deterministic effects to be controlled) should be introduced to give radiologists full control of the risks associated with ionizing radiation. Results of this ongoing study will be presented.
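
The group comparison reported above relies on the Mann-Whitney test; the sketch below shows how such a comparison can be run with SciPy on dose-area products grouped by detector type. The values are illustrative placeholders, not the study's measurements.

```python
from scipy.stats import mannwhitneyu

# Illustrative dose-area products (Gy.cm2) for cerebral angiographies,
# grouped by detector type; these are NOT the study's data.
dap_image_amplifier = [120, 95, 180, 210, 160, 140, 175]
dap_flat_panel      = [130, 150, 110, 190, 170, 145, 200]

stat, p = mannwhitneyu(dap_image_amplifier, dap_flat_panel, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.2f}")  # compare p to the chosen significance level (e.g. 0.05)
```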

Relevance:

90.00%

Publisher:

Abstract:

Atherosclerosis is a chronic cardiovascular disease that involves the thickening of the artery walls as well as the formation of plaques (lesions) causing the narrowing of the lumen, in vessels such as the aorta and the coronary and carotid arteries. Magnetic resonance imaging (MRI) is a promising modality for the assessment of atherosclerosis, as it is a non-invasive and patient-friendly procedure that does not use ionizing radiation. MRI offers high soft tissue contrast without the need for intravenous contrast media, while modification of the MR pulse sequences allows further adjustment of the contrast for specific diagnostic needs. As such, MRI can create angiographic images of the vessel lumens to assess stenoses at the late stage of the disease, as well as blood flow-suppressed images for the early investigation of the vessel wall and the characterization of atherosclerotic plaques. However, despite the great technical progress of the past two decades, MRI is an intrinsically low-sensitivity technique and limitations still exist in terms of accuracy and performance. A major challenge for coronary artery imaging is respiratory motion. State-of-the-art diaphragmatic navigators rely on an indirect measure of motion, perform a 1D correction, and have long and unpredictable scan times. In response, self-navigation (SN) strategies have recently been introduced that offer 100% scan efficiency and increased ease of use. SN detects respiratory motion directly from the image data obtained at the level of the heart, and retrospectively corrects the same data before final image reconstruction. Thus, SN holds potential for multi-dimensional motion compensation. In this regard, this thesis presents novel SN methods that estimate 2D and 3D motion parameters from aliased sub-images obtained from the same raw data composing the final image. Combination of all corrected sub-images produces a final image with reduced motion artifacts for the visualization of the coronaries. The first study (section 2.2, 2D Self-Navigation with Compressed Sensing) consists of a method for 2D translational motion compensation. Here, the use of compressed sensing (CS) reconstruction is proposed and investigated to support motion detection by reducing aliasing artifacts. In healthy human subjects, CS demonstrated an improvement in motion detection accuracy in simulations on in vivo data, while improved coronary artery visualization was demonstrated on in vivo free-breathing acquisitions. However, the respiration-induced motion of the heart has been shown to occur in three dimensions and to be more complex than a simple translation. Therefore, the second study (section 2.3, 3D Self-Navigation) consists of a method for 3D affine motion correction rather than 2D only. Here, different techniques were adopted to reduce the background signal contribution in respiratory motion tracking, as this can be adversely affected by the static tissue that surrounds the heart. The proposed method was shown to improve conspicuity and visualization of the coronary arteries in healthy and cardiovascular disease patient cohorts in comparison to a conventional 1D SN method. In the third study (section 2.4, 3D Self-Navigation with Compressed Sensing), the same tracking methods were used to obtain sub-images sorted according to the respiratory position. Then, instead of motion correction, a compressed sensing reconstruction was performed on all sorted sub-image data.
This process exploits the consistency of the sorted data to reduce aliasing artifacts such that the sub-image corresponding to the end-expiratory phase can directly be used to visualize the coronaries. In a healthy volunteer cohort, this strategy improved conspicuity and visualization of the coronary arteries when compared to a conventional 1D SN method. For the visualization of the vessel wall and atherosclerotic plaques, the state-of-the-art dual inversion recovery (DIR) technique is able to suppress the signal coming from flowing blood and provide positive wall-lumen contrast. However, optimal contrast may be difficult to obtain and is subject to RR variability. Furthermore, DIR imaging is time-inefficient and multislice acquisitions may lead to prolonged scanning times. In response, and as the fourth study of this thesis (chapter 3, Vessel Wall MRI of the Carotid Arteries), a phase-sensitive DIR method has been implemented and tested in the carotid arteries of a healthy volunteer cohort. By exploiting the phase information of images acquired after DIR, the proposed phase-sensitive method enhances wall-lumen contrast while widening the window of opportunity for image acquisition. As a result, a 3-fold increase in volumetric coverage is obtained at no extra cost in scanning time, while image quality is improved. In conclusion, this thesis presented novel methods to address some of the main challenges for MRI of atherosclerosis: the suppression of motion and flow artifacts for improved visualization of vessel lumens, walls and plaques. These methods were shown to significantly improve image quality in healthy human subjects, as well as the scan efficiency and ease of use of MRI. Extensive validation is now warranted in patient populations to ascertain their diagnostic performance. Eventually, these methods may bring the use of atherosclerosis MRI closer to clinical practice.
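
To illustrate the retrospective correction step that self-navigation relies on, the sketch below undoes a known in-plane translation directly in 2D k-space using the Fourier shift theorem (a spatial shift corresponds to a linear phase ramp in k-space). It is a toy, single-slice illustration under assumed shift values, not the thesis's SN pipeline, which estimates the motion from the data itself and extends the correction to 3D.

```python
import numpy as np

def translate_in_kspace(kspace, dy, dx):
    """Shift the image encoded by a 2D k-space by (dy, dx) pixels by applying
    the linear phase ramp dictated by the Fourier shift theorem."""
    ny, nx = kspace.shape
    ky = np.fft.fftfreq(ny)[:, None]   # spatial frequencies in cycles per pixel
    kx = np.fft.fftfreq(nx)[None, :]
    return kspace * np.exp(-2j * np.pi * (ky * dy + kx * dx))

# Toy example: a simple image "acquired" after a respiratory displacement of
# (+5, -3) pixels, then corrected retrospectively in k-space.
image = np.zeros((128, 128))
image[40:60, 50:80] = 1.0
moved_kspace = np.fft.fft2(np.roll(image, (5, -3), axis=(0, 1)))
corrected = np.fft.ifft2(translate_in_kspace(moved_kspace, -5, 3)).real
print("max residual error:", np.abs(corrected - image).max())  # ~0 up to floating-point error
```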

Relevance:

90.00%

Publisher:

Abstract:

Almost thirty years ago, as the social sciences underwent their 'discursive turn', Bernardo Secchi (1984) drew the attention of planners to the production of myths in what he called the 'urban planning narrative', turning an activity often seen as primarily technical into one centred on the production of images and ideas. This conception of planning practice gave rise to a powerful current of research in English-speaking countries. Efforts were made both to combine the urban planning narrative with storytelling and to establish storytelling as a prescriptive or descriptive model for planning practice. Thus, just as storytelling is supposed to have led democratic communication off track through a pronounced concern for a good story, storytelling applied to the field of urban production may have led to an increasing preoccupation with the staging and showmanship of projects to the detriment of their real inclusion in political debate. It is this possible transformation of territorial action that will be the focus of the articles collected in this special issue of Articulo - Journal of urban research.

Relevance:

90.00%

Publisher:

Abstract:

This article analyses how Radha was depicted in miniature paintings between the 16th and 19th century in North India. Interrogating the link between text and image, contrasting poetry, style and historical settings with the visual representations of this central figure, my reflections focus on the changing nature of Radha. Through various examples from miniature paintings of different periods and schools, this article analyses the way the rich personality of Radha was transposed into images. In order to stress the changes brought to this female figure, I compare her to Krishna, the masculine figure who is always at her side. The main goal of the article is to show the normative power of images on the figure of Radha, with normativity being understood as the simplification, iconisation, aestheticisation and stereotypification of a figure with polysemous references.

Relevance:

80.00%

Publisher:

Abstract:

We aimed to determine whether human subjects' reliance on different sources of spatial information encoded in different frames of reference (i.e., egocentric versus allocentric) affects their performance, decision time and memory capacity in a short-term spatial memory task performed in the real world. Subjects were asked to play the Memory game (a.k.a. the Concentration game) without an opponent, in four different conditions that controlled for the subjects' reliance on egocentric and/or allocentric frames of reference when elaborating a spatial representation of the image locations that would enable maximal efficiency. We report experimental data from young adult men and women, and describe a mathematical model to estimate human short-term spatial memory capacity. We found that short-term spatial memory capacity was greatest when an egocentric spatial frame of reference enabled subjects to encode and remember the image locations. However, when egocentric information was not reliable, short-term spatial memory capacity was greater and decision time shorter when an allocentric representation of the image locations with respect to distant objects in the surrounding environment was available, as compared to when only a spatial representation encoding the relationships between the individual images, independent of the surrounding environment, was available. Our findings thus further demonstrate that changes in viewpoint produced by the movement of images placed in front of a stationary subject are not equivalent to the movement of the subject around stationary images. We discuss possible limitations of classical neuropsychological and virtual reality experiments on spatial memory, which typically restrict the sensory information normally available to human subjects in the real world.

Relevance:

80.00%

Publisher:

Abstract:

Although the physical properties of DNA structure have been intensively studied for over 50 years, there are still many important questions that need to be answered. For example, what happens to protein-free double-stranded DNA when it is strongly bent, as in DNA forming nucleosomes? Is such protein-free DNA smoothly bent (i.e. it remains within the elastic limits of DNA rigidity), or does it release its bending stress by forming sharp kinks (i.e. it exits the elastic regime and breaks the stacking between neighbouring base pairs in localized regions)? Electron microscopy can provide an answer to this question by directly visualizing DNA minicircles that have the size of nucleosome gyres (ca. 90 bp). For the answer to be scientifically valid, one needs to observe DNA molecules while they are still suspended in the solution of interest and no staining chemicals or fixatives have been added, since these can change the properties of the DNA.
Cryo-EM techniques developed by Jacques Dubochet's group beginning in the 1980s permit direct visualization of DNA molecules suspended in cryo-vitrified layers of aqueous solutions. However, the relatively weak contrast of cryo-EM preparations, combined with the very small size of the DNA minicircles, made it necessary to optimize many of the steps and parameters of the cryo-EM specimen preparation and image acquisition processes in order to obtain stereo pairs of images that permit the 3D reconstruction of the observed DNA minicircles. In the first part of my thesis I describe the optimization of the cryo-EM preparation and the image acquisition processes using plasmid-size DNA molecules as test objects. In the second part, I describe how I formed the 94 bp DNA minicircles and how I introduced structural modifications such as nicks or gaps. In the third part, I describe the cryo-EM analysis of the constructed DNA minicircles. That analysis, supported by biochemical tests, strongly indicates that DNA minicircles as small as 94 bp remain within the elastic limits of DNA structure, i.e. the minicircles adopt a regular circular shape where bending is redistributed along the molecules.