980 results for complex representations
Abstract:
This master's thesis examines the notion of hyperrealism in contemporary literature and its specific embodiment in three novels by Suzanne Jacob: L'Obéissance (1991), Rouge, mère et fils (2001) and Fugueuses (2005). Recourse to the theory and history of painting is essential, since hyperrealism was first taken up by pictorial art. Moreover, painting, photography, cinema, music, television, sculpture, architecture and literature are all mediations strongly present in the hyperrealist novel. This multiple presence of media is essential to the hyperrealist character of a work; the attempt to integrate the real proceeds through a representational detour. The stylistic and narrative manifestations of hyperrealism are associated with the integration of forms borrowed from other arts or media, such as the fugue and the fait divers. The effects of hyperrealism on narration are also manifested in a splintering of focalizations, as witnessed by narrative fragmentation and by the importance accorded to detail. Finally, hyperrealism plays on a constant tension between continuity and rupture. The consequences are to be considered as a kind of apprehension of the real, by both the character and the novel, which must contend with a multiplicity of representations.
Abstract:
Background: As people age, language-processing ability changes. While several factors modify discourse comprehension in older adults, the syntactic complexity of auditory discourse has received scant attention, despite the widely researched domain of syntactic processing of single sentences in older adults. Aims: The aims of this study were to investigate the ability of healthy older adults to understand stories that differed in syntactic complexity, and the relation of that ability to working memory. Methods & Procedures: A total of 51 healthy adults (divided into three age groups) took part. They listened to brief stories (syntactically simple and syntactically complex) and responded to true/false comprehension probes after each story. Working memory capacity (forward and backward digit span) was also measured. Outcomes & Results: Differences were found in the ability of healthy older adults to understand simple and complex discourse. The complex discourse in particular was more sensitive in discerning age-related language patterns: only the complex discourse task correlated moderately with age, and there was no correlation between age and simple discourse. Moderate correlations were also found between working memory and complex discourse. Education correlated with neither simple nor complex discourse. Conclusions: Older adults may be less efficient at forming syntactically complex representations, and this may be influenced by limitations in working memory.
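The correlational analyses described in this abstract can be illustrated with a minimal sketch. The scores below are invented for illustration (the study's actual data are not given in the abstract); `pearson_r` is the standard product-moment correlation formula.

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation between two score lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: digit span vs. complex-discourse accuracy.
digit_span = [4, 5, 5, 6, 7, 8]
complex_acc = [0.60, 0.70, 0.65, 0.75, 0.80, 0.90]
r = pearson_r(digit_span, complex_acc)  # positive correlation
```

With real data, the size of r (roughly 0.3 to 0.5 for "moderate") is what distinguishes the complex-discourse findings from the null results for simple discourse and education.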
Abstract:
Among all possible realizations of quark and antiquark assemblies, the nucleon (the proton and the neutron) is the most stable of all hadrons and has consequently been the subject of intensive study. Its mass, shape, radius and more complex representations of its internal structure have been measured for several decades using different probes. The proton (spin 1/2) is described by the electric (GE) and magnetic (GM) form factors, which characterise its internal structure. The simplest way to measure the proton form factors consists of measuring the angular distribution of electron-proton elastic scattering, accessing the so-called Space-Like region where q2 < 0. Using the crossed channel antiproton proton <--> e+e-, one accesses another kinematical region, the so-called Time-Like region where q2 > 0. However, due to the antiproton proton <--> e+e- threshold q2th, only the kinematical domain q2 > q2th > 0 is available. To access the unphysical region, one may use the antiproton proton --> pi0 e+ e- reaction, where the pi0 takes away part of the system's energy, allowing q2 to be varied between q2th and almost 0. This thesis aims to show the feasibility of such measurements with the PANDA detector, which will be installed on the new high-intensity antiproton ring at the FAIR facility in Darmstadt. To describe the antiproton proton --> pi0 e+ e- reaction, a Lagrangian-based approach is developed. The 5-fold differential cross section is determined and related to linear combinations of hadronic tensors. Under the assumption of one-nucleon exchange, the hadronic tensors are expressed in terms of the two complex proton electromagnetic form factors. An extraction method is developed which provides access to the proton electromagnetic form factor ratio R = |GE|/|GM| and, for the first time in an unpolarized experiment, to the cosine of the phase difference. Such measurements have never before been performed in the unphysical region.
Extended simulations were performed to show how the ratio R and the cosine can be extracted from the positron angular distribution. Furthermore, a model is developed for the antiproton proton --> pi0 pi+ pi- background reaction, considered the most dangerous one. The background-to-signal cross section ratio was estimated under different combinations of cuts on the particle identification information from the different detectors and on the kinematic fits. The background contribution can be reduced to the percent level or even less, with a corresponding signal efficiency ranging from a few percent to 30%. The precision of the determination of the ratio R and of the cosine is estimated from the expected counting rates using a Monte Carlo method. Part of this thesis is also dedicated to more technical work: the study of a prototype of the electromagnetic calorimeter and the determination of its resolution.
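As a rough illustration of how R enters an angular distribution, the sketch below uses the standard unpolarized two-body annihilation form, in which |GM|^2 carries a (1 + cos^2 theta) dependence and |GE|^2/tau a sin^2 theta dependence (tau = q^2/4M^2). This is not the thesis's full 5-fold pi0 e+ e- analysis; the grid-search fit and all names are illustrative. Noiseless pseudo-data generated with a known value of R^2/tau are fitted back.

```python
def angular_shape(cos_theta, r2_over_tau):
    # Unpolarized two-body annihilation shape (up to a global factor):
    # |GM|^2 contributes (1 + cos^2 th), |GE|^2 / tau contributes sin^2 th.
    return (1.0 + cos_theta ** 2) + r2_over_tau * (1.0 - cos_theta ** 2)

def fit_r2_over_tau(cos_bins, counts):
    # Grid-search least squares on the normalised angular distribution.
    total = sum(counts)
    best_k, best_chi2 = 0.0, float("inf")
    for i in range(301):                      # scan R^2/tau over [0, 3]
        k = i * 0.01
        norm = sum(angular_shape(c, k) for c in cos_bins)
        chi2 = sum((n / total - angular_shape(c, k) / norm) ** 2
                   for c, n in zip(cos_bins, counts))
        if chi2 < best_chi2:
            best_k, best_chi2 = k, chi2
    return best_k

# Noiseless pseudo-experiment: generate counts with a known ratio, refit.
true_k = 0.5
cos_bins = [-0.9 + 0.2 * i for i in range(10)]
counts = [10000 * angular_shape(c, true_k) for c in cos_bins]
estimate = fit_r2_over_tau(cos_bins, counts)   # recovers ~0.5
```

In the real analysis, finite statistics and background subtraction turn this into the Monte Carlo precision study the abstract describes.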
Abstract:
Le Siège de Calais, hailed by its author in 1765 as France’s ‘première tragédie nationale’, rolled into Paris like a storm. Pierre-Laurent de Belloy’s play about French bravery during the Hundred Years’ War (1337-1453) appeared on the heels of France’s defeat in the Seven Years’ War (1756-1763). Le Siège de Calais was performed throughout Europe and published numerous times during the second half of the eighteenth century. De Belloy emerged as a national hero, receiving prizes from Louis XV, accolades from the city of Calais, and membership in the prestigious Académie française. Since the French Revolution, however, the popularity of Le Siège de Calais has waned, owing to its overt glorification of France’s royal machine. More than two centuries later, the play warrants a fresh look from a holistic perspective. De Belloy’s tragedy and the varied responses it provoked – many of which are included in this edition – offer complex representations of French political history and patriotic sentiment. Le Siège de Calais reveals conflicting images of gender roles, political debate and family values during the twilight of the Ancien régime; it also constituted one of the last moments when serious drama asserted its role as a popular force.
Abstract:
A rhetorical approach to the fiction of war offers an appropriate vehicle by which one may encounter and interrogate such literature and the cultural metanarratives that exist therein. My project is a critical analysis—one that relies heavily upon Kenneth Burke’s dramatistic method and his concepts of scapegoating, the comic corrective, and hierarchical psychosis—of three war novels published in 2012 (The Yellow Birds by Kevin Powers, FOBBIT by David Abrams, and Billy Lynn’s Long Halftime Walk by Ben Fountain). This analysis assumes a rhetorical screen in order to subvert and redirect the grand narratives the United States perpetuates in art form whenever it goes to war. Kenneth Burke’s concept of ad bellum purificandum (the purification of war) sought to bridge the gap between war experience and the discourse that it creates in both art and criticism. My work extends that project. I examine the symbolic incongruity of convenient symbols that migrate from war to war (“Geronimo” was used as code for Osama bin Laden’s death during the S.E.A.L. team raid; “Indian Country” stands for any dangerous land in Iraq; hajji is this generation’s epithet for the enemy other). Such an examination can weaken our cultural “symbol mongering,” to borrow a phrase from Walker Percy. These three books, examined according to Burke’s methodology, exhibit a wide range of approaches to the soldier’s tale. Notably, however, whether they refigure the grand narratives of modern culture or recast the common redemptive war narrative into more complex representations, this examination shows how one can grasp, contend with, and transcend the metanarrative of the typical, redemptive war story.
Abstract:
Authors from Burrough (1992) to Heuvelink et al. (2007) have highlighted the importance of GIS frameworks which can handle incomplete knowledge in data inputs, in decision rules and in the geometries and attributes modelled. It is particularly important for this uncertainty to be characterised and quantified when GI data is used for spatial decision making. Despite a substantial and valuable literature on means of representing and encoding uncertainty and its propagation in GI (e.g., Hunter and Goodchild 1993; Duckham et al. 2001; Couclelis 2003), no framework yet exists to describe and communicate uncertainty in an interoperable way. This limits the usability of the ever-increasing Internet resources of geospatial data built on specifications that provide frameworks for the ‘GeoWeb’ (Botts and Robin 2007; Cox 2006). In this paper we present UncertML, an XML schema which provides a framework for describing uncertainty as it propagates through many applications, including online risk management chains. This uncertainty description ranges from simple summary statistics (e.g., mean and variance) to complex representations such as parametric, multivariate distributions at each point of a regular grid. The philosophy adopted in UncertML is that all data values are inherently uncertain (i.e., they are random variables, rather than values with defined quality metadata).
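To give a flavour of what an XML description of a simple uncertainty summary looks like, here is a minimal sketch that serialises a Gaussian (mean and variance). The element names are hypothetical stand-ins chosen for illustration, not the actual UncertML vocabulary.

```python
import xml.etree.ElementTree as ET

# Illustrative only: element names are hypothetical, not UncertML's.
def gaussian_summary(mean, variance):
    root = ET.Element("GaussianDistribution")
    ET.SubElement(root, "mean").text = str(mean)
    ET.SubElement(root, "variance").text = str(variance)
    return ET.tostring(root, encoding="unicode")

xml_doc = gaussian_summary(12.3, 4.0)
# → '<GaussianDistribution><mean>12.3</mean><variance>4.0</variance></GaussianDistribution>'
```

The interoperability point of the abstract is that such a description, once standardised, can be attached to any value flowing through a GeoWeb processing chain.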
Abstract:
Peer reviewed
Abstract:
This study used both content and frame analyses to test news-media representations of homelessness in The Courier-Mail newspaper for evidence of restricted journalism practice. Specifically, it sought signs both of direct manipulation of issue representation on ideological grounds and of news organisations prioritising low-cost news production over Public Sphere journalistic news values. The study found that news stories from the earlier parts of the longitudinal study showed stereotypical misrepresentations of homelessness for public deliberation, which might be attributed to either or both of the nominated restricting factors. However, news stories from the latter part of the study saw a distinct change in the way the issue was represented, indicating a journalistic capacity to thoughtfully and sensitively represent a complex social issue to the public. Further study is recommended to ascertain how and why this change occurred, so that journalistic practice might be further improved.
Abstract:
Humans develop rich mental representations that guide their behavior in a variety of everyday tasks. However, it is unknown whether these representations, often formalized as priors in Bayesian inference, are specific for each task or subserve multiple tasks. Current approaches cannot distinguish between these two possibilities because they cannot extract comparable representations across different tasks [1-10]. Here, we develop a novel method, termed cognitive tomography, that can extract complex, multidimensional priors across tasks. We apply this method to human judgments in two qualitatively different tasks, "familiarity" and "odd one out," involving an ecologically relevant set of stimuli, human faces. We show that priors over faces are structurally complex and vary dramatically across subjects, but are invariant across the tasks within each subject. The priors we extract from each task allow us to predict with high precision the behavior of subjects for novel stimuli both in the same task as well as in the other task. Our results provide the first evidence for a single high-dimensional structured representation of a naturalistic stimulus set that guides behavior in multiple tasks. Moreover, the representations estimated by cognitive tomography can provide independent, behavior-based regressors for elucidating the neural correlates of complex naturalistic priors. © 2013 The Authors.
Abstract:
The modulation of neural activity in visual cortex is thought to be a key mechanism of visual attention. The investigation of attentional modulation in high-level visual areas, however, is hampered by the lack of clear tuning or contrast response functions. In the present functional magnetic resonance imaging study we therefore systematically assessed how small voxel-wise biases in object preference across hundreds of voxels in the lateral occipital complex were affected when attention was directed to objects. We found that the strength of attentional modulation depended on a voxel's object preference in the absence of attention, a pattern indicative of an amplificatory mechanism. Our results show that such attentional modulation effectively increased the mutual information between voxel responses and object identity. Further, these local modulatory effects led to improved information-based object readout at the level of multi-voxel activation patterns and to an increased reproducibility of these patterns across repeated presentations. We conclude that attentional modulation enhances object coding in local and distributed object representations of the lateral occipital complex.
Abstract:
Complex networks have been extensively used in the last decade to characterize and analyze complex systems, and they have recently been proposed as a novel instrument for the analysis of spectra extracted from biological samples. Yet the high number of measurements composing a spectrum, and the consequent high computational cost, make a direct network analysis unfeasible. We here present a comparative analysis of three customary feature selection algorithms, including the binning of spectral data and the use of information theory metrics. The algorithms are compared by assessing the score obtained in a classification task in which healthy subjects and people suffering from different types of cancers should be discriminated. Results indicate that a feature selection strategy based on Mutual Information outperforms the more classical data binning, while allowing the dimensionality of the data set to be reduced by two orders of magnitude.
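Mutual-Information-based feature selection of the kind described can be sketched as follows; the toy data and function names are illustrative only. Each (already discretized) feature is scored by its mutual information with the class label, and the top-k features are kept.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    # I(X;Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) ), natural log.
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def select_features(samples, labels, k):
    # Rank discretized spectral features by MI with the class label.
    n_features = len(samples[0])
    scores = [(mutual_information([s[j] for s in samples], labels), j)
              for j in range(n_features)]
    return [j for _, j in sorted(scores, reverse=True)[:k]]

# Toy data: feature 0 tracks the label, feature 1 is constant noise.
samples = [(0, 7), (0, 7), (1, 7), (1, 7), (0, 7), (1, 7)]
labels = [0, 0, 1, 1, 0, 1]
best = select_features(samples, labels, 1)  # → [0]
```

On real spectra the same ranking step is what cuts thousands of raw measurements down to a network-sized feature set.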
Abstract:
This paper demonstrates that in order to understand and design for interactions in complex work environments, a variety of representational artefacts must be developed and employed. A study was undertaken to explore the design of better interaction technologies to support patient record keeping in a dental surgery. The domain chosen is a challenging real context that exhibits problems that could potentially be solved by ubiquitous computing and multi-modal interaction technologies. Both transient and durable representations were used to develop design understandings. We describe the representations, the kinds of insights developed from the representations and the way that the multiple representations interact and carry forward in the design process.
Abstract:
RatSLAM is a system for vision-based Simultaneous Localisation and Mapping (SLAM) inspired by models of the rodent hippocampus. The system can produce stable representations of large complex environments during robot experiments in both indoor and outdoor environments. These representations are both topological and metric in nature, and can involve multiple representations of the same place as well as discontinuities. In this paper we describe a new technique known as experience mapping that can be used online with the RatSLAM system to produce world representations known as experience maps. These maps group together multiple place representations and are spatially continuous. A number of experiments have been conducted in simulation and a real world office environment. These experiments demonstrate the high degree to which experience maps are representative of the spatial arrangement of the environment.
Abstract:
Gabor representations have been widely used in facial analysis (face recognition, face detection and facial expression detection) due to their biological relevance and computational properties. Two popular Gabor representations used in the literature are: 1) Log-Gabor and 2) Gabor energy filters. Even though these representations are somewhat similar, they also have distinct differences: the Log-Gabor filters mimic the simple cells in the visual cortex, while the Gabor energy filters emulate the complex cells, which causes subtle differences in the responses. In this paper, we analyze the difference between these two Gabor representations and quantify these differences on the task of facial action unit (AU) detection. In our experiments conducted on the Cohn-Kanade dataset, we report an average area under the ROC curve (A′) of 92.60% across 17 AUs for the Gabor energy filters, while the Log-Gabor representation achieved an average A′ of 96.11%. This result suggests that the small spatial differences that the Log-Gabor filters pick up on are more useful for AU detection than the differences in contours and edges that the Gabor energy filters extract.
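The radial frequency profile of a Log-Gabor filter is commonly written as G(f) = exp(-(ln(f/f0))^2 / (2 (ln(sigma/f0))^2)), which, unlike a linear-frequency Gabor, has no DC component. A minimal sketch of this profile (parameter names are illustrative; a full 2-D filter would add an angular component):

```python
import math

def log_gabor_radial(f, f0, sigma_ratio):
    # Radial frequency response of a Log-Gabor filter:
    # G(f) = exp( -(ln(f/f0))^2 / (2 (ln(sigma_ratio))^2) ), with G(0) = 0.
    # sigma_ratio = sigma / f0 controls the filter bandwidth.
    if f <= 0:
        return 0.0  # no DC component, unlike a linear-frequency Gabor
    return math.exp(-(math.log(f / f0)) ** 2
                    / (2.0 * (math.log(sigma_ratio)) ** 2))

# Peak response sits exactly at the centre frequency f0:
peak = log_gabor_radial(0.25, f0=0.25, sigma_ratio=0.65)  # → 1.0
dc = log_gabor_radial(0.0, f0=0.25, sigma_ratio=0.65)     # → 0.0
```

The Gabor energy representation, by contrast, combines the squared responses of an even and an odd linear-frequency Gabor pair, which is what the abstract links to complex-cell behaviour.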