997 results for OBJECT DEFINITION


Relevance:

60.00%

Publisher:

Abstract:

The aim of this thesis is to take a critical look at the exhibition of Kanak intangible cultural heritage recently presented at the musée du quai Branly. Seeking to correct the elitism of Eurocentrism, postcolonialism is a school of thought that aims to reposition marginalized actors and issues. Interdisciplinary by nature, postcolonial discourse draws on a plurality of perspectives in order to include the voice of the "others". In our study, we choose to address this subject within a specific framework. Our approach focuses on the interpretation of the "other" by the musée du quai Branly, so as to understand how the particularity of Kanak cultural heritage is exhibited there today. This thesis proposes to revisit the creation of the quai Branly in light of some recurring problems concerning the perception and treatment of the non-Western object, a retrospective that seems necessary for assessing the exhibition practices applied to intangible cultural heritage in the exhibition "Kanak. L'art est une parole". To this end, the issues raised are approached from the perspectives of art history, anthropology, and museology. Through the museography it puts in place, the museum informs the public of its stance on the didactic discourse it wishes to convey. The choices made in staging and contextualizing an exhibition express the definition the museum gives to the objects it holds. The musée du quai Branly is a particular case. Promoted to the rank of museum of "art premier" at its opening, it exhibits ethnographic objects from Africa, Oceania, the Americas, and Asia for their aesthetic qualities. Far removed from our Western precepts, the museum's difficulty lies in accounting for the history of these objects.
The example of Kanak artistic expression raises the heart of the problem, insofar as this expression takes shape through oral resources.

Relevance:

60.00%

Publisher:

Abstract:

This work presents an extension of the ODMG standard to support object versioning and temporal features. The extension, named TV_ODMG, is based on the Temporal Versions Model (TVM), an object-oriented data model developed to store the versions of an object and, for each version, the history of its attribute values and dynamic relationships. TVM differs from other temporal data models in that it presents two different orders of time: branched for the object and linear for each version. During modeling, the user can also specify normal classes (without time or versions), which allows the model to be integrated with existing designs. In this work, the following components of the ODMG standard architecture were extended: the Object Model, the ODL (Object Definition Language), and the OQL (Object Query Language). In addition, a set of rules was developed for mapping TV_ODMG to ODMG, so that any ODBMS can be used to support the proposed extension.
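TVM's two time orders can be illustrated with a minimal sketch (in Python rather than ODL, with invented class names; the actual TV_ODMG syntax is not reproduced here): the versions of an object form a tree (branched time), while each version keeps a linear history of its attribute values.

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class Version:
    """One version of an object; attribute values form a linear history."""
    parent: Optional["Version"] = None           # branching (tree) order across versions
    history: dict = field(default_factory=dict)  # attr -> [(timestamp, value), ...]

    def set(self, attr: str, t: int, value: Any) -> None:
        self.history.setdefault(attr, []).append((t, value))

    def get(self, attr: str, t: int) -> Any:
        """Value of `attr` at time t (latest entry with timestamp <= t)."""
        entries = [v for ts, v in self.history.get(attr, []) if ts <= t]
        return entries[-1] if entries else None

class VersionedObject:
    """Object whose versions are tree-ordered: deriving from a parent creates a branch."""
    def __init__(self):
        self.versions = []

    def derive(self, parent: Optional[Version] = None) -> Version:
        v = Version(parent=parent)
        self.versions.append(v)
        return v
```

A normal (non-temporal, non-versioned) class would simply bypass this machinery, mirroring TVM's integration with conventional modeling.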

Relevance:

60.00%

Publisher:

Abstract:

A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC(max). The output of GC(max) coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC(max) is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst-case scenario, the GC(max) algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC(max) runs in linear time with respect to the image size |C|. We show that the output of GC(max) constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms into the realm of graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best-known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q.
Thus, any algorithm GC(sum) solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1, ∞), so just two algorithms, GC(sum) and GC(max), are enough to solve all ‖F_P‖_q minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q minimization problems converge to a solution of the ‖F_P‖_∞ minimization problem (the fact that ‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q alone is not enough to deduce this). An experimental comparison of the performance of the GC(max) and GC(sum) algorithms is included. It concentrates on comparing the actual (as opposed to provable worst-case) running times of the algorithms, as well as the influence of the choice of seeds on the output.
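The quantity underlying IRFC (and hence the output of GC(max)) is a max-min path strength: the strength of a path is its weakest affinity, and connectedness is the strongest path. A minimal sketch follows; this is a straightforward Dijkstra-style computation of that strength on an adjacency-list graph with affinities in [0, 1], not the authors' linear-time GC(max) algorithm.

```python
import heapq

def connectedness(graph, seed):
    """Max-min path strength ("fuzzy connectedness") from seed to every reachable node.
    graph: {node: [(neighbor, affinity), ...]} with affinities in [0, 1]."""
    strength = {seed: 1.0}
    heap = [(-1.0, seed)]                 # max-heap via negated strengths
    while heap:
        s, u = heapq.heappop(heap)
        s = -s
        if s < strength.get(u, 0.0):      # stale heap entry
            continue
        for v, w in graph.get(u, []):
            cand = min(s, w)              # path strength = weakest link on the path
            if cand > strength.get(v, 0.0):
                strength[v] = cand
                heapq.heappush(heap, (-cand, v))
    return strength
```

Thresholding these strengths against those of competing (background) seeds yields the relative-connectedness object that IRFC iterates on.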

Relevance:

30.00%

Publisher:

Abstract:

Sleep-related (SR) crashes are an endemic problem the world over. However, police officers report difficulties in identifying sleepiness as a crash contributing factor. One approach to improving the sensitivity of SR crash identification is to apply a proxy definition post hoc to crash reports. To identify the prominent characteristics of SR crashes and highlight the influence of proxy definitions, ten years of Queensland (Australia) police reports of crashes occurring in ≥100 km/h speed zones were analysed. In Queensland, two approaches are routinely taken to identifying SR crashes. First, attending police officers identify crash causal factors; one possible option is 'fatigue/fell asleep'. Second, a proxy definition is applied to all crash reports. Those meeting the definition are considered SR and added to the police-reported SR crashes. Of the 65,204 vehicle operators involved in crashes, 3449 were police-reported as SR. Analyses of these data found that male drivers aged 16–24 years within the first two years of unsupervised driving were most likely to have a SR crash. Collision with a stationary object was more likely in SR than in not-SR crashes. Using the proxy definition, 9739 (14.9%) crashes were classified as SR. Using the proxy definition removes the findings that SR crashes are more likely to involve males and to be of high severity. Additionally, proxy-defined SR crashes are no less likely at intersections than not-SR crashes. When interpreting crash data it is important to understand the implications of SR identification, because strategies aimed at reducing the road toll are informed by such data. Without the correct interpretation, funding could be misdirected. Improving the identification of sleepiness, in both police reporting and proxy definitions, should be a priority.
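Applying a proxy definition post hoc amounts to running a fixed rule over every crash record. A minimal sketch is below; the criteria shown are invented placeholders for illustration only, not Queensland's actual proxy definition, which the abstract does not spell out.

```python
def is_proxy_sleep_related(crash: dict) -> bool:
    """Classify one crash record as sleep-related under a proxy rule.
    The criteria here are hypothetical placeholders, not the real definition."""
    return bool(
        crash["single_vehicle"]
        and crash["speed_zone_kmh"] >= 100
        and not crash["at_intersection"]
        and (crash["hour"] >= 22 or crash["hour"] < 6)  # late-night window
    )
```

Running such a rule over all 65,204 records, and comparing its positives with the officer-reported SR flag, is exactly the kind of analysis that exposes the discrepancies the abstract describes.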

Relevance:

30.00%

Publisher:

Abstract:

In machine learning, classification is the process of assigning a new observation to a particular category. Classifiers implementing classification algorithms have been widely studied over recent decades. Traditional classifiers are based on algorithms such as SVMs and neural networks, and are generally executed in software on CPUs, which leaves the system with poor performance and high energy consumption. Although GPUs can be used to accelerate the computation of some classifiers, their high power consumption prevents the technology from being deployed on portable devices such as embedded systems. To make the classification system lighter, classifiers should be able to run on more compact hardware instead of a group of CPUs or GPUs, and the classifiers themselves should be optimized for that hardware. In this thesis, we explore the implementation of a novel classifier on an FPGA-based hardware platform. The classifier, designed by Alain Tapp (Université de Montréal), is based on a large number of lookup tables forming tree-shaped circuits that carry out the classification tasks. With its rich lookup-table resources and highly parallel architecture, the FPGA seems tailor-made for implementing this classifier. Our work shows that FPGAs can implement several such classifiers and classify high-definition images at very high speed.
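The general idea of a tree of lookup tables can be sketched in software: each LUT maps a few input bits to one output bit, and the outputs of a layer of LUTs feed the next LUT up the tree. This is a hypothetical software model for illustration, not Tapp's classifier or its FPGA mapping.

```python
def lut_node(table, bits):
    """Evaluate one lookup table: the bit tuple indexes into `table` (MSB first)."""
    idx = 0
    for b in bits:
        idx = (idx << 1) | b
    return table[idx]

def lut_tree_classify(x, leaf_tables, root_table):
    """Two-level LUT tree: each leaf LUT consumes 2 input bits; the root LUT
    consumes the leaf outputs and emits the class bit."""
    mid = tuple(lut_node(t, x[i * 2:(i + 1) * 2]) for i, t in enumerate(leaf_tables))
    return lut_node(root_table, mid)
```

On an FPGA, each `lut_node` maps directly onto a hardware LUT and all nodes in a layer evaluate in parallel, which is why the architecture suits this classifier.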

Relevance:

30.00%

Publisher:

Abstract:

A novel mathematical framework inspired by Morse theory for the topological characterization of triangles in 2D meshes is introduced, useful for applications involving the creation of mesh models of objects whose geometry is not known a priori. The framework guarantees precise control of the topological changes introduced by triangle insertion/removal operations and enables the definition of intuitive high-level operators for managing the mesh while keeping its topological integrity. An application is described in the implementation of an innovative approach for the detection of 2D objects in images that integrates the topological control enabled by geometric modeling with traditional image processing techniques. (C) 2008 Published by Elsevier B.V.
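One common invariant for monitoring topological integrity under triangle insertion/removal is the Euler characteristic χ = V − E + F. The sketch below is a generic illustration of that idea, not the paper's Morse-theoretic operators.

```python
def euler_characteristic(triangles):
    """χ = V - E + F for a triangle mesh; each triangle is a 3-tuple of vertex ids."""
    vertices = set()
    edges = set()
    for a, b, c in triangles:
        vertices.update((a, b, c))
        for e in ((a, b), (b, c), (a, c)):
            edges.add(tuple(sorted(e)))   # undirected edge, counted once
    return len(vertices) - len(edges) + len(triangles)
```

An editing operator can recompute χ after each insertion/removal and reject (or explicitly record) operations that change it unexpectedly.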

Relevance:

30.00%

Publisher:

Abstract:

Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as definition of a distributed network of multisensory candidate regions including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and using independent datasets to test hypotheses generated from a data-driven analysis.
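The max-criterion (A < AV > V) used to test the candidate regions is a simple comparison of response estimates, sketched below with invented region names and values for illustration.

```python
def meets_max_criterion(beta_a: float, beta_v: float, beta_av: float) -> bool:
    """Max-criterion for multisensory integration: the audio-visual response
    must exceed both unisensory responses (A < AV > V)."""
    return beta_av > beta_a and beta_av > beta_v

def integrating_regions(responses: dict) -> list:
    """Filter regions whose (A, V, AV) response triple meets the criterion."""
    return [r for r, (a, v, av) in responses.items() if meets_max_criterion(a, v, av)]
```

Applied to the twelve candidate regions from the ICA overlap, this kind of test selected nine as multisensory in the study's second, independent dataset.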

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a finite-difference time-domain (FDTD) simulator for electromagnetic analysis and design applications in MRI. It is intended to be a complete FDTD model of an MRI system, including all RF and low-frequency field-generating units and electrical models of the patient. The program has been constructed in an object-oriented framework. The design procedure is detailed, and the numerical solver has been verified against analytical solutions for simple cases and also applied to various field calculation problems. In particular, the simulator is demonstrated for inverse RF coil design, optimized source profile generation, and parallel imaging in high-frequency situations. The examples show new developments enabled by the simulator and demonstrate that the proposed FDTD framework can be used to analyze large-scale computational electromagnetic problems in modern MRI engineering. (C) 2004 Elsevier Inc. All rights reserved.
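The paper's full 3D MRI model is beyond the scope of an abstract, but the core FDTD leapfrog update can be sketched in one dimension. This is a free-space toy in normalized units (Courant number 1), an illustration of the method rather than the authors' solver.

```python
import math

def fdtd_1d(steps=200, n=200, src=100):
    """Minimal 1D free-space FDTD sketch (Yee grid, normalized units).
    E and H live on staggered grid points and are updated in leapfrog fashion."""
    ez = [0.0] * n  # electric field at integer grid points
    hy = [0.0] * n  # magnetic field at half-integer grid points
    for t in range(steps):
        for m in range(n - 1):                 # update H from the curl of E
            hy[m] += ez[m + 1] - ez[m]
        for m in range(1, n):                  # update E from the curl of H
            ez[m] += hy[m] - hy[m - 1]
        ez[src] += math.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
    return ez
```

The real simulator extends this pattern to 3D with material parameters per cell, which is what makes object-oriented structuring of coils, gradients, and patient models attractive.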

Relevance:

30.00%

Publisher:

Abstract:

In this paper we describe a novel, extensible visualization system currently under development at Aston University. We introduce modern programming methods, such as the use of data-driven programming, design patterns, and the careful definition of interfaces to allow easy extension using plug-ins, to 3D landscape visualization software. We combine this with modern developments in computer graphics, such as vertex and fragment shaders, to create an extremely flexible, extensible, real-time, near-photorealistic visualization system. In this paper we show the design of the system and the main sub-components. We stress the role of modern programming practices and illustrate the benefits these bring to 3D visualization. © 2006 Springer-Verlag Berlin Heidelberg.
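The plug-in approach described above rests on a small, stable interface plus a registry the host queries at run time. A generic sketch follows (Python rather than the system's implementation language, with invented names), just to illustrate the pattern.

```python
class VisualizationPlugin:
    """Interface every plug-in implements; the host depends only on this contract."""
    name = "base"

    def render(self, scene: str) -> str:
        raise NotImplementedError

PLUGINS = {}

def register(plugin_cls):
    """Class decorator: make a plug-in discoverable by name without touching the host."""
    PLUGINS[plugin_cls.name] = plugin_cls
    return plugin_cls

@register
class TerrainPlugin(VisualizationPlugin):
    """Example plug-in (hypothetical): contributes a terrain layer."""
    name = "terrain"

    def render(self, scene: str) -> str:
        return f"terrain layer over {scene}"
```

New capabilities then ship as registered classes, so the core renderer never needs editing; shader-based effects would plug in the same way.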

Relevance:

30.00%

Publisher:

Abstract:

Traditionally, importance has been measured using subjective measures. The present thesis explores the possibility of a second type of importance, designated as “associative importance”. A new measure, the IIAT, was designed to capture the strength of association between an object and the attribute of importance. This thesis then evaluated the validity of the IIAT via an intervention paradigm in 2 studies, and by using the measure to predict a memory outcome in 2 other studies. Subjective measures of importance were also included in these studies and correlations between subjective measures and IIAT results were examined. Across all 4 studies, subjective-objective correlations were weak to modest and non-significant. The intervention studies provided promising evidence that interventions do affect associative importance as measured by the IIAT. The prediction studies provided somewhat mixed, but encouraging evidence that the IIAT may be able to predict memory performance. Notably, subjective measures were not able to predict memory performance at all, whereas the IIAT was able to predict some memory indices. Overall, there is some evidence supporting the existence of an associative importance construct, and that the IIAT provides valid results that are nonetheless different from that of subjective measures of attitude importance.

Relevance:

20.00%

Publisher:

Abstract:

Metaphor is a multi-stage programming language extension to an imperative, object-oriented language in the style of C# or Java. This paper discusses some issues we faced when applying multi-stage language design concepts to an imperative base language and run-time environment. The issues range from dealing with pervasive references and open code to garbage collection and implementing cross-stage persistence.

Relevance:

20.00%

Publisher:

Abstract:

The next phase envisioned for the World Wide Web is automated ad-hoc interaction between intelligent agents, web services, databases and semantic web enabled applications. Although at present this appears to be a distant objective, there are practical steps that can be taken to advance the vision. We propose an extension to classical conceptual models to allow the definition of application components in terms of public standards and explicit semantics, thus building into web-based applications, the foundation for shared understanding and interoperability. The use of external definitions and the need to store outsourced type information internally, brings to light the issue of object identity in a global environment, where object instances may be identified by multiple externally controlled identification schemes. We illustrate how traditional conceptual models may be augmented to recognise and deal with multiple identities.
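The multiple-identity problem can be made concrete with a small sketch: an object carries identifiers issued by several externally controlled schemes, and two objects are provisionally matched when any shared scheme agrees. All names here are hypothetical illustrations, not the paper's model.

```python
class GlobalObject:
    """An application object carrying identifiers from multiple external schemes."""

    def __init__(self, **ids):
        self.ids = dict(ids)  # scheme name -> externally controlled identifier

    def add_identity(self, scheme: str, value: str) -> None:
        self.ids[scheme] = value

    def same_entity(self, other: "GlobalObject") -> bool:
        """Provisionally the same real-world entity if any shared scheme agrees.
        With no shared scheme, no conclusion can be drawn (returns False)."""
        shared = self.ids.keys() & other.ids.keys()
        return any(self.ids[s] == other.ids[s] for s in shared)
```

A conceptual model augmented this way stores the outsourced type information (which scheme issued which identifier) instead of assuming a single internal object identity.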