905 results for Human-computer interaction -- Design
Abstract:
The aim of this thesis is to utilize the technology developed at LUT and to provide an easy tool for the preliminary design of high-speed solid-rotor induction machines. The computer-aided design tool MathCAD has been chosen as the environment for realizing the calculation program. Four versions of the design program have been made, depending on the rotor type. The first rotor type is an axially slitted solid rotor with steel end rings. The next is an axially slitted solid rotor with copper end rings. The third machine type is a solid rotor with deep, rectangular copper bars and end rings (squirrel cage). The last is a solid rotor with round copper bars and end rings (squirrel cage). Each rotor type has its own particularities, but a general design thread is common to all. This paper follows the structure of the calculation program and explains some of its features and formulas. Attention is concentrated on the differences between laminated and solid-rotor machine design principles. No deep analysis of the calculation methods is presented; references for all solution methods appearing during the design procedure are given for more detailed study. This thesis takes into account the latest innovations in solid-rotor machine theory: the analytical calculation of the rotor ends follows the latest knowledge in this field, and a correction factor for adjusting the rotor impedance is implemented. The purpose of the created design program is to calculate the preliminary dimensions of the machine from the initial data. The obtained results are not intended for exact machine development; further, more detailed design should be carried out in a finite element method application. Hence, this thesis provides a practical tool for the preliminary evaluation of the parameters of high-speed machines with different solid-rotor types.
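As an illustration of the kind of sizing relation such a preliminary design program typically starts from (a textbook relation, not a formula quoted from the thesis), the classical output equation links the apparent power of a rotating machine to its main dimensions:

\[ S \approx C \, D^{2} \, l' \, n_{\mathrm{syn}} \]

where C is the machine constant, D the air-gap diameter, l' the equivalent core length and n_syn the synchronous speed. From a relation of this form the program derives the preliminary dimensions, which are then refined for each solid-rotor type.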
Abstract:
This Master's Thesis is dedicated to the simulation of a new p-type pixel strip detector with an enhanced multiplication effect. It is carried out for upgrades of high-energy physics experiments such as the Super Large Hadron Collider, in particular for the Compact Muon Solenoid particle-tracking silicon detectors. These detectors are used in a very harsh radiation environment and should have good radiation hardness. Device engineering is used to develop more radiation-hard particle detectors and to minimize radiation degradation. A new detector structure with an enhanced multiplication effect is proposed in this work. The electric field and charge distributions of the conventional and the new p-type detector are studied under reverse bias voltage and irradiation. Finally, the dependence of the anode current on the applied cathode reverse bias voltage under irradiation is obtained in this Thesis. The Silvaco Technology Computer Aided Design software was used for the simulations: Athena was used to create the doping profiles and device structures, and Atlas to obtain the electrical characteristics of the studied devices. The program codes for this software are presented in the Appendices.
Abstract:
Peripheral artery disease (PAD) manifests itself as a reduction (stenosis) of the lumen of the arteries of the lower limbs. It is caused by atherosclerosis, an accumulation of foam cells, fat, calcium and cellular debris in the arterial wall, usually at bifurcations and branch points. PAD can also be caused by other associated factors such as inflammation, anatomical malformation and, in rare cases at the level of the iliac and femoral arteries, fibromuscular dysplasia. Ultrasound imaging is the first-line means of diagnosing PAD. The clinical literature reports that, at the femoral artery, Doppler ultrasound shows a sensitivity of 80 to 98% and a specificity of 89 to 99% for detecting stenoses greater than 50%. However, Doppler ultrasound does not allow mapping of all the arteries of the lower limbs. Moreover, 3D reconstruction from 2D ultrasound images of arteries affected by PAD is strongly operator dependent because of the large variability of the measurements made by clinicians during the examination. To plan a surgical intervention, clinicians use computed tomography angiography (CTA), magnetic resonance angiography (MRA) and digital subtraction angiography (DSA). These modalities do indeed perform very well. CTA shows great accuracy in detecting and evaluating stenoses greater than 50%, with a sensitivity of 92 to 97% and a specificity between 93 and 97%. However, it is ionizing (X-rays) and invasive because of the contrast agent, which can cause nephropathy. Contrast-enhanced MRA (CE-MRA) is now the most widely used. It offers a sensitivity of 92 to 99.5% and a specificity between 64 and 99%. However, it underestimates stenoses and can also cause nephropathy in rare cases. In addition, patients with stents or metallic implants, or who are claustrophobic, are excluded from this type of examination. DSA performs very well but is invasive and ionizing. Today, 3D ultrasound imaging (3D US) has become widespread, especially in obstetrics and echocardiography. In angiography, 3D ultrasound imaging makes it possible to compute plaque volume, which allows follow-up of the evolution of atheromatous plaque in the vessels. Intravascular ultrasound (IVUS) is a technique that measures this volume; however, it is invasive, expensive and risky. In vivo studies have shown that 3D US imaging can quantify plaque in the carotid artery and characterize the 3D geometry of anastomoses in peripheral arteries. However, these systems only work over short distances and are therefore not suited to examining the femoral artery, because of its length and tortuous shape. Interest in medical robotics dates back to the 1970s. Since then, several medical robots have been proposed for surgery, therapy and diagnosis. For arterial diagnosis, only two prototypes have been proposed, but neither has been commercialized. Hippocrate is the first master/slave robot designed for examining short artery segments (carotid). It consists of an arm with 6 degrees of freedom (DOF) suspended above the patient on a rigid base.
From this prototype, a controller automating the robot's movements through feedback from the ultrasound images was designed and tested on phantoms. The second is the British Columbia robot, designed for remote examinations of the carotid artery. The motion of the probe is servo-controlled by feedback from the US images. The work published with these two robots is limited to the carotid artery. In order to examine a long artery segment, a robotic US system was designed in our laboratory. The system has two operating modes: the teach/replay mode (see Appendix 3) and the user free-command mode. In the latter mode, the user can implement custom programs, such as those used in this project, to control the robot's movements. The goal of this project is to demonstrate the performance of this robotic system, using the user free-command mode, under conditions close to the clinical context. Two objectives were pursued: (1) to evaluate in vitro the automatic tracking and real-time 3D reconstruction of an artery using three phantoms with realistic geometries; (2) to evaluate in vivo the capacity of this robotic imaging system for real-time 3D mapping of a normal femoral artery. For the first objective, the 3D US reconstruction was compared with the computer-aided design (CAD) files of the phantoms. In addition, for the third phantom, the 3D US reconstruction was compared with its CTA reconstruction, considered the reference examination for evaluating PAD. This thesis consists of five chapters. The first chapter explains PAD; the second and third chapters develop 3D ultrasound imaging and medical robotics. The fourth chapter presents an article entitled "A robotic ultrasound scanner for automatic vessel tracking and three-dimensional reconstruction of B-mode images", which summarizes the results obtained in this Master's project. A general discussion concludes the thesis. The article entitled "A 3D ultrasound imaging robotic system to detect and quantify lower limb arterial stenoses: in vivo feasibility" by Marie-Ange Janvier et al., in Appendix 3, will also allow the reader to better understand our robotic system. My contribution to this article was the acquisition of the B-mode images, the 3D reconstruction and the analysis of the results for the healthy subject.
Abstract:
Historically, wild animals have always represented a resource for humans, ensuring the food security of local and traditional societies. The tourist exploitation of wildlife therefore implies an evolution in local lifestyles, culture and identities. The objective of this doctoral research is to analyse wildlife-based recreational tourism. Recreational tourism activities centred on wild fauna reflect a requalification of the wildlife resource, which has impacts on human and non-human spaces, on the interplay of territorial construction, and on the relationships developed with wild fauna. This work analyses the relationships that societies maintain with wild fauna through the recreational tourism activities of hunting and wildlife viewing. These two forms of tourism are generally opposed, since wildlife-viewing tourism is presented as a non-consumptive use of the resource whereas hunting tourism is recognized as a consumptive use. Going beyond certain preconceived ideas about hunting practices and a Manichean opposition between these activities, the distinctions and/or the dialogical relationship between these practices should be examined. To conduct this research, a comparative analysis was chosen, putting into perspective different case studies in France and Canada. This comparative work allows a better understanding of the tourism and territorial issues associated with wildlife management, and a reflection on the transferability of the observed processes between the different study areas. From a methodological point of view, this doctoral work led us to define an analytical framework organized around four entries combining (i) conceptual aspects, (ii) archival analysis, (iii) observation methods and (iv) tools for analysing human/wildlife relationships through discourse analysis of tourist populations. The first part of this work presents the theoretical context of the study and the systemic approach of this research (chapters 1, 2 and 3). In terms of results, these methodological and theoretical premises allowed us to analyse how the dynamics of wildlife-based recreational tourism act, react and feed back on the territorial system as a whole. The second part thus examines the socio-spatial organization of hunting and wildlife-viewing recreational tourism activities (chapters 4 and 5). These different forms of tourism are analysed by taking into account the establishment of these activities within the territories, visitors' expectations, and the effects of the different practices on wildlife populations. The third and final part focuses on the evolution of human/wildlife relationships in time and space with regard to the recreational tourism activities developed. Chapter 6 addresses the dialectical relationships between heritage-making processes and the accepted or contested uses of the wildlife resource, while chapter 7 proposes a reflection on human/animal relationships at the level of the individual, questioning each person's ethics in the uses, behaviours and practices they develop around wild fauna.
Abstract:
This research takes a critical look at sound spatialization interfaces and places spatial music composition, a field of study in music, at the forefront of a design research project. It details a research approach centred on the spatial music composition process and the mental models of electroacoustic composers, in order to deliver design recommendations for the development of a musical spatialization interface named Centor. This research shows that a design process conducted at the intersection of interface design, interaction design and music theory can lead to a relevant and innovative proposal for each of these fields of study. We present the research and development of the concept of additive spatialization, a pattern-based sound spatialization method that applies Denis Smalley's spectromorphological vocabulary. It is a concept for a studio spatialization tool that complements current composition interfaces and opens a new field of possibilities for spatial exploration in electroacoustic music. The research approach presented here is intended as a contribution to the field of musical interface design, specifically spatialization interfaces, but it also proposes a design process for the creation of digital interfaces for artistic expression.
Abstract:
This proposed thesis is entitled "Plasma Polymerised Organic Thin Films: A Study on the Structural, Electrical, and Nonlinear Optical Properties for Possible Applications". Polymers and polymer-based materials find enormous application in the realm of electronics and optoelectronics. They are employed as both active and passive components in making various devices. Intense research activity has been going on in this area for the last three decades or so, and many useful contributions have been made quite accidentally. Conducting polymers are one such discovery, and ever since the discovery of conducting polyacetylene, a new branch of science has emerged in the form of synthetic metals. Conducting polymers are useful materials for many applications such as polymer displays, high-density data storage, polymer FETs, polymer LEDs, photovoltaic devices and electrochemical cells. With the emergence of molecular electronics and its potential for useful applications, organic thin films are receiving unusual attention from scientists and engineers alike. This is evident from the vast literature pertaining to this field appearing in various journals. Recently, computer-aided design of organic molecules has added further impetus to the ongoing research activities in this area. Polymers, especially conducting polymers, can be prepared both in bulk and in thin-film form. However, many applications necessitate that they are grown in thin-film form, either free-standing or on appropriate substrates. As far as their bulk counterparts are concerned, they can be prepared by various polymerisation techniques such as chemical routes and electrochemical means. A survey of the literature reveals that polymers like polyaniline, polypyrrole and polythiophene have been investigated with a view to studying their structural, electrical and optical properties. Among the various alternative techniques employed for the preparation of polymer thin films, the method of plasma polymerisation deserves special attention in this context. Plasma polymerisation is an inexpensive method and often requires very little infrastructure. The method can employ ac, rf, dc, microwave and pulsed sources, which produce pinhole-free homogeneous films on appropriate substrates under controlled conditions. In a conventional plasma polymerisation set-up, the monomer is fed into an evacuated chamber and an ac/rf/dc/microwave/pulsed discharge is created, which dissociates the monomer species and leads to the formation of polymer thin films. However, it has been found that the structure, and hence the properties, exhibited by plasma polymerised thin films are quite different from those of their counterparts produced by other thin-film preparation techniques such as electrochemical deposition or spin coating. The properties of these thin films can be tuned only if the interrelationship between the structure and the other properties is understood from a fundamental point of view. So, very often, a thorough evaluation of the various properties is a prerequisite for tailoring the properties of the thin films for applications. It has been found that conjugation is a necessary condition for enhancing the conductivity of polymer thin films. The rf plasma polymerisation technique is an excellent tool to induce conjugation, and this modifies the electrical properties too. Both oxidative and reductive doping can be employed to modify the electrical properties of the polymer thin films for various applications.
This is where polymer-based organic thin films score over inorganic thin films: large-area devices can be fabricated with organic semiconductors, which is difficult to achieve with inorganic materials. For such applications, a variety of polymers have been synthesised, such as polyaniline, polythiophene and polypyrrole, and newer polymers are added to this family every now and then. There are many virgin areas into which plasma polymers are yet to make a foray, namely low-k dielectrics or potential nonlinear optical materials such as optical limiters. There are also many materials that have not yet been prepared by the method of plasma polymerisation; among those not yet dealt with are phenyl hydrazine and tea tree oil. The advantage of employing organic extracts like tea tree oil monomers as precursors for making plasma polymers is that value can be added to their existing uses, and the possibility exists of converting them into electronic-grade materials, especially semiconductors and optically active materials for photonic applications. One of the major motivations of this study is to synthesise plasma polymer thin films based on aniline, phenyl hydrazine, pyrrole, tea tree oil and eucalyptus oil by employing both rf and ac plasma polymerisation techniques. This will be carried out with the objective of growing thin films on various substrates such as glass, quartz and indium tin oxide (ITO) coated glass. Various properties, namely the structural, electrical, dielectric and nonlinear optical properties, are to be evaluated to establish their relationship with the structure. Special emphasis will be laid on evaluating optical parameters such as the refractive index (n), the extinction coefficient (k), the real and imaginary components of the dielectric constant, and the optical transition energies of the polymer thin films from spectroscopic ellipsometric studies. Apart from evaluating these physical constants, ellipsometric investigations also make it possible to predict whether a material exhibits nonlinear optical properties. Further studies using the open-aperture z-scan technique to evaluate the nonlinear optical properties of a few selected samples, which are potential nonlinear optical materials, are therefore another objective of the present study. It will be a further endeavour to offer an appropriate explanation for the nonlinear optical properties displayed by these films. Doping of plasma polymers is found to modify both the electrical conductivity and the optical properties. Iodine in particular is found to modify the properties of the polymer thin films. However, in situ iodine doping is tricky, and the film often loses its stability because of the escape of iodine. An appropriate in situ doping technique will be developed to dope iodine into the plasma polymerised thin films. Doping of polymer thin films with iodine results in improved and modified optical and electrical properties; however, tools like FTIR and UV-Vis-NIR spectroscopy are required to elucidate the structural and optical modifications imparted to the polymer films. This will be attempted here to establish the role of iodine in the modification of the properties exhibited by the films.
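For reference, the real and imaginary parts of the dielectric constant mentioned above follow directly from the ellipsometrically determined refractive index n and extinction coefficient k through the standard textbook relation (not a result of this thesis):

\[ \tilde{\varepsilon} = (n + ik)^{2} \;\Rightarrow\; \varepsilon_{1} = n^{2} - k^{2}, \qquad \varepsilon_{2} = 2nk \]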
Abstract:
The rapid growth in high data rate communication systems has introduced new, highly spectrally efficient modulation techniques and standards such as LTE-A (Long Term Evolution-Advanced) for 4G (4th generation) systems. These techniques provide broader bandwidth but introduce a high peak-to-average power ratio (PAR) problem at the high power amplifier (HPA) of the communication system base transceiver station (BTS). To avoid spectral spreading due to high PAR, a stringent linearity requirement is needed, which forces the HPA to operate at a large power back-off at the expense of power efficiency. Consequently, high power devices are fundamental in HPAs for high linearity and efficiency. Recent developments in wide-bandgap power devices, in particular the AlGaN/GaN HEMT, have offered higher power levels with a superior linearity-efficiency trade-off in microwave communications. For a cost-effective HPA design-to-production cycle, rigorous computer-aided design (CAD) models of the AlGaN/GaN HEMT are essential to reflect the real response with increasing power level and channel temperature. Therefore, a large-signal electrothermal modeling procedure for large-size AlGaN/GaN HEMTs is proposed. The HEMT structure analysis, characterization, data processing, model extraction and model implementation phases are covered in this thesis, including the trapping and self-heating dispersion that accounts for nonlinear drain current collapse. The small-signal model is extracted using the 22-element modeling procedure developed in our department. The intrinsic large-signal model is investigated in depth in conjunction with linearity prediction. The accuracy of the nonlinear drain current has been enhanced through several steps such as trapping and self-heating characterization. Also, the thermal profile of the HEMT structure has been investigated, and the corresponding thermal resistance has been extracted through thermal simulation and chuck-controlled temperature pulsed I(V) and static DC measurements. A higher-order equivalent thermal model is extracted and implemented in the HEMT large-signal model to accurately estimate the instantaneous channel temperature. Moreover, trapping and self-heating transients have been characterized through transient measurements. The obtained time constants are represented by equivalent sub-circuits and integrated into the nonlinear drain current implementation to account for the dynamic prediction of complex communication signals. Verification of this table-based large-size large-signal electrothermal model has demonstrated high accuracy in terms of output power, gain, efficiency and nonlinearity prediction with respect to standard large-signal test signals.
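As a first-order illustration of the electrothermal behaviour described above (a single-pole sketch, whereas the thesis extracts a higher-order equivalent thermal model), the instantaneous channel temperature in response to a step of dissipated power P_diss can be written as

\[ T_{ch}(t) = T_{base} + R_{th}\,P_{diss}\left(1 - e^{-t/\tau}\right), \qquad \tau = R_{th} C_{th}, \]

where R_th and C_th are the equivalent thermal resistance and capacitance. A higher-order model typically cascades several such RC cells to capture the different thermal time constants of the layers of the structure.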
Abstract:
In Mark Weiser's vision of ubiquitous computing, computers disappear from the focus of the users and interact seamlessly with other computers and users in order to provide information and services. This shift away from direct computer interaction requires another way for applications to interact without bothering the user. Context is information that can be used to characterize the situation of persons, locations, or other objects relevant to the applications. Context-aware applications are capable of monitoring and exploiting knowledge about external operating conditions. These applications can adapt their behaviour based on the retrieved information and thus replace, at least to a certain extent, the missing user interactions. Context awareness can therefore be assumed to be an important ingredient for applications in ubiquitous computing environments. However, context management in ubiquitous computing environments must reflect the specific characteristics of these environments, for example distribution, mobility, resource-constrained devices, and heterogeneity of context sources. Modern mobile devices are equipped with fast processors, sufficient memory, and several sensors, such as a Global Positioning System (GPS) sensor, a light sensor, or an accelerometer. Since many applications in ubiquitous computing environments can exploit context information to enhance their service to the user, these devices are highly useful for context-aware applications. Additionally, context reasoners and external context providers can be incorporated. It is possible that several context sensors, reasoners and context providers offer the same type of information. However, these information providers can differ in quality level (e.g. accuracy), in the representation of the offered information (e.g. a position represented in coordinates or as an address), and in the cost (such as battery consumption) of providing the information. In order to simplify the development of context-aware applications, developers should be able to access context information transparently, without bothering with the underlying context accessing techniques and distribution aspects. They should rather be able to express which kind of information they require, which quality criteria this information should fulfil, and how much the provision of this information may cost (not only monetary cost but also energy or performance usage). For this purpose, application developers as well as developers of context providers need a common language and vocabulary to specify which information they require or provide, respectively. These descriptions and criteria have to be matched. For such a matching, it is likely that a transformation of the provided information is needed to fulfil the criteria of the context-aware application. As it is possible that more than one provider fulfils the criteria, a selection process is required. In this process the system has to trade off the quality of context and the cost of the context provider against the quality of context requested by the context consumer. This selection allows context sources to be turned on only when required. Explicitly selecting context services, and thereby dynamically activating and deactivating the local context providers, has the advantage that resource consumption is also reduced, since unused context sensors in particular are deactivated.
One promising solution is a middleware providing appropriate support based on the principles of service-oriented computing, such as loose coupling, abstraction, reusability, and discoverability of context providers. This allows us to abstract context sensors, context reasoners and also external context providers as context services. In this thesis we present our solution, consisting of a context model and ontology, a context offer and query language, a comprehensive matching and mediation process, and a selection service. Especially the matching and mediation process and the selection service differ from existing work. The matching and mediation process allows the autonomous establishment of mediation processes in order to transfer information from an offered representation into a requested representation. In contrast to other approaches, the selection service does not select only one service per service request; rather, it selects a set of services in order to fulfil all requests, which also facilitates the sharing of services. The approach is extensively reviewed with regard to the different requirements, and a set of demonstrators shows its usability in real-world scenarios.
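A minimal sketch of the kind of offer/query matching and cost-quality selection described above (illustrative only: the class names, attributes and the greedy strategy are assumptions, not the thesis's actual offer and query language):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass(frozen=True)
class ContextOffer:
    info_type: str       # e.g. "position"
    representation: str  # e.g. "wgs84" or "address"
    accuracy: float      # offered quality (smaller is better, e.g. metres)
    cost: float          # abstract provision cost (energy, latency, money)

@dataclass(frozen=True)
class ContextQuery:
    info_type: str
    representation: str
    max_accuracy: float  # worst acceptable quality
    max_cost: float      # highest acceptable cost

def matches(offer: ContextOffer, query: ContextQuery,
            can_mediate: Callable[[str, str], bool]) -> bool:
    """An offer matches a query if it provides the requested information type
    within the quality and cost bounds, either in the requested representation
    or in one that a mediator can transform into it."""
    return (offer.info_type == query.info_type
            and offer.accuracy <= query.max_accuracy
            and offer.cost <= query.max_cost
            and (offer.representation == query.representation
                 or can_mediate(offer.representation, query.representation)))

def select(offers: List[ContextOffer], queries: List[ContextQuery],
           can_mediate: Callable[[str, str], bool]) -> Dict[ContextQuery, ContextOffer]:
    """Greedy selection: for each query pick the cheapest matching offer,
    preferring providers that are already active so services are shared."""
    active: set = set()
    assignment: Dict[ContextQuery, ContextOffer] = {}
    for query in queries:
        candidates = [o for o in offers if matches(o, query, can_mediate)]
        if not candidates:
            continue  # no provider can serve this query
        # prefer already-activated providers, then the lowest cost
        candidates.sort(key=lambda o: (o not in active, o.cost))
        chosen = candidates[0]
        active.add(chosen)
        assignment[query] = chosen
    return assignment

# Tiny usage example with hypothetical providers
gps = ContextOffer("position", "wgs84", accuracy=5.0, cost=3.0)
cell = ContextOffer("position", "wgs84", accuracy=50.0, cost=1.0)
query = ContextQuery("position", "address", max_accuracy=10.0, max_cost=5.0)
print(select([gps, cell], [query], lambda a, b: (a, b) == ("wgs84", "address")))
```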
Abstract:
The Scheme86 and the HP Precision architectures represent different trends in computer processor design. The former uses wide micro-instructions, parallel hardware, and a low-latency memory interface. The latter encourages pipelined implementation and visible interlocks. To compare the merits of these approaches, algorithms frequently encountered in numerical and symbolic computation were hand-coded for each architecture. Timings were done in simulators and the results were evaluated to determine the speed of each design. Based on these measurements, conclusions were drawn as to which aspects of each architecture are suitable for a high-performance computer.
Abstract:
A pioneer team of students at the University of Girona decided to design and develop an autonomous underwater vehicle (AUV) called ICTINEU-AUV to face the Student Autonomous Underwater Challenge-Europe (SAUC-E). The prototype evolved from the initial computer-aided design (CAD) model to an operative AUV in the short period of seven months. Open-frame and modular design principles, together with compatibility with other robots previously developed at the lab, provided the main design philosophy. At the robot's core, two networked computers give access to a wide set of sensors and actuators. The Gentoo/Linux distribution was chosen as the onboard operating system. A software architecture based on a set of distributed objects with soft real-time capabilities was developed, and a hybrid control architecture including mission control, a behavioural layer and a robust map-based localization algorithm made ICTINEU-AUV the winning entry.
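A minimal sketch of how a behavioural layer in such a hybrid control architecture can arbitrate between competing behaviours (illustrative only: the behaviour names, thresholds and the winner-takes-all scheme are assumptions, not details of the ICTINEU-AUV software):

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

# A command is a (surge_speed, yaw_rate) pair requested by a behaviour.
Command = Tuple[float, float]

@dataclass
class Behaviour:
    name: str
    priority: int
    act: Callable[[dict], Optional[Command]]  # None means "not active"

def arbitrate(behaviours: List[Behaviour], perception: dict) -> Command:
    """Winner-takes-all arbitration: the highest-priority behaviour that
    produces a command drives the vehicle; otherwise hold position."""
    for b in sorted(behaviours, key=lambda b: b.priority, reverse=True):
        command = b.act(perception)
        if command is not None:
            return command
    return (0.0, 0.0)

# Hypothetical behaviours: obstacle avoidance overrides waypoint tracking.
avoid = Behaviour(
    "avoid_obstacle", priority=2,
    act=lambda p: (-0.2, 0.8) if p.get("sonar_range_m", 1e9) < 1.5 else None)
track = Behaviour(
    "track_waypoint", priority=1,
    act=lambda p: (0.4, 0.1 * p.get("heading_error_rad", 0.0)))

print(arbitrate([avoid, track], {"sonar_range_m": 0.9, "heading_error_rad": -0.3}))
# -> (-0.2, 0.8): the avoidance behaviour takes precedence
```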
Abstract:
Abstract taken from the author
Abstract:
Abstract taken from the publication
Abstract:
The business consultancy Ribas Álvarez currently has a problem with the management of internal documents, which is handled through internal mail and a simple file-indexing application (HTML), without any kind of supervision or restriction. The company has a certain number of employees, who belong to different sections (private or public) within the company. The information circulating within the company has no security whatsoever, so any employee can access it even if it is of no use to them. The goal is to create an application that meets the company's needs for the administration and management of internal documents, with user control and secure access to the application. The basic objective of the application would be the creation and management of an intranet for the control and tracking of documents for a business consultancy.
Abstract:
This paper presents a genesis of the French research field of Architecturology, from its creation to the current research developed from it at ARIAM-LAREA (National School of Architecture of Paris-la-Villette, Laboratory of modeling for computer aids of the cognitive activity of conception). Architecturology was conceived at the creation of the French Schools of Architecture, which was initiated by the French movement of May 1968. Its major aim is to build specific knowledge about architecture for learning architecture. The first book of this emerging scientific field is "Sur l'espace architectural", written by Ph. Boudon and published in 1971. The field is currently constituted by a systemic scientific language and a paradigm that help to explain the cognitive activity of design, which it names conception. This scientific language has been published in "Enseigner la conception architecturale: cours d'architecturologie", written by Ph. Boudon, Ph. Deshayes, F. Pousin and F. Shatz and published in 1994 and 2000; in "Echelle(s)", published in 2002, which gathers different articles by Ph. Boudon; and in various articles by the LAREA team (Ph. Boudon, Ph. Deshayes, F. Pousin, F. Shatz and C. Lecourtois). From this scientific language and the paradigm of Architecturology, I develop methods for extending the field of knowledge of this point of view by doing research in architecture. These methods are gathered under the concept of Applied Architecturology. In 2005, LAREA merged with a research team interested in Computer Aided Design, named ARIAM. To create ARIAM-LAREA, we built a new research program on Computer Aided Conception in which we use Applied Architecturology for (1) producing new knowledge on the implications of the computer in the cognitive activity of design and (2) developing new software to support some operations of conception. This paper presents my current research work and three theses that I co-supervise at ARIAM-LAREA on this subject.
Abstract:
Eye tracking has become a preponderant technique in the evaluation of user interaction and behaviour with study objects in defined contexts. Common eye tracking data representation techniques offer valuable input regarding user interaction and eye gaze behaviour, namely through the measurement of fixations and saccades. However, these and other techniques may be insufficient for the representation of acquired data in specific studies, namely because of the complexity of the study object being analysed. This paper intends to contribute a summary of the data representation and information visualization techniques used in data analysis within different contexts (advertising, websites, television news and video games). Additionally, several methodological approaches are presented, which resulted from studies developed and under development at CETAC.MEDIA - Communication Sciences and Technologies Research Centre. In the studies described, traditional data representation techniques were insufficient; as a result, new forms of representing data, based on common techniques, were developed with the objective of improving communication and information strategies. For each of these studies, a brief summary of the contribution to its respective area is presented, as well as the data representation techniques used and some of the results obtained.
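As a concrete illustration of the fixation measurement mentioned above (a standard dispersion-threshold sketch, not the tooling used in the CETAC.MEDIA studies; the thresholds are indicative only):

```python
def detect_fixations(samples, max_dispersion=30.0, min_duration=0.100):
    """Dispersion-threshold (I-DT) fixation detection.

    samples: list of (t, x, y) gaze points, t in seconds, x/y in pixels.
    Returns fixations as (t_start, t_end, centroid_x, centroid_y) tuples.
    """
    fixations, i, n = [], 0, len(samples)
    while i < n:
        j = i
        # grow the window while the gaze points stay within the dispersion limit
        while j + 1 < n:
            xs = [s[1] for s in samples[i:j + 2]]
            ys = [s[2] for s in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        if samples[j][0] - samples[i][0] >= min_duration:
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            fixations.append((samples[i][0], samples[j][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1  # continue after the fixation
        else:
            i += 1     # window too short to be a fixation: slide forward
    return fixations
```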