811 results for Architecture and Complexity
Abstract:
Asphaltenes are blamed for various problems in the petroleum industry, especially formation of solid deposits and stabilization of water-in-oil emulsions. Many studies have been conducted to characterize chemical structures of asphaltenes and assess their phase behavior in crude oil or in model-systems of asphaltenes extracted from oil or asphaltic residues from refineries. However, due to the diversity and complexity of these structures, there is still much to be investigated. In this study, asphaltene (sub)fractions were extracted from an asphaltic residue (AR02), characterized by NMR, elemental analysis, X-ray fluorescence and MS-TOF, and compared to asphaltene subfractions obtained from another asphaltic residue (AR01) described in a previous article. The (sub)fractions obtained from the two residues were used to prepare model-systems containing 1 wt% of asphaltenes in toluene and their phase behavior was evaluated by measuring asphaltene precipitation onset using optical microscopy. The results obtained indicated minor differences between the asphaltene fractions obtained from the asphaltic residues of distinct origins, with respect to aromaticity, elemental composition (CHN), presence and content of heteroelements and average molar mass. Regarding stability, minor differences in molecule polarity appear to promote major differences in the phase behavior of each of the asphaltene fractions isolated.
Abstract:
As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high-performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on performance and power consumption in on-chip networks. In addition, building a high-performance, area- and energy-efficient on-chip network for multicore architectures requires a novel on-chip router allowing a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first routing protocol is a congestion-aware adaptive routing algorithm for 2D mesh NoCs that does not support multicast (one-to-many) traffic, while the other two protocols are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs by employing efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level. 
Moreover, in order to increase memory parallelism and provide compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented that use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated in the adaptive network interface to improve memory utilization and reduce both memory and network latencies. Three-Dimensional Integrated Circuits (3D ICs) have emerged as a viable candidate for achieving better performance and package density compared to traditional 2D ICs. In addition, combining the benefits of 3D IC and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (the vertical channel) has attracted considerable interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are also introduced to mitigate the inter-layer footprint and power dissipation on each layer with a small performance penalty.
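The congestion-aware output selection summarized above (restrict to minimal-path output ports, then prefer the least-congested downstream router) can be sketched minimally. This is a generic illustration, not the thesis' actual protocol: the port names, the congestion metric, and the omission of deadlock-avoidance turn restrictions are all simplifying assumptions.

```python
def minimal_ports(cur, dst):
    """Output ports allowed by minimal (shortest-path) adaptive routing
    on a 2D mesh: only moves that reduce the x and/or y distance.
    Real protocols additionally restrict turns for deadlock freedom."""
    (cx, cy), (dx, dy) = cur, dst
    ports = []
    if dx > cx: ports.append("E")
    if dx < cx: ports.append("W")
    if dy > cy: ports.append("N")
    if dy < cy: ports.append("S")
    return ports

def select_output(cur, dst, congestion):
    """Congestion-aware output selection: among admissible ports, pick
    the one whose neighbouring router reports the lowest congestion
    (e.g. input-buffer occupancy)."""
    ports = minimal_ports(cur, dst)
    if not ports:
        return None  # already at the destination: deliver locally
    return min(ports, key=lambda p: congestion[p])
```

For example, a packet at router (1, 1) heading to (3, 3) may go East or North; if the eastern neighbour reports occupancy 2 and the northern one 5, the East port is chosen.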
Abstract:
The morphofunctional aspects of oogenesis of Poecilia vivipara were studied with the aim of understanding the reproductive biology and development of species with internal fertilization, particularly those belonging to the family Poeciliidae. The stages of gonadal maturation and follicular development were characterized using mesoscopic, histological, histochemical, and lectin cytochemical analyses. Through mesoscopic evaluation, ovarian development was classified into six phases: immature, in maturation I, in maturation II, mature I, mature II, and post-spawn. Based on microscopic examination of the ovaries, we identified oocytes of types I and II during the previtellogenic phase and types III, IV, and V during the vitellogenic phase. As oogenesis proceeded, the oocyte cytoplasm increased in volume and accumulated cytoplasmic granules, characterizing vitellogenesis. The zona radiata (ZR) increased in thickness and complexity, and the follicular epithelium, initially thin and consisting of squamous (pavement) cells, exhibited simple cuboidal cells in type III oocytes. The histochemical and cytochemical analyses revealed alterations in the composition of the molecular structures that form the ovarian follicle throughout gonadal development. Our study demonstrated differences in the female reproductive system between fish species with internal and external fertilization, and we suggest that P. vivipara can be used as an experimental model to test environmental toxicity.
Abstract:
The necessity of EC (Electronic Commerce) and enterprise systems integration is perceived from the integrated nature of enterprise systems. The proven benefits of EC in providing competitive advantages force organizations to adopt EC and integrate it with their enterprise systems. Integration is a complex task to facilitate seamless flow of information and data between different systems within and across enterprises. Different systems have different platforms; thus, to integrate systems with different platforms and infrastructures, integration technologies such as middleware, SOA (Service-Oriented Architecture), ESB (Enterprise Service Bus), JCA (J2EE Connector Architecture), and B2B (Business-to-Business) integration standards are required. Major software vendors, such as Oracle, IBM, Microsoft, and SAP, offer various solutions to address EC and enterprise systems integration problems. There is limited literature covering the integration of EC and enterprise systems in detail. Most studies in this area focus on the factors that influence the adoption of EC by enterprises, or provide only limited information about a specific platform or integration methodology in general. Therefore, this thesis covers the technical details of EC and enterprise systems integration, addressing both adoption factors and integration solutions. In this study, a broad body of literature was reviewed and different solutions were investigated. Different enterprise integration approaches as well as the most popular integration technologies were investigated. Moreover, various methodologies for integrating EC and enterprise systems were studied in detail and different solutions were examined. In this study, the influential factors for adopting EC in enterprises were studied based on previous literature and categorized into technical, social, managerial, financial, and human resource factors. 
Moreover, integration technologies were categorized based on three levels of integration: data, application, and process. In addition, different integration approaches were identified and categorized based on their communication and platform. Different EC integration solutions were also investigated and categorized based on the identified integration approaches. By considering different aspects of integration, this study is a valuable resource for architects, developers, and system integrators seeking to adopt EC and integrate it with enterprise systems.
Abstract:
Technological innovations, the development of the internet, and globalization have increased the number and complexity of web applications. As a result, keeping web user interfaces understandable and usable (in terms of ease-of-use, effectiveness, and satisfaction) is a challenge. As part of this, designing user-intuitive interface signs (i.e., the small elements of a web user interface, e.g., navigational links, command buttons, icons, small images, thumbnails, etc.) is an issue for designers. Interface signs are key elements of web user interfaces because they act as communication artefacts conveying web content and system functionality, and because users interact with systems by means of interface signs. In light of the above, applying semiotics (i.e., the study of signs) to web interface signs can uncover new and important perspectives on web user interface design and evaluation. The thesis mainly focuses on web interface signs and uses semiotics as its background theory. The underlying aim of this thesis is to provide valuable insights for designing and evaluating web user interfaces from a semiotic perspective in order to improve overall web usability. The fundamental research question is formulated as: What do practitioners and researchers need to be aware of from a semiotic perspective when designing or evaluating web user interfaces to improve web usability? From a methodological perspective, the thesis follows a design science research (DSR) approach. A systematic literature review and six empirical studies are carried out in this thesis. The empirical studies are carried out with a total of 74 participants in Finland. The steps of a design science research process are followed in designing and conducting the studies: (a) problem identification and motivation, (b) definition of the objectives of a solution, (c) design and development, (d) demonstration, (e) evaluation, and (f) communication. 
The data is collected using observations in a usability testing lab, by analytical (expert) inspection, with questionnaires, and in structured and semi-structured interviews. User behaviour analysis, qualitative analysis, and statistics are used to analyze the study data. The results are summarized as follows and have led to the following contributions. Firstly, the results present the current status of semiotic research in UI design and evaluation and highlight the importance of considering semiotic concepts in UI design and evaluation. Secondly, the thesis explores interface sign ontologies (i.e., sets of concepts and skills that a user should know to interpret the meaning of interface signs) by providing a set of ontologies used to interpret the meaning of interface signs, and by providing a set of features related to ontology mapping in interpreting the meaning of interface signs. Thirdly, the thesis explores the value of integrating semiotic concepts in usability testing. Fourthly, the thesis proposes a semiotic framework (Semiotic Interface sign Design and Evaluation – SIDE) for interface sign design and evaluation in order to make signs intuitive for end users and to improve web usability. The SIDE framework includes a set of determinants and attributes of user-intuitive interface signs, and a set of semiotic heuristics to design and evaluate interface signs. Finally, the thesis assesses (a) the quality of the SIDE framework in terms of performance metrics (e.g., thoroughness, validity, effectiveness, reliability, etc.) and (b) the contributions of the SIDE framework from the evaluators’ perspective.
Abstract:
In the 1970s, pancreatic islet transplantation arose as an attractive alternative to restore normoglycemia; however, the scarcity of donors and difficulties with allotransplants, even under immunosuppressive treatment, greatly hampered the use of this alternative. Several materials and devices have been developed to circumvent the problem of islet rejection by the recipient, but, so far, none has proved to be totally effective. A major barrier to overcome is the highly organized islet architecture and its physical and chemical setting in the pancreatic parenchyma. In order to tackle this problem, we assembled a multidisciplinary team that has been working towards setting up the Human Pancreatic Islets Unit at the Chemistry Institute of the University of São Paulo, to collect and process pancreas from human donors, upon consent, in order to produce purified, viable and functional islets to be used in transplants. Collaboration with private enterprise has allowed access to the latest developed biomaterials for islet encapsulation and immunoisolation. Reasoning that the natural islet microenvironment should be mimicked for optimum viability and function, we set out to isolate extracellular matrix components from human pancreas, not only for analytical purposes, but also to be used as supplementary components of encapsulating materials. A protocol was designed to routinely culture different pancreatic tissues (islets, parenchyma and ducts) in the presence of several pancreatic extracellular matrix components and peptide growth factors to enrich the beta cell population in vitro before transplantation into patients. In addition to representing a therapeutic promise, this initiative is an example of productive partnership between the medical and scientific sectors of the university and private enterprises.
Abstract:
Thesis: A liquid-cooled, direct-drive, permanent-magnet, synchronous generator with helical, double-layer, non-overlapping windings formed from a copper conductor with a coaxial internal coolant conduit offers an excellent combination of attributes to reliably provide economic wind power for the coming generation of wind turbines with power ratings between 5 and 20 MW. A generator based on the liquid-cooled architecture proposed here will be reliable and cost effective. Its smaller size and mass will reduce build, transport, and installation costs. Summary: Converting wind energy into electricity and transmitting it to an electrical power grid to supply consumers is a relatively new and rapidly developing method of electricity generation. In the most recent decade, the increase in wind energy’s share of overall energy production has been remarkable. Thousands of land-based and offshore wind turbines have been commissioned around the globe, and thousands more are being planned. The technologies have evolved rapidly and are continuing to evolve, and wind turbine sizes and power ratings are continually increasing. Many of the newer wind turbine designs feature drivetrains based on Direct-Drive, Permanent-Magnet, Synchronous Generators (DD-PMSGs). Being low-speed, high-torque machines, the diameters of air-cooled DD-PMSGs become very large to generate higher levels of power. The largest direct-drive wind turbine generator in operation today, rated just below 8 MW, is 12 m in diameter and approximately 220 tonnes. To generate higher powers, traditional DD-PMSGs would need to become extraordinarily large. A 15 MW air-cooled direct-drive generator would be of colossal size and tremendous mass and no longer economically viable. One alternative to increasing diameter is instead to increase torque density. In a permanent magnet machine, this is best done by increasing the linear current density of the stator windings. 
However, greater linear current density results in more Joule heating, and the additional heat cannot be removed practically using a traditional air-cooling approach. Direct liquid cooling is more effective, and when applied directly to the stator windings, higher linear current densities can be sustained leading to substantial increases in torque density. The higher torque density, in turn, makes possible significant reductions in DD-PMSG size. Over the past five years, a multidisciplinary team of researchers has applied a holistic approach to explore the application of liquid cooling to permanent-magnet wind turbine generator design. The approach has considered wind energy markets and the economics of wind power, system reliability, electromagnetic behaviors and design, thermal design and performance, mechanical architecture and behaviors, and the performance modeling of installed wind turbines. This dissertation is based on seven publications that chronicle the work. The primary outcomes are the proposal of a novel generator architecture, a multidisciplinary set of analyses to predict the behaviors, and experimentation to demonstrate some of the key principles and validate the analyses. The proposed generator concept is a direct-drive, surface-magnet, synchronous generator with fractional-slot, duplex-helical, double-layer, non-overlapping windings formed from a copper conductor with a coaxial internal coolant conduit to accommodate liquid coolant flow. The novel liquid-cooling architecture is referred to as LC DD-PMSG. The first of the seven publications summarized in this dissertation discusses the technological and economic benefits and limitations of DD-PMSGs as applied to wind energy. The second publication addresses the long-term reliability of the proposed LC DD-PMSG design. Publication 3 examines the machine’s electromagnetic design, and Publication 4 introduces an optimization tool developed to quickly define basic machine parameters. 
The static and harmonic behaviors of the stator and rotor wheel structures are the subject of Publication 5. And finally, Publications 6 and 7 examine steady-state and transient thermal behaviors. There have been a number of ancillary concrete outcomes associated with the work, including the following:
- Intellectual Property (IP) for direct liquid cooling of stator windings via an embedded coaxial coolant conduit, IP for a lightweight wheel structure for low-speed, high-torque electrical machinery, and IP for numerous other details of the LC DD-PMSG design
- Analytical demonstrations of the equivalent reliability of the LC DD-PMSG; validated electromagnetic, thermal, structural, and dynamic prediction models; and an analytical demonstration of the superior partial-load efficiency and annual energy output of an LC DD-PMSG design
- A set of LC DD-PMSG design guidelines and an analytical tool to establish optimal geometries quickly and early on
- Proposed 8 MW LC DD-PMSG concepts for both inner and outer rotor configurations
Furthermore, three technologies introduced could be relevant across a broader spectrum of applications. 1) The cost optimization methodology developed as part of this work could be further improved to produce a simple tool to establish base geometries for various electromagnetic machine types. 2) The layered sheet-steel element construction technology used for the LC DD-PMSG stator and rotor wheel structures has potential for a wide range of applications. And finally, 3) the direct liquid-cooling technology could be beneficial in higher-speed electromotive applications such as vehicular electric drives.
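The sizing argument running through this abstract (torque density rises with the linear current density of the stator winding) follows a standard air-gap shear-stress estimate. The sketch below is a textbook relation, not taken from the dissertation; the power-factor value and the exact stress formula are illustrative assumptions.

```python
import math

def shear_stress(lin_current_density_a_per_m, airgap_flux_density_t,
                 power_factor=0.9):
    """Mean tangential air-gap stress (Pa), proportional to the product
    of RMS linear current density A and air-gap flux density B -- the
    scaling the text relies on: sigma ~ A * B * cos(phi) / sqrt(2)."""
    return (lin_current_density_a_per_m * airgap_flux_density_t
            * power_factor / math.sqrt(2))

def airgap_torque(shear_stress_pa, rotor_radius_m, stack_length_m):
    """Textbook sizing estimate: torque = tangential stress x rotor
    surface area x radius = sigma * (2*pi*r*l) * r."""
    return 2 * math.pi * shear_stress_pa * rotor_radius_m**2 * stack_length_m
```

Because torque is linear in the shear stress, and the stress is linear in the linear current density, doubling the current density (as direct liquid cooling of the windings permits) roughly doubles the torque at a fixed diameter, which is the route to smaller, lighter generators described above.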
Abstract:
Human activity recognition in everyday environments is a critical, but challenging task in Ambient Intelligence applications to achieve proper Ambient Assisted Living, and key challenges still remain to be dealt with to realize robust methods. One of the major limitations of Ambient Intelligence systems today is the lack of semantic models of the activities in the environment, so that the system can recognize the specific activity being performed by the user(s) and act accordingly. In this context, this thesis addresses the general problem of knowledge representation in Smart Spaces. The main objective is to develop knowledge-based models, equipped with semantics to learn, infer and monitor human behaviours in Smart Spaces. Moreover, it is easy to recognize that some aspects of this problem have a high degree of uncertainty, and therefore, the developed models must be equipped with mechanisms to manage this type of information. A fuzzy ontology and a semantic hybrid system are presented to allow modelling and recognition of a set of complex real-life scenarios where vagueness and uncertainty are inherent to the human nature of the users that perform them. The handling of uncertain, incomplete and vague data (i.e., missing sensor readings and activity execution variations, since human behaviour is non-deterministic) is approached for the first time through a fuzzy ontology validated in real-time settings within a hybrid data-driven and knowledge-based architecture. The semantics of activities, sub-activities and real-time object interaction are taken into consideration. The proposed framework consists of two main modules: the low-level sub-activity recognizer and the high-level activity recognizer. The first module detects sub-activities (i.e., actions or basic activities) taking input data directly from a depth sensor (Kinect). 
The main contribution of this thesis tackles the second component of the hybrid system, which lies on top of the previous one at a higher level of abstraction, acquires its input data from the first module's output, and executes ontological inference to provide users, activities and their influence in the environment with semantics. This component is thus knowledge-based, and a fuzzy ontology was designed to model the high-level activities. Since activity recognition requires context-awareness and the ability to discriminate among activities in different environments, the semantic framework allows for modelling common-sense knowledge in the form of a rule-based system that supports expressions close to natural language in the form of fuzzy linguistic labels. The framework's advantages have been evaluated with a challenging new public dataset, CAD-120, achieving an accuracy of 90.1% and 91.1% for low- and high-level activities, respectively. This entails an improvement over both entirely data-driven approaches and purely ontology-based approaches. As an added value, for the system to be sufficiently simple and flexible to be managed by non-expert users, and thus facilitate the transfer of research to industry, a development framework composed of a programming toolbox, a hybrid crisp and fuzzy architecture, and graphical models to represent and configure human behaviour in Smart Spaces was developed in order to provide the framework with more usability in the final application. As a result, human behaviour recognition can help assist people with special needs, such as in healthcare, independent elderly living, remote rehabilitation monitoring, industrial process guideline control, and many other cases. This thesis shows use cases in these areas.
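The fuzzy linguistic labels mentioned above can be illustrated with a minimal generic sketch: trapezoidal membership functions map a crisp sensor-derived value to a degree of truth, and a rule fires with the minimum (t-norm) of its antecedents. The labels, thresholds, and the "cooking" rule below are invented for illustration and are not the thesis' actual ontology.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 below a, rising on [a, b],
    1 on [b, c], falling on [c, d], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical linguistic labels for "time spent near the stove" (minutes)
short_time = lambda t: trapezoid(t, -1, 0, 2, 5)
long_time = lambda t: trapezoid(t, 3, 8, 30, 40)

def rule_cooking(near_stove_min, holding_utensil_deg):
    """IF time near stove is LONG AND holding-utensil is true-ish
    THEN activity is 'cooking' -- fired with min() as the t-norm."""
    return min(long_time(near_stove_min), holding_utensil_deg)
```

For instance, ten minutes near the stove gives the label "long" full membership, so the rule's firing strength is limited by the weaker antecedent, mirroring how vague, partially observed behaviour yields graded rather than crisp activity labels.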
Abstract:
This dissertation centres on the themes of knowledge creation, interdisciplinarity and knowledge work. My research approaches interdisciplinary knowledge creation (IKC) as practical situated activity. I argue that approaching IKC from a practice-based perspective makes it possible to “deconstruct” how knowledge creation actually happens, and demystify its strong intellectual, mentalistic and expertise-based connotations. I have rendered the work of the observed knowledge workers into something ordinary, accessible and routinized. Consequently this has made it possible to grasp the pragmatic challenges as well as the concrete drivers of such activity. Thus the effective way of organizing such activities becomes a question of organizing and leading effective everyday practices. To achieve that end, I conducted ethnographic research in one explicitly interdisciplinary space within higher education, Aalto Design Factory in Helsinki, Finland, where I observed how students from different disciplines collaborated in new product development projects. I argue that IKC is a multi-dimensional construct that intertwines a particular way of doing; a way of experiencing; a way of embodied being; and a way of reflecting on the very doing itself. This places emphasis not only on the practices themselves, but also on the way the individual experiences the practices, as this directly affects how the individual practices. My findings suggest that in order to effectively organize and execute knowledge creation activities, organizations need to better accept and manage the emergent diversity and complexity inherent in such activities. In order to accomplish this, I highlight the importance of understanding and using a variety of (material) objects, the centrality of mundane everyday practices, the acceptance of contradictions and negotiations, as well as the role of management that is involved and engaged. 
To succeed in interdisciplinary knowledge creation is to lead not only by example, but also by being very much present in the very everyday practices that make it happen.
Abstract:
This study evaluated the chemical and volatile composition of jujube wines fermented with Saccharomyces cerevisiae A1.25 with and without pulp contact and protease treatment during fermentation. Yeast cell population, total reducing sugar and methanol contents differed significantly between nonextracted and extracted wines. The nonextracted wines had significantly higher concentrations of ethyl 9-hexadecenoate, ethyl palmitate and ethyl oleate than the extracted wines. Pulp contact could also enhance phenylethyl alcohol, furfuryl alcohol, ethyl palmitate and ethyl oleate. Furthermore, protease treatment can accelerate the release of fusel oils. The first principal component separated the wine from the extracted juice without protease from the other samples based on its higher concentrations of medium-chain fatty acids and medium-chain ethyl esters. Sensory evaluation showed that pulp contact and protease could improve the intensity and complexity of wine aroma due to the increase in assimilable nitrogen.
Abstract:
The perovskite crystal structure is host to many different materials, from insulating to superconducting, providing a diverse range of intrinsic character and complexity. A better fundamental description of these materials in terms of their electronic, optical and magnetic properties undoubtedly precedes an effective realization of their application potential. SmTiO3, a distorted perovskite, has a strongly localized electronic structure and undergoes an antiferromagnetic transition at 50 K in its nominally stoichiometric form. Sr2RuO4 is a layered perovskite superconductor (Tc ≈ 1 K) bearing the same structure as the high-temperature superconductor La2−xSrxCuO4. Polarized reflectance measurements were carried out on both of these materials, revealing several interesting features in the far-infrared range of the spectrum. In the case of SmTiO3, although insulating, evidence indicates the presence of a finite background optical conductivity. As the temperature is lowered through the ordering temperature, a resonance feature appears to narrow and strengthen near 120 cm⁻¹. A nearby phonon mode also appears to couple to this magnetic transition, as revealed by a growing asymmetry in the optical conductivity. Experiments on a doped sample with a greater itinerant character and a lower Néel temperature of 40 K also indicate the presence of this strongly temperature-dependent mode, even at twice the ordering temperature. Although the mode appears to be sensitive to the magnetic transition, it is unclear whether a magnon assignment is appropriate. At the very least, evidence suggests an interesting interaction between magnetic and electronic excitations. Although Sr2RuO4 is highly anisotropic, it is metallic in three dimensions at low temperatures and reveals its coherent transport in an inter-plane Drude-like component up to the highest temperatures measured (90 K). 
An extended Drude analysis is used to probe the frequency-dependent scattering character, revealing a peak in both the mass enhancement and the scattering rate near 80 cm⁻¹ and 100 cm⁻¹, respectively. All of these experimental observations appear relatively consistent with a Fermi-liquid picture of charge transport. To supplement the optical measurements, a resistivity station was set up with an event-driven, object-oriented user interface. The program controls a Keithley current source, an HP nano-voltmeter and switching unit, as well as a LakeShore temperature controller, in order to obtain a plot of resistivity as a function of temperature. The system allows for resistivity measurements ranging from 4 K to 290 K using an external probe, or between 0.4 K and 295 K using a Helium-3 cryostat. Several materials of known resistivity have confirmed the system to be robust and capable of measuring metallic samples, resolving features of a few μΩ·cm.
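The quantity such a station ultimately reports can be sketched with the generic four-probe formula for a bar sample. This is an illustrative computation only, not the actual instrument-control program described above (which drives the Keithley source, HP nano-voltmeter, and LakeShore controller); the current-reversal averaging is a common technique in such setups, assumed here rather than taken from the text.

```python
def resistivity_uohm_cm(voltage_v, current_a, cross_section_m2, probe_spacing_m):
    """Four-probe resistivity of a bar sample:
    rho = (V / I) * (A / L), converted from ohm*m to micro-ohm*cm."""
    rho_ohm_m = (voltage_v / current_a) * (cross_section_m2 / probe_spacing_m)
    return rho_ohm_m * 1e8  # 1 ohm*m = 1e8 micro-ohm*cm

def delta_method(v_forward, v_reverse, current_a, cross_section_m2, probe_spacing_m):
    """Current-reversal averaging, commonly used with nano-voltmeter
    setups to cancel thermoelectric (thermal EMF) offsets."""
    v = (v_forward - v_reverse) / 2.0
    return resistivity_uohm_cm(v, current_a, cross_section_m2, probe_spacing_m)
```

For example, 1 µV across probes 1 cm apart at 1 mA through a 1 mm² cross-section corresponds to 10 µΩ·cm, the scale of the features the station is said to resolve.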
Abstract:
The phenomenon of communitas has been described as a moment 'in and out of time' in which a collective of individuals may be experienced by one as equal and individuated in an environment stripped of structural attributes (Turner, 1969). In these moments, emotional bonds form and an experience of perceived 'oneness' and synergy may be described. As a result of the perceived value of these experiences, it has been suggested by Sharpe (2005) that more clearly understanding how this phenomenon may be purposefully facilitated would be beneficial for leisure service providers. Consequently, the purpose of this research endeavor was to examine the ways in which a particular leisure service provider systematically employs specific methods and sets specific parameters with the intention of guiding participants toward experiences associated with communitas or "shared spirit" as described by the organization. A qualitative case study taking a phenomenological approach was employed in order to capture the depth and complexity of both the phenomenon and the purposeful negotiation of experiences in guiding participants toward this phenomenon. The means through which these experiences were intentionally facilitated was recreational music making in a group drumming context. As such, an organization which employs specific methods of rhythm circle facilitation, and trains other facilitators all over the world, was purposively chosen for its recognition as the most respectable and credible in this field. The specific facilitator was chosen based on a high recommendation by the organization due to her level of experience and expertise. Two rhythm circles were held, and participants were chosen randomly by the facilitator. Data was collected through observation in the first circle and participant-observation in the second, as well as through focus groups with circle participants. 
Interviews with the facilitator were held both initially, to gain a broad understanding of the concepts and phenomenon, and after each circle, to reflect on that circle specifically. Data was read repeatedly to draw out emerging patterns, which were coded and organized accordingly. It was found that this specific process or system of implementation led to experiences associated with communitas by participants. In order to more clearly understand this process and the ways in which experiences associated with communitas manifest as a result of deliberate facilitator actions, these objective facilitator actions were plotted along a continuum relating to subjective participant experiences. These findings were then linked to the literature with regards to specific characteristics of communitas. In so doing, the intentional manifestation of these experiences may be more clearly understood by future facilitators in many contexts. Beyond this, the findings summarized important considerations with regards to specific technical and communication competencies which were found to be essential to fostering these experiences for participants within each group. Findings surrounding the maintenance of a fluid negotiation of certain transition points within a group rhythm event were also highlighted, and this fluidity was found to be essential to the experience of absorption and engagement in the activity and experience. Emergent themes of structure, control, and consciousness have been presented as they manifested and were found to affect experiences within this study. Discussions surrounding the ethics and authenticity of these particular methods and their implementation have also been generated throughout. In conclusion, there was a breadth as well as depth of knowledge found in unpacking this complex process of guiding individuals toward experiences associated with communitas. 
The implications of these findings broaden current theoretical and practical understanding of how certain intentional parameters may be set, and methods employed, to lead toward experiences of communitas, and contribute greater knowledge toward conceptualizing how these experiences manifest when the process is broken down.
Abstract:
The increasing variety and complexity of video games allows players to choose how to behave and represent themselves within these virtual environments. The focus of this dissertation was to examine the connections between the personality traits (specifically, HEXACO traits and psychopathic traits) of video game players and player-created and controlled game-characters (i.e., avatars), and the link between traits and behavior in video games. In Study 1 (n = 198), the connections between player personality traits and behavior in a Massively Multiplayer Online Roleplaying Game (World of Warcraft) were examined. Six behavior components were found (i.e., Player-versus-Player, Social Player-versus-Environment, Working, Helping, Immersion, and Core Content), and each was related to relevant personality traits. For example, Player-versus-Player behaviors were negatively related to Honesty-Humility and positively related to psychopathic traits, and Immersion behaviors (i.e., exploring, role-playing) were positively related to Openness to Experience. In Study 2 (n = 219), the connections between player personality traits and in-game behavior in video games were examined in university students. Four behavior components were found (i.e., Aggressing, Winning, Creating, and Helping), and each was related to at least one personality trait. For example, Aggressing was negatively related to Honesty-Humility and positively related to psychopathic traits. In Study 3 (n = 90), the connections between player personality traits and avatar personality traits were examined in World of Warcraft. Positive player-avatar correlations were observed for all personality traits except Extraversion. Significant mean differences between players and avatars were observed for all traits except Conscientiousness; avatars had higher mean scores on Extraversion and psychopathic traits, but lower mean scores on the remaining traits. 
In Study 4, the connections between player personality traits, avatar traits, and observed behaviors in a life-simulation video game (The Sims 3) were examined in university students (n = 93). Participants created two avatars and used these avatars to play The Sims 3. Results showed that the selection of certain avatar traits was related to relevant player personality traits (e.g., participants who chose the Friendly avatar trait were higher in Honesty-Humility, Emotionality, and Agreeableness, and lower in psychopathic traits). Selection of certain character-interaction behaviors was related to relevant player personality traits (e.g., participants with higher levels of psychopathic traits used more Mean and fewer Friendly interactions). Together, the results of the four studies suggest that individuals generally behave and represent themselves in video games in ways that are consistent with their real-world tendencies.
Abstract:
This study addresses the complexity of the issues involved in urban lighting and its design. The aim is to uncover the operative mechanisms of the lighting project in order to generate an analysis and an understanding of this type of development. The research brings to light lighting issues at several levels, including urban planning, the environment, culture, communication, vision, and perception, as well as at the level of the actors and their practices in the field. Using a deductive qualitative approach, this theoretical research seeks to better understand the different meanings of the lighting phenomenon: how, in actual field practice, are these lighting issues understood, interpreted, and translated through the realization of projects and the processes put in place to meet lighting needs? The relevance of this research lies in questioning the complex issues of lighting design in order to determine how to design "good lighting." How does a lighting project unfold from conception to completion? Who are the various actors, what are their modes of intervention, and how do they perceive the lighting project? The goal is to verify how these issues take concrete form in the field, particularly through the activity and interpretation of professionals. We aim to create an operative model that accounts for the issues and the process of this type of project, a model that will then serve as a reference for understanding the mechanisms at work, such as the context, the actors, the means, and the aims of the projects. A review of theoretical research will allow us to understand the polysemy of the lighting phenomenon, to uncover the complexity of its issues, and to produce a first interpretation of this type of project. We will determine theoretically what the notion of "good lighting" covers, which will allow us to create an analytical grid for comparing our approach with field reality.
This research will then be confronted with data gathered from case studies, from internships in urban planning and lighting design, and from interviews with professionals in the field. We will compare the theoretically defined issues with the data collected in the field. These data will be gathered from projects carried out with professionals during immersive research. Action research will allow us to collaborate with professionals to understand how they select, determine, and respond to the issues of lighting projects. Through semi-structured interviews, we will see how the actors perceive their own activities, and we will interpret the data using grounded theory to draw out the meaning of their discourse. We will then analyze the results interpretively to determine the points of convergence and divergence between the theoretical issues defined upstream and the issues defined downstream by the field research. This comparison will allow us to construct an interpretation of the issues of urban lighting design in all their complexity, from both theoretical and practical perspectives. This complex qualitative research rests on a combination of a phenomenological study and the methodologies proposed by grounded theory. We will combine data drawn from field practice with the lighting actors' perceptions of that practice. The search for "good lighting" thus envisions, through a new understanding, the improvement of professionals' reflective tools and actions. In terms of results, we aim to create an operative model of lighting design that would define the various constitutive elements of these projects, their roles, and the relationships they maintain with one another.
This model will highlight the elements that determine the quality of a lighting project and will provide a tool for understanding it. The contribution of this research is therefore to provide, through this new understanding, a methodological and analytical reference for lighting professionals, and also to bring out the importance of the lighting design phenomenon by raising new questions within the activities related to industrial design, architecture, and urban planning.
Abstract:
This study was carried out as part of a master's degree in Aménagement. It seeks to demonstrate that the project stage known as problematization, that is, the stage of constructing the problems to be solved, ensures that the actions undertaken are coherent with and relevant to the project's context. We now recognize that projects can no longer be evaluated solely on the basis of efficiency, that is, the concordance of their results with the stated objectives. In these circumstances, we hypothesize that problematization calls on particular competencies that are generally little used in the other stages of carrying out a project. To that end, we conducted exploratory research on this theme with the objective of gaining an understanding of the competencies mobilized during problematization in project situations in general, and of identifying these competencies more specifically in one particular project situation, that of international cooperation projects. To achieve this, we constructed a job and activity framework from which to derive a competency framework for problematization. To do so, we carried out a case study on international cooperation internship projects. The use of the "instruction au sosie" (instruction to the double) technique and of intervention research allowed us to draw out the following main result: problematization calls on particular competencies in information management and mediation.
The general problematization competencies that internship coordinators in international cooperation organizations must master are: being able to generate project opportunities from primary and secondary data; being able to make choices and justify those choices on the basis of data analysis; being able to present clear written information, respectful of partners' ideas, in the project language used by the audience the proposal addresses; being able to use evaluators' comments to improve a project; and being able to carry a project through to completion. The main contribution of this research lies in proposing a valuable tool for the recruitment and selection, performance evaluation, training, and professional development of those involved in problematization.