821 results for 380303 Computer Perception, Memory and Attention
Abstract:
Background A complete explanation of the mechanisms by which Pb2+ exerts toxic effects on the developing central nervous system remains elusive. Glutamate is critical to the developing brain, acting through various subtypes of ionotropic and metabotropic glutamate receptors (mGluRs). Ionotropic N-methyl-D-aspartate receptors have been considered a principal target in lead-induced neurotoxicity. The relationship between mGluR3/mGluR7 and synaptic plasticity has been examined in many recent studies. The present study aimed to examine the role of mGluR3/mGluR7 in lead-induced neurotoxicity. Methods Twenty-four adult female rats were randomly assigned to a control diet or 0.2% lead acetate during gestation and lactation. Blood and hippocampal lead levels of pups were analyzed at weaning to evaluate the actual lead content at the end of the exposure. Impairments of short-term and long-term memory in pups were assessed with the Morris water maze and by detection of hippocampal ultrastructural alterations on electron microscopy. The impact of lead exposure on mGluR3 and mGluR7 mRNA expression in hippocampal tissue of pups was investigated by quantitative real-time polymerase chain reaction, and its potential role in lead neurotoxicity is discussed. Results Lead levels in blood and hippocampi of the lead-exposed rats were significantly higher than those of controls (P < 0.001). In the Morris water maze, the overall decrease in goal latency and swimming distance indicated that controls had shorter latencies and distances than lead-exposed rats (P = 0.001 and P < 0.001 by repeated-measures analysis of variance). Neuronal ultrastructural alterations were observed on transmission electron microscopy, and real-time polymerase chain reaction showed that exposure to 0.2% lead acetate did not substantially change mGluR3 and mGluR7 mRNA expression compared with controls.
Conclusion Exposure to lead before and after birth can damage the short-term and long-term memory of young rats and the hippocampal ultrastructure. However, the current study provides no evidence that the expression of rat hippocampal mGluR3 and mGluR7 can be altered by systemic administration of lead during gestation and lactation. This finding is informative for the field of lead-induced developmental neurotoxicity, suggesting that it may not be worthwhile to include mGluR3 and mGluR7 in future studies.
Abstract:
This paper presents a simple and intuitive approach to determining the kinematic parameters of a serial-link robot in Denavit–Hartenberg (DH) notation. Once a manipulator’s kinematics is parameterized in this form, a large body of standard algorithms and code implementations for kinematics, dynamics, motion planning, and simulation is available. The proposed method has two parts. The first is the “walk through,” a simple procedure that creates a string of elementary translations and rotations, from the user-defined base coordinate to the end-effector. The second is an algebraic procedure to manipulate this string into a form that can be factorized as link transforms, which can be represented in standard or modified DH notation. The method allows for an arbitrary base and end-effector coordinate system as well as an arbitrary zero joint angle pose. The algebraic procedure is amenable to computer algebra manipulation and a Java program is available as supplementary downloadable material.
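The elementary-transform composition at the heart of this approach can be illustrated with a short sketch (Python with NumPy rather than the paper's Java supplement; function names are our own): each standard DH link transform is itself a product of four elementary translations and rotations.

```python
import numpy as np

def rotz(t):
    """Elementary rotation about z by angle t (homogeneous 4x4)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def rotx(t):
    """Elementary rotation about x by angle t (homogeneous 4x4)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])

def transz(d):
    """Elementary translation along z."""
    T = np.eye(4); T[2, 3] = d
    return T

def transx(a):
    """Elementary translation along x."""
    T = np.eye(4); T[0, 3] = a
    return T

def dh_link(theta, d, a, alpha):
    """Standard DH link transform, factored as a string of
    elementary transforms: Rz(theta) Tz(d) Tx(a) Rx(alpha)."""
    return rotz(theta) @ transz(d) @ transx(a) @ rotx(alpha)
```

A forward-kinematics chain is then just the matrix product of `dh_link` terms, one per joint, which is the factorized form the paper's algebraic procedure recovers from the walk-through string.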
Abstract:
The human-technology nexus is a strong focus of Information Systems (IS) research; however, very few studies have explored this phenomenon in anaesthesia. Anaesthesia has a long history of adoption of technological artifacts, ranging from early apparatus to present-day information systems such as electronic monitoring and pulse oximetry. This prevalence of technology in modern anaesthesia and the rich human-technology relationship provides a fertile empirical setting for IS research. This study employed a grounded theory approach that began with a broad initial guiding question and, through simultaneous data collection and analysis, uncovered a core category of technology appropriation. This emergent basic social process captures a central activity of anaesthetists and is supported by three major concepts: knowledge-directed medicine, complementary artifacts and culture of anaesthesia. The outcomes of this study are: (1) a substantive theory that integrates the aforementioned concepts and pertains to the research setting of anaesthesia and (2) a formal theory, which further develops the core category of appropriation from anaesthesia-specific to a broader, more general perspective. These outcomes fulfill the objective of a grounded theory study, being the formation of theory that describes and explains observed patterns in the empirical field. In generalizing the notion of appropriation, the formal theory is developed using the theories of Karl Marx. This Marxian model of technology appropriation is a three-tiered theoretical lens that examines appropriation behaviours at a highly abstract level, connecting the stages of natural, species and social being to the transition of a technology-as-artifact to a technology-in-use via the processes of perception, orientation and realization.
The contributions of this research are two-fold: (1) the substantive model contributes to practice by providing a model that describes and explains the human-technology nexus in anaesthesia, and thereby offers potential predictive capabilities for designers and administrators to optimize future appropriations of new anaesthetic technological artifacts; and (2) the formal model contributes to research by drawing attention to the philosophical foundations of appropriation in the work of Marx, and subsequently expanding the current understanding of contemporary IS theories of adoption and appropriation.
Abstract:
Computer-aided technologies, medical imaging, and rapid prototyping have created new possibilities in biomedical engineering. The systematic variation of scaffold architecture, as well as the mineralization inside a scaffold/bone construct, can be studied using computer imaging technology, CAD/CAM, and micro-computed tomography (micro-CT). In this paper, the potential of combining these technologies has been exploited in the study of scaffolds and osteochondral repair. Porosity, surface area per unit volume, and the degree of interconnectivity were evaluated through imaging and computer-aided manipulation of the scaffold scan data. For the osteochondral model, the spatial distribution and the degree of bone regeneration were evaluated. In this study the versatility of two software packages, Mimics (Materialise) and CTAn with 3D realistic visualization (SkyScan), was also assessed.
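As a rough illustration of the scaffold metrics mentioned above, the following sketch (Python/NumPy, not the Mimics or CTAn pipeline used in the study; function names are illustrative) computes porosity and an approximate surface area per unit volume from a binary voxel model such as a segmented micro-CT scan:

```python
import numpy as np

def porosity(voxels):
    """Void fraction of a binary voxel model (True = solid material)."""
    return 1.0 - voxels.mean()

def surface_area_per_volume(voxels, voxel_size=1.0):
    """Approximate solid surface area per unit volume by counting
    solid/void face transitions along each axis (interior faces only)."""
    v = voxels.astype(np.int8)
    faces = sum(np.count_nonzero(np.diff(v, axis=ax)) for ax in range(3))
    area = faces * voxel_size ** 2          # each transition = one exposed face
    volume = voxels.size * voxel_size ** 3  # total sample volume
    return area / volume
```

Interconnectivity analysis would additionally require a connected-component labelling of the void phase, which dedicated packages such as CTAn perform on real scan data.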
Abstract:
Despite recent public attention to e-health as a solution to rising healthcare costs and an ageing population, there have been relatively few studies examining the geographical pattern of e-health usage. This paper argues for an equitable approach to e-health and attention to the way in which e-health initiatives can produce locational health inequalities, particularly in socioeconomically disadvantaged areas. In this paper, we use a case study to demonstrate geographical variation in Internet accessibility, Internet status and prevalence of chronic diseases within a small district. There are significant disparities in access to health information within socioeconomically disadvantaged areas. The most vulnerable people in these areas are likely to have limited availability of, or access to, Internet healthcare resources. They are also more likely to have complex chronic diseases and, therefore, be in greatest need of these resources. This case study demonstrates the importance of an equitable approach to e-health information technologies and telecommunications infrastructure.
Abstract:
Hazard perception in driving is one of the few driving-specific skills associated with crash involvement. However, this relationship has only been examined in studies where the majority of individuals were younger than 65. We present the first data revealing an association between hazard perception and self-reported crash involvement in drivers aged 65 and over. In a sample of 271 drivers, we found that individuals whose mean response time to traffic hazards was slower than 6.68 seconds (the ROC-curve-derived pass mark for the test) were 2.32 times (95% CI 1.46, 3.22) more likely to have been involved in a self-reported crash within the previous five years than those with faster response times. This ratio rose to 2.37 (95% CI 1.49, 3.28) when driving exposure was controlled for. As a comparison, individuals who failed a test of useful field of view were 2.70 (95% CI 1.44, 4.44) times more likely to crash than those who passed. The hazard perception test and the useful field of view measure accounted for separate variance in crash involvement. These findings indicate that hazard perception testing and training could be potentially useful for road safety interventions for this age group.
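For readers unfamiliar with how ratios of this kind are derived, the sketch below computes an odds ratio with a 95% Wald confidence interval from a 2×2 table. The counts are purely illustrative; the paper's actual data and its exact statistic (reported above with confidence intervals) are not reproduced here.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without,
    c = unexposed with outcome, d = unexposed without.
    (Here 'exposed' would mean failing the hazard perception test.)"""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An interval whose lower bound exceeds 1, as in the results above, indicates an association unlikely to be due to chance at the 5% level.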
Abstract:
This article investigates virtual reality representations of performance in London’s late sixteenth-century Rose Theatre, a venue that, by means of current technology, can once again challenge perceptions of space, performance, and memory. The VR model of The Rose represents a virtual recreation of this venue in as much detail as possible and attempts to recover graphic demonstrations of the trace memories of the performance modes of the day. The VR model is based on accurate archeological and theatre historical records and is easy to navigate. The introduction of human figures onto The Rose’s stage via motion capture allows us to explore the relationships between space, actor and environment. The combination of venue and actors facilitates a new way of thinking about how the work of early modern playwrights can be stored and recalled. This virtual theatre is thus activated to intersect productively with contemporary studies in performance; as such, our paper provides a perspective on and embodiment of the relation between technology, memory and experience. It is, at its simplest, a useful archiving project for theatrical history, but it is directly relevant to contemporary performance practice as well. Further, it reflects upon how technology and ‘re-enactments’ of sorts mediate the way in which knowledge and experience are transferred, and even what may be considered ‘knowledge.’ Our work provides opportunities to begin addressing what such intermedial confrontations might produce for ‘remembering, experiencing, thinking and imagining.’ We contend that these confrontations will enhance live theatre performance rather than impeding or disrupting contemporary performance practice. Our ‘paper’ is in the form of a video which covers the intellectual contribution while also permitting a demonstration of the interventions we are discussing.
Abstract:
Automation technology can provide construction firms with a number of competitive advantages. Technology strategy guides a firm's approach to all technology, including automation. Engineering management educators, researchers, and construction industry professionals need improved understanding of how technology affects results, and how to better target investments to improve competitive performance. A more formal approach to the concept of technology strategy can benefit construction managers in their efforts to remain competitive in increasingly hostile markets. This paper recommends consideration of five specific dimensions of technology strategy within the overall parameters of market conditions, firm capabilities and goals, and stage of technology evolution. Examples of the application of this framework in the formulation of technology strategy are provided for CAD applications, co-ordinated positioning technology and advanced falsework and formwork mechanisation to support construction field operations. Results from this continuing line of research can assist managers in making complex and difficult decisions regarding the reengineering of construction processes using new construction technology, and benefit future researchers by providing new tools for analysis. Through managing technology to best suit the existing capabilities of their firm, and addressing the market forces, engineering managers can better face the increasingly competitive environment in which they operate.
Abstract:
Understanding the motion characteristics of on-site objects is desirable for the analysis of construction work zones, especially in problems related to safety and productivity studies. This article presents a methodology for rapid object identification and tracking. The proposed methodology contains algorithms for spatial modeling and image matching. A high-frame-rate range sensor was utilized for spatial data acquisition. The experimental results indicated that an occupancy grid spatial modeling algorithm could quickly build a suitable work zone model from the acquired data. The results also showed that an image matching algorithm is able to find the most similar object from a model database and from spatial models obtained from previous scans. It is then possible to use the matched information to successfully identify and track objects.
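The occupancy-grid step described above can be sketched in a few lines (Python/NumPy; the cell size, extent, and function names are illustrative, not the parameters of the study's high-frame-rate range sensor): range readings are binned into grid cells, and occupied cells form the work zone model.

```python
import numpy as np

def occupancy_grid(points, cell=0.5, extent=10.0):
    """Build a boolean occupancy grid from 2-D range-sensor points.

    points : (N, 2) array of x, y readings in metres (illustrative units)
    cell   : grid cell size in metres
    extent : side length of the square work zone covered by the grid
    """
    n = int(extent / cell)
    grid = np.zeros((n, n), dtype=bool)
    idx = np.floor(points / cell).astype(int)      # point -> cell index
    ok = (idx >= 0).all(axis=1) & (idx < n).all(axis=1)  # drop out-of-range hits
    grid[idx[ok, 0], idx[ok, 1]] = True
    return grid
```

Object identification would then compare connected clusters of occupied cells against a model database, and tracking would match clusters between successive scans, as the article's image matching algorithm does.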
Abstract:
The creative work of this study is a novel-length work of literary fiction called Keeping House (published as Grace's Table, by University of Queensland Press, April 2014). Grace has not had twelve people at her table for a long time. Hers isn't the kind of family who share regular Sunday meals. As Grace prepares the feast, she reflects on her life, her marriage and her friendships. When the three generations of her family come together, simmering tensions from the past threaten to boil over. The one thing that no one can talk about is the one thing that no one can forget. Grace's Table is a moving and often funny novel using food as a language to explore the power of memory and the family rituals that define us. The exegetical component of this study does not adhere to traditional research pedagogies. Instead, it follows the model of what the literature describes as fictocriticism. It is the intention that the exegesis be read as a hybrid genre; one that combines creative practice and theory and blurs the boundaries between philosophy and fiction. In offering itself as an alternative to the exegetical canon it provides a model for the multiplicity of knowledge production suited to the discipline of practice-led research. The exegesis mirrors structural elements of the creative work by inviting twelve guests into the domestic space of the novel to share a meal. The guests, chosen for their diverse thinking, enable examination of the various agents of power involved in the delivery of food. Their ideas cross genders, ages and time periods; their motivations and opinions often collide. Some are more concerned with the spatial politics of where food is consumed, others with its actual preparation and consumption. Each, however, provides a series of creative reflective conversations throughout the meal which help to answer the research question: How can disempowered women take authority within their domestic space? 
Michel de Certeau must defend his "operational tactics" or "art of the weak" as a means by which women can subvert the colonisation of their domestic space against Michel Foucault's ideas about the functions of a "disciplinary apparatus". Erving Goffman argues that the success of de Certeau's "tactics" depends upon his theories of "performance" and "masquerade"; a claim de Certeau refutes. Doreen Massey and the author combine forces in arguing for space, time and politics to be seen as interconnected, non-static and often contested. The author calls for identity, or sense of self, to be considered a further dimension which impacts on the function of spatial models. Yi-Fu Tuan speaks of the intimacy of kitchens; Gaston Bachelard the power of daydreams; and Jean Anthelme Brillat-Savarin gives the reader a taste of the nourishing arts. Roland Barthes forces the author to reconsider her function as a writer and her understanding of the reader's relationship with a text. Fictional characters from two texts have a place at the table – Marian from The Edible Woman by Margaret Atwood and Lilian from Lilian's Story by Kate Grenville. Each explores how they successfully subverted expectations of their gender. The author interprets and applies elements of the conversations to support Grace's tactics in the novel as well as those related to her own creative research practice. Grace serves her guests, reflecting on what is said and how it relates to her story. Over coffee, the two come together to examine what each has learned.
Abstract:
The aim of this project was to implement a just-in-time hints help system into a real time strategy (RTS) computer game that would deliver information to the user at the time that it would be of the most benefit. The goal of this help system is to improve the user’s learning in terms of their rate of learning, retention and avoidance of stagnation. The first stage of this project was implementing a computer game to incorporate four different types of skill that the user must acquire, namely motor, perceptual, declarative knowledge and strategic. Subsequently, the just-in-time hints help system was incorporated into the game to assess the user’s knowledge and deliver hints accordingly. The final stage of the project was to test the effectiveness of this help system by conducting two phases of testing. The goal of this testing was to demonstrate an increase in the user’s assessment of the helpfulness of the system from phase one to phase two. The results of this testing showed that there was no significant difference in the user’s responses in the two phases. However, when the results were analysed with respect to several categories of hints that were identified, it became apparent that patterns in the data were beginning to emerge. The conclusions of the project were that further testing with a larger sample size would be required to provide more reliable results and that factors such as the user’s skill level and different types of goals should be taken into account.
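A hint system of the kind described — one that assesses the user's recent performance and delivers a hint only when it would be of most benefit — might be sketched as follows. This is a hypothetical illustration, not the project's actual implementation: the class, the stagnation rule, and the hint texts are all our own inventions.

```python
from collections import deque

class JustInTimeHints:
    """Hypothetical just-in-time hint trigger: deliver a skill's hint
    only when that skill's recent scores show stagnation."""

    def __init__(self, window=5):
        self.window = window
        self.scores = {}  # skill name -> recent scores (bounded deque)
        self.hints = {    # illustrative hint texts, one per skill type
            "motor": "Try smoother mouse movements.",
            "strategic": "Scout before attacking.",
        }

    def record(self, skill, score):
        q = self.scores.setdefault(skill, deque(maxlen=self.window))
        q.append(score)

    def hint_for(self, skill):
        q = self.scores.get(skill, ())
        # stagnation rule: a full window of scores with no improvement
        # over the window's first score
        if len(q) == self.window and max(q) <= q[0]:
            return self.hints.get(skill)
        return None
```

The point of such a rule is avoidance of stagnation: a player who is still improving receives no interruption, while one whose scores have plateaued gets the relevant hint at that moment.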
Abstract:
Inadequate air quality and the inhalation of airborne pollutants pose many risks to human health and wellbeing, and are listed among the top environmental risks worldwide. The importance of outdoor air quality was recognised in the 1950s; indoor air quality emerged as an issue some time later and was soon recognised as having an equal, if not greater, importance than outdoor air quality. Identification of ambient air pollution as a health hazard was followed by steps, undertaken by a broad range of national and international professional and government organisations, aimed at reduction or elimination of the hazard. However, the process of achieving better air quality is still in progress. The last 10 years or so have seen an unprecedented increase in the interest in, and attention to, airborne particles, with a special focus on their finer size fractions, including ultrafine particles (< 0.1 µm) and their subset, nanoparticles (< 0.05 µm). This paper discusses the current status of scientific knowledge on the links between air quality and health, with a particular focus on airborne particulate matter, and the directions taken by national and international bodies to improve air quality.
Abstract:
The literature abounds with descriptions of failures in high-profile projects and a range of initiatives has been generated to enhance project management practice (e.g., Morris, 2006). Estimating from our own research, there are scores of other project failures that are unrecorded. Many of these failures can be explained using existing project management theory: poor risk management, inaccurate estimating, cultures of optimism dominating decision making, stakeholder mismanagement, inadequate timeframes, and so on. Nevertheless, in spite of extensive discussion and analysis of failures and attention to the presumed causes of failure, projects continue to fail in unexpected ways. In the 1990s, three U.S. state departments of motor vehicles (DMV) cancelled major projects due to time and cost overruns and inability to meet project goals (IT-Cortex, 2010). The California DMV failed to revitalize their drivers’ license and registration application process after spending $45 million. The Oregon DMV cancelled their five-year, $50 million project to automate their manual, paper-based operation after three years, when the estimates grew to $123 million and its duration stretched to eight years or more; the prototype was a complete failure. In 1997, the Washington state DMV cancelled their license application mitigation project because it would have been too big and obsolete by the time it was estimated to be finished. There are countless similar examples of projects that have been abandoned or that have not delivered the requirements.
Abstract:
The practices of robotics and computer vision each involve the application of computational algorithms to data. The research community has developed a very large body of algorithms, but for a newcomer to the field this can be quite daunting. For more than 10 years the author has maintained two open-source MATLAB® Toolboxes, one for robotics and one for vision. They provide implementations of many important algorithms and allow users to work with real problems, not just trivial examples. This new book makes the fundamental algorithms of robotics, vision and control accessible to all. It weaves together theory, algorithms and examples in a narrative that covers robotics and computer vision separately and together. Using the latest versions of the Toolboxes, the author shows how complex problems can be decomposed and solved using just a few simple lines of code. The topics covered are guided by real problems observed by the author over many years as a practitioner of both robotics and computer vision. Written in a light but informative style, it is easy to read and absorb, and it includes over 1000 MATLAB® and Simulink® examples and figures. The book is a real walk through the fundamentals of mobile robots, navigation, localization, arm-robot kinematics, dynamics and joint-level control, then camera models, image processing, feature extraction and multi-view geometry, finally bringing it all together with an extensive discussion of visual servo systems.