926 results for Computer storage devices
Abstract:
This case study examines the impact of a computer information system as it was being implemented in one Ontario hospital. The attitudes of a cross section of the hospital staff acted as a barometer to measure their perceptions of the implementation process. With The Mississauga Hospital in the early stages of an extensive computer implementation project, the opportunity existed to identify staff attitudes toward the computer system and their overall knowledge of it, and to compare the findings with the literature. The goal of the study was to develop a broader knowledge base about the affective domain of the relationship between people and the computer system. Eight exploratory questions shaped the focus of the investigation. Data were collected from three sources: a survey questionnaire, focused interviews, and internal hospital documents. Both quantitative and qualitative data were analyzed. Instrumentation in the study consisted of a survey distributed at two points in time to randomly selected hospital employees who represented all staff levels. Other sources of data included hospital documents and twenty-five focused interviews with staff who replied to both surveys. Leavitt's socio-technical system, with its four subsystems (task, structure, technology, and people), was used to classify staff responses to the research questions. The study findings revealed that the majority of respondents felt positive about using the computer as part of their jobs. No apparent correlations were found between sex, age, or staff group and feelings about using the computer. Differences in attitudes and attitude changes were found, potentially related to the element of time. Another difference was found between staff group and the perception of being involved in the decision-making process. These findings, and other evidence about the role of change agents in this change process, help to emphasize that planning change is one thing; managing the transition is another.
Abstract:
Realism in computer graphics requires adequately simulating the appearance of objects under various illuminations and at different scales. A solution commonly adopted by researchers consists of measuring, with calibrated devices, the reflectance of a sample of a real surface, and then encoding it as a reflectance model (BRDF) or a reflectance texture (BTF). Despite important advances, the data thus placed within artists' reach remain very little used. This reluctance can be explained by two main reasons: (1) the quantity and quality of the available measurements, and (2) the size of the data. This work proposes to tackle both problems from the angle of simulation. We conjecture that the level of realism of rendering in computer graphics already produces satisfactory results with current techniques. We therefore propose to precompute and encode, in an augmented BTF, the lighting effects on a geometry, which is subsequently applied to surfaces. Since precomputed rendering and textures are already well adopted by artists, this approach can integrate better into their productions. To ensure that this model also meets the requirements of multi-scale representations, we further propose an adaptation of BTFs to a MIP map style of encoding.
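The MIP map adaptation mentioned above amounts to repeatedly downsampling the stored texture data level by level. A minimal sketch of the per-channel 2x2 box filtering that builds such a chain (the function name and the plain box filter are illustrative assumptions; a real BTF would filter whole per-view/per-light reflectance slices, not a single scalar channel):

```python
def build_mip_chain(tex):
    """Build a MIP chain for a square, power-of-two texture by 2x2 box filtering.

    `tex` is a list of rows of scalar reflectance values; each successive
    level halves the resolution until a single texel remains.
    """
    levels = [tex]
    while len(tex) > 1:
        n = len(tex) // 2
        tex = [[(tex[2 * i][2 * j] + tex[2 * i][2 * j + 1]
                 + tex[2 * i + 1][2 * j] + tex[2 * i + 1][2 * j + 1]) / 4.0
                for j in range(n)]
               for i in range(n)]
        levels.append(tex)
    return levels
```

For a 2x2 input the chain has two levels, the coarsest being the average of the four texels.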
Abstract:
This proposed thesis is entitled “Plasma Polymerised Organic Thin Films: A Study on the Structural, Electrical, and Nonlinear Optical Properties for Possible Applications”. Polymers and polymer-based materials find enormous applications in the realm of electronics and optoelectronics. They are employed as both active and passive components in making various devices. Intense research activity has been going on in this area for the last three decades or so, and many useful contributions have been made quite accidentally. Conducting polymers are one such discovery, and ever since the discovery of conducting polyacetylene, a new branch of science has emerged in the form of synthetic metals. Conducting polymers are useful materials for many applications such as polymer displays, high-density data storage, polymer FETs, polymer LEDs, photovoltaic devices and electrochemical cells. With the emergence of molecular electronics and its potential for useful applications, organic thin films are receiving unusual attention from scientists and engineers alike. This is evident from the vast literature pertaining to this field appearing in various journals. Recently, computer-aided design of organic molecules has added further impetus to the ongoing research activities in this area. Polymers, especially conducting polymers, can be prepared both in bulk and in thin-film form. However, many applications necessitate that they be grown in thin-film form, either free-standing or on appropriate substrates. As far as their bulk counterparts are concerned, they can be prepared by various polymerisation techniques such as chemical routes and electrochemical means. A survey of the literature reveals that polymers such as polyaniline, polypyrrole, and polythiophene have been investigated with a view to studying their structural, electrical and optical properties.
Among the various alternative techniques employed for the preparation of polymer thin films, the method of plasma polymerisation deserves special attention in this context. Plasma polymerisation is an inexpensive technique and often requires very little infrastructure. The method includes the employment of ac, rf, dc, microwave and pulsed sources. These produce pinhole-free homogeneous films on appropriate substrates under controlled conditions. In a conventional plasma polymerisation setup, the monomer is fed into an evacuated chamber and an ac/rf/dc/microwave/pulsed discharge is created, which dissociates the monomer species and leads to the formation of polymer thin films. However, it has been found that the structure, and hence the properties, exhibited by plasma polymerised thin films are quite different from those of their counterparts produced by other thin-film preparation techniques such as electrochemical deposition or spin coating. The properties of these thin films can be tuned only if the interrelationship between the structure and the other properties is understood from a fundamental point of view. So, very often, a thorough evaluation of the various properties is a prerequisite for tailoring the properties of the thin films for applications. It has been found that conjugation is a necessary condition for enhancing the conductivity of polymer thin films. The rf technique of plasma polymerisation is an excellent tool to induce conjugation, and this modifies the electrical properties too. Both oxidative and reductive doping can be employed to modify the electrical properties of the polymer thin films for various applications. This is where polymer-based organic thin films score over inorganic thin films: large-area devices can be fabricated with organic semiconductors, which is difficult to achieve with inorganic materials. For such applications, a variety of polymers have been synthesized, such as polyaniline, polythiophene, polypyrrole, etc.
Newer polymers are added to this family every now and then. There are many virgin areas into which plasma polymers are yet to make a foray, namely low-k dielectrics or potential nonlinear optical materials such as optical limiters. There are also many materials which have not yet been prepared by the method of plasma polymerisation. Among the materials not yet dealt with are phenyl hydrazine and tea tree oil. The advantage of employing organic extracts like tea tree oil monomers as precursors for making plasma polymers is that value can be added to their already existing uses, and the possibility exists of converting them into electronic-grade materials, especially semiconductors and optically active materials for photonic applications. One of the major motivations of this study is to synthesize plasma polymer thin films based on aniline, phenyl hydrazine, pyrrole, tea tree oil and eucalyptus oil by employing both rf and ac plasma polymerisation techniques. This will be carried out with the objective of growing thin films on various substrates such as glass, quartz and indium tin oxide (ITO) coated glass. Various properties, namely structural, electrical, dielectric and nonlinear optical properties, are to be evaluated to establish the relationship between the structure and the other properties. Special emphasis will be laid on evaluating optical parameters such as the refractive index (n), the extinction coefficient (k), the real and imaginary components of the dielectric constant, and the optical transition energies of the polymer thin films from spectroscopic ellipsometric studies. Apart from evaluating these physical constants, it is also possible to predict from ellipsometric investigations whether a material exhibits nonlinear optical properties.
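The optical constants mentioned above are related by the standard identity for the complex dielectric function, ε = (n − ik)², which gives ε' = n² − k² and ε'' = 2nk. A minimal sketch of that conversion (the function name is an illustrative choice, not from the thesis):

```python
def dielectric_from_nk(n, k):
    """Real and imaginary parts of the complex dielectric constant
    from the refractive index n and extinction coefficient k:
    eps = (n - i*k)**2  ->  eps' = n**2 - k**2,  eps'' = 2*n*k.
    """
    return n * n - k * k, 2.0 * n * k
```

For example, a film with n = 2.0 and k = 0.5 has ε' = 3.75 and ε'' = 2.0.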
Further studies using the open-aperture z-scan technique to evaluate the nonlinear optical properties of a few selected samples, which are potential nonlinear optical materials, constitute another objective of the present study. It will be another endeavour to offer an appropriate explanation for the nonlinear optical properties displayed by these films. Doping of plasma polymers is found to modify both the electrical conductivity and the optical properties. Iodine, in particular, is found to modify the properties of polymer thin films. However, in situ iodine doping is tricky, and the film often loses its stability because of the escape of iodine. An appropriate in situ doping technique will be developed to dope iodine into the plasma polymerised thin films. Doping of polymer thin films with iodine results in improved and modified optical and electrical properties. However, tools like FTIR and UV-Vis-NIR spectroscopy are required to elucidate the structural and optical modifications imparted to the polymer films. This will be attempted here to establish the role of iodine in the modification of the properties exhibited by the films.
Abstract:
Ubiquitous computing has been an attractive research area of the past and current decades. It is about unobtrusively supporting people in their everyday tasks by means of computers. This support is made possible by the omnipresence of computers, which spontaneously join into distributed communication networks in order to exchange and process information. Ambient intelligence is an application of ubiquitous computing and a strategic research direction of the Information Society Technology programme of the European Union. The goal of ambient intelligence is a more comfortable and safer life. Distributed communication networks for ubiquitous computing are characterized by the heterogeneity of the computers involved. These range from tiny computers embedded in objects of everyday use to powerful mainframes. The computers connect spontaneously via wireless network technologies such as wireless local area networks (WLAN), Bluetooth, or UMTS. This heterogeneity complicates the development and deployment of distributed communication networks. Middleware is a software technology for reducing complexity by abstracting it into a homogeneous layer. Middleware offers a uniform view of the resources, functionality, and computers it abstracts. Distributed communication networks for ubiquitous computing are characterized by spontaneous connections between computers. Classical middleware assumes that computers maintain permanent communication relationships with one another. The concept of service-oriented architecture enables the development of middleware that also allows spontaneous connections between computers. The functionality of the middleware is realized by services, which are independent software units.
The Wireless World Research Forum describes services that future middleware should contain. These services are hosted by an execution environment. However, there are as yet no definitions of what form such an execution environment should take and what range of functions it must provide. This thesis contributes to aspects of middleware development for distributed communication networks in ubiquitous computing. The focus is on middleware and enabling technologies. The contributions are presented as concepts and ideas for the development of middleware. They cover service discovery, service updating, and contracts between services. They are provided in a framework optimized for the development of middleware. This framework, called the Framework for Applications in Mobile Environments (FAME²), includes guidelines, a definition of an execution environment, and support for various access control mechanisms to protect middleware from unauthorized use. The capabilities of the FAME² execution environment comprise:
• minimal resource usage, so that it can be used even on resource-constrained devices such as mobile phones and tiny embedded computers
• support for adapting the middleware by changing its contained services while the middleware is running
• an open interface for using practically any existing service discovery solution
• a means of updating services at runtime, in order to carry out bug-fixing, optimizing, and adapting maintenance on services
An accompanying contribution is the Extensible Constraint Framework (ECF), which makes Design by Contract (DbC) usable within FAME². DbC is a technique for formulating contracts between services and thereby increasing software quality. ECF allows the negotiation as well as the optimization of such contracts.
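The Design by Contract idea behind ECF can be illustrated with a small decorator that checks a precondition on a service's arguments and a postcondition on its result. This is a generic DbC sketch, not the actual ECF API; all names here are illustrative:

```python
def contract(pre=None, post=None):
    """Design-by-Contract style decorator: enforce a precondition on the
    arguments and a postcondition on the result, raising on violation."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if pre is not None and not pre(*args, **kwargs):
                raise ValueError("precondition violated for %s" % fn.__name__)
            result = fn(*args, **kwargs)
            if post is not None and not post(result):
                raise ValueError("postcondition violated for %s" % fn.__name__)
            return result
        return inner
    return wrap

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def service_sqrt(x):
    """A toy 'service' whose contract demands non-negative input and output."""
    return x ** 0.5
```

A caller violating the precondition (e.g. `service_sqrt(-1)`) gets a `ValueError` instead of a silently wrong result, which is the quality guarantee contracts are meant to provide.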
Abstract:
Presentation given at the Al-Azhar Engineering First Conference, AEC'89, Dec. 9-12, 1989, Cairo, Egypt. The paper suggests an infinite storage scheme divided into one online volume and an arbitrary number of off-line volumes, arranged into a linear chain, which hold records that have not been accessed recently. The online volume holds the records in sorted order (e.g. as a B-tree) and contains the shortest prefixes of the keys of records already pushed offline. As new records enter, older ones are retired to the volume that will go offline next. Statistical arguments are given for the rate at which an off-line volume needs to be fetched to reload a record that had been retired earlier. The rate depends on the distribution of access probabilities as a function of time. Applications are medical records, production records, or other data which need to be kept for a long time for legal reasons.
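The scheme described above can be sketched in a few lines: an online store holds live records, and retiring a batch moves it to a fresh offline volume, leaving behind only a shortest-key-prefix stub that says which volume to mount. This is a simplified illustration under assumed names (`TieredStore`, `retire`, etc.), not the paper's actual data layout:

```python
def shortest_unique_prefix(key, other_keys):
    """Shortest prefix of `key` that distinguishes it from every other key."""
    for n in range(1, len(key) + 1):
        p = key[:n]
        if not any(o.startswith(p) for o in other_keys if o != key):
            return p
    return key

class TieredStore:
    """One online volume plus a chain of offline volumes; retired records
    leave a (prefix -> volume index) stub in the online volume."""
    def __init__(self):
        self.online = {}    # key -> record (sorted/B-tree in the paper)
        self.stubs = {}     # key prefix -> index into the offline chain
        self.offline = []   # offline volumes, oldest first

    def put(self, key, record):
        self.online[key] = record

    def retire(self, keys):
        """Push the given records to a fresh offline volume, keeping stubs."""
        vol = {k: self.online.pop(k) for k in keys}
        self.offline.append(vol)
        all_keys = list(self.online) + list(self.stubs) + list(vol)
        for k in vol:
            self.stubs[shortest_unique_prefix(k, all_keys)] = len(self.offline) - 1

    def get(self, key):
        if key in self.online:
            return self.online[key]
        for prefix, vol_idx in self.stubs.items():
            if key.startswith(prefix):   # simulated offline fetch of volume vol_idx
                return self.offline[vol_idx].get(key)
        return None
```

A real implementation would keep the stubs inside the sorted structure itself and mount the physical volume on a stub hit; the sketch only shows the bookkeeping.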
Abstract:
The following project consists of analysing how a SAN system works, in order to see how better performance can be obtained. The main objective is to understand how our SAN, built with iSCSI over the network, will behave; we want to see the operations, the data, and the results involved in creating a RAID from the non-local disks of a computer across a LAN.
Abstract:
This paper describes a prototype grid infrastructure, called the eMinerals minigrid, for molecular simulation scientists, which is based on an integration of shared compute and data resources. We describe the key components, namely the use of Condor pools, Linux/Unix clusters with PBS and IBM's LoadLeveler job handling tools, the use of Globus for security handling, the use of Condor-G tools for wrapping Globus job-submit commands, Condor's DAGMan tool for handling workflow, the Storage Resource Broker for handling data, and the CCLRC dataportal and associated tools for both archiving data with metadata and making data available to other workers.
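DAGMan workflows of the kind used above are described in a plain-text file with one `JOB` line per task and `PARENT ... CHILD ...` lines for the ordering. A small sketch that emits such a description (the helper name and the submit-file names are illustrative, not from the paper):

```python
def write_dag(jobs, edges, path=None):
    """Emit a Condor DAGMan description.

    `jobs` is a list of (name, submit_file) pairs; `edges` is a list of
    (parent, child) pairs giving the workflow ordering.  Returns the text
    and optionally writes it to `path`.
    """
    lines = ["JOB %s %s" % (name, submit) for name, submit in jobs]
    lines += ["PARENT %s CHILD %s" % (p, c) for p, c in edges]
    text = "\n".join(lines) + "\n"
    if path is not None:
        with open(path, "w") as f:
            f.write(text)
    return text
```

For a fetch-then-simulate workflow, `write_dag([("fetch", "fetch.sub"), ("sim", "sim.sub")], [("fetch", "sim")])` produces a file DAGMan can run with `condor_submit_dag`.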
Abstract:
It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure-function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This ‘Cartesian’ description constitutes a completely accurate mapping of dendritic morphology but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise ‘blueprint’ of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of ‘fundamental’, measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense.
If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the ‘computational neuroanatomy’ strategy for neuroscience databases.
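The algorithmic-generation idea can be sketched with a toy recursive grower: a dendrite bifurcates with some probability and tapers its diameter until it terminates, and classical morphometrics (e.g. number of tips) can then be measured on the result. The specific growth rules and parameter names here are illustrative assumptions, not the published L-NEURON or ARBORVITAE algorithms:

```python
import random

def grow_dendrite(rng, branch_prob=0.4, taper=0.8, diameter=2.0, max_depth=8):
    """Recursively generate a dendritic tree from a handful of 'fundamental'
    parameters: bifurcation probability, diameter taper, and a depth bound."""
    node = {"diameter": diameter, "children": []}
    if diameter > 0.2 and max_depth > 0 and rng.random() < branch_prob:
        for _ in range(2):  # bifurcate: two daughter branches, tapered
            node["children"].append(
                grow_dendrite(rng, branch_prob, taper,
                              diameter * taper, max_depth - 1))
    return node

def count_tips(tree):
    """Number of terminal tips, a classical morphometric statistic."""
    if not tree["children"]:
        return 1
    return sum(count_tips(c) for c in tree["children"])
```

Data amplification in this style means drawing many trees from the same parameter values with different random seeds and checking that the resulting tip counts, branch orders, etc. match the measured distributions.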
Abstract:
“Point and click” interactions remain one of the key features of graphical user interfaces (GUIs). People with motion impairments, however, can often have difficulty with accurate control of standard pointing devices. This paper discusses work that aims to reveal the nature of these difficulties through analyses that consider the cursor’s path of movement. A range of cursor measures was applied, and a number of them were found to be significant in capturing the differences between able-bodied users and motion-impaired users, as well as the differences between a haptic force feedback condition and a control condition. The cursor measures found in the literature, however, do not make up a comprehensive list, but provide a starting point for analysing cursor movements more completely. Six new cursor characteristics for motion-impaired users are introduced to capture aspects of cursor movement different from those already proposed.
Abstract:
People with motion impairments can often have difficulty with accurate control of standard pointing devices for computer input. The nature of the difficulties may vary, so to be most effective, methods of assisting cursor control must be suited to each user's needs. The work presented here involves a study of cursor trajectories as a means of assessing the requirements of motion-impaired computer users. A new cursor characteristic is proposed that attempts to capture difficulties with moving the cursor in a smooth trajectory. A study was conducted to see if haptic tunnels could improve performance in "point and click" tasks. Results indicate that the tunnels reduced times to target for those users identified by the new characteristic as having the most difficulty moving in a smooth trajectory. This suggests that cursor characteristics have potential applications in assessing a user's cursor control capabilities, which can then be used to determine appropriate methods of assistance.
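One simple trajectory-based measure in the spirit of the cursor characteristics discussed above is path efficiency: the ratio of the straight-line distance to the actual path length of the sampled cursor positions. This particular metric and its name are an illustrative example, not necessarily one of the characteristics proposed in the papers:

```python
import math

def path_efficiency(points):
    """Ratio of straight-line distance to actual path length for a cursor
    trajectory given as (x, y) samples; 1.0 means a perfectly direct movement,
    and lower values indicate a more meandering (less smooth) path."""
    if len(points) < 2:
        return 1.0
    path = sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))
    direct = math.dist(points[0], points[-1])
    return direct / path if path else 1.0
```

A direct move from (0, 0) to (3, 4) scores 1.0, while an L-shaped detour through (0, 4) scores 5/7, flagging the less efficient movement.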
Abstract:
Experiments demonstrating human enhancement through the implantation of technology in healthy humans have been performed for over a decade by some academic research groups. More recently, technology enthusiasts have begun to realize the potential of implantable technology such as glass capsule RFID transponders. In this paper it is argued that implantable RFID devices have evolved to the point whereby we should consider the devices themselves as simple computers. Presented here is the infection with a computer virus of an RFID device implanted in a human. Coupled with our developing concept of what constitutes the human body and its boundaries, it is argued that this study has given rise to the world’s first human infected with a computer virus. It has taken the wider academic community some time to agree that meaningful discourse on the topic of implantable technology is of value. As developments in medical technologies point to greater possibilities for enhancement, this shift in thinking is not too soon in coming.
Abstract:
A forum is a valuable tool to foster reflection in an in-depth discussion; however, it forces the course mediator to continually pay close attention in order to coordinate learners' activities. Moreover, monitoring a forum is time consuming, given that it is impossible to know in advance when new messages are going to be posted. Additionally, a forum may be inactive for a long period and suddenly receive a burst of messages, forcing forum mediators to log on frequently in order to know how the discussion is unfolding and to intervene whenever necessary. Mediators also need to deal with a large number of messages to identify off-pattern situations. This work presents a piece of action research that investigates how to improve coordination support in a forum using mobile devices, in order to mitigate mediators' difficulties in following the status of a forum. Based on summarized information extracted from message metadata, mediators consult visual information summaries on PDAs and receive textual notifications on their mobile phones. This investigation revealed that mediators used the mobile-based coordination support to keep informed of what is taking place within the forum without the need to log on to their desktop computer. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
6 x 8 cm^2 electrochromic devices (ECDs) with the configuration K-glass/EC-layer/electrolyte/ion-storage (IS) layer/K-glass have been assembled using Nb2O5:Mo EC layers, a (CeO2)0.81-TiO2 IS layer and a new gelatin electrolyte containing Li+ ions. The structure of the electrolyte is X-ray amorphous. Its ionic conductivity passed through a maximum of 1.5 x 10^-5 S/cm at a lithium concentration of 0.3 g/15 ml. The value increases with temperature and follows an Arrhenius law with an activation energy of 49.5 kJ/mol. All solid-state devices show a reversible gray coloration, a long-term stability of more than 25,000 switching cycles (+/- 2.0 V/90 s), and a transmission change at 550 nm between 60% (bleached state) and 40% (colored state), corresponding to a change of the optical density (Delta OD = 0.15) with a coloration efficiency increasing from 10 cm^2/C (initial cycle) to 23 cm^2/C (25,000th cycle). (c) 2007 Elsevier B.V. All rights reserved.
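The Arrhenius behaviour reported above, sigma(T) = sigma0 * exp(-Ea / (R*T)) with Ea = 49.5 kJ/mol, is easy to evaluate numerically. The prefactor `sigma0` below is a free parameter (the abstract does not report it), so the sketch only illustrates the temperature dependence, not absolute values:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_conductivity(sigma0, Ea, T):
    """Ionic conductivity following sigma = sigma0 * exp(-Ea / (R * T)).

    sigma0 : pre-exponential factor (same units as the result, e.g. S/cm)
    Ea     : activation energy in J/mol (49.5 kJ/mol for this electrolyte)
    T      : absolute temperature in K
    """
    return sigma0 * math.exp(-Ea / (R * T))
```

With Ea = 49.5 kJ/mol the conductivity rises steeply with temperature, as the abstract states: a 20 K increase around room temperature raises it severalfold.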
Abstract:
Bin planning (arrangement) is a key factor in the timber industry. Improper planning of the storage bins may lead to inefficient transportation of resources, which threatens overall efficiency and thereby limits the profit margins of sawmills. To address this challenge, a simulation model has been developed. However, as numerous alternatives are available for arranging bins, simulating all possibilities would take an enormous amount of time and is computationally infeasible. A discrete-event simulation model incorporating meta-heuristic algorithms has therefore been investigated in this study. Preliminary investigations indicate that the results achieved by the GA-based simulation model are promising and better than those of the other meta-heuristic algorithm considered. Further, a sensitivity analysis has been carried out on the GA-based optimal arrangement, which contributes to gaining insights and knowledge about the real system, ultimately leading to improved efficiency in sawmill yards. It is expected that the results achieved in this work will support timber industries in making optimal decisions with respect to the arrangement of storage bins in a sawmill yard.
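The GA-based search over bin arrangements can be sketched as a genetic algorithm with permutation encoding, binary tournament selection, swap mutation and elitist tracking. In the study the fitness would come from the discrete-event simulation; here `cost` is any callable scoring an arrangement, and all names and operator choices are illustrative assumptions rather than the paper's actual configuration:

```python
import random

def evolve_arrangement(cost, n_bins, pop_size=30, generations=60, seed=1):
    """Toy GA over bin orderings (permutations of range(n_bins)).

    cost : callable mapping an arrangement (list of bin indices) to a
           scalar to be minimized (the simulation model in the study).
    """
    rng = random.Random(seed)
    pop = [rng.sample(range(n_bins), n_bins) for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)                  # binary tournament
            child = list(min(a, b, key=cost))          # winner is cloned
            i, j = rng.randrange(n_bins), rng.randrange(n_bins)
            child[i], child[j] = child[j], child[i]    # swap mutation
            nxt.append(child)
        pop = nxt
        best = min(pop + [best], key=cost)             # elitist tracking
    return best
```

With a toy cost such as the displacement from the identity ordering, the GA steadily drives the population toward low-cost arrangements; swapping in the simulation model as `cost` gives the structure used in the study.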