12 results for Equipment Apparatus Devices and Instrumentation
in Helda - Digital Repository of University of Helsinki
Abstract:
Plastic surfaces are a group of materials used for many purposes. The present study focused on methods for investigating the surface topography, wearing and cleanability of polyvinyl chloride (PVC) model surfaces and industrial plastic surfaces. Contact profilometry, scanning electron microscopy (SEM) and atomic force microscopy (AFM) are powerful methods for studying the topography of plastic surfaces. Although each has its own limitations, together they form an effective tool providing useful information on surface topography, especially when studying laboratory-made PVC model surfaces with known chemical compositions and structures. All laboratory-made PVC plastic surfaces examined in this work could be considered smooth according to both AFM and profilometer measurements, because the height differences on every surface are in the nanoscale. Industrial plastic surfaces are a complex group of materials because of their chemical and topographical heterogeneity, but they are nevertheless important reference materials when developing cleaning and wearing methods. According to the results of this study, the Soiling and Wearing Drum and Frick-Taber methods are very useful for simulating three-body wearing of plastic surfaces. Both investigated wearing methods can be used to compare the wearing of different plastic materials, given appropriate methods for evaluating wear and industrial use. In this study, physical methods were developed and adapted from other fields of material research to cleanability studies. The thesis focuses on the methodology for investigating the cleanability of plastic surfaces under realistic conditions, where surface topography and the effect of wear on cleanability were among the major topics. A colorimetric method proved suitable for examining the cleanability of the industrial plastic surfaces. The results were utilized to evaluate the relationship between cleanability and the surface properties of plastic surfaces. The devices and methods used in the work can be utilized both in material research and in product development.
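Smoothness judgements of the kind made above are conventionally based on standard amplitude roughness parameters. As a minimal sketch (not the thesis's own analysis code; the profile values are hypothetical), the arithmetic average roughness Ra and the root-mean-square roughness Rq can be computed from a single profilometer or AFM line scan as follows:

    # Standard roughness parameters from one measured height profile
    # (e.g. a profilometer or AFM line scan). Profile values are hypothetical.
    import numpy as np

    def roughness(heights_nm):
        """Return (Ra, Rq) in the same unit as the input heights."""
        z = heights_nm - heights_nm.mean()    # reference heights to the mean line
        ra = float(np.mean(np.abs(z)))        # arithmetic average roughness
        rq = float(np.sqrt(np.mean(z ** 2)))  # root-mean-square roughness
        return ra, rq

    profile = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 1.3, 0.7, 1.0])  # heights, nm
    print(roughness(profile))  # Ra and Rq of a few tenths of a nanometre

On a surface that is smooth in the sense used above, both parameters stay in the nanometre range or below.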
Abstract:
The light emitted by flat panel displays (FPD) can be generated in many different ways, such as alternating current thin film electroluminescence (ACTFEL), liquid crystal display (LCD), light emitting diode (LED), or plasma display panel (PDP) technologies. In this work, the focus was on ACTFEL devices, and the goal was to develop new thin film processes for light emitting materials in ACTFEL devices. The films were deposited with the atomic layer deposition (ALD) method, which has been utilized in the manufacturing of ACTFEL displays since the mid-1980s. The ALD method is based on surface-controlled, self-terminated reactions, and a maximum of one layer of the desired material can be prepared during one deposition cycle. Therefore, the film thickness can be controlled simply by adjusting the number of deposition cycles. In addition, both large areas and deep trench structures can be covered uniformly. During this work, new ALD processes were developed for the following thin film materials: BaS, CuxS, MnS, PbS, SrS, SrSe, SrTe, SrS1-xSex, ZnS, and ZnS1-xSex. In addition, several ACTFEL devices were prepared where the light emitting material was a BaS, SrS, SrS1-xSex, ZnS, or ZnS1-xSex thin film doped with Ce, Cu, Eu, Mn, or Pb. The sulfoselenide films were made by substituting elemental selenium for sulfur on the substrate surface during film deposition. In this way, it was possible to replace a maximum of 90% of the sulfur with selenium, and the XRD analyses indicated that the films were solid solutions. The polycrystalline BaS, SrS, and ZnS thin films were deposited at 180-400, 120-460, and 280-500 °C, respectively, and the processes had a wide temperature range where the growth rate of the films was independent of the deposition temperature. The electroluminescence studies showed that the doped sulfoselenide films resulted in low emission intensity. However, the emission intensities and emission colors of the doped SrS, BaS, and ZnS films were comparable with those found in earlier studies. It was also shown that the electro-optical properties of the ZnS:Mn devices differed as a consequence of the different ZnS:Mn processes used. Finally, it was concluded that because higher deposition temperatures seemed to result in higher emission intensities, the thermal stability of the reactants plays a significant role when the light emitting materials of ACTFEL devices are deposited with the ALD method.
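The digital thickness control described above, at most one monolayer per self-terminated cycle, can be illustrated with a minimal sketch; the growth-per-cycle value below is a hypothetical example, not a figure reported in the thesis:

    # Ideal ALD: film thickness scales linearly with the number of cycles.
    def ald_thickness_nm(cycles, growth_per_cycle_nm):
        return cycles * growth_per_cycle_nm

    # e.g. a hypothetical process growing ~0.1 nm/cycle needs 1000 cycles
    # for a 100 nm film
    print(ald_thickness_nm(1000, 0.1))  # -> 100.0

In practice the growth per cycle is process- and temperature-dependent, which is why the wide temperature windows reported above matter.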
Abstract:
In recent years, XML has been widely adopted as a universal format for structured data. A variety of XML-based systems have emerged, most prominently SOAP for Web services, XMPP for instant messaging, and RSS and Atom for content syndication. This popularity is helped by the excellent support for XML processing in many programming languages and by the variety of XML-based technologies for the more complex needs of applications. Concurrently with this rise of XML, there has also been a qualitative expansion of the Internet's scope. Namely, mobile devices are becoming capable enough to be full-fledged members of various distributed systems. Such devices are battery-powered, their network connections are based on wireless technologies, and their processing capabilities are typically much lower than those of stationary computers. This dissertation presents work performed to reconcile these two developments. XML, as a highly redundant text-based format, is not obviously suitable for mobile devices that need to avoid extraneous processing and communication. Furthermore, the protocols and systems commonly used in XML messaging are often designed for fixed networks and may make assumptions that do not hold in wireless environments. This work identifies four areas of improvement in XML messaging systems: the programming interfaces to the system itself and to XML processing, the serialization format used for the messages, and the protocol used to transmit the messages. We show a complete system that improves the overall performance of XML messaging through consideration of these areas. The work is centered on actually implementing the proposals in a form usable on real mobile devices. The experimentation is performed on actual devices and real networks using the messaging system implemented as a part of this work. The experimentation is extensive and, because several different devices were used, also provides a glimpse of what the performance of these systems may look like in the future.
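One of the XML-processing concerns named above, bounded-memory parsing on a constrained device, can be sketched with standard pull-style parsing; the message format here is hypothetical and the sketch is illustrative, not the system built in the dissertation:

    # Pull-style parsing with iterparse avoids building a full document
    # tree, keeping memory use bounded regardless of message count.
    import io
    import xml.etree.ElementTree as ET

    doc = io.BytesIO(b"<messages><msg id='1'>hi</msg><msg id='2'>ok</msg></messages>")
    for event, elem in ET.iterparse(doc, events=("end",)):
        if elem.tag == "msg":
            print(elem.get("id"), elem.text)
            elem.clear()  # release the finished subtree immediately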
Abstract:
Current smartphones have a storage capacity of several gigabytes, and more and more information is stored on mobile devices. To meet the challenge of organizing this information, we turn to desktop search. Users often possess multiple devices and synchronize (subsets of) information between them, which makes file synchronization all the more important. This thesis presents Dessy, a desktop search and synchronization framework for mobile devices. Dessy uses desktop search techniques such as indexing, query and index term stemming, and search relevance ranking. Dessy finds files by their content, metadata, and context information. For example, PDF files may be found by their author, subject, title, or text, and the EXIF data of JPEG files may be used in finding them. User-defined tags can be added to files to organize and retrieve them later. Retrieved files are ranked according to their relevance to the search query. The Dessy prototype uses the BM25 ranking function, widely used in information retrieval. Dessy provides an interface for locating files for both users and applications. Dessy is closely integrated with the Syxaw file synchronizer, which provides efficient file and metadata synchronization, optimizing network usage. Dessy supports synchronization of search results, individual files, and directory trees. It allows finding and synchronizing files that reside on remote computers or on the Internet. Dessy is designed to solve the problem of efficient mobile desktop search and synchronization, also supporting remote and Internet search. Remote searches may be carried out offline using a downloaded index, or while connected to the remote machine over a weak network. To secure user data, transmissions between the Dessy client and server are encrypted using symmetric encryption, with the symmetric encryption keys exchanged via RSA key exchange. Dessy emphasizes extensibility; even the cryptography can be extended. Users may tag their files with context tags and control custom file metadata, and adding new indexed file types, metadata fields, ranking methods, and index types is easy. Finding files is done with virtual directories, which are views into the user's files, browsable by regular file managers. On mobile devices, the Dessy GUI provides easy access to the search and synchronization system. This thesis includes results of Dessy synchronization and search experiments, including power usage measurements. Finally, Dessy has been designed with mobility and device constraints in mind: it requires only MIDP 2.0 Mobile Java with FileConnection support, and Java 1.5 on desktop machines.
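The BM25 ranking function mentioned above has a standard textbook form; the sketch below shows that form only and is not Dessy's code (its exact variant and parameter values may differ):

    # BM25: score one document (a list of terms) against a query.
    import math

    def bm25_score(query_terms, doc_terms, all_docs, k1=1.2, b=0.75):
        n = len(all_docs)
        avgdl = sum(len(d) for d in all_docs) / n          # average document length
        score = 0.0
        for q in query_terms:
            df = sum(1 for d in all_docs if q in d)        # document frequency
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)  # smoothed IDF
            tf = doc_terms.count(q)                        # term frequency
            score += idf * tf * (k1 + 1) / (
                tf + k1 * (1 - b + b * len(doc_terms) / avgdl))
        return score

    docs = [["mobile", "search"], ["file", "sync", "mobile"], ["desktop", "search"]]
    print(bm25_score(["mobile", "search"], docs[0], docs))

Longer-than-average documents are penalized through the b term, and rare query terms weigh more through the IDF factor.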
Abstract:
The subject of this work is the poetics of «The Wax Effigy», a short novel or novella by Jurii Tynianov, a Russian writer, literary critic, historian of literature and prominent literary theoretician. The plot structure of the novel is based upon a real event: the creation by Bartolomeo Carlo Rastrelli, in 1725, of a wax sculpture of the first Russian emperor, Peter the Great. «Construction of the Sham» consists of three chapters, an introduction and a conclusion. Because Tynianov was at once a prose writer and a theoretician of literature, it seemed important to consider the reception of his prose and his works on literary theory in relation to each other; the introduction is devoted to this task. The first chapter concerns the history of the creation of the novel and its reception. Tynianov stopped writing one short story in order to write the novel, and the two works have some common traits; it seems almost obvious that his work on the first text was a real step toward the creation of the second. In the first story there is an opposition of dead/alive which semantically prefigures a central motif in «The Wax Effigy». An analysis of the reception of the novel demonstrated that almost every critic writing about it has described it as nonsense. Critics considered Tynianov's work in terms of «devices» and «content» and could not understand how the devices are related to the content of the novel: the novel was seen as a signifier without any signified. Implicitly, critics took the signified of the novel to be the traditional one of the historical novel, the historiosophical «idea» embodied in the system of literary devices. In this case literature becomes something instrumental, a kind of expression of extraliterary content. In contradistinction to that, Tynianov considered literary semantics an effect of the literary structure: from his point of view, literary sense is immanent to the process of signification accomplished inside the literary text. The second chapter is devoted to a rhetorical analysis of the opposition dead/alive. Tynianov systematically compares both terms of the opposition; as a result of this strategy, the wax effigy of the dead emperor becomes «as if» alive and the world of living people «as if» dead. The qualifier «as if» refers to the fact that Tynianov creates an ambiguous semantic system. This rhetoric is related to European Romanticism and its «fantastic literature» (Mérimée, Hoffmann, Maupassant, etc.). But Tynianov demonstrates the linguistic origin of the strange phantoms created by the Romantics; he demystifies these idols by parodying fantastic literature, that is, by showing «how it was done». At the same time, the opposition mentioned above refers to his idea of «incongruity», which plays a prominent role in Tynianov's theory but has never been conceptualised. The incongruity is an inner collision of the literary text; from Tynianov's point of view, the meaning of a work of literature is always a dynamic collision of semantically heterogeneous elements struggling with each other. In «The Wax Effigy» Tynianov creates a metalevel of the work, demonstrating the process of creation of the literary sense. The third chapter is a reconstruction of Tynianov's conception of historical prose, specifically of the mechanisms by which historical facts are transformed into literary events.
Tynianov thought that the task of the historical novelist is to depict his hero as an actor, to demonstrate that, as a wearer of many masks, he is a creator of appearances and ambiguities. Here, in the «figure of fiction» (Andrei Belyi), the very idea of historical prose and the rhetoric employed in «The Wax Effigy», history and literature meet. In his last theoretical work, «On parody», Tynianov writes about the so-called sham structure of parody. In his opinion, every parody is at once a text about other texts and a «serious» work that can be read as a text about «reality». This twofold structure of parody is also that of «The Wax Effigy»: the text speaks about the ambiguities of history and the ambiguities of literary sense, about the social reality of the past and about the workings of literature itself. «The Wax Effigy» is written as an autoreflective text, as an experiment in literary semantics, as a system of literary ambiguities, of the hero, the rhetoric and the text itself. The meaning of the novel is created not by the embodiment of an extraliterary idea, but by the process of signification accomplished inside the work of literature. In this sense Tynianov's novel is a parody, a break with the tradition of the historical novel that preceded «The Wax Effigy».
Abstract:
The use of head-mounted displays (HMDs) can produce both positive and negative experiences. In an effort to increase positive experiences and avoid negative ones, researchers have identified a number of variables that may cause sickness and eyestrain, although the exact nature of their relationship to HMDs may vary depending on the tasks and the environments. Other, non-sickness-related aspects of HMDs, such as users' opinions and future decisions associated with task enjoyment and interest, have attracted little attention in the research community. In this thesis, user experiences associated with the use of monocular and bi-ocular HMDs were studied. These include eyestrain and sickness caused by current HMDs, the advantages and disadvantages of adjustable HMDs, HMDs as accessories for small multimedia devices, and the impact of individual characteristics and evaluated experiences on reported outcomes and opinions. The results indicate that today's commercial HMDs do not induce serious sickness or eyestrain. Reported adverse symptoms have some influence on HMD-related opinions, but the nature of the impact depends on the tasks and the devices used. As an accessory to handheld devices and as a personal viewing device, HMDs may increase use duration and enable users to perform tasks not suitable for small screens. Well-designed and functional adjustable HMDs, especially monocular HMDs, increase viewing comfort and usability, which in turn may have a positive effect on product-related satisfaction. The role of individual characteristics in understanding HMD-related experiences has not changed significantly. Explaining other HMD-related experiences, especially forward-looking interests, also requires understanding more stable individual traits and motivations.
Abstract:
Pressurised hot water extraction (PHWE) exploits the unique temperature-dependent solvent properties of water, minimising the use of harmful organic solvents. Water is an environmentally friendly, cheap and easily available extraction medium. The effects of temperature, pressure and extraction time in PHWE have often been studied, but here the emphasis was on other parameters important for the extraction, most notably the dimensions of the extraction vessel and the stability and solubility of the analytes to be extracted. Non-linear data analysis and self-organising maps were employed in the data analysis to obtain correlations between the parameters studied, the recoveries, and the relative errors. First, PHWE was combined on-line with liquid chromatography-gas chromatography (LC-GC), and the system was applied to the extraction and analysis of polycyclic aromatic hydrocarbons (PAHs) in sediment. The method is of superior sensitivity compared with the traditional methods, and only a small 10 mg sample is required for analysis. The commercial extraction vessels were replaced by laboratory-made stainless steel vessels because of some problems that arose; the performance of the laboratory-made vessels was comparable to that of the commercial ones. In an investigation of the effect of thermal desorption in PHWE, it was found that at lower temperatures (200 °C and 250 °C) the effect of thermal desorption is smaller than the effect of the solvating properties of hot water, whereas at 300 °C thermal desorption is the main mechanism. The effect of the geometry of the extraction vessel on recoveries was studied with five specially constructed extraction vessels. In addition to the extraction vessel geometry, the sediment packing style and the direction of water flow through the vessel were investigated. The geometry of the vessel was found to have only a minor effect on the recoveries, and the same was true of the sediment packing style and the direction of water flow. These are good results, because these parameters do not have to be carefully optimised before the start of extractions. Liquid-liquid extraction (LLE) and solid-phase extraction (SPE) were compared as trapping techniques for PHWE. LLE was more robust than SPE and provided better recoveries and repeatabilities. Problems related to blocking of the Tenax trap and unrepeatable trapping of the analytes were encountered in SPE. Thus, although LLE is more labour-intensive, it can be recommended over SPE. The stabilities of the PAHs in aqueous solutions were measured using a batch-type reaction vessel. Degradation was observed at 300 °C even with the shortest heating time, and ketones, quinones and other oxidation products were observed. Although the conditions of the stability studies differed considerably from the extraction conditions in PHWE, the results indicate that the risk of analyte degradation must be taken into account in PHWE. The aqueous solubilities of acenaphthene, anthracene and pyrene were measured, first below and then above the melting points of the analytes. The measurements below the melting point were made to check that the equipment was working, and the results were compared with those obtained earlier; good agreement was found between the measured and literature values. A new saturation cell was constructed for the solubility measurements above the melting points of the analytes, because the flow-through saturation cell could not be used there.
An exponential relationship was found between temperature and the solubilities measured for pyrene and anthracene.
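An exponential relationship of this kind is conveniently fitted on a log scale, since ln(S) is then linear in T. A minimal sketch with hypothetical data points (not measurements from the thesis):

    # Fit S = b * exp(a*T) via a straight-line fit of ln(S) versus T.
    import numpy as np

    T = np.array([100.0, 150.0, 200.0, 250.0])   # temperature, deg C (hypothetical)
    S = np.array([0.5, 4.0, 35.0, 300.0])        # solubility, arbitrary units

    slope, intercept = np.polyfit(T, np.log(S), 1)
    a, b = slope, np.exp(intercept)
    print(f"S(T) ~ {b:.3g} * exp({a:.3g} * T)")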
Abstract:
Delay and disruption tolerant networks (DTNs) are computer networks where round-trip delays and error rates are high and disconnections are frequent. Examples of these extreme networks are space communications, sensor networks, connecting rural villages to the Internet, and even interconnecting commodity portable wireless devices and mobile phones. The basic elements of delay tolerant networks are store-and-forward message transfer resembling traditional mail delivery, opportunistic and intermittent routing, and an extensible cross-region resource naming service. Individual nodes of the network take an active part in routing the traffic and provide in-network data storage for application data that flows through the network. Application architectures for delay tolerant networks also differ from those used in traditional networks. It has become feasible to design applications that are network-aware and opportunistic, taking advantage of different network connection speeds and capabilities. This might change some of the basic paradigms of network application design. DTN protocols also support the design of applications that depend on processes persisting over reboots and power failures. DTN protocols could further be applicable to traditional networks in cases where high tolerance to delays or errors is desired. It is apparent that challenged networks also challenge the traditional, strictly layered model of network application design. This thesis provides an extensive introduction to delay tolerant networking concepts and applications. Most attention is given to the challenging problems of routing and application architecture. Finally, the future prospects of DTN applications and implementations are envisioned through recent research results and an interview with an active DTN researcher.
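The store-and-forward element described above can be sketched minimally: a node buffers bundles until a contact to a next hop appears, then forwards them opportunistically. The names and the naive forwarding policy below are illustrative only, not a DTN protocol implementation:

    # Each node stores in-transit bundles and forwards them on contact.
    from collections import deque

    class DtnNode:
        def __init__(self, name):
            self.name = name
            self.buffer = deque()        # in-network storage

        def receive(self, bundle):
            self.buffer.append(bundle)   # store until a contact appears

        def contact(self, neighbor):
            while self.buffer:           # forward everything on contact
                neighbor.receive(self.buffer.popleft())

    village, mule, city = DtnNode("village"), DtnNode("mule"), DtnNode("city")
    village.receive({"dst": "city", "payload": "sensor data"})
    village.contact(mule)    # intermittent link 1 (e.g. a passing vehicle)
    mule.contact(city)       # intermittent link 2
    print(len(city.buffer))  # -> 1: delivery despite no end-to-end path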
Abstract:
The study examines various uses of computer technology in the acquisition of information by visually impaired people. For this study, 29 visually impaired persons took part in a survey about their experiences concerning the acquisition of information and the use of computers, especially with a screen magnification program, a speech synthesizer and a braille display. According to the responses, the evolution of computer technology offers an important possibility for visually impaired people to cope with everyday activities and to interact with the environment. Nevertheless, the functionality of assistive technology needs further development to become more usable and versatile. Since the challenges of independent observation of the environment were emphasized in the survey, the study led to the development of a portable text vision system called Tekstinäkö. Unlike typical stand-alone applications, the Tekstinäkö system was constructed by combining devices and programs that are readily available on the consumer market. As the system operates, pictures are taken by a digital camera and instantly transmitted to a text recognition program on a laptop computer, which then speaks the recognized text out loud using a speech synthesizer. Visually impaired test users reported that even uncertain interpretations of the texts in the environment given by the Tekstinäkö system are at least a welcome addition to their perception of the environment. It became clear that even with modest development work it is possible to bring new, useful and valuable methods into the everyday life of disabled people. The unconventional production process of the system also proved efficient. The achieved results and the proposed working model offer one suggestion for giving due attention to the easily overlooked needs of people with special abilities. ACM Computing Classification System (1998): K.4.2 Social Issues: Assistive technologies for persons with disabilities; I.4.9 Image processing and computer vision: Applications. Keywords: visually impaired, computer-assisted, information, acquisition, assistive technology, computer, screen magnification program, speech synthesizer, braille display, survey, testing, text recognition, camera, text, perception, picture, environment, transportation, guidance, independence, vision, disabled, blind, speech, synthesizer, braille, software engineering, programming, program, system, freeware, shareware, open source, Tekstinäkö, text vision, TopOCR, Autohotkey, computer engineering, computer science
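The camera-to-OCR-to-speech pipeline described above can be sketched with common off-the-shelf libraries. The actual system combined consumer components (the keywords mention TopOCR and Autohotkey); pytesseract and pyttsx3 below are stand-ins chosen for illustration, not the thesis's components:

    # Recognize text in a camera snapshot and read it out loud.
    from PIL import Image
    import pytesseract   # OCR engine wrapper, assumed installed
    import pyttsx3       # offline speech synthesizer, assumed installed

    def speak_text_in_image(path):
        text = pytesseract.image_to_string(Image.open(path))
        if text.strip():
            engine = pyttsx3.init()
            engine.say(text)       # read the recognized text out loud
            engine.runAndWait()

    speak_text_in_image("snapshot.jpg")  # hypothetical camera frame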
Abstract:
Mobile RFID services for the Internet of Things can be created by using RFID as an enabling technology in mobile devices. Humans, devices, and things are both the content providers and the users of these services. Mobile RFID services can either be provided on mobile devices as stand-alone services or be combined with end-to-end systems. When different service solution scenarios are considered, there is more than one possible architectural solution in the network, mobile, and back-end server areas. By combining the solutions wisely and applying software architecture and engineering principles, a combined solution can be formulated for certain application-specific use cases. This thesis illustrates these ideas and shows how the solutions can be applied in real-world use case scenarios. A case study is used to add further evidence.
Abstract:
Silicon strip detectors are fast and cost-effective, and they have an excellent spatial resolution. They are widely used in many high-energy physics experiments. Modern high-energy physics experiments, such as the LHC experiments, impose harsh operating conditions on the detectors. The high radiation doses cause the detectors to eventually fail as a result of excessive radiation damage. This has led to a need to study radiation tolerance using various techniques. At the same time, a need has arisen to operate sensors approaching the end of their lifetimes. The goal of this work is to demonstrate that novel detectors can survive the environment that is foreseen for future high-energy physics experiments. To reach this goal, measurement apparatuses are built, the devices are used to measure the properties of irradiated detectors, the measurement data are analyzed, and conclusions are drawn. Three measurement apparatuses built as a part of this work are described: two telescopes measuring tracks in a particle accelerator beam and one telescope measuring the tracks of cosmic particles. The telescopes comprise layers of reference detectors providing the reference track, slots for the devices under test, the supporting mechanics, electronics, software, and the trigger system. All three devices work. The differences between these devices are discussed, and the reconstruction of the reference tracks and the analysis of the device under test are presented. Traditionally, silicon detectors have produced a very clear response to the particles being measured. In the case of detectors nearing the end of their lifetimes, this is no longer true. A new method that uses the reference tracks to form clusters is presented. The method provides less biased results compared to the traditional analysis, especially when studying the response of heavily irradiated detectors. Means to avoid false results in demonstrating the particle-finding capabilities of a detector are also discussed. The devices and analysis methods are primarily used to study strip detectors made of Magnetic Czochralski silicon. The detectors studied were irradiated to various fluences prior to measurement. The results show that Magnetic Czochralski silicon has a good radiation tolerance and is suitable for future high-energy physics experiments.
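Reference-track reconstruction of the kind described above reduces, in the simplest telescope geometry, to fitting a straight line through the hit positions of the reference planes and interpolating it to the device under test (DUT). A minimal sketch with hypothetical numbers, not the analysis code of this work:

    # Least-squares straight-line track fit through reference-plane hits.
    import numpy as np

    z_planes = np.array([0.0, 50.0, 100.0, 150.0])  # plane positions along beam, mm
    x_hits   = np.array([1.02, 1.21, 1.39, 1.61])   # measured hit positions, mm

    slope, offset = np.polyfit(z_planes, x_hits, 1)

    z_dut = 75.0                         # DUT position along the beam, mm
    x_expected = slope * z_dut + offset  # where the reference track crosses the DUT
    print(f"expected DUT hit at x = {x_expected:.3f} mm")

The interpolated crossing point is what allows clusters to be formed around the expected position even when the detector response is weak.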
Abstract:
A better understanding of vacuum arcs is desirable in many of today's 'big science' projects, including linear colliders, fusion devices, and satellite systems. For the Compact Linear Collider (CLIC) design, radio-frequency (RF) breakdowns occurring in accelerating cavities influence efficiency optimisation and cost reduction issues. Studying vacuum arcs both theoretically and experimentally under well-defined and reproducible direct-current (DC) conditions is the first step towards exploring RF breakdowns. In this thesis, we have studied Cu DC vacuum arcs with a combination of experiments, a particle-in-cell (PIC) model of the arc plasma, and molecular dynamics (MD) simulations of the subsequent surface damaging mechanism. We have also developed the 2D Arc-PIC code and the physics model incorporated in it, especially for the purpose of modelling plasma initiation in vacuum arcs. Assuming the presence of a field emitter at the cathode initially, we have identified the conditions for plasma formation and have studied the transition from the field emission stage to a fully developed arc. The 'footing' of the plasma is the cathode spot, which supplies the arc continuously with particles; the high-density core of the plasma is located above this cathode spot. Our results have shown that once an arc plasma is initiated, and as long as energy is available, the arc is self-maintaining due to the plasma sheath, which ensures enhanced field emission and sputtering. The plasma model can already give an estimate of how the time-to-breakdown changes with the neutral evaporation rate, which is yet to be determined by atomistic simulations. Due to the non-linearity of the problem, we have also performed a code-to-code comparison; the reproducibility of the plasma behaviour and the time-to-breakdown with independent codes increased confidence in the results presented here. Our MD simulations identified high-flux, high-energy ion bombardment as a possible mechanism forming the early-stage surface damage in vacuum arcs. In this mechanism, sputtering occurs mostly in clusters, as a consequence of overlapping heat spikes. Different-sized experimental and simulated craters were found to be self-similar, with a crater depth-to-width ratio of about 0.23 (simulation) to 0.26 (experiment). Experiments that we carried out to investigate the energy dependence of DC breakdown properties point to an intrinsic connection between DC and RF scaling laws and suggest the possibility of accumulative effects influencing the field enhancement factor.
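The field-emission stage that seeds the arc is conventionally described by the Fowler-Nordheim expression, in which a local field enhancement factor beta turns a modest macroscopic field into a strong emitting field. A minimal sketch of the standard textbook form (not code from the thesis; the numbers are illustrative):

    # Fowler-Nordheim field emission current density.
    import math

    def fowler_nordheim_j(e_macro, beta, phi_ev=4.5):
        """Current density in A/m^2; e_macro is the macroscopic field in V/m;
        phi is the work function in eV (about 4.5 eV for Cu)."""
        a = 1.54e-6   # A eV / V^2
        b = 6.83e9    # eV^(-3/2) V / m
        e_local = beta * e_macro
        return (a / phi_ev) * e_local ** 2 * math.exp(-b * phi_ev ** 1.5 / e_local)

    # e.g. a 100 MV/m macroscopic field with beta = 50 gives a 5 GV/m local
    # field and a substantial emission current density
    print(fowler_nordheim_j(100e6, 50))

Because the current depends exponentially on the local field, small changes in the enhancement factor shift the breakdown behaviour strongly, which is why accumulative effects on that factor matter.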