284 results for Frames per second
in Queensland University of Technology - ePrints Archive
Abstract:
Objectives. To evaluate the performance of the dynamic-area high-speed videokeratoscopy technique in the assessment of tear film surface quality, with and without soft contact lenses on the eye. Methods. Retrospective data from a tear film study using basic high-speed videokeratoscopy, captured at 25 frames per second (Kopf et al., 2008, J Optom), were used. Eleven subjects had tear film analysis conducted in the morning, midday and evening on the first and seventh day of one week of no lens wear. Five of the eleven subjects then completed an extra week of hydrogel lens wear followed by a week of silicone hydrogel lens wear. Analysis was performed on a 6 second period of the inter-blink recording. The dynamic-area high-speed videokeratoscopy technique uses the maximum available area of the Placido ring pattern reflected from the tear interface and eliminates regions of disturbance due to shadows from the eyelashes. A value of tear film surface quality was derived using image processing techniques, based on the quality of the reflected ring pattern orientation. Results. The group mean tear film surface quality and the standard deviations for each of the conditions (bare eye, hydrogel lens, and silicone hydrogel lens) showed a much lower coefficient of variation than previous methods (an average reduction of about 92%). Bare eye measurements from the right and left eyes of the eleven individuals showed high correlation (Pearson’s correlation r = 0.73, p < 0.05). Repeated measures ANOVA across the 6 second period of measurement in the normal inter-blink period for the bare eye condition showed no statistically significant changes. However, across the 6 second inter-blink period with contact lenses, statistically significant changes were observed (p < 0.001) for both types of contact lens material. Overall, wearing hydrogel and silicone hydrogel lenses caused the tear film surface quality to worsen compared with the bare eye condition (repeated measures ANOVA, p < 0.0001 for both hydrogel and silicone hydrogel). Conclusions. The results suggest that the dynamic-area method of high-speed videokeratoscopy was able to distinguish and quantify the subtle but systematic worsening of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare eye and contact lens wearing conditions.
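The abstract does not spell out the image processing estimator; purely as an illustration of the general idea, the sketch below scores a masked ring-pattern image by the coherence of its local gradient orientations (the function name, window size and coherence measure are our assumptions, not the authors' algorithm):

```python
import numpy as np

def ring_orientation_quality(image: np.ndarray, mask: np.ndarray, win: int = 8) -> float:
    """Illustrative tear film surface quality index (NOT the authors' method).

    Scores how consistently oriented the reflected Placido ring pattern is
    inside `mask` (True = usable area, i.e. not shadowed by eyelashes).
    A smooth tear film reflects clean, regular rings, so local gradient
    orientations are highly coherent; a degrading film scatters them.
    """
    gy, gx = np.gradient(image.astype(float))
    jxx, jyy, jxy = gx * gx, gy * gy, gx * gy  # structure tensor terms
    scores = []
    h, w = image.shape
    for r in range(0, h - win, win):
        for c in range(0, w - win, win):
            s = np.s_[r:r + win, c:c + win]
            if not mask[s].all():
                continue  # window touches an eyelash shadow: excluded
            a, b, d = jxx[s].sum(), jyy[s].sum(), jxy[s].sum()
            if a + b > 0:
                # Coherence of the 2x2 structure tensor: 1.0 for a
                # perfectly oriented pattern, 0.0 for isotropic noise.
                scores.append(np.hypot(a - b, 2 * d) / (a + b))
    return float(np.mean(scores)) if scores else float("nan")
```

At 25 frames per second, a per-frame index of this kind yields 150 samples over the 6 second inter-blink window analysed above.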
Abstract:
In this paper we describe the recent development of a low-bandwidth wireless camera sensor network. We propose a simple yet effective network architecture which allows multiple cameras to be connected to the network and to synchronize their communication schedules. Image compression of greater than 90% is performed at each node on a local DSP coprocessor, resulting in nodes using 1/8th of the energy required to stream uncompressed images. We briefly introduce the Fleck wireless node and the DSP/camera sensor, and then outline the network architecture and compression algorithm. The system is able to stream color QVGA images over the network to a base station at up to 2 frames per second. © 2007 IEEE.
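The reported compression and energy figures are easy to sanity-check with back-of-the-envelope arithmetic (the frame format and byte counts below are assumptions for illustration, not taken from the paper):

```python
# Back-of-the-envelope check of the reported figures (sizes are assumptions).
QVGA_PIXELS = 320 * 240          # colour QVGA frame
RAW_BYTES = QVGA_PIXELS * 2      # assume 16-bit RGB565 per pixel
COMPRESSION = 0.90               # ">90%" compression reported
FPS = 2                          # reported streaming rate

compressed_bytes = RAW_BYTES * (1 - COMPRESSION)
print(f"raw frame:        {RAW_BYTES / 1024:.0f} KiB")
print(f"compressed frame: {compressed_bytes / 1024:.1f} KiB")
print(f"radio load at {FPS} fps: {compressed_bytes * FPS * 8 / 1000:.0f} kbit/s")
# Radio transmission dominates node energy budgets, so cutting the payload
# ~10x is consistent with the reported ~1/8 energy relative to streaming
# raw images (the DSP compression itself costs some energy, hence 1/8
# rather than 1/10).
```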
Abstract:
As the graphics race subsides and gamers grow weary of predictable and deterministic game characters, game developers must put aside their “old faithful” finite state machines and look to more advanced techniques that give users the gaming experience they crave. The next industry breakthrough will be with characters that behave realistically and that can learn and adapt, rather than more polygons, higher resolution textures and more frames per second. This paper explores the various artificial intelligence techniques that are currently being used by game developers, as well as techniques that are new to the industry. The techniques covered in this paper are finite state machines, scripting, agents, flocking, fuzzy logic and fuzzy state machines, decision trees, neural networks, genetic algorithms and extensible AI. This paper introduces each of these techniques, explains how they can be applied to games and how commercial games are currently making use of them. Finally, the effectiveness of these techniques and their future role in the industry are evaluated.
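As a concrete reference point for the first and simplest of these techniques, a game character's finite state machine fits in a few lines; this is a generic sketch, not code from any particular game:

```python
# Minimal finite state machine for a game character (generic sketch).
# Each state maps observed events to a successor state; the character is
# fully deterministic, which is exactly the predictability the paper
# argues players have grown weary of.

TRANSITIONS = {
    "patrol": [("sees_player", "chase")],
    "chase":  [("player_lost", "patrol"), ("in_range", "attack")],
    "attack": [("player_dead", "patrol"), ("low_health", "flee")],
    "flee":   [("safe", "patrol")],
}

def step(state: str, events: set) -> str:
    """Advance the FSM by one tick given the set of observed events."""
    for event, nxt in TRANSITIONS.get(state, []):
        if event in events:
            return nxt
    return state  # no matching event: stay in the current state

state = "patrol"
for events in [{"sees_player"}, {"in_range"}, {"low_health"}, {"safe"}]:
    state = step(state, events)
    print(state)  # chase, attack, flee, patrol
```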
Abstract:
High magnification and large depth of field with a temporal resolution of less than 100 microseconds are possible using the present invention, which combines: a linear electron beam produced by a tungsten filament from an SX-40A Scanning Electron Microscope (SEM); a magnetic deflection coil with lower inductance, obtained by reducing the number of turns of the saddle-coil wires while increasing the diameter of the wires; a fast scintillator; a photomultiplier tube, photomultiplier tube base, and signal amplifiers; and a high-speed data acquisition system which allows for a scan rate of 381 frames per second and a 256×128 pixel density in the SEM image at a data acquisition rate of 25 MHz. The data acquisition and scan position are fully coordinated. A digitizer and a digital waveform generator, which generates the sweep signals to the scan coils, run off the same clock to acquire the signal in real time.
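The quoted figures are mutually consistent, as a quick check shows; the line-time interpretation in the comments is our reading of the abstract, not something it states explicitly:

```python
# Consistency check of the quoted acquisition figures.
frames_per_second = 381
pixels_per_frame = 256 * 128       # 32,768 pixels
acquisition_rate = 25e6            # 25 MHz data acquisition clock

pixel_rate = frames_per_second * pixels_per_frame
print(f"pixel rate: {pixel_rate / 1e6:.2f} Mpixel/s")             # ~12.48
print(f"samples per pixel: {acquisition_rate / pixel_rate:.2f}")  # ~2.0

frame_time = 1 / frames_per_second        # ~2.62 ms per frame
line_time = frame_time / 128              # ~20.5 us per scan line
print(f"line time: {line_time * 1e6:.1f} us")
# One plausible reading: a single scan line (~20 us) falls comfortably
# under the quoted <100 us temporal resolution, while the digitizer runs
# at roughly twice the pixel rate (~2 samples per pixel).
```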
Abstract:
The palette of fluorescent proteins (FPs) has grown exponentially over the past decade, and as a result, live imaging of cells expressing fluorescently tagged proteins is becoming more and more mainstream. Spinning disk confocal (SDC) microscopy is a high-speed optical sectioning technique and a method of choice for observing and analyzing intracellular FP dynamics at high spatial and temporal resolution. In an SDC system, a rapidly rotating pinhole disk generates thousands of points of light that scan the specimen simultaneously, which allows direct capture of the confocal image with low-noise, scientific-grade cooled charge-coupled device cameras, and can achieve frame rates of up to 1000 frames per second. In this chapter, we describe important components of a state-of-the-art spinning disk system optimized for live cell microscopy and provide a rationale for specific design choices. We also give guidelines on how other imaging techniques such as total internal reflection microscopy or spatially controlled photoactivation can be coupled with SDC imaging, and provide a short protocol on how to generate cell lines stably expressing fluorescently tagged proteins by lentivirus-mediated transduction.
Abstract:
The Java programming language has potentially significant advantages for wireless sensor nodes, but there is currently no feature-rich, open source virtual machine available. In this paper we present Darjeeling, a system comprising offline tools and a memory-efficient runtime. The offline post-compiler tool analyzes, links and consolidates Java class files into loadable modules. The runtime implements a modified Java VM that supports multithreading and is designed specifically to operate in constrained execution environments such as wireless sensor network nodes; it supports inheritance, threads, garbage collection, and loadable modules. We have demonstrated Java running on AVR128 and MSP430 microcontrollers at speeds of up to 70,000 JVM instructions per second.
Abstract:
The Java programming language enjoys widespread popularity on platforms ranging from servers to mobile phones. While efforts have been made to run Java on microcontroller platforms, there is currently no feature-rich, open source virtual machine available. In this paper we present Darjeeling, a system comprising offline tools and a memory-efficient runtime. The offline post-compiler tool analyzes, links and consolidates Java class files into loadable modules. The runtime implements a modified Java VM that supports multithreading and is designed specifically to operate in constrained execution environments such as wireless sensor network nodes. Darjeeling improves upon existing work by supporting inheritance, threads, garbage collection, and loadable modules while keeping memory usage to a minimum. We have demonstrated Java running on AVR128 and MSP430 microcontrollers at speeds of up to 70,000 JVM instructions per second.
Abstract:
Gradual authentication is a principle proposed by Meadows as a way to tackle denial-of-service attacks on network protocols by gradually increasing the confidence in clients before the server commits resources. In this paper, we propose an efficient method that allows a defending server to authenticate its clients gradually with the help of some fast-to-verify measures. Our method integrates hash-based client puzzles along with a special class of digital signatures supporting fast verification. Our hash-based client puzzle provides finer granularity of difficulty and is proven secure in the puzzle difficulty model of Chen et al. (2009). We integrate this with the fast-verification digital signature scheme proposed by Bernstein (2000, 2008). These schemes can be up to 20 times faster for client authentication compared to RSA-based schemes. Our experimental results show that, in the Secure Sockets Layer (SSL) protocol, fast verification digital signatures can provide a 7% increase in connections per second compared to RSA signatures, and our integration of client puzzles with client authentication imposes no performance penalty on the server since puzzle verification is a part of signature verification.
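The abstract does not reproduce the puzzle construction itself; the sketch below shows the generic hash-based pattern such schemes build on, where the client does exponential work and the server verifies with a single hash. Note that this toy version only offers power-of-two difficulty steps, the coarse granularity that the authors' construction is designed to refine:

```python
import hashlib
import os

DIFFICULTY_BITS = 16  # expected client work: ~2**16 hash evaluations

def solve(challenge: bytes) -> int:
    """Client: grind nonces until SHA-256(challenge || nonce) has
    DIFFICULTY_BITS leading zero bits."""
    target = 1 << (256 - DIFFICULTY_BITS)
    nonce = 0
    while True:
        h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int) -> bool:
    """Server: a single hash to verify -- the cheap-to-verify asymmetry
    that lets the defender gain confidence in a client before committing
    expensive resources such as signature verification."""
    h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < (1 << (256 - DIFFICULTY_BITS))

challenge = os.urandom(16)  # fresh per connection attempt
assert verify(challenge, solve(challenge))
```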
Abstract:
In public places, crowd size may be an indicator of congestion, delay, instability, or of abnormal events such as a fight, riot or emergency. Crowd-related information can also provide important business intelligence, such as the distribution of people throughout spaces, throughput rates, and local densities. A major drawback of many crowd counting approaches is their reliance on large numbers of holistic features, training data requirements of hundreds or thousands of frames per camera, and the need to train each camera separately. This makes deployment in large multi-camera environments such as shopping centres very costly and difficult. In this chapter, we present a novel scene-invariant crowd counting algorithm that uses local features to monitor crowd size. The use of local features allows the proposed algorithm to calculate local occupancy statistics, scale to conditions which are unseen in the training data, and be trained on significantly less data. Scene invariance is achieved through the use of camera calibration, allowing the system to be trained on one or more viewpoints and then deployed on any number of new cameras for testing without further training. A pre-trained system could then be used as a ‘turn-key’ solution for crowd counting across a wide range of environments, eliminating many of the costly barriers to deployment which currently exist.
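A minimal sketch of the general pattern described here, with local features regressed against counts and a calibration weight supplying scene invariance; all data and names below are synthetic placeholders, not the authors' implementation:

```python
import numpy as np

# Sketch of scene-invariant counting: per-block local features (e.g. blob
# size, edge density) are normalised by a camera-calibration weight that
# converts pixel area to ground-plane area, then regressed against the
# true count. Features and weights here are synthetic placeholders.

rng = np.random.default_rng(0)
n_frames, n_blocks = 200, 64
features = rng.random((n_frames, n_blocks))      # local feature per block
calib = rng.uniform(0.5, 2.0, n_blocks)          # pixel -> ground-plane weight
density = features * calib                        # scene-invariant density map
true_count = density.sum(axis=1) + rng.normal(0, 1, n_frames)

X = np.column_stack([density.sum(axis=1), np.ones(n_frames)])
coef, *_ = np.linalg.lstsq(X, true_count, rcond=None)

# Because the features are expressed in ground-plane units, the fitted
# `coef` can be reused on a *new* camera once its calibration weights are
# known -- the 'train once, deploy anywhere' property described above.
new_density = rng.random(n_blocks) * calib
print("predicted count:", new_density.sum() * coef[0] + coef[1])
```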
Abstract:
The Web has become a worldwide repository of information which individuals, companies, and organizations utilize to solve or address various information problems. Many of these Web users utilize automated agents to gather this information for them. Some assume that this approach represents a more sophisticated method of searching. However, there is little research investigating how Web agents search for online information. In this research, we first provide a classification for information agents using stages of information gathering, gathering approaches, and agent architecture. We then examine an implementation of one of the resulting classifications in detail, investigating how agents search for information on Web search engines, including the session, query, term, duration and frequency of interactions. For this temporal study, we analyzed three data sets of queries and page views from agents interacting with the Excite and AltaVista search engines from 1997 to 2002, examining approximately 900,000 queries submitted by over 3,000 agents. Findings include: (1) agent sessions are extremely interactive, with sometimes hundreds of interactions per second, (2) agent queries are comparable to those of human searchers, with little use of query operators, (3) Web agents search for a relatively limited variety of information, wherein only 18% of the terms used are unique, and (4) the duration of agent-Web search engine interaction typically spans several hours. We discuss the implications for Web information agents and search engines.
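The session-level statistics reported here are straightforward to compute from a transaction log; a toy sketch (the field layout is assumed, not the actual Excite or AltaVista log format):

```python
from collections import defaultdict

# Toy transaction log: (agent_id, unix_time, query). The real Excite and
# AltaVista logs have their own formats; this layout is assumed.
log = [
    ("agent1", 0.00, "cheap flights"),
    ("agent1", 0.01, "cheap flights sydney"),
    ("agent1", 0.02, "flights sydney"),
    ("agent2", 5.00, "weather brisbane"),
]

sessions = defaultdict(list)
for agent, t, query in log:
    sessions[agent].append((t, query))

# Unique-term fraction (the paper reports only 18% of terms are unique).
terms = [w for _, _, q in log for w in q.split()]
print("unique-term fraction:", len(set(terms)) / len(terms))

# Interaction rate per session (agents can hit hundreds per second).
for agent, events in sessions.items():
    if len(events) < 2:
        continue
    span = events[-1][0] - events[0][0]
    rate = (len(events) - 1) / span
    print(agent, f"{len(events)} queries, {rate:.0f} interactions/second")
```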
Abstract:
Many substation applications require accurate time-stamping. The performance of systems such as the Network Time Protocol (NTP), IRIG-B and one pulse per second (1-PPS) has been sufficient to date. However, new applications, including IEC 61850-9-2 process bus and phasor measurement, require accuracy of one microsecond or better. Furthermore, process bus applications are taking time synchronisation out into high voltage switchyards, where cable lengths may have an impact on timing accuracy. IEEE Std 1588, Precision Time Protocol (PTP), is the means preferred by the smart grid standardisation roadmaps (from both the IEC and the US National Institute of Standards and Technology) for achieving this higher level of performance, and it integrates well into Ethernet based substation automation systems. Significant benefits of PTP include automatic path length compensation, support for redundant time sources and the cabling efficiency of a shared network. This paper benchmarks the performance of established IRIG-B and 1-PPS synchronisation methods over a range of path lengths representative of a transmission substation. The performance of PTP using the same distribution system is then evaluated and compared to the existing methods to determine whether the performance justifies the additional complexity. Experimental results show that a PTP timing system maintains the synchronising performance of 1-PPS and IRIG-B timing systems when using the same fibre optic cables, and further meets the needs of process buses in large substations.
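PTP's automatic path-length compensation comes from a two-way timestamp exchange; the standard offset and delay equations are worth stating (a textbook sketch of the IEEE 1588 exchange, not vendor code):

```python
# IEEE 1588 two-way exchange (textbook form, not vendor code).
# t1: master sends Sync          (master clock)
# t2: slave receives Sync        (slave clock)
# t3: slave sends Delay_Req      (slave clock)
# t4: master receives Delay_Req  (master clock)

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Assumes a symmetric path; any asymmetry appears directly as offset
    error, which is why consistent cabling matters in a switchyard."""
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay
    return offset, delay

# Example: slave clock runs 5 us ahead, true one-way delay is 2 us.
t1 = 0.0
t2 = t1 + 2e-6 + 5e-6   # propagation delay + clock offset
t3 = t2 + 10e-6         # slave turnaround time
t4 = t3 - 5e-6 + 2e-6   # remove offset, add propagation delay
offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(f"offset={offset * 1e6:.1f} us, delay={delay * 1e6:.1f} us")  # 5.0, 2.0
```

The recovered delay term is what absorbs cable length automatically, in contrast to 1-PPS and IRIG-B, where path delay must be engineered out or tolerated.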
Abstract:
PURPOSE: To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity, and to examine whether this association could be explained by low-level changes in visual function. METHODS: 36 visually normal participants (aged 19–80 years) completed a battery of standard vision tests, including visual acuity, contrast sensitivity and automated visual fields, and two tests of motion perception: sensitivity for movement of a drifting Gabor stimulus, and sensitivity for displacement in a random-dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured response times to hazards embedded in video recordings of real-world driving and which has been shown to be linked to crash risk. RESULTS: Dmin for the random-dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship between the HPT and motion sensitivity for the random-dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. CONCLUSION: These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards, independent of other areas of visual function, and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception in order to develop better interventions to improve road safety.
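"Controlling for these variables" amounts to asking whether motion sensitivity explains HPT variance beyond the standard vision tests; a minimal residualisation sketch on synthetic data (not the study's analysis code) makes the logic concrete:

```python
import numpy as np

# Synthetic stand-ins for the study variables (not the real data).
rng = np.random.default_rng(1)
n = 36
acuity = rng.normal(size=n)     # standard vision test scores
contrast = rng.normal(size=n)
gabor = 0.5 * acuity + rng.normal(size=n)               # motion sensitivity
hpt = 0.6 * gabor + 0.3 * acuity + rng.normal(size=n)   # hazard response time

def residualise(y, X):
    """Remove the part of y predictable from the columns of X (+ intercept)."""
    X1 = np.column_stack([X, np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return y - X1 @ beta

covariates = np.column_stack([acuity, contrast])
r = np.corrcoef(residualise(hpt, covariates),
                residualise(gabor, covariates))[0, 1]
print(f"partial correlation HPT ~ Gabor | vision tests: {r:.2f}")
# A non-zero partial correlation is the statistical content of the claim
# that the Gabor relationship 'remained significant after controlling'
# for the standard vision measures.
```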
Abstract:
A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM), where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment due to the use of non-deterministic, readily-available hardware (such as 802.11-based wireless) and inaccurate clock synchronisation protocols (such as the Network Time Protocol (NTP)). As a result, the synchronisation of the clocks between robots can be out by tens to hundreds of milliseconds, making correlation of data difficult and preventing the possibility of the units performing synchronised actions such as triggering cameras or intricate swarm manoeuvres. In this thesis, a complete data fusion unit is designed, implemented and tested. The unit, named BabelFuse, is able to accept sensor data from a number of low-speed communication buses (such as RS232, RS485 and CAN Bus) and also timestamp events that occur on General Purpose Input/Output (GPIO) pins, referencing a submillisecond-accurate, wirelessly-distributed "global" clock signal. In addition to its timestamping capabilities, it can also be used to trigger an attached camera at a predefined start time and frame rate. This functionality enables the creation of a wirelessly-synchronised distributed image acquisition system over a large geographic area; a real-world application for this functionality is the creation of a platform to facilitate wirelessly-distributed 3D stereoscopic vision. A ‘best-practice’ design methodology is adopted within the project to ensure the final system operates according to its requirements. Initially, requirements are generated, from which a high-level architecture is distilled. This architecture is then converted into a hardware specification and low-level design, which is then manufactured. The manufactured hardware is then verified to ensure it operates as designed, and firmware and Linux Operating System (OS) drivers are written to provide the features and connectivity required of the system. Finally, integration testing is performed to ensure the unit functions as per its requirements. The BabelFuse system comprises a single Grand Master unit, which is responsible for maintaining the absolute value of the "global" clock. Slave nodes then determine their local clock offset from that of the Grand Master via synchronisation events which occur multiple times per second. The mechanism used for wirelessly synchronising the clocks between the boards makes use of specific hardware and a firmware protocol based on elements of the IEEE 1588 Precision Time Protocol (PTP). With the key requirement of the system being submillisecond-accurate clock synchronisation (as a basis for timestamping and camera triggering), automated testing is carried out to monitor the offsets between each Slave and the Grand Master over time. A common strobe pulse is also sent to each unit for timestamping; the correlation between the timestamps of the different units is used to validate the clock offset results.
Analysis of the automated test results shows that the BabelFuse units are almost three orders of magnitude more accurate than their requirement; the clocks of the Slave and Grand Master units do not differ by more than three microseconds over a running time of six hours, and the mean clock offset of Slaves to the Grand Master is less than one microsecond. The common strobe pulse used to verify the clock offset data yields a positive result, with a maximum variation between units of less than two microseconds and a mean value of less than one microsecond. The camera triggering functionality is verified by connecting the trigger pulse output of each board to a four-channel digital oscilloscope and setting each unit to output a 100 Hz periodic pulse with a common start time. The resulting waveform shows a maximum variation between the rising edges of the pulses of approximately 39 μs, well below the target of 1 ms.
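Given the shared global clock, the camera-triggering behaviour described above reduces to scheduling pulses on a common time grid; a simplified sketch of that logic (illustrative only, not BabelFuse firmware):

```python
import math

# Simplified trigger scheduler (illustrative, not BabelFuse firmware).
# Every node runs the same code; because their clocks agree to within a
# few microseconds, pulses fired at the computed instants line up across
# the swarm, which is what the oscilloscope test above verifies.

FRAME_RATE_HZ = 100            # the 100 Hz periodic pulse used in testing
PERIOD = 1.0 / FRAME_RATE_HZ

def next_trigger_time(global_now: float, start_time: float) -> float:
    """Return the next pulse instant >= global_now on the common grid
    defined by the agreed start_time and the frame period."""
    if global_now <= start_time:
        return start_time
    n = math.ceil((global_now - start_time) / PERIOD)
    return start_time + n * PERIOD

# Two nodes whose synchronised clocks differ by 2 us still compute pulse
# times on the same grid; the residual clock skew bounds the inter-node
# variation seen on the oscilloscope.
start = 1000.0
print(next_trigger_time(1003.1234567, start))  # node A
print(next_trigger_time(1003.1234587, start))  # node B, clock +2 us
```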
Abstract:
Amongst the most prominent uses of Twitter at present is its role in the discussion of widely televised events: Twitter’s own statistics for 2011, for example, list major entertainment spectacles (the MTV Music Awards, the BET Awards) and sports matches (the UEFA Champions League final, the FIFA Women’s World Cup final) amongst the events generating the most tweets per second during the year (Twitter, 2011). User activities during such televised events constitute a specific, unique category of Twitter use, which differs clearly from the other major events which generate a high rate of tweets per second (such as crises and breaking news, from the Japanese earthquake and tsunami to the death of Steve Jobs), as preliminary research has shown. During such major media events, by contrast, Twitter is used predominantly as a technology of fandom: it serves in the first place as a backchannel to television and other streaming audiovisual media, enabling users to offer their own running commentary on the universally shared media text of the event broadcast as it unfolds live. Centrally, this communion of fans around the shared text is facilitated by the use of Twitter hashtags – unifying textual markers which are now often promoted to prospective audiences by the broadcasters well in advance of the live event itself. This paper examines the use of Twitter as a technology for the expression of shared fandom in the context of a major, internationally televised annual media event: the Eurovision Song Contest. It constitutes a highly publicised, highly choreographed media spectacle whose eventual outcomes are unknown ahead of time and which attracts a diverse international audience. Our analysis draws on comprehensive datasets for the ‘official’ event hashtags, #eurovision, #esc, and #sbseurovision. Using innovative methods which combine qualitative and quantitative approaches to the analysis of Twitter datasets containing several hundred thousand tweets, we examine overall patterns of participation to discover how audiences express their fandom throughout the event. Minute-by-minute tracking of Twitter activity during the live broadcasts enables us to identify the most resonant moments during each event; we also examine the networks of interaction between participants to detect thematically or geographically determined clusters of interaction, and to identify the most visible and influential participants in each network. Such analysis is able to provide a unique insight into the use of Twitter as a technology for fandom and for what cultural studies research calls ‘audiencing’: the public performance of belonging to the distributed audience for a shared media event. Our work thus contributes to the examination of fandom practices led by Henry Jenkins (2006) and other scholars, and points to Twitter as an important new medium facilitating the connection and communion of such fans.
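Mechanically, the minute-by-minute tracking described here is a binning of tweet timestamps; a brief pandas sketch (field names assumed, data invented for illustration):

```python
import pandas as pd

# Sketch of minute-by-minute activity tracking (field names assumed).
# A real dataset would hold one row per tweet captured from the event
# hashtags (#eurovision, #esc, #sbseurovision).
tweets = pd.DataFrame({
    "created_at": pd.to_datetime([
        "2012-05-26 19:00:05", "2012-05-26 19:00:41",
        "2012-05-26 19:01:02", "2012-05-26 19:01:03",
        "2012-05-26 19:01:59",
    ]),
    "user": ["a", "b", "c", "a", "d"],
})

# Tweets per minute over the broadcast window.
per_minute = tweets.set_index("created_at").resample("1min").size()
print(per_minute)

# Peaks in this series mark the broadcast's most resonant moments -- the
# performances or announcements that triggered the sharpest spikes in
# audience commentary.
print("busiest minute:", per_minute.idxmax())
```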
Abstract:
The first representative chemical, structural, and morphological analysis of the solid particles from a single collection surface has been performed. This collection surface sampled the stratosphere between 17 and 19 km in altitude in the summer of 1981, and therefore before the 1982 eruptions of El Chichón. A particle collection surface was washed free of all particles with rinses of Freon and hexane, and the resulting wash was directed through a series of vertically stacked Nucleopore filters. The size cutoff for the solid particle collection process in the stratosphere is found to be considerably less than 1 μm. The total stratospheric number density of solid particles larger than 1 μm in diameter at the collection time is calculated to be about 2.7×10⁻¹ particles per cubic meter, of which approximately 95% are smaller than 5 μm in diameter. Previous classification schemes are expanded to explicitly recognize low atomic number material. With the single exception of the calcium-aluminum-silicate (CAS) spheres, all solid particle types show a logarithmic increase in number concentration with decreasing diameter. The aluminum-rich particles are unique in showing bimodal size distributions. In addition, spheres constitute only a minor fraction of the aluminum-rich material. About 2/3 of the particles examined were found to be shards of rhyolitic glass. This abundant volcanic material could not be correlated with any eruption plume known to have vented directly to the stratosphere. The micrometeorite number density calculated from this data set is 5×10⁻² micrometeorites per cubic meter of air, an order of magnitude greater than the best previous estimate. At the collection altitude, the maximum collision frequency of solid particles >5 μm in average diameter is calculated to be 6.91×10⁻¹⁶ collisions per second, which indicates negligible contamination of extraterrestrial particles in the stratosphere by solid anthropogenic particles.