936 results for swd: Graphic hardware
Abstract:
Traditional area-based matching techniques make use of similarity metrics such as the Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD) and Normalised Cross Correlation (NCC). Non-parametric matching algorithms such as the rank and census transforms rely on the relative ordering of pixel values, rather than the pixel values themselves, as a similarity measure. Both traditional area-based and non-parametric stereo matching techniques have an algorithmic structure which is amenable to fast hardware realisation. This investigation undertakes a performance assessment of these two families of algorithms for robustness to radiometric distortion and random noise. A generic implementation framework is presented for the stereo matching problem, and the relative hardware requirements of the various metrics are investigated.
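For illustration, the three area-based similarity metrics named above can be sketched in a few lines of NumPy. This is a software sketch only (the abstract concerns hardware realisation), and the function names are our own:

```python
import numpy as np

def sad(a, b):
    # Sum of Absolute Differences: lower means a better match.
    return np.sum(np.abs(a.astype(float) - b.astype(float)))

def ssd(a, b):
    # Sum of Squared Differences: lower means a better match.
    d = a.astype(float) - b.astype(float)
    return np.sum(d * d)

def ncc(a, b):
    # Normalised Cross Correlation: higher means a better match,
    # with 1.0 for identical (mean-subtracted) windows.
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return np.sum(a * b) / (np.sqrt(np.sum(a * a)) * np.sqrt(np.sum(b * b)))
```

In a stereo matcher these functions would be evaluated over candidate windows along the epipolar line, taking the disparity that optimises the chosen metric.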
Abstract:
The mining environment, being complex, irregular and time varying, presents a challenging prospect for stereo vision. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This paper evaluates a number of matching techniques for possible use in a stereo vision sensor for mining automation applications. Area-based techniques have been investigated because they have the potential to yield dense maps, are amenable to fast hardware implementation, and are suited to textured scenes. In addition, two non-parametric transforms, namely the rank and census transforms, have been investigated. Matching algorithms using these transforms were found to have a number of clear advantages, including reliability in the presence of radiometric distortion, low computational complexity, and amenability to hardware implementation.
Abstract:
A frame-rate stereo vision system, based on non-parametric matching metrics, is described. Traditional metrics, such as normalized cross-correlation, are expensive in terms of logic. Non-parametric measures require only simple, parallelizable functions such as comparators, counters and exclusive-or, and are thus very well suited to implementation in reprogrammable logic.
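A small software sketch shows why the census approach maps so well onto the primitives listed above: the transform needs only pixel comparisons, and the matching cost is an exclusive-or followed by a bit count (Hamming distance). This illustrative NumPy version wraps at image borders and is not the paper's implementation:

```python
import numpy as np

def census_transform(img, r=1):
    # Compare each pixel in a (2r+1)x(2r+1) window against the window
    # centre and pack the comparison bits into one integer per pixel.
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

def hamming_cost(c1, c2):
    # Matching cost: XOR the census bit strings, then count set bits --
    # exactly the comparator/exclusive-or/counter structure cited above.
    x = np.bitwise_xor(c1, c2)
    return np.array([bin(int(v)).count("1") for v in x.ravel()]).reshape(x.shape)
```

Because only the ordering of intensities matters, adding a constant offset to the image (a simple radiometric distortion) leaves the census codes unchanged.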
Abstract:
A fundamental problem faced by stereo matching algorithms is the matching, or correspondence, problem. A wide range of algorithms have been proposed for the correspondence problem. For all matching algorithms, it would be useful to be able to compute a measure of the probability of correctness, or reliability, of a match. This paper focuses in particular on one class of matching algorithms, those based on the rank transform. The interest in these algorithms for stereo matching stems from their invariance to radiometric distortion and their amenability to fast hardware implementation. This work differs from previous work in that it derives, from first principles, an expression for the probability of a correct match. The method is based on an enumeration of all possible symbols for matching. The theoretical results for disparity error prediction obtained using this method were found to agree well with experimental results. However, disadvantages of the technique developed here are that it is not easily applicable to real images, and that it is too computationally expensive for practical window sizes. Nevertheless, the exercise provides an interesting and novel analysis of match reliability.
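For reference, the rank transform underlying this class of algorithms replaces each pixel with the count of window neighbours of lower intensity, which is what gives the invariance to radiometric distortion mentioned above (any monotonic change of intensities leaves the counts unchanged). A small illustrative sketch, not the paper's code; borders wrap in this version:

```python
import numpy as np

def rank_transform(img, r=1):
    # Replace each pixel with the number of pixels in its
    # (2r+1)x(2r+1) window that are strictly less than it.
    out = np.zeros(img.shape, dtype=np.uint8)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out += (shifted < img).astype(np.uint8)
    return out
```

Matching is then performed on the rank images, typically with a simple metric such as SAD, so the transformed pixels become the "symbols" whose enumeration the reliability analysis works over.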
Abstract:
In this paper, we present the outcomes of a project exploring the use of Field Programmable Gate Arrays (FPGAs) as co-processors for scientific computation. We designed a custom circuit for the pipelined solving of multiple tri-diagonal linear systems. The design is well suited to applications that require many independent tri-diagonal system solves, such as finite difference methods for solving PDEs or applications utilising cubic spline interpolation. The selected solver algorithm was the Tri-Diagonal Matrix Algorithm (TDMA, or Thomas Algorithm). Our solver supports user-specified precision through the use of a custom floating point VHDL library supporting addition, subtraction, multiplication and division. The variable-precision TDMA solver was tested for correctness in simulation mode. The TDMA pipeline was tested successfully in hardware using a simplified solver model. The details of implementation, the limitations, and future work are also discussed.
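The Thomas Algorithm itself is compact (a forward elimination sweep followed by back substitution), which is part of what makes it attractive for a pipelined hardware realisation. A plain-Python reference version for one system is sketched below; the paper's solver is a VHDL pipeline with custom floating point, not this code:

```python
def thomas_solve(a, b, c, d):
    """Solve a tri-diagonal system: sub-diagonal a (a[0] unused),
    diagonal b, super-diagonal c (c[-1] unused), right-hand side d."""
    n = len(d)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward sweep: eliminate the sub-diagonal in place.
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each system solve depends only on its own coefficients, so many independent systems can stream through one such pipeline back to back, which is the parallelism the FPGA design exploits.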
Abstract:
The use of the Trusted Platform Module (TPM) is becoming increasingly popular in many security systems. To access objects protected by the TPM (such as cryptographic keys), several cryptographic protocols, such as the Object Specific Authorization Protocol (OSAP), can be used. Given the sensitivity and importance of the objects protected by the TPM, the security of this protocol is vital. Formal methods allow a precise and complete analysis of cryptographic protocols such that their security properties can be asserted with high assurance. Unfortunately, formal verification of these protocols is limited, despite the abundance of formal tools that one can use. In this paper, we demonstrate the use of Coloured Petri Nets (CPN), a type of formal technique, to formally model the OSAP. Using this model, we then verify the authentication property of this protocol using the state space analysis technique. The results of the analysis demonstrate that, as reported by Chen and Ryan, the authentication property of OSAP can be violated.
Abstract:
WHAT: An interactive installation with full-body interface, digital projection, multi-touch-sensitive screen surfaces, interactive 3D gaming software, motorised dioramas, 4.1 spatial sound and new furniture forms, investigating the cultural dimensions of sustainability through the lens of 'time'. “Time is change, time is finitude. Humans are a finite species. Every decision we make today brings that end closer, or alternatively pushes it further away. Nothing can be neutral” (Tony Fry). DETAILS: Each participant/viewer lies comfortably on their back. Directly above them is a semi-transparent Perspex screen that displays projected 3D imagery and is simultaneously sensitive to the lightest of finger touches. Depending upon the ever-changing qualities of the projected image on this screen, the participant can see through its surface to a series of physical dioramas suspended above, lit by subtle LED spotlighting. The diorama consists of a slowly rotating series of physical environments, including several animatronic components, allowing the real-time composition of whimsical ‘landscapes’ of both 'real' and 'virtual' media. Through subtle, non-didactic touch-sensitive interactivity, the participant has influence over the 3D graphic imagery, the physical movements of the diorama and the four-channel immersive soundscape, creating an uncanny blend of physical and virtual media. Five speakers positioned around the room deliver a rich interactive soundscape that responds both audibly and physically to interactions.
Abstract:
Ethnography is now a well-established research methodology for virtual environments, and the vast majority of accounts have one aspect in common, whether textual or graphic environments – that of the embodied avatar. In this article, I first discuss the applicability of such a methodology to non-avatar environments such as Eve Online, considering where the methodology works and the issues that arise in its implementation – particularly for the consideration of sub-communities within the virtual environment. Second, I consider what alternative means exist for getting at the information that is obtained through an ethnographic study of the virtual environment. To that end, I consider the practical and ethical implications of utilizing existing accounts, the importance of the meta-game discourse, including those sources outside of the control of the environment developer, and finally the utility in combining personal observations with accounts of other ethnographers, both within and between environments.
Abstract:
Residents of high-rise residential properties frequently argue about the sustainability and quality of their buildings. This paper aims to maintain the quality of, and satisfaction with, door hardware by introducing a whole life cycle costing approach to property managers of public housing in Johor. The paper examines the current condition of ironmongery (door hardware) in two public housing developments in Johor, Malaysia, and tests the whole life cycle costing approach on them. A literature review was conducted and the whole life cycle costs were calculated. Questionnaire surveys of the two public housing developments were conducted to clarify the occupants' evaluation of the actual quality and condition of the ironmongery in their homes. The results indicate that selecting door hardware on the basis of whole life cycle costing outperforms the decision-making tools previously applied. Practitioners can benefit from this paper as it provides information on calculating whole life costs and making decisions about ironmongery selection for their properties.
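As a rough illustration of the whole life cycle costing idea, the cost of a hardware item can be expressed as its capital cost plus the discounted maintenance and replacement costs over the study period. The function and parameter names below are a generic sketch of this standard net-present-value formulation, not the paper's model or data:

```python
def whole_life_cost(capital, annual_maintenance, replacement_cost,
                    replacement_interval, years, discount_rate):
    # Whole life cost = capital outlay + each future cost discounted
    # back to present value at the chosen discount rate.
    cost = capital
    for year in range(1, years + 1):
        factor = 1.0 / (1.0 + discount_rate) ** year
        cost += annual_maintenance * factor
        # Periodic replacement (no replacement in the final year).
        if year % replacement_interval == 0 and year < years:
            cost += replacement_cost * factor
    return cost
```

Comparing this figure across candidate door hardware products, rather than comparing purchase prices alone, is what the whole life cycle costing approach amounts to.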
Abstract:
miRDeep and its variants are widely used to quantify known and novel microRNAs (miRNAs) from small RNA sequencing (RNAseq) data. This article describes miRDeep*, our integrated miRNA identification tool, which is modelled on miRDeep but improves the precision of detecting novel miRNAs by introducing new strategies to identify precursor miRNAs. miRDeep* has a user-friendly graphic interface and accepts raw data in FastQ and Sequence Alignment Map (SAM) or the binary equivalent (BAM) format. Known and novel miRNA expression levels, as measured by the number of reads, are displayed in an interface which shows each RNAseq read relative to the pre-miRNA hairpin. The secondary pre-miRNA structure and read locations for each predicted miRNA are shown and kept in a separate figure file. Moreover, the target genes of known and novel miRNAs are predicted using the TargetScan algorithm, and the targets are ranked according to the confidence score. miRDeep* is an integrated standalone application in which sequence alignment, pre-miRNA secondary structure calculation and graphical display are purely Java coded. The tool can be executed on a normal personal computer with 1.5 GB of memory. Further, we show that miRDeep* outperformed existing miRNA prediction tools on our LNCaP and other small RNAseq datasets. miRDeep* is freely available online at http://www.australianprostatecentre.org/research/software/mirdeep-star
Abstract:
The feasibility of using an in-hardware implementation of a genetic algorithm (GA) to solve the computationally expensive travelling salesman problem (TSP) is explored, especially in regard to hardware resource requirements for problem and population sizes. We investigate via numerical experiments whether a small population size might prove sufficient to obtain reasonable quality solutions for the TSP, thereby permitting relatively resource efficient hardware implementation on field programmable gate arrays (FPGAs). Software experiments on two TSP benchmarks involving 48 and 532 cities were used to explore the extent to which population size can be reduced without compromising solution quality, and results show that a GA allowed to run for a large number of generations with a smaller population size can yield solutions of comparable quality to those obtained using a larger population. This finding is then used to investigate feasible problem sizes on a targeted Virtex-7 vx485T-2 FPGA platform via exploration of hardware resource requirements for memory and data flow operations.
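For concreteness, a GA for the TSP of the kind discussed - permutation encoding, tournament selection, order crossover, swap mutation and elitism - can be sketched as follows. This is an illustrative software version with our own parameter choices, not the FPGA design:

```python
import random

def tour_length(tour, dist):
    # Total length of the closed tour under distance matrix dist.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2):
    # OX: copy a random slice from parent 1, fill the rest in parent-2 order.
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    fill = [c for c in p2 if c not in child]
    for idx in range(n):
        if child[idx] is None:
            child[idx] = fill.pop(0)
    return child

def ga_tsp(dist, pop_size=20, generations=200, mutation_rate=0.2):
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    best = min(pop, key=lambda t: tour_length(t, dist))
    for _ in range(generations):
        new_pop = [best]  # elitism: carry the best tour forward
        while len(new_pop) < pop_size:
            # Tournament selection of two parents (tournament size 3).
            p1 = min(random.sample(pop, 3), key=lambda t: tour_length(t, dist))
            p2 = min(random.sample(pop, 3), key=lambda t: tour_length(t, dist))
            child = order_crossover(p1, p2)
            if random.random() < mutation_rate:  # swap mutation
                a, b = random.sample(range(n), 2)
                child[a], child[b] = child[b], child[a]
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=lambda t: tour_length(t, dist))
    return best, tour_length(best, dist)
```

The memory footprint of `pop` (population size x tour length) and the pipelined evaluation of `tour_length` are precisely the quantities that drive the FPGA resource exploration described in the abstract.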
Abstract:
This paper presents a novel evolutionary computation approach to three-dimensional path planning for unmanned aerial vehicles (UAVs) with tactical and kinematic constraints. A genetic algorithm (GA) is modified and extended for path planning. Two GAs are seeded at the initial and final positions with a common objective: to minimise their distance apart under the given UAV constraints. This is accomplished by the synchronous optimisation of subsequent control vectors. The proposed evolutionary computation approach is called the synchronous genetic algorithm (SGA). The sequence of control vectors generated by the SGA constitutes a near-optimal path plan. The resulting path plan exhibits no discontinuity when transitioning from curved to straight trajectories. Experiments show that the paths generated by the SGA are within 2% of the optimal solution. Such a path planner, when implemented on a hardware accelerator such as a field programmable gate array chip, can be used in the UAV as an on-board replanner, as well as in ground station systems for assisting in high-precision planning and modelling of mission scenarios.
Abstract:
What are the information practices of teen content creators? In the United States, over two-thirds of teens have participated in creating and sharing content in online communities that are developed for the purpose of allowing users to be producers of content. This study investigates how teens participating in digital participatory communities find and use information, as well as how they experience that information. From this investigation emerged a model of their information practices while creating and sharing content such as film-making, visual artwork, storytelling, music, programming, and website design in digital participatory communities. The research uses grounded theory methodology in a social constructionist framework to investigate the research problem: what are the information practices of teen content creators? Data were gathered through semi-structured interviews and observation of teens' digital communities. Analysis occurred concurrently with data collection, and the principle of constant comparison was applied in analysis. As findings were constructed from the data, additional data were collected until a substantive theory was constructed and no new information emerged from data collection. The theory constructed from the data describes five information practices of teen content creators: learning community, negotiating aesthetic, negotiating control, negotiating capacity, and representing knowledge. Describing the five information practices requires three descriptive components: the community of practice, the experiences of information, and the information actions. The experiences of information include information as participation, inspiration, collaboration, process, and artifact. Information actions include activities that occur in the categories of gathering, thinking and creating.
The experiences of information and information actions intersect in the information practices, which are situated within the specific community of practice, such as a digital participatory community. Finally, the information practices interact and build upon one another and this is represented in a graphic model and explanation.
Abstract:
Research background: Communicating the diverse nature of multimodal practice is inherently difficult for the design-led research academic. Websites are an effective means of displaying images and text, but for the user/viewer the act of viewing is often random and disorienting, owing to the non-linear means of accessing the information. This characteristic limits the medium's efficacy in presenting an overarching philosophical standpoint or theme - the key driver behind most academic research. Research contribution: This website, http://www.ianweirarchitect.com, presents a means of reconciling this problem through a deceptively simple graphic and temporal layout, which limits the opportunity for the user/viewer to become disoriented and miss the key themes and issues that bind the otherwise divergent research material together. Research significance: http://www.ianweirarchitect.com is a creative work that supplements Dr Ian Weir's exhibition "Enacted Cartography", held in August 2012 in Brisbane and in August/September 2012 in Venice, Italy, for the 13th International Architecture Exhibition (Venice Architecture Biennale). Dr Weir was selected by the Australian Institute of Architects to represent innovation in architectural practice in the Institute's Formations: New Practices in Australian Architecture exhibition and catalogue (of the same name) held in the Australian Pavilion, The Giardini, Venice. This website is a creative output that complements Dr Weir's other multimodal outputs, including photographic artworks, cartographic maps and architectural designs.
Abstract:
In this paper we demonstrate how to monitor a smartphone running the Symbian operating system or Windows Mobile in order to extract features for anomaly detection. These features are sent to a remote server, because running a complex intrusion detection system on this kind of mobile device is still not feasible due to capability and hardware limitations. We give examples of how to compute relevant features and introduce the top ten applications used by mobile phone users, based on a study from 2005. The usage of these applications is recorded by a monitoring client and visualized. Additionally, monitoring results for publicly available and self-written malware are shown. To improve monitoring client performance, Principal Component Analysis was applied, which led to a decrease of about 80% in the number of monitored features.
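The feature reduction step can be illustrated with a generic PCA sketch: centre the features, diagonalise their covariance matrix, and keep only the leading components needed to explain a chosen fraction of the variance. The function and parameter names below are our own illustration, not the paper's implementation:

```python
import numpy as np

def pca_reduce(X, variance_kept=0.8):
    # X: (samples x features) matrix of monitored feature values.
    # Centre the data, diagonalise the covariance matrix, and keep just
    # enough principal components to explain the requested variance.
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # largest variance first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    ratio = np.cumsum(eigvals) / np.sum(eigvals)
    k = int(np.searchsorted(ratio, variance_kept) + 1)
    return Xc @ eigvecs[:, :k], k
```

Transmitting the k projected components instead of the full feature vector is what yields the reduction in monitored features reported above.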