215 results for Optics in computing
Abstract:
The Open and Trusted Health Information Systems (OTHIS) Research Group was formed in response to the health sector's privacy and security requirements for contemporary Health Information Systems (HIS). Due to recent research developments in trusted computing concepts, it is now both timely and desirable to move electronic HIS towards privacy-aware and security-aware applications. In this paper we introduce the OTHIS architecture, a scheme that proposes a feasible and sustainable solution for meeting real-world application security demands using commercial off-the-shelf systems and commodity hardware and software products.
Abstract:
Real-Time Kinematic (RTK) positioning is a technique used to provide precise positioning services at the centimetre accuracy level in the context of Global Navigation Satellite Systems (GNSS). While a Network-based RTK (NRTK) system involves multiple continuously operating reference stations (CORS), the simplest form of an NRTK system is a single-base RTK. In Australia there are several NRTK services operating in different states, and over 1000 single-base RTK systems support precise positioning applications for surveying, mining, agriculture, and civil construction in regional areas. Additionally, future-generation GNSS constellations with multiple frequencies, including modernised GPS, Galileo, GLONASS, and Compass, have either been developed or will become fully operational in the next decade. A trend in the future development of RTK systems is to make use of the various isolated operating NRTK and single-base RTK systems and multiple GNSS constellations for extended service coverage and improved performance. Several computational challenges have been identified for future NRTK services, including:
• Multiple GNSS constellations and multiple frequencies
• Large-scale, wide-area NRTK services with a network of networks
• Complex computation algorithms and processes
• A greater part of the positioning process shifting from the user end to the network centre, with the ability to cope with hundreds of simultaneous users' requests (reverse RTK)
Based on these four challenges, there are two major requirements for future NRTK data processing: expandable computing power and scalable data sharing/transferring capability. This research explores new approaches to addressing these challenges and requirements using Grid Computing facilities, in particular for large data-processing burdens and complex computation algorithms. A Grid Computing based NRTK framework is proposed in this research; it is a layered framework consisting of: 1) a client layer in the form of a Grid portal; 2) a service layer; and 3) an execution layer. The user's request is passed through these layers and scheduled onto different Grid nodes in the network infrastructure. A proof-of-concept demonstration of the proposed framework is performed in a five-node Grid environment at QUT and also on Grid Australia. The Networked Transport of RTCM via Internet Protocol (Ntrip) open-source software is adopted to download real-time RTCM data from multiple reference stations through the Internet, followed by job scheduling and simplified RTK computing. The system performance has been analysed, and the results preliminarily demonstrate the concepts and functionality of the new NRTK framework based on Grid Computing, while some aspects of the system's performance are yet to be improved in future work.
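The layered flow described in this abstract (client portal, service layer, execution layer) can be illustrated with a minimal, hypothetical Python sketch. The function names, the stubbed RTCM fetch standing in for the Ntrip download, and the five-worker pool are assumptions for illustration only, not the thesis's implementation.

```python
# Illustrative sketch only: a toy version of the layered NRTK flow described
# above (client portal -> service/scheduler -> execution nodes). The function
# names and the fake RTCM payloads are assumptions; the real system streams
# RTCM via Ntrip and schedules jobs onto Grid nodes.
from concurrent.futures import ProcessPoolExecutor  # stands in for Grid execution nodes

def fetch_rtcm(station_id: str) -> bytes:
    """Stub for the Ntrip download step: return a fake RTCM frame."""
    return f"RTCM-from-{station_id}".encode()

def rtk_solve(rtcm_frames: list, rover_obs: bytes) -> dict:
    """Stub for the (reverse) RTK computation performed at the network centre."""
    return {"rover": rover_obs.decode(), "stations": len(rtcm_frames), "fix": "simulated"}

def service_layer(user_request: dict, executor: ProcessPoolExecutor):
    """Service layer: accept a portal request, gather reference data, schedule the job."""
    frames = [fetch_rtcm(s) for s in user_request["stations"]]
    return executor.submit(rtk_solve, frames, user_request["rover_obs"])

if __name__ == "__main__":
    # Client layer: a portal request naming the reference stations and rover data.
    request = {"stations": ["QUT1", "QUT2", "QUT3"], "rover_obs": b"rover-epoch-001"}
    with ProcessPoolExecutor(max_workers=5) as grid:   # five "nodes", as in the demo
        job = service_layer(request, grid)
        print(job.result())
```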
Abstract:
Multi-resolution modelling has become essential as modern 3D applications demand 3D objects at higher levels of detail (LOD). Multi-modal devices such as PDAs and UMPCs do not have sufficient resources to handle the original 3D objects. The increased usage of collaborative applications has created many challenges for remote manipulation of 3D objects of differing quality. This paper studies how multi-resolution techniques can be improved by performing multiedge decimation and using annotative commands. It also investigates how devices holding poorer-quality 3D objects can participate in collaborative actions.
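As a rough illustration of how a lower level of detail might be produced for a resource-constrained device, the following Python sketch applies generic vertex-clustering simplification. This is an assumed, stand-in technique, not the multiedge decimation method studied in the paper, and all names are hypothetical.

```python
# Illustrative sketch only: generic vertex-clustering simplification used to
# stand in for LOD generation. This is NOT the paper's multiedge decimation
# method; it just shows how a coarser mesh might be produced for a device
# with a small triangle budget.
from collections import defaultdict

def simplify(vertices, triangles, cell):
    """Cluster vertices on a grid of size `cell` and rebuild the triangle list."""
    cluster_of = {}                  # original vertex index -> cluster key
    cluster_pts = defaultdict(list)  # cluster key -> member positions
    for i, (x, y, z) in enumerate(vertices):
        key = (round(x / cell), round(y / cell), round(z / cell))
        cluster_of[i] = key
        cluster_pts[key].append((x, y, z))

    # One representative vertex (the centroid) per cluster.
    new_index = {key: n for n, key in enumerate(cluster_pts)}
    new_vertices = [tuple(sum(coord) / len(pts) for coord in zip(*pts))
                    for pts in cluster_pts.values()]

    new_triangles = []
    for a, b, c in triangles:
        ia, ib, ic = (new_index[cluster_of[v]] for v in (a, b, c))
        if len({ia, ib, ic}) == 3:   # drop triangles collapsed by clustering
            new_triangles.append((ia, ib, ic))
    return new_vertices, new_triangles

# Example: a unit quad split into two triangles collapses to a single point
# when the cell is larger than the quad, leaving no triangles.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
tris = [(0, 1, 2), (0, 2, 3)]
print(simplify(verts, tris, cell=2.0))
```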
Abstract:
Poor student engagement and high failure rates in first-year units were addressed at the Queensland University of Technology (QUT) with a course restructure involving a fresh approach to introducing programming. Students' first taste of programming in the new course focused less on the language and syntax, and more on problem solving and design and the role of programming in relation to other technologies they are likely to encounter in their studies. In effect, several technologies that have historically been compartmentalised and taught in isolation have been brought together as a breadth-first introduction to IT. Incorporating databases and Web development technologies into what used to be a purely programming unit gave students a very short introduction to each technology, with programming acting as the glue between them. As a result, students not only had a clearer understanding of the application of programming in the real world, but were also able to determine their preference or otherwise for each of the technologies introduced, which will help them when the time comes to choose a course major. Students engaged well in an intensely collaborative learning environment for this unit, which was designed both to support the needs of students and to meet industry expectations. Attrition from the unit was low, with computer laboratory practical attendance rates remaining high throughout the semester for the first time, and the failure rate falling to a single-figure percentage.
Abstract:
This essay, part of a special issue on the work of Gunther Kress, uses the idea of affordances and constraints to explore the (im)possibilities of new environments for engaging with literature written for children (see Kress, 2003). In particular, it examines a festival of children's literature from an Australian education context that occurs online. The festival is part of a technologically mediated library space designated by the term libr@ry (Kapitzke & Bruce, 2006). The @ symbol (French word "arobase") inserted into the word library indicates that technological mediation has a history, an established set of social practices, and a political economy, which even chatrooms with "real" authors may alter but not fully supplant.
Abstract:
In the 21st century, our global community is changing to increasingly value creativity and innovation as driving forces in our lives. This paper will investigate how educators need to move beyond the rhetoric to effective practices for teaching and fostering creativity. First, it will describe the nature of creativity at different levels, with a focus on personal and everyday creativity. It will then provide a brief snapshot of creativity in education through the lens of new policies and initiatives in Queensland, Australia. Next it will review two significant areas related to enriching and enhancing students’ creative engagement and production: 1) influential social and environmental factors; and 2) creative self-efficacy. Finally, this paper will propose that to effectively promote student creativity in schools, we need to not only emphasise policy, but also focus on establishing a shared discourse about the nature of creativity, and researching and implementing effective practices for supporting and fostering creativity. This paper has implications for educational policy, practice and teacher training that are applicable internationally.
Abstract:
This paper examines the enabling effect of using blended learning and synchronous internet-mediated communication technologies to improve learning and develop a Sense of Community (SOC) in a group of post-graduate students consisting of a mix of on-campus and off-campus students. Both quantitative and qualitative data collected over a number of years support the assertion that the blended learning environment enhanced both teaching and learning. The development of a SOC was pivotal to the success of the blended approach when working with geographically isolated groups within a single learning environment.
Theoretical and numerical investigation of plasmon nanofocusing in metallic tapered rods and grooves
Abstract:
Effective focusing of electromagnetic (EM) energy to nanoscale regions is one of the major challenges in nano-photonics and plasmonics. The strong localization of optical energy into regions much smaller than allowed by the diffraction limit, also called nanofocusing, offers promising applications in nano-sensor technology, nanofabrication, near-field optics and spectroscopy. One of the most promising solutions to the problem of efficient nanofocusing is related to surface plasmon propagation in metallic structures. Metallic tapered rods, commonly used as probes in near-field microscopy and spectroscopy, are of particular interest. They can provide very strong EM field enhancement at the tip due to surface plasmons (SPs) propagating towards the tip of the tapered metal rod. A large number of studies have been devoted to the manufacturing process of tapered rods or tapered fibres coated with a metal film. On the other hand, structures such as metallic V-grooves or metal wedges can also provide strong electric field enhancement, but manufacturing these structures is still a challenge. It has been shown, however, that the attainable electric field enhancement at the apex of a V-groove is higher than at the tip of a metal tapered rod when the dissipation level in the metal is strong. Metallic V-grooves also have very promising characteristics as plasmonic waveguides. This thesis will present a thorough theoretical and numerical investigation of nanofocusing during plasmon propagation along a metal tapered rod and into a metallic V-groove. Optimal structural parameters, including the optimal taper angle, taper length and shape of the taper, are determined in order to achieve maximum field enhancement factors at the tip of the nanofocusing structure. An analytical investigation of plasmon nanofocusing by metal tapered rods is carried out by means of the geometric optics approximation (GOA), also called adiabatic nanofocusing. However, GOA is applicable only to tapered structures with small taper angles, and only when the terminating tip structure is ignored so that reflections can be neglected. Rigorous numerical methods are employed for analysing non-adiabatic nanofocusing by tapered rods and V-grooves with larger taper angles and rounded tips. These structures cannot be studied by analytical methods due to the presence of waves reflected from the taper section, the tip and also from (artificial) computational boundaries. A new method is introduced that combines the advantages of GOA and rigorous numerical methods in order to significantly reduce the use of computational resources while still achieving accurate results for the analysis of large tapered structures within reasonable calculation time. A detailed comparison between GOA and rigorous numerical methods will be carried out in order to find the critical taper angle of the tapered structures at which GOA is still applicable. It will be demonstrated that the optimal taper angles, at which maximum field enhancements occur, coincide with the critical angles at which GOA is still applicable. It will be shown that the applicability of GOA can thus be substantially expanded to include structures which could previously be analysed only by numerical methods. The influence of the rounded tip, the taper angle and dissipation on the plasmon field distribution along the tapered rod and near the tip will be analysed analytically and numerically in detail.
It will be demonstrated that electric field enhancement factors of up to ~2500 within nanoscale regions are predicted. These are sufficient, for instance, to detect single molecules using surface-enhanced Raman spectroscopy (SERS) with the tip of a tapered rod, an approach also known as tip-enhanced Raman spectroscopy (TERS). The results obtained in this project will be important for applications in which strong local field enhancement factors are crucial for the performance of devices such as near-field microscopes and spectrometers. The optimal design of nanofocusing structures, for which the delivery of electromagnetic energy to the nanometer region is most efficient, will lead to new applications in near-field sensors, near-field measurement technology, and the generation of nanometer-sized energy sources. These include applications in tip-enhanced Raman spectroscopy (TERS); manipulation of nanoparticles and molecules; efficient coupling of optical energy into and out of plasmonic circuits; second-harmonic generation in non-linear optics; and delivery of energy to quantum dots, for instance for quantum computation.
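For readers unfamiliar with the geometric optics approximation mentioned above, the following LaTeX fragment sketches the standard WKB-style ansatz in generic notation; it is an illustrative summary of the adiabatic picture, not the thesis's own derivation.

```latex
% Generic GOA/WKB ansatz (illustrative notation, not the thesis's derivation):
% along the taper axis z the plasmon field is a slowly varying amplitude
% times an accumulated phase,
E(z) \;\approx\; A(z)\,\exp\!\left( i \int_{0}^{z} q(z')\,\mathrm{d}z' \right),
% where q(z) is the local surface-plasmon wavenumber of a rod of radius R(z).
% The approximation holds while the adiabatic parameter stays small,
\delta(z) \;=\; \left| \frac{\mathrm{d}}{\mathrm{d}z}\, q(z)^{-1} \right| \;\ll\; 1,
% which is why GOA is restricted to small taper angles and breaks down near
% a rounded tip, where reflected waves must be treated numerically.
```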
Abstract:
This thesis investigates the problem of robot navigation using only landmark bearings. The proposed system allows a robot to move to a ground target location specified by the sensor values observed at this ground target position. The control actions are computed based on the difference between the current landmark bearings and the target landmark bearings. No Cartesian coordinates with respect to the ground are computed by the control system. The robot navigates using solely information from the bearing sensor space. Most existing robot navigation systems require a ground frame (2D Cartesian coordinate system) in order to navigate from a ground point A to a ground point B. Commonly used sensors such as laser range scanners, sonar, infrared, and vision do not directly provide the 2D ground coordinates of the robot. Existing systems use the sensor measurements to localise the robot with respect to a map, a set of 2D coordinates of the objects of interest. It is more natural to navigate between the points in the sensor space corresponding to A and B without requiring the Cartesian map and the localisation process. Research on animals has revealed how insects are able to exploit very limited computational and memory resources to successfully navigate to a desired destination without computing Cartesian positions. For example, a honeybee balances the left and right optical flows to navigate in a narrow corridor. Unlike many other ants, Cataglyphis bicolor does not secrete pheromone trails in order to find its way home, but instead uses the sun as a compass to keep track of its home direction vector. The home vector can be inaccurate, so the ant also uses landmark recognition. More precisely, it takes snapshots and compass headings of some landmarks. To return home, the ant tries to line up the landmarks exactly as they were before it started wandering. This thesis introduces a navigation method based on reflex actions in sensor space. The sensor vector is made of the bearings of some landmarks, and the reflex action is a gradient descent with respect to the distance in sensor space between the current sensor vector and the target sensor vector. Our theoretical analysis shows that, except for some fully characterized pathological cases, any point is reachable from any other point by reflex action in the bearing sensor space, provided the environment contains three landmarks and is free of obstacles. The trajectories of a robot using reflex navigation, like those of other image-based visual control strategies, do not necessarily correspond to the shortest paths on the ground, because it is the sensor error that is minimized, not the distance moved on the ground. However, we show that the use of a sequence of waypoints in sensor space can address this problem. In order to identify relevant waypoints, we train a Self Organising Map (SOM) from a set of observations uniformly distributed with respect to the ground. This SOM provides a sense of location to the robot, and allows a form of path planning in sensor space. The proposed navigation system is analysed theoretically, and evaluated both in simulation and with experiments on a real robot.
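The reflex action described in this abstract, gradient descent on the sensor-space distance between current and target bearings, can be sketched in a few lines of Python. The landmark layout, gain and finite-difference gradient below are illustrative assumptions; landmark coordinates are used only to simulate the bearing sensor, never by the controller itself.

```python
# Illustrative sketch only: the "reflex action" idea described above, i.e.
# gradient descent on the distance in bearing-sensor space between the
# current and the target bearing vectors. Landmark coordinates are used only
# to SIMULATE the bearing sensor; the controller itself never uses them.
import math

LANDMARKS = [(0.0, 10.0), (10.0, 0.0), (-8.0, -6.0)]   # three landmarks, obstacle-free plane

def bearings(pos):
    """Simulated bearing sensor: angle to each landmark from position `pos`."""
    x, y = pos
    return [math.atan2(ly - y, lx - x) for lx, ly in LANDMARKS]

def sensor_error(pos, target_bearings):
    """Squared distance in sensor space, with angle differences wrapped to [-pi, pi)."""
    def wrap(a):
        return (a + math.pi) % (2 * math.pi) - math.pi
    return sum(wrap(b - t) ** 2 for b, t in zip(bearings(pos), target_bearings))

def reflex_step(pos, target_bearings, gain=2.0, eps=1e-4):
    """One reflex action: numerical gradient descent on the sensor-space error."""
    x, y = pos
    e0 = sensor_error(pos, target_bearings)
    gx = (sensor_error((x + eps, y), target_bearings) - e0) / eps
    gy = (sensor_error((x, y + eps), target_bearings) - e0) / eps
    return (x - gain * gx, y - gain * gy)

if __name__ == "__main__":
    target = bearings((5.0, 5.0))     # record the bearings observed at the goal
    pos = (-4.0, -2.0)
    for _ in range(200):
        pos = reflex_step(pos, target)
    print("final position:", pos)     # should drift toward (5, 5) for this layout
```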
Abstract:
The relationship between multiple cameras viewing the same scene may be discovered automatically by finding corresponding points in the two views and then solving for the camera geometry. In camera networks with sparsely placed cameras, low-resolution cameras, or scenes with few distinguishable features, it may be difficult to find a sufficient number of reliable correspondences from which to compute the geometry. This paper presents a method for extracting a larger number of correspondences from an initial set of putative correspondences without any knowledge of the scene or camera geometry. The method may be used to increase the number of correspondences and make geometry computation possible in cases where existing methods produce insufficient correspondences.
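For context, a minimal OpenCV sketch of the standard two-view pipeline the abstract starts from (putative matches, then camera geometry via the fundamental matrix) is shown below. The file names are hypothetical, and the paper's actual contribution, growing the correspondence set without knowledge of the geometry, is not reproduced here.

```python
# Illustrative sketch only: the standard two-view pipeline the abstract starts
# from (putative matches -> camera geometry via the fundamental matrix). It
# shows the step that fails when too few reliable matches are available.
import cv2
import numpy as np

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)                      # putative correspondences

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# At least 8 reliable correspondences are needed for the 8-point method;
# sparse cameras, low resolution or featureless scenes often leave too few.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
inliers = int(inlier_mask.sum()) if inlier_mask is not None else 0
print("putative matches:", len(matches), "inliers:", inliers)
```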
Abstract:
Participatory design has the moral and pragmatic tenet of including those who will be most affected by a design in the design process. However, good participation is hard to achieve, and results linking project success and degree of participation are inconsistent. Through three case studies examining some of the challenges that different properties of knowledge (novelty, difference, dependence) can impose on the participatory endeavour, we examine some of the consequences for the participatory process of failing to bridge knowledge boundaries (syntactic, semantic, and pragmatic). One pragmatic consequence, disrupting the user's feeling of involvement in the project, has been suggested as a possible explanation for the inconsistent results linking participation and project success. To help address these issues, a new form of participatory research, called embedded research, is proposed and examined within the framework of the case studies and the knowledge framework, with a call for future research into its possibilities.
Abstract:
The Node-based Local Mesh Generation (NLMG) algorithm, which is free of mesh inconsistency, is one of the core algorithms in the Node-based Local Finite Element Method (NLFEM); it achieves a seamless link between mesh generation and stiffness matrix calculation, and this seamless link helps to improve the parallel efficiency of FEM. Furthermore, the key to ensuring the efficiency and reliability of NLMG is to determine the candidate satellite-node set of a central node quickly and accurately. This paper develops a Fast Local Search Method based on Uniform Bucket (FLSMUB) and a Fast Local Search Method based on Multilayer Bucket (FLSMMB), and applies them successfully to this decisive problem, i.e. presenting the candidate satellite-node set of any central node in the NLMG algorithm. Using FLSMUB or FLSMMB, the NLMG algorithm becomes a practical tool for reducing the parallel computation cost of FEM. Parallel numerical experiments validate that both FLSMUB and FLSMMB are fast, reliable and efficient for their suitable problems, and that they are especially effective for large-scale parallel problems.
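The uniform-bucket idea behind FLSMUB, as described above, amounts to hashing nodes into a regular grid of buckets so that the candidate satellite nodes of a central node can be gathered from its own and neighbouring buckets rather than from the whole node set. The following Python sketch illustrates that idea in 2D with hypothetical names and cell sizes; it is not the paper's exact algorithm.

```python
# Illustrative sketch only: a uniform-bucket spatial hash in the spirit of
# FLSMUB as described above. Nodes are hashed into grid cells so the
# candidate satellite-node set of a central node can be collected from its
# own and neighbouring buckets instead of scanning all nodes. Cell size,
# names and the 2D setting are assumptions for illustration.
from collections import defaultdict

class UniformBuckets:
    def __init__(self, nodes, cell):
        self.cell = cell
        self.nodes = nodes                      # node id -> (x, y)
        self.buckets = defaultdict(list)        # bucket key -> node ids
        for nid, (x, y) in nodes.items():
            self.buckets[self._key(x, y)].append(nid)

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def candidates(self, central_id, radius):
        """Candidate satellite nodes within `radius` of the central node."""
        cx, cy = self.nodes[central_id]
        kx, ky = self._key(cx, cy)
        reach = int(radius // self.cell) + 1    # how many bucket rings to inspect
        found = []
        for i in range(kx - reach, kx + reach + 1):
            for j in range(ky - reach, ky + reach + 1):
                for nid in self.buckets.get((i, j), ()):
                    x, y = self.nodes[nid]
                    if nid != central_id and (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                        found.append(nid)
        return found

# Example: nearby nodes of node "a" are found without a full scan.
grid = UniformBuckets({"a": (0.1, 0.1), "b": (0.4, 0.2), "c": (5.0, 5.0)}, cell=1.0)
print(grid.candidates("a", radius=1.0))   # -> ['b']
```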
Abstract:
We extended an earlier study (Vision Research, 45, 1967–1974, 2005) in which we investigated the limits at which induced blur of letter targets becomes noticeable, troublesome and objectionable. Here we used a deformable adaptive-optics mirror to vary spherical defocus under three conditions: a white background with correction of astigmatism; a white background with reduction of all aberrations other than defocus; and a monochromatic background with reduction of all aberrations other than defocus. We used seven cyclopleged subjects, lines of three high-contrast letters as targets, 3–6 mm artificial pupils, and 0.1–0.6 logMAR letter sizes. Subjects used a method of adjustment to control the defocus component of the mirror to set the 'just noticeable', 'just troublesome' and 'just objectionable' defocus levels. For the white background without adaptive optics, combined with the 0.1 logMAR letter size, mean 'noticeable' blur limits were ±0.30, ±0.24 and ±0.23 D at 3, 4 and 6 mm pupils, respectively. The white-background and monochromatic-background adaptive-optics conditions reduced blur limits by 8% and 20%, respectively. Increasing pupil size from 3 to 6 mm decreased blur limits by 29%, and increasing letter size increased blur limits by 79%. The ratios of troublesome to noticeable, and of objectionable to noticeable, blur limits were 1.9 and 2.7, respectively. The study shows that the deformable mirror can be used to vary defocus in vision experiments. Overall, the results for noticeable, troublesome and objectionable blur agreed well with those of the previous study. Attempting to reduce higher-order aberrations or chromatic aberrations reduced blur limits only to a small extent.
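A brief, hedged aside on why the blur limits shrink with pupil size, using standard geometric-optics reasoning that is not taken from the paper:

```latex
% For defocus \Delta F (in dioptres) and pupil diameter p (in metres), the
% angular diameter of the retinal blur disc is approximately
\beta \;\approx\; p\,\Delta F \quad \text{(radians)},
% so a fixed perceptual blur criterion is reached at a smaller \Delta F when
% the pupil is larger, which is the direction of the 29% reduction reported
% above for the change from 3 mm to 6 mm pupils.
```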
Abstract:
Recommender systems are widely used online to help users find products and items that they may be interested in, based on what is known about each user from their profile. Often, however, user profiles contain little information, and when there is insufficient knowledge about a user it is difficult for a recommender system to make quality recommendations. This problem is often referred to as the cold-start problem. Here we investigate whether association rules can be used as a source of information to expand a user profile and thus avoid this problem, leading to improved recommendations to users. Our pilot study shows that it is indeed possible to use association rules to improve the performance of a recommender system. We believe this can lead to further work on utilising appropriate association rules to lessen the impact of the cold-start problem.
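The profile-expansion idea can be illustrated with a toy Python sketch: association rules mined from other users are applied to a sparse profile before recommendation. The rules, items and confidence threshold below are invented for illustration and do not reproduce the pilot study.

```python
# Illustrative sketch only: expanding a sparse (cold-start) user profile with
# association rules before recommending. Rules, items and the confidence
# threshold are made up; this is not the pilot study described above.

# Rules mined offline from other users' histories:
# antecedent itemset -> (consequent item, confidence)
RULES = [
    ({"laptop"}, ("laptop_bag", 0.8)),
    ({"laptop", "mouse"}, ("usb_hub", 0.6)),
    ({"camera"}, ("sd_card", 0.9)),
]

def expand_profile(profile, rules, min_conf=0.7):
    """Add the consequent of every sufficiently confident rule whose antecedent holds."""
    expanded = set(profile)
    for antecedent, (consequent, confidence) in rules:
        if antecedent <= expanded and confidence >= min_conf:
            expanded.add(consequent)
    return expanded

cold_start_user = {"laptop"}                       # very little is known about this user
expanded = expand_profile(cold_start_user, RULES)  # now also contains "laptop_bag"
print("expanded profile:", sorted(expanded))
# The expanded profile is then passed to the usual recommender in place of the
# sparse original, giving it more evidence to work from.
```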