828 results for Feature spaces
Abstract:
Location based games (LBGs) provide an opportunity to look at how new technologies can support a reciprocal relationship between formal classroom learning and learning that can potentially occur in other everyday environments. Fundamentally, many games are intensely engaging due to the social interactions and technical challenges they provide to individual and group players. By introducing mobile devices we can transport these characteristics of games into everyday spaces. LBGs are understood as a broad genre incorporating ideas and tools that provide many unique opportunities to reveal, create and even subvert various social, cultural, technical, and scientific interpretations of place, in particular places where learning is sometimes problematic. A team of Queensland game developers has learnt a great deal through designing a range of LBGs, such as SCOOT, for various user groups and places. While these LBGs were primarily designed as social events, we found that the players recognised and valued the game as an opportunity to learn about their environment, its history, cultural significance, inhabitants, services, etc. Since identifying the strong pedagogical outcomes of LBGs, the team has created a set of authoring tools for people to design and host their own LBGs. A particular version of this is known as MiLK, the mobile learning kit for schools. This presentation will include examples of how LBGs have been used to improve teaching and learning outcomes in various contexts. Participants will be introduced to MiLK and invited to trial it in their own classrooms with students.
Abstract:
Global trends call for new research to investigate multimodal designing mediated by new technologies and the implications for classroom spaces. This article addresses the relationship between new technologies, students’ multimodal designing, and the social production of classroom spaces. Multimodal semiotics and sociological principles are applied to a series of claymation movie-making lessons in an upper primary school in Australia. The analysis focuses on the social meanings embedded in the multimodal spaces of the classroom—dialogic, bodily, embodied, architectonic, and screen spaces. The findings demonstrate how the uses of new technologies and the students’ multimodal learning were tied to important transformations of space.
Abstract:
The programming and retasking of sensor nodes could benefit greatly from the use of a virtual machine (VM) since byte code is compact, can be loaded on demand, and interpreted on a heterogeneous set of devices. The challenge is to ensure good programming tools and a small footprint for the virtual machine to meet the memory constraints of typical WSN platforms. To this end we propose Darjeeling, a virtual machine modelled after the Java VM and capable of executing a substantial subset of the Java language, but designed specifically to run on 8- and 16-bit microcontrollers with 2 - 10 KB of RAM. The Darjeeling VM uses a 16- rather than a 32-bit architecture, which is more efficient on the targeted platforms. Darjeeling features a novel memory organisation with strict separation of reference from non-reference types which eliminates the need for run-time type inspection in the underlying compacting garbage collector. Darjeeling uses a linked stack model that provides light-weight threads, and supports synchronisation. The VM has been implemented on three different platforms and was evaluated with micro benchmarks and a real-world application. The latter includes a pure Java implementation of the collection tree routing protocol conveniently programmed as a set of cooperating threads, and a reimplementation of an existing environmental monitoring application. The results show that Darjeeling is a viable solution for deploying large-scale heterogeneous sensor networks. Copyright 2009 ACM.
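The strict separation of reference from non-reference types described above can be illustrated with a toy model: values of the two kinds live in separate stacks, so the garbage collector finds its roots by scanning the reference stack alone, with no per-slot type tags. This is a sketch in Python with hypothetical names; Darjeeling itself is implemented in C for 8- and 16-bit microcontrollers.

```python
class DualStack:
    """Toy model of a split operand stack: reference and
    non-reference (integer) slots are kept in separate arrays,
    so a compacting garbage collector can scan ref_stack
    directly without run-time type inspection."""

    def __init__(self):
        self.int_stack = []   # non-reference (short/int) slots
        self.ref_stack = []   # reference slots only

    def push_int(self, value):
        self.int_stack.append(value)

    def push_ref(self, obj):
        self.ref_stack.append(obj)

    def gc_roots(self):
        # The collector walks only the reference stack; every
        # entry is known to be a reference by construction.
        return list(self.ref_stack)
```

The design choice this models is that type information is encoded in *where* a value is stored rather than in a tag attached to the value, which saves both RAM and collector time on constrained devices.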
Abstract:
This paper examines the opportunities for social activities in public outdoor spaces associated with high-density residential living. This study surveyed activities in outdoor spaces outside three high-density residential communities in Brisbane. Results indicated that activity patterns in public outdoor space outside residential communities differ from those in general urban public outdoor space. This broadly, but not fully, supports current theories concerning activities in public space: some environmental factors affect the level of social interaction. The relationship between outdoor space and a residential building may have a significant impact on the level of social activities. As a consequence, a new classification of activities in public space is suggested. In improving the level of social contact in public outdoor space outside a residential community, the challenge is how to encourage people to leave their comfortable homes and spend a short time in these public spaces. For residential buildings and public space to be treated as an integrated whole, the outdoor open spaces close to and surrounding these buildings must have a more welcoming design.
Abstract:
Wide-angle images exhibit significant distortion for which existing scale-space detectors such as the scale-invariant feature transform (SIFT) are inappropriate. The required scale-space images for feature detection are correctly obtained through the convolution of the image, mapped to the sphere, with the spherical Gaussian. A new visual key-point detector, based on this principle, is developed and several computational approaches to the convolution are investigated in both the spatial and frequency domain. In particular, a close approximation is developed that has comparable computation time to conventional SIFT but with improved matching performance. Results are presented for monocular wide-angle outdoor image sequences obtained using fisheye and equiangular catadioptric cameras. We evaluate the overall matching performance (recall versus 1-precision) of these methods compared to conventional SIFT. We also demonstrate the use of the technique for variable frame-rate visual odometry and its application to place recognition.
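The conventional planar scale space that the paper uses as its baseline can be sketched as a stack of Gaussian-blurred images and their successive differences (difference of Gaussians). The spherical variant proposed in the abstract replaces this planar convolution with convolution of the sphere-mapped image against a spherical Gaussian, which is not reproduced here. A minimal numpy sketch of the planar baseline, with all function names hypothetical:

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D normalised Gaussian, truncated at 3 sigma
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    # separable 2-D Gaussian: filter rows, then columns
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def dog_pyramid(img, sigma0=1.6, levels=4, k=2**0.5):
    """Difference-of-Gaussians stack: adjacent Gaussian scales
    are subtracted; extrema of the result are SIFT-style
    key-point candidates."""
    gauss = [blur(img, sigma0 * k**i) for i in range(levels)]
    return [gauss[i + 1] - gauss[i] for i in range(levels - 1)]
```

On a wide-angle image this planar blur is exactly what distorts the detector: a fixed kernel covers very different solid angles near the image centre and periphery, which is why the paper performs the convolution on the sphere instead.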
Abstract:
Today’s evolving networks are experiencing a large number of different attacks, ranging from system break-ins, infection from automatic attack tools such as worms, viruses and Trojan horses, to denial of service (DoS). One important aspect of such attacks is that they are often indiscriminate and target Internet addresses without regard to whether they are bona fide allocated or not. Due to the absence of any advertised host services, the traffic observed on unused IP addresses is by definition unsolicited and likely to be either opportunistic or malicious. The analysis of large repositories of such traffic can be used to extract useful information about both ongoing and new attack patterns and unearth unusual attack behaviors. However, such an analysis is difficult due to the size and nature of the collected traffic on unused address spaces. In this dissertation, we present a network traffic analysis technique which uses traffic collected from unused address spaces and relies on the statistical properties of the collected traffic in order to accurately and quickly detect new and ongoing network anomalies. Detection of network anomalies is based on the concept that an anomalous activity usually transforms the network parameters in such a way that their statistical properties no longer remain constant, resulting in abrupt changes. In this dissertation, we use sequential analysis techniques to identify changes in the behavior of network traffic targeting unused address spaces to unveil both ongoing and new attack patterns. Specifically, we have developed a dynamic sliding-window-based non-parametric cumulative sum (CUSUM) change detection technique for the identification of changes in network traffic. Furthermore, we have introduced dynamic thresholds to detect changes in network traffic behavior and to detect when a particular change has ended.
Experimental results are presented that demonstrate the operational effectiveness and efficiency of the proposed approach, using both synthetically generated datasets and real network traces collected from a dedicated block of unused IP addresses.
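The detection scheme described above can be sketched in miniature: a one-sided non-parametric CUSUM score accumulates standardised deviations from a reference level re-estimated over a sliding window, and an alarm fires when the score crosses a threshold. This is an illustrative sketch only; the dissertation's actual statistic, window mechanics, and threshold adaptation are not reproduced here, and all parameter values are hypothetical.

```python
import numpy as np

def np_cusum(x, window=50, drift=0.5, h_factor=5.0):
    """Sliding-window non-parametric CUSUM change detector (sketch).
    The reference mean and scale are re-estimated from the most
    recent `window` samples, so detection adapts to traffic level."""
    s = 0.0
    alarms = []
    for t, v in enumerate(x):
        past = x[max(0, t - window):t]
        if len(past) < 5:            # need a minimal history first
            continue
        mu = np.mean(past)
        sd = np.std(past) + 1e-9     # guard against zero variance
        # one-sided CUSUM: accumulate positive standardised excess
        s = max(0.0, s + (v - mu) / sd - drift)
        if s > h_factor:             # adaptive threshold crossed
            alarms.append(t)
            s = 0.0                  # reset after each detection
    return alarms
```

On a synthetic trace with an abrupt level shift, the detector raises its first alarm at the change point; the `drift` term suppresses slow wander, trading detection delay for a lower false-alarm rate.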
Abstract:
When the colonisers first came to Australia there was an urgent desire to map, name and settle. This desire, in part, stemmed from a fear of the unknown. Once these tasks were completed it was thought that a sense of identity and belonging would automatically come. In Anglo-Australian geography the map of Australia was always perceived in relationship to the larger map of Europe and Britain. The quicker Australia could be mapped, the quicker its connection with the ‘civilised’ world could be established. Official maps could be taken up in official history books and a detailed monumental history could begin. Australians would feel secure in where they were placed in the world. However, this was not the case and anxieties about identity and belonging remained. One of the biggest hurdles was the fear of the open spaces and not knowing how to move across the land. Attempts to transpose the colonisers’ use of space onto the Australian landscape did not work and led to confusion. Using authors who are often perceived as writers of national fictions (Henry Lawson, Barbara Baynton, Patrick White, David Malouf and Peter Carey), I will reveal how writing about space becomes a way to create a sense of belonging. It is through spatial knowledge and its application that we begin to gain a sense of closeness and identity. I will also look at how one of the greatest fears for the colonisers was the Aboriginal spatial command of the country. Aborigines already had a strongly developed awareness of spatial belonging, and their stories reveal this authority (seen in the work of Lorna Little and Mick McLean). Colonisers attempted to discredit this knowledge but the stories and the land continue to recognise its legitimacy. From its beginning, Australian spaces have been spaces of hybridity, and the more the colonisers attempted to force predetermined structures onto these spaces the more hybrid they became.
Abstract:
Background: Some dialysis patients fail to comply with their fluid restriction, causing problems due to volume overload. These patients sometimes blame excessive thirst. There has been little work in this area and no work documenting polydipsia among peritoneal dialysis (PD) patients. Methods: We measured motivation to drink and fluid consumption in 46 haemodialysis (HD) patients, 39 PD patients and 42 healthy controls (HC) using a modified palmtop computer to collect visual analogue scores at hourly intervals. Results: Mean thirst scores were markedly depressed on the dialysis day (day 1) for HD (P < 0.0001). The profile for day 2 was similar to that of HC. PD generated consistently higher scores than HD day 1 and HC (P = 0.01 vs. HC and P < 0.0001 vs. HD day 1). Reported mean daily water consumption was similar for HD and PD, with both significantly less than HC (P < 0.001 for both). However, measured fluid losses were similar for PD and HC whilst HD were lower (P < 0.001 for both), suggesting that the PD group may have underestimated their fluid intake. Conclusion: Our results indicate that HD causes a protracted period of reduced thirst but that the population's thirst perception is similar to HC on the interdialytic day despite a reduced fluid intake. In contrast, the PD group recorded high thirst scores throughout the day and were apparently less compliant with their fluid restriction. This is potentially important because the volume status of PD patients influences their survival.
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of information packing performance of several decompositions, two-dimensional power spectral density, effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, the lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, their objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for a reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
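The abstract fits the generalized Gaussian model by a least-squares procedure on a nonlinear function of the shape parameter. As an illustrative stand-in (not the thesis's algorithm), the classical moment-matching approach solves a monotone nonlinear equation: the ratio E|x| / sqrt(E x²) of a zero-mean generalized Gaussian is a strictly increasing function of the shape parameter β, so β can be recovered from sample moments by bisection.

```python
import math
import numpy as np

def ggd_ratio(beta):
    # Theoretical E|x| / sqrt(E x^2) for a zero-mean generalized
    # Gaussian with shape parameter beta:
    #   r(beta) = Gamma(2/beta) / sqrt(Gamma(1/beta) * Gamma(3/beta))
    # beta = 2 gives the Gaussian (r ~ 0.798), beta = 1 the Laplacian.
    return math.gamma(2 / beta) / math.sqrt(
        math.gamma(1 / beta) * math.gamma(3 / beta))

def estimate_shape(x, lo=0.2, hi=5.0, iters=60):
    """Moment-matching estimate of the GGD shape parameter via
    bisection on the monotone ratio function (a sketch standing in
    for the least-squares fit described in the abstract)."""
    target = np.mean(np.abs(x)) / math.sqrt(np.mean(x**2))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Applied to wavelet subband coefficients, the estimated β typically falls well below 2, which is what motivates modelling them with a generalized Gaussian rather than an ordinary Gaussian before allocating bits.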
Abstract:
This thesis introduces the problem of conceptual ambiguity, or Shades of Meaning (SoM), that can exist around a term or entity. As an example, consider Ronald Reagan, the former president of the USA: there are many aspects to him that are captured in text, such as the Russian missile deal, the Iran-contra deal and others. Simply finding documents with the word “Reagan” in them is going to return results that cover many different shades of meaning related to “Reagan”. Instead it may be desirable to retrieve results around a specific shade of meaning of “Reagan”, e.g., all documents relating to the Iran-contra scandal. This thesis investigates computational methods for identifying shades of meaning around a word, or concept. This problem is related to word sense ambiguity, but is more subtle: it depends less on the particular syntactic structures associated with an instance of the term and more on the semantic contexts around it. A particularly noteworthy difference from typical word sense disambiguation is that the shades of a concept are not known in advance; it is up to the algorithm itself to ascertain these subtleties. The key hypothesis of this thesis is that reducing the number of dimensions in the representation of concepts is a key part of reducing sparseness, and thus also crucial in discovering their SoM within a given corpus.
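The dimensionality-reduction hypothesis can be illustrated with a toy LSA-style example: each occurrence of one ambiguous term is represented by a vector of context-word counts, the vectors are reduced with a truncated SVD, and occurrences sharing a shade of meaning end up close together in the reduced space. All data here are hypothetical, and this is only a sketch of the general technique, not the thesis's method.

```python
import numpy as np

# Toy occurrence-by-context matrix for one ambiguous term ("Reagan"):
# rows = individual occurrences, columns = counts of nearby context
# words (hypothetical corpus statistics).
context_words = ["iran", "contra", "scandal", "missile", "russia", "treaty"]
X = np.array([
    [3, 2, 4, 0, 0, 0],   # occurrence in an Iran-contra document
    [2, 3, 3, 0, 1, 0],   # occurrence in an Iran-contra document
    [0, 0, 1, 4, 3, 2],   # occurrence in a missile-deal document
    [0, 1, 0, 3, 4, 3],   # occurrence in a missile-deal document
], dtype=float)

# Rank-2 truncated SVD: project each occurrence into a dense
# 2-dimensional space, discarding sparse high-order structure.
U, sv, Vt = np.linalg.svd(X, full_matrices=False)
Z = U[:, :2] * sv[:2]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Occurrences sharing a shade of meaning are close in the reduced
# space; occurrences from different shades are not.
same_shade = cosine(Z[0], Z[1])
diff_shade = cosine(Z[0], Z[2])
```

Clustering the reduced occurrence vectors (rather than relying on predefined senses) is one way an algorithm could discover the shades of a concept on its own, which matches the requirement above that the shades are not known in advance.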
Abstract:
While the Queensland and Australian Governments have recognised the importance of new spaces for teaching and learning, particularly with the Rudd Government's Building the Education Revolution, the practical implementation of new spaces is largely left to schools and even individual teachers. This article proposes a theory for the consideration of 21st century learning spaces in relation to the learner, desired knowledge and understanding, digital technology and digital pedagogy. New and emerging learning spaces at Bounty Boulevard State School are analysed and critiqued through an analysis of the guiding principles offered by the 'Learning in an Online World: Learning Spaces Framework' (MCEETYA, 2008) publication, including flexibility, inclusivity, collaboration, creativity and efficiency. The argument put forward in this article is that 21st century learning spaces can be enabled while acknowledging barriers of resourcing and current ICT infrastructure.