904 results for backward mapping


Relevance:

60.00%

Publisher:

Abstract:

One of the ways in which university departments and faculties can enhance the quality of learning and assessment is to develop a ‘well thought out criterion-referenced assessment system’ (Biggs, 2003, p. 271). In designing undergraduate degrees (courses) this entails making decisions about the levelling of expectations across different years through devising objectives and their corresponding criteria and standards: a process of alignment analogous to what happens in unit (subject) design. These decisions about levelling have important repercussions for supporting students’ work-related learning, especially in relation to their ability to cope with the increasing cognitive and skill demands made on them as they progress through their studies. They also affect the accountability of teacher judgments of students’ responses to assessment tasks, achievement of unit objectives and, ultimately, whether students are awarded their degrees and are sufficiently prepared for the world of work. Research reveals that this decision-making process is rarely underpinned by an explicit educational rationale (Morgan et al., 2002). The decision to implement criterion-referenced assessment in an undergraduate microbiology degree was the impetus for developing such a rationale, because of the implications for alignment and therefore for the levelling of expectations across different years of the degree. This paper provides supporting evidence for a multi-pronged approach to levelling, through backward mapping of two revised units (foundation and exit year). This approach adheres to the principles of alignment while combining a work-related approach (via industry input) with the blended disciplinary and learner-centred approaches proposed by Morgan et al. (2002). It is suggested that this multi-pronged approach has the potential to make expectations, especially work-related ones across the different year levels of a degree, more explicit to students and future employers.

Relevance:

60.00%

Publisher:

Abstract:

The implementation of a new national curriculum and standards-referenced assessment in Australia has been an opportunity and a challenge for teacher assessment practices. In this case study of teachers in two Queensland schools, we explore how annotating student or exemplar assessment tasks could support teacher assessment practice. Three learning conversations between the researchers and the teacher teams are interpreted through the lens of Bernstein’s (1999) horizontal and vertical discourses to understand the complexities of coming to know an assessment standard. The study contributes to the literature on the use of annotations by exploring how teachers negotiated the purposes and processes of annotation, how annotating student work or exemplars before teaching commenced supported teachers to experience greater clarity about assessment standards and, finally, some of the tensions experienced by the teachers as they considered this practice within the practicalities of their daily work.

Relevance:

60.00%

Publisher:

Abstract:

Assessment for Learning practices with students, such as feedback and self- and peer assessment, are opportunities for teachers and students to develop a shared understanding of how to create quality learning performances. Quality is often represented through achievement standards. This paper explores how primary school teachers in Australia used the process of annotating work samples to develop a shared understanding of achievement standards during their curriculum planning phase, and how this understanding informed their teaching so that their students also developed this understanding. Bernstein's concept of the pedagogic device is used to identify the ways teachers recontextualised their assessment knowledge into their pedagogic practices. Two researchers worked alongside seven primary school teachers in two schools over a year, gathering qualitative data through focus groups and interviews. Three general recontextualising approaches were identified in the case studies: recontextualising standards by reinterpreting the role of rubrics, recontextualising by replicating the annotation process with the students, and recontextualising by reinterpreting practices with students. While each approach had strengths and limitations, all of the teachers concluded that annotating conversations in the planning phase enhanced their understanding and informed their practices in helping students to understand expectations for quality.

Relevance:

60.00%

Publisher:

Abstract:

The thesis work was carried out at the System Ceramics division of System Group S.p.A. in Fiorano Modenese (MO), which develops solutions for the ceramics industry, including tile decoration. In ceramics factories, pieces are typically moved on a conveyor belt, and during transport they may shift slightly. If a piece is not aligned with the printer before the decoration stage, the print comes out misaligned and some areas along the edges of the piece may be left unprinted. It is therefore essential to correct the misalignment before decorating. The most common solution is to install guides at the entrance of the decoration system. Besides offering limited precision, this solution proves unsuitable when the decoration is applied in successive stages by different printers. The research and development department of System Ceramics therefore devised a different, innovative solution following the inverse approach: aligning the graphics to each piece in software, according to its pose, instead of physically changing the piece's position. The new printing process based on software alignment of the graphics first determines the pose of each tile using a computer vision system placed on the belt upstream of the printer. The graphics are then transformed according to the pose of the piece and applied once the piece reaches the printing area. The thesis work focused on the rotation stage of the graphics and consisted of studying and optimizing the existing application prototype in order to reduce its execution time. The prototype, although functional, had an execution time so high as to be incompatible with the production speeds adopted by ceramics factories.
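
The abstract describes the alignment in prose only. As a minimal sketch of the underlying idea (illustrative, not System Ceramics' code; the function name and the NumPy-based bilinear sampling are assumptions), the snippet below rotates a grayscale graphic by backward (inverse) mapping, so every destination pixel is computed by sampling back into the source and none is left unfilled:

```python
import numpy as np

def rotate_backward(src: np.ndarray, angle_rad: float) -> np.ndarray:
    """Rotate a grayscale image about its centre using backward mapping.

    For every destination pixel we apply the *inverse* rotation to find
    the corresponding source coordinates, then sample the source with
    bilinear interpolation. Pixels mapping outside the source stay 0.
    """
    h, w = src.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    cos_a, sin_a = np.cos(angle_rad), np.sin(angle_rad)

    # Destination pixel grid, centred on the image midpoint.
    ys, xs = np.mgrid[0:h, 0:w]
    yc, xc = ys - cy, xs - cx

    # Inverse rotation: take destination coords back into the source.
    sx = cos_a * xc + sin_a * yc + cx
    sy = -sin_a * xc + cos_a * yc + cy

    # Bilinear interpolation from the four neighbouring source pixels.
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    fx, fy = sx - x0, sy - y0
    valid = (x0 >= 0) & (x0 < w - 1) & (y0 >= 0) & (y0 < h - 1)
    x0v, y0v, fxv, fyv = x0[valid], y0[valid], fx[valid], fy[valid]

    out = np.zeros_like(src, dtype=np.float64)
    out[valid] = (
        src[y0v, x0v] * (1 - fxv) * (1 - fyv)
        + src[y0v, x0v + 1] * fxv * (1 - fyv)
        + src[y0v + 1, x0v] * (1 - fxv) * fyv
        + src[y0v + 1, x0v + 1] * fxv * fyv
    )
    return out
```

Iterating over destination pixels rather than source pixels is what makes the transform hole-free, and vectorizing it as above is one common way to cut execution time.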

Relevance:

60.00%

Publisher:

Abstract:

In December 1980, following increasing congressional and constituent interest in problems associated with hazardous waste, the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) was passed. During its development, the legislative initiative was seriously compromised, which resulted in a less exhaustive approach than was formerly sought. Still, CERCLA (Superfund), which established, among other things, authority to clean up abandoned waste dumps and to respond to emergencies caused by releases of hazardous substances, was welcomed by many as an important initial law critical to the cleanup of the nation's hazardous waste. Expectations raised by passage of this bill were tragically unmet. By the end of four years, only six sites had been declared by the EPA as cleaned. Seemingly, even those determinations were liberal; of the six sites, two were identified subsequently as requiring further cleanup.

This analysis is focused upon the implementation failure of the Superfund. In light of that focus, discussion encompasses development of linkages between flaws in the legislative language and foreclosure of chances for implementation success. Specification of such linkages is achieved through examination of the legislative initiative, identification of its flaws and characterization of attendant deficits in implementation ability. Subsequent analysis addresses how such legislative frailties might have been avoided, and the attendant regulatory weaknesses which have contributed to implementation failure. Each of these analyses is accomplished through application of an expanded approach to the backward mapping analytic technique as presented by Elmore. Results and recommendations follow.

Consideration is devoted to a variety of regulatory issues as well as to those pertinent to legislative and implementation analysis. Problems in assessing legal liability associated with hazardous waste management are presented, as is a detailed review of the legislative development of Superfund and its initial implementation by Gorsuch's EPA.

Relevance:

60.00%

Publisher:

Abstract:

The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has somewhat stagnated in recent years. Consequently, new computer vision algorithms will need to be parallel in order to run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPUs). These are devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them attractive for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. We show that by parallelizing their subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation.

First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usually used in 3D TV. By using a backward mapping approach with a depth inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. We also show that referring depth information to a global reference system is ill-advised and should be avoided.

Then, we propose a background subtraction system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes featuring multimodal backgrounds, but have seen little use due to their huge computational and memory complexity. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the position of reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared to other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal.

Finally, we propose a general method for approximating arbitrarily complex functions using continuous piecewise linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, normally unused for numerical computation. Our proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method for obtaining a near-optimal partition of the domain of the function to minimize the approximation error.
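
The closing contribution is easy to illustrate. As a minimal sketch (an illustration only, not the thesis implementation; `pwl_approx` and the use of NumPy's `np.interp` in place of GPU texture filtering are assumptions), the code below approximates a 1-D function by continuous piecewise linear interpolation over uniform samples, the operation a GPU texture unit performs in hardware, and reports the maximum approximation error:

```python
import numpy as np

def pwl_approx(f, lo, hi, n_samples):
    """Approximate f on [lo, hi] with a continuous piecewise linear
    interpolant over n_samples uniform samples, and return the
    interpolant together with its maximum error on a dense grid."""
    xs = np.linspace(lo, hi, n_samples)   # sample positions (texels)
    ys = f(xs)                            # stored samples

    def approx(x):
        # np.interp performs the same linear blend between neighbouring
        # samples that a GPU texture filtering unit applies for free.
        return np.interp(x, xs, ys)

    dense = np.linspace(lo, hi, 100_000)
    max_err = np.max(np.abs(f(dense) - approx(dense)))
    return approx, max_err

# For a smooth function the error shrinks roughly quadratically
# with the number of samples (standard interpolation theory).
for n in (16, 64, 256):
    _, err = pwl_approx(np.sin, 0.0, 2 * np.pi, n)
    print(f"{n:4d} samples -> max error {err:.2e}")
```

The roughly quadratic decay for smooth functions is what motivates the abstract's non-uniform partition of the domain: spending samples where the function bends most reduces the error for a fixed sample budget.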

Relevance:

30.00%

Publisher:

Abstract:

The aim of this work is to propose a new method for estimating the backward flow directly from the optical flow. We assume that the optical flow has already been computed and we need to estimate the inverse mapping. This mapping is not bijective due to the presence of occlusions and disocclusions, so it is not possible to estimate the inverse function over the whole domain; values in these regions have to be guessed from the available information. We propose an accurate algorithm to calculate the backward flow solely from the optical flow, using a simple relation. Occlusions are filled by selecting the maximum motion, and disocclusions are filled with two different strategies: a min-fill strategy, which fills each disoccluded region with the minimum value around the region, and a restricted min-fill approach that selects the minimum value in a close neighbourhood. In the experimental results, we show the accuracy of the method and compare the results of these two strategies.
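
A rough sketch of the underlying relation follows (an assumption-laden illustration, not the authors' algorithm): if pixel x moves to x + F(x) under the forward flow F, the backward flow at the rounded target is approximately -F(x). Collisions keep the largest motion, echoing the occlusion rule above, and holes are filled from the smallest-magnitude filled neighbour as a crude stand-in for the min-fill strategies.

```python
import numpy as np

def backward_flow(fwd: np.ndarray) -> np.ndarray:
    """Estimate backward flow from forward optical flow `fwd` (H x W x 2).

    Pixel (y, x) moves to (y, x) + fwd[y, x], so the backward flow at the
    rounded target is approximately -fwd[y, x]. Where several pixels land
    on the same target (occlusion) we keep the largest motion; pixels no
    source maps to (disocclusion) are filled afterwards.
    """
    h, w, _ = fwd.shape
    bwd = np.full((h, w, 2), np.nan)
    best_mag = np.full((h, w), -1.0)

    for y in range(h):
        for x in range(w):
            u, v = fwd[y, x]
            tx, ty = int(round(x + u)), int(round(y + v))
            if 0 <= tx < w and 0 <= ty < h:
                mag = u * u + v * v
                if mag > best_mag[ty, tx]:   # occlusion: keep max motion
                    best_mag[ty, tx] = mag
                    bwd[ty, tx] = (-u, -v)

    # Disocclusions: repeatedly fill empty pixels from the 4-neighbour
    # with the smallest flow magnitude (a crude restricted min-fill).
    while True:
        holes = np.argwhere(np.isnan(bwd[..., 0]))
        if holes.size == 0:
            break
        filled_any = False
        for y, x in holes:
            cands = [bwd[ny, nx] for ny, nx in
                     ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                     if 0 <= ny < h and 0 <= nx < w
                     and not np.isnan(bwd[ny, nx, 0])]
            if cands:
                bwd[y, x] = min(cands, key=lambda f: f[0]**2 + f[1]**2)
                filled_any = True
        if not filled_any:   # nothing to propagate from; give up
            break
    return bwd
```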

Relevance:

20.00%

Publisher:

Abstract:

For robots to operate in human environments they must be able to make their own maps, because it is unrealistic to expect a user to enter a map into the robot's memory, existing floorplans are often incorrect, and human environments tend to change. Traditionally robots have used sonar, infra-red or laser range finders to perform the mapping task. Digital cameras have become very cheap in recent years and have opened up new possibilities as a sensor for robot perception. Any robot that must interact with humans can reasonably be expected to have a camera for tasks such as face recognition, so it makes sense to also use the camera for navigation. Cameras have advantages over other sensors, such as colour information (not available with any other sensor), better immunity to noise (compared to sonar), and not being restricted to operating in a plane (like laser range finders). However, there are disadvantages too, the principal one being the effect of perspective. This research investigated ways to use a single colour camera as a range sensor to guide an autonomous robot and allow it to build a map of its environment, a process referred to as Simultaneous Localization and Mapping (SLAM). An experimental system was built using a robot controlled via a wireless network connection. Using the on-board camera as the only sensor, the robot successfully explored and mapped indoor office environments. The quality of the resulting maps is comparable to that of maps reported in the literature for sonar or infra-red sensors. Although the maps are not as accurate as those created with a laser range finder, the camera-based solution is significantly cheaper and more appropriate for toys and early domestic robots.

Relevance:

20.00%

Publisher:

Abstract:

This chapter reports on Australian and Swedish experiences in the iterative design, development, and ongoing use of interactive educational systems we call ‘Media Maps.’ Like maps in general, Media Maps are usefully understood as complex cultural technologies; that is, they are not only physical objects, tools and artefacts, but also information creation and distribution technologies, the use and development of which are embedded in systems of knowledge and social meaning. Drawing upon Australian and Swedish experiences with one Media Map technology, this chapter illustrates this three-layered approach to the development of media mapping. It shows how media mapping is being used to create authentic learning experiences for students preparing for work in the rapidly evolving media and communication industries. We also contextualise media mapping as a response to various challenges for curriculum and learning design in Media and Communication Studies that arise from shifts in tertiary education policy in a global knowledge economy.

Relevance:

20.00%

Publisher:

Abstract:

As regulators, governments are often criticised for over-regulating industries. This research project examines the regulation affecting the construction industry in a federal system of government. It uses a case study of the Australian system of government to focus on the implications of regulation for the construction industry. Having established the extent of the regulatory environment, the research project considers the costs associated with this environment and then evaluates ways in which the regulatory burden on industry can be reduced. The Construction Industry Business Environment project is working with industry and government agencies to improve regulatory harmonisation in Australia, and thereby reduce the regulatory burden on industry. It is found that while taxation and compliance costs are not likely to be reduced in the short term, reducing the costs that arise from having to adapt to variation between regulatory regimes in a federal system of government seems the most promising way of lowering regulatory costs. Identifying and reducing these adaptive costs across jurisdictions is argued to present a novel approach to regulatory reform.

Relevance:

20.00%

Publisher:

Abstract:

There are currently a number of important issues affecting universities and the way in which their programs are offered. Many of these issues are driven top-down and have an impact both university-wide and at the level of individual disciplines. This paper provides a brief history of cartography and digital mapping education at the Queensland University of Technology (QUT). It also provides an overview of curriculum mapping and presents some interesting findings from the program review process. Further, this review process has triggered discussion and action on the review, mapping and embedding of graduate attributes within the spatial science major program. Some form of practice-based learning is expected in vocationally oriented degrees that lead to professional accreditation, and such placements are generally regarded as good learning exposure. With the restructure of academic programs across the Faculty of Built Environment and Engineering in 2006, spatial science and surveying students now undertake a formal work integrated learning unit. There is little doubt that students acquire the skills of their discipline (mapping science, spatial) by being immersed in the industry culture: learning how to process information and solve real-world problems within context. The broad theme of where geo-spatial mapping skills are embedded in this broad-based tertiary education course is examined, with some focused discussion of the learning objectives, outcomes and examples of student learning experiences.

Relevance:

20.00%

Publisher:

Abstract:

Network crawling and visualisation tools and other data-mining systems are now advanced enough to provide significant new impetus to the study of cultural activity on the Web. A growing range of studies focus on communicative processes in the blogosphere, including, for example, Adamic & Glance's 2005 map of political allegiances during the 2004 U.S. presidential election and Kelly & Etling's 2008 study of blogging practices in Iran. There remain a number of significant shortcomings in the application of such tools and methodologies to the study of blogging; these relate both to how the content of blogs is analysed and to how the network maps resulting from such studies are understood. Our project highlights and addresses such shortcomings.

Relevance:

20.00%

Publisher:

Abstract:

Process Control Systems (PCSs) or Supervisory Control and Data Acquisition (SCADA) systems have recently been added to the already wide collection of wireless sensor network applications. The PCS/SCADA environment is somewhat more amenable than other sensor application environments to the use of heavyweight cryptographic mechanisms such as public key cryptography. The sensor nodes in this environment, however, are still open to devastating attacks such as node capture, which makes designing secure key management challenging. In this paper, a key management scheme is proposed to defeat node capture attacks by offering both forward and backward secrecy. Our scheme overcomes the pitfalls from which Nilsson et al.'s scheme suffers, and is no more expensive than their scheme.
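
The abstract does not describe the construction itself. Purely as an illustration of the two secrecy properties (not the paper's scheme, nor Nilsson et al.'s), the sketch below evolves a node's session key with a one-way hash, so a captured key cannot be run backwards to recover earlier keys, and mixes fresh server randomness into each update, so a captured key alone does not determine future keys. Note that the literature is not uniform about which property is called 'forward' and which 'backward' secrecy.

```python
import hashlib
import os

def evolve_key(current_key: bytes, server_nonce: bytes) -> bytes:
    """Derive the next session key from the current one.

    - Hashing the current key makes the update one-way: an attacker who
      captures the node now cannot invert the hash to recover past keys.
    - Mixing in a fresh random nonce (assumed to reach the node over an
      authenticated channel) means a captured key alone is not enough to
      predict future keys once the node is excluded from that channel.
    """
    return hashlib.sha256(b"key-evolution" + current_key + server_nonce).digest()

# Toy run: three key epochs for one sensor node.
key = os.urandom(32)                # initial key, installed at deployment
for epoch in range(3):
    nonce = os.urandom(16)          # fresh randomness from the key server
    key = evolve_key(key, nonce)
    print(f"epoch {epoch}: key = {key.hex()[:16]}...")
```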