509 results for Real work


Relevance:

20.00%

Publisher:

Abstract:

Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area based, transform based, feature based, phase based, hybrid, relaxation based, dynamic programming and object space methods. A number of area based matching metrics as well as the rank and census transforms were implemented, in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications. A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
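
The matching metrics and transforms named above have compact, widely used definitions. The sketch below (Python/NumPy; the function names, window radius and greyscale-window interface are illustrative assumptions, not the report's implementation) shows SAD, ZSAD, NCC and the rank and census transforms.

```python
# Illustrative sketch only: standard block-matching cost metrics and the rank and
# census transforms from the stereo literature. Window size and function names
# are assumptions, not the report's code.
import numpy as np

def sad(a, b):
    """Sum of Absolute Differences between two equally sized windows (lower is better)."""
    return np.sum(np.abs(a.astype(float) - b.astype(float)))

def zsad(a, b):
    """Zero Mean SAD: subtracting each window's mean reduces radiometric offset."""
    a, b = a.astype(float), b.astype(float)
    return np.sum(np.abs((a - a.mean()) - (b - b.mean())))

def ncc(a, b):
    """Normalised Cross Correlation (higher is better, unlike SAD/ZSAD)."""
    a, b = a.astype(float), b.astype(float)
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b)) + 1e-12
    return np.sum(a * b) / denom

def rank_transform(img, r=2):
    """Rank transform: each pixel becomes the count of window neighbours darker than it."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.sum(window < img[y, x])
    return out

def census_transform(img, r=2):
    """Census transform: bit string of neighbour-vs-centre intensity comparisons."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = img[y - r:y + r + 1, x - r:x + r + 1]
            bits = (window < img[y, x]).flatten()
            out[y, x] = sum(int(b) << i for i, b in enumerate(bits))
    return out
```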

Relevance:

20.00%

Publisher:

Abstract:

The Internet presents a constantly evolving frontier for criminology and policing, especially in relation to online predators – paedophiles operating within the Internet for safer access to children, child pornography and networking opportunities with other online predators. The goals of this qualitative study are to undertake behavioural research – to identify personality types and archetypes of online predators and compare and contrast them with behavioural profiles and other psychological research on offline paedophiles and sex offenders. It is also an endeavour to gather intelligence on the technological utilisation of online predators and to conduct observational research on the social structures of online predator communities. These goals were achieved through the covert monitoring and logging of public activity within four Internet Relay Chat (IRC) chatrooms themed around child sexual abuse and located on the Undernet network. Five days of monitoring were conducted on these four chatrooms, from Wednesday 1 to Sunday 5 April 2009; this raw data was collated and analysed. The analysis identified four personality types – the gentleman predator, the sadist, the businessman and the pretender – and eight archetypes consisting of the groomers, dealers, negotiators, roleplayers, networkers, chat requestors, posters and travellers. The characteristics and traits of these personality types and archetypes, which were extracted from the literature dealing with offline paedophiles and sex offenders, are detailed and contrasted against the online sexual predators identified within the chatrooms, revealing many similarities and interesting differences, particularly with the businessman and pretender personality types. These personality types and archetypes were illustrated by selecting users who displayed the appropriate characteristics and tracking them through the four chatrooms, revealing intelligence data on the use of proxy servers – especially via the Tor software – and other security strategies such as Undernet’s host masking service. Name and age changes, which are used as a potential sexual grooming tactic, were also revealed through the use of Analyst’s Notebook software, and ISP information revealed the likelihood that many online predators were not using any safety mechanism and were relying on the anonymity of the Internet. The activities of these online predators were analysed, especially with regard to child sexual grooming and the ‘posting’ of child pornography, which revealed some of the methods by which online predators utilised new Internet technologies to sexually groom and abuse children – using technologies such as instant messengers, webcams and microphones – as well as to store and disseminate illegal materials on image sharing websites and peer-to-peer software such as Gigatribe. Analysis of the social structures of the chatrooms was also carried out, and the community functions and characteristics of each chatroom explored. The findings of this research indicate several opportunities for further research. Recommendations are also given on policy, prevention and response strategies with regard to online predators.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a method of voice activity detection (VAD) suitable for high-noise scenarios, based on the fusion of two complementary systems. The first system uses a proposed non-Gaussianity score (NGS) feature based on normal probability testing. The second system employs a histogram distance score (HDS) feature that detects changes in the signal by conducting a template-based similarity measure between adjacent frames. The decision outputs of the two systems are then merged using an open-by-reconstruction fusion stage. The accuracy of the proposed method was compared to that of several baseline VAD methods on a database created from real recordings of a variety of high-noise environments.
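
As a rough illustration of the two per-frame scores described above, the sketch below computes a non-Gaussianity score from a standard normality test and a histogram distance between adjacent frames, then fuses the two decisions with a simple threshold-and-OR rule. Frame sizes, thresholds and the fusion rule are placeholders; in particular, the paper's reconstruction-based fusion stage is not reproduced here.

```python
# Hedged illustration of NGS- and HDS-style per-frame scores. The thresholds,
# frame length and the toy OR-fusion are assumptions, not the paper's values.
import numpy as np
from scipy import stats

def frame_signal(x, frame_len=400, hop=200):
    """Split a 1-D signal into overlapping frames."""
    frames = [x[i:i + frame_len] for i in range(0, len(x) - frame_len + 1, hop)]
    return np.array(frames)

def ngs_scores(frames):
    """Non-Gaussianity score per frame: D'Agostino-Pearson normality statistic."""
    return np.array([stats.normaltest(f).statistic for f in frames])

def hds_scores(frames, bins=32):
    """Histogram distance (L1) between each frame and the previous one."""
    hists = [np.histogram(f, bins=bins, density=True)[0] for f in frames]
    d = [0.0] + [np.abs(hists[i] - hists[i - 1]).sum() for i in range(1, len(hists))]
    return np.array(d)

def fuse(ngs, hds, t_ngs, t_hds):
    """Toy decision fusion: flag a frame as speech if either score exceeds its threshold."""
    return (ngs > t_ngs) | (hds > t_hds)
```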

Relevance:

20.00%

Publisher:

Abstract:

This paper describes the changes occurring in manufacturing industries and their effect on the knowledge and skills necessary to perform effectively in the new environments. The changes in knowledge and skills are presented in summary form to illustrate the extent of the change. The concept of multiskilling is used to conceptualise the emerging new knowledge and skills, and finally some guidelines for designing training programs for acquiring multiskilling are presented.

Relevance:

20.00%

Publisher:

Abstract:

Real-time kinematic (RTK) GPS techniques have been extensively developed for applications including surveying, structural monitoring, and machine automation. Limitations of the existing RTK techniques that hinder their application for geodynamics purposes are twofold: (1) the achievable RTK accuracy is on the level of a few centimeters, and the uncertainty of the vertical component is 1.5-2 times worse than those of the horizontal components, and (2) the RTK position uncertainty grows in proportion to the base-to-rover distance. The key limiting factor behind these problems is the significant effect of residual tropospheric errors on the positioning solutions, especially on the highly correlated height component. This paper develops a geometry-specified troposphere decorrelation strategy to achieve subcentimeter kinematic positioning accuracy in all three components. The key is to set up a relative zenith tropospheric delay (RZTD) parameter to absorb the residual tropospheric effects and to solve the established model as an ill-posed problem using the regularization method. In order to compute a reasonable regularization parameter and obtain an optimal regularized solution, the covariance matrix of the positional parameters estimated without the RZTD parameter, which is characterized by the observation geometry, is used to replace the quadratic matrix of their "true" values. As a result, the regularization parameter is computed adaptively as the observation geometry varies. The experimental results show that the new method can efficiently alleviate the model's ill-conditioning and stabilize the solution from a single data epoch. Compared to the results from the conventional least squares method, the new method improves the long-range RTK solution precision from several centimeters to the subcentimeter level in all components. More significantly, the precision of the height component is even higher. Several geoscience applications that require subcentimeter real-time solutions can benefit greatly from the proposed approach, such as real-time monitoring of earthquakes and large dams, high-precision GPS leveling, and refinement of the vertical datum. In addition, the high-resolution RZTD solutions can contribute to effective recovery of tropospheric slant path delays in order to establish 4-D troposphere tomography.
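
A heavily simplified sketch of the regularised estimation step is given below: the unknowns are the three baseline components plus the RZTD parameter, and a Tikhonov term constrains only the RZTD. The design matrix, weighting, regularisation matrix and the geometry-based choice of regularisation parameter are illustrative stand-ins, not the paper's exact formulation.

```python
# Simplified sketch, assuming a state vector [dx, dy, dz, rztd]. The regularisation
# matrix and the geometry-based alpha below are stand-ins for the paper's
# geometry-specified criterion, used only to show the shape of the computation.
import numpy as np

def regularised_rtk_solution(A, P, l, alpha):
    """Solve x = (A^T P A + alpha * R)^(-1) A^T P l, with R constraining RZTD only."""
    n = A.shape[1]
    R = np.zeros((n, n))
    R[-1, -1] = 1.0                    # regularise the RZTD parameter only (assumption)
    N = A.T @ P @ A + alpha * R        # regularised normal matrix
    return np.linalg.solve(N, A.T @ P @ l)

def adaptive_alpha(A, P, sigma0=0.005):
    """Pick alpha from the geometry: position covariance estimated without RZTD
    (a stand-in for the paper's geometry-specified criterion)."""
    Qxx = np.linalg.inv(A[:, :3].T @ P @ A[:, :3])   # covariance of dx, dy, dz only
    return sigma0**2 / np.trace(Qxx)
```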

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a robust stochastic framework for the incorporation of visual observations into conventional estimation, data fusion, navigation and control algorithms. The representation combines Isomap, a non-linear dimensionality reduction algorithm, with expectation maximization, a statistical learning scheme. The joint probability distribution of this representation is computed offline based on existing training data. The training phase of the algorithm results in a nonlinear and non-Gaussian likelihood model of natural features conditioned on the underlying visual states. This generative model can be used online to instantiate likelihoods corresponding to observed visual features in real-time. The instantiated likelihoods are expressed as a Gaussian mixture model and are conveniently integrated within existing non-linear filtering algorithms. Example applications based on real visual data from heterogeneous, unstructured environments demonstrate the versatility of the generative models.
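
A minimal sketch of the offline training phase is shown below, under stated assumptions: scikit-learn's Isomap and GaussianMixture stand in for the paper's implementations, and the feature dimensions and component counts are placeholders.

```python
# Offline training sketch: Isomap reduces raw visual descriptors to a low-dimensional
# embedding, then EM (via a Gaussian mixture) fits the joint density of
# [embedded feature, visual state]. All dimensions/counts are assumptions.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

def train_generative_model(features, states, n_dims=3, n_components=8):
    """features: (N, D) raw descriptors; states: (N, S) underlying visual states."""
    isomap = Isomap(n_components=n_dims, n_neighbors=10)
    embedded = isomap.fit_transform(features)          # non-linear dimensionality reduction
    joint = np.hstack([embedded, states])              # joint (feature, state) samples
    gmm = GaussianMixture(n_components=n_components, covariance_type='full')
    gmm.fit(joint)                                     # EM fit of the joint density
    return isomap, gmm
```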

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a robust stochastic model for the incorporation of natural features within data fusion algorithms. The representation combines Isomap, a non-linear manifold learning algorithm, with Expectation Maximization, a statistical learning scheme. The representation is computed offline and results in a non-linear, non-Gaussian likelihood model relating visual observations such as color and texture to the underlying visual states. The likelihood model can be used online to instantiate likelihoods corresponding to observed visual features in real-time. The likelihoods are expressed as a Gaussian Mixture Model so as to permit convenient integration within existing nonlinear filtering algorithms. The compactness of the resulting representation makes it especially suitable for decentralized sensor networks. Real visual data consisting of natural imagery acquired from an Unmanned Aerial Vehicle is used to demonstrate the versatility of the feature representation.
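
A complementary sketch of the online step follows: assuming a Gaussian mixture has been fitted offline over the joint (embedded feature, state) space, conditioning it on an observed feature yields a GMM likelihood over the visual states that can be handed to a non-linear filter. The variable names and the feature/state block split are assumptions for illustration.

```python
# Online sketch: condition a trained Gaussian mixture on an observed (embedded)
# feature to obtain a GMM likelihood over the visual states. Standard GMM
# conditioning formulas; the feature/state split is an assumption.
import numpy as np
from scipy.stats import multivariate_normal

def instantiate_likelihood(gmm, feature, n_feat_dims):
    """Return per-component (weight, mean, cov) of p(state | feature) as a GMM."""
    components = []
    for w, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        mu_f, mu_s = mu[:n_feat_dims], mu[n_feat_dims:]
        S_ff = cov[:n_feat_dims, :n_feat_dims]
        S_fs = cov[:n_feat_dims, n_feat_dims:]
        S_sf = cov[n_feat_dims:, :n_feat_dims]
        S_ss = cov[n_feat_dims:, n_feat_dims:]
        gain = S_sf @ np.linalg.inv(S_ff)
        cond_mean = mu_s + gain @ (feature - mu_f)      # conditional mean of the states
        cond_cov = S_ss - gain @ S_fs                   # conditional covariance
        new_w = w * multivariate_normal.pdf(feature, mean=mu_f, cov=S_ff)
        components.append((new_w, cond_mean, cond_cov))
    total = sum(c[0] for c in components) or 1.0
    return [(w / total, m, c) for w, m, c in components]
```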

Relevance:

20.00%

Publisher:

Abstract:

This paper describes the Smart Skies project, an ambitious and world-leading research endeavor exploring the development of key enabling technologies, which support the efficient utilization of airspace by manned and unmanned airspace users. This paper provides a programmatic description of the research and development of: an automated separation management system, a mobile aircraft tracking system, and aircraft-based sense-and-act technologies. A summary of the results from a series of real-world flight testing campaigns is also presented.

Relevance:

20.00%

Publisher:

Abstract:

In this paper we present a real-time foreground–background segmentation algorithm that exploits the following observation (very often satisfied by a static camera positioned high in its environment): if a blob moves onto a pixel p that had not changed its colour significantly for a few frames, then p was probably part of the background when its colour was static. With this information we are able to differentially update pixels believed to be background. This work is relevant to autonomous minirobots, as they often navigate in buildings where smart surveillance cameras could communicate wirelessly with them. A by-product of the proposed system is a mask of the image regions which are demonstrably background. Statistically significant tests show that the proposed method has better precision and recall rates than the state-of-the-art foreground/background segmentation algorithm of the OpenCV computer vision library.
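
A minimal sketch of the stated observation (not the paper's full algorithm): track, per pixel, how many frames its colour has been stable; when a foreground blob moves onto a pixel that was stable beforehand, its pre-blob colour can be committed to the background model. Thresholds and the blob detector are assumed.

```python
# Minimal sketch under stated assumptions; the blob mask is assumed to come from
# an external foreground detector, and both thresholds are placeholders.
import numpy as np

def update_background(frame, prev_frame, background, stable_count, blob_mask,
                      colour_thresh=15, stable_frames=10):
    """frame, prev_frame, background: float32 HxWx3 images; blob_mask: bool HxW mask."""
    # Pixels a blob has just moved onto, but whose colour was stable for the
    # preceding frames, were very likely showing the background until now:
    commit = blob_mask & (stable_count >= stable_frames)
    background[commit] = prev_frame[commit]                  # differential background update
    # Then update the per-pixel stability counter with this frame's colour change:
    changed = np.abs(frame - prev_frame).max(axis=-1) > colour_thresh
    stable_count = np.where(changed, 0, stable_count + 1)
    return background, stable_count
```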

Relevance:

20.00%

Publisher:

Abstract:

The progress of technology has led to the increased adoption of energy monitors among household energy consumers. While the monitors available on the market deliver real-time energy usage feedback to the consumer, the format of this data is usually unengaging and mundane. Moreover, it fails to address consumers with different motivations and needs for saving and comparing energy. This paper presents a study that seeks to provide initial indications for motivation-specific design of energy-related feedback. We focus on comparative feedback supported by a community of energy consumers. In particular, we examine eco-visualisations, temporal self-comparison, norm comparison, one-on-one comparison and ranking, whereby the last three allow us to explore the potential of socialising energy-related feedback. These feedback types were integrated in EnergyWiz – a mobile application that enables users to compare with their past performance, neighbours, contacts from social networking sites and other EnergyWiz users. The application was evaluated in personal, semi-structured interviews, which provided first insights into how to design motivation-related comparative feedback.

Relevance:

20.00%

Publisher:

Abstract:

One of the main challenges of slow-speed machinery condition monitoring is that the energy generated by an incipient defect is too weak to be detected by traditional vibration measurements because of its low impact energy. Acoustic emission (AE) measurement is an alternative, as it has the ability to detect crack initiation or rubbing between moving surfaces. However, AE measurement requires a high sampling frequency, and consequently a huge amount of data must be processed. It also requires expensive hardware to capture and store these data, and signal processing techniques to retrieve valuable information on the state of the machine. AE signals have been utilised for early detection of defects in bearings and gears. This paper presents an online condition monitoring (CM) system for slow-speed machinery which attempts to overcome those challenges. The system incorporates signal processing techniques relevant to slow-speed CM, including noise removal techniques to enhance the signal-to-noise ratio and peak-holding down-sampling to reduce the burden of massive data handling. The analysis software runs in the LabVIEW environment, which enables online remote control of data acquisition, real-time analysis, offline analysis and diagnostic trending. The system has been fully implemented on a site machine and is contributing significantly to improved maintenance efficiency and safer, more reliable operation.
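
The peak-holding down-sampling mentioned above has a simple form: keep only the largest-magnitude sample in each block of high-rate AE data, so burst peaks survive while the data volume drops by the decimation factor. The sketch below is illustrative; the block length is an assumption, not the system's setting.

```python
# Sketch of peak-holding down-sampling: one signed peak per block of high-rate
# AE samples. The decimation factor is a placeholder.
import numpy as np

def peak_hold_downsample(x, factor=1000):
    """Return one signed peak value per block of `factor` samples."""
    n = (len(x) // factor) * factor
    blocks = x[:n].reshape(-1, factor)
    idx = np.argmax(np.abs(blocks), axis=1)          # index of the peak in each block
    return blocks[np.arange(blocks.shape[0]), idx]   # keep the signed peak value
```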

Relevance:

20.00%

Publisher:

Abstract:

In total, 782 Escherichia coli strains originating from various host sources have been analyzed in this study by using a highly discriminatory single-nucleotide polymorphism (SNP) approach. A set of eight SNPs, with a discrimination value (Simpson's index of diversity [D]) of 0.96, was determined using the Minimum SNPs software, based on sequences of housekeeping genes from the E. coli multilocus sequence typing (MLST) database. Allele-specific real-time PCR was used to screen 114 E. coli isolates from various fecal sources in Southeast Queensland (SEQ). The combined analysis of both the MLST database and SEQ E. coli isolates using eight high-D SNPs resolved the isolates into 74 SNP profiles. The data obtained suggest that SNP typing is a promising approach for the discrimination of host-specific groups and allows for the identification of human-specific E. coli in environmental samples. However, a more diverse E. coli collection is required to determine animal- and environment-specific E. coli SNP profiles due to the abundance of human E. coli strains (56%) in the MLST database.
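
Simpson's index of diversity (D) quoted for the eight-SNP set has a standard closed form; the sketch below shows the arithmetic with made-up SNP-profile counts that are not data from the study.

```python
# Simpson's index of diversity over type counts n_i; the example counts are
# hypothetical and only illustrate the calculation.
def simpsons_diversity(counts):
    """D = 1 - sum(n_i * (n_i - 1)) / (N * (N - 1))."""
    N = sum(counts)
    return 1.0 - sum(n * (n - 1) for n in counts) / (N * (N - 1))

print(simpsons_diversity([30, 25, 20, 15, 10, 8, 4, 2]))   # hypothetical SNP-profile counts
```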

Relevance:

20.00%

Publisher:

Abstract:

With the emergence of multi-core processors into the mainstream, parallel programming is no longer the specialized domain it once was. There is a growing need for systems to allow programmers to more easily reason about data dependencies and inherent parallelism in general purpose programs. Many of these programs are written in popular imperative programming languages like Java and C#. In this thesis I present a system for reasoning about side-effects of evaluation in an abstract and composable manner that is suitable for use by both programmers and automated tools such as compilers. The goal of developing such a system is both to facilitate the automatic exploitation of the inherent parallelism present in imperative programs and to allow programmers to reason about dependencies which may be limiting the parallelism available for exploitation in their applications. Previous work on languages and type systems for parallel computing has tended to focus on providing the programmer with tools to facilitate the manual parallelization of programs; programmers must decide when and where it is safe to employ parallelism without the assistance of the compiler or other automated tools. None of the existing systems combine abstraction and composition with parallelization and correctness checking to produce a framework which helps both programmers and automated tools to reason about inherent parallelism. In this work I present a system for abstractly reasoning about side-effects and data dependencies in modern, imperative, object-oriented languages using a type and effect system based on ideas from Ownership Types. I have developed sufficient conditions for the safe, automated detection and exploitation of a number of task, data and loop parallelism patterns in terms of ownership relationships. To validate my work, I have applied my ideas to the C# version 3.0 language to produce a language extension called Zal. I have implemented a compiler for the Zal language as an extension of the GPC# research compiler as a proof of concept of my system. I have used it to parallelize a number of real-world applications to demonstrate the feasibility of my proposed approach. In addition to this empirical validation, I present an argument for the correctness of the type system and language semantics I have proposed, as well as sketches of proofs for the correctness of the sufficient conditions for parallelization proposed.
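
As a purely conceptual illustration (hand-written Python, not Zal, and not the thesis's analysis), the loop below is the kind of pattern whose iterations have disjoint read/write effects and are therefore safe to run in parallel; the thesis's contribution is detecting such disjointness automatically from ownership relationships rather than relying on the programmer to assert it.

```python
# Conceptual sketch only: each iteration writes a disjoint slice of the output,
# so the iterations have non-interfering effects and may run concurrently.
# This hand-parallelised version is an assumption used for illustration.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def process_chunk(data, out, start, end):
    out[start:end] = data[start:end] * 2.0          # writes touch only [start, end)

def parallel_map(data, n_workers=4):
    out = np.empty_like(data)
    bounds = np.linspace(0, len(data), n_workers + 1, dtype=int)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(process_chunk, data, out, s, e)
                   for s, e in zip(bounds[:-1], bounds[1:])]
        for f in futures:
            f.result()                               # propagate any worker exceptions
    return out
```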

Relevance:

20.00%

Publisher:

Abstract:

The emergence of ePortfolios is relatively recent in the university sector as a way to engage students in their learning and assessment, and to produce records of their accomplishments. An ePortfolio is an online tool that students can utilise to record, catalogue, retrieve and present reflections and artefacts that support and demonstrate the development of graduate students’ capabilities and professional standards across university courses. The ePortfolio is therefore considered as both process and product. Although ePortfolios show promise as a useful tool and their uptake has grown, they are not yet a mainstream higher education technology. To date, the emphasis has been on investigating their potential to support the multiple purposes of learning, assessment and employability, but less is known about whether and how students engage with ePortfolios in the university setting. This thesis investigates student engagement with an ePortfolio in one university. As the educational designer for the ePortfolio project at the University, I was uniquely positioned as a researching professional to undertake an inquiry into whether students were engaging with the ePortfolio. The participants in this study were a cohort (defined by enrolment in a unit of study) of second and third year education students (n=105) enrolled in a four year Bachelor of Education degree. The students were introduced to the ePortfolio in an introductory lecture and a hands-on workshop in a computer laboratory. They were subsequently required to complete a compulsory assessment task – a critical reflection – using the ePortfolio. Following that, engagement with the ePortfolio was voluntary. A single case study approach arising from an interpretivist paradigm directed the methodological approach and research design for this study. The study investigated the participants’ own accounts of their experiences with the ePortfolio, including how and when they engaged with the ePortfolio and the factors that impacted on their engagement. Data collection methods consisted of an attitude survey, student interviews, document collection, a researcher reflective journal and researcher observations. The findings of the study show that, while the students were encouraged to use the ePortfolio as a learning and employability tool, most students ultimately chose to disengage after completing the assessment task. Only six of the forty-five students (13%) who completed the research survey had used the ePortfolio in a sustained manner. The data obtained from the students during this research has provided insight into the reasons why they disengaged from the ePortfolio. The findings add to the understandings and descriptions of student engagement with technology and, more broadly, advance the understanding of ePortfolios. These findings also contribute to the interdisciplinary field of technology implementation. There are three key outcomes from this study: a model of student engagement with technology, a set of criteria for the design of an ePortfolio, and a set of recommendations for effective practice for those implementing ePortfolios. The first, the Model of Student Engagement with Technology (MSET) (Version 2), explores student engagement with technology by highlighting key engagement decision points for students. The model was initially conceptualised by building on the work of previous research (Version 1); however, following data analysis a new model emerged, MSET (Version 2).

The engagement decision points were identified as:
• Prior Knowledge and Experience, leading to imagined usefulness and imagined ease of use;
• Initial Supported Engagement, leading to supported experience of usefulness and supported ease of use;
• Initial Independent Engagement, leading to actual experience of independent usefulness and actual ease of use; and
• Ongoing Independent Engagement, leading to ongoing experience of usefulness and ongoing ease of use.

The Model of Student Engagement with Technology (MSET) goes beyond numerical figures of usage to demonstrate student engagement with an ePortfolio. The explanatory power of the model is based on the identification of the types of decisions that students make and when they make them during the engagement process. This model presents a greater depth of understanding of student engagement than was previously available and has implications for the direction and timing of future implementation, and of academic and student development activities. The second key outcome from this study is a set of criteria for the re-conceptualisation of the University ePortfolio. The knowledge gained from this research has resulted in a new set of design criteria that focus on the student actions of writing reflections and adding artefacts. The process of using the ePortfolio is reconceptualised in terms of privileging student learning over administrative compliance: the writing of critical reflections, not the selection of capabilities, becomes the key function of the ePortfolio. The third key outcome consists of five recommendations for university practice arising from this study. They are: that sustainable implementation is more often achieved through small steps building on one another; that a clear definition of the purpose of an ePortfolio is crucial for students and staff; that ePortfolio pedagogy, not the technology, should be the driving force; that the merit of the ePortfolio should be fostered in students and staff; and finally, that supporting delayed task performance is crucial. Students do not adopt an ePortfolio just because it is provided. While students must accept responsibility for their own engagement with the ePortfolio, the institution has to accept responsibility for providing the environment, and the technical and pedagogical support, to foster engagement. Ultimately, an ePortfolio should be considered a joint venture between student and institution, where strong returns on investment can be realised by both. It is acknowledged that the current implementation strategies for the ePortfolio are just the beginning of a much longer process. The real rewards for students, academics and the university lie in the future.

Relevance:

20.00%

Publisher:

Abstract:

This chapter considers the complex literate repertoires of 21st century children in multicultural primary classrooms in Adelaide, South Australia. It draws on the curricular and pedagogical work of two experienced primary school teachers who explore culture, race and class by positioning children as textual producers across a variety of media. In particular we discuss two child-authored texts – A is for Arndale, a local alphabet book co-authored by children aged between eight and ten, and Cooking Afghani Style, a magazine-style film produced by a multi-aged class of children (aged eight to thirteen) recently arrived in Australia. In the process of making these texts, primary children engaged in reading as a cultural practice – re-reading and re-writing their neighbourhoods and identities (both individual and collective). This involved frequent excursions to local key sites, both familiar and unfamiliar to the children. They investigated how diverse children experienced and lived their lives in particular places within changing communities.