Abstract:
PURPOSE: This paper describes dynamic agent composition, used to support the development of flexible and extensible large-scale agent-based models (ABMs). The approach was motivated by a need to extend and modify, with ease, an ABM with an underlying networked structure as more information becomes available. Flexibility was also sought so that simulations can be set up easily, without the need to program. METHODS: The dynamic agent composition approach consists of having agents, whose implementation has been broken into atomic units, come together at runtime to form the complex system representation on which simulations are run. These components capture information at a fine level of detail and provide a vast range of combinations and options for a modeller to create ABMs. RESULTS: A description of dynamic agent composition is given in this paper, as well as details of its implementation within MODAM (MODular Agent-based Model), a software framework applied to the planning of the electricity distribution network. Illustrations of the implementation of dynamic agent composition are therefore given for that domain throughout the paper. It is, however, expected that this approach will be beneficial to other problem domains, especially those with a networked structure, such as water or gas networks. CONCLUSIONS: Dynamic agent composition has many advantages over the way agent-based models are traditionally built, for users, for developers, and for agent-based modelling as a scientific approach. Developers can extend the model without the need to access or modify previously written code; they can develop groups of entities independently and add them to those already defined to extend the model. Users can mix and match already implemented components to form large-scale ABMs, allowing them to quickly set up simulations and easily compare scenarios without the need to program. Dynamic agent composition provides a natural simulation space over which ABMs of networked structures are represented, facilitating their implementation, and verification and validation of models is facilitated by quickly setting up alternative simulations.
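To make the composition idea concrete, below is a minimal, hypothetical sketch of agents assembled at runtime from independently developed atomic components; the class and function names are illustrative and are not MODAM's actual API.

```python
# Illustrative sketch only: all names here are hypothetical, not MODAM's API.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class ComposedAgent:
    """An agent assembled at runtime from independently developed atomic components."""
    agent_id: str
    components: Dict[str, Callable] = field(default_factory=dict)

    def add_component(self, name: str, behaviour: Callable) -> None:
        # New behaviours can be added without modifying previously written code.
        self.components[name] = behaviour

    def step(self, state: dict) -> dict:
        # Each atomic unit contributes its piece of behaviour to the simulation step.
        for behaviour in self.components.values():
            state = behaviour(state)
        return state


# Example: a modeller mixes and matches components for an electricity-network agent.
def consume_load(state: dict) -> dict:
    state["load_kw"] = state.get("load_kw", 0.0) + 1.5
    return state


def report_voltage(state: dict) -> dict:
    state["voltage_ok"] = state.get("voltage_pu", 1.0) > 0.95
    return state


household = ComposedAgent("household_42")
household.add_component("load", consume_load)
household.add_component("voltage_check", report_voltage)
print(household.step({"voltage_pu": 0.97}))
```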
Abstract:
Smart Card Automated Fare Collection (AFC) data has been extensively exploited to understand passenger behavior, passenger segmentation and trip purpose, and to improve transit planning through spatial travel pattern analysis. The literature has evolved from simple to more sophisticated methods, for example from aggregated to individual travel pattern analysis, and from stop-to-stop to flexible stop aggregation. However, high computational complexity has limited these methods in practical applications. This paper proposes a new algorithm, the Weighted Stop Density Based Scanning Algorithm with Noise (WS-DBSCAN), based on the classical Density Based Scanning Algorithm with Noise (DBSCAN), to detect and update daily changes in travel patterns. WS-DBSCAN reduces the classical DBSCAN's quadratic computational complexity to a sub-quadratic one. A numerical experiment using real AFC data from South East Queensland, Australia, shows that the algorithm requires only 0.45% of the computation time of classical DBSCAN while providing the same clustering results.
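As a rough illustration of density-based clustering of stops with per-stop weights, the sketch below uses scikit-learn's standard DBSCAN with sample weights; it is not the authors' sub-quadratic WS-DBSCAN, and the coordinates, weights and parameters are hypothetical.

```python
# Illustrative sketch only: standard DBSCAN with per-stop sample weights, used here to
# convey the idea of weighted stop density; not the authors' WS-DBSCAN implementation.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical data: stop coordinates (projected metres) and, as weights,
# the number of boardings a passenger made at each stop.
stop_coords = np.array([[0.0, 0.0], [50.0, 30.0], [60.0, 25.0], [5000.0, 4000.0]])
boardings_per_stop = np.array([10, 3, 4, 1])

clusterer = DBSCAN(eps=200.0, min_samples=5)
labels = clusterer.fit_predict(stop_coords, sample_weight=boardings_per_stop)
print(labels)  # -1 marks noise; other labels identify clusters of frequently used stops
```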
Abstract:
Stochastic modelling is critical in GNSS data processing. Currently, GNSS data processing commonly relies on an empirical stochastic model which may not reflect the actual data quality or noise characteristics. This paper examines real-time GNSS observation noise estimation methods that determine the observation variance from a single receiver's data stream. The methods involve three steps: forming a linear combination, handling the ionosphere and ambiguity biases, and estimating the variance. Two distinct approaches are applied to overcome the ionosphere and ambiguity biases: the time-differenced method and the polynomial prediction method, respectively. The real-time variance estimation methods are compared with the zero-baseline and short-baseline methods. The proposed method requires only single-receiver observations and is thus applicable to both differenced and undifferenced data processing modes. However, the methods may be limited to normal ionosphere conditions and low-autocorrelation GNSS receivers. Experimental results also indicate that the proposed method can yield more realistic parameter precision.
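The sketch below illustrates, under simplified assumptions, why time-differencing a single receiver's observations isolates the high-frequency noise from slowly varying ionosphere and ambiguity biases; the signal model and numbers are synthetic and do not reproduce the paper's linear combinations or its polynomial prediction variant.

```python
# Illustrative sketch only: a generic single-receiver, time-differenced noise estimate.
import numpy as np

rng = np.random.default_rng(0)
epochs = np.arange(600)                                   # 1 Hz observations, 10 minutes
slow_bias = 0.02 * np.sin(2 * np.pi * epochs / 600)       # slowly varying bias (m)
true_sigma = 0.003                                        # true observation noise (m)
observations = slow_bias + rng.normal(0.0, true_sigma, epochs.size)

# Differencing consecutive epochs removes most of the slow bias; white-noise variance
# doubles under differencing, hence the division by sqrt(2).
estimated_sigma = np.std(np.diff(observations)) / np.sqrt(2)
print(f"estimated noise sigma: {estimated_sigma:.4f} m")
```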
Abstract:
In this paper, dynamic modeling and simulation of the hydropurification reactor in a purified terephthalic acid production plant have been investigated using a gray-box technique to evaluate the catalytic activity of a palladium-on-carbon (0.5 wt.% Pd/C) catalyst. The reaction kinetics and catalyst deactivation trend have been modeled using an artificial neural network (ANN), whose output is incorporated into the reactor's first-principles model (FPM). The simulation results reveal that the gray-box model (FPM and ANN) is about 32 percent more accurate than the FPM alone. The model indicates that the catalyst is deactivated after eleven months. Moreover, the catalyst lifetime decreases by about two and a half months for a 7 percent increase in reactor feed flowrate, while a 10 percent increase in hydrogen flowrate is predicted to extend catalyst lifetime by about one month. Additionally, an increase in 4-carboxybenzaldehyde concentration in the reactor feed increases CO and benzoic acid formation; CO is a poison to the catalyst, and benzoic acid might affect product quality. The model can be applied in actual working plants to analyze the efficient functioning of the Pd/C catalyst and the performance of the catalytic reactor.
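A generic gray-box pattern of this kind, in which a neural network learns the residual between measurements and a first-principles model, can be sketched as below; the reactor physics, data and network here are placeholders, not the paper's actual FPM or ANN.

```python
# Illustrative sketch only: a neural network learns the residual between plant
# measurements and a placeholder first-principles model (FPM).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
operating_conditions = rng.uniform(0.0, 1.0, size=(200, 3))   # hypothetical inputs

def first_principles_model(x: np.ndarray) -> np.ndarray:
    # Placeholder physics-based prediction (e.g. an outlet concentration).
    return 2.0 * x[:, 0] + 0.5 * x[:, 1]

# Hypothetical plant data: FPM output plus an unmodelled effect the ANN must capture.
measured = first_principles_model(operating_conditions) + 0.3 * np.sin(5 * operating_conditions[:, 2])

residual_model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
residual_model.fit(operating_conditions, measured - first_principles_model(operating_conditions))

gray_box_prediction = first_principles_model(operating_conditions) + residual_model.predict(operating_conditions)
print(np.mean((gray_box_prediction - measured) ** 2))   # gray-box error vs. measurements
```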
Abstract:
Despite significant improvements in capacity-distortion performance, a computationally efficient capacity control is still lacking in recent watermarking schemes. In this paper, we propose an efficient capacity control framework that treats watermarking capacity control as the process of maintaining “acceptable” distortion and running time while attaining the required capacity. The necessary analysis and experimental results on capacity control are reported to address practical aspects of the watermarking capacity problem in dynamic (size) payload embedding.
Abstract:
This paper presents a technique for the automated removal of noise from process execution logs. Noise is the result of data quality issues such as logging errors and manifests itself in the form of infrequent process behavior. The proposed technique generates an abstract representation of an event log as an automaton capturing the directly-follows relations between event labels. This automaton is then pruned of arcs with low relative frequency and used to remove from the log those events not fitting the automaton, which are identified as outliers. The technique has been extensively evaluated on top of various automated process discovery algorithms, using both artificial logs with different levels of noise and a variety of real-life logs. The results show that the technique significantly improves the quality of the discovered process model in terms of fitness, appropriateness and simplicity, without negative effects on generalization. Further, the technique scales well to large and complex logs.
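A much-simplified sketch of the directly-follows filtering idea is given below; the thresholding and outlier handling in the published technique are more sophisticated than this toy version, and the traces are invented.

```python
# Illustrative sketch only: a simplified directly-follows filter in the spirit of the
# technique described above.
from collections import Counter, defaultdict

traces = [
    ["a", "b", "c", "d"],
    ["a", "b", "c", "d"],
    ["a", "b", "c", "d"],
    ["a", "b", "x", "c", "d"],   # "x" is an infrequent, noisy event
]

# Build directly-follows counts between consecutive events.
df_counts = Counter()
outgoing = defaultdict(int)
for trace in traces:
    for src, dst in zip(trace, trace[1:]):
        df_counts[(src, dst)] += 1
        outgoing[src] += 1

# Keep only arcs whose relative frequency (per source activity) is above a threshold.
threshold = 0.3
kept_arcs = {arc for arc, n in df_counts.items() if n / outgoing[arc[0]] >= threshold}

# Remove events that do not fit the pruned automaton.
def filter_trace(trace):
    cleaned = [trace[0]]
    for event in trace[1:]:
        if (cleaned[-1], event) in kept_arcs:
            cleaned.append(event)
    return cleaned

print([filter_trace(t) for t in traces])  # the noisy "x" is dropped from the last trace
```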
Abstract:
Study Design: Comparative analysis. Background: Calculations of lower-limb kinetics are limited by floor-mounted force-plates. Objectives: Comparison of hip joint moments, power and mechanical work on the prosthetic limb of a transfemoral amputee calculated by inverse dynamics using either the ground reactions (force-plates) or the knee reactions (transducer). Methods: Kinematics, ground reactions and knee reactions were collected using a motion analysis system, two force-plates and a multi-axial transducer mounted below the socket, respectively. Results: The inverse dynamics using ground reactions under-estimated the peaks of hip energy generation and absorption occurring at 63% and 76% of the gait cycle (GC) by 28% and 54%, respectively. This method over-estimated a phase of negative work at the hip (from 37%GC to 56%GC) by 24%. It under-estimated the phases of positive (from 57%GC to 72%GC) and negative (from 73%GC to 98%GC) work at the hip by 11% and 58%, respectively. Conclusions: A transducer mounted within the prosthesis has the capacity to provide more realistic kinetics of the prosthetic limb because it enables assessment of multiple consecutive steps and a wide range of activities without the issues of foot placement on force-plates. Clinical Relevance: The hip is the only joint that an amputee controls directly to set the prosthesis in motion. Hip joint kinetics are associated with joint degeneration, low back pain, risk of falls, etc. Therefore, realistic assessment of hip kinetics over multiple gait cycles and a wide range of activities is essential.
Abstract:
The research reported here addresses the problem of detecting and tracking independently moving objects from a moving observer in real-time, using corners as object tokens. Corners are detected using the Harris corner detector, and local image-plane constraints are employed to solve the correspondence problem. The approach relaxes the restrictive static-world assumption conventionally made, and is therefore capable of tracking independently moving and deformable objects. Tracking is performed without the use of any 3-dimensional motion model. The technique is novel in that, unlike traditional feature-tracking algorithms where feature detection and tracking are carried out over the entire image-plane, here they are restricted to those areas most likely to contain meaningful image structure. Two distinct types of instantiation regions are identified, these being the “focus-of-expansion” region and “border” regions of the image-plane. The size and location of these regions are defined from a combination of odometry information and a limited knowledge of the operating scenario. The algorithms developed have been tested on real image sequences taken from typical driving scenarios. Implementation of the algorithm using T800 Transputers has shown that near-linear speedups are achievable, and that real-time operation is possible (half-video rate has been achieved using 30 processing elements).
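As a loose illustration of instantiating corner tokens only inside designated regions, the sketch below runs OpenCV's Harris detector on a synthetic frame and masks the response to a border band; the region sizes are arbitrary rather than derived from odometry as in the paper.

```python
# Illustrative sketch only: Harris corners restricted to an image-border region.
import cv2
import numpy as np

# Synthetic grayscale frame with a bright rectangle so corners exist.
frame = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(frame, (40, 60), (280, 200), 255, -1)

# Harris corner response over the whole frame.
response = cv2.cornerHarris(np.float32(frame), blockSize=2, ksize=3, k=0.04)

# Restrict token instantiation to a border region (outer 60-pixel band).
border_mask = np.zeros_like(frame, dtype=bool)
border_mask[:60, :] = True
border_mask[-60:, :] = True
border_mask[:, :60] = True
border_mask[:, -60:] = True

corners = (response > 0.01 * response.max()) & border_mask
print(np.count_nonzero(corners), "corner pixels instantiated in the border region")
```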
Abstract:
The research reported here addresses the problem of detecting and tracking independently moving objects from a moving observer in real time, using corners as object tokens. Local image-plane constraints are employed to solve the correspondence problem, removing the need for a 3D motion model. The approach relaxes the restrictive static-world assumption conventionally made, and is therefore capable of tracking independently moving and deformable objects. The technique is novel in that feature detection and tracking is restricted to areas likely to contain meaningful image structure. Feature instantiation regions are defined from a combination of odometry information and a limited knowledge of the operating scenario. The algorithms developed have been tested on real image sequences taken from typical driving scenarios. Preliminary experiments on a parallel (transputer) architecture indicate that real-time operation is achievable.
Abstract:
Power line inspection is a vital function for electricity supply companies but it involves labor-intensive and expensive procedures which are tedious and error-prone for humans to perform. A possible solution is to use an unmanned aerial vehicle (UAV) equipped with video surveillance equipment to perform the inspection. This paper considers how a small, electrically driven rotorcraft conceived for this application could be controlled by visually tracking the overhead supply lines. A dynamic model for a ducted-fan rotorcraft is presented and used to control the action of an Air Vehicle Simulator (AVS), consisting of a cable-array robot. Results show how visual data can be used to determine, and hence regulate in closed loop, the simulated vehicle’s position relative to the overhead lines.
Abstract:
This new volume, Exploring with Grammar in the Primary Years (Exley, Kervin & Mantei, 2014), follows on from Playing with Grammar in the Early Years (Exley & Kervin, 2013). We extend our thanks to the ALEA membership for their take-up of the first volume and the vibrant conversations around our first attempt at developing a pedagogy for the teaching of grammar in the early years. Your engagement at locally held ALEA events has motivated us to complete this second volume and reassert our interest in the pursuit of socially-just outcomes in the primary years. As noted in Exley and Kervin (2013), we believe that mastering a range of literacy competences includes not only the technical skills for learning, but also the resources for viewing and constructing the world (Freire and Macedo, 1987). Rather than seeing knowledge about language as the accumulation of technical skills alone, the viewpoint to which we subscribe treats knowledge about language as a dialectic that evolves from, is situated in, and contributes to active participation within a social arena (Halliday, 1978). We acknowledge that to explore is to engage in processes of discovery as we look closely and examine the opportunities before us. As such, we draw on Janks’ (2000; 2014) critical literacy theory to underpin many of the learning experiences in this text. Janks (2000) argues that effective participation in society requires knowledge about how the power of language promotes views, beliefs and values of certain groups to the exclusion of others. Powerful language users can identify not only how readers are positioned by these views, but also the ways these views are conveyed through the design of the text, that is, the combination of vocabulary, syntax, image, movement and sound. Similarly, powerful designers of texts can make careful modal choices in written and visual design to promote certain perspectives that position readers and viewers in new ways to consider more diverse points of view. As the title of our text suggests, our activities are designed to support learners in exploring the design of texts to achieve certain purposes and to consider the potential for the sharing of their own views through text production. In Exploring with Grammar in the Primary Years, we focus on the Year 3 to Year 6 grouping in line with the Australian Curriculum, Assessment and Reporting Authority’s (hereafter ACARA) advice on the ‘nature of learners’ (ACARA, 2014). Our goal in this publication is to provide a range of highly practical strategies for scaffolding students’ learning through some of the Content Descriptions from the Australian Curriculum: English Version 7.2, hereafter AC:E (ACARA, 2014). We continue to express our belief in the power of using whole texts from a range of authentic sources including high quality children’s literature, the internet, and examples of community-based texts to expose students to the richness of language. Taking time to look at language patterns within actual texts is a pathway to ‘…capture interest, stir the imagination and absorb the [child]’ into the world of language and literacy (Saxby, 1993, p. 55). It is our intention to be more overt this time and send a stronger message that our learning experiences are simply ‘sample’ activities rather than a teachers’ workbook or a program of study to be followed.
We’re hoping that teachers and students will continue to explore their bookshelves, the internet and their community for texts that provide powerful opportunities to engage with language-based learning experiences. In the following three sections, we have tried to remain faithful to our interpretation of the AC:E Content Descriptions without giving an exhaustive explanation of the grammatical terms. This recently released curriculum offers a new theoretical approach to building students’ knowledge about language. The AC:E uses selected traditional terms through an approach developed in systemic functional linguistics (see Halliday and Matthiessen, 2004) to highlight the dynamic forms and functions of multimodal language in texts. For example, the following statement, taken from the ‘Language: Knowing about the English language’ strand states: English uses standard grammatical terminology within a contextual framework, in which language choices are seen to vary according to the topics at hand, the nature and proximity of the relationships between the language users, and the modalities or channels of communication available (ACARA, 2014). Put simply, traditional grammar terms are used within a functional framework made up of field, tenor, and mode. An understanding of genre is noted with the reference to a ‘contextual framework’. The ‘topics at hand’ concern the field or subject matter of the text. The ‘relationships between the language users’ is a description of tenor. There is reference to ‘modalities’, such as spoken, written or visual text. We posit that this innovative approach is necessary for working with contemporary multimodal and cross-cultural texts (see Exley & Mills, 2012). Other excellent tomes, such as Derewianka (2011), Humphrey, Droga and Feez (2012), and Rossbridge and Rushton (2011) provide more comprehensive explanations of this unique metalanguage, as does the AC:E Glossary. We’ve reproduced some of the AC:E Glossary at the end of this publication. We’ve also kept the same layout for our learning experiences, ensuring that our teacher notes are not only succinct but also prudent in their placement. Each learning experience is connected to a Content Description from the AC:E and contains an experience with an identified purpose, suggested resource text and a possible sequence for the experience that always commences with an orientation to text followed by an examination of a particular grammatical resource. Our plans allow for focused discussion, shared exploration and opportunities to revisit the same text for the purpose of enhancing meaning making. Some learning experiences finish with deconstruction of a stimulus text while others invite students to engage in the design of new texts. We encourage you to look for opportunities in your own classrooms to move from text deconstruction to text design. In this way, students can express not only their emerging grammatical understandings, but also the ways they might position readers or viewers through the creation of their own texts. We expect that each of these learning experiences will vary in the time taken. Some may indeed take a couple if not a few teaching episodes to work through, especially if students are meeting a concept or a pedagogical strategy for the first time. We hope you use as much, or as little, of each experience as is needed for your students. We do not want the teaching of grammar to slip into a crisis of irrelevance or to be seen as a series of worksheet drills with finite answers. 
We firmly believe, however, that strategies for effective deconstruction and design practice have much portability. We three are very keen to hear from teachers who are adopting and adapting these learning experiences in their classrooms. Please email us at b.exley@qut.edu.au, lkervin@uow.edu.au or jessicam@uow.edu.au. We’d love to continue the conversation with you over time. Beryl Exley, Lisa Kervin & Jessica Mantei
Abstract:
The literacy demands of mathematics are very different to those in other subjects (Gough, 2007; O'Halloran, 2005; Quinnell, 2011; Rubenstein, 2007) and much has been written on the challenges that literacy in mathematics poses to learners (Abedi and Lord, 2001; Lowrie and Diezmann, 2007, 2009; Rubenstein, 2007). In particular, a diverse selection of visuals typifies the field of mathematics (Carter, Hipwell and Quinnell, 2012), placing unique literacy demands on learners. Such visuals include varied tables, graphs, diagrams and other representations, all of which are used to communicate information.
Abstract:
This paper presents visual detection and classification of light vehicles and personnel on a mine site. We capitalise on the rapid advances of ConvNet-based object recognition but highlight that a naive black-box approach results in a significant number of false positives. In particular, the lack of domain-specific training data and the unique landscape of a mine site cause a high rate of errors. We exploit the abundance of background-only images to train a k-means classifier to complement the ConvNet. Furthermore, localisation of objects of interest and a reduction in computation are enabled through region proposals. Our system is tested on over 10 km of real mine site data, and we were able to detect both light vehicles and personnel. We show that the introduction of our background model can reduce the false positive rate by an order of magnitude.
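A hedged sketch of the background-model veto is shown below: simple k-means centroids fitted to background-only patches are used to suppress detector proposals that look like background. The features, shapes and threshold are stand-ins for the ConvNet features and tuning implied by the paper.

```python
# Illustrative sketch only: a k-means background model used to veto detector proposals.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

def patch_feature(patch: np.ndarray) -> np.ndarray:
    # Toy feature: mean intensity per channel of an image patch.
    return patch.reshape(-1, patch.shape[-1]).mean(axis=0)

# Hypothetical background-only patches (abundant on a mine site) and one "vehicle" patch.
background_patches = rng.uniform(0.3, 0.5, size=(500, 32, 32, 3))   # dull terrain
vehicle_patch = rng.uniform(0.8, 1.0, size=(32, 32, 3))             # bright object

background_model = KMeans(n_clusters=8, n_init=10, random_state=0)
background_model.fit(np.array([patch_feature(p) for p in background_patches]))

def looks_like_background(patch: np.ndarray, threshold: float = 0.1) -> bool:
    # A proposal close to any background centroid is rejected as a likely false positive.
    distances = background_model.transform(patch_feature(patch).reshape(1, -1))
    return float(distances.min()) < threshold

print(looks_like_background(background_patches[0]))  # True: suppressed
print(looks_like_background(vehicle_patch))          # False: kept for classification
```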
Abstract:
The Ugly Australian Underground documents the music, songwriting, aesthetics and struggles of fifty of Australia’s most innovative and significant bands and artists currently at the creative peak of their careers. The book provides a rare insight into the critically heralded cult music scene in Australia. The author, Jimi Kritzler, is both a journalist and a musician, and is personally connected to the musicians he interviews through his involvement in this music subculture. The interviews are extremely personal and reveal much more than any interview granted to street press or blogs. They deal with not only the music and songwriting processes of each band, but in some circumstances their struggles with drugs, involvement in crime and the death of band members.
Abstract:
Uncorrected refractive error, including astigmatism, is a leading cause of reversible visual impairment. While the ability to perform vision-related daily activities is reduced when people are not optimally corrected, only limited research has investigated the impact of uncorrected astigmatism. Given that the capacity to perform vision-related daily activities involves the integration of a range of visual and cognitive cues, this research examined the impact of simulated astigmatism on visual tasks that also involved cognitive input. The research also examined whether the higher levels of complexity inherent in Chinese characters make them more susceptible to the effects of astigmatism. The effects of different powers of astigmatism, as well as astigmatism at different axes, were investigated in order to determine the minimum level of astigmatism that resulted in a decrement in visual performance.