925 results for Remote Centre-of-Motion (RCM)
Abstract:
In this paper the author considers the possibilities for establishing democratic governance in virtual worlds. He looks at the freedoms currently available to players in “Second Life”, contrasting these with those established in Raph Koster’s “A Declaration of the Rights of Avatars”, and assesses whether some restrictions are more necessary in game spaces than in social spaces. The author looks at early implementations of self-governance in online spaces and considers what lessons can be taken from these, investigates what a contemporary democratic space looks like, in the form of “A Tale in the Desert”, and finally considers how else we may think of giving players more rights in these developing social spaces.
Abstract:
This paper details the implementation and trialling of a prototype in-bucket bulk density monitor on a production dragline. Bulk density information can provide feedback to mine planning and scheduling to improve blasting and consequently facilitate optimal bucket sizing. The bulk density measurement builds upon outcomes presented in the AMTC2009 paper titled ‘Automatic In-Bucket Volume Estimation for Dragline Operations’ and utilises payload information from a commercial dragline monitor. While the previous paper explained the algorithms and theoretical basis for the system design and scaled model testing, this paper focuses on the full-scale implementation and the challenges involved.
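The measurement principle being combined here is simple: bulk density follows from dividing the monitored payload mass by the estimated in-bucket volume. The sketch below is a minimal illustration under that assumption; the function, units and figures are hypothetical and not the authors' implementation.

```python
# Minimal sketch of an in-bucket bulk density calculation: payload mass from
# a commercial dragline monitor divided by an estimated fill volume.
# All names and numbers here are illustrative assumptions.

def bulk_density(payload_mass_kg: float, bucket_volume_m3: float) -> float:
    """Bulk density (kg/m^3) from monitored payload and estimated fill volume."""
    if bucket_volume_m3 <= 0:
        raise ValueError("volume estimate must be positive")
    return payload_mass_kg / bucket_volume_m3

# Example: a 50-tonne payload filling 30 m^3 of bucket volume.
print(f"{bulk_density(50_000.0, 30.0):.0f} kg/m^3")  # -> 1667 kg/m^3
```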
Abstract:
WikiLeaks has become a global phenomenon, and its founder and spokesman Julian Assange an international celebrity (or terrorist, depending on one’s perspective). But perhaps this focus on Assange and his website is as misplaced as the attacks against Napster and its founders were a decade ago: WikiLeaks itself only marks a new phase in a continuing shift in the balance of power between states and citizens, much as Napster helped to undermine the control of major music labels over the music industry. If the history of music filesharing is any guide, no level of punitive action against WikiLeaks and its supporters is going to re-contain the information WikiLeaks has set loose.
Abstract:
This article presents mathematical models to simulate coupled heat and mass transfer during convective drying of food materials using three different effective diffusivities: shrinkage dependent, temperature dependent, and the average of the two. The engineering simulation software COMSOL Multiphysics was used to simulate the model in 2D and 3D, and the simulation results were compared with experimental data. It was found that the temperature dependent effective diffusivity model predicts the moisture content more accurately at the initial stage of drying, whereas the shrinkage dependent effective diffusivity model is better for the final stage. The model with shrinkage dependent effective diffusivity shows an evaporative cooling phenomenon at the initial stage of drying; this phenomenon was investigated and explained. Three-dimensional temperature and moisture profiles show that even when the surface is dry, the inside of the sample may still contain a large amount of moisture. The drying process should therefore be managed carefully, otherwise microbial spoilage may start from the centre of the ‘dried’ food. A parametric investigation was conducted after validation of the model.
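For context, drying models of this kind typically solve Fick's second law of diffusion with an effective diffusivity; a common temperature-dependent form is the Arrhenius relation D_eff(T) = D0·exp(−Ea/RT). The sketch below is a minimal 1D finite-difference illustration under those standard assumptions; all parameter values are illustrative and not taken from the paper.

```python
import numpy as np

# Minimal 1D sketch of moisture diffusion (Fick's second law) with an
# Arrhenius temperature-dependent effective diffusivity:
#   D_eff(T) = D0 * exp(-Ea / (R * T))
# Parameter values are illustrative only.

R = 8.314            # J/(mol*K), universal gas constant
D0, Ea = 1e-6, 30e3  # pre-exponential factor (m^2/s), activation energy (J/mol)
T = 333.0            # drying air temperature (K), assumed uniform here

D_eff = D0 * np.exp(-Ea / (R * T))

L, n = 0.01, 51                  # slab half-thickness (m), grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / D_eff         # explicit scheme: stay below stability limit
M = np.full(n, 0.9)              # initial moisture content (dry basis)
M[-1] = 0.1                      # surface equilibrium moisture content

for _ in range(20_000):
    # interior update: dM/dt = D_eff * d2M/dx2
    M[1:-1] += dt * D_eff * (M[2:] - 2 * M[1:-1] + M[:-2]) / dx**2
    M[0] = M[1]                  # symmetry (zero-flux) at the slab centre

# The centre dries far more slowly than the surface, consistent with the
# observation above that a dry surface can hide a moist interior.
print(f"centre moisture after drying: {M[0]:.3f}")
```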
Abstract:
The Australia Council awarded the tender for APAM 2014, 2016 and 2018 to the Brisbane Powerhouse. In awarding the contract for the presentation of APAM by Brisbane Powerhouse, the Australia Council stipulated that a formal evaluation of the three iterations of APAM and of activity in the intervening years be undertaken. Queensland University of Technology, Creative Industries Faculty, under the leadership of Associate Professor Sandra Gattenhof, was contracted to undertake the formal evaluation. This is the first-year report on the Brisbane iteration of the Market. The report draws on data collected across a range of sources: the scoping study undertaken by Justin Macdonnell addressing the Market from 1994–2010; the tender document submitted by the Brisbane Powerhouse; in-person interviews with APAM staff and APAM stakeholders; vox pops from delegates in response to individual sessions; producer company/artist case studies; and, most significantly, responses to a detailed online survey sent to all delegates. The main body of the report is organised around three key research aims, as outlined in the Brisbane Powerhouse tender document (2011): evaluation of international market development outcomes through showcasing work to targeted international presenters and agents; evaluation of national market development outcomes through showcasing work to national presenters and producers; and evaluation of the exchange of ideas, dialogue, skill development, partnerships, collaborations, co-productions and networks with local and international peers. The culmination of the data analysis is articulated through five key recommendations, which may assist the APAM delivery team for the next iteration in 2016. In summary: 1. Indigenous focus to remain central to the conception and delivery of APAM; 2. Re-framing APAM’s function and its delivery; 3. Logistics and communications in a multi-venue approach, including communications and housekeeping, volunteers, catering, and re-calibrating the employment of Brisbane Powerhouse protocols and processes for APAM; 4. Presentation and promotion for presenters; 5. Strategic targeting of Asian producers.
Abstract:
The Final Report of the Review of the Australian Curriculum is seriously flawed. Many aspects of the report have attracted comment – but the recommendation that schools do away with a major, world-leading innovation has not. For the first time, Media Arts, one of the five strands of the Arts curriculum, was to become a compulsory subject for primary school students. This will no longer be the case if the Review’s recommendations are adopted by the government.
Abstract:
In traditional communication and information theory, noise is the demon Other, an unwelcome disruption in the passage of information. Noise is "anything that is added to the signal between its transmission and reception that is not intended by the source...anything that makes the intended signal harder to decode accurately". It is, in Michel Serres' formulation, the "third man" in dialogue who is always assumed, and whom interlocutors continually struggle to exclude. Noise is simultaneously a condition and a by-product of the act of communication; it represents the ever-present possibility of disruption, interruption, misunderstanding. In sonic or musical terms noise is cacophony, dissonance. For economists, noise is an arbitrary element, both a barrier to the pursuit of wealth and a basis for speculation. For Mick (Jeremy Sims) and his mate Kev (Ben Mendelsohn) in David Caesar's Idiot Box (1996), as for Hando (Russell Crowe) and his gang of skinheads in Geoffrey Wright's Romper Stomper (1992), or Dazey (Ben Mendelsohn) and Joe (Aden Young) in Wright's Metal Skin (1994), and all those like them starved of (useful) information and excluded from the circuit - the information poor - their only option, their only point of intervention in the loop, is to make noise, to disrupt, to discomfort, to become Serres' "third man", "the prosopopoeia of noise" (5).
Abstract:
Even though revenues from recorded music have fallen dramatically over the past fifteen years, people across the world are not listening to less music. In fact, they listen to more recorded music than ever before. Recorded music permeates almost every aspect of our daily lives...
Abstract:
Fine-grained leaf classification has concentrated on the use of traditional shape and statistical features to classify ideal images. In this paper we evaluate the effectiveness of traditional hand-crafted features and propose the use of deep convolutional neural network (ConvNet) features. We introduce a range of condition variations to explore the robustness of these features, including translation, scaling, rotation, shading and occlusion. Evaluations on the Flavia dataset demonstrate that in ideal imaging conditions, combining traditional and ConvNet features yields state-of-the-art performance with an average accuracy of 97.3% ± 0.6%, compared to traditional features, which obtain an average accuracy of 91.2% ± 1.6%. Further experiments show that this combined classification approach consistently outperforms the best set of traditional features by an average of 5.7% across all of the evaluated condition variations.
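A common way to realise this kind of combination is to concatenate features from a pretrained ConvNet with hand-crafted descriptors and train a linear classifier on the result. The sketch below illustrates that general pattern; the choice of network (ResNet-18), the stand-in descriptors and the classifier are assumptions for illustration, not the paper's exact pipeline.

```python
# Sketch: combining hand-crafted and ConvNet features for leaf classification.
# Network, descriptors and classifier are illustrative assumptions.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import LinearSVC

# Pretrained ConvNet with its classifier head removed -> 512-d features.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def convnet_features(img: Image.Image) -> np.ndarray:
    with torch.no_grad():
        return backbone(preprocess(img).unsqueeze(0)).squeeze(0).numpy()

def handcrafted_features(img: Image.Image) -> np.ndarray:
    # Stand-in for shape/statistical descriptors (e.g. moments, aspect ratio).
    g = np.asarray(img.convert("L"), dtype=np.float32) / 255.0
    return np.array([g.mean(), g.std(), g.shape[1] / g.shape[0]])

# Combined representation: concatenation, then a linear classifier, e.g.
#   X = [np.concatenate([convnet_features(im), handcrafted_features(im)])
#        for im in imgs]
#   clf = LinearSVC().fit(X, labels)
```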
Abstract:
Purpose: To quantify the effects of driver age on night-time pedestrian conspicuity, and to determine whether individual differences in visual performance can predict drivers' ability to recognise pedestrians at night. Methods: Participants were 32 visually normal drivers (20 younger: M = 24.4 ± 6.4 years; 12 older: M = 72.0 ± 5.0 years). Visual performance was measured in a laboratory-based testing session including visual acuity, contrast sensitivity, motion sensitivity and the useful field of view. Night-time pedestrian recognition distances were recorded while participants drove an instrumented vehicle along a closed road course at night; to increase driver workload, auditory and visual distracter tasks were presented during some laps. Pedestrians walked in place, sideways to the oncoming vehicles, and wore either a standard high-visibility reflective vest or reflective tape positioned on the movable joints (biological motion). Results: Driver age and pedestrian clothing significantly (p < 0.05) affected the distance at which drivers first responded to the pedestrians. Older drivers recognised pedestrians at approximately half the distance of the younger drivers, and pedestrians were recognised more often and at longer distances when they wore the biological motion reflective clothing configuration than when they wore the reflective vest. Motion sensitivity was an independent predictor of pedestrian recognition distance, even when controlling for driver age. Conclusions: The night-time pedestrian recognition capacity of older drivers was significantly worse than that of younger drivers. The distance at which drivers first recognised pedestrians at night was best predicted by a test of motion sensitivity.
Abstract:
Purpose: Age-related changes in motion sensitivity have been found to relate to reductions in various indices of driving performance and safety. The aim of this study was to investigate the basis of this relationship by determining which aspects of motion perception are most relevant to driving. Methods: Participants were 61 regular drivers (age range 22–87 years). Visual performance was measured binocularly. Measures included visual acuity, contrast sensitivity and motion sensitivity assessed using four different approaches: (1) threshold minimum drift rate for a drifting Gabor patch, (2) Dmin from a random dot display, (3) threshold coherence from a random dot display, and (4) threshold drift rate for a second-order (contrast-modulated) sinusoidal grating. Participants then completed the Hazard Perception Test (HPT), in which they were required to identify moving hazards in videos of real driving scenes, and a Direction of Heading (DOH) task, in which they identified deviations from normal lane keeping in brief videos of driving filmed from the interior of a vehicle. Results: In bivariate correlation analyses, all motion sensitivity measures declined significantly with age. Motion coherence thresholds and the minimum drift rate threshold for the first-order stimulus (Gabor patch) both significantly predicted HPT performance, even after controlling for age, visual acuity and contrast sensitivity. Bootstrap mediation analysis showed that individual differences in DOH accuracy partly explained these relationships: individuals with poorer motion sensitivity on the coherence and Gabor tests showed a decreased ability to perceive deviations in motion in the driving videos, which in turn related to their ability to detect the moving hazards. Conclusions: The ability to detect subtle movements in the driving environment (as determined by the DOH task) may be an important contributor to effective hazard perception, and is associated with age and an individual's performance on tests of motion sensitivity. The locus of the processing deficits appears to lie in first-order, rather than second-order, motion pathways.
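The bootstrap mediation approach named above generally works by resampling cases and re-estimating the indirect effect (the product of the predictor-to-mediator and mediator-to-outcome coefficients) to build a confidence interval. The sketch below illustrates that general technique on synthetic data; variable names and effect sizes are hypothetical, and the paper's exact model is not reproduced.

```python
# Sketch of a bootstrap test of mediation (indirect effect a*b).
# Synthetic data; names and effect sizes are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 61
motion = rng.normal(size=n)                          # predictor (motion sensitivity)
doh = 0.5 * motion + rng.normal(size=n)              # mediator (DOH accuracy)
hpt = 0.4 * doh + 0.2 * motion + rng.normal(size=n)  # outcome (HPT score)

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # x -> m
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]  # m -> y | x
    return a * b

boots = []
for _ in range(2000):
    i = rng.integers(0, n, n)  # resample cases with replacement
    boots.append(indirect_effect(motion[i], doh[i], hpt[i]))

lo_ci, hi_ci = np.percentile(boots, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo_ci:.3f}, {hi_ci:.3f}]")  # CI excluding 0 -> mediation
```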
Abstract:
In this chapter, we draw out the relevant themes from a range of critical scholarship in the small body of digital media and software studies work that has focused on the politics of Twitter data and the sociotechnical means by which access to it is regulated. We highlight in particular the contested relationships between social media research (in both academic and non-academic contexts) and the data wholesale, retail, and analytics industries that feed on it. In the second major section of the chapter we discuss in detail the pragmatic edge of these politics, in terms of what kinds of scientific research are and are not possible in the current political economy of Twitter data access. Finally, at the end of the chapter we return to the much broader implications of these issues for the politics of knowledge, demonstrating how the apparently microscopic level at which the Twitter API mediates access to Twitter data actually inscribes and influences the macro level of the global political economy of science itself, by re-inscribing institutional and traditional disciplinary privilege. We conclude with some speculations about future developments in data rights and data philanthropy that may at least mitigate some of these negative impacts.
Abstract:
World Heritage Landscapes (WHLs) are receiving increased attention from researchers, urban planners, managers and policy makers, yet many heritage values and resources are being irreversibly lost. This phenomenon is especially prominent for WHLs located in cities, where greater development opportunities are involved. Decision making for sustainable urban landscape planning, conservation and management of WHLs often takes place from an economic perspective, especially in developing countries. This, together with uncertain sources of funding to cover WHL operating and maintenance costs, has resulted in many urban managers seeking private sector funding, either in the form of visitor access fees or by leasing part of the site for high-rental facilities such as five-star hotels, clubs and expensive restaurants. The former can result in low-income urban citizens being unable to afford the access fees, contradicting the principle of equal access for all; with the latter, the principle of open access for all is equally violated. To resolve this conflict, a game model is developed to determine how urban managers should allocate WHL spaces to maximize the combination of economic, social and ecological benefits and cultural values. A case study is provided of Hangzhou's West Lake Scenic Area, a WHL located at the centre of Hangzhou city, in which several high-rental facilities have recently been closed down by the local authorities following charges of elitism and misuse of public funds by government officials. The result shows that the best solution is to lease a small space at high rents and leave the remainder of the site to the public. This solution is likely to be applicable only in cities with a strong economy.
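The trade-off underlying the model can be illustrated as a simple allocation problem: choose the leased fraction of the site to maximise a weighted combination of rental income and public benefit. The toy sketch below is only an illustration of that trade-off; the functional forms, weights and solution method are hypothetical, not the paper's game model.

```python
# Toy sketch of the space-allocation trade-off: choose the leased fraction x
# of the site to maximise a weighted sum of rental income and public benefit.
# Functional forms and weights are hypothetical illustrations.
import numpy as np

def total_benefit(x: float, w_econ: float = 0.3, w_public: float = 0.7) -> float:
    rent = x ** 0.5        # diminishing returns on leased space
    public = 1.0 - x ** 2  # public benefit falls steeply as access shrinks
    return w_econ * rent + w_public * public

xs = np.linspace(0.0, 1.0, 1001)
best = xs[np.argmax([total_benefit(x) for x in xs])]
print(f"optimal leased fraction: {best:.2f}")  # small -> lease little, keep most public
```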
Abstract:
This paper presents an online, unsupervised training algorithm enabling vision-based place recognition across a wide range of changing environmental conditions such as those caused by weather, seasons, and day-night cycles. The technique applies principal component analysis to distinguish between aspects of a location’s appearance that are condition-dependent and those that are condition-invariant. Removing the dimensions associated with environmental conditions produces condition-invariant images that can be used by appearance-based place recognition methods. This approach has a unique benefit – it requires training images from only one type of environmental condition, unlike existing data-driven methods that require training images with labelled frame correspondences from two or more environmental conditions. The method is applied to two benchmark variable condition datasets. Performance is equivalent or superior to the current state of the art despite the lesser training requirements, and is demonstrated to generalise to previously unseen locations.
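The core idea lends itself to a compact illustration: fit PCA to images from a single condition, then suppress the leading components on the assumption that they capture condition-dependent appearance, leaving a more condition-invariant representation. The sketch below shows that pattern; the number of removed components and all other details are illustrative assumptions rather than the paper's exact method.

```python
# Sketch of PCA-based condition removal for place recognition: fit PCA on
# images from one condition, zero the leading (assumed condition-dependent)
# components, and keep the remainder as a condition-invariant descriptor.
# Component counts and details are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

def fit_condition_pca(train_images: np.ndarray, n_components: int = 50) -> PCA:
    """train_images: (n_images, n_pixels) array from a single condition."""
    return PCA(n_components=n_components).fit(train_images)

def condition_invariant(pca: PCA, images: np.ndarray, n_remove: int = 3) -> np.ndarray:
    """Project into PCA space, zero the first n_remove components, and map
    back to image space for use by an appearance-based matcher."""
    z = pca.transform(images)
    z[:, :n_remove] = 0.0
    return pca.inverse_transform(z)

# Matching can then proceed on the condition-invariant images, e.g. by
# nearest-neighbour distance between query and database descriptors.
```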