847 results for Large Scale Virtual Environments
Abstract:
An experiment in large-scale, live game design and public performance, bringing together participants from across the creative arts to design, deliver and document a project that was both a cooperative learning experience and an experimental public performance. The four-month project, funded by the Edge Digital Centre, culminated in a 24-hour ARG event involving over 100 participants in December 2012. Using the premise of a viral outbreak, young enthusiasts auditioned for the roles of Survivor, Zombie, Medic and Military. The main objective was for the Survivors to complete a series of challenges over 24 hours, while the other characters pursued their opposing objectives of interference and sabotage, supported by both scripted and free-form scenarios staged in constructed scenes throughout the venues. The event was set in the State Library of Queensland and the Edge Digital Centre, which granted the project full access, night and day, to all areas including public, office and underground spaces. These venues were transformed into cinematic settings full of interactive props and audio-visual effects. The ZomPoc Project was thus an experiment in writing and directing a large-scale, live public performance that brought together participants from across the creative industries. To design such an event, a number of new resources were developed that exploited techniques of game design, theatre, film, television and tangible media production. A series of workshops invited local artists, scientists, technicians and engineers to find new ways of collaborating to create networked artifacts, experimental digital works, robotic props, modular set designs, sound effects and unique costuming, guided by a multi-platform script developed by Deb Polson. The result of this collaboration was a set of game and set props, both atmospheric and interactive. Such works animated the space, presented story clues and facilitated interactions between strangers who found themselves sharing a unique experience in unexpected places.
Abstract:
In most advanced economies, students are losing interest in careers in engineering and related industries. As a result, western economies are confronting a critical skilled-labour shortage in technology, science and engineering. Decisions about career pathways are made as early as the primary years of schooling, so cooperation between industry and schools to attract students to these professions is crucial. The aim of this paper is to document how the organisational and institutional elements of one industry-school partnership initiative, the Gateway Schools Program, contribute to productive knowledge sharing and networking. In particular, the paper focuses on an initiative of an Australian State government in response to a perceived skills-shortage crisis in an economy transitioning from localised to global knowledge production. The Gateway Schools initiative signals the first sustained attempt in Australia to incorporate schools into production networks through strategic partnerships linking them to partner organisations at the industry level. We provide case examples of how four schools operationalise their partnerships with the minerals and energy industries and how these partnerships, as knowledge assets, affect the delivery of curriculum and capacity building among teachers. Our ultimate goal is to identify the characteristics of successful partnerships that contribute to enhanced student interest and engagement in careers currently experiencing critical shortages.
Abstract:
Statistical methodology was applied to a survey of the time-course incidence of four viruses (alfalfa mosaic virus, clover yellow vein virus, subterranean clover mottle virus and subterranean clover red leaf virus) in improved pastures in southern regions of Australia.
Abstract:
Due to the demand for better and deeper analysis in sports, organizations (both professional teams and broadcasters) are looking to use spatiotemporal data in the form of player tracking information to gain an advantage over their competitors. However, due to the large volume of data, its unstructured nature, and the lack of associated team activity labels (e.g. strategic/tactical), effective and efficient strategies for dealing with such data have yet to be deployed. A bottleneck restricting such solutions is the lack of a suitable representation (i.e. ordering of players) that is immune to the enormous number of possible permutations of player orderings, in addition to the high dimensionality of the temporal signal (e.g. a game of soccer lasts 90 minutes). We leverage a recent method that utilizes a "role representation", as well as a feature reduction strategy that uses a spatiotemporal bilinear basis model, to form a compact spatiotemporal representation. Using this representation, we find the most likely formation patterns of a team associated with match events across nearly 14 hours of continuous player and ball tracking data in soccer. Additionally, we show that we can accurately segment a match into distinct game phases and detect highlights (i.e. shots, corners, free-kicks, etc.) completely automatically using a decision-tree formulation.
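The compact representation described above can be sketched as a bilinear projection: a time-by-features matrix of role-ordered player positions is compressed with a truncated temporal basis on one side and a data-driven spatial basis on the other. The snippet below is a minimal illustration of that idea, assuming a DCT temporal basis, an SVD-derived spatial basis, and synthetic tracking data; the basis sizes are placeholders, not the authors' actual parameters.

```python
import numpy as np
from scipy.fft import dct, idct

# Synthetic "tracking" matrix: T frames x F features
# (e.g. x/y coordinates of role-ordered players) -- assumed sizes.
rng = np.random.default_rng(0)
T, F = 1800, 20
D = np.cumsum(rng.normal(size=(T, F)), axis=0)   # smooth random trajectories

kt, ks = 30, 8             # truncated temporal / spatial basis sizes (assumed)

# Temporal side: keep the first kt DCT coefficients per feature column.
C_t = dct(D, axis=0, norm='ortho')[:kt, :]       # kt x F

# Spatial side: top ks right singular vectors of the coefficient matrix.
U, s, Vt = np.linalg.svd(C_t, full_matrices=False)
A = Vt[:ks, :].T                                 # F x ks spatial basis
C = C_t @ A                                      # kt x ks bilinear coefficients

# Reconstruction: invert the spatial then the temporal projection.
pad = np.zeros((T - kt, F))
D_hat = idct(np.vstack([C @ A.T, pad]), axis=0, norm='ortho')

err = np.linalg.norm(D - D_hat) / np.linalg.norm(D)
print(f"compressed {T*F} values to {kt*ks} coefficients, rel. error {err:.3f}")
```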
Abstract:
As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the accessible length scales in modeling the mechanical behavior of soft matter. However, the classical thermostat algorithms used in highly coarse-grained molecular dynamics underestimate the thermodynamic behavior of soft matter (e.g. microfilaments in cells), which weakens the ability of the modeled material to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of the thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated against all-atom MD simulation. The new algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matter systems.
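The abstract does not give the algorithm itself; as a point of reference, a standard stochastic (Langevin) thermostat couples each coarse-grained bead to a heat bath through a friction term and a matched random force. The sketch below shows one such velocity update in Python; the friction coefficient, bead mass and temperature are illustrative assumptions, not values from the paper.

```python
import numpy as np

def langevin_step(v, f, m=1.0, gamma=0.5, kT=1.0, dt=0.005, rng=None):
    """One Langevin (stochastic) thermostat velocity update.

    v: (N, 3) bead velocities, f: (N, 3) deterministic forces.
    The random-force amplitude sqrt(2*gamma*m*kT/dt) follows the
    fluctuation-dissipation theorem, so the bath holds temperature kT.
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(size=v.shape) * np.sqrt(2.0 * gamma * m * kT / dt)
    a = (f - gamma * m * v + noise) / m   # deterministic + friction + random
    return v + a * dt

# Tiny demo: free beads relax toward the target temperature.
rng = np.random.default_rng(1)
v = np.zeros((100, 3))
for _ in range(20000):
    v = langevin_step(v, f=np.zeros_like(v), rng=rng)
kinetic_T = (v**2).mean()                 # per-dof <m v^2>, with m = kT = 1
print(f"measured kT per degree of freedom: {kinetic_T:.2f} (target 1.0)")
```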
Abstract:
Next Generation Sequencing (NGS) has revolutionised molecular biology, resulting in an explosion of data sets and an increasing role in clinical practice. Such applications necessarily require rapid identification of the organism as a prelude to annotation and further analysis. NGS data consist of a substantial number of short sequence reads, given context through downstream assembly and annotation, a process requiring reads consistent with the assumed species or species group. Highly accurate results have been obtained for restricted sets using SVM classifiers, but such methods are difficult to parallelise and success depends on careful attention to feature selection. This work examines the problem at very large scale, using a mix of synthetic and real data with a view to determining the overall structure of the problem and the effectiveness of parallel ensembles of simpler classifiers (principally random forests) in addressing the challenges of large-scale genomics.
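As a concrete illustration of the classifier side of this approach, the sketch below builds fixed-length k-mer count features from short reads and trains a random forest, the family of "simpler classifiers" named above, which parallelises trivially across trees. The reads, labels and parameters are synthetic placeholders, and scikit-learn is an assumed implementation choice; the paper's actual pipeline is not specified here.

```python
from itertools import product
import numpy as np
from sklearn.ensemble import RandomForestClassifier

K = 3
KMERS = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

def kmer_counts(read):
    """Map a read to a 4^K vector of k-mer counts."""
    x = np.zeros(len(KMERS))
    for i in range(len(read) - K + 1):
        idx = KMERS.get(read[i:i + K])
        if idx is not None:          # skip k-mers with ambiguous bases
            x[idx] += 1
    return x

# Synthetic stand-in for labelled short reads from two "species"
# whose GC content differs.
rng = np.random.default_rng(0)
def fake_read(gc):
    p = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]   # A, C, G, T
    return "".join(rng.choice(list("ACGT"), p=p, size=100))

reads = [fake_read(0.4) for _ in range(200)] + [fake_read(0.6) for _ in range(200)]
y = np.array([0] * 200 + [1] * 200)
X = np.array([kmer_counts(r) for r in reads])

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X, y)                        # trees are fitted in parallel
print("training accuracy:", clf.score(X, y))
```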
Abstract:
Data associated with germplasm collections are typically large and multivariate, with a considerable number of descriptors measured on each of many accessions. Pattern analysis methods of clustering and ordination have been identified as techniques for statistically evaluating the available diversity in germplasm data. While used in many studies, these approaches have not dealt explicitly with the computational consequences of large data sets (i.e. greater than 5000 accessions). To consider the application of these techniques to germplasm evaluation data, 11328 accessions of groundnut (Arachis hypogaea L.) from the International Crops Research Institute for the Semi-Arid Tropics, Andhra Pradesh, India were examined. Data for nine quantitative descriptors measured in the rainy and post-rainy growing seasons were used. The ordination technique of principal component analysis was used to reduce the dimensionality of the germplasm data. The identification of phenotypically similar groups of accessions within large-scale data via computationally intensive hierarchical clustering techniques was not feasible, so non-hierarchical techniques had to be used. Finite mixture models that maximise the likelihood of an accession belonging to a cluster were used to cluster the accessions in this collection. The patterns of response for the different growing seasons were found to be highly correlated. However, in relating the results to passport and other characterisation and evaluation descriptors, the observed patterns did not appear to be related to taxonomy or any other well-known characteristics of groundnut.
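The two-stage strategy described here, ordination followed by non-hierarchical model-based clustering, can be sketched compactly. The snippet below uses scikit-learn's PCA and GaussianMixture as assumed stand-ins for the principal component analysis and finite mixture models named in the abstract; the data are simulated with the study's dimensions, and the component-selection criterion (BIC) is an illustrative choice.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: accessions x quantitative descriptors,
# mirroring the 11328 x 9 shape of the study's data.
rng = np.random.default_rng(0)
X = rng.normal(size=(11328, 9))

# Ordination: reduce descriptor dimensionality before clustering.
Z = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))

# Finite mixture model: soft clusters fitted by maximum likelihood (EM),
# with BIC used here to choose the number of groups.
best = min(
    (GaussianMixture(n_components=k, random_state=0).fit(Z) for k in range(2, 8)),
    key=lambda m: m.bic(Z),
)
labels = best.predict(Z)
print("chosen components:", best.n_components, "| group sizes:", np.bincount(labels))
```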
Abstract:
A novel approach to the large-scale production of high-quality graphene flakes in magnetically enhanced arc discharges between carbon electrodes is reported. A non-uniform magnetic field is used to control the growth and deposition zones, where the Y-Ni catalyst undergoes a transition to the ferromagnetic state, which in turn leads to graphene deposition in a collection area. The quality of the produced material is characterized by SEM, TEM, AFM and Raman techniques. The proposed growth mechanism is supported by a nucleation and growth model.
Abstract:
Although the collection of player and ball tracking data is fast becoming the norm in professional sports, large-scale mining of such spatiotemporal data has yet to surface. In this paper, given an entire season's worth of player and ball tracking data from a professional soccer league (approximately 400,000,000 data points), we present a method which can conduct both individual player and team analysis. Due to the dynamic, continuous and multi-player nature of team sports like soccer, a major issue is aligning player positions over time. We present a "role-based" representation that dynamically updates each player's relative role at each frame and demonstrate how this captures the short-term context to enable both individual player and team analysis. We discover roles directly from data by utilizing a minimum entropy data partitioning method and show how this can be used to accurately detect and visualize formations, as well as analyze individual player behavior.
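The paper discovers roles with a minimum entropy data partitioning method; the per-frame alignment step that a role-based representation requires is commonly posed as an optimal assignment problem. The sketch below is an assumed illustration of that step only, matching one frame of player positions to role templates with the Hungarian algorithm; the templates and positions are hypothetical, and this is not the authors' exact method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_roles(positions, templates):
    """Permute players to roles by minimal total squared distance.

    positions: (P, 2) player xy in one frame; templates: (P, 2) mean
    xy of each role. Returns roles[i] = role index of player i.
    """
    cost = ((positions[:, None, :] - templates[None, :, :]) ** 2).sum(-1)
    players, roles = linear_sum_assignment(cost)   # Hungarian algorithm
    out = np.empty(len(positions), dtype=int)
    out[players] = roles
    return out

# Hypothetical role templates and one noisy frame of 10 outfield players.
templates = np.array([[x, y] for y in (-30.0, 0.0, 30.0) for x in (-20.0, 20.0)]
                     + [[0.0, -15.0], [0.0, 15.0], [-35.0, 0.0], [35.0, 0.0]])
rng = np.random.default_rng(0)
frame = templates[rng.permutation(10)] + rng.normal(scale=3.0, size=(10, 2))
print(assign_roles(frame, templates))
```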
Abstract:
Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with the multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock the value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better-informed decision making.
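The abstract names a Bayesian semi-parametric hierarchical model over a four-tiered spatial hierarchy; as a rough analogue, the sketch below specifies a much simpler two-level Bayesian hierarchy (reefs nested in regions) assuming PyMC (v4+) as the implementation. The structure, priors and simulated data are illustrative assumptions only, far simpler than the authors' model, but they show how scale-specific variability (region-level versus reef-level variance) is estimated.

```python
import numpy as np
import pymc as pm

# Simulated stand-in: one coral-cover observation (logit scale)
# for each of 20 reefs nested in 4 regions.
rng = np.random.default_rng(0)
region_of_reef = np.repeat(np.arange(4), 5)
true_region = rng.normal(0.0, 0.5, size=4)
y = rng.normal(true_region[region_of_reef], 0.3)

with pm.Model() as model:
    # Region level: partial pooling across the spatial hierarchy.
    mu = pm.Normal("mu", 0.0, 1.0)
    sigma_region = pm.HalfNormal("sigma_region", 1.0)
    region_eff = pm.Normal("region_eff", mu, sigma_region, shape=4)

    # Reef level: scale-specific residual variability.
    sigma_reef = pm.HalfNormal("sigma_reef", 1.0)
    pm.Normal("obs", region_eff[region_of_reef], sigma_reef, observed=y)

    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

# Posterior means of the two scale-specific variance components.
print(idata.posterior["sigma_region"].mean().item(),
      idata.posterior["sigma_reef"].mean().item())
```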
Abstract:
Large-scale integration of non-inertial generators such as wind farms will create frequency stability issues due to reduced system inertia. Inertia-based frequency stability studies are important for predicting the performance of a power system with increased levels of renewables. This paper focuses on the impact of large-scale wind penetration on the frequency stability of the Australian power network. MATLAB Simulink is used to develop a frequency-based dynamic model utilizing network data from a simplified 14-generator Australian power system. Loss of generation is modeled as the active power disturbance, and the minimum inertia required to maintain frequency stability is determined for the five-area power system.
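The frequency response underlying such a study is governed by the swing equation: in per unit, 2H d(Δf)/dt = ΔP_mech - ΔP_load - D Δf, so lower inertia H produces a faster and deeper frequency dip for the same disturbance. The sketch below integrates a single-machine-equivalent version with a simple governor for a step loss of generation; all parameter values (H, droop, damping, time constants) are illustrative assumptions, not values from the paper's 14-generator model.

```python
# Single-machine-equivalent swing equation (per unit):
#   2H * d(df)/dt = dPm - dP_loss - D * df,  df = frequency deviation (p.u.)
f0 = 50.0        # Hz (Australian grid)
H = 4.0          # assumed aggregate inertia constant, s
D = 1.0          # assumed load damping, p.u./p.u.
R = 0.05         # assumed governor droop
Tg = 8.0         # assumed governor time constant, s
dP_loss = 0.1    # step loss of generation, p.u.

dt, T = 0.01, 30.0
df, dPm = 0.0, 0.0          # frequency deviation and governor output
nadir = 0.0
for _ in range(int(T / dt)):
    ddf = (dPm - dP_loss - D * df) / (2.0 * H)      # swing equation
    dPm += dt * ((-df / R) - dPm) / Tg              # primary response lag
    df += dt * ddf
    nadir = min(nadir, df)

print(f"frequency nadir: {f0 * (1 + nadir):.2f} Hz, "
      f"settled: {f0 * (1 + df):.2f} Hz")
```

Rerunning the loop with a smaller H deepens the nadir, which is the mechanism behind the minimum-inertia requirement the paper determines.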
Abstract:
A hippocampal CA3 memory model was constructed with PGENESIS, a recently developed version of GENESIS that allows for distributed processing of a neural network simulation. A number of neural models of the human memory system have identified the CA3 region of the hippocampus as storing the declarative memory trace. However, computational models designed to assess the viability of the putative mechanisms of storage and retrieval have generally been too abstract to allow comparison with empirical data. Recent experimental evidence has shown that selective knock-out of NMDA receptors in the CA1 of mice leads to reduced stability of firing specificity in place cells. Here a similar reduction in the stability of input specificity is demonstrated in a biologically plausible neural network model of the CA3 region, under conditions of Hebbian synaptic plasticity versus an absence of plasticity. The CA3 region is also commonly associated with seizure activity. Further simulations of the same model tested the response to continuously repeating versus randomized non-repeating input patterns. Each paradigm delivered input of equal intensity and duration. Non-repeating input patterns elicited a greater pyramidal cell spike count. This suggests that repetitive versus non-repeating neocortical input has a quantitatively different effect on the hippocampus. This may be relevant to the production of independent epileptogenic zones and the process of encoding new memories.
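The plasticity mechanism contrasted in these simulations is Hebbian: synapses strengthen when pre- and postsynaptic activity coincide. The sketch below is a minimal rate-based Hebbian update with weight normalization, a generic illustration of the rule rather than the PGENESIS CA3 model itself; the network sizes, learning rate and input pattern are assumptions.

```python
import numpy as np

def hebbian_update(W, pre, post, lr=0.01):
    """Rate-based Hebbian rule: dW[i, j] ~ post[i] * pre[j].

    Rows are renormalized so the total input weight per cell is
    conserved, preventing unbounded weight growth.
    """
    W = W + lr * np.outer(post, pre)
    return W / W.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n_pre, n_post = 50, 20
W = rng.uniform(0.0, 1.0, size=(n_post, n_pre))
W /= W.sum(axis=1, keepdims=True)

pattern = (rng.random(n_pre) < 0.2).astype(float)   # repeating input pattern
for _ in range(200):
    post = W @ pattern                              # linear response
    W = hebbian_update(W, pattern, post)

# Weights onto the active inputs grow at the expense of the rest,
# i.e. responses become specific to the repeated pattern.
print("mean weight, active inputs:", W[:, pattern > 0].mean())
print("mean weight, inactive inputs:", W[:, pattern == 0].mean())
```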