881 results for VIRTUAL ENVIRONMENTS
Abstract:
The liberalization of international trade and foreign direct investment through multilateral, regional and bilateral agreements has had profound implications for the structure and nature of food systems, and therefore for the availability, nutritional quality, accessibility, price and promotion of foods in different locations. Public health attention has only relatively recently turned to the links between trade and investment agreements, diets and health, and there is currently no systematic monitoring of this area. This paper reviews the available evidence on the links between trade agreements, food environments and diets from an obesity and non-communicable disease (NCD) perspective. Based on the key issues identified through the review, the paper outlines an approach for monitoring the potential impact of trade agreements on food environments and obesity/NCD risks. The proposed monitoring approach encompasses a set of guiding principles, recommended procedures for data collection and analysis, and quantifiable ‘minimal’, ‘expanded’ and ‘optimal’ measurement indicators to be tailored to national priorities, capacity and resources. Formal risk assessment processes for existing and evolving trade and investment agreements, focused on their impacts on food environments, will help inform the development of healthy trade policy, strengthen domestic nutrition and health policy space and ultimately protect population nutrition.
Abstract:
Objectives This study introduces and assesses the precision of a standardized protocol for anthropometric measurement of the juvenile cranium using three-dimensional surface-rendered models, for implementation in forensic investigation or paleodemographic research. Materials and methods A subset of multi-slice computed tomography (MSCT) DICOM datasets (n=10) of modern Australian subadults (birth–10 years) was accessed from the “Skeletal Biology and Forensic Anthropology Virtual Osteological Database” (n>1200), obtained from retrospective clinical scans taken at Brisbane children's hospitals (2009–2013). The capabilities of Geomagic Design X™ form the basis of this study, which introduces standardized protocols using triangle surface mesh models to (i) ascertain linear dimensions using reference plane networks and (ii) calculate the area of complex regions of interest on the cranium. Results The protocols described in this paper demonstrate high levels of repeatability between five observers of varying anatomical expertise and software experience. Intra- and inter-observer error was indiscernible, with total technical error of measurement (TEM) values ≤0.56 mm, constituting <0.33% relative error (rTEM) for linear measurements, and a TEM value of ≤12.89 mm², equating to <1.18% (rTEM) of the total area of the anterior fontanelle and contiguous sutures. Conclusions Exploiting the advances of MSCT in routine clinical assessment, this paper assesses the application of this virtual approach to acquire highly reproducible morphometric data in a non-invasive manner for human identification and for population studies in growth and development. The protocols and precision testing presented are imperative for the advancement of “virtual anthropology” into routine Australian medico-legal death investigation.
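For readers unfamiliar with the precision statistics quoted above, the sketch below shows how TEM and rTEM are typically computed from repeated measurements using the standard multi-observer formula; the data are hypothetical and the code is illustrative, not taken from the study.

```python
import numpy as np

def tem(measurements):
    """Technical error of measurement for repeated measurements.

    measurements: array of shape (N subjects, K observers/repeats).
    Standard multi-observer formula:
    TEM = sqrt( sum_n [ sum_k x_nk^2 - (sum_k x_nk)^2 / K ] / (N * (K - 1)) ).
    """
    x = np.asarray(measurements, dtype=float)
    n_subjects, k_obs = x.shape
    within = (x ** 2).sum(axis=1) - x.sum(axis=1) ** 2 / k_obs
    return np.sqrt(within.sum() / (n_subjects * (k_obs - 1)))

def rtem(measurements):
    """Relative TEM, expressed as a percentage of the grand mean."""
    x = np.asarray(measurements, dtype=float)
    return 100.0 * tem(x) / x.mean()

# Hypothetical example: one cranial linear dimension (mm), 10 subjects x 5 observers.
rng = np.random.default_rng(0)
true_values = rng.uniform(60, 120, size=(10, 1))
observations = true_values + rng.normal(0.0, 0.4, size=(10, 5))
print(f"TEM  = {tem(observations):.2f} mm")
print(f"rTEM = {rtem(observations):.2f} %")
```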
Abstract:
Novel computer vision techniques have been developed for the automatic monitoring of crowded environments such as airports, railway stations and shopping malls. Using video feeds from multiple cameras, the techniques enable crowd counting, crowd flow monitoring, queue monitoring and abnormal event detection. The outcomes of the research are useful for surveillance applications and for obtaining operational metrics to improve business efficiency.
Abstract:
Articular cartilage is the load-bearing tissue that consists of proteoglycan macromolecules entrapped between collagen fibrils in a three-dimensional architecture. To date, the search for mathematical models to represent the biomechanics of such a system continues without providing a fitting description of its functional response to load at the micro-scale. We believe that the major complication arose when cartilage was first envisaged as a multiphasic model with distinguishable components, and that quantifying those components and searching for the laws that govern their interaction is inadequate. The thesis of this paper is that cartilage as a bulk is as much a continuum as is the response of its components to external stimuli. For this reason, we framed the fundamental question as to what would be the mechano-structural functionality of such a system in the total absence of one of its key constituents: proteoglycans. To answer this, hydrated normal and proteoglycan-depleted samples were tested under confined compression while finite element models were reproduced, for the first time, based on the structural microarchitecture of the cross-sectional profile of the matrices. These micro-porous in silico models served as virtual transducers, providing an internal, non-invasive probing mechanism beyond experimental capabilities to render the matrices' micromechanics and several other properties, such as permeability and orientation. The results demonstrated that load transfer was closely related to the microarchitecture of the hyperelastic models that represent solid skeleton stress and fluid response based on the state of the collagen network with and without the swollen proteoglycans. In other words, the stress gradient during deformation was a function of the structural pattern of the network and acted in concert with the position-dependent compositional state of the matrix. This reveals that the interaction between indistinguishable components in real cartilage is superimposed by its microarchitectural state, which directly influences macromechanical behavior.
Abstract:
Novel techniques have been developed for the automatic recognition of human behaviour in challenging environments using information from visual and infra-red camera feeds. The techniques have been applied to two interesting scenarios: recognising drivers' speech using lip movements, and recognising audience behaviour, while watching a movie, using facial features and body movements. The outcomes of the research in these two areas will be useful in improving the performance of voice recognition in automobiles for voice-based control and in obtaining accurate movie interest ratings based on live audience response analysis.
Abstract:
This paper presents a system which enhances the capabilities of a light general aviation aircraft to land autonomously in case of an unscheduled event such as engine failure. The proposed system will not only increase the level of autonomy in the general aviation aircraft industry but also increase the level of dependability. Safe autonomous landing in case of an engine failure, with a certain level of reliability, is the primary focus of our work, as both safety and reliability are attributes of dependability. The system is designed for a light general aviation aircraft but can be extended to dependable unmanned aircraft systems. The underlying system components are computationally efficient and provide continuous situation assessment in case of an emergency landing. The proposed system is undergoing an evaluation phase using an experimental platform (Cessna 172R) in real-world scenarios.
Abstract:
This chapter looks at the management and zoning of online sexual culture: the web sites which make up the pornosphere (McNair 2013). It explores the concept of ‘community standards’, which has been a central part of the management of sexually explicit materials in the offline world, and asks what it might mean to talk about ‘community standards’ on the Internet. Finally, it uses the concept of virtual-community standards to revisit the question of managing access to sexually explicit materials on the Internet.
Abstract:
Contemporary online environments suffer from a regulatory gap; that is, there are few options for participants between customer service departments and potentially expensive court cases in foreign jurisdictions. Whatever form of regulation ultimately fills that gap will be charged with determining whether specific behavior, within a specific environment, is fair or foul; whether it’s cheating or not. However, cheating is a term that, despite substantial academic study, remains problematic. Is anything the developer doesn’t want you to do cheating? Is it only cheating if your actions breach the formal terms of service? What about community norms: do they matter at all? All of these remain largely unresolved questions, due to the lack of public determination of cases in such environments, which have mostly been settled prior to legal action. In this paper, I propose a re-branding of participant activity in such environments into developer-sanctioned activity, advantage play, and cheating. Advantage play, ultimately, is activity in which the player turns the mechanics of the environment to their advantage without breaching its rules. Such a definition, and the term itself, is based on the usage of the term within the gambling industry, in which advantage play is considered betting with the advantage in the player’s favor rather than the house’s. Through examples from both the gambling industry and the Massively Multiplayer Online Role-Playing Game Eve Online, I consider the problems in defining cheating, suggest how the term ‘advantage play’ may be useful in understanding participants’ behavior in contemporary environments, and ultimately consider the use of such terminology in dispute resolution models which may overcome this regulatory gap.
Abstract:
Current governance challenges facing the global games industry are heavily dominated by online games. Whilst much academic and industry attention has been afforded to Virtual Worlds, the more pressing contemporary challenges may arise in casual games, especially those found on social networks. As authorities are faced with an increasing volume of disputes between participants and platform operators, the likelihood of external regulation increases, and the effect that such regulation would have on the industry – both internationally and within specific regions – is unclear. Kelly (2010) argues that “when you strip away the graphics of these [social] games, what you are left with is simply a button [...] You push it and then the game returns a value of either Win or Lose”. He notes that while “every game developer wants their game to be played, preferably addictively, because it’s so awesome”, these mechanics lead not to “addiction of engagement through awesomeness” but “the addiction of compulsiveness”, surmising that “the reality is that they’ve actually sort-of kind-of half-intentionally built a virtual slot machine industry”. If such core elements of social game design are questioned, this gives cause to question the real-money options to circumvent them. With players able to purchase virtual currency and speed the completion of tasks, the money invested by the 20% purchasing in-game benefits (Zainwinger, 2012) may well be the result of compulsion. The decision by the Japanese Consumer Affairs Agency to investigate the ‘Kompu Gacha’ mechanic (in which players are rewarded for completing a set of items obtained through purchasing virtual goods such as mystery boxes), and the resultant verdict that such mechanics should be regulated through gambling legislation, demonstrates that politicians are beginning to look at the mechanics deployed in these environments. Purewal (2012) states that “there’s a reasonable argument that complete gacha would be regulated under gambling law under at least some (if not most) Western jurisdictions”. This paper explores the governance challenges within these games and platforms, their role in the global industry, and current practice amongst developers in Australia and the United States to address such challenges.
Abstract:
Almost half of all game players are now women. However, women only represent a small proportion of game developers. There is a lack of previous research to suggest why women don't pursue careers in games and how we can attract more women to the industry. In this paper, we investigate the issues and barriers that prevent women from entering the games industry, as well as the solutions and steps that can be taken to attract more women to the industry. We draw on the lessons learned by the information technology industry and report on a program of events that was conducted at the Queensland University of Technology in 2011. These events provided some insight into the issues surrounding the lack of women in the games industry, as well as some initial steps that we can take as an industry to attract and support more female developers.
Abstract:
Stigmergy is a biological term used when discussing a subset of insect swarm behaviour, describing the apparent organisation seen during their activities. Stigmergy describes a communication mechanism based on environment-mediated signals which trigger responses among the insects. This phenomenon is demonstrated in the behaviour of ants and their food-gathering process when following pheromone trails, where the pheromones are a form of environment-mediated communication. What is interesting about this phenomenon is that highly organised societies are achieved without an apparent management structure. Stigmergy is also observed in human environments, both natural and engineered. It is implicit in the Web, where sites provide a virtual environment supporting coordinative contributions. Researchers in varying disciplines appreciate the power of this phenomenon and have studied how to exploit it. As stigmergy becomes more widely researched, we see its definition mutate as papers citing the original work become referenced themselves. Each paper interprets these works in ways very specific to the research being conducted. Our own research aims to better understand what improves the collaborative function of a Web site when exploiting the phenomenon. However, when researching stigmergy to develop our understanding, we discovered the lack of a standardised and abstract model for the phenomenon. Papers frequently cite the same generic descriptions before becoming intimately focused on formal specifications of an algorithm, or on esoteric discussions regarding sub-facets of the topic. None provide a holistic and macro-level view to model and standardise the nomenclature. This paper provides a content analysis of influential literature, documenting the numerous theoretical and experimental papers that have focused on stigmergy. We establish that stigmergy is a phenomenon that transcends the insect world and is more than just a metaphor when applied to the human world. We present, from our own research, our general theory and abstract model of the semantics of stigma in stigmergy. We hope our model will clarify the nuances of the phenomenon into a useful road-map, and standardise the vocabulary that we witness becoming confused and divergent. Furthermore, this paper documents the analysis on which we base our next paper: Special Theory of Stigmergy: A Design Pattern for Web 2.0 Collaboration.
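As a concrete anchor for the environment-mediated signalling described above, here is a minimal, hypothetical simulation of pheromone-based stigmergy: agents modify a shared environment and those modifications bias later agents. It is an illustrative toy, not the abstract model proposed in the paper.

```python
import random

GRID = 30            # environment: a ring of cells holding pheromone levels
EVAPORATION = 0.05   # fraction of pheromone lost per step (hypothetical)
DEPOSIT = 1.0        # pheromone an agent leaves in its current cell
STEPS = 200
AGENTS = 10

pheromone = [0.0] * GRID
positions = [random.randrange(GRID) for _ in range(AGENTS)]

for _ in range(STEPS):
    for i, pos in enumerate(positions):
        left, right = (pos - 1) % GRID, (pos + 1) % GRID
        # Environment-mediated signal: move toward the stronger pheromone,
        # with some randomness so trails can still form and shift.
        weights = [1.0 + pheromone[left], 1.0 + pheromone[right]]
        positions[i] = random.choices([left, right], weights=weights)[0]
        pheromone[positions[i]] += DEPOSIT   # the agent marks the environment
    pheromone = [p * (1.0 - EVAPORATION) for p in pheromone]

# Cells visited most often accumulate the most pheromone, i.e. a "trail"
# emerges without any central management structure.
print(sorted(range(GRID), key=lambda c: -pheromone[c])[:5])
```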
Abstract:
Live migration of multiple Virtual Machines (VMs) has become an indispensable management activity in datacenters for application performance, load balancing and server consolidation. While state-of-the-art live VM migration strategies focus on improving the migration performance of a single VM, little attention has been given to the migration of multiple VMs. Moreover, existing works on live VM migration ignore inter-VM dependencies as well as the underlying network topology and its bandwidth. Different sequences of migration and different allocations of bandwidth result in different total migration times and total migration downtimes. This paper concentrates on developing a scheduling algorithm for the migration of multiple VMs such that migration performance is maximized. We evaluate our proposed algorithm through simulation. The simulation results show that our proposed algorithm can migrate multiple VMs on any datacenter with minimum total migration time and total migration downtime.
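To make concrete why bandwidth allocation changes the total migration time, here is a hypothetical back-of-the-envelope sketch using a simple pre-copy cost model; the workloads and parameters are invented, and this is not the scheduling algorithm proposed in the paper.

```python
# Pre-copy model: each round re-sends pages dirtied during the previous round,
# so copy time follows a geometric series with ratio dirty_rate / bandwidth.

def precopy_time(memory_mb, dirty_rate_mbps, bandwidth_mbps, rounds=5):
    r = dirty_rate_mbps / bandwidth_mbps
    if r >= 1.0:
        return float("inf")  # pre-copy never converges at this bandwidth
    return (memory_mb / bandwidth_mbps) * sum(r ** k for k in range(rounds + 1))

vms = [        # (memory MB, dirty rate MB/s) -- hypothetical workloads
    (2048, 80),
    (8192, 300),
    (1024, 20),
]
link = 1000.0  # MB/s of shared migration bandwidth

# Strategy A: migrate sequentially, each VM gets the full link.
sequential = sum(precopy_time(m, d, link) for m, d in vms)

# Strategy B: migrate all VMs concurrently, splitting the link equally.
share = link / len(vms)
concurrent = max(precopy_time(m, d, share) for m, d in vms)

print(f"sequential total migration time: {sequential:.1f} s")
print(f"concurrent total migration time: {concurrent:.1f} s")
```

With these numbers the concurrent strategy takes far longer, because splitting the link pushes the high-dirty-rate VM close to the point where pre-copy stops converging; this is the kind of effect a migration scheduler has to account for.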
Abstract:
This paper presents large, accurately calibrated and time-synchronised datasets, gathered outdoors in controlled environmental conditions using an unmanned ground vehicle (UGV) equipped with a wide variety of sensors. It discusses how the data collection process was designed, the conditions in which these datasets were gathered, and some possible outcomes of their exploitation, in particular for evaluating the performance of sensors and perception algorithms for UGVs.
Abstract:
Operating in vegetated environments is a major challenge for autonomous robots. Obstacle detection based only on geometric features causes the robot to consider foliage, for example small grass tussocks that could easily be driven through, as obstacles. Classifying vegetation does not solve this problem, since there might be an obstacle hidden behind the vegetation; in addition, dense vegetation typically needs to be considered as an obstacle. This paper addresses this problem by augmenting a probabilistic traversability map, constructed from laser data, with ultra-wideband radar measurements. An adaptive detection threshold and a probabilistic sensor model are developed to convert the radar data to occupancy probabilities. The resulting map retains the fine resolution of the laser map but clears, from the traversability map, areas whose obstacle labels were induced by obstacle-free foliage. Experimental results validate that this method is able to improve the accuracy of traversability maps in vegetated environments.
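To illustrate the kind of probabilistic fusion described here (radar returns converted to occupancy probabilities and combined with a laser-based map), below is a minimal log-odds sketch. The inverse sensor model, the fixed threshold and all numbers are hypothetical stand-ins, not the adaptive model developed in the paper.

```python
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

def inv_logodds(l):
    return 1.0 / (1.0 + np.exp(-l))

def radar_inverse_model(intensity, threshold):
    """Map a radar return to an occupancy probability: returns well above the
    (here fixed, in the paper adaptive) threshold suggest a solid obstacle,
    weak returns suggest penetrable foliage."""
    return 0.2 + 0.6 / (1.0 + np.exp(-(intensity - threshold)))

# Laser-based occupancy for a few cells: foliage looks occupied to the laser.
laser_prob = np.array([0.90, 0.85, 0.20, 0.95])
radar_intensity = np.array([2.0, 1.5, 8.0, 9.5])   # low = likely obstacle-free foliage
threshold = 5.0

# Standard Bayesian fusion in log-odds space (uniform 0.5 prior cancels out).
fused = logodds(laser_prob) + logodds(radar_inverse_model(radar_intensity, threshold))
print(np.round(inv_logodds(fused), 2))   # cells with weak radar returns are cleared down
```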
Abstract:
Server consolidation using virtualization technology has become an important technique for improving the energy efficiency of data centers, and virtual machine placement is the key to server consolidation. In the past few years, many approaches to virtual machine placement have been proposed. However, existing approaches consider only the energy consumed by the physical machines in a data center and ignore the energy consumed by its communication network. The energy consumption of the communication network in a data center is not trivial, and should therefore be considered in virtual machine placement. In our preliminary research, we proposed a genetic algorithm for a new virtual machine placement problem that considers the energy consumption of both the physical machines and the communication network in a data center. Aiming to improve the performance and efficiency of that genetic algorithm, this paper presents a hybrid genetic algorithm for the energy-efficient virtual machine placement problem. Experimental results show that the hybrid genetic algorithm significantly outperforms the original genetic algorithm, and that the hybrid genetic algorithm is scalable.
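As a rough illustration of a placement fitness that accounts for both server energy and network energy, the sketch below runs a bare-bones genetic algorithm over random VM demands and inter-VM traffic. The energy models, parameters and operators are hypothetical; this is not the hybrid algorithm presented in the paper.

```python
import random

N_VMS, N_PMS = 8, 4
random.seed(1)
vm_load = [random.uniform(0.1, 0.4) for _ in range(N_VMS)]            # CPU demand per VM
traffic = [[random.choice([0, 0, 5, 20]) if i < j else 0              # Mb/s between VM pairs
            for j in range(N_VMS)] for i in range(N_VMS)]

def energy(placement):
    # Server energy: idle cost for every powered-on PM plus a load-proportional part.
    loads = [0.0] * N_PMS
    for vm, pm in enumerate(placement):
        loads[pm] += vm_load[vm]
    if any(l > 1.0 for l in loads):
        return float("inf")                       # infeasible: a PM is overloaded
    server = sum(100 + 150 * l for l in loads if l > 0)
    # Network energy: traffic between VMs placed on different PMs crosses switches.
    network = sum(0.5 * traffic[i][j]
                  for i in range(N_VMS) for j in range(i + 1, N_VMS)
                  if placement[i] != placement[j])
    return server + network

def crossover(a, b):
    cut = random.randrange(1, N_VMS)
    return a[:cut] + b[cut:]

def mutate(p):
    p = list(p)
    p[random.randrange(N_VMS)] = random.randrange(N_PMS)
    return p

# Each chromosome maps VM index -> PM index.
pop = [[random.randrange(N_PMS) for _ in range(N_VMS)] for _ in range(30)]
for _ in range(200):
    pop.sort(key=energy)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]

best = min(pop, key=energy)
print("placement:", best, "energy:", round(energy(best), 1))
```

Because the fitness penalises both powered-on servers and cross-server traffic, the search tends to co-locate heavily communicating VMs while still packing servers, which is the trade-off the paper's placement problem formalises.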