446 results for Multiple views
Abstract:
The aim of the research was two-fold: firstly, to investigate strategies used by Australian parents to encourage desirable child behaviours and to decrease undesirable behaviours; secondly, to determine the acceptability and perceived usefulness to parents of various strategies. The research encompassed two studies. In the first study, 152 parents of children aged under six years completed questionnaires to identify their disciplinary practices. In Study 2, 129 parents reported on the acceptability and perceived effectiveness of various parenting strategies (modelling, ignoring, rewarding and physical punishment) for influencing child behaviour. Most parents in Study 1 reported using techniques consistent with positive parenting strategies. The use of physical punishment was also reported, but predominantly as a secondary method of discipline. In Study 2, the techniques of modelling and rewarding were found to be more acceptable to parents than were ignoring and smacking. The findings highlight the need to raise parental awareness and acceptance of a broader range of positive ways to manage child behaviour.
Abstract:
This paper introduces a straightforward method to asymptotically solve a variety of initial and boundary value problems for singularly perturbed ordinary differential equations whose solution structure can be anticipated. The approach is simpler than conventional methods, including those based on asymptotic matching or on eliminating secular terms. © 2010 by the Massachusetts Institute of Technology.
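The abstract does not give a concrete problem, so the block below shows, for orientation only, a classic singularly perturbed boundary value problem of the kind targeted, together with its standard leading-order approximation obtained by the conventional matching the paper seeks to simplify. The example is a textbook one and is not taken from the paper.

```latex
\begin{align*}
  &\epsilon y'' + y' + y = 0, \qquad y(0)=0,\; y(1)=1, \qquad 0 < \epsilon \ll 1,\\
  &\text{outer solution (set } \epsilon = 0\text{, keep } y(1)=1\text{):}\quad y_{\mathrm{out}}(x) = e^{1-x},\\
  &\text{boundary-layer solution in } X = x/\epsilon\text{, matched to the outer limit:}\quad Y(X) = e - e\,e^{-X},\\
  &\text{leading-order composite approximation:}\quad y(x) \approx e^{1-x} - e^{\,1 - x/\epsilon}.
\end{align*}
```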
Abstract:
NLS is a stream cipher which was submitted to the eSTREAM project. A linear distinguishing attack against NLS, called the Crossword Puzzle (CP) attack, was presented by Cho and Pieprzyk. NLSv2 is a tweaked version of NLS which aims mainly at avoiding the CP attack. In this paper, a new distinguishing attack against NLSv2 is presented. The attack exploits high correlation amongst neighboring bits of the cipher. The paper first shows that modular addition preserves pairwise correlations, as demonstrated by the existence of linear approximations with large biases. Next, it shows how to combine these results with the high correlation between bits 29 and 30 of the S-box to obtain a distinguisher whose bias is around 2^−37. Consequently, we claim that NLSv2 is distinguishable from a random cipher after observing around 2^74 keystream words.
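As a quick consistency check on the figures quoted above, a linear distinguisher with bias ε generically needs on the order of ε⁻² samples. The line below applies that standard rule of thumb; it is not a derivation from the paper itself.

```latex
\[
  N \approx \varepsilon^{-2} = \left(2^{-37}\right)^{-2} = 2^{74} \text{ keystream words.}
\]
```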
Abstract:
The value of information technology (IT) is often realized when it continues to be used after users’ initial acceptance. However, previous research on continuing IT usage is limited in that it dismisses the importance of mental goals in directing users’ behaviors and inadequately accommodates the group context of users. This in-progress paper synthesizes several streams of literature to conceptualize continuing IT usage as a multilevel construct and to view IT usage behavior as directed and energized by a set of mental goals. Drawing on self-regulation theory from social psychology, the paper proposes a process model that positions continuing IT usage as multiple-goal pursuit. An agent-based modeling approach is suggested to further explore the causal and analytical implications of the proposed process model.
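The paper only suggests an agent-based modeling approach; the toy sketch below is one hypothetical way to encode users as self-regulating agents pursuing multiple goals. The goal names, parameters and update rule are assumptions for illustration and are not the authors' model.

```python
# Hypothetical toy agent-based sketch of IT continuance as multiple-goal pursuit.
# Goals, parameters and update rules are illustrative assumptions: each agent
# compares attainment against each goal (self-regulation) and spends its next
# period of IT use on the goal with the largest remaining discrepancy.
import random

GOALS = ["task_performance", "social_connectedness", "learning"]  # assumed goals

class UserAgent:
    def __init__(self, rng):
        self.rng = rng
        self.goal_levels = {g: rng.uniform(0.6, 1.0) for g in GOALS}  # desired states
        self.attainment = {g: rng.uniform(0.0, 0.4) for g in GOALS}   # current states
        self.continuing_use = True

    def step(self):
        # Self-regulation: act on the goal with the largest discrepancy.
        discrepancies = {g: self.goal_levels[g] - self.attainment[g] for g in GOALS}
        focus = max(discrepancies, key=discrepancies.get)
        if discrepancies[focus] <= 0.05:                  # all goals (nearly) met
            self.continuing_use = self.rng.random() < 0.5  # usage may taper off
            return
        # Using the IT partially closes the gap on the focal goal.
        self.attainment[focus] += self.rng.uniform(0.05, 0.2) * discrepancies[focus]

def simulate(n_agents=100, n_steps=50, seed=1):
    rng = random.Random(seed)
    agents = [UserAgent(random.Random(rng.random())) for _ in range(n_agents)]
    for _ in range(n_steps):
        for agent in agents:
            if agent.continuing_use:
                agent.step()
    return sum(agent.continuing_use for agent in agents) / n_agents

if __name__ == "__main__":
    print(f"share of agents still using the IT: {simulate():.2f}")
```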
Abstract:
The ability to build high-fidelity 3D representations of the environment from sensor data is critical for autonomous robots. Multi-sensor data fusion allows for more complete and accurate representations. Furthermore, using distinct sensing modalities (i.e. sensors using a different physical process and/or operating at different electromagnetic frequencies) usually leads to more reliable perception, especially in challenging environments, as modalities may complement each other. However, they may react differently to certain materials or environmental conditions, leading to catastrophic fusion. In this paper, we propose a new method to reliably fuse data from multiple sensing modalities, including in situations where they detect different targets. We first compute distinct continuous surface representations for each sensing modality, with uncertainty, using Gaussian Process Implicit Surfaces (GPIS). Second, we perform a local consistency test between these representations, to separate consistent data (i.e. data corresponding to the detection of the same target by the sensors) from inconsistent data. The consistent data can then be fused together, using another GPIS process, and the rest of the data can be combined as appropriate. The approach is first validated using synthetic data. We then demonstrate its benefit using a mobile robot, equipped with a laser scanner and a radar, which operates in an outdoor environment in the presence of large clouds of airborne dust and smoke.
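A minimal sketch of the local consistency idea, under simplifying assumptions: instead of full Gaussian Process Implicit Surfaces, it fits one Gaussian process per modality to a 1-D surface profile, flags points where the two predictive distributions disagree, and fuses only the consistent region. The sensor models, thresholds and data are invented for illustration and are not from the paper.

```python
# Sketch of a per-modality GP fit, a local consistency test, and fusion of the
# consistent data only (simplified 1-D stand-in for a full GPIS pipeline).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 60)[:, None]
true_surface = np.sin(x).ravel()

# Modality A (e.g. laser): accurate except in a "dust cloud" region where it
# returns spurious short ranges. Modality B (e.g. radar): noisier overall.
laser = true_surface + 0.05 * rng.standard_normal(x.shape[0])
laser[(x.ravel() > 4) & (x.ravel() < 6)] += 1.5   # dust corrupts the laser
radar = true_surface + 0.20 * rng.standard_normal(x.shape[0])

def fit_gp(X, y):
    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05)
    return GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

gp_laser, gp_radar = fit_gp(x, laser), fit_gp(x, radar)
mu_l, std_l = gp_laser.predict(x, return_std=True)
mu_r, std_r = gp_radar.predict(x, return_std=True)

# Local consistency test: modalities agree where their predictive distributions
# overlap within k standard deviations.
k = 3.0
consistent = np.abs(mu_l - mu_r) <= k * np.sqrt(std_l**2 + std_r**2)

# Fuse only the consistent region (here via an inverse-variance average; the
# paper instead feeds the consistent data into a further GPIS).
w_l, w_r = 1.0 / std_l**2, 1.0 / std_r**2
fused = np.where(consistent, (w_l * mu_l + w_r * mu_r) / (w_l + w_r), mu_r)

print(f"consistent points: {consistent.sum()} / {len(x)}")
print(f"fusion RMSE: {np.sqrt(np.mean((fused - true_surface) ** 2)):.3f}")
```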
Abstract:
The purpose of this book is to open a conversation on the idea of information experience, which we understand to be a complex, multidimensional engagement with information. In developing the book we invited colleagues to propose a chapter on any aspect of information experience, for example conceptual, methodological or empirical. We invited them to express their interpretation of information experience, to contribute to the development of this concept. The book has thus become a vehicle for interested researchers and practitioners to explore their thinking around information experience, including relationships between information experience, learning experience, user experience and similar constructs. It represents a collective awareness of information experience in contemporary research and practice. Through this sharing of multiple perspectives, our insights into possible ways of interpreting information experience, and its relationship to other concepts in information research and practice, are enhanced. In this chapter, we introduce the idea of information experience. We also outline the book and its chapters, and bring together some emerging alternative views of and approaches to this important idea.
Abstract:
This paper presents a novel method to rank map hypotheses by the quality of localization they afford. The highest-ranked hypothesis at any moment becomes the active representation that is used to guide the robot to its goal location. A single static representation is insufficient for navigation in dynamic environments where paths can be blocked periodically, a common scenario which poses significant challenges for typical planners. In our approach we simultaneously rank multiple map hypotheses by the influence that localization in each of them has on locally accurate odometry. This is done online for the current locally accurate window by formulating a factor graph of odometry relaxed by localization constraints. Comparison of the resulting perturbed odometry of each hypothesis with the original odometry yields a score that can be used to rank map hypotheses by their utility. We deploy the proposed approach on a real robot navigating a structurally noisy office environment. The configuration of the environment is physically altered outside the robot’s sensory horizon during navigation tasks to demonstrate the proposed approach to hypothesis selection.
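A simplified sketch of the ranking idea, with a weighted blend standing in for the paper's factor-graph relaxation: each hypothesis' localization perturbs the locally accurate odometry, and the hypothesis whose perturbation is smallest becomes the active map. All trajectories, noise levels and weights are assumed for illustration.

```python
# Score map hypotheses by how much their localization perturbs local odometry.
import numpy as np

def relax_odometry(odom, localization, w_odom=1.0, w_loc=0.5):
    """Blend odometric poses with a hypothesis' localization estimates (a simple
    stand-in for solving odometry factors plus localization constraints)."""
    return (w_odom * odom + w_loc * localization) / (w_odom + w_loc)

def hypothesis_score(odom, localization):
    perturbed = relax_odometry(odom, localization)
    # Smaller mean perturbation => localization agrees with odometry => better map.
    return float(np.mean(np.linalg.norm(perturbed - odom, axis=1)))

rng = np.random.default_rng(42)
odom = np.cumsum(rng.normal([0.5, 0.0], 0.02, size=(50, 2)), axis=0)  # local window

# Hypothesis A matches the current structure; hypothesis B reflects an outdated map.
hyp_a = odom + rng.normal(0.0, 0.05, odom.shape)
hyp_b = odom + np.array([0.0, 1.0]) + rng.normal(0.0, 0.05, odom.shape)

scores = {"map A": hypothesis_score(odom, hyp_a), "map B": hypothesis_score(odom, hyp_b)}
active = min(scores, key=scores.get)
print(scores, "-> active hypothesis:", active)
```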
Abstract:
Live migration of multiple Virtual Machines (VMs) has become an integral management activity in data centers for power saving, load balancing and system maintenance. While state-of-the-art live migration techniques focus on improving the migration performance of an independent single VM, little has been investigated for the case of live migration of multiple interacting VMs. Live migration is mainly influenced by network bandwidth, and arbitrarily migrating a VM that has data inter-dependencies with other VMs may increase bandwidth consumption and adversely affect the performance of subsequent migrations. In this paper, we propose a Random Key Genetic Algorithm (RKGA) that efficiently schedules the migration of a given set of VMs, accounting for both inter-VM dependencies and the data center communication network. The experimental results show that the RKGA can schedule the migration of multiple VMs with significantly shorter total migration time and total downtime than a heuristic algorithm.
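A compact random-key GA sketch for the migration-ordering problem described above. The cost model (a shared migration link plus a penalty for migrating a VM before the VMs it depends on), the VM sizes and the dependency graph are assumptions for illustration, not the paper's formulation.

```python
# Random-key GA: chromosomes are vectors of floats; sorting VM indices by their
# keys decodes a migration order, which is evaluated by a simple cost model.
import random

random.seed(7)
VM_SIZE_GB = [8, 16, 4, 32, 8, 12]                 # hypothetical VM memory sizes
DEPENDS_ON = {1: [0], 3: [2], 4: [3], 5: [0, 2]}   # hypothetical inter-VM dependencies
BANDWIDTH_GBPS = 1.0

def decode(keys):
    # Random keys -> migration order: sort VM indices by their key values.
    return sorted(range(len(keys)), key=lambda i: keys[i])

def total_migration_time(order):
    t, finished = 0.0, set()
    for vm in order:
        t += VM_SIZE_GB[vm] / BANDWIDTH_GBPS
        # Penalise migrating a VM before the VMs it depends on (extra cross-site traffic).
        t += 5.0 * sum(1 for dep in DEPENDS_ON.get(vm, []) if dep not in finished)
        finished.add(vm)
    return t

def rkga(pop_size=40, generations=60, elite=0.2, mutants=0.2):
    n = len(VM_SIZE_GB)
    pop = [[random.random() for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda keys: total_migration_time(decode(keys)))
        n_elite, n_mut = int(elite * pop_size), int(mutants * pop_size)
        nxt = pop[:n_elite] + [[random.random() for _ in range(n)] for _ in range(n_mut)]
        while len(nxt) < pop_size:   # biased uniform crossover with an elite parent
            a, b = random.choice(pop[:n_elite]), random.choice(pop)
            nxt.append([a[i] if random.random() < 0.7 else b[i] for i in range(n)])
        pop = nxt
    best = min(pop, key=lambda keys: total_migration_time(decode(keys)))
    return decode(best), total_migration_time(decode(best))

order, cost = rkga()
print("migration order:", order, "estimated total time:", cost)
```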
Abstract:
It is often said that Australia is a world leader in rates of copyright infringement for entertainment goods. In 2012, the hit television show, Game of Thrones, was the most downloaded television show over BitTorrent, and estimates suggest that Australians accounted for a plurality of nearly 10% of the 3-4 million downloads each week. The season finale of 2013 was downloaded over a million times within 24 hours of its release, and again Australians were the largest block of illicit downloaders over BitTorrent, despite our relatively small population. This trend has led the former US Ambassador to Australia to implore Australians to stop 'stealing' digital content, and rightsholders to push for increasing sanctions on copyright infringers. The Australian Government is looking to respond by requiring Internet Service Providers to issue warnings and potentially punish consumers who are alleged by industry groups to have infringed copyright. This is the logical next step in deterring infringement, given that the operators of infringing networks (like The Pirate Bay, for example) are out of regulatory reach. This steady ratcheting up of the strength of copyright, however, comes at a significant cost to user privacy and autonomy, and while the decentralisation of enforcement reduces costs, it also reduces the due process safeguards provided by the judicial process. This article presents qualitative evidence that substantiates a common intuition: one of the major reasons that Australians seek out illicit downloads of content like Game of Thrones in such numbers is that it is more difficult to access legitimately in Australia. The geographically segmented way in which copyright is exploited at an international level has given rise to a ‘tyranny of digital distance’, where Australians have less access to copyright goods than consumers in other countries. Compared to consumers in the US and the EU, Australians pay more for digital goods, have less choice in distribution channels, are exposed to substantial delays in access, and are sometimes denied access completely. In this article we focus our analysis on premium film and television offerings, like Game of Thrones, and through semi-structured interviews, explore how choices in distribution impact on the willingness of Australian consumers to seek out infringing copies of copyright material. Game of Thrones provides an excellent case study through which to frame this analysis: it is both one of the least legally accessible television offerings and one of the most downloaded through filesharing networks of recent times. Our analysis shows that at the same time as rightsholder groups, particularly in the film and television industries, are lobbying for stronger laws to counter illicit distribution, the business practices of their member organisations are counter-productively increasing incentives for consumers to infringe. The lack of accessibility and high prices of copyright goods in Australia lead to substantial economic waste. The unmet consumer demand means that Australian consumers are harmed by lower access to information and entertainment goods than consumers in other jurisdictions. The higher rates of infringement that fulfil some of this unmet demand increase enforcement costs for copyright owners and impose burdens either on our judicial system or on private entities – like ISPs – who may be tasked with enforcing the rights of third parties.
Most worryingly, the lack of convenient and cheap legitimate digital distribution channels risks undermining public support for copyright law. Our research shows that consumers blame rightsholders for failing to meet market demand, and this encourages a social norm that infringing copyright, while illegal, is not morally wrongful. The implications are as simple as they are profound: Australia should not take steps to increase the strength of copyright law at this time. The interests of the public and those of rightsholders align better when there is effective competition in distribution channels and consumers can legitimately get access to content. While foreign rightsholders are seeking enhanced protection for their interests, increasing enforcement is likely to increase their ability to engage in lucrative geographical price-discrimination, particularly for premium content. This is only likely to increase the degree to which Australian consumers feel that their interests are not being met and, consequently, to further undermine the legitimacy of copyright law. If consumers are to respect copyright law, increasing sanctions for infringement without enhancing access and competition in legitimate distribution channels could be dangerously counter-productive. We suggest that rightsholders’ best strategy for addressing infringement in Australia at this time is to ensure that Australians can access copyright goods in a timely, affordable, convenient, and fair lawful manner.
Abstract:
In 2001, 45% (2.7 billion) of the world’s population of approximately 6.1 billion lived in ‘moderate poverty’ on less than US$2 per person per day (World Population Summary, 2012). In the last 60 years there have been many theories attempting to explain development and why some countries have experienced the fastest growth in history while others stagnate, yet so far no way has been found to explain the differences. Traditional views imply that development is the aggregation of successes from multiple individual business enterprises, but this ignores the interactions between and among institutions, organisations and individuals in the economy, which can often have unpredictable effects. Complexity Development Theory (CDT) proposes that by viewing development as an emergent property of society, we can help create better development programs at the organisational, institutional and national levels. This paper asks how the principles of complex adaptive systems (CAS) can be used to develop CDT principles for developing and operating development programs at the bottom of the pyramid in developing economies. To investigate this research question we conduct a literature review to define and describe CDT and create propositions for testing. We illustrate these propositions using a case study of an Asset Based Community Development (ABCD) Program for existing and nascent entrepreneurs in the Democratic Republic of the Congo (DRC). We found evidence that all the principles of CDT were related to the characteristics of CAS. If this is the case, development programs will be able to select which CAS characteristics are needed to test these propositions.
Abstract:
This chapter describes decentralized data fusion algorithms for a team of multiple autonomous platforms. Decentralized data fusion (DDF) provides a useful basis on which to build cooperative information gathering capabilities for robotic teams operating in outdoor environments. Through the DDF algorithms, each platform can maintain a consistent global solution from which decisions may then be made. Comparisons are made between implementations of DDF using two probabilistic representations, Gaussian estimates and Gaussian mixtures, evaluated on a common data set. The overall system design is detailed, providing insight into the overall complexity of implementing a robust DDF system for use in information gathering tasks in outdoor UAV applications.
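A minimal sketch of decentralized fusion for the Gaussian-estimate representation only (the Gaussian-mixture variant is not shown): each platform holds its estimate in information form, and fusion of independent local information is additive. The values are illustrative; a full DDF network would also use channel filters to remove common information, which is omitted here.

```python
# Fuse two platforms' Gaussian estimates of the same feature in information form.
import numpy as np

def to_info(x, P):
    # Information form: Y = P^-1, y = P^-1 @ x.
    Y = np.linalg.inv(P)
    return Y, Y @ x

def from_info(Y, y):
    P = np.linalg.inv(Y)
    return P @ y, P

# Local estimates of a 2-D feature held by two platforms (assumed values).
x1, P1 = np.array([2.0, 1.0]), np.diag([0.5, 2.0])
x2, P2 = np.array([2.3, 0.8]), np.diag([2.0, 0.4])

Y1, y1 = to_info(x1, P1)
Y2, y2 = to_info(x2, P2)

# With independent local information, the fusion update is simply additive.
x_fused, P_fused = from_info(Y1 + Y2, y1 + y2)
print("fused estimate:", x_fused)
print("fused covariance:\n", P_fused)
```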
Abstract:
Introduction and aims: Despite evidence that many Australian adolescents have considerable experience with various drug types, little is known about the extent to which adolescents use multiple substances. The aim of this study was to examine the degree of clustering of drug types within individuals, and the extent to which demographic and psychosocial predictors are related to cluster membership. Design and method: A sample of 1402 adolescents aged 12-17 years was extracted from the Australian 2007 National Drug Strategy Household Survey. Extracted data included lifetime use of 10 substances, gender, psychological distress, physical health, perceived peer substance use, socioeconomic disadvantage, and regionality. Latent class analysis was used to determine clusters, and multinomial logistic regression was employed to examine predictors of cluster membership. Results: There were 3 latent classes. The great majority (79.6%) of adolescents used alcohol only, 18.3% were limited-range multidrug users (encompassing alcohol, tobacco, and marijuana), and 2% were extended-range multidrug users. Perceived peer drug use and psychological distress predicted limited and extended multiple drug use. Psychological distress was a stronger predictor of extended multidrug use than of limited multidrug use. Discussion and conclusion: In the Australian school-based prevention setting, a very strong focus on alcohol use and on the linkages between alcohol, tobacco and marijuana is warranted. Psychological distress may be an important target for screening and early intervention for adolescents who use multiple drugs.
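To make the analysis pipeline concrete, the sketch below fits a three-class latent class model (a mixture of independent Bernoullis) by EM to simulated binary lifetime-use indicators. The simulated class structure loosely mirrors the reported classes, but the data, item probabilities and sample size are invented; this is not the study's data or software.

```python
# Toy latent class analysis via EM on simulated binary lifetime-use indicators.
import numpy as np

rng = np.random.default_rng(3)
n, n_items, n_classes = 1400, 10, 3

# Simulate: class 0 = alcohol only, class 1 = limited-range, class 2 = extended-range.
true_weights = np.array([0.80, 0.18, 0.02])
true_item_p = np.vstack([
    np.r_[0.9, np.full(9, 0.02)],              # mostly alcohol (item 0) only
    np.r_[0.95, 0.7, 0.6, np.full(7, 0.05)],   # alcohol, tobacco, cannabis
    np.full(10, 0.6),                          # broad multidrug use
])
z = rng.choice(n_classes, size=n, p=true_weights)
X = (rng.random((n, n_items)) < true_item_p[z]).astype(float)

# EM for the latent class model.
w = np.full(n_classes, 1.0 / n_classes)
p = rng.uniform(0.2, 0.8, (n_classes, n_items))
for _ in range(200):
    log_lik = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T + np.log(w)   # (n, K)
    resp = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)                             # E-step
    w = resp.mean(axis=0)                                               # M-step
    p = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None], 1e-4, 1 - 1e-4)

print("estimated class weights:", np.round(np.sort(w)[::-1], 3))
```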
Abstract:
This paper addresses the topic of real-time decision making for autonomous city vehicles, i.e. the autonomous vehicles’ ability to make appropriate driving decisions in city road traffic situations. After decomposing the problem into two consecutive decision making stages, and giving a short overview about previous work, the paper explains how Multiple Criteria Decision Making (MCDM) can be used in the process of selecting the most appropriate driving maneuver.
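A toy weighted-sum MCDM sketch for the maneuver-selection stage. The criteria, weights and scores are assumptions chosen for illustration; the paper does not prescribe this particular table or aggregation.

```python
# Score candidate maneuvers against weighted criteria and pick the best one.
CRITERIA_WEIGHTS = {"safety": 0.5, "legality": 0.25, "progress": 0.15, "comfort": 0.10}

# Normalised scores in [0, 1] for a hypothetical traffic situation.
MANEUVER_SCORES = {
    "follow_lane":   {"safety": 0.9, "legality": 1.0, "progress": 0.6, "comfort": 0.9},
    "overtake":      {"safety": 0.5, "legality": 0.8, "progress": 0.9, "comfort": 0.6},
    "stop_and_wait": {"safety": 1.0, "legality": 1.0, "progress": 0.1, "comfort": 0.8},
}

def weighted_score(scores):
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

best = max(MANEUVER_SCORES, key=lambda m: weighted_score(MANEUVER_SCORES[m]))
print({m: round(weighted_score(s), 3) for m, s in MANEUVER_SCORES.items()})
print("selected maneuver:", best)
```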
Abstract:
Marsupials exhibit great diversity in ecology and morphology. However, compared to their sister group, the placental mammals, our understanding of many aspects of marsupial evolution remains limited. We use 101 mitochondrial genomes and data from 26 nuclear loci to reconstruct a dated phylogeny including 97% of extant genera and 58% of modern marsupial species. This tree allows us to analyze the evolution of habitat preference and geographic distributions of marsupial species through time. We found a pattern of mesic-adapted lineages evolving to use more arid and open habitats, which is broadly consistent with regional climate and environmental change. However, contrary to the general trend, several lineages subsequently appear to have reverted from drier to more mesic habitats. Biogeographic reconstructions suggest that current views on the connectivity between Australia and New Guinea/Wallacea during the Miocene and Pliocene need to be revised. The antiquity of several endemic New Guinean clades strongly suggests a substantially older period of connection stretching back to the Middle Miocene, and implies that New Guinea was colonized by multiple clades almost immediately after its principal formation.
Abstract:
This paper reports an investigation of the views and practices of 203 Australian psychologists and guidance counsellors with respect to psycho-educational assessment of students with Specific Learning Disabilities (SLDs). Results from an online survey indicated that practitioners draw upon a wide range of theoretical perspectives when conceptualising and identifying SLDs, including both response-to-intervention and IQ-achievement discrepancy models. Intelligence tests (particularly the Wechsler scales) are commonly employed, with the main stated reasons for their use being ‘traditional’ perspectives (including IQ-achievement discrepancy-based definitions of SLDs), to exclude a diagnosis of intellectual disability, and to guide further assessment and intervention. In contrast, participants reported using measures of academic achievement and tests of specific cognitive deficits known to predict SLDs (e.g., phonological awareness) relatively infrequently.