873 results for Divergent Sets
Abstract:
Awareness of the power of the mass media to communicate images of protest to global audiences and, in so doing, to capture space in global media discourses is a central feature of the transnational protest movement. A number of protest movements have formed around opposition to concepts and practices that operate beyond national borders, such as neoliberal globalization or threats to the environment. However, transnational protests also involve more geographically discrete issues such as claims to national independence or greater religious or political freedom by groups within specific national contexts. Appealing to the international community for support is a familiar strategy for communities who feel that they are being discriminated against or ignored by a national government.
Abstract:
In Australia, young children who lack decision-making capacity can have regenerative tissue removed to treat another person suffering from a severe or life-threatening disease. While great good can potentially result from this as the recipient’s life may be saved, ethical unease remains over the ‘use’ of young children in this way. This paper examines the ethical approaches that have featured in the debate over the acceptability and limits of this practice, and how these are reflected in Australia’s legal regime governing removal of tissue from young children. This analysis demonstrates a troubling dichotomy within Australia’s laws that requires decision-makers to adopt inconsistent ethical approaches depending on where a donor child is situated. It is argued that this inconsistency in approach warrants legal reform in this ethically sensitive area.
Abstract:
It is natural for those involved in entertainment to focus on the art. However, as with any activity, even in a free society those involved in the entertainment industries must operate within boundaries set by the law. This article examines the main areas of law that impact entertainment in an Australian context. It contrasts the position in relation to freedom of expression in Australia with that in the United States, which also promotes freedom of expression in a free society. It then briefly canvasses the main limits on entertainment productions under Australian law.
Abstract:
The primary objective of the experiments reported here was to demonstrate the effects of opening up the design envelope for auditory alarms on people's ability to learn the meanings of a set of alarms. Two alarm sets were tested: one already extant, and one newly designed for the same set of functions according to a rationale, set out by the authors, aimed at increasing the heterogeneity of the alarm set and incorporating some well-established principles of alarm design. For both sets of alarms, a similarity-rating experiment was followed by a learning experiment. The results showed that the newly designed set was judged to be more internally dissimilar, and easier to learn, than the extant set. The design rationale outlined in the paper is useful for design purposes in a variety of practical domains and shows how alarm designers, even at a relatively late stage in the design process, can improve the efficacy of an alarm set.
Abstract:
We address the problem of face recognition in video by employing the recently proposed probabilistic linear discriminant analysis (PLDA). PLDA has been shown to be robust against pose and expression in image-based face recognition. In this research, the method is extended and applied to video, where image-set-to-image-set matching is performed. We investigate two approaches to computing similarities between image sets using PLDA: the closest-pair approach and the holistic-sets approach. To better model face appearances in video, we also propose a heteroscedastic version of PLDA which learns the within-class covariance of each individual separately. Our experiments on the VidTIMIT and Honda datasets show that the combination of the heteroscedastic PLDA and the closest-pair approach achieves the best performance.
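To make the closest-pair approach concrete, here is a minimal Python sketch of set-to-set matching in which the score between two image sets is the best score over all cross-set frame pairs. The `pair_similarity` callable stands in for the PLDA likelihood ratio used in the abstract; the negative-Euclidean stand-in, the feature dimensions and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def closest_pair_similarity(set_a, set_b, pair_similarity):
    """Score two image sets by their single most similar cross-set frame pair.

    set_a, set_b    : arrays of per-frame feature vectors, shapes (n, d), (m, d).
    pair_similarity : callable scoring one frame against another; in the paper
                      this role is played by the PLDA likelihood ratio."""
    return max(pair_similarity(a, b) for a in set_a for b in set_b)

def neg_euclidean(a, b):
    """Illustrative stand-in for a learned similarity: negative distance."""
    return -np.linalg.norm(a - b)

gallery = np.random.randn(30, 64)   # toy gallery set: 30 frames, 64-D features
probe = np.random.randn(20, 64)     # toy probe set
score = closest_pair_similarity(probe, gallery, neg_euclidean)
```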
Abstract:
Background. We have characterised a new, highly divergent geminivirus species, Eragrostis curvula streak virus (ECSV), found infecting a hardy perennial South African wild grass. ECSV represents a new genus-level geminivirus lineage, and has a mixture of features normally associated with other specific geminivirus genera. Results. Whereas the ECSV genome is predicted to express a replication-associated protein (Rep) from an unspliced complementary-strand transcript that is most similar to those of begomoviruses, curtoviruses and topocuviruses, its Rep also contains what is apparently a canonical retinoblastoma-related protein interaction motif such as that found in mastreviruses. Similarly, while ECSV has the same unusual TAAGATTCC virion-strand replication origin nonanucleotide found in another recently described divergent geminivirus, Beet curly top Iran virus (BCTIV), the rest of the transcription and replication origin is structurally more similar to those found in begomoviruses and curtoviruses than to those found in BCTIV and mastreviruses. ECSV also has what might be a homologue of the transcription activator protein gene found in begomoviruses, a mastrevirus-like coat protein gene and two intergenic regions. Conclusion. Although it superficially resembles a chimaera of geminiviruses from different genera, the ECSV genome is not obviously recombinant, implying that the features it shares with other geminiviruses are probably those that were present within the last common ancestor of these viruses. In addition to inferring how the ancestral geminivirus genome may have looked, we use the discovery of ECSV to refine various hypotheses regarding the recombinant origins of the major geminivirus lineages. © 2009 Varsani et al; licensee BioMed Central Ltd.
Abstract:
Big data is big news in almost every sector, including crisis communication. However, not everyone has access to big data, and even those who do often lack the tools needed to analyze and cross-reference such large data sets. This paper therefore looks at patterns in the small data sets that we are able to collect with our current tools, to understand whether we can find actionable information in what we already have. We analyzed 164,390 tweets collected during the 2011 earthquake to find out what type of location-specific information people mention in their tweets, and when they talk about it. Based on our analysis, we find that even a small data set, containing far less data than a big data set, can be useful for quickly finding priority disaster-specific areas.
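As an illustration of the kind of small-data analysis the paper describes, the sketch below counts mentions of place terms in a tweet collection, bucketed by hour. The gazetteer, the tweet schema (`text` and `created_at` fields) and the tokenisation are all assumptions for demonstration; the paper does not specify its extraction method.

```python
from collections import Counter
from datetime import datetime

# Hypothetical gazetteer of place terms; the paper's actual method is not specified.
PLACES = {"bridge", "hospital", "school", "highway"}

def location_mentions(tweets):
    """Count place-term mentions per hour in a small tweet collection.

    `tweets` is assumed to be an iterable of dicts with 'text' and
    'created_at' (ISO 8601) keys -- an assumed schema, not the authors' format."""
    counts = Counter()
    for tw in tweets:
        hour = datetime.fromisoformat(tw["created_at"]).strftime("%Y-%m-%d %H:00")
        for raw in tw["text"].lower().split():
            word = raw.strip(".,!?#@")
            if word in PLACES:
                counts[(hour, word)] += 1
    return counts

sample = [{"text": "Bridge closed near the hospital",
           "created_at": "2011-02-22T13:05:00"}]
print(location_mentions(sample))
```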
Abstract:
It has been known since Rhodes Fairbridge’s first attempt to establish a global pattern of Holocene sea-level change by combining evidence from Western Australia and from sites in the northern hemisphere that the details of sea-level history since the Last Glacial Maximum vary considerably across the globe. The Australian region is relatively stable tectonically and is situated in the ‘far-field’ of former ice sheets. It therefore preserves important records of post-glacial sea levels that are less complicated by neotectonics or glacio-isostatic adjustments. Accordingly, the relative sea-level record of this region is predominantly one of glacio-eustatic (ice-equivalent) sea-level changes. The broader Australasian region has provided critical information on the nature of post-glacial sea level, including the termination of the Last Glacial Maximum, when sea level was approximately 125 m lower than present around 21,000–19,000 years BP, and insights into meltwater pulse 1A between 14,600 and 14,300 cal. yr BP. Although most parts of the Australian continent reveal a high degree of tectonic stability, research conducted since the 1970s has shown that the timing and elevation of a Holocene highstand vary systematically around its margin. This is attributed primarily to variations in the timing of the response of the ocean basins and shallow continental shelves to the increased ocean volumes following ice-melt, including a process known as ocean siphoning (i.e. glacio-hydro-isostatic adjustment processes). Several seminal studies in the early 1980s produced important data sets from the Australasian region that have provided a solid foundation for more recent palaeo-sea-level research. This review revisits these key studies, emphasising their continuing influence on Quaternary research, and incorporates relatively recent investigations to interpret the nature of post-glacial sea-level change around Australia. These include a synthesis of research from the Northern Territory, Queensland, New South Wales, South Australia and Western Australia. A focus of these more recent studies has been the re-examination of: (1) the accuracy and reliability of different proxy sea-level indicators; (2) the rate and nature of post-glacial sea-level rise; (3) the evidence for timing, elevation, and duration of mid-Holocene highstands; and (4) the notion of mid- to late Holocene sea-level oscillations, and their basis. Based on this synthesis of previous research, it is clear that estimates of past sea-surface elevation are a function of eustatic factors as well as the morphodynamics of individual sites, the wide variety of proxy sea-level indicators used, their wide geographical range, and their indicative meaning. Some progress has been made in understanding the variability of the accuracy of proxy indicators in relation to their contemporary sea level, the inter-comparison of the variety of dating techniques used and the nuances of calibration of radiocarbon ages to sidereal years. These issues need to be thoroughly understood before proxy sea-level indicators can be incorporated into credible reconstructions of relative sea-level change at individual locations. Many of the issues that challenged sea-level researchers in the latter part of the twentieth century remain contentious today.
Divergent opinions remain about: (1) exactly when sea level attained present levels following the most recent post-glacial marine transgression (PMT); (2) the elevation that sea level reached during the Holocene sea-level highstand; (3) whether sea level fell smoothly from a metre or more above its present level following the PMT; (4) whether sea level remained at these highstand levels for a considerable period before falling to its present position; or (5) whether it underwent a series of moderate oscillations during the Holocene highstand.
Abstract:
This paper evaluates the efficiency of a number of popular corpus-based distributional models in performing literature-based discovery on very large document sets, including online collections. Literature-based discovery is the process of identifying previously unknown connections from text, often published literature, that could lead to the development of new techniques or technologies. Literature-based discovery has attracted growing research interest ever since Swanson's serendipitous discovery of the therapeutic effects of fish oil on Raynaud's disease in 1986. The successful application of distributional models in automating the identification of indirect associations underpinning literature-based discovery has been amply demonstrated in the medical domain. However, we wish to investigate the computational complexity of distributional models for literature-based discovery on much larger document collections, as they may provide computationally tractable solutions to tasks such as predicting future disruptive innovations. In this paper we perform a computational complexity analysis of four successful corpus-based distributional models to evaluate their fit for such tasks. Our results indicate that corpus-based distributional models that store their representations in fixed dimensions provide superior efficiency on literature-based discovery tasks.
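The efficiency property the paper highlights, representations stored in fixed dimensions, can be illustrated with a random-indexing-style sketch: each term receives a sparse random index vector of fixed width, and context vectors are accumulated in that same fixed width, so memory does not grow with corpus size. The parameters and function names below are illustrative assumptions, not the models evaluated in the paper.

```python
import numpy as np

def random_index_vectors(vocab, dim=512, nnz=8, seed=0):
    """Give each term a sparse random index vector of fixed width `dim`.

    Memory stays at O(|vocab| * dim) however large the corpus grows,
    which is the fixed-dimension property discussed above."""
    rng = np.random.default_rng(seed)
    vectors = {}
    for term in vocab:
        v = np.zeros(dim)
        positions = rng.choice(dim, size=nnz, replace=False)
        v[positions] = rng.choice([-1.0, 1.0], size=nnz)
        vectors[term] = v
    return vectors

def train_context_vectors(docs, index_vecs, dim=512, window=2):
    """Accumulate each term's context vector from co-occurring index vectors."""
    ctx = {t: np.zeros(dim) for t in index_vecs}
    for doc in docs:
        tokens = doc.lower().split()
        for i, t in enumerate(tokens):
            if t not in ctx:
                continue
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i and tokens[j] in index_vecs:
                    ctx[t] += index_vecs[tokens[j]]
    return ctx

vocab = {"fish", "oil", "raynaud", "disease"}
ctx = train_context_vectors(["fish oil may ease raynaud disease"],
                            random_index_vectors(vocab))
```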
Abstract:
In this paper we present large, accurately calibrated and time-synchronized data sets, gathered outdoors in controlled and variable environmental conditions, using an unmanned ground vehicle (UGV) equipped with a wide variety of sensors. These include four 2D laser scanners, a radar scanner, a color camera and an infrared camera. The paper provides a full description of the system used for data collection and of the types of environments and conditions in which these data sets were gathered, which include the presence of airborne dust, smoke and rain.
Abstract:
Antibodies can play a protective but non-essential role in natural chlamydial infections, dependent on antigen specificity and antibody isotype. IgG is the dominant antibody in both male and female reproductive tract mucosal secretions, and is bi-directionally trafficked across epithelia by the neonatal Fc receptor (FcRn). Using physiologically relevant pH-polarized epididymal epithelia grown on Transwells®, IgG specifically targeting an extracellular chlamydial antigen, the Major Outer Membrane Protein (MOMP), enhanced uptake and translocation of infection at pH 6-6.5 but not at neutral pH; this was dependent on FcRn expression. Conversely, FcRn-mediated transport of IgG targeting the intracellular chlamydial inclusion membrane protein A (IncA) induced aberrant inclusion morphology, recruited autophagic proteins independent of lysosomes, and significantly reduced infection. Challenge of female mice with MOMP-specific IgG-opsonized C. muridarum delayed infection clearance but exacerbated oviduct occlusion. In male mice, MOMP-IgG elicited by immunization afforded no protection against testicular chlamydial infection, whereas the transcytosis of IncA-IgG significantly reduced testicular chlamydial burden. Together, these data show that the protective and pathological effects of IgG are dependent on FcRn-mediated transport as well as the specificity of IgG for intracellular or extracellular antigens.
Abstract:
Traditional nearest points methods use all the samples in an image set to construct a single convex or affine hull model for classification. However, strong artificial features and noisy data may be generated from combinations of training samples when significant intra-class variations and/or noise occur in the image set. Existing multi-model approaches extract local models by clustering each image set individually only once, with fixed clusters used for matching against various image sets. This may not be optimal for discrimination, as undesirable environmental conditions (e.g., illumination and pose variations) may result in the two closest clusters representing different characteristics of an object (e.g., a frontal face being compared to a non-frontal face). To address this problem, we propose a novel approach that enhances nearest points based methods by integrating affine/convex hull classification with an adapted multi-model approach. We first extract multiple local convex hulls from a query image set via maximum margin clustering, to diminish the artificial variations and constrain the noise in local convex hulls. We then propose adaptive reference clustering (ARC) to constrain the clustering of each gallery image set by forcing the clusters to resemble the clusters in the query image set. By applying ARC, noisy clusters in the query set can be discarded. Experiments on the Honda, MoBo and ETH-80 datasets show that the proposed method outperforms single-model approaches and other recent techniques, such as Sparse Approximated Nearest Points, the Mutual Subspace Method and Manifold Discriminant Analysis.
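For orientation, the sketch below computes the affine-hull distance between two image sets, the single-model baseline that the proposed multi-hull method refines; the multi-hull variant would apply the same computation to clustered subsets of each set. Feature dimensions and set sizes are toy values, and this is a hedged illustration rather than the authors' code.

```python
import numpy as np

def affine_hull_distance(A, B):
    """Distance between the affine hulls of two image sets.

    A, B : arrays of shape (d, n) and (d, m); columns are feature vectors.
    Each hull is parametrized as its mean plus the span of its centered
    samples; the closest points between hulls solve a least-squares problem."""
    mu_a, mu_b = A.mean(axis=1), B.mean(axis=1)
    Ua = A - mu_a[:, None]   # directions spanning hull A
    Ub = B - mu_b[:, None]   # directions spanning hull B
    # Minimize ||(mu_a + Ua x) - (mu_b + Ub y)|| over x, y:
    M = np.hstack([Ua, -Ub])
    w, *_ = np.linalg.lstsq(M, mu_b - mu_a, rcond=None)
    residual = (mu_a + Ua @ w[:Ua.shape[1]]) - (mu_b + Ub @ w[Ua.shape[1]:])
    return np.linalg.norm(residual)

A = np.random.randn(64, 10)  # toy gallery set: 10 images, 64-D features
B = np.random.randn(64, 8)   # toy query set
print(affine_hull_distance(A, B))
```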
Abstract:
It is often said that Australia is a world leader in rates of copyright infringement for entertainment goods. In 2012, the hit television show Game of Thrones was the most downloaded television show over BitTorrent, and estimates suggest that Australians accounted for a plurality, nearly 10%, of the 3-4 million downloads each week. The season finale of 2013 was downloaded over a million times within 24 hours of its release, and again Australians were the largest block of illicit downloaders over BitTorrent, despite our relatively small population. This trend has led the former US Ambassador to Australia to implore Australians to stop 'stealing' digital content, and rightsholders to push for increasing sanctions on copyright infringers. The Australian Government is looking to respond by requiring Internet Service Providers to issue warnings and potentially punish consumers who are alleged by industry groups to have infringed copyright. This is the logical next step in deterring infringement, given that the operators of infringing networks (like The Pirate Bay, for example) are out of regulatory reach. This steady ratcheting up of the strength of copyright, however, comes at a significant cost to user privacy and autonomy, and while the decentralisation of enforcement reduces costs, it also reduces the due process safeguards provided by the judicial process. This article presents qualitative evidence that substantiates a common intuition: one of the major reasons that Australians seek out illicit downloads of content like Game of Thrones in such numbers is that it is more difficult to access legitimately in Australia. The geographically segmented way in which copyright is exploited at an international level has given rise to a ‘tyranny of digital distance’, where Australians have less access to copyright goods than consumers in other countries. Compared to consumers in the US and the EU, Australians pay more for digital goods, have less choice in distribution channels, are exposed to substantial delays in access, and are sometimes denied access completely. In this article we focus our analysis on premium film and television offerings, like Game of Thrones, and, through semi-structured interviews, explore how choices in distribution affect the willingness of Australian consumers to seek out infringing copies of copyright material. Game of Thrones provides an excellent case study through which to frame this analysis: it is both one of the least legally accessible television offerings and one of the most downloaded through filesharing networks of recent times. Our analysis shows that at the same time as rightsholder groups, particularly in the film and television industries, are lobbying for stronger laws to counter illicit distribution, the business practices of their member organisations are counter-productively increasing incentives for consumers to infringe. The lack of accessibility and high prices of copyright goods in Australia lead to substantial economic waste. The unmet consumer demand means that Australian consumers are harmed by lower access to information and entertainment goods than consumers in other jurisdictions. The higher rates of infringement that fulfil some of this unmet demand increase enforcement costs for copyright owners and impose burdens either on our judicial system or on private entities (like ISPs) who may be tasked with enforcing the rights of third parties.
Most worryingly, the lack of convenient and cheap legitimate digital distribution channels risks undermining public support for copyright law. Our research shows that consumers blame rightsholders for failing to meet market demand, and this encourages a social norm that infringing copyright, while illegal, is not morally wrongful. The implications are as simple as they are profound: Australia should not take steps to increase the strength of copyright law at this time. The interests of the public and those of rightsholders align better when there is effective competition in distribution channels and consumers can legitimately get access to content. While foreign rightsholders are seeking enhanced protection for their interests, increasing enforcement is likely to increase their ability to engage in lucrative geographical price discrimination, particularly for premium content. This is only likely to increase the degree to which Australian consumers feel that their interests are not being met and, consequently, to further undermine the legitimacy of copyright law. If consumers are to respect copyright law, increasing sanctions for infringement without enhancing access and competition in legitimate distribution channels could be dangerously counter-productive. We suggest that rightsholders’ best strategy for addressing infringement in Australia at this time is to ensure that Australians can access copyright goods lawfully in a timely, affordable, convenient and fair manner.
Abstract:
Analytically or computationally intractable likelihood functions can arise in complex statistical inference problems, making them inaccessible to standard Bayesian inferential methods. Approximate Bayesian computation (ABC) methods address such problems by replacing direct likelihood evaluations with repeated sampling from the model. ABC methods have been applied predominantly to parameter estimation problems and less to model choice problems, due to the added difficulty of handling multiple model spaces. The ABC algorithm proposed here addresses model choice problems by extending Fearnhead and Prangle (2012, Journal of the Royal Statistical Society, Series B 74, 1–28), where the posterior means of the model parameters estimated through regression form the summary statistics used in the discrepancy measure. An additional stepwise multinomial logistic regression is performed on the model indicator variable in the regression step, and the estimated model probabilities are incorporated into the set of summary statistics for model choice purposes. A reversible jump Markov chain Monte Carlo step is also included in the algorithm to increase model diversity for thorough exploration of the model space. The algorithm was applied to a validation example to demonstrate its robustness across a wide range of true model probabilities. Its subsequent use in three pathogen transmission examples of varying complexity illustrates its utility in inferring the preferred transmission model for each pathogen.
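For readers unfamiliar with ABC, the sketch below shows plain ABC rejection sampling for model choice between two toy models: a model indicator and parameters are drawn from their priors, data are simulated, and draws whose summary statistics land near the observed summaries are retained, giving estimated posterior model probabilities. It omits the paper's regression adjustment, stepwise multinomial logistic regression and reversible jump step; the simulators, priors and tolerance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(model, theta, n=50):
    """Toy simulators for two candidate models (illustrative only)."""
    if model == 0:
        return rng.normal(theta, 1.0, size=n)   # model 0: Gaussian, unit sd
    return rng.exponential(theta, size=n)        # model 1: exponential, scale theta

def summary(x):
    """Low-dimensional summary statistics used in the discrepancy."""
    return np.array([x.mean(), x.std()])

def abc_model_choice(observed, n_sims=20000, tol=0.2):
    """Plain ABC rejection for model choice: draw (model, theta) from the
    prior, simulate, and keep draws whose summaries fall within `tol` of
    the observed summaries; accepted model indicators estimate posterior
    model probabilities."""
    s_obs = summary(observed)
    kept = []
    for _ in range(n_sims):
        m = int(rng.integers(2))          # uniform prior over the two models
        theta = rng.uniform(0.1, 5.0)     # vague parameter prior
        if np.linalg.norm(summary(simulate(m, theta)) - s_obs) < tol:
            kept.append(m)
    kept = np.asarray(kept)
    if kept.size == 0:                    # no acceptances: fall back to prior
        return np.array([0.5, 0.5])
    return np.array([(kept == k).mean() for k in (0, 1)])

obs = rng.normal(2.0, 1.0, size=50)       # pretend-observed data
print(abc_model_choice(obs))
```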