896 results for Logical Inference
Abstract:
To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified three distinct classes of visuoauditory incongruency effects, selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, the AG in semantic, and the mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex, where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).
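As an illustrative rendering of the predictive coding idea invoked here (a generic hierarchical scheme, not this study's specific DCM), each cortical level $i$ passes a precision-weighted prediction error up and receives predictions down:

$$\varepsilon_i = \Pi_i\bigl(\mu_i - g(\mu_{i+1})\bigr), \qquad \dot{\mu}_{i+1} \propto \frac{\partial g}{\partial \mu_{i+1}}^{\top} \varepsilon_i - \varepsilon_{i+1},$$

where $\mu_i$ denotes the representations at level $i$, $g$ is the generative mapping from the level above, and $\Pi_i$ is the error precision. On this reading, a phonological versus semantic prediction error is expressed wherever the corresponding mapping $g$ is implemented (MTG/STS versus AG/IPS).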
Abstract:
Health Law in Australia is the country's leading text in this area and was the first book to deal with health law on a comprehensive national basis. In this important field, which continues to give rise to challenges for society, Health Law in Australia takes a logical, structured approach to explain the breadth of this area of law across all Australian jurisdictions. By covering all the major areas in this diverse field, Health Law in Australia enhances the understanding of the discipline as a whole. Beginning with an exploration of the general principles of health law, including chapters on “Negligence”, “Children and Consent to Medical Treatment”, and “Medical Confidentiality and Patient Privacy”, the book goes on to consider beginning-of-life and end-of-life issues before concluding with chapters on emerging areas in health law, such as biotechnology, genetic technologies and medical research. The contributing authors are national leaders who are specialists in these areas of health law and who share with readers the results of their research. Health Law in Australia has been written for both legal and health audiences and is essential reading for undergraduate and postgraduate students, researchers and scholars in the disciplines of law, health and medicine, as well as health and legal practitioners, government departments and bodies in the health area, and private health providers.
Abstract:
Both environmental economists and policy makers have shown a great deal of interest in the effect of pollution abatement on environmental efficiency. Despite the modern computational resources available, however, little work in environmental economics has applied Markov chain Monte Carlo (MCMC) methods, which draw samples from the distribution of a Markov chain by simulating from the chain until it approaches equilibrium. Bayesian approaches built on such sampling have gained prominence over classical statistical methods for their simultaneous inference over all model parameters and their ability to incorporate prior information. This paper addresses that gap by applying MCMC to data from China, the largest developing country, which has combined rapid economic growth with serious environmental pollution in recent years. The variables cover economic output and pollution abatement cost from 1992 to 2003. We test the causal direction between pollution abatement cost and environmental efficiency with MCMC simulation, and find that pollution abatement cost increases environmental efficiency, suggesting that environmental policy makers should adopt more substantial measures to reduce pollution in the near future.
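The abstract does not specify the model, but the MCMC machinery it refers to can be sketched generically. Below is a minimal random-walk Metropolis sampler applied to a hypothetical linear relationship between abatement cost and efficiency; all variable names and numbers are illustrative assumptions, not the paper's data.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples=10_000, step=0.5, seed=0):
    """Random-walk Metropolis: sample from a distribution known only up
    to a constant through its log-posterior."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = np.empty((n_samples, x.size))
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)  # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

# Hypothetical toy data: environmental efficiency vs. abatement cost.
rng = np.random.default_rng(1)
cost = np.linspace(0.0, 1.0, 12)
eff = 0.3 + 0.8 * cost + 0.05 * rng.standard_normal(12)

def log_post(theta):
    """Gaussian likelihood with a weak Gaussian prior on both parameters."""
    intercept, slope = theta
    resid = eff - (intercept + slope * cost)
    return -0.5 * np.sum(resid ** 2) / 0.05 ** 2 \
           - 0.5 * (intercept ** 2 + slope ** 2) / 10.0

draws = metropolis_hastings(log_post, x0=[0.0, 0.0])
print(draws[2_000:].mean(axis=0))  # posterior means after burn-in
```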
Abstract:
As a concept, the magic circle is in reality just 4 years old. Whilst often attributed to Johan Huizinga (1955), the modern usage of the term in truth belongs to Katie Salen and Eric Zimmerman. It became established in academia following the publication of “Rules of Play” in 2003. Because of the terminology used, it carries with it unhelpful preconceptions that the game world, or play-space, excludes reality. In this paper, I argue that Salen and Zimmerman (2003) have taken a term used as an example, and applied a meaning to it that was never intended, based primarily upon definitions given by other authors, namely Apter (1991) and Sniderman (n.d.). I further argue that the definition itself contains a logical fallacy, which has prevented the full understanding of the definition in later work. Through a study of the literature in Game Theory, and examples of possible issues which could arise in contemporary games, I suggest that the emotions of the play experience continue beyond the play space, and that emotions from the “real world” enter it with the participants. I consider a reprise of the Stanley Milgram Obedience Experiment (2006), and what that tells us about human emotions and the effect that events taking place in a virtual environment can have upon them. I evaluate the opinion espoused by some authors that there are different magic circles for different players, and assert that this is not a useful approach to take when studying games, because it prevents the analysis of a game as a single entity. Furthermore, I consider the reasons given by other authors for the existence of the Magic Circle, and I assert that the term “Magic Circle” should be discarded, that it has no relevance to contemporary games, and indeed that it acts as a hindrance to the design and study of games. I conclude that the play space which it claims to protect from the courts and other governmental authorities would be better served by the existing concepts of intent, consent, and commonly accepted principles associated with international travel.
Abstract:
With respect to “shape” marks, the Australian Courts would appear to have imposed a “break” in the logical conclusion that registration of a shape which performs a functional purpose, or, further, is indistinguishable from the shape of the item or product itself, creates a perpetual monopoly in the manufacture of that product.
Abstract:
It is often said that Australia is a world leader in rates of copyright infringement for entertainment goods. In 2012, the hit television show Game of Thrones was the most downloaded show over BitTorrent, and estimates suggest that Australians accounted for a plurality of nearly 10% of the 3-4 million downloads each week. The season finale of 2013 was downloaded over a million times within 24 hours of its release, and again Australians were the largest block of illicit downloaders over BitTorrent, despite our relatively small population. This trend has led the former US Ambassador to Australia to implore Australians to stop 'stealing' digital content, and rightsholders to push for increasing sanctions on copyright infringers. The Australian Government is looking to respond by requiring Internet Service Providers to issue warnings and potentially punish consumers who are alleged by industry groups to have infringed copyright. This is the logical next step in deterring infringement, given that the operators of infringing networks (like The Pirate Bay, for example) are out of regulatory reach. This steady ratcheting up of the strength of copyright, however, comes at a significant cost to user privacy and autonomy, and while the decentralisation of enforcement reduces costs, it also reduces the due process safeguards provided by the judicial process. This article presents qualitative evidence that substantiates a common intuition: one of the major reasons that Australians seek out illicit downloads of content like Game of Thrones in such numbers is that it is more difficult to access legitimately in Australia. The geographically segmented way in which copyright is exploited at an international level has given rise to a ‘tyranny of digital distance’, where Australians have less access to copyright goods than consumers in other countries. Compared to consumers in the US and the EU, Australians pay more for digital goods, have less choice in distribution channels, are exposed to substantial delays in access, and are sometimes denied access completely. In this article we focus our analysis on premium film and television offerings, like Game of Thrones, and through semi-structured interviews, explore how choices in distribution impact on the willingness of Australian consumers to seek out infringing copies of copyright material. Game of Thrones provides an excellent case study through which to frame this analysis: it is both one of the least legally accessible television offerings and one of the most downloaded through filesharing networks of recent times. Our analysis shows that at the same time as rightsholder groups, particularly in the film and television industries, are lobbying for stronger laws to counter illicit distribution, the business practices of their member organisations are counter-productively increasing incentives for consumers to infringe. The lack of accessibility and high prices of copyright goods in Australia lead to substantial economic waste. The unmet consumer demand means that Australian consumers are harmed by lower access to information and entertainment goods than consumers in other jurisdictions. The higher rates of infringement that fulfil some of this unmet demand increase enforcement costs for copyright owners and impose burdens either on our judicial system or on private entities – like ISPs – who may be tasked with enforcing the rights of third parties.
Most worryingly, the lack of convenient and cheap legitimate digital distribution channels risks undermining public support for copyright law. Our research shows that consumers blame rightsholders for failing to meet market demand, and this encourages a social norm that infringing copyright, while illegal, is not morally wrongful. The implications are as simple as they are profound: Australia should not take steps to increase the strength of copyright law at this time. The interests of the public and those of rightsholders align better when there is effective competition in distribution channels and consumers can legitimately get access to content. While foreign rightsholders are seeking enhanced protection for their interests, increasing enforcement is likely to increase their ability to engage in lucrative geographical price-discrimination, particularly for premium content. This is only likely to increase the degree to which Australian consumers feel that their interests are not being met and, consequently, to further undermine the legitimacy of copyright law. If consumers are to respect copyright law, increasing sanctions for infringement without enhancing access and competition in legitimate distribution channels could be dangerously counter-productive. We suggest that rightsholders' best strategy for addressing infringement in Australia at this time is to ensure that Australians can access copyright goods in a lawful manner that is timely, affordable, convenient, and fair.
Abstract:
'Approximate Bayesian Computation' (ABC) represents a powerful methodology for the analysis of complex stochastic systems for which the likelihood of the observed data under an arbitrary set of input parameters may be entirely intractable, the latter condition rendering useless the standard machinery of tractable likelihood-based, Bayesian statistical inference [e.g. conventional Markov chain Monte Carlo (MCMC) simulation]. In this paper, we demonstrate the potential of ABC for astronomical model analysis by application to a case study in the morphological transformation of high-redshift galaxies. To this end, we develop, first, a stochastic model for the competing processes of merging and secular evolution in the early Universe and, second, through an ABC-based comparison against the observed demographics of massive (M_gal > 10^11 M⊙) galaxies (at 1.5 < z < 3) in the Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS)/Extended Groth Strip (EGS) data set, we derive posterior probability densities for the key parameters of this model. The 'Sequential Monte Carlo' implementation of ABC exhibited herein, featuring both a self-generating target sequence and a self-refining MCMC kernel, is amongst the most efficient of contemporary approaches to this important statistical algorithm. Through our chosen case study we also highlight the value of careful summary statistic selection, and demonstrate two modern strategies for assessment and optimization in this regard. Ultimately, our ABC analysis of the high-redshift morphological mix returns tight constraints on the evolving merger rate in the early Universe and favours major merging (with disc survival or rapid reformation) over secular evolution as the mechanism most responsible for building up the first generation of bulges in early-type discs.
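The paper's Sequential Monte Carlo ABC is considerably more elaborate, but the core idea it builds on can be shown with a bare rejection-ABC loop. The sketch below infers a hypothetical merger fraction from a simulated galaxy count; every name and number is illustrative.

```python
import numpy as np

def abc_rejection(observed_stats, simulate, prior_sample, summarize,
                  n_draws=50_000, eps=0.02, seed=0):
    """Bare rejection ABC: keep prior draws whose simulated summary
    statistics land within eps of the observed summaries."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        stats = summarize(simulate(theta, rng))
        if np.linalg.norm(stats - observed_stats) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Hypothetical toy problem: infer the fraction lam of a sample of n
# galaxies built by major mergers, from an observed merger fraction.
n = 200
observed = np.array([0.35])

draws = abc_rejection(
    observed_stats=observed,
    simulate=lambda lam, rng: rng.binomial(n, lam),   # forward model
    prior_sample=lambda rng: rng.uniform(0.0, 1.0),   # uniform prior
    summarize=lambda k: np.array([k / n]),            # summary statistic
)
print(draws.mean(), draws.std())  # approximate posterior mean and s.d.
```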
Abstract:
A computationally efficient sequential Monte Carlo algorithm is proposed for the sequential design of experiments for the collection of block data described by mixed effects models. The difficulty in applying a sequential Monte Carlo algorithm in such settings is the need to evaluate the observed-data likelihood, which is typically intractable for all but linear Gaussian models. To overcome this difficulty, we propose to estimate the likelihood unbiasedly, and to perform inference and make decisions via an exact-approximate algorithm. Two estimators are proposed: one using quasi-Monte Carlo methods and one using the Laplace approximation with importance sampling. Both of these approaches can be computationally expensive, so we propose exploiting parallel computational architectures to ensure designs can be derived in a timely manner. We also extend our approach to allow for model uncertainty. This research is motivated by important pharmacological studies related to the treatment of critically ill patients.
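As a sketch of the unbiased-likelihood idea in an exact-approximate scheme (not the paper's quasi-Monte Carlo or Laplace-importance-sampling estimators), the observed-data likelihood of a toy random-intercept logistic model can be estimated by plain Monte Carlo integration over the random effect; all names and numbers below are illustrative.

```python
import numpy as np

def likelihood_estimate(y, x, theta, n_mc=1_000, seed=0):
    """Unbiased Monte Carlo estimate of the observed-data likelihood
    p(y | theta) = E_b[ p(y | b, theta) ] for a toy random-intercept
    logistic model; plain Monte Carlo over the random effect b stands
    in for the paper's more efficient estimators."""
    rng = np.random.default_rng(seed)
    beta, sigma_b = theta
    b = sigma_b * rng.standard_normal(n_mc)       # b ~ N(0, sigma_b^2)
    logits = beta * x[None, :] + b[:, None]       # shape (n_mc, n_obs)
    p = 1.0 / (1.0 + np.exp(-logits))
    lik_given_b = np.prod(np.where(y == 1, p, 1.0 - p), axis=1)
    return lik_given_b.mean()                     # unbiased for p(y | theta)

# One hypothetical block of binary responses at three design points.
x = np.array([0.0, 0.5, 1.0])
y = np.array([0, 1, 1])
print(likelihood_estimate(y, x, theta=(1.2, 0.8)))
```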
Abstract:
Existing planning theories tend to be limited in their analytical scope and often fail to account for the impact of the many interactions between the multitudes of stakeholders involved in strategic planning processes. Although many theorists rejected structural–functional approaches from the 1970s, this article argues that many structural–functional concepts remain relevant and useful to planning practitioners. In fact, structural–functional approaches are highly useful and practical when used as a foundation for systemic analysis of real-world, multi-layered, complex planning systems to support evidence-based governance reform. Such approaches provide a logical and systematic means of analysing the wider governance of strategic planning systems, one that is grounded in systems theory and complementary to existing theories of complexity and planning. While we do not propose their use as a grand theory of planning, this article discusses how structural–functional concepts and approaches might be applied to underpin a practical analysis of the complex decision-making arrangements that drive planning practice, and to provide the evidence needed to target reform of poorly performing arrangements.
Abstract:
Driver training is one of the interventions aimed at reducing the number of crashes that involve novice drivers. Our failure to understand what is really important for learners, in terms of risky driving, is one of the many drawbacks preventing us from building better training programs. Currently, there is a need to develop and evaluate Advanced Driving Assistance Systems that could comprehensively assess driving competencies. The aim of this paper is to present a novel Intelligent Driver Training System (IDTS) that analyses crash risks for a given driving situation, providing avenues for the improvement and personalisation of driver training programs. The analysis takes into account numerous variables acquired synchronously from the Driver, the Vehicle and the Environment (DVE). The system then segments out the manoeuvres within a drive. This paper further presents the use of fuzzy set theory to develop safety inference rules for each manoeuvre executed during the drive, and presents a framework, with an associated prototype, that can be used to comprehensively view and assess complex driving manoeuvres and to provide detailed feedback to novice drivers.
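The abstract does not give the IDTS rule base; as a hypothetical illustration of fuzzy safety inference for a single manoeuvre, triangular memberships over speed and headway can be combined with Mamdani-style min/max rules. All membership breakpoints and rule weights below are illustrative assumptions.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def manoeuvre_risk(speed_kmh, headway_s):
    """Toy fuzzy safety inference for one manoeuvre (illustrative only):
    rule 1: IF speed is high AND headway is short THEN risk is high
    rule 2: IF speed is low  OR  headway is long  THEN risk is low"""
    speed_high = tri(speed_kmh, 60, 100, 140)
    speed_low = tri(speed_kmh, 0, 20, 60)
    headway_short = tri(headway_s, 0.0, 0.5, 2.0)
    headway_long = tri(headway_s, 1.5, 4.0, 8.0)

    risk_high = min(speed_high, headway_short)   # Mamdani AND = min
    risk_low = max(speed_low, headway_long)      # Mamdani OR = max

    # Defuzzify as a weighted average of rule outputs (0 = safe, 1 = risky).
    if risk_high + risk_low == 0:
        return 0.5                               # no rule fires: neutral
    return (1.0 * risk_high + 0.0 * risk_low) / (risk_high + risk_low)

print(manoeuvre_risk(speed_kmh=95, headway_s=0.8))  # high risk expected
```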
Abstract:
Many insect clades, especially within the Diptera (true flies), have been considered classically ‘Gondwanan’, with an inference that their distributions derive from vicariance of the southern continents. Assessing the role that vicariance has played in the evolution of austral taxa requires testing the location and tempo of diversification and speciation against the well-established predictions of fragmentation of the ancient supercontinent. Several early (anecdotal) hypotheses that current austral distributions originate from the breakup of Gondwana derive from studies of taxa within the family Chironomidae (non-biting midges). With the advent of molecular phylogenetics and biogeographic analytical software, these studies have been revisited and expanded to test such conclusions better. Here we studied the midge genus Stictocladius Edwards, from the subfamily Orthocladiinae, which contains austral-distributed clades that match vicariance-based expectations. We resolve several issues of systematic relationships among morphological species and reveal cryptic diversity within many taxa. Time-calibrated phylogenetic relationships among taxa accorded partially with the tempo predicted from geology. For these apparently vagile insects, vicariance-dated patterns persist for South America and Australia. However, as often found, divergence time estimates for New Zealand, at c. 50 mya, post-date the separation of Zealandia from Antarctica and the remainder of Gondwana, but predate the proposed Oligocene ‘drowning’ of these islands. We detail other such ‘anomalous’ dates and suggest a single common explanation rather than stochastic processes. This could involve synchronous establishment following recovery from ‘drowning’ and/or from deleterious warming associated with the mid-Eocene climatic optimum (hence ‘waving’, referring to cycles of drowning events), plus the new availability of topography providing cool running waters, or all these factors in combination. Alternatively, a vicariance explanation remains available, given the uncertain duration of connectivity of Zealandia to Australia–Antarctica–South America via the Lord Howe and Norfolk ridges into the Eocene.
Abstract:
The invasive fruit fly Bactrocera invadens Drew, Tsuruta & White and the Oriental fruit fly Bactrocera dorsalis (Hendel) are highly destructive horticultural pests of global significance. Bactrocera invadens originates from the Indian subcontinent and has recently invaded all of sub-Saharan Africa, while B. dorsalis principally occurs from the Indian subcontinent to southern China and South-east Asia. High morphological and genetic similarity has cast doubt over whether B. invadens is a distinct species from B. dorsalis. Addressing this issue within an integrative taxonomic framework, we sampled from across the geographic distribution of both taxa and: (i) analysed morphological variation, including those characters considered diagnostic (scutum colour, length of aedeagus, width of postsutural lateral vittae, wing size, and wing shape); (ii) sequenced four loci (ITS1, ITS2, cox1 and nad4) for phylogenetic inference; and (iii) generated a cox1 haplotype network to examine population structure. Molecular analyses included the closely related species Bactrocera kandiensis Drew & Hancock. Scutum colour varies from red-brown to fully black for individuals from Africa and the Indian subcontinent. All individuals east of the Indian subcontinent are black, except for a few red-brown individuals from China. The postsutural lateral vittae of B. invadens are narrower than those of B. dorsalis from eastern Asia, but the variation is clinal, with Indian subcontinent B. dorsalis populations intermediate in width. Aedeagus length, wing shape and wing size cannot discriminate between the two taxa. Phylogenetic analyses failed to resolve B. invadens from B. dorsalis, but did resolve B. kandiensis. Bactrocera dorsalis and B. invadens shared cox1 haplotypes, yet the haplotype network pattern does not reflect current taxonomy or patterns in thoracic colour. Some individuals of B. dorsalis/B. invadens possessed haplotypes more closely related to those of B. kandiensis than to those of conspecifics, suggestive of mitochondrial introgression between these species. The combined evidence fails to support the delimitation of B. dorsalis and B. invadens as separate biological species. Consequently, existing biological data for B. dorsalis may be applied to the invasive population in Africa. Our recommendation, in line with other recent publications, is that B. invadens be synonymized with B. dorsalis.
Abstract:
In the Bayesian framework, a standard approach to model criticism is to compare some function of the observed data to a reference predictive distribution. The result of the comparison can be summarized in the form of a p-value, and it is well known that computation of some kinds of Bayesian predictive p-values can be challenging. The use of regression adjustment approximate Bayesian computation (ABC) methods is explored for this task. Two problems are considered. The first is the calibration of posterior predictive p-values so that they are uniformly distributed under some reference distribution for the data. Computation is difficult because the calibration process requires repeated approximation of the posterior for different data sets under the reference distribution. The second problem considered is approximation of distributions of prior predictive p-values for the purpose of choosing weakly informative priors in the case where the model checking statistic is expensive to compute. Here the computation is difficult because of the need to repeatedly sample from a prior predictive distribution for different values of a prior hyperparameter. In both these problems we argue that high accuracy in the computations is not required, which makes fast approximations such as regression adjustment ABC very useful. We illustrate our methods with several examples.
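A minimal sketch of the regression adjustment step itself (a Beaumont-style linear adjustment, not the paper's calibration pipeline): accepted parameter draws are shifted by a linear regression fitted from simulated summary statistics to parameters. All names and numbers below are illustrative.

```python
import numpy as np

def regression_adjust(theta, stats, stats_obs):
    """Linear regression adjustment for ABC: fit theta ~ stats on the
    accepted draws, then shift each draw by the fitted effect of moving
    its summary statistic to the observed value."""
    X = np.column_stack([np.ones(len(stats)), stats])
    coef, *_ = np.linalg.lstsq(X, theta, rcond=None)  # intercept + slopes
    beta = coef[1:]                                   # regression slopes
    return theta + (stats_obs - stats) @ beta

# Hypothetical toy model: posterior for a normal mean via its sample mean.
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 2.0, size=5_000)                  # prior draws
stats = theta[:, None] + rng.normal(0.0, 0.5, (5_000, 1)) # simulated summary
stats_obs = np.array([1.0])

keep = np.abs(stats[:, 0] - stats_obs[0]) < 0.5           # crude acceptance
adjusted = regression_adjust(theta[keep], stats[keep], stats_obs)
print(adjusted.mean(), adjusted.std())
```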
Abstract:
Most of the existing algorithms for approximate Bayesian computation (ABC) assume that it is feasible to simulate pseudo-data from the model at each iteration. However, the computational cost of these simulations can be prohibitive for high-dimensional data. An important example is the Potts model, which is commonly used in image analysis. Images encountered in real-world applications can have millions of pixels, so scalability is a major concern. We apply ABC with a synthetic likelihood to the hidden Potts model with additive Gaussian noise. Using a pre-processing step, we fit a binding function to model the relationship between the model parameters and the synthetic likelihood parameters. Our numerical experiments demonstrate that the precomputed binding function dramatically improves the scalability of ABC, reducing the average runtime required for model fitting from 71 hours to only 7 minutes. We also illustrate the method by estimating the smoothing parameter for remotely sensed satellite imagery. Without precomputation, Bayesian inference is impractical for datasets of that scale.
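The binding-function idea can be sketched as follows: in an offline step, summary-statistic moments are simulated on a grid of parameter values and a smooth mapping is fitted, so the Gaussian synthetic likelihood can later be evaluated without fresh simulations. Everything below (the toy simulator, grid, and polynomial fit) is an illustrative assumption, not the paper's implementation for the Potts model.

```python
import numpy as np

# Offline precomputation: simulate summary statistics over a grid of a
# hypothetical smoothing parameter beta, then fit polynomial "binding
# functions" mapping beta to the mean and s.d. of the summary statistic.
rng = np.random.default_rng(0)
beta_grid = np.linspace(0.1, 2.0, 40)
sim_mean = np.array([np.mean(rng.normal(3 * b, 1 + b, 200)) for b in beta_grid])
sim_sd = np.array([np.std(rng.normal(3 * b, 1 + b, 200)) for b in beta_grid])
mean_fit = np.polynomial.Polynomial.fit(beta_grid, sim_mean, deg=3)
sd_fit = np.polynomial.Polynomial.fit(beta_grid, sim_sd, deg=3)

def synthetic_loglik(beta, s_obs):
    """Gaussian synthetic log-likelihood evaluated via the precomputed
    binding functions, with no simulation at inference time."""
    m, s = mean_fit(beta), max(sd_fit(beta), 1e-6)
    return -0.5 * ((s_obs - m) / s) ** 2 - np.log(s)

s_obs = 3.1  # hypothetical observed summary statistic
betas = np.linspace(0.1, 2.0, 200)
print(betas[np.argmax([synthetic_loglik(b, s_obs) for b in betas])])
```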
Abstract:
Alignment-free methods, in which shared properties of sub-sequences (e.g. identity or match length) are extracted and used to compute a distance matrix, have recently been explored for phylogenetic inference. However, the scalability and robustness of these methods to key evolutionary processes remain to be investigated. Here, using simulated sequence sets of various sizes in both nucleotides and amino acids, we systematically assess the accuracy of phylogenetic inference using an alignment-free approach, based on D2 statistics, under different evolutionary scenarios. We find that compared to a multiple sequence alignment approach, D2 methods are more robust against among-site rate heterogeneity, compositional biases, genetic rearrangements and insertions/deletions, but are more sensitive to recent sequence divergence and sequence truncation. Across diverse empirical datasets, the alignment-free methods perform well for sequences sharing low divergence, at much greater computational speed. Our findings provide strong evidence for the scalability and the potential use of alignment-free methods in large-scale phylogenomics.
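In its basic form, the D2 statistic underlying these methods is an inner product of k-mer count vectors; below is a minimal sketch of one common normalized variant (illustrative, not the paper's exact formulation).

```python
from collections import Counter
import math

def kmer_counts(seq, k=4):
    """Count all overlapping k-mers in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def d2(seq_a, seq_b, k=4):
    """Basic D2 statistic: inner product of the k-mer count vectors."""
    ca, cb = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
    return sum(ca[w] * cb[w] for w in ca.keys() & cb.keys())

def d2_distance(seq_a, seq_b, k=4):
    """Turn D2 into a dissimilarity in [0, 1] via cosine-style normalization
    (one common variant); no alignment of the sequences is ever performed."""
    denom = math.sqrt(d2(seq_a, seq_a, k) * d2(seq_b, seq_b, k))
    return 1.0 - d2(seq_a, seq_b, k) / denom

print(d2_distance("ACGTACGTAGCT", "ACGTACGAAGCT", k=3))
```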