969 results for Sophisticated voting


Relevance: 10.00%

Abstract:

The range of legal instruments informing how the Murray-Darling Basin (MDB) is managed is extensive. Some provide guidance; a number indicate strategies and policies; some assume the form of protectable rights and enforceable duties. What has emerged is a complicated and sophisticated web of interacting normative arrangements. These include: several international agreements, including those concerning wetlands, biodiversity and climate change; the Constitution of the Commonwealth; the Water Act 2007 of the Commonwealth; the Murray-Darling Basin Agreement scheduled to the Act; State water entitlements stated in the Agreement; Commonwealth environmental water holdings under the Act; the Murray-Darling Basin Plan; water-resource plans under the Act or State or Territorial water legislation; State and Territorial water legislation; and water entitlements and water rights under State or Territorial water legislation.

Relevance: 10.00%

Abstract:

The standard approach to tax compliance applies the economics-of-crime methodology pioneered by Becker (1968). In its first application, due to Allingham and Sandmo (1972), it models the behaviour of agents as a decision about the extent of their income to report to the tax authorities, given a certain institutional environment represented by parameters such as the probability of detection and the penalties imposed in the event the agent is caught. While this basic framework yields important insights on tax compliance behaviour, it has some critical limitations. Specifically, it predicts a level of compliance that is significantly below what is observed in the data. This thesis revisits the original framework with a view to addressing this issue and examining the political economy implications of tax evasion for progressivity in the tax structure. The approach followed involves building a macroeconomic, dynamic equilibrium model to examine these issues, using a step-wise model-building procedure that starts with some very simple variations of the basic Allingham and Sandmo construct, which are eventually integrated into a dynamic general equilibrium overlapping generations framework with heterogeneous agents. One of the variations involves incorporating the Allingham and Sandmo construct into a two-period model of a small open economy of the type originally attributed to Fisher (1930). A further variation of this simple construct involves allowing agents to initially decide whether to evade taxes or not. In the event they decide to evade, the agents then have to decide the extent of income or wealth they wish to under-report. We find that the ‘evade or not’ assumption has strikingly different and more realistic implications for the extent of evasion, and demonstrate that it is a more appropriate modelling strategy in the context of macroeconomic models, which are essentially dynamic in nature and involve consumption smoothing across time and across various states of nature. Specifically, since deciding to undertake tax evasion affects the consumption-smoothing ability of the agent by creating two states of nature, in which the agent is either ‘caught’ or ‘not caught’, it is possible that the agent's utility under certainty, when choosing not to evade, is higher than the expected utility obtained by evading. Furthermore, the simple two-period model incorporating an ‘evade or not’ choice can be used to demonstrate some strikingly different political economy implications relative to its Allingham and Sandmo counterpart. In variations of the two models that allow for voting on the tax parameter, we find that agents typically vote for a high degree of progressivity by choosing the highest available tax rate from the menu of choices available to them. There is, however, a small range of inequality levels for which agents in the ‘evade or not’ model vote for a relatively low value of the tax rate. The final steps in the model-building procedure involve grafting the two-period models with a political economy choice onto a dynamic overlapping generations setting with more general, non-linear tax schedules and a ‘cost-of-evasion’ function that is increasing in the extent of evasion. Results based on numerical simulations of these models show further improvement in the model’s ability to match empirically plausible levels of tax evasion.
In addition, the differences between the political economy implications of the ‘evade or not’ version of the model and its Allingham and Sandmo counterpart are now very striking; there is now a large range of values of the inequality parameter for which agents in the ‘evade or not’ model vote for a low degree of progressivity. This is because, in the ‘evade or not’ version of the model, low values of the tax rate encourage a large number of agents to choose the ‘not-evade’ option, so that the redistributive mechanism is more ‘efficient’ relative to situations in which tax rates are high. Some further implications of the models of this thesis relate to whether variations in the level of inequality, and in parameters such as the probability of detection and penalties for tax evasion, matter for the political economy results. We find that (i) the political economy outcomes for the tax rate are quite insensitive to changes in inequality, and (ii) the voting outcomes change in non-monotonic ways in response to changes in the probability of detection and penalty rates. Specifically, the model suggests that changes in inequality should not matter, although the political outcome for the tax rate for a given level of inequality is conditional on whether there is a large or small extent of evasion in the economy. We conclude that further theoretical research into macroeconomic models of tax evasion is required to identify the structural relationships underpinning the link between inequality and redistribution in the presence of tax evasion. The models of this thesis provide a necessary first step in that direction.
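The ‘evade or not’ comparison described above turns on whether utility under certainty exceeds the expected utility of evading across the ‘caught’ and ‘not caught’ states. A minimal numerical sketch of that comparison follows; the log utility, the one-period structure, and the tax, penalty and detection parameters are illustrative assumptions, not the thesis model.

```python
import numpy as np

def evade_or_not(income, tax_rate, hidden_share, penalty_rate, p_detect):
    """Compare honest reporting with evasion in a simple one-period,
    two-state setup (illustrative sketch only, not the thesis model).

    Honest: consume income net of tax on the full amount.
    Evading a share of income: with probability p_detect the agent is caught
    and pays tax on everything plus a penalty on the hidden amount; otherwise
    the hidden amount escapes tax.
    """
    u = np.log  # concave utility, so uncertainty across states is costly

    c_honest = income * (1.0 - tax_rate)

    hidden = hidden_share * income
    c_not_caught = income - tax_rate * (income - hidden)
    c_caught = income - tax_rate * income - penalty_rate * hidden

    eu_evade = p_detect * u(c_caught) + (1.0 - p_detect) * u(c_not_caught)
    return u(c_honest), eu_evade

u_honest, eu_evade = evade_or_not(income=100.0, tax_rate=0.3,
                                  hidden_share=0.4, penalty_rate=0.6,
                                  p_detect=0.35)
print(f"utility if honest:           {u_honest:.4f}")
print(f"expected utility if evading: {eu_evade:.4f}")
print("agent evades" if eu_evade > u_honest else "agent reports honestly")
```

With these hypothetical parameters the certain payoff from honest reporting dominates, illustrating how the ‘evade or not’ margin can keep many agents fully compliant even when some evasion would go undetected.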

Relevance: 10.00%

Abstract:

This book has been painstakingly researched by a scholar whose intellectual competencies span several disciplines: history, sociology, criminology, culture, drama and film studies. It is theoretically sophisticated, yet not dense: it reads like a novel, with an abundance of interesting, complex characters.

Relevance: 10.00%

Abstract:

"Defrauding land titles systems impacts upon us all. Those who deal in land include ordinary citizens, big business, small business, governments, not-for-profit organisations, deceased estates... Fraud here touches almost everybody." The thesis presented in this paper is that the current and disparate steps taken by jurisdictions to alleviate land fraud associated with identity-based crimes are inadequate. The centrepiece of the analysis is the consideration of two scenarios that have recently occurred. One is the typical scenario in which a spouse forges the partner's signature to obtain a mortgage from a financial institution. The second is atypical: it involves a sophisticated overseas fraud duping many stakeholders involved in the conveyancing process. After outlining these scenarios, we examine how the identity verification requirements of the United Kingdom, Ontario, the Australian states, and New Zealand would have applied to these two frauds. Our conclusion is that even though some jurisdictions may have prevented the frauds from occurring, the current requirements are inadequate. We use the lessons learnt to propose what we consider to be core principles for identity verification in land transactions.

Relevance: 10.00%

Abstract:

Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated, and have moved toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we attempted to explore the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation concerned the relationship between transcription factors grouped by their regulatory role and the corresponding promoter strength. Our study of E. coli σ70 promoters found support at the 0.1 significance level for our hypothesis that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. Although the observations were specific to σ70, they nevertheless strongly encourage additional investigations when more experimentally confirmed data are available. In our preliminary exploration of relationships between the key regulatory components in E. coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. Of chief interest was the relationship observed between promoter strength and TFs with respect to their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters would have more transcription factors that enhance gene expression, whilst strong promoters would have more repressor binding sites. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters where the corresponding TFBSs are either all repressors or all activators. Nevertheless, such suggestive results strongly encourage additional investigations when more experimentally confirmed data become available.
Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using the SVM and kernel methods, principally the spectrum kernel. Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as to the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in ‘moderately’ conserved transcription factor binding sites, as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in the false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving the inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled ‘regulatory trees’, inspired by the phylogenetic tree concept. Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While the common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to ‘hardware’, the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to ‘software’. In this context, we explored the ‘pan-regulatory network’ for the Fur system: the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes, the ‘core-regulatory-set’, and interactions found only in a subset of the genomes explored, the ‘sub-regulatory-set’. We found nine Fur target gene clusters present across the four genomes studied, with this core set potentially identifying basic regulatory processes essential for survival.
Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were not present in E. coli or B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study. We identified a set of promising feature attributes, demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
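The spectrum kernel mentioned above compares sequences through their shared k-mer content. A minimal sketch of the idea follows; the toy sequences, the choice of k = 3 and the suggestion of a precomputed-kernel SVM are illustrative assumptions, not the pipeline used in the thesis.

```python
from collections import Counter
from itertools import product
import numpy as np

def spectrum(seq, k=3, alphabet="ACGT"):
    """k-mer count vector of a DNA sequence (the spectrum feature map)."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return np.array([counts[m] for m in kmers], dtype=float)

def spectrum_kernel(seqs, k=3):
    """Gram matrix K[i, j] = <phi_k(s_i), phi_k(s_j)>."""
    X = np.vstack([spectrum(s, k) for s in seqs])
    return X @ X.T

# Toy candidate binding-site sequences (hypothetical, for illustration only).
seqs = ["TGTGATCTAG", "TGTGACCTAG", "ACCGGTTAAC", "GGCCAATTGG"]
K = spectrum_kernel(seqs, k=3)
print(K)

# The Gram matrix could then be passed to an SVM, e.g.
# sklearn.svm.SVC(kernel="precomputed").fit(K, labels), with labels taken
# from experimentally confirmed binding versus non-binding sites.
```

The first two toy sequences share most of their 3-mers and so receive a large kernel value, which is exactly the notion of similarity the SVM classifier exploits when refining TFBS predictions.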

Relevance: 10.00%

Abstract:

This paper reports on some findings from the first year of a three-year longitudinal study in which seventh- to ninth-graders were introduced to engineering education. Specifically, the paper addresses students’ responses to an initial design activity involving bridge construction, which was implemented at the end of seventh grade. This paper also addresses how students created their bridge designs and applied these in their bridge constructions; their reflections on their designs; their reflections on why the bridge failed to support increased weights during the testing process; and their suggestions on ways in which they would improve their bridge designs. The present findings include the identification of six increasingly sophisticated levels of illustrated bridge designs, with designs improving between the classroom and homework activities of two focus groups of students. Students’ responses to the classroom activity revealed a number of iterative design processes, in which the problem goals, including constraints, served as monitoring factors for students’ generation of ideas, design thinking and construction of an effective bridge.

Relevance: 10.00%

Abstract:

Evaluating physical activity is important for public health population research and for evaluating lifestyle interventions among targeted groups. Self-reported questionnaires are frequently used to evaluate physical activity in a variety of contexts where resource or pragmatic limitations prohibit the use of more sophisticated approaches. However, prior research on the use of other patient-reported outcomes in healthcare settings has highlighted that simply completing a questionnaire may change a patient’s behaviour or responses to subsequent questions. This methodology study aimed to examine whether completing a standard physical activity questionnaire altered patients’ responses to two related questions: (a) whether they are ‘sufficiently physically active’ and (b) whether they desire ‘to be more physically active’.

Relevance: 10.00%

Abstract:

Responding to the individual needs of the person affected by cancer is a fundamental tenet of nursing care. The evidence base to enable highly personalized approaches to the way we provide care has grown enormously in recent years. Today, we have a much better understanding of the mechanisms underpinning health needs of people with cancer, as well as the wide range of environmental, sociocultural, psychological, and biological influences on these needs. This growing evidence base enables us to better target and tailor interventions in increasingly sophisticated ways.

Relevance: 10.00%

Abstract:

Advances in algorithms for approximate sampling from a multivariable target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications. For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation, or ABC) has emerged in the last few years, which avoids direct likelihood computation through repeated sampling of data from the model and comparison of observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box, since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics for use within ABC via a goodness-of-fit statistic and indirect inference. Another important problem in statistics is the design of experiments: that is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments and can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way. If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information to make better decisions about future design points. This is of particular interest if the data can be collected sequentially. In a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design that accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty. Finally, Part III of this thesis tackles a substantive medical problem. A neurological disorder known as motor neuron disease (MND) progressively causes motor neurons to lose the ability to innervate the muscle fibres, causing the muscles to eventually waste away. When this occurs, the motor unit effectively ‘dies’. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is particularly poor, with patients usually surviving only a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists.
Motor unit number estimation (MUNE) is an attempt to directly assess underlying motor unit loss, rather than relying on indirect techniques such as muscle strength assessment, which is generally unable to detect progression due to the body’s natural attempts at compensation. Part III of this thesis builds upon a previous Bayesian technique that develops a sophisticated statistical model taking into account physiological information about motor unit activation and various sources of uncertainty. More specifically, we develop a more reliable MUNE method by applying marginalisation over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We make other subtle changes to the model and algorithm to improve the robustness of the approach.
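The likelihood-free idea behind ABC, as described for Part I, can be illustrated with a basic rejection sampler, a deliberately simple precursor to the SMC-based ABC algorithms developed in the thesis. The normal toy model, prior, summary statistic and tolerance below are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: observed data drawn from N(theta, 1); we pretend the likelihood
# is intractable and only ever simulate data from the model.
theta_true = 2.0
y_obs = rng.normal(theta_true, 1.0, size=50)
s_obs = y_obs.mean()  # summary statistic of the observed data

def abc_rejection(n_samples=500, tolerance=0.2):
    """Basic ABC rejection: keep prior draws whose simulated summary
    statistic lies within `tolerance` of the observed summary."""
    accepted = []
    while len(accepted) < n_samples:
        theta = rng.normal(0.0, 5.0)               # draw from the prior
        y_sim = rng.normal(theta, 1.0, size=50)    # simulate from the model
        if abs(y_sim.mean() - s_obs) < tolerance:  # compare summaries
            accepted.append(theta)
    return np.array(accepted)

posterior_draws = abc_rejection()
print("approximate posterior mean:", posterior_draws.mean())
```

The rejection sampler wastes most of its simulations when the prior is diffuse or the tolerance is tight, which is precisely the inefficiency that the SMC-based ABC algorithms of Part I are designed to reduce.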

Relevance: 10.00%

Abstract:

This paper considers the role of CCTV (closed-circuit television) in the surveillance, policing and control of public space in urban and rural locations, specifically in relation to the use of public space by young people. The use of CCTV technology in public spaces is now an established and largely uncontested feature of everyday life in a number of countries, and the assertion that cameras are essentially there for the protection of law-abiding and consuming citizens has broadly gone unchallenged. With little or no debate in the U.K. to critique the claims made by the burgeoning security industry that CCTV protects people in the form of a ‘Big Friend’, the state at both central and local levels has endorsed the installation of CCTV apparatus across the nation. Some areas assert in their promotional material that the centre of the shopping and leisure zone is fully surveilled by cameras in order to reassure visitors that their personal safety is a matter of civic concern, with even small towns and villages expending monies on sophisticated and expensive-to-maintain camera systems. It is within this context of monitoring, recording and control procedures that young people’s use of public space is constructed as a threat to social order, in need of surveillance and exclusion, a construction which forms a major contemporary feature in shaping thinking about urban and rural working-class young people in the U.K. As Loader (1996) notes, young people’s claims on public space rarely gain legitimacy if ‘colliding’ with those of local residents; Davis (1990) describes the increasing ‘militarization and destruction of public space’; while Jacobs (1965) asserts that full participation in the ‘daily life of urban streets’ is essential to the development of young people and beneficial for all who live in an area. This paper challenges the uncritical acceptance of the widespread use of CCTV and identifies its oppressive and malevolent potential in forming a ‘surveillance gaze’ over young people (adapting Foucault’s ‘clinical gaze’, c. 1973) which can jeopardise mental health and well-being in coping with the ‘metropolis’ (after Simmel, 1964).

Relevance: 10.00%

Abstract:

All elections are unique, but the Australian federal election of 2010 was unusual for many reasons. It came in the wake of the unprecedented ousting of the Prime Minister who had led the Australian Labor Party to a landslide victory, after eleven years in opposition, at the previous election in 2007. In a move that to many would have been unthinkable, Kevin Rudd’s increasing unpopularity within his own parliamentary party finally took its toll, and in late June he was replaced by his deputy, Julia Gillard. Thus the second unusual feature of the election was that it was contested by Australia’s first female prime minister. The third unusual feature was that the election almost saw a first-term government, with a comfortable majority, defeated. Instead it resulted in a hung parliament, for the first time since 1940, and Labor scraped back into power as a minority government, supported by three independents and the first member of the Australian Greens ever to be elected to the House of Representatives. The Coalition Liberal and National opposition parties themselves had a leader of only eight months’ standing, Tony Abbott, whose ascension to the position had surprised more than a few. This was the context for an investigation of voting behaviour in the 2010 election....

Relevance: 10.00%

Abstract:

Collections of solid particles from the Earth's stratosphere by high-flying aircraft have been reported since 1965, with the initial primary objective of understanding the nature of the aerosol layer that occurs in the lower stratosphere. With the advent of efficient collection procedures and sophisticated electron- and ion-beam techniques, the primary aim of current stratospheric collections has been to study specific particle types that are extraterrestrial in origin and have survived atmospheric entry processes. The collection program provided by NASA at Johnson Space Center (JSC) has conducted many flights over the past 4 years and retrieved a total of 99 collection surfaces (flags) suitable for detailed study. Most of these collections are part of dedicated flights and have occurred during volcanically quiescent periods, although solid particles from the El Chichon eruptions have also been collected. Over 800 individual particles (or representative samples from larger aggregates) have been picked from these flags, examined in a preliminary fashion by SEM and EDS, and cataloged in a manner suitable for selection and study by the wider scientific community.

Relevance: 10.00%

Abstract:

A major factor in the stratospheric collection process is the relative density of particles at the collection altitude. With current aircraft-borne collector plate geometries, one potential extraterrestrial particle of about 10 micron diameter is collected approximately every hour. However, a new design for the collector plate, termed the Large Area Collector (LAC), allows a factor of 10 improvement in collection efficiency over the current conventional geometry. The implementation of the LAC design on future stratospheric collection flights will provide many opportunities for additional data on both terrestrial and extraterrestrial phenomena. With the improvement in collection efficiency, LACs may provide a suitable number of potential extraterrestrial particles in one short flight of between 4 and 8 hours’ duration. Alternatively, total collection periods of approximately 40 hours enhance the probability that rare particles can be retrieved from the stratosphere. This latter approach is of great value to the cosmochemist who may wish to perform sophisticated analyses on interplanetary dust particles of more than a picogram. The former approach, involving short-duration flights, may also provide invaluable data on the source of many extraterrestrial particles. The time dependence of particle entry at the collection altitude is an important parameter which may be correlated with specific global events (e.g., meteoroid streams), provided the collection time is known to an accuracy of 2 hours.
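Taking the figures quoted above at face value (roughly one candidate particle per hour with conventional plates and a factor-of-10 gain for the LAC), the implied yields can be sketched with a back-of-the-envelope calculation; the assumption of a constant collection rate is an illustrative simplification.

```python
# Rates quoted in the abstract: ~1 candidate extraterrestrial particle per
# hour for conventional collector plates, and a factor-of-10 improvement
# for the Large Area Collector (LAC).
conventional_rate = 1.0              # particles per hour
lac_rate = 10.0 * conventional_rate  # particles per hour

for hours in (4, 8, 40):
    print(f"{hours:>2} h exposure: "
          f"conventional ~{conventional_rate * hours:.0f} particles, "
          f"LAC ~{lac_rate * hours:.0f} particles")
```

This makes the trade-off in the abstract explicit: a 4 to 8 hour LAC flight already yields on the order of 40 to 80 candidate particles, while 40-hour exposures trade time resolution for a better chance of catching rare particle types.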

Relevance: 10.00%

Abstract:

The pioneering work of Runge and Kutta a hundred years ago has ultimately led to suites of sophisticated numerical methods suitable for solving complex systems of deterministic ordinary differential equations. However, in many modelling situations the appropriate representation is a stochastic differential equation, and here numerical methods are much less sophisticated. In this paper a very general class of stochastic Runge-Kutta methods is presented, and much more efficient classes of explicit methods than previously extant methods are constructed. In particular, a method of strong order 2 with a deterministic component based on the classical Runge-Kutta method is constructed, and some numerical results are presented to demonstrate the efficacy of this approach.
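For context, the sketch below estimates the strong error of the basic Euler-Maruyama scheme on geometric Brownian motion, for which the exact solution is known. It illustrates the low-order baseline that higher-order stochastic Runge-Kutta methods aim to improve upon; it does not implement the order-2 method constructed in the paper, and the test equation and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Test SDE: geometric Brownian motion dX = mu*X dt + sigma*X dW, X(0) = x0,
# with exact solution X(T) = x0 * exp((mu - 0.5*sigma^2) T + sigma W(T)).
mu, sigma, T, x0 = 1.5, 0.5, 1.0, 1.0

def strong_error(n_steps, n_paths=5000):
    """Mean pathwise error E|X_T^numerical - X_T^exact| for Euler-Maruyama,
    using the same Brownian increments for both solutions."""
    dt = T / n_steps
    err = 0.0
    for _ in range(n_paths):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_steps)
        x = x0
        for k in range(n_steps):
            x += mu * x * dt + sigma * x * dw[k]   # Euler-Maruyama step
        exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dw.sum())
        err += abs(x - exact)
    return err / n_paths

for n in (16, 32, 64, 128):
    print(f"n = {n:4d}, mean strong error = {strong_error(n):.5f}")
```

Because Euler-Maruyama has strong order 0.5, doubling the number of steps only reduces the error by roughly a factor of sqrt(2), which is exactly the inefficiency that higher strong order stochastic Runge-Kutta methods address.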

Relevance: 10.00%

Abstract:

In recent years considerable attention has been paid to the numerical solution of stochastic ordinary differential equations (SODEs), as SODEs are often more appropriate than their deterministic counterparts in many modelling situations. However, unlike the deterministic case, numerical methods for SODEs are considerably less sophisticated, due to the difficulty in representing the (possibly large number of) random variable approximations to the stochastic integrals. Although Burrage and Burrage [High strong order explicit Runge-Kutta methods for stochastic ordinary differential equations, Applied Numerical Mathematics 22 (1996) 81-101] were able to construct strong local order 1.5 stochastic Runge-Kutta methods for certain cases, it is known that all extant stochastic Runge-Kutta methods suffer an order reduction down to strong order 0.5 if there is non-commutativity between the functions associated with the multiple Wiener processes. This order reduction down to that of the Euler-Maruyama method imposes severe difficulties in obtaining meaningful solutions in a reasonable time frame, and this paper attempts to circumvent these difficulties by some new techniques. An additional difficulty in solving SODEs arises even in the linear case, since it is not possible to write the solution analytically in terms of matrix exponentials unless there is a commutativity property between the functions associated with the multiple Wiener processes. Thus in the present paper, first, the work of Magnus [On the exponential solution of differential equations for a linear operator, Communications on Pure and Applied Mathematics 7 (1954) 649-673] (applied to deterministic non-commutative linear problems) will be applied to non-commutative linear SODEs, and methods of strong order 1.5 for arbitrary, linear, non-commutative SODE systems will be constructed, hence giving an accurate approximation to the general linear problem. Secondly, for general nonlinear non-commutative systems with an arbitrary number (d) of Wiener processes, it is shown that strong local order 1 Runge-Kutta methods with d + 1 stages can be constructed by evaluating a set of Lie brackets as well as the standard function evaluations. A method is then constructed which can be efficiently implemented in a parallel environment for this arbitrary number of Wiener processes. Finally some numerical results are presented which illustrate the efficacy of these approaches. (C) 1999 Elsevier Science B.V. All rights reserved.
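The commutativity condition referred to above can be stated explicitly. For an Itô system dX_t = g^0(X_t) dt + sum_{j=1..d} g^j(X_t) dW_t^j with X_t in R^m, the diffusion functions g^1, ..., g^d are said to commute when the following standard condition holds; the notation is illustrative and is not taken from the paper.

```latex
% Commutativity condition for the diffusion functions of
%   dX_t = g^0(X_t)\,dt + \sum_{j=1}^{d} g^j(X_t)\,dW_t^j ,  X_t \in \mathbb{R}^m,
% where g^{i,j} denotes the i-th component of g^j.
\[
  \sum_{k=1}^{m} g^{k,j_1}(x)\,\frac{\partial g^{i,j_2}}{\partial x_k}(x)
  \;=\;
  \sum_{k=1}^{m} g^{k,j_2}(x)\,\frac{\partial g^{i,j_1}}{\partial x_k}(x),
  \qquad i = 1,\dots,m, \quad j_1, j_2 = 1,\dots,d .
\]
```

When this holds, the cross Itô integrals satisfy $I_{(j_1,j_2)} + I_{(j_2,j_1)} = \Delta W^{j_1}\,\Delta W^{j_2}$ for $j_1 \neq j_2$, so strong order 1 can be reached without simulating Lévy areas; when it fails, explicit stochastic Runge-Kutta schemes suffer the order reduction to strong order 0.5 described in the abstract.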