281 results for Authoritarian Regimes
Abstract:
Here, we describe a metal-insulator-insulator nanofocusing structure formed by a high-permittivity dielectric wedge on a metal substrate. The structure is shown to produce nanofocusing of surface plasmon polaritons (SPPs) in the direction opposite to the taper of the wedge, exhibiting a range of nanoplasmonic effects including nanofocusing of SPPs with negative refraction, formation of plasmonic caustics within a nanoscale distance from the wedge tip, mutual transformation of SPP modes, and significant local field enhancement in both the adiabatic and strongly nonadiabatic regimes. A combination of approximate analytical and rigorous numerical approaches is used to analyze the strength and position of the caustics in the structure. In particular, it is demonstrated that strong SPP localization within spatial regions as small as a few tens of nanometers near the caustic is achievable in the considered structures. In contrast to other nanofocusing configurations, efficient nanofocusing is shown to occur in the strongly nonadiabatic regime, with taper angles of the dielectric wedge as large as ∼40° and within uniquely short distances (as small as a few tens of nanometers) from the tip of the wedge. Physical interpretations of the obtained results are also presented and discussed.
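For orientation, the textbook dispersion relation for an SPP on a single flat metal-dielectric interface, n_eff = sqrt(ε_m ε_d / (ε_m + ε_d)), already shows why a high-permittivity dielectric raises the mode's effective index and tightens confinement. The sketch below uses illustrative near-infrared permittivity values that are not taken from the paper:

```python
import numpy as np

def spp_neff(eps_metal, eps_dielectric):
    """Effective index of an SPP on a single flat metal-dielectric
    interface: n_eff = sqrt(eps_m * eps_d / (eps_m + eps_d))."""
    return np.sqrt(eps_metal * eps_dielectric / (eps_metal + eps_dielectric))

# Illustrative values only (not from the paper):
eps_m = -25.0 + 1.4j                      # gold-like metal permittivity
for eps_d, label in [(2.25, "low-index dielectric (n = 1.5)"),
                     (12.0, "high-permittivity wedge material")]:
    n = spp_neff(eps_m, eps_d)
    print(f"{label}: Re(n_eff) = {n.real:.2f}, Im(n_eff) = {n.imag:.4f}")
```

The much larger effective index under the high-permittivity material (here roughly 4.8 versus 1.6) is the single-interface analogue of the effect that steers and focuses the mode in the wedge geometry described above.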
Abstract:
We present an algorithm for multiarmed bandits that achieves almost optimal performance in both stochastic and adversarial regimes without prior knowledge about the nature of the environment. Our algorithm is based on augmenting the EXP3 algorithm with a new control lever in the form of exploration parameters that are tailored individually to each arm. The algorithm simultaneously applies the “old” control lever, the learning rate, to control the regret in the adversarial regime, and the new control lever to detect and exploit gaps between the arm losses. This secures problem-dependent “logarithmic” regret when gaps are present, without compromising the worst-case performance guarantee in the adversarial regime. We show that the algorithm can exploit both the usual expected gaps between the arm losses in the stochastic regime and deterministic gaps between the arm losses in the adversarial regime. The algorithm retains the “logarithmic” regret guarantee in the stochastic regime even when some observations are contaminated by an adversary, as long as, on average, the contamination does not reduce the gap by more than a half. Our results for the stochastic regime are supported by experimental validation.
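The general structure of such an algorithm can be sketched as an EXP3-style update with a second, per-arm lever. The schedules for the learning rate eta and the per-arm exploration xi below are placeholders, not the paper's exact tuning:

```python
import numpy as np

rng = np.random.default_rng(0)

def exp3_with_per_arm_exploration(T, K, loss_fn):
    """Sketch: exponential weights on importance-weighted losses ("old"
    lever: eta) plus per-arm exploration ("new" lever: xi) that shrinks
    fast for arms with large empirical gaps. Placeholder schedules."""
    L = np.zeros(K)                                # importance-weighted cumulative losses
    for t in range(1, T + 1):
        eta = 0.5 * np.sqrt(np.log(K) / (t * K))   # "old" lever: learning rate
        gaps = L / t - (L / t).min()               # empirical per-arm gaps
        xi = np.minimum(0.5 / K,                   # "new" lever: per-arm exploration
                        np.log(t + 1) ** 2 / (t * np.maximum(gaps, 1e-9) ** 2))
        w = np.exp(-eta * (L - L.min()))           # exponential weights
        p = (1 - xi.sum()) * w / w.sum() + xi      # mix weights with exploration
        a = rng.choice(K, p=p)
        L[a] += loss_fn(t, a) / p[a]               # unbiased loss estimate update
    return np.argmin(L)

# Usage: a stochastic regime with one good arm (illustration only).
means = [0.3, 0.5, 0.5, 0.5]
best = exp3_with_per_arm_exploration(5000, 4, lambda t, a: float(rng.random() < means[a]))
print("arm identified as best:", best)   # typically 0
```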
Abstract:
This article is based on a historical-comparative policy and discourse analysis of the principles underpinning the Australian disability income support system. It finds that these principles rely on a conception of disability that sustains a system of coercion and paternalism, referred to as disablism, which perpetuates disability. The article examines the construction of disability in Australian income support across four major historical epochs spanning the period 1908-2007. Contextualisation of the policy trajectory and discourses of the contemporary disability pension regime from 2008 to the present is also provided. Two major themes were found to have interacted with the ideology of disablism. The article argues that a non-disabling provision based on social citizenship, rather than responsible or productive citizenship, counters the tendency toward authoritarian and paternal approaches. [Abridged]
Abstract:
Australian Media Law details and explains the complex case law, legislation and regulations governing media practice in areas as diverse as journalism, advertising, multimedia and broadcasting. It examines, in a clear and accessible format, the issues affecting traditional forms of media such as television, radio, film and newspapers, as well as more recent forms such as the internet, online forums and digital technology. New additions to the fifth edition include:
- the implications of new anti-terrorism legislation for journalists;
- developments in privacy law, including Law Reform recommendations for a statutory cause of action to protect personal privacy in Australia and the expanding privacy jurisprudence in the United Kingdom and New Zealand;
- liability of internet search engines and service providers for defamation;
- the High Court decision in Roadshow v iiNet and the position of internet service providers in relation to copyright infringement via their services;
- new suppression order regimes;
- statutory reforms providing journalists with a rebuttable presumption of non-disclosure when called upon to reveal their sources in a court of law;
- recent developments regarding whether journalists can use electronic devices to collect and disseminate information about court proceedings;
- contempt committed by jurors via social media; and
- an examination of recent decisions on defamation, confidentiality, vilification, copyright and contempt.
Abstract:
Child-centeredness runs a familiar route in educational narratives. From Rousseau to Pestalozzi to Froebel to present-day systems of childcare and schooling, child-centeredness is thought to have shifted the treatment of children into closer harmony with their true nature, and hence into more sensitive and civilized forms of rearing. The celebratory air surrounding its deployment in education has been pervasive and difficult to contest, partly because of the emotive alliances that have been drawn between child-centeredness and progressivism. That is, child-centeredness has been positioned as superseding a harsh, medieval ignorance of children while preventing present-day authoritarian strategies of domination. Child-centeredness is thus presently constituted as a soft space, a deeply sensitive middle ground, between ignoring children and dominating them completely.
Abstract:
Data generated via user activity on social media platforms is routinely used for research across a wide range of social sciences and humanities disciplines. The availability of data through the Twitter APIs in particular has afforded new modes of research, including in media and communication studies; however, there are practical and political issues with gaining access to such data, and with the consequences of how that access is controlled. In their paper ‘Easy Data, Hard Data’, Burgess and Bruns (2015) discuss both the practical and political aspects of Twitter data as they relate to academic research, describing how communication research has been enabled, shaped and constrained by Twitter’s “regimes of access” to data, the politics of data use, and emerging economies of data exchange. This conceptual model, including the ‘easy data, hard data’ formulation, can also be applied to Sina Weibo. In this paper, we build on this model to explore the practical and political challenges and opportunities associated with the ‘regimes of access’ to Weibo data, and their consequences for digital media and communication studies. We argue that in the Chinese context, the politics of data access can be even more complicated than in the case of Twitter, which makes scientific research relying on large-scale social data from this platform more challenging in some ways, but potentially richer and more rewarding in others.
Abstract:
We consider estimating the total load from frequent flow data but less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and renders the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One data set, from the Burdekin River, consists of total suspended sediment (TSS), nitrogen oxides (NOx) and gauged flow for 1997. The other, from the Tully River, covers the period July 2000 to June 2008. For NOx in the Burdekin, the new estimates are very similar to the ratio estimates even when there is no relationship between concentration and flow. For the Tully data set, however, incorporating the additional predictive variables, namely the discounted flow and the flow phase (rising or receding), substantially improved the model fit, and thus the certainty with which the load is estimated.
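A rating-curve model of this general shape can be sketched as a log-linear regression with extra hydrograph predictors. The exponentially weighted mean used below as a "discounted flow" proxy, the rise/fall indicator, and the column names are all schematic stand-ins, not the authors' exact specification:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def features(df):
    """Predictors for log-concentration (schematic)."""
    X = pd.DataFrame(index=df.index)
    X["logQ"] = np.log(df["Q"])                                 # classic rating-curve term
    X["rising"] = (df["Q"].diff().fillna(0) > 0).astype(float)  # hydrograph phase
    X["logdiscQ"] = np.log(df["Q"].ewm(alpha=0.05).mean())      # exhaustion proxy
    return sm.add_constant(X)

def estimate_load(flow, samples, dt_seconds=600):
    """flow: DataFrame with column 'Q' on a regular 10-minute grid;
       samples: DataFrame with columns 'Q' and 'C' at the sampling times."""
    fit = sm.OLS(np.log(samples["C"]), features(samples)).fit()
    # back-transform with a log-normal bias correction (sigma^2 / 2)
    C_hat = np.exp(fit.predict(features(flow)) + fit.mse_resid / 2)
    return float((C_hat * flow["Q"] * dt_seconds).sum())        # load = sum C*Q*dt
```

A standard error for the load would additionally propagate the regression uncertainty (and any flow measurement error) through this sum, as the paper describes.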
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and renders the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps:
- (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
- (ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates were not collected;
- (iii) establish a predictive model for the concentration data, which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
- (iv) sum the products of the predicted flow and the predicted concentration over the regular time intervals to obtain an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in the model errors resulting from intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx), together with gauged flow data, from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2 to 10 times, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when the sampling bias is taken into account.
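The four steps can be strung together as a pipeline. The sketch below uses deliberately simple stand-ins: time interpolation instead of the peak-preserving time series model of step (i), and a plain log-log fit instead of the generalized regression model of step (iii):

```python
import numpy as np
import pandas as pd

def four_step_load(flow_obs, conc_obs, freq="10min"):
    """flow_obs, conc_obs: Series with a DatetimeIndex (flow and
    concentration observations). Schematic, not the paper's models."""
    # (i) flow rates on a regular grid (e.g. 10 minutes)
    grid = pd.date_range(flow_obs.index.min(), flow_obs.index.max(), freq=freq)
    Q = flow_obs.reindex(flow_obs.index.union(grid)).interpolate("time").loc[grid]
    # (ii) flow rates at the concentration sampling times
    Q_s = np.interp(conc_obs.index.astype("int64"),
                    flow_obs.index.astype("int64"), flow_obs.values)
    # (iii) predictive model for concentration, applied on the grid
    slope, intercept = np.polyfit(np.log(Q_s), np.log(conc_obs.values), 1)
    C = np.exp(intercept + slope * np.log(Q))
    # (iv) load = sum of predicted flow x predicted concentration x dt
    dt = pd.Timedelta(freq).total_seconds()
    return float((Q * C * dt).sum())
```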
Abstract:
This paper argues that Michel Foucault’s lectures collected in The Birth of Biopolitics owe a considerable debt to the thought of Max Weber, particularly in their analysis of how different socio-legal regimes shape distinctive national forms of capitalist economies, and of the role played by social and economic institutions in shaping individual identities. This contrasts with a common interpretation of Foucault’s account of neoliberalism, which synthesizes his work into neo-Marxist notions of hegemony and capitalist domination. The paper also identifies Foucault’s approach to neoliberalism as an exploratory one, which considers how a particular relationship between ideas and institutional practices may help in imagining socialist forms of government practice.
Abstract:
This thesis examines an alternative approach, the unitary taxation approach to the allocation of profit, which arises from the notion that, as a multinational group exists as a single economic entity, it should be taxed as one taxable unit. The plausibility of a unitary taxation regime achieving international acceptance and agreement is highly contestable owing to implementation issues and questions of economic and political feasibility. Using a case-study approach focusing on Freeport-McMoRan’s and Rio Tinto’s mining operations in Indonesia, the thesis compares both tax regimes against the criteria for a good tax system: equity, efficiency, neutrality and simplicity. It evaluates key issues that arise when implementing a unitary taxation approach with formulary apportionment in the context of mining multinationals in Indonesia.
Abstract:
The nature of collective perception of prostitution is understudied in Canada. Apart from some rudimentary reports on the percentages supporting the key legal options, multivariate analysis has never been used to analyze the details of public opinion on prostitution. The current study explores the trend in public attitudes toward the acceptability of prostitution in Canada over a 25-year span and examines the social determinants of that acceptability using structural equation modeling (SEM), which allows researchers to estimate both direct and indirect effects (through mediating variables) on the outcome variable. Results show that the public has become more accepting of prostitution over time. In addition, the less religious, less authoritarian and more educated are more accepting of prostitution than the more religious, more authoritarian and less well educated. The effects of religiosity and authoritarianism mediate out the direct effects of age, gender, gender equality, marriage, viewing marriage as an outdated institution, Quebec residence, race and tolerance. The findings may serve as a reference point for law reform regarding the regulation of prostitution in Canada.
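A mediation structure of this kind can be written in lavaan-style syntax. The sketch below uses the third-party semopy package with hypothetical variable names and a placeholder data file; the study's actual model is richer:

```python
import pandas as pd
from semopy import Model   # pip install semopy

# Hypothetical variable names standing in for the survey measures.
desc = """
religiosity ~ age + gender + education
authoritarianism ~ age + gender + education
acceptability ~ religiosity + authoritarianism + age + gender + education
"""

df = pd.read_csv("survey_canada.csv")   # placeholder file name
m = Model(desc)
m.fit(df)
print(m.inspect())   # path estimates; an indirect effect is the product of
                     # the relevant a-path and b-path coefficients
```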
Abstract:
The electricity industries of New Zealand (NZ) and the Australian state of Queensland have undergone substantial structural and regulatory reform with the common intent of improving economic efficiency. Deregulation and privatisation have been key elements of the reform but have been approached differently by each jurisdiction. This study traces the link between structural and regulatory regimes and asset valuation, profits and, ultimately, pricing. The study finds that the key drivers of recent price increases are the government-owned generation and retail sector in NZ and the government-owned distribution sector in Queensland. It concludes that, contrary to the rationale for imposing regulatory controls in a non-market environment, the regulatory regimes appear to have contributed to higher rather than lower pricing structures.
Abstract:
Mitigating and adapting to the effects of climate change will require innovation and the development of new technologies. Intellectual property laws have a key part to play in the global transfer of climate technologies. However, failures to properly utilize flexibilities in intellectual property regimes or to comply with technology transfer obligations under international climate change agreements call for a human-rights-based analysis of climate technology transfer. Climate change is an unprecedented challenge and requires unprecedented strategies. Given the substantial impact of climate change on all of humanity and the ethical imperative to act, a complete rethink of traditional intellectual property approaches is warranted. This report proposes a series of intellectual property law policy options, framed through a human rights lens, aimed at promoting access to technologies that reduce the human suffering caused by climate change.
Abstract:
Mathematical models describing the movement of multiple interacting subpopulations are relevant to many biological and ecological processes. Standard mean-field partial differential equation descriptions of these processes suffer from the limitation that they implicitly neglect spatial correlations and clustering. To overcome this, we derive a moment dynamics description of a discrete stochastic process describing the spreading of distinct interacting subpopulations. In particular, we motivate our model by mimicking the geometry of two typical cell biology experiments. Comparing the performance of the moment dynamics model with a traditional mean-field model confirms that the moment dynamics approach always outperforms the mean-field approach. To provide more general insight, we summarise the performance of the moment dynamics model and the traditional mean-field model over a wide range of parameter regimes. These results help distinguish situations where spatial correlation effects are sufficiently strong that a moment dynamics model is required from those where they are sufficiently weak that a traditional mean-field model is adequate.
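To see why neglecting spatial correlations matters, consider a toy one-dimensional lattice birth process (not the paper's model): offspring are placed only on nearest-neighbour sites, so they cluster around parents, attempted births often land on already occupied sites, and the observed growth lags the mean-field logistic prediction:

```python
import numpy as np

rng = np.random.default_rng(1)

def lattice_density(L=200, p0=0.05, prolif=0.05, steps=200, reps=20):
    """Average occupancy over time for a 1D lattice birth process in
    which each agent proliferates into a random nearest-neighbour site."""
    dens = np.zeros(steps)
    for _ in range(reps):
        occ = rng.random(L) < p0
        for t in range(steps):
            dens[t] += occ.mean()
            for i in np.flatnonzero(occ):
                if rng.random() < prolif:
                    occ[(i + rng.choice((-1, 1))) % L] = True  # birth attempt
    return dens / reps

# Mean-field counterpart: dC/dt = prolif * C * (1 - C), solved exactly.
t = np.arange(200)
C0, lam = 0.05, 0.05
mean_field = C0 / (C0 + (1 - C0) * np.exp(-lam * t))
print(lattice_density()[-1], mean_field[-1])  # lattice density << mean-field
```

The mean-field curve saturates near full occupancy while the clustered lattice population is still spreading from cluster boundaries; moment dynamics models are designed to capture exactly this kind of correlation effect.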
Abstract:
BACKGROUND: Hydrogel-based cell cultures are excellent tools for studying physiological events occurring in the growth and proliferation of cells, including cancer cells. Diffusion magnetic resonance is a physical technique that has been widely used for the characterisation of biological systems as well as hydrogels. In this work, we applied diffusion magnetic resonance imaging (MRI) to hydrogel-based cultures of human ovarian cancer cells.
METHODS: Diffusion-weighted spin-echo MRI measurements were used to obtain spatially resolved maps of apparent diffusivities for hydrogel samples with different compositions, cell loads and drug (Taxol) treatment regimes. The samples were then characterised using their diffusivity histograms, mean diffusivities and the respective standard deviations, and pairwise Mann-Whitney tests. The elastic moduli of the samples were determined using mechanical compression testing.
RESULTS: The mean apparent diffusivity of the hydrogels was sensitive to the polymer content, cell load and Taxol treatment. For a given sample composition, the mean apparent diffusivity and the elastic modulus of the hydrogels exhibited a negative correlation.
CONCLUSIONS: Diffusivity of hydrogel-based cancer cell culture constructs is sensitive to both cell proliferation and Taxol treatment. This suggests that diffusion-weighted imaging is a promising technique for non-invasive monitoring of cancer cell proliferation in hydrogel-based, cellularly sparse 3D cell cultures. The negative correlation between mean apparent diffusivity and elastic modulus suggests that the diffusion coefficient is indicative of the average density of the physical microenvironment within the hydrogel construct.
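Apparent diffusivity maps are conventionally obtained from the monoexponential decay S(b) = S0 · exp(−b · ADC) via a per-voxel log-linear fit. A minimal sketch, with illustrative b-values that are not taken from the paper:

```python
import numpy as np

def adc_map(signals, bvals):
    """signals: (n_b, ny, nx) stack of diffusion-weighted images;
       bvals: (n_b,) diffusion weightings in s/mm^2.
       Returns a voxel-wise apparent diffusion coefficient map (mm^2/s)."""
    logS = np.log(np.clip(signals, 1e-12, None)).reshape(len(bvals), -1)
    A = np.column_stack([np.ones_like(bvals), -bvals])  # log S = log S0 - b*ADC
    coef, *_ = np.linalg.lstsq(A, logS, rcond=None)     # per-voxel linear fit
    return coef[1].reshape(signals.shape[1:])

# Synthetic check: a uniform phantom with ADC = 2.0e-3 mm^2/s
b = np.array([0.0, 200.0, 500.0, 800.0])
S = np.exp(-b[:, None, None] * 2.0e-3) * np.ones((len(b), 8, 8))
print(adc_map(S, b).mean())   # ~2.0e-3
```

Histograms and means of such a map, compared across sample groups (e.g. with Mann-Whitney tests), give exactly the kind of summary statistics the abstract describes.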