950 results for Order disorder transitions
Abstract:
The decision of Applegarth J in Heartwood Architectural & Joinery Pty Ltd v Redchip Lawyers [2009] QSC 195 (27 July 2009) involved a costs order against solicitors personally. This decision is but one of several recent decisions in which the court has been persuaded that the circumstances justified costs orders against legal practitioners on the indemnity basis. These decisions serve as a reminder to practitioners of their disclosure obligations when seeking any interlocutory relief in an ex parte application. These obligations are now clearly set out in r 14.4 of the Legal Profession (Solicitors) Rule 2007 and r 25 of the 2007 Barristers Rule. Inexperience or ignorance will not excuse breaches of the duties owed to the court.
Abstract:
Purpose: To use a large wavefront database of a clinical population to investigate relationships between refractions and higher-order aberrations, and between aberrations of right and left eyes. Methods: Third- and fourth-order aberration coefficients and higher-order root-mean-squared aberrations (HO RMS), scaled to a pupil size of 4.5 mm diameter, were analysed in a population of about 24,000 patients from Carl Zeiss Vision's European wavefront database. Correlations were determined between the aberrations and the variables of refraction, near addition and cylinder. Results: Most aberration coefficients were significantly dependent upon these variables, but the proportions of aberrations that could be explained by these factors were less than 2%, except for spherical aberration (12%), horizontal coma (9%) and HO RMS (7%). Near addition was the major contributor for horizontal coma (8.5% out of 9.5%) and spherical equivalent was the major contributor for spherical aberration (7.7% out of 11.6%). Interocular correlations were highly significant for all aberration coefficients, varying between 0.16 and 0.81. Anisometropia was a variable of significance for three aberrations (vertical coma, secondary astigmatism and tetrafoil), but little importance can be placed on this because of the small proportions of aberrations that can be explained by refraction (all less than 1.0%). Conclusions: Most third- and fourth-order aberration coefficients were significantly dependent upon spherical equivalent, near addition and cylinder, but only horizontal coma (9%) and spherical aberration (12%) showed dependencies of greater than 2%. Interocular correlations were highly significant for all aberration coefficients, but anisometropia had little influence on aberration coefficients.
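As an illustration of the interocular analysis described above, the sketch below computes a Pearson correlation between paired right- and left-eye aberration coefficients. The data and function names are hypothetical examples, not values from the study.

```python
import statistics

def pearson(xs, ys):
    # Pearson correlation coefficient between paired samples
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical right/left-eye spherical aberration coefficients (micrometres)
right = [0.10, 0.05, 0.12, 0.08, 0.15, 0.03]
left = [0.09, 0.06, 0.11, 0.07, 0.13, 0.05]
print(round(pearson(right, left), 3))
```

A value near 1 would indicate strongly mirrored aberrations between the eyes, consistent with the interocular correlations of 0.16 to 0.81 reported above.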
Abstract:
This article uses critical discourse analysis to analyse material shifts in the political economy of communications. It examines texts of major corporations to describe four key changes in political economy: (1) the separation of ownership from control; (2) the separation of business from industry; (3) the separation of accountability from responsibility; and (4) the subjugation of ‘going concerns’ by overriding concerns. The authors argue that this amounts to a political economic shift from traditional concepts of ‘capitalism’ to a new ‘corporatism’ in which the relationships between public and private, state and individual interests have become redefined and obscured through new discourse strategies. They conclude that the present financial and regulatory ‘crisis’ cannot be adequately resolved without a new analytic framework for examining the relationships between corporation, discourse and political economy.
Abstract:
Rayleigh–Stokes problems have received much attention in recent years due to their importance in physics. In this article, we focus on the variable-order Rayleigh–Stokes problem for a heated generalized second grade fluid with a fractional derivative. Implicit and explicit numerical methods are developed to solve the problem. The convergence and stability of the numerical methods, and the solvability of the implicit numerical method, are discussed via Fourier analysis. Moreover, a numerical example is given, and the results support the effectiveness of the theoretical analysis.
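The abstract does not give the discretization, but explicit schemes for variable-order fractional problems of this kind commonly rest on Grünwald–Letnikov weights. The sketch below is a minimal, assumed illustration (not the paper's scheme): it approximates a variable-order fractional derivative on a uniform grid, re-evaluating the order at each time level, and checks a constant-order case against the known closed form.

```python
import math

def gl_weights(alpha, n):
    # Grunwald-Letnikov weights (-1)^k * C(alpha, k), built by recurrence
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def frac_deriv(f, alpha_of_t, T=1.0, N=400):
    # Variable-order fractional derivative of f on [0, T]; the order is
    # re-evaluated at each time level (conventions for variable-order
    # operators differ, and this is only one of them).
    h = T / N
    t = [i * h for i in range(N + 1)]
    out = []
    for n in range(N + 1):
        a = alpha_of_t(t[n])
        w = gl_weights(a, n)
        out.append(sum(w[k] * f(t[n] - k * h) for k in range(n + 1)) / h ** a)
    return t, out

# Constant-order sanity check: D^0.5 of f(t) = t equals t^0.5 / Gamma(1.5)
t, d = frac_deriv(lambda s: s, lambda s: 0.5)
exact = t[-1] ** 0.5 / math.gamma(1.5)
print(abs(d[-1] - exact) < 1e-2)
```

The first-order accuracy of this approximation is what makes the Fourier-based stability and convergence analysis mentioned in the abstract tractable for such schemes.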
Abstract:
Fractional reaction–subdiffusion equations have been widely used in recent years to simulate physical phenomena. In this paper, we consider a variable-order nonlinear reaction–subdiffusion equation. A numerical approximation method is proposed to solve the equation, and its convergence and stability are analyzed by Fourier analysis. By means of a technique for improving temporal accuracy, we also propose an improved numerical approximation. Finally, the effectiveness of the theoretical results is demonstrated by numerical examples.
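The abstract mentions a technique for improving temporal accuracy without naming it. One standard such device is Richardson extrapolation, sketched here on a toy first-order time stepper; this is an illustrative assumption, not necessarily the authors' method.

```python
import math

def solve(h, T=1.0):
    # Hypothetical first-order-in-time solver: forward Euler for u' = -u, u(0) = 1
    u = 1.0
    for _ in range(round(T / h)):
        u += h * (-u)
    return u

u_h = solve(0.01)
u_h2 = solve(0.005)
# For a first-order method, 2*u(h/2) - u(h) cancels the leading O(h) error term
extrap = 2 * u_h2 - u_h
exact = math.exp(-1.0)
print(abs(u_h - exact) > abs(extrap - exact))
```

Combining solutions at step sizes h and h/2 in this way raises the temporal order from one to two without changing the underlying scheme.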
Abstract:
Given global demand for new infrastructure, governments face substantial challenges in funding new infrastructure and delivering Value for Money (VfM). As part of the background to this challenge, a critique is given of current practice in the selection of the approach to procure major public sector infrastructure in Australia, which is akin to the Multi-Attribute Utility Approach (MAUA). To contribute towards addressing the key weaknesses of MAUA, a new first-order procurement decision-making model is presented. The model addresses the make-or-buy decision (risk allocation), the bundling decision (property rights incentives) and the exchange relationship decision (relational to arms-length exchange) in its novel approach to articulating a procurement strategy designed to yield superior VfM across the whole life of the asset. The aim of this paper is to report on the development of this decision-making model in terms of the procedural tasks to be followed and the method being used to test the model. The planned approach to testing the model uses a sample of 87 Australian major infrastructure projects with a combined value of AUD32 billion, and deploys expressions of interest, an indicator of competition, as a key proxy for VfM.
Abstract:
Curriculum developers and researchers have promoted context-based programmes to arrest waning student interest and participation in the enabling sciences at high school and university. Context-based programmes aim for student connections between scientific discourse and real-world contexts to elevate curricular relevance without diminishing conceptual understanding. This interpretive study explored the learning transactions in one 11th grade context-based chemistry classroom where the context was the local creek. The dialectic of agency/structure was used as a lens to examine how the practices in classroom interactions afforded students the agency for learning. The results suggest that first, fluid transitions were evident in the student–student interactions involving successful students; and second, fluid transitions linking concepts to context were evident in the students’ successful reports. The study reveals that the structures of writing and collaborating in groups enabled students’ agential and fluent movement between the field of the real-world creek and the field of the formal chemistry classroom. Furthermore, characteristics of academically successful students in context-based chemistry are highlighted. Research, teaching, and future directions for context-based science teaching are discussed.
Abstract:
Higher-order thinking has featured persistently in the reform agenda for science education. The intended curriculum in various countries sets out aspirational statements for the levels of higher-order thinking to be attained by students. This study reports the extent to which chemistry examinations from four Australian states align and facilitate the intended higher-order thinking skills stipulated in curriculum documents. Through content analysis, the curriculum goals were identified for each state and compared to the nature of question items in the corresponding examinations. Categories of higher-order thinking were adapted from the OECD’s PISA Science test to analyze question items. There was considerable variation in the extent to which the examinations from the states supported the curriculum intent of developing and assessing higher-order thinking. Generally, examinations that used a marks-based system tended to emphasize lower-order thinking, with a greater distribution of marks allocated for lower-order thinking questions. Examinations associated with a criterion-referenced examination tended to award greater credit for higher-order thinking questions. The level of complexity of chemistry was another factor that limited the extent to which examination questions supported higher-order thinking. Implications from these findings are drawn for the authorities responsible for designing curriculum and assessment procedures and for teachers.
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed with deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions.
Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training on both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, and this is an essential requirement for non-invertibility. The method is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
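The pipeline described above (feature extraction, key-dependent randomization, quantization, binary encoding) can be sketched minimally as follows. The projection-based randomization and zero thresholds are illustrative simplifications; as the text notes, a purely linear randomization like this one is exactly what limits security in practice, and real schemes use trained thresholds.

```python
import random

def robust_hash(features, n_bits=16, seed=42):
    # Toy pipeline: key-dependent randomization -> quantization -> binary encoding.
    # The fixed seed plays the role of the secret key; the threshold is simply
    # zero rather than learnt, purely for illustration.
    rng = random.Random(seed)
    bits = []
    for _ in range(n_bits):
        proj = [rng.gauss(0.0, 1.0) for _ in features]  # random projection
        v = sum(p * f for p, f in zip(proj, features))
        bits.append(1 if v >= 0.0 else 0)               # quantize about 0
    return bits

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

feat = [0.8, 0.1, -0.3, 0.5]      # hypothetical extracted feature vector
near = [0.79, 0.12, -0.28, 0.5]   # slightly perturbed version of the input
print(hamming(robust_hash(feat), robust_hash(near)))
```

With the same key the hash is deterministic, and small feature perturbations should flip few if any bits; a cryptographic hash would instead change roughly half its bits under any perturbation.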
Abstract:
Examining the evolution of British and Australian policing, this comparative review of the literature considers the historical underpinnings of policing in these two countries and the impact of community legitimacy derived from the early concepts of policing by consent. Using the August 2011 disorder in Britain as a lens, this paper considers whether, in striving to maintain community confidence, undue emphasis is placed on the police's public image at the expense of community safety. Examining the path of policing reform, the impact of bureaucracy on policing and the evolving debate surrounding police performance, this review suggests that, while largely delivering on the ideal of an ethical and strong police force, a preoccupation with self-image may in fact result in tarnishing the very thing British and Australian police forces strive to achieve – their standing with the public. This paper advocates for a more realistic goal of gaining public respect rather than affection in order to achieve the difficult balance between maintaining trust and respect as an approachable, ethical entity providing firm, confident policing in this ever-evolving, modern society.
Abstract:
Internationally, vocational education and training (VET) is challenged by increasing skills shortages in certain industries and rapidly changing skill requirements. Rigid and centralised state bureaucracies have proven inadequate to adapt to these challenges. Increasingly, partnerships between schools and industry have been established as a potential strategy to address local labour market demand and to provide school-to-work transition programs. Drawing on experiences in Australia, this paper reports on a case study of government-led partnerships between schools and industry. The Queensland Gateway schools initiative currently involves over 120 schools. The study aimed to understand how partnerships were constructed in this initiative. Selected partnerships were analysed in terms of the following principles of public-private partnerships: efficiency, effectiveness, sustainability, and beneficiaries. Although there are some benefits of partnership activities reported by both school and industry stakeholders, little evidence was found that the above underlying principles had been addressed to a significant extent in the Gateway school initiative. Further, these partnerships are often tenuously facilitated by individuals who have limited infrastructure or strategic support. Implications are that project stakeholders have not sufficiently accommodated theoretical perspectives on implementation and management of partnerships. Similar initiatives may be improved if stakeholders are cognisant of the underlying principles supporting successful public-private partnerships.
Abstract:
Recently in Australia, another media skirmish has erupted over the problem we currently call “Attention Deficit Hyperactivity Disorder”. This particular event was precipitated by the comments of a respected District Court judge. His claim that doctors are creating a generation of violent juvenile offenders by prescribing Ritalin to young children created a great deal of excitement, attracting the attention of election-conscious politicians who appear blissfully unaware of the role played by educational policy in creating and maintaining the problem. Given the short (election-driven) attention span of government policymakers, I bypass government to question what those at the front line can do to circumvent the questionable practice of diagnosing and medicating young children for difficulties they experience in schools and with learning.
Abstract:
Migraine is a common neurovascular brain disorder characterised by recurrent attacks of severe headache that may be accompanied by various neurological symptoms. Migraine is thought to result from activation of the trigeminovascular system followed by vasodilation of pain-producing intracranial blood vessels and activation of second-order sensory neurons in the trigeminal nucleus caudalis. Calcitonin gene-related peptide (CGRP) is a mediator of neurogenic inflammation and the most powerful vasodilating neuropeptide, and has been implicated in migraine pathophysiology. Consequently, genes involved in CGRP synthesis or CGRP receptor genes may play a role in migraine and/or increase susceptibility. This study investigates whether variants in the gene that encodes CGRP, calcitonin-related polypeptide alpha (CALCA), or in the gene that encodes a component of its receptor, receptor activity modifying protein 1 (RAMP1), are associated with migraine pathogenesis and susceptibility. The single nucleotide polymorphisms (SNPs) rs3781719 and rs145837941 in the CALCA gene, and rs3754701 and rs7590387 at the RAMP1 locus, were analysed in an Australian Caucasian population of migraineurs and matched controls. Although we found no significant association of any of the SNPs tested with migraine overall, we detected a nominally significant association (p = 0.031) of the RAMP1 rs3754701 variant in male migraine subjects, although this is non-significant after Bonferroni correction for multiple testing.
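The abstract does not state which statistical test was used; a standard allelic chi-square test on a 2x2 table, with a Bonferroni-adjusted threshold for the four SNPs, can be sketched as follows. The allele counts are hypothetical, not the study's data.

```python
import math

def chi2_2x2(a, b, c, d):
    # Chi-square statistic for a 2x2 table of allele counts:
    # rows = cases/controls, columns = minor/major allele
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def p_value_df1(x):
    # Exact survival function of the chi-square distribution with 1 df
    return math.erfc(math.sqrt(x / 2.0))

# Hypothetical allele counts: 120/280 (minor/major) in cases, 90/310 in controls
stat = chi2_2x2(120, 280, 90, 310)
p = p_value_df1(stat)
bonferroni_alpha = 0.05 / 4  # four SNPs tested
print(round(p, 4), p < 0.05, p < bonferroni_alpha)
```

With these invented counts the association is nominally significant at 0.05 but fails the Bonferroni-adjusted threshold, the same pattern reported for rs3754701 above.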
Abstract:
Focal segmental glomerulosclerosis (FSGS) is the consequence of a disease process that attacks the kidney's filtering system, causing serious scarring. More than half of FSGS patients develop chronic kidney failure within 10 years, ultimately requiring dialysis or renal transplantation. There are currently several genes known to cause the hereditary forms of FSGS (ACTN4, TRPC6, CD2AP, INF2, MYO1E and NPHS2). This study involves a large, unique, multigenerational Australian pedigree in which FSGS co-segregates with progressive heart block with apparent X-linked recessive inheritance. Through a classical combined approach of linkage and haplotype analysis, we identified a 21.19 cM interval implicated on the X chromosome. We then used a whole exome sequencing approach to identify two mutated genes, NXF5 and ALG13, which are located within this linkage interval. The two mutations NXF5-R113W and ALG13-T141L segregated perfectly with the disease phenotype in the pedigree and were not found in a large healthy control cohort. Analysis using bioinformatics tools predicted the R113W mutation in the NXF5 gene to be deleterious and cellular studies support a role in the stability and localization of the protein suggesting a causative role of this mutation in these co-morbid disorders. Further studies are now required to determine the functional consequence of these novel mutations to development of FSGS and heart block in this pedigree and to determine whether these mutations have implications for more common forms of these diseases in the general population.
Abstract:
Many primary immunodeficiency disorders of differing etiologies have been well characterized, and much understanding of immunological processes has been gained by investigating the mechanisms of disease. Here, we have used a whole-genome approach, employing single-nucleotide polymorphism and gene expression microarrays, to provide insight into the molecular etiology of a novel immunodeficiency disorder. Using DNA copy number profiling, we define a hyperploid region on 14q11.2 in the immunodeficiency case associated with the interleukin (IL)-25 locus. This alteration was associated with significantly heightened expression of IL25 following T-cell activation. An associated dominant type 2 helper T cell bias in the immunodeficiency case provides a mechanistic explanation for recurrence of infections by pathogens met by Th1-driven responses. Furthermore, this highlights the capacity of IL25 to alter normal human immune responses.