Abstract:
This article examines the law in Australia and New Zealand that governs the withholding and withdrawal of ‘futile’ life-sustaining treatment. Although doctors have both civil and criminal law duties to treat patients, those general duties do not require the provision of treatment that is deemed to be futile. This is either because futile treatment is not in a patient’s best interests or because stopping such treatment does not breach the criminal law. This means that, in the absence of a duty to treat, doctors may unilaterally withdraw or withhold treatment that is futile; consent is not required. The article then examines whether this general position has been altered by statute. It considers a range of potentially relevant legislation but concludes it is likely that only Queensland’s adult guardianship legislation imposes a requirement to obtain consent to withhold or withdraw such treatment.
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which gives centimetre-precision positioning results if all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can result in positioning offsets of up to several metres without notice. Hence, ambiguity validation is essential to control the quality of ambiguity resolution. Currently, the most popular ambiguity validation method is the ratio test, whose criterion is often determined empirically. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and the user requirement. Undetected incorrect integers lead to hazardous results and should be strictly controlled; in ambiguity resolution, this missed-detection rate is often known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied to the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory, which is theoretically rigorous. In this approach, a criteria table for the ratio test is computed from extensive data simulations, and real-time users can determine the ratio test criterion by looking up the table. The method has previously been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not yet been discussed. In this paper, a general ambiguity validation model is derived based on hypothesis testing theory, the fixed failure rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined. Finally, the factors that influence the ratio test threshold under the fixed failure rate approach are discussed on the basis of extensive data simulation. The results show that the fixed failure rate approach is a more reasonable ambiguity validation method when a proper stochastic model is used.
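To make the validation step described above concrete, the following minimal sketch shows the basic form of the ratio test: the fixed integer solution is accepted only if the second-best candidate fits the float solution sufficiently worse than the best one, with the acceptance threshold assumed to come from a pre-computed fixed-failure-rate criteria table. The function name, numbers and candidate list are illustrative assumptions, not values from the paper.

import numpy as np

def ratio_test(a_float, Q_a, candidates, threshold):
    # a_float    : float ambiguity estimates (n,)
    # Q_a        : their variance-covariance matrix (n, n)
    # candidates : integer candidate vectors, e.g. from an ILS search
    # threshold  : critical value, assumed to be looked up from a
    #              pre-computed fixed-failure-rate criteria table
    Q_inv = np.linalg.inv(Q_a)

    def sq_norm(a_int):
        # Weighted squared distance of a candidate to the float solution.
        d = a_float - np.asarray(a_int, dtype=float)
        return float(d @ Q_inv @ d)

    ranked = sorted(candidates, key=sq_norm)
    best_norm, second_norm = sq_norm(ranked[0]), sq_norm(ranked[1])

    # Accept the fixed solution only if the second-best candidate is
    # sufficiently worse than the best one.
    ratio = second_norm / max(best_norm, 1e-12)
    return ratio >= threshold, np.asarray(ranked[0])

# Toy example with two ambiguities and two integer candidates.
a_hat = np.array([1.1, -2.9])
Q_hat = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
accepted, a_fixed = ratio_test(a_hat, Q_hat, [[1, -3], [1, -2]], threshold=3.0)

Under the fixed failure rate approach the threshold is not a constant such as 3.0 but is chosen, per model and per epoch, so that the probability of accepting an incorrect fix stays below the user-specified failure rate.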
Abstract:
The design-build (DB) delivery system is an effective means of delivering a green construction project, and selecting an appropriate contractor is critical to project success. Moreover, the delivery of green buildings requires specific design, construction and operation and maintenance considerations not generally encountered in the procurement of conventional buildings. Specifying clear sustainability requirements to potential contractors is particularly important in achieving sustainable project goals. However, many client/owners either do not explicitly specify sustainability requirements or do so in a prescriptive manner during the project procurement process. This paper investigates the current state-of-the-art procurement process used in specifying the sustainability requirements of the public sector in the USA construction market by means of a robust content analysis of 40 design-build requests for proposals (RFPs). The results of the content analysis indicate that the sustainability requirement is one of the most important dimensions in the best-value evaluation of DB contractors. Client/owners predominantly specify the LEED certification levels (e.g. LEED Certified, Silver, Gold, and Platinum) for a particular facility, and include the sustainability requirements as selection criteria (with specific importance weightings) for contractor evaluation. Additionally, larger projects tend to allocate higher importance weightings to sustainability requirements. This study provides public DB client/owners with a number of practical implications for selecting appropriate design-builders for sustainable DB projects.
Abstract:
Global Navigation Satellite Systems (GNSS)-based observation systems can provide high-precision positioning and navigation solutions in real time, at the subcentimetre level, if carrier phase measurements are used in the differential mode and all the bias and noise terms are handled well. However, these carrier phase measurements are ambiguous due to unknown, integer numbers of cycles. One key challenge in the differential carrier phase mode is to fix the integer ambiguities correctly. On the other hand, in safety-of-life or liability-critical applications, such as vehicle safety positioning and aviation, not only is high accuracy required, but the reliability requirement is also important. This PhD research studies how to achieve high reliability for ambiguity resolution (AR) in a multi-GNSS environment. GNSS ambiguity estimation and validation problems are the focus of the research effort. In particular, we study the case of multiple constellations, covering the initial to full operation of the foreseeable Galileo, GLONASS, Compass and QZSS navigation systems from the next few years to the end of the decade. Since real observation data are only available from the GPS and GLONASS systems, a simulation method named Virtual Galileo Constellation (VGC) is applied to generate observational data from another constellation in the data analysis. In addition, both full ambiguity resolution (FAR) and partial ambiguity resolution (PAR) algorithms are used in processing single- and dual-constellation data. Firstly, a brief overview of related work on AR methods and reliability theory is given. Next, a modified inverse integer Cholesky decorrelation method and its performance on AR are presented. Subsequently, a new measure of decorrelation performance called the orthogonality defect is introduced and compared with other measures. Furthermore, a new AR scheme that considers the ambiguity validation requirement in the control of the search space size is proposed to improve the search efficiency. With respect to the reliability of AR, we also discuss the computation of the ambiguity success rate (ASR) and confirm that the success rate computed with the integer bootstrapping method is quite a sharp approximation to the actual integer least-squares (ILS) success rate. The advantages of multi-GNSS constellations are examined in terms of the PAR technique involving a predefined ASR. Finally, a novel satellite selection algorithm for reliable ambiguity resolution, called SARA, is developed. In summary, the study demonstrates that when the ASR is close to one, the reliability of AR can be guaranteed and the ambiguity validation is effective. The work then focuses on new strategies to improve the ASR, including a partial ambiguity resolution procedure with a predefined success rate and a novel satellite selection strategy with a high success rate. The proposed strategies bring the significant benefits of multi-GNSS signals to real-time high-precision and high-reliability positioning services.
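The abstract refers to the integer bootstrapping success rate as a sharp approximation to (and lower bound on) the ILS success rate. The sketch below shows how that quantity is commonly computed from the conditional variances of the float ambiguity covariance matrix, P_s = prod_i (2*Phi(1/(2*sigma_i|I)) - 1); the example matrix is an illustrative assumption, not data from the thesis.

import numpy as np
from math import erf, sqrt

def bootstrap_success_rate(Q_z):
    # Integer bootstrapping success rate from the (preferably decorrelated)
    # float ambiguity covariance matrix Q_z. A sequential Schur-complement
    # sweep yields the conditional variances sigma^2_{i|I} on the diagonal.
    Q = np.array(Q_z, dtype=float, copy=True)
    n = Q.shape[0]
    cond_var = np.empty(n)
    for i in range(n):
        cond_var[i] = Q[i, i]
        if i + 1 < n:
            l = Q[i + 1:, i] / Q[i, i]
            Q[i + 1:, i + 1:] -= np.outer(l, Q[i, i + 1:])
    phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # standard normal CDF
    return float(np.prod([2.0 * phi(1.0 / (2.0 * sqrt(v))) - 1.0 for v in cond_var]))

# Example with an illustrative 3x3 ambiguity covariance matrix.
Q_example = np.array([[0.040, 0.010, 0.005],
                      [0.010, 0.050, 0.008],
                      [0.005, 0.008, 0.060]])
print(bootstrap_success_rate(Q_example))

Because the conditional variances shrink after decorrelation, running the same computation on the decorrelated matrix gives a higher (and tighter) success rate, which is why decorrelation quality matters for reliable AR.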
Abstract:
In Balnaves v Smith [2012] QSC 408 Byrne SJA concluded that an offer to settle could be an “offer to settle” under Chapter 9 Part 5 of the Uniform Civil Procedure Rules 1999 (Qld) (UCPR) despite the inclusion of non-monetary terms. His Honour took a different approach to that taken by Moynihan SJA in Taske v Occupational & Medical Innovations Ltd [2007] QSC 147.
Abstract:
Property is an important factor of production that all businesses need in order to function. Nourse (1990) observed that “some businesses are real estate, all businesses use real estate”. In recent years, the management of property assets has become the focus of many organisations, including non-real-estate businesses. Good asset management is concerned with the effective utilisation of a property owner’s assets. It is the management process of ensuring that the portfolio of properties held meets the overall requirements of the users. In short, it is the process of identifying the users’ requirements and rationalising property holdings to best match those requirements, followed by monitoring and ongoing review of practice. In Malaysia, federal agencies and local authorities are among the largest property asset owners. Recently the federal government released a Total Asset Management Manual (TAMM), which is at the preliminary stage of implementation. This thesis will study international practices in the asset management of public sector assets and assess the effectiveness of TAMM. The research will focus on current international practices for the effective management of public sector property assets. The current application in Malaysia will be highlighted, to determine the awareness and understanding of these practices in relation to the recently released TAMM. This is an exploratory study based on a combination of qualitative and quantitative approaches: the qualitative approach focuses on international practices and their application to the management of public sector property assets, while in the quantitative approach a questionnaire survey will be conducted among Malaysian public property asset managers and users to gauge collective opinion on the current practice of TAMM and its implementation.
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed with deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While the existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility, and it is designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
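To illustrate the randomisation, quantisation and encoding stages described above, the sketch below shows a generic pipeline in which real-valued features are mixed by a random projection and then binarised against thresholds learnt from training data. It is a schematic example of the kind of scheme the dissertation analyses, not the proposed HOS/Radon method; the projection size, threshold rule and all names are assumptions.

import numpy as np

def binarize_features(features, proj, thresholds):
    # features   : real-valued feature vector extracted from an image
    # proj       : random projection matrix (the randomisation stage)
    # thresholds : per-dimension quantisation thresholds learnt from
    #              training data (here, medians), assumed to be given
    mixed = proj @ features                       # randomise / compress
    return (mixed > thresholds).astype(np.uint8)  # 1-bit quantisation

# Toy usage: 64-dimensional features hashed to 16 bits.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 64))            # stand-in training features
P = rng.normal(size=(16, 64)) / np.sqrt(64)    # random projection
thr = np.median(train @ P.T, axis=0)           # learnt quantisation thresholds
h = binarize_features(rng.normal(size=64), P, thr)

The learnt thresholds (thr) are exactly the kind of side information the dissertation flags: how they are trained affects accuracy, and their existence leaks statistical information about the feature distribution.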
Abstract:
Wireless networked control systems (WNCSs) have been widely used in manufacturing and industrial processing over the last few years. They provide real-time control with a unique characteristic: periodic traffic. These systems have a time-critical requirement. Due to current wireless mechanisms, WNCS performance suffers from long time-varying delays, packet dropout and inefficient channel utilization. Wirelessly networked applications such as WNCSs are currently designed on the basis of a layered architecture, and the features of this layered architecture constrain the performance of these demanding applications. Numerous efforts have attempted to use cross-layer design (CLD) approaches to improve the performance of various networked applications. However, existing research rarely considers large-scale networks and congested network conditions in WNCSs, and there is a lack of discussion on how to apply CLD approaches in WNCSs. This thesis proposes a cross-layer design methodology to address the timeliness of periodic traffic and to improve the efficiency of channel utilization in WNCSs. The proposed CLD is characterised by measurement of the underlying network condition, classification of the network state, and adjustment of the sampling period between sensors and controllers. This period adjustment is able to maintain the minimum allowable sampling period while also maximising control performance. Extensive simulations are conducted using the network simulator NS-2 to evaluate the performance of the proposed CLD. The comparative studies consider communication both with and without the proposed CLD. The results show that the proposed CLD is capable of fulfilling the timeliness requirement under congested network conditions, and also improves channel utilization efficiency and the proportion of effective data in WNCSs.
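The following minimal sketch illustrates the measure-classify-adjust loop described above: measured delay and loss are mapped to a network state, and the sampling period between sensor and controller is adjusted within allowable bounds. The thresholds, state names and adjustment rule are assumptions for illustration only, not the rules used in the thesis.

def adjust_sampling_period(delay_ms, loss_rate, T_min=10.0, T_max=100.0):
    # delay_ms  : measured end-to-end delay of recent control packets
    # loss_rate : measured packet loss ratio over a monitoring window
    # Returns a sampling period in milliseconds, clamped to [T_min, T_max].

    # Classify the underlying network state from the measurements
    # (illustrative thresholds).
    if loss_rate > 0.10 or delay_ms > 50.0:
        state = "congested"
    elif loss_rate > 0.02 or delay_ms > 20.0:
        state = "loaded"
    else:
        state = "idle"

    # Relax the sampling period under congestion to reduce offered load,
    # tighten it when the channel is idle to maximise control performance.
    period = {"idle": T_min,
              "loaded": 0.5 * (T_min + T_max),
              "congested": T_max}[state]
    return min(max(period, T_min), T_max)

# Example: a congested channel pushes the period towards its upper bound.
T = adjust_sampling_period(delay_ms=65.0, loss_rate=0.12)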
Abstract:
This thesis makes several contributions towards improved methods for encoding structure in computational models of word meaning. New methods are proposed that allow linguistic structural features to be encoded easily within a computational representation while retaining the ability to scale to large volumes of textual data. The methods are implemented and evaluated on a range of tasks to demonstrate their effectiveness.
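As a concrete illustration of the general problem, the sketch below encodes word-order structure into distributional word vectors using permuted sparse random index vectors, a standard random-indexing technique that scales to large corpora. It is an assumed example of the family of approaches, not the thesis's specific methods; all names and dimensions are illustrative.

import numpy as np

def index_vector(dim=2048, nnz=8, rng=None):
    # Sparse ternary random index vector, as used in random indexing.
    rng = rng or np.random.default_rng()
    v = np.zeros(dim)
    idx = rng.choice(dim, size=nnz, replace=False)
    v[idx] = rng.choice([-1.0, 1.0], size=nnz)
    return v

def encode_order(corpus, window=2, dim=2048, seed=0):
    # Each word's memory vector accumulates the permuted index vectors of
    # its neighbours; the amount of cyclic shift encodes relative position,
    # so structural (order) information is folded into the representation.
    rng = np.random.default_rng(seed)
    vocab = {w for sent in corpus for w in sent}
    index = {w: index_vector(dim, rng=rng) for w in vocab}
    memory = {w: np.zeros(dim) for w in vocab}
    for sent in corpus:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    memory[w] += np.roll(index[sent[j]], j - i)
    return memory

vectors = encode_order([["the", "cat", "sat", "on", "the", "mat"]])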
Abstract:
Background: Procedural sedation and analgesia (PSA) administered by nurses in the cardiac catheterisation laboratory (CCL) is unlikely to yield serious complications. However, the safety of this practice depends on timely identification and treatment of depressed respiratory function. Aim: To describe respiratory monitoring in the CCL. Methods: Retrospective medical record audit of adult patients who underwent a procedure in the CCLs of one private hospital in Brisbane during May and June 2010. An electronic database was used to identify subjects and an audit tool ensured data collection was standardised. Results: Nurses administered PSA during 172/473 (37%) procedures, including coronary angiographies, percutaneous coronary interventions, electrophysiology studies, radiofrequency ablations, cardiac pacemakers, implantable cardioverter defibrillators, temporary pacing leads and peripheral vascular interventions. Oxygen saturations were recorded during 160/172 (93%) procedures, respiration rate was recorded during 17/172 (10%) procedures, use of oxygen supplementation was recorded during 40/172 (23%) procedures, and 13/172 (7.5%; 95% CI=3.59–11.41%) patients experienced oxygen desaturation. Conclusion: Although oxygen saturation was routinely documented, nurses did not regularly record respiration observations. It is likely that surgical draping and the requirement to minimise radiation exposure interfered with nurses’ ability to observe respiration. Capnography could overcome these barriers to respiration assessment: its accurate measurement of exhaled carbon dioxide, coupled with an easily interpretable waveform output that displays a breath-by-breath account of ventilation, enables identification of respiratory depression in real time. The results of this audit emphasise the need to ascertain the clinical benefits of using capnography to assess ventilation during PSA in the CCL.
Abstract:
In order to meet the land use and infrastructure needs of the community with the additional challenges posed by climate change and a global recession, it is essential that Queensland local governments test their proposed integrated land use and infrastructure plans to ensure the maximum achievement of triple-bottom-line sustainability goals. Extensive regulatory impact assessment systems are in place at the Australian and state government levels to substantiate and test policy and legislative proposals, however no such requirement has been extended to the local government level. This paper contends that with the devolution of responsibility to local government and growing impacts of local government planning and development assessment activities, impact assessment of regulatory planning instruments is appropriate and overdue. This is particularly so in the Queensland context where local governments manage metropolitan and regional scale responsibilities and their planning schemes under the Sustainable Planning Act 2009 integrate land use and infrastructure planning to direct development rights, the spatial allocation of land, and infrastructure investment. It is critical that urban planners have access to fit-for-purpose impact assessment frameworks which support this challenging task and address the important relationship between local planning and sustainable urban development. This paper uses two examples of sustainability impact assessment and a case study from the Queensland local urban planning context to build an argument and potential starting point for impact assessment in local planning processes.
Abstract:
The recognition and enforcement of foreign judgments is an aspect of private international law, and concerns situations where a successful party to litigation seeks to rely on a judgment obtained in one court before a court in another jurisdiction. The most common example where the recognition and enforcement of foreign judgments may arise is where a party who has obtained a favourable judgment in one state or country seeks to recognise and enforce the judgment in another state or country. This occurs because there are insufficient assets in the state or country where the judgment was rendered to satisfy that judgment. As technological advancements in communications over vast geographical distances have improved exponentially in recent years, there has been an increase in cross-border transactions, as well as litigation arising from these transactions. As a result, the recognition and enforcement of foreign judgments is of increasing importance, since a party who has obtained a judgment in cross-border litigation may wish to recognise and enforce the judgment in another state or country, where the defendant’s assets may be located, without having to re-litigate substantive issues that have already been resolved in another court. The purpose of the study is to examine whether the current state of laws for the recognition and enforcement of foreign judgments in Australia, the United States and the European Community are in line with modern commercial needs. The study is conducted by weighing two competing objectives between the notion of finality of litigation, which encourages courts to recognise and enforce judgments foreign to them, on the one hand, and the adequacy of protection to safeguard the recognition and enforcement proceedings, so that there would be no injustice or unfairness if a foreign judgment is recognised and enforced, on the other. The findings of the study are as follows. In both Australia and the United States, there is a different approach concerning the recognition and enforcement of judgments rendered by courts interstate or in a foreign country. In order to maintain a single and integrated nation, there are constitutional and legislative requirements authorising courts to give conclusive effects to interstate judgments. In contrast, if the recognition and enforcement actions involve judgments rendered by a foreign country’s court, an Australian or a United States court will not recognise and enforce the foreign judgment unless the judgment has satisfied a number of requirements and does not fall under any of the exceptions to justify its non-recognition and non-enforcement. In the European Community, the Brussels I Regulation which governs the recognition and enforcement of judgments among European Union Member States has created a scheme, whereby there is only a minimal requirement that needs to be satisfied for the purposes of recognition and enforcement. Moreover, a judgment that is rendered by a Member State and based on any of the jurisdictional bases set forth in the Brussels I Regulation is entitled to be recognised and enforced in another Member State without further review of its underlying jurisdictional basis. However, there are concerns as to the adequacy of protection available under the Brussels I Regulation to safeguard the judgment-enforcing Member States, as well as those against whom recognition or enforcement is sought.
This dissertation concludes by making two recommendations aimed at improving the means by which foreign judgments are recognised and enforced in the selected jurisdictions. The first is for the law in both Australia and the United States to undergo reform, including: adopting the real and substantial connection test as the new jurisdictional basis for the purposes of recognition and enforcement; liberalising the existing defences to safeguard the application of the real and substantial connection test; extending the application of the Foreign Judgments Act 1991 (Cth) in Australia to include at least its important trading partners; and implementing a federal statutory scheme in the United States to govern the recognition and enforcement of foreign judgments. The second recommendation is to introduce a convention on jurisdiction and the recognition and enforcement of foreign judgments. The convention will be a convention double, which provides uniform standards for the rules of jurisdiction a court in a contracting state must exercise when rendering a judgment and a set of provisions for the recognition and enforcement of resulting judgments.
Abstract:
The Australian Government’s current workforce reforms in early childhood education and care (ECEC) include a major shift in qualification requirements. The new requirement is that university four-year degree-qualified teachers are employed in before-school contexts, including child care. Ironically, recent research studies show that, in Australia, the very preservice teachers who are enrolled in these degree programs have a reluctance to work in child care. This article reports on part of a larger study which is inquiring into how early childhood teacher professional identities are discursively produced, and provides a partial mapping of the literature. One preservice teacher’s comment provides the starting point, and the paper locates some of the discourses that are accessible to preservice teachers as they prepare for the early years workforce. An awareness of the discursive field provides a sound background for preparing early childhood teachers. A challenge for the field is to consider which discourses are dominant, and how they potentially work to privilege work in some ECEC contexts over others.
Abstract:
This study is an inquiry into early childhood teacher professional identities. In Australia, workforce reforms in early childhood include major shifts in qualification requirements that call for a university four-year degree-qualified teacher to be employed in child care. This marks a shift in the early years workforce, where previously there was no such requirement. At the same time as these reforms to quality measures are being implemented, and requiring substantial upskilling of the workforce, there is a growing body of evidence from recent studies suggesting that these same four-year degree-qualified early childhood teachers have an aversion to working in child care. Their preferred employment option is to work in the early years of more formal schooling, not in before-school contexts. This collision of agendas warrants investigation. This inquiry is designed to investigate the site at which advocacy for higher qualification requirements meets early childhood teachers who are reluctant to choose child care as a possible career pathway. The key research question for this study is: How are early childhood teachers’ professional identities currently produced? The work of this thesis is to problematise the early childhood teacher in child care through a particular method of discourse analysis. There are two sets of data. The first was a key early childhood political document, read as a "moment of arising" (Foucault, 1984a, p. 83), which was selected for its current influence on the early childhood field and, in particular, on workforce reforms that call for four-year degree-qualified teachers to work in before-school contexts, including child care. The second data set was generated through four focus group discussions conducted with preservice early childhood teachers. The document and transcripts of the focus groups were both analysed as text, as conceptualised by Foucault (1981). Foucault’s work spans a number of years and a range of philosophical matters. This thesis draws particularly on Foucault’s writings on discourse, power/knowledge, regimes of truth and resistance. In order to consider the production of early childhood teachers’ professional identities, the study is also informed by identity theorists, who have worked on gender, performativity and investment (Davies, 2004/2006; McNay, 1992; Osgood, 2012; Walkerdine, 1990; Weedon, 1997). The ways in which discourses intersect, compete and collide produce the subject (Foucault, 1981) and, in the case of this inquiry, there are a number of competing discourses at play, which produce the early childhood teacher. These theories turn particular lenses on the question of professional identities in early childhood, and such a study calls for particular methodologies. Discourse analysis was used as the methodological framework, and the analysis was informed by Foucauldian concepts of discourse. While Foucault did not prescribe a form of discourse analysis as a method, his writings nonetheless provide a valuable framework for illuminating discursive practices and, in turn, how people are affected, through the shifts and distribution of power (Foucault, 1980a). The treatment used with both data sets involved redescription. For the policy document, a technique for reading document-as-text applied a genealogical approach (Foucault, 1984a). For the focus groups, the process of redescription (Rorty, 1989) involved reading talk-as-text.
As a method, redescription involves describing "lots and lots of things in new ways until you have created a pattern of linguistic behaviour which will tempt the new generation to adopt it" (Rorty, 1989, p. 9). The development and application of categories (Davies, 2004/2006), built on a poststructuralist theoretical framework, together with the literature review, informed the data analysis method of discourse analysis. Irony provided a rhetorical and playful tool (Haraway, 1991; Rorty, 1989) for looking at how seemingly opposing discourses are held together. This opens a space to collapse binary thinking and consider seemingly contradictory terms in a way in which both terms are possible and both are true. Irony resists the choice of one or the other being right, and holds the opposites together in tension. The thesis concludes with proposals for new, ironic categories, which work to bring together seemingly opposing terms, located at sites in the field of early childhood where discourses compete, collide and intersect to produce and maintain early childhood teacher professional identities. The process of mapping these discourses goes some way to investigating the complexities of the identities and career choices of early childhood teachers. The category of "the cost of loving" captures the collision between care/love, inherent in child care, and new discourses of investment/economics. Investment/economics has not completely replaced care/love, and these apparent opposites were not read as a binary because both are necessary and both are true (Haraway, 1991). They are held together in tension to produce early childhood teacher professional identities. The policy document under scrutiny was New Directions, released in 2007 by the then opposition ALP leader, Kevin Rudd. The claim was made strongly that the "economic prosperity" of Australia relies on investment in early childhood. The arguments to invest are compelling, and the neuroscience/brain research/child development discourses, together with economic/investment discourses, demand that early childhood funding is increased. The intersection of these discourses produces professional identities of early childhood teachers as a necessary part of the country’s economy, and thus worthy of high status. The child care sector and work in child care settings are necessary, with children and the early childhood teacher playing key roles in the economy of the nation. Through New Directions it becomes sayable (Foucault, 1972/1989) that the work the early childhood teacher performs is legitimated and valued. The children are produced as "economic units". A focus on what children are able to contribute to the future economy of the nation re-positions children and produces "smart productive citizens" who will make a future economic contribution. The early childhood teacher is produced through this image of a child, and "the cost of loving" is emphasised. A number of these categories were produced through the readings of the document-as-text and the talk-as-text. Two ironic categories were read in the analysis of the transcripts of the focus group discussions, when treated as talk-as-text data: the early childhood teacher as a "heroic victim"; and the early childhood teacher as a "glorified babysitter". This thesis raises new questions about professional identities in early childhood. These new questions might go some way towards prompting re-thinking of some government policy, as well as some aspects of early childhood teacher education course design.
The images of children and images of child care provide provocations to consider preservice teacher education course design, in particular how child care, as one of the early childhood contexts, is located, conceptualised and spoken throughout the course. Consideration by course designers and teacher educators of what discourses are privileged in course content, and what discourses are diminished or silenced, would go some way to reconceptualising child care within preservice teacher education and challenging dominant ways of speaking child care, and work in child care. This inquiry into early childhood teachers’ professional identities has gone some way to exploring the complexities around the early childhood teacher in child care. It is anticipated that the significance of this study will thus have immediate applicability and relevance for the Australian early childhood policy landscape. The early childhood field is in a state of rapid change, and this inquiry has examined some of the disconnects between policy and practice. Awareness of the discourses that are in play in the field will continue to allow space for conversations that challenge dominant assumptions about child care, work in child care and ways of being an early childhood teacher in child care.
Abstract:
Amongst the current reforms in early childhood in Australia is the requirement for four-year university degree-qualified teachers to be employed to provide a kindergarten program for four-year-old children in the year prior to school entry. The possibility for long day care to provide a funded kindergarten program with an early childhood teacher (ECT) presents a change for the field. With this change come challenges, but also opportunities to think in new and different ways about what long day care, and working in long day care, might look like.