903 results for Principle component
Abstract:
This report provides an evaluation of the implementation of the Polluter Pays Principle (PPP) – a principle of international environmental law – in the context of pollution from sugarcane farming affecting Australia’s Great Barrier Reef (GBR). The research was part of an experiment to test methods for evaluating the effectiveness of environmental laws. Overall, we found that whilst the PPP is reflected to a limited extent in Australian law (more so in Queensland law than at the national level), the behaviour one might expect in terms of implementing the principle was largely inadequate. Evidence of a longer-term, explicit commitment to the PPP was particularly weak.
Abstract:
This article addresses the need for an implementation mechanism for the protection of refugees’ rights. However, it is contended that the principle of non-refoulement forms part of customary international law, under which it is binding on all states irrespective of whether or not they are parties to the 1951 Convention Relating to the Status of Refugees or its 1967 Protocol. Over the last decade, the U.S. and its allies have been fighting to curb terrorism, which has raised many issues such as human rights violations, deportation, expulsion, extradition, and rendition. Pakistan has played a very critical role in the War on Terror, particularly with reference to the war in Afghanistan. The particular concern of this article is the violation of refugees’ rights in Pakistan in 2008 and 2010. The article highlights the legislation regarding the non-expulsion of Afghan refugees from Pakistan to a territory where they have a well-founded fear of persecution. The article is divided into three parts: the first deals with the principle of non-refoulement, the second with exceptions to the principle, and the last discusses the violation of that principle in Pakistan with reference to Afghan refugees.
Abstract:
For a multiarmed bandit problem with exponential discounting, the optimal allocation rule is defined by a dynamic allocation index defined for each arm on its state space. The index for an arm is equal to the expected immediate reward from the arm, with an upward adjustment reflecting any uncertainty about the prospects of obtaining rewards from the arm and the possibilities of resolving those uncertainties by selecting that arm. Thus the learning component of the index is defined to be the difference between the index and the expected immediate reward. For two arms with the same expected immediate reward, the learning component should be larger for the arm for which the reward rate is more uncertain. This is shown to be true for arms based on independent samples from a fixed distribution with an unknown parameter in the cases of Bernoulli and normal distributions, and similar results are obtained in other cases.
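A minimal computational sketch of this idea for the Bernoulli case (the calibration scheme, truncation horizon, and function names here are illustrative assumptions, not the paper's construction): the index of an arm with a Beta(a, b) posterior can be approximated by bisection on a per-step "retirement" reward at which pulling the arm and retiring are equally attractive, using finite-horizon dynamic programming.

```python
from functools import lru_cache

def gittins_index(a, b, gamma=0.9, horizon=60, tol=1e-4):
    """Approximate the dynamic allocation index of a Bernoulli arm with a
    Beta(a, b) posterior, via bisection on the per-step retirement reward."""

    def arm_value(lam):
        # Finite-horizon DP: in each state, either retire on lam for the rest
        # of the (truncated) horizon, or pull once and update the posterior.
        @lru_cache(maxsize=None)
        def V(a, b, h):
            if h == 0:
                return 0.0
            retire = lam * (1.0 - gamma**h) / (1.0 - gamma)
            p = a / (a + b)  # posterior mean success probability
            pull = (p * (1.0 + gamma * V(a + 1, b, h - 1))
                    + (1.0 - p) * gamma * V(a, b + 1, h - 1))
            return max(retire, pull)
        return V(a, b, horizon)

    lo, hi = a / (a + b), 1.0  # the index lies between the posterior mean and 1
    while hi - lo > tol:
        lam = (lo + hi) / 2.0
        retire = lam * (1.0 - gamma**horizon) / (1.0 - gamma)
        if arm_value(lam) > retire + 1e-12:
            lo = lam   # pulling still beats retiring: index exceeds lam
        else:
            hi = lam
    return (lo + hi) / 2.0
```

The learning component is then `gittins_index(a, b) - a/(a+b)`; for two arms with the same posterior mean, the flatter (more uncertain) posterior receives the larger component, in line with the Bernoulli result stated above.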
Abstract:
In recent years, there has been increasing interest from growers, merchants, supermarkets and consumers in the establishment of a national mild onion industry. Imperative to the success of the emergent industry is the application of the National Mild Onion Certification Scheme, which will establish standards and recommendations to be met by growers to allow them to declare their product as certified mild onions. The use of sensory evaluation techniques has played an important role throughout the project timeline, which has also included varietal evaluation, evaluation of current agronomic practices and correlation of chemical analysis data. Raw onion consumer acceptance testing on five different onion varieties established preferences amongst the varieties for odour, appearance, flavour, texture and overall acceptability, and differences in the perceived level of pungency and aftertaste. Demographic information was obtained regarding raw and cooked onion use, frequency of consumption and responses to the idea of a mild, less pungent onion. Additionally, focus groups were conducted to further investigate consumer attitudes to onions. Currently, a trained onion panel is being established to evaluate several odour, flavour and aftertaste attributes. Sample assessments will be conducted in January 2004 and correlated with chemical analyses, which it is hoped will provide the cornerstone for the anticipated Certification Scheme.
Abstract:
We demonstrate supramolecular hydrogel formation using a simple bile acid, lithocholic acid, in aqueous solution in the presence of various dimeric or oligomeric amines. By varying the choice of amine in such mixtures, the gelation properties could be modulated. However, replacing lithocholic acid (LCA) with cholic acid or deoxycholic acid resulted in no hydrogel formation. FT-IR studies confirm that the carboxylate and ammonium residues of the two components are involved in salt (ion-pair) formation. This promotes further assembly of the components, reinforced by a continuous hydrogen-bonded network, leading to gelation. Electron microscopy shows the morphology of the internal organization of the gels of these two-component systems, which also depends significantly on the amine component. Varying the amine component from simple 1,2-ethanediamine (EDA) to oligomeric amines in such gels of lithocholic acid changes the morphology of the assembly from long one-dimensional nanotubes to three-dimensional complex structures. Single-crystal X-ray diffraction analysis of one of the amine-LCA complexes suggested the motif of fiber formation, in which the amines interact with the carboxylate and hydroxyl moieties through electrostatic forces and hydrogen bonding. A small-angle neutron scattering study makes it clear that the weak gel from LCA-EDA shows scattering oscillation due to the presence of non-interacting nanotubules, while for gels of LCA with oligomeric amines the individual fibers come together to form complex three-dimensional organizations of a higher length scale. The rheological properties of this class of two-component systems provide clear evidence that the flow behavior can be modulated by varying the acid-amine ratio.
Abstract:
- Purpose Communication of risk management practices is a critical component of good corporate governance. Research to date has been of little benefit in informing regulators internationally. This paper seeks to contribute to the literature by investigating how listed Australian companies, in a setting where disclosures are explicitly required by the ASX corporate governance framework, disclose risk management (RM) information in the corporate governance statements within annual reports. - Design/methodology/approach To address the study’s research questions and related hypotheses, we examine the top 300 ASX-listed companies by market capitalisation at 30 June 2010. For these firms, we identify, code and categorise RM disclosures made in the annual reports according to the disclosure categories specified in the Australian Securities Exchange Corporate Governance Principles and Recommendations (ASX CGPR). The derived data are then examined using a comprehensive approach comprising thematic content analysis and regression analysis. - Findings The results indicate widespread divergence in disclosure practices and low conformance with Principle 7 of the ASX CGPR. This result suggests that companies are not disclosing all ‘material business risks’, possibly due to ignorance at the board level or to the intentional withholding of sensitive information from financial statement users. The findings also show mixed results across the factors expected to influence disclosure behaviour. Notably, the presence of a risk committee (RC) (in particular, a standalone RC) and of a technology committee (TC) is found to be associated with improved levels of disclosure. We do not find evidence that company risk measures (as proxied by equity beta and the market-to-book ratio) are significantly associated with greater levels of RM disclosure. Also, contrary to common findings in the disclosure literature, factors such as board independence and expertise, audit committee independence, and the use of a Big-4 auditor do not seem to impact the level of RM disclosure in the Australian context. - Research limitations/implications The study is limited by the sample and study-period selection, as the RM disclosures of only the largest (top 300) ASX firms are examined for the fiscal year 2010. Thus, the findings may not be generalisable to smaller firms or to earlier/later years. Also, the findings may have limited applicability in other jurisdictions with different regulatory environments. - Practical implications The study’s findings suggest that insufficient attention has been paid to RM disclosures by listed companies in Australia. These results suggest that the RM disclosure practices observed in the Australian setting may not be meeting the objectives of regulators and the needs of stakeholders. - Originality/value Despite the importance of risk management communication, it is unclear whether disclosures in annual financial reports achieve this communication. The Australian setting provides an ideal environment in which to examine the nature and extent of risk management communication, as the Australian Securities Exchange (ASX) has recommended that risk management disclosures follow Principle 7 of its principle-based governance rules since 2007.
Abstract:
Background: Sorghum genome mapping based on DNA markers began in the early 1990s, and numerous genetic linkage maps of sorghum have been published in the last decade, based initially on RFLP markers, with more recent maps including AFLPs and SSRs and, very recently, Diversity Array Technology (DArT) markers. It is essential to integrate the rapidly growing body of genetic linkage data produced through DArT with the multiple genetic linkage maps for sorghum generated through other marker technologies. Here, we report on the colinearity of six independent sorghum component maps and on the integration of these component maps into a single reference resource that contains commonly utilized SSRs, AFLPs, and high-throughput DArT markers. Results: The six component maps were constructed using the MultiPoint software. The lengths of the resulting maps varied between 910 and 1528 cM. The order of the 498 markers that segregated in more than one population was highly consistent between the six individual mapping data sets. The framework consensus map was constructed using a "Neighbours" approach and contained 251 integrated bridge markers on the 10 sorghum chromosomes, spanning 1355.4 cM with an average density of one marker every 5.4 cM; these bridge markers were used for the projection of the remaining markers. In total, the sorghum consensus map consisted of 1997 markers mapped to 2029 unique loci (1190 DArT loci and 839 other loci) spanning 1603.5 cM, with an average marker density of 1 marker/0.79 cM. In addition, 35 multicopy markers were identified. On average, each chromosome on the consensus map contained 203 markers, of which 58.6% were DArT markers. Non-random patterns of DNA marker distribution were observed, with some clear marker-dense regions and some marker-rare regions.
Conclusion: The final consensus map has allowed us to map a larger number of markers than possible in any individual map, to obtain a more complete coverage of the sorghum genome and to fill a number of gaps on individual maps. In addition to overall general consistency of marker order across individual component maps, good agreement in overall distances between common marker pairs across the component maps used in this study was determined, using a difference ratio calculation. The obtained consensus map can be used as a reference resource for genetic studies in different genetic backgrounds, in addition to providing a framework for transferring genetic information between different marker technologies and for integrating DArT markers with other genomic resources. DArT markers represent an affordable, high throughput marker system with great utility in molecular breeding programs, especially in crops such as sorghum where SNP arrays are not publicly available.
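The two reported marker densities follow directly from the map lengths and marker counts given in the abstract; a quick check of the arithmetic:

```python
# Reported figures from the abstract: framework consensus map and full
# consensus map lengths (cM) and marker/locus counts.
framework_cm, framework_markers = 1355.4, 251
consensus_cm, consensus_loci = 1603.5, 2029

# Average density = map length / number of mapped markers (cM per marker).
framework_density = framework_cm / framework_markers
consensus_density = consensus_cm / consensus_loci

print(f"framework: one marker every {framework_density:.1f} cM")
print(f"consensus: one marker every {consensus_density:.2f} cM")
```

Both values reproduce the densities stated in the text (one marker every 5.4 cM on the framework map, 1 marker/0.79 cM on the consensus map).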
Abstract:
The feasibility of realising a high-order LC filter with a small set of different capacitor values, without sacrificing the frequency response specifications, is indicated. This idea could be conveniently adopted in other filter structures as well, for example FDNR-transformed filter realisations.
Abstract:
Use of socket prostheses: Currently, for individuals with limb loss, the conventional method of attaching a prosthetic limb relies on a socket that fits over the residual limb. However, there are a number of issues concerning the use of a socket (e.g., blisters, irritation, and discomfort) that result in dissatisfaction with socket prostheses, and these ultimately lead to a significant decrease in quality of life.
Bone-anchored prostheses: Alternatively, the concept of attaching artificial limbs directly to the skeletal system has been developed (bone-anchored prostheses), as it alleviates many of the issues surrounding the conventional socket interface. Bone-anchored prostheses rely on two critical components: the implant, and the percutaneous abutment or adapter, which forms the connection to the external prosthetic system (Figure 1). To date, an implant that screws into the long bone of the residual limb has been the most common intervention. However, more recently, press-fit implants have been introduced and their use is increasing. Several other devices are currently at various stages of development, particularly in Europe and the United States.
Benefits of bone-anchored prostheses: Several key studies have demonstrated that bone-anchored prostheses have major clinical benefits when compared to socket prostheses (e.g., quality of life, prosthetic use, body image, hip range of motion, sitting comfort, ease of donning and doffing, osseoperception (proprioception), and walking ability) and acceptable safety in terms of implant stability and infection. Additionally, this method of attachment allows amputees to participate in a wide range of daily activities for a substantially longer duration. Overall, the system has demonstrated a significant enhancement to quality of life.
Challenges of direct skeletal attachment: However, because of the direct skeletal attachment, serious injury and damage can occur through excessive loading events such as a fall (e.g., component damage, peri-prosthetic fracture, hip dislocation, and femoral head fracture). These incidents are costly (e.g., replacement of components) and can require further surgical interventions. Currently, these risks are limiting the acceptance of bone-anchored technology and the substantial improvement to quality of life that this treatment offers. An in-depth investigation into these risks highlighted a clear need to re-design and improve the componentry in the system (Figure 2), to improve overall safety during excessive loading events.
Aim and purposes: The ultimate aim of this doctoral research is to improve the loading safety of bone-anchored prostheses and to reduce the incidence of injury and damage through the design of load-restricting components, enabling individuals fitted with the system to partake in everyday activities with increased security and self-assurance. The safety component will be designed to release or ‘fail’ external to the limb, in a way that protects the internal bone-implant interface, thus removing the need for restorative surgery and avoiding potential damage to the bone. This requires detailed knowledge of the loads typically experienced by the limb and an understanding of potential overload situations that might occur. Hence, a comprehensive review of the loading literature surrounding bone-anchored prostheses will be conducted as part of this project, with the potential for additional experimental studies of the loads during normal activities to fill gaps in the literature. This information will be pivotal in determining the specifications for the properties of the safety component and the bone-implant system. The project will follow the Stanford Biodesign process for the development of the safety component.
Abstract:
Recent axiomatic derivations of the maximum entropy principle from consistency conditions are critically examined. We show that proper application of the consistency conditions alone allows a wider class of functionals, essentially of the form ∫ dx p(x)[p(x)/g(x)]^s for some real number s, to be used for inductive inference, and the commonly used form −∫ dx p(x) ln[p(x)/g(x)] is only a particular case. The role of the prior density g(x) is clarified. It is possible to regard it as a geometric factor, describing the coordinate system used; it does not represent information of the same kind as that obtained by measurements on the system in the form of expectation values.
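One way to see the connection between the wider family and the logarithmic case (a small discrete illustration of my own, with made-up p and g; the affine rescaling by s is an assumption for the demonstration, not the paper's normalisation): as s → 0, the rescaled power-family value (Σ p_i (p_i/g_i)^s − 1)/s tends to the relative-entropy form Σ p_i ln(p_i/g_i).

```python
import math

# Made-up discrete distributions standing in for p(x) and the prior g(x).
p = [0.5, 0.3, 0.2]
g = [1/3, 1/3, 1/3]

def power_functional(p, g, s):
    """Discrete analogue of (sum p_i (p_i/g_i)^s - 1) / s."""
    return (sum(pi * (pi / gi)**s for pi, gi in zip(p, g)) - 1.0) / s

def log_functional(p, g):
    """Discrete analogue of sum p_i ln(p_i/g_i) (relative entropy)."""
    return sum(pi * math.log(pi / gi) for pi, gi in zip(p, g))

# As s -> 0, the power-family value approaches the logarithmic form.
for s in (0.1, 0.01, 0.001):
    print(s, power_functional(p, g, s))
print("limit:", log_functional(p, g))
```

This is only the limiting relationship; for each fixed s ≠ 0 the power functional is a genuinely different member of the admissible class.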
Abstract:
Various reasons, such as ethical issues in maintaining blood resources, growing costs, and strict requirements for safe blood, have increased the pressure for efficient use of resources in blood banking. The competence of blood establishments can be characterized by their ability to predict the volume of blood collection so as to provide cellular blood components in a timely manner as dictated by hospital demand. The stochastically varying clinical need for platelets (PLTs) sets a specific challenge for balancing supply with requests. Labour has been proven to be a primary cost-driver and should be managed efficiently. International comparisons of blood banking could identify inefficiencies and allow reallocation of resources. Seventeen blood centres from 10 countries in continental Europe, Great Britain, and Scandinavia participated in this study. The centres were national institutes (5), parts of the local Red Cross organisation (5), or integrated into university hospitals (7). This study focused on the centres' departments of blood component preparation. The data were obtained retrospectively by computerized questionnaires completed via the Internet for the years 2000-2002. The data were used in four original articles (numbered I through IV) that form the basis of this thesis. Non-parametric data envelopment analysis (DEA, II-IV) was applied to evaluate and compare the relative efficiency of blood component preparation. Several models were created using different input and output combinations. The focus of the comparisons was on technical efficiency (II-III) and labour efficiency (I, IV). An empirical cost model was tested to evaluate cost efficiency (IV). Purchasing power parities (PPP, IV) were used to adjust the costs of working hours and to make the costs comparable among countries. The total annual number of whole blood (WB) collections varied from 8,880 to 290,352 across the centres (I).
Significant variation was also observed in the annual volume of produced red blood cells (RBCs) and PLTs. The annual number of PLTs produced by any method varied from 2,788 to 104,622 units. In 2002, 73% of all PLTs were produced by the buffy coat (BC) method, 23% by aphaeresis and 4% by the platelet-rich plasma (PRP) method. The annual discard rate of PLTs varied from 3.9% to 31%. The mean discard rate (13%) remained in the same range throughout the study period and demonstrated similar levels and variation in 2003-2004 according to a specific follow-up question (14%, range 3.8%-24%). The annual PLT discard rates were, to some extent, associated with production volumes. The mean RBC discard rate was 4.5% (range 0.2%-7.7%). Technical efficiency showed marked variation (median 60%, range 41%-100%) among the centres (II). Compared to the efficient departments, the inefficient departments used excess labour resources (and probably) production equipment to produce RBCs and PLTs. Technical efficiency tended to be higher when the (theoretical) proportion of lost WB collections (total RBC+PLT loss) from all collections was low (III). The labour efficiency varied remarkably, from 25% to 100% (median 47%) when working hours were the only input (IV). Using the estimated total costs as the input (cost efficiency) revealed an even greater variation (13%-100%) and overall lower efficiency level compared to labour only as the input. In cost efficiency only, the savings potential (observed inefficiency) was more than 50% in 10 departments, whereas labour and cost savings potentials were both more than 50% in six departments. The association between department size and efficiency (scale efficiency) could not be verified statistically in the small sample. In conclusion, international evaluation of the technical efficiency in component preparation departments revealed remarkable variation. 
A suboptimal combination of manpower and production output levels was the major cause of inefficiency, and the efficiency did not directly relate to production volume. Evaluation of the reasons for discarding components may offer a novel approach to study efficiency. DEA was proven applicable in analyses including various factors as inputs and outputs. This study suggests that analytical models can be developed to serve as indicators of technical efficiency and promote improvements in the management of limited resources. The work also demonstrates the importance of integrating efficiency analysis into international comparisons of blood banking.
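As a sketch of the efficiency-scoring idea (not the DEA models of articles II-IV, and with entirely made-up numbers): in the special case of a single input and a single output under constant returns to scale, input-oriented DEA efficiency reduces to each department's productivity ratio divided by the best observed ratio.

```python
def labour_efficiency(hours, units):
    """Single-input, single-output efficiency scores: each department's
    output-per-hour ratio divided by the best observed ratio.
    This is the CRS special case of input-oriented DEA."""
    ratios = [u / h for u, h in zip(units, hours)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical departments: annual working hours and components produced.
hours = [100.0, 200.0, 150.0]
units = [50.0, 80.0, 90.0]
scores = labour_efficiency(hours, units)
print(scores)
```

A score of 1.0 marks a department on the efficiency frontier; a score of 0.47 (the median labour efficiency reported above) would mean the department could, in principle, produce its output with 47% of its current labour input. The multi-input, multi-output case requires solving a linear program per department, which is what DEA software does.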
Abstract:
A pheromone-based trapping system will be developed for both A. lutescens and A. nitida to improve insecticide timing and to rationalise use.
Abstract:
Three new three-dimensional zinc-triazolate-oxybis(benzoate) compounds, [{Zn3(H2O)2}{C12H8O(COO)2}2{C2H2N3}2]·2H2O (I), [Zn7{C12H8O(COO)2}4{C2H2N3}6]·H2O (II), and [{Zn5(OH)2}{C12H8O(COO)2}3{C2H2N3}2] (III), were synthesized by a hydrothermal reaction of a mixture of Zn(OAc)2·2H2O, 4,4'-oxybis(benzoic acid), 1,2,4-triazole, NaOH, and water. Compound I has an interpenetrated diamond structure, while II and III have pillared-layer-related structures. The formation of a hydrated phase (I) at low temperature and a completely dehydrated phase (III) at high temperature suggests the importance of thermodynamic factors in the formation of the three compounds. Transformation studies of I in the presence of water show the formation of a simple Zn-OBA compound, [Zn(OBA)(H2O)] (IV), at 150 and 180 °C and of compound III at 200 °C. The compounds have been characterized by single-crystal X-ray diffraction, powder X-ray diffraction, thermogravimetric analysis, IR, and photoluminescence studies.
Abstract:
From consideration of ¹H-¹H vicinal coupling constants and ¹³C-¹H long-range coupling constants in a series of amino acid derivatives, the precise values of the component vicinal coupling constants have been calculated for the three minimum-energy staggered rotamers of the C(α)H-C(β)H side-chains of amino acids.
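Component couplings of this kind are typically used to estimate rotamer populations, since an observed vicinal coupling is the population-weighted average over the three staggered rotamers. A hedged sketch of that inversion (the trans/gauche component values of 13.6 and 2.6 Hz below are illustrative assumptions, not values from this work, and the function names are my own):

```python
def solve3(a, b):
    """Solve a 3x3 linear system a @ x = b by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(a)
    solution = []
    for i in range(3):
        m = [row[:] for row in a]
        for r in range(3):
            m[r][i] = b[r]          # replace column i with the RHS
        solution.append(det(m) / d)
    return solution

# Assumed component couplings (Hz) of HB2 and HB3 in rotamers I, II, III:
# roughly 13.6 Hz when the protons are trans, 2.6 Hz when gauche.
J_B2 = [13.6, 2.6, 2.6]
J_B3 = [2.6, 13.6, 2.6]

def rotamer_populations(jobs_b2, jobs_b3):
    """Recover (p1, p2, p3) from two observed averaged couplings plus
    the normalisation p1 + p2 + p3 = 1."""
    a = [J_B2, J_B3, [1.0, 1.0, 1.0]]
    return solve3(a, [jobs_b2, jobs_b3, 1.0])
```

For example, populations (0.5, 0.3, 0.2) give averaged couplings of 8.1 and 5.9 Hz, and the solver recovers the populations from those two observations.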