8 results for Proportionality
at Queensland University of Technology - ePrints Archive
Abstract:
This article suggests that the issue of proportionality in anti-doping sanctions has been inconsistently dealt with by the Court of Arbitration for Sport (CAS). Given CAS’s pre-eminent role in interpreting and applying the World Anti-Doping Code under the anti-doping policies of its signatories, an inconsistent approach to the application of the proportionality principle will cause difficulties for domestic anti-doping tribunals seeking guidance as to the appropriateness of their doping sanctions.
Abstract:
Relative abundance data is common in the life sciences, but appreciation that it needs special analysis and interpretation is scarce. Correlation is popular as a statistical measure of pairwise association but should not be used on data that carry only relative information. Using timecourse yeast gene expression data, we show how correlation of relative abundances can lead to conclusions opposite to those drawn from absolute abundances, and that its value changes when different components are included in the analysis. Once all absolute information has been removed, only a subset of those associations will reliably endure in the remaining relative data, specifically, associations where pairs of values behave proportionally across observations. We propose a new statistic φ to describe the strength of proportionality between two variables and demonstrate how it can be straightforwardly used instead of correlation as the basis of familiar analyses and visualization methods.
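The proportionality statistic described above can be sketched in a few lines. This is a minimal illustration, assuming the common formulation φ(x, y) = var(log x − log y) / var(log x) on strictly positive abundance vectors; the function name and normalisation choice here are illustrative, not the authors' exact code:

```python
import numpy as np

def phi(x, y):
    """Proportionality statistic for two positive abundance vectors.

    phi = var(log(x) - log(y)) / var(log(x)).
    Values near 0 indicate that x and y vary proportionally
    across observations; larger values indicate weaker proportionality.
    """
    lx, ly = np.log(x), np.log(y)
    return np.var(lx - ly) / np.var(lx)

x = np.array([1.0, 2.0, 4.0, 8.0])
print(phi(x, 3 * x))                            # ~0: y = 3x is exactly proportional
print(phi(x, np.array([1.0, 5.0, 2.0, 9.0])))   # clearly > 0: not proportional
```

Because the statistic depends only on log-ratios, it is unaffected by the overall scaling that relative (compositional) data lose, which is the property the abstract relies on.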
Abstract:
Cold-formed steel members are extensively used in the building construction industry, especially in residential, commercial and industrial buildings. In recent times, fire safety has become important in structural design due to increased fire damage to properties and loss of lives. However, past research into the fire performance of cold-formed steel members has been limited, and was confined to compression members. Therefore a research project was undertaken to investigate the structural behaviour of compact cold-formed steel lipped channel beams subject to inelastic local buckling and yielding, and lateral-torsional buckling effects under simulated fire conditions and associated section and member moment capacities. In the first phase of this research, an experimental study based on tensile coupon tests was undertaken to obtain the mechanical properties of elastic modulus and yield strength and the stress-strain relationship of cold-formed steels at uniform ambient and elevated temperatures up to 700°C. The mechanical properties deteriorated with increasing temperature and are likely to reduce the strength of cold-formed beams under fire conditions. Predictive equations were developed for yield strength and elastic modulus reduction factors while a modification was proposed for the stress-strain model at elevated temperatures. These results were used in the numerical modelling phases investigating the section and member moment capacities. The second phase of this research involved the development and validation of two finite element models to simulate the behaviour of compact cold-formed steel lipped channel beams subject to local buckling and yielding, and lateral-torsional buckling effects. Both models were first validated for elastic buckling. Lateral-torsional buckling tests of compact lipped channel beams were conducted at ambient temperature in order to validate the finite element model in predicting the non-linear ultimate strength behaviour.
The results from this experimental study did not agree well with those from the developed finite element model due to some unavoidable problems with testing. However, the study highlighted the importance of the magnitude and direction of initial geometric imperfection as well as the failure direction, and thus led to further enhancement of the finite element model. The finite element model for lateral-torsional buckling was then validated using the available experimental and numerical ultimate moment capacity results from past research. The third phase, based on the validated finite element models, included detailed parametric studies of section and member moment capacities of compact lipped channel beams at ambient temperature, and provided the basis for similar studies at elevated temperatures. The results showed the existence of inelastic reserve capacity for compact cold-formed steel beams at ambient temperature. However, full plastic capacity was not achieved by the mono-symmetric cold-formed steel beams. Suitable recommendations were made in relation to the accuracy and suitability of current design rules for section moment capacity. Comparison of member capacity results from finite element analyses with current design rules showed that they do not give accurate predictions of lateral-torsional buckling capacities at ambient temperature, and hence new design rules were developed. The fourth phase of this research investigated the section and member moment capacities of compact lipped channel beams at uniform elevated temperatures based on detailed parametric studies using the validated finite element models. The results showed the existence of inelastic reserve capacity at elevated temperatures. Suitable recommendations were made in relation to the accuracy and suitability of current design rules for section moment capacity in fire design codes and ambient temperature design codes, as well as those proposed by other researchers.
The results showed that lateral-torsional buckling capacities depend on the ratio of the yield strength and elastic modulus reduction factors and on the level of non-linearity in the stress-strain curves at elevated temperatures, in addition to the temperature itself. Current design rules do not include the effects of the non-linear stress-strain relationship and therefore their predictions were found to be inaccurate. Therefore a new design rule that uses a nonlinearity factor, defined as the ratio of the limit of proportionality to the yield stress at a given temperature, was developed for cold-formed steel beams subject to lateral-torsional buckling at elevated temperatures. This thesis presents the details and results of the experimental and numerical studies conducted in this research, including a comparison of results with predictions using available design rules. It also presents the recommendations made regarding the accuracy of current design rules, as well as the newly developed design rules for cold-formed steel beams at both ambient and elevated temperatures.
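The nonlinearity factor defined above is a simple ratio; the sketch below illustrates the arithmetic with hypothetical stress values (the numbers are not from the thesis):

```python
def nonlinearity_factor(limit_of_proportionality, yield_stress):
    """Ratio of the limit of proportionality to the yield stress at a
    given temperature. A value of 1.0 means the stress-strain curve is
    linear up to yield; lower values indicate a more rounded
    (more non-linear) curve at that temperature."""
    return limit_of_proportionality / yield_stress

# Hypothetical stresses (MPa) for a cold-formed steel at an elevated temperature:
print(nonlinearity_factor(180.0, 300.0))  # 0.6
```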
Abstract:
This paper argues that any future copyright policy should be proportional and flexible and be developed from a clear and evidence-based approach. An approach is required that carefully balances the incentives and rewards provided to economic rights holders against fundamental rights of privacy, self-expression, due process and the user rights embodied in copyright law to protect access, learning, critique, and reuse. This paper also suggests that while adequate enforcement measures are certainly part of a solution to a well-functioning lawful market, enforcement alone can never resolve the root cause of unlawful file-sharing, since it utterly fails to address supply-side market barriers. A focus on enforcement measures alone continues to leave a legitimate but unserved market demand susceptible to unlawful alternatives. A competitive and consumer-friendly digital content market and an appropriate legal framework to enable easy lawful access to digital content are essential preconditions for the creation of a culture of lawful, rather than unlawful, consumption.
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure in future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is the modelling of condition indicators and operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed based on the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternative models to PHM have been suggested. The existing covariate-based hazard models neglect to fully utilise three types of asset health information (including failure event data (i.e.
observed and/or suspended), condition data, and operating environment data) in a model to produce more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables) whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into the covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three available types of asset health information into the modelling of hazard and reliability predictions, and also captures the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; therefore they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few.
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard from the baseline hazard. These indicators are caused by the environment in which an asset operates, and have not been explicitly identified by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators could be nil in EHM, condition indicators are always present, because they are observed and measured as long as an asset is operational and has survived. EHM has several advantages over the existing covariate-based hazard models. One is that this model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators and the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. According to the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industry applications, due to sparse failure event data of assets, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of the semi-parametric EHM of a specified lifetime distribution for failure event histories, the non-parametric EHM, which is a distribution-free model, has been developed. The development of EHM into two forms is another merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimated results with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, regarding the new parameter estimation method in the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
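For reference, the classical PHM that these covariate-based models build on multiplies a baseline hazard by an exponential covariate function, h(t | z) = h0(t) · exp(β · z). The sketch below shows this standard form with a Weibull baseline; the parameter values are illustrative, and this is the generic PHM being critiqued, not the authors' EHM:

```python
import math

def weibull_phm_hazard(t, z, beta, shape=2.0, scale=1000.0):
    """Proportional Hazard Model: h(t | z) = h0(t) * exp(beta . z),
    with a Weibull baseline h0(t) = (shape/scale) * (t/scale)**(shape - 1).
    z    : covariate values (e.g. a vibration level) - illustrative
    beta : regression coefficients for the covariates
    """
    h0 = (shape / scale) * (t / scale) ** (shape - 1)
    return h0 * math.exp(sum(b * zi for b, zi in zip(beta, z)))

# The covariates scale the baseline hazard by the same factor at every t -
# this constant ratio is the "proportionality assumption" the abstract
# says EHM avoids.
h_base = weibull_phm_hazard(500.0, z=[0.0], beta=[0.8])
h_cov = weibull_phm_hazard(500.0, z=[1.5], beta=[0.8])
print(h_cov / h_base)  # exp(0.8 * 1.5), about 3.32, independent of t
```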
Abstract:
Indium tin oxide (ITO) and polycrystalline boron-doped diamond (BDD) have been examined in detail using the scanning electrochemical microscopy technique in feedback mode. For the interrogation of electrodes made from these materials, the choice of mediator has been varied. Using Ru(CN)₆⁴⁻(aq), ferrocene methanol (FcMeOH), Fe(CN)₆³⁻(aq) and Ru(NH₃)₆³⁺(aq), approach curve experiments have been performed, and for purposes of comparison, calculations of the apparent heterogeneous electron transfer rates (k_app) have been made using these data. In general, it would appear that values of k_app are affected mainly by the position of the mediator reversible potential relative to the relevant semiconductor band edge (associated with majority carriers). For both the ITO (n-type) and BDD (p-type) electrodes, charge transfer is impeded and values are very low when using FcMeOH and Fe(CN)₆³⁻(aq) as mediators, and the use of Ru(NH₃)₆³⁺(aq) results in the largest value of k_app. With ITO, the surface is chemically homogeneous and no variation is observed for any given mediator. Data are also presented where the potential of the ITO electrode is fixed using a ratio of the mediators Fe(CN)₆³⁻(aq) and Fe(CN)₆⁴⁻(aq). In stark contrast, the BDD electrode is quite the opposite, and a range of k_app values is observed for all mediators depending on the position on the surface. Both electrode surfaces are very flat and very smooth, and hence, for BDD, variations in feedback current imply a variation in the electrochemical activity. A comparison of the feedback current where the substrate is biased and unbiased shows a surprising degree of proportionality.
Abstract:
New public management (NPM), with its hands-on, private sector-style performance measurement, output control, parsimonious use of resources, disaggregation of public sector units and greater competition in the public sector, has significantly affected charitable and nonprofit organisations delivering community services (Hood, 1991; Dunleavy, 1994; George & Wilding, 2002). The literature indicates that nonprofit organisations under NPM believe they are doing more for less: while administration is increasing, core costs are not being met; their dependence on government funding comes at the expense of other funding strategies; and there are concerns about proportionality and power asymmetries in the relationship (Kerr & Savelsberg, 2001; Powell & Dalton, 2011; Smith, 2002, p. 175; Morris, 1999, 2000a). Government agencies are under increased pressure to do more with less, demonstrate value for money, measure social outcomes, not merely outputs, and minimise political risk (Grant, 2008; McGregor-Lowndes, 2008). Government-community service organisation relationships are often viewed as 'uneasy alliances' characterised by the pressures that come with the parties' differing roles and expectations and the pressures of funding and security (Productivity Commission, 2010, p. 308; McGregor-Lowndes, 2008, p. 45; Morris, 2000a). Significant community services are now delivered to citizens through such relationships, often to the most disadvantaged in the community, and it is important for this to be achieved with equity, efficiency and effectiveness. On one level, the welfare state was seen as a 'risk management system' for the poor, with the state mitigating the risks of sickness, job loss and old age (Giddens, 1999), with the subsequent neoliberalist outlook shifting this risk back to households (Hacker, 2006). At the core of this risk shift are written contracts.
Vincent-Jones (1999, 2006) has mapped how NPM is characterised by the use of written contracts for all manner of relations; e.g., regulation of dealings between government agencies, between individual citizens and the state, and the creation of quasi-markets of service providers and infrastructure partners. We take this lens of contracts to examine where risk falls in relation to the outsourcing of community services. First we examine the concept of risk. We consider how risk might be managed and apportioned between governments and community service organisations (CSOs) in grant agreements, which are quasi-market transactions at best. This is informed by insights from the law and economics literature. Then, standard grant agreements covering several years in two jurisdictions - Australia and the United Kingdom - are analysed to establish the risk allocation between government and CSOs. This is placed in the context of the reform agenda in both jurisdictions. In Australia this context is the nonprofit reforms built around the creation of a national charities regulator, and red tape reduction. In the United Kingdom, the backdrop is the Third Way agenda with its compacts, succeeded by Big Society in a climate of austerity. These 'case studies' inform a discussion about who is best placed to bear and manage the risks of community service provision on behalf of government. We conclude by identifying the lessons to be learned from our analysis and possible pathways for further scholarship.
Abstract:
We argue that safeguards are necessary to ensure human rights are adequately protected. All systems of blocking access to online content necessarily raise difficult and problematic issues of infringement of freedom of speech and access to information. Given the importance of access to information across the breadth of modern life, great care must be taken to ensure that any measures designed to protect copyright by blocking access to online locations are proportionate. Any measures to block access to online content must be carefully tailored to avoid serious and disproportionate impact on human rights. This means first that the measures must be effective and adapted to achieve a legitimate purpose. The experience of foreign jurisdictions suggests that this legislation is unlikely to be effective. Unless and until there is clear evidence that the proposed scheme is likely to increase effective returns to Australian creators, this legislation should not be introduced. Second, the principle of proportionality requires ensuring that the proposed legislation does not unnecessarily burden legitimate speech or access to information. As currently worded, the draft legislation may result in online locations being blocked even though they would, if operated in Australia, not contravene Australian law. This is unacceptable, and if introduced, the law should be drafted so that it is clearly limited only to foreign locations where there is clear and compelling evidence that the location would authorise copyright infringement if it were in Australia. Third, proportionality requires that measures are reasonable and strike an appropriate balance between competing interests. This draft legislation provides few safeguards for the public interest or the interests of private actors who would access legitimate information. 
New safeguards should be introduced to ensure that the public interest is well represented at both the stage of the primary application and at any applications to rescind or vary injunctions. We recommend that: The legislation not be introduced unless and until there is compelling evidence that it will have a real and significant positive impact on the effective incomes of Australian creators. The ‘facilitates an infringement’ test in s 115A(1)(b) should be replaced with ‘authorises infringement’. The ‘primary purpose’ test in s 115A(1)(c) should be replaced with: “the online location has no substantial non-infringing uses”. An explicit role for public interest groups as amici curiae should be introduced. Costs of successful applications should be borne by applicants. Injunctions should be valid only for renewable two year terms. Section 115A(5) should be clarified, and cl (b) and (c) be removed. The effectiveness of the scheme should be evaluated in two years.