273 results for Proportionality


Relevance: 20.00%

Abstract:

The paper considers a general model of electoral systems that combines district-based elections with a compensatory mechanism in order to implement any outcome between strictly majoritarian and purely proportional seat allocation. The model incorporates vote transfer and allows for three different correction formulas. Analysis in a two-party system shows that the dominant party faces a trade-off between its expected seat share and its chance of obtaining a majority. Vote transfer rules are also investigated with a focus on the possibility of manipulation. The model is applied to the 2014 Hungarian parliamentary election. Hypothetical results reveal that a vote transfer rule cannot be evaluated in itself, only together with the share of constituency seats; with an appropriate choice of the latter, the three mechanisms can be made functionally equivalent.
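The interplay between a district tier and a compensatory tier can be illustrated with a toy top-up allocation. This is a generic compensatory scheme sketched for illustration only, not the paper's vote-transfer model or any of its three correction formulas; all party names and numbers are hypothetical:

```python
def compensatory_allocation(votes, district_seats, n_list_seats):
    """Toy compensatory tier: list seats go, one at a time, to the party
    furthest below its proportional share of the enlarged house.
    Generic top-up scheme for illustration; not the paper's model."""
    total_votes = sum(votes.values())
    total_seats = sum(district_seats.values()) + n_list_seats
    seats = dict(district_seats)
    for _ in range(n_list_seats):
        # shortfall of each party relative to its proportional target
        shortfall = {p: votes[p] / total_votes * total_seats - seats[p]
                     for p in votes}
        seats[max(shortfall, key=shortfall.get)] += 1
    return seats
```

With 60/40 vote shares, a majoritarian district tier (8 of 10 districts to the larger party) and 10 compensatory seats, the final allocation is pulled back to the fully proportional 12/8.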

Relevance: 20.00%

Abstract:

Whilst the principle of proportionality indisputably plays a crucial role in the protection of fundamental rights, it is still unclear to what extent it applies to other fields in international law. The paper therefore explores the role it plays in selected fields of public international law, beyond human rights. The examination begins in the classical domain of reprisals and in maritime boundary delimitation and continues to analyse the role played in the law of multilateral trade regulation of the World Trade Organization and in bilateral investment protection. In an attempt to explain differences in recourse to proportionality in the various fields, we develop in our conclusions a distinction between horizontal and vertical constellations of legal protection.

Relevance: 10.00%

Abstract:

Cold-formed steel members are extensively used in the building construction industry, especially in residential, commercial and industrial buildings. In recent times, fire safety has become important in structural design due to increased fire damage to property and loss of life. However, past research into the fire performance of cold-formed steel members has been limited, and was confined to compression members. Therefore a research project was undertaken to investigate the structural behaviour of compact cold-formed steel lipped channel beams subject to inelastic local buckling and yielding, and lateral-torsional buckling effects under simulated fire conditions, and the associated section and member moment capacities. In the first phase of this research, an experimental study based on tensile coupon tests was undertaken to obtain the mechanical properties of elastic modulus and yield strength and the stress-strain relationship of cold-formed steels at uniform ambient and elevated temperatures up to 700°C. The mechanical properties deteriorated with increasing temperature and are likely to reduce the strength of cold-formed beams under fire conditions. Predictive equations were developed for yield strength and elastic modulus reduction factors, while a modification was proposed for the stress-strain model at elevated temperatures. These results were used in the numerical modelling phases investigating the section and member moment capacities. The second phase of this research involved the development and validation of two finite element models to simulate the behaviour of compact cold-formed steel lipped channel beams subject to local buckling and yielding, and lateral-torsional buckling effects. Both models were first validated for elastic buckling. Lateral-torsional buckling tests of compact lipped channel beams were conducted at ambient temperature in order to validate the finite element model in predicting the non-linear ultimate strength behaviour.
The results from this experimental study did not agree well with those from the developed experimental finite element model due to some unavoidable problems with testing. However, it highlighted the importance of magnitude and direction of initial geometric imperfection as well as the failure direction, and thus led to further enhancement of the finite element model. The finite element model for lateral-torsional buckling was then validated using the available experimental and numerical ultimate moment capacity results from past research. The third phase based on the validated finite element models included detailed parametric studies of section and member moment capacities of compact lipped channel beams at ambient temperature, and provided the basis for similar studies at elevated temperatures. The results showed the existence of inelastic reserve capacity for compact cold-formed steel beams at ambient temperature. However, full plastic capacity was not achieved by the mono-symmetric cold-formed steel beams. Suitable recommendations were made in relation to the accuracy and suitability of current design rules for section moment capacity. Comparison of member capacity results from finite element analyses with current design rules showed that they do not give accurate predictions of lateral-torsional buckling capacities at ambient temperature and hence new design rules were developed. The fourth phase of this research investigated the section and member moment capacities of compact lipped channel beams at uniform elevated temperatures based on detailed parametric studies using the validated finite element models. The results showed the existence of inelastic reserve capacity at elevated temperatures. Suitable recommendations were made in relation to the accuracy and suitability of current design rules for section moment capacity in fire design codes, ambient temperature design codes as well as those proposed by other researchers. 
The results showed that lateral-torsional buckling capacities depend on the ratio of the yield strength and elastic modulus reduction factors and on the level of non-linearity in the stress-strain curves at elevated temperatures, in addition to the temperature itself. Current design rules do not include the effects of the non-linear stress-strain relationship, and their predictions were therefore found to be inaccurate. A new design rule using a non-linearity factor, defined as the ratio of the limit of proportionality to the yield stress at a given temperature, was therefore developed for cold-formed steel beams subject to lateral-torsional buckling at elevated temperatures. This thesis presents the details and results of the experimental and numerical studies conducted in this research, including a comparison of results with predictions using available design rules. It also presents the recommendations made regarding the accuracy of current design rules, as well as the newly developed design rules for cold-formed steel beams at both ambient and elevated temperatures.

Relevance: 10.00%

Abstract:

This paper argues that any future copyright policy should be proportional and flexible, and developed from a clear and evidence-based approach. An approach is required that carefully balances the incentives and rewards provided to economic rights holders against the fundamental rights of privacy, self-expression and due process, and the user rights embodied in copyright law to protect access, learning, critique and reuse. This paper also suggests that while adequate enforcement measures are certainly part of a well-functioning lawful market, enforcement alone can never solve the root cause of unlawful file-sharing, since it utterly fails to address supply-side market barriers. A focus on enforcement measures alone continues to leave a legitimate but unserved market demand susceptible to unlawful alternatives. A competitive and consumer-friendly digital content market and an appropriate legal framework enabling easy lawful access to digital content are essential preconditions for the creation of a culture of lawful, rather than unlawful, consumption.

Relevance: 10.00%

Abstract:

The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operational downtime and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to obtain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset: condition indicators reflect the level of degradation of an asset, while operating environment indicators accelerate or decelerate its lifetime. When these data are available, an alternative to traditional reliability analysis is to model condition indicators, operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed, all based on the underlying theory of the Proportional Hazards Model (PHM). However, most of these models have attracted little attention in the field of machinery prognostics. Moreover, owing to the prominence of PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise the three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) in a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data: condition indicators act as response variables (dependent variables), whereas operating environment indicators act as explanatory variables (independent variables). However, these non-homogeneous covariate data were modelled in the same way in the existing covariate-based hazard models. The related and yet more imperative question is how both types of indicator should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of asset health information into hazard and reliability prediction, and captures the relationship between actual asset health and both condition and operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; they therefore update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few.
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the hazard relative to the baseline. These indicators arise from the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g., loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may vanish in EHM, condition indicators are always present, since they are observed and measured for as long as an asset remains operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (population characteristics, condition indicators, and operating environment indicators) to predict hazard and reliability effectively. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is developed in two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e., the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, failure event data for assets are sparse, and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive distributional assumption of the semi-parametric EHM, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models was appraised by comparing their estimates with those of the existing covariate-based hazard models; the comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
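For contrast with the proportionality assumption that the thesis relaxes, the classic Weibull proportional-hazards computation can be sketched in a few lines. This is the standard PHM form that EHM generalises, not the EHM itself; all parameter names here are illustrative:

```python
import math

def weibull_phm_hazard(t, shape, scale, covariates, coefficients):
    """Standard Weibull proportional-hazards form:
    h(t, z) = h0(t) * exp(sum(b_i * z_i)).
    Shown only for contrast with EHM, which folds condition
    indicators into the baseline hazard itself."""
    # Weibull baseline hazard h0(t)
    h0 = (shape / scale) * (t / scale) ** (shape - 1)
    # covariates scale the baseline multiplicatively (the PHM assumption)
    return h0 * math.exp(sum(b * z for b, z in zip(coefficients, covariates)))
```

In this form the covariates can only scale the baseline hazard up or down by a constant factor, which is precisely the restriction the abstract criticises when condition indicators carry time-varying information about degradation.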

Relevance: 10.00%

Abstract:

Indium tin oxide (ITO) and polycrystalline boron-doped diamond (BDD) have been examined in detail using the scanning electrochemical microscopy technique in feedback mode. For the interrogation of electrodes made from these materials, the choice of mediator has been varied. Using Ru(CN)₆⁴⁻(aq), ferrocene methanol (FcMeOH), Fe(CN)₆³⁻(aq) and Ru(NH₃)₆³⁺(aq), approach curve experiments have been performed and, for purposes of comparison, the apparent heterogeneous electron transfer rates (k_app) have been calculated from these data. In general, values of k_app appear to be affected mainly by the position of the mediator reversible potential relative to the relevant semiconductor band edge (associated with majority carriers). For both the ITO (n-type) and BDD (p-type) electrodes, charge transfer is impeded and values are very low when using FcMeOH and Fe(CN)₆³⁻(aq) as mediators, while the use of Ru(NH₃)₆³⁺(aq) results in the largest value of k_app. With ITO, the surface is chemically homogeneous and no variation is observed for any given mediator. Data are also presented where the potential of the ITO electrode is fixed using a ratio of the mediators Fe(CN)₆³⁻(aq) and Fe(CN)₆⁴⁻(aq). In stark contrast, the BDD electrode shows a range of k_app values for all mediators, depending on the position on the surface. Both electrode surfaces are very flat and very smooth; hence, for BDD, variations in feedback current imply variations in electrochemical activity. A comparison of the feedback current with the substrate biased and unbiased shows a surprising degree of proportionality.

Relevance: 10.00%

Abstract:

New public management (NPM), with its hands-on, private sector-style performance measurement, output control, parsimonious use of resources, disaggregation of public sector units and greater competition in the public sector, has significantly affected charitable and nonprofit organisations delivering community services (Hood, 1991; Dunleavy, 1994; George & Wilding, 2002). The literature indicates that nonprofit organisations under NPM believe they are doing more for less: while administration is increasing, core costs are not being met; their dependence on government funding comes at the expense of other funding strategies; and there are concerns about proportionality and power asymmetries in the relationship (Kerr & Savelsberg, 2001; Powell & Dalton, 2011; Smith, 2002, p. 175; Morris, 1999, 2000a). Government agencies are under increased pressure to do more with less, demonstrate value for money, measure social outcomes rather than mere outputs, and minimise political risk (Grant, 2008; McGregor-Lowndes, 2008). Government-community service organisation relationships are often viewed as 'uneasy alliances', characterised by the pressures that come with the parties' differing roles and expectations and the pressures of funding and security (Productivity Commission, 2010, p. 308; McGregor-Lowndes, 2008, p. 45; Morris, 2000a). Significant community services are now delivered to citizens through such relationships, often to the most disadvantaged in the community, and it is important for this to be achieved equitably, efficiently and effectively. On one level, the welfare state was seen as a 'risk management system' for the poor, with the state mitigating the risks of sickness, job loss and old age (Giddens, 1999), with the subsequent neoliberal outlook shifting this risk back to households (Hacker, 2006). At the core of this risk shift are written contracts.
Vincent-Jones (1999, 2006) has mapped how NPM is characterised by the use of written contracts for all manner of relations; e.g., regulation of dealings between government agencies, between individual citizens and the state, and the creation of quasi-markets of service providers and infrastructure partners. We take this lens of contracts to examine where risk falls in relation to the outsourcing of community services. First, we examine the concept of risk. We consider how risk might be managed and apportioned between governments and community service organisations (CSOs) in grant agreements, which are quasi-market transactions at best. This is informed by insights from the law and economics literature. Then, standard grant agreements covering several years in two jurisdictions - Australia and the United Kingdom - are analysed to establish the risk allocation between government and CSOs. This is placed in the context of the reform agenda in both jurisdictions. In Australia, this context is the nonprofit reforms built around the creation of a national charities regulator, and red tape reduction. In the United Kingdom, the backdrop is the Third Way agenda with its compacts, succeeded by the Big Society in a climate of austerity. These 'case studies' inform a discussion about who is best placed to bear and manage the risks of community service provision on behalf of government. We conclude by identifying the lessons to be learned from our analysis and possible pathways for further scholarship.

Relevance: 10.00%

Abstract:

We argue that safeguards are necessary to ensure human rights are adequately protected. All systems of blocking access to online content necessarily raise difficult and problematic issues of infringement of freedom of speech and access to information. Given the importance of access to information across the breadth of modern life, great care must be taken to ensure that any measures designed to protect copyright by blocking access to online locations are proportionate. Any measures to block access to online content must be carefully tailored to avoid serious and disproportionate impact on human rights. This means first that the measures must be effective and adapted to achieve a legitimate purpose. The experience of foreign jurisdictions suggests that this legislation is unlikely to be effective. Unless and until there is clear evidence that the proposed scheme is likely to increase effective returns to Australian creators, this legislation should not be introduced. Second, the principle of proportionality requires ensuring that the proposed legislation does not unnecessarily burden legitimate speech or access to information. As currently worded, the draft legislation may result in online locations being blocked even though they would, if operated in Australia, not contravene Australian law. This is unacceptable, and if introduced, the law should be drafted so that it is clearly limited only to foreign locations where there is clear and compelling evidence that the location would authorise copyright infringement if it were in Australia. Third, proportionality requires that measures are reasonable and strike an appropriate balance between competing interests. This draft legislation provides few safeguards for the public interest or the interests of private actors who would access legitimate information. 
New safeguards should be introduced to ensure that the public interest is well represented both at the stage of the primary application and at any applications to rescind or vary injunctions. We recommend that:
- the legislation not be introduced unless and until there is compelling evidence that it will have a real and significant positive impact on the effective incomes of Australian creators;
- the 'facilitates an infringement' test in s 115A(1)(b) be replaced with 'authorises infringement';
- the 'primary purpose' test in s 115A(1)(c) be replaced with 'the online location has no substantial non-infringing uses';
- an explicit role for public interest groups as amici curiae be introduced;
- costs of successful applications be borne by applicants;
- injunctions be valid only for renewable two-year terms;
- s 115A(5) be clarified, and cl (b) and (c) be removed;
- the effectiveness of the scheme be evaluated in two years.

Relevance: 10.00%

Abstract:

Recent measurements of the resistivity of (La,Sr)2CuO4 are shown to fit within the general framework of Luttinger liquid transport theory. They exhibit a crossover from the spin-charge separated 'holon nondrag regime' usually observed, with ρ_ab ~ T, to a 'localizing' regime dominated by impurity scattering at low temperature. The proportionality of ρ_c and ρ_ab and the giant anisotropy follow directly from the theory.

Relevance: 10.00%

Abstract:

We evaluate the mixed partition function for dyonic BPS black holes using the recently proposed degeneracy formula for the STU model. The result factorizes into the OSV mixed partition function times a proportionality factor. The latter is in agreement with the measure factor that was recently conjectured for a class of N = 2 black holes that contains the STU model.

Relevance: 10.00%

Abstract:

A theory and generalized synthesis procedure are advocated for the design of weir notches and orifice-notches having a base of any given shape, to a depth a, such that the discharge is proportional to any single monotonically increasing function of the depth of flow measured above a certain datum. The problem is reduced to finding an exact solution of a Volterra integral equation of Abel form. The maximization of the depth of the datum below the crest of the notch is investigated. Proof is given that for a weir notch made of one continuous curve, with flow proportional to the mth power of the head, it is impossible to bring the datum lower than (2m − 1)a below the crest. A new concept of an orifice-notch, having a discontinuity in the curve and a division of the flow into two distinct portions, is presented. The division of flow is shown to have a beneficial effect in lowering the datum below (2m − 1)a from the crest while still maintaining the proportionality of the flow. Experiments with one such orifice-notch showed a constant coefficient of discharge of 0.625. The importance of this analysis in the design of grit chambers is emphasized.
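The head-discharge proportionality can be checked numerically: for a notch whose width varies as w(y) = y^(m − 3/2), the weir discharge integral Q(H) ∝ ∫₀^H w(y)·√(H − y) dy scales as H^m. A rough midpoint-rule check, for illustration only (the base region and the datum shift discussed in the abstract are omitted):

```python
import math

def discharge(H, m, n=20000):
    """Midpoint-rule evaluation of Q(H) proportional to
    integral_0^H w(y) * sqrt(H - y) dy, with notch width
    w(y) = y**(m - 1.5). Illustrative proportionality check only;
    physical constants and the base region are left out."""
    dy = H / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * dy  # midpoint of the i-th strip
        total += y ** (m - 1.5) * math.sqrt(H - y) * dy
    return total
```

Doubling the head should then multiply the discharge by 2^m; for m = 2 the ratio Q(2H)/Q(H) comes out close to 4.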

Relevance: 10.00%

Abstract:

Random walks describe diffusion processes, where movement at every time step is restricted to the neighboring locations. We construct a quantum random walk algorithm based on a discretization of the Dirac evolution operator inspired by staggered lattice fermions, and use it to investigate the spatial search problem, that is, finding a marked vertex on a d-dimensional hypercubic lattice. The restriction on movement hardly matters for d > 2, and scaling behavior close to that of Grover's optimal algorithm (which has no restriction on movement) can be achieved. Using numerical simulations, we optimize the proportionality constants of the scaling behavior and demonstrate the approach to that of Grover's algorithm (equivalent to the mean-field theory, or the d → ∞ limit). In particular, the scaling behavior for d = 3 is only about 25% higher than the optimal d → ∞ value.
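For reference, the √N scaling that these lattice walks approach is set by Grover's algorithm, whose optimal iteration count for a single marked item among N is approximately (π/4)·√N. A one-line sketch (the function name is illustrative, not from the paper):

```python
import math

def grover_iterations(N):
    """Approximate optimal Grover iteration count for one marked item
    among N: (pi/4) * sqrt(N). The lattice walks approach this sqrt(N)
    scaling for d > 2, up to the proportionality constants that the
    paper optimizes numerically."""
    return round(math.pi / 4 * math.sqrt(N))
```

For N = 10^6 this gives 785 iterations, against the N/2 queries a classical search would need on average.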

Relevance: 10.00%

Abstract:

We investigate the spatial search problem on the two-dimensional square lattice, using the Dirac evolution operator discretized according to the staggered lattice fermion formalism. d = 2 is the critical dimension for the spatial search problem, where infrared divergence of the evolution operator leads to logarithmic factors in the scaling behavior. As a result, the construction used in our accompanying article [A. Patel and M. A. Rahaman, Phys. Rev. A 82, 032330 (2010)] provides an O(√N ln N) algorithm, which is not optimal. The scaling behavior can be improved to O(√(N ln N)) by cleverly controlling the massless Dirac evolution operator with an ancilla qubit, as proposed by Tulsi [Phys. Rev. A 78, 012310 (2008)]. We reinterpret the ancilla control as the introduction of an effective mass at the marked vertex, and optimize the proportionality constants of the scaling behavior of the algorithm by numerically tuning the parameters.

Relevance: 10.00%

Abstract:

Flow visualization studies of plane laminar bubble plumes have been conducted to yield quantitative data on the transition height, wavelength and wave velocity of the most unstable disturbance leading to transition. These are believed to be the first results of this kind, as most earlier studies are restricted to turbulent bubble plumes. In the present study, the bubble plumes were generated by electrolysis of water, giving very fine control over bubble size distribution and gas flow rate and thereby enabling studies of laminar bubble plumes. The present observations show that (a) the dominant mode of instability in plane bubble plumes is the sinuous mode, (b) transition height and wavelength are related linearly, with a proportionality constant of about 4, (c) the wave velocity is about 40% of the mean plume velocity, and (d) normalized transition height data correlate very well with a source Grashof number. Both agreement and differences in the transition characteristics of bubble plumes have been observed compared with similar single-phase flows.

Relevance: 10.00%

Abstract:

We study the hydrodynamic properties of strongly coupled SU(N) Yang-Mills theory of the D1-brane at finite temperature and at a non-zero density of R-charge in the framework of gauge/gravity duality. The gravity dual description involves a charged black hole solution of an Einstein-Maxwell-dilaton system in three dimensions, obtained by a consistent truncation of the spinning D1-brane in 10 dimensions. We evaluate the thermal and electrical conductivity as well as the bulk viscosity as a function of the chemical potential conjugate to the R-charges of the D1-brane. We show that the ratio of bulk viscosity to entropy density is independent of the chemical potential and is equal to 1/(4π). The thermal conductivity and bulk viscosity obey a relationship similar to the Wiedemann-Franz law. We show that at the boundary of thermodynamic stability, the charge diffusion mode becomes unstable and the transport coefficients exhibit critical behaviour. Our method for evaluating the transport coefficients relies on expressing the second order differential equations in terms of a first order equation which dictates the radial evolution of the transport coefficient. The radial evolution equations can be solved exactly for the transport coefficients of interest. We observe that the transport coefficients of the D1-brane theory are related to those of the M2-brane by an overall proportionality constant which sets the dimensions.