153 results for Inequality measures
Abstract:
Although conditioning is routinely used in mechanical tests of tendon in vitro, previous in vivo research evaluating the influence of body anthropometry on Achilles tendon thickness has not considered the potential effects of conditioning on tendon structure. This study evaluated the relationship between Achilles tendon thickness and body anthropometry in healthy adults both before and after resistive ankle plantarflexion exercise. A convenience sample of 30 healthy male adults underwent sonographic examination of the Achilles tendon in addition to standard anthropometric measures of stature and body weight. A 10-5 MHz linear array transducer was used to acquire longitudinal sonograms of the Achilles tendon, 20 mm proximal to the tendon insertion. Participants then completed a series (90-100 repetitions) of conditioning exercises against an effective resistance of between 100% and 150% of body weight. Longitudinal sonograms were repeated immediately on completion of the exercise intervention, and anteroposterior Achilles tendon thickness was determined. Achilles tendon thickness was significantly reduced immediately following conditioning exercise (t = 9.71, P < 0.001), equivalent to an average transverse strain of -18.8%. In contrast to preexercise measures, Achilles tendon thickness after exercise was significantly correlated with body weight (r = 0.72, P < 0.001) and, to a lesser extent, with height (r = 0.45, P < 0.01) and body mass index (r = 0.63, P < 0.001). Conditioning of the Achilles tendon via resistive ankle exercise induces alterations in tendon structure that substantially improve correlations between Achilles tendon thickness and body anthropometry. It is recommended that conditioning exercises, which standardize the load history of the tendon, be employed before sonographic measurement of tendon thickness in vivo.
Abstract:
Forecasts of volatility and correlation are important inputs into many practical financial problems. Broadly speaking, there are two ways of generating forecasts of these variables. Firstly, time-series models apply a statistical weighting scheme to historical measurements of the variable of interest. The alternative methodology extracts forecasts from the market-traded value of option contracts. An efficient options market should be able to produce superior forecasts, as it utilises a larger information set comprising not only historical information but also the equilibrium expectations of options market participants. While much research has been conducted into the relative merits of these approaches, this thesis extends the literature along several lines through three empirical studies. Firstly, it is demonstrated that there are statistically significant benefits to adjusting implied volatility for the volatility risk premium for the purposes of univariate volatility forecasting. Secondly, high-frequency option-implied measures are shown to produce superior forecasts of the stochastic component of intraday volatility, which in turn yield superior forecasts of total intraday volatility. Finally, the use of realised and option-implied measures of equicorrelation is shown to dominate measures based on daily returns.
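As a rough sketch of the two forecasting routes described above, the code below contrasts a RiskMetrics-style EWMA forecast built from historical returns with an implied-volatility forecast scaled by a simple risk-premium correction. The function names, toy data and the ratio-based adjustment are illustrative assumptions, not the thesis's methodology.

```python
import numpy as np

def ewma_vol_forecast(returns, lam=0.94):
    """Time-series route: RiskMetrics-style EWMA variance forecast."""
    var = np.var(returns)  # initialise with the sample variance
    for r in returns:
        var = lam * var + (1 - lam) * r**2
    return np.sqrt(var)

def adjusted_implied_forecast(implied_vol, realised_hist, implied_hist):
    """Option-implied route: scale today's implied vol by the average
    historical ratio of realised to implied vol, a crude correction
    for the volatility risk premium."""
    premium_ratio = np.mean(realised_hist / implied_hist)
    return implied_vol * premium_ratio

# Toy data: daily returns plus matched implied/realised vol histories.
rng = np.random.default_rng(0)
rets = rng.normal(0, 0.01, 500)
iv_hist = np.full(60, 0.012)   # hypothetical implied vols
rv_hist = np.full(60, 0.010)   # subsequently realised vols
print(ewma_vol_forecast(rets))
print(adjusted_implied_forecast(0.013, rv_hist, iv_hist))
```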
Abstract:
While the justice implications of climate change are well understood by the international climate regime, solutions that meaningfully address climate injustice are still emerging. This article explores how a number of different theories of justice have influenced the development of international climate regime policies and measures, examining in turn the theories of remedial justice, environmental justice, energy justice, social justice and international justice. No one theory of justice can respond to the multifaceted justice implications that arise as a result of climate change; rather, it is argued that a variety of lenses of justice are useful when examining issues of injustice in the climate context, and that articulating the justice implications of climate change by reference to theories of justice helps clarify the key issues giving rise to injustice. This article finds that while the regime has made some progress in recognising the injustices associated with climate change, such recognition is piecemeal, and many of the policies and measures discussed within this article need to be either scaled up or extended into more far-reaching policies and measures to overcome climate justice concerns. Overall, it is suggested that climate justice concerns need to be clearly enunciated within key adaptation instruments so as to provide a legal and legitimate basis upon which to leverage action.
Abstract:
Purpose: The objective was to investigate the association between corneal sensitivity and established measures of diabetic peripheral neuropathy (DPN). Methods: Corneal sensitivity was measured using a non-contact corneal aesthesiometer in 93 diabetic individuals with neuropathy, 146 diabetic individuals without neuropathy and 61 control individuals without diabetes or neuropathy, at the baseline visit of a five-year longitudinal natural history study of DPN. The correlation between corneal sensitivity and established measures of neuropathy was estimated, and multi-dimensional scaling was used to represent similarities and dissimilarities between variables. Results: The corneal sensitivity threshold was significantly correlated with the majority of established measures of DPN; correlation coefficients ranged from -0.32 to 0.26. Under multi-dimensional scaling, non-contact corneal aesthesiometry was closest to the neuropathy disability score, the diabetic neuropathy symptom score and Neuropad, and most dissimilar to electrophysiological parameters and quantitative sensory testing. Conclusion: Corneal sensitivity, although not strongly related, is associated with other functional measures of DPN and might provide a useful adjunct in identifying functional loss of small nerve fibre integrity.
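The multi-dimensional scaling step can be illustrated with a short sketch: a matrix of absolute correlations between measures is converted to dissimilarities and embedded in two dimensions. The measure names and correlation values below are placeholders, not the study's data, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical absolute correlations between five neuropathy measures.
measures = ["NCCA", "NDS", "Neuropad", "QST", "Electrophysiology"]
corr = np.array([
    [1.00, 0.30, 0.26, 0.15, 0.10],
    [0.30, 1.00, 0.28, 0.20, 0.18],
    [0.26, 0.28, 1.00, 0.22, 0.15],
    [0.15, 0.20, 0.22, 1.00, 0.35],
    [0.10, 0.18, 0.15, 0.35, 1.00],
])

# Convert similarity (|r|) to dissimilarity and embed in 2-D: nearby
# points correspond to measures that behave similarly.
dissim = 1.0 - np.abs(corr)
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
for name, (x, y) in zip(measures, coords):
    print(f"{name}: ({x:.2f}, {y:.2f})")
```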
Abstract:
Noncompliance with speed limits is one of the major safety concerns in roadwork zones. Although numerous studies have attempted to evaluate the effectiveness of safety measures on speed limit compliance, many report inconsistent findings. This paper reviews the effectiveness of four categories of roadwork zone speed control measures: informational, physical, enforcement and educational. The evidence indicates that informational measures (static signage, variable message signage) have small to moderate effects on speed reduction, whereas physical measures (rumble strips, optical speed bars) are ineffective for transient and moving work zones. Enforcement measures (speed cameras, police presence) have the greatest effects, while educational measures also have significant potential to improve public awareness of roadworker safety and to encourage slower speeds in work zones. Inadequate public understanding of roadwork risks and hazards, failure to notice signs, and poor appreciation of safety measures are the major causes of noncompliance with speed limits.
Abstract:
The standard approach to tax compliance applies the economics-of-crime methodology pioneered by Becker (1968). In its first application, due to Allingham and Sandmo (1972), it models the behaviour of agents as a decision over how much of their income to report to the tax authorities, given a certain institutional environment represented by parameters such as the probability of detection and the penalties imposed in the event the agent is caught. While this basic framework yields important insights on tax compliance behaviour, it has some critical limitations. Specifically, it predicts a level of compliance that is significantly below what is observed in the data. This thesis revisits the original framework with a view to addressing this issue and to examining the political economy implications of tax evasion for progressivity in the tax structure. The approach involves building a macroeconomic, dynamic equilibrium model through a step-wise model-building procedure, starting with some very simple variations of the basic Allingham and Sandmo construct that are eventually integrated into a dynamic general equilibrium overlapping generations framework with heterogeneous agents. One of the variations incorporates the Allingham and Sandmo construct into a two-period model of a small open economy of the type originally attributed to Fisher (1930). A further variation allows agents to decide initially whether to evade taxes or not; if they decide to evade, they must then decide the extent of income or wealth to under-report. We find that the ‘evade or not’ assumption has strikingly different and more realistic implications for the extent of evasion, and demonstrate that it is a more appropriate modelling strategy in the context of macroeconomic models, which are essentially dynamic in nature and involve consumption smoothing across time and across various states of nature. Specifically, since the decision to evade affects the agent's ability to smooth consumption by creating two states of nature, in which the agent is either ‘caught’ or ‘not caught’, it is possible that the agent's utility under certainty, when choosing not to evade, exceeds the expected utility obtained from evading. Furthermore, the simple two-period model incorporating an ‘evade or not’ choice can be used to demonstrate some strikingly different political economy implications relative to its Allingham and Sandmo counterpart. In variations of the two models that allow for voting on the tax parameter, we find that agents typically vote for a high degree of progressivity by choosing the highest available tax rate from the menu of choices available to them. There is, however, a small range of inequality levels for which agents in the ‘evade or not’ model vote for a relatively low value of the tax rate. The final steps in the model-building procedure graft the two-period models with a political economy choice into a dynamic overlapping generations setting with more general, non-linear tax schedules and a ‘cost-of-evasion’ function that is increasing in the extent of evasion. Results based on numerical simulations of these models show further improvement in the model's ability to match empirically plausible levels of tax evasion.
In addition, the differences between the political economy implications of the ‘evade or not’ version of the model and its Allingham and Sandmo counterpart are now very striking: there is a large range of values of the inequality parameter for which agents in the ‘evade or not’ model vote for a low degree of progressivity. This is because, in the ‘evade or not’ version of the model, low values of the tax rate encourage a large number of agents to choose the ‘not-evade’ option, so that the redistributive mechanism is more ‘efficient’ relative to situations in which tax rates are high. Some further implications of the models concern whether variations in the level of inequality, and in parameters such as the probability of detection and the penalties for tax evasion, matter for the political economy results. We find that (i) the political economy outcomes for the tax rate are quite insensitive to changes in inequality, and (ii) the voting outcomes change in non-monotonic ways in response to changes in the probability of detection and penalty rates. Specifically, the model suggests that changes in inequality should not matter, although the political outcome for the tax rate for a given level of inequality is conditional on whether the extent of evasion in the economy is large or small. We conclude that further theoretical research into macroeconomic models of tax evasion is required to identify the structural relationships underpinning the link between inequality and redistribution in the presence of tax evasion. The models of this thesis provide a necessary first step in that direction.
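A minimal sketch of the underlying decision problem may help fix ideas: the classic Allingham and Sandmo reporting choice, augmented with the ‘evade or not’ stage described above. CRRA utility and all parameter values are illustrative assumptions, not the thesis's calibration.

```python
import numpy as np

def crra(c, sigma=2.0):
    """CRRA utility (an assumed functional form)."""
    return c**(1 - sigma) / (1 - sigma)

def expected_utility(report, income=1.0, tax=0.3, p=0.05, fine=1.5):
    """Report a fraction of income; if caught (probability p), pay a
    proportional fine on the tax evaded on the undeclared amount."""
    undeclared = income * (1 - report)
    c_not_caught = income - tax * report * income
    c_caught = c_not_caught - fine * tax * undeclared
    return (1 - p) * crra(c_not_caught) + p * crra(c_caught)

# Interior Allingham-Sandmo choice: grid search over reported fractions.
grid = np.linspace(0.01, 1.0, 200)
best = grid[np.argmax([expected_utility(r) for r in grid])]

# 'Evade or not' stage: compare the certain utility of full compliance
# with the best expected utility attainable by evading.
u_comply = crra(1.0 - 0.3)
u_evade = expected_utility(best)
print(f"optimal report if evading: {best:.2f}")
print("decision:", "evade" if u_evade > u_comply else "comply")
```

With these illustrative parameters the expected gain from under-reporting outweighs the detection risk, echoing the compliance puzzle noted above: the basic framework predicts far more evasion than is observed in practice.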
Abstract:
"Defrauding land titles systems impacts upon us all. Those who deal in land include ordinary citizens, big business, small business, governments, not-for-profit organisation, deceased estates...Fraud here touches almost everybody." the thesis presented in this paper is that the current and disparate steps taken by jurisdictions to alleviate land fraud associated with identity-based crimes are inadequate. The centrepiece of the analysis is the consideration of two scenarios that have recently occurred. One is the typical scenario where a spouse forges the partner's signature to obtain a mortgage from a financial institution. The second is atypical. It involves a sophisticated overseas fraud duping many stakeholders involved in the conveyancing process. After outlining these scenarios, we will examine how identity verification requirements of the United Kingdom, Ontario, the Australian states, and New Zealand would have been applied to these two frauds. Our conclusion is that even though some jurisdictions may have prevented the frauds from occurring, the current requirements are inadequate. We use the lessons learnt to propose what we consider core principles for identity verification in land transactions.
Abstract:
Objectives: To compare measures of fat-free mass (FFM) from three different bioelectrical impedance analysis (BIA) devices and to assess the agreement between three different equations validated in older adult and/or overweight populations. Design: Cross-sectional study. Setting: Orthopaedics ward of a Brisbane public hospital, Australia. Participants: Twenty-two overweight, older Australians (age 72 ± 6.4 yr, BMI 34 ± 5.5 kg/m2) with knee osteoarthritis. Measurements: Body composition was measured using three BIA devices: Tanita 300-GS (foot-to-foot), Impedimed DF50 (hand-to-foot) and Impedimed SFB7 (bioelectrical impedance spectroscopy, BIS). Three equations for predicting FFM were selected based on their validation in older adult and/or overweight populations. Impedance values were extracted from the hand-to-foot BIA device and entered into the equations to estimate FFM. Results: The mean FFM measured by BIS (57.6 kg ± 9.1) differed significantly from those measured by foot-to-foot (54.6 kg ± 8.7) and hand-to-foot BIA (53.2 kg ± 10.5) (P < 0.001). The mean ± SD FFM values predicted by the three equations using raw data from hand-to-foot BIA were 54.7 kg ± 8.9, 54.7 kg ± 7.9 and 52.9 kg ± 11.05, respectively; these did not differ from the FFM measured by the hand-to-foot device (F = 2.66, P = 0.118). Conclusions: Our results suggest that foot-to-foot and hand-to-foot BIA may be used interchangeably in overweight older adults at the group level, but the large limits of agreement may lead to unacceptable error in individuals. There was no difference between the three prediction equations; however, these results should be confirmed in a larger sample and against a reference standard.
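The limits of agreement mentioned in the conclusion come from a standard Bland-Altman analysis, sketched below with simulated placeholder FFM values rather than the study's measurements.

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between paired measures."""
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Simulated paired FFM readings (kg) for 22 participants.
rng = np.random.default_rng(1)
ffm_hand_to_foot = rng.normal(53, 10, 22)
ffm_foot_to_foot = ffm_hand_to_foot + rng.normal(1.4, 2.5, 22)

bias, lower, upper = bland_altman(ffm_foot_to_foot, ffm_hand_to_foot)
print(f"bias {bias:.1f} kg, 95% limits of agreement [{lower:.1f}, {upper:.1f}] kg")
```

A small mean bias with wide limits of agreement is exactly the pattern that permits group-level interchangeability while producing unacceptable error for individuals.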
Abstract:
Food insecurity is the limited access to, or availability of, nutritious, culturally-appropriate and safe foods, or the inability to access these foods by socially acceptable means. In Australia, the monitoring of food insecurity is limited to a single item included in the three-yearly National Health Survey (NHS). The current research comprised a) a review of the literature and the available tools for measuring food security, b) piloting and adaptation of the more comprehensive 16-item United States Department of Agriculture (USDA) Food Security Survey Module (FSSM), and c) a cross-sectional study comparing this more comprehensive tool, and its 10- and 6-item short forms, with the single item currently used in the NHS, among a sample of households in disadvantaged urban areas of Brisbane, Australia. Findings show that, internationally, the 16-item USDA-FSSM is the most widely used tool for the measurement of food insecurity and that, among the validated tools for measuring food insecurity, sensitivity and reliability decline as the number of questions in a tool decreases. In the Australian sample, the single measure currently used in the NHS yielded a significantly lower prevalence of food insecurity than the 16-item USDA-FSSM and its two shorter forms (four and two percentage points lower, respectively). These findings suggest that the current prevalence of food insecurity (estimated at 6% in the most recent NHS) may be underestimated, and they have important implications for the development of an effective means of monitoring food security within the context of a developed country.
Abstract:
A characteristic of Parkinson's disease (PD) is the development of tremor within the 4–6 Hz range. One method used to better understand pathological tremor is to compare it with tremor-type actions generated intentionally by healthy adults. This study investigated the similarities and differences between voluntarily generated 4–6 Hz tremor and PD tremor with regard to their amplitude, frequency and coupling characteristics. Tremor responses for 8 PD individuals (on- and off-medication) and 12 healthy adults were assessed under postural and resting conditions. Results showed that voluntary and PD tremor were essentially identical with regard to amplitude and peak frequency. However, differences between the groups were found in the variability (SD of peak frequency, proportional power) and regularity (Approximate Entropy, ApEn) of the tremor signal. Additionally, coherence analysis revealed strong inter-limb coupling during voluntary conditions, whereas no bilateral coupling was seen for the PD participants. Overall, healthy participants were able to produce a 5 Hz tremulous motion indistinguishable from that of PD patients in terms of peak frequency and amplitude; however, differences in the structure of variability and the level of inter-limb coupling were found between the tremor responses of the PD and healthy adults, and these differences were preserved irrespective of the medication state of the PD participants. The results illustrate the importance of assessing the pattern of signal structure/variability to discriminate between different tremor forms, especially where no differences emerge in standard measures of mean amplitude as traditionally defined.
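Approximate Entropy, the regularity measure referred to above, can be sketched in a few lines. The embedding dimension, tolerance and test signals below are illustrative choices, not the study's settings; lower ApEn indicates a more regular signal.

```python
import numpy as np

def apen(x, m=2, r_factor=0.2):
    """Approximate Entropy of a 1-D signal (Pincus's formulation)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * x.std()  # tolerance scaled to signal amplitude

    def phi(m):
        # embed the series into overlapping vectors of length m
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])
        # for each vector, the fraction of vectors within Chebyshev
        # distance r (self-matches included, so counts are never zero)
        counts = [np.mean(np.max(np.abs(emb - v), axis=1) <= r) for v in emb]
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

t = np.linspace(0, 10, 1000)
regular = np.sin(2 * np.pi * 5 * t)  # perfectly regular 5 Hz signal
noisy = regular + 0.5 * np.random.default_rng(2).standard_normal(1000)
print(f"ApEn regular: {apen(regular):.3f}, noisy: {apen(noisy):.3f}")
```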
Abstract:
This paper outlines existing matching diagnostics, which may be used for identifying invalid matches and estimating the probability of a correct match. In addition, it proposes a new diagnostic for error prediction which can be used with the rank and census transforms. Both the existing and the new diagnostics have been evaluated and compared for a number of test images. In each case, a confidence estimate was computed for every location of the disparity map, and disparities with a low confidence estimate were removed from the disparity map. Collectively, these confidence estimates may be termed a confidence map. Such information would be useful in potential applications of stereo vision such as automation and navigation.
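As a rough illustration of how a confidence map can be built and thresholded, the sketch below uses the peak-ratio diagnostic, one common validity measure from the stereo literature; the random cost volume and the cut-off value are toy stand-ins, and this is not necessarily the diagnostic proposed in the paper.

```python
import numpy as np

def peak_ratio_confidence(costs):
    """Peak-ratio diagnostic: how much better the best matching cost
    is than the second best, per pixel. High values = unambiguous."""
    sorted_costs = np.sort(costs, axis=2)  # ascending costs per pixel
    c1, c2 = sorted_costs[..., 0], sorted_costs[..., 1]
    return 1.0 - c1 / (c2 + 1e-9)

# Random stand-in for a real H x W x D matching-cost volume.
rng = np.random.default_rng(3)
cost_volume = rng.random((4, 4, 16))
disparity = np.argmin(cost_volume, axis=2).astype(float)

conf_map = peak_ratio_confidence(cost_volume)     # the confidence map
disparity[conf_map < 0.05] = np.nan               # drop weak matches
print(conf_map.round(2))
```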
Abstract:
The authors present a qualitative and quantitative comparison of various similarity measures that form the kernel of common area-based stereo-matching systems. The authors compare classical difference and correlation measures, as well as nonparametric measures based on the rank and census transforms, for a number of outdoor images. For robotic applications, important considerations include robustness to image defects such as intensity variation and noise, the number of false matches, and computational complexity. In the absence of ground truth data, the authors compare the matching techniques based on the percentage of matches that pass the left-right consistency test. For guidance applications, it is essential to have an estimate of confidence in the three-dimensional points generated by stereo vision; the authors therefore also evaluate the discriminatory power of several match validity measures reported in the literature for eliminating false matches and for estimating match confidence. Finally, a new validity measure, the rank constraint, is introduced that is capable of resolving ambiguous matches for rank transform-based matching.
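A brief sketch of the census transform and its Hamming-distance matching cost, one of the nonparametric kernels compared above, is given below; the window size and toy images are illustrative assumptions.

```python
import numpy as np

def census_transform(img, w=3):
    """Census transform: encode each pixel's neighbourhood as a bit
    string recording which neighbours are darker than the centre."""
    h, wd = img.shape
    pad = w // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros((h, wd), dtype=np.int64)
    for dy in range(w):
        for dx in range(w):
            if dy == pad and dx == pad:
                continue  # skip the centre pixel itself
            bit = padded[dy:dy + h, dx:dx + wd] < img
            out = (out << 1) | bit.astype(np.int64)
    return out

def hamming_cost(c_left, c_right):
    """Per-pixel Hamming distance between census bit strings."""
    x = c_left ^ c_right
    return np.array([[bin(v).count("1") for v in row] for row in x])

# Toy pair: the right image is the left shifted by 2 pixels.
rng = np.random.default_rng(4)
left = rng.random((8, 8))
right = np.roll(left, 2, axis=1)
cl, cr = census_transform(left), census_transform(right)
cost_d0 = hamming_cost(cl, cr).mean()
cost_d2 = hamming_cost(cl, np.roll(cr, -2, axis=1)).mean()
print(f"mean cost at d=0: {cost_d0:.2f}, at d=2: {cost_d2:.2f}")
```

Because the cost depends only on relative intensity orderings, it is robust to the intensity variation between cameras that the authors highlight for outdoor robotic use.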
Abstract:
We develop a stochastic endogenous growth model to explain the diversity in growth and inequality patterns, and the non-convergence of incomes, in transitional economies where an underdeveloped financial sector imposes an implicit fixed cost on the diversification of idiosyncratic risk. In the model, endogenous growth occurs through physical and human capital deepening, with the latter the more dominant element. We interpret the fixed cost as a ‘learning by doing’ cost for entrepreneurs who undertake risk in the absence of well-developed financial markets and institutions that would help diversify such risk; as such, the cost may be interpreted as the implicit returns forgone owing to the lack of diversification opportunities that would otherwise have been available had such institutions been present. The analytical and numerical results of the model suggest three growth outcomes depending on the productivity differences between the projects and the fixed cost associated with the more productive project. We label these outcomes poverty trap, dual economy and balanced growth. Further analysis of these three outcomes highlights the existence of a diversity within diversity: within the ‘poverty trap’ and ‘dual economy’ scenarios, growth and inequality patterns differ depending on the initial conditions. This additional diversity allows the model to capture a richer range of outcomes consistent with the empirical experience of several transitional economies.
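A minimal simulation sketch of this mechanism, under assumed functional forms and parameter values rather than the paper's calibration, is given below: agents adopt the riskier, more productive project only once their wealth makes paying the fixed cost worthwhile, so initially poor agents stagnate while richer agents pull away.

```python
import numpy as np

rng = np.random.default_rng(5)

def step(wealth, safe_r=1.01, risky_mean=1.08, risky_sd=0.15, fc=0.2):
    """One period of accumulation with a myopic project choice: take
    the risky project only if it beats the safe one in expectation
    after the fixed 'learning by doing' cost fc."""
    out = np.empty_like(wealth)
    for i, k in enumerate(wealth):
        if (k - fc) * risky_mean > k * safe_r:
            out[i] = max((k - fc) * rng.normal(risky_mean, risky_sd), 0.0)
        else:
            out[i] = k * safe_r
    return out

wealth = rng.uniform(0.1, 5.0, 1000)
cv_before = wealth.std() / wealth.mean()  # coefficient of variation
for _ in range(50):
    wealth = step(wealth)

# Poor agents creep along the safe technology (a poverty-trap flavour)
# while wealthy agents compound through the risky project, giving the
# two-tier, 'dual economy' pattern described above.
print(f"CV before: {cv_before:.2f}, after: {wealth.std() / wealth.mean():.2f}")
print(f"p10 {np.percentile(wealth, 10):.2f}, p90 {np.percentile(wealth, 90):.2f}")
```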