887 results for: whether possessory lien over file until fees paid


Relevance:

20.00%

Publisher:

Abstract:

This latest briefing by Professor Reece Walters in the What is crime? series draws attention to an area of harm that is often absent from criminological debate. He highlights the human costs of air pollution and failed attempts to adequately regulate and control such harm. Arguing for a cross-disciplinary ‘eco-crime’ narrative, the author calls for greater understanding of the far-reaching consequences of air pollution, which could set in train changes that may lead to a ‘more robust and meaningful system of justice’. Describing the current arrangements in place to control and regulate air pollution, Walters draws attention to their lack of neutrality and their bias ‘towards the economic imperatives of free trade over and above the centrality of environmental protection’. While attention is often given to direct and individualised instances of ‘crime’, the serious consequences of air pollution are frequently neglected. The negative effects of pollution on health and well-being are often borne by people already experiencing a range of other disadvantages; in a global and national context, it is often the poor who are affected most. Ultimately, political and economic imperatives have historically helped to shape legal and regulatory regimes. Whether this is an inherent flaw in current systems or something that can be overcome in favour of dealing with more wide-ranging harms is an area that requires further discussion and debate.


It is frequently reported that the actual weight loss achieved through exercise interventions is less than theoretically expected. Amongst other compensatory adjustments that accompany exercise training (e.g., increases in resting metabolic rate and energy intake), a possible cause of the less than expected weight loss is a failure to produce a marked increase in total daily energy expenditure due to a compensatory reduction in non-exercise activity thermogenesis (NEAT). Therefore, there is a need to understand how behaviour is modified in response to exercise interventions. The proposed benefits of exercise training are numerous, including changes to fat oxidation. Given that a diminished capacity to oxidise fat could be a factor in the aetiology of obesity, an exercise training intensity that optimises fat oxidation in overweight/obese individuals would improve impaired fat oxidation, and potentially reduce health risks that are associated with obesity. To improve our understanding of the effectiveness of exercise for weight management, it is important to ensure exercise intensity is appropriately prescribed, and to identify and monitor potential compensatory behavioural changes consequent to exercise training. In line with the gaps in the literature, three studies were performed. The aim of Study 1 was to determine the effect of acute bouts of moderate- and high-intensity walking exercise on NEAT in overweight and obese men. Sixteen participants performed a single bout of either moderate-intensity walking exercise (MIE) or high-intensity walking exercise (HIE) on two separate occasions. The MIE consisted of walking for 60-min on a motorised treadmill at 6 km.h-1. The 60-min HIE session consisted of walking in 5-min intervals at 6 km.h-1 and 10% grade followed by 5-min at 0% grade. NEAT was assessed by accelerometer three days before, on the day of, and three days after the exercise sessions. 
There was no significant difference in NEAT vector magnitude (counts.min-1) between the pre-exercise period (days 1-3) and the exercise day (day 4) for either protocol. In addition, there was no change in NEAT during the three days following the MIE session; however, NEAT increased by 16% on day 7 (post-exercise) compared with the exercise day (P = 0.32). During the post-exercise period following the HIE session, NEAT was increased by 25% on day 7 compared with the exercise day (P = 0.08), and by 30-33% compared with the pre-exercise period (day 1, day 2 and day 3); P = 0.03, 0.03, 0.02, respectively. To conclude, a single bout of either MIE or HIE did not alter NEAT on the exercise day or on the first two days following the exercise session. However, extending the monitoring of NEAT allowed the detection of a 48-hour delay in increased NEAT after performing HIE. A longer-term intervention is needed to determine the effect of accumulated exercise sessions over a week on NEAT. In Study 2, there were two primary aims. The first aim was to test the reliability of a discontinuous incremental exercise protocol (DISCON-FATmax) to identify the workload at which fat oxidation is maximised (FATmax). Ten overweight and obese sedentary men (mean BMI of 29.5 ± 4.5 kg/m2 and mean age of 28.0 ± 5.3 y) participated in this study and performed two identical DISCON-FATmax tests one week apart. Each test consisted of alternate 4-min exercise and 2-min rest intervals on a cycle ergometer. The starting workload of 28 W was increased every 4-min using 14 W increments followed by 2-min rest intervals. When the respiratory exchange ratio was consistently >1.0, the workload was increased by 14 W every 2-min until volitional exhaustion. Fat oxidation was measured by indirect calorimetry.
The mean FATmax, V̇O2peak, %V̇O2peak and %Wmax at which FATmax occurred during the two tests were 0.23 ± 0.09 and 0.18 ± 0.08 (g.min-1); 29.7 ± 7.8 and 28.3 ± 7.5 (ml.kg-1.min-1); 42.3 ± 7.2 and 42.6 ± 10.2 (%V̇O2max) and 36.4 ± 8.5 and 35.4 ± 10.9 (%), respectively. A paired-samples T-test revealed a significant difference in FATmax (g.min-1) between the tests (t = 2.65, P = 0.03). The mean difference in FATmax was 0.05 (g.min-1) with the 95% confidence interval ranging from 0.01 to 0.18. A paired-samples T-test, however, revealed no significant difference in the workloads (i.e. W) between the tests, t (9) = 0.70, P = 0.4. The intra-class correlation coefficient for FATmax (g.min-1) between the tests was 0.84 (95% confidence interval: 0.36-0.96, P < 0.01). However, Bland-Altman analysis revealed a large disagreement in FATmax (g.min-1) related to W between the two tests; 11 ± 14 (W) (4.1 ± 5.3 V̇O2peak (%)). These data demonstrate two important phenomena associated with exercise-induced substrate oxidation: firstly, that maximal fat oxidation derived from a discontinuous FATmax protocol differed statistically between repeated tests, and secondly, there was large variability in the workload corresponding with FATmax. The second aim of Study 2 was to test the validity of a DISCON-FATmax protocol by comparing maximal fat oxidation (g.min-1) determined by DISCON-FATmax with fat oxidation (g.min-1) during a continuous exercise protocol using a constant load (CONEX). Ten overweight and obese sedentary males (BMI = 29.5 ± 4.5 kg/m2; age = 28.0 ± 4.5 y) with a V̇O2max of 29.1 ± 7.5 ml.kg-1.min-1 performed a DISCON-FATmax test consisting of alternate 4-min exercise and 2-min rest intervals on a cycle ergometer. The 1-h CONEX protocol used the workload from the DISCON-FATmax to determine FATmax.
The mean FATmax, V̇O2max, %V̇O2max and workload at which FATmax occurred during the DISCON-FATmax were 0.23 ± 0.09 (g.min-1); 29.1 ± 7.5 (ml.kg-1.min-1); 43.8 ± 7.3 (%V̇O2max) and 58.8 ± 19.6 (W), respectively. The mean fat oxidation during the 1-h CONEX protocol was 0.19 ± 0.07 (g.min-1). A paired-samples T-test revealed no significant difference in fat oxidation (g.min-1) between DISCON-FATmax and CONEX, t (9) = 1.85, P = 0.097 (two-tailed). There was also no significant correlation in fat oxidation between the DISCON-FATmax and CONEX (R = 0.51, P = 0.14). Bland-Altman analysis revealed a large disagreement in fat oxidation between the DISCON-FATmax and CONEX; the upper limit of agreement was 0.13 (g.min-1) and the lower limit of agreement was −0.03 (g.min-1). These data suggest that the CONEX and DISCON-FATmax protocols did not elicit different rates of fat oxidation (g.min-1). However, the individual variability in fat oxidation was large, particularly in the DISCON-FATmax test. Further research is needed to ascertain the validity of graded exercise tests for predicting fat oxidation during constant load exercise sessions. The aim of Study 3 was to compare the impact of two different intensities of four weeks of exercise training on fat oxidation, NEAT, and appetite in overweight and obese men. Using a cross-over design, 11 participants (BMI = 29 ± 4 kg/m2; age = 27 ± 4 y) participated in a training study and were randomly assigned initially to either [1] low-intensity (45% V̇O2max) exercise (LIT) or [2] high-intensity interval exercise (alternate 30 s at 90% V̇O2max followed by 30 s rest) (HIIT), each of 40-min duration, three times a week. Participants completed four weeks of supervised training with a two-week washout period between cross-over arms. At baseline and the end of each exercise intervention, V̇O2max, fat oxidation, and NEAT were measured. Fat oxidation was determined during a standard 30-min continuous exercise bout at 45% V̇O2max.
During the steady state exercise, expired gases were measured intermittently for 5-min periods and HR was monitored continuously. In each training period, NEAT was measured for seven consecutive days using an accelerometer (RT3) the week before, at week 3 and the week after training. Subjective appetite sensations and food preferences were measured immediately before and after the first exercise session every week for four weeks during both LIT and HIIT. The mean fat oxidation rate during the standard continuous exercise bout at baseline for both LIT and HIIT was 0.14 ± 0.08 (g.min-1). After four weeks of exercise training, the mean fat oxidation was 0.178 ± 0.04 and 0.183 ± 0.04 g.min-1 for LIT and HIIT, respectively. The mean NEAT (counts.min-1) was 45 ± 18 at baseline, 55 ± 22 and 44 ± 16 during training, and 51 ± 14 and 50 ± 21 after training for LIT and HIIT, respectively. There was no significant difference in fat oxidation between LIT and HIIT. Moreover, although not statistically significant, there was some evidence to suggest that LIT and HIIT tended to increase fat oxidation during exercise at 45% V̇O2max (P = 0.14 and 0.08, respectively). The order of training treatment did not significantly influence changes in fat oxidation, NEAT, and appetite. NEAT (counts.min-1) was not significantly different in the week following training for either LIT or HIIT. Although not statistically significant (P = 0.08), NEAT was 20% lower during week 3 of exercise training in HIIT compared with LIT. Examination of appetite sensations revealed differences in the intensity of hunger, with higher ratings after LIT compared with HIIT. No differences were found in preferences for high-fat sweet foods between LIT and HIIT.
In conclusion, the results of this thesis suggest that while fat oxidation during steady state exercise was not affected by the level of exercise intensity, there is strong evidence to suggest that intense exercise could have a debilitative effect on NEAT.
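The agreement statistics in Study 2 rest on the standard Bland-Altman construction: the bias is the mean of the paired differences, and the 95% limits of agreement sit 1.96 sample standard deviations either side of it. As a generic illustration (not the thesis's analysis code, and with invented numbers), the calculation can be sketched as:

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean paired difference) and 95% limits of agreement
    between two paired sets of measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired FATmax values (g.min-1) from two repeated tests
test1 = [0.23, 0.31, 0.18, 0.25, 0.20]
test2 = [0.18, 0.26, 0.17, 0.19, 0.16]
bias, lower, upper = bland_altman(test1, test2)
```

A wide interval between `lower` and `upper` relative to the measurement scale signals poor test-retest agreement even when a paired T-test detects no mean difference, which is exactly the pattern reported for the DISCON-FATmax workloads.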


Contact lenses are a common method for the correction of refractive errors of the eye. While there have been significant advancements in contact lens designs and materials over the past few decades, the lenses still represent a foreign object in the ocular environment and may lead to physiological as well as mechanical effects on the eye. When contact lenses are placed in the eye, the ocular anatomical structures behind and in front of the lenses are directly affected. This thesis presents a series of experiments that investigate the mechanical and physiological effects of the short-term use of contact lenses on anterior and posterior corneal topography, corneal thickness, the eyelids, tarsal conjunctiva and tear film surface quality. The experimental paradigm used in these studies was a repeated measures, cross-over study design where subjects wore various types of contact lenses on different days and the lenses were varied in one or more key parameters (e.g. material or design). Both old and newer lens materials were investigated: soft and rigid lenses; high and low oxygen permeability materials; toric and spherical lens designs; high and low powers; and small and large diameter lenses. To establish the natural variability in the ocular measurements used in the studies, each experiment also contained at least one “baseline” day where an identical measurement protocol was followed, with no contact lenses worn. In this way, changes associated with contact lens wear were considered in relation to those changes that occurred naturally during the 8 hour period of the experiment. In the first study, the regional distribution and magnitude of change in corneal thickness and topography were investigated in the anterior and posterior cornea after short-term use of soft contact lenses in 12 young adults using the Pentacam.
Four different types of contact lenses (Silicone Hydrogel/Spherical/–3D, Silicone Hydrogel/Spherical/–7D, Silicone Hydrogel/Toric/–3D and HEMA/Toric/–3D) of different materials, designs and powers were worn for 8 hours each, on 4 different days. The natural diurnal changes in corneal thickness and curvature were measured on two separate days before any contact lens wear. Significant diurnal changes in corneal thickness and curvature within the duration of the study were observed and these were taken into consideration when calculating the contact lens induced corneal changes. Corneal thickness changed significantly with lens wear and the greatest corneal swelling was seen with the hydrogel (HEMA) toric lens, with a noticeable regional swelling of the cornea beneath the stabilization zones, the thickest regions of the lenses. The anterior corneal surface generally showed a slight flattening with lens wear. All contact lenses resulted in central posterior corneal steepening, which correlated with the relative degree of corneal swelling. The corneal swelling induced by the silicone hydrogel contact lenses was typically less than the natural diurnal thinning of the cornea over this same period (i.e. net thinning). This highlights why it is important to consider the natural diurnal variations in corneal thickness observed from morning to afternoon to accurately interpret contact lens induced corneal swelling. In the second experiment, the relative influence of lenses of different rigidity (polymethyl methacrylate – PMMA, rigid gas permeable – RGP and silicone hydrogel – SiHy) and diameters (9.5, 10.5 and 14.0 mm) on corneal thickness, topography, refractive power and wavefront error was investigated. Four different types of contact lenses (PMMA/9.5, RGP/9.5, RGP/10.5, SiHy/14.0) were worn by 14 young healthy adults for a period of 8 hours on 4 different days. There was a clear association between fluorescein fitting pattern characteristics (i.e.
regions of minimum clearance in the fluorescein pattern) and the resulting corneal shape changes. PMMA lenses resulted in significant corneal swelling (more in the centre than periphery) along with anterior corneal steepening and posterior flattening. RGP lenses, on the other hand, caused less corneal swelling (more in the periphery than centre) along with opposite effects on corneal curvature: anterior corneal flattening and posterior steepening. RGP lenses also resulted in a clinically and statistically significant decrease in corneal refractive power (ranging from 0.99 to 0.01 D), large enough to affect vision and require adjustment in the lens power. Wavefront analysis also showed a significant increase in higher order aberrations after PMMA lens wear, which may partly explain previous reports of "spectacle blur" following PMMA lens wear. We further explored corneal curvature, thickness and refractive changes with back surface toric and spherical RGP lenses in a group of 6 subjects with toric corneas. The lenses were worn for 8 hours and measurements were taken before and after lens wear, as in previous experiments. Both lens types caused anterior corneal flattening and a decrease in corneal refractive power, but the changes were greater with the spherical lens. The spherical lens also caused a significant decrease in WTR astigmatism (WTR astigmatism defined as major axis within 30 degrees of horizontal). Both lenses caused slight posterior corneal steepening and corneal swelling, with a greater effect in the periphery compared to the central cornea. Eyelid position, lid-wiper and tarsal conjunctival staining were also measured in Experiment 2 after short-term use of the rigid and SiHy contact lenses. Digital photos of the external eyes were captured for lid position analysis. The lid-wiper region of the marginal conjunctiva was stained using fluorescein and lissamine green dyes and digital photos were graded by an independent masked observer.
A grading scale was developed in order to describe the tarsal conjunctival staining. A significant decrease in the palpebral aperture height (blepharoptosis) was found after wearing the PMMA/9.5 and RGP/10.5 lenses. All three rigid contact lenses caused a significant increase in lid-wiper and tarsal staining after 8 hours of lens wear. There was also a significant diurnal increase in tarsal staining, even without contact lens wear. These findings highlight the need for better contact lens edge design to minimise the interactions between the lid and contact lens edge during blinking, and for more lubricious contact lens surfaces to reduce ocular surface micro-trauma due to friction. Tear film surface quality (TFSQ) was measured using a high-speed videokeratoscopy technique in Experiment 2. TFSQ was worse with all the lenses compared to baseline (PMMA/9.5, RGP/9.5, RGP/10.5, and SiHy/14.0) in the afternoon (after 8 hours) during normal and suppressed blinking conditions. The reduction in TFSQ was similar with all the contact lenses used, irrespective of their material and diameter. An unusual pattern of change in TFSQ in suppressed blinking conditions was also found. The TFSQ with a contact lens was found to decrease until a certain time, after which it improved to a value even better than the bare eye. This is likely to be due to the tear film drying completely over the surface of the contact lenses. The findings of this study also show that there is still scope for improvement in contact lens materials in terms of better wettability and hydrophilicity in order to improve TFSQ and patient comfort. These experiments showed that a variety of changes can occur in the anterior eye as a result of the short-term use of a range of commonly used contact lens types.
The greatest corneal changes occurred with lenses manufactured from older HEMA and PMMA lens materials, whereas modern SiHy and rigid gas permeable materials caused more subtle changes in corneal shape and thickness. All lenses caused signs of micro-trauma to the eyelid wiper and palpebral conjunctiva, although rigid lenses appeared to cause more significant changes. Tear film surface quality was also significantly reduced with all types of contact lenses. These short-term changes in the anterior eye are potential markers for further long term changes and the relative differences between lens types that we have identified provide an indication of areas of contact lens design and manufacture that warrant further development.
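The diurnal-correction step used throughout these experiments (subtracting the natural no-lens change measured on a baseline day from the change measured on a lens-wearing day) is simple arithmetic; a minimal sketch with invented thickness readings, not the study's data, might look like:

```python
def lens_induced_change(baseline_am, baseline_pm, lens_am, lens_pm):
    """Estimate the contact-lens-induced change in corneal thickness (um)
    by subtracting the natural diurnal change (measured on a no-lens
    baseline day) from the change measured on the lens-wearing day."""
    diurnal_change = baseline_pm - baseline_am   # natural 8-hour change
    measured_change = lens_pm - lens_am          # change with lens wear
    return measured_change - diurnal_change

# Hypothetical central corneal thickness readings (micrometres).
# Baseline day: the cornea naturally thins from morning to afternoon.
# Lens day: the measured thinning is smaller, so the lens caused
# net swelling relative to the natural diurnal trend.
induced = lens_induced_change(545.0, 538.0, 546.0, 543.0)
```

This is why a silicone hydrogel lens can produce "swelling" in the corrected sense even when the raw afternoon measurement shows a thinner cornea than the morning one.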


In recent times, light gauge steel framed (LSF) structures, such as cold-formed steel wall systems, are increasingly used, but without a full understanding of their fire performance. Traditionally, the fire resistance rating of these load-bearing LSF wall systems is based on approximate prescriptive methods developed from limited fire tests. Very often they are limited to standard wall configurations used by the industry, and increased fire rating is provided simply by adding more plasterboards to these walls. This is not an acceptable situation as it not only inhibits innovation and structural and cost efficiencies but also casts doubt over the fire safety of these wall systems. Hence a detailed fire research study into the performance of LSF wall systems was undertaken using full scale fire tests and extensive numerical studies. A new composite wall panel developed at QUT was also considered in this study, where the insulation was used externally between the plasterboards on both sides of the steel wall frame instead of locating it in the cavity. Three full scale fire tests of LSF wall systems built using the new composite panel system were undertaken at a higher load ratio using a gas furnace designed to deliver heat in accordance with the standard time-temperature curve in AS 1530.4 (SA, 2005). Fire tests included measurements of the load-deformation characteristics of LSF walls until failure as well as the associated time-temperature measurements across the thickness and along the length of all the specimens. Tests of LSF walls under axial compression load showed an improvement in their fire performance and fire resistance rating when the new composite panel was used. Hence this research recommends the use of the new composite panel system for cold-formed LSF walls. The numerical study was undertaken using the finite element program ABAQUS.
The finite element analyses were conducted under both steady state and transient state conditions using the measured hot and cold flange temperature distributions from the fire tests. The elevated temperature reduction factors for mechanical properties were based on the equations proposed by Dolamune Kankanamge and Mahendran (2011). These finite element models were first validated by comparing their results with experimental test results from this study and Kolarkar (2010). The developed finite element models were able to predict the failure times within 5 minutes. The validated model was then used in a detailed numerical study into the strength of cold-formed thin-walled steel channels used in both the conventional and the new composite panel systems to increase the understanding of their behaviour under non-uniform elevated temperature conditions and to develop fire design rules. The measured time-temperature distributions obtained from the fire tests were used. Since the fire tests showed that the plasterboards provided sufficient lateral restraint until the failure of the LSF wall panels, this assumption was also used in the analyses and was further validated by comparison with experimental results. Hence in this study of LSF wall studs, only flexural buckling about the major axis and local buckling were considered. A new fire design method was proposed using AS/NZS 4600 (SA, 2005), NAS (AISI, 2007) and Eurocode 3 Part 1.3 (ECS, 2006). The importance of considering thermal bowing, magnified thermal bowing and neutral axis shift in the fire design was also investigated. A spreadsheet-based design tool was developed based on the above design codes to predict the failure load ratio versus time and temperature for varying LSF wall configurations, including insulations. Idealised time-temperature profiles were developed based on the measured temperature values of the studs.
This was used in a detailed numerical study to fully understand the structural behaviour of LSF wall panels. Appropriate equations were proposed to find the critical temperatures for different composite panels, varying in steel thickness, steel grade and screw spacing, for any load ratio. Hence useful and simple design rules were proposed based on the current cold-formed steel structures and fire design standards, and their accuracy and advantages were discussed. The results were also used to validate the fire design rules developed based on AS/NZS 4600 (SA, 2005) and Eurocode 3 Part 1.3 (ECS, 2006). This demonstrated the significant improvements to the design method when compared to the currently used prescriptive design methods for LSF wall systems under fire conditions. In summary, this research has developed comprehensive experimental and numerical thermal and structural performance data for both the conventional and the proposed new load-bearing LSF wall systems under standard fire conditions. Finite element models were developed to predict the failure times of LSF walls accurately. Idealised hot flange temperature profiles were developed for non-insulated, cavity insulated and externally insulated load-bearing wall systems. Suitable fire design rules and spreadsheet-based design tools were developed based on the existing standards to predict the ultimate failure load, failure times and failure temperatures of LSF wall studs. Simplified equations were proposed to find the critical temperatures for varying wall panel configurations and load ratios. The results from this research are useful to both structural and fire engineers and researchers. Most importantly, this research has significantly improved the knowledge and understanding of cold-formed LSF load-bearing walls under standard fire conditions.
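A design tool of the kind described maps a load ratio to a critical stud temperature and then reads the failure time off an idealised hot-flange time-temperature profile. A minimal sketch of that final lookup step, with an invented profile and invented critical temperature (not the thesis's equations or data), could be:

```python
import bisect

def failure_time(profile, critical_temp):
    """Linearly interpolate the time (min) at which an idealised
    hot-flange temperature profile first reaches critical_temp.
    profile: list of (time_min, temp_C) pairs with temperature
    monotonically increasing, as in a standard fire exposure."""
    times = [t for t, _ in profile]
    temps = [T for _, T in profile]
    i = bisect.bisect_left(temps, critical_temp)
    if i == 0:
        return times[0]
    if i == len(temps):
        return None  # profile never reaches the critical temperature
    t0, T0 = profile[i - 1]
    t1, T1 = profile[i]
    return t0 + (critical_temp - T0) * (t1 - t0) / (T1 - T0)

# Invented idealised hot-flange profile for an insulated wall panel
profile = [(0, 20), (15, 150), (30, 300), (60, 500), (90, 650)]
t_fail = failure_time(profile, 400)  # hypothetical critical temperature
```

The spreadsheet tool described in the abstract would wrap this lookup with the code-based member capacity checks (AS/NZS 4600, Eurocode 3 Part 1.3) that supply the critical temperature for each load ratio.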


Each financial year, concessions, benefits and incentives are delivered to taxpayers via the tax system. These concessions, benefits and incentives, referred to as tax expenditure, differ from direct expenditure because of their recurring fiscal impact without regular scrutiny through the federal budget process. There are approximately 270 different tax expenditures existing within the current tax regime, with total measured tax expenditures in the 2005-06 financial year estimated to be around $42.1 billion, increasing to $52.7 billion by 2009-10. Each year, new tax expenditures are introduced, while existing tax expenditures are modified and deleted. In recognition of some of the problems associated with tax expenditure, a Tax Expenditure Statement, as required by the Charter of Budget Honesty Act 1998, is produced annually by the Australian Federal Treasury. The Statement details the various expenditures and measures in the form of concessions, benefits and incentives provided to taxpayers by the Australian Government and calculates the tax expenditure in terms of revenue forgone. A similar approach to reporting tax expenditure, with such a report being a legal requirement, is followed by most OECD countries. The current Tax Expenditure Statement lists 270 tax expenditures and, where it is able to, reports on the estimated pecuniary value of those expenditures. Apart from the annual Tax Expenditure Statement, there is very little other scrutiny of Australia’s Federal tax expenditure program. While there have been various academic analyses of tax expenditure in Australia, when compared to the North American literature, it is suggested that the Australian literature is still in its infancy.
In fact, one academic author who has contributed to tax expenditure analysis recently noted that there is ‘remarkably little secondary literature which deals at any length with tax expenditures in the Australian context.’ Given this perceived gap in the secondary literature, this paper examines the fundamental concept of tax expenditure and considers the role it plays in the current tax regime as a whole, along with the effects of the introduction of new tax expenditures. In doing so, tax expenditure is contrasted with direct expenditure. The analysis of tax expenditure versus direct expenditure already constitutes a sophisticated and comprehensive body of work stemming from the US over the last three decades. As such, the title of this paper is rather misleading. However, given the lack of analysis in Australia, it is appropriate that this paper undertakes a consideration of tax expenditure versus direct expenditure in an Australian context. Given this proposition, rather than purport to undertake a comprehensive analysis of tax expenditure, which has already been done, this paper discusses the substantive considerations of any such analysis to enable further investigation into the tax expenditure regime both as a whole and into individual tax expenditure initiatives. While none of the propositions in this paper are new in a ‘tax expenditure analysis’ sense, this debate is a relatively new contribution to the Australian literature on tax policy. Before the issues relating to tax expenditure can be determined, it is necessary to consider what is meant by ‘tax expenditure’. As such, part two of this paper defines ‘tax expenditure’. Part three determines the framework in which tax expenditure can be analysed. It is suggested that an analysis of tax expenditure must be evaluated within the framework of the design criteria of an income tax system with the key features of equity, efficiency, and simplicity.
Tax expenditure analysis can then be applied to deviations from the ideal tax base. Once it is established what is meant by tax expenditure and the framework for evaluation is determined, it is possible to establish the substantive issues to be evaluated. This paper suggests that there are four broad areas worthy of investigation: economic efficiency, administrative efficiency, whether tax expenditure initiatives achieve their policy intent, and the impact on stakeholders. Given these areas of investigation, part four of this paper considers the issues relating to the economic efficiency of the tax expenditure regime, in particular, the effect on resource allocation, incentives for taxpayer behaviour and distortions created by tax expenditures. Part five examines the notion of administrative efficiency in light of the fact that most tax expenditures could simply be delivered as direct expenditures. Part six explores the notion of policy intent and considers the two questions that need to be asked: whether any tax expenditure initiative reaches its target group and whether the financial incentives are appropriate. Part seven examines the impact on stakeholders. Finally, part eight considers the future of tax expenditure analysis in Australia.
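The 'revenue forgone' valuation used in the Tax Expenditure Statement prices a concession as the extra tax that would have been collected at the benchmark rate had the concession not existed, holding taxpayer behaviour constant. A stylised sketch of that arithmetic (invented figures, not Treasury's methodology) is:

```python
def revenue_forgone(sheltered_amount, benchmark_rate, concessional_rate=0.0):
    """Tax expenditure under the revenue-forgone approach: tax that would
    have applied at the benchmark marginal rate, less tax actually
    collected at the concessional rate. Behavioural responses are
    ignored, which is the standard caveat of this approach."""
    return sheltered_amount * (benchmark_rate - concessional_rate)

# Example: $10,000 of income taxed at a 15% concessional rate rather
# than a 30% benchmark marginal rate forgoes $1,500 of revenue.
forgone = revenue_forgone(10_000, 0.30, 0.15)
```

The held-behaviour-constant assumption is one reason revenue-forgone estimates cannot simply be summed or read as the budget saving from abolishing a concession, a point that matters for the economic-efficiency discussion in part four.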


In redefining our understanding of women’s roles in contemporary Australian philanthropy, the impact of major contextual and demographic changes, as well as changes in women’s roles, responsibilities and opportunities, needs to be considered. Although academic study of philanthropy and the wider third sector is increasing in Australia, literature searches have revealed little current data on the giving patterns and philanthropic drivers of contemporary Australian women, particularly emerging cohorts (one ABS survey looks at giving patterns – ABS, 2000b: 32). In contrast, there is increasing interest in the US, where it is acknowledged that more women are becoming independent holders of wealth, and that interested donors have specific needs, desires and motivations in terms of knowledge, power, marketing and response to their philanthropy (see for example, Grace 2000; McCarthy 2001; Women’s Philanthropy Institute 2002). These varied demographic, social and economic drivers, which could also be expected to encourage new cohorts of Australian women to give, will be examined within our definition of women in philanthropy, and a brief history of women’s philanthropy in Australia, in order to inform future in-depth analyses of Australian women donors.


The Toolbox, combined with MATLAB® and a modern workstation computer, is a useful and convenient environment for the investigation of machine vision algorithms. For modest image sizes the processing rate can be sufficiently “real-time” to allow for closed-loop control. Focus-of-attention methods such as dynamic windowing (not provided) can be used to increase the processing rate. With input from a firewire or web camera (support provided) and output to a robot (not provided) it would be possible to implement a visual servo system entirely in MATLAB. The Toolbox provides many functions that are useful in machine vision and vision-based control, as well as for photometry, photogrammetry and colorimetry. It includes over 100 functions spanning operations such as image file reading and writing, acquisition, display, filtering, blob, point and line feature extraction, mathematical morphology, homographies, visual Jacobians, camera calibration and color space conversion.
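The Toolbox itself is MATLAB code, so as a language-neutral illustration of the kind of blob feature extraction it offers (area and centroid per connected region), here is a tiny self-contained Python sketch using plain lists and a flood fill, with no Toolbox API involved:

```python
from collections import deque

def blobs(image):
    """Label 4-connected foreground regions in a binary image (list of
    lists of 0/1) and return a list of (area, centroid_row, centroid_col)."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    features = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not seen[r][c]:
                # Breadth-first flood fill collecting one blob's pixels.
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and image[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                area = len(pixels)
                cy = sum(p[0] for p in pixels) / area
                cx = sum(p[1] for p in pixels) / area
                features.append((area, cy, cx))
    return features

# Two separate blobs in a 4x5 binary image
img = [[1, 1, 0, 0, 0],
       [1, 1, 0, 0, 1],
       [0, 0, 0, 0, 1],
       [0, 0, 0, 0, 1]]
feats = blobs(img)
```

Blob centroids computed this way are the kind of image feature a visual servo loop would feed back to a robot controller, which is the closed-loop use case the Toolbox targets.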

Resumo:

Grant Stevens is ambivalent. The young Brisbane artist made his name with a series of computer-generated animated-text videos that explore clichés but seem undecided as to whether they are trivial and vacuous, profound and authentic, or somehow both at once. Stevens plunders mass-media sources (the familiar image repertoire dished up by Hollywood, television, pop music and the Internet) as readymade content. He explores this everyday language, sometimes for its ambiguity, but more often for its almost uncanny lucidity. Resembling meditation and relaxation guides, his recent videos raise the question: what made us so anxious? This book examines Stevens’ artistic output over the first ten years of his practice. It includes essays by Mark Pennings and Chris Kraus.

Resumo:

Purpose. To investigate whether diurnal variation occurs in retinal thickness measures derived from spectral domain optical coherence tomography (SD-OCT). Methods. Twelve healthy adult subjects had retinal thickness measured with SD-OCT every 2 h over a 10 h period. At each measurement session, three average B-scan images were derived from a series of multiple B-scans (each from a 5 mm horizontal raster scan along the fovea, containing 1500 A-scans/B-scan) and analyzed to determine the thickness of the total retina as well as that of the outer retinal layers. Average thickness values were calculated at the foveal center, over the 0.5 mm diameter foveal region, and for the temporal and nasal parafovea (each 1.5 mm from the foveal center). Results. Total retinal thickness did not exhibit significant diurnal variation in any of the considered retinal regions (p > 0.05). Evidence of significant diurnal variation was found in the thickness of the outer retinal layers (p < 0.05), with the most prominent changes observed in the photoreceptor layers at the foveal center. The photoreceptor inner and outer segment layer thickness exhibited a mean peak-to-trough amplitude of daily change of 7 ± 3 μm at the foveal center. The peak in thickness was typically observed at the third measurement session (mean measurement time, 13:06). Conclusions. The total retinal thickness measured with SD-OCT does not exhibit evidence of significant variation over the course of the day. However, small but significant diurnal variation occurs in the thickness of the foveal outer retinal layers.
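The amplitude statistic reported above (peak-to-trough change across the repeated sessions) is straightforward to compute. A minimal sketch, using invented thickness values rather than the study's data:

```python
# Hypothetical photoreceptor-layer thickness (micrometres) for one
# subject, measured every 2 h over a 10 h day (6 sessions). The values
# below are invented for illustration and are not the study's data.
sessions = ["09:00", "11:00", "13:00", "15:00", "17:00", "19:00"]
thickness_um = [71.0, 74.0, 77.0, 75.0, 72.0, 70.0]

# Peak-to-trough amplitude of the diurnal change for this subject
amplitude = max(thickness_um) - min(thickness_um)

# Session at which thickness peaks (the study found this near 13:06,
# typically the third measurement session)
peak_session = sessions[thickness_um.index(max(thickness_um))]
```

In the study the per-subject amplitudes would then be averaged across the twelve subjects, giving the reported 7 ± 3 μm.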

Resumo:

Background: The four principles of Beauchamp and Childress (autonomy, non-maleficence, beneficence and justice) have been extremely influential in the field of medical ethics and are fundamental to understanding the current approach to ethical assessment in health care. This study tests whether these principles can be quantitatively measured at an individual level and, if so, whether they are used in the decision-making process when individuals are faced with ethical dilemmas. Methods: The Analytic Hierarchy Process was used as a tool for measuring the principles. Four scenarios involving conflicts between the medical ethical principles were presented to participants, who made judgements about the ethicality of the action in each scenario and about their intention to act in the same manner if they were in that situation. Results: Individual preferences for these medical ethical principles can be measured using the Analytic Hierarchy Process, which provides a useful tool for highlighting individual medical ethical values. On average, individuals show a significant preference for non-maleficence over the other principles; perhaps counter-intuitively, however, this preference does not seem to relate to applied ethical judgements in specific ethical dilemmas. Conclusions: People state that they value these medical ethical principles, but they do not actually seem to use them directly in the decision-making process. We attribute this to the lack of a behavioural model accounting for the relevant situational factors not captured by the principles, and we discuss the limitations of the principles in predicting ethical decision making.
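The Analytic Hierarchy Process derives a priority weight for each principle from a matrix of pairwise comparisons. As a rough sketch of the technique: the comparison values below are invented, and the normalised row geometric mean is one standard approximation of the AHP priority vector, not necessarily the exact procedure used in the study:

```python
from math import prod

# Hypothetical pairwise comparison matrix on Saaty's 1-9 scale for the
# four principles, ordered [autonomy, non-maleficence, beneficence,
# justice]. Entry A[i][j] expresses how strongly principle i is
# preferred to principle j (invented values, for illustration only).
A = [
    [1,   1/3, 2,   1],
    [3,   1,   4,   3],
    [1/2, 1/4, 1,   1/2],
    [1,   1/3, 2,   1],
]

# Approximate the AHP priority vector by normalised row geometric means
n = len(A)
geo_means = [prod(row) ** (1 / n) for row in A]
total = sum(geo_means)
weights = [g / total for g in geo_means]

principles = ["autonomy", "non-maleficence", "beneficence", "justice"]
ranking = sorted(zip(principles, weights), key=lambda p: -p[1])
```

With this matrix, non-maleficence receives the largest weight, mirroring the average preference the study reports; the interesting finding is that such measured weights did not predict participants' judgements in the concrete dilemmas.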

Resumo:

Studies continue to report ancient DNA sequences and viable microbial cells that are many millions of years old. In this paper we evaluate some of the most extravagant claims of geologically ancient DNA. We conclude that although exciting, the reports suffer from inadequate experimental setup and insufficient authentication of results. Consequently, it remains doubtful whether amplifiable DNA sequences and viable bacteria can survive over geological timescales. To enhance the credibility of future studies and assist in discarding false-positive results, we propose a rigorous set of authentication criteria for work with geologically ancient DNA.

Resumo:

The opening phrase of the title is from Charles Darwin’s notebooks (Schweber 1977). It is a double reminder: firstly, that mainstream evolutionary theory is not just about describing nature but is particularly looking for mechanisms or ‘causes’; and secondly, that there will usually be several causes affecting any particular outcome. The second part of the title reflects our concern at the almost universal rejection of the idea that biological mechanisms are sufficient for macroevolutionary changes, a rejection of a cornerstone of Darwinian evolutionary theory. Our primary aim here is to consider ways of making it easier to develop and to test hypotheses about evolution; formalizing hypotheses can help generate tests. In an absolute sense, some of the discussion by scientists about evolution is little better than the lack of reasoning used by those advocating intelligent design. Our discussion is set in a Popperian framework, where science is defined as that area of study in which it is possible, in principle, to find evidence against hypotheses – they are in principle falsifiable. With time, however, the boundaries of science keep expanding. In the past, some aspects of evolution lay outside those boundaries, but new techniques and ideas are increasingly bringing them within falsifiable science, and it is appropriate to re-examine some topics. Over the last few decades there appears to have been an increasingly strong assumption to look first (and only) for a physical cause. This decision is virtually never formally discussed; an assumption is simply made that some physical factor ‘drives’ evolution. We need to examine our assumptions much more carefully: what is meant by physical factors ‘driving’ evolution, or by an ‘explosive radiation’? Our discussion focuses on two of the six mass extinctions, the fifth being the events of the Late Cretaceous, and the sixth starting at least 50,000 years ago (and still ongoing).
Mass extinction number five: the Cretaceous/Tertiary boundary and the rise of birds and mammals. We have had a long-term interest (Cooper and Penny 1997) in designing tests to help evaluate whether the processes of microevolution are sufficient to explain macroevolution. The real challenge is to formulate hypotheses in a testable way. For example, the number of lineages of birds and mammals that survived from the Cretaceous to the present is one test. Our first estimate was 22 for birds, and current work is tending to increase this value. This still does not consider lineages that survived into the Tertiary and then went extinct later. Our initial suggestion was probably too narrow in that it lumped four models from Penny and Phillips (2004) into one; this reduction is too simplistic, because we need to know about survival and about ecological and morphological divergences during the Late Cretaceous, and whether crown groups of avian or mammalian orders may have existed back into the Cretaceous. More recently (Penny and Phillips 2004) we have formalized hypotheses about dinosaurs and pterosaurs, with the prediction that interactions between mammals (and ground-feeding birds) and dinosaurs would be most likely to affect the smallest dinosaurs, and similarly that interactions between birds and pterosaurs would particularly affect the smaller pterosaurs. There is now evidence for both classes of interaction, with the smallest dinosaurs and pterosaurs declining first, as predicted. Thus, testable models are now possible. Mass extinction number six: human impacts. On a broad scale, there is a good correlation between the time of human arrival and increased extinctions (Hurles et al. 2003; Martin 2005; Figure 1). However, it is necessary to distinguish different time scales (Penny 2005), and on a finer scale there are still large numbers of possibilities. In Hurles et al.
(2003) we mentioned habitat modification (including the use of fire) and introduced plants and animals (including kiore), in addition to direct predation (the ‘overkill’ hypothesis). We also need to consider the prey switching that occurs in early human societies, as evidenced by the results of Wragg (1995) on the middens of different ages on Henderson Island in the Pitcairn group. In addition, the presence of human-wary or human-adapted animals will affect distributions in the subfossil record. A better understanding of human impacts world-wide, in conjunction with pre-scientific knowledge, will make it easier to discuss the issues by removing ‘blame’. While spontaneous generation was still universally accepted, there was the expectation that animals would simply continue to reappear. New Zealand is one of the very best locations in the world to study many of these issues: apart from the marine fossil record, some human impact events are extremely recent and the remains are less disrupted by time.

Resumo:

According to a recent report, Australian higher education is not in crisis; however, we could be doing it better. The report Mapping Australian Higher Education (Norton, 2012) highlights comparative weaknesses such as levels of student engagement, interactions between students and academic staff, and academic staff preferences for research over teaching. The report points out that despite these concerns most graduates continue to get good, well-paid jobs, student satisfaction is improving, and levels of public confidence in Australian higher education are high. It also stresses that ‘the promise of higher education is that it provides adaptable cognitive skills, not that it always provides the job specific skills graduates will need in their future employment’ (Norton, 2012, p. 58). This is worth keeping in mind as we contribute to the significant growth in curriculum initiatives aimed at preparing graduates for the world of work. Work Integrated Learning (WIL) is not a new concept, but there is increased pressure on higher education globally to address graduate employability skills. The sector is under pressure in an increasingly competitive environment to demonstrate the relevance of courses, accountability and effective use of public funds (Peach & Gamble, 2011). In the Australian context this also means responding to the skills shortage in areas such as engineering, health, construction and business (DEEWR, 2010). This paper provides a brief overview of collaborative efforts over several years to improve the activity of WIL at the Queensland University of Technology (QUT). These efforts have resulted in changes to curriculum, pedagogy, systems and processes, and the initiation of local, regional, national and international networks. The willingness of students, staff, and industry partners to ‘get stuck in’ and try new approaches in these different contexts can be understood as a form of boundary spanning.
That is, the development of the capability to mediate between different forms of expertise and the demands of different contexts in order to nurture student learning and improve the outcomes of higher education through WIL (Peach, Cates, Ilg, Jones, & Lechleiter, 2011).

Resumo:

Australian women make decisions about return to paid work and care for their child within a policy environment that presents mixed messages about maternal employment and child care standards. Against this background an investigation of first-time mothers’ decision-making about workforce participation and child care was undertaken. Four women were studied from pregnancy through the first postnatal year using interview and diary methods. Inductive analyses identified three themes, all focused on dimensions of family security: financial security relating to family income, emotional security relating to child care quality, and pragmatic security relating to child care access. The current policy changes that aim to increase child care quality standards in Australia present a positive step toward alleviating family insecurities but are insufficient to alleviate the evidently high levels of tension between workforce participation and family life experienced by women transitioning back into the workforce in Australia.

Resumo:

Electronic services are a leitmotif in ‘hot’ topics like Software as a Service, Service-Oriented Architecture (SOA), Service-Oriented Computing, Cloud Computing, application markets and smart devices. We propose to consider these within what has been termed the Service Ecosystem (SES). The SES encompasses all levels of electronic services and their interaction, with human consumption and initiation on its periphery, in much the same way the ‘Web’ describes a plethora of technologies that combine to connect information and expose it to humans. Presently, the SES is heterogeneous, fragmented and confined to semi-closed systems. A key issue hampering the emergence of an integrated SES is Service Discovery (SD). An SES will be dynamic, with areas of structured and unstructured information within which service providers and ‘lay’ human consumers interact; until now the two have been disjoint, e.g., SOA-enabled organisations, industries and domains are choreographed by domain experts or ‘hard-wired’ to smart-device application markets and web applications. In an SES, services are accessible, comparable and exchangeable to human consumers, closing the gap to the providers. This requires a new form of SD with which humans can discover services transparently and effectively, without special knowledge or training. We propose two modes of discovery: directed search, which follows an agenda, and explorative search, which speculatively expands knowledge of an area of interest by means of categories. Inspired by conceptual space theory from cognitive science, we propose to implement the modes of discovery using concepts that map a lay consumer’s service need to terminologically sophisticated descriptions of services. To this end, we reframe SD as an information retrieval task on the information attached to services, such as descriptions, reviews, documentation and web sites: the Service Information Shadow.
The Semantic Space model transforms the shadow's unstructured semantic information into a geometric, concept-like representation. We introduce an improved and extended Semantic Space that includes categorization, calling it the Semantic Service Discovery model. We evaluate the model on a highly relevant, service-related corpus simulating a Service Information Shadow, including manually constructed complex service agendas as well as manual groupings of services, and compare it against state-of-the-art information retrieval systems and clustering algorithms. Through an extensive series of empirical evaluations, we establish optimal parameter settings for the semantic space model. The evaluations demonstrate the model's effectiveness for SD in terms of retrieval precision over state-of-the-art information retrieval models (directed search), and show meaningful, automatic categorization of service-related information, which has the potential to form the basis of a useful, cognitively motivated map of the SES for exploratory search.
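Reframing SD as information retrieval over the Service Information Shadow can be sketched with a toy TF-IDF/cosine-similarity "directed search". The real Semantic Space model is considerably more sophisticated (geometric and concept-based), and the services and query below are invented for illustration:

```python
from collections import Counter
from math import log, sqrt

# Toy Service Information Shadow: free-text descriptions attached to
# services (hypothetical service names and texts, for illustration only)
shadow = {
    "PayFast":    "send money abroad international payment transfer fees",
    "CloudStore": "store files cloud backup storage sync",
    "TripPlan":   "plan trips book flights hotels travel",
}

def tfidf_vectors(docs):
    """Embed each document as a sparse term -> TF-IDF weight mapping."""
    tokenised = {name: text.split() for name, text in docs.items()}
    n = len(docs)
    df = Counter(t for toks in tokenised.values() for t in set(toks))
    # Smoothed IDF so ubiquitous terms keep a small positive weight
    idf = {t: log((1 + n) / (1 + df[t])) + 1 for t in df}
    vecs = {name: {t: c * idf[t] for t, c in Counter(toks).items()}
            for name, toks in tokenised.items()}
    return vecs, idf

def cosine(u, v):
    dot = sum(u.get(t, 0) * w for t, w in v.items())
    nu = sqrt(sum(w * w for w in u.values()))
    nv = sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vecs, idf = tfidf_vectors(shadow)

# Directed search: a lay consumer's agenda expressed in plain words
query = "transfer money abroad"
qvec = {t: c * idf.get(t, 1.0) for t, c in Counter(query.split()).items()}
best = max(shadow, key=lambda name: cosine(qvec, vecs[name]))
```

This captures the directed-search mode only; the explorative mode additionally needs the automatic categorization of service information that the Semantic Service Discovery model adds.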