Abstract:
The common approach to estimating bus dwell time at a BRT station is to apply the traditional dwell time methodology derived for suburban bus stops. Although sensitive to boarding and alighting passenger numbers and, to some extent, to the fare collection medium, these traditional dwell time models do not account for platform crowding. Moreover, they fall short in accounting for the effects of passengers walking along a relatively long BRT platform. Using experience from Brisbane busway (BRT) stations, a new variable, Bus Lost Time (LT), is introduced into the traditional dwell time model. The bus lost time variable captures the impact of passenger walking and platform crowding on bus dwell time, two characteristics that differentiate a BRT station from a bus stop. This paper reports the development of a methodology to estimate the bus lost time experienced by buses at a BRT platform. Results were compared with the Transit Capacity and Quality of Service Manual (TCQSM) approach to dwell time and station capacity estimation. When bus lost time was included in the dwell time calculations, the estimated BRT station platform capacity was reduced by 10.1%.
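As a rough illustration of the kind of adjustment described above, the sketch below adds a lost-time term to a conventional TCQSM-style dwell time calculation. The function name, default service times and the simple additive form are assumptions for illustration, not the paper's actual model.

```python
# Illustrative TCQSM-style dwell time calculation with an added lost-time term;
# the additive form and default values are assumptions, not the paper's model.

def dwell_time(boarding, alighting, t_board=3.0, t_alight=2.0,
               t_doors=4.0, lost_time=0.0):
    """Estimated dwell time (s) at the busiest door channel.

    boarding, alighting -- passengers through the busiest door
    t_board, t_alight   -- service time per passenger (s), fare-media dependent
    t_doors             -- door opening/closing time (s)
    lost_time           -- extra time (s) from passengers walking along the
                           platform and from platform crowding (the LT term)
    """
    return boarding * t_board + alighting * t_alight + t_doors + lost_time


# Example: 8 boardings, 3 alightings, 6 s of lost time on a crowded platform.
print(dwell_time(8, 3, lost_time=6.0))   # 40.0 s vs 34.0 s without lost time
```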
Abstract:
The common approach to estimating bus dwell time at a BRT station platform is to apply the traditional dwell time methodology derived for suburban bus stops. Current dwell time models are sensitive to bus type and fare collection policy, along with the number of boarding and alighting passengers. However, they fall short in accounting for the effects of passengers walking along a relatively long BRT station platform. Analysis presented in this paper shows that the average walking time of a passenger at a BRT platform is 10 times that at a bus stop. The need to walk to the bus entry door on the BRT station platform may lead to the bus experiencing a higher dwell time. This paper presents a theory for a BRT network that explains the loss of station capacity during peak-period operation. It also highlights shortcomings of the currently available bus dwell time models suggested for the analysis of BRT operation.
Abstract:
Background: The 2003 Bureau of Labor Statistics American Time Use Survey (ATUS) contains 438 distinct primary activity variables that can be analyzed with regard to how Americans spend their time. The Compendium of Physical Activities is used to code physical activities derived from various surveys, logs, diaries, etc., to facilitate comparison of coded intensity levels across studies. Methods: This paper describes the methods, challenges, and rationale for linking Compendium estimates of physical activity intensity (METs, metabolic equivalents) with all activities reported in the 2003 ATUS. Results: The assigned ATUS intensity levels are not intended to compute the energy costs of physical activity in individuals. Instead, they are intended to be used to identify time spent in activities broadly classified by type and intensity. This function will complement public health surveillance systems and aid in policy and health-promotion activities. For example, at least one of the future projects of this process is the descriptive epidemiology of time spent in common physical activity intensity categories. Conclusions: The process of metabolic coding of the ATUS by linking it with the Compendium of Physical Activities can make important contributions to our understanding of Americans' time spent in health-related physical activity.
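A minimal sketch of the kind of linkage described above: mapping time-use activity codes to Compendium MET values and tallying minutes by intensity. The activity codes and MET values are illustrative placeholders, not the official ATUS-Compendium linkage; the <3, 3-6 and >6 MET cut-points are the conventional light/moderate/vigorous categories.

```python
# Sketch of linking time-use activity codes to Compendium MET values and
# classifying intensity. Codes and MET values below are illustrative
# placeholders, not the official ATUS/Compendium linkage.

MET_LOOKUP = {           # activity code -> assumed MET value
    "washing_dishes": 2.5,
    "walking_for_exercise": 3.5,
    "running": 8.0,
}

def intensity(met):
    """Conventional intensity categories by MET level."""
    if met < 3.0:
        return "light"
    if met <= 6.0:
        return "moderate"
    return "vigorous"

# Hypothetical diary: (activity code, minutes reported).
diary = [("washing_dishes", 30), ("walking_for_exercise", 45), ("running", 20)]

minutes_by_intensity = {}
for code, minutes in diary:
    cat = intensity(MET_LOOKUP[code])
    minutes_by_intensity[cat] = minutes_by_intensity.get(cat, 0) + minutes

print(minutes_by_intensity)   # {'light': 30, 'moderate': 45, 'vigorous': 20}
```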
Abstract:
Where airports were once the sole responsibility of their governments, liberalisation of economies has seen administrative interests in airport spaces increasingly divested towards market-led authority. Extant literature suggests that actions in decision spaces can be described under broad idealised forms of governance. However, in looking at a sample of 18 different airports, it is apparent that these classic models are insufficient to appreciate the contextual complexity of each case. Issues of institutional arrangements, privatisation, and management focus are reviewed against existing governance modes to produce a model for informing privatisation decisions, based on the contextual needs of the individual airport and region. Expanding governance modes to include emergent airport arrangements both contributes to the existing literature and provides a framework to assist policy makers and those charged with the operation of airports to design effective governance models. In progressing this framework, contributions are made to government decision makers for the development of new, or the review of existing, strategies for privatisation, while the private sector can identify the intent and expectations of privatisation initiatives to make better informed decisions.
Abstract:
Purpose: The purpose of this paper is to provide a labour process theory interpretation of four case studies within the Australian construction industry. In each case study a working time intervention (a shift to a five-day working week from the industry-standard six days) was implemented as an attempt to improve the work-life balance of employees. Design/methodology/approach: This paper is based on four case studies with mixed methods. Each case study used a variety of data collection methods, including questionnaires, short and long interviews, and focus groups. Findings: It was found that the complex mix of wage- and salary-earning staff within the construction industry, along with labour market pressures, means that changing to a five-day working week is quite a radical notion within the industry. However, some organisations are willing to explore opportunities for change, with mixed experiences. Practical implications: The practical implications of this research include understanding the complexity within the Australian construction industry, based around hours of work and pay systems. Decision-makers within the construction industry must recognise a range of competing pressures that mean that "preferred" managerial styles might not be appropriate. Originality/value: This paper shows that construction firms must take an active approach to reducing the culture of long working hours. This can only be achieved by addressing issues of project timelines and budgets and ensuring that take-home pay is not reliant on long hours of overtime.
Abstract:
This article examined the relationship between time structure and Macan's process model of time management. The study proposed that time structure ('appraisal of effective time usage') would be a more parsimonious mediator than perceived control over time in the relationship between time management behaviours and outcome variables such as job satisfaction and psychological well-being. Alternative structural models were compared using a sample of 111 university students. Model 1 tested Macan's process model of time management with perceived control over time as the mediator. Model 2 replaced perceived control over time with the construct of time structure. Model 3 examined the possibility of perceived control over time and time structure acting as parallel mediators of the relationships between time management and outcomes. Results showed that Model 1 and Model 2 fitted the data equally well. However, the mediated effects were small and partial in both models. This pattern of results calls for a reassessment of the process model.
Abstract:
Automated visual surveillance of crowds is a rapidly growing area of research. In this paper we focus on motion representation for the purpose of abnormality detection in crowded scenes. We propose a novel visual representation called textures of optical flow. The proposed representation measures the uniformity of a flow field in order to detect anomalous objects such as bicycles, vehicles and skateboarders, and can be combined with spatial information to detect other forms of abnormality. We demonstrate that the proposed approach outperforms state-of-the-art anomaly detection algorithms on a large, publicly available dataset.
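The abstract does not detail the textures-of-optical-flow descriptor, so the sketch below only illustrates the general idea of scoring the uniformity of a dense flow field in local blocks. Farneback flow and a block-wise variance-to-mean score are stand-ins, not the authors' method.

```python
# Rough proxy for flow-field uniformity in local blocks; the paper's actual
# "textures of optical flow" descriptor is not specified in the abstract,
# so this block-variance measure is only an illustrative stand-in.
import cv2
import numpy as np

def flow_uniformity_map(prev_gray, gray, block=16):
    # Dense Farneback optical flow between two consecutive grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)          # per-pixel flow magnitude
    h, w = mag.shape
    scores = np.zeros((h // block, w // block))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            patch = mag[i*block:(i+1)*block, j*block:(j+1)*block]
            # High variance relative to the mean suggests non-uniform motion,
            # e.g. a fast rigid object moving through a pedestrian crowd.
            scores[i, j] = patch.var() / (patch.mean() + 1e-6)
    return scores   # threshold high scores to flag candidate anomalies
```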
Abstract:
Current knowledge about the relationship between transport disadvantage and activity space size is limited to urban areas, and as a result very little is known to date about this link in a rural context. In addition, although research has identified transport disadvantaged groups based on the size of their activity spaces, these studies have not empirically explained such differences, and the result is often a poor identification of the problems facing disadvantaged groups. Research has shown that transport disadvantage varies over time. The static nature of analysis using the activity space concept in previous research has lacked the ability to identify transport disadvantage in time. Activity space is a dynamic concept and therefore possesses great potential for capturing temporal variations in behaviour and access to opportunities. This research derives measures of the size and fullness of activity spaces for 157 individuals for weekdays, weekends, and a whole week, using weekly activity-travel diary data from three case study areas located in rural Northern Ireland. Four focus groups were also conducted in order to triangulate the quantitative findings and to explain the differences between socio-spatial groups. The findings show that despite having smaller activity spaces, individuals were not disadvantaged because they were able to access their required activities locally. Car ownership was found to be an important lifeline in rural areas; temporal disaggregation of the data reveals that this is true only at weekends, due to a lack of public transport services. In addition, despite activity spaces being of a similar size, the fullness of the activity spaces of low-income individuals was found to be significantly lower than that of their high-income counterparts. Focus group data show that financial constraints and poor connections, both between public transport services and between transport routes and opportunities, forced individuals to participate in activities located along the main transport corridors.
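One common way to operationalise activity space size is the area of the convex hull around a person's visited activity locations. The paper's exact size and fullness measures are not given in the abstract, so the sketch below, with hypothetical coordinates, is only indicative.

```python
# Illustrative activity space size as the convex hull area of visited activity
# locations; the study's actual size and "fullness" measures are not specified.
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical projected coordinates (metres) of one respondent's weekly
# activity locations taken from a travel diary.
locations = np.array([
    [0, 0], [1200, 300], [800, -900], [2500, 1500], [400, 2100], [-600, 700],
])

hull = ConvexHull(locations)
area_km2 = hull.volume / 1e6   # for 2-D input, ConvexHull.volume is the area
print(f"Activity space (convex hull) area: {area_km2:.2f} km^2")
```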
Abstract:
This paper discusses the relationship between law and morality. Morality does not necessarily coincide with the law, but it contributes to it. An act may be legal but nevertheless considered immoral in a particular society. For example, the use of pornography may be considered by many to be immoral, yet the sale and distribution of non-violent, non-child-related, sexually explicit material is legal (or regulated) in many jurisdictions. Many laws are informed by, and even created by, morality. This paper examines the historical influence of morality on the law and on society in general. It aims to develop a theoretical framework for examining legal moralism and the social construction of morality and crime, as well as the relationship between sex, desire and taboo. Here, we refer to the moral temporality of sex and taboo, which examines the way in which moral judgments about sex and what is considered taboo change over time, and the kinds of justifications that are employed in support of changing moralities. It unpacks the way in which abstract and highly tenuous concepts such as "desire", "art" and "entertainment" may be "out of time" with morality, and how morality shapes laws over time, fabricating justifications from within socially constructed communities of practice. This theoretical framework maps the way in which these concepts have become temporally dominated by heteronormative structures such as the family, marriage, reproduction, and longevity. It is argued that the logic of these structures is inexorably tied to the heterosexual life-path, charting individual lives and relationships through explicit phases of childhood, adolescence and adulthood that, in the twenty-first century, delimit the boundaries of taboo surrounding sex more than at any other time in history.
Abstract:
Objective: To quantify the lagged effects of mean temperature on deaths from cardiovascular diseases in Brisbane, Australia. Design: Polynomial distributed lag models were used to assess the percentage increase in mortality up to 30 days associated with an increase (or decrease) of 1°C above (or below) the threshold temperature. Setting: Brisbane, Australia. Patients: 22 805 cardiovascular deaths registered between 1996 and 2004. Main outcome measures: Deaths from cardiovascular diseases. Results: The results show a longer lagged effect on cold days and a shorter lagged effect on hot days. For the hot effect, a statistically significant association was observed only for lags of 0-1 days. The percentage increase in mortality was 3.7% (95% CI 0.4% to 7.1%) for people aged ≥65 years and 3.5% (95% CI 0.4% to 6.7%) for all ages, associated with an increase of 1°C above the threshold temperature of 24°C. For the cold effect, a significant effect of temperature was found for lags of 10-15 days. The percentage estimates for older people and all ages were 3.1% (95% CI 0.7% to 5.7%) and 2.8% (95% CI 0.5% to 5.1%), respectively, with a decrease of 1°C below the threshold temperature of 24°C. Conclusions: The lagged effects lasted longer for cold temperatures but appeared to be shorter for hot temperatures. There was no substantial difference in the lag effect of temperature on mortality between all ages and those aged ≥65 years in Brisbane, Australia.
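A minimal sketch of an Almon-style polynomial distributed lag regression of the kind referred to above, fitted as a Poisson GLM on simulated data. The lag length, polynomial degree, threshold handling and the absence of covariate adjustment are assumptions for illustration, not the study's actual specification.

```python
# Almon-style polynomial distributed lag sketch: daily death counts regressed
# on temperature excesses above a threshold over lags 0-30, with lag
# coefficients constrained to a cubic polynomial. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, max_lag, degree, threshold = 1000, 30, 3, 24.0
temp = 20 + 8 * np.sin(np.arange(n) / 58.0) + rng.normal(0, 2, n)
deaths = rng.poisson(7, n)                      # simulated daily CVD deaths

heat = np.clip(temp - threshold, 0, None)       # degrees above threshold
# Lag matrix: column k holds the heat exposure k days earlier.
L = np.column_stack([np.roll(heat, k) for k in range(max_lag + 1)])[max_lag:]
y = deaths[max_lag:]

# Polynomial basis over lags; lag coefficients are B @ theta (Almon constraint).
B = np.vander(np.arange(max_lag + 1), degree + 1, increasing=True)
Z = L @ B

model = sm.GLM(y, sm.add_constant(Z), family=sm.families.Poisson()).fit()
lag_coefs = B @ model.params[1:]                # effect of +1 degree at each lag
print(np.round(100 * (np.exp(lag_coefs) - 1), 2))   # % change in deaths by lag
```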
Abstract:
Suburbanisation has been a major international phenomenon in recent decades. Suburb-to-suburb routes are now the most common road journeys, and this has resulted in an increase in distances travelled, particularly on faster suburban highways. The design of highways tends to over-simplify the driving task, and this can result in decreased alertness. Driving behaviour is consequently impaired and drivers are then more likely to be involved in road crashes. This is particularly dangerous on highways where the speed limit is high. While effective countermeasures to this decrement in alertness do not currently exist, the development of in-vehicle sensors opens avenues for monitoring driving behaviour in real time. The aim of this study is to evaluate in real time the level of alertness of the driver through surrogate measures that can be collected from in-vehicle sensors. Slow EEG activity is used as a reference to evaluate the driver's alertness. Data are collected in a driving simulator instrumented with an eye tracking system, a heart rate monitor and an electrodermal activity device (N=25 participants). Four different types of highways (driving scenarios of 40 minutes each) are implemented through the variation of the road design (amount of curves and hills) and the roadside environment (amount of buildings and traffic). We show with Neural Networks that reduced alertness can be detected in real time with an accuracy of 92% using lane positioning, steering wheel movement, head rotation, blink frequency, heart rate variability and skin conductance level. These results show that it is possible to assess the driver's alertness with surrogate measures. Such a methodology could be used to warn drivers of reduced alertness through the development of an in-vehicle device that monitors driver behaviour on highways in real time, and it could therefore result in improved road safety.
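As an indicative sketch of classifying alertness from the surrogate measures listed above, the snippet below trains a small neural network on synthetic data. The feature construction, labels and architecture are assumptions; the study's actual model and its EEG-based reference labels are not reproduced here.

```python
# Sketch of a surrogate-measure alertness classifier in the spirit of the study;
# data are synthetic and the architecture is an assumption, not the authors'
# model (which used slow EEG activity as the reference for alertness).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 2000
# Columns: lane position SD, steering wheel movement, head rotation,
# blink frequency, heart rate variability, skin conductance level.
X = rng.normal(size=(n, 6))
# Synthetic label: 1 = reduced alertness (would come from slow EEG activity).
y = ((0.8 * X[:, 0] + 0.6 * X[:, 3] - 0.5 * X[:, 5]
      + rng.normal(0, 0.5, n)) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                  random_state=0))
clf.fit(X_tr, y_tr)
print(f"Hold-out accuracy: {clf.score(X_te, y_te):.2f}")
```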
Abstract:
This paper presents a method for measuring the in-bucket payload volume on a dragline excavator for the purpose of estimating the material's bulk density in real time. Knowledge of the payload's bulk density can provide feedback to mine planning and scheduling to improve blasting and therefore provide a more uniform bulk density across the excavation site. This allows a single optimal bucket size to be used for maximum overburden removal per dig and, in turn, reduces costs and emissions in dragline operation and maintenance. The proposed solution uses a range-bearing laser to locate and scan full buckets between the lift and dump stages of the dragline cycle. The bucket is segmented from the scene using cluster analysis, and the pose of the bucket is calculated using the Iterative Closest Point (ICP) algorithm. Payload points are identified using a known model and subsequently converted into a height grid for volume estimation. Results from both scaled and full-scale implementations show that this method can achieve an accuracy above 95%.
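The sketch below illustrates only the final step described above: converting payload points (assumed already expressed in the bucket frame, e.g. after ICP alignment) into a height grid and integrating a volume. The grid resolution and the synthetic example are assumptions; the segmentation and pose estimation steps are omitted.

```python
# Illustrative final step only: payload points -> max-height grid -> volume.
# Grid resolution and the bucket frame are assumptions for this sketch.
import numpy as np

def payload_volume(points, cell=0.1):
    """points: (N, 3) array of x, y, z payload points in the bucket frame (m).
    Returns an estimated volume in cubic metres from a max-height grid."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ix = np.floor((x - x.min()) / cell).astype(int)
    iy = np.floor((y - y.min()) / cell).astype(int)
    heights = np.zeros((ix.max() + 1, iy.max() + 1))
    # Keep the highest point seen in each grid cell.
    np.maximum.at(heights, (ix, iy), np.clip(z, 0, None))
    return heights.sum() * cell * cell

# Example with synthetic points forming a 2 m x 1 m slab about 0.5 m high
# (expected volume roughly 1 m^3).
rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(0, 2, 20000),
                       rng.uniform(0, 1, 20000),
                       rng.uniform(0.4, 0.5, 20000)])
print(payload_volume(pts))
```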
Abstract:
In 2008, a three-year pilot 'pay for performance' (P4P) program, known as the 'Clinical Practice Improvement Payment' (CPIP), was introduced into Queensland Health (QHealth). QHealth is a large public health sector provider of acute, community, and public health services in Queensland, Australia. The organisation has recently embarked on a significant reform agenda, including a review of existing funding arrangements (Duckett et al., 2008). Partly in response to this reform agenda, a casemix funding model has been implemented to reconnect health care funding with outcomes. CPIP was conceptualised as a performance-based scheme that rewarded quality with financial incentives. This is the first time such a scheme has been implemented in the public health sector in Australia with a focus on rewarding quality, and it is unique in that it has a large state-wide focus and includes 15 Districts. CPIP initially targeted five acute and community clinical areas: Mental Health, Discharge Medication, Emergency Department, Chronic Obstructive Pulmonary Disease, and Stroke. The CPIP scheme was designed around key concepts, including the identification of clinical indicators that met the set criteria of high disease burden, a well-defined single diagnostic group or intervention, significant variations in clinical outcomes and/or practices, good evidence, and clinician control and support (Ward, Daniels, Walker & Duckett, 2007). This evaluative research targeted Phase One of the implementation of the CPIP scheme, from January 2008 to March 2009. A formative evaluation utilising a mixed methodology and complementarity analysis was undertaken. The research involved three research questions and aimed to determine the knowledge, understanding, and attitudes of clinicians; identify improvements to the design, administration, and monitoring of CPIP; and determine the financial and economic costs of the scheme. Three key studies were undertaken to address these research questions. Firstly, a survey of clinicians was undertaken to examine levels of knowledge and understanding and their attitudes to the scheme. Secondly, the study sought to apply Statistical Process Control (SPC) to the process indicators to assess whether this enhanced the scheme, and thirdly, a simple economic cost analysis was undertaken. The CPIP survey of clinicians elicited 192 respondents. Over 70% of these respondents were supportive of the continuation of the CPIP scheme. This finding was also supported by the results of a quantitative attitude survey that identified positive attitudes in 6 of the 7 domains, including impact, awareness and understanding, and clinical relevance, all being scored positively across the combined respondent group. SPC as a trending tool may play an important role in the early identification of indicator weakness for the CPIP scheme. This evaluative research supports a previously identified need in the literature for a phased introduction of pay for performance (P4P) programs. It further highlights the value of undertaking a formal risk assessment of clinician, management, and systemic levels of literacy and competency with the measurement and monitoring of quality prior to a phased implementation. This phasing can then be guided by a P4P Design Variable Matrix, which provides a selection of program design options such as indicator targets and payment mechanisms.
It became evident that a clear process is required to standardise how clinical indicators evolve over time and to direct movement towards more rigorous pay-for-performance targets and the development of an optimal funding model. Use of this matrix will enable the scheme to mature and build the literacy and competency of clinicians and the organisation as implementation progresses. Furthermore, the research identified that CPIP created a spotlight on clinical indicators, and incentive payments of over five million dollars, from a potential ten million, were secured across the five clinical areas in the first 15 months of the scheme. This indicates that quality was rewarded in the new QHealth funding model and, despite issues being identified with the payment mechanism, funding was distributed. The economic model used identified a relatively low cost of reporting (under $8,000) as opposed to funds secured of over $300,000 for mental health, as an example. Movement to a full cost-effectiveness study of CPIP is supported. Overall, the introduction of the CPIP scheme into QHealth has been a positive and effective strategy for engaging clinicians in quality and has been the catalyst for the identification and monitoring of valuable clinical process indicators. This research has highlighted that clinicians are supportive of the scheme in general; however, there are some significant risks, including the functioning of the CPIP payment mechanism. Given clinician support for the use of a pay-for-performance methodology in QHealth, the CPIP scheme has the potential to be a powerful addition to a multi-faceted suite of quality improvement initiatives within QHealth.
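As an indication of how SPC could serve as a trending tool for a CPIP-style process indicator, the sketch below builds a simple Shewhart p-chart from monthly numerator/denominator counts. The counts are synthetic and the indicator is hypothetical, not one of the evaluation's actual indicators.

```python
# Illustrative Shewhart p-chart for a monthly process indicator (e.g. the
# proportion of eligible patients meeting a clinical indicator). Counts are
# synthetic and used only to show how out-of-control months would be flagged.
import numpy as np

numerators   = np.array([78, 81, 74, 88, 90, 69, 85, 83, 92, 60, 87, 84])
denominators = np.array([100, 105, 95, 110, 112, 98, 104, 101, 115, 99, 108, 103])

p = numerators / denominators
p_bar = numerators.sum() / denominators.sum()          # centre line
sigma = np.sqrt(p_bar * (1 - p_bar) / denominators)    # per-month std error
ucl = p_bar + 3 * sigma
lcl = np.clip(p_bar - 3 * sigma, 0, 1)

for month, (pi, lo, hi) in enumerate(zip(p, lcl, ucl), start=1):
    flag = "  <-- investigate (special cause)" if not lo <= pi <= hi else ""
    print(f"month {month:2d}: p={pi:.2f}  limits=({lo:.2f}, {hi:.2f}){flag}")
```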
Abstract:
Background: In the last decade there has been increasing interest in the health effects of sedentary behavior, which is often assessed using self-report sitting-time questions. The aim of this qualitative study was to document older adults' understanding of sitting-time questions from the International Physical Activity Questionnaire (IPAQ) and the Physical Activity Scale for the Elderly (PASE). Methods: Australian community-dwelling adults aged 65+ years answered the IPAQ and PASE sitting questions in face-to-face semi-structured interviews. IPAQ uses one open-ended question to assess sitting on a weekday in the last 7 days 'at work, at home, while doing coursework and during leisure time'; PASE uses a three-part closed question about daily leisure-time sitting in the last 7 days. Participants expressed their thoughts out loud while answering each question and were then probed about their responses. Interviews were recorded, transcribed and coded into themes. Results: The mean age of the 28 male and 27 female participants was 73 years (range 65-89). The most frequently reported activity was watching TV. For both questionnaires, many participants had difficulty understanding which activities to report. Some had difficulty understanding which activities should be classified as 'leisure-time sitting'. Some assumed they were being asked to report only the activities provided as examples. Most reported activities they normally do, rather than those performed on a day in the previous week. Participants used a variety of strategies to select 'a day' for which they reported their sitting activities and to calculate sitting time on that day; therefore, many different ways of estimating sitting time were used. Participants had particular difficulty reporting their daily sitting time when their schedules were not consistent across days. Some participants declared the IPAQ sitting question too difficult to answer. Conclusion: The accuracy of older adults' self-reported sitting time is questionable given the challenges they have in answering sitting-time questions. Their responses may be more accurate if our recommendations for clarifying the sitting domains, providing examples relevant to older adults and suggesting strategies for formulating responses are incorporated. Future quantitative studies should include objective criterion measures to assess the validity and reliability of these questions.