Abstract:
Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. The extra-variation, or dispersion, is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord by exploring additional dispersion functions and using an independent data set, which presents an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed: four employed traffic flows as explanatory factors in the mean structure, while the remaining four also included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics. The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e., the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences on expected crash counts are likely to be different from the factors that might help to explain unaccounted-for variation in crashes across sites.
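The varying-dispersion negative binomial structure discussed here can be illustrated with a short simulation. The sketch below is a hypothetical gamma-Poisson illustration in Python, not the authors' Bayesian MCMC/Gibbs implementation; all covariates, coefficients, and sample sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical site-level data (not the Georgia dataset used in the study).
n_sites = 5000
log_aadt = rng.normal(8.0, 0.6, n_sites)   # log of entering traffic volume
x_geom = rng.normal(0.0, 1.0, n_sites)     # a standardised geometric covariate

# Mean structure: expected crash count per site.
mu = np.exp(-4.0 + 0.55 * log_aadt)

# Covariate-dependent dispersion: Var(Y) = mu + alpha * mu^2,
# with alpha itself modelled as a function of a covariate.
alpha = np.exp(-1.0 + 0.8 * x_geom)

# Negative binomial counts via the gamma-Poisson mixture.
lam = rng.gamma(shape=1.0 / alpha, scale=alpha * mu)
crashes = rng.poisson(lam)

print("mean crash count    :", round(float(crashes.mean()), 2))
print("crash count variance:", round(float(crashes.var()), 2))  # exceeds the mean: extra-Poisson variation
```

In this toy setup the sample variance exceeds the sample mean, which is the extra-Poisson variation that the dispersion function is meant to absorb.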
Abstract:
Safety at roadway intersections is of significant interest to transportation professionals due to the large number of intersections in transportation networks, the complexity of traffic movements at these locations that leads to large numbers of conflicts, and the wide variety of geometric and operational features that define them. A variety of collision types, including head-on, sideswipe, rear-end, and angle crashes, occur at intersections. While intersection crash totals may not reveal a site deficiency, over-representation of a specific crash type may reveal otherwise undetected deficiencies. Thus, there is a need to be able to model the expected frequency of crashes by collision type at intersections to enable the detection of problems and the implementation of effective design strategies and countermeasures. Statistically, it is important to consider modeling collision type frequencies simultaneously to account for the possibility of common unobserved factors affecting crash frequencies across crash types. In this paper, a simultaneous equations model of crash frequencies by collision type is developed and presented using crash data for rural intersections in Georgia. The model estimation results support the notion of the presence of significant common unobserved factors across crash types, although the impact of these factors on parameter estimates is found to be rather modest.
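The motivation for a simultaneous specification, namely common unobserved factors across crash types, can be sketched with a toy simulation. This is a minimal sketch assuming a shared lognormal site effect; it is not the paper's estimator, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical intersections sharing an unobserved site effect u (not the Georgia data).
n = 2000
log_aadt = rng.normal(8.0, 0.5, n)
u = rng.normal(0.0, 0.4, n)          # common unobserved factor

mu_angle    = np.exp(-5.5 + 0.55 * log_aadt + u)
mu_rear_end = np.exp(-6.0 + 0.60 * log_aadt + u)

angle = rng.poisson(mu_angle)
rear_end = rng.poisson(mu_rear_end)

# The shared u induces correlation between crash-type counts at the same site,
# which is what a simultaneous (multivariate) specification is meant to capture.
print("count correlation:", round(float(np.corrcoef(angle, rear_end)[0, 1]), 2))
```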
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions that come with each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states—perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed—and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
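The central point, that "excess" zeros can arise from low exposure and heterogeneous Poisson trials rather than a dual-state process, can be reproduced with a small simulation. This is a hypothetical sketch, not the paper's simulation experiment; exposure levels and risk distributions are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sites: many independent passages ("trials") with small, unequal
# crash probabilities, i.e. Poisson trials, observed over a short period.
n_sites = 20000
exposure = rng.integers(50, 500, n_sites)            # low exposure per site
p_trial = rng.lognormal(-7.0, 1.0, n_sites)          # heterogeneous per-trial risk
counts = np.array([rng.binomial(n, min(p, 1.0)) for n, p in zip(exposure, p_trial)])

obs_zero_share = float(np.mean(counts == 0))
poisson_zero_share = float(np.exp(-counts.mean()))   # zeros implied by one Poisson mean

print(f"observed share of zeros          : {obs_zero_share:.3f}")
print(f"share implied by a single Poisson: {poisson_zero_share:.3f}")
# The gap ("excess" zeros) arises from low exposure and heterogeneity,
# not from a separate "perfectly safe" state.
```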
Abstract:
A number of studies have focused on estimating the effects of accessibility on housing values by using the hedonic price model. In the majority of studies, estimation results have revealed that housing values increase as accessibility improves, although the magnitude of estimates has varied across studies. Adequately estimating the relationship between transportation accessibility and housing values is challenging for at least two reasons. First, the monocentric city assumption applied in location theory is no longer valid for many large or growing cities. Second, rather than being randomly distributed in space, housing values are clustered in space—often exhibiting spatial dependence. Recognizing these challenges, a study was undertaken to develop a spatial lag hedonic price model in the Seoul, South Korea, metropolitan region, which includes a measure of local accessibility as well as systemwide accessibility, in addition to other model covariates. Although the accessibility measures can be improved, the modeling results suggest that the spatial interactions of apartment sales prices occur across and within traffic analysis zones, and the sales prices for apartment communities are devalued as accessibility deteriorates. Consistent with findings in other cities, this study revealed that the distance to the central business district is still a significant determinant of sales price.
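The spatial lag hedonic specification described above, price = rho·W·price + X·beta + eps, can be sketched through its reduced form. The snippet below simulates and solves that form with an invented k-nearest-neighbour weight matrix; it is not the Seoul model or its estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy spatial lag hedonic model: price = rho * W @ price + X @ beta + eps.
# Coordinates, weights, and coefficients are hypothetical, not the Seoul data.
n = 300
coords = rng.uniform(0.0, 10.0, size=(n, 2))

# Row-standardised k-nearest-neighbour spatial weights (k = 5).
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
np.fill_diagonal(d, np.inf)
W = np.zeros((n, n))
for i, idx in enumerate(np.argsort(d, axis=1)[:, :5]):
    W[i, idx] = 1.0 / 5.0

# Covariates: distance to a CBD at the centroid and a local accessibility score.
dist_cbd = np.linalg.norm(coords - coords.mean(axis=0), axis=1)
local_access = rng.normal(0.0, 1.0, n)
X = np.column_stack([np.ones(n), dist_cbd, local_access])
beta = np.array([12.0, -0.4, 0.6])      # price falls with distance to the CBD

rho = 0.45                              # strength of the spatial lag
eps = rng.normal(0.0, 0.5, n)

# Reduced form: price = (I - rho * W)^-1 (X @ beta + eps)
price = np.linalg.solve(np.eye(n) - rho * W, X @ beta + eps)
print("mean simulated log price:", round(float(price.mean()), 2))
```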
Abstract:
Many studies focused on the development of crash prediction models have resulted in aggregate crash prediction models that quantify the safety effects of geometric, traffic, and environmental factors on the expected number of total, fatal, injury, and/or property damage crashes at specific locations. Crash prediction models focused on predicting different crash types, however, have rarely been developed. Crash type models are useful for at least three reasons. The first is motivated by the need to identify sites that are high risk with respect to specific crash types but that may not be revealed through crash totals. Second, countermeasures are likely to affect only a subset of all crashes—usually called target crashes—and so examination of crash types will lead to an improved ability to identify effective countermeasures. Finally, there is a priori reason to believe that different crash types (e.g., rear-end, angle, etc.) are associated with road geometry, the environment, and traffic variables in different ways and as a result justify the estimation of individual predictive models. The objectives of this paper are to (1) demonstrate that different crash types are associated with predictor variables in different ways (as theorized) and (2) show that estimation of crash type models may lead to greater insights regarding crash occurrence and countermeasure effectiveness. This paper first describes the estimation results of crash prediction models for angle, head-on, rear-end, sideswipe (same direction and opposite direction), and pedestrian-involved crash types. Serving as a basis for comparison, a crash prediction model is also estimated for total crashes. Based on 837 motor vehicle crashes collected at two-lane rural intersections in the state of Georgia, six prediction models are estimated, resulting in two Poisson (P) models and four negative binomial (NB) models. The analysis reveals that factors such as the annual average daily traffic, the presence of turning lanes, and the number of driveways have a positive association with each type of crash, whereas median widths and the presence of lighting are negatively associated. For the best-fitting models, covariates are related to crash types in different ways, suggesting that crash types are associated with different precrash conditions and that modeling total crash frequency may not be helpful for identifying specific countermeasures.
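Estimating one count model per crash type, as described above, can be sketched with standard GLM tooling. The example below fits separate Poisson regressions to simulated stand-in data; the column names, coefficients, and sample size are hypothetical, not the Georgia dataset or the paper's final specifications.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Stand-in intersection data (hypothetical, not the 837-crash Georgia dataset).
n = 400
df = pd.DataFrame({
    "log_aadt":    rng.normal(8.5, 0.5, n),
    "turn_lane":   rng.integers(0, 2, n),
    "n_driveways": rng.poisson(2, n),
})
X = sm.add_constant(df)

# Simulate two crash types that respond to the covariates differently.
mu_angle    = np.exp(-6.0 + 0.60 * df.log_aadt + 0.30 * df.turn_lane)
mu_rear_end = np.exp(-7.0 + 0.70 * df.log_aadt + 0.10 * df.n_driveways)
counts = {"angle": rng.poisson(mu_angle), "rear_end": rng.poisson(mu_rear_end)}

# One Poisson GLM per crash type; sm.families.NegativeBinomial() could be
# swapped in where overdispersion is detected.
for crash_type, y in counts.items():
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    print(crash_type, fit.params.round(2).to_dict())
```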
Abstract:
Emergency departments (EDs) are often the first point of contact with an abused child. Despite legal mandate, the reporting of definite or suspected abusive injury to child safety authorities by ED clinicians varies due to a number of factors including training, access to child safety professionals, departmental culture and a fear of ‘getting it wrong’. This study examined the quality of documentation and coding of child abuse captured by ED based injury surveillance data and ED medical records in the state of Queensland and the concordance of these data with child welfare records. A retrospective medical record review was used to examine the clinical documentation of almost 1000 injured children included in the Queensland Injury Surveillance Unit database (QISU) from 10 hospitals in urban and rural centres. Independent experts re-coded the records based on their review of the notes. A data linkage methodology was then used to link these records with records in the state government’s child welfare database. Cases were sampled from three sub-groups according to the surveillance intent codes: Maltreatment by parent, Undetermined and Unintentional injury. Only 0.1% of cases coded as unintentional injury were recoded to maltreatment by parent, while 1.2% of cases coded as maltreatment by parent were reclassified as unintentional and 5% of cases where the intent was undetermined by the triage nurse were recoded as maltreatment by parent. Quality of documentation varied across type of hospital (tertiary referral centre, children’s, urban, regional and remote). Concordance of health data with child welfare data varied across patient subgroups. Outcomes from this research will guide initiatives to improve the quality of intentional child injury surveillance systems.
Abstract:
Objective: With growing recognition of the role of inflammation in the development of chronic and acute disease, fish oil is increasingly used as a therapeutic agent, but the nature of the intervention may pose barriers to adherence in clinical populations. Our objective was to investigate the feasibility of using a fish oil supplement in hemodialysis patients. Design: This was a nonrandomized intervention study. Setting: Eligible patients were recruited at the Hemodialysis Unit of Wesley Hospital, Brisbane, Queensland, Australia. Patients: The sample included 28 maintenance hemodialysis patients out of 43 eligible patients in the unit. Exclusion criteria included patients regularly taking a fish oil supplement at baseline, receiving hemodialysis for less than 3 months, or being unable to give informed consent. Intervention: Eicosapentaenoic acid (EPA) was administered at 2000 mg/day (4 capsules) for 12 weeks. Adherence was measured at baseline and weekly throughout the study according to changes in plasma EPA, and was further measured subjectively by self-report. Results: Twenty patients (74%) adhered to the prescription based on changes in plasma EPA, whereas an additional two patients self-reported good adherence. There was a positive relationship between fish oil intake and change in plasma EPA. Most patients did not report problems with taking the fish oil. Using the baseline data, it was not possible to characterize adherent patients. Conclusions: Despite potential barriers, including the need to take a large number of prescribed medications already, 74% of hemodialysis patients adhered to the intervention. This study demonstrated the feasibility of using fish oil in a clinical population.
Abstract:
Adherence to medicines is a major determinant of the effectiveness of medicines. However, estimates of non-adherence in the older-aged with chronic conditions vary from 40 to 75%. The problems caused by non-adherence in the older-aged include residential care and hospital admissions, progression of the disease, and increased costs to society. The reasons for non-adherence in the older-aged include items related to the medicine (e.g. cost, number of medicines, adverse effects) and those related to the person (e.g. cognition, vision, depression). It is also known that there are many ways adherence can be increased (e.g. use of blister packs, cues). It is assumed that interventions by allied health professions, including a discussion of adherence, will improve adherence to medicines in the older-aged, but the evidence for this has not been reviewed. There is some evidence that telephone counselling about adherence by a nurse or pharmacist does improve adherence, both short- and long-term. However, face-to-face intervention counselling at the pharmacy, or during a home visit by a pharmacist, has shown variable results, with some studies showing improved adherence and some not. Education programs during hospital stays have not been shown to improve adherence on discharge, but education programs for subjects with hypertension have been shown to improve adherence. In combination with an education program, both counselling and a medicine review program have been shown to improve adherence short-term in the older-aged. Thus, there are many unanswered questions about the most effective interventions to promote adherence. More studies are needed to determine the most appropriate interventions by allied health professions, and these need to consider the disease state, demographics, and socio-economic status of the older-aged subject, and the intensity and duration of intervention needed.
Abstract:
In Australia, road crash trauma costs the nation A$15 billion annually, whilst the US estimates an economic impact of around US$230 billion on its network. The worldwide economic cost of road crashes is estimated to be around US$518 billion each year. Road accidents occur due to a number of factors including driver behaviour, geometric alignment, vehicle characteristics, environmental impacts, and the type and condition of the road surfacing. Skid resistance is considered one of the most important road surface characteristics because it has a direct effect on traffic safety. In 2005, Austroads (the Association of Australian and New Zealand Road Transport and Traffic Authorities) published a guideline for the management of skid resistance, and the Queensland Department of Main Roads (QDMR) developed a skid resistance management plan (SRMP). The current QDMR strategy is based on a rational analytical methodology supported by field inspection with related asset management decision tools. The Austroads guideline and QDMR's skid resistance management plan have prompted QDMR to review its skid resistance management practice. As a result, a joint research project involving QDMR, Queensland University of Technology (QUT) and the Cooperative Research Centre for Integrated Engineering Asset Management (CRC CIEAM) was formed. The research project aims to investigate whether there is a significant relationship between road crashes and skid resistance on Queensland's road networks. If there is, the current skid resistance management practice of QDMR will be reviewed and appropriate skid resistance investigatory levels will be recommended. This paper presents analysis results in assessing the relationship between wet crashes and skid resistance on Queensland roads. Attributes considered in the analysis include surface types, annual average daily traffic (AADT), speed and seal age.
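One simple way to look at a wet-crash versus skid-resistance relationship of the kind investigated here is to tabulate crash rates by skid-resistance band. The sketch below does this on simulated stand-in records; the column names, bands, and exposure proxy are hypothetical and do not reproduce the project's analysis.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)

# Hypothetical road-segment records standing in for the Queensland data.
n = 5000
seg = pd.DataFrame({
    "skid_resistance": rng.uniform(0.30, 0.70, n),
    "aadt":            rng.integers(500, 30000, n),
})

# Simulated wet crashes: risk rises as skid resistance falls and exposure rises.
mu = np.exp(-8.0 + 0.8 * np.log(seg.aadt) - 4.0 * seg.skid_resistance)
seg["wet_crashes"] = rng.poisson(mu)

# Wet crashes per million vehicles of summed AADT (a crude exposure proxy),
# tabulated by skid-resistance band to suggest where an investigatory level might sit.
seg["band"] = pd.cut(seg.skid_resistance, bins=[0.30, 0.40, 0.50, 0.60, 0.70])
summary = seg.groupby("band", observed=True).agg(
    wet_crashes=("wet_crashes", "sum"), aadt=("aadt", "sum"))
summary["rate_per_million"] = summary.wet_crashes / summary.aadt * 1e6
print(summary)
```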
Abstract:
Compressed natural gas (CNG) engines are thought to be less harmful to the environment than conventional diesel engines, especially in terms of particle emissions. Although this is true with respect to particulate matter (PM) emissions, results of particle number (PN) emission comparisons have been inconclusive. In this study, results of on-road and dynamometer studies of buses were used to derive several important conclusions. We show that, although PN emissions from CNG buses are significantly lower than from diesel buses at low engine power, they become comparable at high power. For diesel buses, PN emissions are not significantly different between acceleration and operation at steady maximum power. However, the corresponding PN emissions from CNG buses when accelerating are an order of magnitude greater than when operating at steady maximum power. During acceleration under heavy load, PN emissions from CNG buses are an order of magnitude higher than from diesel buses. The particles emitted from CNG buses are too small to contribute to PM10 emissions or to a reduction of visibility, and may consist of semivolatile nanoparticles.
Abstract:
Background, Aim and Scope: The impact of air pollution on school children’s health is currently one of the key foci of international and national agencies. Of particular concern are ultrafine particles, which are emitted in large quantities, contain large concentrations of toxins and are deposited deeply in the respiratory tract. Materials and Methods: In this study, an intensive sampling campaign of indoor and outdoor airborne particulate matter was carried out in a primary school in February 2006 to investigate indoor and outdoor particle number (PN) and mass (PM2.5) concentrations and particle size distribution, and to evaluate the influence of outdoor air pollution on the indoor air. Results: For outdoor PN and PM2.5, early morning and late afternoon peaks were observed on weekdays, consistent with traffic rush hours, indicating the predominant effect of vehicular emissions. However, the temporal variations of outdoor PM2.5 and PN concentrations occasionally showed extremely high peaks, mainly due to human activities such as cigarette smoking and the operation of a mower near the sampling site. The indoor PM2.5 level was mainly affected by the outdoor PM2.5 (r = 0.68, p < 0.01), whereas the indoor PN concentration had some association with outdoor PN values (r = 0.66, p < 0.01) even though the indoor PN concentration was occasionally influenced by indoor sources, such as cooking, cleaning and floor polishing activities. Correlation analysis indicated that the outdoor PM2.5 was inversely correlated with the indoor to outdoor PM2.5 ratio (I/O ratio) (r = -0.49, p < 0.01), while the indoor PN had a weak correlation with the I/O ratio for PN (r = 0.34, p < 0.01). Discussion and Conclusions: The results showed that occupancy did not cause any major changes to the modal structure of particle number and size distribution, even though the I/O ratio was different for different size classes. The I/O curves had a maximum value for particles with diameters of 100–400 nm under both occupied and unoccupied scenarios, whereas no significant difference in I/O ratio for PM2.5 was observed between occupied and unoccupied conditions. Inspection of the size-resolved I/O ratios in the preschool centre and the classroom suggested that the I/O ratio in the preschool centre was highest for accumulation mode particles at 600 nm after school hours, whereas the average I/O ratios of both nucleation mode and accumulation mode particles in the classroom were much lower than those of Aitken mode particles. Recommendations and Perspectives: The findings obtained in this study are useful for epidemiological studies to estimate the total personal exposure of children, and for developing appropriate control strategies to minimize the adverse health effects on school children.
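The indoor/outdoor (I/O) ratio and correlation analysis described in the results can be reproduced in a few lines. The snippet below uses invented hourly PM2.5 series as stand-ins for the school measurements and only illustrates the type of calculation, not the study's data or exact results.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)

# Hypothetical hourly PM2.5 series standing in for the school measurements (ug/m3).
hours = 24 * 7
outdoor = rng.gamma(shape=4.0, scale=3.0, size=hours)
indoor = np.clip(0.6 * outdoor + 3.0 + rng.normal(0.0, 2.0, hours), 0.5, None)

io_ratio = indoor / outdoor

r_in_out, p_in_out = pearsonr(indoor, outdoor)
r_out_io, p_out_io = pearsonr(outdoor, io_ratio)

print(f"indoor vs outdoor PM2.5 : r = {r_in_out:.2f} (p = {p_in_out:.3g})")
print(f"outdoor PM2.5 vs I/O    : r = {r_out_io:.2f} (p = {p_out_io:.3g})")
# A positive indoor-outdoor correlation together with a negative outdoor-vs-I/O
# correlation is what is expected when indoor levels track outdoor levels plus a
# roughly constant indoor contribution.
```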
Abstract:
The aim of this work was to quantify exposure to particles emitted by wood-fired ovens in pizzerias. Overall, 15 microenvironments were chosen and analyzed in a 14-month experimental campaign. Particle number concentration and distribution were measured simultaneously using a Condensation Particle Counter (CPC), a Scanning Mobility Particle Sizer (SMPS) and an Aerodynamic Particle Sizer (APS). The surface area and mass distributions and concentrations, as well as the estimation of lung deposition surface area and PM1, were evaluated using the SMPS-APS system with dosimetric models, taking into account the presence of aggregates on the basis of the Idealized Aggregate (IA) theory. The fraction of inhaled particles deposited in the respiratory system and different fractions of particulate matter were also measured by means of a Nanoparticle Surface Area Monitor (NSAM) and a photometer (DustTrak DRX), respectively. In this way, supplementary data were obtained during the monitoring of trends inside the pizzerias. We found that surface area and PM1 particle concentrations in pizzerias can be very high, especially when compared to other critical microenvironments, such as transport hubs. During pizza cooking under normal ventilation conditions, concentrations were found to be up to 74, 70 and 23 times higher than background levels for number, surface area and PM1, respectively. A key parameter is the oven shape factor, defined as the ratio between the size of the face opening in respect
Abstract:
A composite line source emission (CLSE) model was developed to specifically quantify exposure levels and describe the spatial variability of vehicle emissions in traffic-interrupted microenvironments. This model took into account the complexity of vehicle movements in the queue, as well as the different emission rates relevant to various driving conditions (cruise, decelerate, idle and accelerate), and it utilised multiple representative segments to capture the accurate emission distribution for real vehicle flow. Hence, this model was able to quickly quantify the time spent in each segment within the considered zone, as well as the composition and position of the requisite segments based on the vehicle fleet information, which not only helped to quantify the enhanced emissions at critical locations, but also helped to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bi-directional bus station used by diesel and compressed natural gas fuelled buses. It was found that the acceleration distance was of critical importance when estimating particle number emissions, since the highest emissions occurred in sections where most of the buses were accelerating, and no significant increases were observed at locations where they idled. It was also shown that emissions at the front end of the platform were 43 times greater than at the rear of the platform. Although the CLSE model is intended to be applied in traffic management and transport analysis systems for the evaluation of exposure, as well as the simulation of vehicle emissions in traffic-interrupted microenvironments, the bus station model can also be used as the input of initial source definitions in future dispersion models.
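The core bookkeeping of a composite line source, weighting mode-specific emission rates by the time spent in each mode within each segment, can be sketched in a few lines. The rates, mode times, and segment layout below are illustrative placeholders and are not the CLSE model's measured inputs or outputs.

```python
# Minimal sketch of the composite line source idea: the queue/platform zone is split
# into segments, each with a mix of driving modes, and the segment emission is the
# mode emission rate weighted by the time buses spend in that mode there.

# Particle number emission rates by driving mode (particles per second, hypothetical).
rates = {"idle": 1.0e11, "accelerate": 2.0e13, "cruise": 4.0e12, "decelerate": 8.0e11}

# Seconds spent in each mode within each segment of the bus platform (hypothetical).
segments = [
    {"name": "rear of platform",  "idle": 30.0, "decelerate": 8.0, "accelerate": 0.0, "cruise": 0.0},
    {"name": "mid platform",      "idle": 10.0, "decelerate": 2.0, "accelerate": 3.0, "cruise": 2.0},
    {"name": "front of platform", "idle": 5.0,  "decelerate": 0.0, "accelerate": 9.0, "cruise": 3.0},
]

for seg in segments:
    total = sum(rates[mode] * seg[mode] for mode in rates)
    print(f"{seg['name']:>18}: {total:.2e} particles per bus passage")
```

Because the acceleration rate dominates, the segments where buses accelerate carry most of the emissions, which mirrors the qualitative finding reported above.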
Abstract:
This paper describes a thorough thermal study of a fleet of DC traction motors which were found to suffer from overheating after 3 years of full operation. Overheating of these traction motors is attributed partly to the higher-than-expected number of starts and stops between train terminals. Another probable cause of overheating is the design of the traction motor and/or its control strategy. According to the motor manufacturer, a current shunt is permanently connected across the motor field winding. Hence, some of the armature current is bypassed into the current shunt. The motor then runs above its rated speed in the field-weakening mode. In this study, a finite difference model has been developed to simulate the temperature profile at different parts inside the traction motor. In order to validate the simulation results, an empty vehicle loaded with drums of water was also used to experimentally simulate the full payload of a light rail vehicle. The authors report that the simulation results agree reasonably well with the experimental data, and it is likely that the armature of the traction motor will run cooler if its field shunt is disconnected at low speeds.
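A finite difference thermal model of the kind mentioned here boils down to marching a discretised heat equation with internal losses forward in time. The sketch below is a deliberately tiny 1-D explicit scheme with invented, steel-like properties and losses; it is not the paper's motor model, which would be multi-dimensional and include convection paths.

```python
import numpy as np

# Tiny 1-D explicit finite-difference sketch of conduction with an internal heat
# source, standing in for one radial path through the motor. Material properties,
# losses, and geometry are illustrative, not those of the studied traction motor.
n_nodes, dx = 50, 2e-3             # 50 nodes over 100 mm
k, rho, cp = 40.0, 7800.0, 460.0   # W/mK, kg/m3, J/kgK (steel-like values)
q_gen = 2.0e5                      # volumetric loss in the winding region, W/m3
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha           # below the explicit stability limit dx^2 / (2*alpha)

T = np.full(n_nodes, 40.0)         # start at 40 C
T_ambient = 40.0

for _ in range(20000):
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    source = np.zeros_like(T)
    source[10:25] = q_gen / (rho * cp)   # losses concentrated in the winding nodes
    T += dt * (alpha * lap + source)
    T[0] = T[-1] = T_ambient             # crude fixed-temperature boundaries

print(f"peak temperature after {20000 * dt:.0f} s: {T.max():.1f} C")
```

Raising `q_gen` (more starts and stops, or more armature current) raises the simulated peak temperature, which is the qualitative behaviour the study investigates.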
Abstract:
Background/Objectives: The provision of the patient bed-bath is a fundamental nursing care activity, yet few quantitative data and no qualitative data are available on registered nurses’ (RNs) clinical practice in this domain in the intensive care unit (ICU). The aim of this study was to describe ICU RNs’ current practice with respect to the timing, frequency and duration of the patient bed-bath and the cleansing and emollient agents used. Methods: The study utilised a two-phase sequential explanatory mixed-method design. Phase one used a questionnaire to survey RNs and phase two employed semi-structured focus group (FG) interviews with RNs. Data were collected over 28 days across four Australian metropolitan ICUs. Ethical approval was granted from the relevant hospital and university human research ethics committees. RNs were asked to complete a questionnaire following each episode of care (i.e. bed-bath) and then to attend one of three FG interviews: RNs with less than 2 years ICU experience; RNs with 2–5 years ICU experience; and RNs with greater than 5 years ICU experience. Results: During the 28-day study period the four ICUs had 77.25 beds open. In phase one a total of 539 questionnaires were returned, representing 30.5% of episodes of patient bed-baths (based on 1767 occupied bed days and one bed-bath per patient per day). Patients were mechanically ventilated in 349 bed-bath episodes (54.7%). The bed-bath was given between 02.00 and 06.00 h in 161 episodes (30%), took 15–30 min to complete (n = 195, 36.2%) and was completed within the last 8 h in 304 episodes (56.8%). Cleansing agents used were predominantly pH-balanced soap or liquid soap and water (n = 379, 71%) in comparison to chlorhexidine-impregnated sponges/cloths (n = 86, 16.1%) or other agents such as pre-packaged washcloths (n = 65, 12.2%). In 347 episodes (64.4%) emollients were not applied after the bed-bath. In phase two, 12 FGs were conducted (three at each ICU) with a total of 42 RN participants. Thematic analysis of FG transcripts across the three levels of RN ICU experience highlighted a transition in patient hygiene practice philosophy from ‘shades of grey – falling in line’ for inexperienced clinicians to experienced clinicians’ concrete beliefs about patient bed-bath needs. Conclusions: This study identified variation in the processes and products used in patient hygiene practices in four ICUs. Further study is required to determine the appropriate timing of patient hygiene activities and the cleansing agents that best maintain skin integrity, in order to improve patient outcomes.