Abstract:
Motorcycles are particularly vulnerable in right-angle crashes at signalized intersections. The objective of this study is to explore how variations in roadway characteristics, environmental factors, traffic factors, maneuver types, human factors, and driver demographics influence the right-angle crash vulnerability of motorcycles at intersections. The problem is modeled using a mixed logit model with a binary choice formulation to differentiate how an at-fault vehicle collides with a not-at-fault motorcycle in comparison to other collision types. The mixed logit formulation allows randomness in the parameters and hence accounts for the underlying heterogeneities potentially inherent in driver behavior and other unobserved variables. A likelihood ratio test reveals that the mixed logit model is indeed better than the standard logit model. Night-time riding shows a positive association with the vulnerability of motorcyclists. Moreover, motorcyclists are particularly vulnerable on single-lane roads, on the curb and median lanes of multi-lane roads, and on one-way and two-way roads relative to divided highways. Drivers who deliberately run red lights, as well as those who are careless towards motorcyclists, especially when making turns at intersections, increase the vulnerability of motorcyclists. Drivers appear more restrained when there is a passenger on board, and this decreases the crash potential with motorcyclists. The presence of red light cameras also significantly decreases the right-angle crash vulnerability of motorcyclists. The findings of this study should be helpful in developing more targeted countermeasures for traffic enforcement, driver/rider training and/or education, and safety awareness programs to reduce the vulnerability of motorcyclists.
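A minimal sketch of the likelihood ratio test used to compare a mixed (random-parameter) logit against a standard logit, as described above. The log-likelihood values and the number of extra parameters are hypothetical placeholders, not figures from the study.

```python
# Hedged sketch: LR test between a standard logit and a mixed logit.
from scipy.stats import chi2

ll_standard = -512.3   # log-likelihood of the fixed-parameter logit (hypothetical)
ll_mixed = -498.7      # log-likelihood of the mixed logit (hypothetical)
extra_params = 4       # additional standard-deviation terms in the mixed model (hypothetical)

lr_stat = 2.0 * (ll_mixed - ll_standard)      # LR = 2 * (LL_unrestricted - LL_restricted)
p_value = chi2.sf(lr_stat, df=extra_params)   # chi-square survival function

print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The mixed logit significantly improves fit over the standard logit.")
```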
Abstract:
Traditional crash prediction models, such as generalized linear regression models, are incapable of taking into account the multilevel data structure that is pervasive in crash data. Disregarding the possible within-group correlations can lead to models that give unreliable and biased estimates of unknowns. This study proposes a multilevel hierarchy, viz. (Geographic region level – Traffic site level – Traffic crash level – Driver-vehicle unit level – Vehicle-occupant level) crossed with a Time level, to establish a general form of multilevel data structure in traffic safety analysis. To properly model the potential cross-group heterogeneity due to the multilevel data structure, a framework of Bayesian hierarchical models that explicitly specifies the multilevel structure and correctly yields parameter estimates is introduced and recommended. The proposed method is illustrated in an individual-severity analysis of intersection crashes using Singapore crash records. This study demonstrates the importance of accounting for within-group correlations, and the flexibility and effectiveness of the Bayesian hierarchical method in modeling the multilevel structure of traffic crash data.
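A hedged sketch of a two-level Bayesian hierarchical logit (crash records nested within sites), illustrating the kind of random-intercept structure the abstract describes. The variable names and simulated data are assumptions, not the Singapore dataset; the PyMC v5 API is assumed.

```python
# Hedged sketch: random intercepts per site induce within-group correlation.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_sites, n_obs = 20, 400
site = rng.integers(0, n_sites, n_obs)     # site index for each crash record (simulated)
x = rng.normal(size=n_obs)                 # one crash-level covariate (placeholder)
y = rng.integers(0, 2, n_obs)              # binary severity outcome (placeholder)

with pm.Model() as hier_model:
    sigma_u = pm.HalfNormal("sigma_u", 1.0)           # between-site standard deviation
    u = pm.Normal("u", 0.0, sigma_u, shape=n_sites)   # site-level random intercepts
    beta0 = pm.Normal("beta0", 0.0, 5.0)
    beta1 = pm.Normal("beta1", 0.0, 5.0)
    logit_p = beta0 + u[site] + beta1 * x             # shared u captures within-site correlation
    pm.Bernoulli("severe", logit_p=logit_p, observed=y)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```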
Abstract:
Despite the significant contribution of the transport sector to the global economy and society, it is one of the largest sources of global energy consumption, greenhouse gas emissions and environmental pollution. A complete look at the whole-life-cycle environmental inventory of this sector is helpful for generating a holistic understanding of the contributory factors causing emissions. Previous studies were mainly based on segmental views that compare the environmental impacts of different modes of transport, but very few consider impacts other than those of the operational phase. Ignoring the impacts of non-operational phases, e.g., manufacture, construction and maintenance, may not accurately reflect total contributions to emissions. Moreover, an integrated study of all motorized modes of road transport is also needed to achieve a holistic estimation. The objective of this study is to develop a component-based life cycle inventory model that considers the impacts of both the operational and non-operational phases of the whole life, as well as the different transport modes. In particular, the whole life cycle of road transport has been segmented into vehicle, infrastructure, fuel and operational components, and inventories have been conducted on each component. The inventory model has been demonstrated using the road transport of Singapore. Results show that total life cycle greenhouse gas emissions from the road transport sector of Singapore are 7.8 million tons per year, of which the operational and non-operational phases contribute about 55% and about 45%, respectively. Total amounts of criteria air pollutants are 46, 8.5, 33.6, 13.6 and 2.6 thousand tons per year for CO, SO2, NOx, VOC and PM10, respectively. From the findings, it can be deduced that stringent government policies on emission control measures have a significant impact on reducing environmental pollution. In combating global warming and environmental pollution, the promotion of public transport over private modes is an effective and sustainable policy.
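A minimal sketch of the component-based accounting idea: whole-life emissions are the sum of vehicle, infrastructure, fuel and operational components across all modes, from which the operational/non-operational split follows. All figures below are illustrative placeholders, not the Singapore inventory values reported above.

```python
# Hedged sketch of a component-based life cycle inventory (placeholder numbers).
# Units: thousand tonnes CO2-equivalent per year.
inventory = {
    "car":   {"vehicle": 300, "infrastructure": 150, "fuel": 400, "operation": 1200},
    "bus":   {"vehicle": 40,  "infrastructure": 60,  "fuel": 90,  "operation": 350},
    "truck": {"vehicle": 120, "infrastructure": 80,  "fuel": 200, "operation": 900},
}

total = sum(sum(components.values()) for components in inventory.values())
operational = sum(components["operation"] for components in inventory.values())

print(f"Total life cycle emissions: {total} kt CO2-e/yr")
print(f"Operational share: {operational / total:.0%}, "
      f"non-operational share: {1 - operational / total:.0%}")
```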
Abstract:
Distraction whilst driving on the approach to a signalized intersection is particularly dangerous, as potential vehicular conflicts and the resulting angle collisions tend to be severe. This study examines the decisions of distracted drivers at the onset of amber lights. Driving simulator data were obtained from a sample of 58 drivers under baseline and handheld mobile phone conditions at the University of Iowa - National Advanced Driving Simulator. Explanatory variables include age, gender, cell phone use, distance to stop-line, and speed. An iterative combination of decision tree and logistic regression analyses is employed to identify main effects, non-linearities, and interaction effects. Results show that novice (16-17 years) and younger (18-25 years) drivers had heightened amber-light-running risk while distracted by a cell phone, and that speed and distance thresholds yielded significant interaction effects. Driver experience, captured by age, has a multiplicative effect with distraction, making the combined effect of being inexperienced and distracted particularly risky. Solutions are needed to combat the use of mobile phones whilst driving.
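A hedged sketch of the iterative decision-tree / logistic-regression idea: a shallow tree exposes candidate thresholds and interactions (e.g. age group x phone use), which are then entered into a logistic regression for interpretable effect estimates. The simulated data and column names are assumptions, not the driving simulator data.

```python
# Hedged sketch: tree to suggest splits/interactions, then logistic regression.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "speed": rng.uniform(30, 80, n),       # km/h at amber onset (simulated)
    "distance": rng.uniform(10, 90, n),    # metres to stop-line (simulated)
    "phone": rng.integers(0, 2, n),        # handheld phone condition (simulated)
    "young": rng.integers(0, 2, n),        # novice/young driver flag (simulated)
})
df["run"] = rng.integers(0, 2, n)          # amber-light running decision (placeholder)

# Step 1: shallow tree to expose candidate thresholds and interactions.
features = ["speed", "distance", "phone", "young"]
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(df[features], df["run"])
print(export_text(tree, feature_names=features))

# Step 2: logistic regression with a tree-suggested interaction term.
df["young_x_phone"] = df["young"] * df["phone"]
logit = LogisticRegression(max_iter=1000)
logit.fit(df[features + ["young_x_phone"]], df["run"])
print(dict(zip(features + ["young_x_phone"], logit.coef_[0])))
```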
Abstract:
Decellularized tissues can provide a unique biological environment for regenerative medicine applications, but only if minimal disruption of their microarchitecture is achieved during the decellularization process. The goal is to keep the structural integrity of such a construct as functional as the tissues from which it was derived. In this work, cartilage-on-bone laminates were decellularized through enzymatic, non-ionic and ionic protocols. This work investigated the effects of the decellularization process on the microarchitecture of the cartilaginous extracellular matrix, determining the extent to which each process deteriorated the structural organization of the network. High-resolution microscopy was used to capture cross-sectional images of samples prior to and after treatment. The variation in microarchitecture was then analysed using a well-defined fast Fourier image processing algorithm. Statistical analysis of the results revealed how significant the alterations among the aforementioned protocols were (p < 0.05). Ranked by their effectiveness in disrupting ECM integrity, the treatments were ordered as Trypsin > SDS > Triton X-100.
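A hedged sketch of a 2-D FFT based comparison: the power spectrum of a cross-section image summarizes the spatial organization of the matrix, so the difference between radially averaged spectra before and after treatment gives a simple disruption measure. The synthetic images are placeholders for the high-resolution micrographs; this is not the authors' exact algorithm.

```python
# Hedged sketch: radially averaged power spectra as a structural-disruption measure.
import numpy as np

def radial_power_profile(image):
    """Radially averaged power spectrum of a 2-D image."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    cy, cx = np.array(spectrum.shape) // 2
    y, x = np.indices(spectrum.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=spectrum.ravel())
    profile = sums / np.maximum(counts, 1)
    return profile[: min(image.shape) // 2]   # keep only fully sampled radii

rng = np.random.default_rng(0)
before = rng.normal(size=(256, 256))   # stand-in for an untreated ECM image
after = rng.normal(size=(256, 256))    # stand-in for a decellularized ECM image

# A larger difference between the profiles indicates greater disruption
# of spatial organization at those frequencies.
diff = np.abs(radial_power_profile(before) - radial_power_profile(after)).sum()
print(f"Spectral disruption score: {diff:.3e}")
```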
Abstract:
Tissue-specific extracellular matrix (ECM) is known to be an ideal bioscaffold to inspire the future of regenerative medicine. It holds the secret of how nature has developed such an organization of molecules into a unique functional complexity. This work exploited an innovative image processing algorithm and high-resolution microscopy, combined with mechanical analysis, to establish a correlation between the gradient organization of cartilaginous ECM and its anisotropic biomechanical response. This was hypothesized to be a reliable determinant that can elucidate how microarchitecture interrelates with biomechanical properties. The Hough-Radon transform of ECM cross-section images revealed its conformational variation from the tangential interface down to the subchondral region. As the orientation varied layer by layer, the anisotropic mechanical response deviated correspondingly. Although the results were in good agreement (Kendall's tau-b > 90%), there was evidence suggesting that the alignment of the fibrous network, specifically in the middle zone, is not as random as previously thought.
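A hedged sketch of using a Radon transform to estimate the dominant fibre orientation in a cross-section image, the basic operation behind a layer-by-layer orientation analysis. The synthetic stripe image is an illustrative assumption; the study's actual Hough-Radon pipeline is not reproduced here. Requires scikit-image.

```python
# Hedged sketch: dominant orientation via variance of Radon projections.
import numpy as np
from skimage.transform import radon

def dominant_angle(image):
    """Angle (degrees) whose Radon projection shows the highest variance."""
    angles = np.arange(0, 180)
    sinogram = radon(image, theta=angles, circle=False)
    return angles[np.argmax(sinogram.var(axis=0))]

# Synthetic test pattern: diagonal stripes stand in for aligned fibres.
y, x = np.mgrid[0:128, 0:128]
stripes = np.sin((x + y) * 0.3)

print(f"Estimated dominant orientation: {dominant_angle(stripes)} degrees")
```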
Abstract:
Lower energy and protein intakes are well documented in patients on texture modified diets. In acute hospital settings, the provision of appropriate texture modified foods that meet industry standards is essential for patient safety and nutrition outcomes. The texture modified menu at an acute private hospital was evaluated in accordance with its own nutritional standards (NS) and the Australian National Standards (Dietitians Association of Australia and Speech Pathology Australia, 2007). The NS document portion sizes and nutritional requirements for each menu. Texture B and C menus were analysed qualitatively and quantitatively over 9 days of a 6-day cyclic menu for breakfast (n=4), lunch (n=34) and dinner (n=34). Results indicated a lack of portion control, as specified by the NS, across all meals including breakfast (65–140%), soup (55–115%), meat (45–165%), vegetables (55–185%) and desserts (30–300%). Dilution factors and portion sizes influenced the protein and energy availability of the Texture B and C menus. While the Texture B menu provided more energy, neither menu met the NS. Limited dessert options on the Texture C menu restricted its ability to meet the protein NS. A lack of portion control and incorrectly modified menu items can compromise protein and energy intakes. Strategies to correct serving sizes and to provide alternative protein sources were recommended. Suggestions included cost-effectively increasing the variety of foods to assist protein and energy intake, and the procurement of standardised equipment and visual aids to assist food preparation and presentation in accordance with texture modified guidelines and the NS.
Abstract:
With the increasing rate of shipping traffic, the risk of collisions in busy and congested port waters is expected to rise. However, due to low collision frequencies, it is difficult to analyze such risk in a statistically sound manner. This study aims at examining the occurrence of traffic conflicts in order to understand the characteristics of vessels involved in navigational hazards. A binomial logit model was employed to evaluate the association of vessel attributes and kinematic conditions with conflict severity levels. Results show a positive association of small gross tonnage, overall vessel length, vessel height and draft with conflict risk. Conflicts involving a pair of dynamic vessels sailing at low speeds also have similar effects.
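A hedged sketch of a binomial logit relating vessel attributes to a binary conflict-severity outcome, in the spirit of the model described above. The simulated data and variable names are assumptions, not the port traffic records.

```python
# Hedged sketch: binomial logit for conflict severity with simulated vessel data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "gross_tonnage": rng.uniform(1, 80, n),    # thousand GT (simulated)
    "length_overall": rng.uniform(50, 350, n), # metres (simulated)
    "speed": rng.uniform(2, 15, n),            # knots (simulated)
})
df["serious"] = rng.integers(0, 2, n)          # 1 = serious conflict (placeholder)

X = sm.add_constant(df[["gross_tonnage", "length_overall", "speed"]])
result = sm.Logit(df["serious"], X).fit(disp=False)
print(result.summary())
print("Odds ratios:\n", np.exp(result.params))
```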
Abstract:
The traffic conflict technique (TCT) is a powerful technique applied in road traffic safety assessment as a surrogate for traditional accident data analysis. It overcomes the conceptual and implementation weaknesses of accident statistics. Although this technique has been applied effectively to road traffic, it has not been practised widely in marine traffic, even though this traffic system has some distinct advantages in terms of having a monitoring system. This monitoring system can provide navigational information, as well as other geometric information on ships, for a larger study area over a longer time period. However, before implementing the TCT in the marine traffic system, it should be examined critically to suit the complex nature of that traffic system. This paper examines the suitability of the TCT for application to marine traffic and proposes a framework for a follow-up comprehensive conflict study.
Abstract:
Navigational collisions are a major safety concern in many seaports. Despite recent advances in port navigational safety research, little is known about harbor pilots' perception of collision risks in anchorages. This study attempts to model such risks by employing a hierarchical ordered probit model, calibrated using data collected through a risk perception survey of Singapore port pilots. The hierarchical model is found to be useful for accounting for correlations in the risks perceived by individual pilots. Results show higher perceived risks in anchorages attached to intersections and to local and international fairways, becoming more critical at night. Lower risks are perceived in anchorages featuring a shoreline boundary, greater water depth, a lower density of stationary ships, cardinal marks and isolated danger marks. Pilotage experience shows a negative effect on perceived risks. This study indicates that hierarchical modeling is useful for treating correlations in navigational safety data.
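A hedged sketch of an ordered probit for perceived risk on an ordinal scale. This simplification omits the hierarchical (pilot-level) random effect used in the study; the variable names and simulated data are assumptions, not the survey responses.

```python
# Hedged sketch: plain ordered probit for perceived risk (1-5 scale, simulated).
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(4)
n = 250
df = pd.DataFrame({
    "night": rng.integers(0, 2, n),             # anchorage passage at night (simulated)
    "water_depth": rng.uniform(5, 30, n),       # metres (simulated)
    "pilot_experience": rng.uniform(1, 30, n),  # years (simulated)
})
df["risk"] = rng.integers(1, 6, n)              # perceived risk level (placeholder)

model = OrderedModel(df["risk"],
                     df[["night", "water_depth", "pilot_experience"]],
                     distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```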
Abstract:
With the increasing rate of shipping traffic, the risk of collisions in busy and congested port waters is likely to rise. However, due to low collision frequencies in port waters, it is difficult to analyze such risk in a statistically sound manner. A convenient approach to investigating navigational collision risk is the application of traffic conflict techniques, which have the potential to overcome the difficulty of obtaining statistical soundness. This study aims at examining port water conflicts in order to understand the characteristics of collision risk with regard to the vessels involved, conflict locations, and traffic and kinematic conditions. A hierarchical binomial logit model, which considers the potential correlations between observation units, i.e., vessels involved in the same conflicts, is employed to evaluate the association of explanatory variables with conflict severity levels. Results show a higher likelihood of serious conflicts for vessels of small gross tonnage or small overall length. The probability of serious conflict also increases at locations where vessels have more varied headings, such as traffic intersections and anchorages, becoming more critical at night time. Findings from this research should assist both navigators operating in port waters and port authorities overseeing navigational management.
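A hedged sketch of a binomial logit with a random intercept per conflict, capturing the correlation between the two vessels involved in the same conflict. It uses statsmodels' Bayesian mixed GLM with a variational fit as one possible implementation; the data and variable names are simulated placeholders, not the study's records.

```python
# Hedged sketch: random intercept per conflict via statsmodels' Bayesian mixed GLM.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(5)
n_conflicts = 150
df = pd.DataFrame({
    "conflict_id": np.repeat(np.arange(n_conflicts), 2),   # two vessels per conflict
    "gross_tonnage": rng.uniform(1, 80, 2 * n_conflicts),  # thousand GT (simulated)
    "night": np.repeat(rng.integers(0, 2, n_conflicts), 2),
})
df["serious"] = rng.integers(0, 2, 2 * n_conflicts)        # severity label (placeholder)

vc = {"conflict": "0 + C(conflict_id)"}                    # random intercept per conflict
model = BinomialBayesMixedGLM.from_formula(
    "serious ~ gross_tonnage + night", vc, df)
result = model.fit_vb()                                    # variational Bayes fit
print(result.summary())
```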
Abstract:
BACKGROUND: Hallux valgus (HV) is a foot deformity commonly seen in medical practice, often accompanied by significant functional disability and foot pain. Despite frequent mention in a diverse body of literature, a precise estimate of the prevalence of HV is difficult to ascertain. The purpose of this systematic review was to investigate the prevalence of HV in the overall population and evaluate the influence of age and gender. METHODS: Electronic databases (Medline, Embase, and CINAHL) and reference lists of included papers were searched to June 2009 for papers on HV prevalence without language restriction. MeSH terms and keywords were used relating to HV or bunions, prevalence and various synonyms. Included studies were surveys reporting original data for the prevalence of HV or bunions in healthy populations of any age group. Surveys reporting prevalence data grouped with other foot deformities and in specific disease groups (e.g. rheumatoid arthritis, diabetes) were excluded. Two independent investigators rated the quality of all included papers using the Epidemiological Appraisal Instrument. Data on raw prevalence, population studied and methodology were extracted. Prevalence proportions and their standard errors were calculated, and meta-analysis was performed using a random effects model. RESULTS: A total of 78 papers reporting results of 76 surveys (total 496,957 participants) were included and grouped by study population for meta-analysis. Pooled prevalence estimates for HV were 23% in adults aged 18-65 years (CI: 16.3 to 29.6) and 35.7% in elderly people aged over 65 years (CI: 29.5 to 42.0). Prevalence increased with age and was higher in females [30% (CI: 22 to 38)] than in males [13% (CI: 9 to 17)]. Potential sources of bias were sampling method, study quality and method of HV diagnosis. CONCLUSIONS: Notwithstanding the wide variation in estimates, it is evident that HV is prevalent, more so in females and with increasing age. Methodological quality issues need to be addressed in interpreting reports in the literature and in future research.
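A hedged sketch of random-effects pooling of prevalence proportions (DerSimonian-Laird), the general approach named in the abstract. The study-level proportions and sample sizes below are made-up placeholders, not the surveys included in the review.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of proportions.
import numpy as np

p = np.array([0.20, 0.28, 0.35, 0.18])   # prevalence in each survey (placeholder)
n = np.array([500, 1200, 800, 300])      # survey sample sizes (placeholder)

var_within = p * (1 - p) / n             # within-study variance of each proportion
w = 1.0 / var_within                     # fixed-effect weights

# DerSimonian-Laird estimate of the between-study variance tau^2
p_fixed = np.sum(w * p) / np.sum(w)
q = np.sum(w * (p - p_fixed) ** 2)
tau2 = max(0.0, (q - (len(p) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_star = 1.0 / (var_within + tau2)       # random-effects weights
p_pooled = np.sum(w_star * p) / np.sum(w_star)
se_pooled = np.sqrt(1.0 / np.sum(w_star))

print(f"Pooled prevalence: {p_pooled:.1%} "
      f"(95% CI {p_pooled - 1.96*se_pooled:.1%} to {p_pooled + 1.96*se_pooled:.1%})")
```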
Abstract:
Soil organic carbon sequestration rates over 20 years based on the Intergovernmental Panel on Climate Change (IPCC) methodology were combined with local economic data to determine the potential for soil C sequestration in wheat-based production systems on the Indo-Gangetic Plain (IGP). The C sequestration potential of rice–wheat systems of India on conversion to no-tillage is estimated to be 44.1 Mt C over 20 years. Implementing no-tillage practices in maize–wheat and cotton–wheat production systems would yield an additional 6.6 Mt C. This offset is equivalent to 9.6% of India's annual greenhouse gas emissions (519 Mt C) from all sectors (excluding land use change and forestry), or less than one percent per annum. The economic analysis was summarized as carbon supply curves expressing the total additional C accumulated over 20 years for a price per tonne of carbon sequestered ranging from zero to USD 200. At a carbon price of USD 25 Mg C⁻¹, 3 Mt C (7% of the soil C sequestration potential) could be sequestered over 20 years through the implementation of no-till cropping practices in rice–wheat systems of the Indian States of the IGP, increasing to 7.3 Mt C (17% of the soil C sequestration potential) at USD 50 Mg C⁻¹. Maximum levels of sequestration could be attained with carbon prices approaching USD 200 Mg C⁻¹ for the States of Bihar and Punjab. At this carbon price, a total of 34.7 Mt C (79% of the estimated C sequestration potential) could be sequestered over 20 years across the rice–wheat region of India, with Uttar Pradesh contributing 13.9 Mt C.
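A minimal sketch of reading a carbon supply curve: cumulative C sequestered over 20 years as a function of carbon price. Only the quoted (price, quantity) pairs for the rice–wheat systems come from the abstract; the zero-price point and any interpolated values are illustrative assumptions.

```python
# Hedged sketch: linear interpolation of a carbon supply curve.
import numpy as np

prices = np.array([0, 25, 50, 200])            # USD per Mg C (0-point is an assumption)
sequestered = np.array([0.0, 3.0, 7.3, 34.7])  # Mt C over 20 years (rice-wheat, India)

def supply(price_usd):
    """Interpolated cumulative sequestration (Mt C over 20 years) at a given price."""
    return float(np.interp(price_usd, prices, sequestered))

print(f"At USD 100 per Mg C: ~{supply(100):.1f} Mt C over 20 years (interpolated)")
```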
Abstract:
Baseline monitoring of groundwater quality aims to characterize the ambient condition of the resource and identify spatial or temporal trends. Sites comprising any baseline monitoring network must be selected to provide a representative perspective of groundwater quality across the aquifer(s) of interest. Hierarchical cluster analysis (HCA) has been used as a means of assessing the representativeness of a groundwater quality monitoring network, using example datasets from New Zealand. HCA allows New Zealand's national and regional monitoring networks to be compared in terms of the number of water-quality categories identified in each network, the hydrochemistry at the centroids of these water-quality categories, the proportions of monitoring sites assigned to each water-quality category, and the range of concentrations for each analyte within each water-quality category. Through the HCA approach, the National Groundwater Monitoring Programme (117 sites) is shown to provide a highly representative perspective of groundwater quality across New Zealand, relative to the amalgamated regional monitoring networks operated by 15 different regional authorities (680 sites had sufficient data for inclusion in the HCA). This methodology can be applied to evaluate the representativeness of any subset of monitoring sites taken from a larger network.
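A hedged sketch of hierarchical cluster analysis on standardized hydrochemical data, grouping monitoring sites into water-quality categories and summarizing how a network populates those categories. The analyte names and random data are assumptions, not the New Zealand datasets.

```python
# Hedged sketch: Ward-linkage HCA of simulated site hydrochemistry.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(3)
analytes = ["Na", "Cl", "NO3-N", "HCO3", "Fe"]
sites = rng.lognormal(mean=1.0, sigma=0.5, size=(100, len(analytes)))  # concentrations

Z = linkage(zscore(np.log10(sites), axis=0), method="ward")  # Ward linkage on log, z-scored data
categories = fcluster(Z, t=4, criterion="maxclust")          # cut into 4 water-quality categories

# Proportion of sites per category -- the kind of summary used to compare
# a national network against amalgamated regional networks.
for c in np.unique(categories):
    print(f"Category {c}: {np.mean(categories == c):.0%} of sites")
```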