923 results for accelerometer, randomness check
Abstract:
With the release of the Nintendo Wii in 2006, haptic force gestures have become a very popular form of input for interactive entertainment. However, the gesture recognition techniques currently used in Nintendo Wii games offer little control when recognising even simple gestures. This paper presents a simple gesture recognition technique called Peak Testing which gives greater control over gesture interaction. This technique locates force peaks in continuous force data (provided by a gesture device such as the Wiimote) and then cancels any peaks which are not intended as input. Peak Testing can therefore identify movements in any direction. This paper applies the technique to control virtual instruments and investigates how users respond to this interaction. The technique is then explored as the basis for a robust way to navigate menus with a simple flick of the wrist. We propose that this flick-based interaction could be a more intuitive way to navigate Nintendo Wii menus than the pointer techniques currently implemented.
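The peak-location-and-cancellation idea lends itself to a compact illustration. Below is a minimal sketch of this style of detection, not the paper's actual Peak Testing implementation: it flags local maxima in a stream of force magnitudes and cancels peaks that follow too soon after an accepted one. The threshold and refractory-window values are illustrative assumptions.

```python
import numpy as np

def detect_gesture_peaks(force, threshold=1.5, refractory=10):
    """Locate force peaks in a stream of force magnitudes and cancel
    peaks that are unlikely to be deliberate input.

    force      -- sequence of force magnitudes from the device
    threshold  -- minimum magnitude for a candidate peak (illustrative)
    refractory -- samples to ignore after an accepted peak, so the
                  rebound of one flick is not read as a second gesture
    """
    peaks = []
    last_accepted = -refractory
    for i in range(1, len(force) - 1):
        # A local maximum above the threshold is a candidate peak.
        if force[i] > threshold and force[i] >= force[i - 1] and force[i] > force[i + 1]:
            # Cancellation: drop peaks inside the refractory window.
            if i - last_accepted >= refractory:
                peaks.append(i)
                last_accepted = i
    return peaks

# Toy trace: a flick produces one sharp peak plus a smaller rebound.
t = np.linspace(0, 1, 100)
trace = 3.0 * np.exp(-((t - 0.30) / 0.02) ** 2) + 1.8 * np.exp(-((t - 0.38) / 0.02) ** 2)
print(detect_gesture_peaks(trace))  # the rebound peak is cancelled
```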
Abstract:
People interact with mobile computing devices everywhere: while sitting, walking, running, or even driving. Adapting the interface to suit these contexts is important, so this paper proposes a simple human activity classification system. Our approach uses a vector magnitude recognition technique to detect and classify when a person is stationary (not walking), casually walking, or jogging, without any prior training. A user study confirmed the accuracy of the approach.
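A minimal sketch of vector-magnitude classification of the kind the abstract describes; the threshold values are illustrative assumptions, not the paper's calibration.

```python
import math

# Illustrative thresholds in g; real devices need per-device calibration,
# and these values are assumptions, not taken from the paper.
STATIONARY_MAX = 1.2   # near gravity alone
WALKING_MAX = 2.0

def classify_activity(ax, ay, az):
    """Classify a window of 3-axis accelerometer samples as
    'stationary', 'walking', or 'jogging' via mean vector magnitude."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]
    mean_mag = sum(mags) / len(mags)
    if mean_mag < STATIONARY_MAX:
        return "stationary"
    if mean_mag < WALKING_MAX:
        return "walking"
    return "jogging"

# Example: a window of samples dominated by gravity (about 1 g) is stationary.
print(classify_activity([0.0] * 50, [0.1] * 50, [0.98] * 50))  # -> "stationary"
```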
Abstract:
A new accelerometer, the Kenz Lifecorder EX (LC; Suzuken Co. Ltd, Nagoya, Japan), offers promise as a feasible alternative to the commonly used Actigraph (AG; Actigraph LLC, Fort Walton Beach, FL). Purpose: This study compared the LC and AG accelerometers and the Yamax SW-200 pedometer (DW) under free-living conditions with regard to children's steps taken and time in light-intensity physical activity (PA) and moderate to vigorous PA (MVPA). Methods: Participants (N = 31, age = 10.2 ± 0.4 yr) wore LC, AG, and DW monitors from arrival at school (7:45 a.m.) until they went to bed. Time in light and MVPA intensities was calculated using two separate intensity classifications for the LC (LC_4 and LC_5) and four classifications for the AG (AG_Treuth, AG_Puyau, AG_Trost, and AG_Freedson). Both accelerometers provided steps as outputs; DW steps were self-recorded. Repeated-measures ANOVA was used to assess overlapping monitor outputs. Results: There was no difference between DW and LC steps (Δ = 200 steps), but a nonsignificant trend was observed in the pairwise comparison between DW and AG steps (Δ = 1001 steps, P = 0.058). The AG detected significantly more steps than the LC (Δ = 801 steps, P = 0.001). Estimates of light-intensity activity minutes ranged from a low of 75.6 ± 18.4 min (LC_4) to a high of 309 ± 69.2 min (AG_Treuth). Estimates of MVPA minutes ranged from a low of 25.9 ± 9.4 min (LC_5) to a high of 112.2 ± 34.5 min (AG_Freedson). No significant differences in MVPA were seen between LC_5 and AG_Treuth (Δ = 4.9 min) or AG_Puyau (Δ = 1.7 min). Conclusion: The LC detected a comparable number of steps to the DW but significantly fewer steps than the AG in children. Current results indicate that the LC_5 and either the AG_Treuth or AG_Puyau intensity derivations provide similar mean estimates of time in MVPA during free-living activity in 10-yr-old children.
Abstract:
Two decades after its inception, Latent Semantic Analysis (LSA) has become part and parcel of every modern introduction to Information Retrieval. For any tool that matures so quickly, it is important to check its lore and limitations, or else stagnation will set in. We focus here on the three main aspects of LSA that are well accepted, the gist of which can be summarized as follows: (1) that LSA recovers latent semantic factors underlying the document space; (2) that this can be accomplished through lossy compression of the document space by eliminating lexical noise; and (3) that the latter is best achieved by Singular Value Decomposition. For each aspect we performed experiments analogous to those reported in the LSA literature and compared the evidence brought to bear in each case. On the negative side, we show that the above claims about LSA are much more limited than commonly believed. Even a simple example shows that LSA does not recover the optimal semantic factors as intended in the pedagogical example used in many LSA publications. Additionally, and remarkably deviating from LSA lore, LSA does not scale up well: the larger the document space, the more unlikely it is that LSA recovers an optimal set of semantic factors. On the positive side, we describe new algorithms to replace LSA (and more recent alternatives such as pLSA, LDA, and kernel methods) by trading its l2 space for an l1 space, thereby guaranteeing an optimal set of semantic factors. These algorithms seem to salvage the spirit of LSA as we think it was initially conceived.
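For readers checking the lore themselves: classic LSA is the rank-k truncated SVD of a term-document matrix, which is exactly the l2-optimal (least-squares) low-rank approximation the abstract critiques. A minimal sketch with a toy matrix:

```python
import numpy as np

# Toy term-document matrix (terms x documents): rows are term counts.
A = np.array([
    [2.0, 0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 2.0, 0.0, 1.0],
    [0.0, 0.0, 1.0, 2.0],
])

# Classic LSA: rank-k truncated SVD, the l2-optimal low-rank approximation.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # "denoised" document space

# Documents are then compared in the k-dimensional latent space.
doc_vectors = np.diag(s[:k]) @ Vt[:k, :]      # one column per document
print(np.round(A_k, 2))
```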
Abstract:
The majority of the world’s population now lives in cities (United Nations, 2008), resulting in an urban densification that requires people to live in closer proximity and share urban infrastructure such as streets, public transport, and parks. However, “physical closeness does not mean social closeness” (Wellman, 2001, p. 234). Whereas it is common practice to greet and chat with people you cross paths with in smaller villages, urban life is mainly anonymous and does not automatically come with a sense of community per se. Wellman (2001, p. 228) defines community “as networks of interpersonal ties that provide sociability, support, information, a sense of belonging and social identity.” While on the move or during leisure time, urban dwellers use their interactive information communication technology (ICT) devices to connect to their spatially distributed community while in an anonymous space. Putnam (1995) argues that available technology privatises and individualises the leisure time of urban dwellers. Furthermore, ICT is sometimes used to build a “cocoon” while in public to avoid direct contact with collocated people (Mainwaring et al., 2005; Bassoli et al., 2007; Crawford, 2008). Instead of using ICT devices to seclude oneself from the surrounding urban environment and the collocated people within it, such devices could also be utilised to engage urban dwellers more with both. Urban sociologists found that “what attracts people most, it would appear, is other people” (Whyte, 1980, p. 19) and that “people and human activity are the greatest object of attention and interest” (Gehl, 1987, p. 31). On the other hand, sociologist Erving Goffman describes the concept of civil inattention: acknowledging strangers’ presence while in public but not interacting with them (Goffman, 1966). With this in mind, there appears to be a contradiction between how and why people use ICT in urban public places, on the one hand, and how they use those places and behave towards other collocated people, on the other. At the same time, there is an opportunity to employ ICT to create and influence the experiences of people collocated in public urban places. The widespread use of location-aware mobile devices equipped with Internet access is creating networked localities, a digital layer of geo-coded information on top of the physical world (Gordon & de Souza e Silva, 2011). Foursquare.com is an example of a location-based social network (LBSN) that enables urban dwellers to virtually check in at places where they are physically present in an urban space. Users compete over ‘mayorships’ of places with Foursquare friends as well as strangers and can share recommendations about the space. The research field of Urban Informatics is interested in these kinds of digital urban multimedia augmentations and in how such augmentations, mediated through technology, can create or influence the UX of public urban places. “Urban informatics is the study, design, and practice of urban experiences across different urban contexts that are created by new opportunities of real-time, ubiquitous technology and the augmentation that mediates the physical and digital layers of people networks and urban infrastructures” (Foth et al., 2011, p. 4). One possibility to augment the urban space is to enable citizens to digitally interact with spaces and with urban dwellers collocated in the past, present, and future.
“Adding digital layer to the existing physical and social layers could facilitate new forms of interaction that reshape urban life” (Kjeldskov & Paay, 2006, p. 60). This methodological chapter investigates how the design of UX through such digital place-based mobile multimedia augmentations can be guided and evaluated. First, we describe three different applications that aim to create and influence the urban UX through mobile mediated interactions. Based on a review of the literature, we describe how our integrated framework for designing and evaluating urban informatics experiences was constructed. We conclude the chapter with a reflective discussion of the proposed framework.
Abstract:
Purpose: The Cobb technique is the universally accepted method for measuring the severity of spinal deformities. Traditionally, Cobb angles have been measured using protractor and pencil on hardcopy radiographic films. The new generation of mobile phones makes accurate angle measurement possible using an integrated accelerometer, providing a potentially useful clinical tool for assessing Cobb angles. The purpose of this study was to compare Cobb angle measurements performed using an Apple iPhone and a traditional protractor in a series of twenty Adolescent Idiopathic Scoliosis patients. Methods: Seven observers measured major Cobb angles on twenty pre-operative postero-anterior radiographs of Adolescent Idiopathic Scoliosis patients with both a standard protractor and an Apple iPhone. Five of the observers repeated the measurements at least a week after the original measurements. Results: The mean absolute difference between pairs of iPhone/protractor measurements was 2.1°, with a small (1°) bias toward lower Cobb angles with the iPhone. 95% confidence intervals for intra-observer variability were ±3.3° for the protractor and ±3.9° for the iPhone. 95% confidence intervals for inter-observer variability were ±8.3° for the iPhone and ±7.1° for the protractor. Both of these confidence intervals were within the range of previously published Cobb measurement studies. Conclusions: We conclude that the iPhone is an equivalent Cobb measurement tool to the manual protractor, and measurement times are about 15% less. The widespread availability of inclinometer-equipped mobile phones and the ability to store measurements in later versions of the angle measurement software may make these new technologies attractive for clinical measurement applications.
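The phone-as-inclinometer measurement rests on reading gravity components from the accelerometer while the device edge is aligned with a vertebral endplate on the radiograph. A sketch of the underlying arithmetic with made-up readings (the app's actual computation is not described in the abstract):

```python
import math

def inclination_deg(ax, ay):
    """Tilt angle of the phone, from the static accelerometer's gravity
    components along two device axes (readings below are made up)."""
    return math.degrees(math.atan2(ax, ay))

# The Cobb angle is the difference between the two endplate alignments,
# read with the phone's edge held along each endplate on the radiograph.
upper = inclination_deg(0.26, 0.97)    # upper end-vertebra endplate
lower = inclination_deg(-0.42, 0.91)   # lower end-vertebra endplate
print(f"Cobb angle ~ {abs(upper - lower):.1f} degrees")
```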
Abstract:
In this study, we explore motivation in collocated and virtual project teams. The literature on motivation in a project setting reveals that motivation is closely linked to team performance. Based on this literature, we propose a set of variables related to the three dimensions of ‘Nature of work’, ‘Rewards’, and ‘Communication’. Thirteen original variables in a sample of 66 collocated and 66 virtual respondents are investigated using one-tailed t-tests and principal component analysis. We find that there are minimal differences between the two groups with respect to the three dimensions (p = .06; t = 1.71). Further, a principal component analysis of the combined sample of collocated and virtual project environments reveals two factors: an ‘Internal Motivating Factor’ related to work and the work environment, and an ‘External Motivating Factor’ related to financial and non-financial rewards, which together explain 59.8% of the variance and comprehensively characterize motivation in collocated and virtual project environments. A ‘sense check’ of our interpretation of the results shows conformity with the theory and existing practice of project organization.
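A hedged sketch of the principal component analysis step on placeholder data (the study used 13 motivation variables over 132 respondents; the random data here merely shows the mechanics, not the survey responses):

```python
import numpy as np

rng = np.random.default_rng(0)
# 132 respondents (66 collocated + 66 virtual) x 13 motivation variables;
# random placeholder data standing in for the survey responses.
X = rng.normal(size=(132, 13))
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise before PCA

# Principal components via eigendecomposition of the correlation matrix.
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
# In the study, the first two components explain 59.8% of the variance.
print("variance explained by the first two components:", round(explained[:2].sum(), 3))
```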
Abstract:
It is frequently reported that the actual weight loss achieved through exercise interventions is less than theoretically expected. Amongst other compensatory adjustments that accompany exercise training (e.g., increases in resting metabolic rate and energy intake), a possible cause of the less than expected weight loss is a failure to produce a marked increase in total daily energy expenditure due to a compensatory reduction in non-exercise activity thermogenesis (NEAT). Therefore, there is a need to understand how behaviour is modified in response to exercise interventions. The proposed benefits of exercise training are numerous, including changes to fat oxidation. Given that a diminished capacity to oxidise fat could be a factor in the aetiology of obesity, an exercise training intensity that optimises fat oxidation in overweight/obese individuals would improve impaired fat oxidation, and potentially reduce health risks that are associated with obesity. To improve our understanding of the effectiveness of exercise for weight management, it is important to ensure exercise intensity is appropriately prescribed, and to identify and monitor potential compensatory behavioural changes consequent to exercise training. In line with the gaps in the literature, three studies were performed. The aim of Study 1 was to determine the effect of acute bouts of moderate- and high-intensity walking exercise on NEAT in overweight and obese men. Sixteen participants performed a single bout of either moderate-intensity walking exercise (MIE) or high-intensity walking exercise (HIE) on two separate occasions. The MIE consisted of walking for 60 min on a motorised treadmill at 6 km.h-1. The 60-min HIE session consisted of walking in 5-min intervals at 6 km.h-1 and 10% grade followed by 5 min at 0% grade. NEAT was assessed by accelerometer three days before, on the day of, and three days after the exercise sessions. There was no significant difference in NEAT vector magnitude (counts.min-1) between the pre-exercise period (days 1-3) and the exercise day (day 4) for either protocol. In addition, there was no change in NEAT during the three days following the MIE session; however, NEAT increased by 16% on day 7 (post-exercise) compared with the exercise day (P = 0.32). During the post-exercise period following the HIE session, NEAT was increased by 25% on day 7 compared with the exercise day (P = 0.08), and by 30-33% compared with the pre-exercise period (day 1, day 2 and day 3; P = 0.03, 0.03, 0.02, respectively). To conclude, a single bout of either MIE or HIE did not alter NEAT on the exercise day or on the first two days following the exercise session. However, extending the monitoring of NEAT allowed the detection of a 48-hour delay in increased NEAT after performing HIE. A longer-term intervention is needed to determine the effect of accumulated exercise sessions over a week on NEAT. In Study 2, there were two primary aims. The first aim was to test the reliability of a discontinuous incremental exercise protocol (DISCON-FATmax) to identify the workload at which fat oxidation is maximised (FATmax). Ten overweight and obese sedentary men (mean BMI 29.5 ± 4.5 kg/m2; mean age 28.0 ± 5.3 y) participated in this study and performed two identical DISCON-FATmax tests one week apart. Each test consisted of alternating 4-min exercise and 2-min rest intervals on a cycle ergometer. The starting workload of 28 W was increased every 4 min using 14 W increments followed by 2-min rest intervals.
When the respiratory exchange ratio was consistently >1.0, the workload was increased by 14 W every 2 min until volitional exhaustion. Fat oxidation was measured by indirect calorimetry. The mean FATmax, VO2peak, %VO2peak and %Wmax at which FATmax occurred during the two tests were 0.23 ± 0.09 and 0.18 ± 0.08 (g.min-1); 29.7 ± 7.8 and 28.3 ± 7.5 (ml.kg-1.min-1); 42.3 ± 7.2 and 42.6 ± 10.2 (%VO2max); and 36.4 ± 8.5 and 35.4 ± 10.9 (%), respectively. A paired-samples t-test revealed a significant difference in FATmax (g.min-1) between the tests (t = 2.65, P = 0.03). The mean difference in FATmax was 0.05 (g.min-1) with the 95% confidence interval ranging from 0.01 to 0.18. A paired-samples t-test, however, revealed no significant difference in the workloads (i.e., W) between the tests, t(9) = 0.70, P = 0.4. The intra-class correlation coefficient for FATmax (g.min-1) between the tests was 0.84 (95% confidence interval: 0.36-0.96, P < 0.01). However, Bland-Altman analysis revealed a large disagreement in FATmax (g.min-1) related to W between the two tests: 11 ± 14 (W) (4.1 ± 5.3 %VO2peak). These data demonstrate two important phenomena associated with exercise-induced substrate oxidation: firstly, that maximal fat oxidation derived from a discontinuous FATmax protocol differed statistically between repeated tests, and secondly, that there was large variability in the workload corresponding with FATmax. The second aim of Study 2 was to test the validity of the DISCON-FATmax protocol by comparing maximal fat oxidation (g.min-1) determined by DISCON-FATmax with fat oxidation (g.min-1) during a continuous exercise protocol using a constant load (CONEX). Ten overweight and obese sedentary males (BMI = 29.5 ± 4.5 kg/m2; age = 28.0 ± 4.5 y) with a VO2max of 29.1 ± 7.5 ml.kg-1.min-1 performed a DISCON-FATmax test consisting of alternating 4-min exercise and 2-min rest intervals on a cycle ergometer. The 1-h CONEX protocol used the workload from the DISCON-FATmax to determine FATmax. The mean FATmax, VO2max, %VO2max and workload at which FATmax occurred during the DISCON-FATmax were 0.23 ± 0.09 (g.min-1); 29.1 ± 7.5 (ml.kg-1.min-1); 43.8 ± 7.3 (%VO2max); and 58.8 ± 19.6 (W), respectively. The mean fat oxidation during the 1-h CONEX protocol was 0.19 ± 0.07 (g.min-1). A paired-samples t-test revealed no significant difference in fat oxidation (g.min-1) between DISCON-FATmax and CONEX, t(9) = 1.85, P = 0.097 (two-tailed). There was also no significant correlation in fat oxidation between the DISCON-FATmax and CONEX (R = 0.51, P = 0.14). Bland-Altman analysis revealed a large disagreement in fat oxidation between the DISCON-FATmax and CONEX: the upper limit of agreement was 0.13 (g.min-1) and the lower limit was −0.03 (g.min-1). These data suggest that the CONEX and DISCON-FATmax protocols did not elicit different rates of fat oxidation (g.min-1). However, the individual variability in fat oxidation was large, particularly in the DISCON-FATmax test. Further research is needed to ascertain the validity of graded exercise tests for predicting fat oxidation during constant-load exercise sessions. The aim of Study 3 was to compare the impact of two different intensities of four weeks of exercise training on fat oxidation, NEAT, and appetite in overweight and obese men.
Using a cross-over design, 11 participants (BMI = 29 ± 4 kg/m2; age = 27 ± 4 y) took part in a training study and were randomly assigned initially to either [1] low-intensity (45% VO2max) exercise (LIT) or [2] high-intensity interval exercise (alternating 30 s at 90% VO2max followed by 30 s rest) (HIIT), each of 40-min duration, three times a week. Participants completed four weeks of supervised training with a two-week washout period between cross-overs. At baseline and at the end of each exercise intervention, VO2max, fat oxidation, and NEAT were measured. Fat oxidation was determined during a standard 30-min continuous exercise bout at 45% VO2max. During the steady-state exercise, expired gases were measured intermittently for 5-min periods and HR was monitored continuously. In each training period, NEAT was measured for seven consecutive days using an accelerometer (RT3) the week before, at week 3, and the week after training. Subjective appetite sensations and food preferences were measured immediately before and after the first exercise session every week for four weeks during both LIT and HIIT. The mean fat oxidation rate during the standard continuous exercise bout at baseline for both LIT and HIIT was 0.14 ± 0.08 (g.min-1). After four weeks of exercise training, the mean fat oxidation was 0.178 ± 0.04 and 0.183 ± 0.04 g.min-1 for LIT and HIIT, respectively. The mean NEAT (counts.min-1) was 45 ± 18 at baseline, 55 ± 22 and 44 ± 16 during training, and 51 ± 14 and 50 ± 21 after training for LIT and HIIT, respectively. There was no significant difference in fat oxidation between LIT and HIIT. Moreover, although not statistically significant, there was some evidence to suggest that LIT and HIIT tend to increase fat oxidation during exercise at 45% VO2max (P = 0.14 and 0.08, respectively). The order of training treatment did not significantly influence changes in fat oxidation, NEAT, or appetite. NEAT (counts.min-1) was not significantly different in the week following training for either LIT or HIIT. Although not statistically significant (P = 0.08), NEAT was 20% lower during week 3 of exercise training in HIIT compared with LIT. Examination of appetite sensations revealed differences in the intensity of hunger, with higher ratings after LIT compared with HIIT. No differences were found in preferences for high-fat sweet foods between LIT and HIIT. In conclusion, the results of this thesis suggest that while fat oxidation during steady-state exercise was not affected by the level of exercise intensity, there is strong evidence to suggest that intense exercise could have a debilitative effect on NEAT.
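The thesis measures fat oxidation by indirect calorimetry. A common way to derive whole-body fat oxidation rates (g.min-1) from measured VO2 and VCO2 is Frayn's stoichiometric equations; using them here is an assumption for illustration, since the abstract does not state the exact equations used:

```python
def fat_oxidation_g_per_min(vo2_l_min, vco2_l_min):
    """Frayn (1983) stoichiometric estimate of whole-body fat oxidation
    from indirect calorimetry, ignoring protein oxidation."""
    return 1.67 * vo2_l_min - 1.67 * vco2_l_min

def cho_oxidation_g_per_min(vo2_l_min, vco2_l_min):
    """Companion estimate of carbohydrate oxidation."""
    return 4.55 * vco2_l_min - 3.21 * vo2_l_min

# Example: VO2 = 2.0 L/min at RER = 0.85, so VCO2 = 1.7 L/min.
print(round(fat_oxidation_g_per_min(2.0, 1.7), 2))  # ~0.5 g/min of fat
```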
Abstract:
Objective To examine the risk factors for Mycobacterium tuberculosis infection (MTI) among Greenlandic children for the purpose of identifying those at highest risk of infection. Methods Between 2005 and 2007, 1797 Greenlandic schoolchildren in five different areas were tested for MTI with an interferon gamma release assay (IGRA) and a tuberculin skin test (TST). Parents or guardians were surveyed using a standardized self-administered questionnaire to obtain data on crowding in the household, parents’ educational level and the child’s health status. Demographic data for each child – i.e. parents’ place of birth, number of siblings, age spacing between siblings (next younger and next older), birth order and mother’s age when the child was born – were also extracted from a public registry. Logistic regression was used to check for associations between these variables and MTI, and all results were expressed as odds ratios (ORs) and 95% confidence intervals (CIs). Children were considered to have MTI if they tested positive on both the IGRA and the TST. Findings The overall prevalence of MTI was 8.5% (152/1797). MTI was diagnosed in 26.7% of the children with a known TB contact, as opposed to 6.4% of the children without such contact. Overall, the MTI rate was higher among Inuit children (OR: 4.22; 95% CI: 1.55–11.5) and among children born less than one year after the birth of the next older sibling (OR: 2.48; 95% CI: 1.33–4.63). Self-reported TB contact modified the profile to include household crowding and low maternal education. Children who had an older MTI-positive sibling were much more likely to test positive for MTI themselves (OR: 14.2; 95% CI: 5.75–35.0) than children without an infected older sibling. Conclusion Ethnicity, sibling relations, number of household residents and maternal level of education are factors associated with the risk of TB infection among children in Greenland. The strong household clustering of MTI suggests that family sources of exposure are important.
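The reported ORs come from logistic regression; for a single binary exposure, the same quantity reduces to the familiar 2x2-table computation. A sketch with placeholder counts (not the study's data):

```python
import numpy as np

# Odds ratio and 95% CI from a 2x2 exposure/outcome table
# (e.g., having an MTI-positive older sibling vs the child's own MTI status).
# Counts below are placeholders, not the study's data.
a, b = 24, 45      # exposed:   infected / not infected
c, d = 128, 1600   # unexposed: infected / not infected

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf's method
lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI: {lo:.2f}-{hi:.2f})")
```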
Abstract:
Unlicensed driving remains a serious problem in many jurisdictions; while it does not play a direct causative role in road crashes, it undermines driver licensing systems and is linked to other high-risk driving behaviours. Roadside licence check surveys represent the most direct means of estimating the prevalence of unlicensed driving. The current study involved the Queensland Police Service (QPS) checking the licences of 3,112 drivers intercepted at random breath testing operations across Queensland between February and April 2010. Data were matched with official licensing records from Transport and Main Roads (TMR) via the drivers’ licence numbers. In total, 2,914 (93.6%) records were matched, with the majority of the 198 unmatched cases representing international or interstate licence holders (n = 156), leaving 42 unknown cases. Among the drivers intercepted at the roadside, 20 (0.6%) were identified as unlicensed at the time, while a further 11 (0.4%) were driving unaccompanied on a Learner Licence. However, examination of TMR licensing records revealed that an additional 9 individuals (0.3%) had a current licence sanction but were not identified as unlicensed by QPS. Thus, in total, 29 of the drivers were unlicensed at the time, representing 0.9% of all drivers intercepted and 1% of those whose licence records could be checked. This is considerably lower than the involvement of unlicensed drivers in fatal and serious injury crashes in Queensland, which is consistent with other research confirming the increased crash risk of this group. However, the number of unmatched records suggests that the on-road survey may have under-estimated the prevalence of unlicensed driving, so further development of the survey method is recommended.
Abstract:
Background: The accurate evaluation of physical activity levels amongst youth is critical for quantifying physical activity behaviors and evaluating the effect of physical activity interventions. The purpose of this review is to evaluate contemporary approaches to physical activity evaluation amongst youth. Data sources: The literature from a range of sources was reviewed and synthesized to provide an overview of contemporary approaches for measuring youth physical activity. Results: Five broad categories are described: self-report, instrumental movement detection, biological approaches, direct observation, and combined methods. Emerging technologies and priorities for future research are also identified. Conclusions: There will always be a trade-off between accuracy and available resources when choosing the best approach for measuring physical activity amongst youth. Unfortunately, cost and logistical challenges may prohibit the use of "gold standard" physical activity measurement approaches such as doubly labelled water. Other objective methods such as heart rate monitoring, accelerometry, pedometry, indirect calorimetry, or a combination of measures have the potential to better capture the duration and intensity of physical activity, while self-reported measures are useful for capturing the type and context of activity.
Abstract:
The Six Sigma technique is a quality management strategy used to improve quality and productivity in manufacturing processes. It comprises two major project methodologies, DMAIC and DMADV, both inspired by Deming’s "Plan – Do – Check – Act" (PDCA) cycle and each consisting of five phases. The DMAIC project methodology is used comprehensively in this research. In brief, DMAIC is used to improve an existing manufacturing process and involves the phases Define, Measure, Analyse, Improve, and Control. The mask industry has become a significant industry since the outbreak of serious diseases such as Severe Acute Respiratory Syndrome (SARS), bird flu, influenza, swine flu, and hay fever. Protecting the respiratory system has therefore become a fundamental requirement for preventing respiratory diseases, and masks are the most appropriate protective products inasmuch as they are effective in protecting the respiratory tract and resisting airborne virus infection. To satisfy varied customer requirements, thousands of mask products have been designed for the market. Masks are also widely used across industries, including the medical, semiconductor, food, traditional manufacturing, and metal industries. Because masks are used to prevent dangerous diseases and safeguard people, their quality is a priority, and quality improvement techniques are of very high significance in the mask industry. The purpose of this research project is firstly to investigate current quality control practices in the mask industry; then to explore the feasibility of using the Six Sigma technique in that industry; and finally to implement the Six Sigma technique in a case company to develop and evaluate its product quality process. This research mainly investigates the quality problems of the mask industry and the effectiveness of the Six Sigma technique, with the United Excel Enterprise Corporation (UEE) as the case company. The DMAIC project methodology of the Six Sigma technique is adopted and developed in this research. The research makes a significant contribution to knowledge. First, the main results contribute to discovering the root causes of quality problems in the mask industry. Secondly, the case company was able to increase not only its acceptance rate but also its quality level by using the Six Sigma technique; hence, the Six Sigma technique could increase the production capacity of the company. Third, the Six Sigma technique needs to be extensively modified to improve quality control in the mask industry. The impact of the Six Sigma technique on the overall performance of the business organisation should be further explored in future research.
Abstract:
Motorcycles are particularly vulnerable in right-angle crashes at signalized intersections. The objective of this study is to explore how variations in roadway characteristics, environmental factors, traffic factors, maneuver types, human factors and driver demographics influence the right-angle crash vulnerability of motorcycles at intersections. The problem is modeled using a mixed logit model with a binary choice formulation to differentiate how an at-fault vehicle collides with a not-at-fault motorcycle in comparison to other collision types. The mixed logit formulation allows randomness in the parameters and hence accounts for the underlying heterogeneities potentially inherent in driver behavior and other unobserved variables. A likelihood ratio test reveals that the mixed logit model is indeed better than the standard logit model. Night-time riding shows a positive association with the vulnerability of motorcyclists. Moreover, motorcyclists are particularly vulnerable on single-lane roads, on the curb and median lanes of multi-lane roads, and on one-way and two-way roads relative to divided highways. Drivers who deliberately run red lights, as well as those who are careless towards motorcyclists, especially when making turns at intersections, increase the vulnerability of motorcyclists. Drivers appear more restrained when there is a passenger on board, which decreases the crash potential with motorcyclists. The presence of red light cameras also significantly decreases the right-angle crash vulnerability of motorcyclists. The findings of this study should be helpful in developing more targeted countermeasures for traffic enforcement, driver/rider training and/or education, and safety awareness programs to reduce the vulnerability of motorcyclists.
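The likelihood ratio test mentioned compares the standard logit (restricted) against the mixed logit (unrestricted) model. A sketch of the computation with placeholder log-likelihoods (the abstract does not report the values):

```python
from scipy.stats import chi2

# Likelihood ratio test: standard logit (restricted) vs mixed logit (unrestricted).
# Log-likelihood values below are placeholders; the abstract does not report them.
ll_logit = -812.4
ll_mixed = -798.1
k_extra = 3        # extra parameters, e.g. standard deviations of random coefficients

lr_stat = 2 * (ll_mixed - ll_logit)     # chi-square distributed under H0
p_value = chi2.sf(lr_stat, df=k_extra)
print(f"LR = {lr_stat:.1f}, p = {p_value:.4f}")  # small p favours the mixed logit
```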
Abstract:
Navigational collisions are one of the major safety concerns in many seaports. To address this concern, a comprehensive and structured method of collision risk management is necessary. Traditionally, management of port water collision risks has relied on historical collision data. However, this collision-data-based approach is hampered by several shortcomings: the randomness and rarity of collisions yield too few samples for sound statistical analysis, collision data explain little about collision causation, and the approach is reactive rather than proactive with respect to safety. A promising alternative that overcomes these shortcomings is the navigational traffic conflict technique, which uses traffic conflicts as an alternative to collision data. This paper proposes a collision risk management method built on the principles of this technique. The method allows safety analysts to diagnose safety deficiencies in a proactive manner and consequently has great potential for managing collision risks in a fast, reliable and efficient manner.
Abstract:
Navigational collisions are one of the major safety concerns for many seaports. Continuing growth of shipping traffic in number and size is likely to increase the number of traffic movements, which could consequently result in a higher risk of collisions in these restricted waters. This continually increasing safety concern warrants a comprehensive technique for modeling collision risk in port waters, particularly for modeling the probability of collision events and the associated consequences (i.e., injuries and fatalities). A number of techniques have been utilized for modeling the risk qualitatively, semi-quantitatively and quantitatively. These traditional techniques mostly rely on historical collision data, often in conjunction with expert judgments. However, they are hampered by several shortcomings: the randomness and rarity of collisions yield too few collision counts for sound statistical analysis, collision data explain little about collision causation, and the approach is reactive with respect to safety. A promising alternative approach that overcomes these shortcomings is the navigational traffic conflict technique (NTCT), which uses traffic conflicts as an alternative to collisions for modeling the probability of collision events quantitatively. This article explores the existing techniques for modeling collision risk in port waters. In particular, it identifies the advantages and limitations of the traditional techniques and highlights the potential of the NTCT to overcome those limitations. In view of the principles of the NTCT, a structured method for managing collision risk is proposed. This risk management method allows safety analysts to diagnose safety deficiencies in a proactive manner and consequently has great potential for managing collision risk in a fast, reliable and efficient manner.