989 results for 010401 Applied Statistics


Relevance:

100.00%

Publisher:

Abstract:

Readily accepted knowledge regarding crash causation is consistently omitted from efforts to model and subsequently understand motor vehicle crash occurrence and its contributing factors. For instance, distracted and impaired driving accounts for a significant proportion of crash occurrence, yet is rarely modeled explicitly. In addition, spatially allocated influences such as local law enforcement efforts, proximity to bars and schools, and roadside chronic distractions (advertising, pedestrians, etc.) play a role in contributing to crash occurrence and yet are routinely absent from crash models. By and large, these well-established omitted effects are simply assumed to contribute to model error, with the predominant focus placed on modeling the engineering and operational effects of transportation facilities (e.g. AADT, number of lanes, speed limits, width of lanes, etc.). The typical analytical approach—with a variety of statistical enhancements—has been to model crashes that occur at system locations as negative binomial (NB) distributed events that arise from a singular, underlying crash-generating process. These models and their statistical kin dominate the literature; however, it is argued in this paper that these models fail to capture the underlying complexity of motor vehicle crash causes, and thus thwart deeper insights regarding crash causation and prevention. This paper first describes hypothetical scenarios that collectively illustrate why current models mislead highway safety researchers and engineers. It is argued that current model shortcomings are significant and will lead to poor decision-making. Exploiting our current state of knowledge of crash causation, crash counts are postulated to arise from three processes: observed network features, unobserved spatial effects, and ‘apparent’ random influences that largely reflect the behavioral influences of drivers. It is argued, furthermore, that these three processes can in theory be modeled separately to gain deeper insight into crash causes, and that the resulting model represents a more realistic depiction of reality than the state-of-the-practice NB regression. An admittedly imperfect empirical model that mixes three independent crash occurrence processes is shown to outperform the classical NB model. The questioning of current modeling assumptions and the implications of the latent mixture model for current practice are the most important contributions of this paper, with an initial but rather vulnerable attempt to model the latent mixtures as a secondary contribution.
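
The three-process postulate can be illustrated with a small simulation. In the sketch below, all component forms, rates and coefficients are illustrative assumptions rather than the paper's estimates; a single NB model fitted to counts generated this way would absorb the second and third processes into its dispersion parameter, which is the shortcoming the paper highlights.

```python
# A minimal simulation sketch of the postulated latent mixture; the
# component forms and parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_sites = 1000

# 1. Observed network features (e.g. AADT) acting through a log link.
aadt = rng.uniform(1_000, 40_000, n_sites)
lam_network = np.exp(-2.0 + 5e-5 * aadt)

# 2. Unobserved spatial effects (enforcement, bars, schools): a
#    site-level random effect standing in for spatial correlation.
lam_spatial = np.exp(rng.normal(0.0, 0.5, n_sites))

# 3. 'Apparent' random behavioural influences: a rare high-rate regime
#    (e.g. impaired driving) active at a small share of sites.
behavioural = rng.binomial(1, 0.05, n_sites) * rng.poisson(3.0, n_sites)

crashes = rng.poisson(lam_network * lam_spatial) + behavioural
print(crashes.mean(), crashes.var())  # variance > mean: NB-like overdispersion
```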

Relevance:

100.00%

Publisher:

Abstract:

The R statistical environment and language has demonstrated particular strengths for interactive development of statistical algorithms, as well as data modelling and visualisation. Its current implementation has an interpreter at its core, which may result in a performance penalty in comparison to directly executing user algorithms in the native machine code of the host CPU. In contrast, the C++ language has no built-in visualisation capabilities, handling of linear algebra or even basic statistical algorithms; however, user programs are converted to high-performance machine code ahead of execution. A new method avoids possible speed penalties in R by using the Rcpp extension package in conjunction with the Armadillo C++ matrix library. In addition to the inherent performance advantages of compiled code, Armadillo provides an easy-to-use template-based meta-programming framework, allowing the automatic pooling of several linear algebra operations into one, which in turn can lead to further speedups. With the aid of Rcpp and Armadillo, conversion of linear algebra centered algorithms from R to C++ becomes straightforward. The algorithms retain their overall structure as well as readability, all while maintaining a bidirectional link with the host R environment. Empirical timing comparisons of R and C++ implementations of a Kalman filtering algorithm indicate a speedup of several orders of magnitude.
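
For reference, the kind of linear-algebra-centred loop the paper benchmarks looks like the following. This is a generic NumPy sketch of a Kalman filter, not the paper's R or RcppArmadillo benchmark code; each line maps directly onto Armadillo expressions when ported to C++.

```python
# A generic Kalman filter sketch in NumPy; the matrix expressions carry
# over almost verbatim to Armadillo (e.g. P * H.t() * inv(S)).
import numpy as np

def kalman_filter(y, F, H, Q, R, x0, P0):
    """Filter each row of y through a linear-Gaussian state-space model."""
    x, P = x0, P0
    states = []
    for z in y:
        # Predict: propagate the state estimate and its covariance.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: fold in the new observation z.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        states.append(x.copy())
    return np.array(states)
```

Armadillo's expression templates can fuse chained operations such as the gain computation into fewer temporaries, which is the source of the additional speedup beyond compilation alone.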

Relevance:

100.00%

Publisher:

Abstract:

Background subtraction is a fundamental low-level processing task in numerous computer vision applications. The vast majority of algorithms process images on a pixel-by-pixel basis, where an independent decision is made for each pixel. A general limitation of such processing is that rich contextual information is not taken into account. We propose a block-based method capable of dealing with noise, illumination variations, and dynamic backgrounds, while still obtaining smooth contours of foreground objects. Specifically, image sequences are analyzed on an overlapping block-by-block basis. A low-dimensional texture descriptor obtained from each block is passed through an adaptive classifier cascade, where each stage handles a distinct problem. A probabilistic foreground mask generation approach then exploits block overlaps to integrate interim block-level decisions into final pixel-level foreground segmentation. Unlike many pixel-based methods, ad-hoc postprocessing of foreground masks is not required. Experiments on the difficult Wallflower and I2R datasets show that the proposed approach obtains on average better results (both qualitatively and quantitatively) than several prominent methods. We furthermore propose the use of tracking performance as an unbiased approach for assessing the practical usefulness of foreground segmentation methods, and show that the proposed approach leads to considerable improvements in tracking accuracy on the CAVIAR dataset.
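
The block-overlap integration can be sketched as follows; the mean-absolute-difference block test below is a simplified stand-in for the paper's low-dimensional texture descriptor and adaptive classifier cascade.

```python
# A simplified sketch of overlapping block decisions integrated into a
# pixel-level mask; the per-block test is a stand-in for the paper's
# descriptor and cascade.
import numpy as np

def foreground_mask(frame, background, block=16, step=8, thresh=25.0):
    """frame, background: 2-D grayscale arrays of equal shape."""
    votes = np.zeros(frame.shape)
    counts = np.zeros(frame.shape)
    H, W = frame.shape
    for i in range(0, H - block + 1, step):
        for j in range(0, W - block + 1, step):
            f = frame[i:i+block, j:j+block].astype(float)
            b = background[i:i+block, j:j+block].astype(float)
            is_fg = np.abs(f - b).mean() > thresh  # stand-in block decision
            votes[i:i+block, j:j+block] += is_fg
            counts[i:i+block, j:j+block] += 1.0
    # Each pixel is covered by several blocks; integrating their votes
    # yields smooth contours without ad-hoc postprocessing.
    return votes / np.maximum(counts, 1.0) > 0.5
```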

Relevance:

100.00%

Publisher:

Abstract:

In the field of face recognition, Sparse Representation (SR) has received considerable attention during the past few years. Most of the relevant literature focuses on holistic descriptors in closed-set identification applications. The underlying assumption in SR-based methods is that each class in the gallery has sufficient samples and the query lies on the subspace spanned by the gallery of the same class. Unfortunately, such an assumption is easily violated in the more challenging face verification scenario, where an algorithm is required to determine if two faces (where one or both have not been seen before) belong to the same person. In this paper, we first discuss why previous attempts with SR might not be applicable to verification problems. We then propose an alternative approach to face verification via SR. Specifically, we propose to use explicit SR encoding on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which are then concatenated to form an overall face descriptor. Due to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, we evaluate several SR encoding techniques: l1-minimisation, Sparse Autoencoder Neural Network (SANN), and an implicit probabilistic technique based on Gaussian Mixture Models. Thorough experiments on the AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the proposed local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, in both verification and closed-set identification problems. The experiments also show that l1-minimisation based encoding has a considerably higher computational cost than the other techniques, but leads to higher recognition rates.
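
The descriptor construction can be sketched with an off-the-shelf sparse coder; the patch size, region grid and pre-learned dictionary below are illustrative assumptions rather than the paper's settings.

```python
# A sketch of the local SR descriptor: sparse-code patches, average-pool
# the codes per region, then concatenate. Parameters are illustrative.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import sparse_encode

def face_descriptor(face, dictionary, patch=8, grid=4):
    """face: 2-D grayscale array; dictionary: (n_atoms, patch*patch)."""
    H, W = face.shape
    rh, rw = H // grid, W // grid
    regions = []
    for i in range(grid):
        for j in range(grid):
            region = face[i*rh:(i+1)*rh, j*rw:(j+1)*rw]
            patches = extract_patches_2d(region, (patch, patch))
            X = patches.reshape(len(patches), -1).astype(float)
            # l1-based encoding, one of the techniques evaluated.
            codes = sparse_encode(X, dictionary,
                                  algorithm='lasso_lars', alpha=0.1)
            # Average pooling discards spatial layout within the region,
            # which is what buys robustness to misalignment.
            regions.append(np.abs(codes).mean(axis=0))
    return np.concatenate(regions)
```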

Relevance:

100.00%

Publisher:

Abstract:

We sought to determine the impact of electrospinning parameters on a reliable criterion that could improve the practical applicability of fibrous scaffolds for tissue regeneration. We used an image analysis technique to derive a web permeability index (WPI) by modeling the formation of electrospun scaffolds. Poly(3-hydroxybutyrate) (P3HB) scaffolds were fabricated according to predetermined factor levels in a Taguchi orthogonal design. The material parameters were the polymer concentration, conductivity, and volatility of the solution. The processing parameters were the applied voltage and nozzle-to-collector distance. A consistent trend was observed in the WPI values: as the polymer concentration or the applied voltage increased, the pore interconnectivity decreased. The quality of the jet instability altered the pore numbers, areas, and other structural characteristics, all of which determined the scaffold porosity and aperture interconnectivity. An initial drastic increase was observed in the WPI values because of the chain entanglement phenomenon above a 6 wt % P3HB content. Although the solution mixture significantly (p < 0.05) changed the scaffold architectural characteristics as a function of the solution viscosity and surface tension, it had a minor impact on the WPI values. The solution mixture ranked third in significance, and the nozzle-to-collector distance was found to be the least important factor.
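
The Taguchi-style ranking of factor importance can be illustrated with a main-effects calculation; the run layout and WPI values below are invented for illustration and are not the paper's measurements.

```python
# Illustrative main-effects analysis of an orthogonal design; the run
# layout and WPI values are invented, not the paper's data.
import pandas as pd

runs = pd.DataFrame({
    'concentration': [4, 4, 6, 6, 8, 8],         # wt % P3HB (illustrative)
    'voltage':       [10, 15, 10, 15, 10, 15],   # kV
    'distance':      [10, 20, 20, 10, 15, 15],   # cm
    'WPI':           [0.42, 0.38, 0.61, 0.52, 0.33, 0.28],
})

# Main effect of each factor = spread of mean WPI across its levels;
# factors are then ranked by this spread.
for factor in ['concentration', 'voltage', 'distance']:
    level_means = runs.groupby(factor)['WPI'].mean()
    print(factor, round(level_means.max() - level_means.min(), 3))
```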

Relevance:

100.00%

Publisher:

Abstract:

Effective wayfinding is the successful interplay of human and environmental factors resulting in a person successfully moving from their current position to a desired location in a timely manner. To date, this process has not been modelled to reflect this interplay. This paper proposes a complex-systems modelling approach to wayfinding, using Bayesian Networks to model the process, and applies the model to airports. The model suggests that human factors have a greater impact on effective wayfinding in airports than environmental factors. The greatest influences on human factors are found to be the level of spatial anxiety experienced by travellers and their cognitive and spatial skills. The model also predicted that the navigation pathway a traveller must traverse has a larger impact on the effectiveness of an airport’s environment in promoting effective wayfinding than the terminal design.
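
A toy version of such a network can be written with the pgmpy library. The structure and probabilities below are invented to show the mechanics; the paper's airport model is far larger.

```python
# A toy Bayesian network in pgmpy; structure and numbers are invented
# for illustration (states: 0 = poor, 1 = good).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([('Human', 'Wayfinding'),
                         ('Environment', 'Wayfinding')])
cpd_h = TabularCPD('Human', 2, [[0.4], [0.6]])
cpd_e = TabularCPD('Environment', 2, [[0.3], [0.7]])
cpd_w = TabularCPD('Wayfinding', 2,
                   [[0.9, 0.6, 0.5, 0.1],   # P(poor | Human, Environment)
                    [0.1, 0.4, 0.5, 0.9]],  # P(good | Human, Environment)
                   evidence=['Human', 'Environment'], evidence_card=[2, 2])
model.add_cpds(cpd_h, cpd_e, cpd_w)
assert model.check_model()

infer = VariableElimination(model)
# Compare the pull of each parent on the wayfinding outcome.
print(infer.query(['Wayfinding'], evidence={'Human': 0}))
print(infer.query(['Wayfinding'], evidence={'Environment': 0}))
```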

Relevance:

100.00%

Publisher:

Abstract:

This paper explores the similarities and differences between bicycle and motorcycle crashes with other motor vehicles. If similar treatments can be effective for both bicycle and motorcycle crashes, then greater benefits in terms of crash costs saved may be possible for the same investment in treatments. To reduce the biases associated with under-reporting of these crashes to police, property damage and minor injury crashes were excluded. The most common crash type for both bicycles (31.1%) and motorcycles (24.5%) was a collision at an intersection between vehicles arriving from adjacent approaches. Drivers of the other vehicles were coded as most at fault in the majority of two-unit bicycle (57.0%) and motorcycle (62.7%) crashes. The crash types, patterns of fault and factors affecting fault were generally similar for bicycle and motorcycle crashes. This confirms the need to combat the factors contributing to the failure of other drivers to yield right of way to two-wheelers, and suggests that some of these actions should prove beneficial to the safety of both motorized and non-motorized two-wheelers. In contrast, child bicyclists were more often at fault, particularly in crashes involving a vehicle leaving a driveway or footpath. The greater reporting of violations by riders and drivers in motorcycle crashes also deserves further investigation.
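
The kind of fault analysis described can be sketched as a logistic regression; the file and column names below are hypothetical placeholders, not the paper's data.

```python
# A sketch of modelling fault as a function of crash characteristics;
# the file and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

crashes = pd.read_csv('two_unit_crashes.csv')  # hypothetical dataset
y = crashes['other_driver_at_fault']           # 1 = other driver most at fault
X = sm.add_constant(crashes[['motorcycle', 'rider_age',
                             'intersection', 'night']])
print(sm.Logit(y, X).fit().summary())  # compare effects across two-wheeler types
```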

Relevance:

100.00%

Publisher:

Abstract:

A novel in-cylinder pressure method for determining ignition delay has been proposed and demonstrated. This method introduces a new Bayesian statistical model to resolve the start of combustion, defined as the point at which the band-pass in-cylinder pressure deviates from background noise and the combustion resonance begins. It is further demonstrated that the method remains accurate in situations where noise is present. The start of combustion can be resolved for each cycle without the need for ad hoc methods such as cycle averaging, so the method allows for analysis of consecutive cycles and inter-cycle variability studies. Ignition delays obtained by this method and by the net rate of heat release have been shown to be in good agreement. However, the use of combustion resonance to determine the start of combustion is preferable to the net rate of heat release method because it does not rely on knowledge of heat losses and still functions accurately in the presence of noise. Results for a six-cylinder turbocharged common-rail diesel engine run on neat diesel fuel at full, three-quarter and half load have been presented. Under these conditions the ignition delay was shown to increase as the load was decreased, with a significant increase in ignition delay at half load compared with three-quarter and full loads.
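
A stripped-down version of the change-point idea is sketched below: assuming Gaussian noise and a flat prior over the change index, the posterior mode marks where the band-pass trace departs from background noise. This is a simplified stand-in for the paper's Bayesian model, not its actual implementation.

```python
# A simplified Bayesian change-point sketch (not the paper's full model):
# locate where a band-pass pressure trace departs from background noise.
import numpy as np
from scipy import stats

def start_of_combustion(signal):
    """Posterior mode of a single variance change-point in `signal`."""
    n = len(signal)
    log_post = np.full(n, -np.inf)
    for k in range(10, n - 10):      # leave room to estimate both variances
        noise, resonance = signal[:k], signal[k:]
        # Plug-in variance estimates with a flat prior over k make the
        # posterior proportional to this likelihood.
        log_post[k] = (stats.norm.logpdf(noise, 0, noise.std()).sum() +
                       stats.norm.logpdf(resonance, 0, resonance.std()).sum())
    return np.argmax(log_post)       # sample index of the start of combustion
```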

Relevance:

100.00%

Publisher:

Abstract:

Keeping exotic plant pests out of our country relies on good border control or quarantine. However, with increasing globalisation and mobility, some things slip through. Then the back-up systems become important. These can include an expensive form of surveillance that purposively targets particular pests. A much wider net is provided by general surveillance, which is assimilated into everyday activities, like farmers checking the health of their crops. In fact, farmers and even home gardeners have provided a front-line warning system for some pests (e.g. the European wasp) that could otherwise have wreaked havoc. Mathematics is used to model how surveillance works in various situations. Within this virtual world we can play with various surveillance and management strategies to "see" how they would work, or how to make them work better. One of our greatest challenges is estimating some of the input parameters: because the pest hasn't been here before, it's hard to predict how it might behave: how well it might establish or spread, and what types of symptoms it might express. So we rely on experts to help us with this. This talk will look at the mathematical, psychological and logical challenges of helping experts to quantify what they think. We show how the subjective Bayesian approach is useful for capturing expert uncertainty, ultimately providing a more complete picture of what they think... and what they don't!
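
One common way to capture that expert uncertainty is to match a Beta prior to the expert's stated quantiles; the numbers below are invented for illustration.

```python
# Fit a Beta prior to an expert's quantiles (illustrative numbers):
# "establishment probability is about 10%, very unlikely above 30%".
import numpy as np
from scipy import stats, optimize

median, upper = 0.10, 0.30

def quantile_gap(params):
    a, b = np.exp(params)            # keep both shape parameters positive
    return ((stats.beta.ppf(0.50, a, b) - median) ** 2 +
            (stats.beta.ppf(0.95, a, b) - upper) ** 2)

a, b = np.exp(optimize.minimize(quantile_gap, x0=[0.0, 1.0]).x)
print(f"Beta({a:.2f}, {b:.2f}) encodes the expert's stated belief")
```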

Relevance:

100.00%

Publisher:

Abstract:

Most studies examining the temperature–mortality association in a city used temperatures from one site or the average from a network of sites. This may cause measurement error, as temperature varies across a city due to effects such as urban heat islands. We examined whether spatiotemporal models using spatially resolved temperatures produced different associations between temperature and mortality compared with time series models that used non-spatial temperatures. We obtained daily mortality data in 163 areas across Brisbane city, Australia from 2000 to 2004. We used ordinary kriging to interpolate spatial temperature variation across the city based on 19 monitoring sites. We used a spatiotemporal model to examine the impact of spatially resolved temperatures on mortality. We also used a time series model to examine non-spatial temperatures, using a single site and the average temperature from three sites. We used squared Pearson scaled residuals to compare model fit. We found that kriged temperatures were consistent with observed temperatures. Spatiotemporal models using kriged temperature data yielded slightly better model fit than time series models using a single site or the average of three sites' data. Despite this better fit, spatiotemporal and time series models produced similar associations between temperature and mortality. In conclusion, time series models using non-spatial temperatures were as good as spatiotemporal models at estimating the city-wide association between temperature and mortality.
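
The spatial interpolation step can be sketched with Gaussian process regression, the machine-learning counterpart of the ordinary kriging used in the study; the coordinates and temperatures below are simulated placeholders.

```python
# Kriging-style interpolation sketched as GP regression; the monitor
# coordinates and temperatures are simulated placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
sites = rng.uniform(0, 30, (19, 2))             # 19 monitoring sites (km)
temps = 28 + 0.05 * sites[:, 0] + rng.normal(0, 0.3, 19)

kriger = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(0.1),
                                  normalize_y=True).fit(sites, temps)

# Interpolate one day's temperature to the centroids of the 163 areas.
centroids = rng.uniform(0, 30, (163, 2))
area_temps, se = kriger.predict(centroids, return_std=True)
```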

Relevance:

100.00%

Publisher:

Abstract:

Current Bayesian network software packages provide good graphical interfaces for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing, and at times it can be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling end-users to easily perform inferences over Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on their cause-and-effect relationships, making user interaction more intuitive and friendly. In addition to performing various types of inferences, users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using the Qt and SMILE libraries in C++.

Relevance:

100.00%

Publisher:

Abstract:

Distraction resulting from mobile phone use whilst driving has been shown to increase the reaction times of drivers, thereby increasing the likelihood of a crash. This study compares the effects of mobile phone conversations on the reaction times of drivers responding to traffic events that occur at different points in a driver’s field of view. The CARRS-Q Advanced Driving Simulator was used to test a group of young drivers on various simulated driving tasks, including a traffic event that occurred within the driver’s central vision—a lead vehicle braking suddenly—and an event that occurred within the driver’s peripheral vision—a pedestrian entering a zebra crossing from a footpath. Thirty-two licensed drivers drove the simulator in three phone conditions: baseline (no phone conversation), and while engaged in hands-free and handheld phone conversations. The drivers were aged between 21 and 26 years and split evenly by gender. Differences in reaction times to the event in a driver’s central vision were not statistically significant across phone conditions, probably due to a lower speed selection by the distracted drivers. In contrast, the reaction times to detect an event originating in a distracted driver’s peripheral vision were more than 50% longer compared to the baseline condition. A further statistical analysis revealed that the deterioration of reaction times to an event in the peripheral vision was greatest for distracted drivers holding a provisional licence. Many critical events originate in a driver’s periphery, including vehicles, bicyclists, and pedestrians emerging from side streets. A reduction in the ability to detect these events while distracted presents a significant safety concern that must be addressed.
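
The within-subjects comparison implied by the design can be sketched as a repeated-measures ANOVA; the file and column names are hypothetical, and the paper's own analysis may differ.

```python
# A sketch of the within-subjects comparison; file and column names are
# hypothetical, and this is not necessarily the paper's exact analysis.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rt = pd.read_csv('reaction_times.csv')   # driver, condition, event, rt
peripheral = rt[rt['event'] == 'pedestrian']

# Repeated-measures ANOVA with phone condition as the within factor;
# aggregate_func averages any repeated driver-condition measurements.
res = AnovaRM(peripheral, depvar='rt', subject='driver',
              within=['condition'], aggregate_func='mean').fit()
print(res.anova_table)
```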

Relevance:

100.00%

Publisher:

Abstract:

The use of mobile phones while driving is more prevalent among young drivers—a less experienced cohort with elevated crash risk. The objective of this study was to examine and better understand the reaction times of young drivers to a traffic event originating in their peripheral vision whilst engaged in a mobile phone conversation. The CARRS-Q Advanced Driving Simulator was used to test a sample of young drivers on various simulated driving tasks, including an event that originated within the driver’s peripheral vision, whereby a pedestrian enters a zebra crossing from a sidewalk. Thirty-two licensed drivers drove the simulator in three phone conditions: baseline (no phone conversation), hands-free and handheld. In addition to driving the simulator, each participant completed questionnaires related to driver demographics, driving history, usage of mobile phones while driving, and general mobile phone usage history. The participants were 21 to 26 years old and split evenly by gender. Drivers’ reaction times to a pedestrian in the zebra crossing were modelled using a parametric accelerated failure time (AFT) duration model with a Weibull distribution. Also tested were two different model specifications to account for the structured heterogeneity arising from the repeated-measures experimental design. The Weibull AFT model with gamma heterogeneity was found to be the best-fitting model and identified four significant variables influencing reaction times: phone condition, driver’s age, licence type (provisional licence holder or not), and self-reported frequency of usage of handheld phones while driving. The reaction times of drivers were more than 40% longer in the distracted condition compared to baseline (not distracted). Moreover, the impairment of reaction times due to mobile phone conversations was almost double for provisional compared to open licence holders. A reduction in the ability to detect traffic events in the periphery whilst distracted presents a significant and measurable safety concern that will undoubtedly persist unless mitigated.
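
The Weibull AFT specification can be sketched with the lifelines library; the column names are hypothetical, and lifelines fits the plain AFT model rather than the gamma-heterogeneity variant the paper selects.

```python
# A Weibull AFT sketch in lifelines; hypothetical columns, and without
# the gamma frailty term used in the paper's preferred specification.
import pandas as pd
from lifelines import WeibullAFTFitter

df = pd.read_csv('pedestrian_rt.csv')
# Columns: rt (s), observed (1 = reacted), handheld, handsfree, age,
# provisional, handheld_use_freq.
aft = WeibullAFTFitter()
aft.fit(df, duration_col='rt', event_col='observed')
aft.print_summary()   # positive coefficients lengthen reaction time
```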

Relevance:

100.00%

Publisher:

Abstract:

Singapore is a highly urbanized city-state where walking is an important mode of travel. Pedestrians account for about 25% of road fatalities every year, making them one of the most vulnerable road user groups in Singapore. Engineering measures like the provision of overhead pedestrian crossings and raised zebra crossings tend to address pedestrian safety in general, but there may be occasions where pedestrians are particularly vulnerable, so that targeted interventions are more appropriate. The objective of this study is to identify factors and situations that affect the injury severity of pedestrians involved in traffic crashes. Six years of crash data from 2003 to 2008, containing around four thousand pedestrian crashes on roadway segments, were analyzed. Injury severity of pedestrians—recorded as slight injury, major injury and fatal—was modeled as a function of roadway characteristics, traffic features, environmental factors and pedestrian demographics using an ordered probit model. Results suggest that the injury severity of pedestrians involved in crashes during night time is higher, indicating that pedestrian visibility at night is a key issue in pedestrian safety. The likelihood of fatal or serious injuries is higher for crashes on roads with high speed limits, in the center and median lanes of multi-lane roads, in school zones, on roads with two-way divided traffic, and when pedestrians cross the road. Elderly pedestrians appear to be involved in fatal and serious injury crashes more often when they attempt to cross the road without using nearby crossing facilities. Specific countermeasures are recommended based on the findings of this study.
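
The ordered probit specification maps directly onto statsmodels' OrderedModel; the column names below are hypothetical stand-ins for the Singapore crash records.

```python
# An ordered probit sketch with statsmodels; hypothetical column names.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

crashes = pd.read_csv('pedestrian_crashes.csv')
crashes['severity'] = pd.Categorical(
    crashes['severity'], categories=['slight', 'major', 'fatal'],
    ordered=True)                        # ordered outcome, as in the paper

X = crashes[['night', 'speed_limit', 'school_zone',
             'divided_road', 'crossing_road', 'elderly']]
res = OrderedModel(crashes['severity'], X, distr='probit').fit(method='bfgs')
print(res.summary())
```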

Relevance:

100.00%

Publisher:

Abstract:

Sustainability is a key driver for decisions in the management and future development of industries. The World Commission on Environment and Development (WCED, 1987) outlined imperatives which need to be met for environmental, economic and social sustainability. Development of strategies for measuring and improving sustainability in and across these domains, however, has been hindered by intense debate, with advocates of one approach fearing that efforts by advocates of another could have unintended adverse impacts. Studies attempting to compare the sustainability performance of countries and industries have also found ratings of performance quite variable, depending on the sustainability indices used. Quantifying and comparing the sustainability of industries across the triple bottom line of economy, environment and social impact continues to be problematic. Using the Australian dairy industry as a case study, a Sustainability Scorecard, developed as a Bayesian network model, is proposed as an adaptable tool to enable informed assessment, dialogue and negotiation of strategies at a global level, as well as being suitable for developing local solutions.