253 results for Visitor segment
Abstract:
This paper investigates stochastic analysis of transit segment hourly passenger load factor variation for transit capacity and quality of service (QoS) analysis, using Automatic Fare Collection data for a premium radial bus route in Brisbane, Australia. It compares stochastic analysis with traditional peak hour factor (PHF) analysis to gain further insight into the variability of transit route segments’ passenger loading during a study hour. It demonstrates that the hourly design load factor is a useful means of modeling a route segment’s capacity and QoS time history across the study weekday. The method is readily adapted to different passenger load standards by adjusting the design percentile, reflecting either a more relaxed or a more stringent condition. The paper also considers the hourly coefficient of variation of load factor as a capacity and QoS assessment measure, in particular through its relationships with the hourly average and design load factors. A smaller value reflects uniform passenger loading, which is generally indicative of well dispersed passenger boarding demands and good schedule maintenance. Conversely, a higher value may indicate pulsed or uneven passenger boarding demands, poor schedule maintenance, and/or bus bunching. An assessment table based on the hourly coefficient of variation of load factor is developed and applied to this case study, and inferences are drawn for a selection of study hours across the weekday studied.
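As a rough, hedged illustration of the hourly measures described above (not the paper's exact formulation), the sketch below computes the hourly mean load factor, a percentile-based design load factor, and the coefficient of variation from per-trip loads. The function name, example passenger counts, and 60-passenger capacity are all invented.

```python
import numpy as np

def hourly_load_factor_stats(loads, capacity, design_percentile=85):
    """Summarise one study hour of per-trip passenger loads on a segment.

    loads: per-trip passenger counts observed during the hour
    capacity: offered capacity per trip (e.g., seats, or seats + standees)
    Returns (mean LF, design LF, coefficient of variation of LF).
    """
    lf = np.asarray(loads, dtype=float) / capacity    # load factor per trip
    mean_lf = lf.mean()
    design_lf = np.percentile(lf, design_percentile)  # design-percentile LF
    cv_lf = lf.std(ddof=1) / mean_lf                  # low CV -> uniform loading
    return mean_lf, design_lf, cv_lf

# Hypothetical hour: 8 trips on one segment, 60-passenger capacity
mean_lf, design_lf, cv_lf = hourly_load_factor_stats(
    [42, 55, 61, 38, 70, 47, 52, 66], capacity=60)
print(f"mean={mean_lf:.2f} design={design_lf:.2f} cv={cv_lf:.2f}")
```

Raising `design_percentile` models a more stringent passenger load standard, mirroring the adjustability the abstract describes.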
Abstract:
A one-size-fits-all approach dominates alcohol programs in school settings (Botvin et al., 2007), which may limit program effectiveness (Snyder et al., 2004). Programs tailored to meet the needs and wants of adolescent groups may be more effective, yet limited attention has been directed towards employing a full segmentation process. Where segmentation has been examined, the focus has remained on socio-demographic characteristics and, more recently, psychographic variables (Mathijssen et al., 2012). The current study aimed to identify whether the addition of behaviour could be used to identify segments. Variables included attitudes towards binge drinking (α = 0.86), behavioral intentions (α = 0.97), perceived behavioral control (PBC), injunctive norms (α = 0.94), descriptive norms (α = 0.87), knowledge, and reported behaviour. Data were collected from five schools (n = 625; 32.96% girls). Two-Step cluster analysis of the sample produced a silhouette measure of cohesion and separation of 0.4. The intention measure and whether students reported previously consuming alcohol were the most distinguishing characteristics, with predictor importance scores of 1.0. A four-segment solution emerged. The first segment (“Male abstainers” – 37.2%) featured the highest knowledge score (M: 5.9) along with the lowest-risk drinking attitudes and intentions to drink excessively. Segment 2 (“At risk drinkers” – 11.2%) was characterised by high-risk attitudes and high-risk drinking intentions; its injunctive (M: 4.1) and descriptive norms (M: 4.9) may indicate a social environment where drinking is the norm. Segment 3 (“Female abstainers” – 25.9%) represents young girls who have the lowest-risk attitudes and low intentions to drink excessively. Members of the fourth and final segment (“Moderate drinkers” – 25.7%; 67.4% boys) all report having previously consumed alcohol, yet their attitudes and intentions towards excessive alcohol consumption are lower than those of the other segments. Segmentation focuses on identifying groups of individuals who share similar characteristics. The current study illustrates the importance of including reported behaviour, in addition to psychographic and demographic characteristics, to identify unique groups and inform intervention planning and design. Key messages: The principle of segmentation has received limited attention in the context of school-based alcohol education programs. This research identified four segments amongst 14-16 year old high school students, each of which can be targeted with a unique, tailored program to meet the needs and wants of the target audience.
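The study uses SPSS-style Two-Step clustering, which is not reproduced here. As a loose stand-in, the sketch below standardises a placeholder survey matrix and clusters it with k-means, reporting a silhouette score analogous to the cohesion-and-separation measure quoted above. All data and parameter choices are hypothetical, and k-means, unlike Two-Step, assumes continuous variables.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical survey matrix: rows = students; columns = attitude, intention,
# PBC, injunctive norms, descriptive norms, knowledge, reported behaviour.
rng = np.random.default_rng(0)
X = rng.normal(size=(625, 7))          # placeholder data, 625 respondents

X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_std)

# Silhouette plays the role of the reported cohesion-and-separation measure.
print("silhouette:", round(silhouette_score(X_std, labels), 2))
print("segment sizes:", np.bincount(labels))
```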
Abstract:
A number of online algorithms have been developed that have small additional loss (regret) compared to the best “shifting expert”. In this model, there is a set of experts and the comparator is the best partition of the trial sequence into a small number of segments, where the expert of smallest loss is chosen in each segment. The regret is typically defined for worst-case data/loss sequences. There has been a recent surge of interest in online algorithms that combine good worst-case guarantees with much improved performance on easy data. A practically relevant class of easy data is the case when the loss of each expert is iid and the best and second best experts have a gap between their mean losses. In the full information setting, the FlipFlop algorithm by De Rooij et al. (2014) combines the best of the iid optimal Follow-The-Leader (FL) and the worst-case-safe Hedge algorithms, whereas in the bandit information case SAO by Bubeck and Slivkins (2012) competes with the iid optimal UCB and the worst-case-safe EXP3. We ask the same questions for the shifting expert problem. First, what are simple and efficient algorithms for the shifting experts problem when the loss sequence in each segment is iid with respect to a fixed but unknown distribution? Second, how can the performance of such algorithms on easy data be efficiently united with worst-case robustness? A particularly intriguing open problem is the case when the comparator shifts within a small subset of experts from a large set, under the assumption that the losses in each segment are iid.
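For readers unfamiliar with the shifting-experts model, the classical algorithm that competes with the best segment-wise partition is Fixed-Share (Herbster and Warmuth, 1998). The minimal sketch below is background, not an algorithm proposed in this abstract; the learning rate and switching rate are free parameters.

```python
import numpy as np

def fixed_share(losses, eta, alpha):
    """Fixed-Share: Hedge with weight sharing, competing with shifting experts.

    losses: (T, K) array of per-trial expert losses in [0, 1]
    eta: learning rate; alpha: switching rate
    Returns the algorithm's per-trial mixture loss.
    """
    T, K = losses.shape
    w = np.full(K, 1.0 / K)
    alg_loss = np.empty(T)
    for t in range(T):
        alg_loss[t] = w @ losses[t]           # mixture loss this trial
        w = w * np.exp(-eta * losses[t])      # exponential (Hedge) update
        w /= w.sum()
        w = (1 - alpha) * w + alpha / K       # share weights to allow switches
    return alg_loss

# Toy sequence with one shift: expert 0 is best, then expert 3
rng = np.random.default_rng(0)
L = rng.uniform(size=(1000, 5))
L[:500, 0] *= 0.2
L[500:, 3] *= 0.2
print("total loss:", fixed_share(L, eta=0.5, alpha=0.01).sum())
```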
Abstract:
Linear assets are engineering infrastructure, such as pipelines, railway lines, and electricity cables, that span long distances and can be divided into different segments. Optimal management of such assets is critical for asset owners because they normally involve significant capital investment. Currently, Time Based Preventive Maintenance (TBPM) strategies are commonly used in industry to improve the reliability of such assets, as they are easier to implement than reliability- or risk-based preventive maintenance strategies. Linear assets are normally of large scale, so their preventive maintenance is costly. Their owners and maintainers are always seeking to optimize their TBPM outcomes by minimizing total expected costs over a long term involving multiple maintenance cycles; these costs include repair costs, preventive maintenance costs, and production losses. A TBPM strategy defines when Preventive Maintenance (PM) starts, how frequently PM is conducted, and which segments of a linear asset are operated on in each PM action. A number of factors, such as required minimal mission time, customer satisfaction, human resources, and acceptable risk levels, need to be considered when planning such a strategy. However, in current practice, TBPM decisions are often made based on decision makers’ expertise or historical industrial practice, and lack a systematic analysis of the effects of these factors. To address this issue, we investigate the characteristics of TBPM of linear assets and develop an effective multiple criteria decision making approach for determining an optimal TBPM strategy. We develop a recursive optimization equation that makes it possible to evaluate the effect of different maintenance options for linear assets, such as the best partitioning of the asset into segments and the maintenance cost per segment.
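The abstract does not state the recursive optimization equation itself. Purely as an illustration of optimizing the partitioning of a linear asset into maintenance segments, the sketch below solves a textbook interval dynamic program over a hypothetical per-segment cost function (a fixed crew setup cost plus a per-unit cost).

```python
from functools import lru_cache

def optimal_partition(n, seg_cost):
    """Best partition of units 1..n into contiguous maintenance segments.

    seg_cost(i, j) -> expected PM cost of maintaining units i..j together.
    Interval DP recursion: f(j) = min over i <= j of f(i-1) + seg_cost(i, j).
    Returns (minimum total cost, tuple of (start, end) segments).
    """
    @lru_cache(maxsize=None)
    def f(j):
        if j == 0:
            return 0.0, ()
        best = None
        for i in range(1, j + 1):
            prev_cost, prev_cuts = f(i - 1)
            total = prev_cost + seg_cost(i, j)
            if best is None or total < best[0]:
                best = (total, prev_cuts + ((i, j),))
        return best
    return f(n)

# Toy cost: fixed crew setup of 10 per segment plus 2 per unit maintained
cost, segments = optimal_partition(12, lambda i, j: 10 + 2 * (j - i + 1))
print(cost, segments)
```

In this toy cost model a single segment is optimal, since every extra segment adds another setup cost; richer cost functions (travel distance, risk, production losses) would change the optimal partition.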
Abstract:
Smart Card Automated Fare Collection (AFC) data have been extensively exploited to understand passenger behavior, passenger segments, and trip purpose, and to improve transit planning through spatial travel pattern analysis. The literature has evolved from simple to more sophisticated methods, for example from aggregated to individual travel pattern analysis, and from stop-to-stop to flexible stop aggregation. However, high computational complexity has limited the practical application of these methods. This paper proposes a new algorithm, the Weighted Stop Density Based Scanning Algorithm with Noise (WS-DBSCAN), based on the classical Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, to detect and update daily changes in travel patterns. WS-DBSCAN reduces DBSCAN's classical quadratic computational complexity to sub-quadratic complexity. A numerical experiment using real AFC data from South East Queensland, Australia, shows that the algorithm requires only 0.45% of the computation time of classical DBSCAN while providing the same clustering results.
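WS-DBSCAN itself is not publicly packaged, but the stop-weighting intuition can be approximated with scikit-learn's DBSCAN, whose sample_weight argument lets the boarding count at each stop contribute to density rather than treating every AFC record as a separate point. The coordinates, counts, and eps below are invented, and a real application would project coordinates to metres first.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical AFC records collapsed to unique boarding stops: coordinates
# plus a weight = number of boardings, so density reflects demand.
stops = np.array([[153.02, -27.47], [153.03, -27.47],
                  [153.10, -27.50], [153.11, -27.50], [153.40, -27.60]])
boardings = np.array([120, 80, 40, 35, 2])   # weight per stop

# eps in degrees here purely for illustration
db = DBSCAN(eps=0.02, min_samples=50).fit(stops, sample_weight=boardings)
print(db.labels_)   # -1 marks noise (the low-demand outlying stop)
```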
Abstract:
Recent changes in the aviation industry and in the expectations of travellers have begun to alter the way we approach our understanding, and thus the segmentation, of airport passengers. The key to successful segmentation of any population lies in the selection of the criteria on which the partitions are based. Increasingly, the basic criteria used to segment passengers (purpose of trip and frequency of travel) no longer provide adequate insights into the passenger experience. In this paper, we propose a new model for passenger segmentation based on the passenger's core value: time. The results are based on qualitative research conducted in situ at Brisbane International Terminal during 2012-2013. Based on our research, a relationship between time sensitivity and degree of passenger engagement was identified. This relationship was used as the basis for a new passenger segmentation model, namely: Airport Enthusiast (engaged, non-time-sensitive); Time Filler (non-engaged, non-time-sensitive); Efficiency Lover (non-engaged, time-sensitive); and Efficient Enthusiast (engaged, time-sensitive). The outcomes of this research extend theoretical knowledge about the passenger experience in the terminal environment. These new insights can ultimately be used to optimise the allocation of space for future terminal planning and design.
Abstract:
Map-matching algorithms that utilise road segment connectivity along with other data (i.e. position, speed, and heading) in the process of map-matching are normally suitable for high frequency (1 Hz or higher) positioning data from GPS. When such algorithms are applied to low frequency data (such as data from a fleet of private cars, buses, or light duty vehicles, or from smartphones), their performance falls to around 70% correct link identification, especially in urban and suburban road networks. This level of performance may be insufficient for some real-time Intelligent Transport System (ITS) applications and services, such as estimating link travel time and speed from low frequency GPS data. Therefore, this paper develops a new weight-based shortest path and vehicle trajectory aided map-matching (stMM) algorithm that enhances the map-matching of low frequency positioning data on a road map. The well-known A* search algorithm is employed to derive the shortest path between two points while taking into account both link connectivity and turn restrictions at junctions. In the developed stMM algorithm, two additional weights related to the shortest path and the vehicle trajectory are considered: one shortest-path-based weight relates the distance along the shortest path to the distance along the vehicle trajectory, while the other is associated with the heading difference of the vehicle trajectory. The developed stMM algorithm is tested on a series of real-world datasets of varying frequencies (i.e. 1 s, 5 s, 30 s, and 60 s sampling intervals). A high-accuracy integrated navigation system (a high-grade inertial navigation system and a carrier-phase GPS receiver) is used to measure the accuracy of the developed algorithm. The results suggest that the algorithm identifies 98.9% of links correctly for 30 s GPS data. Omitting the shortest path and vehicle trajectory information reduces accuracy to about 73% correct link identification. The algorithm can process on average 50 positioning fixes per second, making it suitable for real-time ITS applications and services.
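As a hedged sketch of the two additional weights described (not the paper's exact scoring), the snippet below uses networkx's A* search to obtain the shortest path between two candidate points, then forms a length-ratio weight and a heading-difference weight. The graph, coordinates, and headings are invented.

```python
import math
import networkx as nx

def stmm_weights(G, pos, u, v, traj_len, traj_heading, link_heading):
    """Illustrative shortest-path and heading weights in the spirit of stMM.

    G: road graph with 'length' on edges; pos: node -> (x, y) coordinates.
    Weight 1 compares the shortest-path length between successive fixes
    with the vehicle-trajectory length; weight 2 penalises heading gaps.
    """
    def h(a, b):  # admissible A* heuristic: straight-line distance
        (x1, y1), (x2, y2) = pos[a], pos[b]
        return math.hypot(x2 - x1, y2 - y1)

    path = nx.astar_path(G, u, v, heuristic=h, weight="length")
    sp_len = nx.path_weight(G, path, weight="length")
    w_path = min(traj_len, sp_len) / max(traj_len, sp_len)   # 1 = perfect match
    w_heading = math.cos(math.radians(traj_heading - link_heading))
    return w_path, w_heading

# Tiny invented road graph
G = nx.Graph()
G.add_edge("A", "B", length=100.0)
G.add_edge("B", "C", length=120.0)
pos = {"A": (0, 0), "B": (100, 0), "C": (220, 0)}
print(stmm_weights(G, pos, "A", "C", traj_len=230.0,
                   traj_heading=92.0, link_heading=90.0))
```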
Abstract:
Objective: This study seeks to establish whether meaningful subgroups exist within a 14-16 year old adolescent population and whether these segments respond differently to the Game On: Know Alcohol (GOKA) intervention, a school-based alcohol social marketing program. Methodology: This study is part of a larger cluster randomized controlled evaluation of the GOKA program implemented in 14 schools in 2013/2014. TwoStep cluster analysis was conducted to segment 2114 high school adolescents (14-16 years old) on the basis of 22 demographic, behavioral, and psychographic variables. Program effects on knowledge, attitudes, behavioral intentions, social norms, expectancies, and refusal self-efficacy of the identified segments were subsequently examined. Results: Three segments were identified: (1) Abstainers, (2) Bingers, and (3) Moderate Drinkers. Program effects varied significantly across segments. The strongest positive change effects post-participation were observed for Bingers, while mixed effects were evident for Moderate Drinkers and Abstainers. Conclusions: These findings provide preliminary empirical evidence supporting the application of social marketing segmentation in alcohol education programs. Development of targeted programs that meet the unique needs of each of the three identified segments is indicated to extend the social marketing footprint in alcohol education.
Abstract:
Progression of spinal deformity in children was studied with Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) to identify how gravity affects the deformity and to determine its full three-dimensional character. The CT study showed that gravity contributes significantly to deformity progression in some patients, which has implications for clinical patient management. The world-first MRI study showed that the standard clinical measure used to define the extent of the deformity is inadequate, and that further use of three-dimensional MRI should be considered by spinal surgeons.
Abstract:
This thesis evaluates the recent work of the Organisation for Economic Co-operation and Development and civil society groups in creating requirements for multinational entities to disclose financial information on a Country-by-Country basis. Country-by-Country reports may identify profit-shifting activities and enable various stakeholders to hold multinational entities accountable for their global conduct through the provision of transparent and decision-useful information. This thesis identifies inadequacies in current disclosure requirements and develops a standardised Country-by-Country model, which is applied to the disclosures of three multinational entities to illustrate its pragmatic feasibility and the improvement in the quality of financial information available to users.
Abstract:
We present a clustering-only approach to speaker diarization that eliminates the need for the commonly employed and computationally expensive Viterbi segmentation and realignment stage. We use multiple linear segmentations of a recording and carry out complete-linkage clustering within each segmentation scenario to obtain a set of clustering decisions for each case. We then collect all clustering decisions, across all cases, to compute a pairwise vote between the segments, and conduct complete-linkage clustering to cluster them at a resolution equal to the minimum segment length used in the linear segmentations. We use the proposed cluster-voting approach to carry out speaker diarization and linking across the SAIVT-BNEWS corpus of Australian broadcast news data. We compare our technique to an equivalent baseline system with Viterbi realignment and show that our approach can outperform it with respect to both diarization error rate (DER) and attribution error rate (AER).
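A minimal sketch of the cluster-voting idea, under the assumption (not stated in the abstract) that co-clustering frequency is converted to a distance for the final complete-linkage step: each segmentation scenario labels the same minimum-length segments, votes count how often two segments share a cluster, and SciPy's complete linkage groups them.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def vote_and_cluster(clusterings, n_segments, n_clusters):
    """Fuse several clusterings of the same minimum-length segments.

    clusterings: list of label arrays, one per linear segmentation scenario,
    each assigning every minimum-length segment to some speaker cluster.
    """
    votes = np.zeros((n_segments, n_segments))
    for labels in clusterings:
        votes += labels[:, None] == labels[None, :]   # co-cluster votes
    dist = 1.0 - votes / len(clusterings)             # more votes -> closer
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="complete")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Three invented segmentation scenarios over 6 minimum-length segments
scenarios = [np.array([0, 0, 1, 1, 2, 2]),
             np.array([0, 0, 1, 2, 2, 2]),
             np.array([0, 1, 1, 1, 2, 2])]
print(vote_and_cluster(scenarios, n_segments=6, n_clusters=3))
```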
Abstract:
We propose a novel technique for conducting robust voice activity detection (VAD) in high-noise recordings. We use Gaussian mixture modeling (GMM) to train two generic models: speech and non-speech. We then score smaller segments of a given (unseen) recording against each of these GMMs to obtain two respective likelihood scores for each segment. These scores are used to compute a dissimilarity measure between pairs of segments and to carry out complete-linkage clustering of the segments into speech and non-speech clusters. We compare the accuracy of our method against state-of-the-art and standardised VAD techniques and demonstrate an absolute improvement of 15% in half-total error rate (HTER) over the best performing baseline system across the QUT-NOISE-TIMIT database. We then apply our approach to the Audio-Visual Database of American English (AVDBAE) to demonstrate the performance of our algorithm using visual, audio-visual, or a proposed fusion of these features.
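A hedged sketch of the scoring stage: two generic GMMs, fit here on placeholder features rather than real speech data, score each segment, and pairs of segments are compared through those scores. The Euclidean gap used as the dissimilarity is an assumption; the abstract does not specify the measure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-ins for the two generic models, trained offline in practice on
# speech / non-speech features (e.g., MFCC frames); placeholder data here.
rng = np.random.default_rng(0)
speech_gmm = GaussianMixture(8, random_state=0).fit(rng.normal(1, 1, (500, 13)))
noise_gmm = GaussianMixture(8, random_state=0).fit(rng.normal(-1, 1, (500, 13)))

def segment_scores(segment_frames):
    """Per-segment mean log-likelihood under each generic model."""
    return (speech_gmm.score(segment_frames),
            noise_gmm.score(segment_frames))

def dissimilarity(seg_a, seg_b):
    """Assumed measure: Euclidean gap between the two segments' score pairs,
    feeding the complete-linkage clustering into speech/non-speech groups."""
    a = np.array(segment_scores(seg_a))
    b = np.array(segment_scores(seg_b))
    return float(np.linalg.norm(a - b))

print(dissimilarity(rng.normal(1, 1, (50, 13)), rng.normal(-1, 1, (50, 13))))
```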
Abstract:
In the experience economy, the role of art museums has evolved so as to cater to global cultural tourists. These institutions were traditionally dedicated to didactic functions, and served cognoscenti with elite cultural tastes that were aligned with the avant-garde’s autonomous stance towards mass culture. In a post-avant-garde era, however, museums have focused on appealing to a broad clientele that often has little or no knowledge of historical or contemporary art. Many of these tourists want art to provide entertaining and novel experiences, rather than pedagogical ‘training’. In response, art museums are turning into ‘experience venues’, informed by ideas associated with new museology as well as business approaches like Customer Experience Management. This has led to the provision of populist entertainment modes, such as blockbuster exhibitions, participatory art events, jazz nights, and wine tasting, and reveals that such museums recognize that today’s cultural tourist is part of an increasingly diverse and populous demographic, one that spans many languages and value systems. As art museums have shifted attention to global tourists, they have come to play a greater role in gentrification projects and cultural precincts. The art museum now seems ideally suited to tourist-centric environments that offer a variety of immersive sensory experiences and combine museums (often designed by star architects), international hotels, restaurants, high-end shopping zones, and other leisure forums. These include sites such as the Porto Maravilha urban waterfront development in Rio de Janeiro, the Museum of Old and New Art in Hobart, and the Chateau La Coste winery and hotel complex in Provence. It can be argued that in a global experience economy, art museums have become experience centres in experience-scapes. This paper will examine the nature of the tourist experience in relation to the new art museum, and the latter’s increasingly important role in attracting tourists to urban and regional cultural precincts.
Abstract:
The aim of this study was to develop a new method for quantifying intersegmental motion of the spine in an instrumented L4–L5 motion segment model, using ultrasound image post-processing combined with an electromagnetic device. A prospective test–retest design was employed, combined with an evaluation of stability and of within- and between-day intra-tester reliability during forward bending by 15 healthy male participants. The accuracy of the measurement system using the model was calculated to be ±0.9° (standard deviation = 0.43) over a 40° range and ±0.4 cm (standard deviation = 0.28) over 1.5 cm. The mean composite range of forward bending was 15.5 ± 2.04° during a single trial (standard error of the mean = 0.54, coefficient of variation = 4.18). Reliability (intra-class correlation coefficient, ICC(2,1)) was found to be excellent for both within-day measures (0.995–0.999) and between-day measures (0.996–0.999). Further work is necessary to explore the use of this approach in the evaluation of biomechanics, clinical assessments, and interventions.
Labeling white matter tracts in HARDI by fusing multiple tract atlases with applications to genetics
Abstract:
Accurate identification of white matter structures and segmentation of fibers into tracts are important in neuroimaging and have many potential applications. Even so, this is not trivial, because whole brain tractography generates hundreds of thousands of streamlines that include many false positive fibers. We developed and tested an automatic tract labeling algorithm to segment anatomically meaningful tracts from diffusion weighted images. Our multi-atlas method incorporates information from multiple hand-labeled fiber tract atlases. In validations, we showed that the method outperformed standard ROI-based labeling using a deformable, parcellated atlas. Finally, we show a high-throughput application of the method to genetic population studies, using the sub-voxel diffusion information from fibers in the clustered tracts, based on 105-gradient HARDI scans of 86 young normal twins. The whole workflow shows promise for larger population studies in the future.
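The paper's multi-atlas fusion is not spelled out in the abstract; the simplest form of the idea is a per-fiber majority vote across registered, hand-labeled atlases, sketched below with an invented label array (0 denoting an unlabeled/false-positive fiber).

```python
import numpy as np

def fuse_tract_labels(atlas_votes):
    """Majority-vote fusion of per-fiber tract labels from several atlases.

    atlas_votes: (n_atlases, n_fibers) integer label array, one row per
    hand-labeled atlas after registration. Returns the modal label per fiber.
    """
    n_labels = atlas_votes.max() + 1
    counts = np.apply_along_axis(np.bincount, 0, atlas_votes,
                                 minlength=n_labels)  # (n_labels, n_fibers)
    return counts.argmax(axis=0)

votes = np.array([[1, 1, 2, 0],    # atlas A
                  [1, 2, 2, 0],    # atlas B
                  [1, 1, 2, 3]])   # atlas C
print(fuse_tract_labels(votes))    # -> [1 1 2 0]
```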