957 results for "segment QT"
Abstract:
Transit passenger market segmentation enables transit operators to target different classes of transit users with customized information and services. Smart Card (SC) data, collected by Automated Fare Collection systems, capture the multiday travel regularity of transit passengers and can be used to segment them into identifiable classes with similar behaviours and needs. However, the use of SC data for market segmentation has attracted very limited attention in the literature. This paper proposes a novel methodology for mining spatial and temporal travel regularity from each individual passenger's historical SC transactions and segmenting passengers into four classes of transit users. After reconstructing travel itineraries from historical SC transactions, the paper adopts the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm to mine the travel regularity of each SC user. This travel regularity is then used to segment SC users through an a priori market segmentation approach. The proposed methodology assists transit operators in understanding their passengers and providing them with targeted information and services.
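As a minimal sketch of the clustering step described above, DBSCAN can be applied to one card's historical boarding locations to recover habitual travel places; points that fall in no dense region are treated as noise. The coordinates, `eps` and `min_samples` values below are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: cluster a passenger's boarding locations with DBSCAN to
# find habitual travel places. All values are invented for illustration.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical boarding coordinates (x, y in km) from one card's transactions.
boardings = np.array([
    [0.0, 0.0], [0.1, 0.05], [0.05, 0.1],   # a frequently used stop
    [5.0, 5.0], [5.1, 4.95], [4.9, 5.05],   # another frequently used stop
    [12.0, 1.0],                             # a one-off trip (noise)
])

# eps: neighbourhood radius; min_samples: density threshold for a core point.
labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(boardings)
n_regular_places = len(set(labels) - {-1})   # -1 marks noise points
print(labels, n_regular_places)
```

The number of recovered clusters, together with their visiting times, is the kind of "travel regularity" feature that can then feed an a priori segmentation rule.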
Abstract:
Due to the demand for better and deeper analysis in sports, organizations (both professional teams and broadcasters) are looking to use spatiotemporal data, in the form of player tracking information, to gain an advantage over their competitors. However, due to the large volume of data, its unstructured nature, and the lack of associated team activity labels (e.g. strategic/tactical), effective and efficient strategies for dealing with such data have yet to be deployed. A bottleneck restricting such solutions is the lack of a suitable representation (i.e. ordering of players) that is immune to the potentially infinite number of permutations of player orderings, in addition to the high dimensionality of the temporal signal (e.g. a game of soccer lasts for 90 minutes). We leverage a recent method that utilizes a "role representation", together with a feature reduction strategy based on a spatiotemporal bilinear basis model, to form a compact spatiotemporal representation. Using this representation, we find the most likely formation patterns of a team associated with match events across nearly 14 hours of continuous player and ball tracking data in soccer. Additionally, we show that we can accurately segment a match into distinct game phases and detect highlights (i.e. shots, corners, free-kicks, etc.) completely automatically using a decision-tree formulation.
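The decision-tree idea can be sketched in a few lines: a classifier separates highlight snippets from open play using simple features. The features, units and labels below are synthetic stand-ins for the paper's compact spatiotemporal representation, not its actual inputs.

```python
# Hedged sketch: decision tree separating "highlight" from "open play"
# snippets. Features and labels are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# [distance_to_goal, ball_speed, players_in_box] per snippet (invented units)
X = [[5, 20, 6], [8, 25, 5], [60, 10, 1], [55, 8, 0], [7, 22, 7], [50, 12, 1]]
y = [1, 1, 0, 0, 1, 0]     # 1 = highlight (e.g. shot/corner), 0 = open play

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[6, 24, 6]]))   # near goal, fast ball
```

On real tracking data the inputs would be learned role-based features rather than hand-picked counts, but the tree structure plays the same discriminative role.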
Abstract:
At the highest level of competitive sport, nearly all performances of athletes (both training and competitive) are chronicled using video. This video is then often viewed by expert coaches/analysts, who manually label important performance indicators to gauge performance. Stroke rate and pacing are important performance measures in swimming, and these have previously been digitised manually. This is problematic, as annotating large volumes of video can be costly and time-consuming. Further, since it is difficult to accurately estimate the position of the swimmer at each frame, measures such as stroke rate are generally aggregated over an entire swimming lap. Vision-based techniques that can automatically, objectively and reliably track the swimmer and their location can potentially solve these issues and allow for large-scale analysis of a swimmer across many videos. However, the aquatic environment is challenging due to fluctuations in the scene caused by splashes and reflections, and because swimmers are frequently submerged at different points in a race. In this paper, we temporally segment races into distinct, sequential states and propose a multimodal approach that employs individual detectors tuned to each race state. Our approach allows the swimmer to be located and tracked smoothly in each frame despite a diverse range of constraints. We test our approach on a video dataset compiled at the 2012 Australian Short Course Swimming Championships.
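The dispatch structure behind "individual detectors tuned to each race state" can be sketched as a simple lookup from race state to detector. The state names and detector stubs below are illustrative assumptions; the paper's actual detectors are vision-based models.

```python
# Hedged sketch: route each frame to a detector tuned to the current race
# state. Stub functions stand in for real vision models.
def detect_swimmer(frame, state):
    """Dispatch to a state-specific detector (stubs, for illustration)."""
    detectors = {
        "start":      lambda f: ("blocks_detector", f),
        "underwater": lambda f: ("submerged_detector", f),
        "swimming":   lambda f: ("surface_stroke_detector", f),
        "turn":       lambda f: ("wall_turn_detector", f),
        "finish":     lambda f: ("touch_detector", f),
    }
    return detectors[state](frame)

print(detect_swimmer("frame_0421", "underwater")[0])
```

Because the temporal segmentation fixes the state sequence in advance, each frame only ever invokes the one detector suited to its conditions.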
Abstract:
The fastest-growing segment of jobs in the creative sector is in those firms that provide creative services to other sectors (Hearn, Goldsmith, Bridgstock and Rodgers 2014, this volume; Cunningham 2014, this volume). There are also a large number of Creative Services workers (Architecture and Design, Advertising and Marketing, Software and Digital Content occupations) embedded in organizations in other industry sectors (Cunningham and Higgs 2009). Ben Goldsmith (2014, this volume) shows, for example, that the Financial Services sector is the largest employer of digital creative talent in Australia. But why should this be? We argue it is because 'knowledge-based intangibles are increasingly the source of value creation and hence of sustainable competitive advantage' (Mudambi 2008, 186). This value creation occurs primarily at the research and development (R and D) and marketing ends of the supply chain. Both of these areas require strong creative capabilities in order to design for, and to persuade, consumers. It is no surprise that Jess Rodgers (2014, this volume), in a study of Australia's Manufacturing sector, found designers and advertising and marketing occupations to be the most numerous creative occupations. Greg Hearn and Ruth Bridgstock (2013, forthcoming) suggest 'the creative heart of the creative economy […] is the social and organisational routines that manage the generation of cultural novelty, both tacit and codified, internal and external, and [cultural novelty's] combination with other knowledges […] produce and capture value'. Moreover, the main 'social and organisational routine' is usually a team (for example, Grabher 2002; 2004).
Abstract:
A new type of social network built around community and communication, online dating, is gaining momentum. With many people joining the dating network, users become overwhelmed by choices for an ideal partner. A solution to this problem is providing users with partner recommendations based on their interests and activities. Traditional recommendation methods ignore users' individual needs and provide recommendations to all users in the same way. In this paper, we propose a recommendation approach that applies different recommendation strategies to different groups of members. A segmentation method using the Gaussian Mixture Model (GMM) is proposed to capture users' needs. A targeted recommendation strategy is then applied to each identified segment. Empirical results show that the proposed approach outperforms several existing recommendation methods.
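The GMM segmentation step can be sketched as fitting a mixture model to user profiles and reading off the component assignments as segments. The two-feature synthetic profiles below are invented stand-ins for the paper's interest/activity features.

```python
# Hedged sketch: segment users with a Gaussian Mixture Model before applying
# per-segment recommendation strategies. Data are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two synthetic user groups: low-activity and high-activity members.
casual = rng.normal(loc=[1.0, 1.0], scale=0.2, size=(50, 2))
active = rng.normal(loc=[5.0, 5.0], scale=0.2, size=(50, 2))
users = np.vstack([casual, active])

gmm = GaussianMixture(n_components=2, random_state=0).fit(users)
segments = gmm.predict(users)
# Each group should receive a single, consistent segment label.
print(len(set(segments[:50])), len(set(segments[50:])))
```

Each recovered segment can then be routed to its own recommendation strategy, which is the targeting idea the abstract describes.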
Abstract:
This paper focuses on Australian development firms in the console and mobile games industry in order to understand how small firms in a geographically remote and marginal position in the global industry are able to relate to global firms and capture revenue share. This paper shows that, while technological change in the games industry has resulted in the emergence of new industry segments based on transactional rather than relational forms of economic coordination, in which we might therefore expect less asymmetrical power relations, lead firms retain a position of power in the global games entertainment industry relative to remote developers. This has been possible because lead firms in the emerging mobile devices market have developed and sustained bottlenecks in their segment of the industry through platform competition and the development of an intensely competitive ecosystem of developers. Our research shows the critical role of platform competition and bottlenecks in influencing power asymmetries within global markets.
Abstract:
Recent modelling of socio-economic costs by the Australian railway industry in 2010 estimated the cost of level crossing accidents to exceed AU$116 million annually. To better understand the causal factors that contribute to these accidents, the Cooperative Research Centre for Rail Innovation is running a project entitled Baseline Level Crossing Video. The project aims to improve the recording of level crossing safety data by developing an intelligent system capable of detecting near-miss incidents and capturing quantitative data around these incidents. To detect near-miss events at railway level crossings, a video analytics module is being developed to analyse video footage obtained from forward-facing cameras installed on trains. This paper presents a vision-based approach for the detection of these near-miss events. The video analytics module comprises object detectors and a rail detection algorithm, allowing the distance between a detected object and the rail to be determined. An existing publicly available Histograms of Oriented Gradients (HOG) based object detector is used to detect various types of vehicles in each video frame. As vehicles are usually seen from a sideways view from the cabin's perspective, the results of the vehicle detector are verified using an algorithm that detects the wheels of each detected vehicle. Rail detection is facilitated using a projective transformation of the video, such that the forward-facing view becomes a bird's eye view. A Line Segment Detector is employed as the feature extractor, and a sliding window approach is developed to track a pair of rails. Localisation of the vehicles is achieved by projecting the results of the vehicle and rail detectors onto the ground plane, allowing the distance between the vehicle and rail to be calculated. The resultant vehicle positions and distances are logged to a database for further analysis.
We present preliminary results regarding the performance of a prototype video analytics module on a dataset of videos containing more than 30 different railway level crossings. The video data were captured from the journey of a train that passed through these level crossings.
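The localisation step in the abstract above can be sketched numerically: a homography maps image points onto the ground plane, and the vehicle-to-rail distance is measured there. The homography matrix and point coordinates below are invented for illustration; a real system would calibrate the transform from the camera geometry.

```python
# Hedged sketch: project image points to the ground plane with a homography,
# then measure vehicle-to-rail distance. All numbers are illustrative.
import numpy as np

def project_to_ground(H, pt):
    """Apply homography H to an image point (u, v) -> ground-plane (x, y)."""
    u, v = pt
    x, y, w = H @ np.array([u, v, 1.0])
    return np.array([x / w, y / w])

# Identity-like homography with a mild perspective term (assumed).
H = np.array([[1.0, 0.0,   0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.001, 1.0]])

vehicle_img = (40.0, 100.0)   # detected vehicle base point in the image
rail_img = (30.0, 100.0)      # nearest rail point in the same image row

vehicle_g = project_to_ground(H, vehicle_img)
rail_g = project_to_ground(H, rail_img)
print(round(float(np.linalg.norm(vehicle_g - rail_g)), 3))
```

The same projection applied to the whole frame is what turns the forward-facing view into the bird's eye view used for rail tracking.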
Abstract:
For the past decade, an attempt has been made by many research groups to define the roles of the growing number of Bcl-2 gene family proteins in the apoptotic process. The Bcl-2 family consists of pro-apoptotic (or cell death) and anti-apoptotic (or cell survival) genes, and it is the balance in expression between these gene lineages that may determine the death or survival of a cell. The majority of studies have analysed the roles of the Bcl-2 genes in cancer development. Equally important is their role in normal tissue development, homeostasis and non-cancer disease states. Bcl-2 is crucial for normal development in the kidney, with a deficiency in Bcl-2 producing malformations so severe that renal failure and death result. As a corollary, its role in renal disease states in the adult has been sought. Ischaemia is one of the most common causes of both acute and chronic renal failure. The section of the kidney that is most susceptible to ischaemic damage is the outer zone of the outer medulla. Within this zone the proximal tubules are most sensitive and often die by necrosis or desquamate. In the distal nephron, apoptosis is the more common form of cell death. Recent results from our laboratory have indicated that ischaemia-induced acute renal failure is associated with up-regulation of two anti-apoptotic Bcl-2 proteins (Bcl-2 and Bcl-XL) in the damaged distal tubule and occasional up-regulation of Bax in the proximal tubule. The distal tubule is a known reservoir for several growth factors important to renal growth and repair, such as insulin-like growth factor-1 (IGF-1) and epidermal growth factor (EGF). One likely possibility for the anti-cell death action of the Bcl-2 genes is that the protected distal cells may be able to produce growth factors that have a further reparative or protective role, via an autocrine mechanism in the distal segment and a paracrine mechanism acting on the proximal cells.
Both EGF and IGF-1 are also up-regulated in the surviving distal tubules and are detected in the surviving proximal tubules, where these growth factors are not usually synthesized. As a result, we have been using in vitro methods to test: (i) the relative sensitivities of renal distal and proximal epithelial cell populations to injury caused by mechanisms known to act in ischaemia-reperfusion; (ii) whether a Bcl-2 anti-apoptotic mechanism acts in these cells; and (iii) whether an autocrine and/or paracrine growth factor mechanism is initiated. The following review discusses the background to these studies as well as some of our preliminary results.
Abstract:
Hot spot identification (HSID) aims to identify potential sites—roadway segments, intersections, crosswalks, interchanges, ramps, etc.—with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology might result in either identifying a safe site as high risk (false positive) or a high risk site as safe (false negative), and consequently lead to the misuse of available public funds, poor investment decisions, and inefficient risk management practice. Current HSID methods suffer from issues such as the underreporting of minor injury and property damage only (PDO) crashes, the challenge of incorporating crash severity into the methodology, and the selection of a proper safety performance function to model crash data that are often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes a combination of a PDO equivalency calculation and a quantile regression technique to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst the concerns related to the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than the population mean as most methods in practice do; this more closely corresponds with how black spots are identified in practice. The proposed methodology is illustrated using rural road segment data from Korea and compared against the traditional empirical Bayes (EB) method with negative binomial regression.
Application of a quantile regression model to equivalent PDO crashes enables identification of a set of high-risk sites that reflect the true safety costs to society, simultaneously reduces the influence of under-reported PDO and minor injury crashes, and overcomes the limitation of the traditional NB model in dealing with a preponderance of zeros or a right-skewed dataset.
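A deliberately simplified sketch of the quantile idea: flag a segment as a hot spot when its equivalent-PDO crash count exceeds a high quantile of comparable segments. The paper fits a full quantile regression with covariates; this grouping-by-volume-class version, with invented data, only illustrates why an upper quantile rather than the mean is the natural threshold.

```python
# Hedged, simplified sketch of quantile-based hot-spot flagging.
# Segments and equivalent-PDO counts are invented for illustration.
import numpy as np

# Hypothetical segments: (traffic volume class, equivalent PDO crashes)
volume_class = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
epdo = np.array([2.0, 3.0, 2.5, 2.0, 15.0, 8.0, 9.0, 7.5, 30.0, 8.5])

hot = np.zeros(len(epdo), dtype=bool)
for c in np.unique(volume_class):
    mask = volume_class == c
    threshold = np.quantile(epdo[mask], 0.9)   # 90th percentile per class
    hot |= mask & (epdo > threshold)

print(np.flatnonzero(hot))   # indices of flagged segments
```

Replacing the per-class quantile with a regression of the 0.9 quantile on covariates recovers the paper's approach, while keeping the same "above the conditional quantile" flagging rule.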
Abstract:
The older adult population (65 years and over) represents a rapidly growing segment of the population in many developed countries. Unlike earlier cohorts of older drivers, which included many who were familiar with public transportation, the present cohort of older drivers has historically relied more heavily on the private automobile as their main form of transportation. Recent studies of older adults' travel patterns report that the automobile accounts for over 80% of the total number of hours spent on all trips. While older drivers as a group do not demonstrate a particular road risk, the evident demographic change and the increased physical fragility and severity of crash-related injuries make older driver safety a pressing public health issue. This study systematically reviewed the safety and mobility outcomes of existing strategies used internationally to manage older driver safety, with a specific focus on age-based testing (ABT), license restriction and self-regulation (i.e. voluntarily limiting driving in potentially hazardous situations). ABT remains the strategy most commonly adopted by licensing authorities both within Australia and internationally. Heterogeneity in the development of functional declines, and in driving behaviours within the older driver population, makes age an unreliable index of driving capacity. Given the counter-productive safety and mobility outcomes of ABT strategies, their continued popularity within both the legislative and public domains remains problematic. Self-regulation may offer greater potential for reducing older drivers' crash risk while maintaining their mobility and independence. The current body of literature on older drivers' self-regulation is systematically reviewed. Despite being promoted by researchers and licensing authorities as a strategy for maintaining older driver safety and mobility, the proportion of older drivers who self-regulate, and exactly how they do so, remains unclear.
Future research on older drivers' adoption of self-regulation, particularly the psychological factors that underlie this process, is needed in order to promote its use within the older driver community.
Abstract:
Currently a range of national policy settings is reshaping schooling and teacher education in Australia. This paper presents some of the findings from a small qualitative pilot study conducted with a group of final-year pre-service teachers studying a secondary social science curriculum method unit at an Australian university. One of the study's research objectives was to identify how students reflected on their capacity to navigate curriculum change and, more specifically, on teaching about Australia and Asia in the forthcoming implementation of the first national history curriculum. The unit was designed and taught by the researcher on the assumption that beginning social science teachers need to be empowered to deal with the curriculum change they will encounter throughout their careers. The pilot study's methodology was informed by a constructivist approach to grounded theory, and its scope was limited to one semester with volunteer students. Of the pre-service teacher reflections on their preparedness to teach, this paper reports on the content, pedagogy and learning experienced in one segment of the unit, with specific reference to the new history curriculum's 'Australia in a world history' approach and the development of Asia literacy. The findings indicate that whilst pre-service teachers valued the opportunity to engage with learning experiences which enhanced their intercultural understanding and extended their pedagogical and content knowledge on campus, the nature of the final practicum in schools was also influential in shaping their preparedness to enter the profession.
Abstract:
For clinical use, in electrocardiogram (ECG) signal analysis it is important to detect not only the centre of the P wave, the QRS complex and the T wave, but also time intervals such as the ST segment. Much research has focused entirely on QRS complex detection, via methods such as wavelet transforms, spline fitting and neural networks. However, drawbacks include the false classification of a severe noise spike as a QRS complex, possibly requiring manual editing, and the omission of information contained in other regions of the ECG signal. While some attempts have been made to develop algorithms that detect additional signal characteristics, such as P and T waves, the reported success rates vary from person to person and from beat to beat. To address this variability we propose the use of Markov chain Monte Carlo statistical modelling to extract the key features of an ECG signal, and we report on a feasibility study investigating the utility of the approach. The modelling approach is examined with reference to a realistic computer-generated ECG signal in which details such as wave morphology and noise levels are variable.
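A minimal sketch of the MCMC idea: a random-walk Metropolis sampler infers the centre of a single Gaussian-shaped wave (a crude stand-in for, say, a T wave) in a noisy synthetic trace. The model, priors and noise level are invented assumptions; the paper's model would cover the full set of ECG features.

```python
# Hedged sketch: Metropolis sampling of one wave-centre parameter from a
# noisy synthetic signal. All model choices are illustrative.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
true_centre = 0.6
signal = np.exp(-((t - true_centre) ** 2) / (2 * 0.05 ** 2))
observed = signal + rng.normal(0.0, 0.05, t.size)

def log_post(c):
    """Log-posterior for the wave centre (flat prior on [0, 1])."""
    if not 0.0 <= c <= 1.0:
        return -np.inf
    model = np.exp(-((t - c) ** 2) / (2 * 0.05 ** 2))
    return -np.sum((observed - model) ** 2) / (2 * 0.05 ** 2)

c, lp = 0.5, log_post(0.5)
samples = []
for _ in range(3000):
    prop = c + rng.normal(0.0, 0.02)         # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
        c, lp = prop, lp_prop
    samples.append(c)

print(round(float(np.mean(samples[1000:])), 2))
```

The posterior samples, rather than a single point estimate, are what give this approach its robustness to beat-to-beat variability: uncertainty in each feature location is carried through explicitly.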
Abstract:
The use of dual growing rods is a fusionless surgical approach to the treatment of early onset scoliosis (EOS) which aims to harness potential growth in order to correct spinal deformity. This study compared, through in-vitro experiments, the biomechanical response of two different rod designs under axial rotation loading. The study showed that a new telescoping growing rod design preserved the rotational flexibility of the spine in comparison with rigid rods, indicating it to be a more physiological way to correct the spinal deformity.
Abstract:
Introduction Given the known challenges of obtaining accurate measurements of small radiation fields, and the increasing use of small field segments in IMRT beams, this study examined the possible effects of referencing inaccurate field output factors in the planning of IMRT treatments. Methods This study used the Brainlab iPlan treatment planning system to devise IMRT treatment plans for delivery using the Brainlab m3 microMLC (Brainlab, Feldkirchen, Germany). Four pairs of sample IMRT treatments were planned using volumes, beams and prescriptions that were based on a set of test plans described in AAPM TG 119's recommendations for the commissioning of IMRT treatment planning systems [1]: • C1, a set of three 4 cm volumes with different prescription doses, was modified to reduce the size of the PTV to 2 cm across and to include an OAR dose constraint for one of the other volumes. • C2, a prostate treatment, was planned as described by the TG 119 report [1]. • C3, a head-and-neck treatment with a PTV larger than 10 cm across, was excluded from the study. • C4, an 8 cm long C-shaped PTV surrounding a cylindrical OAR, was planned as described in the TG 119 report [1] and then replanned with the length of the PTV reduced to 4 cm. Both plans in each pair used the same beam angles, collimator angles, dose reference points, prescriptions and constraints. However, one of each pair of plans had its beam modulation optimisation and dose calculation completed with reference to existing iPlan beam data and the other had its beam modulation optimisation and dose calculation completed with reference to revised beam data. The beam data revisions consisted of increasing the field output factor for a 0.6 × 0.6 cm² field by 17% and increasing the field output factor for a 1.2 × 1.2 cm² field by 3%.
Results The use of different beam data resulted in different optimisation results, with different microMLC apertures and segment weightings between the two plans for each treatment, which led to large differences (up to 30%, with an average of 5%) between reference point doses in each pair of plans. These point dose differences are more indicative of the modulation of the plans than of any clinically relevant changes to the overall PTV or OAR doses. By contrast, the differences between the maximum, minimum and mean doses to the PTVs and OARs were smaller (less than 1% for all beams in three out of four pairs of treatment plans) but are more clinically important. Of the four test cases, only the shortened (4 cm) version of TG 119's C4 plan showed substantial differences between the overall doses calculated in the volumes of interest using the different sets of beam data, and thereby suggested that treatment doses could be affected by changes to small field output factors. An analysis of the complexity of this pair of plans, using Crowe et al.'s TADA code [2], indicated that iPlan's optimiser had produced IMRT segments composed of larger numbers of small microMLC leaf separations than in the other three test cases. Conclusion The use of altered small field output factors can result in substantially altered doses when large numbers of small leaf apertures are used to modulate the beams, even when treating relatively large volumes.
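The conclusion above can be illustrated with simple arithmetic: if a point dose is modelled as a weighted sum of segment doses, a 17% perturbation of the small-field output factor (mirroring the study's revised 0.6 × 0.6 cm² factor) shifts the total little when few segments are small, but substantially when many are. The segment weights and output factor values below are invented for illustration, not taken from the study's plans.

```python
# Hedged arithmetic sketch: effect of a small-field output factor change on
# a weighted-sum dose model. All plan numbers are invented.
def dose(segment_weights, output_factors):
    """Point dose as a sum of weight * output factor (arbitrary units)."""
    return sum(w * of for w, of in zip(segment_weights, output_factors))

weights = [0.2, 0.2, 0.2, 0.2, 0.2]

# Plan A: one small-field segment; Plan B: four small-field segments.
of_a = [0.60, 1.0, 1.0, 1.0, 1.0]
of_b = [0.60, 0.60, 0.60, 0.60, 1.0]

for name, of in (("A", of_a), ("B", of_b)):
    revised = [f * 1.17 if f == 0.60 else f for f in of]   # +17% on small fields
    change = dose(weights, revised) / dose(weights, of) - 1
    print(name, round(100 * change, 1))   # percent dose change
```

The heavily modulated plan (many small apertures) inherits most of the 17% perturbation, while the plan with a single small segment barely moves, which is the pattern the C4 result exhibits.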
Abstract:
Mothers represent a large segment of marketing expenditure, and traditionally word of mouth spread from mother to mother in face-to-face environments such as the school car park or mothers' groups. As families have evolved, so too has the traditional mothers' group. Few academic studies have explored online mothers' groups and how they influence consumption. To explore the nature of this online influence and how mothers are influenced by other mothers online, a study was conducted using observation and qualitative questioning. The data suggest that trust between mothers is generally high, and mothers tend to trust the opinions of other mothers when they recommend a product. This is similar to other reference-group contexts; however, mothers are communicating about brands frequently and influencing behaviour. This leads to a number of managerial and theoretical implications, discussed in the paper.