943 results for patch match
Abstract:
Over the past decade, vision-based tracking systems have been successfully deployed in professional sports such as tennis and cricket for enhanced broadcast visualizations as well as aiding umpiring decisions. Despite the high level of accuracy of the tracking systems and the sheer volume of spatiotemporal data they generate, the use of this high-quality data for quantitative player performance analysis and prediction has been lacking. In this paper, we present a method which predicts the location of a future shot based on the spatiotemporal parameters of the incoming shots (i.e., shot speed, location, angle and feet location) from such a vision system. Having the ability to accurately predict future short-term events has enormous implications for automatic sports broadcasting as well as the coaching and commentary domains. Using Hawk-Eye data from the 2012 Australian Open Men's draw, we utilize a Dynamic Bayesian Network to model player behaviors and use an online model adaptation method to match the player's behavior and enhance shot predictability. To show the utility of our approach, we analyze the shot predictability of the top 3 seeds in the tournament (Djokovic, Federer and Nadal), as they played the most games.
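The idea of predicting the next shot from discretised features of the incoming shot can be caricatured as a conditional frequency table. The sketch below is a minimal stand-in for the paper's Dynamic Bayesian Network, not its actual model; every feature name and court zone is invented for illustration.

```python
from collections import Counter, defaultdict

class ShotOutcomePredictor:
    """Minimal discrete stand-in for a next-shot-location model:
    predict the most likely next-shot zone given discretised features
    of the incoming shot (e.g. speed band, court zone). The feature
    vocabulary here is illustrative, not Hawk-Eye's schema."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, incoming_features, next_zone):
        # incoming_features: any hashable tuple, e.g. ("fast", "deuce-deep")
        self.counts[incoming_features][next_zone] += 1

    def predict(self, incoming_features):
        table = self.counts.get(incoming_features)
        if not table:
            return None  # unseen context
        return table.most_common(1)[0][0]
```

Online model adaptation, as in the paper, could be approximated by decaying old counts before each `observe()` call so recent behaviour dominates.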
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information: be that information for a specific study, tweets which can inform emergency services or other responders during an ongoing crisis, or information giving an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the data sets collected and analyzed are preformed, that is, built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders, for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as an authoritative source.
Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art.
In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss the opportunities and methods to extract smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme amongst these papers is that of constructing a data set, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and subsequent discussion, the panel will inform the wider research community not only on the objectives and limitations of data collection, live analytics, and filtering, but also on current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
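The content-analysis and user-profiling triage described for the first paper might be sketched as below. Every keyword list, user name, and threshold here is an invented placeholder for illustration, not drawn from the papers themselves.

```python
# Illustrative keyword lists and user sets (placeholders, not from the papers).
URGENCY_TERMS = {"urgent", "trapped", "injured", "help"}
TOPIC_TERMS = {"flood", "evacuation", "shelter"}
AUTHORITATIVE_USERS = {"qld_emergency", "bom_alerts"}

def content_score(text: str) -> int:
    """Content analysis: score a tweet by urgency terms (weighted higher)
    and topic-relevance terms."""
    words = set(text.lower().split())
    return 2 * len(words & URGENCY_TERMS) + len(words & TOPIC_TERMS)

def user_score(author: str, retweets: int) -> int:
    """User profiling: known authoritative sources and high-volume
    amplifiers rank higher."""
    score = 3 if author in AUTHORITATIVE_USERS else 0
    return score + (1 if retweets > 100 else 0)

def triage(tweet: dict, threshold: int = 3) -> bool:
    """Place the tweet in front of responders only if the combined
    content and user score clears the threshold."""
    total = content_score(tweet["text"]) + user_score(
        tweet["author"], tweet.get("retweets", 0))
    return total >= threshold
```

In practice the keyword sets would be the part that is "constantly refined" between events, as the abstract describes.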
Abstract:
In nature, the interactions between agents in a complex system (fish schools; colonies of ants) are governed by information that is locally created. Each agent self-organizes (adjusts) its behaviour, not through a central command centre, but based on variables that emerge from the interactions with other system agents in the neighbourhood. Self-organization has been proposed as a mechanism to explain the tendencies for individual performers to interact with each other in field-invasion sports teams, displaying functional co-adaptive behaviours, without the need for central control. The relevance of self-organization as a mechanism that explains pattern-forming dynamics within attacker-defender interactions in field-invasion sports has been sustained in the literature. Nonetheless, other levels of interpersonal coordination, such as intra-team interactions, still raise important questions, particularly with reference to the role of leadership or match strategies that have been prescribed in advance by a coach. The existence of key properties of complex systems, such as system degeneracy, nonlinearity or contextual dependency, suggests that self-organization is a functional mechanism to explain the emergence of interpersonal coordination tendencies within intra-team interactions. In this opinion article we propose how leadership may act as a key constraint on the emergent, self-organizational tendencies of performers in field-invasion sports.
Abstract:
Quantitative analysis is increasingly being used in team sports to better understand performance in these stylized, delineated, complex social systems. Here we provide a first step toward understanding the pattern-forming dynamics that emerge from collective offensive and defensive behavior in team sports. We propose a novel method of analysis that captures how teams occupy sub-areas of the field as the ball changes location. We used the method to analyze a game of association football (soccer) based upon the hypothesis that local player numerical dominance is key to defensive stability and offensive opportunity. We found that the teams consistently allocated more players than their opponents in sub-areas of play closer to their own goal. This is consistent with a predominantly defensive strategy intended to avoid yielding even a single goal. We also found differences between the two teams' strategies: while both adopted the same distribution of defensive, midfield, and attacking players (a 4:3:3 system of play), one team was significantly more effective at maintaining numerical dominance, both for defensive stability and for offensive opportunity. That team did indeed win the match, by a single goal (2 to 1), but the analysis shows the advantage in play was more pervasive than the single-goal victory would indicate. Our focus on the local dynamics of team collective behavior is distinct from the traditional focus on individual player capability. It supports a broader view in which specific player abilities contribute within the context of the dynamics of multiplayer team coordination and coaching strategy. By applying this complex-systems analysis to association football, we can understand how players' and teams' strategies result in successful and unsuccessful relationships between teammates and opponents in the area of play.
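The core measurement, local numerical dominance per sub-area, reduces to counting each team's players in a grid of field cells. The sketch below assumes a standard pitch size and an arbitrary 3x3 grid; neither is taken from the paper.

```python
import numpy as np

def numerical_dominance(team_a_xy, team_b_xy, pitch=(105.0, 68.0), grid=(3, 3)):
    """Count players of each team in a coarse grid of pitch sub-areas and
    return the A-minus-B difference per cell (positive = team A dominant).
    Pitch dimensions and the 3x3 grid are illustrative choices."""
    def counts(xy):
        c = np.zeros(grid, dtype=int)
        for x, y in xy:
            i = min(int(x / pitch[0] * grid[0]), grid[0] - 1)
            j = min(int(y / pitch[1] * grid[1]), grid[1] - 1)
            c[i, j] += 1
        return c
    return counts(team_a_xy) - counts(team_b_xy)
```

Recomputing this grid every time the ball changes location yields the kind of occupancy dynamics the method analyses.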
Abstract:
This study investigated changes in the complexity (magnitude and structure of variability) of the collective behaviours of association football teams during competitive performance. Raw positional data from an entire competitive match between two professional teams were obtained with the ProZone® tracking system. Five compound positional variables were used to investigate the collective patterns of performance of each team: surface area, stretch index, team length, team width, and geometrical centre. Analyses involved the coefficient of variation (%CV) and approximate entropy (ApEn), as well as the linear association between the two parameters. The collective measures successfully captured the idiosyncratic behaviours of each team and their variations across the six time periods of the match. Key events such as goals scored and game breaks (such as half time and full time) seemed to influence the collective patterns of performance. While ApEn values significantly decreased during each half, the %CV increased. Teams thus seem to become more regular and predictable, but with increased magnitudes of variation in their organisational shape, over the natural course of a match.
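The two complexity measures used here are standard and easy to compute from any of the compound positional time series. The sketch below uses the usual Pincus formulation of ApEn with the common defaults m = 2 and r = 0.2·SD, which are not necessarily the settings used in the study.

```python
import numpy as np

def cv_percent(x):
    """Coefficient of variation (%CV): magnitude of variability
    relative to the mean."""
    x = np.asarray(x, float)
    return 100.0 * x.std() / x.mean()

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy (ApEn): lower values indicate a more regular,
    predictable series. Standard Pincus formulation."""
    x = np.asarray(x, float)
    r = r_factor * x.std()
    n = len(x)

    def phi(m):
        # All length-m templates of the series
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)  # fraction of templates matching each one
        return np.log(c).mean()

    return phi(m) - phi(m + 1)
```

A perfectly regular series gives ApEn near zero, matching the paper's reading that decreasing ApEn means teams become more regular and predictable.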
Abstract:
Nedd4-2, a HECT (homologous with E6-associated protein C-terminus)-type ubiquitin protein ligase, has been implicated in regulating several ion channels, including Navs (voltage-gated sodium channels). In Xenopus oocytes Nedd4-2 strongly inhibits the activity of multiple Navs. However, the conditions under which Nedd4-2 mediates native Nav regulation remain uncharacterized. Using Nedd4-2-deficient mice, we demonstrate in the present study that in foetal cortical neurons Nedd4-2 regulates Navs specifically in response to elevated intracellular Na(+), but does not affect steady-state Nav activity. In dorsal root ganglia neurons from the same mice, however, Nedd4-2 does not control Nav activities. The results of the present study provide the first physiological evidence for an essential function of Nedd4-2 in regulating Navs in the central nervous system.
Abstract:
Aim: To test the efficacy of Medilixir [cream] against the standard treatment of aqueous cream in the provision of relief from the symptoms of postburn itch.
Design: RCT with two parallel arms.
Setting: Professor Stuart Pegg Adult Burns Centre, Royal Brisbane and Women's Hospital, Brisbane, Australia.
Participants: Fifty-two patients aged between 18 and 80 years who were admitted directly to the burns centre between 10 March and 22 July 2008, were able to provide informed consent, and had shown no allergic reaction to a patch test with the study medication were randomised. Patients admitted from intensive care or high dependency were excluded.
Main results: Effect estimates and confidence intervals were not reported for any of the outcomes; only group means/proportions and P-values from hypothesis testing were provided. More patients in the intervention group reported itch reduction compared to the comparison treatment (91 vs. 82%, P=0.001). Itch recurrence after cream application occurred later in the intervention group than in the control group (P<0.001). Use of antipruritic medication was significantly greater in the control group (P=0.023). There was no difference in sleep disturbance between groups (not quantified). On average, Medilixir took longer to apply than aqueous cream (157 s for Medilixir vs. 139 s for aqueous cream; mean difference 17 s), but the authors noted that the groups did not differ significantly (the CI for the mean difference and P-values were not reported).
Abstract:
A microgrid contains both distributed generators (DGs) and loads, and can be viewed as a controllable load by utilities. The DGs can be either inertial synchronous generators or non-inertial, converter-interfaced units. Moreover, some of them can come online or go offline in a plug-and-play fashion. The combination of these various types of operation makes microgrid control a challenging task, especially when the microgrid operates in an autonomous mode. In this paper, a new phase-locked loop (PLL) algorithm is proposed for smooth synchronization of plug-and-play DGs. A frequency droop is used for power sharing, and a pseudo inertia is introduced for non-inertial DGs in order to match their response with that of inertial DGs. The proposed strategy is validated through PSCAD simulation studies.
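Frequency droop and pseudo inertia can both be written down in a few lines. The sketch below shows the generic per-unit forms, with illustrative parameter values; it is not the paper's controller, whose PLL and tuning details are not given in the abstract.

```python
def droop_frequency(p_out, f_nom=50.0, p_rated=1.0, droop=0.01):
    """Frequency droop: each DG lowers its frequency linearly with output
    power so parallel units share load without communication. A 1% droop
    means frequency falls by 0.01*f_nom between no load and rated load.
    All values are illustrative."""
    return f_nom - droop * f_nom * (p_out / p_rated)

def pseudo_inertia_step(f, p_ref, p_out, h=2.0, f_nom=50.0, dt=0.001):
    """One explicit-Euler step of a swing-equation-like filter giving a
    converter-interfaced (non-inertial) DG an inertial response:
    df/dt = (p_ref - p_out) * f_nom / (2*H). Simplified per-unit sketch."""
    return f + dt * (p_ref - p_out) * f_nom / (2.0 * h)
```

The inertia constant H slows the frequency response of the converter so it matches the time scale of rotating machines, which is the stated purpose of the pseudo inertia.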
Abstract:
Increased levels of polybrominated diphenyl ethers (PBDEs) can occur particularly in dust and soil surrounding facilities that recycle products containing PBDEs. This may be a source of increased exposure for nearby workers and residents. To investigate, we measured PBDE levels in soil, office dust and blood of workers at the closest workplace (i.e. within 100 m) to a large automotive shredding and metal recycling facility in Brisbane, Australia. The workplace investigated in this study was independent of the automotive shredding facility and was one of approximately 50 businesses of varying types within a relatively large commercial/industrial area surrounding the recycling facility. Concentrations of PBDEs in soils were at least an order of magnitude greater than background levels in the area. Congener profiles were dominated by higher-molecular-weight congeners, in particular BDE-209, reflecting the profile in outdoor air samples previously collected at this site. Biomonitoring data from blood serum indicated no differential exposure for workers near the recycling facility compared to a reference group of office workers, also in Brisbane. Unlike the air, indoor dust and soil sample profiles, serum samples from both worker groups were dominated by congeners BDE-47, BDE-153, BDE-99, BDE-100 and BDE-183, a profile similar to that previously reported in the general Australian population. Estimated exposures for workers near the industrial point source suggested indoor workers had significantly higher exposure than outdoor workers, due to their exposure to indoor dust rather than soil. However, no relationship was observed between blood PBDE levels and the different roles and activity patterns of workers on-site. These comparisons of PBDE levels in serum provide additional insight into the inter-individual variability within Australia. Results also indicate congener patterns in the workplace environment did not match the blood profiles of workers.
This was attributed to the relatively high background exposures for the general Australian population via dietary intake and the home environment.
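Whether two congener profiles "match" can be quantified with a simple similarity measure over relative concentrations; the cosine similarity below is an illustrative choice, not the study's stated method, and the congener values in the example are invented.

```python
import math

def profile_similarity(profile_a, profile_b):
    """Cosine similarity between two congener profiles (dicts mapping
    congener name -> concentration). Values near 1 indicate matching
    relative patterns regardless of absolute concentration."""
    keys = set(profile_a) | set(profile_b)
    a = [profile_a.get(k, 0.0) for k in keys]
    b = [profile_b.get(k, 0.0) for k in keys]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)
```

A BDE-209-dominated soil profile scored against a BDE-47-dominated serum profile would yield a low similarity, consistent with the mismatch the study reports.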
Abstract:
Landscape change is an ongoing process even within established urban landscapes. Yet analyses of fragmentation and deforestation have focused primarily on the conversion of non-urban to urban land in rural landscapes and have ignored urban landscapes. To determine the ecological effects of continued urbanization in urban landscapes, tree-covered patches were mapped in the Gwynns Falls watershed (17158.6 ha) in Maryland for 1994 and 1999 to document fragmentation, deforestation, and reforestation. The watershed was divided into lower (urban core), middle (older suburbs), and upper (recent suburbs) subsections. Over the entire watershed, a net 264.5 of 4855.5 ha of tree-covered patches were converted to urban land use: 125 new tree-covered patches were added through fragmentation, 4 were added through reforestation, 43 were lost through deforestation, and 7 were combined with an adjacent patch. In addition, 180 patches were reduced in size. In the urban core, deforestation continued with conversion to commercial land use. Because of the lack of vegetation, commercial land uses are problematic for both species conservation and derived ecosystem benefits. In the lower subsection, shape complexity increased for tree-covered patches smaller than 10 ha. Changes in shape resulted from canopy expansion, planted materials, and reforestation of vacant sites. In the middle and upper subsections, the shape index value for tree-covered patches decreased, indicating simplification. Density analyses of the subsections showed no change with respect to patch densities but pointed out the importance of small patches (≤5 ha) as "stepping stones" linking large patches (e.g., ≥100 ha). Using an urban forest effects model, we estimated, for the entire watershed, total carbon loss and pollution removal from 1994 to 1999 to be 14,235,889.2 kg and 13,011.4 kg, respectively, due to urban land-use conversions.
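The shape index mentioned here is a standard landscape-ecology metric. One common (FRAGSTATS-style) formulation for raster patches is sketched below; whether the study used exactly this formulation is an assumption.

```python
import math

def shape_index(perimeter, area):
    """Patch shape index, SI = 0.25 * perimeter / sqrt(area).
    A square patch yields SI = 1.0; SI grows without bound as the
    patch boundary becomes more complex (elongated, convoluted)."""
    return 0.25 * perimeter / math.sqrt(area)
```

Under this formulation, a decreasing shape index across years, as reported for the middle and upper subsections, means patch boundaries are becoming geometrically simpler.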
Abstract:
Natural landscapes are increasingly subjected to anthropogenic pressure and fragmentation, resulting in reduced ecological condition. In this study we examined the relationship between ecological condition and the soundscape in fragmented forest remnants of south-east Queensland, Australia. The region is noted for its high biodiversity value and increased pressure associated with habitat fragmentation and urbanisation. Ten sites defined by a distinct open eucalypt forest community dominated by spotted gum (Corymbia citriodora ssp. variegata) were stratified based on patch size and patch connectivity. Each site underwent a series of detailed vegetation condition and landscape assessments, together with bird surveys and acoustic analysis using relative soundscape power. Univariate and multivariate analyses indicated that the measurement of relative soundscape power reflects ecological condition and bird species richness, and is dependent on the extent of landscape fragmentation. We conclude that acoustic monitoring technologies provide a cost-effective tool for measuring ecological condition, especially in conjunction with established field observations and recordings.
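Relative soundscape power is, in essence, spectral power summed in fixed frequency bands and normalised. The sketch below is a rough, single-window approximation of that idea; the band width, normalisation, and windowing details used in the study are assumptions here.

```python
import numpy as np

def relative_soundscape_power(signal, fs, band_hz=1000.0):
    """Sum spectral power of a mono recording in fixed-width frequency
    bands and normalise by the loudest band, loosely following the
    'relative soundscape power' idea. Single-window sketch; real
    analyses aggregate over many time windows."""
    spec = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    n_bands = int(np.ceil(fs / 2 / band_hz))
    power = np.zeros(n_bands)
    for f, p in zip(freqs, spec):
        power[min(int(f // band_hz), n_bands - 1)] += p
    return power / power.max()
```

Biologically active sites tend to spread acoustic energy across the mid bands occupied by birdsong, which is what makes a band-power summary informative about condition.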
Abstract:
Robust facial expression recognition (FER) under occluded face conditions is challenging. It requires robust algorithms for feature extraction, and investigation of the effects of different types of occlusion on recognition performance to gain insight. Previous FER studies in this area have been limited: they have focused on recovery strategies for the loss of local texture information, with testing limited to only a few types of occlusion and a predominantly matched train-test strategy. This paper proposes a robust approach that employs a Monte Carlo algorithm to extract a set of Gabor-based part-face templates from gallery images and converts these templates into template match distance features. The resulting feature vectors are robust to occlusion because occluded parts are covered by some, but not all, of the random templates. The method is evaluated using facial images with occluded regions around the eyes and the mouth, randomly placed occlusion patches of different sizes, and near-realistic occlusion of the eyes with clear and solid glasses. Both matched and mismatched train-test strategies are adopted to analyze the effects of such occlusion. Overall recognition performance and the performance for each facial expression are investigated. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the high robustness and fast processing speed of our approach, and provide useful insight into the effects of occlusion on FER. The parameter sensitivity results demonstrate a certain level of robustness of the approach to changes in the orientation and scale of the Gabor filters, the size of the templates, and occlusion ratios. Performance comparisons with previous approaches show that the proposed method is more robust to occlusion, with lower reductions in accuracy from occlusion of the eyes or mouth.
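The template match distance idea can be sketched independently of the Gabor filtering: sample random part-face patches from a gallery image, then describe a probe face by the best match distance of each patch. The version below uses raw pixel SSD instead of Gabor responses, so it is a simplified stand-in for the paper's pipeline, not a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_templates(gallery, n_templates=8, size=4):
    """Sample random part-face patches from a gallery image; a stand-in
    for the paper's Monte Carlo, Gabor-filtered template extraction."""
    h, w = gallery.shape
    out = []
    for _ in range(n_templates):
        r = rng.integers(0, h - size + 1)
        c = rng.integers(0, w - size + 1)
        out.append(gallery[r:r + size, c:c + size].copy())
    return out

def match_distance_features(probe, templates):
    """Feature vector: for each template, the minimum sum-of-squared-
    differences over all positions in the probe. An occluded region
    corrupts only the templates that best match inside it."""
    h, w = probe.shape
    feats = []
    for t in templates:
        th, tw = t.shape
        best = np.inf
        for r in range(h - th + 1):
            for c in range(w - tw + 1):
                d = np.sum((probe[r:r + th, c:c + tw] - t) ** 2)
                best = min(best, d)
        feats.append(best)
    return np.array(feats)
```

Because each template only needs one good match somewhere in the probe, a local occlusion inflates a few feature dimensions while leaving the rest intact, which is the robustness mechanism the abstract describes.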
Abstract:
Most existing motorway traffic safety studies using disaggregate traffic flow data aim at developing models for identifying real-time traffic risks by comparing pre-crash and non-crash conditions. One serious shortcoming of those studies is that non-crash conditions are arbitrarily selected and hence not representative: the selected non-crash data might not be the right data to compare with pre-crash data, and the non-crash/pre-crash ratio is arbitrarily decided, neglecting the abundance of non-crash over pre-crash conditions. Here, we present a methodology for developing a real-time MotorwaY Traffic Risk Identification Model (MyTRIM) using individual vehicle data, meteorological data, and crash data. Non-crash data are clustered into groups called traffic regimes. Thereafter, pre-crash data are classified into regimes to match them with the relevant non-crash data. Of the eight traffic regimes obtained, four highly risky regimes were identified, and three regime-based Risk Identification Models (RIM) with sufficient pre-crash data were developed. MyTRIM memorizes the latest risk evolution identified by RIM to predict near-future risks. Traffic practitioners can decide MyTRIM's memory size based on the trade-off between detection and false alarm rates: decreasing the memory size from 5 to 1 increases the detection rate from 65.0% to 100.0% and the false alarm rate from 0.21% to 3.68%. Moreover, critical factors in differentiating pre-crash and non-crash conditions are recognized and usable for developing preventive measures. MyTRIM can be used by practitioners in real time, either as an independent tool to make online decisions or integrated with existing traffic management systems.
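The regime step, clustering non-crash observations and then assigning pre-crash observations to their nearest regime so that like is compared with like, can be sketched with a plain k-means. The number of clusters, feature set, and clustering algorithm below are illustrative, not MyTRIM's actual configuration.

```python
import numpy as np

def assign_regimes(noncrash, precrash, k=4, iters=20, seed=0):
    """Cluster non-crash observations (rows = observations, columns =
    features) into k traffic regimes with a plain k-means, then assign
    each pre-crash observation to its nearest regime centroid."""
    rng = np.random.default_rng(seed)
    centroids = noncrash[rng.choice(len(noncrash), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each non-crash point to its nearest centroid
        labels = np.argmin(
            np.linalg.norm(noncrash[:, None] - centroids[None], axis=2), axis=1)
        # Move each centroid to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = noncrash[labels == j].mean(axis=0)
    pre_labels = np.argmin(
        np.linalg.norm(precrash[:, None] - centroids[None], axis=2), axis=1)
    return centroids, pre_labels
```

Comparing pre-crash against non-crash data only within the same regime removes the arbitrariness of hand-picked non-crash samples that the abstract criticises.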
Abstract:
Digital forensics concerns the analysis of electronic artifacts to reconstruct events such as cyber crimes. This research produced a framework to support forensic analyses by identifying associations in digital evidence using metadata. It showed that metadata-based associations can help uncover the inherent relationships between heterogeneous digital artifacts, thereby aiding reconstruction of past events through the identification of artifact dependencies and time sequencing. It also showed that metadata-association-based analysis is amenable to automation by virtue of the ubiquitous nature of metadata across forensic disk images, files, system and application logs, and network packet captures. The results show that metadata-based associations can be used to extract meaningful relationships between digital artifacts, thus potentially benefiting real-life forensic investigations.
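A minimal form of metadata-based association groups heterogeneous artifacts by shared fields (owner) and temporal proximity (time sequencing). The metadata fields and 60-second window below are illustrative assumptions, not the framework's actual rules.

```python
from collections import defaultdict

def associate_by_metadata(artifacts, window_seconds=60):
    """Index heterogeneous artifacts (files, log entries, packets) by a
    shared metadata field (owner), and pair artifacts whose timestamps
    fall within a common window, surfacing candidate associations for
    event reconstruction."""
    by_owner = defaultdict(list)
    for a in artifacts:
        by_owner[a["owner"]].append(a["name"])
    # Chronological pairs within the time window (time sequencing)
    timeline = sorted(artifacts, key=lambda a: a["timestamp"])
    temporal_pairs = [
        (x["name"], y["name"])
        for i, x in enumerate(timeline)
        for y in timeline[i + 1:]
        if y["timestamp"] - x["timestamp"] <= window_seconds
    ]
    return dict(by_owner), temporal_pairs
```

Because disk images, logs, and packet captures all carry owners and timestamps in some form, the same association pass runs unmodified across evidence types, which is the automation argument the abstract makes.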
Abstract:
The railhead is perhaps the most highly stressed civil infrastructure, due to the passage of heavily loaded wheels through a very small contact patch. The stresses at the contact patch cause yielding and wear of the railhead material. Many theories exist for the prediction of these mechanisms in continuous rails; the same processes in discontinuous rails are relatively sparingly researched. Discontinuous railhead edges fail through the accumulation of excessive plastic strains. This is a widely reported and significant safety concern, as these edges form part of the Insulated Rail Joints (IRJs) in signalling track circuitry. Since Hertzian contact theory is not valid at a discontinuous edge, 3D finite element (3DFE) models of wheel contact at a railhead edge have been used in this research. Elastic-plastic material properties of the head-hardened rail steel were experimentally determined through uniaxial monotonic tension tests and incorporated into a FE model of a cylindrical specimen subject to cyclic tension loading. The parameters required for the Chaboche kinematic hardening model were determined from the stabilised hysteresis loops of the cyclic load simulation and implemented in the 3DFE model. The 3DFE predictions show that the plastic strain accumulation in the vicinity of wheel contact at discontinuous railhead edges is governed by the contact due to the passage of wheels rather than by the magnitude of the loads the wheels carry. Therefore, to eliminate this failure mechanism, modification of the contact patch is essential; reduction in wheel load cannot solve this problem.
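The Chaboche parameters (C, gamma) fitted from stabilised hysteresis loops govern a backstress evolution law. A single-backstress, uniaxial, explicitly integrated sketch is shown below; real calibrations typically superpose several backstress terms and use the full tensorial form, so this is illustrative only.

```python
def chaboche_backstress(c, gamma, strain_increments, x0=0.0):
    """Explicit integration of the uniaxial Chaboche kinematic-hardening
    backstress evolution, dX = C*d(ep) - gamma*X*|d(ep)|, where ep is
    plastic strain. Under monotonic loading X saturates at C/gamma,
    which is how (C, gamma) are read off stabilised hysteresis loops.
    Single-backstress simplification; values are illustrative."""
    x = x0
    history = [x]
    for dep in strain_increments:
        x += c * dep - gamma * x * abs(dep)
        history.append(x)
    return history
```

The dynamic-recovery term (-gamma*X*|dep|) is what bounds the backstress and produces the closed, stabilised loops from which the paper extracts its parameters.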