973 results for match


Relevance:

10.00%

Publisher:

Abstract:

The complete nucleotide sequence of Subterranean clover mottle virus (SCMoV) genomic RNA has been determined. The SCMoV genome is 4,258 nucleotides in length. It shares the greatest nucleotide and amino acid sequence identity with the genome of Lucerne transient streak virus (LTSV). SCMoV RNA encodes four overlapping open reading frames and has a genome organisation similar to that of Cocksfoot mottle virus (CfMV). ORF1 and ORF4 are each predicted to encode a single protein. ORF2 is predicted to encode two proteins that are derived from a -1 translational frameshift between two overlapping reading frames (ORF2a and ORF2b). A search of amino acid databases did not find a significant match for ORF1, and the function of this protein remains unclear. ORF2a contains a motif typical of chymotrypsin-like serine proteases, and ORF2b has motifs characteristically present in positive-stranded RNA-dependent RNA polymerases. ORF4 is likely to be expressed from a subgenomic RNA and encodes the viral coat protein. The ORF2a/ORF2b overlapping gene expression strategy used by SCMoV and CfMV is similar to that of the poleroviruses and differs from that of other published sobemoviruses. These results suggest that the sobemoviruses can now be divided into two distinct subgroups: those that express the RNA-dependent RNA polymerase from a single, in-frame polyprotein, and those that express it via a -1 translational frameshifting mechanism.

Relevance:

10.00%

Publisher:

Abstract:

The thick piles of late-Archean volcaniclastic sedimentary successions that overlie the voluminous greenstone units of the eastern Yilgarn Craton, Western Australia, record the important transition from the cessation of mafic-ultramafic volcanism to cratonisation between about 2690 and 2655 Ma. Unfortunately, an inability to clearly subdivide the superficially similar sedimentary successions and correlate them between the various geological terranes and domains of the eastern Yilgarn Craton has led to uncertainty about the timing and nature of the region's palaeogeographic and palaeotectonic evolution. Here, we present the results of some 2,025 U–Pb laser-ablation ICP-MS analyses and 323 Sensitive High-Resolution Ion Microprobe (SHRIMP) analyses of detrital zircons from 14 late-Archean felsic clastic successions of the eastern Yilgarn Craton, which have enabled correlation of the clastic successions. Our results, together with data compiled from previous studies, show that the post-greenstone sedimentary successions include two major cycles that both commenced with voluminous pyroclastic volcanism and ended with widespread exhumation and erosion associated with granite emplacement. Cycle One commenced with an influx of rapidly reworked feldspar-rich pyroclastic debris. These units, here named the Early Black Flag Group, are dominated by a single population of detrital zircons with an average age of 2690–2680 Ma. Thick (up to 2 km) dolerite bodies, such as the Golden Mile Dolerite, intrude the upper parts of the Early Black Flag Group at about 2680 Ma. Incipient development of large granite domes during Cycle One created extensional basins predominantly near their southeastern and northwestern margins (e.g., St Ives, Wallaby, Kanowna Belle and Agnew), into which the Early Black Flag Group and the overlying coarse mafic conglomerate facies of the Late Black Flag Group were deposited. The clast compositions and detrital-zircon ages of the Late Black Flag Group detritus closely match the nearby and/or stratigraphically underlying successions, suggesting relatively local provenance. Cycle Two involved a similar progression to that observed in Cycle One, but the age and composition of the detritus were notably different. Deposition of rapidly reworked quartz-rich pyroclastic deposits dominated by a single detrital-zircon age population of 2670–2660 Ma heralded the beginning of Cycle Two. These coarse-grained, quartz-rich units are named here the Early Merougil Group. The mean ages of the detrital zircons from the Early Merougil Group closely match the age of the peak in high-Ca (quartz-rich) granite magmatism in the Yilgarn Craton and thus probably represent the surface expression of the same event. Successions of the Late Merougil Group are dominated by coarse felsic conglomerate with abundant volcanic quartz. Although the detrital zircons in these successions have a broad spread of ages, the principal sub-populations have ages of about 2665 Ma and thus closely match those of the Early Merougil Group. These successions occur most commonly at the northwestern and southeastern margins of the granite batholiths and are therefore interpreted to represent resedimented units dominated by detritus from the stratigraphically underlying packages of the Early Merougil Group.
The Kurrawang Group is the youngest sedimentary unit identified in this study and is dominated by polymictic conglomerate with clasts of banded iron formation (BIF), granite and quartzite near the base, and quartz-rich sandstone units containing detrital zircons aged up to 3500 Ma near the top. These units record provenance from deeper and/or more-distal sources. We suggest here that the principal driver for the major episodes of volcanism, sedimentation and deformation associated with basin development was the progressive emplacement of large granite batholiths. This interpretation has important implications for the palaeogeographic and palaeotectonic evolution of all late-Archean terranes around the world.

Relevance:

10.00%

Publisher:

Abstract:

A major challenge for robot localization and mapping systems is maintaining reliable operation in a changing environment. Vision-based systems in particular are susceptible to changes in illumination and weather, and the same location at another time of day may appear radically different to a feature-based visual localization system. One approach for mapping changing environments is to create and maintain maps that contain multiple representations of each physical location in a topological framework or manifold. However, this requires the system to be able to correctly link two or more appearance representations to the same spatial location, even though the representations may appear quite dissimilar. This paper proposes a method of linking visual representations from the same location without requiring a visual match, thereby allowing vision-based localization systems to create multiple appearance representations of physical locations. The most likely position on the robot path is determined using particle filter methods based on dead reckoning data and recent visual loop closures. To avoid erroneous loop closures, the odometry-based inferences are only accepted when the inferred path's end point is confirmed as correct by the visual matching system. Algorithm performance is demonstrated using an indoor robot dataset and a large outdoor camera dataset.
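
As a rough illustration of the particle-filter step described above, the sketch below (not the authors' implementation; the 1-D path representation and all noise parameters are assumptions) propagates particles with noisy dead-reckoning increments and re-anchors them when a visual loop closure is accepted.

```python
# Minimal sketch: a 1-D particle filter tracking the most likely position along
# a previously traversed path from noisy odometry, corrected by loop closures.
import numpy as np

rng = np.random.default_rng(0)
N = 500                       # number of particles
particles = np.zeros(N)       # position along the path (metres)
weights = np.full(N, 1.0 / N)

def predict(odometry_step, noise_std=0.05):
    """Propagate particles with a dead-reckoning increment plus motion noise."""
    global particles
    particles = particles + odometry_step + rng.normal(0.0, noise_std, N)

def update_with_loop_closure(closure_position, meas_std=0.5):
    """Re-weight particles against an accepted visual loop-closure position."""
    global weights
    likelihood = np.exp(-0.5 * ((particles - closure_position) / meas_std) ** 2)
    weights = weights * likelihood
    weights /= weights.sum()
    # Systematic resampling to avoid particle depletion.
    cdf = np.cumsum(weights)
    cdf[-1] = 1.0                                  # guard against round-off
    idx = np.searchsorted(cdf, (rng.random() + np.arange(N)) / N)
    particles[:] = particles[idx]
    weights.fill(1.0 / N)

def estimate():
    return float(np.average(particles, weights=weights))

# Example: drive 10 steps of ~1 m, then accept a loop closure near 9.5 m.
for _ in range(10):
    predict(1.0)
update_with_loop_closure(9.5)
print(f"inferred path end point: {estimate():.2f} m")
```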

Relevance:

10.00%

Publisher:

Abstract:

Transport through crowded environments is often classified as anomalous, rather than classical Fickian, diffusion. Several studies have sought to describe such transport processes using either a continuous time random walk or a fractional order differential equation. For both of these models the transport is characterized by a parameter α, where α = 1 is associated with Fickian diffusion and α < 1 is associated with anomalous subdiffusion. Here, we simulate a single agent migrating through a crowded environment populated by impenetrable, immobile obstacles and estimate α from mean squared displacement data. We also simulate the transport of a population of such agents through a similar crowded environment and match averaged agent density profiles to the solution of a related fractional order differential equation to obtain an alternative estimate of α. We examine the relationship between our estimate of α and the properties of the obstacle field for both a single agent and a population of agents; we show that in both cases α decreases as the obstacle density increases, and that the rate of decrease is greater for smaller obstacles. Our work suggests that it may be inappropriate to model transport through a crowded environment using widely reported approaches, including power laws to describe the mean squared displacement and fractional order differential equations to represent the averaged agent density profiles.
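
A minimal sketch of the α-estimation step, under assumptions of our own (a toy lattice walk with immobile obstacles standing in for the agent-based model): the anomalous exponent is taken as the slope of mean squared displacement against time on log-log axes.

```python
# Estimate alpha from MSD(t) ~ t^alpha using a toy obstructed random walk.
import numpy as np

rng = np.random.default_rng(1)

def simulate_msd(n_agents=200, n_steps=400, obstacle_fraction=0.3, size=200):
    """Lattice walk among immobile obstacles; moves onto obstacles are aborted."""
    obstacles = rng.random((size, size)) < obstacle_fraction
    obstacles[size // 2, size // 2] = False          # keep the start site free
    start = np.full((n_agents, 2), size // 2)
    pos = start.copy()
    steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    msd = np.zeros(n_steps)
    for t in range(n_steps):
        trial = pos + steps[rng.integers(0, 4, size=n_agents)]
        blocked = obstacles[trial[:, 0] % size, trial[:, 1] % size]
        pos = np.where(blocked[:, None], pos, trial)
        msd[t] = np.mean(np.sum((pos - start) ** 2, axis=1))
    return msd

def estimate_alpha(msd):
    """Slope of log MSD versus log time gives the anomalous exponent alpha."""
    t = np.arange(1, len(msd) + 1)
    alpha, _ = np.polyfit(np.log(t), np.log(msd), 1)
    return alpha

msd = simulate_msd()
print(f"estimated alpha = {estimate_alpha(msd):.2f}  (alpha = 1 is Fickian)")
```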

Relevance:

10.00%

Publisher:

Abstract:

Due to the demand for better and deeper analysis in sports, organizations (both professional teams and broadcasters) are looking to use spatiotemporal data in the form of player tracking information to obtain an advantage over their competitors. However, due to the large volume of data, its unstructured nature, and the lack of associated team activity labels (e.g. strategic/tactical), effective and efficient strategies to deal with such data have yet to be deployed. A bottleneck restricting such solutions is the lack of a suitable representation (i.e. ordering of players) that is immune to the combinatorially large number of possible permutations of player orderings, in addition to the high dimensionality of the temporal signal (e.g. a game of soccer lasts for 90 minutes). We leverage a recent method that utilizes a "role representation", as well as a feature reduction strategy that uses a spatiotemporal bilinear basis model, to form a compact spatiotemporal representation. Using this representation, we find the most likely formation patterns of a team associated with match events across nearly 14 hours of continuous player and ball tracking data in soccer. Additionally, we show that we can accurately segment a match into distinct game phases and detect highlights (i.e. shots, corners, free kicks, etc.) completely automatically using a decision-tree formulation.
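
The permutation problem can be illustrated with a small sketch (an interpretation of the role-representation idea, not the authors' code): at each frame, players are assigned to a fixed set of role positions by solving a linear assignment problem, so the feature vector no longer depends on an arbitrary player ordering. The 4-3-3 template coordinates below are purely illustrative.

```python
# Order players by formation role via a linear assignment of positions to roles.
import numpy as np
from scipy.optimize import linear_sum_assignment

def role_order(player_xy, role_xy):
    """Return player coordinates reordered so that row i corresponds to role i.

    player_xy : (10, 2) outfield player positions for one frame
    role_xy   : (10, 2) template positions of the formation roles
    """
    cost = np.linalg.norm(player_xy[:, None, :] - role_xy[None, :, :], axis=2)
    player_idx, role_idx = linear_sum_assignment(cost)
    ordered = np.empty_like(player_xy)
    ordered[role_idx] = player_xy[player_idx]
    return ordered

# Example: a shuffled set of players mapped back onto a 4-3-3 role template.
rng = np.random.default_rng(0)
roles = np.array([[20, y] for y in (10, 30, 50, 70)] +      # back four
                 [[45, y] for y in (20, 40, 60)] +          # midfield three
                 [[70, y] for y in (15, 40, 65)], float)    # front three
players = rng.permutation(roles + rng.normal(0, 3, roles.shape))
print(role_order(players, roles))
```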

Relevance:

10.00%

Publisher:

Abstract:

Over the past decade, vision-based tracking systems have been successfully deployed in professional sports such as tennis and cricket for enhanced broadcast visualizations as well as aiding umpiring decisions. Despite the high-level of accuracy of the tracking systems and the sheer volume of spatiotemporal data they generate, the use of this high quality data for quantitative player performance and prediction has been lacking. In this paper, we present a method which predicts the location of a future shot based on the spatiotemporal parameters of the incoming shots (i.e. shot speed, location, angle and feet location) from such a vision system. Having the ability to accurately predict future short-term events has enormous implications in the area of automatic sports broadcasting in addition to coaching and commentary domains. Using Hawk-Eye data from the 2012 Australian Open Men's draw, we utilize a Dynamic Bayesian Network to model player behaviors and use an online model adaptation method to match the player's behavior to enhance shot predictability. To show the utility of our approach, we analyze the shot predictability of the top 3 players seeds in the tournament (Djokovic, Federer and Nadal) as they played the most amounts of games.
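
As a hedged illustration of online model adaptation for shot prediction (a hypothetical conditional-probability-table model, not the Dynamic Bayesian Network used in the paper), the sketch below predicts a discretized landing region from discretized incoming-shot features and decays old evidence so the model tracks the current player.

```python
# Toy online-adapting predictor of the next shot's landing region.
from collections import defaultdict

class OnlineShotPredictor:
    def __init__(self, decay=0.98, prior=1.0):
        self.decay = decay                      # forgetting factor for adaptation
        self.prior = prior                      # Laplace-style smoothing
        self.counts = defaultdict(lambda: defaultdict(float))

    def predict(self, incoming, regions=("left", "centre", "right")):
        """Return P(next shot region | incoming-shot features)."""
        row = self.counts[incoming]
        total = sum(row.values()) + self.prior * len(regions)
        return {r: (row[r] + self.prior) / total for r in regions}

    def update(self, incoming, observed_region):
        """Decay old evidence, then count the newly observed shot."""
        row = self.counts[incoming]
        for r in row:
            row[r] *= self.decay
        row[observed_region] += 1.0

# Example with hypothetical, discretized features (speed band, court side).
model = OnlineShotPredictor()
for region in ["left", "left", "right", "left"]:
    model.update(("fast", "deuce"), region)
print(model.predict(("fast", "deuce")))
```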

Relevance:

10.00%

Publisher:

Abstract:

In this paper we present a novel place recognition algorithm inspired by recent discoveries in human visual neuroscience. The algorithm combines intolerant but fast low-resolution whole-image matching with highly tolerant sub-image patch matching processes. The approach does not require prior training and works on single images (although we use a cohort normalization score to exploit temporal frame information), alleviating the need for either a velocity signal or an image sequence and differentiating it from current state-of-the-art methods. We demonstrate the algorithm on the challenging Alderley sunny day – rainy night dataset, which has previously only been solved by integrating over 320-frame-long image sequences. The system is able to achieve 21.24% recall at 100% precision, matching drastically different day and night-time images of places while successfully rejecting match hypotheses between highly aliased images of different places. The results provide a new benchmark for single-image, condition-invariant place recognition.
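
A minimal sketch of the low-resolution whole-image matching component, under our own assumptions (thumbnail size, normalization, and sum-of-absolute-differences scoring are illustrative, and the patch-matching and cohort-normalization stages are omitted):

```python
# Low-resolution whole-image matching: thumbnail, normalize, compare.
import numpy as np

def thumbnail_signature(image, size=(32, 24)):
    """Downsample a greyscale image (nearest-pixel subsampling) and normalize it."""
    h, w = image.shape
    ys = (np.arange(size[1]) * h) // size[1]
    xs = (np.arange(size[0]) * w) // size[0]
    thumb = image[np.ix_(ys, xs)].astype(float)
    return (thumb - thumb.mean()) / (thumb.std() + 1e-9)

def best_match(query, database):
    """Return the index of the database signature closest to the query."""
    diffs = [np.abs(query - sig).sum() for sig in database]
    return int(np.argmin(diffs))

# Example with synthetic images; a darkened revisit still matches its place.
rng = np.random.default_rng(0)
db_images = [rng.integers(0, 256, (480, 640)) for _ in range(5)]
db_sigs = [thumbnail_signature(im) for im in db_images]
query = thumbnail_signature(np.clip(db_images[3] * 0.4, 0, 255))
print("matched place:", best_match(query, db_sigs))
```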

Relevance:

10.00%

Publisher:

Abstract:

Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information; be that information for a specific study, tweets which can inform emergency services or other responders during an ongoing crisis, or information which can give an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the datasets collected and analyzed are preformed, that is, they are built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as authoritative sources. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis.
A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created. In a way, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss the opportunities and methods to extract smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme amongst these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid in future data collection. The intention is that, through the papers presented and subsequent discussion, the panel will inform the wider research community not only on the objectives and limitations of data collection, live analytics, and filtering, but also on current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
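
A hedged sketch of the kind of content-analysis scoring described for the first paper (the keyword weights, handles, and scores below are entirely hypothetical):

```python
# Score incoming tweets for relevance and urgency so the highest-scoring ones
# are queued for manual review by responders first.
RELEVANCE_KEYWORDS = {"flood": 3, "evacuate": 4, "road closed": 2, "#qldfloods": 3}
URGENCY_KEYWORDS = {"help": 3, "trapped": 5, "urgent": 4, "now": 1}
AUTHORITATIVE_AUTHORS = {"qpsmedia", "abcemergency"}   # illustrative handles only

def score_tweet(text, author):
    text = text.lower()
    relevance = sum(w for kw, w in RELEVANCE_KEYWORDS.items() if kw in text)
    urgency = sum(w for kw, w in URGENCY_KEYWORDS.items() if kw in text)
    authority_bonus = 5 if author.lower() in AUTHORITATIVE_AUTHORS else 0
    return relevance + urgency + authority_bonus

queue = sorted(
    [("Road closed at the bridge, please evacuate now", "resident123"),
     ("Great match on TV tonight!", "sportsfan"),
     ("Urgent: family trapped by flood water, need help", "qpsmedia")],
    key=lambda t: score_tweet(*t), reverse=True)
print(queue[0][0])   # highest-priority tweet goes to responders first
```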

Relevance:

10.00%

Publisher:

Abstract:

In nature, the interactions between agents in a complex system (fish schools, ant colonies) are governed by information that is locally created. Each agent self-organizes (adjusts) its behaviour, not through a central command centre, but based on variables that emerge from its interactions with other system agents in the neighbourhood. Self-organization has been proposed as a mechanism to explain the tendency of individual performers to interact with each other in field-invasion sports teams, displaying functional co-adaptive behaviours without the need for central control. The relevance of self-organization as a mechanism that explains pattern-forming dynamics within attacker-defender interactions in field-invasion sports has been sustained in the literature. Nonetheless, other levels of interpersonal coordination, such as intra-team interactions, still raise important questions, particularly with reference to the role of leadership or match strategies that have been prescribed in advance by a coach. The existence of key properties of complex systems, such as system degeneracy, nonlinearity or contextual dependency, suggests that self-organization is a functional mechanism to explain the emergence of interpersonal coordination tendencies within intra-team interactions. In this opinion article we propose how leadership may act as a key constraint on the emergent, self-organizational tendencies of performers in field-invasion sports.

Relevance:

10.00%

Publisher:

Abstract:

Quantitative analysis is increasingly being used in team sports to better understand performance in these stylized, delineated, complex social systems. Here we provide a first step toward understanding the pattern-forming dynamics that emerge from collective offensive and defensive behavior in team sports. We propose a novel method of analysis that captures how teams occupy sub-areas of the field as the ball changes location. We used the method to analyze a game of association football (soccer) based upon the hypothesis that local player numerical dominance is key to defensive stability and offensive opportunity. We found that the teams consistently allocated more players than their opponents in sub-areas of play closer to their own goal. This is consistent with a predominantly defensive strategy intended to prevent yielding even a single goal. We also found differences between the two teams' strategies: while both adopted the same distribution of defensive, midfield, and attacking players (a 4:3:3 system of play), one team was significantly more effective in maintaining both defensive and offensive numerical dominance, and hence defensive stability and offensive opportunity. That team indeed won the match by a margin of one goal (2 to 1), but the analysis shows that its advantage in play was more pervasive than the single-goal victory would indicate. Our focus on the local dynamics of team collective behavior is distinct from the traditional focus on individual player capability. It supports a broader view in which specific player abilities contribute within the context of the dynamics of multiplayer team coordination and coaching strategy. By applying this complex-systems analysis to association football, we can understand how players' and teams' strategies result in successful and unsuccessful relationships between teammates and opponents in the area of play.
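
As a rough sketch of the sub-area occupancy idea (our own simplification, not the authors' analysis code; pitch dimensions and grid size are assumptions), local numerical dominance can be computed by counting each team's players in the grid cell that contains the ball:

```python
# Local numerical dominance: player counts per team in the ball's sub-area.
import numpy as np

PITCH = (105.0, 68.0)        # pitch length and width in metres (illustrative)
GRID = (3, 3)                # 3 x 3 sub-areas

def cell_of(xy):
    """Map a pitch coordinate to its (row, col) sub-area."""
    col = min(int(xy[0] / PITCH[0] * GRID[1]), GRID[1] - 1)
    row = min(int(xy[1] / PITCH[1] * GRID[0]), GRID[0] - 1)
    return row, col

def local_dominance(team_a_xy, team_b_xy, ball_xy):
    """Players of each team in the ball's sub-area; positive margin favours team A."""
    ball_cell = cell_of(ball_xy)
    count_a = sum(cell_of(p) == ball_cell for p in team_a_xy)
    count_b = sum(cell_of(p) == ball_cell for p in team_b_xy)
    return count_a, count_b, count_a - count_b

# Example frame with synthetic positions.
rng = np.random.default_rng(0)
team_a = rng.uniform((0, 0), PITCH, size=(10, 2))
team_b = rng.uniform((0, 0), PITCH, size=(10, 2))
ball = np.array([30.0, 20.0])
print(local_dominance(team_a, team_b, ball))
```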

Relevance:

10.00%

Publisher:

Abstract:

This study investigated changes in the complexity (magnitude and structure of variability) of the collective behaviours of association football teams during competitive performance. Raw positional data from an entire competitive match between two professional teams were obtained with the ProZone® tracking system. Five compound positional variables were used to investigate the collective patterns of performance of each team: surface area, stretch index, team length, team width, and geometrical centre. Analyses involved the coefficient of variation (%CV) and approximate entropy (ApEn), as well as the linear association between the two parameters. The collective measures successfully captured the idiosyncratic behaviours of each team and their variations across the six time periods of the match. Key events such as goals scored and game breaks (such as half time and full time) seemed to influence the collective patterns of performance. While ApEn values significantly decreased during each half, the %CV increased. Teams therefore seem to become more regular and predictable, but with increased magnitudes of variation in their organisational shape, over the natural course of a match.
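
The two complexity measures can be computed from a collective variable with the standard formulas sketched below (a generic illustration, not the study's analysis scripts; the tolerance r = 0.2 × SD and m = 2 are conventional choices):

```python
# Coefficient of variation (%CV) and approximate entropy (ApEn) of a time series
# such as a team's stretch index sampled over a match.
import numpy as np

def coefficient_of_variation(x):
    """%CV: the standard deviation as a percentage of the mean."""
    x = np.asarray(x, float)
    return 100.0 * x.std() / x.mean()

def approximate_entropy(x, m=2, r_factor=0.2):
    """ApEn(m, r) with r expressed as a fraction of the series' standard deviation."""
    x = np.asarray(x, float)
    r = r_factor * x.std()

    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of templates.
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        counts = np.sum(dist <= r, axis=1) / n
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

# Example: a noisy oscillation is more regular (lower ApEn) than white noise.
rng = np.random.default_rng(0)
t = np.arange(500)
regular = 50 + 5 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.5, t.size)
noisy = 50 + rng.normal(0, 5, t.size)
for name, series in [("regular", regular), ("noisy", noisy)]:
    print(name, f"%CV={coefficient_of_variation(series):.1f}",
          f"ApEn={approximate_entropy(series):.2f}")
```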

Relevance:

10.00%

Publisher:

Abstract:

A microgrid contains both distributed generators (DGs) and loads and can be viewed as a controllable load by utilities. The DGs can be either inertial synchronous generators or non-inertial, converter-interfaced units. Moreover, some of them can come online or go offline in a plug-and-play fashion. The combination of these various types of operation makes microgrid control a challenging task, especially when the microgrid operates in an autonomous mode. In this paper, a new phase-locked loop (PLL) algorithm is proposed for the smooth synchronization of plug-and-play DGs. A frequency droop is used for power sharing, and a pseudo inertia is introduced for non-inertial DGs in order to match their response with that of inertial DGs. The proposed strategy is validated through PSCAD simulation studies.
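
A minimal sketch of the droop and pseudo-inertia idea under textbook assumptions (the droop gain, time constants, and per-unit values are illustrative, and the PLL and PSCAD models are not represented):

```python
# Frequency droop sets a DG's output power from the frequency deviation; a
# first-order lag on that response emulates inertia for a converter-interfaced DG.
import numpy as np

F_NOM = 50.0          # nominal frequency, Hz
DT = 0.01             # simulation step, s

def droop_power(freq, p_rated, droop_gain):
    """P = P_rated + (f_nom - f) / droop_gain  (more output when frequency sags)."""
    return p_rated + (F_NOM - freq) / droop_gain

def simulate(pseudo_inertia_tau, steps=300, freq_drop=0.2):
    """Step the frequency down at t = 0.5 s and track the DG's power response."""
    p = droop_power(F_NOM, p_rated=0.5, droop_gain=0.5)   # pre-event output, p.u.
    trace = []
    for k in range(steps):
        freq = F_NOM - (freq_drop if k >= 50 else 0.0)
        p_target = droop_power(freq, p_rated=0.5, droop_gain=0.5)
        # First-order lag emulates inertia: dP/dt = (P_target - P) / tau.
        p += DT * (p_target - p) / pseudo_inertia_tau
        trace.append(p)
    return np.array(trace)

fast = simulate(pseudo_inertia_tau=0.05)   # nearly instantaneous (little inertia)
slow = simulate(pseudo_inertia_tau=0.5)    # pseudo-inertia added
print(f"power 0.1 s after the event: fast={fast[60]:.3f} p.u., slow={slow[60]:.3f} p.u.")
```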

Relevance:

10.00%

Publisher:

Abstract:

Increased levels of polybrominated diphenyl ethers (PBDEs) can occur particularly in dust and soil surrounding facilities that recycle products containing PBDEs. This may be a source of increased exposure for nearby workers and residents. To investigate, we measured PBDE levels in soil, office dust and blood of workers at the closest workplace (i.e. within 100 m) to a large automotive shredding and metal recycling facility in Brisbane, Australia. The workplace investigated in this study was independent of the automotive shredding facility and was one of approximately 50 businesses of varying types within a relatively large commercial/industrial area surrounding the recycling facility. Concentrations of PBDEs in soils were at least an order of magnitude greater than background levels in the area. Congener profiles were dominated by larger-molecular-weight congeners, in particular BDE-209. This reflected the profile in outdoor air samples previously collected at this site. Biomonitoring data from blood serum indicated no differential exposure for workers near the recycling facility compared to a reference group of office workers, also in Brisbane. Unlike the air, indoor dust and soil sample profiles, serum samples from both worker groups were dominated by congeners BDE-47, BDE-153, BDE-99, BDE-100 and BDE-183 and were similar to the profile previously reported in the general Australian population. Estimated exposures for workers near the industrial point source suggested that indoor workers had significantly higher exposure than outdoor workers due to their exposure to indoor dust rather than soil. However, no relationship was observed between blood PBDE levels and the different roles and activity patterns of workers on-site. These comparisons of PBDE levels in serum provide additional insight into the inter-individual variability within Australia. Results also indicate that congener patterns in the workplace environment did not match the blood profiles of workers. This was attributed to the relatively high background exposures of the general Australian population via dietary intake and the home environment.

Relevance:

10.00%

Publisher:

Abstract:

Robust facial expression recognition (FER) under occluded face conditions is challenging. It requires robust algorithms for feature extraction and investigation of the effects of different types of occlusion on recognition performance to gain insight. Previous FER studies in this area have been limited: they have focused on recovery strategies for the loss of local texture information, and testing has been restricted to only a few types of occlusion, predominantly with a matched train-test strategy. This paper proposes a robust approach that employs a Monte Carlo algorithm to extract a set of Gabor-based part-face templates from gallery images and converts these templates into template match distance features. The resulting feature vectors are robust to occlusion because occluded parts are covered by some, but not all, of the random templates. The method is evaluated using facial images with occluded regions around the eyes and the mouth, randomly placed occlusion patches of different sizes, and near-realistic occlusion of the eyes with clear and solid glasses. Both matched and mismatched train and test strategies are adopted to analyze the effects of such occlusion. Overall recognition performance and the performance for each facial expression are investigated. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the high robustness and fast processing speed of our approach, and provide useful insight into the effects of occlusion on FER. The results on parameter sensitivity demonstrate a certain level of robustness of the approach to changes in the orientation and scale of the Gabor filters, the size of the templates, and the occlusion ratios. Performance comparisons with previous approaches show that the proposed method is more robust to occlusion, with lower reductions in accuracy from occlusion of the eyes or mouth.
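
A hedged sketch of the template match distance idea (an interpretation using plain intensity patches; the Gabor filtering, gallery selection, and classifier stages of the paper are omitted):

```python
# Monte Carlo part-face templates converted into match distance features.
# Occluded regions corrupt only the templates that overlap them, which is what
# keeps the overall feature vector usable under partial occlusion.
import numpy as np

rng = np.random.default_rng(0)

def sample_templates(gallery_img, n_templates=50, size=16):
    """Monte Carlo sampling of square patches from a gallery face image."""
    h, w = gallery_img.shape
    tops = rng.integers(0, h - size, n_templates)
    lefts = rng.integers(0, w - size, n_templates)
    return [(t, l, gallery_img[t:t + size, l:l + size].astype(float))
            for t, l in zip(tops, lefts)]

def match_distance_features(probe_img, templates, search=4):
    """For each template, the minimum mean absolute difference within a small
    search window around the template's original location."""
    probe = probe_img.astype(float)
    feats = []
    for top, left, patch in templates:
        size = patch.shape[0]
        best = np.inf
        for dt in range(-search, search + 1):
            for dl in range(-search, search + 1):
                t, l = top + dt, left + dl
                if 0 <= t and 0 <= l and t + size <= probe.shape[0] and l + size <= probe.shape[1]:
                    d = np.mean(np.abs(probe[t:t + size, l:l + size] - patch))
                    best = min(best, d)
        feats.append(best)
    return np.array(feats)   # feed this vector to any classifier

# Example with a synthetic 64x64 "face" and an occluded copy of it.
face = rng.integers(0, 256, (64, 64))
occluded = face.copy()
occluded[20:36, 16:48] = 0                 # block over the "eye" region
templates = sample_templates(face)
print(match_distance_features(occluded, templates)[:5])
```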

Relevance:

10.00%

Publisher:

Abstract:

Most existing motorway traffic safety studies using disaggregate traffic flow data aim to develop models for identifying real-time traffic risks by comparing pre-crash and non-crash conditions. One of the serious shortcomings of those studies is that non-crash conditions are arbitrarily selected and hence not representative; i.e. the selected non-crash data might not be the right data to compare with pre-crash data, the non-crash/pre-crash ratio is arbitrarily decided and neglects the abundance of non-crash over pre-crash conditions, etc. Here, we present a methodology for developing a real-time MotorwaY Traffic Risk Identification Model (MyTRIM) using individual vehicle data, meteorological data, and crash data. Non-crash data are clustered into groups called traffic regimes. Thereafter, pre-crash data are classified into regimes to match them with the relevant non-crash data. Of the eight traffic regimes obtained in total, four highly risky regimes were identified, and three regime-based Risk Identification Models (RIM) with sufficient pre-crash data were developed. MyTRIM memorizes the latest risk evolution identified by RIM to predict near-future risks. Traffic practitioners can decide MyTRIM's memory size based on the trade-off between detection and false alarm rates. Decreasing the memory size from 5 to 1 increases the detection rate from 65.0% to 100.0% and the false alarm rate from 0.21% to 3.68%. Moreover, critical factors in differentiating pre-crash and non-crash conditions are identified and can be used for developing preventive measures. MyTRIM can be used by practitioners in real time as an independent tool to make online decisions or can be integrated with existing traffic management systems.
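
A generic sketch of the regime-matching step (our own illustration with hypothetical traffic features and synthetic data, not MyTRIM itself): non-crash observations are clustered into regimes, and pre-crash observations are then assigned to their nearest regime for comparison.

```python
# Cluster non-crash traffic data into regimes, then assign pre-crash data to them.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical features per observation: [mean speed (km/h), flow (veh/h/lane), occupancy (%)]
non_crash = np.column_stack([rng.normal(90, 15, 2000),
                             rng.normal(1500, 400, 2000),
                             rng.normal(12, 5, 2000)])
pre_crash = np.column_stack([rng.normal(70, 20, 60),
                             rng.normal(1800, 350, 60),
                             rng.normal(20, 6, 60)])

# Standardize features, then cluster non-crash data into eight regimes.
mu, sigma = non_crash.mean(axis=0), non_crash.std(axis=0)
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit((non_crash - mu) / sigma)

# Assign each pre-crash observation to its nearest regime.
pre_crash_regimes = kmeans.predict((pre_crash - mu) / sigma)
counts = np.bincount(pre_crash_regimes, minlength=8)
print("pre-crash observations per regime:", counts)   # regimes rich in pre-crash data are the risky ones
```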