193 results for Picture and Image Generation


Relevance:

100.00%

Publisher:

Abstract:

To detect and annotate the key events of live sports videos, we need to tackle the semantic gaps of audio-visual information. Previous work has successfully extracted semantics from time-stamped web match reports, which are synchronized with the video contents. However, web and social media articles with no time-stamps have not been fully leveraged, even though they are increasingly used to complement the coverage of major sporting tournaments. This paper aims to address this limitation with a novel multimodal summarization framework based on sentiment analysis and players' popularity. It uses audiovisual content, web articles, blogs, and commentators' speech to automatically annotate and visualize the key events and key players in sports tournament coverage. The experimental results demonstrate that the automatically generated video summaries are aligned with the events identified from the official website match reports.
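The ranking idea behind such a framework can be sketched in a few lines: score candidate events from accompanying text by sentiment strength and by the popularity of the players mentioned, then keep the top-scoring events. Everything below (lexicons, player names, popularity scores, the additive weighting) is invented for illustration and is not the paper's actual model.

```python
# Toy event ranking by sentiment and player popularity.
# All names, lexicons and weights are hypothetical.
player_popularity = {"smith": 0.9, "jones": 0.4}
positive = {"brilliant", "stunning", "winner"}
negative = {"miss", "foul", "injury"}

def event_score(text):
    """Sentiment word count plus the popularity of the best-known player."""
    words = text.lower().split()
    sentiment = sum(w in positive for w in words) + sum(w in negative for w in words)
    popularity = max((player_popularity.get(w, 0.0) for w in words), default=0.0)
    return sentiment + popularity

events = [
    "Smith scores a brilliant winner",
    "Quiet spell in midfield",
    "Jones booked for a late foul",
]
# Keep the two highest-scoring events as the summary.
summary = sorted(events, key=event_score, reverse=True)[:2]
```

In this toy run, the goal and the booking outrank the uneventful spell, mirroring how sentiment-bearing, player-centric text segments flag key events.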

Relevance:

100.00%

Publisher:

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
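The flipped-label equivalence in the last sentence is easy to verify numerically: flipping the labels on one half turns agreement there into error, so minimizing the flipped-label empirical risk maximizes the discrepancy between the halves. A minimal sketch on a toy class of threshold classifiers (the data and hypothesis class are illustrative, not from the paper):

```python
import random

random.seed(0)

# Toy sample: noisy threshold labels in {0, 1}.
n = 200
pairs = []
for _ in range(n):
    x = random.random()
    y = int(x > 0.5)
    if random.random() < 0.1:  # 10% label noise
        y = 1 - y
    pairs.append((x, y))

# Hypothesis class: threshold classifiers h_t(x) = 1[x > t].
thresholds = [i / 20 for i in range(21)]

def err(t, sample):
    """Empirical 0/1 error of h_t on the sample."""
    return sum(int(x > t) != y for x, y in sample) / len(sample)

first, second = pairs[: n // 2], pairs[n // 2 :]

# Maximal discrepancy: largest signed gap between the error on the first
# half and the error on the second half, over the hypothesis class.
max_discrepancy = max(err(t, first) - err(t, second) for t in thresholds)

# Equivalent computation: empirical risk minimization over the full
# training set with the first half's labels flipped.
flipped = [(x, 1 - y) for x, y in first] + second
min_flipped_err = min(err(t, flipped) for t in thresholds)
identity = 1 - 2 * min_flipped_err  # equals max_discrepancy
```

The identity holds because the flipped-label error of any h is 1/2 minus half its discrepancy, so the minimizer of one is the maximizer of the other.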

Relevance:

100.00%

Publisher:

Abstract:

We investigate the use of certain data-dependent estimates of the complexity of a function class, called Rademacher and Gaussian complexities. In a decision theoretic setting, we prove general risk bounds in terms of these complexities. We consider function classes that can be expressed as combinations of functions from basis classes and show how the Rademacher and Gaussian complexities of such a function class can be bounded in terms of the complexity of the basis classes. We give examples of the application of these techniques in finding data-dependent risk bounds for decision trees, neural networks and support vector machines.
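The empirical (data-dependent) Rademacher complexity can be estimated by straightforward Monte Carlo: draw random sign vectors and average the supremum, over the class, of the sign-weighted empirical mean of the function values. A sketch for a toy class of threshold functions (the class and sample are illustrative, not from the paper):

```python
import random

random.seed(1)

xs = [random.random() for _ in range(50)]
thresholds = [i / 20 for i in range(21)]

def empirical_rademacher(xs, trials=500):
    """Monte Carlo estimate of the empirical Rademacher complexity of the
    threshold class {x -> 1[x > t]} on the sample xs: average over random
    sign vectors of the largest sigma-weighted mean output in the class."""
    n = len(xs)
    total = 0.0
    for _ in range(trials):
        sigma = [random.choice((-1, 1)) for _ in range(n)]
        # sup over t of (1/n) * sum_i sigma_i * h_t(x_i)
        total += max(
            sum(s for s, x in zip(sigma, xs) if x > t) / n for t in thresholds
        )
    return total / trials

r_hat = empirical_rademacher(xs)
```

A richer class would achieve larger sign-weighted means, so this quantity directly measures the capacity that the risk bounds in the abstract are stated in terms of.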

Relevance:

100.00%

Publisher:

Abstract:

Log-linear and maximum-margin models are two commonly-used methods in supervised machine learning, and are frequently used in structured prediction problems. Efficient learning of parameters in these models is therefore an important problem, and becomes a key factor when learning from very large data sets. This paper describes exponentiated gradient (EG) algorithms for training such models, where EG updates are applied to the convex dual of either the log-linear or max-margin objective function; the dual in both the log-linear and max-margin cases corresponds to minimizing a convex function with simplex constraints. We study both batch and online variants of the algorithm, and provide rates of convergence for both cases. In the max-margin case, O(1/ε) EG updates are required to reach a given accuracy ε in the dual; in contrast, for log-linear models only O(log(1/ε)) updates are required. For both the max-margin and log-linear cases, our bounds suggest that the online EG algorithm requires a factor of n less computation to reach a desired accuracy than the batch EG algorithm, where n is the number of training examples. Our experiments confirm that the online algorithms are much faster than the batch algorithms in practice. We describe how the EG updates factor in a convenient way for structured prediction problems, allowing the algorithms to be efficiently applied to problems such as sequence learning or natural language parsing. We perform extensive evaluation of the algorithms, comparing them to L-BFGS and stochastic gradient descent for log-linear models, and to SVM-Struct for max-margin models. The algorithms are applied to a multi-class problem as well as to a more complex large-scale parsing task. In all these settings, the EG algorithms presented here outperform the other methods.
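The EG update on the simplex has a simple multiplicative form: scale each coordinate by the exponential of the negative gradient and renormalize, so every iterate remains a distribution. A minimal sketch on a toy linear objective (the objective and step size are invented for illustration; the paper applies the update to the convex dual of the log-linear or max-margin objective):

```python
import math

def eg_update(w, grad, eta):
    """One exponentiated-gradient step on the probability simplex: scale
    each weight by exp(-eta * gradient) and renormalize."""
    scaled = [wi * math.exp(-eta * gi) for wi, gi in zip(w, grad)]
    z = sum(scaled)
    return [s / z for s in scaled]

# Toy objective f(w) = c . w over the simplex; EG drives all mass onto
# the cheapest coordinate while keeping every iterate a distribution.
c = [0.9, 0.1, 0.5]
w = [1.0 / 3.0] * 3
for _ in range(200):
    w = eg_update(w, c, eta=0.5)
```

Because the update is multiplicative, a coordinate started at zero stays at zero, and the simplex constraint never needs explicit projection.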

Relevance:

100.00%

Publisher:

Abstract:

Spontaneous facial expressions differ from posed ones in appearance, timing and accompanying head movements. Still images cannot provide timing or head movement information directly. However, the distances between key points on a face, extracted from a still image using active shape models, can indirectly capture some movement and pose changes. This information is superposed on information about non-rigid facial movement that is also part of the expression. Does geometric information improve the discrimination between spontaneous and posed facial expressions arising from discrete emotions? We investigate the performance of a machine vision system for discriminating between posed and spontaneous versions of six basic emotions that uses SIFT appearance-based features and FAP geometric features. Experimental results on the NVIE database demonstrate that fusion of geometric information leads only to marginal improvement over appearance features. Using fusion features, surprise is the easiest emotion to distinguish (83.4% accuracy), while disgust is the most difficult (76.1%). Our results show that the facial regions important for discriminating the posed from the spontaneous version of an emotion differ from those important for classifying that emotion against other emotions. The distribution of the selected SIFT features shows that the mouth is more important for sadness and the nose for surprise, while both the nose and mouth are important for disgust, fear, and happiness. Eyebrows, eyes, nose and mouth are important for anger.
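The geometric features discussed above are distances between facial key points. A minimal sketch of such FAP-style distance features, normalized for scale; the landmark names and coordinates are hypothetical, and the actual FAP set and active-shape-model fitting are not reproduced here:

```python
import math

# Hypothetical facial landmarks (x, y), e.g. eye and mouth corners,
# as an active shape model might return them.
landmarks = {
    "left_eye": (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "mouth_left": (38.0, 80.0),
    "mouth_right": (62.0, 80.0),
}

def distance_features(pts):
    """All pairwise Euclidean distances between landmarks, normalized by
    the inter-ocular distance so the features are insensitive to scale."""
    names = sorted(pts)
    iod = math.dist(pts["left_eye"], pts["right_eye"])
    feats = {}
    for i, a in enumerate(names):
        for b in names[i + 1 :]:
            feats[f"{a}-{b}"] = math.dist(pts[a], pts[b]) / iod
    return feats

feats = distance_features(landmarks)
```

Pose and expression changes shift these ratios, which is how a single still image can carry an indirect trace of movement.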

Relevance:

100.00%

Publisher:

Abstract:

Facial expression recognition (FER) algorithms mainly focus on classification into a small discrete set of emotions or representation of emotions using facial action units (AUs). Dimensional representation of emotions as continuous values in an arousal-valence space is relatively less investigated. It is not fully known whether fusion of geometric and texture features will result in better dimensional representation of spontaneous emotions. Moreover, the performance of many previously proposed approaches to dimensional representation has not been evaluated thoroughly on publicly available databases. To address these limitations, this paper presents an evaluation framework for dimensional representation of spontaneous facial expressions using texture and geometric features. SIFT, Gabor and LBP features are extracted around facial fiducial points and fused with FAP distance features. The CFS algorithm is adopted for discriminative texture feature selection. Experimental results evaluated on the publicly accessible NVIE database demonstrate that fusion of texture and geometry does not lead to a much better performance than using texture alone, but does result in a significant performance improvement over geometry alone. LBP features perform the best when fused with geometric features. Distributions of arousal and valence for different emotions obtained via the feature extraction process are compared with those obtained from subjective ground truth values assigned by viewers. Predicted valence is found to have a more similar distribution to ground truth than arousal in terms of covariance or Bhattacharyya distance, but it shows a greater distance between the means.
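The Bhattacharyya distance used above to compare predicted and ground-truth distributions has a closed form for Gaussians. A sketch for the one-dimensional case (the input means and variances are illustrative only):

```python
import math

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two 1-D Gaussians: it grows with
    the gap between the means and with the mismatch in variances."""
    term_mean = 0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
    term_var = 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2)))
    return term_mean + term_var

# Identical distributions are at distance zero; shifting one mean by a
# standard deviation gives a distance of 1/8.
d_same = bhattacharyya_gaussian(0.0, 1.0, 0.0, 1.0)
d_shifted = bhattacharyya_gaussian(0.0, 1.0, 1.0, 1.0)
```

The two terms separate exactly the effects the abstract discusses: a distribution can match in spread (small variance term) yet still sit far away in its mean.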

Relevance:

100.00%

Publisher:

Abstract:

In this paper we investigate the heuristic construction of bijective s-boxes that satisfy a wide range of cryptographic criteria, including algebraic complexity, high nonlinearity and low autocorrelation, and that have none of the known weaknesses, such as linear structures, fixed points or linear redundancy. We demonstrate that power mappings can be evolved (by iterated mutation operators alone) to generate bijective s-boxes with the best known tradeoffs among the considered criteria. The s-boxes found are suitable for use directly in modern encryption algorithms.
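Swapping two entries of a permutation is the natural mutation operator here, since it can never break bijectivity. A toy sketch for a 4-bit s-box using one easily checked criterion, fixed points, as the objective; the paper's full fitness combines nonlinearity, autocorrelation and algebraic criteria, which are omitted here:

```python
import random

random.seed(42)

# Start from a random bijective 4-bit s-box (a permutation of 0..15).
sbox = list(range(16))
random.shuffle(sbox)

def mutate(sbox):
    """Swap two entries: this mutation always preserves bijectivity,
    so the search never leaves the space of valid s-boxes."""
    s = sbox[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def fixed_points(sbox):
    """Count inputs x with sbox[x] == x (a known weakness)."""
    return sum(1 for x, y in enumerate(sbox) if x == y)

# A toy hill climb: accept mutations that do not increase fixed points.
for _ in range(500):
    cand = mutate(sbox)
    if fixed_points(cand) <= fixed_points(sbox):
        sbox = cand
```

Any accepted swap that touches a fixed point removes it without creating a new one, so this count is driven to zero while the table remains a permutation throughout.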

Relevance:

100.00%

Publisher:

Abstract:

In this paper we describe a body of work aimed at extending the reach of mobile navigation and mapping. We describe how running topological and metric mapping and pose estimation processes concurrently, using vision and laser ranging, has produced a full six-degree-of-freedom outdoor navigation system. It is capable of producing intricate three-dimensional maps over many kilometers and in real time. We consider issues concerning the intrinsic quality of the built maps and describe our progress towards adding semantic labels to maps via scene de-construction and labeling. We show how our choices of representation, inference methods and use of both topological and metric techniques naturally allow us to fuse maps built from multiple sessions with no need for manual frame alignment or data association.

Relevance:

100.00%

Publisher:

Abstract:

Background The majority of peptide bonds in proteins are found to occur in the trans conformation. However, for proline residues, a considerable fraction of prolyl peptide bonds adopt the cis form. Proline cis/trans isomerization is known to play a critical role in protein folding, splicing, cell signaling and transmembrane active transport. Accurate prediction of proline cis/trans isomerization in proteins would have many important applications towards the understanding of protein structure and function. Results In this paper, we propose a new approach to predict proline cis/trans isomerization in proteins using a support vector machine (SVM). Preliminary results indicated that using radial basis function (RBF) kernels could lead to better prediction performance than polynomial and linear kernel functions. We used single-sequence information with different local window sizes, amino acid compositions of different local sequences, multiple sequence alignments obtained from PSI-BLAST, and the secondary structure information predicted by PSIPRED. We explored these different sequence encoding schemes in order to investigate their effects on the prediction performance. The training and testing of this approach were performed on a newly enlarged dataset of 2424 non-homologous proteins determined by the X-ray diffraction method, using 5-fold cross-validation. A window size of 11 provided the best performance for determining proline cis/trans isomerization based on the single amino acid sequence. It was found that using multiple sequence alignments in the form of PSI-BLAST profiles could significantly improve the prediction performance: accuracy increased from 62.8% with the single sequence to 69.8%, and the Matthews correlation coefficient (MCC) improved from 0.26 to 0.40.
Furthermore, when coupled with the secondary structure information predicted by PSIPRED, our method yielded a prediction accuracy of 71.5% and an MCC of 0.43, which are 9% and 0.17 higher, respectively, than those achieved with the single-sequence information. Conclusion A new method has been developed to predict proline cis/trans isomerization in proteins based on a support vector machine, using the single amino acid sequence with different local window sizes, the amino acid compositions of the local sequence flanking the central proline residue, the position-specific scoring matrices (PSSMs) extracted by PSI-BLAST, and the predicted secondary structures generated by PSIPRED. The successful application of the SVM approach in this study reinforces that SVM is a powerful tool for predicting proline cis/trans isomerization in proteins and for biological sequence analysis in general.
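The single-sequence input described above is typically a one-hot encoding of the residues in a window centered on the proline. A sketch of such an encoding for the best-performing window size of 11; the example sequence is invented, and the PSSM and secondary-structure channels the paper adds are omitted:

```python
# One-hot sliding-window encoding of the residues flanking a proline.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PAD = "X"  # padding for windows that run off the sequence ends

def window_features(seq, pos, size=11):
    """Encode the size-residue window centered at pos (a proline) as a
    flat one-hot vector; padding/unknown residues encode to all zeros."""
    half = size // 2
    window = [
        seq[i] if 0 <= i < len(seq) else PAD
        for i in range(pos - half, pos + half + 1)
    ]
    vec = []
    for aa in window:
        one_hot = [0] * len(AMINO_ACIDS)
        if aa in AMINO_ACIDS:
            one_hot[AMINO_ACIDS.index(aa)] = 1
        vec.extend(one_hot)
    return vec

# Hypothetical sequence; encode the window around its first proline.
seq = "MKTAYIAKQRPQISFVKSHFSRQLEERLGLIEVQ"
feats = window_features(seq, seq.index("P"))
```

The resulting 11 x 20 = 220-dimensional vector is what an RBF-kernel SVM would consume, one vector per proline residue.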

Relevance:

100.00%

Publisher:

Abstract:

A system to segment and recognize Australian 4-digit postcodes from address labels on parcels is described. Images of address labels are preprocessed and adaptively thresholded to reduce noise. Projections are used to segment the line and then the characters comprising the postcode. Individual digits are recognized using bispectral features extracted from their parallel beam projections. These features are insensitive to translation, scaling and rotation, and robust to noise. Results on scanned images are presented. The system is currently being improved and implemented to work on-line.
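Projection-based character segmentation reduces to summing the ink in each column of the binarized image and cutting at empty columns. A minimal sketch on a tiny binary image (illustrative only; the preprocessing, line segmentation and bispectral features are omitted):

```python
def column_projection(image):
    """image: list of rows of 0/1 pixels. Returns per-column ink counts."""
    return [sum(col) for col in zip(*image)]

def segment_columns(image):
    """Split the image into runs of consecutive non-empty columns,
    returning (start, end) column ranges, end exclusive."""
    proj = column_projection(image)
    segments, start = [], None
    for x, ink in enumerate(proj):
        if ink and start is None:
            start = x
        elif not ink and start is not None:
            segments.append((start, x))
            start = None
    if start is not None:
        segments.append((start, len(proj)))
    return segments

# Two tiny "characters" separated by an empty column.
img = [
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
]
chars = segment_columns(img)
```

The same idea applied to row projections isolates the text line before the per-character cuts are made.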

Relevance:

100.00%

Publisher:

Abstract:

Over less than a decade, we have witnessed a seismic shift in the way knowledge is produced and exchanged. This is opening up new opportunities for civic and community engagement, entrepreneurial behaviour, sustainability initiatives and creative practices. It also has the potential to create fresh challenges in areas of privacy, cyber-security and misuse of data and personal information.

The field of urban informatics focuses on the use and impacts of digital media technology in urban environments. Urban informatics is a dynamic and cross-disciplinary area of inquiry that encapsulates social media, ubiquitous computing, mobile applications and location-based services. Its insights suggest the emergence of a new economic force with the potential for driving innovation, wealth and prosperity through technological advances, digital media and online networks that affect patterns of both social and economic development. Urban informatics explores the intersections between people, place and technology, and their implications for creativity, innovation and engagement.

This paper examines how the key learnings from this field can be used to position creative and cultural institutions such as galleries, libraries, archives and museums (GLAM) to take advantage of the opportunities presented by these changing social and technological developments. This paper introduces the underlying principles, concepts and research areas of urban informatics, against the backdrop of modern knowledge economies. Both theoretical ideas and empirical examples are covered in this paper. The first part discusses three challenges:

a. People, and the challenge of creativity: The paper explores the opportunities and challenges of urban informatics that can lead to the design and development of new tools, methods and applications fostering participation, the democratisation of knowledge, and new creative practices.

b. Technology, and the challenge of innovation: The paper examines how urban informatics can be applied to support user-led innovation with a view to promoting entrepreneurial ideas and creative industries.

c. Place, and the challenge of engagement: The paper discusses the potential to establish place-based applications of urban informatics, using the example of library spaces designed to deliver community and civic engagement strategies.

The discussion of these challenges is illustrated by a review of projects as examples drawn from diverse fields such as urban computing, locative media, community activism, and sustainability initiatives.

The second part of the paper introduces an empirically grounded case study that responds to these three challenges: The Edge, the Queensland Government’s Digital Culture Centre, which is an initiative of the State Library of Queensland to explore the nexus of technology and culture in an urban environment. The paper not only explores the new role of libraries in the knowledge economy, but also how the application of urban informatics in prototype engagement spaces such as The Edge can provide transferable insights that can inform the design and development of responsive and inclusive new library spaces elsewhere.

To set the scene and background, the paper begins by drawing the bigger picture and outlining some key characteristics of the knowledge economy and the role that the creative and cultural industries play in it, grasping new opportunities that can contribute to the prosperity of Australia.

Relevance:

100.00%

Publisher:

Abstract:

Agents make up an important part of game worlds, ranging from the characters and monsters that live in the world to the armies that the player controls. Despite their importance, agents in current games rarely display an awareness of their environment or react appropriately, which severely detracts from the believability of the game. Some games have included agents with a basic awareness of other agents, but they are still unaware of important game events or environmental conditions. This paper presents an agent design we have developed, which combines cellular automata for environmental modeling with influence maps for agent decision-making. The agents were implemented in a 3D game environment we have developed, the EmerGEnT system, and tuned through three experiments. The result is simple, flexible game agents that are able to respond to natural phenomena (e.g. rain or fire) while pursuing a goal.
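The coupling of a cellular automaton (environmental modeling) with an influence map (decision-making) can be sketched compactly. Everything below, from the grid size to the fire-spread rule and the danger falloff, is a toy stand-in for the EmerGEnT system's actual models:

```python
# Toy grid world: a CA spreads fire, an influence map turns burning cells
# into a danger field, and the agent reads the field to choose a move.
SIZE = 5

def step_fire(burning):
    """CA rule (illustrative): a cell catches fire if a 4-neighbour burns."""
    nxt = set(burning)
    for (x, y) in burning:
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < SIZE and 0 <= ny < SIZE:
                nxt.add((nx, ny))
    return nxt

def influence(burning):
    """Danger at each cell falls off with Manhattan distance to the fire."""
    grid = {}
    for x in range(SIZE):
        for y in range(SIZE):
            d = min(abs(x - bx) + abs(y - by) for bx, by in burning)
            grid[(x, y)] = 1.0 / (1 + d)
    return grid

burning = {(0, 0)}
burning = step_fire(burning)
danger = influence(burning)

# The agent at (1, 1) moves to its least dangerous neighbouring cell.
moves = [(2, 1), (1, 2), (0, 1), (1, 0)]
agent = min(moves, key=lambda c: danger[c])
```

The CA and the influence map stay decoupled: the environment evolves by local rules, and the agent only ever consults the derived danger field, which is what makes the design flexible.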

Relevance:

100.00%

Publisher:

Abstract:

This paper defines and discusses two contrasting approaches to designing game environments. The first, referred to as scripting, requires developers to anticipate, hand-craft and script specific game objects, events and player interactions. The second, known as emergence, involves defining general, global rules that interact to give rise to emergent gameplay. Each of these approaches is defined, discussed and analyzed with respect to its considerations and effects for game developers and players. Subsequently, various techniques for implementing these design approaches are identified and discussed. It is concluded that scripting and emergence are two extremes of the same continuum, neither of which is ideal for game development. Rather, there needs to be a compromise in which the boundaries of action (such as story and game objectives) can be hard-coded while non-scripted behavior (such as interactions and strategies) is able to emerge within these boundaries.

Relevance:

100.00%

Publisher:

Abstract:

The phase of an analytic signal constructed from the autocorrelation function of a signal contains significant information about the shape of the signal. Using Bedrosian's (1963) theorem for the Hilbert transform it is proved that this phase is robust to multiplicative noise if the signal is baseband and the spectra of the signal and the noise do not overlap. Higher-order spectral features are interpreted in this context and shown to extract nonlinear phase information while retaining robustness. The significance of the result is that prior knowledge of the spectra is not required.
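The construction described above is straightforward to reproduce: form the autocorrelation, build its discrete analytic signal by zeroing the negative frequencies of the DFT, and read off the phase. A sketch with a toy baseband signal (the signal and its parameters are illustrative):

```python
import numpy as np

def analytic_signal(x):
    """Discrete analytic signal: zero the negative-frequency half of the
    DFT, double the positive half, keep DC (and Nyquist for even length)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1 : n // 2] = 2.0
    else:
        h[1 : (n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

# Baseband test signal and its (biased) autocorrelation at lags 0..N-1.
t = np.arange(256)
x = np.cos(2 * np.pi * 0.05 * t)
r = np.correlate(x, x, mode="full")[len(x) - 1 :] / len(x)

# Phase of the analytic signal of the autocorrelation; for this signal it
# advances roughly linearly at the carrier rate of 2*pi*0.05 per lag.
phase = np.unwrap(np.angle(analytic_signal(r)))
```

The real part of the analytic signal reproduces the autocorrelation exactly, while the unwrapped phase carries the shape information the abstract refers to.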