954 results for video images
Abstract:
Aerial surveys conducted using manned or unmanned aircraft with customized camera payloads can generate a large number of images. Manual review of these images to extract data is prohibitive in terms of time and financial resources, thus providing strong incentive to automate this process using computer vision systems. There are potential applications for these automated systems in areas such as surveillance and monitoring, precision agriculture, law enforcement, asset inspection, and wildlife assessment. In this paper, we present an efficient machine learning system for automating the detection of marine species in aerial imagery. The effectiveness of our approach can be credited to the combination of a well-suited region proposal method and the use of Deep Convolutional Neural Networks (DCNNs). In comparison to previous algorithms designed for the same purpose, we have been able to dramatically improve recall to more than 80% and improve precision to 27% by using DCNNs as the core approach.
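The pipeline the abstract describes, a region proposal stage followed by a DCNN classifier over each proposed region, can be sketched as follows. Everything here is a hypothetical stand-in: the fixed tiling replaces the paper's actual proposal method, and the mean-brightness scorer replaces the trained network.

```python
def propose_regions(height, width, win=32, stride=32):
    """Hypothetical stand-in for a region proposal method: tile the
    image into fixed-size windows. The paper's actual, content-aware
    proposal method is not reproduced here."""
    return [(x, y, win, win)
            for y in range(0, height - win + 1, stride)
            for x in range(0, width - win + 1, stride)]

def detect(image, score_fn, threshold=0.5):
    """Score each proposed region with a classifier and keep confident
    hits. `image` is a 2D list of floats; `score_fn` stands in for a
    DCNN that outputs a detection confidence per patch."""
    h, w = len(image), len(image[0])
    hits = []
    for (x, y, bw, bh) in propose_regions(h, w):
        patch = [row[x:x + bw] for row in image[y:y + bh]]
        score = score_fn(patch)
        if score >= threshold:
            hits.append(((x, y, bw, bh), score))
    return hits

# Toy "classifier": mean patch brightness as a proxy for a learned score.
mean_score = lambda p: sum(map(sum, p)) / (len(p) * len(p[0]))

# Synthetic 64x64 image with one bright window in the top-right tile.
img = [[1.0 if c >= 32 and r < 32 else 0.0 for c in range(64)]
       for r in range(64)]
detections = detect(img, mean_score)
```

The split matters for the recall/precision trade-off the abstract reports: the proposal stage bounds recall (a missed region can never be recovered), while the classifier stage governs precision.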
Abstract:
Self-organized Bi lines that are only 1.5 nm wide can be grown without kinks or breaks on Si(0 0 1) surfaces to lengths of up to 500 nm. Constant-current topographical images of the lines, obtained with the scanning tunneling microscope, have a striking bias dependence. Although the lines appear darker than the Si terraces at biases below ≈∣1.2∣ V, the contrast reverses at biases above ≈∣1.5∣ V. Between these two ranges the lines and terraces are of comparable brightness. It has been suggested that this bias dependence may be due to the presence of a semiconductor-like energy gap within the line. Using ab initio calculations, it is demonstrated that the energy gap is too small to explain the experimentally observed bias dependence, so that mechanism cannot provide a compelling explanation for the phenomenon. An alternative explanation is proposed that arises naturally from calculations of the tunneling current, using the Tersoff–Hamann approximation, and from an examination of the electronic structure of the line.
Abstract:
Analyzing and redesigning business processes is a complex task which requires the collaboration of multiple actors. Current approaches focus on collaborative modeling workshops in which process stakeholders verbally contribute their perspective on a process while modeling experts translate their contributions and integrate them into a model using traditional input devices. Limiting participants to verbal contributions affects not only the outcome of collaboration but also the collaboration itself. We created CubeBPM – a system that allows groups of actors to interact with process models through a touch-based interface on a large interactive touch display wall. We are currently conducting a study that aims at assessing the impact of CubeBPM on collaboration and modeling performance. Initial results presented in this paper indicate that the setting helped participants to become more active in collaboration.
Abstract:
With the availability of a huge amount of video data from various sources, efficient video retrieval tools are increasingly in demand. Video being multi-modal data, perceptions of ``relevance'' between the user-provided query video (in the case of Query-By-Example video search) and the retrieved video clips are subjective in nature. We present an efficient video retrieval method that takes the user's feedback on the relevance of retrieved videos and iteratively reformulates the input query feature vectors (QFV) for improved video retrieval. The QFV reformulation is done by a simple but powerful feature weight optimization method based on the Simultaneous Perturbation Stochastic Approximation (SPSA) technique. A video retrieval system with video indexing, searching and relevance feedback (RF) phases is built to demonstrate the performance of the proposed method. The query and database videos are indexed using conventional video features such as color, texture, etc. However, we use comprehensive and novel methods of feature representation, and a spatio-temporal distance measure, to retrieve the top M videos that are similar to the query. In the feedback phase, the user's iterative feedback on the previously retrieved videos is used to automatically reformulate the QFV weights (measures of importance) so that they reflect the user's preferences. We observe that a few iterations of such feedback are generally sufficient for retrieving the desired video clips. The novel application of SPSA-based RF for user-oriented feature weight optimization distinguishes the proposed method from existing ones. The experimental results show that the proposed RF-based video retrieval exhibits good performance.
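The SPSA update at the heart of such a feedback loop can be illustrated with a minimal sketch. SPSA's appeal is that it needs only two loss evaluations per iteration regardless of how many feature weights are tuned. The quadratic loss and the fixed gain values below are illustrative assumptions, not the paper's settings.

```python
import random

def spsa_minimize(loss, w, a=0.1, c=0.1, iters=200, seed=0):
    """Minimal SPSA sketch: estimate the gradient from two loss
    evaluations per iteration using a random simultaneous +/-1
    perturbation of all weights, then take a gradient step."""
    rng = random.Random(seed)
    w = list(w)
    for _ in range(iters):
        delta = [rng.choice((-1.0, 1.0)) for _ in w]
        w_plus = [wi + c * di for wi, di in zip(w, delta)]
        w_minus = [wi - c * di for wi, di in zip(w, delta)]
        g = (loss(w_plus) - loss(w_minus)) / (2.0 * c)
        # For +/-1 perturbations 1/delta_i == delta_i, so each
        # component's gradient estimate is simply g * delta_i.
        w = [wi - a * g * di for wi, di in zip(w, delta)]
    return w

# Toy "relevance" loss: pretend the ideal feature weights are (0.7, 0.3).
loss = lambda w: (w[0] - 0.7) ** 2 + (w[1] - 0.3) ** 2
weights = spsa_minimize(loss, [0.5, 0.5])
```

In the retrieval setting, `loss` would be replaced by a score derived from the user's relevance judgments on the retrieved list, which is exactly the kind of noisy, gradient-free objective SPSA is designed for.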
Abstract:
We propose a robust method for mosaicing of document images using features derived from connected components. Each connected component is described using the Angular Radial Transform (ART). To ensure geometric consistency during feature matching, the ART coefficients of a connected component are augmented with those of its two nearest neighbors. The proposed method addresses two critical issues often encountered in correspondence matching: (i) the stability of features, and (ii) robustness against false matches due to multiple instances of characters in a document image. The use of connected components guarantees stable localization across images. The augmented features ensure successful correspondence matching even in the presence of multiple similar regions within the page. We illustrate the effectiveness of the proposed method on camera-captured document images exhibiting large variations in viewpoint, illumination and scale.
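The neighbor-augmentation idea can be sketched as follows. Scalar descriptors stand in for the ART coefficient vectors, and the greedy matcher is a simplification of a real correspondence scheme; the point is that two components with identical raw descriptors (repeated characters) become distinguishable once their neighbors' descriptors are appended.

```python
from math import dist  # Euclidean distance, Python 3.8+

def augment(features, centroids):
    """Append each component's two nearest neighbours' descriptors to
    its own, as a stand-in for the ART-based augmentation described
    above. `features[i]` is component i's descriptor tuple and
    `centroids[i]` its position in the page."""
    out = []
    for i, f in enumerate(features):
        order = sorted((j for j in range(len(features)) if j != i),
                       key=lambda j: dist(centroids[i], centroids[j]))
        n1, n2 = order[0], order[1]
        out.append(f + features[n1] + features[n2])
    return out

def match(desc_a, desc_b):
    """Greedy nearest-descriptor correspondence from image A to B."""
    return [min(range(len(desc_b)), key=lambda j: dist(da, desc_b[j]))
            for da in desc_a]

# Components 0 and 1 are identical "characters" at distant positions.
desc = [(1.0,), (1.0,), (9.0,), (5.0,)]
cents = [(0, 0), (100, 0), (1, 0), (101, 0)]
aug = augment(desc, cents)
```

Matching the raw descriptors against themselves collapses components 0 and 1 onto the same match, while the augmented descriptors recover the correct one-to-one correspondence.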
Abstract:
Quantifying the stiffness properties of soft tissues is essential for the diagnosis of many cardiovascular diseases such as atherosclerosis. In these pathologies it is widely agreed that the arterial wall stiffness is an indicator of vulnerability. The present paper focuses on the carotid artery and proposes a new inversion methodology for deriving the stiffness properties of the wall from cine-MRI (magnetic resonance imaging) data. We address this problem by setting up a cost function defined as the distance between the modeled pixel signals and the measured ones. Minimizing this cost function yields the unknown stiffness properties of both the arterial wall and the surrounding tissues. The sensitivity of the identified properties to various sources of uncertainty is studied. Validation of the method is performed on a rubber phantom. The elastic modulus identified using the developed methodology lies within a mean error of 9.6%. The method is then applied to two young healthy subjects as a proof of practical feasibility, with identified values of 625 kPa and 587 kPa for one carotid artery of each subject.
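The inversion loop, minimize a pixel-signal misfit over candidate stiffness values, can be sketched in one dimension. The forward model below (displacement inversely proportional to the modulus) and the brute-force scan are hypothetical simplifications; the paper's actual forward model and minimization scheme are not reproduced here.

```python
def cost(E, measured, model):
    """Sum-of-squares distance between modelled and measured
    pixel signals for a candidate elastic modulus E."""
    return sum((model(E, i) - m) ** 2 for i, m in enumerate(measured))

def identify(measured, model, lo, hi, steps=2000):
    """Brute-force 1D scan over the stiffness range [lo, hi];
    a stand-in for a proper optimizer."""
    candidates = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    return min(candidates, key=lambda E: cost(E, measured, model))

# Hypothetical forward model: wall displacement varies with pixel
# index i and is inversely proportional to the modulus E (in kPa).
model = lambda E, i: (1.0 + 0.1 * i) / E

E_true = 625.0  # kPa; one of the values identified in the paper
measured = [model(E_true, i) for i in range(20)]
E_hat = identify(measured, model, 100.0, 1000.0)
```

With noise-free synthetic "measurements" the scan recovers the generating modulus up to grid resolution; the sensitivity study the abstract mentions corresponds to repeating this with perturbed `measured` signals.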
Abstract:
The rupture of atherosclerotic plaques is known to be associated with the stresses that act on or within the arterial wall. The extreme wall tensile stress (WTS) is usually recognized as a primary trigger for the rupture of vulnerable plaque. The present study used in-vivo high-resolution multi-spectral magnetic resonance imaging (MRI) for carotid arterial plaque morphology reconstruction. Image segmentation of the different plaque components was based on the multi-spectral MRI and co-registered across the different sequences for each patient. Stress analysis was performed on a total of four subjects with different plaque burdens using fluid-structure interaction (FSI) simulations. Wall shear stress distributions are highly related to the degree of stenosis, while their magnitude is much lower than that of the WTS in the fibrous cap. WTS is higher at the luminal wall and lower at the outer wall, with the lowest stress in the lipid region. Local stress concentrations are well confined to the thinner fibrous cap region, usually located in the plaque shoulder; the relative stress variation over a cardiac cycle in the fibrous cap may serve as an indicator of the fatigue process in a thin fibrous cap. Based on the stress analysis of the four subjects, a risk assessment in terms of mechanical factors could be made, which may be helpful in clinical practice. However, more patient-specific analyses are desirable for plaque-stability studies.
Abstract:
The 2008 US election has been heralded as the first presidential election of the social media era, but took place at a time when social media were still in a state of comparative infancy; so much so that the most important platform was not Facebook or Twitter, but the purpose-built campaign site my.barackobama.com, which became the central vehicle for the most successful electoral fundraising campaign in American history. By 2012, the social media landscape had changed: Facebook and, to a somewhat lesser extent, Twitter are now well-established as the leading social media platforms in the United States, and were used extensively by the campaign organisations of both candidates. As third-party spaces controlled by independent commercial entities, however, their use necessarily differs from that of home-grown, party-controlled sites: from the point of view of the platform itself, a @BarackObama or @MittRomney is technically no different from any other account, except for the very high follower count and an exceptional volume of @mentions. In spite of the significant social media experience which Democrat and Republican campaign strategists had already accumulated during the 2008 campaign, therefore, the translation of such experience to the use of Facebook and Twitter in their 2012 incarnations still required a substantial amount of new work, experimentation, and evaluation. This chapter examines the Twitter strategies of the leading accounts operated by both campaign headquarters: the ‘personal’ candidate accounts @BarackObama and @MittRomney as well as @JoeBiden and @PaulRyanVP, and the campaign accounts @Obama2012 and @TeamRomney. 
Drawing on datasets which capture all tweets from and at these accounts during the final months of the campaign (from early September 2012 to the immediate aftermath of the election night), we reconstruct the campaigns’ approaches to using Twitter for electioneering from the quantitative and qualitative patterns of their activities, and explore the resonance which these accounts have found with the wider Twitter userbase. A particular focus of our investigation in this context will be on the tweeting styles of these accounts: the mixture of original messages, @replies, and retweets, and the level and nature of engagement with everyday Twitter followers. We will examine whether the accounts chose to respond (by @replying) to the messages of support or criticism which were directed at them, whether they retweeted any such messages (and whether there was any preferential retweeting of influential or – alternatively – demonstratively ordinary users), and/or whether they were used mainly to broadcast and disseminate prepared campaign messages. Our analysis will highlight any significant differences between the accounts we examine, trace changes in style over the course of the final campaign months, and correlate such stylistic differences with the respective electoral positioning of the candidates. Further, we examine the use of these accounts during moments of heightened attention (such as the presidential and vice-presidential debates, or in the context of controversies such as that caused by the publication of the Romney “47%” video; additional case studies may emerge over the remainder of the campaign) to explore how they were used to present or defend key talking points, and exploit or avert damage from campaign gaffes. 
A complementary analysis of the messages directed at the campaign accounts (in the form of @replies or retweets) will also provide further evidence for the extent to which these talking points were picked up and disseminated by the wider Twitter population. Finally, we also explore the use of external materials (links to articles, images, videos, and other content on the campaign sites themselves, in the mainstream media, or on other platforms) by the campaign accounts, and the resonance which these materials had with the wider follower base of these accounts. This provides an indication of the integration of Twitter into the overall campaigning process, by highlighting how the platform was used as a means of encouraging the viral spread of campaign propaganda (such as advertising materials) or of directing user attention towards favourable media coverage. By building on comprehensive, large datasets of Twitter activity (as of early October, our combined datasets comprise some 3.8 million tweets) which we process and analyse using custom-designed social media analytics tools, and by using our initial quantitative analysis to guide further qualitative evaluation of Twitter activity around these campaign accounts, we are able to provide an in-depth picture of the use of Twitter in political campaigning during the 2012 US election, offering detailed new insights into social media use in contemporary elections. This analysis will then also be able to serve as a touchstone for the analysis of social media use in subsequent elections, in the USA as well as in other developed nations where Twitter and other social media platforms are utilised in electioneering.
Abstract:
This thesis is a comparative case study in Japanese video game localization for the video games Sairen, Sairen 2 and Sairen Nyûtoransurêshon, and the English-language localized versions of the same games as published in Scandinavia and Australia/New Zealand. All games are developed by Sony Computer Entertainment Inc. and published exclusively for Playstation2 and Playstation3 consoles. The fictional world of the Sairen games draws much influence from Japanese history, as well as from popular and contemporary culture, and in doing so caters mainly to a Japanese audience. For localization, i.e. the adaptation of a product to make it accessible to users outside the market it was originally intended for, this is a challenging issue. Video games are media of entertainment, and therefore localization practice must preserve the games' effects on the players' emotions. Further, video games are digital products that are comprised of a multitude of distinct elements, some of which are part of the game world, while others regulate the connection between the player as part of the real world and the game as digital medium. As a result, video game localization is also a practice that has to cope with the technical restrictions that are inherent to the medium. The main theory used throughout the thesis is Anthony Pym's framework for localization studies, which considers the user of the localized product as a defining part of the localization process. This concept presupposes that localization is an adaptation that is performed to make a product better suited for use during a specific reception situation. Pym also addresses the factor that certain products may resist distribution into certain reception situations because of their content, and that certain aspects of localization aim to reduce this resistance through significant alterations of the original product. 
While Pym developed his ideas with mainly regular software in mind, they can also be adapted well to study video games from a localization angle. Since modern video games are highly complex entities that often switch between interactive and non-interactive modes, Pym's ideas are adapted throughout the thesis to suit the particular elements being studied. Instances analyzed in this thesis include menu screens, video clips, in-game action and websites. The main research questions focus on how the games' rules influence localization, and how the games' fictional domain influences localization. Because there are so many peculiarities inherent to the medium of the video game, other theories are introduced as well to complement the research at hand. These include Lawrence Venuti's discussions of foreignizing and domesticating translation methods for literary translation, and Jesper Juul's definition of games. Additionally, knowledge gathered from interviews with video game localization professionals in Japan during September and October 2009 is also utilized for this study. Apart from answering the aforementioned research questions, one of this thesis' aims is to enrich the still rather small field of game localization studies, and the study of Japanese video games in particular, one of Japan's most successful cultural exports.
Abstract:
At present, the most reliable method to obtain end-user perceived quality is through subjective tests. In this paper, the impact of automatic region-of-interest (ROI) coding on the perceived quality of mobile video is investigated. The evidence, based on perceptual comparison analysis, shows that the coding strategy improves perceptual quality, particularly in low bit rate situations. The ROI detection used in this paper is based on two approaches: (1) automatic ROI, which analyzes the visual contents automatically; and (2) eye-tracking-based ROI, which aggregates eye-tracking data across many users and is used both to evaluate the accuracy of automatic ROI detection and to assess the subjective quality of automatic-ROI-encoded video. The perceptual comparison analysis is based on subjective assessments with 54 participants, across different content types, screen resolutions, and target bit rates, while comparing the two ROI detection methods. The results from the user study demonstrate that ROI-based video encoding has higher perceived quality than normal video encoded at a similar bit rate, particularly in the lower bit rate range.
Abstract:
This study sets out to provide new information about the interaction between abstract religious ideas and actual acts of violence in the early crusading movement. The sources are asked whether such a concept as religious violence can be singled out as an independent or distinguishable source of aggression at the moment of actual bloodshed. The analysis concentrates on the practitioners of sacred violence, the crusaders, and their mental processing of the use of violence, the concept of the violent act, and the set of values and attitudes defining this concept. The scope of the study, the early crusade movement, covers the period from the late 1080s to the crusader conquest of Jerusalem on 15 July 1099. The research has been carried out by contextual reading of the relevant sources. Eyewitness reports are compared with texts that were produced by ecclesiastics in Europe. Critical reading of the texts reveals both connecting ideas and interesting differences between them. The sources share a positive attitude towards crusading, and were principally written to propagate the crusade institution and find new recruits. The emphasis of the study is on the interpretation of images: the sources are not asked what really happened in chronological order, but what the crusader understanding of reality was like. Fictional material can be even more crucial for the understanding of the crusading mentality. Crusader sources from around the turn of the twelfth century accept violent encounters with non-Christians on the grounds of external hostility directed towards the Christian community. The enemies of Christendom can be identified with non-Christians living outside the Christian society (Muslims), non-Christians living within the Christian society (Jews), or Christian heretics. Western Christians are described as both victims and avengers of the surrounding forces of diabolical evil. 
Although the ideal of universal Christianity and the gradual eradication of the non-Christian is present, the practical means of achieving a united Christendom are not discussed. The objective of crusader violence was thus entirely Christian: the punishment of the wicked and the restoration of Christian morals and the divine order. The means used to achieve these objectives, however, were not. Given the scarcity of written regulations concerning the use of force in bello, perceptions concerning the practical use of violence were drawn from a multitude of notions comprising an adaptable network of secular and ecclesiastical, pre-Christian and Christian traditions. Though essentially ideological and often religious in character, the early crusader concept of the practice of violence was not exclusively rooted in Christian thought. The main conclusion of the study is that there existed a definable crusader ideology of the use of force by 1100. The crusader image of violence involved several levels of thought. Predominantly, violence indicates a means of achieving higher spiritual rewards: eternal salvation and immortal glory.
Abstract:
The subject of the thesis is the mediated construction of author images in popular music. In the study, the construction of images is treated as a process in which artists, the media and the members of the audience participate. The notions of presented, mediated and compiled author images are used in explaining the mediation process and the various authorial roles of the agents involved. In order to explore the issue more closely, I analyse the author images of a group of popular music artists representing the genres of rock, pop and electronic dance music. The analysed material consists mostly of written media texts through which the artists' authorial roles and creative responsibilities are discussed. Theoretically speaking, the starting points for the examination lie in cultural studies and discourse analysis. Even though author images may be conceived as intertextual constructions, the artist is usually presented as a recognizable figure whose purpose is to give the music its public face. This study does not, then, deal with musical authors as such, but rather with their public images and mediated constructions. Because of the author-based functioning of popular music culture and the idea of the artist's individual creative power, the collective and social processes involved in the making of popular music are often superseded by the belief in a single, originating authorship. In addition to the collective practices of music making, the roles of the media and the marketing machinery complicate attempts to clarify the sharing of authorial contributions. As the case studies demonstrate, the differences between the examined author images are connected with a number of themes ranging from issues of auteurism and stardom to the use of masked imagery and the blending of authorial voices. The emergence of new music technologies has also affected not only the ways in which music is made, but also how the artist's authorial status and artistic identity are understood. 
In the study at hand, the author images of auteurs, stars, DJs and sampling artists are discussed alongside such varied topics as collective authorship, evaluative hierarchies, visual promotion and generic conventions. Taken together, the examined case studies shed light on the functioning of popular music culture and the ways in which musical authorship is (re)defined.
Abstract:
Australian researchers have been developing robust yield estimation models, based mainly on the crop growth response to water availability during the crop season. However, knowledge of the spatial distribution of yields within and across the production regions can be improved by the use of remote sensing techniques. Images of Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation indices, available since 1999, have the potential to contribute to crop yield estimation. The objective of this study was to analyse the relationship between winter crop yields and the spectral information available in MODIS vegetation index images at the shire level. The study was carried out in the Jondaryan and Pittsworth shires, Queensland, Australia. Five years (2000 to 2004) of 250 m resolution, 16-day composite MODIS Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) images were used for the winter crop season (April to November). Seasonal profiles of the vegetation index images for each crop season, computed over different regions of interest (cropping masks), were displayed and analysed. Correlation analyses between wheat and barley yield data and MODIS image values were also conducted. The results showed high seasonal variability in the NDVI and EVI profiles, and the EVI values were consistently lower than those of the NDVI. The highest image values were observed in 2003 (in contrast to 2004), and were associated with rainfall amount and distribution. The seasonal variability of the profiles was similar in both shires, with minimum values in June and maximum values at the end of August. NDVI and EVI images showed sensitivity to seasonal variability of the vegetation and exhibited good association (e.g. r = 0.84, r = 0.77) with winter crop yields.
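The reported association between vegetation-index values and yields is a Pearson correlation, which can be computed directly. The sketch below uses illustrative numbers only; the study's shire-level yield and NDVI values are not reproduced here.

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    series, e.g. seasonal vegetation-index values versus crop yields."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical per-season values: peak NDVI versus yield in t/ha.
ndvi = [0.42, 0.55, 0.61, 0.37, 0.48]
yield_t_ha = [1.8, 2.6, 2.9, 1.5, 2.2]
r = pearson_r(ndvi, yield_t_ha)
```

A yield model built this way regresses shire-level yield on index values at (or integrated over) key dates in the profile, which is why the timing of the seasonal minimum and maximum noted above matters.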