863 results for multimedia
Abstract:
Digital and interactive technologies are becoming increasingly embedded in the everyday lives of people around the world. The application of technologies such as real-time, context-aware, and interactive technologies; augmented and immersive realities; social media; and location-based services has been particularly evident in urban environments, where technological and sociocultural infrastructures enable easier deployment and adoption than in non-urban areas. There has been growing consumer demand for new forms of experiences and services enabled through these emerging technologies. We call this ambient media, as the media are embedded in the natural human living environment. This workshop focuses on ambient media services, applications, and technologies that promote people’s engagement in creating and recreating liveliness in urban environments, particularly through arts, culture, and gastronomic experiences. The RelCi workshop series is organized in cooperation with the Queensland University of Technology (QUT), in particular the Urban Informatics Lab, and the Tampere University of Technology (TUT), in particular the Entertainment and Media Management (EMMi) Lab. The workshop runs under the umbrella of the International Ambient Media Association (AMEA) (http://www.ambientmediaassociation.org), which hosts the international open access journal “International Journal on Information Systems and Management in Creative eMedia” and the international open access series “International Series on Information Systems and Management in Creative eMedia” (see http://www.tut.fi/emmi/Journal). The RelCi workshop took place for the first time in 2012 in conjunction with ICME 2012 in Melbourne, Australia; this year’s edition took place in conjunction with INTERACT 2013 in Cape Town, South Africa. In addition, the International Ambient Media Association (AMEA) organizes the Semantic Ambient Media (SAME) workshop series, which took place in 2008 in conjunction with ACM Multimedia 2008 in Vancouver, Canada; in 2009 in conjunction with AmI 2009 in Salzburg, Austria; in 2010 in conjunction with AmI 2010 in Malaga, Spain; in 2011 in conjunction with Communities and Technologies 2011 in Brisbane, Australia; in 2012 in conjunction with Pervasive 2012 in Newcastle, UK; and in 2013 in conjunction with C&T 2013 in Munich, Germany.
Abstract:
This paper describes an approach based on Zernike moments and Delaunay triangulation for localizing handwritten text in machine-printed documents. The Zernike moments of the image are first evaluated, and the text is classified as handwritten using a nearest-neighbor classifier. These features are invariant to size, slant, orientation, translation, and other variations in handwritten text. We then use Delaunay triangulation to reclassify the misclassified text regions: a Delaunay triangulation is imposed on the centroid points of the connected components, features are extracted from the resulting triangles, and the text is reclassified. Noise components in the document are removed as a preprocessing step, so the method works well on noisy documents. The success rate of the method is found to be 86%; for specific handwritten elements such as signatures, the accuracy is even higher, at 93%.
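As a rough illustration of the Delaunay stage described above, the sketch below triangulates connected-component centroids and collects simple per-triangle statistics (edge lengths and area) that a classifier could use to revise the initial Zernike-moment decision. The feature set, thresholds, and use of SciPy are illustrative assumptions rather than the paper's exact pipeline.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import Delaunay

def triangle_features(binary_img):
    """Per-triangle features from the centroids of connected components."""
    labels, n = ndimage.label(binary_img)
    if n < 3:
        return np.empty((0, 4))
    centroids = np.array(ndimage.center_of_mass(binary_img, labels, range(1, n + 1)))
    tri = Delaunay(centroids)
    feats = []
    for simplex in tri.simplices:
        p = centroids[simplex]
        edges = [np.linalg.norm(p[i] - p[(i + 1) % 3]) for i in range(3)]
        # Triangle area from the 2D shoelace formula.
        area = 0.5 * abs((p[1][0] - p[0][0]) * (p[2][1] - p[0][1])
                         - (p[1][1] - p[0][1]) * (p[2][0] - p[0][0]))
        feats.append([min(edges), max(edges), float(np.mean(edges)), area])
    return np.array(feats)

# Handwriting tends to produce more irregular triangles than printed text set
# on a regular baseline grid; a classifier over such statistics can then
# revise the initial per-region decision.
```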
Abstract:
In document images, printed lines often overlap with handwritten elements, especially in the case of signatures. Typical examples of such images are bank cheques and payment slips. Although the detection and removal of horizontal lines has been addressed, the restoration of the handwritten area after line removal remains a problem of interest. In this paper, we propose a method for line removal and restoration of the erased areas of the handwritten elements. A subjective evaluation of the results has been conducted to analyze the effectiveness of the proposed method. The results are promising, with an accuracy of 86.33%. The entire process completes in less than half a second per document image on a 2.4 GHz Pentium IV PC with 512 MB RAM.
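A minimal sketch of the two stages, under the assumption that plain binary morphology suffices: long horizontal runs are treated as printed lines and erased, and handwriting pixels cut by the erasure are patched by closing the foreground across the erased rows. The kernel sizes are illustrative and not taken from the paper.

```python
import numpy as np
from scipy import ndimage

def remove_lines_and_restore(binary_img, line_len=51):
    """binary_img: 2D boolean array, True = ink."""
    # 1. Detect horizontal lines: only runs longer than line_len survive the opening.
    line_mask = ndimage.binary_opening(binary_img, structure=np.ones((1, line_len), bool))
    # 2. Erase the detected lines.
    without_lines = binary_img & ~line_mask
    # 3. Restore strokes broken by the erasure: close small vertical gaps,
    #    but only inside the region previously occupied by the line.
    closed = ndimage.binary_closing(without_lines, structure=np.ones((5, 1), bool))
    return without_lines | (closed & line_mask)
```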
Abstract:
In this paper, we are concerned with energy-efficient area monitoring using information coverage in wireless sensor networks, where collaboration among multiple sensors can enable accurate sensing of a point in a given area-to-monitor even if that point falls outside the physical coverage of all the sensors. We refer to any set of sensors that can collectively sense all points in the entire area-to-monitor as a full area information cover. We first propose a low-complexity heuristic algorithm to obtain full area information covers. Using these covers, we then obtain the optimum schedule for activating the sensing activity of the various sensors that maximizes the sensing lifetime. Scheduling sensor activity using the optimum schedules obtained with the proposed algorithm is shown to achieve significantly longer sensing lifetimes than those achieved using physical coverage. Relaxing the full area coverage requirement to partial area coverage (e.g., treating 95% coverage as adequate instead of 100%) further enhances the lifetime.
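The cover-then-schedule idea can be sketched as follows, with the estimation-theoretic information-coverage criterion replaced by a simple inverse-square-distance confidence sum; the greedy construction of disjoint full covers and their round-robin activation are illustrative stand-ins for the paper's heuristic and optimal schedule.

```python
import numpy as np

def info_covered(points, sensors, threshold=1.0):
    """Boolean mask of grid points 'information covered' by the given sensors.
    Confidence is modeled as a sum of inverse-square-distance contributions;
    the paper uses an estimation-theoretic criterion instead."""
    if len(sensors) == 0:
        return np.zeros(len(points), dtype=bool)
    d = np.linalg.norm(points[:, None, :] - sensors[None, :, :], axis=2)
    return (1.0 / (1.0 + d ** 2)).sum(axis=1) >= threshold

def greedy_full_covers(sensor_pos, grid, threshold=1.0):
    """Greedily build disjoint full-area information covers."""
    unused = set(range(len(sensor_pos)))
    covers = []
    while True:
        cover = []
        while not info_covered(grid, sensor_pos[cover], threshold).all():
            best = max(unused - set(cover), default=None,
                       key=lambda s: info_covered(grid, sensor_pos[cover + [s]], threshold).sum())
            if best is None:
                return covers          # remaining sensors cannot form another full cover
            cover.append(best)
        covers.append(cover)
        unused -= set(cover)

# Activating each disjoint cover for one slot in round-robin extends sensing
# lifetime roughly in proportion to the number of covers found.
```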
Abstract:
With the increasing adoption of wireless technology, it is reasonable to expect an increase in the demand for supporting both real-time multimedia and high-rate reliable data services. Next-generation wireless systems employ an Orthogonal Frequency Division Multiplexing (OFDM) physical layer owing to the high data rates that are possible without an increase in bandwidth. Towards improving the performance of these systems, we look at the design of resource allocation algorithms at the medium-access (MAC) layer and their impact on higher layers. While TCP-based elastic traffic needs reliable transport, UDP-based real-time applications have stringent delay and rate requirements. The MAC algorithms, while catering to the heterogeneous service needs of these higher layers, trade off maximizing system capacity against providing fairness among users. The novelty of this work is the proposal of various channel-aware resource allocation algorithms at the MAC layer, which can result in significant performance gains in an OFDM-based wireless system.
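One simple instance of a channel-aware MAC allocation, shown for illustration only, is proportional-fair subcarrier assignment: each subcarrier goes to the user with the best ratio of instantaneous achievable rate to smoothed served rate, which is one way of trading capacity against fairness. The specific algorithms proposed in the paper are not reproduced here.

```python
import numpy as np

def allocate_subcarriers(rates, avg_throughput, eps=1e-9):
    """rates: (num_users, num_subcarriers) achievable rates this frame.
    Returns the user index assigned to each subcarrier."""
    metric = rates / (avg_throughput[:, None] + eps)   # proportional-fair metric
    return np.argmax(metric, axis=0)

def update_throughput(avg_throughput, rates, assignment, alpha=0.05):
    """Exponentially smoothed per-user throughput after one frame."""
    served = np.zeros_like(avg_throughput)
    for sc, user in enumerate(assignment):
        served[user] += rates[user, sc]
    return (1 - alpha) * avg_throughput + alpha * served
```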
Abstract:
The problem of automatic melody line identification in a MIDI file plays an important role in taking query-by-humming (QBH) systems to the next level. We present a novel algorithm to identify the melody line in a polyphonic MIDI file. A note-pruning and track/channel-ranking method is used to identify the melody line. We use results from musicology to derive simple heuristics for the note-pruning stage, which improves the robustness of the algorithm by discarding "spurious" notes. A ranking based on the melodic information in each track/channel enables us to choose the melody line accurately. Our algorithm makes no assumptions about performer-specific MIDI parameters, is simple, and achieves an accuracy of 97% in identifying the melody line correctly. The algorithm is currently being used in a QBH system built in our lab.
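A hedged sketch of the prune-then-rank idea: notes, represented here as (track, pitch, duration) tuples rather than raw MIDI events, are pruned with simple musicological heuristics, and tracks are then ranked by a melodic score. The thresholds and the scoring function are illustrative assumptions.

```python
from collections import defaultdict

def prune_notes(notes, min_dur=0.05, lo=40, hi=96):
    """Drop very short or out-of-range notes (likely ornaments or accompaniment)."""
    return [(t, p, d) for (t, p, d) in notes if d >= min_dur and lo <= p <= hi]

def melodic_score(pitches):
    """Higher for tracks with high, varied pitch content (a common melody cue)."""
    if not pitches:
        return float("-inf")
    mean_pitch = sum(pitches) / len(pitches)
    variety = len(set(pitches)) / len(pitches)
    return mean_pitch + 12 * variety

def pick_melody_track(notes):
    """Return the track/channel id with the highest melodic score after pruning."""
    by_track = defaultdict(list)
    for t, p, d in prune_notes(notes):
        by_track[t].append(p)
    return max(by_track, key=lambda t: melodic_score(by_track[t]))
```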
Abstract:
A polymorphic ASIC is a runtime-reconfigurable hardware substrate comprising compute and communication elements. It is a "future proof" custom hardware solution for multiple applications and their derivatives in a domain. Interoperability between application derivatives at runtime is achieved through hardware reconfiguration. In this paper we present the design of a single-cycle Network on Chip (NoC) router that is responsible for effecting runtime reconfiguration of the hardware substrate. The router design is optimized to avoid FIFO buffers at the input port and loopback at the output crossbar. It provides virtual channels to emulate a non-blocking network and supports a simple X-Y relative addressing scheme that limits the control overhead to 9 bits per packet. The 8×8 honeycomb NoC (RECONNECT), implemented in a 130 nm UMC CMOS standard cell library, operates at 500 MHz and has a bisection bandwidth of 28.5 GBps. The network is characterized for random, self-similar, and application-specific traffic patterns that model the execution of multimedia and DSP kernels with varying network loads and numbers of virtual channels. Our implementation with 4 virtual channels has an average network latency of 24 clock cycles and a throughput of 62.5% of the network capacity for random traffic. For application-specific traffic the latency is 6 clock cycles and the throughput is 87% of the network capacity.
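The relative-addressing scheme can be illustrated with a small header-packing and routing sketch; the 4+4+1 bit field layout and the dimension-ordered routing on a regular mesh are simplifying assumptions (the paper's NoC uses a honeycomb topology), shown only to make the 9-bit control overhead concrete.

```python
def encode_header(dx, dy, vc):
    """Pack relative offsets (dx, dy) in [-7, 7] and a 1-bit VC flag into 9 bits."""
    assert -7 <= dx <= 7 and -7 <= dy <= 7 and vc in (0, 1)
    return ((dx & 0xF) << 5) | ((dy & 0xF) << 1) | vc   # 4 + 4 + 1 bits

def route_step(dx, dy):
    """Dimension-ordered (X then Y) hop: output port and updated offsets."""
    if dx > 0:
        return "EAST", dx - 1, dy
    if dx < 0:
        return "WEST", dx + 1, dy
    if dy > 0:
        return "NORTH", dx, dy - 1
    if dy < 0:
        return "SOUTH", dx, dy + 1
    return "LOCAL", 0, 0
```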
Abstract:
Feature-track matrix factorization methods have been attractive solutions to the Structure-from-Motion (SfM) problem. The group motion of the feature points is analyzed to obtain the 3D information. It is well known that the factorization formulations give rise to rank-deficient systems of equations. Even when enough constraints exist, the extracted models are sparse due to the unavailability of pixel-level tracks. Pixel-level tracking of 3D surfaces is a difficult problem, particularly when the surface has very little texture, as in a human face. Only sparsely located feature points can be tracked, and tracking errors are inevitable on rotating, low-texture surfaces. However, the 3D models of an object class lie in a subspace of the set of all possible 3D models. We propose a novel solution to the Structure-from-Motion problem which utilizes high-resolution 3D data obtained from a range scanner to compute a basis for this desired subspace. Adding subspace constraints during factorization also facilitates the removal of tracking noise, which causes distortions outside the subspace. We demonstrate the effectiveness of our formulation by extracting the dense 3D structure of a human face and comparing it with a well-known Structure-from-Motion algorithm due to Brand.
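The subspace constraint can be sketched as follows: a basis computed from high-resolution example shapes (here via PCA over range-scanned models) is used to project a noisy factorization result back onto the plausible shape subspace. The basis construction and projection below are illustrative, not the paper's exact formulation.

```python
import numpy as np

def shape_basis(example_shapes, k=20):
    """example_shapes: (num_examples, 3*num_points) flattened 3D models.
    Returns the mean shape and the top-k principal shape directions."""
    mean = example_shapes.mean(axis=0)
    _, _, vt = np.linalg.svd(example_shapes - mean, full_matrices=False)
    return mean, vt[:k]

def project_to_subspace(noisy_shape, mean, basis):
    """Least-squares projection of a reconstructed shape onto the subspace,
    which also removes tracking noise that lies outside the subspace."""
    coeffs, *_ = np.linalg.lstsq(basis.T, noisy_shape - mean, rcond=None)
    return mean + basis.T @ coeffs
```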
Abstract:
National interest in lifelong learning is currently heightened in Japan, yet more than half of adults did not take part in any lifelong learning in the past year, on the grounds of being too busy. Providing all people with equal access to lifelong learning remains a challenge in Japan. This article proposes the idea of "information education as lifelong learning". A survey of the policy, actual conditions, and previous research on information education as lifelong learning in Japan found that Japan has fewer guidelines and practical case studies than Finland. This article therefore examines the Finnish educational model, which has produced success in PISA and arises from social backgrounds similar to Japan's, in particular the history of media education in Finland, policies on media education, and concrete examples in libraries. Based on this discussion, the article proposes three ideas that are feasible in Japan: (1) multimedia-based information education using mobile phones and radio, (2) employability-aware information education, and (3) information education utilizing libraries.
Abstract:
Digital image forgeries are usually created by copy-pasting a portion of one image onto another image. In doing so, it is often necessary to resize the pasted portion to suit the sampling grid of the host image. The resampling operation changes certain characteristics of the pasted portion, which, when detected, serve as a clue of tampering. In this paper, we present deterministic techniques to detect resampling and to localize the portion of the image that has been tampered with. Two of the techniques operate in the pixel domain and two others in the frequency domain. We study the efficacy of our techniques against JPEG compression and subsequent resampling of the entire tampered image.
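For context, one widely used frequency-domain cue for resampling is the periodicity that interpolation introduces among neighbouring pixels, visible as peaks in the spectrum of a prediction residual; the sketch below illustrates that general idea and is not necessarily one of the paper's four techniques.

```python
import numpy as np

def resampling_spectrum(gray_img):
    """Average magnitude spectrum of per-row second differences."""
    residual = np.diff(gray_img.astype(float), n=2, axis=1)
    spectrum = np.abs(np.fft.fft(residual, axis=1)).mean(axis=0)
    return spectrum / spectrum.max()

def looks_resampled(gray_img, peak_ratio=4.0):
    """Flag strong isolated peaks away from DC as evidence of resampling."""
    s = resampling_spectrum(gray_img)
    interior = s[2:len(s) // 2]          # ignore DC and the mirrored half
    return interior.max() > peak_ratio * np.median(interior)
```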
Abstract:
Multimedia mining primarily involves information analysis and retrieval based on implicit knowledge. The ever-increasing digital image databases on the Internet have created a need for multimedia mining over these databases for effective and efficient retrieval of images. The contents of an image can be expressed through different features such as Shape, Texture, and Intensity-distribution (STI). Content-Based Image Retrieval (CBIR) is the efficient retrieval of relevant images from large databases based on features extracted from the images. Most existing systems concentrate either on a single representation of all features or on a linear combination of these features. This paper proposes a CBIR system named STIRF (Shape, Texture, Intensity-distribution with Relevance Feedback) that uses a neural network for a nonlinear combination of the heterogeneous STI features. Furthermore, the system adapts itself to different applications and users based on relevance feedback. Prior to the retrieval of relevant images, each feature is first clustered independently of the others in its own space, which helps in matching similar images. Testing the system on a database of images with varied contents and intensive backgrounds showed good results, with the most relevant images being retrieved for an image query. The system showed better and more robust performance than existing CBIR systems.
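The combine-and-adapt step can be sketched with a tiny logistic combiner standing in for the paper's neural network: per-feature distances to the query are merged into one relevance score, and user feedback nudges the combination weights online. All names and parameters below are illustrative.

```python
import numpy as np

class FeedbackCombiner:
    """Combines per-feature distances (shape, texture, intensity) into one
    relevance score and learns the combination weights from user feedback."""

    def __init__(self, num_features=3, lr=0.1):
        self.w = np.zeros(num_features)
        self.lr = lr

    def score(self, dists):
        # Smaller weighted distance gives a higher relevance score in (0, 1).
        return 1.0 / (1.0 + np.exp(self.w @ np.asarray(dists)))

    def feedback(self, dists, relevant):
        # One online logistic-regression step on a relevant/irrelevant label.
        target = 1.0 if relevant else 0.0
        self.w += self.lr * (self.score(dists) - target) * np.asarray(dists)
```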
Abstract:
The new paradigm of connectedness and empowerment brought by the interactivity of Web 2.0 has been challenging the traditional centralized performance of mainstream media. The corporation has been able to survive the strong winds by transforming itself into a global multimedia business network embedded in the network society. By establishing networks, e.g. networks of production and distribution, the global multimedia business network has been able to identify potential solutions by opening the doors to innovation in a decentralized and flexible manner. Under this emerging context of re-organization, traditional practices like sourcing need to be re-explained, and that is precisely what this thesis attempts to tackle. Drawing on ICT and on the network society, the study seeks to explain, within the Finnish context, the particular case of Helsingin Sanomat (HS) and its relations with the youth news agency, Youth Voice Editorial Board (NÄT). In that sense, the study can be regarded as an explanatory embedded single case study, where HS is the principal unit of analysis and NÄT its embedded unit of analysis. The thesis reached its explanations through interrelated steps. First, it determined the role of ICT in HS’s sourcing practices. Then it mapped an overview of HS’s sourcing relations and provided a context in which NÄT was located. Finally, it established conceptualized institutional relational data between HS and NÄT for subsequent measurement through social network analysis. The data set was collected via qualitative interviews with online and offline editors of HS as well as interviews with NÄT’s personnel. The study concluded that ICT’s interactivity and User Generated Content (UGC) are not sourcing tools as such but mechanisms used by HS for getting ideas that could turn into potential news stories. However, when it comes to visual communication, some exceptions were found: the lack of official sources amidst the demand for immediacy leads HS to rely on ICT’s interaction and UGC. More than meets the eye, ICT’s input into the sourcing practice may be more noticeable if the interaction and UGC are well organized and coordinated into proper and innovative networks of alternative content collaboration. Currently, HS performs this sourcing practice via two projects that differ precisely in the way they are coordinated. The first project found, Omakaupunki, is coordinated internally by the Sanoma Group’s own media houses HS, Vartti and Metro. The second project found is coordinated externally. The external alternative sourcing network, as it was labeled, consists of three actors, namely HS, NÄT (the professionals in charge) and the youth. This network is a balanced and complete triad in which the actors connect themselves through relations of feedback, recognition, creativity and filtering. However, as innovation is approached very reluctantly, this content collaboration is a laboratory of experiments; a ‘COLLABORATORY’.
Abstract:
We present an experimental investigation of a new reconstruction method for off-axis digital holographic microscopy (DHM). This method effectively suppresses the object autocorrelation, commonly called the zero-order term, from holographic measurements, thereby removing from the reconstructed complex wavefield the artifacts generated by the intensities of the two beams employed for interference. The algorithm is based on non-linear filtering and can be applied to standard DHM setups with realistic recording conditions. We study the applicability of the technique under different experimental configurations, such as topographic imaging of microscopic specimens and speckle holograms.
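For context, the sketch below shows the conventional linear reconstruction that such methods improve on: the +1 order is isolated with a window around the carrier peak in the Fourier domain and re-centred. The residual zero-order term that this simple filter leaves behind is what the proposed non-linear filtering suppresses; the window size and carrier detection here are illustrative, and this is not the paper's algorithm.

```python
import numpy as np

def reconstruct_off_axis(hologram, window_radius=40):
    """Conventional Fourier-filtering reconstruction of an off-axis hologram."""
    F = np.fft.fftshift(np.fft.fft2(hologram.astype(float)))
    h, w = F.shape
    # Locate the +1 order: strongest peak outside a small region around DC.
    mag = np.abs(F).copy()
    mag[h // 2 - 20:h // 2 + 20, w // 2 - 20:w // 2 + 20] = 0
    cy, cx = np.unravel_index(np.argmax(mag), mag.shape)
    # Window out the +1 order and shift it back to the spectrum centre.
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= window_radius ** 2
    order1 = np.roll(np.roll(F * mask, h // 2 - cy, axis=0), w // 2 - cx, axis=1)
    return np.fft.ifft2(np.fft.ifftshift(order1))   # reconstructed complex wavefield
```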
Abstract:
With the emergence of the Internet, global connectivity of computers has become a reality. The Internet has progressed to provide many user-friendly tools such as Gopher, WAIS, and the WWW for information publishing and access. The WWW, which integrates all other access tools, also provides a very convenient means for publishing and accessing multimedia and hypertext-linked documents stored on computers spread across the world. With the emergence of WWW technology, most information activities are becoming Web-centric. Once information is published on the Web, a user can access it from any part of the world. A Web browser such as Netscape or Internet Explorer is used as a common user interface for accessing information and databases, which largely relieves users from learning the search syntax of individual information systems. Libraries are taking advantage of these developments to provide access to their resources on the Web. CDS/ISIS is a very popular bibliographic information management software package used in India. In this tutorial we present details of integrating CDS/ISIS with the WWW. A number of tools are now available for making CDS/ISIS databases accessible on the Internet/Web, including (1) the WAIS_ISIS server, (2) the WWWISIS server, and (3) the IQUERY server. In this tutorial, we explain in detail the steps involved in providing Web access to an existing CDS/ISIS database using the freely available software WWWISIS. This software is developed, maintained, and distributed by BIREME, the Latin American & Caribbean Centre on Health Sciences Information. WWWISIS acts as a server for CDS/ISIS databases in a WWW client/server environment. It supports functions for searching, formatting, and data entry operations over CDS/ISIS databases. WWWISIS is available for various operating systems. We tested this software on Windows 95, Windows NT, and Red Hat Linux release 5.2 (Apollo), kernel 2.0.36, on an i686. The testing was carried out using IISc's main library OPAC containing more than 80,000 records and Current Contents issues (bibliographic data) containing more than 25,000 records. WWWISIS is fully compatible with the CDS/ISIS 3.07 file structure. However, on a system running Unix or its variants, this compatibility is not guaranteed, so it is safer to recreate the master and inverted files under the Unix environment using the utilities provided by BIREME.
Abstract:
Motion estimation is one of the most power-hungry operations in video coding. While optimal search methods (e.g., full search) give the best quality, non-optimal methods are often used in order to reduce cost and power. Various algorithms have been used in practice that trade off quality against complexity. Global elimination is an algorithm based on pixel averaging that reduces the complexity of motion search while keeping performance close to that of full search. We propose an adaptive version of the global elimination algorithm that extracts individual macro-block features using the Hadamard transform to optimize the search. The performance achieved is close to that of the full search method and of global elimination. Operational complexity, and hence power, is reduced by 30% to 45% compared to the global elimination method.
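Global elimination's pixel-averaging step can be sketched as a two-pass search: candidates are first ranked by SAD over 4x4 block averages, and only a shortlist survives to a full-resolution SAD check. The adaptive variant proposed above would additionally select per-macroblock features with a Hadamard transform, which is not shown; block and shortlist sizes are illustrative.

```python
import numpy as np

def block_means(block, cell=4):
    """Average each cell x cell sub-block of a square macroblock."""
    n = block.shape[0] // cell
    return block.astype(float).reshape(n, cell, n, cell).mean(axis=(1, 3))

def global_elimination_search(cur_block, ref_frame, top, left, radius=8, keep=8):
    """Return the (dy, dx) motion vector chosen by averaged-SAD pruning + full SAD."""
    cur = cur_block.astype(float)
    cur_feat = block_means(cur)
    size = cur.shape[0]
    candidates = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y and 0 <= x and y + size <= ref_frame.shape[0] and x + size <= ref_frame.shape[1]:
                ref = ref_frame[y:y + size, x:x + size].astype(float)
                candidates.append((np.abs(cur_feat - block_means(ref)).sum(), dy, dx))
    # Full-resolution SAD only on the shortlist that survives the averaged-SAD test.
    shortlist = sorted(candidates)[:keep]

    def full_sad(dy, dx):
        ref = ref_frame[top + dy:top + dy + size, left + dx:left + dx + size].astype(float)
        return np.abs(cur - ref).sum()

    _, dy, dx = min(shortlist, key=lambda c: full_sad(c[1], c[2]))
    return dy, dx
```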