89 results for computer science and engineering
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
Document engineering is the computer science discipline that investigates systems for documents in any form and in all media. As with the relationship between software engineering and software, document engineering is concerned with principles, tools and processes that improve our ability to create, manage, and maintain documents (http://www.documentengineering.org). The ACM Symposium on Document Engineering is an annual meeting of researchers active in document engineering; it is sponsored by ACM through the ACM SIGWEB Special Interest Group. In this editorial, we first point to work carried out in the context of document engineering that is directly related to multimedia tools and applications. We conclude with a summary of the papers presented in this special issue.
Abstract:
A planar k-restricted structure is a simple graph whose blocks are planar, each with at most k vertices. Planar k-restricted structures are used by approximation algorithms for Maximum Weight Planar Subgraph, which motivates this work. The planar k-restricted ratio is the infimum, over simple planar graphs H, of the ratio of the number of edges in a maximum k-restricted structure subgraph of H to the number of edges of H. We prove that, as k tends to infinity, the planar k-restricted ratio tends to 1/2. The same result holds for the weighted version. Our results are based on analyzing the analogous ratios for outerplanar and weighted outerplanar graphs. Here both ratios tend to 1 as k goes to infinity, and we provide good estimates of the rates of convergence, showing that the rates differ between the weighted and the unweighted case.
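In symbols (a restatement of the definition above; the shorthand $e_k(H)$ for the number of edges in a maximum $k$-restricted structure subgraph of $H$, and $e(H)$ for the number of edges of $H$, is ours, not the paper's):

$$\rho_k \;=\; \inf_{H \text{ planar}} \frac{e_k(H)}{e(H)}, \qquad \lim_{k \to \infty} \rho_k = \frac{1}{2},$$

while the analogous outerplanar ratio tends to 1 as $k \to \infty$.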
Abstract:
The large amount of information in electronic contracts hampers their establishment due to high complexity. An approach inspired by Software Product Lines (PL) and based on feature modelling was proposed to make this process more systematic through information reuse and structuring. By assessing the feature-based approach against a proposed set of requirements, it was shown that the approach does not allow the price of services and of Quality of Service (QoS) attributes to be considered in the negotiation and included in the electronic contract. Thus, this paper also presents an extension of that approach in which prices and price types associated with Web services and QoS levels are applied. An extended toolkit prototype is presented as well, together with an example experiment applying the proposed approach.
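To illustrate the kind of information such an extension adds (all names below are hypothetical, not taken from the paper's toolkit), a feature-model node for a negotiated Web service might carry a price, a price type, and priced QoS levels:

```python
from dataclasses import dataclass, field
from enum import Enum

class PriceType(Enum):
    """Hypothetical price types for a negotiated Web service."""
    FIXED = "fixed"                      # one-off charge
    PER_INVOCATION = "per_invocation"    # charged per service call

@dataclass
class QoSLevel:
    name: str       # e.g. "gold", "silver"
    price: float    # surcharge for this QoS level

@dataclass
class ServiceFeature:
    """A feature-model node extended with price information."""
    name: str
    price: float
    price_type: PriceType
    qos_levels: list[QoSLevel] = field(default_factory=list)

    def contract_cost(self, chosen_qos: str) -> float:
        """Base service price plus the surcharge of the selected QoS level."""
        surcharge = next(q.price for q in self.qos_levels if q.name == chosen_qos)
        return self.price + surcharge

# Example: a hypothetical payment service negotiated at the "gold" level.
payment = ServiceFeature(
    "PaymentService", 10.0, PriceType.PER_INVOCATION,
    [QoSLevel("gold", 5.0), QoSLevel("silver", 2.0)],
)
print(payment.contract_cost("gold"))  # 15.0
```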
Abstract:
Support for interoperability and interchangeability of software components that are part of a fieldbus automation system relies on the definition of open architectures, most of them involving proprietary technologies. Concurrently, standard, open and non-proprietary technologies, such as XML, SOAP, Web Services and the like, have greatly evolved and been diffused in the computing area. This article presents a FOUNDATION fieldbus (TM) device description technology named Open-EDD, based on XML and other related technologies (XSLT, DOM using the Xerces implementation, OO, XML Schema), proposing an open and non-proprietary alternative to the EDD (Electronic Device Description). This initial proposal includes defining Open-EDDML as the programming language of the technology in the FOUNDATION fieldbus (TM) protocol, implementing a compiler and a parser, and finally, integrating and testing the new technology with field devices and a commercial fieldbus configurator. This study attests that the new technology is feasible and can be applied to other configurators or HMI applications used in fieldbus automation systems.
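To give a flavor of an XML-based device description (the element and attribute names below are invented for illustration; the actual Open-EDDML grammar is defined by the authors), a parser could be sketched with standard tooling:

```python
import xml.etree.ElementTree as ET

# Hypothetical Open-EDDML-style fragment; names are illustrative only.
SAMPLE = """\
<deviceDescription device="PressureTransmitter" protocol="FOUNDATION-fieldbus">
  <parameter name="PV" type="float" access="read"/>
  <parameter name="SP" type="float" access="read-write"/>
</deviceDescription>
"""

root = ET.fromstring(SAMPLE)
print(root.get("device"), root.get("protocol"))
for param in root.iter("parameter"):           # enumerate device parameters
    print(param.get("name"), param.get("type"), param.get("access"))
```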
Abstract:
This paper proposes a novel computer vision approach that processes video sequences of people walking and then recognises those people by their gait. Human motion carries different information that can be analysed in various ways. The skeleton carries motion information about human joints, and the silhouette carries information about the boundary motion of the human body. Moreover, binary and gray-level images contain different information about human movements. This work proposes to recover these different kinds of information to interpret the global motion of the human body based on four different segmented image models, using a fusion model to improve classification. Our proposed method considers the set of segmented frames of each individual as a distinct class and each frame as an object of this class. The methodology applies background extraction using the Gaussian Mixture Model (GMM), scale reduction based on the Wavelet Transform (WT) and feature extraction by Principal Component Analysis (PCA). We propose four new schemas for motion information capture: the Silhouette-Gray-Wavelet model (SGW) captures motion based on gray-level variations; the Silhouette-Binary-Wavelet model (SBW) captures motion based on binary information; the Silhouette-Edge-Wavelet model (SEW) captures motion based on edge information; and the Silhouette-Skeleton-Wavelet model (SSW) captures motion based on skeleton movement. The classification rates obtained separately from these four models are then merged using a newly proposed fusion technique. The results suggest excellent performance in terms of recognising people by their gait.
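A minimal sketch of the preprocessing chain described above (GMM background subtraction, wavelet scale reduction, PCA), assuming OpenCV, PyWavelets and scikit-learn; this is an illustrative pipeline, not the authors' implementation:

```python
import cv2
import numpy as np
import pywt
from sklearn.decomposition import PCA

def silhouette_features(frames, n_components=20):
    """GMM background subtraction -> wavelet scale reduction -> PCA features."""
    subtractor = cv2.createBackgroundSubtractorMOG2()  # per-pixel Gaussian mixture
    vectors = []
    for frame in frames:                 # frames: iterable of grayscale images
        mask = subtractor.apply(frame)   # foreground (silhouette) mask
        approx, _ = pywt.dwt2(mask.astype(float), "haar")  # low-frequency subband
        vectors.append(approx.ravel())   # one feature vector per frame
    X = np.asarray(vectors)
    pca = PCA(n_components=min(n_components, *X.shape))
    return pca.fit_transform(X)          # reduced per-frame gait features
```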
Abstract:
Pervasive and ubiquitous computing have motivated research on multimedia adaptation, which aims at matching the video quality to the user's needs and the device's restrictions. This technique has a high computational cost, which needs to be studied and estimated when designing architectures and applications. This paper presents an analytical model that quantifies these video transcoding costs in a hardware-independent way. The model was used to analyze the impact of transcoding delays on end-to-end live-video transmissions over LANs, MANs and WANs. Experiments confirm that the proposed model helps to define the best transcoding architecture for different scenarios.
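To see why such a model matters (an illustrative decomposition, not the paper's actual model), the end-to-end latency of a live transcoded stream can be split as

$$d_{\text{end-to-end}} = d_{\text{capture}} + d_{\text{transcode}} + d_{\text{network}} + d_{\text{decode}},$$

so a hardware-independent estimate of $d_{\text{transcode}}$ tells a designer whether a candidate architecture stays within the delay budget of a given LAN, MAN or WAN scenario.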
Abstract:
Point placement strategies aim at mapping data points represented in higher dimensions to bi-dimensional spaces and are frequently used to visualize relationships amongst data instances. They have been valuable tools for the analysis and exploration of data sets of various kinds. Many conventional techniques, however, do not behave well when the number of dimensions is high, as in the case of document collections. More recent approaches handle that shortcoming, but may cause too much clutter to allow flexible exploration to take place. In this work we present a novel hierarchical point placement technique that is capable of dealing with both problems. Good grouping and separation of data with high similarity are maintained without increasing the computational cost, and the hierarchical structure lends itself both to exploration at various levels of detail and to handling data in subsets, improving analysis capability and also allowing manipulation of larger data sets.
Abstract:
The problem of projecting multidimensional data into lower dimensions has been pursued by many researchers due to its potential application to data analyses of various kinds. This paper presents a novel multidimensional projection technique based on least square approximations. The approximations compute the coordinates of a set of projected points based on the coordinates of a reduced number of control points with defined geometry. We name the technique Least Square Projection (LSP). From an initial projection of the control points, LSP defines the positioning of their neighboring points through a numerical solution that aims at preserving a similarity relationship between the points, given by a metric in mD. To perform the projection, only a small number of distance calculations are necessary, and no repositioning of the points is required to obtain a final solution with satisfactory precision. The results show the capability of the technique to form groups of points by degree of similarity in 2D. We illustrate that capability through its application to mapping collections of textual documents from varied sources, a strategic yet difficult application. LSP is faster and more accurate than other existing high-quality methods, particularly in the setting where it was most extensively tested, namely mapping text sets.
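A minimal sketch of the least-squares step, assuming a Laplacian-style formulation in which every projected point should lie near the average of its mD neighbors while control points are softly pinned to given 2-D positions (an illustrative reconstruction, not the authors' exact system):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def lsp_project(neighbors, control_idx, control_xy, n):
    """Solve for 2-D coordinates of n points from mD neighborhoods.

    neighbors:   dict {i: list of indices of point i's neighbors in mD}
    control_idx: indices of the control points
    control_xy:  (len(control_idx), 2) array of their projected positions
    """
    m = len(control_idx)
    A = lil_matrix((n + m, n))
    b = np.zeros((n + m, 2))
    for i, nbrs in neighbors.items():        # point i ~ mean of its neighbors
        A[i, i] = 1.0
        for j in nbrs:
            A[i, j] = -1.0 / len(nbrs)
    for row, (i, xy) in enumerate(zip(control_idx, control_xy), start=n):
        A[row, i] = 1.0                      # soft constraint on control point i
        b[row] = xy
    A = A.tocsr()
    x = lsqr(A, b[:, 0])[0]                  # each coordinate solved independently
    y = lsqr(A, b[:, 1])[0]
    return np.column_stack([x, y])
```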
Abstract:
The literature reports research efforts that allow interactive TV multimedia documents to be edited by end-users. In this article we propose complementary contributions relative to end-user generated interactive video, video tagging, and collaboration. In earlier work we proposed the watch-and-comment (WaC) paradigm as the seamless capture of an individual's comments so that corresponding annotated interactive videos can be automatically generated. As a proof of concept, we implemented a prototype application, the WACTOOL, which supports the capture of digital ink and voice comments over individual frames and segments of a video, producing a declarative document that specifies both the structure and the synchronization of the different media streams. In this article, we extend the WaC paradigm in two ways. First, user-video interactions are associated with edit commands and digital ink operations. Second, focusing on collaboration and distribution issues, we employ annotations as simple containers for context information, using them as tags in order to organize, store and distribute information in a P2P-based multimedia capture platform. We highlight the design principles of the watch-and-comment paradigm and demonstrate related results, including the current version of the WACTOOL and its architecture. We also illustrate how an interactive video produced by the WACTOOL can be rendered in an interactive video environment, the Ginga-NCL player, and include results from a preliminary evaluation.
Abstract:
Reusable and evolvable Software Engineering Environments (SEEs) are essential to software production and have increasingly become a necessity. From another perspective, software architectures and reference architectures have played a significant role in determining the success of software systems. In this paper we present a reference architecture for SEEs, named RefASSET, which is based on concepts from the aspect-oriented approach. The architecture is specialized to the software testing domain, and the development of tools for that domain is discussed. This and other case studies have shown that the use of aspects in RefASSET provides a better separation of concerns, resulting in reusable and evolvable SEEs.
Abstract:
We report the observation at the Relativistic Heavy Ion Collider of suppression of back-to-back correlations in the direct photon+jet channel in Au+Au relative to p+p collisions. Two-particle correlations of direct photon triggers with associated hadrons are obtained by statistical subtraction of the decay photon-hadron ($\gamma$-h) background. The initial momentum of the away-side parton is tightly constrained, because the parton-photon pair exactly balance in momentum at leading order in perturbative quantum chromodynamics, making such correlations a powerful probe of in-medium parton energy loss. The away-side nuclear suppression factor, $I_{AA}$, in central Au+Au collisions is $0.32 \pm 0.12\,(\text{stat}) \pm 0.09\,(\text{syst})$ for hadrons of $3 < p_T^h < 5$ GeV/c in coincidence with photons of $5 < p_T^\gamma < 15$ GeV/c. The suppression is comparable to that observed for high-$p_T$ single hadrons and dihadrons. The direct photon associated yields in p+p collisions scale approximately with the momentum balance, $z_T \equiv p_T^h/p_T^\gamma$, as expected for a measurement of the away-side parton fragmentation function. We compare to Au+Au collisions, for which the momentum-balance dependence of the nuclear modification should be sensitive to the path-length dependence of parton energy loss.
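In this notation the suppression factor compares per-trigger conditional yields (the standard definition in such analyses, restated here for clarity):

$$I_{AA}(z_T) = \frac{Y^{\text{Au+Au}}(z_T)}{Y^{p+p}(z_T)}, \qquad z_T \equiv \frac{p_T^{h}}{p_T^{\gamma}},$$

where $Y$ denotes the away-side hadron yield per direct-photon trigger.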
Abstract:
The PHENIX experiment presents results from the RHIC 2006 run with polarized p+p collisions at $\sqrt{s} = 62.4$ GeV, for inclusive $\pi^0$ production at midrapidity. Unpolarized cross section results are measured for transverse momenta $p_T = 0.5$ to 7 GeV/c. Next-to-leading order perturbative quantum chromodynamics calculations are compared with the data, and while the calculations are consistent with the measurements, next-to-leading logarithmic corrections improve the agreement. Double helicity asymmetries $A_{LL}$ are presented for $p_T = 1$ to 4 GeV/c and probe the higher range of Bjorken $x$ of the gluon ($x_g$) with better statistical precision than our previous measurements at $\sqrt{s} = 200$ GeV. These measurements are sensitive to the gluon polarization in the proton for $0.06 < x_g < 0.4$.
Abstract:
We simplify the known formula for the asymptotic estimate of the number of deterministic and accessible automata with n states over a k-letter alphabet. The proof relies on the theory of Lagrange inversion applied in the context of generalized binomial series.
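For reference, the classical Lagrange inversion formula underlying such arguments (a standard statement, not specific to this paper): if $w(t)$ satisfies $w = t\,\phi(w)$ with $\phi(0) \neq 0$, then

$$[t^n]\, w(t)^k \;=\; \frac{k}{n}\, [w^{n-k}]\, \phi(w)^n .$$

The generalized binomial series mentioned above arise when $\phi(w)$ is a power of $(1+w)$.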
Abstract:
This paper describes three-dimensional microfluidic paper-based analytical devices (3-D µPADs) that can be programmed (post-fabrication) by the user to generate multiple patterns of flow through them. These devices are programmed by pressing single-use 'on' buttons, using a stylus or a ballpoint pen. Pressing a button closes a small gap between two vertically aligned microfluidic channels and allows fluids to wick from one channel to the other. These devices are simple to fabricate and are made entirely out of paper and double-sided adhesive tape. Programmable devices expand the capabilities of µPADs and provide a simple method for controlling the movement of fluids in paper-based channels. They are the conceptual equivalent of the field-programmable gate arrays (FPGAs) widely used in electronics.
Abstract:
Composition and orientation effects on the final recrystallization texture of three coarse-grained Nb-containing AISI 430 ferritic stainless steels (FSSs) were investigated. Hot bands of steels containing distinct amounts of niobium, carbon and nitrogen were annealed at 1250 °C for 2 h to promote grain growth. In particular, the amount of Nb in solid solution varies from one grade to another. For purposes of comparison, the texture evolution of a hot-band sheet annealed at 1030 °C for 1 min (finer grain structure) was also investigated. Subsequently, the four sheets were cold rolled to an 80% reduction and then annealed at 800 °C for 15 min. Texture was determined using X-ray diffraction and electron backscatter diffraction (EBSD). Noticeable differences in the final recrystallization texture and microstructure were observed among the four investigated grades. The results suggest that distinct nucleation mechanisms take place within these large grains, leading to the development of different final recrystallization textures.