24 results for Information science|Computer science
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
The aim of this paper is to give an overview of the issues and actions concerning Brazilian cultural heritage and then to discuss the contributions and relationships that may be established from the principles of Information Science. The first part addresses the relationship between heritage and the concept of document, the second relates documentary processes to the information scientist, and finally an approach to the mediation and appropriation of cultural heritage is presented.
Abstract:
Developed countries have an even distribution of published papers across the seventeen model organisms. Developing countries have biased preferences for a few model organisms which are associated with endemic human diseases. A variant of the Hirsch index, which we call the mean (mo)h-index ("model organism h-index"), shows an exponential relationship with the number of papers published in each country on the selected model organisms. Developing countries cluster together with low mean (mo)h-indexes, even those with a high number of publications. The growth curves of publications on the recent model Caenorhabditis elegans in developed countries show different shapes. We also analyzed the growth curves of indexed publications originating from developing countries; Brazil and South Korea were selected for this comparison. The most prevalent model organisms in those countries show different growth curves when compared to a global analysis, reflecting the size and composition of their research communities.
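As an informal illustration of how such an index can be computed (a minimal sketch assuming per-paper citation counts are available for each model organism; this is not the authors' code), consider:

```python
# Sketch: per-organism h-index and its mean across organisms for one country.
# The data below are hypothetical, for illustration only.

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def mean_mo_h_index(papers_by_organism):
    """Mean of the per-model-organism h-indexes."""
    if not papers_by_organism:
        return 0.0
    return sum(h_index(c) for c in papers_by_organism.values()) / len(papers_by_organism)

example = {
    "Caenorhabditis elegans": [12, 9, 5, 3, 1],
    "Drosophila melanogaster": [30, 22, 8, 7, 2, 1],
}
print(mean_mo_h_index(example))  # mean of the two per-organism h-indexes
```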
Abstract:
In this paper, we propose a content selection framework that improves the users' experience when they are enriching or authoring pieces of news. This framework combines a variety of techniques to retrieve semantically related videos, based on a set of criteria which are specified automatically depending on the media's constraints. The combination of different content selection mechanisms can improve the quality of the retrieved scenes, because each technique's limitations are minimized by the other techniques' strengths. We present an evaluation based on a number of experiments, which show that the retrieved results are better when all criteria are used at the same time.
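As a rough illustration of what combining several selection mechanisms can look like (the technique names, weights and scores below are hypothetical, not those used in the paper), a minimal sketch:

```python
# Hypothetical sketch of combining the scores produced by several content
# selection techniques; videos supported by several techniques rise to the top.

def combine_rankings(scores_by_technique, weights=None):
    """scores_by_technique: dict technique -> dict video_id -> score in [0, 1]."""
    weights = weights or {t: 1.0 for t in scores_by_technique}
    combined = {}
    for technique, scores in scores_by_technique.items():
        for video_id, score in scores.items():
            combined[video_id] = combined.get(video_id, 0.0) + weights[technique] * score
    return sorted(combined.items(), key=lambda item: item[1], reverse=True)

ranked = combine_rankings({
    "transcript_similarity": {"v1": 0.9, "v2": 0.4},
    "metadata_match":        {"v2": 0.8, "v3": 0.6},
})
print(ranked)
```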
Abstract:
In Information Visualization, adding and removing data elements can strongly impact the underlying visual space. We have developed an inherently incremental technique (incBoard) that maintains a coherent disposition of elements from a dynamic multidimensional data set on a 2D grid as the set changes. Here, we introduce a novel layout that uses pairwise similarity from grid neighbors, as defined in incBoard, to reposition elements on the visual space, free from constraints imposed by the grid. The board continues to be updated and can be displayed alongside the new space. As similar items are placed together, while dissimilar neighbors are moved apart, it supports users in the identification of clusters and subsets of related elements. Densely populated areas identified in the incSpace can be efficiently explored with the corresponding incBoard visualization, which is not susceptible to occlusion. The solution remains inherently incremental and maintains a coherent disposition of elements, even for fully renewed sets. The algorithm considers relative positions for the initial placement of elements, and raw dissimilarity to fine-tune the visualization. It has low computational cost, with complexity depending only on the size of the currently viewed subset, V. Thus, a data set of size N can be sequentially displayed in O(N) time, reaching O(N^2) only if the complete set is simultaneously displayed.
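A minimal sketch of the general idea of incremental, similarity-driven placement (an illustration only, not the incBoard/incSpace algorithm; the fine-tuning step here is a simple stress-style adjustment):

```python
# Illustration: place each new element near its most similar already-placed
# element, then nudge positions so that on-screen distances better reflect
# pairwise dissimilarities.

import random

def place_incrementally(elements, dissimilarity, iterations=10, step=0.1):
    """elements: list of ids; dissimilarity(a, b) -> float. Returns id -> (x, y)."""
    positions = {}
    for e in elements:
        if not positions:
            positions[e] = (0.0, 0.0)
            continue
        nearest = min(positions, key=lambda placed: dissimilarity(e, placed))
        x, y = positions[nearest]
        positions[e] = (x + random.uniform(-1, 1), y + random.uniform(-1, 1))
    ids = list(positions)
    for _ in range(iterations):  # simple stress-style fine-tuning
        for a in ids:
            for b in ids:
                if a == b:
                    continue
                (ax, ay), (bx, by) = positions[a], positions[b]
                dx, dy = bx - ax, by - ay
                dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
                error = dist - dissimilarity(a, b)
                positions[a] = (ax + step * error * dx / dist,
                                ay + step * error * dy / dist)
    return positions

coords = place_incrementally(["a", "b", "c"], lambda p, q: 0.0 if p == q else 1.0)
```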
Abstract:
While watching TV, viewers use the remote control to turn the TV set on and off, change channels and volume, adjust the image and audio settings, etc. Worldwide, research institutes collect information about audience measurement, which can also be used to provide personalization and recommendation services, among others. Interactive digital TV offers viewers the opportunity to interact with interactive applications associated with the broadcast program. Interactive TV infrastructure supports the capture of the user-TV interaction at fine-grained levels. In this paper we propose the capture of all user interaction with a TV remote control, including short-term and instant interactions: we argue that the corresponding captured information can be used to create content pervasively and automatically, and that this content can be used by a wide variety of services, such as audience measurement, personalization and recommendation services. The capture of fine-grained data about instant and interval-based interactions also allows the underlying infrastructure to offer services at the same scale, such as annotation services and adaptive applications. We present the main modules of an infrastructure for TV-based services, along with a detailed example of a document used to record the user-remote control interaction. Our approach is evaluated by means of a proof-of-concept prototype built on the Ginga-NCL middleware of the Brazilian Digital TV System.
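To make the kind of data being captured concrete, here is a hypothetical record structure for a single remote control interaction (the field names are illustrative and do not reproduce the document format described in the paper):

```python
# Hypothetical structure for one captured remote control interaction.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class RemoteControlEvent:
    viewer_id: str        # anonymous identifier of the viewer profile
    timestamp: datetime   # when the key was pressed
    key: str              # e.g. "VOLUME_UP", "CHANNEL_DOWN", "OK"
    channel: str          # channel tuned at the moment of the interaction
    program_id: str       # identifier of the broadcast program, if available

event = RemoteControlEvent("viewer-42", datetime.now(), "VOLUME_UP", "13", "evening-news")
print(event)
```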
Abstract:
Document engineering is the computer science discipline that investigates systems for documents in any form and in all media. As with the relationship between software engineering and software, document engineering is concerned with principles, tools and processes that improve our ability to create, manage, and maintain documents (http://www.documentengineering.org). The ACM Symposium on Document Engineering is an annual meeting of researchers active in document engineering; it is sponsored by ACM through the ACM SIGWEB Special Interest Group. In this editorial, we first point to work carried out in the context of document engineering that is directly related to multimedia tools and applications. We conclude with a summary of the papers presented in this special issue.
Abstract:
Pervasive and ubiquitous computing has motivated research on multimedia adaptation, which aims at matching the video quality to the user's needs and device restrictions. This technique has a high computational cost, which needs to be studied and estimated when designing architectures and applications. This paper presents an analytical model to quantify these video transcoding costs in a hardware-independent way. The model was used to analyze the impact of transcoding delays in end-to-end live-video transmissions over LANs, MANs and WANs. Experiments confirm that the proposed model helps to define the best transcoding architecture for different scenarios.
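To give a sense of where such transcoding costs enter an end-to-end delay budget, one illustrative decomposition (a generic sketch, not the specific analytical model proposed in the paper) is

\[
T_{\mathrm{end\text{-}to\text{-}end}} = T_{\mathrm{capture}} + T_{\mathrm{transcode}} + T_{\mathrm{network}} + T_{\mathrm{playout}},
\qquad
T_{\mathrm{transcode}} \approx N_{\mathrm{frames}} \left( t_{\mathrm{decode}} + t_{\mathrm{scale}} + t_{\mathrm{encode}} \right),
\]

where the per-frame terms would be estimated for a given codec and resolution, independently of the target hardware.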
Abstract:
The literature reports research efforts allowing the editing of interactive TV multimedia documents by end-users. In this article we propose complementary contributions relative to end-user generated interactive video, video tagging, and collaboration. In earlier work we proposed the watch-and-comment (WaC) paradigm as the seamless capture of an individual's comments so that corresponding annotated interactive videos can be automatically generated. As a proof of concept, we implemented a prototype application, the WACTOOL, that supports the capture of digital ink and voice comments over individual frames and segments of the video, producing a declarative document that specifies both the structure and the synchronization of the different media streams. In this article, we extend the WaC paradigm in two ways. First, user-video interactions are associated with edit commands and digital ink operations. Second, focusing on collaboration and distribution issues, we employ annotations as simple containers for context information, using them as tags in order to organize, store and distribute information in a P2P-based multimedia capture platform. We highlight the design principles of the watch-and-comment paradigm and demonstrate related results, including the current version of the WACTOOL and its architecture. We also illustrate how an interactive video produced by the WACTOOL can be rendered in an interactive video environment, the Ginga-NCL player, and include results from a preliminary evaluation.
Abstract:
Policy hierarchies and automated policy refinement are powerful approaches to simplify the administration of security services in complex network environments. A crucial issue for the practical use of these approaches is to ensure the validity of the policy hierarchy: since the policy sets for the lower levels are automatically derived from the abstract policies (defined by the modeller), we must be sure that the derived policies uphold the high-level ones. This paper builds upon previous work on Model-based Management, particularly on the Diagram of Abstract Subsystems approach, and goes further to propose a formal validation approach for the policy hierarchies yielded by the automated policy refinement process. We establish general validation conditions for a multi-layered policy model, i.e. necessary and sufficient conditions that a policy hierarchy must satisfy so that the lower-level policy sets are valid refinements of the higher-level policies according to the criteria of consistency and completeness. Relying upon the validation conditions and upon axioms about the model's representativeness, two theorems are proved to ensure compliance between the resulting system behaviour and the abstract policies that are modelled.
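A highly simplified sketch of the consistency/completeness intuition (an illustration over plain permission tuples with a hypothetical host-to-group mapping, much simpler than the formal model used in the paper):

```python
# Consistency: every refined permission is covered by an abstract policy.
# Completeness: every abstract policy is realized by at least one refinement.

def lift(permission, host_to_group):
    """Map a concrete (subject, target, action) permission to its abstract form."""
    subject, target, action = permission
    return (host_to_group.get(subject, subject), host_to_group.get(target, target), action)

def is_consistent(abstract_set, refined_set, host_to_group):
    return all(lift(p, host_to_group) in abstract_set for p in refined_set)

def is_complete(abstract_set, refined_set, host_to_group):
    return abstract_set <= {lift(p, host_to_group) for p in refined_set}

groups = {"host1": "web_servers", "host2": "web_servers"}
abstract = {("admins", "web_servers", "ssh")}
refined = {("admins", "host1", "ssh"), ("admins", "host2", "ssh")}
print(is_consistent(abstract, refined, groups), is_complete(abstract, refined, groups))
```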
Abstract:
In this paper, we consider a classical problem of complete test generation for deterministic finite-state machines (FSMs) in a more general setting. The first generalization is that the number of states in implementation FSMs can even be smaller than that of the specification FSM. Previous work deals only with the case in which the implementation FSMs are allowed to have the same number of states as the specification FSM. This generalization provides more options to the test designer: when traditional methods trigger a test explosion for large specification machines, tests with a lower, but still guaranteed, fault coverage can still be generated. The second generalization is that tests can be generated starting with a user-defined test suite, by incrementally extending it until the desired fault coverage is achieved. Solving the generalized test derivation problem, we formulate sufficient conditions for test suite completeness that are weaker than the existing ones and use them to elaborate an algorithm that can be used both for extending user-defined test suites to achieve the desired fault coverage and for test generation. We present experimental results indicating that the proposed algorithm allows a trade-off to be obtained between the length and the fault coverage of test suites.
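For background, a minimal sketch of how the fault coverage of a test suite can be estimated, assuming completely specified deterministic FSMs with known initial states and a set of faulty implementations ("mutants") standing in for the fault model (an illustration, not the algorithm proposed in the paper):

```python
def run(fsm, initial_state, inputs):
    """fsm: dict (state, input) -> (next_state, output). Returns the output sequence."""
    state, outputs = initial_state, []
    for symbol in inputs:
        state, out = fsm[(state, symbol)]
        outputs.append(out)
    return outputs

def fault_coverage(spec, spec_init, mutants, test_suite):
    """Fraction of faulty implementations distinguished from the specification."""
    if not mutants:
        return 1.0
    detected = sum(
        1 for impl, impl_init in mutants
        if any(run(spec, spec_init, t) != run(impl, impl_init, t) for t in test_suite)
    )
    return detected / len(mutants)
```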
Abstract:
Ubiquitous computing aims at providing services to users in everyday environments such as the home. One research theme in this area is that of building capture and access applications which support information to be recorded (captured) during a live experience toward automatically producing documents for review (accessed). The recording demands instrumented environments with devices such as microphones, cameras, sensors and electronic whiteboards. Since each experience is usually related to many others (e.g. several meetings of a project), there is a demand for mechanisms supporting the automatic linking among documents relative to different experiences. In this paper we present original results relative to the integration of our previous efforts in the Infrastructure for Capturing, Accessing, Linking, Storing and Presenting information (CALiSP).
Abstract:
Most physiological effects of thyroid hormones are mediated by the two thyroid hormone receptor subtypes, TR alpha and TR beta. Several pharmacological effects mediated by TR beta might be beneficial in important medical conditions such as obesity, hypercholesterolemia and diabetes, and selective TR beta activation may elicit these effects while maintaining an acceptable safety profile. To understand the molecular determinants of affinity and subtype selectivity of TR ligands, we have successfully employed a ligand- and structure-guided pharmacophore-based approach to obtain the molecular alignment of a large series of thyromimetics. Statistically reliable three-dimensional quantitative structure-activity relationship (3D-QSAR) and three-dimensional quantitative structure-selectivity relationship (3D-QSSR) models were obtained using the comparative molecular field analysis (CoMFA) method, and the visual analyses of the contour maps drew attention to a number of possible opportunities for the development of analogs with improved affinity and selectivity. Furthermore, the 3D-QSSR analysis allowed the identification of a novel and previously unreported halogen bond, bringing new insights into the mechanism of activity and selectivity of thyromimetics.
Structure-Based Approach for the Study of Estrogen Receptor Binding Affinity and Subtype Selectivity
Abstract:
Estrogens exert important physiological effects through the modulation of two human estrogen receptor (hER) subtypes, alpha (hER alpha) and beta (hER beta). Because the levels and relative proportion of hER alpha and hER beta differ significantly in different target cells, selective hER ligands could target specific tissues or pathways regulated by one receptor subtype without affecting the other. To understand the structural and chemical basis by which small molecule modulators are able to discriminate between the two subtypes, we have applied three-dimensional target-based approaches employing a series of potent hER ligands. Comparative molecular field analysis (CoMFA) studies were applied to a data set of 81 hER modulators, for which binding affinity values were collected for both hER alpha and hER beta. Significant statistical coefficients were obtained (hER alpha, q(2) = 0.76; hER beta, q(2) = 0.70), indicating the internal consistency of the models. The generated models were validated using external test sets, and the predicted values were in good agreement with the experimental results. Five hER crystal structures were used in GRID/PCA investigations to generate molecular interaction field (MIF) maps. hER alpha and hER beta were separated using one factor. The resulting 3D information was integrated with the aim of revealing the most relevant structural features involved in hER subtype selectivity. The final QSAR and GRID/PCA models and the information gathered from the 3D contour maps should be useful for the design of novel hER modulators with improved selectivity.
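For reference, the cross-validated correlation coefficient q(2) reported above is commonly computed via leave-one-out cross-validation as

\[
q^2 = 1 - \frac{\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}{\sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2},
\]

where \(\hat{y}_i\) is the activity of compound \(i\) predicted by a model built without compound \(i\) and \(\bar{y}\) is the mean observed activity; this is the usual textbook definition, given here only as background.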
Abstract:
The glycolytic enzyme glyceraldehyde-3-phosphate dehydrogenase (GAPDH) is an attractive target for the development of novel antitrypanosomatid agents. In the present work, comparative molecular field analysis and comparative molecular similarity index analysis were conducted on a large series of selective inhibitors of trypanosomatid GAPDH. Four statistically significant models were obtained (r(2) > 0.90 and q(2) > 0.70), indicating their predictive ability for untested compounds. The models were then used to predict the potency of an external test set, and the predicted values were in good agreement with the experimental results. Molecular modeling studies provided further insight into the structural basis for selective inhibition of trypanosomatid GAPDH.
Abstract:
Schistosomiasis is considered the second most important tropical parasitic disease, with severe socioeconomic consequences for millions of people worldwide. Schistosoma mansoni, one of the causative agents of human schistosomiasis, is unable to synthesize purine nucleotides de novo, which makes the enzymes of the purine salvage pathway important targets for antischistosomal drug development. In the present work, we describe the development of a pharmacophore model for ligands of S. mansoni purine nucleoside phosphorylase (SmPNP) as well as a pharmacophore-based virtual screening approach, which resulted in the identification of three thioxothiazolidinones (1-3) with substantial in vitro inhibitory activity against SmPNP. Synthesis, biochemical evaluation, and structure-activity relationship investigations led to the successful development of a small set of thioxothiazolidinone derivatives harboring a novel chemical scaffold as new competitive inhibitors of SmPNP in the low-micromolar range. Seven compounds were identified with IC(50) values below 100 μM. The most potent inhibitors, 7, 10, and 17, with IC(50) values of 2, 18, and 38 μM, respectively, could represent new potential lead compounds for the further development of schistosomiasis therapy.