924 results for Scenario Programming, Markup Language, End User Programming


Relevance: 100.00%

Publisher:

Abstract:

The UNESP Institutional Repository was created in 2013 and, for its deployment, was populated with data obtained automatically. Drawing on the experience at UNESP, this work presents the processes used to convert records harvested from three different data sources (Web of Science, SciELO, and Scopus) for inclusion in the repository. Once the records had been harvested, the metadata standards of Web of Science, SciELO, and Scopus were mapped to the metadata application profile used in the repository. The records were collected as XML files and, for their conversion, stylesheets were written in the XSLT language. After this transformation, the XML files were converted into CSV files and then imported into the repository. We conclude that the conversion processes employed made it possible to reach the repository's initial goals and avoided having to enter the records manually.
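
As a rough illustration of the pipeline described above, the sketch below applies an XSLT stylesheet to a harvested record using Python's lxml library. The element names (REC, title) and the target field dc.title are placeholders, not the actual Web of Science, SciELO, or Scopus schemas or the UNESP application profile.

```python
from lxml import etree  # third-party; pip install lxml

# A minimal XSLT stylesheet mapping one source element to one
# repository metadata field. Real stylesheets map the full schema;
# the transformed XML was then flattened to CSV before import.
XSLT_SHEET = etree.XML(b"""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/REC">
    <record>
      <field name="dc.title"><xsl:value-of select="title"/></field>
    </record>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(XSLT_SHEET)
harvested = etree.XML(b"<REC><title>Sample article</title></REC>")
print(str(transform(harvested)))  # <record><field name="dc.title">...</field></record>
```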

Relevance: 100.00%

Publisher:

Abstract:

Graduate Program in Mechanical Engineering - FEG

Relevance: 100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Publisher:

Abstract:

Recently, there has been growing interest in developing optical fiber networks to support the increasing bandwidth demands of multimedia applications, such as video conferencing and World Wide Web browsing. One technique for accessing the huge bandwidth available in an optical fiber is wavelength-division multiplexing (WDM). Under WDM, the optical fiber bandwidth is divided into a number of nonoverlapping wavelength bands, each of which may be accessed at peak electronic rates by an end user. By utilizing WDM in optical networks, we can achieve link capacities on the order of 50 THz. The success of WDM networks depends heavily on the available optical device technology. This paper is intended as a tutorial on some of the optical device issues in WDM networks. It discusses the basic principles of optical transmission in fiber and reviews the current state of the art in optical device technology. It introduces some of the basic components in WDM networks, discusses various implementations of these components, and provides insights into their capabilities and limitations. The paper then demonstrates how various optical components can be incorporated into WDM optical networks for both local- and wide-area applications. Finally, it provides a brief review of experimental WDM networks that have been implemented.
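
To make the capacity claim concrete, here is a back-of-the-envelope calculation of aggregate WDM link capacity. The channel count, per-channel rate, and grid spacing below are assumptions for illustration, not figures from the paper.

```python
# Illustrative WDM capacity arithmetic. The low-loss fiber window is
# on the order of 50 THz (per the abstract); a WDM system occupies a
# slice of it with electronically accessible channels.
channels = 80       # wavelengths on the grid (assumed)
rate_gbps = 40      # electronic line rate per wavelength (assumed)
spacing_ghz = 50    # channel spacing on the grid (assumed)

aggregate_gbps = channels * rate_gbps
occupied_thz = channels * spacing_ghz / 1000

print(f"Aggregate capacity: {aggregate_gbps / 1000:.1f} Tb/s "
      f"over {occupied_thz:.1f} THz of the ~50 THz fiber window")
```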

Relevance: 100.00%

Publisher:

Abstract:

Computer and telecommunication networks are changing the world dramatically and will continue to do so in the foreseeable future. The Internet, primarily based on packet switches, provides very flexible data services such as e-mail and access to the World Wide Web. The Internet is a variable-delay, variable-bandwidth network that, in its initial phase, provided no guarantee on quality of service (QoS). New services are being added to yesterday's pure data-delivery framework, and their high demands on capacity could lead to a "bandwidth crunch" at the core wide-area network, resulting in degradation of service quality. Fortunately, technological innovations have emerged which can provide relief to the end user and overcome the Internet's well-known delay and bandwidth limitations. At the physical layer, a major overhaul of existing networks has been envisaged, from electronic media (e.g., twisted pair and cable) to optical fiber, in wide-area, metropolitan-area, and even local-area settings. To exploit the immense bandwidth potential of optical fiber, interesting multiplexing techniques have been developed over the years.

Relevance: 100.00%

Publisher:

Abstract:

Lightpath scheduling is an important capability in next-generation wavelength-division multiplexing (WDM) optical networks for reserving resources in advance for a specified time period while provisioning end-to-end lightpaths. In a dynamic environment, end-user requests for dynamic scheduled lightpath demands (D-SLDs) need to be serviced without knowledge of future requests. Even though the starting time of a request may be hours or days away, the end user nevertheless expects a quick response as to whether the request can be satisfied. We propose a two-phase approach to dynamically schedule and provision D-SLDs. In the first phase, termed the deterministic lightpath scheduling phase, upon arrival of a lightpath request the network control plane schedules a path with guaranteed resources, so that the user gets a quick response with a deterministic lightpath schedule. In the second phase, termed the lightpath re-optimization phase, we re-provision some already-scheduled lightpaths to improve network performance. We study two re-optimization scenarios for reallocating network resources while maintaining the existing lightpath schedules. Experimental results show that our proposed two-phase dynamic lightpath scheduling approach can greatly reduce network blocking.
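
A minimal sketch of the idea behind the first (deterministic scheduling) phase follows: admit a demand if some wavelength is free on every link of a candidate path for the whole requested interval, honoring wavelength continuity. The function names, the single-candidate-path simplification, and the first-fit wavelength choice are our assumptions, not the paper's algorithm.

```python
def overlaps(a_start, a_end, b_start, b_end):
    # Two half-open time intervals intersect.
    return a_start < b_end and b_start < a_end

def schedule_demand(reservations, path, wavelengths, start, end):
    """reservations: {(link, wavelength): [(start, end), ...]}.
    Returns the granted wavelength, or None if the demand is blocked."""
    for w in range(wavelengths):  # first-fit over the wavelength set
        if all(not any(overlaps(start, end, s, e)
                       for s, e in reservations.get((link, w), []))
               for link in path):  # same wavelength free on every link
            for link in path:      # reserve it for the whole interval
                reservations.setdefault((link, w), []).append((start, end))
            return w               # deterministic schedule for the user
    return None  # blocked; phase two would re-optimize existing lightpaths

reservations = {}
print(schedule_demand(reservations, ["A-B", "B-C"], 4, start=10, end=20))  # 0
print(schedule_demand(reservations, ["B-C", "C-D"], 4, start=15, end=25))  # 1
```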

Relevance: 100.00%

Publisher:

Abstract:

XML similarity evaluation has become a central issue in the database and information communities, with applications ranging over document clustering, version control, data integration, and ranked retrieval. Various algorithms for comparing hierarchically structured data, XML documents in particular, have been proposed in the literature. Most of them make use of techniques for finding the edit distance between tree structures, XML documents being commonly modeled as ordered labeled trees. Yet a thorough investigation of current approaches led us to identify several similarity aspects, namely sub-tree-related structural and semantic similarities, which are not sufficiently addressed when comparing XML documents. In this paper, we provide an integrated and fine-grained comparison framework to deal with both structural and semantic similarities in XML documents (detecting the occurrences and repetitions of structurally and semantically similar sub-trees), and to allow the end user to adjust the comparison process according to her requirements. Our framework consists of four main modules for (i) discovering the structural commonalities between sub-trees, (ii) identifying sub-tree semantic resemblances, (iii) computing tree-based edit operation costs, and (iv) computing tree edit distance. Experimental results demonstrate higher comparison accuracy with respect to alternative methods, while timing experiments reflect the impact of semantic similarity on overall system performance.
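
For readers unfamiliar with tree edit distance, the sketch below implements the standard recursion for ordered labeled trees with unit costs (memoized, without the Zhang-Shasha optimizations). It illustrates only the structural side; the semantic costs and sub-tree commonality modules of the framework above are not modeled.

```python
from functools import lru_cache

# A tree is (label, children); children is a tuple of trees;
# a forest is a tuple of trees. Insert/delete/rename all cost 1.

@lru_cache(maxsize=None)
def forest_dist(f, g):
    if not f and not g:
        return 0
    if not f:
        *rest, (_, kids) = g
        return forest_dist(f, tuple(rest) + kids) + 1  # insert a node
    if not g:
        *rest, (_, kids) = f
        return forest_dist(tuple(rest) + kids, g) + 1  # delete a node
    *f_rest, (f_label, f_kids) = f
    *g_rest, (g_label, g_kids) = g
    rename = 0 if f_label == g_label else 1
    return min(
        forest_dist(tuple(f_rest) + f_kids, g) + 1,  # delete rightmost root of f
        forest_dist(f, tuple(g_rest) + g_kids) + 1,  # insert rightmost root of g
        forest_dist(tuple(f_rest), tuple(g_rest))    # match the rightmost trees
        + forest_dist(f_kids, g_kids) + rename,
    )

def tree_edit_distance(t1, t2):
    return forest_dist((t1,), (t2,))

t1 = ("article", (("title", ()), ("body", (("p", ()), ("p", ())))))
t2 = ("article", (("title", ()), ("section", (("p", ()),))))
print(tree_edit_distance(t1, t2))  # 2: rename body -> section, delete one p
```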

Relevance: 100.00%

Publisher:

Abstract:

Current commercial and academic OLAP tools do not process XML data that contains XLink. To overcome this limitation, this paper proposes an analytical system built around LMDQL, an analytical query language. The XLDM metamodel is also introduced to model cubes of XML documents with XLink and to deal with the syntactic, semantic, and structural heterogeneities commonly found in XML documents. Since current W3C query languages for navigating XML documents do not support XLink, XLPath is discussed here to provide the features needed for LMDQL query processing. A prototype system enabling the analytical processing of XML documents that use XLink is also detailed. This prototype includes a driver, named sql2xquery, which maps SQL queries into XQuery. To validate the proposed system, a case study and its performance evaluation are presented, analyzing the impact of analytical processing over XML/XLink documents.
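
The following toy translator suggests the kind of rewriting a SQL-to-XQuery driver such as sql2xquery must perform. The actual mapping rules of the prototype are not reproduced here; this regex-based sketch is our own and handles only one SQL shape.

```python
import re

def sql_to_xquery(sql):
    """Translate "SELECT col FROM table WHERE col = 'value'" into a
    FLWOR expression. The doc() naming convention and the //table/row
    layout are illustrative assumptions about the source documents."""
    m = re.fullmatch(
        r"SELECT (\w+) FROM (\w+) WHERE (\w+) = '([^']*)'", sql.strip())
    if not m:
        raise ValueError("unsupported SQL shape")
    col, table, where_col, value = m.groups()
    return (f"for $row in doc('{table}.xml')//{table}/row\n"
            f"where $row/{where_col} = '{value}'\n"
            f"return $row/{col}")

print(sql_to_xquery("SELECT name FROM customers WHERE id = '42'"))
```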

Relevance: 100.00%

Publisher:

Abstract:

Background: Recent advances in medical and biological technology have stimulated the development of new testing systems that provide huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information-processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes.

Results: This paper describes a formal approach to address this challenge through the implementation of a genetic-testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent, validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: 1) process scalability, achieved through the relational-database implementation, and 2) correctness of processes, ensured by process algebra. Furthermore, the software allows end users to define genetic tests without requiring any knowledge of business-process notation or process algebra.

Conclusions: This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic-testing management for Mendelian disorder studies. We have demonstrated the feasibility and shown the usability benefits of a rigorous approach that can specify, validate, and perform genetic testing through easy end-user interfaces.
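
As a rough illustration of ACP-style control-flow specification, the sketch below composes laboratory steps with sequential composition and nondeterministic choice, then enumerates the resulting traces. The operators and the workflow are invented for illustration; CEGH stores real processes in a relational database and validates them against ACP, which this sketch omits.

```python
class Action:
    def __init__(self, name):
        self.name = name
    def traces(self):
        yield (self.name,)

class Seq:  # sequential composition (· in ACP)
    def __init__(self, p, q):
        self.p, self.q = p, q
    def traces(self):
        for a in self.p.traces():
            for b in self.q.traces():
                yield a + b

class Alt:  # nondeterministic choice (+ in ACP)
    def __init__(self, p, q):
        self.p, self.q = p, q
    def traces(self):
        yield from self.p.traces()
        yield from self.q.traces()

# Hypothetical genetic-testing workflow: extract DNA, then either
# sequence or genotype, then issue the report.
test = Seq(Action("extract_dna"),
           Seq(Alt(Action("sequence"), Action("genotype")),
               Action("report")))

for t in test.traces():
    print(" -> ".join(t))
```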

Relevance: 100.00%

Publisher:

Abstract:

Cooperation and the sharing of cataloguing and bibliographic information in automated environments became possible with the creation and adoption of the MARC21 interchange format. With the advances in information and communication technologies, the growing use of the Internet, and the spread of databases and databanks, however, new tools became necessary to optimize the activities of organizing, retrieving, and interchanging information. XML is one such development, whose purpose is to facilitate the management, storage, and transmission of data over the Internet. Against this background, this work proposes, through a literature review, to analyze the MARC21 interchange format and the XML markup language as tools for consolidating automated cooperative cataloguing, comparing their flexibility for the storage, organization, retrieval, and interchange of data over the Internet. The research disseminates to the library community the international discussion on MARC21 and XML.
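
By way of illustration, the snippet below serializes a minimal bibliographic record following the MARCXML convention, the XML binding of MARC21 maintained by the Library of Congress. Fields 100 and 245 are the standard MARC21 author and title fields; the record content is invented.

```python
import xml.etree.ElementTree as ET

NS = "http://www.loc.gov/MARC21/slim"  # the MARCXML namespace
ET.register_namespace("", NS)

record = ET.Element(f"{{{NS}}}record")
for tag, code, value in [("100", "a", "Machado de Assis"),   # author
                         ("245", "a", "Dom Casmurro")]:      # title
    field = ET.SubElement(record, f"{{{NS}}}datafield",
                          {"tag": tag, "ind1": " ", "ind2": " "})
    sub = ET.SubElement(field, f"{{{NS}}}subfield", {"code": code})
    sub.text = value

print(ET.tostring(record, encoding="unicode"))
```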

Relevance: 100.00%

Publisher:

Abstract:

In many countries buildings are responsible for a substantial part of energy consumption, which varies according to their energetic and environmental performance. The potential for major reductions in building energy consumption has been well documented in Brazil. Opportunities have been identified throughout the life cycle of buildings, partly because designs are reused in diverse locations without the proper adjustments. This article offers a reflection on design processes and how they can be conducted in an integrated way, favoring the use of natural resources and lowering energy consumption. It concludes by indicating that the longest phase in the life cycle of a building is also the phase responsible for its largest energy consumption, not only because of its duration but also because of the interaction with the end user. Therefore, in order to harvest the energy-cost-reduction potential of future buildings, designers need a holistic view of the surroundings, end users, materials, and methodologies.

Relevance: 100.00%

Publisher:

Abstract:

In electronic commerce, systems development is based on two fundamental types of models, business models and process models. A business model is concerned with value exchanges among business partners, while a process model focuses on operational and procedural aspects of business communication. Thus, a business model defines the what in an e-commerce system, while a process model defines the how. Business process design can be facilitated and improved by a method for systematically moving from a business model to a process model. Such a method would provide support for traceability, evaluation of design alternatives, and a seamless transition from analysis to realization. This work proposes a unified framework that can be used as a basis to analyze, interpret, and understand the different concepts associated with different stages of e-commerce system development. In this thesis, we illustrate how UN/CEFACT's recommended metamodels for business and process design can be analyzed, extended, and then integrated into final solutions based on the proposed unified framework. As an application of the framework, we also demonstrate how process-modeling tasks can be facilitated in e-commerce system design. The proposed methodology is called BP3, which stands for Business Process Patterns Perspective. The BP3 methodology uses a question-answer interface to capture different business requirements from the designers. It is based on pre-defined process patterns, and the final solution is generated by applying the captured business requirements by means of a set of production rules to complete the inter-process communication among these patterns.
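
A toy rendering of the question-answer idea: answers to design questions select pre-defined process patterns through production rules. The questions, patterns, and rules below are invented for illustration; the thesis defines its own pattern catalogue and a richer rule mechanism.

```python
# Each production rule pairs a condition over the designer's answers
# with the process pattern it activates (all names hypothetical).
RULES = [
    (lambda a: a["payment_before_delivery"], "prepayment_pattern"),
    (lambda a: not a["payment_before_delivery"], "invoice_pattern"),
    (lambda a: a["physical_goods"], "shipping_pattern"),
]

def derive_process(answers):
    # Apply every rule whose condition holds for the captured answers.
    return [pattern for condition, pattern in RULES if condition(answers)]

answers = {"payment_before_delivery": True, "physical_goods": True}
print(derive_process(answers))  # ['prepayment_pattern', 'shipping_pattern']
```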

Relevance: 100.00%

Publisher:

Abstract:

This thesis proposes a new document model, according to which any document can be segmented into independent components and transformed into a pattern-based projection that uses only a very small set of objects and composition rules. The point is that such a normalized document expresses the same fundamental information as the original one, in a simple, clear, and unambiguous way. The central part of my work consists of discussing that model, investigating how a digital document can be segmented, and how a segmented version can be used to implement advanced conversion tools. I present seven patterns which are versatile enough to capture the most relevant document structures, and whose minimality and rigour make that implementation possible. The abstract model is then instantiated into an actual markup language, called IML. IML is a general and extensible language, which basically adopts an XHTML syntax, able to capture, a posteriori, only the content of a digital document. It is compared with other languages and proposals in order to clarify its role and objectives. Finally, I present some systems built upon these ideas. These applications are evaluated in terms of user advantages, workflow improvements, and impact on the overall quality of the output. In particular, they cover heterogeneous content management processes: from web editing to collaboration (IsaWiki and WikiFactory), from e-learning (IsaLearning) to professional printing (IsaPress).

Relevance: 100.00%

Publisher:

Abstract:

The miniaturization race in the hardware industry, aimed at continually increasing transistor density on a die, no longer brings corresponding application performance improvements. One of the most promising alternatives is to exploit the heterogeneous nature of common applications in hardware. Supported by reconfigurable computation, which has already proved its efficiency in accelerating data-intensive applications, this concept promises a breakthrough in contemporary technology development. Memory organization in such heterogeneous reconfigurable architectures becomes very critical. Two primary aspects introduce a sophisticated trade-off. On the one hand, the memory subsystem should provide a well-organized distributed data structure and guarantee the required data bandwidth. On the other hand, it should hide the heterogeneous hardware structure from the end user in order to support feasible high-level programmability of the system. This thesis explores heterogeneous reconfigurable hardware architectures and presents possible solutions to the problem of memory organization and data structure. Using the MORPHEUS heterogeneous platform as an example, the discussion follows the complete design cycle, from decision making and justification to hardware realization. Particular emphasis is placed on methods to support high system performance, meet application requirements, and provide a user-friendly programmer interface. As a result, the research introduces a complete heterogeneous platform enhanced with a hierarchical memory organization, which accomplishes its task by separating computation from communication, supplying the reconfigurable engines with computation and configuration data, and unifying the heterogeneous computational devices through local storage buffers. It is distinguished from related solutions by its distributed data-flow organization, specifically engineered mechanisms for operating on data in local domains, a communication infrastructure based on a Network-on-Chip, and thorough methods to prevent computation and communication stalls. In addition, a novel technique to accelerate memory access was developed and implemented.
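
As a software analogy for the "separate computation from communication" principle, the sketch below overlaps a prefetch thread with a compute loop through a two-slot (ping-pong) buffer. All names are illustrative; the real platform realizes this in hardware, with local storage buffers feeding the reconfigurable engines over a Network-on-Chip.

```python
import queue
import threading

def prefetch(blocks, buffers):
    for block in blocks:      # communication: stream data in
        buffers.put(block)
    buffers.put(None)         # end-of-stream marker

def compute(buffers):
    while (block := buffers.get()) is not None:
        print("processing", block)  # computation overlaps the next fetch

buffers = queue.Queue(maxsize=2)    # two slots = ping-pong buffer
t = threading.Thread(target=prefetch, args=(range(4), buffers))
t.start()
compute(buffers)
t.join()
```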

Relevance: 100.00%

Publisher:

Abstract:

As distributed collaborative applications and architectures adopt policy-based management for tasks such as access control, network security, and data privacy, the management and consolidation of a large number of policies is becoming a crucial component of such policy-based systems. In large-scale distributed collaborative applications like web services, there is a need to analyze policy interactions and to integrate policies. In this thesis, we propose and implement EXAM-S, a comprehensive environment for policy analysis and management, which can be used to perform a variety of functions such as policy property analysis, policy similarity analysis, and policy integration. As part of this environment, we propose and implement new techniques for the analysis of policies that build on a deep study of state-of-the-art techniques. Moreover, we propose an approach for solving the heterogeneity problems that usually arise when analyzing policies belonging to different domains. Our work focuses on the analysis of access control policies written in the XACML dialect (eXtensible Access Control Markup Language). We consider XACML policies because XACML is a rich language that can represent many policies of interest to real-world applications and is gaining widespread adoption in industry.
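
To ground the discussion, here is a minimal attribute-based evaluation loop in the spirit of XACML: each policy has a target (attribute constraints) and an effect. This is our own simplification; real XACML adds rule conditions, combining algorithms, and obligations, and EXAM-S analyzes such policies rather than merely evaluating requests against them.

```python
# Toy policies: a target maps attribute names to required values.
POLICIES = [
    {"target": {"role": "doctor", "action": "read"}, "effect": "Permit"},
    {"target": {"action": "delete"}, "effect": "Deny"},
]

def evaluate(request, policies, default="Deny"):
    for policy in policies:
        # A policy applies when every target attribute matches the request.
        if all(request.get(k) == v for k, v in policy["target"].items()):
            return policy["effect"]  # first-applicable combining (assumed)
    return default                   # deny by default

print(evaluate({"role": "doctor", "action": "read"}, POLICIES))   # Permit
print(evaluate({"role": "nurse", "action": "delete"}, POLICIES))  # Deny
```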