604 results for semantic workflows
Abstract:
Background: In many experimental pipelines, clustering of multidimensional biological datasets is used to detect hidden structures in unlabelled input data. Taverna is a popular workflow management system used to design and execute scientific workflows and to aid in silico experimentation. The availability of fast unsupervised methods for clustering and visualization in the Taverna platform is important to support data-driven scientific discovery in complex and explorative bioinformatics applications. Results: This work presents a Taverna plugin, the Biological Data Interactive Clustering Explorer (BioDICE), that performs clustering of high-dimensional biological data and provides a nonlinear, topology-preserving projection for the visualization of the input data and their similarities. The core algorithm in the BioDICE plugin is the Fast Learning Self Organizing Map (FLSOM), an improved variant of the Self Organizing Map (SOM) algorithm. The plugin generates an interactive 2D map that allows the visual exploration of multidimensional data and the identification of groups of similar objects. The effectiveness of the plugin is demonstrated on a case study related to chemical compounds. Conclusions: The number and variety of available tools and its extensibility have made Taverna a popular choice for the development of scientific data workflows. This work presents a novel plugin, BioDICE, which adds a data-driven knowledge discovery component to Taverna. BioDICE provides an effective and powerful clustering tool, which can be adopted for the explorative analysis of biological datasets.
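The abstract above names the algorithm family but gives no implementation details. As a rough, hedged illustration of the kind of topology-preserving projection a plain SOM produces (not the FLSOM variant used by BioDICE, whose speed improvements are not described here), a minimal NumPy sketch could look like this; the grid size, learning-rate schedule and Gaussian neighbourhood are illustrative assumptions:

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Self Organizing Map: returns a (rows, cols, dim) weight grid."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))
    # Grid coordinates, used to compute the neighbourhood around the winning cell.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Decaying learning rate and neighbourhood radius (illustrative schedule).
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 1e-3
            # Best matching unit: grid cell whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (rows, cols))
            # Gaussian neighbourhood around the BMU, applied to the whole grid.
            grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_dist2 / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights

# Usage: project each sample onto its best matching unit on the 2D grid.
data = np.random.rand(200, 5)          # e.g. 200 compounds with 5 descriptors
som = train_som(data)
bmus = [np.unravel_index(np.argmin(np.linalg.norm(som - x, axis=-1)), som.shape[:2])
        for x in data]
```

Each input vector is assigned to its best matching unit, so nearby grid cells end up holding similar objects, which is the property an interactive 2D map of this kind exploits for visual exploration.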
Abstract:
Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) or Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools and their adoption is growing in popularity. Statistical methods, machine learning and data mining algorithms have been successfully adopted to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically also requires pre-processing, post-processing and visualisation techniques in complex data workflows. Currently, a main problem for the integrated preprocessing and mining of MRI data is the lack of comprehensive platforms able to avoid the manual invocation of preprocessing and mining tools, which leads to an error-prone and inefficient process. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench, which automates the preprocessing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. K-Surfer automates the steps for importing FreeSurfer data, reducing time costs, eliminating human errors and enabling the design of complex analytics workflows for neuroimage data by leveraging the rich functionalities available in the KNIME workbench.
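K-Surfer itself is a KNIME plug-in and its internals are not described in the abstract. As a hedged sketch of the kind of FreeSurfer import step it automates, the following parses a subject's aseg.stats table, assuming the usual FreeSurfer layout of '#'-prefixed header lines (with a '# ColHeaders' line naming the columns) followed by whitespace-separated data rows; the file path and output file name are assumptions:

```python
import csv
from pathlib import Path

def read_freesurfer_stats(stats_path):
    """Parse a FreeSurfer *.stats table into a list of row dictionaries.

    Assumes the usual layout: comment lines start with '#', column names appear
    on a '# ColHeaders ...' line, and data rows are whitespace-separated.
    """
    headers, rows = None, []
    for line in Path(stats_path).read_text().splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):
            if line.startswith("# ColHeaders"):
                headers = line.split()[2:]
            continue
        values = line.split()
        if headers and len(values) == len(headers):
            rows.append(dict(zip(headers, values)))
    return rows

# Hypothetical usage: export the volume column for one subject to a CSV.
rows = read_freesurfer_stats("subjects/subj01/stats/aseg.stats")  # path is an assumption
with open("aseg_volumes.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["StructName", "Volume_mm3"])
    for r in rows:
        writer.writerow([r["StructName"], r["Volume_mm3"]])
```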
Abstract:
This text extends some ideas presented in a keynote lecture at the 5th Encontro de Tipografia conference, in Barcelos, Portugal, in November 2014. The paper discusses problems of identifying the location and encoding of design decisions, the implications of digital workflows for capturing knowledge generated through design practice, and the consequences of the transformation of production tools into commodities. It concludes with a discussion of the perception of added value in typeface design.
Abstract:
The emergence and development of digital imaging technologies and their impact on mainstream filmmaking is perhaps the most familiar special effects narrative associated with the years 1981-1999. This is in part because some of the questions raised by the rise of the digital still concern us now, but also because key milestone films showcasing advancements in digital imaging technologies appear in this period, including Tron (1982) and its computer-generated image elements, the digital morphing in The Abyss (1989) and Terminator 2: Judgment Day (1991), computer animation in Jurassic Park (1993) and Toy Story (1995), digital extras in Titanic (1997), and ‘bullet time’ in The Matrix (1999). As a result, it is tempting to characterize 1981-1999 as a ‘transitional period’ in which digital imaging processes grow in prominence and technical sophistication, and what we might call ‘analogue’ special effects processes correspondingly become less common. But such a narrative risks eliding the other practices that also shape effects sequences in this period. Indeed, the 1980s and 1990s are striking for the diverse range of effects practices in evidence in both big budget films and lower budget productions, and for the extent to which analogue practices persist independently of or alongside digital effects work in a range of production and genre contexts. The chapter seeks to document and celebrate this diversity and plurality, this sustaining of earlier traditions of effects practice alongside newer processes, this experimentation with materials and technologies old and new in the service of aesthetic aspirations alongside budgetary and technical constraints. The common characterization of the period as a series of rapid transformations in production workflows, practices and technologies will be interrogated in relation to the persistence of certain key figures such as Douglas Trumbull, John Dykstra, and James Cameron, but also through a consideration of the contexts for and influences on creative decision-making. Comparative analyses of the processes used to articulate bodies, space and scale in effects sequences drawn from different generic sites of special effects work, including science fiction, fantasy, and horror, will provide a further frame for the chapter’s mapping of the commonalities and specificities, continuities and variations in effects practices across the period. In the process, the chapter seeks to reclaim analogue processes’ contribution both to moments of explicit spectacle, and to diegetic verisimilitude, in the decades most often associated with the digital’s ‘arrival’.
Abstract:
The aim of this thesis was to develop a proposal for documentation, containing rules and procedures, which Jernström Offset needs in order to acquire the Certified Graphic Production certification. A fundamental part was to study the material issued by Sveriges Grafiska Mediaförening, to clarify the requirements that must be met to obtain the certification. The recommendations found in the CGP material must also be considered. Based on the requirements and recommendations of Certified Graphic Production, a mapping of the workflows at Jernström Offset was performed. It was done by interviewing employees from different departments at the company, to get a clear understanding of the operations carried out throughout the production flow and the final quality follow-up. During our review process we found that a number of changes, in terms of working environment and practices, must be made at the company. As a result, we propose some appropriate actions to be implemented. The documentation was finally written, based on the requirements and recommendations of Certified Graphic Production, and then applied to Jernström Offset’s work procedures.
Abstract:
This thesis is about new digital moving image recording technologies and how they augment the distribution of creativity and the flexibility in moving image production systems, but also impose constraints on how images flow through the production system. The central concept developed in this thesis is ‘creative space’, which links quality and efficiency in moving image production to time for creative work, the capacity of digital tools, user skills and the constitution of digital moving image material. The empirical evidence of this thesis is primarily based on semi-structured interviews conducted with Swedish film and TV production representatives. This thesis highlights the importance of pre-production technical planning and proposes a design management support tool (MI-FLOW) as a way to leverage functional workflows, which are a prerequisite for efficient and cost-effective moving image production.
Abstract:
MyGrid is an e-Science Grid project that aims to help biologists and bioinformaticians perform workflow-based in silico experiments, and to help them automate the management of such workflows through personalisation, notification of change and publication of experiments. In this paper, we describe the architecture of myGrid and how it will be used by the scientist. We then show how myGrid can benefit from agent technologies. We have identified three key uses of agent technologies in myGrid: user agents, able to customize and personalise data; agent communication languages, offering a generic and portable communication medium; and negotiation, allowing multiple distributed entities to reach service level agreements.
Abstract:
As scientific workflows, and the data they operate on, grow in size and complexity, the task of defining how those workflows should execute (which resources to use, where the resources must be in readiness for processing, etc.) becomes proportionally more difficult. While "workflow compilers", such as Pegasus, reduce this burden, a further problem arises: since specifying the details of execution is now automatic, a workflow's results are harder to interpret, as they are partly due to the specifics of execution. By automating the steps between the experiment design and its results, we lose the connection between them, hindering the interpretation of results. To reconnect the scientific data with the original experiment, we argue that scientists should have access to the full provenance of their data, including not only parameters, inputs and intermediary data, but also the abstract experiment, refined into a concrete execution by the "workflow compiler". In this paper, we describe preliminary work on adapting Pegasus to capture the process of workflow refinement in the PASOA provenance system.
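Neither Pegasus's internals nor the PASOA API are reproduced here. As a hedged sketch of the underlying idea, namely recording each refinement step so that a concrete, enactable workflow can be traced back to the abstract experiment design, one could keep a small causal record like the following; the class, field and workflow names are illustrative, not the Pegasus or PASOA schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RefinementStep:
    """One step performed by the 'workflow compiler' (e.g. site selection, data staging)."""
    name: str
    input_workflow: str      # identifier of the workflow before this step
    output_workflow: str     # identifier of the workflow after this step
    parameters: dict
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

@dataclass
class RefinementTrace:
    """Causal chain from the abstract experiment design to the enacted workflow."""
    abstract_workflow: str
    steps: list = field(default_factory=list)

    def record(self, step: RefinementStep):
        self.steps.append(step)

    def explain(self, concrete_workflow: str):
        """Walk backwards from a concrete workflow towards the abstract design."""
        chain, current = [], concrete_workflow
        for step in reversed(self.steps):
            if step.output_workflow == current:
                chain.append(step)
                current = step.input_workflow
        return list(reversed(chain))

# Hypothetical usage with made-up workflow identifiers.
trace = RefinementTrace(abstract_workflow="experiment-abstract-v1")
trace.record(RefinementStep("site-selection", "experiment-abstract-v1",
                            "experiment-mapped-v1", {"site": "cluster-A"}))
trace.record(RefinementStep("add-staging-jobs", "experiment-mapped-v1",
                            "experiment-executable-v1", {"transfer": "gridftp"}))
print([s.name for s in trace.explain("experiment-executable-v1")])
```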
Abstract:
Current scientific applications are often structured as workflows and rely on workflow systems to compile abstract experiment designs into enactable workflows that utilise the best available resources. The automation of this step, and of the workflow enactment, hides the details of how results have been produced. Knowing how compilation and enactment occurred allows results to be reconnected with the experiment design. We investigate how provenance helps scientists connect their results with the actual execution that took place, their original experiment and its inputs and parameters.
Abstract:
A description of a data item's provenance can be provided in different forms, and which form is best depends on the intended use of that description. Because of this, different communities have made quite distinct underlying assumptions in their models for electronically representing provenance. Approaches deriving from the library and archiving communities emphasise agreed vocabulary by which resources can be described and, in particular, assert their attribution (who created the resource, who modified it, where it was stored etc.) The primary purpose here is to provide intuitive metadata by which users can search for and index resources. In comparison, models for representing the results of scientific workflows have been developed with the assumption that each event or piece of intermediary data in a process' execution can and should be documented, to give a full account of the experiment undertaken. These occurrences are connected together by stating where one derived from, triggered, or otherwise caused another, and so form a causal graph. Mapping between the two approaches would be beneficial in integrating systems and exploiting the strengths of each. In this paper, we specify such a mapping between Dublin Core and the Open Provenance Model. We further explain the technical issues to overcome and the rationale behind the approach, to allow the same method to apply in mapping similar schemes.
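The abstract does not reproduce the paper's actual mapping rules. As a hedged illustration of the general shape of such a mapping, turning attribution-style Dublin Core statements into a small OPM-style causal graph of artifacts, processes and agents, a sketch could look like this; the specific rules chosen below are assumptions, not the published mapping:

```python
def dc_to_opm(resource_id, dc):
    """Translate a Dublin Core description into OPM-style nodes and causal edges.

    Illustrative rules only: dc:creator becomes an Agent controlling a creation
    Process that generates the Artifact; dc:source becomes an Artifact from which
    the described resource wasDerivedFrom.
    """
    nodes = {"artifacts": {resource_id}, "processes": set(), "agents": set()}
    edges = []  # (relation, cause, effect) triples read as "effect <relation> cause"

    if "creator" in dc:
        process = f"creation-of-{resource_id}"
        nodes["processes"].add(process)
        nodes["agents"].add(dc["creator"])
        edges.append(("wasControlledBy", dc["creator"], process))
        edges.append(("wasGeneratedBy", process, resource_id))
    if "source" in dc:
        nodes["artifacts"].add(dc["source"])
        edges.append(("wasDerivedFrom", dc["source"], resource_id))
    return nodes, edges

# Hypothetical usage with made-up identifiers.
nodes, edges = dc_to_opm("report-42", {"creator": "A. Author", "source": "dataset-7"})
for relation, cause, effect in edges:
    print(f"{effect} --{relation}--> {cause}")
```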
Abstract:
HydroShare is an online, collaborative system being developed for open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.
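As a hedged illustration of the Resource Data Model idea described above, with system metadata common to every resource and science metadata plus type-specific elements layered on top, a minimal sketch might look like this; the field names and the time-series resource type are assumptions, not HydroShare's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class SystemMetadata:
    """Elements common to every resource (illustrative fields)."""
    resource_id: str
    owner: str
    created: str
    sharing_status: str = "private"

@dataclass
class Resource:
    """A generic resource: system metadata plus descriptive science metadata and files."""
    system: SystemMetadata
    science: dict = field(default_factory=dict)   # Dublin Core-style descriptive metadata
    files: list = field(default_factory=list)

@dataclass
class TimeSeriesResource(Resource):
    """A resource type adding elements specific to time-series data (assumed fields)."""
    site_code: str = ""
    variable: str = ""

# Hypothetical usage.
ts = TimeSeriesResource(
    system=SystemMetadata("res-001", "jdoe", "2014-06-01"),
    science={"title": "Streamflow at gauge 01", "keywords": ["hydrology", "discharge"]},
    files=["streamflow.csv"],
    site_code="USGS-01646500",
    variable="discharge",
)
```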
Abstract:
This futuristic article discusses the shift in academic and research libraries to electronic collections in the context of information access, costs, publication models, and preservation of content. Certain factors currently complicate the shift to electronic formats and challenge their widespread acceptance. Future scenarios spanning skill ecosystems, technologies and workflows, and societal implications are explored as logical outgrowths of present circumstances.
Abstract:
This work argues that linguistic and pragmatic criteria contribute to the recognition of the specificity of legal terms. It therefore starts from the premise that the identification of a terminology is tied to the recognition of the nature and purposes of those who use it in a given field of knowledge, which, in the legal field, becomes evident in the expression of the normativity of the law. The research draws on the foundational concepts of the Communicative Theory of Terminology and Speech Act Theory, on contributions from the Semiotic Theory of Text in the legal domain, and on general foundations of legal science. The corpus of study, from which the validity of the proposed idea is demonstrated, consists of legislative texts. The Brazilian Constitution of 1988 was chosen as the preferred field of research and is examined as an object of the communication established between addresser and addressee within the socio-cultural universe of the legal field. The mechanisms that weave the modal network structuring this type of text are described, considering that the enunciation of the constitutional norm constitutes a legal speech act. This speech act is analysed in the manifestation of norms of three categories: programmatic norms, norms assigning power and competence, and norms of conduct, highlighting the performative character of the verbs that express such norms. After identifying the morphosyntactic and semantic pattern that characterises their sentence structure, the elements that link the verb, its subject and its complements to the purposes of the subject area are analysed, with emphasis on their pragmatic implications. As the research demonstrates, these purposes imprint a character of imperativeness on what is communicated, conferring specificity on the lexical units that make up the sentence structure of the verbs in focus. It is concluded that the performative verb is a primary factor in the process of actualising the specificity of terms in legal language, and it is shown that some of the analysed verbs are genuine candidates for legal terms. Finally, the investigation indicates parameters for the annotation of linguistic elements, morphosyntactic, semantic and pragmatic in nature, for the computerised processing of the language used in Law.
Abstract:
This study discusses some aspects related to the choice of the first programming language in computer science curricula, with particular interest in Pascal and Java. The former is widely adopted to teach programming to novices, while the latter is gaining popularity as a modern, comprehensive language that can be used in many courses throughout an undergraduate computing degree as a tool for teaching everything from basic programming constructs to more advanced topics. Although several problems can be pointed out regarding the teaching of Java as a first programming language, we consider Java a good choice, since (a) it supports important conceptual and technological issues, and (b) it is possible to work around some of the complexities of the Java language and platform to make them more suitable for beginning students. Furthermore, considering the great popularity of Pascal in computing curricula, an eventual adoption of Java leads to another problem: the lack of instructors able to teach object-oriented programming. We suggest that this migration problem from Pascal to Java be addressed through the simplification of the program development environment, the use of a package of classes that facilitate input and output, and the development of a comparative catalogue of programs implemented in both languages. This study also presents JEduc, a very simple IDE aimed at supporting the teaching of the Java object-oriented programming language to novices. It offers components developed in Java that integrate the editing, compilation and execution of Java programs. Beyond the functionality common to an IDE, JEduc was designed to act as a pedagogical tool: it simplifies most compiler messages and JRE errors, allows the insertion of command skeletons, and incorporates special packages to hide some undesirable syntactic and semantic details.
Abstract:
Workflow management systems are being widely used for modelling and executing organisations' business processes. Typically, these systems interpret a workflow and assign activities to participants, who may use tools and applications to carry them out. Recently, XML has begun to be used as a language for representing processes as well as for interoperation between different workflow engines. Business processes are, for the most part, dynamic and may be modified due to numerous factors, ranging from the correction of errors to adaptation to new laws external to the organisation. Consequently, the corresponding workflows must also evolve to conform to the new process specifications. Some proposals for handling this problem have already been defined, focusing mainly on changes to the control flow. However, for workflows represented in XML, appropriate mechanisms for carrying out such evolution have not yet been defined. This work presents a strategy for the evolution of workflow schemas represented in XML. The strategy is built on the concept of versioning, which allows multiple schema versions to be stored and the history of versions and instances to be queried. Versions are represented according to a language that takes evolution aspects into account. Instances, which are responsible for the individual executions of the versions, are also appropriately modelled. In addition, a method is defined for migrating instances between schema versions when an evolution occurs. The applicability of the proposed strategy is verified by means of a case study.
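As a hedged illustration of the versioning strategy summarised above (stored schema versions, a queryable version and instance history, and instance migration between versions), the sketch below models these concepts in Python rather than in the XML-based language the thesis defines; all class, method and field names are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SchemaVersion:
    number: int
    xml: str                          # XML definition of the workflow for this version
    predecessor: Optional[int] = None

@dataclass
class WorkflowInstance:
    instance_id: str
    schema_version: int
    state: str = "running"

class VersionedWorkflowSchema:
    """Stores every version of a workflow schema and the instances running on each."""

    def __init__(self, name, initial_xml):
        self.name = name
        self.versions = {1: SchemaVersion(1, initial_xml)}
        self.instances = {}

    def evolve(self, new_xml):
        """Register a new schema version derived from the current latest one."""
        latest = max(self.versions)
        self.versions[latest + 1] = SchemaVersion(latest + 1, new_xml, predecessor=latest)
        return latest + 1

    def start(self, instance_id, version=None):
        version = version or max(self.versions)
        self.instances[instance_id] = WorkflowInstance(instance_id, version)

    def migrate(self, instance_id, target_version, compatible=lambda inst, v: True):
        """Move an instance to another schema version if the compatibility check allows."""
        inst = self.instances[instance_id]
        if compatible(inst, target_version):
            inst.schema_version = target_version
        return inst.schema_version

# Hypothetical usage: evolve the schema and migrate a running instance.
wf = VersionedWorkflowSchema("order-handling", "<workflow version='1'>...</workflow>")
wf.start("inst-001")
v2 = wf.evolve("<workflow version='2'>...</workflow>")
wf.migrate("inst-001", v2)
```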