604 results for semantic workflows
Abstract:
Presentation from the MARAC conference in Roanoke, VA on October 7–10, 2015. S6 - Digital Archives: New Colleagues, New Solutions.
Abstract:
Today the amount of information available to the world is immense, and most of it is just a click away thanks to information technologies. Many of the resources on the Internet are written by people and for people, but this fact brings many limitations, such as language, content, the expressions used in communication, or how the information is laid out in the text. All of these factors affect how well or poorly a reader can grasp the concepts, relationships, and ideas being expressed. One widely used resource today is Wikipedia, which has more than five million articles in English and more than a million in twelve other languages, including Spanish, French, and German. On the other hand, there are resources that provide information in forms that are more interesting from a computational point of view, such as ConceptNet or WordNet. The advantages of this kind of resource are that they do not span multiple languages (the knowledge is unified in a single one), they have no textual structure, and the insertion of new information can be automated more easily, which translates into faster growth of the knowledge base. Such resources are ideal for use in software applications because no information-extraction step is needed on the source. However, this kind of information is not designed for human reading: a reader would face a mass of data all at once, without a logical order for comprehension, and lacking proper inflection or translation into a specific language. The main objective of this work is to start from an information resource that is neither readable nor manageable by humans and is designed for use by computers, and to produce an interpretation of that information that allows people to read and understand it in natural language.
We can see it as a work that enables and facilitates machine-human understanding. To this end, it makes use of a natural language generation system, artificial intelligence, and computational creativity. Moreover, this work is part of a larger project, discussed in section 2.5, in which new concepts are generated from existing ones. The role of this application is to describe the newly generated concepts so that they can be understood. When tackling the problem of text generation, there are several ways to attack the question, and all of the solutions can be considered valid. Systems of differing complexity and nature will be implemented, such as basic text generators and generators with planning, along with other solutions common in this field, such as the use of templates and the study of the properties of human-written texts. For this reason, several methods will be developed in this work and evaluated against criteria such as the clarity of the text, its organization, and whether grammar and spelling have been used correctly. As secondary objectives of this project, we can highlight the creation of a web service that makes the application available for use and brings value both to the research community and to the world of knowledge. Similarity to human-generated texts is also assessed.
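The core pipeline this abstract describes, turning machine-oriented knowledge triples into readable sentences, can be illustrated with a minimal template-based generator. This is a hedged sketch only: the relation names and templates below are illustrative in the style of ConceptNet, not the project's actual inventory.

```python
# Minimal template-based NLG sketch: render concept triples as English sentences.
# The relations and templates are hypothetical, not taken from the project.

TEMPLATES = {
    "IsA": "{subject} is a kind of {object}.",
    "UsedFor": "{subject} is used for {object}.",
    "PartOf": "{subject} is part of {object}.",
}

def realize(triple):
    """Turn a (subject, relation, object) triple into one sentence."""
    subject, relation, obj = triple
    template = TEMPLATES.get(relation)
    if template is None:
        # Fallback for relations without a template.
        return f"{subject.capitalize()} is related to {obj}."
    return template.format(subject=subject.capitalize(), object=obj)

def describe(triples):
    """Aggregate several triples into a short descriptive paragraph."""
    return " ".join(realize(t) for t in triples)

print(describe([("guitar", "IsA", "musical instrument"),
                ("guitar", "UsedFor", "playing music")]))
# → Guitar is a kind of musical instrument. Guitar is used for playing music.
```

A real system would add the planning and surface-realization stages the abstract mentions (ordering the triples, choosing referring expressions, inflecting verbs), but the template stage above is the usual starting point.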
Abstract:
The incidence of semantic and phonological stimuli in word-production processes is addressed, based on data obtained from naming tasks with an aphasic speaker exhibiting anomic characteristics. The study yields findings on the nature of the lexicon, the debate between serial and direct-access (connectionist) models of language production, and the role of phonological and syllabic word length in lexical retrieval.
A Digital Collection Center's Experience: ETD Discovery, Promotion, and Workflows in Digital Commons
Abstract:
This presentation was given at the Digital Commons Southeastern User Group conference at Winthrop University, South Carolina on June 5, 2015. The presentation discusses how the Digital Collections Center (DCC) at Florida International University uses Digital Commons as its tool for ingesting, editing, tracking, and publishing university theses and dissertations. The basic DCC workflow is covered, as well as institutional repository promotion.
Abstract:
This presentation was given at the Panhandle Library Access Network's (PLAN) Innovation Conference, "Digitization: Preserving the Past for the Future," on August 14, 2015. The presentation uses a specific collection of directories as a case study of the complications librarians and archivists face in digitizing older materials that may also be quite large, such as a directory. PrimeOCR and ABBYY FineReader are discussed, and their pros and cons covered. Troubleshooting and editing with Adobe Photoshop are also discussed.
Abstract:
This presentation was given at the FLVC regional conference at Broward College on May 7, 2015 and introduced scanning, processing, record creation, dissemination, and preservation in FIU Libraries' Digital Collections Center. The main focus was on processing, specifically employing OCR technology with difficult sources.
Abstract:
OpenLab ESEV is a project of the School of Education of the Polytechnic Institute of Viseu (ESEV), Portugal, that aims to promote, foster and support the use of Free/Libre Software and Open Source Software, Open Educational Resources, Free Culture, free file formats and more flexible copyright licenses for creative and educational purposes in ESEV's domains of activity (education, arts, media). Most of the OpenLab ESEV activities are related to the teacher education and arts and multimedia programs, with a special focus on the latter. In this paper, the project and some of its activities are presented, starting with its origins and its conceptual framework. This overview is intended as background for examining the use of Free/Libre Software and Free Culture in educational settings, especially at the higher education level, and for creative purposes. The activities developed with students and professionals generated pipelines and workflows implemented for different creative purposes, software packages used for different tasks, and choices of file formats and copyright licenses. Finished and ongoing multimedia and arts projects will be presented as real case scenarios.
Abstract:
The advantages of bundling e-journals together into publisher collections include increased access to information for the subscribing institution's clients, purchasing cost-effectiveness and streamlined workflows. Whilst cataloguing a consortial e-journal collection has its advantages, there are also various pitfalls, and the author outlines efforts by the CAUL (Council of Australian University Librarians) Consortium libraries to further streamline this process, working in conjunction with major publishers. Despite the advantages that publisher collections provide, pressures to unbundle existing packages continue to build, fuelled by an ever-increasing selection of available electronic resources; decreases in, and competing demands upon, library budgets; the impact of currency fluctuations; and poor usage for an alarmingly high proportion of collection titles. Consortial perspectives on bundling and unbundling titles are discussed, including options for managing the addition of new titles to the bundle and why customising consortial collections currently does not work. Unbundling analyses carried out at Queensland University of Technology from 2006 to 2008, prior to the renewal of several major publisher collections, are presented as further case studies which illustrate why the "big deal" continues to persist.
Abstract:
Simulation is widely used as a tool for analyzing business processes but is mostly focused on examining abstract steady-state situations. Such analyses are helpful for the initial design of a business process but are less suitable for operational decision making and continuous improvement. Here we describe a simulation system for operational decision support in the context of workflow management. To do this we exploit not only the workflow’s design, but also use logged data describing the system’s observed historic behavior, and incorporate information extracted about the current state of the workflow. Making use of actual data capturing the current state and historic information allows our simulations to accurately predict potential near-future behaviors for different scenarios. The approach is supported by a practical toolset which combines and extends the workflow management system YAWL and the process mining framework ProM.
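The abstract's key idea, seeding a simulation with the workflow's current state and parameters mined from historical logs, can be sketched in miniature. This is a hedged illustration under strong simplifying assumptions (a single-server task queue, independently sampled service times); it is not the YAWL/ProM toolset, and the numbers are hypothetical.

```python
import random

# Hedged sketch: predict the near-future time to clear the current backlog
# by replaying it with service times resampled from the observed history.

def simulate_backlog(backlog, logged_service_times, runs=1000, seed=42):
    """Estimate the expected total time to process `backlog` queued tasks,
    sampling each task's service time from the historical log."""
    rng = random.Random(seed)
    totals = []
    for _ in range(runs):
        totals.append(sum(rng.choice(logged_service_times)
                          for _ in range(backlog)))
    return sum(totals) / runs

# Service times (e.g. in minutes) mined from past workflow executions,
# and a backlog of 10 tasks read from the current workflow state.
history = [4.0, 5.5, 6.0, 4.5, 8.0]
print(simulate_backlog(backlog=10, logged_service_times=history))
```

The point of the sketch is the data flow: the log supplies the distribution, the current state supplies the starting condition, and Monte Carlo replay supplies a distribution of near-future outcomes rather than a single steady-state average.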
Abstract:
Queensland University of Technology (QUT) is faced with a rapidly growing research agenda built upon a strategic research capacity-building program. This presentation will outline the results of a project that has recently investigated QUT's research support requirements and which has developed a model for the support of eResearch across the university. QUT's research building strategy has produced growth at the faculty level and within its research institutes. This increased research activity is pushing the need for university-wide eResearch platforms capable of providing infrastructure and support in areas such as collaboration, data, networking, authentication and authorisation, workflows and the grid. One of the driving forces behind the investigation is the data-centric nature of modern research. It is now critical that researchers have access to supported infrastructure that allows the collection, analysis, aggregation and sharing of large data volumes for exploration and mining in order to gain new insights and to generate new knowledge. However, recent surveys into current research data management practices by the Australian Partnership for Sustainable Repositories (APSR) and by QUT itself have revealed serious shortcomings in areas such as research data management, especially its long-term maintenance for reuse and as authoritative evidence of research findings. While these internal university pressures are building, external pressures are magnifying them at the same time. For example, recent compliance guidelines from bodies such as the ARC, NHMRC and Universities Australia indicate that institutions need to provide facilities for the safe and secure storage of research data, along with a surrounding set of policies on its retention, ownership and accessibility. 
The newly formed Australian National Data Service (ANDS) is developing strategies and guidelines for research data management, and research institutions are a central focus, responsible for managing and storing institutional data on platforms that can be federated nationally and internationally for wider use. For some time QUT has recognised the importance of eResearch and has been active in a number of related areas: ePrints to digitally publish research papers, grid computing portals and workflows, institution-wide provisioning and authentication systems, and legal protocols for copyright management. QUT also has two widely recognised centres focused on fundamental research into eResearch itself: the OAK LAW project (Open Access to Knowledge), which focuses upon legal issues relating to eResearch, and the Microsoft QUT eResearch Centre, whose goal is to accelerate scientific research discovery through new smart software. In order to better harness all of these resources and improve research outcomes, the university recently established a project to investigate how it might better organise the support of eResearch. This presentation will outline the project outcomes, which include a flexible and sustainable eResearch support service model addressing short- and longer-term research needs, identification of the resources required to establish and sustain the service, and the development of research data management policies and implementation plans.
Abstract:
Reflective learning is vital for successful practice-led education such as animation, multimedia design and graphic design, and social network sites can accommodate various learning styles for effective reflective learning. In this paper, the researcher studies reflective learning through social network sites in two animation units. These units aim to provide students with an understanding of the tasks and workflows involved in the production of style sheets, character sheets and motion graphics for use in 3D productions for film, television and game design. In particular, an assessment in these units requires students to complete online reflective journals throughout the semester. Reflective learning has been integrated into the unit design, and students are encouraged to reflect on weekly learning processes and outcomes. A survey evaluating students' learning experience was conducted, and its outcomes indicate that social network site based reflective learning will not be effective without considering students' learning circumstances and designing peer-to-peer interactions.
Abstract:
"This column is distinguished from previous Impact columns in that it concerns the development tightrope between research and commercial take-up and the role of the LGPL in an open source workflow toolkit produced in a University environment. Many ubiquitous systems have followed this route (Apache, BSD Unix, ...), and the lessons this Service Oriented Architecture produces cast yet more light on how software diffuses out to impact us all." Michiel van Genuchten and Les Hatton. Workflow management systems support the design, execution and analysis of business processes. A workflow management system needs to guarantee that work is conducted at the right time, by the right person or software application, through the execution of a workflow process model. Traditionally, there has been a lack of broad support for a workflow modeling standard. Standardization efforts proposed by the Workflow Management Coalition in the late nineties suffered from limited support for routing constructs. In fact, as later demonstrated by the Workflow Patterns Initiative (www.workflowpatterns.com), a much wider range of constructs is required when modeling realistic workflows in practice. YAWL (Yet Another Workflow Language) is a workflow language that was developed to show that comprehensive support for the workflow patterns is achievable. Soon after its inception in 2002, a prototype system was built to demonstrate that it was possible to have a system support such a complex language. From that initial prototype, YAWL has grown into a fully-fledged, open source workflow management system and support environment.
Abstract:
Petri nets are often used to model and analyze workflows. Many workflow languages have been mapped onto Petri nets in order to provide formal semantics or to verify correctness properties. Typically, so-called workflow nets are used to model and analyze workflows, and variants of the classical soundness property are used as a correctness notion. Since many workflow languages have cancellation features, a mapping to workflow nets is not always possible. Therefore, it is interesting to consider workflow nets with reset arcs. Unfortunately, soundness is undecidable for workflow nets with reset arcs. In this paper, we provide a proof and insights into the theoretical limits of workflow verification.
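To make the setting concrete, a workflow net and a brute-force check of (a simplified form of) classical soundness can be sketched by exhaustive marking reachability. This is a hedged illustration for tiny, bounded nets without reset arcs; the paper's undecidability result means no such procedure can exist in general once reset arcs are added. The simplification here checks only that the final marking stays reachable from every reachable marking, omitting the no-dead-transitions and proper-completion clauses of full soundness.

```python
from collections import deque

# A marking is a frozenset of (place, token_count) pairs; a transition is a
# (consume, produce) pair of place tuples. No reset arcs are modeled.

def fire(marking, consume, produce):
    """Fire a transition if enabled, returning the new marking (or None)."""
    m = dict(marking)
    for p in consume:
        if m.get(p, 0) < 1:
            return None  # transition not enabled
        m[p] -= 1
    for p in produce:
        m[p] = m.get(p, 0) + 1
    return frozenset((p, c) for p, c in m.items() if c > 0)

def reachable(start, transitions):
    """All markings reachable from `start` (breadth-first search)."""
    seen, queue = {start}, deque([start])
    while queue:
        m = queue.popleft()
        for consume, produce in transitions:
            nxt = fire(m, consume, produce)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def is_sound(transitions, initial="i", final="o"):
    """Simplified soundness: from every reachable marking, the marking with a
    single token on the output place must remain reachable."""
    start = frozenset({(initial, 1)})
    goal = frozenset({(final, 1)})
    return all(goal in reachable(m, transitions)
               for m in reachable(start, transitions))

# A trivial sequential net: i -> t1 -> p -> t2 -> o
net = [(("i",), ("p",)), (("p",), ("o",))]
print(is_sound(net))  # → True
```

A net that can get stuck, for example `[(("i",), ("p",)), (("q",), ("o",))]`, fails the check because the marking with a token on `p` can never reach `o`.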
Abstract:
Technology-mediated collaboration processes have been extensively studied for over a decade. Most applications with collaboration concepts reported in the literature focus on enhancing the efficiency and effectiveness of decision-making processes in objective and well-structured workflows. However, relatively few previous studies have investigated the application of collaboration schemes to problems of a subjective and unstructured nature. In this paper, we explore a new intelligent collaboration scheme for fashion design, which, by nature, relies heavily on human judgment and creativity. Techniques such as multicriteria decision making, fuzzy logic, and artificial neural network (ANN) models are employed. Industrial data sets are used for the analysis. Our experimental results suggest that the proposed scheme exhibits significant improvement over the traditional method in terms of time-cost effectiveness, and a company interview with design professionals has confirmed its effectiveness and significance.
Abstract:
Bioinformatics is dominated by online databases and sophisticated web-accessible tools. As such, it is ideally placed to benefit from the rapid, purpose-specific combination of services achievable via web mashups. The recent introduction of a number of sophisticated frameworks has greatly simplified the mashup creation process, making it accessible to scientists with limited programming expertise. In this paper we investigate the feasibility of mashups as a new approach to bioinformatic experimentation, focusing on an exploratory niche between interactive web usage and robust workflows, and attempting to identify the range of computations for which mashups may be employed. While we treat each of the major frameworks, we illustrate the ideas with a series of examples developed under the Popfly framework.