977 results for Digital Document
Abstract:
Charles Edward Perry (Chuck), 1937-1999, was the founding president of Florida International University in Miami, Florida. He grew up in Logan County, West Virginia and graduated from Bowling Green State University. He married Betty Laird in 1960. In 1969, at the age of 32, Perry was the youngest president of any university in the nation. The name of the university reflects Perry’s desire for a title that would not limit the scope of the institution and would support his vision of having close ties to Latin America. Perry and a founding corps opened FIU to 5,667 students in 1972 with only one large building housing six different schools. Perry left the office of President of FIU in 1976 when the student body had grown to 10,000 students and the university had six buildings, offered 134 different degrees and was fully accredited. Charles Perry died on August 30, 1999 at his home in Rockwall, Texas. He is buried on the FIU campus in front of the Graham Center entrance.
Abstract:
The Digital Commons Annual Report is a document that interested parties may use as a means of monitoring the yearly progress of Florida International University Libraries’ institutional repository. The report includes download and page hit statistics for all collections held in FIU Digital Commons.
Abstract:
A journal of commercial voyages and domestic life on the Tigris River -- structure of the documents.
Abstract:
Document representations can rapidly become unwieldy if they try to encapsulate all possible document properties, ranging from abstract structure to detailed rendering and layout. We present a composite document approach wherein an XML-based document representation is linked via a shadow tree of bi-directional pointers to a PDF representation of the same document. Using a two-window viewer, any material selected in the PDF can be related back to the corresponding material in the XML, and vice versa. In this way the treatment of specialist material such as mathematics, music or chemistry (e.g. via ‘read aloud’ or ‘play aloud’) can be activated via standard tools working within the XML representation, rather than requiring that application-specific structures be embedded in the PDF itself. The problems of textual recognition and tree pattern matching between the two representations are discussed in detail. Comparisons are drawn between our use of a shadow tree of pointers to map between document representations and the use of a code-replacement shadow tree in technologies such as XBL.
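The shadow-tree idea described above can be sketched briefly. The following is a minimal illustration, not the paper's implementation; all class and field names are hypothetical. A shadow node holds pointers both to an XML node and to the span of PDF content that renders it, so a selection in either view can be mapped to the other.

```python
# Illustrative sketch (names hypothetical): a shadow node ties a node in
# the XML tree to the span of PDF content that renders it, giving the
# bi-directional mapping a two-window viewer needs.

class ShadowLink:
    """One entry in the shadow tree: points to both representations."""
    def __init__(self, xml_node, pdf_span):
        self.xml_node = xml_node      # element in the XML tree
        self.pdf_span = pdf_span      # (page, start_offset, end_offset) in the PDF
        xml_node["shadow"] = self     # back-pointer from the XML side
        self.children = []

def find_xml_for_pdf(link, page, offset):
    """Walk the shadow tree to find the deepest XML node covering a PDF position."""
    match = None
    if link.pdf_span[0] == page and link.pdf_span[1] <= offset < link.pdf_span[2]:
        match = link
        for child in link.children:
            deeper = find_xml_for_pdf(child, page, offset)
            if deeper:
                match = deeper
    return match

# Tiny demo: a paragraph containing an emphasised word.
para = {"tag": "p"}
em = {"tag": "em"}
root = ShadowLink(para, (1, 0, 100))
root.children.append(ShadowLink(em, (1, 40, 50)))

hit = find_xml_for_pdf(root, 1, 45)   # selection inside the emphasised span
```

The reverse direction needs no search at all here, since each XML node carries a back-pointer to its shadow entry.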
Abstract:
It is just over 20 years since Adobe's PostScript opened a new era in digital documents. PostScript allows most details of rendering to be hidden within the imaging device itself, while providing a rich set of primitives enabling document engineers to think of final-form rendering as being just a sophisticated exercise in computer graphics. The refinement of the PostScript model into PDF has been amazingly successful in creating a near-universal interchange format for complex and graphically rich digital documents but the PDF format itself is neither easy to create nor to amend. In the meantime a whole new world of digital documents has sprung up centred around XML-based technologies. The most widespread example is XHTML (with optional CSS styling) but more recently we have seen Scalable Vector Graphics (SVG) emerge as an XML-based, low-level, rendering language with PostScript-compatible rendering semantics. This paper surveys graphically-rich final-form rendering technologies and asks how flexible they can be in allowing adjustments to be made to final appearance without the need for regenerating a whole page or an entire document. Particular attention is focused on the relative merits of SVG and PDF in this regard and on the desirability, in any document layout language, of being able to manipulate the graphic properties of document components parametrically, and at a level of granularity smaller than an entire page.
Abstract:
With the development of variable-data-driven digital presses - where each document printed is potentially unique - there is a need for pre-press optimization to identify material that is invariant from document to document. In this way rasterisation can be confined solely to those areas which change between successive documents thereby alleviating a potential performance bottleneck. Given a template document specified in terms of layout functions, where actual data is bound at the last possible moment before printing, we look at deriving and exploiting the invariant properties of layout functions from their formal specifications. We propose future work on generic extraction of invariance from such properties for certain classes of layout functions.
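The pre-press optimisation described above can be illustrated with a toy partitioning step. This is a hedged sketch under assumed data structures (the element dictionaries and their flags are invented for illustration, not taken from the paper's formal layout-function specifications): elements whose content and position are independent of per-document data can be rasterised once and reused across every printed instance.

```python
# Illustrative sketch (all names hypothetical): a template is a list of
# layout elements; an element is invariant if neither its content nor
# its position depends on per-document data, so it can be rasterised
# once instead of for every document in the run.

def split_template(elements):
    """Partition template elements into (invariant, variable) lists."""
    invariant, variable = [], []
    for el in elements:
        # An element varies if it binds a data field, or if its position
        # depends on the size of variable content placed before it.
        if el.get("binds_data") or el.get("position_depends_on_data"):
            variable.append(el)
        else:
            invariant.append(el)
    return invariant, variable

template = [
    {"id": "logo", "binds_data": False},
    {"id": "address", "binds_data": True},
    {"id": "footer", "binds_data": False},
    {"id": "body", "binds_data": False, "position_depends_on_data": True},
]

static, per_doc = split_template(template)
```

Only the `per_doc` elements would then need rasterisation for each printed document, which is the bottleneck the paper aims to relieve.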
Abstract:
This document describes work carried out with the company MedSUPPORT [1] on the development of a digital platform for analysing patient satisfaction in healthcare units. Assessing customer satisfaction is now an important procedure, one that companies should use as an additional tool for evaluating their products and services. For healthcare units, assessing patient satisfaction is currently regarded as a fundamental objective of health services and has come to occupy a progressively more important place in the evaluation of their quality. In this context, a digital platform for analysing patient satisfaction in healthcare units was conceived. An initial study of the concept of consumer and patient satisfaction consolidated the concepts associated with the topic; understanding the eight dimensions that, according to researchers, make up patient satisfaction is one of the key points of that study. To assess satisfaction, the patient must be questioned directly, so a satisfaction survey was developed, with each of its elements studied carefully. The development of the survey followed these steps: planning the questionnaire, working from the eight dimensions of patient satisfaction down to the metrics to be assessed with patients; analysing the data to be collected, defining for each metric whether the data would be nominal, ordinal, or drawn from balanced scales; and, finally, wording the survey questions, which was studied carefully to ensure that patients perceive the purpose of each question as clearly as possible.
The specification of the platform and questionnaire was informed by several studies, including a benchmarking analysis [2], which established that the survey will be located in an accessible area of the healthcare unit, answered on a touch screen (tablet), and hosted on the web. Web applications today feature appealing, intuitive designs, so a study of the web application's design was essential to ensure that the colours, the typeface, and the placement of information are appropriate. The web application was developed in the Ruby programming language using the Ruby on Rails framework. For the implementation, the available technologies were studied, with particular focus on the choice of database management system. The web application was also intended to improve the management of the information generated by survey responses. The MedSUPPORT staff member is responsible for managing this information, and the application was designed around those needs: an information-management menu is available to the application administrator (the MedSUPPORT staff member), providing a simplified view of the current state through a dashboard-style panel and, to improve internal data analysis, a function for exporting data to a spreadsheet. To validate the study, the platform was tested both for functionality and in real-world use by patients surveyed in healthcare units; the real-world tests aimed to validate the concept with the patients surveyed.
Abstract:
This article summarises the implementation of the Digital Television (DTV) Laboratory at the Universidad de Cuenca, created as a reliable environment for experimentation and research that makes use of the features of the ISDB-Tb standard, adopted by Ecuador in 2010 for the transmission of over-the-air television signals. The aim of this article is to document the aspects considered in simulating a real scenario in which a Transport Stream (TS), composed of audiovisual content and interactive applications, is first generated, then transmitted over the communications channel, and finally received on a television set with an ISDB-Tb receiver. This facilitates the development of, and experimentation with, new services that take advantage of the new DTV format.
Abstract:
Maintaining accessibility to and understanding of digital information over time is a complex challenge that often requires contributions and interventions from a variety of individuals and organizations. The processes of preservation planning and evaluation are fundamentally implicit and share similar complexity. Both demand comprehensive knowledge and understanding of every aspect of to-be-preserved content and the contexts within which preservation is undertaken. Consequently, means are required for the identification, documentation and association of those properties of data, representation and management mechanisms that in combination lend value, facilitate interaction and influence the preservation process. These properties may be almost limitless in terms of diversity, but are integral to the establishment of classes of risk exposure, and the planning and deployment of appropriate preservation strategies. We explore several research objectives within the course of this thesis. Our main objective is the conception of an ontology for risk management of digital collections. Incorporated within this are our aims to survey the contexts within which preservation has been undertaken successfully, the development of an appropriate methodology for risk management, the evaluation of existing preservation evaluation approaches and metrics, the structuring of best practice knowledge and lastly the demonstration of a range of tools that utilise our findings. We describe a mixed methodology that uses interview and survey, extensive content analysis, practical case study and iterative software and ontology development. We build on a robust foundation, the development of the Digital Repository Audit Method Based on Risk Assessment. 
We summarise the extent of the challenge facing the digital preservation community (and by extension users and creators of digital materials from many disciplines and operational contexts) and present the case for a comprehensive and extensible knowledge base of best practice. These challenges are manifested in the scale of data growth, the increasing complexity and the increasing onus on communities with no formal training to offer assurances of data management and sustainability. These collectively imply a challenge that demands an intuitive and adaptable means of evaluating digital preservation efforts. The need for individuals and organisations to validate the legitimacy of their own efforts is particularly prioritised. We introduce our approach, based on risk management. Risk is an expression of the likelihood of a negative outcome, and an expression of the impact of such an occurrence. We describe how risk management may be considered synonymous with preservation activity, a persistent effort to negate the dangers posed to information availability, usability and sustainability. Risks can be characterised according to associated goals, activities, responsibilities and policies in terms of both their manifestation and mitigation. They can be deconstructed into their atomic units, and responsibility for their resolution delegated appropriately. We continue to describe how the manifestation of risks typically spans an entire organisational environment, and adopting risk as the focus of our analysis safeguards against omissions that may occur when pursuing functional, departmental or role-based assessment. We discuss the importance of relating risk-factors, through the risks themselves or associated system elements. To do so will yield the preservation best-practice knowledge base that is conspicuously lacking within the international digital preservation community.
We present as research outcomes an encapsulation of preservation practice (and explicitly defined best practice) as a series of case studies, in turn distilled into atomic, related information elements. We conduct our analyses in the formal evaluation of memory institutions in the UK, US and continental Europe. Furthermore we showcase a series of applications that use the fruits of this research as their intellectual foundation. Finally we document our results in a range of technical reports and conference and journal articles. We present evidence of preservation approaches and infrastructures from a series of case studies conducted in a range of international preservation environments. We then aggregate this into a linked data structure entitled PORRO, an ontology relating preservation repository, object and risk characteristics, intended to support preservation decision making and evaluation. The methodology leading to this ontology is outlined, and lessons are drawn by revisiting legacy studies and exposing the resource and associated applications to evaluation by the digital preservation community.
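The characterisation of risk above, as a combination of the likelihood of a negative outcome and the impact of that outcome, admits a simple worked illustration. The sketch below is an assumption-laden toy, not the thesis's PORRO model: the 1-5 scales, the multiplicative combination, and the example risks are all invented for illustration.

```python
# Illustrative sketch (scales, scoring rule and example risks are
# assumptions, not the thesis's model): combining likelihood and impact
# gives a simple way to rank preservation risks for mitigation planning.

def risk_score(likelihood, impact):
    """Combine likelihood and impact (each on a 1-5 scale) into a score."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on a 1-5 scale")
    return likelihood * impact

# Hypothetical preservation risks: (name, likelihood, impact).
risks = [
    ("format obsolescence", 3, 4),
    ("media failure", 2, 5),
    ("loss of staff expertise", 4, 3),
]

# Rank the highest-scoring risks first.
ranked = sorted(risks, key=lambda r: risk_score(r[1], r[2]), reverse=True)
```

In practice such scores only order risks for attention; the thesis's point is that the risks themselves must first be identified and related across the whole organisational environment.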
Abstract:
This thesis examines digital technologies policies designed for Australian schools and the ways they are understood and interpreted by students, school staff, teachers, principals and policy writers. This study explores the ways these research participant groups interpret and understand the ‘ethical dimension’ of schools’ digital technologies policies for teaching and learning. In this thesis the ethical dimension is considered to be a dynamic concept which encompasses various elements including decisions, actions, values, issues, debates, education, discourses, and notions of right and wrong, in relation to ethics and uses of digital technologies in schools. In this study policy is taken to mean not only written texts but also discursive processes and policy documents, including national declarations, strategic plans and ‘acceptable use’ policies that guide the use of digital technologies in schools. The research is situated in the context of changes that have occurred in Australia and internationally over the last decade that have seen a greater focus on the access to and use of digital technologies in schools. In Australian school education, the attention placed on digital technologies in schools has seen the release of policies at the national, state, territory, education office and school levels to guide their use. Prominent among these policies has been the Digital Education Revolution policy, launched in 2007 and concluded in 2013. This research aims to answer the question: What does an investigation reveal about understandings of the ethical dimension of digital technologies policies and their implementation in school education? The objective of this research is to examine the ethical dimension of digital technologies policies and to interpret and understand the responses of the research participants to the issues, silences, discourses and language which characterise this dimension.
In doing so, it is intended that the research can allow the participants to have a voice that may be different to the official discourses located in digital technologies policies. The thesis takes a critical and interpretative approach to policies and examines the role of digital technologies policies as discourse. Interpretative theory is utilised as it provides a conceptual lens from which to interpret different perspectives and the implications of these in the construction of meaning in relation to schools’ digital technologies policies. Critical theory is used in tandem with interpretative theory as it represents a conceptual basis from which to critique and question underlying assumptions and discourses that are associated with the ethical dimension of schools’ digital technologies policies. The research methods used are semi-structured interviews and policy document analysis. Policies from the national, state, territory, education office and school level were analysed and contribute to understanding the way the ethical dimension of digital technologies policies is represented as a discourse. Students, school staff, teachers, principals and policy writers participated in research interviews and their views and perspectives were canvassed in relation to the ethical use of digital technologies and the policies that are designed to regulate their use. The thesis presents an argument that the ethical dimension of schools’ digital technologies policies and use is an under-researched area, and there are gaps in understanding and knowledge in the literature which remain to be addressed. It is envisaged that the thesis can make a meaningful contribution to understanding the ways in which schools’ digital technologies policies are understood in school contexts. It is also envisaged that the findings from the research can inform policy development by analysing the voices and views of those in schools.
The findings of the policy analysis revealed that there is little attention given to the ethical dimension in digital technologies at the national level. A discourse of compliance and control pervades digital technologies policies from the state, education office and school levels, which reduces ethical considerations to technical, legal and regulatory requirements. The discourse is largely instrumentalist and neglects the educative dimension of digital technologies which has the capacity to engender their ethical use. The findings from the interview conversations revealed that students, school staff and teachers perceive digital technologies policies to be difficult to understand, and not relevant to their situation and needs. They also expressed a desire to have greater consultation and participation in the formation and enactment of digital technologies policies, and they believe they are marginalised from these processes in their schools. Arising from the analysis of the policies and interview conversations, an argument is presented that in the light of the prominent role played by digital technologies and their potential for enhancing all aspects of school education, more research is required to provide a more holistic and richer understanding of the policies that are constructed to control and mediate their use.
Abstract:
This CPM project focuses on the document approval process that the Division of State Human Resources consulting team utilizes as it relates to classification and compensation requests, e.g. job reclassifications, PD update requests, and salary requests. The ultimate goal is to become more efficient by utilizing electronic signatures and electronic form filling to streamline the current process of document approvals.
Abstract:
Technological development and the expansion of forms of communication in Colombia have brought not only great benefits but also new challenges for the modern State. The supply of outlets for disseminating electoral propaganda has grown, while a legal framework designed for the mass media of the twentieth century persists. This work therefore not only offers a diagnosis of the current mechanisms of administrative control over electoral propaganda on the Internet, but also proposes mechanisms to guarantee the principles of electoral activity, making it the first such proposal in Colombia. Because the topic has been little studied, the scope of the work is exploratory, resting on a legal-institutional approach. Qualitative methods were used for data collection (archival work and interviews) and analysis (typologies, comparisons, exegesis of the legal framework), along with quantitative elements such as statistical analysis.
Abstract:
CERN, the European Organization for Nuclear Research, is one of the largest research centres worldwide, responsible for several discoveries in physics as well as in computer science. The CERN Document Server, also known as CDS Invenio, is software developed at CERN that aims to provide a set of tools for managing digital libraries. In order to improve the functionality of CDS Invenio, a new module called BibCirculation was developed to manage books (and other items) from the CERN library, working as an integrated library system. This thesis describes the steps taken to achieve the several goals of this project, explaining, among other aspects, the process of integration with the other existing modules as well as the approach adopted for associating book information with CDS Invenio metadata. A detailed account of the entire implementation process and of the tests performed is also provided. Finally, the conclusions of the project and ideas for future development are presented.