930 results for XML database management system
Abstract:
[ES] This Bachelor's Thesis aimed to develop a data management system for data collected during detection campaigns of pebbles tagged with RFID transponders. The system, commissioned by the Department of Physics of the ULPGC, has been used to reliably geolocate stones tagged with RFID transponders during campaigns carried out on a beach in the north of Gran Canaria. The system displays the positions of the detected pebbles on a Google map of the study area, manages the total station, and stores the detection data in a database. This database supports the management of data collected over several campaigns at one or more locations. On the hardware side, the system consists of a pair of sensor motes, an RFID tag reader, a TOPCON total station, and a small laptop with Internet access.
Abstract:
[ES] An emergency department provides healthcare to its area; its main objective is to attend to the urgent pathologies arriving at the hospital, and the level of commitment assumed consists of diagnosing, treating and stabilizing, as far as possible, those urgent pathologies. Another objective is to manage citizens' demand for urgent care through an initial priority-selection system (triage) that selects, prioritizes, organizes and manages the demand for care. To control and carry out this work as effectively as possible, management tools are used to track patients from admission to the emergency department until discharge. The applications developed are the following. Gestión de Pacientes en Urgencias (Emergency Patient Management): this application assigns an initial state to each patient and allows that state to be updated using the triage (assessment) method, the most widespread in emergency medicine. In addition, diagnostic tests can be requested and laboratory markers visualized to monitor the patient's evolution; finally, a discharge report can be produced for the patient. Informadores de Urgencias (Emergency Informers): this application manages the patient's physical location within the emergency department, supporting transfers between locations and the process of informing relatives; relatives and their contact telephone numbers can be stored so that they can be kept informed. The development follows MVC (model-view-controller), an architectural pattern that separates an application's data, its graphical user interface, and its control logic into distinct components. The software used to develop the applications is InterSystems CACHÉ, which enables the creation of a multidimensional database.
The Caché object model is based on the ODMG (Object Data Management Group) standard and supports many advanced features. CACHÉ provides Zen, a complete library of pre-built object components and development tools based on InterSystems' CSP (Caché Server Pages) and object technologies. Zen is especially well suited to developing Web versions of client/server applications originally built with tools such as Visual Basic or PowerBuilder.
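The MVC separation described above can be illustrated with a minimal sketch (Python here rather than Caché ObjectScript; all class and method names are illustrative, not taken from the actual application):

```python
# Minimal MVC sketch: the model holds patient state, the view renders it,
# and the controller mediates triage-state changes between the two.
class PatientModel:
    """Data layer: patient record and triage state."""
    TRIAGE_LEVELS = ("resuscitation", "emergent", "urgent", "less urgent", "non-urgent")

    def __init__(self, name, level="non-urgent"):
        self.name = name
        self.level = level

    def set_level(self, level):
        if level not in self.TRIAGE_LEVELS:
            raise ValueError(f"unknown triage level: {level}")
        self.level = level

class PatientView:
    """Presentation layer: renders the model without modifying it."""
    def render(self, model):
        return f"{model.name}: triage level = {model.level}"

class TriageController:
    """Control layer: applies user actions to the model, then re-renders."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def reclassify(self, level):
        self.model.set_level(level)
        return self.view.render(self.model)

controller = TriageController(PatientModel("J. Doe"), PatientView())
print(controller.reclassify("urgent"))  # J. Doe: triage level = urgent
```

The value of the separation is that the view can be swapped (e.g. for a Zen/CSP web page) without touching the model or the triage logic.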
Abstract:
The aim of this thesis is the study of techniques for efficient management and use of the spectrum based on cognitive radio technology. The ability of cognitive radio technologies to adapt to the real-time conditions of their operating environment offers the potential for more flexible use of the available spectrum. In this context, international interest is particularly focused on the “white spaces” in the UHF band of digital terrestrial television. Spectrum sensing and geo-location databases have been considered as means of obtaining information on the electromagnetic environment. Different methodologies have been investigated for identifying the spectral resources potentially available to white space devices in the TV band; they are based on the geo-location database approach, used either autonomously or in combination with sensing techniques. A novel and computationally efficient methodology for calculating the maximum permitted white space device EIRP is then proposed, suitable for implementation in TV white space databases. Different Italian scenarios are analyzed to identify both the available spectrum and the white space device emission limits. Finally, two applications of cognitive radio technology are considered. The first is emergency management, where attention is focused on combining cognitive and autonomic networking approaches when deploying an emergency management system. Cognitive technology is then considered in applications related to satellite systems: in particular, a hybrid cognitive satellite-terrestrial system is introduced, and an analysis of the coexistence between terrestrial and satellite networks under a cognitive approach is performed.
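The geo-location database approach can be sketched as a simple lookup: the database maps a location cell and UHF channel to the maximum permitted white space device (WSD) EIRP. A minimal sketch, with entirely invented (non-regulatory) numbers:

```python
# Hedged sketch of a TV white space geo-location database lookup.
# The database maps (grid cell, DTT channel) to a maximum permitted
# WSD EIRP in dBm; all values below are illustrative, not regulatory.
MAX_EIRP_DBM = {
    # (lat_cell, lon_cell, uhf_channel): max permitted EIRP [dBm]
    (45, 9, 21): 36.0,   # channel free in this cell
    (45, 9, 22): 4.0,    # nearby DTT service -> tight emission limit
    (45, 9, 23): None,   # channel occupied: no WSD emission allowed
}

def query_white_space(lat_cell, lon_cell, channel):
    """Return the max permitted WSD EIRP (dBm), or None if barred."""
    return MAX_EIRP_DBM.get((lat_cell, lon_cell, channel))

def available_channels(lat_cell, lon_cell, min_eirp_dbm=10.0):
    """Channels usable by a WSD that needs at least min_eirp_dbm."""
    return [ch for (la, lo, ch), eirp in MAX_EIRP_DBM.items()
            if (la, lo) == (lat_cell, lon_cell)
            and eirp is not None and eirp >= min_eirp_dbm]

print(available_channels(45, 9))  # [21]
```

In a combined approach, a sensing result could further tighten the limit returned by the database before the WSD transmits.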
Abstract:
The thesis sets out to develop a model, an architecture and a technology for the naming system of the TuCSoN coordination middleware, covering agents, nodes and resources. Universal identities represent these entities, for both physical and virtual mobility, within a distributed Management System (AMS, NMS, RMS); this module also handles ACCs and transducers, addressing issues such as fault tolerance, persistence and consistency, together with disembodied coordination over the network, as happens with Cloud technologies. The dissertation opens with an introduction describing its overall contents, giving an initial global view of the work carried out. Chapter 1 covers the background knowledge needed to follow the dissertation: TuCSoN (the coordination middleware the designed module must interface with) and Cassandra (the distributed server system on which the module's data persistence and storage rely). Chapter 2 describes JADE, the middleware from which the study of the module's model and architecture started. Chapter 3 explains the structure and model of the module, examining the internal entities and all the relationships among them; this part also details the distribution of the module and its components over the network. Chapter 4 details the module's naming system: the syntax and the set of procedures an external consumer entity must follow to obtain a “universal name”, and hence also the module's internal steps to deliver the identifier to the consumer entity.
Chapter 5 describes the case studies: interactions with external entities, the internal entities depending on whether or not the module is distributed over the network, and the policies, paradigms and procedures for fault and error tolerance, detailing the corresponding repair methods. Chapter 6 outlines possible future developments, covering new forms of interaction among the entities that use this module as well as possible improvements and technological evolutions of the module. Finally, conclusions about the designed module are drawn in full detail, providing a global view of what is presented and described in the dissertation.
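The naming workflow described above (an external entity requests a universal name; the module generates, persists and later resolves it) can be sketched as follows. This is a minimal illustration, not TuCSoN's actual API: the in-memory dict stands in for the Cassandra-backed store, and all names are invented.

```python
import uuid

class NamingService:
    """Issues universal identifiers for agents, nodes and resources.
    The dict stands in for the distributed (Cassandra-backed) store."""
    def __init__(self):
        self._registry = {}  # universal name -> (kind, local descriptor)

    def request_name(self, kind, descriptor):
        """Procedure an external consumer entity follows to get a name."""
        if kind not in ("agent", "node", "resource"):
            raise ValueError(f"unknown entity kind: {kind}")
        universal = f"{kind}:{uuid.uuid4()}"   # globally unique identifier
        self._registry[universal] = (kind, descriptor)
        return universal

    def resolve(self, universal):
        """Look up the entity behind a universal name (None if unknown)."""
        return self._registry.get(universal)

ns = NamingService()
name = ns.request_name("agent", "tucson://node1/agentA")
print(ns.resolve(name))  # ('agent', 'tucson://node1/agentA')
```

Because the identifier is independent of the entity's current node, it survives both physical and virtual mobility; only the descriptor in the store needs updating when the entity moves.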
Abstract:
QUESTIONS UNDER STUDY / PRINCIPLES: Interest groups advocate centre-specific outcome data as a useful tool for patients in choosing a hospital for their treatment and for decision-making by politicians and the insurance industry. Haematopoietic stem cell transplantation (HSCT) requires significant infrastructure and represents a cost-intensive procedure. It therefore qualifies as a prime target for such a policy. METHODS: We made use of the comprehensive database of the Swiss Blood Stem Cells Transplant Group (SBST) to evaluate the potential use of mortality rates. Nine institutions reported a total of 4717 HSCT - 1427 allogeneic (30.3%), 3290 autologous (69.7%) - in 3808 patients between the years 1997 and 2008. Data were analysed for survival- and transplantation-related mortality (TRM) at day 100 and at 5 years. RESULTS: The data showed marked and significant differences between centres in unadjusted analyses. These differences were absent or marginal when the results were adjusted for disease, year of transplant and the EBMT risk score (a score incorporating patient age, disease stage, time interval between diagnosis and transplantation, and, for allogeneic transplants, donor type and donor-recipient gender combination) in a multivariable analysis. CONCLUSIONS: These data indicate comparable quality among centres in Switzerland. They show that comparison of crude centre-specific outcome data without adjustment for the patient mix may be misleading. Mandatory data collection and systematic review of all cases within a comprehensive quality management system might, in contrast, serve as a model to ascertain the quality of other cost-intensive therapies in Switzerland.
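The central point, that crude centre comparisons can mislead without case-mix adjustment, can be illustrated numerically. The figures below are invented for illustration and are not SBST data:

```python
# Two hypothetical centres with identical risk-specific mortality but
# different patient mixes: crude rates differ, adjusted rates do not.
centres = {
    # risk stratum: (n_patients, n_deaths)
    "A": {"low": (90, 9),  "high": (10, 3)},   # mostly low-risk patients
    "B": {"low": (10, 1),  "high": (90, 27)},  # mostly high-risk patients
}
reference_mix = {"low": 0.5, "high": 0.5}  # common case mix for adjustment

def crude_rate(c):
    n = sum(p for p, _ in c.values())
    d = sum(dd for _, dd in c.values())
    return d / n

def adjusted_rate(c):
    # direct standardization to the reference case mix
    return sum(w * (d / n) for stratum, w in reference_mix.items()
               for n, d in [c[stratum]])

for name, c in centres.items():
    print(name, round(crude_rate(c), 2), round(adjusted_rate(c), 2))
# A 0.12 0.2
# B 0.28 0.2
```

Both centres have 10% mortality in low-risk and 30% in high-risk patients, yet the crude rates (12% vs 28%) make centre B look worse; after standardization both centres show the same 20%. (The study itself used multivariable adjustment rather than direct standardization, but the effect is the same.)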
Abstract:
This paper focuses on integrating state-of-the-art technologies in the fields of telecommunications, simulation algorithms, and data mining to develop a semi- to fully-automated monitoring and management system for Type 1 diabetes patients. The main components of the system are a glucose measurement device, an insulin delivery system (insulin injections or insulin pumps), a mobile phone for the GPRS network, and a PDA or laptop for the Internet. In the medical environment, appropriate infrastructure for storing, analyzing and visualizing patients' data has been implemented to facilitate treatment design by health care experts.
Abstract:
The CampusSource workshop took place from 10 to 12 October 2006 at the Westfälische Wilhelms-Universität (WWU) in Münster. The main topics of the event were the development of an engine for linking e-learning applications with systems from HIS GmbH, and the creation of teaching and learning content designed for reuse. The second chapter collects the event's presentations in Adobe Flash format. Viewing the presentations requires Adobe Flash Player version 6 or later.
Abstract:
This paper presents the software architecture of a framework that simplifies the development of applications in the area of Virtual and Augmented Reality. It is based on VRML/X3D to enable rendering of audio-visual information. We extended our VRML rendering system by a device management system that is based on the concept of a data-flow graph. The aim of the system is to create Mixed Reality (MR) applications simply by plugging together small prefabricated software components, instead of compiling monolithic C++ applications. The flexibility and the advantages of the presented framework are explained on the basis of an exemplary implementation of a classic Augmented Reality application and its extension to a collaborative remote expert scenario.
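The data-flow-graph idea, prefabricated components plugged together instead of monolithic code, can be sketched like this (Python rather than the framework's C++; node names and the tracker/filter/renderer pipeline are illustrative, not the framework's actual API):

```python
class Node:
    """A data-flow node: transforms its input and pushes to its outputs."""
    def __init__(self, fn):
        self.fn = fn
        self.outputs = []

    def connect(self, other):
        self.outputs.append(other)
        return other  # return the target so connections can be chained

    def push(self, value):
        result = self.fn(value)
        for out in self.outputs:
            out.push(result)
        return result

# Plug together: tracker source -> smoothing filter -> renderer sink.
rendered = []
tracker  = Node(lambda raw: {"pos": raw})                 # device source
smoother = Node(lambda d: {"pos": round(d["pos"], 1)})    # filter node
renderer = Node(lambda d: rendered.append(d["pos"]))      # sink node
tracker.connect(smoother).connect(renderer)

tracker.push(1.2345)  # a tracker sample flows through the graph
print(rendered)  # [1.2]
```

Reconfiguring the application, e.g. inserting a network node to forward tracker data to a remote expert, then amounts to rewiring connections rather than recompiling.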
Abstract:
CampusContent (CC) is a DFG-funded competence center for eLearning with its own portal. It links content and people who support sharing and reuse of high-quality learning materials and codified pedagogical know-how, such as learning objectives, pedagogical scenarios, recommended learning activities, and learning paths. The heart of the portal is a distributed repository whose contents are linked to various other CampusContent portals. Integrated into each portal are user-friendly tools for designing reusable learning content, exercises, and templates for learning units and courses. Specialized authoring tools permit the configuration, adaptation, and automatic generation of interactive Flash animations using Adobe's Flex Builder technology. More coarse-grained content components, such as complete learning units and entire courses in which contents and materials taken from the repository are embedded, can be created with XML-based authoring tools. Open service interfaces allow the deep or shallow integration of the portal provider's preferred authoring and learning tools. The portal is built on top of the Enterprise Content Management System Alfresco, which comes with social networking functionality that has been adapted to accommodate collaboration, sharing and reuse within trusted communities of practice.
Abstract:
In this article the use of Learning Management Systems (LMS) at the School of Engineering, University of Borås, in the year 2004 and the academic year 2009-2010 is investigated. The tools in the LMS were classified into four groups (tools for distribution, tools for communication, tools for interaction and tools for course administration) and the pattern of use was analyzed. The preliminary interpretation of the results was discussed with a group of teachers from the School of Engineering with long experience of using LMS. High expectations about LMS as a tool to facilitate flexible education, student-centered methods and the creation of an effective learning environment are abundant in the literature. This study, however, shows that in most of the surveyed courses the available LMS is predominantly used to distribute documents to students. The authors argue that a more elaborate use of LMS and a transformation of pedagogical practices towards social-constructivist, learner-centered procedures should be treated as an integrated process of professional development.
Abstract:
Animal production, hay production and feeding, and the yields and composition of forage from summer and winter grass-legume pastures and winter corn crop residue fields from a year-round grazing system were compared with those of a conventional system. The year-round grazing system utilized 1.67 acres of smooth bromegrass-orchardgrass-birdsfoot trefoil pasture per cow in the summer, and 1.25 acres of stockpiled tall fescue-red clover pasture per cow, 1.25 acres of stockpiled smooth bromegrass-red clover pasture per cow, and 1.25 acres of corn crop residues per cow during winter for spring- and fall-calving cows and stockers. First-cutting hay was harvested from the tall fescue-red clover and smooth bromegrass-red clover pastures to meet supplemental needs of cows and calves during winter. In the conventional system (called the minimal land system), spring-calving cows grazed smooth bromegrass-orchardgrass-birdsfoot trefoil pastures at 3.33 acres/cow during summer, with first-cutting hay removed from one-half of these acres. This hay was fed to these cows in a drylot during winter. All summer grazing was done by rotational stocking for both systems, and winter grazing of the corn crop residues and stockpiled forages for pregnant spring-calving cows and lactating fall-calving cows in the year-round system was managed by strip-stocking. Hay was fed to spring-calving cows in both systems to maintain a mean body condition score of 5 on a 9-point scale, but was fed to fall-calving cows to maintain a mean body condition score of greater than 3. Over winter, fall-calving cows lost more body weight and condition than spring-calving cows, but there were no differences in body weight or condition score change between spring-calving cows in either system. Fall- and spring-calving cows in the year-round grazing system required 934 and 1,395 lb. hay dry matter/cow for maintenance during the winter, whereas spring-calving cows in drylot required 4,776 lb. hay dry matter/cow.
Rebreeding rates were not affected by management system. Average daily gains of spring-born calves did not differ between systems, but were greater than those of fall-born calves. Because of differences in land areas for the two systems, weight production of calves per acre in the minimal land system was greater than that of the year-round grazing system, but when the additional weight gains of the stocker cattle were considered, production of total growing animals did not differ between the two systems.
Abstract:
Management by Objectives (MBO) as it has been implemented in the Houston Academy of Medicine–Texas Medical Center Library is described. That MBO must be a total management system, and not just another library program, is emphasized throughout the discussion and definitions of the MBO system parts: (1) mission statement; (2) role functions; (3) role relationships; (4) effectiveness areas; (5) objectives; (6) action plans; and (7) performance review and evaluation. Examples from the library's implementation are given within the discussion of each part to give the reader a clearer picture of the library's actual experiences with the MBO process. Tables are included for further clarification. In conclusion, some points are made that the author feels are particularly crucial to any library MBO implementation.
Abstract:
The paper showcases the field- and lab-documentation system developed for Kinneret Regional Project, an international archaeological expedition to the Northwestern shore of the Sea of Galilee (Israel) under the auspices of the University of Bern, the University of Helsinki, Leiden University and Wofford College. The core of the data management system is a fully relational, server-based database framework, which also includes time-based and static GIS services, stratigraphic analysis tools and fully indexed document/digital image archives. Data collection in the field is based on mobile, hand-held devices equipped with a custom-tailored stand-alone application. Comprehensive three-dimensional documentation of all finds and findings is achieved by means of total stations and/or high-precision GPS devices. All archaeological information retrieved in the field – including tachymetric data – is synchronized with the core system on the fly and thus immediately available for further processing in the field lab (within the local network) or for post-excavation analysis at remote institutions (via the WWW). Besides a short demonstration of the main functionalities, the paper also presents some of the key technologies used and illustrates usability aspects of the system's individual components.
Abstract:
Cloud Computing enables provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, the objectives of managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints lead to challenges in maintaining optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications can lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system (CMS), increasing the efficiency of resource allocation as well as of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method of discovering relations between the performance indicators of services belonging to distributed applications, and of using these relations to build scaling rules that a CMS can apply for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs based on given SLA performance constraints. All presented research works were implemented and tested using enterprise distributed applications.
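The general shape of an SLA-driven scaling rule of the kind described can be sketched as follows. This is a minimal threshold-based illustration under assumed metric names and thresholds, not the dissertation's VM-scaling algorithms:

```python
def scale_decision(current_vms, avg_response_ms, sla_limit_ms,
                   min_vms=1, max_vms=20, headroom=0.7):
    """Return the VM count for the next control step.

    Scale out when the response time approaches or violates the SLA
    limit; scale in when there is ample headroom. All thresholds are
    illustrative assumptions.
    """
    if avg_response_ms > sla_limit_ms:                 # SLA violated
        return min(current_vms * 2, max_vms)           # aggressive scale-out
    if avg_response_ms > headroom * sla_limit_ms:      # nearing the limit
        return min(current_vms + 1, max_vms)
    if avg_response_ms < 0.3 * sla_limit_ms:           # over-provisioned
        return max(current_vms - 1, min_vms)
    return current_vms                                 # within the band

print(scale_decision(4, 250, 200))  # 8  (violation: double capacity)
print(scale_decision(4, 150, 200))  # 5  (approaching the limit)
print(scale_decision(4, 40, 200))   # 3  (scale in to save resources)
```

A CMS would feed such a rule with monitored performance indicators per control interval; the dissertation's contribution lies in deriving the rules and thresholds from the SLA model and discovered service dependencies rather than fixing them by hand.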