811 results for Distribution Management System
Abstract:
Background: The use of the knowledge produced by the sciences to promote human health is the main goal of translational medicine. To make this feasible, we need computational methods to handle the large amount of information that arises from bench to bedside, and to deal with its heterogeneity. A computational challenge that must be faced is to promote the integration of clinical, socio-demographic, and biological data. In this effort, ontologies play an essential role as a powerful artifact for knowledge representation. Chado is a modular, ontology-oriented database model that gained popularity due to its robustness and flexibility as a generic platform to store biological data; however, it lacks support for representing clinical and socio-demographic information.
Results: We have implemented an extension of Chado, the Clinical Module, to allow the representation of this kind of information. Our approach consists of a framework for data integration through the use of a common reference ontology. The design of this framework has four levels: the data level, to store the data; the semantic level, to integrate and standardize the data through the use of ontologies; the application level, to manage clinical databases, ontologies, and the data integration process; and the web interface level, to allow interaction between the user and the system. The Clinical Module was built on the Entity-Attribute-Value (EAV) model. We also proposed a methodology to migrate data from legacy clinical databases to the integrative framework. A Chado instance was initialized using a relational database management system. The Clinical Module was implemented, and the framework was loaded with data from a real clinical research database. Clinical and demographic data, as well as biomaterial data, were obtained from patients with head and neck tumors. We implemented the IPTrans tool, a complete environment for data migration, which comprises: the construction of an ontology-based model to describe the legacy clinical data; the Extraction, Transformation and Load (ETL) process to extract the data from the source clinical database and load it into the Clinical Module of Chado; and the development of a web tool and a Bridge Layer to adapt the web tool to Chado, as well as other applications.
Conclusions: Open-source computational solutions currently available for translational science do not have a model to represent biomolecular information, nor are they integrated with existing bioinformatics tools. On the other hand, existing genomic data models do not represent clinical patient data. A framework was developed to support translational research by integrating biomolecular information coming from different "omics" technologies with patients' clinical and socio-demographic data. Such a framework should present certain features: flexibility, compression, and robustness. The experiments carried out on a use case demonstrated that the proposed system meets the requirements of flexibility and robustness, leading to the desired integration. The Clinical Module can be accessed at http://dcm.ffclrp.usp.br/caib/pg=iptrans.
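The EAV core of the Clinical Module can be pictured with a minimal sketch. The table and column names below are illustrative only (the actual schema is defined by Chado); the point is that new clinical attributes arrive as rows, not as schema changes:

```python
import sqlite3

# Hypothetical EAV layout: each clinical fact is one
# (entity, attribute, value) row, with attributes tied to ontology terms.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE attribute (
    attribute_id  INTEGER PRIMARY KEY,
    ontology_term TEXT NOT NULL        -- term from the reference ontology
);
CREATE TABLE clinical_fact (
    patient_id    INTEGER NOT NULL,    -- the entity
    attribute_id  INTEGER NOT NULL REFERENCES attribute(attribute_id),
    value         TEXT                 -- the value, stored untyped
);
""")

# Adding a new kind of clinical observation needs no ALTER TABLE,
# which is the flexibility the EAV model buys.
conn.execute("INSERT INTO attribute VALUES (1, 'tumor site')")
conn.execute("INSERT INTO clinical_fact VALUES (42, 1, 'larynx')")
for row in conn.execute("""
        SELECT f.patient_id, a.ontology_term, f.value
        FROM clinical_fact f JOIN attribute a USING (attribute_id)"""):
    print(row)  # (42, 'tumor site', 'larynx')
```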
Abstract:
This thesis aims to develop a model, an architecture, and the technology for the naming system of the TuCSoN coordination middleware, covering agents, nodes, and resources. Universal identities represent these entities, supporting both physical and virtual mobility, within a distributed Management System (AMS, NMS, RMS); the module also handles ACCs and transducers, addressing issues such as fault tolerance, persistence, and consistency, together with disembodied coordination over the network, as happens with Cloud technologies. The thesis opens with an introduction that outlines its contents, giving an initial overall view of the work carried out. Chapter 1 covers the background knowledge needed to understand the thesis: TuCSoN (the coordination middleware with which the designed module must interface) and Cassandra (the distributed server system on which the module's data persistence and storage rests). Chapter 2 describes JADE, the middleware from which the study of the module's model and architecture started. Chapter 3 explains the structure and model of the module, examining all the details of its internal entities and of the relationships among them; this part also details the distribution of the module and its components over the network. Chapter 4 details the module's naming system: the syntax and the set of procedures that an external consumer entity must carry out to obtain a "universal name", and all the internal steps the module performs to provide the identifier to the consumer entity. Chapter 5 presents the case studies concerning interactions with external entities, the internal entities depending on whether or not the module is distributed over the network, and the policies, paradigms, and procedures for fault and error tolerance, detailing the corresponding repair methods. Chapter 6 describes possible future developments, both new forms of interaction among the entities that use this module and possible technological improvements to the module itself. Finally, conclusions about the designed module are drawn in full detail, giving an overall view of everything presented and described in the thesis.
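As a rough illustration of the naming interaction described in Chapter 4, the sketch below shows what an external consumer entity asking the module for a "universal name" might look like. All names and the registry structure are assumptions for illustration; the actual syntax and procedures are defined in the thesis:

```python
import uuid
from dataclasses import dataclass

# Hypothetical universal identifier for a TuCSoN entity (agent, node,
# or resource); stable across physical and virtual mobility.
@dataclass(frozen=True)
class UniversalName:
    entity_type: str      # "agent" | "node" | "resource"
    local_name: str       # name supplied by the consumer entity
    uid: uuid.UUID        # globally unique part

class NamingService:
    """Toy stand-in for the distributed naming module (AMS/NMS/RMS)."""

    def __init__(self):
        self._registry = {}          # persistence would live in Cassandra

    def acquire(self, entity_type: str, local_name: str) -> UniversalName:
        name = UniversalName(entity_type, local_name, uuid.uuid4())
        self._registry[name.uid] = name   # survives moves between nodes
        return name

ams = NamingService()
print(ams.acquire("agent", "negotiator"))
```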
Abstract:
SMARTDIAB is a platform designed to support the monitoring, management, and treatment of patients with type 1 diabetes mellitus (T1DM) by combining state-of-the-art approaches in the fields of database (DB) technologies, communications, simulation algorithms, and data mining. SMARTDIAB consists mainly of two units: 1) the patient unit (PU); and 2) the patient management unit (PMU), which communicate with each other for data exchange. The PMU can be accessed by the PU over the Internet using devices such as PCs/laptops with direct Internet access or mobile phones via a Wi-Fi/General Packet Radio Service access network. The PU consists of an insulin pump for subcutaneous insulin infusion and a continuous glucose measurement system. These devices, running a user-friendly application, gather patient-related information and transmit it to the PMU. The PMU consists of a diabetes data management system (DDMS), a decision support system (DSS) that provides risk assessment for long-term diabetes complications, and an insulin infusion advisory system (IIAS), all of which reside on a web server. The DDMS can be accessed by both medical personnel and patients, with appropriate security access rights and front-end interfaces. Apart from being used for data storage/retrieval, the DDMS also provides advanced tools for the intelligent processing of the patient's data, supporting the physician in decision making regarding the patient's treatment. The IIAS closes the loop between the insulin pump and the continuous glucose monitoring system by providing the pump with the appropriate insulin infusion rate to keep the patient's glucose levels within predefined limits. The pilot version of SMARTDIAB has already been implemented, and the platform's evaluation in a clinical environment is in progress.
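To make the closed loop concrete: the toy proportional rule below is not the IIAS algorithm (which the abstract does not specify), but it shows the shape of the computation, a CGM reading in, an infusion rate out, bounded to stay non-negative:

```python
def infusion_rate(glucose_mg_dl: float,
                  target: float = 110.0,
                  basal_u_per_h: float = 1.0,
                  gain: float = 0.01) -> float:
    """Toy proportional controller, NOT the IIAS algorithm: it only
    illustrates closing the loop between CGM readings and the pump."""
    correction = gain * (glucose_mg_dl - target)   # more insulin when high
    return max(0.0, basal_u_per_h + correction)    # rate never negative

for reading in (80, 110, 180, 250):                # CGM samples, mg/dL
    print(reading, "->", round(infusion_rate(reading), 2), "U/h")
```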
Abstract:
This paper focuses on the integration of state-of-the-art technologies in the fields of telecommunications, simulation algorithms, and data mining to develop a semi- to fully-automated monitoring and management system for patients with Type 1 diabetes. The main components of the system are a glucose measurement device, an insulin delivery system (insulin injections or insulin pumps), a mobile phone for the GPRS network, and a PDA or laptop for the Internet. In the medical environment, appropriate infrastructure for the storage, analysis, and visualization of patients' data has been implemented to facilitate treatment design by health care experts.
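A sketch of the kind of record such a system might ship from the patient's phone or PDA to the medical environment; the field names and JSON encoding are assumptions for illustration, not taken from the paper:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical record layout for sending a glucose sample from the
# patient's device to the medical-environment store.
@dataclass
class GlucoseSample:
    patient_id: str
    taken_at: str          # ISO 8601 timestamp
    glucose_mg_dl: float
    source: str            # "cgm", "meter", ...

sample = GlucoseSample(
    patient_id="p-001",
    taken_at=datetime.now(timezone.utc).isoformat(),
    glucose_mg_dl=132.0,
    source="cgm",
)
payload = json.dumps(asdict(sample))   # what the phone/PDA would transmit
print(payload)
```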
Abstract:
This paper presents the software architecture of a framework that simplifies the development of Virtual and Augmented Reality applications. It is based on VRML/X3D to enable the rendering of audio-visual information. We extended our VRML rendering system with a device management system based on the concept of a data-flow graph. The aim of the system is to create Mixed Reality (MR) applications simply by plugging together small prefabricated software components, instead of compiling monolithic C++ applications. The flexibility and advantages of the presented framework are explained on the basis of an exemplary implementation of a classic Augmented Reality application and its extension to a collaborative remote-expert scenario.
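The plug-together style enabled by a data-flow graph can be sketched in a few lines. The component names and API below are illustrative, not the framework's actual interface:

```python
# Toy data-flow graph in the spirit described above: small components
# ("nodes") whose outputs are wired to downstream inputs.
class Node:
    def __init__(self, fn):
        self.fn = fn
        self.sinks = []

    def connect(self, other):          # wire this node's output to another
        self.sinks.append(other)
        return other                   # returning it allows chaining

    def push(self, value):             # propagate a sample through the graph
        out = self.fn(value)
        for sink in self.sinks:
            sink.push(out)

tracker  = Node(lambda raw: {"pose": raw})            # e.g. a tracking device
filt     = Node(lambda d: {**d, "smoothed": True})    # a filter component
renderer = Node(lambda d: print("render with", d))    # the VRML/X3D renderer

tracker.connect(filt).connect(renderer)               # plug components together
tracker.push((0.0, 1.0, 2.0))
```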
Abstract:
This article investigates barriers to a wider utilization of a Learning Management System (LMS). The study aims to identify the reasons why some tools in the LMS are rarely used, in spite of assertions that the learning experience and students' performance can be improved by the interaction and collaboration that the LMS facilitates. Lecturers' perceptions of LMS use over the last four years at the School of Engineering, University of Borås were investigated. Seventeen lecturers who were interviewed in 2006 were interviewed again in 2011. The lecturers still use the LMS primarily for the distribution of documents and course administration. The results indicate that their attitudes have not changed significantly. The apparent reluctance to utilize interactive features in the LMS is analyzed by looking at the expected impact on the lecturers' work situation. The author argues that the main barrier to a wider utilization of the LMS is the lecturers' fear of additional demands on their time. Hence, if educational institutions want a wider utilization of the LMS, some kind of incentive for lecturers is needed, in addition to support and training.
Abstract:
Animal production, hay production and feeding, and the yields and composition of forage from summer and winter grass-legume pastures and winter corn crop residue fields in a year-round grazing system were compared with those of a conventional system. The year-round grazing system utilized 1.67 acres of smooth bromegrass-orchardgrass-birdsfoot trefoil pasture per cow in the summer and, during winter, 1.25 acres of stockpiled tall fescue-red clover pasture per cow, 1.25 acres of stockpiled smooth bromegrass-red clover pasture per cow, and 1.25 acres of corn crop residues per cow for spring- and fall-calving cows and stockers. First-cutting hay was harvested from the tall fescue-red clover and smooth bromegrass-red clover pastures to meet the supplemental needs of cows and calves during winter. In the conventional system (called the minimal land system), spring-calving cows grazed smooth bromegrass-orchardgrass-birdsfoot trefoil pastures at 3.33 acres/cow during summer, with first-cutting hay removed from one-half of these acres. This hay was fed to these cows in a drylot during winter. All summer grazing was done by rotational stocking in both systems, and winter grazing of the corn crop residues and stockpiled forages by pregnant spring-calving cows and lactating fall-calving cows in the year-round system was managed by strip-stocking. Hay was fed to spring-calving cows in both systems to maintain a mean body condition score of 5 on a 9-point scale, but was fed to fall-calving cows to maintain a mean body condition score of greater than 3. Over winter, fall-calving cows lost more body weight and condition than spring-calving cows, but there were no differences in body weight or condition score change between spring-calving cows in the two systems. Fall- and spring-calving cows in the year-round grazing system required 934 and 1,395 lb of hay dry matter/cow for maintenance during the winter, whereas spring-calving cows in the drylot required 4,776 lb of hay dry matter/cow. Rebreeding rates were not affected by management system. Average daily gains of spring-born calves did not differ between systems, but were greater than those of fall-born calves. Because of the difference in land area between the two systems, calf weight production per acre in the minimal land system was greater than in the year-round grazing system, but when the additional weight gains of the stocker cattle were considered, production of total growing animals did not differ between the two systems.
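The land-area gap that drives the per-acre comparison follows directly from the figures quoted above; a quick arithmetic check:

```python
# Arithmetic using only the acreage and hay figures quoted above.
year_round = 1.67 + 1.25 + 1.25 + 1.25   # summer pasture + two stockpiled
                                         # pastures + corn crop residues
minimal_land = 3.33                      # summer pasture, half also hayed

print(f"year-round system:   {year_round:.2f} acres/cow")   # 5.42
print(f"minimal land system: {minimal_land:.2f} acres/cow")

# Winter hay fed per cow (lb dry matter), as reported:
print(f"drylot vs year-round hay: {4776 / 1395:.1f}x more")  # ~3.4x
```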
Abstract:
Management by Objectives (MBO) as it has been implemented in the Houston Academy of Medicine--Texas Medical Center Library is described. That MBO must be a total management system, and not just another library program, is emphasized throughout the discussion and the definitions of the MBO system's parts: (1) mission statement; (2) role functions; (3) role relationships; (4) effectiveness areas; (5) objectives; (6) action plans; and (7) performance review and evaluation. Examples from the library's implementation are given within the discussion of each part to give the reader a clearer picture of the library's actual experience with the MBO process. Tables are included for further clarification. In conclusion, some points are made that the author feels are particularly crucial to any library MBO implementation.
Abstract:
Advancements in cloud computing have enabled the proliferation of distributed applications, which require the management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources to use to ensure that the performance requirements of all his or her applications are met, and in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, the infrastructure providers are interested in optimally provisioning the virtual resources onto the available physical infrastructure so that their operational costs are minimized while the performance of tenants' applications is maximized. Motivated by the complexities associated with the management and scaling of distributed applications while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of virtual machines (VMs) bound to application services. We describe several algorithms for adapting the number of VMs allocated to a distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for the dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces. We show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs the allocation of virtual resources while optimizing multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
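As a minimal illustration of SLA-driven scaling, a reactive rule of the kind the thesis's approach is compared against might look like the sketch below; thresholds and names are invented for the example, not taken from the thesis:

```python
def scale_decision(current_vms: int,
                   avg_response_ms: float,
                   sla_limit_ms: float = 200.0,
                   headroom: float = 0.6,
                   min_vms: int = 1) -> int:
    """Toy reactive rule: add a VM when the SLA bound is threatened,
    remove one when there is ample headroom. Thresholds are illustrative."""
    if avg_response_ms > sla_limit_ms:             # SLA violation imminent
        return current_vms + 1
    if avg_response_ms < headroom * sla_limit_ms:  # plenty of slack
        return max(min_vms, current_vms - 1)
    return current_vms

# Monitoring samples (ms) -> VM-count trajectory
vms = 2
for latency in (150, 230, 240, 180, 90):
    vms = scale_decision(vms, latency)
    print(f"latency={latency}ms -> {vms} VMs")
```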
Abstract:
In Europe, Cardiovascular Diseases (CVD) are the leading cause of death, accounting for 45% of all deaths. Moreover, Heart Failure, the paradigmatic CVD, mainly affects people older than 65. Against the background of today's aging society, the European MyHeart Project was created, with the mission of empowering citizens to fight CVD by leading a preventive lifestyle and by enabling diagnosis at an early stage. This paper presents the development of a Heart Failure Management System based on the daily monitoring of Vital Body Signals with wearable and mobile technologies, for the continuous assessment of this chronic disease. The System makes use of the latest technologies for monitoring heart condition: both wearable garments (e.g., for measuring ECG and respiration) and portable devices (such as a weight scale and a blood pressure cuff), all with Bluetooth capabilities.
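A sketch of what one day's worth of monitored Vital Body Signals might aggregate to; the field names are assumptions, chosen only to match the devices listed above:

```python
from dataclasses import dataclass

# Hypothetical daily record of Vital Body Signals gathered by the
# wearable garments and Bluetooth devices; fields are illustrative.
@dataclass
class DailyVitals:
    ecg_mean_hr_bpm: float      # from the ECG garment
    resp_rate_per_min: float    # from the respiration sensor
    weight_kg: float            # from the Bluetooth weight scale
    systolic_mmhg: float        # from the blood pressure cuff
    diastolic_mmhg: float

today = DailyVitals(72.0, 16.0, 81.5, 128.0, 82.0)
print(today)                    # what the daily assessment would consume
```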
Abstract:
The construction industry, one of the most important in the development of a country, generates unavoidable impacts on the environment. The social demand for greater respect for the environment is strong and widespread, so the construction industry needs to reduce the impact it produces. Proper waste management is not enough; a further step in environmental management must be taken, introducing new measures for prevention at source, such as good practices that promote recycling. Following the amendment of the legal framework applicable to Construction and Demolition Waste (C&D waste), important developments have been incorporated into European and international laws, aiming to promote a culture of reuse and recycling. This change of mindset, progressively taking place in society, allows C&D waste to be considered no longer as unusable waste, but as reusable material. The main objective of the work presented in this paper is to enhance C&D waste management systems through the development of preventive measures during the construction process. These measures concern all the agents intervening in the construction process, as only their personal commitment can ensure the efficient management of the C&D waste generated. Finally, a model based on preventive measures achieves organizational cohesion between the different stages of the construction process, as well as promoting the conservation of raw materials through reuse and waste minimization, all in order to achieve a C&D waste management system whose primary goal is zero waste generation.
Abstract:
Knowledge management is critical for the success of virtual communities, especially in the case of distributed working groups. A representative example of this scenario is distributed software development, where optimal coordination is necessary to avoid common problems such as duplicated work. This paper discusses the feasibility of using workflow technology as a knowledge management system and presents a practical use case: an information system deployed within a banking environment that combines common workflow technology with a new conception of the interaction among participants, through the extension of existing definition languages.
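A toy illustration of why workflow technology can serve knowledge-management ends: making task ownership explicit prevents the duplicated work mentioned above. Names and structure are invented for the sketch, not taken from the banking system's definition language:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    assignee: str
    state: str = "pending"        # pending -> active -> done

@dataclass
class Workflow:
    tasks: list = field(default_factory=list)

    def claim(self, name: str, who: str) -> bool:
        """Assign a pending task; refuse if someone already has it."""
        for t in self.tasks:
            if t.name == name and t.state == "pending":
                t.assignee, t.state = who, "active"
                return True
        return False              # already claimed: no duplicated work

wf = Workflow([Task("review-module-A", "")])
print(wf.claim("review-module-A", "alice"))  # True
print(wf.claim("review-module-A", "bob"))    # False, alice already has it
```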
Abstract:
This paper introduces a new, emerging software component: the idea management system, which helps to gather, organise, select, and manage the innovative ideas provided by the communities gathered around organisations or enterprises. We define the notion of the idea life cycle, which provides a framework for characterising the tools and techniques that drive the evolution of community-submitted data inside idea management systems. Furthermore, we show the dependencies between community-created information and the enterprise processes that result from using idea management systems, and point out the possible benefits.
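The idea life cycle can be pictured as a small state machine. The stages and transitions below are assumptions for illustration; the paper defines its own life-cycle stages:

```python
from enum import Enum, auto

# Illustrative idea life cycle; these states are assumptions.
class IdeaStage(Enum):
    SUBMITTED = auto()
    DISCUSSED = auto()
    SELECTED = auto()
    IMPLEMENTED = auto()
    RETIRED = auto()

TRANSITIONS = {
    IdeaStage.SUBMITTED:   {IdeaStage.DISCUSSED, IdeaStage.RETIRED},
    IdeaStage.DISCUSSED:   {IdeaStage.SELECTED, IdeaStage.RETIRED},
    IdeaStage.SELECTED:    {IdeaStage.IMPLEMENTED},
    IdeaStage.IMPLEMENTED: set(),
    IdeaStage.RETIRED:     set(),
}

def advance(current: IdeaStage, nxt: IdeaStage) -> IdeaStage:
    """Move an idea along the life cycle, rejecting illegal jumps."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt

stage = advance(IdeaStage.SUBMITTED, IdeaStage.DISCUSSED)
print(stage)
```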
Abstract:
In parallel with the effort to create Linked Open Data for the World Wide Web, a number of projects aim to develop the same technologies for use in closed environments such as private enterprises. In this paper, we present the results of research on interlinking structured data for use in Idea Management Systems, a still rare breed of knowledge management systems dedicated to innovation management. In our study, we show the process of extending an ontology that initially covers only the Idea Management System structure towards linking with distributed enterprise data and public data using Semantic Web technologies. Furthermore, we point out how the established links can help to solve key problems of contemporary Idea Management Systems.
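A minimal sketch of the interlinking step using rdflib. The idea-ontology namespace and property names are assumptions; the DBpedia resource stands in for the public Linked Data the paper links to:

```python
from rdflib import Graph, Literal, Namespace, URIRef, RDF

# Sketch of interlinking an idea with public Linked Data. The idea
# ontology namespace is hypothetical; the DBpedia URI is real.
IDEA = Namespace("http://example.org/idea-ontology#")   # assumed namespace
g = Graph()
g.bind("idea", IDEA)

idea = URIRef("http://example.org/ideas/42")
g.add((idea, RDF.type, IDEA.Idea))
g.add((idea, IDEA.title, Literal("Solar-powered charging kiosks")))
# The kind of link argued for above: community data -> public data.
g.add((idea, IDEA.relatedTopic,
       URIRef("http://dbpedia.org/resource/Solar_energy")))

print(g.serialize(format="turtle"))
```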