790 results for Quality Management System
Abstract:
Green Supply Chain Management (GSCM) is gaining prominence in academia and business as an approach that aims to deliver both economic and environmental gains. GSCM is operationalized through Environmental Management System (EMS) tools, comprising Reverse Logistics, Green Purchasing, Green Sourcing, Green Design, Green Packaging, Green Operation, Green Manufacturing, Green Innovation and Customer Awareness. The objective of this study is to map the GSCM tools and identify their practice in a consumer goods industry in the Vale do Paraíba. Data were collected from the database of the company chosen as the object of study, as well as through on-site visits and interviews. The results showed that the Green Operation, Green Manufacturing, Green Innovation and Green Sourcing tools are applied in the company, and only the Customer Awareness tool showed no practice at all. For the other tools, the company showed interest in, or an ideological alignment with, applying them.
Abstract:
In this paper, we present an overview of the smart grid, defining the three main systems that compose it: the smart infrastructure system, the smart management system and the smart protection system. We then discuss one functionality of the smart management system, conservation voltage reduction (CVR), citing its benefits and its history of application. Finally, we cover a test in which we reduced the nominal voltages of incandescent, CFL and LED bulbs, in the context of residential lighting, and of LED and HPS lamps, in the context of public lighting. The test aims to check whether voltage reduction adversely affects lighting sources, measuring temperature manually with a FLIR thermal imaging camera and illuminance with a lux meter. Power factor, total harmonic distortion and input power values were collected automatically with a Fluke 345 power quality analyzer fitted with a Fluke Hall-effect current probe. For residential lighting, both CFL and LED performed well, with the smallest variations in illuminance; of the two, the LED source had the lowest harmonics and the lowest power consumption, whereas incandescent bulbs performed poorly, as expected. Public lighting sources also performed well and had power factors within the standards, in contrast to the residential CFL and LED sources. The data collected clearly show the feasibility of nominal voltage reductions: even small reductions offer savings that can be passed on to utilities and consumers.
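The physical intuition behind the test above can be sketched numerically. This is a back-of-envelope model, not the paper's method: it assumes an incandescent lamp behaves as a near-constant resistance (so power scales as V²), while electronically driven CFL/LED lamps hold power roughly constant. The voltage values and the 60 W rating are illustrative.

```python
# Hypothetical back-of-envelope model of conservation voltage reduction (CVR).
# Assumption: an incandescent bulb is a near-constant resistance, so P ~ V^2;
# electronically ballasted CFL/LED sources draw roughly constant power instead.

def incandescent_power(p_nominal: float, v: float, v_nominal: float = 220.0) -> float:
    """Power drawn by a resistive lamp at voltage v, assuming constant resistance."""
    return p_nominal * (v / v_nominal) ** 2

def savings_percent(v: float, v_nominal: float = 220.0) -> float:
    """Relative power savings of a resistive load under voltage reduction."""
    return (1 - (v / v_nominal) ** 2) * 100

# A 5% voltage reduction (220 V -> 209 V) on a 60 W incandescent bulb:
p = incandescent_power(60.0, 209.0)   # ~54.15 W
s = savings_percent(209.0)            # ~9.75% savings
```

This quadratic scaling is why even small voltage reductions translate into measurable savings for resistive loads, matching the abstract's conclusion for incandescent bulbs.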
Abstract:
Abstract Background Recent medical and biological technology advances have stimulated the development of new testing systems that provide huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. Results This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent and validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: 1) process scalability, achieved through a relational database implementation, and 2) correctness of processes, ensured using process algebra. Furthermore, the software allows end users to define genetic testing without requiring any knowledge of business process notation or process algebra. Conclusions This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have demonstrated the feasibility and shown the usability benefits of a rigorous approach that can specify, validate, and perform genetic testing through easy end-user interfaces.
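The control-flow idea behind ACP-style process specifications can be illustrated with a minimal sketch. This is not the CEGH schema: it assumes a reduced algebra with only atomic actions, sequential composition ("seq") and choice ("alt"), and the workflow actions are invented for illustration.

```python
# Minimal sketch of ACP-style process terms: atomic actions plus two
# operators, sequential composition ("seq", x . y) and choice ("alt", x + y).
# The workflow below is illustrative, not the CEGH process repository.

def traces(term):
    """Enumerate the complete action traces a process term can perform."""
    if isinstance(term, str):            # atomic action
        return [[term]]
    op, left, right = term
    if op == "seq":                      # x . y: every trace of x, then of y
        return [a + b for a in traces(left) for b in traces(right)]
    if op == "alt":                      # x + y: traces of x or traces of y
        return traces(left) + traces(right)
    raise ValueError(f"unknown operator: {op}")

# A toy genetic-testing workflow: extract DNA, then either sequence or
# genotype the sample, then report the result.
workflow = ("seq", "extract",
            ("seq", ("alt", "sequence", "genotype"), "report"))

assert traces(workflow) == [["extract", "sequence", "report"],
                            ["extract", "genotype", "report"]]
```

Enumerating traces like this is one simple way a planner can validate that every execution path of a defined test reaches a reporting step before the process is instantiated.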
Abstract:
Web-based management systems built on metadata allow efficient maintenance of large amounts of information. A controlled vocabulary such as the one used by the Integrated Library System of the University of São Paulo (SIBi/USP) requires continuous updating, carried out through a collaborative network with the participation of indexing librarians from all areas of knowledge. This paper presents the results obtained with the management system developed by the Management Group for the maintenance of the SIBi/USP Controlled Vocabulary. The workflow of this system consists of validation filters applied by the members of the Vocabulary Management Group. Besides this system, the Vocabulary management methodology also includes a governance policy. The results obtained in the six years since the activation of the management system through the Suggestion Database (Base de Sugestões) were: 1192 descriptor inclusions, 240 alterations and 61 exclusions, totaling 1493 operations. The management and quality control of the Vocabulary have improved information processing and retrieval in the USP Bibliographic Database, DEDALUS.
Abstract:
To support the evaluation of teaching, the Spanish National Agency for Quality Assessment and Accreditation (ANECA) launched the Support Program for the Evaluation of the Teaching Activity of University Faculty (DOCENTIA), with the aim of helping universities design their own mechanisms to manage the quality of university teaching and to foster its development and recognition. A fundamental pillar of this procedure is the surveys that students complete to assess the teaching quality of their professors. The Institutional Evaluation Office (GEI) of the Universidad de Las Palmas de Gran Canaria (ULPGC) is in charge of managing the whole survey process, which creates a pressing need for a software application to track that process. This project is a management system that shows which ULPGC professors have been evaluated by their students and in which courses. The main objective is to track the survey process to help ensure that every professor has been evaluated in at least one of their courses. Only professors who have been evaluated in at least one of their courses may access the teaching excellence grades of the DOCENTIA-ULPGC Program. Hence the importance of this final degree project.
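The coverage rule at the heart of this tracking system — a professor qualifies only if evaluated in at least one of their courses — reduces to simple set logic. A minimal sketch, with invented data structures (the real application's schema is not described in the abstract):

```python
# Hypothetical sketch of the DOCENTIA-ULPGC coverage rule: a professor is
# eligible only if students completed a survey for at least one of the
# courses that professor teaches. Names and structures are illustrative.

def eligible_professors(teaching, evaluated):
    """teaching: {professor: set of courses taught};
    evaluated: set of (professor, course) pairs with completed surveys."""
    return {prof for prof, courses in teaching.items()
            if any((prof, course) in evaluated for course in courses)}

teaching = {"Ana": {"Algebra", "Calculus"}, "Luis": {"Physics"}}
evaluated = {("Ana", "Calculus")}

# Ana was evaluated in one of her two courses; Luis in none.
assert eligible_professors(teaching, evaluated) == {"Ana"}
```

The complement of this set is what the tracking application would flag mid-campaign: professors still lacking any completed survey.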
Abstract:
In recent years, consumers have become more aware of and sensitive to environmental and food safety matters. They are increasingly interested in organic agriculture and markets and tend to prefer 'organic' products over their traditional counterparts. To increase quality and reduce production costs in organic and low-input agriculture, the 6FP European "QLIF" project investigated the use of natural products such as bio-inoculants. These are mostly composed of arbuscular mycorrhizal fungi and other microorganisms, so-called "plant probiotic" microorganisms (PPM), because they help maintain a high yield even under abiotic and biotic stress. Italian law (DLgs 217, 2006) has recently included them as "special fertilizers". This thesis focuses on the use of special fertilizers when growing tomatoes organically in open-field conditions, and on the effects they induce on yield, quality and rhizospheric microbial communities. The primary objective was to better understand how plant-probiotic microflora management could buffer future reductions of external inputs while maintaining tomato fruit yield, quality and system sustainability. We studied rhizospheric microbial communities with statistical, molecular and histological methods. This work demonstrated that long-lasting introduction of inoculum positively affected mycorrhizal colonization and resistance against pathogens. In contrast, repeated introduction of compost negatively affected tomato quality, likely because it destabilized the ripening process, leading to over-ripening and increasing the amount of non-marketable product. After two years without any significant difference, in the third year extreme combinations of inoculum and compost inputs (low inoculum with high amounts of compost, or vice versa) increased mycorrhizal colonization. As a result, in order to reduce production costs, we recommend using only inoculum rather than compost.
Secondly, this thesis analyses how mycorrhizal colonization varies across different tomato cultivars and experimental field locations. We found statistically significant differences between locations and between arbuscular colonization patterns per variety. To confirm these histological findings, we started a set of molecular experiments. The thesis discusses preliminary results and recommends their continuation and refinement to obtain complete results.
Abstract:
This thesis sets out to develop a model, an architecture and a technology for the naming system of the TuCSoN coordination middleware, covering agents, nodes and resources. It defines universal identities representing these entities, for both physical and virtual mobility, within a distributed management system (AMS, NMS, RMS); this module also handles ACCs and transducers, addressing issues such as fault tolerance, persistence and consistency, together with disembodied coordination over the network, as happens with Cloud technologies. The thesis opens with an introduction that outlines its contents, giving an initial overall view of the work. Chapter 1 covers the background knowledge needed to understand the thesis: TuCSoN (the coordination middleware the designed module must interface with) and Cassandra (the distributed server system on which the module's data persistence and storage rely). Chapter 2 describes JADE, the middleware from which the study of the module's model and architecture started. Chapter 3 explains the structure and model of the module, examining the internal entities and all the relationships among them; this part also details the distribution of the module and its components over the network. Chapter 4 details the module's naming system: the syntax and the set of procedures an external consumer entity must follow to obtain a "universal name", and the module's internal steps to provide the identifier to the consumer entity.
Chapter 5 describes the case studies: interactions with external entities, the behavior of the internal entities depending on whether or not the module is distributed over the network, and the policies, paradigms and procedures for fault and error tolerance, detailing the corresponding repair methods. Chapter 6 describes possible future developments: new forms of interaction among the entities that use this module, and possible improvements and technological evolutions of the module. Finally, conclusions about the designed module are presented in detail, giving an overall view of what the thesis contains.
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures, with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1). The idea underlying intelligent lexical acquisition systems is to modify this schematic formula so that the system can exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2). Moreover, the thesis claims that a system can only be considered intelligent if it not only makes maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. Hence, one of the central elements of this work is the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype of such a system, whose acquisition components were developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora.
To illustrate four major challenges of constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special data-alignment technique; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the Learn-Alpha design rule is postulated. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted with this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the conclusions, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
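The lexicon-update step G + L + S → L' and the 'enough input' decision (challenge c) can be sketched as an evidence-counting loop. This is an illustrative toy, not the ANALYZE-LEARN-REDUCE method; the feature labels and threshold are invented:

```python
# Illustrative sketch of the update G + L + S -> L': lexical facts observed in
# the analyzed structures S are accumulated per lexeme, and an entry enters the
# new lexicon only once it has 'enough' supporting evidence (challenge c).
# Lexemes, features and the threshold below are hypothetical.
from collections import Counter, defaultdict

def update_lexicon(lexicon, observations, threshold=3):
    """observations: iterable of (lexeme, feature) facts extracted from S."""
    evidence = defaultdict(Counter)
    for lexeme, feature in observations:
        evidence[lexeme][feature] += 1
    new_lexicon = dict(lexicon)               # L is kept; L' extends it
    for lexeme, counts in evidence.items():
        feature, n = counts.most_common(1)[0]
        if n >= threshold:                    # the system has seen 'enough'
            new_lexicon[lexeme] = feature
    return new_lexicon

# Three ditransitive observations outweigh one (possibly erroneous) parse:
obs = [("give", "ditransitive")] * 3 + [("give", "transitive")]
assert update_lexicon({}, obs) == {"give": "ditransitive"}
```

Re-running the update as new structures arrive also allows a majority shift to overwrite a previously committed value, a crude stand-in for the revision of falsely acquired knowledge that Learn-Alpha demands.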
Abstract:
SMARTDIAB is a platform designed to support the monitoring, management, and treatment of patients with type 1 diabetes mellitus (T1DM), by combining state-of-the-art approaches in the fields of database (DB) technologies, communications, simulation algorithms, and data mining. SMARTDIAB consists mainly of two units: 1) the patient unit (PU); and 2) the patient management unit (PMU), which communicate with each other for data exchange. The PMU can be accessed by the PU through the internet using devices such as PCs/laptops with direct internet access or mobile phones via a Wi-Fi/General Packet Radio Service access network. The PU consists of an insulin pump for subcutaneous insulin infusion to the patient and a continuous glucose measurement system. The aforementioned devices, running a user-friendly application, gather patient-related information and transmit it to the PMU. The PMU consists of a diabetes data management system (DDMS), a decision support system (DSS) that provides risk assessment for long-term diabetes complications, and an insulin infusion advisory system (IIAS), which reside on a Web server. The DDMS can be accessed by both medical personnel and patients, with appropriate security access rights and front-end interfaces. The DDMS, apart from being used for data storage/retrieval, also provides advanced tools for the intelligent processing of the patient's data, supporting the physician in decision making regarding the patient's treatment. The IIAS is used to close the loop between the insulin pump and the continuous glucose monitoring system, by providing the pump with the appropriate insulin infusion rate in order to keep the patient's glucose levels within predefined limits. The pilot version of SMARTDIAB has already been implemented, while the platform's evaluation in a clinical environment is in progress.
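The closed-loop idea the IIAS realizes — mapping a glucose reading to an infusion rate that steers glucose back into a predefined band — can be sketched with a plain proportional rule. This is purely illustrative and is NOT the SMARTDIAB algorithm; all parameter values are invented:

```python
# Hedged sketch of a glucose-to-insulin closed loop: basal rate plus a
# proportional correction, clamped to a safe range. The gain, target and
# limits below are hypothetical and not clinical recommendations.

def infusion_rate(glucose_mgdl, target=110.0, basal=1.0,
                  gain=0.02, max_rate=5.0):
    """Insulin rate in U/h: basal plus proportional correction, clamped >= 0."""
    rate = basal + gain * (glucose_mgdl - target)
    return min(max(rate, 0.0), max_rate)

assert infusion_rate(110.0) == 1.0                    # at target: basal only
assert abs(infusion_rate(180.0) - 2.4) < 1e-9         # high glucose: rate rises
assert infusion_rate(50.0) == 0.0                     # low glucose: suspended
```

Real advisory systems use far richer models (insulin-on-board, glucose trends, meal announcements), but the structure — sensor reading in, bounded infusion command out — is the loop the abstract describes.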
Abstract:
This paper focuses on the integration of state-of-the-art technologies in the fields of telecommunications, simulation algorithms, and data mining in order to develop a semi- to fully-automated monitoring and management system for Type 1 diabetes patients. The main components of the system are a glucose measurement device, an insulin delivery system (insulin injections or insulin pumps), a mobile phone on the GPRS network, and a PDA or laptop for the Internet. In the medical environment, appropriate infrastructure for storing, analyzing and visualizing patients' data has been implemented to facilitate treatment design by health care experts.
Abstract:
In this paper the software architecture of a framework which simplifies the development of applications in the area of Virtual and Augmented Reality is presented. It is based on VRML/X3D to enable rendering of audio-visual information. We extended our VRML rendering system with a device management system based on the concept of a data-flow graph. The aim of the system is to create Mixed Reality (MR) applications simply by plugging together small prefabricated software components, instead of compiling monolithic C++ applications. The flexibility and advantages of the presented framework are explained on the basis of an exemplary implementation of a classic Augmented Reality application and its extension to a collaborative remote-expert scenario.
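The "plugging together small components" idea of a data-flow graph can be sketched minimally. This is an invented toy, not the framework's API: components expose outputs that connect to other components' inputs, and data is pushed through the resulting graph.

```python
# Minimal sketch of a push-based data-flow graph, as used conceptually by
# device management systems: each node transforms incoming data and forwards
# the result to its connected downstream nodes. Component names are invented.

class Node:
    def __init__(self, transform):
        self.transform = transform
        self.outputs = []                 # downstream nodes

    def connect(self, other):
        self.outputs.append(other)
        return other                      # returning the target allows chaining

    def push(self, value):
        result = self.transform(value)
        for node in self.outputs:
            node.push(result)
        return result

# A toy pipeline: a pass-through tracker source, a doubling filter,
# and a renderer sink that records what would be drawn.
drawn = []
tracker = Node(lambda v: v)
doubler = Node(lambda v: v * 2)
renderer = Node(drawn.append)
tracker.connect(doubler).connect(renderer)

tracker.push(21)
assert drawn == [42]
```

An MR application is then just a wiring description — which trackers, filters and renderers to instantiate and how to connect them — rather than compiled C++ logic, which is the flexibility the paper claims.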
Abstract:
CampusContent (CC) is a DFG-funded competence center for eLearning with its own portal. It links content and people who support sharing and reuse of high-quality learning materials and codified pedagogical know-how, such as learning objectives, pedagogical scenarios, recommended learning activities, and learning paths. The heart of the portal is a distributed repository whose contents are linked to various other CampusContent portals. Integrated into each portal are user-friendly tools for designing reusable learning content, exercises, and templates for learning units and courses. Specialized authoring tools permit the configuration, adaptation, and automatic generation of interactive Flash animations using Adobe's Flexbuilder technology. More coarse-grained content components, such as complete learning units and entire courses in which contents and materials taken from the repository are embedded, can be created with XML-based authoring tools. Open service interfaces allow the deep or shallow integration of the portal provider's preferred authoring and learning tools. The portal is built on top of the Alfresco Enterprise Content Management System, which comes with social networking functionality that has been adapted to accommodate collaboration, sharing and reuse within trusted communities of practice.
Abstract:
In this article the use of Learning Management Systems (LMS) at the School of Engineering, University of Borås, in the year 2004 and the academic year 2009-2010 is investigated. The tools in the LMS were classified into four groups (tools for distribution, tools for communication, tools for interaction and tools for course administration) and the pattern of use was analyzed. The preliminary interpretation of the results was discussed with a group of teachers from the School of Engineering with long experience of using an LMS. High expectations of LMS as tools to facilitate flexible education, student-centered methods and the creation of an effective learning environment are abundant in the literature. This study, however, shows that in most of the surveyed courses the available LMS is predominantly used to distribute documents to students. The authors argue that a more elaborate use of LMS and a transformation of pedagogical practices towards social-constructivist, learner-centered procedures should be treated as an integrated process of professional development.
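The four-group classification of tool use described above amounts to mapping each logged tool event to its group and tallying. A small sketch, with assumed tool and group names (the study's actual tool inventory is not given in the abstract):

```python
# Illustrative tally of LMS tool use by the four groups named in the study.
# The tool names and their group assignments below are assumptions.
from collections import Counter

TOOL_GROUPS = {
    "file_upload": "distribution", "course_page": "distribution",
    "email": "communication", "announcement": "communication",
    "forum": "interaction", "quiz": "interaction",
    "gradebook": "administration", "roster": "administration",
}

def usage_by_group(events):
    """events: iterable of tool names logged in the LMS."""
    return Counter(TOOL_GROUPS[e] for e in events)

# A distribution-heavy log like the pattern the study reports:
log = ["file_upload"] * 6 + ["email", "forum", "gradebook"]
assert usage_by_group(log) == Counter({"distribution": 6, "communication": 1,
                                       "interaction": 1, "administration": 1})
```

Aggregated per course, such tallies make the study's headline finding visible at a glance: distribution dominating the other three groups.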
Abstract:
Quality of education should remain stable or permanently increase, even as the number of students rises. Quality of education is often related to opportunities for active learning and individual facilitation. This paper addresses the question of how high-quality learning can be enabled within oversized courses, and presents the approach of e-flashcards, which enables active learning and individual facilitation within large-scale university courses.