43 results for Computer systems organization: general-emerging technologies

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

In this paper we propose a new benchmark to support decision making in the maintenance of computer systems. The benchmark is built from load average sample data. The main goal is to improve the reliability and performance of a set of devices or components. In particular, the stability of the system is measured in terms of the variability of the load. A forecast of the behavior of this stability is also proposed as part of the benchmark report. At the final stage, a more stable system is obtained, and its overall reliability and performance can then be evaluated against appropriate specifications.
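As a rough illustration of the kind of stability metric the abstract describes (the paper's actual benchmark is not reproduced here; window size, smoothing factor and function names are assumptions), one could measure variability as the coefficient of variation of load-average samples and forecast it with exponential smoothing:

```python
# Minimal sketch, not the paper's implementation: stability of a host from
# load-average samples, plus a simple forecast of that stability.
from statistics import mean, pstdev

def stability(samples, window=60):
    """Coefficient of variation of the last `window` load-average samples;
    lower values mean a more stable system."""
    recent = samples[-window:]
    m = mean(recent)
    return pstdev(recent) / m if m else 0.0

def forecast(values, alpha=0.3):
    """Exponentially weighted forecast of the next stability value."""
    estimate = values[0]
    for v in values[1:]:
        estimate = alpha * v + (1 - alpha) * estimate
    return estimate

loads = [0.8, 1.1, 0.9, 2.5, 0.7, 1.0, 0.95, 1.05] * 10  # e.g. from /proc/loadavg
history = [stability(loads[:i]) for i in range(10, len(loads))]
print(stability(loads), forecast(history))
```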

Relevance:

100.00%

Publisher:

Abstract:

This article presents a cooperative manoeuvre among three dual-mode cars, i.e. vehicles equipped with sensors and actuators that can be driven either manually or autonomously. One vehicle is driven autonomously and the other two are driven manually. The main objective is to test two decision algorithms for priority conflict resolution at intersections, so that an autonomously driven vehicle can take its own decision about crossing an intersection while mingling with manually driven cars, without the need for infrastructure modifications. To do this, the system needs the positions, speeds, and turning intentions of the other cars involved in the manoeuvre. This information is acquired via communications, but other methods are also viable, such as artificial vision. The idea of the experiments was to adjust the speed of the manually driven vehicles to force a situation in which all three vehicles arrive at the intersection at the same time.
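A toy version of this kind of priority decision (not one of the paper's two algorithms; the safety gap and the tie-breaking rule are invented for illustration) compares estimated arrival times and yields on a near-tie:

```python
# Illustrative sketch: the autonomous car may enter the intersection if no
# manually driven car arrives first or at (almost) the same time; near-ties
# are broken with a priority-to-the-right rule.
from dataclasses import dataclass

@dataclass
class Car:
    dist_to_intersection: float  # metres, from communications or vision
    speed: float                 # m/s
    approach: str                # "north", "east", ...

    def eta(self):
        return self.dist_to_intersection / max(self.speed, 0.1)

def may_cross(ego, others, safety_gap=2.0, right_of=None):
    for other in others:
        gap = other.eta() - ego.eta()
        if abs(gap) < safety_gap:            # both arrive "at the same time"
            if right_of and right_of(other, ego):
                return False                 # yield to the car on the right
        elif gap < 0:
            return False                     # the other car arrives first
    return True

ego = Car(40.0, 10.0, "north")
others = [Car(38.0, 9.5, "east"), Car(80.0, 10.0, "west")]
print(may_cross(ego, others, right_of=lambda a, b: a.approach == "east"))
```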

Relevance:

100.00%

Publisher:

Abstract:

The term "Logic Programming" refers to a variety of computer languages and execution models which are based on the traditional concept of Symbolic Logic. The expressive power of these languages offers promise to be of great assistance in facing the programming challenges of present and future symbolic processing applications in Artificial Intelligence, Knowledge-based systems, and many other areas of computing. The sequential execution speed of logic programs has been greatly improved since the advent of the first interpreters. However, higher inference speeds are still required in order to meet the demands of applications such as those contemplated for next generation computer systems. The execution of logic programs in parallel is currently considered a promising strategy for attaining such inference speeds. Logic Programming in turn appears as a suitable programming paradigm for parallel architectures because of the many opportunities for parallel execution present in the implementation of logic programs. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an "Abstract Machine" level suitable for direct implementation on existing parallel systems or for the design of special purpose parallel architectures. Few assumptions are made at the source language level and therefore the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-Parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues, etc. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set. This design is based on extending to a parallel environment the techniques introduced by the Warren Abstract Machine, which have already made very fast and space efficient sequential systems a reality. Therefore, the model herein presented is capable of retaining sequential execution speed similar to that of high performance sequential systems, while extracting additional gains in speed by efficiently implementing parallel execution. These claims are supported by simulations of the Abstract Machine on sample programs.

Relevance:

100.00%

Publisher:

Abstract:

The future Internet is expected to be composed of a mesh of interoperable web services accessible from all over the web. This approach has not yet caught on, since global user–service interaction is still an open issue. This paper presents a vision of next-generation front-end Web 2.0 technology that will enable integrated access to services, contents and things in the future Internet. We illustrate how front-ends that wrap traditional services and resources can be tailored to the needs of end users, converting end users into prosumers (creators and consumers of service-based applications). To do this, we propose an architecture that end users without programming skills can use to create front-ends, consult catalogues of resources tailored to their needs, easily integrate and coordinate front-ends, and create composite applications to orchestrate services in their back-end. The paper includes a case study illustrating that current user-centred web development tools are at a very early stage of evolution. We provide statistical data on how the proposed architecture improves on these tools. This paper is based on research conducted by the Service Front End (SFE) Open Alliance initiative.
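Purely to make the catalogue-and-composition idea concrete (every name below is invented; this is not the SFE architecture's API), a composite application can be thought of as an ordered wiring of wrapped front-ends picked from a catalogue:

```python
# Illustrative sketch: end users pick wrapped service front-ends from a
# catalogue and wire them into a composite application without programming.
class FrontEnd:
    def __init__(self, name, render):
        self.name, self.render = name, render

catalogue = {
    "weather": FrontEnd("weather", lambda ctx: f"Weather in {ctx['city']}"),
    "traffic": FrontEnd("traffic", lambda ctx: f"Traffic in {ctx['city']}"),
}

def compose(names, ctx):
    """A composite application as an ordered wiring of front-ends that
    share a common context."""
    return [catalogue[n].render(ctx) for n in names]

print(compose(["weather", "traffic"], {"city": "Madrid"}))
```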

Relevance:

100.00%

Publisher:

Abstract:

This document provides an overview of the current set of tools available for the analysis and exploitation of vulnerabilities in computer systems, and more specifically in computer networks. On the one hand, it analytically describes the free software tools currently offered for analysing and detecting vulnerabilities in computer systems, covering their operation, their options, and the motivation for using them: comparing them with other tools in some cases, describing their differences in others, and justifying their selection in all of them. On the other hand, the analysed tools are put to use in concrete worked examples with different parameter selections, observing their behaviour and trying to discern which data are useful for obtaining information about the vulnerabilities present in the system. In addition, a practical case study puts the presented theoretical knowledge into practice, so that the reader can consolidate what has been learned by verifying, against a real case, the usefulness of the tools described. The results show that vulnerability analysis and detection carried out by a competent system administrator provides the organisation with a set of techniques to improve its computer security and thus avoid problems with potential attackers.
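A hedged example of the kind of tool usage such a survey covers: driving nmap from Python for service and version detection. The `-sV` (version detection) and `-oN` (normal output to file) flags are standard nmap options; the wrapper function itself is invented for illustration, and nmap must be installed:

```python
# Sketch: run a service/version detection scan against a host we are
# authorised to test, keeping the raw output for later analysis.
import subprocess

def scan(host, outfile="scan.txt"):
    result = subprocess.run(
        ["nmap", "-sV", "-oN", outfile, host],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

print(scan("scanme.nmap.org"))  # a host the nmap project explicitly allows scanning
```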

Relevance:

100.00%

Publisher:

Abstract:

Current solutions to the interoperability problem in Home Automation systems are based on a priori agreements, where protocols are standardized and later integrated through specific gateways. In this regard, spontaneous interoperability, i.e. the ability to integrate new devices into the system with minimal advance planning, is still considered a major challenge that requires new models of connectivity. In this paper we present an ontology-driven communication architecture whose main contribution is that it facilitates spontaneous interoperability at the system model level by means of semantic integration. The architecture has been validated through a prototype, and the main challenges for achieving complete spontaneous interoperability are also evaluated.
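A toy sketch of what ontology-driven matching buys here (the paper's actual ontology and architecture are not reproduced; the concept hierarchy below is invented): a new device advertises the concept it implements, and integration succeeds if that concept, or one of its ancestors, is already known to the system.

```python
# Spontaneous integration via a concept hierarchy instead of a per-protocol
# gateway: match a new device on any concept the system understands.
ontology = {                     # child concept -> parent concept
    "DimmableLamp": "Lamp",
    "Lamp": "Actuator",
    "Thermometer": "Sensor",
}

def ancestors(concept):
    while concept is not None:
        yield concept
        concept = ontology.get(concept)

def integrate(device_concept, known_concepts):
    return next((c for c in ancestors(device_concept) if c in known_concepts), None)

print(integrate("DimmableLamp", {"Lamp", "Sensor"}))  # -> 'Lamp'
```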

Relevance:

100.00%

Publisher:

Abstract:

High-brightness semiconductor lasers are potential transmitters for future space lidar systems. In the framework of the European project BRITESPACE, we propose an all-semiconductor laser source for an Integrated Path Differential Absorption lidar system for column-averaged measurements of atmospheric CO2 in future satellite missions. The complete system architecture has to be adapted to the particular emission properties of these devices using a random-modulated continuous-wave (RM-CW) approach. We present initial experimental results for the InGaAsP/InP monolithic Master Oscillator Power Amplifiers, which provide the ON and OFF wavelengths close to the selected absorption line around 1572 nm.
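The RM-CW principle can be sketched numerically (illustrative parameters, not BRITESPACE values): the echo of a pseudo-random intensity sequence is recovered by cross-correlation, whose peak locates the round-trip delay for each wavelength.

```python
# Sketch of random-modulated CW ranging: recover an unknown delay and
# amplitude from a noisy echo of a pseudo-random +/-1 code.
import numpy as np

rng = np.random.default_rng(0)
code = rng.integers(0, 2, 4096) * 2 - 1       # pseudo-random +/-1 sequence
delay, attenuation = 137, 0.3                 # unknowns to recover
echo = attenuation * np.roll(code, delay) + rng.normal(0, 1.0, code.size)

# Circular cross-correlation via FFT; its peak sits at the delay.
corr = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(code))).real
print(int(np.argmax(corr)))                   # -> 137

# With ON and OFF wavelengths near 1572 nm, comparing the two correlation
# peak amplitudes yields the differential absorption along the path.
```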

Relevance:

100.00%

Publisher:

Abstract:

In general terms, m-health can be defined as "mobile computing, medical sensor, and communications technologies for health care." The increasing availability, miniaturization, performance and data rates of mobile technologies, together with the expected convergence of wireless communication and network technologies around mobile health systems, are accelerating the deployment of m-health systems and the provision of m-health services such as mobile telecare. The emerging concept of m-health involves significant challenges (technical studies, analysis, modelling of service provision, etc.) that must be tackled to drive the evolution of e-health systems and services from telecommunication technologies based on wired access and fixed networks towards new-generation wireless and mobile configurations. This work first analyses the meaning and implications of m-health and its current situation, the challenges that must be faced for its deployment and provision, and its trends. Among the many different services that can be provided, the LoPe (Localización de Personas) mobile telecare service, launched by the Spanish Red Cross in February 2007, has been selected; it makes it possible to know at all times the location of the person carrying the associated device. Aimed at people with disabilities, at risk, or dependent due to cognitive impairment, its goal is to help them recover their personal autonomy. The provision of this service is modelled with system dynamics, since this theory is considered well suited to modelling complex systems that evolve over time. The final result is a model, implemented with the Studio 8® tool from the Norwegian company Powersim Software AS, that has allowed us to analyse and evaluate the behaviour of the service over time, draw conclusions about it, and propose future improvements to the service.
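For readers unfamiliar with system dynamics, a minimal stock-and-flow model of the kind built in Powersim can be reproduced with Euler integration (the structure and all numbers below are invented for illustration; they are not the thesis's model of the LoPe service):

```python
# Minimal system dynamics sketch: a stock of active telecare users fed by
# enrolments and drained by drop-outs, integrated with Euler's method.
def simulate(months=36, dt=1.0, users=100.0, potential=5000.0,
             contact_rate=0.04, dropout_rate=0.02):
    history = []
    for _ in range(int(months / dt)):
        enrolments = contact_rate * potential   # inflow (users/month)
        dropouts = dropout_rate * users         # outflow (users/month)
        users += (enrolments - dropouts) * dt   # integrate the stock
        potential -= enrolments * dt            # shrink the potential pool
        history.append(users)
    return history

print(round(simulate()[-1]))   # active users after three simulated years
```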

Relevance:

100.00%

Publisher:

Abstract:

In the first decades of the 21st century, organizations and administrations face challenges and opportunities shaped by a number of disruptive forces, such as globalization, fast-moving emerging technologies and economic imbalances, which are acting as drivers of market transformation. The combined action of these factors is forcing all industrial companies to work at ever higher and more demanding productivity levels, constantly asking themselves how to improve and meet customer requirements. This situation makes it necessary to pose the fundamental questions again: Who is the customer? What does the customer value? And how can sustainable benefits be generated? Applying this reflection to the military naval industry sets the goals this doctoral thesis seeks to achieve. The first, general goal is the definition of a sustainable business model for the military naval industry of 2025, adapted to customer requirements and to the new political, economic, social, technological and environmental scenario surrounding this industry. The second goal, which follows from the general model, is to develop a methodology for executing through-life support programmes for the warship. The research is structured in four parts. The first justifies the need to change the existing model and identifies the structural factors for defining the new one. The second reviews the literature on one of the key aspects of the new model, the Product-Service concept. The third focuses entirely on the military naval industry, studying the specific aspects of the sector and, on the basis of field work, identifying what Navies value most and how they manage the warship throughout its life cycle. Finally, the principles of the proposed model are presented and the basic pillars for executing Through Life Support (TLS) projects are developed. As a result of the research, the proposed model for the military naval industry rests on eleven principles: 1. The warship (a high added-value product) must be designed and built in a shipyard of the country developing the defence programme. 2. Design must be oriented to customer value: the warship must be designed to fulfil its mission effectively and efficiently throughout its operational life, ensuring the safety of the ship and its people and protecting the environment in accordance with current regulations. 3. The company must supply integrated through-life support solutions for the product. 4. Capabilities for integrating complex systems must be developed and maintained for the entire life cycle of the warship. 5. Digital technologies must be incorporated into the product, the processes, the people and the business model itself. 6. Long-term action plans must be developed with the domestic customer, based on three premises: (i) they must cover the complete life cycle, from the research and development phase to the ship's retirement from service; (ii) demand must be sophisticated, i.e. the customer's requirements, in terms of both product and efficiency, must "pull" the contractor; and (iii) they must allow the company's technological level and industrial capabilities to be maintained into the future, positioning it to compete in the export market. 7. The military export sector must be promoted through greater commercial activity at the international level. 8. Multi-localization must be fostered, since it represents an opportunity for growth and favours exports by enabling the supply of integral solutions in the destination country. 9. Institutional diplomacy must be reinforced as a lever for exports. 10. Technological leadership must be strengthened in both product and processes, with active R&D&I (Research, Development and Innovation) policies. 11. Financing capacity must be reinforced with innovative solutions. The second goal of this thesis focuses on the development of integrated Through Life Support (TLS) solutions. The proposed methodology seeks to minimize the gap between capabilities and needs throughout the ship's operational life: the main objective of TLS programmes is that, in terms relative to existing technologies, the unit retains throughout its operational life capabilities equivalent to those it had when entering service. The lines of action for a TLS programme to meet this objective are value-oriented design, TLS engineering, technology refresh projects, intelligent maintenance and performance-based contracts.

Relevance:

100.00%

Publisher:

Abstract:

The TALISMAN+ project, financed by the Spanish Ministry of Science and Innovation, aims to research and demonstrate innovative, transferable solutions that offer services and products based on information and communication technologies in order to promote personal autonomy in prevention and monitoring scenarios. It will solve critical interoperability problems among systems and emerging technologies in a context where heterogeneity creates accessibility barriers that have not yet been overcome and whose removal is demanded by scientific, technological and social-healthcare settings.

Relevance:

100.00%

Publisher:

Abstract:

Identification and tracking of objects in specific environments such as harbours or security areas is a matter of great importance nowadays. For this purpose, numerous systems based on different technologies have been developed, resulting in a great amount of gathered data displayed through a variety of interfaces. This amount of information has to be evaluated by human operators in order to take the correct decisions, sometimes under highly critical situations demanding both speed and accuracy. To address this problem we describe IDT-3D, a platform for the identification and tracking of vessels in a harbour environment that is able to represent fused information in real time using a Virtual Reality application. The effectiveness of using IDT-3D as an integrated surveillance system is currently under evaluation. Preliminary results point to a significant decrease in the reaction and decision-making times of operators facing a critical situation. Although the current application focus of IDT-3D is quite specific, the results of this research could be extended to the identification and tracking of targets in other controlled environments of interest, such as coastlines, borders or even urban areas.
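To give a flavour of the fusion step (this is not IDT-3D's actual pipeline; sensors, numbers and the scalar setting are invented for illustration), position reports for the same vessel from two sensors can be combined with inverse-variance weighting, yielding an estimate better than either source alone:

```python
# Illustrative fragment: fuse two scalar position estimates, e.g. the east
# coordinate of a vessel as reported by radar and by AIS.
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion; the result has lower variance
    than either input estimate."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)

# Radar says 1520 m (variance 25), AIS says 1510 m (variance 100).
print(fuse(1520.0, 25.0, 1510.0, 100.0))   # -> (1518.0, 20.0)
```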

Relevance:

100.00%

Publisher:

Abstract:

The verified security methodology is an emerging approach to building high-assurance proofs of the security properties of computer systems. Computer systems are modeled as probabilistic programs, and rigorous program-semantics techniques are used to prove that they comply with a given security goal. In particular, the methodology advocates the use of interactive or automated theorem provers to build fully formal, machine-checked versions of these security proofs. Verified security has proved successful in modeling and reasoning about several standard security notions in the area of cryptography. However, it has fallen short of covering an important class of approximate, quantitative security notions. The distinguishing characteristic of this class is that its notions are stated as a "similarity" condition between the output distributions of two probabilistic programs, where the similarity is quantified using some notion of distance between probability distributions. The class comprises prominent security notions from several areas, such as private data analysis, information flow analysis and cryptography. These include, for instance, indifferentiability, which enables an idealized component of a system to be securely replaced by a concrete implementation (without significantly altering its security properties), and differential privacy, a notion of privacy-preserving data mining that has received a great deal of attention in recent years. The lack of rigorous techniques for formally verifying these properties is thus a notable open problem that needs to be addressed. In this dissertation we introduce several quantitative program logics to reason about this class of security notions. Our main theoretical contribution is a quantitative variant of a full-fledged relational Hoare logic for probabilistic programs. The soundness of these logics is fully formalized in the Coq proof assistant, and tool support is available through an extension of CertiCrypt, a framework for verifying cryptographic proofs in Coq. We validate the effectiveness and applicability of our approach by building fully machine-checked proofs for several systems that were out of the reach of the verified security methodology. These comprise, among others, a construction to build "safe" hash functions into elliptic curves and differentially private algorithms for several combinatorial optimization problems from the recent literature.
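As a concrete instance of such a quantified "similarity" condition, the standard textbook definition of differential privacy (stated here for orientation, not quoted from the dissertation) bounds the distance between the output distributions of one program run on two adjacent inputs:

```latex
% A randomized mechanism $M$ is $(\epsilon, \delta)$-differentially private
% if, for all adjacent databases $d, d'$ and every set $S$ of outputs,
\Pr[M(d) \in S] \;\le\; e^{\epsilon} \cdot \Pr[M(d') \in S] + \delta
```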

Relevance:

100.00%

Publisher:

Abstract:

The inherent complexity of modern cloud infrastructures has created the need for innovative monitoring approaches, as state-of-the-art solutions used for other large-scale environments do not address specific cloud features. Although cloud monitoring is nowadays an active research field, a comprehensive study covering all its aspects has not yet been presented. This paper provides a deep insight into cloud monitoring. It proposes a unified cloud monitoring taxonomy, based on which it defines a layered cloud monitoring architecture. To illustrate it, we have implemented GMonE, a general-purpose cloud monitoring tool which covers all aspects of cloud monitoring by specifically addressing the needs of modern cloud infrastructures. Furthermore, we have evaluated the performance, scalability and overhead of GMonE with the Yahoo Cloud Serving Benchmark (YCSB), using the OpenNebula cloud middleware on the Grid’5000 experimental testbed. The results of this evaluation demonstrate the benefits of our approach, surpassing the monitoring performance and capabilities of the alternatives present in state-of-the-art systems such as Amazon EC2 and OpenNebula.
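To make the layered-monitoring idea tangible (all class and function names below are invented; this is not GMonE's API), a general-purpose monitor can be pictured as plug-in collectors, one per layer of the stack, whose samples an agent merges and publishes at a fixed period:

```python
# Hedged sketch of a plug-in metric collector in the spirit of a
# general-purpose cloud monitoring tool.
import time

class CpuPlugin:
    def collect(self):
        with open("/proc/loadavg") as f:          # Linux-specific source
            return {"load1": float(f.read().split()[0])}

class AppPlugin:
    def __init__(self, queue):
        self.queue = queue
    def collect(self):
        return {"pending_requests": len(self.queue)}

def agent(plugins, publish, period=5.0, rounds=3):
    for _ in range(rounds):
        sample = {"ts": time.time()}
        for p in plugins:
            sample.update(p.collect())            # merge per-layer metrics
        publish(sample)                           # e.g. send to the manager
        time.sleep(period)

agent([CpuPlugin(), AppPlugin(queue=[])], publish=print, period=0.1)
```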

Relevance:

100.00%

Publisher:

Abstract:

The transition from an industrial society to a knowledge society that the globalized world is undergoing in the twenty-first century is leading companies and organizations to develop sustainable competitive advantages based on their intangible assets, notably management systems in general and quality management systems (QMS) in particular. Organizations engaged in oil production are influenced by this trend. Oil is a natural resource with limited reserves, whose production and consumption have grown steadily; it provides the largest share (35%) of the total energy consumed in the contemporary world, a contribution that will be maintained until 2035 according to the most conservative forecasts. It is therefore necessary to develop innovative production models that contribute to improving the recovery factor and the useful life of reservoirs, while meeting the daily production and consumption requirements of demanding global markets. The aim of this research is to develop a quality management model and assess its effect on organizational performance through the mediating effect of the constructs internal customer satisfaction and knowledge management in oil production. This explanatory, non-experimental, cross-sectional, ex post facto study was carried out in the Maracaibo Lake oil region in western Venezuela, which has been in production for more than 70 years and contains mature reservoirs. The study population consisted of 369 oil workers who took part in the technical quality workshops between May and July 2012, most of whom were in training as QMS analysts, consultants and auditors. Simple random sampling was applied, yielding a sample of 252 individuals, to whom an ad hoc questionnaire, validated by expert judgement and a pilot test, was administered. The research proceeded in a sequence that included a theoretical model, based on the review of the state of the art; a factorial model, based on factor analysis of the survey data; a linear regression model, built by simple and multiple linear regression; a path analysis model, built with the Amos 20 (SPSS) software; and, finally, a computational model, built with the Vensim PLE v.6.2 simulator. The results indicate that the theoretical model was transformed into an empirical model in which the independent variable was the QMS, the mediating variable was the integration of the dimensions elimination of non-conformity, internal customer satisfaction and organizational learning (ENCSCIAO), and the response variable was the integration of the dimensions organizational performance and organizational learning (DOOA). The mediating effect of ENCSCIAO on the QMS-DOOA relationship was verified with a goodness of fit of 42.65%. The multiple regression model found the determinant variables to be elimination of non-conformity (ENC), acquired knowledge (CA) and spontaneous knowledge (CE), which was corroborated by the path analysis model. The computational model was developed using approximate data from a typical production unit, generating four scenarios; the most favourable was the one in which the QMS and related variables were applied, reducing the production deviation, increasing the recovery factor and extending the useful life of the reservoir. It is concluded that applying the QMS and related constructs improves the performance and production of units exploiting mature oil reservoirs. The main contributions of this thesis are a QMS-based management model for oil production in mature reservoirs, together with a concept of quality management associated with reducing the deviation of annual oil production, increasing the recovery factor and extending the useful life of the reservoir. Future lines of research are oriented towards applying the model in real, specific contexts in order to measure its impact and make the appropriate adjustments.
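The multiple linear regression step of this methodology is easy to reproduce in miniature (synthetic data below; the thesis used the survey sample of 252 respondents, not these numbers, and the coefficients here are invented):

```python
# Illustrative sketch: regress a response on the determinant variables
# ENC, CA and CE by ordinary least squares and report the fit.
import numpy as np

rng = np.random.default_rng(1)
n = 252                                   # sample size reported in the text
ENC, CA, CE = rng.normal(size=(3, n))     # standardized predictor scores
y = 0.5 * ENC + 0.3 * CA + 0.2 * CE + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), ENC, CA, CE])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - np.sum((y - X @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
print(np.round(beta, 2), round(r2, 2))    # fitted coefficients and R^2
```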

Relevance:

100.00%

Publisher:

Abstract:

The Bologna Declaration and the implementation of the European Higher Education Area are promoting the use of active learning methodologies. The aim of this study is to evaluate the effects of applying active learning methodologies on the achievement of generic competences as well as on academic performance. The study was carried out at the Universidad Politécnica de Madrid, where these methodologies were applied to the Operating Systems I subject of the degree in Technical Engineering in Computer Systems. The fundamental hypothesis tested was whether the implementation of active learning methodologies (cooperative learning and problem-based learning) favours the achievement of certain generic competences ('teamwork' and 'planning and time management') and whether it also improves the students' academic performance. The original approach of this work consists in using psychometric tests to measure the degree of acquisition of generic competences, instead of the usual opinion surveys. Results indicated that active learning methodologies improve academic performance when compared to the traditional lecture/discussion method, according to the success rate obtained. These methods also seem to have an effect on the teamwork competence (the perception of the behaviour of the other members of the group), but not on the perception of each student's own behaviour. Active learning does not produce any significant change in the generic competence 'planning and time management'.