976 results for Deployment Model.


Relevance:

60.00%

Publisher:

Abstract:

The growth experienced by CENIT Transporte y logística de hidrocarburos S.A.S. has led its technology area to review the technology processes with which it supports all business areas in achieving the corporate strategy. To fulfil this value promise to the organization, the technology area must analyse, define, structure and formally implement its processes under a recognized framework, and in doing so it will also comply with the guidelines established by its parent company, Ecopetrol S.A. COBIT 5 (ISACA, 2012), as a framework that integrates technology best practices, can be used as the standard that best fits the technology area's need to define and implement IT governance at CENIT. This work proposes a model for defining and implementing IT governance processes, together with a tool that, based on each process's level of importance, contribution to the analysis focus and required effort, automatically classifies and prioritizes the processes, providing the basis for a work plan that leads, in an organized way, to the formalization of those processes. The model and its tool were used to define and classify the processes that will enable the implementation of IT governance at CENIT. The result is that the three COBIT 5.0 processes that will initially help to formally implement IT governance are, in order, APO01, APO02 and EDM01. The proposed model and tool can also be used to classify COBIT 5.0 processes under different analysis foci.
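The prioritization described above combines each process's importance, contribution to the analysis focus and required effort into a single ranking. Below is a minimal Python sketch of that kind of weighted, effort-adjusted scoring; the 1-5 scales, weights and example scores are illustrative assumptions, not values taken from the work.

```python
# Minimal sketch of a weighted-score prioritization of COBIT 5.0 processes.
# The scales, weights and scores below are illustrative assumptions.

processes = {
    # process: (importance, contribution to analysis focus, required effort), each on a 1-5 scale
    "EDM01": (5, 5, 3),
    "APO01": (5, 4, 2),
    "APO02": (4, 5, 3),
    "BAI02": (3, 2, 4),
}

WEIGHTS = {"importance": 0.4, "contribution": 0.4, "effort": 0.2}

def score(importance, contribution, effort):
    # Higher importance and contribution raise the score; higher effort lowers it.
    return (WEIGHTS["importance"] * importance
            + WEIGHTS["contribution"] * contribution
            + WEIGHTS["effort"] * (6 - effort))  # invert effort on the 1-5 scale

ranked = sorted(processes.items(), key=lambda item: score(*item[1]), reverse=True)
for name, values in ranked:
    print(f"{name}: {score(*values):.2f}")
```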

Relevance:

60.00%

Publisher:

Abstract:

Embedded systems are becoming more common and more complex every day, so finding safe, effective and inexpensive software development processes aimed specifically at this class of systems is more necessary than ever. Unlike the recent past, current advances in microprocessor technology allow equipment to be built with more than enough performance to run several software systems on a single machine. In addition, there are embedded systems with safety requirements on whose correct operation many lives and/or large financial investments depend. Such software systems are designed and implemented according to very strict and demanding software development standards. In some cases, certification of the software may also be required. For these cases, mixed-criticality systems can be a very valuable alternative. In this class of systems, applications with different criticality levels run on the same computer. However, it is often necessary to certify the entire system at the criticality level of the most critical application, which makes costs soar. Virtualization has been put forward as a very attractive technology for containing those costs. This technology allows a set of virtual machines, or partitions, to run applications with very high levels of both temporal and spatial isolation. This, in turn, allows each partition to be certified independently. Developing partitioned systems with mixed criticality requires updating traditional software development models, since they cover neither the new activities nor the new roles required in the development of these systems. For example, the system integrator must define the partitions, and the application developer must take into account the characteristics of the partition in which the application will run. Traditionally, the V-model has been particularly relevant in embedded systems development. It has therefore been adapted to cover scenarios such as the parallel development of applications or the incorporation of a new partition into an existing system. The objective of this doctoral thesis is to improve the current technology for developing partitioned systems with mixed criticality. To this end, a framework has been designed and implemented that is aimed specifically at facilitating and improving the development processes of this class of systems. In particular, an algorithm has been created that generates the system partitioning automatically. All the activities required to develop a partitioned system, including the new roles and activities mentioned above, have been integrated into the proposed development framework. Furthermore, the design of the framework is based on Model-Driven Engineering, which promotes the use of models as the fundamental elements of the development process. The framework thus provides the tools needed to model and partition the system, as well as to validate the results and generate the artifacts required to compile, build and deploy it. Moreover, extensibility and integration with validation tools were key factors in the design of the framework.
Specifically, new non-functional requirements and the generation of new artifacts, such as documentation or support for different programming languages, can be incorporated into the framework. A key part of the framework is the partitioning algorithm. This algorithm has been designed to be independent of the applications' requirements and to allow the system integrator to implement new system requirements. To achieve this independence, partitioning constraints have been defined. The algorithm guarantees that these constraints are satisfied in the partitioned system that results from its execution. The partitioning constraints have been designed with enough expressive power that a small set of them can express most of the common non-functional requirements. Constraints can be defined manually by the system integrator or generated automatically by a tool from the functional and non-functional requirements of an application. The partitioning algorithm takes the system models and the partitioning constraints as inputs. As a result of its execution, a deployment model is generated that defines the partitions needed for the system partitioning. In turn, each partition defines which applications must run in it and the resources it needs to run correctly. The partitioning problem and the partitioning constraints are modelled mathematically as coloured graphs, in which a proper vertex colouring represents a correct system partitioning. The algorithm has also been designed so that, if necessary, partitionings other than the one initially proposed can be obtained. The framework, including the partitioning algorithm, has been successfully tested on two industrial use cases: the UPMSat-2 satellite and a demonstrator of a wind turbine control system. In addition, the algorithm has been validated by running numerous synthetic scenarios, including some very complex ones with more than 500 applications.

ABSTRACT: The importance of embedded software is growing as it is required for a large number of systems. Devising cheap, efficient and reliable development processes for embedded systems is thus a notable challenge nowadays. Computer processing power is continuously increasing, and as a result, it is currently possible to integrate complex systems in a single processor, which was not feasible a few years ago. Embedded systems may have safety-critical requirements. Their failure may result in personal harm or substantial economic loss. The development of these systems requires stringent development processes that are usually defined by suitable standards. In some cases their certification is also necessary. This scenario fosters the use of mixed-criticality systems, in which applications of different criticality levels must coexist in a single system. In these cases, it is usually necessary to certify the whole system, including non-critical applications, which is costly. Virtualization emerges as an enabling technology for dealing with this problem. The system is structured as a set of partitions, or virtual machines, that can be executed with temporal and spatial isolation. In this way, applications can be developed and certified independently.
The development of MCPS (Mixed-Criticality Partitioned Systems) requires additional roles and activities that traditional systems do not require. The system integrator has to define system partitions. Application development has to consider the characteristics of the partition to which it is allocated. In addition, traditional software process models have to be adapted to this scenario. The V-model is commonly used in embedded systems development. It can be adapted to the development of MCPS by enabling the parallel development of applications or adding an additional partition to an existing system. The objective of this PhD is to improve the available technology for MCPS development by providing a framework tailored to the development of this type of system and by defining a flexible and efficient algorithm for automatically generating system partitionings. The goal of the framework is to integrate all the activities required for developing MCPS and to support the different roles involved in this process. The framework is based on MDE (Model-Driven Engineering), which emphasizes the use of models in the development process. The framework provides basic means for modeling the system, generating system partitions, validating the system and generating final artifacts. The framework has been designed to facilitate its extension and the integration of external validation tools. In particular, it can be extended by adding support for additional non-functional requirements and support for final artifacts, such as new programming languages or additional documentation. The framework includes a novel partitioning algorithm. It has been designed to be independent of the types of application requirements and also to enable the system integrator to tailor the partitioning to the specific requirements of a system. This independence is achieved by defining partitioning constraints that must be met by the resulting partitioning. They have sufficient expressive capacity to state the most common constraints and can be defined manually by the system integrator or generated automatically based on functional and non-functional requirements of the applications. The partitioning algorithm uses system models and partitioning constraints as its inputs. It generates a deployment model that is composed of a set of partitions. Each partition is in turn composed of a set of allocated applications and assigned resources. The partitioning problem, including applications and constraints, is modeled as a colored graph. A valid partitioning is a proper vertex coloring. A specially designed algorithm generates this coloring and is able to provide alternative partitions if required. The framework, including the partitioning algorithm, has been successfully used in the development of two industrial use cases: the UPMSat-2 satellite and the control system of a wind-power turbine. The partitioning algorithm has been successfully validated by using a large number of synthetic loads, including complex scenarios with more than 500 applications.
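The colored-graph formulation lends itself to a compact illustration: applications are vertices, a separation constraint between two applications is an edge, and a proper vertex coloring assigns each application to a partition. The sketch below uses a simple greedy coloring and invented application names as assumptions for illustration; the actual algorithm and constraint language in the thesis are considerably richer.

```python
# Minimal sketch: system partitioning as proper vertex coloring.
# Vertices are applications; an edge means "must not share a partition".
# Application names and the greedy strategy are illustrative assumptions.

applications = {"attitude_control", "telemetry_logger", "payload_manager", "power_monitor"}
separation_constraints = [
    ("attitude_control", "telemetry_logger"),
    ("attitude_control", "payload_manager"),
    ("power_monitor", "telemetry_logger"),
]

# Build adjacency lists from the constraints.
adjacent = {app: set() for app in applications}
for a, b in separation_constraints:
    adjacent[a].add(b)
    adjacent[b].add(a)

# Greedy coloring: each color corresponds to one partition of the deployment model.
partition_of = {}
for app in sorted(applications, key=lambda a: len(adjacent[a]), reverse=True):
    used = {partition_of[n] for n in adjacent[app] if n in partition_of}
    color = 0
    while color in used:
        color += 1
    partition_of[app] = color

print(partition_of)  # e.g. {'attitude_control': 0, 'telemetry_logger': 1, ...}
```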

Relevance:

60.00%

Publisher:

Abstract:

Several promising models have been developed for the digital capture of mobility data, which can be applied in urban, transport and land-use planning. The objective of this work is therefore to develop a methodology that collects mobility information from which origin-destination (OD) and travel-time matrices can be generated, and that also identifies points of interest, modes and frequent travel routes, through the development and implementation of an application for Android mobile devices. Methodology: An application for mobile devices running the Android operating system was produced, based on existing models. The application obtained mobility data from the location sensors (GPS) built into the phones, which were then migrated to a cloud database and post-processed with analysis tools such as KNIME, Python and QuantumGis. The application was tested by 68 volunteer students from the Universidad de Cuenca over 14 days in January 2016. Results: With complete information from 44 participants, OD and travel-time matrices were obtained for different periods of the day, which made it possible to identify variations in the interaction between zones and variations in the number and duration of trips. Transport modes such as walking, cycling and motorized travel were also recognized for a sub-sample (n=6). The POIs Home (91%), Work/Study (74%) and intermediate points (20% of all POIs) were detected, and atypical mobility behaviour was observed. Finally, the routes most frequently used by the participants were compared with the computed theoretical optimal routes, finding that 63.6% of users matched the latter. Conclusions: The proposed method is consistent with previous work, showing comparable confidence levels. The main challenge is the large-scale deployment of the model created for collecting data useful for mobility plans.
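As an illustration of the post-processing step that turns trip records into origin-destination matrices, the following minimal Python sketch aggregates trips into an OD matrix per time period. The zone names, period boundaries and record layout are assumptions for illustration, not the schema actually used in the study.

```python
from collections import defaultdict

# Minimal sketch: build an origin-destination (OD) matrix from trip records.
# Zone names, time periods and the record layout are illustrative assumptions.

trips = [
    # (origin_zone, destination_zone, departure_hour)
    ("Centro", "Campus", 7),
    ("Campus", "Centro", 13),
    ("Centro", "Campus", 8),
    ("Norte", "Campus", 7),
]

def period(hour):
    if 6 <= hour < 10:
        return "morning_peak"
    if 16 <= hour < 20:
        return "evening_peak"
    return "off_peak"

od = defaultdict(int)  # (period, origin, destination) -> trip count
for origin, destination, hour in trips:
    od[(period(hour), origin, destination)] += 1

for key in sorted(od):
    print(key, od[key])
```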

Relevance:

40.00%

Publisher:

Abstract:

Identifying appropriate decision criteria and making optimal decisions in a structured way is a complex process. This paper presents an approach for doing this in the form of a hybrid Quality Function Deployment (QFD) and Cybernetic Analytic Network Process (CANP) model for project manager selection. This involves the use of QFD to translate the owner's project management expectations into selection criteria and the CANP to weight the expectations and selection criteria. The supermatrix approach then prioritises the candidates with respect to the overall decision-making goal. A case study is used to demonstrate the use of the model in selecting a renovation project manager. This involves the development of 18 selection criteria in response to the owner's three main expectations of time, cost and quality.
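The supermatrix step referred to above can be illustrated with a small numerical sketch: a column-stochastic supermatrix is raised to successive powers (here averaged, which handles the cyclic structure of a two-cluster network) until the result stabilises, and the limiting values give the priorities. The 4x4 matrix below, with two criteria and two candidates and no explicit goal node, is a toy assumption and not the case-study data.

```python
import numpy as np

# Minimal sketch of the ANP limiting-priority computation: average successive
# powers of a column-stochastic supermatrix until the result stabilises.
# The matrix is an illustrative toy (two criteria, two candidates).

W = np.array([
    [0.0, 0.0, 0.6, 0.3],   # criterion C1, influenced by candidates A and B
    [0.0, 0.0, 0.4, 0.7],   # criterion C2
    [0.5, 0.3, 0.0, 0.0],   # candidate A, influenced by the criteria
    [0.5, 0.7, 0.0, 0.0],   # candidate B
])  # every column sums to 1

def limit_priorities(W, iterations=200):
    # Cesaro average of the powers W, W^2, ..., W^k.
    power = np.eye(W.shape[0])
    total = np.zeros_like(W)
    for _ in range(iterations):
        power = power @ W
        total += power
    return total / iterations

L = limit_priorities(W)
print(L[:, 0])  # limiting weights of criteria and candidates, read from one column
```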

Relevance:

40.00%

Publisher:

Abstract:

Lifetime calculations for large, dense sensor networks with fixed energy resources, together with the remaining residual energy, have shown that for a constant energy budget the fault rate at the cluster head is invariant with respect to network size when only the network layer is used and there are no MAC losses. Even after increasing the battery capacity of the nodes, the total lifetime does not increase beyond a limit of roughly eight times. Because this is a serious limitation, much research has been carried out at the MAC layer, which can adapt to the specific connectivity, traffic and channel-polling needs of sensor networks. Many MAC protocols allow the channel polling of the radios available to sensor nodes to be controlled; this further reduces communication overhead through idling and sleep scheduling, thereby extending the lifetime of the monitoring application. We address two issues that affect the distributed characteristics and performance of connected MAC nodes: (1) determining the theoretical minimum rate based on joint coding of a correlated data source at the single hop; (2a) estimating cluster-head errors with a Bayesian rule for routing with persistence clustering, when node densities are equal and stored as prior probabilities at the network layer; and (2b) estimating the upper bound on routing errors with passive clustering, where the node densities at the multi-hop MACs are unknown and not stored at the multi-hop nodes a priori. In this paper we evaluate several MAC-based sensor network protocols and study their effect on sensor network lifetime. A renewable-energy MAC routing protocol is designed for the case in which the probabilities of active nodes are not known a priori. From theoretical derivations we show that, for a Bayesian rule with known class densities omega1 and omega2 and expected error P*, the maximum error rate is bounded by P = 2P* for the single hop. Using cross-layer simulation of a large sensor network MAC setup, we study the effect of energy losses and of the error rate on finding node densities sufficient for reliable multi-hop communication when the node densities are unknown. The simulation results show that, even though the lifetime is comparable, the expected Bayesian posterior error is close to, or exceeds, the bound P >= 2P*.
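The single-hop bound quoted above relates the error of a Bayesian rule with known class densities to twice the Bayes error P*. As a rough numerical illustration, the sketch below computes P* for two assumed one-dimensional Gaussian class densities with equal priors and prints the 2P* bound; the densities and priors are assumptions, not parameters from the paper.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch: Bayes error P* for two known one-dimensional Gaussian class
# densities (omega1, omega2) with equal priors, and the 2P* bound quoted above.
# The means, variances and priors are illustrative assumptions.

prior1, prior2 = 0.5, 0.5
density1 = norm(loc=0.0, scale=1.0)   # class omega1
density2 = norm(loc=2.0, scale=1.0)   # class omega2

x = np.linspace(-10.0, 12.0, 200001)
p1 = prior1 * density1.pdf(x)
p2 = prior2 * density2.pdf(x)

# The Bayes rule picks the larger posterior, so the error density is the pointwise minimum.
bayes_error = np.trapz(np.minimum(p1, p2), x)

print(f"P*  = {bayes_error:.4f}")
print(f"2P* = {2 * bayes_error:.4f}  (quoted bound on the single-hop error rate)")
```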

Relevance:

40.00%

Publisher:

Abstract:

It is essential to monitor deteriorating civil engineering structures carefully in order to detect early symptoms of serious failure. A wireless sensor network can be an effective system for monitoring civil engineering structures. Sensors can be deployed quickly, especially in difficult-to-access areas, and the network can be extended without laying additional cables. Since our target is to monitor the deterioration of civil engineering structures, such as cracks in tunnel linings, most of the sensor locations are known, and the sensors are not required to move dynamically. Therefore, we focus on developing a deployment plan for a static network that reduces a cost function such as the initial installation cost or the sum of the communication distances in the network. The key issue in the deployment is the location of the relays that forward sensing data from the sensors to a data collection device called a gateway. In this paper, we propose a relay deployment-planning tool that can be used to design a wireless sensor network for monitoring civil engineering structures. For the planning tool, we formalize the model and implement a local-search-based algorithm to find a quasi-optimal solution. Our solution guarantees two routes from each sensor to a gateway, which can provide higher network reliability. We also show the application of our experimental tool to an actual environment in the London Underground.
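A minimal sketch of the local-search idea follows: starting from a random subset of candidate relay sites, keep swapping a chosen site for an unused one whenever the swap lowers the total sensor-to-nearest-relay distance. The coordinates, the cost function and the single-route simplification are assumptions for illustration; the tool described in the paper additionally guarantees two routes per sensor.

```python
import math
import random

# Minimal sketch of local search for relay placement.
# Coordinates, the cost function and parameters are illustrative assumptions;
# the actual planning tool also enforces two independent routes per sensor.

sensors = [(0, 0), (10, 1), (4, 8), (9, 9), (2, 5)]
candidate_sites = [(1, 1), (5, 5), (8, 8), (3, 7), (9, 2), (6, 1)]
NUM_RELAYS = 2

def cost(relays):
    # Sum over all sensors of the distance to the nearest relay.
    return sum(min(math.dist(s, r) for r in relays) for s in sensors)

random.seed(0)
current = random.sample(candidate_sites, NUM_RELAYS)

improved = True
while improved:
    improved = False
    for i in range(NUM_RELAYS):
        for site in candidate_sites:
            if site in current:
                continue
            trial = current[:i] + [site] + current[i + 1:]
            if cost(trial) < cost(current):
                current, improved = trial, True

print("relays:", current, "cost:", round(cost(current), 2))
```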

Relevance:

40.00%

Publisher:

Abstract:

Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.

Relevance:

40.00%

Publisher:

Abstract:

Purpose – The purpose of this research is to study the perceived impact of some factors on the resource allocation processes of Nigerian universities and to suggest a framework that will help practitioners and academics to understand and improve such processes. Design/methodology/approach – The study adopted an interpretive qualitative approach aimed at an ‘in-depth’ understanding of the resource allocation experiences of key university personnel and their perceived impact of the contextual factors affecting such processes. The analysis of individual narratives from each university established the conditions and factors impacting the resource allocation processes within each institution. Findings – The resource allocation process issues in Nigerian universities may be categorised into people (the challenge of core versus peripheral units, and politics and power); process (the resource allocation processes themselves); and resources (critical financial shortage and the resource-dependence response). The study also shows that resourcing efficiency in Nigerian universities appears to be strongly constrained by rivalry among resource managers. The efficient resources allocation process (ERAP) model is proposed to resolve the identified resourcing deficiencies. Research limitations/implications – The research does not aim to provide generalizable observations but rather an ‘in-depth’ account of perceived factors and their impact on resource allocation processes in Nigerian universities. The study is limited to the internal resource allocation issues within the universities and excludes external funding factors. The resource managers’ responses to the identified factors may affect their internal resourcing efficiency. Further research with larger empirical samples is required to obtain more widely applicable results and implications for all universities. Originality/value – This study contributes a fresh literature-based framework for resource allocation processes focused on ‘people’, ‘process’ and ‘resources’. A middle-range theory triangulation is also developed to support a better understanding of resourcing process management. The study will be of interest to university managers and policy makers.

Relevance:

30.00%

Publisher:

Abstract:

Purpose – The purpose of this study is to examine and extend Noer’s theoretical model of the new employment relationship. Design/methodology/approach – Case study methodology is used to scrutinise the model. The results of a literature-based survey on the elements underpinning the five values in the model are analysed from dual perspectives of individual and organization using a multi-source assessment instrument. A schema is developed to guide and inform a series of focus group discussions from an analysis of the survey data. Using content analysis, the transcripts from the focus group discussions are evaluated using the model’s values and their elements. The transcripts are also reviewed for implicit themes. The case studied is Flight Centre Limited, an Australian-based international retail travel company. Findings – Using this approach, some elements of the five values in Noer’s model are identified as characteristic of the company’s psychological contract. Specifically, to some extent, the model’s values of flexible deployment, customer focus, performance focus, project-based work, and human spirit and work can be applied in this case. A further analysis of the transcripts validates three additional values in the psychological contract literature: commitment; learning and development; and open information. As a result of the findings, Noer’s model is extended to eight values. Research limitations/implications – The study offers a research-based model of the new employment relationship. Since generalisations from the case study findings cannot be applied directly to other settings, the opportunity to test this model in a variety of contexts is open to other researchers. Originality/value – In practice, the methodology used is a unique process for benchmarking the psychological contract. The process may be applied in other business settings. By doing so, organization development professionals have a consulting framework for comparing an organization’s dominant psychological contract with the extended model presented here.

Relevance:

30.00%

Publisher:

Abstract:

Minimizing complexity of group key exchange (GKE) protocols is an important milestone towards their practical deployment. An interesting approach to achieve this goal is to simplify the design of GKE protocols by using generic building blocks. In this paper we investigate the possibility of founding GKE protocols based on a primitive called multi key encapsulation mechanism (mKEM) and describe advantages and limitations of this approach. In particular, we show how to design a one-round GKE protocol which satisfies the classical requirement of authenticated key exchange (AKE) security, yet without forward secrecy. As a result, we obtain the first one-round GKE protocol secure in the standard model. We also conduct our analysis using recent formal models that take into account both outsider and insider attacks as well as the notion of key compromise impersonation resilience (KCIR). In contrast to previous models we show how to model both outsider and insider KCIR within the definition of mutual authentication. Our analysis additionally implies that the insider security compiler by Katz and Shin from ACM CCS 2005 can be used to achieve more than what is shown in the original work, namely both outsider and insider KCIR.
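To make the one-round structure concrete, the sketch below shows an mKEM-style flow in which the initiator encapsulates a single session key to every member's public key and broadcasts the result in one message, so that each member can decapsulate the same group key. The X25519-plus-XOR construction, the `cryptography` package dependency and all names are illustrative assumptions; the sketch is deliberately simplified (no authentication, no security analysis) and is not the mKEM-based protocol studied in the paper.

```python
import os
import hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Toy sketch of a one-round group key exchange built on multi-recipient
# encapsulation: one broadcast carries an encapsulation of the same session
# key for every member. Simplified and NOT secure; for illustration only.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def derive_pad(shared_secret):
    return hashlib.sha256(shared_secret).digest()

# Each member holds a long-term X25519 key pair.
members = {name: X25519PrivateKey.generate() for name in ["alice", "bob", "carol"]}

# Initiator: pick the group session key and an ephemeral key, then build one
# ciphertext per member; all of this fits into the single broadcast round.
session_key = os.urandom(32)
ephemeral = X25519PrivateKey.generate()
broadcast = {
    "ephemeral_pub": ephemeral.public_key(),
    "ciphertexts": {
        name: xor(session_key, derive_pad(ephemeral.exchange(priv.public_key())))
        for name, priv in members.items()
    },
}

# Each member decapsulates its share and recovers the same group key.
for name, priv in members.items():
    pad = derive_pad(priv.exchange(broadcast["ephemeral_pub"]))
    assert xor(broadcast["ciphertexts"][name], pad) == session_key
print("all members derived the same group key")
```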

Relevance:

30.00%

Publisher:

Abstract:

Uninhabited aerial vehicles (UAVs) are a cutting-edge technology that is at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered to be one of the most important issues to be addressed, given its safety-critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm that is capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm. The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts for target heading angle estimation. In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit the temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy related concepts. The filter design process is posed as a mini-max optimisation problem based on a joint RER cost criterion.
We provide proof that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, and this in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operating conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400m to 900m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning ahead of impact that approaches the 12.5 second response time recommended for human pilots. Furthermore, readily available graphic processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations. Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue that is currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple HMM filtering approach and a novel RER-based multiple filter design process. The utility of our multiple HMM filtering approach and RER concepts, however, extends beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
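To illustrate the kind of morphological pre-processing referred to above, the sketch below applies a close-minus-open (CMO) style enhancement that suppresses slowly varying background while preserving small bright or dark features. The synthetic frame, the structuring-element size and the choice of CMO are assumptions for illustration and are not necessarily the candidate filters evaluated in the thesis.

```python
import numpy as np
from scipy import ndimage

# Minimal sketch of morphological pre-processing for dim target enhancement:
# a close-minus-open (CMO) filter suppresses slowly varying background while
# keeping small-scale features. Frame and parameters are illustrative assumptions.

rng = np.random.default_rng(0)
frame = rng.normal(loc=120.0, scale=2.0, size=(64, 64))   # noisy sky background
frame[30, 40] += 8.0                                      # a dim point target

size = (5, 5)  # structuring element, larger than the expected target extent
closed = ndimage.grey_closing(frame, size=size)
opened = ndimage.grey_opening(frame, size=size)
cmo = closed - opened  # responds strongly near small-scale structure

peak = np.unravel_index(np.argmax(cmo), cmo.shape)
print("strongest response at", peak)  # expected near (30, 40)
```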