809 results for personal information management model


Relevance: 100.00%

Abstract:

This work addresses the organization and content perspectives of the Enterprise Content Management (ECM) framework. A case study at the Federal University of Rio Grande do Norte (UFRN) used an ECM model to analyse the information management provided by its three main administrative systems: the Integrated System for the Management of Academic Activities (SIGAA), the Integrated System for Property, Administration and Contracts (SIPAC), and the Integrated System for Administration and Human Resources (SIGRH). A case study protocol was designed to give greater reliability to the research process. Four propositions were examined in order to reach the specific objectives of identifying and evaluating ECM components from the UFRN perspective. The preliminary phase provided the guidelines for data collection. In total, 75 individuals were interviewed. Interviews with the four managers directly involved in the design of the systems were recorded (average duration of 90 minutes); the remaining 70 individuals, including teachers, administrative-technical staff and students, were approached at random in UFRN's units. The results showed the presence of many ECM elements in the management of UFRN's administrative information. The technological component with the strongest presence was "management of web content / collaboration", but initiatives related to other components (e.g. email and document management) were also found and are being continuously improved. The assessment used eQual 4.0 to examine the effectiveness of the applications along three factors: usability, information quality and service quality. In general, the quality offered by the systems was very good and goes hand in hand with the benefits obtained from adopting an ECM strategy across the whole institution.
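As a rough illustration of how scores along the three eQual factors might be aggregated, here is a minimal sketch; the items, the 1-7 scale and the respondents are assumptions for illustration, not the instrument or data actually used in the study.

```python
# Minimal sketch: averaging eQual-style questionnaire items into the three
# factors named in the abstract. Item groupings and scores are hypothetical.
from statistics import mean

responses = [  # one dict per respondent; hypothetical 1-7 Likert scores
    {"usability": [6, 5, 6], "information_quality": [5, 6, 5], "service": [4, 5, 5]},
    {"usability": [7, 6, 6], "information_quality": [6, 6, 7], "service": [5, 6, 5]},
]

def factor_scores(all_responses):
    """Average each factor's items per respondent, then across respondents."""
    factors = all_responses[0].keys()
    return {f: round(mean(mean(r[f]) for r in all_responses), 2) for f in factors}

print(factor_scores(responses))
# {'usability': 6.0, 'information_quality': 5.83, 'service': 5.0}
```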

Relevance: 100.00%

Abstract:

Overabundance of white-tailed deer (Odocoileus virginianus) continues to challenge wildlife professionals nationwide, especially in urban settings. Moreover, wildlife managers often lack the site-specific information on deer movements, survival, and reproduction that is critical for management planning. We conducted radio-telemetry research concurrent with deer culling in forest preserves in northeastern Illinois and used the empirical data to construct predictive population models. We culled 2,826 deer from 16 forest preserves in DuPage County (1992-1999), including 1,736 from the 10-km² Waterfall Glen Forest Preserve. We also radio-marked 129 deer from 8 preserves in DuPage and adjacent Cook County (1994-1998). Recruitment was inversely associated with deer density, suggesting a classic density-dependent response. Female deer were philopatric, and 20% of adult males dispersed. Survival was high for all sex and age classes, and deer-vehicle collisions accounted for >55% of known mortalities. Early attempts to apply population models based upon data from other areas to deer at Waterfall Glen Forest Preserve were not useful. The subsequent quantification of the density-dependent recruitment response and the use of other empirical data strengthened the predictive capability of the models. Our experience illustrates the importance of understanding the demographics of overabundant deer in order to set realistic objectives and make sound management decisions.
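As an illustration of the kind of density-dependent projection described, here is a minimal sketch; the linear recruitment-density relationship, survival rate and culling figures are assumptions for illustration, not the parameters estimated in the study.

```python
# Minimal sketch of a density-dependent deer population projection.
# All parameter values are illustrative, not the study's fitted estimates.

def project(n0, years, survival=0.85, cull=150,
            max_recruits=0.9, density_slope=0.001):
    """Project abundance with per-female recruitment that falls as density rises."""
    n = n0
    trajectory = [round(n, 1)]
    for _ in range(years):
        recruits_per_female = max(0.0, max_recruits - density_slope * n)
        births = 0.5 * n * recruits_per_female      # assume half the herd is female
        n = max(0.0, survival * n + births - cull)  # survival, recruitment, culling
        trajectory.append(round(n, 1))
    return trajectory

print(project(n0=600, years=5))
```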

Relevance: 100.00%

Abstract:

Background: Recent advances in medical and biological technology have stimulated the development of new testing systems that provide huge and varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for the information processing systems of research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. Results: This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent, validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: 1) process scalability, achieved through the relational database implementation, and 2) process correctness, ensured using process algebra. Furthermore, the software allows end users to define genetic tests without requiring any knowledge of business process notation or process algebra. Conclusions: This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We demonstrate the feasibility and show the usability benefits of a rigorous approach that is able to specify, validate, and perform genetic testing through easy-to-use end-user interfaces.
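As an illustration of what ACP-style control-flow specification of a genetic test can look like, here is a minimal sketch; the operators, step names and tiny interpreter are assumptions for illustration and do not reflect the actual CEGH schema or its algebraic encoding.

```python
# Minimal sketch: sequential and parallel composition of genetic-testing steps
# in the spirit of ACP. Step names and the interpreter are illustrative only.

def seq(*steps):   # sequential composition (ACP ".")
    return ("seq", steps)

def par(*steps):   # parallel composition (ACP merge "||")
    return ("par", steps)

def run(process, execute):
    kind, parts = process if isinstance(process, tuple) else ("atom", process)
    if kind == "atom":
        execute(parts)
    else:  # "seq" and "par"; branch order in "par" is unconstrained in ACP
        for part in parts:
            run(part, execute)

test = seq("extract_dna",
           par("sequence_exon_1", "sequence_exon_2"),
           "validate_results",
           "report_to_patient")
run(test, execute=print)
```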

Relevance: 100.00%

Abstract:

The need to effectively manage the documentation covering the entire production process, from the concept phase right through to market release, is a key issue in the creation of a successful and highly competitive product. For almost forty years the most commonly used strategies to achieve this have followed Product Lifecycle Management (PLM) guidelines. Translated into information management systems at the end of the 1990s, this methodology is now widely used by companies operating in many different sectors all over the world. PLM systems and editor programs are the two principal types of software application companies use for process automation. Editor programs allow the information related to the production chain to be stored in documents, while the PLM system stores and shares this information so that it can be used within the company and made available to partners. Several software tools that capture documents and information and store them automatically in the PLM system have been developed in recent years. One of them is the "DirectPLM" application, developed by the Italian company Focus PLM; it is designed to ensure interoperability between many editors and the Aras Innovator PLM system. In this dissertation we present "DirectPLM2", a new version of the DirectPLM application, designed and developed as a prototype during an internship at Focus PLM. Its new implementation separates the abstract business logic from the concrete command implementation, which was previously strongly tied to Aras Innovator. Thanks to this new design, Focus PLM can easily develop different versions of DirectPLM2, each devised for a specific PLM system: the company can focus its development effort on the specific set of software components that provide the specialized functions for interacting with that particular PLM system. This allows a shorter time to market and gives the company a significant competitive advantage.
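A hedged sketch of the kind of separation described, with the abstract business logic written against an interface and a PLM-specific adapter behind it; the class and method names are assumptions for illustration, not the actual DirectPLM2 API.

```python
# Minimal sketch: business logic decoupled from PLM-specific commands via an
# adapter. All names are illustrative; they are not the DirectPLM2 code base.
from abc import ABC, abstractmethod

class PlmConnector(ABC):
    """Abstract commands the business logic relies on."""
    @abstractmethod
    def store_document(self, name: str, payload: bytes) -> str: ...
    @abstractmethod
    def link_to_part(self, document_id: str, part_number: str) -> None: ...

class ArasInnovatorConnector(PlmConnector):
    """One concrete adapter; other PLM systems would get their own adapters."""
    def store_document(self, name, payload):
        # the real adapter would call the Aras Innovator services here
        return f"aras-doc-{name}"
    def link_to_part(self, document_id, part_number):
        print(f"linked {document_id} to part {part_number}")

def publish_drawing(connector: PlmConnector, name: str, payload: bytes, part: str):
    """Business logic written once, independent of the target PLM system."""
    doc_id = connector.store_document(name, payload)
    connector.link_to_part(doc_id, part)

publish_drawing(ArasInnovatorConnector(), "bracket.dwg", b"...", "P-1042")
```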

Relevance: 100.00%

Abstract:

The objective of this research is to investigate the consequences of sharing or using information generated in one phase of a project in subsequent life cycle phases. Sometimes the assumptions supporting the information change, and at other times the context within which the information was created changes in a way that causes the information to become invalid. Often these inconsistencies are not discovered until the damage has occurred. This study builds on previous research that proposed a framework based on the metaphor of 'ecosystems' to model such inconsistencies in the 'supply chain' of life cycle information (Brokaw and Mukherjee, 2012). Such inconsistencies often result in litigation; therefore, this paper studies a set of legal cases that arose from inconsistencies in life cycle information, within the ecosystems framework. For each case, the type of errant information, the creator and user of the information and their relationship, and the times at which the information was created and used in the project life cycle are investigated, in order to assess the causes of failures in precise and accurate information flow as well as the impact of such failures in later stages of the project. The analysis shows that misleading information is mostly due to a lack of collaboration. Moreover, in all the studied cases, a lack of compliance checking, imprecise data and insufficient clarification hinder the accurate and smooth flow of information. The paper presents findings regarding bottlenecks in the information flow process during the design, construction and post-construction phases. It also highlights the role of collaboration, information integration and information management during the project life cycle, and presents a baseline for improving the information supply chain throughout the life cycle of the project.

Relevance: 100.00%

Abstract:

A credit-rationing model similar to that of Stiglitz and Weiss [1981] is combined with the information-externality model of Lang and Nakamura [1993] to examine the properties of mortgage markets characterized by both adverse selection and information externalities. In a credit-rationing model, additional information increases lenders' ability to distinguish risks, which leads to an increased supply of credit. According to Lang and Nakamura, a larger supply of credit leads to additional market activity and, therefore, to greater information. The combination of these two propositions leads to a general equilibrium model, whose properties this paper describes. The paper provides another sufficient condition under which credit rationing falls with information: external information improves the accuracy of equity-risk assessments of properties, which reduces credit rationing. Contrary to intuition, this increased accuracy raises the mortgage interest rate. This helps clarify the trade-offs between reduced credit rationing and the quality of the applicant pool.
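The feedback loop the abstract describes can be written in stylized form; the notation below is an illustration of the argument, not the paper's actual system of equations.

```latex
% Stylized information-credit feedback (illustrative only).
% I: information available to lenders, S: supply of credit, A: market activity.
\begin{align*}
  S &= S(I), & \frac{\partial S}{\partial I} &> 0
      && \text{(more information eases credit rationing)}\\
  A &= A(S), & \frac{\partial A}{\partial S} &> 0
      && \text{(more credit generates more market activity)}\\
  I &= I(A), & \frac{\partial I}{\partial A} &> 0
      && \text{(more activity generates more information)}
\end{align*}
% A fixed point of the composition I = I(A(S(I))) characterizes the
% general-equilibrium levels of information, credit supply, and activity.
```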

Relevance: 100.00%

Abstract:

Background. Childhood immunization programs have dramatically reduced the morbidity and mortality associated with vaccine-preventable diseases. Proper documentation of administered immunizations is essential to prevent duplicate immunization of children. To help improve documentation, immunization information systems (IISs) have been developed. IISs are comprehensive repositories of immunization information for children residing within a geographic region. The two models for participation in an IIS are voluntary inclusion ("opt-in") and voluntary exclusion ("opt-out"). In an opt-in system, consent must be obtained for each participant; conversely, in an opt-out IIS, all children are included unless procedures to exclude the child are completed. Consent requirements for participation vary by state; the Texas IIS, ImmTrac, is an opt-in system. Objectives. The specific objectives are to: (1) evaluate the variance in the time and costs associated with collecting ImmTrac consent at public and private birthing hospitals in the Greater Houston area; (2) estimate the total costs associated with collecting ImmTrac consent at selected public and private birthing hospitals in the Greater Houston area; and (3) describe the alternative opt-out process for collecting ImmTrac consent at birth and discuss the associated cost savings relative to an opt-in system. Methods. Existing time-motion studies (n=281) conducted between October 2006 and August 2007 at 8 birthing hospitals in the Greater Houston area were used to assess the time and costs associated with obtaining ImmTrac consent at birth. All data analyzed are de-identified and contain no personal information. Variations in time and costs at each location were assessed, and total costs per child and costs per year were estimated. The cost of an alternative opt-out system was also calculated. Results. The median time required by birth registrars to complete consent procedures varied from 72 to 285 seconds per child. The annual costs associated with obtaining consent for 388,285 newborns in ImmTrac's opt-in consent process were estimated at $702,000. The corresponding costs of the proposed opt-out system were estimated at $194,000 per year. Conclusions. Substantial variation in the time and costs associated with completing ImmTrac consent procedures was observed. Changing to an opt-out system for participation could represent significant cost savings.
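A rough back-of-the-envelope check on the reported totals, assuming the estimates scale linearly with the number of newborns; the per-child figures below are derived for illustration and are not reported in the abstract.

```python
# Per-child cost check derived from the abstract's annual totals (illustrative).
newborns = 388_285
opt_in_total = 702_000     # USD per year, estimated for the opt-in process
opt_out_total = 194_000    # USD per year, estimated for the proposed opt-out

print(f"opt-in:  ${opt_in_total / newborns:.2f} per child")    # ~$1.81
print(f"opt-out: ${opt_out_total / newborns:.2f} per child")   # ~$0.50
print(f"estimated annual savings: ${opt_in_total - opt_out_total:,}")  # $508,000
```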

Relevance: 100.00%

Abstract:

Technological progress has profoundly changed the way personal data are collected, accessed and used. Those data make possible an unprecedented customization of advertising, which, in turn, is the business model adopted by many of the most successful Internet companies. Yet measuring the value being generated is still a complex task. This paper presents a review of the literature on the subject. It finds that, up to now, the economic analysis of personal information has been conducted from a qualitative perspective mainly linked to privacy issues. A better understanding of a quantitative approach to this topic is urgently needed.

Relevance: 100.00%

Abstract:

Handling security intrusions in large systems is a problem because current IDS-based approaches lack scalability. This paper describes the RECLAMO project, in which an architecture for an Automated Intrusion Response System (AIRS) is being proposed. This system will infer the most appropriate response to a given attack, taking into account the attack type, context information, and the trust and reputation of the reporting IDSs. RECLAMO proposes a novel approach: diverting the attack to a specific honeynet that has been dynamically built based on the attack information. Among all the components forming the RECLAMO architecture, this paper focuses mainly on defining a trust and reputation management model, which is essential to recognize whether IDSs are behaving honestly so that their alerts can be accepted as true. Experimental results confirm that our model helps to encourage or discourage the launch of the automatic reaction process.
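A hedged sketch of one common way such a per-IDS trust score can be maintained, using an exponentially weighted update over verified alert outcomes; the update rule and acceptance threshold are assumptions for illustration, not the model actually defined in RECLAMO.

```python
# Minimal sketch: trust score per IDS updated from verified alert outcomes.
# The EWMA-style rule and the acceptance threshold are illustrative only.

class IdsReputation:
    def __init__(self, initial_trust=0.5, learning_rate=0.2, threshold=0.6):
        self.trust = initial_trust
        self.learning_rate = learning_rate
        self.threshold = threshold

    def update(self, alert_was_correct: bool):
        """Move trust toward 1 after correct alerts, toward 0 after false ones."""
        target = 1.0 if alert_was_correct else 0.0
        self.trust += self.learning_rate * (target - self.trust)

    def accept_alerts(self) -> bool:
        return self.trust >= self.threshold

ids = IdsReputation()
for outcome in (True, True, False, True):
    ids.update(outcome)
print(round(ids.trust, 3), ids.accept_alerts())   # 0.635 True
```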

Relevance: 100.00%

Abstract:

Information technology management is increasingly important in a fully digitized world, where the capacity to respond to change can determine the future of a company, and it is increasingly evident that the traditional management models used by most companies cannot, on their own, meet these new needs. Even having identified this area for improvement, many companies are reluctant to address the change, mainly because of the disruptive internal transformation it implies. To ease this transformation, this document proposes a controlled transition model that allows large companies to incorporate new agile alternatives and tools gradually, while ensuring that the change process is safe and effective. By modifying the project life cycle within the company, the new agile management models are introduced into the selected areas, teams or domains, allowing a gradual and controlled transition and enabling detailed analysis, especially in the early phases of the transformation. Once the area or domain to be transformed has been selected, an analysis is performed at the project portfolio level to identify those projects that meet a set of conditions allowing them to be managed with agile models. To this end, a decision matrix is proposed with the main variables to take into account when making the decision. Once the management model has been selected using the decision matrix and agreed with the stakeholders, a set of associated tools and metrics is proposed so that agile project management provides complete, detailed and regular visibility of progress, risks, contingency plans and problems, with appropriate alerts and escalations. In addition to these tools and metrics, the necessary modifications to the usual contract types are discussed and a new contract model is proposed: the Agile Contract. The main difference between this new contract model and traditional ones is that, like the agile methodologies themselves, it is executed in segments or iterations. In short, the objective of this document is to provide a mechanism that facilitates the adoption of new agile management models in large organizations through a controlled transition, with tools and metrics adapted to give full visibility of projects throughout the project life cycle.
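A hedged sketch of how such a portfolio-level decision matrix might be scored; the criteria, weights and cut-off below are assumptions for illustration, not the matrix defined in the document.

```python
# Minimal sketch: weighted decision matrix for selecting projects that are
# candidates for agile management. Criteria, weights and threshold are
# illustrative assumptions.

CRITERIA_WEIGHTS = {               # higher score favours an agile approach
    "requirements_volatility": 0.35,
    "team_colocation": 0.20,
    "delivery_frequency": 0.25,
    "stakeholder_availability": 0.20,
}
AGILE_THRESHOLD = 3.0              # on a 1-5 scale

def agile_fit(project_scores: dict) -> float:
    """Weighted average of the 1-5 scores a project gets on each criterion."""
    return sum(CRITERIA_WEIGHTS[c] * project_scores[c] for c in CRITERIA_WEIGHTS)

portfolio = {
    "billing_revamp": {"requirements_volatility": 4, "team_colocation": 3,
                       "delivery_frequency": 5, "stakeholder_availability": 4},
    "mainframe_migration": {"requirements_volatility": 2, "team_colocation": 2,
                            "delivery_frequency": 1, "stakeholder_availability": 3},
}

for name, scores in portfolio.items():
    fit = agile_fit(scores)
    print(f"{name}: {fit:.2f} -> {'agile' if fit >= AGILE_THRESHOLD else 'traditional'}")
```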

Relevance: 100.00%

Abstract:

The building sector in Spain and Europe has experienced a significant decline in recent years as a result of the financial crisis that began in 2007. This drop has been accompanied by a low penetration of information and communication technologies in inter-organizational business processes. The market contraction is causing a slowdown in the building sector, where only flexible small and medium enterprises (SMEs) survive, thanks to specialization and innovation in services, which allow them to face new market demands. Inter-organizational information systems (IOISs) support innovation in services and are thus a strategic tool for SMEs seeking competitive advantage. Because of the inherent complexity of IOIS adoption, this research extends Kurnia and Johnston's (2000) theoretical model of IOIS adoption with an empirical model of IOIS characterization. The resulting model identifies the factors influencing IOIS adoption by SMEs in the building sector, in order to promote further service innovation for competitive and collaborative advantage. An empirical longitudinal study over six consecutive years, using data from Spanish SMEs in the building sector, validates the model with the partial least squares technique and an analysis of temporal stability. The main findings of this research are the four ways in which an IOIS might contribute to service innovation in the building sector, namely: a) improving client interfaces and the link between service providers and end users; b) defining a specific market where SMEs can develop new service concepts; c) enhancing the service delivery system in traditional customer-supplier relationships; and d) introducing information and communication technologies and tools to improve information management.

Relevance: 100.00%

Abstract:

Information society services have transformed most of our daily activities and offer unprecedented opportunities, with characteristics such as ubiquitous access, permanent availability, device independence, multimodality and free-of-charge services, among others. However, the benefits that come to mind when reflecting on these services have as their counterpart a series of less obvious risks and threats, because such services feed on and deal with personal data, which raises concerns about people's privacy. Nowadays, people acting as users of online services constantly generate digital data across different providers. These data reflect part of their intimacy, individual characteristics, preferences, interests, social relationships, consumption habits, and so on; more controversially, all this information is held by different providers that can use it beyond the user's needs and control. Personal data and, in particular, the knowledge about users that can be extracted from them (user models) have become a new economic asset for service providers. These resources can be used to offer user-centric services based, for example, on content recommendation, product personalization or behaviour prediction, which allows providers to connect with users, retain and engage them and, ultimately, build the loyalty needed to guarantee the success of a business model. However, the same resources can also be used to establish business models that go beyond their individual processing and application by a single provider and are based on trading and sharing them with other entities. From this perspective, users lack control over the data that refer to them, since this depends on the will of, and the conditions imposed by, the service providers; as a result, users often face the dilemma of either handing over their personal data or forgoing the services offered. The public sector tries to protect users with initiatives and legislation that safeguard privacy and increase control over personal data, while at the same time fostering the economic development driven by these service providers. In this context, this PhD dissertation proposes an architecture and reference model for a user-centric personal data ecosystem that promotes the creation, sharing and use of personal data and user models among different providers, while giving users the tools to control the transfer and use of their personal resources and, where applicable, to obtain incentives or economic compensation. The original contributions of the thesis are the specification and design of an architecture supported by a distributed user modelling process defined in the course of this research. This process leverages the resources that different entities (data sources) offer in order to generate enriched user models that meet the specific needs of third parties, while considering user participation and control over their personal resources (data and user models). This has required identifying and characterizing the data sources with the potential to supply the ecosystem, determining patterns for generating user models from distributed and heterogeneous personal data, and establishing an identity and privacy management infrastructure that allows users to express their preferences and interests regarding the use and sharing of their personal resources. In addition, a reference business model has been defined that underpins the research and has been instantiated in two main application domains: social network advertising and new financial services. Finally, the contributions of this thesis have been validated in the context of several applied industrial research projects and in final-year projects that the author has supervised or collaborated on. The results have led to various research outcomes, including two patents in exploitation, a publication in a journal with an impact factor and several papers at relevant international conferences, some of which have received awards from different institutions and at the conferences where they were presented.
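A hedged sketch of the kind of user-controlled sharing policy such an ecosystem implies; the attribute names, purposes and policy check are assumptions for illustration, not the identity and privacy infrastructure specified in the dissertation.

```python
# Minimal sketch: user-controlled sharing policy check for personal data.
# Attribute names, purposes and the policy structure are illustrative only.
from dataclasses import dataclass, field

@dataclass
class SharingPolicy:
    user_id: str
    allowed: dict = field(default_factory=dict)   # attribute -> consented purposes

    def permits(self, attribute: str, purpose: str) -> bool:
        return purpose in self.allowed.get(attribute, set())

policy = SharingPolicy(
    user_id="u42",
    allowed={"interests": {"advertising", "recommendation"},
             "location": {"recommendation"}},
)

# A provider requesting data states its purpose; the ecosystem only releases
# (or builds a user model from) attributes the policy permits.
for attribute, purpose in [("interests", "advertising"),
                           ("location", "advertising")]:
    print(attribute, purpose, "->", policy.permits(attribute, purpose))
```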

Relevance: 100.00%

Abstract:

Agile development methodologies have risen sharply in popularity in industrial settings in recent years because of the speed and reliability of the development processes they propose. The DevOps philosophy, and specifically the methodologies derived from it such as Continuous Delivery and Continuous Deployment, promote fully automated management of the application life cycle, from the source code to the applications running in production environments. Automation is seen as a means to produce repeatable, reliable and fast processes. However, not all parts of the Continuous methodologies are fully automated. In particular, managing the configuration of runtime parameters is a problem whose impact has grown with the elasticity and scalability provided by cloud computing technologies. Most current deployment tools can automate the deployment of runtime parameter configuration, but they offer no support for setting those parameters or validating the files they deploy, mainly because of the wide range of configuration options and the fact that the value of many of those parameters is set according to preferences expressed by the user. This can make it seem that any solution to the problem must be tailored to a specific application rather than offering a general solution. To address this problem, I propose a configuration model that can be inferred from existing configuration instances and that can reflect user preferences, to be used to ease configuration processes. The configuration model can serve as the basis of an interactive configuration process capable of guiding a human operator through the configuration of an application for deployment in a given environment, or of detecting configuration changes automatically and producing a valid configuration that accommodates those changes. In addition, the configuration model should be managed like any other software artefact and incorporated into standard management practices. I therefore also propose a service management model that includes runtime parameter configuration information and that is able to describe and manage current architectural approaches such as microservice architectures.
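A hedged sketch of what inferring a simple configuration model from existing instances might look like; the parameter names and the frequency-based inference are assumptions for illustration, not the dissertation's actual model.

```python
# Minimal sketch: infer a configuration model from existing instances by
# collecting observed values per parameter and suggesting the most frequent
# one as a default. Parameter names are illustrative assumptions.
from collections import Counter, defaultdict

existing_configs = [
    {"db.pool_size": "20", "log.level": "INFO",  "cache.ttl": "300"},
    {"db.pool_size": "20", "log.level": "DEBUG", "cache.ttl": "300"},
    {"db.pool_size": "50", "log.level": "INFO",  "cache.ttl": "600"},
]

def infer_model(configs):
    observed = defaultdict(Counter)
    for cfg in configs:
        for key, value in cfg.items():
            observed[key][value] += 1
    # model: values seen so far plus the most common value as a default
    return {key: {"seen": sorted(counts), "default": counts.most_common(1)[0][0]}
            for key, counts in observed.items()}

def validate(cfg, model):
    """Flag parameters whose value was never observed before (possible mistake)."""
    return [k for k, v in cfg.items() if v not in model.get(k, {}).get("seen", [])]

model = infer_model(existing_configs)
print(model["log.level"])                          # {'seen': ['DEBUG', 'INFO'], 'default': 'INFO'}
print(validate({"log.level": "VERBOSE"}, model))   # ['log.level']
```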

Relevance: 100.00%

Abstract:

Organizations known as socio-sports clubs maintain a characteristic management model: management carried out by volunteers and supported by professional managers. The literature indicates that the organizational structure of sports entities is peculiar, especially in football clubs, where a boundary persists between the volunteer managers, who represent the executive and legislative powers, and the professional managers, who control and execute the financial and activity planning. Studies focused on socio-sports clubs, however, are scarce. The objective of this study was to identify, describe and compare aspects of the management process of the sports area of socio-sports clubs and to analyse them in the light of management theories and models. The research followed a qualitative approach, with a field study carried out at six socio-sports clubs in the city of São Paulo. Two instruments were built and applied: a questionnaire and a semi-structured interview with the sports managers of the entities. The information obtained was analysed comparatively across the entities. The clubs were found to use traditional management models and organizational designs. Planning in these organizations is based strictly on the annual budget; there is no multi-year or strategic planning. Decision-making rests on the personal experience of the volunteer manager, supported by the experience of the professional manager, and no refined decision-making techniques were found. The most important decisions regarding the administration of the club follow a ritual of concern with responsibilities. Human resources are selected by the professional manager with the approval of the volunteer manager, maintaining a coherent hiring policy aimed at meeting the club's demands. It is concluded that the clubs studied show few departures from traditional administration, maintain an organizational structure of their own, and that decision-making processes in the sports area are strongly tied to financial planning.

Relevance: 100.00%

Abstract:

The CancerGrid consortium is developing open-standards cancer informatics to address the challenges posed by modern cancer clinical trials. This paper presents the service-oriented software paradigm implemented in CancerGrid to derive clinical trial information management systems for collaborative cancer research across multiple institutions. Our proposal is founded on a combination of a clinical trial (meta)model and WSRF (Web Services Resource Framework), and is currently being evaluated for use in early phase trials. Although primarily targeted at cancer research, our approach is readily applicable to other areas for which a similar information model is available.
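A hedged sketch of the general idea of deriving trial-specific artefacts from a common (meta)model; the classes, fields and plain-text rendering are assumptions for illustration, not the CancerGrid metamodel or its WSRF interfaces.

```python
# Minimal sketch: deriving a data-capture form from a trial (meta)model.
# Classes, fields and the rendering are illustrative assumptions only.
from dataclasses import dataclass
from typing import List

@dataclass
class DataElement:            # one observation defined in the (meta)model
    name: str
    datatype: str             # e.g. "integer", "date", "coded"

@dataclass
class TrialModel:
    trial_id: str
    elements: List[DataElement]

    def case_report_form(self) -> str:
        """Derive a plain-text case report form from the model."""
        lines = [f"Trial {self.trial_id} - Case Report Form"]
        lines += [f"  [{e.datatype:^7}] {e.name}" for e in self.elements]
        return "\n".join(lines)

trial = TrialModel("CG-001", [DataElement("date_of_randomisation", "date"),
                              DataElement("tumour_stage", "coded"),
                              DataElement("haemoglobin_g_dl", "integer")])
print(trial.case_report_form())
```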