940 results for system dynamics performance
Abstract:
Electoral systems in the strict sense, in addition to their technical effects, have collateral effects that only become visibly detectable after three or four elections, which means that a reasonable observation period for their analysis and evaluation should not be shorter than a decade. For this reason, the Colombian political-electoral scene of recent years constitutes an academic laboratory without precedent in our history, all the more so because, through Legislative Act 01 of 2003, Congress managed to approve its political reform amid an atmosphere of tensions and counterweights between the Legislative and the Executive, each seeking in its own way to structurally reform the Political Constitution of Colombia, particularly with respect to how power is obtained, retained, and exercised. Given this juncture of implementation and adaptation of the reform, in the strictly electoral sphere, the Observatorio de Procesos Electorales (OPE) has undertaken the task of monitoring and systematizing the relevant information so as to analyse electoral systems in their technical effects, observe their collateral effects over the long term, and assess their real impact on political representation, on the dynamics of the political system, and on the degree of governability. This booklet contextualizes the background of the Senate election and presents preliminary results of the monitoring and systematization of information on the immediate impact of the reform on the 2006 election.
Abstract:
We present the sensitivity analysis of a brand perception and marketing investment adjustment model developed in the Simulation Laboratory of the Universidad del Rosario. This undergraduate thesis consists of an introduction to sensitivity analysis and its complement, uncertainty analysis. Both analyses are then illustrated on a simple application example of the model, following exhaustively and rigorously the steps described in the first part. This is followed by a discussion of the problem of measuring magnitudes, which proves to be the most complex aspect of applying the model in a practical context, and finally conclusions are drawn from the results of the analyses.
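As a hedged illustration of the kind of sensitivity analysis described above, the sketch below perturbs each parameter of a toy brand-perception model one at a time and reports how the output responds. The model, its parameters (`adoption_rate`, `churn_rate`, `marketing_budget`), and the simulation horizon are hypothetical stand-ins, not the model developed at the Rosario laboratory.

```python
# One-at-a-time (OAT) sensitivity analysis on a toy brand-perception model.
# The model and its parameters are illustrative placeholders, not the
# actual model developed at the Universidad del Rosario.

def brand_perception(adoption_rate, churn_rate, marketing_budget, steps=52):
    """Simulate perceived brand value over `steps` weeks (toy dynamics)."""
    perception = 10.0
    for _ in range(steps):
        inflow = adoption_rate * marketing_budget
        outflow = churn_rate * perception
        perception += inflow - outflow
    return perception

baseline = {"adoption_rate": 0.05, "churn_rate": 0.10, "marketing_budget": 100.0}
base_output = brand_perception(**baseline)

# Perturb each parameter by +/-10% and record the relative change in output.
for name, value in baseline.items():
    for factor in (0.9, 1.1):
        perturbed = dict(baseline, **{name: value * factor})
        output = brand_perception(**perturbed)
        change = (output - base_output) / base_output * 100
        print(f"{name} x{factor:.1f}: output changes by {change:+.1f}%")
```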
Abstract:
The thesis project "Modelling a supply chain for the wine industry under a system dynamics approach" seeks to build a model that provides an optimal solution to the logistics problem found in the supply chain, so that national or international companies whose operations resemble the system studied can take it as an example or reference. The research also aims to identify the most frequent problems in chains of this type in order to build a conceptual and theoretical framework grounded in General Systems Theory (GST) and, ultimately, to produce a system dynamics model that will allow companies to design and compare the different interventions derived from the model, fostering the development of capabilities aimed at achieving lasting competitiveness.
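As an illustration of the stock-and-flow reasoning behind such a model, the sketch below simulates a single inventory stock with a production inflow and a shipment outflow under a step increase in demand. The single-echelon structure, the parameter values, and the inventory-adjustment ordering rule are illustrative assumptions, not the thesis model.

```python
# Minimal stock-and-flow sketch of one echelon of a supply chain.
# Structure and parameters are illustrative assumptions, not the thesis model.

def simulate(weeks=40, dt=1.0):
    inventory = 100.0          # stock (cases of wine)
    target_inventory = 100.0
    demand = 20.0              # customer demand per week
    adjustment_time = 4.0      # weeks to close the inventory gap
    history = []
    for week in range(weeks):
        if week == 10:
            demand = 30.0      # step increase in demand to expose the dynamics
        shipments = min(demand, inventory / dt)                                   # outflow
        production = demand + (target_inventory - inventory) / adjustment_time   # inflow
        inventory += (production - shipments) * dt                               # Euler step on the stock
        history.append(round(inventory, 1))
    return history

print(simulate())
```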
Abstract:
Coordination and task allocation in distributed environments have been an important focus of research in recent years, and these topics lie at the heart of multi-agent systems. Agents in these systems need to cooperate and to take other agents into account in their actions and decisions. Moreover, agents must coordinate among themselves to accomplish complex tasks that require more than one agent to be completed. These tasks can be so complex that agents may not know the location of the tasks or the time remaining before the tasks become obsolete. Agents may need to use communication in order to discover the tasks in the environment; otherwise, they may waste a great deal of time searching for the tasks within the scenario. Similarly, the distributed decision-making process can be even more complex if the environment is dynamic, uncertain, and real-time. In this dissertation, we consider constrained, cooperative multi-agent environments (dynamic, uncertain, and real-time), and two approaches are proposed that allow the agents to coordinate. The first is a semi-centralized mechanism based on combinatorial auction techniques, whose main idea is to minimize the cost of the tasks allocated by the central agent to the agent teams. This algorithm takes into account the agents' preferences over the tasks; these preferences are included in the bid sent by each agent. The second is a fully decentralized scheduling approach, which allows agents to allocate their tasks taking into account their temporal preferences over the tasks. In this case, system performance depends not only on the maximization or optimization criterion, but also on the agents' ability to adapt their allocations efficiently. Additionally, in a dynamic environment, execution failures can occur in any plan owing to uncertainty and to the failure of individual actions, so an indispensable part of a planning system is the ability to replan. This dissertation therefore also provides a replanning approach aimed at allowing agents to re-coordinate their plans when problems in the environment prevent plan execution. All of these approaches are designed to let agents allocate and coordinate complex tasks efficiently in a cooperative, dynamic, and uncertain multi-agent environment, and they have demonstrated their efficiency in experiments carried out in the RoboCup Rescue simulation environment.
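A rough sketch of the semi-centralized auction idea is given below: agents submit bids (a bundle of tasks plus a cost reflecting their preferences) and the central agent greedily accepts the cheapest non-overlapping bundles. The greedy winner-determination rule, the agent names, and the bid values are simplifying assumptions for illustration; they are not the dissertation's algorithm.

```python
# Greedy winner-determination sketch for a semi-centralized task auction.
# Agents bid a cost for a bundle of tasks; the auctioneer accepts the
# cheapest per-task bundles whose tasks do not overlap. This greedy rule is
# a simplification for illustration, not the dissertation's algorithm.

bids = [
    # (agent, bundle of tasks, bid cost reflecting the agent's preferences)
    ("A1", {"rescue_t1", "rescue_t2"}, 8.0),
    ("A2", {"rescue_t2", "rescue_t3"}, 5.0),
    ("A3", {"rescue_t1"}, 6.0),
    ("A3", {"rescue_t3"}, 3.0),
]

def allocate(bids):
    assigned_tasks, allocation = set(), []
    # Sort bundles by cost per task so cheaper coverage is considered first.
    for agent, bundle, cost in sorted(bids, key=lambda b: b[2] / len(b[1])):
        if bundle.isdisjoint(assigned_tasks):
            allocation.append((agent, bundle, cost))
            assigned_tasks |= bundle
    return allocation

for agent, bundle, cost in allocate(bids):
    print(f"{agent} wins {sorted(bundle)} at cost {cost}")
```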
Abstract:
The field of health policy and administration has attracted growing interest in recent decades, probably as a consequence of the substantial increase in health expenditure that has occurred worldwide, but also because the health status of populations has improved markedly, which leads policy makers, academics, sector analysts, and the media to bring health issues to the front pages, giving them prominence and trying to improve understanding of the very complex process of health care delivery. This improvement, however, is not usually quantified: while attempts to measure the costs and the production of health care, a sector with an important economic dimension, are frequent, the same does not happen for its outcomes (the impact that care has had on the health of populations), and even less for the so-called "health gains", which are, after all, the main objective of health systems. Thus, between rising expenditure and improving outcomes there is a missing link that makes it difficult to strike a balance, so it is urgent to adopt explicit models for evaluating care delivery and its results that help validate the effectiveness of the care provided and of the results obtained. The present work aims to contribute to clarifying this question and to identify a routine indicator that can be used to make "health gains" objective and that, being quantifiable, can support the definition of measures of the effectiveness of the results obtained and of the performance of health systems. It will not be yet another measure of production (outputs), but one that can resolve long-standing problems, support the comparison of resources against results, and allow the performance of health systems to be evaluated consistently with their objectives and reliably, being capable of detecting changes and showing differences.
Abstract:
The major organizational changes taking place in public administration increasingly demand a constant reassessment of the management model, requiring a consistent mission and a continuous vision. In this context, the present study aims to demonstrate the importance of performance appraisal within local government. The central objective of this work is to assess the impact of applying performance appraisal in local administration through the Sistema Integrado de Avaliação de Desempenho na Administração Pública (SIADAP), as well as the human resources policies and practices applied, namely training and communication. It also seeks to understand the extent to which these factors contribute to the understanding of SIADAP. To this end, we use a qualitative methodology to analyse and compare the perspectives of appraisees and appraisers in order to identify differences across the various functional profiles. The results indicate that monitoring objectives and competencies during the performance appraisal process influences how appraisees rate the performance appraisal system (SIADAP). The study also revealed, from the appraisers' point of view, their perception of the system as an instrument for promoting a culture of merit. Finally, it made it possible to gather the opinions of appraisees and appraisers on the four dimensions under study (subjectivity, importance, appraisal, and satisfaction), which contributed to a better understanding of this topic and of its influence on the development of people and organizations.
Integrating methods for developing sustainability indicators that can facilitate learning and action
Abstract:
Bossel's (2001) systems-based approach for deriving comprehensive indicator sets provides one of the most holistic frameworks for developing sustainability indicators. It ensures that indicators cover all important aspects of system viability, performance, and sustainability, and recognizes that a system cannot be assessed in isolation from the systems upon which it depends and which in turn depend upon it. In this reply, we show how Bossel's approach is part of a wider convergence toward integrating participatory and reductionist approaches to measure progress toward sustainable development. However, we also show that further integration of these approaches may be able to improve the accuracy and reliability of indicators to better stimulate community learning and action. Only through active community involvement can indicators facilitate progress toward sustainable development goals. To engage communities effectively in the application of indicators, these communities must be actively involved in developing, and even in proposing, indicators. The accuracy, reliability, and sensitivity of the indicators derived from local communities can be ensured through an iterative process of empirical and community evaluation. Communities are unlikely to invest in measuring sustainability indicators unless monitoring provides immediate and clear benefits. However, in the context of goals, targets, and/or baselines, sustainability indicators can more effectively contribute to a process of development that matches local priorities and engages the interests of local people.
Abstract:
In this study, a minimum-variance neuro self-tuning proportional-integral-derivative (PID) controller is designed for complex multiple-input multiple-output (MIMO) dynamic systems. An approximation model is constructed which consists of two functional blocks. The first block uses a linear submodel to approximate the dominant system dynamics around a selected number of operating points. The second block acts as an error agent, implemented by a neural network, to accommodate the inaccuracy possibly introduced by the linear submodel approximation, various complexities and uncertainties, and the complicated coupling effects frequently exhibited in nonlinear MIMO dynamic systems. With the proposed model structure, the controller design for a MIMO plant with n inputs and n outputs can, for example, be decomposed into n independent single-input single-output (SISO) subsystem designs. The effectiveness of the controller design procedure is initially verified through simulations of industrial examples.
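The following minimal sketch shows only a conventional discrete PID loop around a toy first-order plant; the minimum-variance tuning, the neural-network error agent, and the MIMO-to-SISO decomposition described above are not implemented, and all gains and plant parameters are illustrative assumptions.

```python
# Discrete PID loop around a toy first-order plant. The neural-network error
# agent and the MIMO-to-SISO decomposition described in the abstract are
# omitted; gains and plant dynamics are illustrative assumptions.

def pid_step(error, state, kp=1.2, ki=0.4, kd=0.05, dt=0.1):
    """One PID update; `state` carries the integral and the previous error."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    control = kp * error + ki * integral + kd * derivative
    return control, (integral, error)

setpoint, y, state = 1.0, 0.0, (0.0, 0.0)
for step in range(100):
    u, state = pid_step(setpoint - y, state)
    # Toy first-order plant: y' = (-y + 0.8*u) / tau, integrated with Euler (dt=0.1, tau=0.5).
    y += 0.1 * (-y + 0.8 * u) / 0.5
print(f"output after 100 steps: {y:.3f} (setpoint {setpoint})")
```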
Abstract:
New construction algorithms for radial basis function (RBF) network modelling are introduced, based on the A-optimality and D-optimality experimental design criteria respectively. We utilize new cost functions, based on experimental design criteria, for model selection that simultaneously optimize model approximation and parameter variance (A-optimality) or model robustness (D-optimality). The proposed approaches build on the forward orthogonal least-squares (OLS) algorithm, such that the new A-optimality- and D-optimality-based cost functions are constructed within an orthogonalization process that gains computational advantages and hence maintains the inherent computational efficiency of the conventional forward OLS approach. The proposed approach enhances the very popular forward-OLS-based RBF model construction method, since the resultant RBF models are constructed in such a way that system dynamics approximation capability, model adequacy, and robustness are optimized simultaneously. The numerical examples provided show significant improvement based on the D-optimality design criterion, demonstrating that there is significant room for improvement in modelling via the popular RBF neural network.
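To make the forward-selection idea concrete, the sketch below greedily adds Gaussian RBF centres that most reduce the residual sum of squares on a one-dimensional toy problem. The orthogonalization step and the A-/D-optimality cost terms of the proposed algorithms are deliberately omitted; the data, kernel width, and number of centres are assumptions for illustration.

```python
# Greedy forward selection of RBF centres on a 1-D toy problem. The full
# orthogonal least-squares machinery and the A-/D-optimality cost terms from
# the abstract are omitted; width and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 80)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)    # noisy target

def design(centres, width=0.8):
    """Gaussian RBF design matrix for the chosen centres."""
    return np.exp(-((x[:, None] - np.asarray(centres)[None, :]) ** 2)
                  / (2 * width ** 2))

def sse(centres):
    """Residual sum of squares of the least-squares RBF fit."""
    phi = design(centres)
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    r = y - phi @ w
    return float(r @ r)

centres, candidates = [], list(x)
for _ in range(6):                                    # add six centres greedily
    best = min(candidates, key=lambda c: sse(centres + [c]))
    centres.append(best)
    candidates.remove(best)

print(f"selected centres: {np.round(centres, 2)}, final SSE: {sse(centres):.3f}")
```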
Abstract:
Current e-learning systems are increasing in importance in higher education. However, the state of the art of e-learning applications, as well as the state of the practice, does not achieve the level of interactivity that current learning theories advocate. In this paper, the possibility of enhancing e-learning systems to achieve deep learning has been studied by replicating an experiment in which students had to learn basic software engineering principles. One group learned these principles using a static approach, while the other group learned the same principles using a system-dynamics-based approach, which provided interactivity and feedback. The results show that, quantitatively, the latter group achieved a better understanding of the principles; furthermore, qualitatively, they enjoyed the learning experience.
Abstract:
The objective of this paper is to reconsider the Maximum Entropy Production conjecture (MEP) in the context of a very simple two-dimensional zonal-vertical climate model able to represent the total material entropy production due simultaneously to both horizontal and vertical heat fluxes. MEP is first applied to a simple four-box model of climate which accounts for both horizontal and vertical material heat fluxes. It is shown that, under the condition of fixed insolation, a MEP solution is found with reasonably realistic temperatures and heat fluxes, thus generalising results from independent two-box horizontal or vertical models. It is also shown that the meridional and the vertical entropy production terms are independently involved in the maximisation, and thus MEP can be applied to each subsystem with fixed boundary conditions. We then extend the four-box model by increasing its resolution and compare it with GCM output. A MEP solution is found which is fairly realistic as far as the horizontal large-scale organisation of the climate is concerned, whereas the vertical structure looks unrealistic and presents seriously unstable features. This study suggests that the thermal meridional structure of the atmosphere is predicted fairly well by MEP once the insolation is given, but that the vertical structure of the atmosphere cannot be predicted satisfactorily by MEP unless constraints are imposed to represent the determination of longwave absorption by water vapour and clouds as a function of the state of the climate. Furthermore, an order-of-magnitude estimate of the contributions to the material entropy production due to horizontal and vertical processes within the climate system is provided using two different methods. In both cases we find that approximately 40 mW m⁻² K⁻¹ of material entropy production is due to vertical heat transport and 5–7 mW m⁻² K⁻¹ to horizontal heat transport.
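As a rough illustration of how a MEP solution can be found numerically, the sketch below maximizes the material entropy production of a horizontal two-box model with fixed insolation. The reduction to two boxes, the insolation values, and the use of a blackbody outgoing flux are assumptions made for illustration only; the paper's four-box and higher-resolution models are not reproduced here.

```python
# Two-box maximum entropy production (MEP) sketch: pick the meridional heat
# flux F that maximizes the material entropy production F*(1/T_cold - 1/T_warm)
# subject to each box's energy balance. Insolation values and the blackbody
# outgoing flux are illustrative assumptions, not the paper's model.
import numpy as np

SIGMA = 5.67e-8                  # Stefan-Boltzmann constant, W m^-2 K^-4
I_WARM, I_COLD = 300.0, 160.0    # absorbed solar flux per box, W m^-2 (illustrative)

def entropy_production(flux):
    """Material entropy production for a given meridional heat flux (W m^-2)."""
    t_warm = ((I_WARM - flux) / SIGMA) ** 0.25   # energy balance, warm box
    t_cold = ((I_COLD + flux) / SIGMA) ** 0.25   # energy balance, cold box
    return flux * (1.0 / t_cold - 1.0 / t_warm)

fluxes = np.linspace(0.0, 60.0, 601)
production = np.array([entropy_production(f) for f in fluxes])
best = fluxes[production.argmax()]
t_warm = ((I_WARM - best) / SIGMA) ** 0.25
t_cold = ((I_COLD + best) / SIGMA) ** 0.25
print(f"MEP flux ~ {best:.1f} W m^-2, "
      f"entropy production ~ {production.max() * 1000:.1f} mW m^-2 K^-1, "
      f"T_warm ~ {t_warm:.1f} K, T_cold ~ {t_cold:.1f} K")
```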
Abstract:
Integrated simulation models can be useful tools in farming system research. This chapter reviews three commonly used approaches, i.e. linear programming, system dynamics, and agent-based models. Applications of each approach are presented and strengths and drawbacks discussed. We argue that, despite some challenges, mainly related to the integration of different approaches, model validation, and the representation of human agents, integrated simulation models contribute important insights to the analysis of farming systems. They help unravel the complex and dynamic interactions and feedbacks among bio-physical, socio-economic, and institutional components across scales and levels in farming systems. In addition, they can provide a platform for integrative research and can support transdisciplinary research by functioning as learning platforms in participatory processes.
Abstract:
Relating system dynamics to the broad systems movement, the key notion is that reinforcing loops deserve no less attention than balancing loops. Three specific propositions follow. First, since reinforcing loops arise in surprising places, investigations of complex systems must consider their possible existence and potential impact. Second, because the strength of reinforcing loops can be misinferred - we include an example from the field of servomechanisms - computer simulation can be essential. Be it project management, corporate growth or inventory oscillation, simulation helps to assess consequences of reinforcing loops and options for interventions. Third, in social systems the consequences of reinforcing loops are not inevitable. Examples concerning globalization illustrate how difficult it might be to challenge such assumptions. However, system dynamics and ideas from contemporary social theory help to show that even the most complex social systems are, in principle, subject to human influence. In conclusion, by employing these ideas, by attending to reinforcing as well as balancing loops, system dynamics work can improve the understanding of social systems and illuminate our choices when attempting to steer them.
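To illustrate why simulation helps in assessing reinforcing loops, the following minimal sketch couples a reinforcing loop to a balancing loop and integrates the stock with Euler steps. The logistic form and the parameter values are illustrative assumptions, not taken from the article; the point is simply that the resulting S-shaped trajectory is hard to infer without simulating it.

```python
# Minimal sketch of a reinforcing loop coupled to a balancing loop, integrated
# with Euler steps. Parameter values are illustrative; the simulated trajectory
# (S-shaped growth) is what the loops alone do not make obvious.

def simulate(steps=60, dt=1.0):
    stock = 1.0                 # e.g. installed base, workforce, rumours spread
    capacity = 100.0            # limit enforced by the balancing loop
    growth_fraction = 0.15      # strength of the reinforcing loop
    trajectory = []
    for _ in range(steps):
        reinforcing = growth_fraction * stock                      # more begets more
        balancing = growth_fraction * stock * (stock / capacity)   # saturation drag
        stock += (reinforcing - balancing) * dt
        trajectory.append(round(stock, 1))
    return trajectory

print(simulate())
```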
Abstract:
We compare future changes in global mean temperature in response to different future scenarios which, for the first time, arise from emission-driven rather than concentration-driven perturbed parameter ensembles of a global climate model (GCM). These new GCM simulations sample uncertainties in atmospheric feedbacks, the land carbon cycle, ocean physics, and aerosol sulphur cycle processes. We find broader ranges of projected temperature responses when considering emission-driven rather than concentration-driven simulations (with 10–90th percentile ranges of 1.7 K for the aggressive mitigation scenario, up to 3.9 K for the high-end, business-as-usual scenario). A small minority of simulations resulting from combinations of strong atmospheric feedbacks and carbon cycle responses show temperature increases in excess of 9 K (RCP8.5) and, even under aggressive mitigation (RCP2.6), temperatures in excess of 4 K. While the simulations point to much larger temperature ranges for emission-driven experiments, they do not change existing expectations (based on previous concentration-driven experiments) about the timescales over which different sources of uncertainty are important. The new simulations sample a range of future atmospheric concentrations for each emission scenario. For both SRES A1B and the Representative Concentration Pathways (RCPs), the concentration scenarios used to drive GCM ensembles lie towards the lower end of our simulated distribution. This design decision (a legacy of previous assessments) is likely to lead concentration-driven experiments to under-sample strong feedback responses in future projections. Our ensemble of emission-driven simulations spans the global temperature response of the CMIP5 emission-driven simulations, except at the low end. Combinations of low climate sensitivity and low carbon cycle feedbacks lead a number of CMIP5 responses to lie below our ensemble range, while the ensemble also simulates a number of high-end responses which lie above the CMIP5 carbon cycle range. These high-end simulations can be linked to sampling stronger carbon cycle feedbacks and climate sensitivities above 4.5 K. This latter aspect highlights the priority of identifying real-world climate-sensitivity constraints which, if achieved, would reduce the upper bound of projected global mean temperature change. The ensembles of simulations presented here provide a framework to explore relationships between present-day observables and future changes, while the large spread of projected future changes highlights the ongoing need for such work.
Abstract:
Cross-layer techniques are efficient means to enhance throughput and increase the transmission reliability of wireless communication systems. In this paper, a cross-layer design of aggressive adaptive modulation and coding (A-AMC), truncated automatic repeat request (T-ARQ), and user scheduling is proposed for multiuser multiple-input multiple-output (MIMO) maximal ratio combining (MRC) systems, where the impacts of feedback delay (FD) and limited feedback (LF) on channel state information (CSI) are also considered. The A-AMC and T-ARQ mechanism selects the appropriate modulation and coding schemes (MCSs) to achieve higher spectral efficiency while satisfying the service requirement on the packet loss rate (PLR), profiting from the feasibility of using different MCSs to retransmit a packet. Each packet is destined to a scheduled user selected so as to exploit multiuser diversity and to enhance the system's performance in terms of both transmission efficiency and fairness. The system's performance is evaluated in terms of the average PLR, average spectral efficiency (ASE), outage probability, and average packet delay, which are derived in closed form for transmissions over Rayleigh-fading channels. Numerical results and comparisons show that A-AMC combined with T-ARQ yields higher spectral efficiency than the conventional scheme based on adaptive modulation and coding (AMC), while keeping the achieved PLR closer to the system's requirement and reducing delay. Furthermore, the effects of the number of ARQ retransmissions, the numbers of transmit and receive antennas, the normalized FD, and the cardinality of the beamforming weight vector codebook are studied and discussed.
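As a hedged sketch of the general AMC-plus-truncated-ARQ idea (not the paper's A-AMC design), the following code selects the highest-rate MCS whose SNR threshold is met and retransmits at most twice, backing off to a more robust MCS on each retry. The thresholds, rates, and loss model are invented for illustration.

```python
# Threshold-based MCS selection with a bounded ARQ loop. The SNR thresholds,
# rates, and loss model are illustrative placeholders, not the paper's A-AMC
# design; the re-selection of a more robust MCS on retransmission is only sketched.
import random

# (name, spectral efficiency in bit/s/Hz, minimum SNR in dB) - assumed values
MCS_TABLE = [
    ("QPSK 1/2", 1.0, 5.0),
    ("16QAM 1/2", 2.0, 11.0),
    ("64QAM 3/4", 4.5, 18.0),
]
MAX_ARQ = 2  # truncated ARQ: at most two retransmissions

def select_mcs(snr_db):
    """Pick the highest-rate MCS whose SNR threshold the channel satisfies."""
    feasible = [m for m in MCS_TABLE if snr_db >= m[2]]
    return max(feasible, key=lambda m: m[1]) if feasible else MCS_TABLE[0]

def send_packet(snr_db):
    for attempt in range(1 + MAX_ARQ):
        name, rate, threshold = select_mcs(snr_db - 2.0 * attempt)  # back off on retry
        loss_prob = 0.05 if snr_db >= threshold else 0.5            # toy loss model
        if random.random() > loss_prob:
            return name, rate, attempt                              # delivered
    return None, 0.0, MAX_ARQ                                       # dropped after truncated ARQ

random.seed(1)
print(send_packet(snr_db=12.0))
```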