940 results for system dynamics performance
Abstract:
This paper describes an automatic dependent surveillance-broadcast (ADS-B) implementation for air-to-air and ground-based experimental surveillance within a prototype of a fully automated air traffic management (ATM) system, under a trajectory-based-operations paradigm. The system is built on an air-inclusive implementation of system wide information management (SWIM). This work describes the relations between airborne and ground surveillance (SURGND), the prototype surveillance systems, and their algorithms. The system's performance is analyzed with simulated and real data. Results show that the proposed ADS-B implementation can fulfill the most demanding surveillance accuracy requirements.
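As an aside to the accuracy claim above, the sketch below (not from the paper) shows how a measured 95% horizontal position error could be binned into ADS-B Navigation Accuracy Category for Position (NACp) values; the thresholds are the commonly cited EPU bounds for NACp 7-11 and should be verified against the applicable standard before use.

```python
# Hypothetical illustration: map a 95% horizontal position error (EPU)
# onto ADS-B NACp bins. Thresholds are the commonly cited EPU bounds
# for NACp 7-11 (3 m, 10 m, 30 m, 0.05 NM, 0.1 NM); verify against the
# applicable standard before relying on them.
NACP_EPU_METERS = [
    (11, 3.0),
    (10, 10.0),
    (9, 30.0),
    (8, 92.6),    # 0.05 NM
    (7, 185.2),   # 0.1 NM
]

def nacp_category(epu_m: float) -> int:
    """Highest NACp whose EPU bound contains epu_m (0 if coarser than NACp 7)."""
    for nacp, bound in NACP_EPU_METERS:
        if epu_m < bound:
            return nacp
    return 0  # categories 0-6 not modelled in this sketch

print(nacp_category(8.0), nacp_category(120.0))
```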
Abstract:
The hypothesis of this thesis is: "Optimizing the window while simultaneously considering energy aspects and aspects of indoor environmental quality (hygrothermal, lighting and acoustic comfort) is compatible, provided that the synergies between them are known and taken into account from the earliest design stages." The implications of many of the decisions made about the window are currently unknown; for its efficiency with respect to all these aspects to become effective, a tool is needed that provides more information than is currently available in the design process, thus enabling integrated optimization according to the specific circumstances of each project. The initial phase of this research offers a first approach to the subject through the state of the art of the window, analysing existing regulations, components, performance, experimental elements and research. It is observed that high energy-efficiency requirements may at times reduce the system's performance with respect to indoor environmental quality, which motivates integrating indoor environmental quality aspects, such as lighting and acoustic performance and air renewal, into the energy analysis. At this point, the need is identified for a comprehensive study that incorporates the different aspects and evaluates the synergies among the various functions the window fulfils. Furthermore, the analysis of innovative and experimental solutions reveals the difficulty of determining to what extent these solutions are efficient, since they are complex, uncharacterized solutions that are not incorporated into calculation methodologies or into the databases of simulation programs.
A second need therefore arises: to develop an experimental methodology for characterizing and analysing the efficiency of innovative systems. To address this dual need, optimization is approached through an evaluation of the glazed element that integrates energy efficiency and indoor environmental quality, combining theoretical and experimental research. On the theoretical side, simulations, calculations and data gathering are carried out for different opening typologies, for each aspect independently (acoustics, lighting, ventilation). Despite starting from an integrative approach, this integration proves difficult, revealing a lack of available tools. On the experimental side, a methodology is developed for evaluating the performance and environmental aspects of innovative elements that are difficult to assess with the theoretical methodology. This evaluation consists of an experimental comparative analysis between the innovative element and a standard element; to carry it out, two identical spaces, which we call experimentation modules, were designed and fitted with the two systems. These spaces were monitored, yielding data on consumption, temperature, illuminance and relative humidity. Measurements were taken over a nine-month period and the results were analysed and compared, thereby obtaining the real behaviour of the system. After the theoretical and experimental analyses, and as a consequence of the need to integrate existing knowledge, a comprehensive evaluation tool for the glazed element is proposed. This tool is developed on the basis of the indoor environmental quality (IEQ) diagnostic procedure of standard UNE 171330 "Indoor environmental quality", incorporating the energy-efficiency factor.
From the first part of the process, the theoretical part and the state of the art, the decisive parameters and their reference values are obtained. On the basis of these relevant parameters the tool takes shape: a product indicator for windows that integrates all the factors analysed, developed according to standard UNE 21929 "Sustainability in building construction. Sustainability indicators." ABSTRACT The hypothesis of this thesis is: "The optimization of windows considering energy and indoor environmental quality issues simultaneously (hygrothermal comfort, lighting comfort, and acoustic comfort) is compatible, provided that the synergies between these issues are known and considered from the early stages of design." The implications of many of the decisions made on this item are currently unclear. For savings to be made, an effective tool is needed that provides more information during the design process than is currently available, thus enabling optimization of the system according to the specific circumstances of each project. The initial phase deals with the study from an energy-efficiency point of view, performing a qualitative and quantitative analysis of commercial, innovative and experimental windows. It is observed that high energy-efficiency requirements may sometimes mean a reduction in the system's performance in relation to user comfort and health, which is why there is interest in performing an integrated analysis of indoor environment aspects and energy efficiency. At this point, the need for a comprehensive study incorporating the different aspects is identified, to evaluate the synergies that exist between the various functions the window fulfils.
Moreover, from the analysis of experimental and innovative windows, a difficulty in establishing to what extent these solutions are efficient is observed. A second need therefore arises: to generate an experimental methodology for the characterization and analysis of the efficiency of innovative systems. To address this dual need, the optimization of windows through an integrated evaluation is proposed, considering energy efficiency and indoor environmental quality, and combining theoretical and experimental research. In the theoretical field, simulations and calculations are performed, and information about the different aspects of the indoor environment (acoustics, lighting, ventilation) is gathered independently. Despite having started with an integrative approach, this integration proves difficult, revealing a lack of available tools. In the experimental field, a methodology for evaluating energy efficiency and indoor environmental quality is developed, to be applied to innovative elements that are difficult to evaluate with a theoretical methodology. This evaluation is an experimental comparative analysis between an innovative element and a standard element. To carry out this analysis, two identical spaces, called experimental cells, were designed. These cells were monitored, obtaining consumption, temperature, illuminance and relative humidity data. Measurements were taken over a nine-month period, and the results were analysed and compared, obtaining the actual behaviour of the system. To advance this optimization, windows were studied from the point of view of both energy performance and performance in relation to user comfort and health (thermal comfort, acoustic comfort, lighting comfort and air quality), proposing the development of a methodology for an integrated analysis including energy efficiency and indoor environmental quality.
After the theoretical and experimental analysis, and as a result of the need to integrate existing knowledge, a comprehensive evaluation procedure for windows is proposed. This evaluation procedure is developed according to standard UNE 171330 "Indoor Environmental Quality", also incorporating energy efficiency and cost as factors to evaluate. From the first part of the research process, the outstanding parameters are chosen and their reference values are set. Finally, based on the parameters obtained, an indicator is proposed as a product indicator for windows. The indicator integrates all the factors analysed and is developed according to ISO 21929-1:2011 "Sustainability in building construction. Sustainability indicators. Part 1: Framework for the development of indicators and a core set of indicators for buildings".
Application of agency theory to the analysis of performance-based mechanisms in road management
Abstract:
The WCTR is a congress of recognized international prestige in the field of transport research; although the published proceedings are in digital format and have neither an ISSN nor an ISBN, we consider it important enough to be counted in the indicators. This paper develops a model based on agency theory to analyze road management systems (under the different contract forms available today) that employ a mechanism of performance indicators to establish the payment of the agent. The base assumptions are asymmetric information between the principal (public authorities) and the agent (contractor), and the risk aversion of the latter. It is assumed that the principal may only measure the agent's performance indirectly, by means of certain performance indicators that can be verified by the authorities. The model presumes a relation between the efforts made by the agent and the performance level measured by the corresponding indicators, while allowing for dispersion between the two variables that introduces a certain degree of randomness into the contract. On the basis of this model, the optimal contract is analyzed as a function of a series of parameters that characterize the economic environment and the particular conditions of road infrastructure. The analysis concludes that an optimal contract should generally combine a fixed component with a payment tied to the performance level obtained. The higher the risk aversion of the agent and the greater the marginal cost of public funds, the lower the weight of this performance-based payment. By way of conclusion, the system of performance indicators should be as broad as possible but should not overweight those indicators whose results embody greater randomness.
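The abstract's headline comparative static can be reproduced with the textbook linear principal-agent model (Holmström-Milgrom); the sketch below is an illustration under standard CARA-normal assumptions, not the paper's actual model.

```python
# Minimal sketch (not the paper's model) of the textbook linear contract
# w = alpha + beta * x, with observed performance x = e + eps, agent
# effort cost C(e) = c * e**2 / 2, CARA risk aversion r, and indicator
# noise variance sigma2. The standard result beta* = 1 / (1 + r*c*sigma2)
# reproduces the comparative static above: more risk aversion or noisier
# indicators imply a smaller performance-based share of the payment.

def optimal_incentive_power(r: float, c: float, sigma2: float) -> float:
    """Optimal slope beta* of the linear contract (Holmstrom-Milgrom)."""
    return 1.0 / (1.0 + r * c * sigma2)

low_aversion = optimal_incentive_power(r=0.5, c=1.0, sigma2=1.0)
high_aversion = optimal_incentive_power(r=4.0, c=1.0, sigma2=1.0)
print(low_aversion, high_aversion)  # the more risk-averse agent gets a flatter contract
```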
Abstract:
This work describes an analytical approach to determine what degree of accuracy is required in the definition of the rail vehicle models used for dynamic simulations. This makes it possible to know in advance how simulation results may be altered by errors in the creation of rolling stock models, while also identifying their critical parameters, so that the time available for dynamic analysis can be maximized and efforts focused on the factors that are strictly necessary. In particular, the parameters related both to track quality and to rolling contact were considered in this study. With this aim, a sensitivity analysis was performed to assess their influence on the vehicle's dynamic behaviour. To do this, 72 dynamic simulations were performed, modifying, one at a time, the track quality, the wheel-rail friction coefficient and the equivalent conicity of both new and worn wheels. Three values were assigned to each parameter, and two wear states were considered for each type of wheel: one for new wheels and another for reprofiled wheels. After processing the results of these simulations, it was concluded that all the parameters considered have a very strong influence, with the friction coefficient being the most influential. It is therefore recommended to undertake any future simulation work with measured track geometry and irregularities, measured wheel profiles, and normative values of the wheel-rail friction coefficient.
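The one-at-a-time sensitivity procedure described above can be sketched as follows; `run_simulation`, the baseline values and the candidate values are invented stand-ins for the real multibody solver and study parameters.

```python
# Schematic one-at-a-time (OAT) sensitivity loop in the spirit of the
# study above. run_simulation, the baseline and the candidate values are
# invented stand-ins; the toy response is built so that friction
# dominates, mimicking the study's conclusion.
baseline = {"track_quality": 1.0, "friction": 0.40, "conicity": 0.15}
candidates = {
    "track_quality": [0.5, 1.0, 1.5],
    "friction": [0.20, 0.40, 0.60],
    "conicity": [0.05, 0.15, 0.30],
}

def run_simulation(params):
    # Placeholder for the vehicle dynamics solver.
    return (2.0 * params["friction"]
            + 0.5 * params["conicity"]
            + 0.3 * params["track_quality"])

def oat_influence(name):
    """Spread of the output when one parameter sweeps its candidate values."""
    outputs = [run_simulation(dict(baseline, **{name: v})) for v in candidates[name]]
    return max(outputs) - min(outputs)

scores = {name: oat_influence(name) for name in candidates}
print(max(scores, key=scores.get))  # -> friction in this toy response
```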
Abstract:
In general terms, m-health can be defined as the set of information systems, medical sensors and mobile communications technologies for health care. The increasing availability, miniaturization, performance and ever-higher data rates, together with the expected convergence of wireless communication and network technologies around mobile health systems, are accelerating the deployment of these systems and the provision of m-health services, such as mobile telecare. The emerging concept of m-health entails significant challenges (technical studies, analyses, modelling of service provision, etc.) that must be addressed to drive the evolution of e-health systems and services from telecommunication technologies based on wired access and fixed networks towards next-generation mobile and wireless configurations. This work first analyses the meaning and implications of m-health and its current situation, the challenges to be faced for its deployment and provision, and its trends. Among the many different services that can be provided, the Localización de Personas (LoPe) person-location service, launched by the Red Cross in February 2007 for mobile telecare, has been selected; it makes it possible to know at all times the location of the person carrying the associated device. Aimed at people with disabilities or at risk or dependency due to cognitive impairment, its goal is to help them recover their personal autonomy. The provision of this service is modelled using system dynamics, since this theory is considered well suited to modelling complex systems that evolve over time.
The end result is a model which, implemented with the Studio 8® tool from the Norwegian company Powersim Software AS, has allowed us to analyse and evaluate the service's behaviour over time, draw conclusions about it and propose future improvements to the service. ABSTRACT. In general terms, m-health can be defined as "mobile computing, medical sensor, and communications technologies for health care." The increased availability, miniaturization, performance, enhanced data rates, and the expected convergence of future wireless communication and network technologies around mobile health systems are accelerating the deployment of m-health systems and services, for instance, mobile telecare. The emerging concept of m-health involves significant challenges (technical studies, analysis, modeling of service provision, etc.) that must be tackled to drive the development of e-health services and systems offered by telecommunication technologies that use wired and fixed networks towards wireless and mobile new-generation networks. Firstly, this master's thesis analyzes the meaning and implications of m-health and its current situation. This analysis also includes the challenges that must be tackled for the implementation and provision of m-health technologies and services, as well as m-health trends. Among the many different m-health services already delivered, the Localización de Personas (LoPe) service has been selected. This service, launched by the Spanish Red Cross in February 2007, makes it possible to locate people who carry the associated device. It is aimed at people with disabilities, at risk, or in a situation of dependency due to cognitive impairment, and helps them recover their personal autonomy. The provision of this service is modeled with system dynamics, since this theory is well suited to modeling complex systems that evolve over time.
The final result is a system dynamics model of the service implemented with the Studio 8® tool developed by Powersim Software AS, a Norwegian company. This model has allowed us to analyze and evaluate the service's behaviour over time, as well as to draw conclusions and to consider some future improvements to the service.
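By way of illustration, a stock-and-flow model of service provision like the one described can be reduced to a few lines of Euler integration; the sketch below (in Python rather than Powersim Studio 8, with invented rates) shows the basic mechanics of one stock, an adoption inflow and a churn outflow.

```python
# Minimal stock-and-flow sketch: a stock of active service users fed by
# an adoption inflow from a pool of potential users and drained by a
# churn outflow, integrated with Euler's method. All rates and initial
# values are invented for illustration.

def simulate(adoption_rate=0.08, churn_rate=0.03,
             potential=10_000.0, users0=100.0, dt=0.25, steps=400):
    users, potential_users = users0, potential
    history = [users]
    for _ in range(steps):
        adoption = adoption_rate * potential_users  # inflow [users/time]
        churn = churn_rate * users                  # outflow [users/time]
        potential_users -= adoption * dt
        users += (adoption - churn) * dt
        history.append(users)
    return history

trajectory = simulate()
print(round(trajectory[-1]))  # adoption empties the pool; churn then slowly drains the stock
```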
Abstract:
Load calculation for floating wind turbines requires time-domain simulation tools that consider all the phenomena affecting the system, such as aerodynamics, structural dynamics, hydrodynamics, control strategies and mooring-line dynamics. All these effects are coupled and influence one another. Integrated tools are used to compute the extreme and fatigue loads employed for the structural sizing of the turbine's components. For this reason, an accurate load calculation has a major influence on component optimization and on the final cost of the floating wind turbine. In particular, the mooring system has a great impact on the overall dynamics of the system. Many integrated codes for simulating floating wind turbines use simplified models that do not consider the dynamic effects of the mooring lines. Accurate simulation of the mooring lines within the integrated models can be essential for obtaining reliable results on the system dynamics and on the load levels in the different components. However, the impact that including mooring dynamics has on the integrated simulation and on the loads has not yet been rigorously quantified. The main objective of this research is to develop an accurate dynamic model for the simulation of mooring lines, validate it with wave-tank measurements, and integrate it into a simulation code for floating wind turbines. Finally, this experimentally validated tool is used to quantify the impact that a dynamic mooring-line model has on the computation of fatigue and extreme loads of floating wind turbines, in comparison with a quasi-static model.
This is very useful information for future designers when deciding which mooring-line model is appropriate, depending on the platform type and the expected results. The dynamic mooring-line code developed in this research is based on the Finite Element Method, specifically using a lumped-mass model to increase its computational efficiency. The experiments for validating the code were carried out in the wave tank of the École Centrale de Nantes (ECN), France, and consisted of submerging a chain with one end anchored to the bottom of the tank and exciting the suspended end with harmonic motions of different periods. The code demonstrated its ability to predict the tension and the motions at different positions along the length of the line with great accuracy. The results indicated the importance of capturing mooring-line dynamics for predicting tension, especially for high-frequency motions. Finally, the code was used in an exhaustive assessment of the effect that mooring-line dynamics have on the extreme and fatigue loads of different floating wind turbine concepts. Loads were computed for three floating-turbine typologies (semisubmersible, spar buoy and tension leg platform) and compared with the loads obtained using a quasi-static mooring-line model. More than 20,000 load cases defined by the IEC 61400-3 standard were run and post-processed, following all the requirements that a certification body would impose on an industrial designer of floating wind turbines.
The results showed that the impact of mooring-line dynamics on both fatigue and extreme loads increases for elements located closer to the platform: blade and shaft loads are only slightly modified by line dynamics; tower-base loads can change significantly depending on the platform type; and mooring-line tension depends strongly on line dynamics, in both fatigue and extreme loads, for all the platform concepts evaluated. ABSTRACT The load calculation of floating offshore wind turbines requires time-domain simulation tools taking into account all the phenomena that affect the system, such as aerodynamics, structural dynamics, hydrodynamics, control actions and mooring-line dynamics. These effects are coupled and mutually influenced. The results provided by integrated simulation tools are used to compute the fatigue and ultimate loads needed for the structural design of the different components of the wind turbine. For this reason, their accuracy has an important influence on the optimization of the components and the final cost of the floating wind turbine. In particular, the mooring system greatly affects the global dynamics of the floater. Many integrated codes for the simulation of floating wind turbines use simplified approaches that do not consider mooring-line dynamics. An accurate simulation of the mooring system within the integrated codes can be fundamental to obtaining reliable results on the system dynamics and the loads. The impact of taking mooring-line dynamics into account in the integrated simulation has still not been thoroughly quantified.
The main objective of this research is to develop an accurate dynamic model for the simulation of mooring lines, validate it against wave-tank tests, and then integrate it into a simulation code for floating wind turbines. This experimentally validated tool is finally used to quantify the impact that dynamic mooring models have on the computation of fatigue and ultimate loads of floating wind turbines in comparison with quasi-static tools. This information will be very useful for future designers when deciding which mooring model is adequate, depending on the platform type and the expected results. The dynamic mooring-line code developed in this research is based on the Finite Element Method and is oriented towards computational efficiency, for which a Lumped Mass approach was selected. The experimental tests performed for the validation of the code were carried out at the École Centrale de Nantes (ECN) wave tank in France and consisted of a chain submerged in a water basin, anchored at the bottom, whose suspension point was excited with harmonic motions of different periods. The code showed its ability to predict the tension and the motions at several positions along the length of the line with high accuracy. The results demonstrated the importance of capturing the mooring dynamics for the prediction of the line tension, especially for high-frequency motions. Finally, the code was used for an extensive assessment of the effect of mooring dynamics on the computation of fatigue and ultimate loads for different floating wind turbines. The loads were computed for three platform topologies (semisubmersible, spar buoy and tension leg platform) and compared with the loads provided by a quasi-static mooring model.
More than 20,000 load cases were launched and post-processed following the IEC 61400-3 guideline and fulfilling the conditions that a certification entity would require of an offshore wind turbine designer. The results showed that the impact of mooring dynamics on both fatigue and ultimate loads increases as elements located closer to the platform are evaluated: the blade and shaft loads are only slightly modified by the mooring dynamics in all the platform designs; the tower-base loads can be significantly affected, depending on the platform concept; and the mooring-line tension strongly depends on the line dynamics, both in fatigue and extreme loads, in all the platform concepts evaluated.
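To make the lumped-mass idea concrete, the sketch below implements a skeletal 2D lumped-mass line with an anchored end and a harmonically excited fairlead, echoing the wave-tank test setup; it omits seabed contact, added mass and hydrodynamic drag, and all parameter values are invented.

```python
# Skeletal 2D lumped-mass mooring line (illustrative only): the line is
# split into point masses joined by axial springs; node 0 is anchored and
# the last node follows a prescribed harmonic surge motion, as in the
# wave-tank validation tests. No seabed contact, added mass or drag.
import numpy as np

N = 20                  # number of nodes (19 segments)
L0 = 2.0                # unstretched segment length [m]
m = 5.0                 # lumped mass per node [kg]
EA = 1.0e5              # axial stiffness [N]
g = 9.81
dt, steps = 2.5e-4, 40_000

span = N * L0 * 0.9     # horizontal anchor-to-fairlead distance: line is slack
pos = np.column_stack([np.linspace(0.0, span, N), np.zeros(N)])
vel = np.zeros_like(pos)

def axial_forces(pos):
    forces = np.zeros_like(pos)
    seg = pos[1:] - pos[:-1]
    length = np.linalg.norm(seg, axis=1)
    tension = np.maximum(EA * (length - L0) / L0, 0.0)  # a chain cannot push
    f = tension[:, None] * seg / length[:, None]
    forces[:-1] += f                                    # pull lower node up the line
    forces[1:] -= f
    return forces

for k in range(steps):
    t = k * dt
    acc = axial_forces(pos) / m
    acc[:, 1] -= g                                      # gravity (buoyancy neglected)
    vel += acc * dt
    vel *= 0.999                                        # crude damping for stability
    pos += vel * dt
    pos[0] = (0.0, 0.0)                                 # anchor
    pos[-1] = (span + 0.5 * np.sin(2 * np.pi * t / 4.0), 0.0)  # fairlead motion
    vel[[0, -1]] = 0.0

top_tension = EA * max(np.linalg.norm(pos[-1] - pos[-2]) - L0, 0.0) / L0
print(round(top_tension, 1))
```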
Abstract:
The main objective of this study is to determine the attitudes of school principals regarding a performance-based compensation system. This study identifies the attitudes towards specific factors that should be considered in the implementation of a system of performance-based compensation. The data have been analyzed to determine whether a principal's demographic characteristics affect his or her level of agreement with performance-based compensation and the factors for implementation. In addition, this study unveils areas of concern that principals have conveyed regarding the implementation of a performance-based compensation system. Data were obtained from 444 public school principals representing 444 schools and 178 districts in the state of Colorado. Measures used in the treatment of the data include descriptive statistics and one-way ANOVA. The major findings of this study were:
1. 82.4% of respondents believe that teachers, principals and administrators should be included in performance-based compensation (PBC).
2. The top two indicators that respondents believed should be included in a PBC system are student achievement (88.5%) and teacher evaluations (77.6%).
3. The three largest obstacles to PBC that respondents identified are:
a. the capacity to link student achievement to teacher evaluations (82.9%);
b. teacher union resistance (67.1%);
c. cost (55.9%).
4. Principals in urban, rural and suburban geographic groups disagree about the effects of performance-based compensation.
5. The top five overall concerns regarding performance-based compensation were:
a. concerns regarding effectively using assessment to measure the performance of all teachers, and equity between teachers;
b. concerns regarding evaluation (time for principals to learn, consistency from school to school, time for principals to evaluate, quality of the evaluation tool);
c. opposition to PBC on philosophical grounds or due to a perceived lack of research;
d. concerns regarding equity between classrooms and districts across the state due to poverty levels and unequal resources;
e. concerns that performance-based compensation will result in a decline in teacher collaboration and an increase in competition between teachers.
Based upon these findings, the researcher concluded that there is not a strong general acceptance of performance-based compensation systems. However, urban principals in Colorado tend to view PBC somewhat more favorably than do principals in suburban or rural areas. Most importantly, systems to link student achievement to teacher evaluation must be collaboratively created to ensure PBC systems are equitable, consistent and fair.
Abstract:
As the population of Colorado continues to grow, the impacts from individual sewage disposal systems, or onsite wastewater systems (OWS), are becoming more apparent. Increased use of OWS impacts not only water quality but land use and development as well. These impacts have led to the need for a new generation of wastewater regulations in the state, a transition from the historic prescriptive requirements to a more progressive, performance-based system. A performance-based system will allow smarter growth, improved water quality, and cost savings for both the regulatory agencies and the OWS industry in Colorado. This project outlines the challenges and essential elements required to make this transition, and provides guidance on how to meet the challenges and overcome barriers to implementing a performance code in Colorado.
Abstract:
We present experimental results on the measurement of fidelity decay under contrasting system dynamics using a nuclear magnetic resonance quantum information processor. The measurements were performed by implementing a scalable circuit in the model of deterministic quantum computation with only one quantum bit. The results show measurable differences between regular and complex behavior, and, for complex dynamics, are faithful to the expected theoretical decay rate. Moreover, we illustrate how the experimental method can be seen as an efficient way either to extract coarse-grained information about the dynamics of a large system or to measure the decoherence rate from engineered environments.
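A numerical analogue of the fidelity-decay measurement can be sketched with random-matrix Hamiltonians standing in for "complex" dynamics; this illustrates the quantity being measured, not the NMR implementation, and all sizes and the perturbation strength are arbitrary choices.

```python
# Random-matrix analogue of fidelity (Loschmidt echo) decay:
# F(t) = |<psi| U0(t)^dagger U_eps(t) |psi>|^2 for H0 vs H0 + eps*V,
# with random Hermitian H0, V standing in for "complex" dynamics.
import numpy as np

rng = np.random.default_rng(0)
d = 64

def random_hermitian(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (a + a.conj().T) / 2

def propagator(H, t):
    """U(t) = exp(-i H t) via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return (v * np.exp(-1j * w * t)) @ v.conj().T

H0, V = random_hermitian(d), random_hermitian(d)
eps = 0.05
psi = np.zeros(d, complex)
psi[0] = 1.0

def fidelity(t):
    echo = propagator(H0, t).conj().T @ propagator(H0 + eps * V, t) @ psi
    return abs(np.vdot(psi, echo)) ** 2

decay = [fidelity(t) for t in (0.0, 1.0, 2.0, 4.0)]
print([round(f, 3) for f in decay])  # starts at 1.0 and falls off
```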
Abstract:
We propose a novel interpretation and usage of neural networks (NNs) in modeling physiological signals, which are allowed to be nonlinear and/or nonstationary. The method consists of training an NN for the k-step prediction of a physiological signal, and then examining the connection-weight space (CWS) of the NN to extract information about the signal-generating mechanism. We define a novel feature, the Normalized Vector Separation (gamma(ij)), to measure the separation of two arbitrary states i and j in the CWS and use it to track the state changes of the generating system. The performance of the method is examined via synthetic signals and clinical EEG. Synthetic data indicate that gamma(ij) can track the system down to an SNR of 3.5 dB. Clinical data obtained from three patients undergoing carotid endarterectomy showed that EEG could be modeled (within a root-mean-squared error of 0.01) by the proposed method, and that the blood perfusion state of the brain could be monitored via gamma(ij), with small NNs having no more than 21 connection weights altogether.
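The idea of tracking state changes in connection-weight space can be sketched with a linear k-step predictor standing in for the NN; note that the normalization used for the separation measure below is an assumption for illustration, not necessarily the paper's exact gamma(ij).

```python
# Toy version of the CWS idea: fit a k-step-ahead linear predictor
# (a stand-in for the NN) on segments of a signal, then compare the
# fitted weight vectors. The separation measure
# ||wi - wj|| / (||wi|| + ||wj||) is an assumed normalization.
import numpy as np

rng = np.random.default_rng(1)

def fit_predictor(x, order=5, k=2):
    """Least-squares weights predicting k steps beyond the last of `order` inputs."""
    rows = [x[i:i + order] for i in range(len(x) - order - k + 1)]
    X, y = np.array(rows), x[order + k - 1:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def separation(wi, wj):
    return np.linalg.norm(wi - wj) / (np.linalg.norm(wi) + np.linalg.norm(wj))

t = np.arange(2000)
noise = 0.05
state_a = np.sin(0.1 * t) + noise * rng.normal(size=t.size)  # baseline regime
state_b = np.sin(1.0 * t) + noise * rng.normal(size=t.size)  # changed dynamics

w_a1, w_a2 = fit_predictor(state_a[:1000]), fit_predictor(state_a[1000:])
w_b = fit_predictor(state_b[:1000])

# Separation within one regime should be small; across regimes, larger.
print(separation(w_a1, w_a2), separation(w_a1, w_b))
```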
Abstract:
We present the first experimental observation of several bifurcations in a controllable non-linear Hamiltonian system. Dynamics of cold atoms are used to test predictions of non-linear, non-dissipative Hamiltonian dynamics.
Abstract:
This special issue of the Journal of the Operational Research Society is dedicated to papers on the related subjects of knowledge management and intellectual capital. These subjects continue to generate considerable interest amongst both practitioners and academics. This issue demonstrates that operational researchers have many contributions to offer to the area, especially by bringing multi-disciplinary, integrated and holistic perspectives. The papers included are both theoretical and practical, and include a number of case studies showing how knowledge management has been implemented in practice, which may assist other organisations in their search for a better means of managing what is now recognised as a core organisational activity. A growing number of organisations have accepted that the precise handling of information and knowledge is a significant factor in facilitating their success, but that there is a challenge in how to implement a strategy and processes for this handling. It is here, in the particular area of knowledge process handling, that the contributions of operational researchers can be seen most clearly, as illustrated in the papers included in this journal edition. The issue comprises nine papers, contributed by authors based in eight different countries on five continents. Lind and Seigerroth describe an approach that they call team-based reconstruction, intended to help articulate knowledge in a particular organisational context. They illustrate the use of this approach with three case studies, two in manufacturing and one in public sector health care. Different ways of carrying out reconstruction are analysed, and the benefits of team-based reconstruction are established. Edwards and Kidd, and Connell, Powell and Klein both concentrate on knowledge transfer.
Edwards and Kidd discuss the issues involved in transferring knowledge across borders of various kinds, from those within organisations to those between countries. They present two examples, one in distribution and the other in manufacturing. They conclude that trust and culture both play an important part in facilitating such transfers, that IT should be kept in a supporting role in knowledge management projects, and that a staged approach to this IT support may be the most effective. Connell, Powell and Klein consider the oft-quoted distinction between explicit and tacit knowledge, and argue that such a distinction is sometimes unhelpful. They suggest that knowledge should instead be regarded as a holistic systemic property. The consequences of this for knowledge transfer are examined, with a particular emphasis on what this might mean for the practice of OR. Their view of OR in the context of knowledge management very much echoes Lind and Seigerroth's focus on knowledge for human action. This is an interesting convergence of views given that, broadly speaking, one set of authors comes from within the OR community, and the other from outside it. Hafeez and Abdelmeguid present the nearest to a 'hard' OR contribution of the papers in this special issue. In their paper they construct and use system dynamics models to investigate alternative ways in which an organisation might close a knowledge or skills gap. The methods they use have the potential to be generalised to other quantifiable aspects of intellectual capital. The contribution by Revilla, Sarkis and Modrego is also at the 'hard' end of the spectrum. They evaluate the performance of public–private research collaborations in Spain, using an approach based on data envelopment analysis. They found that larger organisations tended to perform relatively better than smaller ones, even though the approach used takes scale effects into account.
Perhaps more interesting was that many factors that might have been thought relevant, such as the organisation's existing knowledge base or how widely applicable the results of the project would be, had no significant effect on performance. It may be that how well the partnership between the collaborators works (not a factor it was possible to take into account in this study) is more important than most other factors. Mak and Ramaprasad introduce the concept of a knowledge supply network. This builds on existing ideas of supply chain management, but also integrates the design chain and the marketing chain, to address all the intellectual property connected with the network as a whole. The authors regard the knowledge supply network as the natural focus for considering knowledge management issues. They propose seven criteria for evaluating knowledge supply network architecture, and illustrate their argument with an example from the electronics industry: integrated circuit design and fabrication. Hasan and Crawford take a holistic approach to knowledge management. They demonstrate their argument, that there is no simple IT solution for organisational knowledge management efforts, through two case study investigations. These case studies, in Australian universities, are examined through cultural-historical activity theory, which focuses the study on the activities that people carry out in support of their interpretations of their role, the opportunities available and the organisation's purpose. Human activities, it is argued, are mediated by the available tools, including IT and IS, and, in this particular context, knowledge management systems (KMS). It is this argument that places the available technology within the knowledge activity process and allows the future design of KMS to be improved through the lessons learnt by studying these knowledge activity systems in practice.
Wijnhoven concentrates on knowledge management at the operational level of the organisation. He is concerned with studying the transformation of certain inputs to outputs (the operations function) and the consequent realisation of organisational goals via the management of these operations. He argues that the inputs and outputs of this process in the context of knowledge management are different types of knowledge, terms the operational method knowledge logistics, and calls the method of transformation learning. This theoretical paper discusses the operational management of four types of knowledge objects (explicit understanding, information, skills, and norms and values) and shows how, through the proposed framework, learning can transfer these objects to clients in a logistical process without a major transformation in content. Millie Kwan continues this theme with a paper about process-oriented knowledge management. In her case study she discusses an implementation of knowledge management where the knowledge is centred around an organisational process, and the mission, rationale and objectives of the process define the scope of the project. Her case concerns the effective use of real estate (property and buildings) within a Fortune 100 company. In order to manage the knowledge about this property, and the process by which the best 'deal' for internal customers and the overall company was reached, a KMS was devised. She argues that process knowledge is a source of core competence and thus needs to be strategically managed. Finally, you may also wish to read a related paper originally submitted for this Special Issue, 'Customer knowledge management' by Garcia-Murillo and Annabi, which was published in the August 2002 issue of the Journal of the Operational Research Society, 53(8), 875–884.
Abstract:
The generating functional method is employed to investigate the synchronous dynamics of Boolean networks, providing an exact result for the system dynamics via a set of macroscopic order parameters. The topology of the networks studied and their constituent Boolean functions represent the system's quenched disorder and are sampled from a given distribution. The framework accommodates a variety of topologies and Boolean function distributions and can be used to study both the noisy and noiseless regimes; it enables one to calculate correlation functions at different times that are inaccessible via commonly used approximations. It is also used to determine conditions for the annealed approximation to be valid, explore phases of the system under different levels of noise, and obtain results for models with strong memory effects, where existing approximations break down. Links between Boolean networks and general Boolean formulas are identified and results common to both system types are highlighted.
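As a concrete illustration of the model class being analysed (not of the generating functional method itself), the following is a minimal sketch of a synchronous Boolean network with quenched disorder: each node is wired to K randomly chosen inputs and assigned a random truth table, and all nodes update simultaneously. The network size, connectivity and the mean-activity observable are illustrative assumptions.

```python
import numpy as np

def random_boolean_network(n, k, rng):
    """Quenched disorder: fixed random wiring and random truth tables."""
    inputs = np.array([rng.choice(n, size=k, replace=False) for _ in range(n)])
    tables = rng.integers(0, 2, size=(n, 2 ** k))  # one output bit per input pattern
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: every node reads its inputs from the same time slice."""
    idx = np.zeros(len(state), dtype=int)
    # encode each node's input pattern as an integer index into its truth table
    for col in inputs.T:
        idx = (idx << 1) | state[col]
    return tables[np.arange(len(state)), idx]

rng = np.random.default_rng(0)
n, k = 50, 3
inputs, tables = random_boolean_network(n, k, rng)
state = rng.integers(0, 2, size=n)
traj = [state]
for _ in range(20):
    state = step(state, inputs, tables)
    traj.append(state)
# a simple macroscopic order parameter: mean activity at each time step
activity = [s.mean() for s in traj]
```

Averaging such trajectories over many realisations of the wiring and truth tables is the quenched average that the generating functional treats analytically.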
Abstract:
Elementary conformational changes of the backbone of a 21-residue peptide A5(A3RA)3A are studied using molecular dynamics simulations in explicit water. The processes of the conformational transitions, and the regimes of stationary fluctuations between them, are investigated using minimal perturbations of the system. Each perturbation consists of rotating the velocity of one of the system's atoms by a few degrees, which keeps the system on the same energy surface. It is found that (i) the system dynamics is insignificantly changed by the perturbations in the regimes between the transitions; (ii) the dynamics is very sensitive to perturbations applied just before a transition, which prevents the peptide from making the transition; and (iii) perturbing any atom of the system, including distant water molecules, is equally effective in preventing the transition. The latter implies strongly collective dynamics of the peptide and water during the transitions.
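The minimal perturbation described above, rotating one atom's velocity by a few degrees, preserves the atom's speed and hence the kinetic energy, so the system stays on the same energy surface. A sketch of such a velocity rotation via Rodrigues' rotation formula; the axis, angle and velocity values are illustrative, not from the paper:

```python
import numpy as np

def rotate_velocity(v, axis, angle_deg):
    """Rotate a velocity vector about `axis` by a small angle.
    The speed |v| is preserved, so the kinetic energy is unchanged."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    theta = np.radians(angle_deg)
    # Rodrigues' rotation formula
    return (v * np.cos(theta)
            + np.cross(axis, v) * np.sin(theta)
            + axis * np.dot(axis, v) * (1.0 - np.cos(theta)))

v = np.array([1.0, -2.0, 0.5])          # velocity of one chosen atom (illustrative)
v_pert = rotate_velocity(v, axis=[0.0, 0.0, 1.0], angle_deg=3.0)
# kinetic energy (per unit mass) is conserved by the rotation
assert np.isclose(np.dot(v, v), np.dot(v_pert, v_pert))
```

Applying this to a single atom of a running simulation and comparing the perturbed and unperturbed trajectories is the kind of minimal intervention the abstract describes.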
Abstract:
We explore the dynamics of a periodically driven Duffing resonator coupled elastically to a van der Pol oscillator under 1:1 internal resonance, in the cases of weak and strong coupling. While strong coupling leads to dominant synchronization, weak coupling leads to a multitude of complex behaviours. A two-time-scales method is used to obtain the frequency-amplitude modulation. The internal resonance leads to an antiresonance response of the Duffing resonator and a stagnant response (a small shoulder in the curve) of the van der Pol oscillator. The stability of the dynamic motions is also analysed. The coupled system shows a hysteretic response pattern and symmetry-breaking facets. Chaotic behaviour of the coupled system is also observed, and the dependence of the system dynamics on the parameters is studied using bifurcation analysis.
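A minimal numerical sketch of an elastically coupled, periodically driven Duffing–van der Pol pair of the kind described; all parameter values and the exact coupling form are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not from the paper)
delta, alpha, beta = 0.1, 1.0, 0.5   # Duffing damping, linear and cubic stiffness
mu = 0.2                             # van der Pol nonlinearity
kappa = 0.05                         # weak elastic coupling
F, omega = 0.3, 1.0                  # periodic drive near 1:1 internal resonance

def rhs(t, z):
    """First-order form of the coupled Duffing / van der Pol equations."""
    x, xdot, y, ydot = z
    xddot = (-delta * xdot - alpha * x - beta * x**3
             - kappa * (x - y) + F * np.cos(omega * t))
    yddot = mu * (1.0 - y**2) * ydot - y - kappa * (y - x)
    return [xdot, xddot, ydot, yddot]

sol = solve_ivp(rhs, (0.0, 200.0), [0.1, 0.0, 0.1, 0.0], max_step=0.05)
# discard the transient and inspect steady-state amplitudes
tail = sol.y[:, sol.t > 150.0]
amp_duffing, amp_vdp = np.abs(tail[0]).max(), np.abs(tail[2]).max()
```

Sweeping `omega` or `kappa` and recording these steady-state amplitudes is a crude numerical counterpart of the frequency-amplitude and bifurcation analyses in the abstract.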