891 results for Process control - Statistical methods
Abstract:
LOPES, Jose Soares Batista et al. Application of multivariable control using artificial neural networks in a debutanizer distillation column. In: INTERNATIONAL CONGRESS OF MECHANICAL ENGINEERING - COBEM, 19, 5-9 Nov. 2007, Brasilia. Anais... Brasilia, 2007.
Abstract:
Objective: this study aims to characterize the quality of life of the elderly in the Leiria Region, comparing those who live at home with those who live in institutions. To this end, we propose to characterize the study population sociodemographically; to identify situational factors according to place of residence; to assess levels of dependence, social support and family functionality; to assess quality of life; and to identify the relationship between the various variables and quality of life. Method: A questionnaire was administered to a total of 238 elderly people, 111 living in institutions and 127 living at home. Throughout the data collection process, the ethical requirements that govern our profession were observed. Descriptive and analytical statistical methods were used for data analysis. Results: The results obtained allowed the sociodemographic characterization of the elderly of the Leiria region. It was also possible to compare the two study groups; no significant differences were found between them for the biopsychosocial variables. Conclusion: Most of the elderly surveyed have quality of life, with those living at home showing a higher quality of life.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
My thesis focuses on health policies designed to encourage the supply of health services. Access to health services is a major problem undermining the health systems of most industrialized countries. In Quebec, the median wait time between a referral from a general practitioner and an appointment with a specialist was 7.3 weeks in 2012, up from 2.9 weeks in 1993, despite an increase in the number of physicians over the same period. For policy makers observing rising wait times for health care, it is important to understand the structure of physician labor supply and how it affects the supply of health services. In this context, I consider two main policies. First, I estimate how physicians respond to monetary incentives, and I use the estimated parameters to examine how compensation policies can be used to determine the short-run supply of health services. Second, I examine how physician productivity is affected by experience, through the mechanism of learning-by-doing, and I use the estimated parameters to find the number of inexperienced physicians that must be recruited to replace an experienced physician who retires while keeping the supply of health services constant. My thesis develops and applies economic and statistical methods to measure physicians' responses to monetary incentives and to estimate their productivity profile (measuring the variation in physician productivity over the course of their careers), using panel data on Quebec physicians drawn from both surveys and administrative records. The data contain information on each physician's labor supply, the different types of services provided, and their prices. They cover a period during which the Quebec government changed the relative prices of health services. I used a model-based approach to develop and estimate a structural model of labor supply in which physicians are multitasking. In my model, physicians choose the number of hours worked as well as the allocation of those hours across the different services offered, with service prices imposed on them by the government. The model generates an income equation that depends on hours worked and on a price index representing the marginal return to hours worked when those hours are allocated optimally across services. The price index depends on the prices of the services offered and on the parameters of the service production technology, which determine how physicians respond to changes in relative prices. I applied the model to panel data on the remuneration of Quebec physicians, merged with time-use data for the same physicians. I use the model to examine two dimensions of the supply of health services. First, I analyze the use of monetary incentives to induce physicians to modify their production of different services. Although previous studies have often sought to compare physician behavior across compensation systems, relatively little is known about how physicians respond to changes in the prices of health services.
Current debates in Canadian health policy circles have focused on the importance of income effects in determining physicians' responses to increases in the prices of health services. My work contributes to this debate by identifying and estimating the substitution and income effects resulting from changes in the relative prices of health services. Second, I analyze how experience affects physician productivity. This has important implications for physician recruitment to meet the growing demand of an aging population, particularly as the most experienced (and most productive) physicians retire. In the first essay, I estimated the income function conditional on hours worked, using the instrumental variables method to control for the potential endogeneity of hours worked. As instruments I used indicator variables for physician age, the marginal tax rate, the stock market return, and the square and cube of that return. I show that this yields a lower bound on the own-price elasticity, making it possible to test whether physicians respond to monetary incentives. The results show that the lower bounds of the price elasticities of service supply are significantly positive, suggesting that physicians do respond to incentives. A change in relative prices leads physicians to allocate more hours of work to the service whose price has increased. In the second essay, I estimate the full model, unconditionally on hours worked, analyzing variations in physicians' hours worked, the volume of services provided, and physician income. To do so, I used the simulated method of moments estimator. The results show that the own-price substitution elasticities are large and significantly positive, reflecting a tendency for physicians to increase the volume of the service whose price increased the most. The cross-price substitution elasticities are also large but negative. Moreover, there is an income effect associated with fee increases. I used the estimated parameters of the structural model to simulate a general 32% increase in service prices. The results show that physicians would reduce their total hours worked (mean elasticity of -0.02) as well as their clinical hours worked (mean elasticity of -0.07). They would also reduce the volume of services provided (mean elasticity of -0.05). Third, I exploited the natural link between the income of a fee-for-service physician and his productivity to establish the productivity profile of physicians. To do so, I modified the model specification to account for the relationship between a physician's productivity and his experience. I estimate the income equation using unbalanced panel data, correcting for the non-random nature of missing observations with a selection model. The results suggest that the productivity profile is an increasing and concave function of experience. Moreover, this profile is robust to using effective experience (the quantity of services produced) as a control variable and to relaxing the parametric assumptions.
Furthermore, if a physician's experience increases by one year, his production of services increases by 1,003 Canadian dollars. I used the estimated parameters of the model to compute the replacement ratio: the number of inexperienced physicians needed to replace one experienced physician. This replacement ratio is 1.2.
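To make the replacement-ratio logic concrete: under a concave productivity profile, the ratio between the yearly output of an experienced physician and that of a new recruit gives the number of recruits needed to hold service supply constant. The Python sketch below is purely illustrative, assuming a hypothetical logarithmic profile rather than the profile estimated in the thesis.

    import numpy as np

    def productivity(experience_years, base=1.0, slope=0.35):
        # Hypothetical increasing, concave productivity profile
        # (illustrative stand-in for the estimated profile).
        return base + slope * np.log1p(experience_years)

    # Replacement ratio: output of one retiring, experienced physician
    # divided by the output of a newly recruited one.
    ratio = productivity(30) / productivity(1)
    print(f"replacement ratio ~ {ratio:.2f}")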
Abstract:
In this paper, the temperature of a pilot-scale batch reaction system is modeled for the design of a controller based on the explicit model predictive control (EMPC) strategy. Several mathematical models are developed from experimental data to describe the system behavior. The simplest reliable model obtained is a (1,1,1)-order ARX polynomial model, for which the EMPC controller has been designed. The resulting controller has reduced mathematical complexity and, given the successful simulation results, will be applied directly to the real control system in the next stage of the experimental framework.
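For orientation, an ARX model of the stated order relates the current output to one past output and one past input plus noise: y(t) = -a1*y(t-1) + b1*u(t-1) + e(t). A minimal Python simulation sketch with invented coefficients (the paper's identified parameter values are not reproduced here):

    import numpy as np

    # ARX(1,1,1): y[t] = -a1*y[t-1] + b1*u[t-1] + e[t]
    # Coefficients are illustrative, not the paper's identified values.
    a1, b1 = -0.95, 0.08      # pole near 1 mimics slow thermal dynamics
    rng = np.random.default_rng(0)

    n = 200
    u = np.ones(n)            # step in heating power (normalised input)
    y = np.zeros(n)           # reactor temperature deviation
    for t in range(1, n):
        e = 0.01 * rng.standard_normal()      # process/measurement noise
        y[t] = -a1 * y[t - 1] + b1 * u[t - 1] + e

    print(f"steady-state gain ~ {b1 / (1 + a1):.2f}")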
Abstract:
The thesis is an investigation of the principle of least effort (Zipf 1949 [1972]). The principle is simple (all effort should be least) and universal (it governs the totality of human behavior). Since the principle is also functional, the thesis adopts a functional theory of language as its theoretical framework, namely Natural Linguistics. The explanatory system of Natural Linguistics posits that higher principles govern preferences, which, in turn, manifest themselves as concrete, specific processes in a given language. The thesis' aim is therefore to investigate the principle of least effort on the basis of external evidence from English. The investigation falls into three strands: the principle itself, its application in articulatory effort, and its application in phonological processes. The structure of the thesis reflects this division of its broad aims. The first part presents the theoretical background (Chapters One and Two), the second part deals with the application of least effort in articulatory effort (Chapters Three and Four), and the third part discusses the principle of least effort in phonological processes (Chapters Five and Six). Chapter One serves as an introduction, examining various aspects of the principle of least effort such as its history, literature, operation and motivation. It overviews the various names which denote least effort, explains the origins of the principle and reviews the literature devoted to it in chronological order. The chapter also discusses the nature and operation of the principle, providing numerous examples of the principle at work. It emphasizes the universal character of the principle, drawing on both linguistic fields (low-level phonetic processes and language universals) and non-linguistic ones (physics, biology, psychology and cognitive science), showing that the principle governs human behavior and choices. Chapter Two provides the theoretical background of the thesis in terms of its framework and discusses the terms used in the thesis' title, i.e. hierarchy and preference. It justifies the selection of Natural Linguistics as the theoretical framework by outlining its major assumptions and demonstrating its explanatory power. As far as the concepts of hierarchy and preference are concerned, the chapter provides their definitions and reviews their various understandings via decision theories and linguistic preference-based theories. Since the thesis investigates the principle of least effort in language and speech, Chapter Three considers the articulatory aspect of effort. It reviews the notion of easy and difficult sounds and discusses the concept of articulatory effort, surveying its literature and its various understandings in chronological fashion. The chapter also presents the concept of articulatory gestures within the framework of Articulatory Phonology. Since the thesis' aim is to investigate the principle of least effort on the basis of external evidence, Chapters Four and Six provide that evidence: three experiments and text message studies (Chapter Four) and phonological processes in English (Chapter Six). Chapter Four contains evidence for the principle of least effort in articulation on the basis of the experiments, describing them in terms of their predictions and methodology.
In particular, it discusses the adopted measure of effort, established by means of the effort parameters, as well as their status. The statistical methods of the experiments are also clarified. The chapter reports the results of the experiments, presenting them graphically, and discusses their relation to the tested predictions. Chapter Four establishes a hierarchy of speakers' preferences with reference to articulatory effort (Figures 30, 31). Since the thesis investigates the principle of least effort in phonological processes, Chapter Five is devoted to the discussion of phonological processes in Natural Phonology. The chapter explains the general nature and motivation of processes as well as the development of processes in child language. It also discusses the organization of processes in terms of their typology and the order in which processes apply. The chapter characterizes the semantic properties of processes and overviews Luschützky's (1997) contribution to Natural Phonology with respect to processes, in terms of their typology and his incorporation of articulatory gestures into the concept of a process. Chapter Six investigates phonological processes. In particular, it addresses the issues of lenition/fortition definition and process typology by presenting the current approaches to both. Since the chapter concludes that no coherent definition of lenition/fortition exists, it develops alternative definitions. The chapter also revises the typology of phonological processes under effort management, an extended version of the principle of least effort. Chapter Seven concludes the thesis with a list of the concepts discussed, enumerates the proposals made in discussing them, and presents some questions for future research which have emerged in the course of the investigation. The chapter also specifies the extent to which the investigation of the principle of least effort is a meaningful contribution to phonology.
Abstract:
The aim of this thesis was to examine the effect of different knowledge management practices on learning, renewal and a firm's innovation capability. The work focuses in particular on knowledge management practices that promote learning and renewal in companies. Statistical methods, including factor analysis, correlation analysis and regression, were used to analyze survey data collected from 259 Finnish companies on their knowledge management practices and intangible capital. The analysis shows that several knowledge management practices have a positive effect on a firm's renewal and, through it, on innovation capability. Staff training, as well as the collection and application of best practices within the company, are positively associated with innovation capability. Staff training has the most significant direct effect on innovation capability, and this thesis argues that the greatest effect of providing training is the development of a learning-friendly culture in companies, rather than merely the improvement of task-related skills and knowledge. Staff training, best practices, and the exchange of knowledge and building of relationships through socialization have a positive effect on renewal capital. Based on the results, renewal capital plays a significant role in the emergence of innovations in companies. Renewal capital mediates the effect of training, best practices and possibly also socialization on innovation capability, and is thus a significant part of the emergence of innovations. Understanding the components of innovation capability can help managers and supervisors focus their attention on specific knowledge management practices to promote innovation in the company, instead of merely trying to influence the innovation process.
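A mediation pattern of the kind reported (renewal capital carrying part of the effect of training on innovation capability) is commonly checked with a pair of regressions. The Python sketch below runs on synthetic data; all variable names and coefficients are invented for illustration and are not the study's.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in data (the real survey covers 259 Finnish firms).
    rng = np.random.default_rng(42)
    n = 259
    training = rng.normal(size=n)
    renewal = 0.6 * training + rng.normal(scale=0.8, size=n)
    innovation = 0.5 * renewal + 0.1 * training + rng.normal(scale=0.8, size=n)
    df = pd.DataFrame({"training": training, "renewal_capital": renewal,
                       "innovation_capability": innovation})

    # Step 1: does training predict the mediator (renewal capital)?
    m1 = smf.ols("renewal_capital ~ training", data=df).fit()
    # Step 2: does the mediator carry the effect on innovation capability?
    m2 = smf.ols("innovation_capability ~ renewal_capital + training",
                 data=df).fit()

    # A 'training' coefficient in m2 well below its total effect is
    # consistent with mediation through renewal capital.
    print(m1.params, m2.params, sep="\n")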
Abstract:
The design demands on water and sanitation engineers are rapidly changing. The global population is set to rise from 7 billion to 10 billion by 2083. Urbanisation in developing regions is increasing at such a rate that a predicted 56% of the global population will live in an urban setting by 2025. Compounding these problems, the global water and energy crises are impacting the Global North and South alike. High-rate anaerobic digestion offers a low-cost, low-energy treatment alternative to the energy-intensive aerobic technologies used today. Widespread implementation, however, is hindered by the lack of capacity to engineer high-rate anaerobic digestion for the treatment of complex wastes such as sewage. This thesis utilises the Expanded Granular Sludge Bed bioreactor (EGSB) as a model system in which to study the ecology, physiology and performance of high-rate anaerobic digestion of complex wastes. The impacts of a range of engineered parameters, including reactor geometry, wastewater type, operating temperature and organic loading rate, are systematically investigated using lab-scale EGSB bioreactors. Next-generation sequencing of 16S amplicons is utilised as a means of monitoring microbial ecology. Microbial community physiology is monitored by means of specific methanogenic activity testing, and a range of physical and chemical methods are applied to assess reactor performance. Finally, the limit state approach is trialled as a method for testing the EGSB and is proposed as a standard method for biotechnology testing, enabling improved process control at full scale. The resulting data are assessed both qualitatively and quantitatively. Reactor design is demonstrated to significantly influence the spatial distribution of the underlying ecology and community physiology in lab-scale reactors, a vital finding both for researchers and for full-scale plant operators responsible for monitoring EGSB reactors. Recurrent trends in the data indicate that hydrogenotrophic methanogenesis dominates in high-rate anaerobic digestion at both full and lab scale when subject to engineered or operational stresses, including low temperature and variable feeding regimes. This is of relevance for those seeking to define new directions in the fundamental understanding of syntrophic and competitive relations in methanogenic communities, and also to design engineers determining operating parameters for full-scale digesters. The adoption of the limit state approach enabled the identification of biological indicators providing early warning of failure under high-solids loading, a vital insight for those currently working empirically towards the development of new biotechnologies at lab scale.
Abstract:
Fish meat has a particular chemical composition that gives it high nutritional value. However, this food is known to be highly perishable, an aspect often cited as a barrier to fish consumption. The southwestern Paraná region, mirroring the national picture, is characterized by low fish consumption, and one of the strategies aimed at increasing the consumption of this important protein source is to encourage the production of species other than tilapia. Within this context, knowledge of the meat characteristics is needed. The objective of this study was therefore to evaluate the technological potential of the pacu, grass carp and catfish species. First, the chemical composition and biometry of the three species were assessed under two distinct descriptive statistical approaches, and the discriminating capacity of the study was evaluated. Second, the effects of two different washing processes (acid and alkaline) were evaluated with respect to the removal of nitrogen compounds and pigments and the emulsifying ability of the proteins in the resulting protein base. Finally, in the third phase, the aim was to optimize a GC-MS methodology for the analysis of geosmin and MIB (2-methylisoborneol), the compounds responsible for the earthy and moldy taste/smell in freshwater fish. The results showed high protein and low lipid content for the three species. The comparison between means and medians revealed symmetry only for protein values and biometric measurements; lipid levels, when evaluated only by means, were overestimated for all species. Correlations between body measurements and fillet yield were low, regardless of the species analyzed, and the best prediction equation relates total weight to fillet weight. The biometric variables were the best discriminators among the species. In the evaluation of the washings, the acid and alkaline processes were found to be equally (p ≥ 0.05) and significantly (p ≤ 0.05) efficient in removing nitrogen compounds from the fish pulps. Regarding pigment extraction, assessed by the L*, a*, b* parameters, a removal efficiency was recorded only for the pacu species. When evaluated by the total color difference (ΔE) before and after washing, both processes (acid/alkaline) produced a ΔE perceptible to the naked eye for all species. The catfish presented the lightest meat, with the alkaline washing considered the most effective in removing pigments for this species. Protein bases obtained by alkaline washing have higher emulsifying capacity (p ≤ 0.05) than unwashed pulps and pulps washed by the acid process. The methodology applied for the quantification of MIB and geosmin showed that the extraction and purification of the analytes had low recovery, and future studies should be developed for the identification and quantification of MIB and geosmin in fish samples.
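For reference, the total color difference used here is conventionally computed from the CIELAB coordinates as ΔE = sqrt((ΔL*)^2 + (Δa*)^2 + (Δb*)^2) (the CIE76 formula), with values above roughly 3 generally taken as visible to the naked eye. A minimal Python sketch with invented before/after readings:

    import math

    def delta_e(lab_before, lab_after):
        # CIE76 total color difference between two CIELAB readings.
        return math.sqrt(sum((after - before) ** 2
                             for before, after in zip(lab_before, lab_after)))

    # Invented colorimeter readings (L*, a*, b*), not the study's data.
    before = (52.1, 3.4, 10.8)   # fillet before washing
    after = (58.7, 2.1, 8.9)     # fillet after washing
    print(f"dE = {delta_e(before, after):.1f}")  # above ~3: visible change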
Abstract:
The opportunity to produce microalgal biomass has attracted interest because of the various uses it can have, whether in bioenergy production, as a food source, or as a product of carbon dioxide biofixation. In general, large-scale production of cyanobacteria and microalgae is monitored through offline physicochemical analyses. In this context, the objective of this work was to monitor the cell concentration in a raceway photobioreactor for microalgal biomass production using digital data acquisition and process control techniques, through the inline acquisition of illuminance, biomass concentration, temperature and pH data. To this end, it was necessary to build a software-based sensor capable of determining microalgal biomass concentration from optical measurements of the intensity of scattered monochromatic radiation, and to develop a mathematical model of microalgal biomass production on the microcontroller, using a natural computing algorithm to fit the model. An autonomous system for recording cultivation data was designed, built and tested during outdoor pilot-scale cultivations of Spirulina sp. LEB 18. A biomass concentration sensor based on measuring transmitted radiation was tested. In a second stage, an optical sensor of Spirulina sp. LEB 18 biomass concentration, based on measuring the intensity of radiation scattered by the cyanobacterial suspension, was designed, built and tested in a laboratory experiment under controlled conditions of light, temperature and biomass suspension flow. From the light-scattering measurements, a neuro-fuzzy inference system was built to serve as a software sensor of biomass concentration in the culture. Finally, from the biomass concentrations over time, the use of the Arduino platform for empirical modeling of growth kinetics with the Verhulst equation was explored. The measurements from the optical sensor based on the intensity of monochromatic radiation transmitted through the suspension, used under outdoor conditions, showed low correlation between biomass concentration and radiation, even for concentrations below 0.6 g/L. When the optical scattering of the culture suspension was investigated, monochromatic radiation at 530 nm showed a linearly increasing relationship with concentration at angles of 45° and 90°, with a coefficient of determination of 0.95 in both cases. It was possible to build a software-based biomass concentration sensor using the combined information from the radiation scattered at 45° and 135°, with a coefficient of determination of 0.99. It is feasible to simultaneously perform inline determination of Spirulina cultivation process variables and empirical kinetic modeling of the microorganism's growth through the Verhulst equation on an Arduino microcontroller.
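The Verhulst (logistic) equation mentioned above models biomass growth as dX/dt = u*X*(1 - X/Xmax), whose closed-form solution is light enough to fit on a microcontroller. A brief Python illustration of fitting that solution to concentration-time data, using invented measurements rather than the cultivation data of this work:

    import numpy as np
    from scipy.optimize import curve_fit

    def verhulst(t, x0, mu, xmax):
        # Closed-form logistic growth:
        # X(t) = xmax / (1 + (xmax/x0 - 1) * exp(-mu*t))
        return xmax / (1.0 + (xmax / x0 - 1.0) * np.exp(-mu * t))

    # Invented biomass concentrations (g/L) over days, for illustration.
    t = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)
    x = np.array([0.15, 0.24, 0.38, 0.55, 0.72, 0.83, 0.88])

    (x0, mu, xmax), _ = curve_fit(verhulst, t, x, p0=[0.1, 0.3, 1.0])
    print(f"mu ~ {mu:.2f} 1/day, Xmax ~ {xmax:.2f} g/L")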
Abstract:
Currently, decision analysis in production processes involves a level of detail in which the problem is subdivided and analyzed from different, conflicting points of view. Multi-criteria analysis has become an important tool supporting assertive decisions about the production process, and it has been incorporated into various areas of production engineering through the application of multi-criteria methods to problems of the productive sector. This research presents a statistical study of the use of multi-criteria methods in the areas of Production Engineering, in which 935 papers were filtered from 20,663 publications in scientific journals, considering publication quality based on the impact factor published by the JCR between 2010 and 2015. Descriptive statistics are used to summarize the volume of applications of the methods. Relevant results were found regarding which advanced methods are being applied and in which areas of Production Engineering. This information may support researchers preparing a multi-criteria application, making it possible to check on which problems, and how often, other authors have used multi-criteria methods.
Abstract:
When designing systems that are complex, dynamic and stochastic in nature, simulation is generally recognised as one of the best design support technologies, and a valuable aid in the strategic and tactical decision making process. A simulation model consists of a set of rules that define how a system changes over time, given its current state. Unlike analytical models, a simulation model is not solved but is run, and the changes of system states can be observed at any point in time. This provides an insight into system dynamics rather than just predicting the output of a system based on specific inputs. Simulation is not a decision making tool but a decision support tool, allowing better informed decisions to be made. Due to the complexity of the real world, a simulation model can only be an approximation of the target system. The essence of the art of simulation modelling is abstraction and simplification: only those characteristics that are important for the study and analysis of the target system should be included in the simulation model. The purpose of simulation is either to better understand the operation of a target system, or to make predictions about a target system's performance. It can be viewed as an artificial white-room which allows one to gain insight, and also to test new theories and practices, without disrupting the daily routine of the focal organisation. What one can expect to gain from a simulation study is well summarised by FIRMA (2000): if the theory that has been framed about the target system holds, and if this theory has been adequately translated into a computer model, the model allows one to answer questions such as:
· Which kind of behaviour can be expected under arbitrarily given parameter combinations and initial conditions?
· Which kind of behaviour will a given target system display in the future?
· Which state will the target system reach in the future?
The required accuracy of the simulation model very much depends on the type of question one is trying to answer. To respond to the first question, the simulation model needs to be an explanatory model, which requires less data accuracy. In comparison, the simulation model required to answer the latter two questions has to be predictive in nature and therefore needs highly accurate input data to achieve credible outputs. These predictions involve showing trends rather than giving precise and absolute predictions of the target system's performance. The numerical results of a simulation experiment on their own are most often not very useful and need to be rigorously analysed with statistical methods. These results then need to be considered in the context of the real system and interpreted in a qualitative way to make meaningful recommendations or compile best practice guidelines. One needs a good working knowledge of the behaviour of the real system to be able to fully exploit the understanding gained from simulation experiments. The goal of this chapter is to introduce the newcomer to a topic that we think is a valuable addition to the toolset of analysts and decision makers. We give a summary of information gathered from the literature and of the experiences we have made first hand during the last five years, while obtaining a better understanding of this exciting technology. We hope that this will help you to avoid some of the pitfalls that we have unwittingly encountered.
Section 2 is an introduction to the different types of simulation used in Operational Research and Management Science with a clear focus on agent-based simulation. In Section 3 we outline the theoretical background of multi-agent systems and their elements to prepare you for Section 4 where we discuss how to develop a multi-agent simulation model. Section 5 outlines a simple example of a multi-agent system. Section 6 provides a collection of resources for further studies and finally in Section 7 we will conclude the chapter with a short summary.
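To make "a set of rules that define how a system changes over time, given its current state" concrete, here is a minimal agent-based simulation loop in the spirit of the chapter; the agents, update rule and parameters are invented for illustration and are not taken from the chapter.

    import random

    class Agent:
        # A deliberately simple agent whose state is a level of activity.
        def __init__(self):
            self.activity = random.random()

        def step(self, others):
            # Rule: drift towards the mean activity of the other agents,
            # plus a small random disturbance (the stochastic element).
            mean = sum(o.activity for o in others) / len(others)
            self.activity += 0.1 * (mean - self.activity) + random.gauss(0, 0.02)

    random.seed(1)
    agents = [Agent() for _ in range(50)]
    for t in range(100):              # advance simulated time step by step
        for a in agents:
            a.step([o for o in agents if o is not a])

    print(f"mean activity after 100 steps: "
          f"{sum(a.activity for a in agents) / len(agents):.3f}")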
Abstract:
In the early 1990s, documentalists began to take an interest in applying mathematics and statistics to bibliographic units. F. J. Coles and Nellie B. Eales carried out the first such study in 1917, analyzing a group of document titles by country of origin (White, p. 35). In 1923, E. Wyndham Hulme was the first person to use the term "statistical bibliography", proposing the use of statistical methods to obtain parameters that describe the process of written communication and the nature and course of the development of a discipline. To that end, he began by counting a number of documents and analyzing various facets of the written communication employed in them (Ferrante, p. 201). In a paper written in 1969, Alan Pritchard proposed the term bibliometrics to replace Hulme's "statistical bibliography", arguing that the latter is ambiguous, not very descriptive, and liable to be confused with pure statistics or statistics of bibliographies. He defined bibliometrics as the application of mathematics and statistical methods to books and other documents (p. 348-349), and the term has been in use ever since.