983 results for Common Ectomycorrhizal Networks (CMNs)
Abstract:
In this paper we explore how recent technologies can improve the security of optical networks. In particular, we study how to use quantum key distribution (QKD) in common optical network infrastructures and propose a method to overcome its distance limitations. QKD is the first technology offering information-theoretic secret-key distribution that relies only on the fundamental principles of quantum physics. Point-to-point QKD devices have reached a mature industrial state; however, these devices are severely limited in distance, since signals at the quantum level (e.g., single photons) are highly affected by losses in the communication channel and intermediate devices. To overcome this limitation, intermediate nodes (i.e., repeaters) are used. Both quantum-regime and trusted, classical repeaters have been proposed in the QKD literature, but only the latter can be implemented in practice. As a novelty, we propose here a new QKD network model based on the use of not fully trusted intermediate nodes, referred to as weakly trusted repeaters. This approach forces the attacker to simultaneously break several paths to gain access to the exchanged key, thus significantly improving the security of the network. We formalize the model using network codes and provide real scenarios that allow users to exchange secure keys over metropolitan optical networks using only passive components. Moreover, the theoretical framework allows one to extend these scenarios not only to accommodate more complex trust constraints, but also to consider robustness and resiliency constraints on the network.
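The simplest instance of this multipath idea can be made concrete in a few lines: if independent QKD keys are established over n node-disjoint paths, their XOR is an end-to-end secret that an eavesdropper can recover only by compromising every path. This is a minimal sketch, assuming plain XOR combining rather than the paper's full network-coding construction.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def combine_path_keys(path_keys):
    """XOR the per-path QKD keys into one end-to-end secret.

    Each element is a key agreed over one node-disjoint path through
    weakly trusted repeaters; an attacker recovers the combined key
    only by compromising every path simultaneously.
    """
    return reduce(xor_bytes, path_keys)

# Three disjoint paths, each delivering a fresh 256-bit QKD key.
path_keys = [secrets.token_bytes(32) for _ in range(3)]
end_to_end_key = combine_path_keys(path_keys)
print(end_to_end_key.hex())
```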
Abstract:
The possibility of using quantum systems to process and transmit information has driven the emergence of quantum information technologies, e.g., quantum key distribution. Although promising, their use outside the laboratory is currently too costly and complicated. In this work we show how to deploy them in optical telecommunication networks. By using an existing, pervasive infrastructure and sharing it with other signals, both classical and quantum, the cost drops drastically and the technology reaches a wider public. We begin by integrating quantum signals into the most widely used types of passive optical networks, chosen for their simplicity and reach to end users. We then extend this study by proposing a metropolitan optical network design based on wavelength-division multiplexing to multiplex and address the signals, and verify its operation with a prototype. Next, we study the distribution of entangled photon pairs among the users of such a network, with the aim of covering more technologies. To increase the user capacity, we redesign the backbone, changing both the topology and the technology used at the nodes. The result is a quantum metropolitan network that scales to any number of users, at the price of greater complexity and cost. Finally, we address the distance limitation problem. The proposed solution is based on network coding and, by using several paths and nodes, allows the amount of information held by each node, and hence the trust placed in it, to be modulated.

ABSTRACT: The potential use of quantum systems to process and transmit information has driven the emergence of quantum information technologies such as quantum key distribution. Despite looking promising, their use outside the laboratory is limited, since they are a very delicate technology owing to the need to work at the single-quantum level. In this work we show how to use them in optical telecommunication networks. Using an existing infrastructure and sharing it with other signals, both quantum and conventional, dramatically reduces the cost and allows the technology to reach a large group of users. We first integrate quantum signals into the most common passive optical networks, chosen for their simplicity and reach to end users. Then, we extend this study by proposing a quantum metropolitan optical network based on wavelength-division multiplexing and wavelength addressing, verifying its mode of operation in a testbed. Later, we study the distribution of entangled photon pairs between the users of the network with the objective of covering as many different technologies as possible. We further explore other network architectures, changing the topology and the technology used at the nodes. The resulting network scales better at the cost of a more complex and expensive infrastructure. Finally, we tackle the distance limitation problem of quantum communications. The solution offered is based on network coding and allows, using multiple paths and nodes, the information leaked to each node, and thus the degree of trust placed in it, to be modulated.
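Whether a quantum channel survives a shared metro link comes down to a loss budget. The sketch below illustrates the arithmetic; the attenuation and component figures are illustrative assumptions, not measurements from this work.

```python
# Illustrative loss budget for a quantum channel sharing metro fibre.
# All figures are assumptions for the sketch, not measured values.
FIBRE_LOSS_DB_PER_KM = 0.21   # typical for standard SMF near 1550 nm
COMPONENT_LOSSES_DB = {
    "mux/demux (AWG)": 3.0,
    "add/drop filter": 1.0,
    "connectors/splices": 1.0,
}

def link_loss_db(length_km: float) -> float:
    """Total loss: fibre attenuation plus fixed component losses."""
    return FIBRE_LOSS_DB_PER_KM * length_km + sum(COMPONENT_LOSSES_DB.values())

def transmittance(loss_db: float) -> float:
    """Fraction of single photons surviving a given loss in dB."""
    return 10 ** (-loss_db / 10)

for km in (10, 25, 40):
    loss = link_loss_db(km)
    print(f"{km:>3} km: {loss:4.1f} dB total, "
          f"channel transmittance {transmittance(loss):.3%}")
```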
Abstract:
Current QKD designs try to keep the quantum channel as error free as possible by using a separate physical medium for this purpose. In the most common case, this means the exclusive use of an optical fiber for the quantum channel, precluding its use for any other purpose. In current optical networks, the fiber is the single most expensive element, and this poses a major problem from a cost and availability point of view. Sharing the fiber is thus mandatory for the widespread adoption of QKD. The objective of this communication is to propose a general scheme and present some preliminary measurements of a metropolitan area network (MAN) designed to multiplex of the order of 64 addressable quantum channels and the associated QKD classical service signals on a single dark fibre. It uses as many existing components and as much existing infrastructure as possible in an attempt to simultaneously lower most of the practical barriers to the adoption of QKD.
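For concreteness, a 64-channel wavelength-addressed design could draw on the standard ITU-T DWDM grid. The sketch below computes channel frequencies and wavelengths; the 50 GHz spacing and channel range are assumptions chosen so that 64 channels sit inside the C band, not parameters taken from the paper.

```python
C = 299_792_458  # speed of light, m/s

def itu_channel(n: int, spacing_thz: float = 0.05):
    """Frequency (THz) and wavelength (nm) of ITU-T grid channel n.

    The 50 GHz grid is defined as 193.1 THz + n * 0.05 THz, n integer.
    """
    f_thz = 193.1 + n * spacing_thz
    wavelength_nm = C / (f_thz * 1e12) * 1e9
    return f_thz, wavelength_nm

# An assumed plan: 64 channels, n = -20 .. 43, all inside the C band.
for n in (-20, 0, 43):
    f, lam = itu_channel(n)
    print(f"n = {n:+3d}: {f:7.3f} THz = {lam:8.3f} nm")
```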
Abstract:
We study how to use quantum key distribution (QKD) in common optical network infrastructures and propose a method to overcome its distance limitations. QKD is the first technology offering information-theoretic secret-key distribution that relies only on the fundamental principles of quantum physics. Point-to-point QKD devices have reached a mature industrial state; however, these devices are severely limited in distance, since signals at the quantum level (e.g., single photons) are highly affected by losses in the communication channel and intermediate devices. To overcome this limitation, intermediate nodes (i.e., repeaters) are used. Both quantum-regime and trusted, classical repeaters have been proposed in the QKD literature, but only the latter can be implemented in practice. As a novelty, we propose here a new QKD network model based on the use of not fully trusted intermediate nodes, referred to as weakly trusted repeaters. This approach forces the attacker to simultaneously break several paths to gain access to the exchanged key, thus significantly improving the security of the network. We formalize the model using network codes and provide real scenarios that allow users to exchange secure keys over metropolitan optical networks using only passive components.
Abstract:
A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, specifically by their multiplication rules. This inspiration makes the model a syntactic abstraction of the way cells manipulate information. In particular, a NEP defines a theoretical computing machine able to solve NP-complete problems efficiently in terms of time. In practice, NEPs simulated on conventional computers are expected to solve complex real problems (requiring high scalability) at the price of high space complexity. In the NEP model, cells are represented by words encoding their DNA sequences. Informally, at any moment of the system's computation, its evolutionary state is described as a collection of words, each of which represents one cell. These fixed moments of evolution are called configurations. As in the biological model, words (cells) mutate and divide according to simple bio-operations, but only the fit words (much as in natural selection) are kept for the next configuration. As a computing tool, a NEP defines a parallel and distributed architecture for symbolic processing; in other words, a network of language processors. Since the model was proposed to the scientific community in 2001, multiple variants have been developed, and their properties regarding computational completeness, efficiency and universality have been widely studied and proved. Today, therefore, we can consider the theoretical NEP model to have reached maturity. The main motivation of this End of Degree Project is to propose a practical approach that bridges the gap between the theoretical NEP model and a real implementation running on high-performance computing platforms, in order to solve the complex problems that today's society demands. So far, the tools developed to simulate the NEP model, although correct and with satisfactory results, are usually tied to their execution environment, whether through specific hardware or problem-specific implementations. In this context, the fundamental purpose of this work is the development of Nepfix, a generic and extensible tool for executing any algorithm of a NEP model (or any of its variants), either locally, as a traditional application, or distributed using cloud services. Nepfix is a software application developed over 7 months that is currently in its second iteration, the prototype phase having been left behind. Nepfix has been designed as a modular, self-contained application written in Java 8; that is, it does not require a specific execution environment (any Java virtual machine is a valid container). Nepfix consists of two components or modules. The first module corresponds to the execution of a NEP and is therefore the simulator. Its development takes into account the current state of the model, i.e., the definitions of the most common processors and filters in the NEP family of models.
Additionally, this component offers flexibility of execution: the simulator's capabilities can be extended without modifying Nepfix, via a scripting language. As part of this component, a standard representation of the NEP model based on the JSON format has also been defined, together with a proposed representation and encoding of words, needed for communication between servers. A further important characteristic of this component is that it can be considered a standalone application, so the distribution and execution strategies are fully independent. The second module corresponds to the distribution of Nepfix in the cloud. This development is the result of an R&D process with a considerable scientific component. The development of this module is worth highlighting not only for the expected practical results, but also for the research process that this new perspective on executing natural computing systems entails. The main characteristic of cloud applications is that they are managed by the platform and are normally encapsulated in a container. In the case of Nepfix, this container is a Spring application that uses the HTTP or AMQP protocol to communicate with the other instances. As added value, Nepfix addresses two different implementation perspectives (developed in two different iterations) of the distribution and execution model, which have a very significant impact on the capabilities and restrictions of the simulator. Specifically, the first iteration uses an asynchronous execution model. In this asynchronous perspective, the components of the NEP network (processors and filters) are treated as elements that react to the need to process a word. This implementation is an optimization of a common topology in the NEP model that makes it possible to use cloud tools to achieve transparent scaling (with respect to load balancing between processors), but it produces unwanted effects such as nondeterminism in the order of the results or the impossibility of efficiently distributing strongly interconnected networks. The second iteration, in turn, corresponds to the synchronous execution model. The elements of a NEP network follow a start-compute-synchronize cycle until the problem has been solved. This synchronous perspective faithfully represents the theoretical NEP model, but the synchronization process is costly and requires additional infrastructure; specifically, a RabbitMQ message queue server is required. Nevertheless, in this perspective the benefits outweigh the drawbacks for sufficiently large problems, since distribution is immediate (there are no restrictions), although the scaling process is not trivial. In short, the concept of Nepfix as a computational framework can be considered satisfactory: the technology is viable, and the first results confirm that the characteristics originally sought have been achieved. Many fronts remain open for future research. This document proposes some approaches to the problems identified, such as error recovery and the dynamic division of a NEP into different subdomains.
Other problems beyond the scope of this project, in turn, remain open for future development, for example the standardization of word representation and optimizations in the execution of the synchronous model. Finally, some preliminary results of this End of Degree Project were recently presented as a scientific paper at the "International Work-Conference on Artificial Neural Networks (IWANN) 2015" and published in "Advances in Computational Intelligence", volume 9094 of Springer International Publishing's "Lecture Notes in Computer Science". This confirms that, more than an End of Degree Project, this work is only the beginning of an effort that may have a greater impact on the scientific community.

Abstract: A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, which might model some properties of evolving cell communities at the syntactical level. A NEP defines theoretical computing devices able to solve NP-complete problems in an efficient manner. In this model, cells are represented by words which encode their DNA sequences. Informally, at any moment of time, the evolutionary system is described by a collection of words, where each word represents one cell. Cells belong to species, and their community evolves according to mutations and division, which are defined by operations on words. Only those cells represented by a word in a given set of words, called the genotype space of the species, are accepted as surviving (correct) ones. This feature is analogous to the natural process of evolution. Formally, a NEP is based on an architecture for parallel and distributed processing; in other words, a network of language processors. Since the date when the NEP was proposed, several extensions and variants have appeared, engendering a new set of models named Networks of Bio-inspired Processors (NBP). During this time, several works have proved the computational power of NBP; specifically, their efficiency, universality, and computational completeness have been thoroughly investigated. Therefore, we can say that the NEP model has reached its maturity. The main motivation for this End of Grade project (EOG project in short) is to propose a practical approximation that closes the gap between the theoretical NEP model and a practical implementation on high-performance computational platforms, in order to solve some of the high-complexity problems society requires today. Up until now, the tools developed to simulate NEPs, while correct and successful, have usually been tightly coupled to the execution environment, using specific software frameworks (Hadoop) or direct hardware usage (GPUs). Within this context, the main purpose of this work is the development of Nepfix, a generic and extensible tool that aims to execute algorithms based on the NEP model and compatible variants either locally, like a traditional application, or in a distributed cloud environment. Nepfix as an application was developed during a 7-month cycle and is undergoing its second iteration now that the prototype period is over. Nepfix is designed as a modular, self-contained application written in Java 8; that is, no additional external dependencies are required and it does not rely on a specific execution environment: any JVM is a valid container. Nepfix is made of two components or modules. The first module corresponds to the NEP execution and therefore the simulation.
During development, the current state of the theoretical model was used as a reference, including the most common filters and processors. Additionally, extensibility is provided by the use of Python as a scripting language to run custom logic. Along with the simulation, a definition language for NEPs has been defined based on JSON, as well as a mechanism to represent words and their possible manipulations. The NEP simulator is isolated from distribution and, as mentioned before, different applications that include it as a dependency are possible; the distribution of NEPs is an example of this. The second module corresponds to executing Nepfix in the cloud. The development carried a heavy R&D process, since this front had not been explored by other research groups until now. It is important to point out that the development of this module is not focused on results at this point in time; instead we focus on feasibility and discovery of this new perspective for executing natural computing systems, and NEPs specifically. The main property of cloud applications is that they are managed by the platform and encapsulated in a container. For Nepfix, a Spring application becomes the container, and the HTTP or AMQP protocols are used for communication with the rest of the instances. Different execution perspectives were studied; namely, asynchronous and synchronous models were developed for solving different kinds of problems using NEPs. Different limitations and restrictions manifest in both models and are explored in detail in the respective chapters. In conclusion, we can consider Nepfix as a computational framework successful: cloud technology is ready for the challenge, and the first results reassure us that the properties the Nepfix project pursued were met. Many investigation branches are left open for future work. In this EOG, implementation guidelines are proposed for some of them, like error recovery or dynamic NEP splitting. On the other hand, other interesting problems that were not in the scope of this project were identified during development, like word representation standardization or NEP model optimizations. As a confirmation that the results of this work can be useful to the scientific community, a preliminary version of this project was published in the International Work-Conference on Artificial Neural Networks (IWANN) in May 2015. Development has not stopped since that point, and while Nepfix in its current state cannot be considered a final product, the most relevant ideas, possible problems and solutions produced during the seven-month development cycle are worth gathering and presenting, giving meaning to this EOG work.
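The evolve-filter-communicate cycle that both the asynchronous and synchronous modes implement can be illustrated with a toy synchronous simulator. This is a minimal sketch with simplified substitution rules and symbol-set filters; the names and rule semantics are illustrative, not Nepfix's actual API or its JSON definition language.

```python
from dataclasses import dataclass, field

@dataclass
class Processor:
    """One node: substitution rules plus input/output symbol filters."""
    rules: list            # (old, new) substitutions, applied one at a time
    out_allowed: set       # a word may leave only if made of these symbols
    in_allowed: set        # a word may enter only if made of these symbols
    words: set = field(default_factory=set)

    def evolve(self):
        produced = set()
        for w in self.words:
            for old, new in self.rules:
                if old in w:
                    produced.add(w.replace(old, new, 1))
        self.words |= produced

    def emit(self):
        out = {w for w in self.words if set(w) <= self.out_allowed}
        self.words -= out
        return out

def step(net, edges):
    """One synchronous cycle: all nodes evolve, then words move along edges.
    Words that pass an output filter but no input filter are lost, as in
    the standard model."""
    for p in net.values():
        p.evolve()
    for src, dst in edges:
        moving = {w for w in net[src].emit() if set(w) <= net[dst].in_allowed}
        net[dst].words |= moving

# Toy network: node "a2b" rewrites a->b, then ships pure-b words to "sink".
net = {
    "a2b": Processor(rules=[("a", "b")], out_allowed={"b"}, in_allowed={"a"},
                     words={"aab"}),
    "sink": Processor(rules=[], out_allowed=set(), in_allowed={"b"}),
}
for _ in range(3):
    step(net, [("a2b", "sink")])
print(net["sink"].words)  # {'bbb'} once every 'a' has mutated
```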
Abstract:
The appearance of large geolocated communication datasets has recently increased our understanding of how social networks relate to their physical space. However, many recurrently reported properties, such as the spatial clustering of network communities, have not yet been systematically tested at different scales. In this work we analyze the social network structure of over 25 million phone users from three countries at three different scales: country, province and city. We consistently find that this last, urban scenario differs significantly from common knowledge about social networks. First, the emergence of a giant component in the network seems to be controlled by whether or not the network spans the entire urban border, almost independently of the population or geographic extension of the city. Second, urban communities are much less geographically clustered than expected. These two findings shed new light on the widely studied searchability of self-organized networks. By exhaustive simulation of decentralized search strategies we conclude that urban networks are searchable not through geographical proximity, as their country-wide counterparts are, but through a homophily-driven community structure.
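The decentralized-search experiments mentioned here can be illustrated with greedy routing on a synthetic spatial graph: each node forwards the message to its neighbour closest to the target and fails at a local minimum. A sketch, assuming a random geometric graph as a stand-in for the phone network (a community-homophily variant would swap the distance function for community overlap):

```python
import math, random
import networkx as nx

random.seed(1)
G = nx.random_geometric_graph(300, 0.12, seed=1)   # synthetic stand-in
pos = nx.get_node_attributes(G, "pos")

def dist(u, v):
    (x1, y1), (x2, y2) = pos[u], pos[v]
    return math.hypot(x1 - x2, y1 - y2)

def greedy_search(src, dst, max_hops=100):
    """Forward to the neighbour geographically closest to the target;
    fail when no neighbour improves on the current node."""
    cur, hops = src, 0
    while cur != dst and hops < max_hops:
        nxt = min(G[cur], key=lambda v: dist(v, dst), default=None)
        if nxt is None or dist(nxt, dst) >= dist(cur, dst):
            return None            # stuck in a local minimum
        cur, hops = nxt, hops + 1
    return hops if cur == dst else None

trials = [(random.randrange(300), random.randrange(300)) for _ in range(500)]
results = [greedy_search(s, t) for s, t in trials if s != t]
done = [h for h in results if h is not None]
print(f"delivered {len(done)}/{len(results)}, "
      f"mean hops {sum(done) / max(len(done), 1):.1f}")
```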
Abstract:
The optimal integration of work and its interaction with heat can represent large energy savings in industrial plants. This paper introduces a new optimization model for the simultaneous synthesis of work exchange networks (WENs), with heat integration for the optimal pressure recovery of gaseous process streams. The proposed approach to WEN synthesis is analogous to the well-known problem of synthesis of heat exchanger networks (HENs). Thus, there is work exchange between high-pressure (HP) and low-pressure (LP) streams, achieved by pressure manipulation equipment running on common axes. The model allows the use of several single-shaft-turbine-compressor (SSTC) units, as well as stand-alone compressors, turbines and valves. Helper motors and generators are used to respond to any shortage or excess of energy. Moreover, between the WEN stages the streams are sent to the HEN to promote thermal recovery, aiming to enhance the work integration. A multi-stage superstructure is proposed to represent the process. The WEN superstructure is optimized in a mixed-integer nonlinear programming (MINLP) formulation and solved with the GAMS software, with the goal of minimizing the total annualized cost. Three examples are presented to verify the accuracy of the proposed method. In all case studies, the heat integration between WEN stages is essential to improve the pressure recovery and to reduce the total costs involved in the process.
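The energy balance behind an SSTC unit is standard thermodynamics: isentropic, ideal-gas turbine and compressor work on a shared shaft, with a helper motor or generator absorbing the mismatch. A sketch with illustrative flows and conditions, not values from the case studies:

```python
R = 8.314  # J/(mol*K)

def isentropic_work(t_in_k, p_in, p_out, gamma=1.4):
    """Ideal-gas adiabatic shaft work per mole (J/mol).

    Positive = work consumed (compression), negative = work produced
    (expansion). gamma is the heat-capacity ratio, assumed constant.
    """
    ratio = (p_out / p_in) ** ((gamma - 1.0) / gamma)
    return gamma / (gamma - 1.0) * R * t_in_k * (ratio - 1.0)

# One SSTC axis: an LP stream is compressed while an HP stream expands in
# a turbine on the same shaft; a helper motor/generator closes the balance.
n_lp, n_hp = 50.0, 40.0                               # mol/s, illustrative
w_comp = n_lp * isentropic_work(300.0, 1e5, 4e5)      # compressor duty, W
w_turb = n_hp * isentropic_work(450.0, 6e5, 1.5e5)    # turbine duty, W (<0)
net = w_comp + w_turb
kind = "helper motor supplies" if net > 0 else "generator recovers"
print(f"compressor {w_comp/1e3:.0f} kW, turbine {w_turb/1e3:.0f} kW -> "
      f"{kind} {abs(net)/1e3:.0f} kW")
```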
Abstract:
This paper introduces a new optimization model for the simultaneous synthesis of heat and work exchange networks. The work integration is performed in the work exchange network (WEN), while the heat integration is carried out in the heat exchanger network (HEN). In the WEN synthesis, streams at high pressure (HP) and low pressure (LP) are subjected to pressure manipulation stages, via turbines and compressors running on common shafts and stand-alone equipment. The model allows the use of several single-shaft-turbine-compressor (SSTC) units, as well as helper motors and generators to respond to any shortage and/or excess of energy, respectively, in the SSTC axes. The heat integration of the streams occurs in the HEN between each WEN stage. Thus, as the inlet and outlet stream temperatures in the HEN depend on the WEN design, they must be considered as optimization variables. The proposed multi-stage superstructure is formulated as a mixed-integer nonlinear programming (MINLP) problem, in order to minimize the total annualized cost, composed of capital and operational expenses. A case study is conducted to verify the accuracy of the proposed approach. The results indicate that the heat integration between the WEN stages is essential to enhance the work integration and to reduce the total cost of the process, due to the need for a smaller amount of hot and cold utilities.
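For reference, the total annualized cost that both of these formulations minimize typically has the generic shape below: annualized capital charges for the selected units plus utility operating costs. The symbols are generic placeholders; the papers' actual cost correlations and constraint sets are problem-specific and not reproduced here.

```latex
\min \; \mathrm{TAC}
  \;=\; f_{\mathrm{ann}} \sum_{u \in \mathcal{U}} \bigl( a_u\, y_u + b_u\, S_u^{\,c_u} \bigr)
  \;+\; \sum_{k \in \mathcal{K}} c_k^{\mathrm{ut}}\, Q_k
```

where y_u is the binary selection of unit u, S_u its size, a_u, b_u, c_u cost coefficients, f_ann the annualization factor, and Q_k the hot/cold utility duties.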
Abstract:
Food policy is one of the most regulated policy fields at the EU level. ‘Unholy alliances’ are collaborative patterns that temporarily bring together antagonistic stakeholders behind a common cause. This paper deals with such ‘transversal’ co-operations between citizens’ groups (NGOs, consumer associations…) and economic stakeholders (food industries, retailers…), focusing on their ambitions and consequences. It builds on two case studies that enable a more nuanced view of the prospects for the development of transversal networks at the EU level. The main findings are that (i) the rationale behind the adoption of collaborative partnerships actually comes from a case-by-case cost/benefit analysis leading to hopes of improved access to institutions; (ii) membership of a collaborative network leads to a learning process closely linked to the network’s performance; and (iii) coalitions can meet with a better reception (rather than automatically better access) depending on several factors independent of the stakeholders themselves.
Abstract:
Includes bibliographical references (p. 27).
Abstract:
Networks of interactions evolve in many different domains. They tend to have topological characteristics in common, possibly due to common factors in the way the networks grow and develop. It has been recently suggested that one such common characteristic is the presence of a hierarchically modular organization. In this paper, we describe a new algorithm for the detection and quantification of hierarchical modularity, and demonstrate that the yeast protein-protein interaction network does have a hierarchically modular organization. We further show that such organization is evident in artificial networks produced by computational evolution using a gene duplication operator, but not in those developing via preferential attachment of new nodes to highly connected existing nodes.
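The two growth mechanisms compared here can be contrasted in a few lines: a duplication model (copy a node, retain each edge with some probability) versus preferential attachment. A sketch with an arbitrary retention probability and a common variant that links the duplicate to its parent; duplication growth typically yields much higher clustering, one signature of hierarchical modularity:

```python
import random
import networkx as nx

random.seed(7)

def duplication_graph(n, p_keep=0.4):
    """Grow a graph by gene duplication: copy a random node, retain each
    of its edges independently with probability p_keep, link to parent."""
    G = nx.complete_graph(3)
    while G.number_of_nodes() < n:
        target = random.choice(list(G.nodes))
        new = G.number_of_nodes()
        kept = [v for v in G[target] if random.random() < p_keep]
        G.add_node(new)
        G.add_edges_from((new, v) for v in kept)
        G.add_edge(new, target)
    return G

dup = duplication_graph(2000)
pa = nx.barabasi_albert_graph(2000, 2, seed=7)   # preferential attachment
for name, G in (("duplication", dup), ("pref. attachment", pa)):
    print(f"{name:18s} avg clustering = {nx.average_clustering(G):.3f}")
```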
Abstract:
The generating functional method is employed to investigate the synchronous dynamics of Boolean networks, providing an exact result for the system dynamics via a set of macroscopic order parameters. The topology of the networks studied and their constituent Boolean functions represent the system's quenched disorder and are sampled from a given distribution. The framework accommodates a variety of topologies and Boolean function distributions and can be used to study both the noisy and noiseless regimes; it enables one to calculate correlation functions at different times that are inaccessible via commonly used approximations. It is also used to determine conditions for the annealed approximation to be valid, to explore phases of the system under different levels of noise, and to obtain results for models with strong memory effects, where existing approximations break down. Links between Boolean networks and general Boolean formulas are identified and results common to both system types are highlighted.
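The noiseless synchronous dynamics described here can be probed numerically with the classic damage-spreading experiment: run two replicas of a quenched random Boolean network that differ in one bit and track their normalized Hamming distance. A minimal sketch; for unbiased random functions, the annealed approximation puts the critical in-degree at K = 2.

```python
import random

random.seed(0)
N, K, T = 1000, 2, 50

# Quenched disorder: K random inputs and a random Boolean function per node.
inputs = [[random.randrange(N) for _ in range(K)] for _ in range(N)]
tables = [[random.randrange(2) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    """One synchronous update of all N nodes."""
    nxt = []
    for i in range(N):
        idx = 0
        for j in inputs[i]:
            idx = (idx << 1) | state[j]
        nxt.append(tables[i][idx])
    return nxt

a = [random.randrange(2) for _ in range(N)]
b = a.copy()
b[0] ^= 1                       # flip one bit in the second replica
for _ in range(T):
    a, b = step(a), step(b)
d = sum(x != y for x, y in zip(a, b)) / N
print(f"normalized Hamming distance after {T} steps: {d:.3f}")
```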
Abstract:
Large monitoring networks are becoming increasingly common and can generate large datasets from thousands to millions of observations in size, often with high temporal resolution. Processing large datasets using traditional geostatistical methods is prohibitively slow, and in real-world applications different types of sensor can be found across a monitoring network. Heterogeneities in the error characteristics of different sensors, both in terms of distribution and magnitude, present problems for generating coherent maps. An assumption in traditional geostatistics is that observations are made directly of the underlying process being studied and that the observations are contaminated with Gaussian errors. Under this assumption, sub-optimal predictions will be obtained if the error characteristics of the sensor are effectively non-Gaussian. One method, model-based geostatistics, assumes that a Gaussian process prior is imposed over the (latent) process being studied and that the sensor model forms part of the likelihood term. One problem with this type of approach is that the corresponding posterior distribution will be non-Gaussian and computationally demanding, as Monte Carlo methods have to be used. An extension of a sequential, approximate Bayesian inference method enables observations with arbitrary likelihoods to be treated, in a projected process kriging framework which is less computationally intensive. The approach is illustrated using a simulated dataset with a range of sensor models and error characteristics.
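The Gaussian baseline that this work generalizes is standard model-based geostatistics: a GP prior with a Gaussian likelihood, for which the kriging predictor is available in closed form. A sketch with hand-fixed hyperparameters and an arbitrary squared-exponential covariance, not the thesis's sensor models:

```python
import numpy as np

def sq_exp(a, b, ell=0.5, sf2=1.0):
    """Squared-exponential covariance k(x, x') for 1-D inputs."""
    d = a[:, None] - b[None, :]
    return sf2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(3)
x = rng.uniform(0, 5, 30)                      # sensor locations
y = np.sin(x) + 0.1 * rng.standard_normal(30)  # noisy observations
xs = np.linspace(0, 5, 200)                    # prediction grid

sn2 = 0.1 ** 2                                 # assumed Gaussian noise var.
K = sq_exp(x, x) + sn2 * np.eye(len(x))
Ks = sq_exp(xs, x)
mean = Ks @ np.linalg.solve(K, y)                        # posterior mean
var = sq_exp(xs, xs).diagonal() - np.einsum(
    "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))            # posterior variance
print(mean[:3], var[:3])
```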
Abstract:
This thesis presents a thorough and principled investigation into the application of artificial neural networks to the biological monitoring of freshwater. It contains original ideas on the classification and interpretation of benthic macroinvertebrates, and aims to demonstrate the superiority of these methods over the biotic systems currently used in the UK to report river water quality. The conceptual basis of a new biological classification system is described, and a full review and analysis of a number of river datasets is presented. The biological classification is compared to the common biotic systems using data from the Upper Trent catchment. This data contained 292 expertly classified invertebrate samples identified to mixed taxonomic levels. The neural network experimental work concentrates on the classification of the invertebrate samples into biological classes, where only a subset of the sample is used to form the classification. Further experimentation addresses the identification of novel input samples, the classification of samples from different biotopes, and the use of prior information in the neural network models. The biological classification is shown to provide an intuitive interpretation of a graphical representation, generated without reference to the class labels, of the Upper Trent data. The selection of key indicator taxa is considered using three different approaches: one novel, one from information theory, and one from classical statistical methods. Good indicators of quality class based on these analyses are found to be in good agreement with those chosen by a domain expert. The change in information associated with different levels of identification and enumeration of taxa is quantified. The feasibility of using neural network classifiers and predictors to develop numeric criteria for the biological assessment of sediment contamination in the Great Lakes is also investigated.
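The core classification task has the shape of a small supervised-learning problem. The sketch below uses invented placeholder data (random taxa abundances and class labels, so accuracy stays near chance), not the Upper Trent samples:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder data: 292 samples, abundances of 30 taxa, 5 quality classes.
X = rng.poisson(3.0, size=(292, 30)).astype(float)
y = rng.integers(0, 5, size=292)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")  # ~chance on noise
```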
Abstract:
Common approaches to IP-traffic modelling have featured the use of stochastic models based on the Markov property, which can be classified into black-box and white-box models according to the approach used for modelling traffic. White-box models are simple to understand and transparent, and a physical meaning can be attributed to each of the associated parameters. To exploit this key advantage, this thesis explores the use of simple, classic continuous-time Markov models based on a white-box approach to model not only the network traffic statistics but also the source behaviour with respect to the network and application. The thesis is divided into two parts. The first part focuses on the use of simple Markov and semi-Markov traffic models, starting from the simplest two-state model and moving upwards to n-state models with Poisson and non-Poisson statistics. The thesis then introduces the convenient-to-use, mathematically derived Gaussian Markov models, which are used to model the measured network IP traffic statistics. As one of its most significant contributions, the thesis establishes the significance of second-order density statistics, revealing that, in contrast to first-order densities, they carry much more unique information on traffic sources and behaviour. The thesis then exploits Gaussian Markov models to capture these unique features, and finally shows how simple classic Markov models, coupled with second-order density statistics, provide an excellent tool for capturing maximum traffic detail, which in itself is the essence of good traffic modelling. The second part of the thesis studies the ON-OFF characteristics of VoIP traffic with reference to accurate measurements of the ON and OFF periods, made from a large multilingual database of over 100 hours' worth of VoIP call recordings. The impact of the speaker's language, prosodic structure and speech rate on the statistics of the ON-OFF periods is analysed and relevant conclusions are presented. Finally, an ON-OFF VoIP source model with log-normal transitions is contributed as an ideal candidate to model VoIP traffic, and the results of this model are compared with those of previously published work.
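The two-state ON-OFF source that anchors both parts of the thesis is easy to simulate, here with the log-normal sojourn times the VoIP study favours, together with the autocovariance as a second-order statistic. The rate and distribution parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def on_off_trace(n_samples, draw_on, draw_off, rate=1.0, dt=0.01):
    """Sampled rate process of an ON-OFF source with given sojourn draws."""
    out, t_on = [], True
    while len(out) < n_samples:
        dur = draw_on() if t_on else draw_off()
        out.extend([rate if t_on else 0.0] * max(1, int(dur / dt)))
        t_on = not t_on
    return np.array(out[:n_samples])

# Log-normal ON/OFF periods (talkspurt/silence), illustrative parameters.
trace = on_off_trace(
    200_000,
    draw_on=lambda: rng.lognormal(mean=-1.0, sigma=0.8),
    draw_off=lambda: rng.lognormal(mean=-0.8, sigma=1.0),
)

def autocov(x, lag):
    """Sample autocovariance at a given lag (in samples)."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / (len(x) - lag)

# Second-order statistics: autocovariance at a few lags.
for lag in (1, 10, 100):
    print(f"lag {lag:3d}: {autocov(trace, lag):.4f}")
```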