956 results for gossip, dissemination, network, algorithms
Abstract:
v.14 = no.157-168 (1878)
Abstract:
v.17 = no.193-204 (1881)
Abstract:
Clinical decision-making requires the synthesis of evidence from literature reviews focused on a specific theme. Evidence synthesis is performed through qualitative assessments and systematic reviews of randomized clinical trials, typically involving statistical pooling with pairwise meta-analyses. Such methods include adjusted indirect comparison meta-analysis, network meta-analysis, and mixed-treatment comparison. These tools allow evidence to be synthesized and effectiveness to be compared in cardiovascular research.
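Of the methods listed above, adjusted indirect comparison has the simplest closed form (the Bucher method): when treatments A and C have each been compared against a common comparator B, the indirect A-versus-C effect is the difference of the two direct effects, and their variances add. The sketch below illustrates the standard calculation with hypothetical effect sizes; it is not code from the paper.

```python
import math

def bucher_indirect(d_ab, se_ab, d_cb, se_cb):
    """Adjusted indirect comparison (Bucher method): estimate the
    A-vs-C effect from A-vs-B and C-vs-B trials that share the
    common comparator B. Variances of independent estimates add."""
    d_ac = d_ab - d_cb                               # indirect effect estimate
    se_ac = math.sqrt(se_ab**2 + se_cb**2)           # pooled standard error
    ci = (d_ac - 1.96 * se_ac, d_ac + 1.96 * se_ac)  # 95% confidence interval
    return d_ac, se_ac, ci

# Hypothetical log-odds-ratio effects versus a shared comparator B
print(bucher_indirect(d_ab=-0.30, se_ab=0.10, d_cb=-0.10, se_cb=0.12))
```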
Abstract:
Magdeburg, Univ., Faculty of Natural Sciences, dissertation, 2010
Abstract:
Magdeburg, Univ., Faculty of Computer Science, dissertation, 2011
Abstract:
This work describes a test tool for performance testing of different end-to-end available-bandwidth estimation algorithms and their various implementations. The goal of such tests is to find the best-performing algorithm and implementation and to use it in the congestion control mechanism of high-performance reliable transport protocols. The main idea of this paper is to describe the options that provide an available-bandwidth estimation mechanism for high-speed data transport protocols, and to develop the basic functionality of a test tool that can manage instances of the test application on all participating test hosts with the aid of middleware.
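The abstract does not name the estimators under test. As a minimal illustration of the probe-timing principle that such tools compare, the sketch below shows the classic packet-pair dispersion estimate; strictly, it recovers bottleneck capacity rather than available bandwidth, but available-bandwidth estimators such as pathload build on the same idea. All numbers are hypothetical.

```python
def packet_pair_estimate(probe_size_bits, recv_gap_s):
    """Packet-pair dispersion: two back-to-back probes of size L bits
    leave the bottleneck link spaced by L/C seconds, so the capacity
    estimate is C ~= L / recv_gap (idealized, no cross-traffic)."""
    return probe_size_bits / recv_gap_s  # bits per second

# Hypothetical measurement: 1500-byte probes arriving 120 microseconds apart
print(packet_pair_estimate(1500 * 8, 120e-6) / 1e6, "Mbit/s")  # -> 100.0
```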
Abstract:
Magdeburg, Univ., Faculty of Mechanical Engineering, dissertation, 2014
Abstract:
Telecommunications and network technology is now the driving force that ensures the continued progress of world civilization. Designing new network infrastructures and expanding existing ones requires improving quality of service (QoS). Modeling the probabilistic and timing characteristics of telecommunication systems is an integral part of modern QoS administration algorithms. At present, quality parameters are assessed not only with simulation models but also, widely, with analytical models in the form of queuing systems and queuing networks. Because of the limited mathematical machinery of these model classes, the resulting estimates of quality-of-service parameters are inadequate by definition, especially for models of telecommunication systems with packet transmission of multimedia real-time traffic.
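As a concrete instance of the analytical queuing models the abstract refers to, the sketch below evaluates the textbook M/M/1 closed forms; the rates are hypothetical, and the abstract's own point is that such Markovian models can be too crude for packetized multimedia real-time traffic.

```python
def mm1_metrics(lam, mu):
    """M/M/1 queue: Poisson arrivals at rate lam, exponential service
    at rate mu (packets/s). Returns utilization, mean number in the
    system, and mean sojourn time; requires lam < mu for stability."""
    if lam >= mu:
        raise ValueError("unstable: arrival rate must be below service rate")
    rho = lam / mu        # utilization
    L = rho / (1 - rho)   # mean number in system (Little's law: L = lam * W)
    W = 1 / (mu - lam)    # mean time in system (queueing + service)
    return rho, L, W

# Hypothetical link: 1000 packets/s service rate, 800 packets/s offered load
print(mm1_metrics(800, 1000))  # -> rho=0.8, L=4.0, W=0.005 s
```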
Abstract:
In this paper we investigate various algorithms for performing the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT), and proper techniques for maximizing FFT/IFFT execution speed, such as pipelining or parallel processing, and the use of memory structures with pre-computed values (look-up tables, LUT) or other dedicated hardware components (usually multipliers). Furthermore, we discuss the optimal hardware architectures that best apply to various FFT/IFFT algorithms, along with their ability to exploit parallel processing with minimal data dependences in the FFT/IFFT calculations. An interesting approach also considered in this paper is the application of the integrated processing-in-memory Intelligent RAM (IRAM) chip to high-speed FFT/IFFT computing. The results of the assessment study emphasize that the execution speed of FFT/IFFT algorithms is tightly connected to the ability of the FFT/IFFT hardware to support the parallelism provided by the given algorithm. Therefore, we suggest that the basic Discrete Fourier Transform (DFT)/Inverse Discrete Fourier Transform (IDFT) can also deliver high performance when a specialized FFT/IFFT hardware architecture exploits the parallelism provided by the DFT/IDFT operations. The proposed improvements include simplified multiplications over symbols given in a polar coordinate system, the use of sine and cosine look-up tables, and an approach for performing parallel addition of N input symbols.
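To make the LUT idea concrete, the sketch below implements a direct DFT whose twiddle factors exp(-2*pi*j*k*n/N) are read from a table of N precomputed cosine/sine values instead of being recomputed per term; this illustrates the general technique, not the paper's specific hardware architecture.

```python
import cmath, math

def dft_with_lut(x):
    """Direct DFT using a precomputed twiddle-factor look-up table:
    the N distinct values of exp(-2*pi*j*k/N) are computed once, and
    each term's factor is fetched via the index (k*n) mod N."""
    N = len(x)
    lut = [cmath.exp(-2j * math.pi * k / N) for k in range(N)]  # cos/sin LUT
    return [sum(x[n] * lut[(k * n) % N] for n in range(N)) for k in range(N)]

# Tiny check against the defining formula on a 4-point signal
print([round(abs(v), 6) for v in dft_with_lut([1, 1, 0, 0])])
# -> [2.0, 1.414214, 0.0, 1.414214]
```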
Abstract:
Wireless mesh networks present an attractive communication solution for various research and industrial projects. In many cases, however, appropriate preliminary calculations that allow the network behavior to be predicted have to be made before the actual deployment. For such purposes, network simulation environments emulating real network operation are often used. In this paper, a behavior comparison of a real wireless mesh network (based on the 802.11s amendment) and a simulated one is performed. The main objective of this work is to measure the performance parameters of a real 802.11s wireless mesh network (average UDP throughput and average one-way delay) and to compare the derived results with the characteristics of a simulated wireless mesh network created with the NS-3 network simulation tool. The results derived from the simulation model and the real-world test-bed show that the behavior of both networks is similar, confirming that the NS-3 simulation model is accurate and can be used in further research studies.
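Both metrics are standard. Assuming per-packet records of send time, receive time, and payload size, with sender and receiver clocks synchronized (a real requirement for measuring one-way delay), a minimal computation looks like the sketch below; the record format and the trace are hypothetical, not the paper's tooling.

```python
def udp_metrics(records):
    """Compute the two metrics compared in the study, average UDP
    throughput and average one-way delay, from per-packet records
    of (send_time_s, recv_time_s, payload_bytes)."""
    first_send = min(r[0] for r in records)
    last_recv = max(r[1] for r in records)
    total_bits = sum(r[2] for r in records) * 8
    throughput = total_bits / (last_recv - first_send)               # bit/s
    one_way_delay = sum(r[1] - r[0] for r in records) / len(records)  # s
    return throughput, one_way_delay

# Hypothetical trace: three 1200-byte packets (clocks assumed synchronized)
trace = [(0.000, 0.004, 1200), (0.010, 0.015, 1200), (0.020, 0.024, 1200)]
print(udp_metrics(trace))  # -> (1200000.0 bit/s, ~0.00433 s)
```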
Abstract:
Magdeburg, Univ., Faculty of Economics, dissertation, 2014
Abstract:
Some practical aspects of implementing genetic algorithms for the life-cycle management of electrotechnical equipment are considered.
Abstract:
A theory of network entrepreneurs, or a "spin-off system," is presented in this paper for the creation of firms based on a community's social governance. It is argued that a firm's capacity for accumulation depends on the presence of employees belonging to the same social or ethnic group, with expectations of "inheriting" the firm and becoming entrepreneurs once they have been selected for their merit and loyalty toward their patrons. Such accumulation is possible because of the credibility of the patrons' promises to support newcomers, owing to the high social cohesion and specific social norms prevailing in the community. This theory is exemplified through the case of the Barcelonnettes, a group of immigrants from the Alps in the south of France (Provence) who came to Mexico in the 19th century.
Abstract:
Experimental data commonly show persistent oscillations in aggregate outcomes and high levels of heterogeneity in individual behavior. Furthermore, it is not unusual to find significant deviations from aggregate Nash equilibrium predictions. In this paper, we employ an evolutionary model with boundedly rational agents to explain these findings. We use data from common property resource experiments (Casari and Plott, 2003). Instead of positing individual-specific utility functions, we model decision makers as selfish and identical. Agent interaction is simulated using an individual-learning genetic algorithm, in which agents have constraints on their working memory, a limited ability to maximize, and experiment with new strategies. We show that the model replicates most of the patterns found in common property resource experiments.
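As a minimal sketch of the individual-learning genetic algorithm described here (a small strategy pool standing in for bounded working memory, selection of better strategies in place of full maximization, and mutation as experimentation), consider the following; the payoff function and all parameters are illustrative, not those of the paper.

```python
import random

def individual_learning_ga(payoff, memory=6, generations=50, mut_rate=0.1):
    """One agent's learning loop: carry a small pool of candidate
    strategies (bounded working memory), keep the better-paying half
    (limited maximization), and mutate survivors to experiment with
    new strategies. Strategies here are extraction levels in [0, 10]."""
    pool = [random.uniform(0, 10) for _ in range(memory)]
    for _ in range(generations):
        pool.sort(key=payoff, reverse=True)
        pool = pool[: memory // 2]                                   # selection
        children = [s + random.gauss(0, mut_rate * 10) for s in pool]  # experimentation
        pool += [min(10, max(0, c)) for c in children]               # clamp to range
    return max(pool, key=payoff)

# Toy concave payoff with an interior optimum at 5
print(round(individual_learning_ga(lambda s: s * (10 - s)), 2))  # -> ~5.0
```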