929 results for pacs: computer networks and technology
Abstract:
Corpus Linguistics is a young discipline. The earliest work was done in the 1960s, but corpora only began to be widely used by lexicographers and linguists in the late 1980s, by language teachers in the late 1990s, and by language students only very recently. This course in corpus linguistics was held at the Departamento de Linguistica Aplicada, E.T.S.I. de Minas, Universidad Politecnica de Madrid, from June 15 to 19, 1998. About 45 teachers registered for the course: 30% held PhDs in linguistics, 20% in literature, and the rest were doctoral candidates or qualified English teachers. The course was designed to introduce the use of corpora and other computational resources in teaching and research, with special reference to scientific and technological discourse in English. Each participant had a computer networked with the lecturer's machine, whose display could be projected onto a large screen. Application programs were loaded onto the central server, and telnet and a web browser were available. COBUILD gave us permission to access the 323-million-word Bank of English corpus, Mike Scott allowed us to use his WordSmith Tools software, and Tim Johns gave us a copy of his MicroConcord program.
Abstract:
A method is proposed to offer privacy in computer communications using symmetric product block ciphers. The security protocol involves a cipher negotiation stage, in which two communicating parties privately select a cipher from a public cipher space. The cipher negotiation process includes an on-line cipher evaluation stage, in which the cryptographic strength of the proposed cipher is estimated. The cryptographic strength of the ciphers is measured by confusion and diffusion, and a method is proposed to describe these two properties quantitatively. For the calculation of confusion and diffusion a number of parameters are defined, such as the confusion and diffusion matrices and the marginal diffusion. These parameters involve computationally intensive calculations that are performed off-line, before any communication takes place. Once calculated, they are used to obtain estimation equations, which in turn allow fast, on-line evaluation of the confusion and diffusion of the negotiated cipher. The technique proposed in this thesis describes how to calculate the parameters and how to use the results for fast estimation of confusion and diffusion for any cipher instance within the defined cipher space.
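As a rough illustration of how diffusion can be measured empirically, the sketch below estimates the avalanche effect of a hypothetical 32-bit toy cipher: flip one input bit and count how many output bits change. The toy cipher, the sampling parameters, and the estimator are illustrative assumptions, not the confusion and diffusion matrices defined in the thesis.

    # Minimal sketch (Python): estimating diffusion via the avalanche effect.
    # The toy cipher below is an assumption for illustration only.
    import random

    BLOCK_BITS = 32

    def toy_cipher(block, key):
        """A deliberately simple 32-bit XOR/rotate/multiply toy cipher."""
        x = block & 0xFFFFFFFF
        for round_key in (key, key ^ 0x9E3779B9, ((key << 7) | (key >> 25)) & 0xFFFFFFFF):
            x ^= round_key
            x = ((x << 5) | (x >> 27)) & 0xFFFFFFFF  # rotate left by 5
            x = (x * 0x45D9F3B + 1) & 0xFFFFFFFF     # cheap mixing step
        return x

    def marginal_diffusion(cipher, key, trials=10000):
        """Average fraction of output bits flipped when one random
        input bit is flipped; an ideal cipher scores close to 0.5."""
        flipped = 0
        for _ in range(trials):
            pt = random.getrandbits(BLOCK_BITS)
            bit = 1 << random.randrange(BLOCK_BITS)
            delta = cipher(pt, key) ^ cipher(pt ^ bit, key)
            flipped += bin(delta).count("1")
        return flipped / (trials * BLOCK_BITS)

    print("estimated diffusion:", round(marginal_diffusion(toy_cipher, 0xDEADBEEF), 3))

A score near 0.5 means each single-bit input change flips about half of the output bits on average, the kind of behavior the off-line parameter calculations are meant to quantify.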
Abstract:
Technology is a key part of organisational knowledge and gives its owners their distinctive capabilities and competitive advantages. However, to make the best use of these assets, technology often needs to be transferred and shared with others through some form of technology collaboration. This raises the important question of how technology should be valued when it is being transferred, and technology valuation has become a critical issue in most transfer transactions. Transfer arrangements and terms of payment have a significant effect on the generation and sharing of joint benefits in commercial, technical and strategic aspects. In this paper the concept of "owner's value" is explored by highlighting its structure and components and assessing the importance of the factors affecting value. The influence of the transfer arrangement, the associated terms of payment, and the interaction between shared benefits, costs and risks on technology valuation is also discussed.
Abstract:
Computer networks are a critical factor in the performance of a modern company. Managing networks is as important as managing any other aspect of the company's performance and security. There are many tools and appliances for monitoring traffic and analyzing network flow security. They use different approaches and rely on a variety of characteristics of the network flows. Network researchers are still working toward a common approach to security baselining that might enable early-watch alerts. This research focuses on network security models, particularly the mitigation of Denial-of-Service (DoS) attacks, based on network flow analysis using flow measurements and the theory of Markov models. The paper comprises the essentials of the author's doctoral thesis.
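As a rough sketch of flow-based Markov baselining, the example below fits a first-order Markov chain to a baseline sequence of discretized flow states and scores new observation windows by their average transition log-likelihood; a sharp drop suggests a deviation such as a flooding attempt. The state alphabet, smoothing, and data are illustrative assumptions, not the models from the author's thesis.

    # Minimal sketch (Python): Markov-chain baselining of flow states.
    from collections import defaultdict
    import math

    def fit_transitions(states):
        """Estimate P(next | current) with add-one smoothing."""
        counts = defaultdict(lambda: defaultdict(int))
        for a, b in zip(states, states[1:]):
            counts[a][b] += 1
        alphabet = sorted(set(states))
        probs = {}
        for a in alphabet:
            total = sum(counts[a].values()) + len(alphabet)
            probs[a] = {b: (counts[a][b] + 1) / total for b in alphabet}
        return probs

    def window_score(probs, window):
        """Average per-transition log-likelihood of an observed window."""
        ll, n = 0.0, 0
        for a, b in zip(window, window[1:]):
            ll += math.log(probs.get(a, {}).get(b, 1e-9))  # unseen: tiny prob
            n += 1
        return ll / max(n, 1)

    # Baseline traffic alternates between low and medium load; a burst of
    # "high" states, as a DoS flood might produce, scores far lower.
    baseline = ["low", "low", "med", "low", "med", "med", "low"] * 50
    probs = fit_transitions(baseline)
    print("normal:", window_score(probs, ["low", "med", "low", "low", "med"]))
    print("burst: ", window_score(probs, ["high"] * 5))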
Abstract:
The main focus of this paper is on mathematical theory and methods which have a direct bearing on problems involving multiscale phenomena. Modern technology is refining measurement and data collection to spatio-temporal scales on which observed geophysical phenomena are displayed as intrinsically highly variable and intermittent hierarchical structures, e.g. rainfall, turbulence, etc. The hierarchical structure is reflected in the occurrence of a natural separation of scales which collectively manifest at some basic unit scale. Thus proper data analysis and inference require a mathematical framework which couples the variability over multiple decades of scale, in which basic theoretical benchmarks can be identified and calculated. This continues the main theme of the research in this area of applied probability over the past twenty years.
Abstract:
The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios; 2) flexibility, for testing new protocols or applications in diverse settings; and 3) interoperability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions.

First, we present SVEET, a system that enables interoperability between real and simulated hosts. To increase the scalability of the networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications.

Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures.

Finally, to further increase the scalability of network testbeds to handle large-scale, high-capacity networks, we present a novel symbiotic simulation approach, realized in SymbioSim, a testbed for large-scale network experimentation where a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating traffic metadata from real applications in the emulation system to reproduce realistic traffic conditions. On the other hand, the emulation system benefits from receiving continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
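The time-dilation idea behind SVEET can be sketched briefly: each real host's perceived clock runs slower than the wall clock by a time dilation factor (TDF), so a simulator that needs several wall-clock seconds per simulated second still appears to keep real-time pace from the host's point of view. The DilatedClock interface and TDF value below are illustrative assumptions, not SVEET's actual API.

    # Minimal sketch (Python): a virtual clock dilated by a factor TDF.
    import time

    class DilatedClock:
        """With tdf = 10, ten wall-clock seconds appear as one virtual
        second, giving the simulator 10x more real time per unit of
        virtual time while the host perceives no slowdown."""
        def __init__(self, tdf):
            self.tdf = tdf
            self.start = time.monotonic()

        def virtual_now(self):
            return (time.monotonic() - self.start) / self.tdf

        def sleep_virtual(self, virtual_seconds):
            # Block for the wall-clock time corresponding to the
            # requested virtual-time interval.
            time.sleep(virtual_seconds * self.tdf)

    clock = DilatedClock(tdf=10.0)
    clock.sleep_virtual(0.1)   # about 1 s of wall-clock time
    print("virtual elapsed: %.2f s" % clock.virtual_now())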
Abstract:
Stealthy attackers move patiently through computer networks, taking days, weeks or months to accomplish their objectives in order to avoid detection. As networks scale up in size and speed, monitoring for such attack attempts is increasingly challenging. This paper presents an efficient monitoring technique for stealthy attacks. It investigates the feasibility of the proposed method under a number of different test cases and examines how the design of the network affects detection. A methodological way of tracing anonymous stealthy activities to their approximate sources is also presented. Bayesian fusion, along with traffic sampling, is employed as a data reduction method. The proposed method is able to monitor stealthy activities using sampling rates of 10-20% without degrading the quality of detection.
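The fusion step can be illustrated with a simple Bayesian odds calculation: several monitors, each seeing only a sampled fraction of the traffic, report the likelihood of their observations under the "attack" and "normal" hypotheses, and their likelihood ratios are multiplied under a conditional-independence assumption. The prior and likelihood values below are illustrative, not parameters from the paper.

    # Minimal sketch (Python): Bayesian fusion of sampled-traffic evidence.
    def fuse_posterior(prior_attack, likelihood_pairs):
        """likelihood_pairs: (P(obs | attack), P(obs | normal)) per
        independent monitor. Returns fused P(attack | all observations)."""
        odds = prior_attack / (1.0 - prior_attack)
        for p_attack, p_normal in likelihood_pairs:
            odds *= p_attack / p_normal   # multiply likelihood ratios
        return odds / (1.0 + odds)

    # Three monitors, each observing a 10-20% packet sample; individually
    # weak evidence fuses into a much stronger joint belief.
    observations = [(0.30, 0.10), (0.25, 0.12), (0.40, 0.15)]
    print("fused P(attack): %.3f" % fuse_posterior(0.01, observations))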