380 results for ARPANET (Computer network)
Abstract:
With the advances in computer hardware and software development techniques in the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively to investigate various kinds of system studies. Simulation has proven to be the cheapest means of carrying out performance prediction and system behaviour characterisation. When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solution and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common. Most applications focused on isolated parts of the railway system, and it is more appropriate to regard them as primarily mechanised calculations rather than simulations. However, a railway system consists of a number of subsystems, such as train movement, power supply and traction drives, which inevitably contain many complexities and diversities. These subsystems interact frequently with each other while the trains are moving, and each has its own special features in different railway systems. To further complicate the simulation requirements, constraints such as track geometry, speed restrictions and friction have to be considered, not to mention possible non-linearities and uncertainties in the system.
In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system. The software development techniques available nowadays allow the evolution of such simulation models. Not only can the applicability of the simulators be greatly enhanced by advanced software design; maintainability and modularity, for easy understanding and further development, and portability across various hardware platforms are also promoted. The objective of this paper is to review the development of a number of approaches to simulation models. Attention is given, in particular, to models for train movement, power supply systems and traction drives. These models have been used successfully to enable various ‘what-if’ issues to be resolved effectively in a wide range of applications, such as speed profiles, energy consumption and run times.
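The train-movement models the abstract surveys are not reproduced here, but the basic idea can be sketched as a time-stepped integration of Newton's second law against a speed-dependent resistance formula. Everything below (function name, Davis-type coefficients, train parameters) is an illustrative assumption, not taken from the paper.

```python
# Hypothetical single-train movement sketch: Euler integration of
# Newton's second law with a Davis-type resistance formula.
# All coefficients below are assumed for illustration only.
def simulate_run(mass, f_tractive, distance, dt=0.5):
    """Return (elapsed time, speed) once the train has covered `distance` metres."""
    t, v, x = 0.0, 0.0, 0.0
    while x < distance:
        resistance = 2000.0 + 40.0 * v + 0.6 * v * v   # Davis equation (assumed)
        a = (f_tractive - resistance) / mass            # net acceleration
        v = max(0.0, v + a * dt)                        # no rolling backwards
        x += v * dt
        t += dt
    return t, v

t, v = simulate_run(mass=200_000.0, f_tractive=150_000.0, distance=1000.0)
print(round(t, 1), round(v, 1))
```

A real simulator would layer speed restrictions, gradients and traction-drive characteristics on top of this loop, which is where the subsystem interactions described above come in.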
Abstract:
The Streaming SIMD Extensions (SSE) are a special feature available in the Intel Pentium III and P4 classes of microprocessors. As the name implies, SSE enables the execution of SIMD (Single Instruction Multiple Data) operations on 32-bit floating-point data; the performance of floating-point algorithms can therefore be improved. In electrified railway system simulation, the computation involves solving a huge set of simultaneous linear equations, which represent the electrical characteristics of the railway network at a particular time-step; a fast solution of these equations is desirable in order to simulate the system in real time. In this paper, we present how SSE is applied to railway network simulation.
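The abstract gives no code, but the data-parallel idea behind SSE can be illustrated with array operations: one instruction applied to several 32-bit floats at once (an SSE register holds four of them). The sketch below uses NumPy, whose vectorised arithmetic is typically executed with SIMD instructions on supporting hardware; the function names are my own.

```python
import numpy as np

# Scalar loop: one multiply-add per iteration.
def axpy_scalar(a, x, y):
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

# Vectorised form: the same operation applied to whole arrays at once,
# which NumPy can execute with SIMD instructions such as SSE.
def axpy_simd(a, x, y):
    return a * x + y

x = np.arange(4, dtype=np.float32)   # an SSE register holds 4 x 32-bit floats
y = np.ones(4, dtype=np.float32)
print(axpy_simd(2.0, x, y))          # [1. 3. 5. 7.]
```

In the paper's setting the same principle would be applied inside the linear-equation solver, where most of the time is spent on exactly this kind of repeated floating-point arithmetic.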
Abstract:
Computer simulation is a versatile and commonly used tool for the design and evaluation of systems with different degrees of complexity. Power distribution systems and electric railway networks are areas in which computer simulations are being heavily applied. A dominant factor in evaluating the performance of a software simulator is its processing time, especially in the case of real-time simulation. Parallel processing provides a viable means to reduce the computing time and is therefore suitable for building real-time simulators. In this paper, we present different issues related to solving the power distribution system with parallel computing on a multiple-CPU server, concentrating in particular on the speedup performance of such an approach.
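The paper's own decomposition is not described in the abstract, but one common way to parallelise such a solve, sketched here under the assumption that the network splits into electrically independent sections (a block-diagonal system), is to solve each block's equations A_k x_k = b_k concurrently. The function name and the toy blocks are hypothetical.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def solve_blocks_parallel(blocks, workers=4):
    """Solve independent linear systems (A_k, b_k) concurrently.

    Assumes the overall network matrix is block-diagonal, so each
    block can be handed to a separate worker.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda ab: np.linalg.solve(ab[0], ab[1]), blocks))

blocks = [(np.array([[2.0, 0.0], [0.0, 4.0]]), np.array([2.0, 8.0])),
          (np.array([[1.0, 1.0], [0.0, 1.0]]), np.array([3.0, 1.0]))]
sols = solve_blocks_parallel(blocks)
print(sols[0])  # [1. 2.]
```

On a multiple-CPU server the speedup then depends on how evenly the blocks balance across workers and on the coupling terms a real network would add between sections.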
Abstract:
Parallel computing is currently used in many engineering problems. However, because of limitations in curriculum design, it is not always possible to offer students specific formal teaching in this topic. Furthermore, parallel machines are still too expensive for many institutions. The latest microprocessors, such as Intel's Pentium III and IV, embody single-instruction multiple-data (SIMD) parallel features, which make them a viable platform for introducing parallel computing concepts to students. Final-year projects have been initiated utilizing SSE (Streaming SIMD Extensions) features, and it has been observed that students can easily learn parallel programming concepts after going through some programming exercises. They can now experiment with parallel algorithms on their own PCs at home.
Abstract:
Secret-sharing schemes describe methods to securely share a secret among a group of participants. A properly constructed secret-sharing scheme guarantees that the share belonging to one participant does not reveal anything about the shares of others or even the secret itself. Besides being used to distribute a secret, secret-sharing schemes have also been used in secure multi-party computations and in redundant residue number systems for error-correction codes. In this paper, we propose that the secret-sharing scheme be used as a primitive in a Network-based Intrusion Detection System (NIDS) to detect attacks in encrypted networks. Encrypted networks such as Virtual Private Networks (VPNs) fully encrypt network traffic, which can include both malicious and non-malicious traffic; traditional NIDS cannot monitor such encrypted traffic. We therefore describe how our work uses a combination of Shamir's secret-sharing scheme and randomised network proxies to enable a traditional NIDS to function normally in a VPN environment.
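Shamir's scheme itself is standard: a secret becomes the constant term of a random degree-(k-1) polynomial over a prime field, shares are points on that polynomial, and any k shares recover the secret by Lagrange interpolation at x = 0. A minimal sketch (not the paper's NIDS integration):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough to hold the secret

def make_shares(secret, k, n):
    """Split `secret` into n shares, any k of which recover it."""
    # Random polynomial of degree k-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(1234, k=3, n=5)
assert recover(shares[:3]) == 1234  # any 3 of the 5 shares suffice
```

Fewer than k shares reveal nothing about the secret, which is the property the paper exploits when distributing traffic across randomised proxies.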
Abstract:
The advance of rapid prototyping techniques has significantly improved control over the pore network architecture of tissue engineering scaffolds. In this work we assessed the influence of scaffold pore architecture on cell seeding and static culturing, by comparing a computer‐designed gyroid architecture fabricated by stereolithography to a random‐pore architecture resulting from salt‐leaching. The scaffold types showed comparable porosity and pore size values, but the gyroid type showed a more than tenfold higher permeability due to the absence of size‐limiting pore interconnections. The higher permeability significantly improved the wetting properties of the hydrophobic scaffolds, and increased the settling speed of cells upon static seeding of immortalised mesenchymal stem cells. After dynamic seeding followed by 5 days of static culture, gyroid scaffolds showed large cell populations in the centre of the scaffold, while salt‐leached scaffolds were covered with a cell‐sheet on the outside and no cells were found in the scaffold centre. It was shown that interconnectivity of the pores and permeability of the scaffold prolongs the time of static culture before overgrowth of cells at the scaffold periphery occurs. Furthermore, novel scaffold designs are proposed to further improve the transport of oxygen and nutrients throughout the scaffolds, and to create tissue engineering grafts with designed, pre‐fabricated vasculature.
Abstract:
To date, biodegradable networks and particularly their kinetic chain lengths have been characterized by analysis of their degradation products in solution. We characterize the network itself by NMR analysis in the solvent-swollen state under magic angle spinning conditions. The networks were prepared by photoinitiated cross-linking of poly(dl-lactide)−dimethacrylate macromers (5 kg/mol) in the presence of an unreactive diluent. Using diffusion filtering and 2D correlation spectroscopy techniques, all network components are identified. By quantification of network-bound photoinitiator fragments, an average kinetic chain length of 9 ± 2 methacrylate units is determined. The PDLLA macromer solution was also used with a dye to prepare computer-designed structures by stereolithography. For these network structures, the average kinetic chain length is 24 ± 4 methacrylate units. In all cases the calculated molecular weights of the polymethacrylate chains after degradation are at most 8.8 kg/mol, which is far below the threshold for renal clearance. Upon incubation in phosphate buffered saline at 37 °C, the networks show a mass loss profile in time similar to that of linear high-molecular-weight PDLLA (HMW PDLLA). The mechanical properties are preserved longer for the PDLLA networks than for HMW PDLLA. The initial tensile strength of 47 ± 2 MPa does not decrease significantly for the first 15 weeks, while HMW PDLLA lost 85 ± 5% of its strength within 5 weeks. The physical properties, kinetic chain length, and degradation profile of these photo-cross-linked PDLLA networks make them highly suitable materials for orthopedic applications and use in (bone) tissue engineering.
Abstract:
In sustainable development projects, as well as other types of projects, knowledge transfer is important for the organisations managing the project. Nevertheless, knowledge transfer among employees does not happen automatically, and the lack of social networks and the lack of trust among employees have been found to be the major barriers to effective knowledge transfer. Social network analysis has been recognised as a very important tool for improving knowledge transfer in the project environment, and transfer of knowledge is more effective where it depends heavily on social networks and informal dialogue. According to the theory of social capital, social capital consists of two parts: the conduit network and the resource exchange network. This research studies the relationships among performance, the resource exchange network (such as the knowledge network) and the relationship network (such as the strong-ties network, energy network and trust network) at the individual and project levels. The aim of this chapter is to present an approach to overcoming the lack of social networks and lack of trust in order to improve knowledge transfer within project-based organisations. This is to be done by identifying the optimum structure of relationship networks and knowledge networks within small and medium projects; this optimal structure is measured along two dimensions: intra-project and inter-project. This chapter also outlines an extensive literature review in the areas of social capital, knowledge management and project management, and presents the conceptual model of the research approach.
Abstract:
This paper presents a novel method for remaining useful life prediction using the Elliptical Basis Function (EBF) network and a Markov chain. The EBF structure is trained by a modified Expectation-Maximization (EM) algorithm in order to take into account the missing covariate set. No explicit extrapolation is needed for internal covariates, while a Markov chain is constructed to represent the evolution of external covariates in the study. The estimated external covariates and the unknown internal covariates constitute an incomplete covariate set, which is then used and analyzed by the EBF network to provide survival information about the asset. It is shown in the case study that the method slightly underestimates the remaining useful life of an asset, which is a desirable result for early maintenance decisions and resource planning.
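The abstract does not specify the chain itself, but projecting external covariates forward with a discrete Markov chain works as sketched below: a transition matrix over assumed operating states (say low/medium/high load) propagates a state distribution step by step. The states and probabilities here are invented for illustration.

```python
import numpy as np

# Assumed transition matrix over three external-covariate states
# ('low', 'medium', 'high' operating load); rows sum to 1.
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.7, 0.2],
              [0.0, 0.3, 0.7]])

def propagate(p0, steps):
    """Distribution over states after `steps` Markov transitions."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        p = p @ P
    return p

p0 = [1.0, 0.0, 0.0]        # start in the 'low' state
print(propagate(p0, 1))     # [0.8 0.2 0. ]
```

Each propagated distribution would then feed the EBF network as the estimated external part of the (incomplete) covariate set at a future time step.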
Abstract:
There is currently a migration trend from traditional electrical supervisory control and data acquisition (SCADA) systems towards a smart-grid-based approach to critical infrastructure management. This project provides an evaluation of existing and proposed implementations for both traditional electrical SCADA and smart-grid-based architectures, and proposes a set of reference requirements that test bed implementations should satisfy. A high-level design for smart grid test beds is proposed, and an initial implementation is performed based on the proposed design, using open source and freely available software tools. The project examines the move towards smart-grid-based critical infrastructure management and illustrates the increased security requirements. The implemented test bed provides a basic framework for testing network requirements in a smart grid environment, as well as a platform for further research and development, particularly for developing, implementing and testing network security functions such as intrusion detection and network forensics. The project proposes and develops an architecture for the emulation of some smart grid functionality. The Common Open Research Emulator (CORE) platform was used to emulate the communication network of the smart grid; specifically, CORE was used to virtualise and emulate the TCP/IP networking stack. This is intended to be used for further evaluation and analysis, for example the analysis of application protocol messages. As a proof of concept, software libraries were designed, developed and documented to enable and support the design and development of further emulated smart grid components, such as reclosers, switches and smart meters. As part of the testing and evaluation, a Modbus-based smart meter emulator was developed to provide the basic functionality of a smart meter.
Further code was developed to send Modbus request messages to the emulated smart meter and receive Modbus responses from it. Although the functionality of the emulated components was limited, they provide a starting point for further research and development, and the design is extensible to enable the implementation of additional SCADA protocols. The project also defines evaluation criteria for the implemented test bed, and experiments are designed to evaluate the test bed according to the defined criteria. The results of the experiments are collated and presented, and conclusions are drawn from the results to facilitate discussion of the test bed implementation. The discussion also presents possible future work.
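The project's emulator code is not shown in the abstract, but the Modbus/TCP request messages it exchanges follow a fixed, well-documented framing: an MBAP header (transaction id, protocol id 0, remaining length, unit id) followed by the PDU (function code plus function-specific fields). A sketch of building a standard "Read Holding Registers" (function 0x03) request:

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a Modbus/TCP 'Read Holding Registers' (function 0x03) request.

    Frame = MBAP header (transaction id, protocol id = 0, length of the
    remaining bytes, unit id) followed by the PDU (function code,
    starting address, register count), all big-endian.
    """
    pdu = struct.pack('>BHH', 0x03, start_addr, count)
    mbap = struct.pack('>HHHB', transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_read_holding_registers(1, 17, 0x0000, 2)
print(frame.hex())  # '000100000006110300000002'
```

Sending this over TCP port 502 to the emulated smart meter and parsing the mirrored MBAP header in the response is then enough for the kind of request/response testing the project describes.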
Abstract:
The South Asia Infant Feeding Research Network (SAIFRN) was established in 2007 to foster and coordinate a research partnership among South Asian and international research groups interested in infant and young child feeding. SAIFRN has brought together a mix of researchers and program managers from Bangladesh, India, Nepal, Pakistan, and Sri Lanka, along with international partners from Australia. As its first activity, SAIFRN conducted a series of analyses using the Demographic and Health Surveys of Bangladesh, Nepal, and Sri Lanka and the National Family Health Survey of India. The results highlight that most indicators of infant and young child feeding in these four countries have not reached the targeted levels. The rates vary considerably by country, and the factors associated with poor feeding practices were not always consistent across countries. Driven by the ultimate goal of improved child survival in the region, SAIFRN wishes to expand its partnerships with governmental and nongovernmental organizations that share common interests both within and outside the South Asia region. In the future, SAIFRN hopes to provide more opportunities for researchers in the region to improve their skills by participating in capacity-building programs in collaboration with international partner institutions, and looks forward to liaising with potential donors to support such activities.
Abstract:
Purpose - This paper seeks to examine the complex relationships among urban planning, infrastructure management and sustainable urban development, and to illustrate why there is an urgent need for local governments to develop a robust planning support system that integrates advanced urban computer modelling tools to facilitate better infrastructure management and improve knowledge sharing among the community, urban planners, engineers and decision makers. Design/methodology/approach - The methods used in this paper include a literature review and practical project case observations. Originality/value - This paper provides insight into how the planning support system established by Brisbane City Council has significantly improved the effectiveness of urban planning, infrastructure management and community engagement through better knowledge management processes. Practical implications - This paper presents a practical framework for setting up a functional planning support system within local government. The integration of the Brisbane Urban Growth model, Virtual Brisbane and the Brisbane Economic Activity Monitoring (BEAM) database has proven initially successful in providing a dynamic platform to assist elected officials, planners and engineers in understanding the limitations of the local environment, its urban systems and the planning implications for a city. With Brisbane's planning support system, planners and decision makers are able to deliver better planning outcomes, policy and infrastructure that adequately address local needs and achieve sustainable spatial forms.
Abstract:
The network has emerged as a contemporary worldwide phenomenon, culturally manifested as a consequence of globalization and the knowledge economy. It is in this context that the internet revolution has prompted a radical re-ordering of social and institutional relations and the associated structures, processes and places which support them. Within the duality of virtual space and the augmentation of traditional notions of physical place, these organizational structures pose new challenges for the design professions. Technological developments increasingly permit communication anytime and anywhere, and provide the opportunity for both synchronous and asynchronous collaboration. The ecology formed through the network enterprise has resulted in an often convoluted and complex world in which designers are forced to consider the relevance and meaning of this new context. The role of technology and that of space are thus intertwined in the relation between the network and the individual workplace. This paper explores a way to inform the interior design process for contemporary workplace environments. It reports on both theoretical and practical outcomes through an Australia-wide case study of three collaborating, yet independent, business entities. It further suggests a link between workplace design and successful business innovation realized between partnering organizations in Great Britain. The evidence presented indicates that, for architects and interior designers, the scope of the problem has widened, the depth of knowledge required to provide solutions has increased, and the rules of engagement need to change. The ontological and epistemological positions adopted in the study enabled the spatial dimensions to be examined from both within and beyond the confines of a traditional design-only viewpoint.
Importantly, the study highlights the significance of trans-disciplinary collaboration in dealing with the multiple layers and complexity of the contemporary social and business world, from both a research and a practice perspective.
Abstract:
The Guardian's reportage of the United Kingdom Member of Parliament (MP) expenses scandal of 2009 used crowdsourcing and computational journalism techniques. Computational journalism can be broadly defined as the application of computer science techniques to the activities of journalism. Its foundation lies in computer-assisted reporting techniques, and its importance is increasing due to: (a) the increasing availability of large-scale government datasets for scrutiny; (b) the declining cost, increasing power and ease of use of data mining and filtering software, and of Web 2.0 tools; and (c) the explosion of online public engagement and opinion. This paper provides a case study of the Guardian's MP expenses scandal reportage and reveals some key challenges and opportunities for digital journalism. It finds that journalists may increasingly take an active role in understanding, interpreting, verifying and reporting clues or conclusions that arise from the interrogation of datasets (computational journalism). Secondly, a distinction should be made between information reportage and computational journalism in the digital realm, just as a distinction might be made between citizen reporting and citizen journalism. Thirdly, an opportunity exists for online news providers to take a ‘curatorial’ role, selecting and making easily available the best data sources for readers to use (information reportage). These activities have always been fundamental to journalism; however, the way in which they are undertaken may change. The findings of this paper may suggest opportunities and challenges for the implementation of computational journalism techniques in practice by digital Australian media providers, and further areas of research.
Abstract:
This article reports on a research program that has developed new methodologies for mapping the Australian blogosphere and tracking how information is disseminated across it. The authors improve on conventional web crawling methodologies in a number of significant ways. First, they track blogging activity as it occurs, by scraping new blog posts when such posts are announced through Really Simple Syndication (RSS) feeds. Second, they use custom-made tools that distinguish between the different types of content and thus allow them to analyze only the salient discursive content provided by bloggers. Finally, they are able to examine these better-quality data using both link network mapping and textual analysis tools, to produce both cumulative longer-term maps of interlinkages and themes, and specific shorter-term snapshots of current activity that indicate current clusters of heavy interlinkage and highlight their key themes. In this article, the authors discuss findings from a yearlong observation of the Australian political blogosphere, suggesting that Australian political bloggers consistently address current affairs but interpret them differently from mainstream news outlets. The article also discusses the next stage of the project, which extends this approach to an examination of other social networks used by Australians, including Twitter, YouTube, and Flickr. This adaptation of the methodology moves away from narrow models of political communication, and toward an investigation of everyday and popular communication, providing a more inclusive and detailed picture of the Australian networked public sphere.
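The authors' tooling is not shown, but the first step they describe, pulling new post announcements out of an RSS feed, can be sketched with the standard library alone. The inline feed and function name below are illustrative, not the project's actual code.

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 document standing in for a blog's feed.
RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example blog</title>
  <item><title>Post one</title><link>http://example.org/1</link></item>
  <item><title>Post two</title><link>http://example.org/2</link></item>
</channel></rss>"""

def extract_posts(rss_text):
    """Return (title, link) pairs for every announced post in the feed."""
    root = ET.fromstring(rss_text)
    return [(item.findtext('title'), item.findtext('link'))
            for item in root.iter('item')]

print(extract_posts(RSS)[0])  # ('Post one', 'http://example.org/1')
```

A scraper built on this would poll each blog's feed, fetch the linked pages for new items only, and hand the page content to the filtering tools that strip away non-discursive material.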