936 results for Pulaski (Steam-packet)
Abstract:
Packet forwarding is a memory-intensive application requiring multiple accesses through a trie structure. With the requirement to process packets at line rates, high-performance routers need to forward millions of packets every second, with each packet needing up to seven memory accesses. Earlier work shows that a single cache for the nodes of a trie can reduce the number of external memory accesses. It is observed that the locality characteristics of the level-one nodes of a trie are significantly different from those of lower-level nodes. Hence, we propose a heterogeneously segmented cache architecture (HSCA) which uses separate caches for level-one and lower-level nodes, each with carefully chosen sizes. Besides reducing misses, segmenting the cache allows us to focus on optimizing the more frequently accessed level-one node segment. We find that, due to the nonuniform distribution of nodes among cache sets, the level-one nodes cache is susceptible to high conflict misses. We reduce conflict misses by introducing a novel two-level mapping-based cache placement framework. We also propose an elegant way to fit the modified placement function into the cache organization with minimal increase in access time. Further, we propose an attribute-preserving trace generation methodology which emulates real traces and can generate traces with varying locality. Performance results reveal that our HSCA scheme results in a 32 percent speedup in average memory access time over a unified nodes cache. Also, HSCA outperforms IHARC, a cache for lookup results, with as high as a 10-fold speedup in average memory access time. Two-level mapping further enhances the performance of the base HSCA by up to 13 percent, leading to an overall improvement of up to 40 percent over the unified scheme.
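The abstract does not spell out the placement function itself; as a rough illustration of the idea, the Python sketch below (all sizes and the remap table are illustrative assumptions, not the paper's parameters) models separate level-one and lower-level node caches, with the level-one set index computed through an intermediate region-remapping table rather than directly from address bits.

```python
# Illustrative sketch of a heterogeneously segmented trie-node cache (HSCA).
# Cache sizes and the two-level mapping table are assumptions.

class DirectMappedCache:
    """Minimal direct-mapped cache that only tracks hits and misses."""
    def __init__(self, num_sets):
        self.num_sets = num_sets
        self.tags = [None] * num_sets
        self.hits = self.misses = 0

    def access(self, set_index, tag):
        if self.tags[set_index] == tag:
            self.hits += 1
        else:
            self.misses += 1
            self.tags[set_index] = tag  # fill on miss

class HSCA:
    """Separate caches for level-one and lower-level trie nodes."""
    def __init__(self, l1_sets=512, lower_sets=256, num_regions=64):
        self.l1 = DirectMappedCache(l1_sets)
        self.lower = DirectMappedCache(lower_sets)
        self.sets_per_region = l1_sets // num_regions
        # Two-level mapping: an address first selects a region, then a small
        # remap table decides which group of sets that region occupies. The
        # table can be tuned so that densely populated regions do not collide;
        # the identity mapping below is only a starting point.
        self.remap = list(range(num_regions))

    def access(self, node_addr, level):
        if level == 1:
            region = node_addr % len(self.remap)
            offset = (node_addr // len(self.remap)) % self.sets_per_region
            set_index = self.remap[region] * self.sets_per_region + offset
            self.l1.access(set_index, node_addr)
        else:
            self.lower.access(node_addr % self.lower.num_sets, node_addr)
```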
Abstract:
The results of drying trials show that vacuum drying produces material of the same or better quality than is currently being produced by conventional methods, in 41 to 66% of the drying time, depending on the species. Economic analysis indicates positive or negative results depending on the species and the size of the drying operation. Definite benefits exist for vacuum drying over conventional drying for all operation sizes, in terms of drying quality, time and economic viability, for E. marginata and E. pilularis. The same applies for vacuum drying C. citriodora and E. obliqua in larger drying operations (kiln capacity 50 m³ or above), but not for smaller operations at this stage. Further schedule refinement has the potential to reduce drying times and may improve the vacuum drying viability of the latter species in smaller operations.
Abstract:
Environmental changes have put great pressure on biological systems, leading to the rapid decline of biodiversity. To monitor this change and protect biodiversity, animal vocalizations have been widely explored with the aid of acoustic sensors deployed in the field. Consequently, large volumes of acoustic data are collected. However, traditional manual methods that require ecologists to physically visit sites to collect biodiversity data are both costly and time consuming. Therefore, it is essential to develop new semi-automated and automated methods to identify species in these audio recordings. In this study, a novel feature extraction method based on wavelet packet decomposition is proposed for frog call classification. After syllable segmentation, the advertisement call of each frog syllable is represented by a spectral peak track, from which track duration, dominant frequency and oscillation rate are calculated. Then, a k-means clustering algorithm is applied to the dominant frequency, and the centroids of the clustering results are used to generate the frequency scale for wavelet packet decomposition (WPD). Next, a new feature set named adaptive frequency scaled wavelet packet decomposition sub-band cepstral coefficients is extracted by performing WPD on the windowed frog calls. Furthermore, the statistics of all feature vectors over each windowed signal are calculated to produce the final feature set. Finally, two well-known classifiers, a k-nearest neighbour classifier and a support vector machine classifier, are used for classification. In our experiments, we use two different datasets from Queensland, Australia (18 frog species from commercial recordings, and field recordings of 8 frog species from James Cook University). The weighted classification accuracy with our proposed method is 99.5% and 97.4% for the 18 and 8 frog species respectively, which outperforms all other comparable methods.
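As a rough sketch of the pipeline this abstract describes, the Python below clusters per-syllable dominant frequencies with k-means and uses the sorted centroids to define an adaptive band layout, over which WPD terminal sub-band energies are pooled into cepstral-style coefficients. The wavelet, decomposition depth, band-edge construction and the use of pywt/scikit-learn are assumptions, not the authors' exact design.

```python
# Sketch of adaptive frequency-scaled WPD sub-band cepstral features.
import numpy as np
import pywt
from scipy.fft import dct
from sklearn.cluster import KMeans

def adaptive_band_edges(dominant_freqs, k=6):
    """Cluster per-syllable dominant frequencies (Hz); use midpoints between
    sorted centroids as the boundaries of an adaptive frequency scale."""
    km = KMeans(n_clusters=k, n_init=10)
    km.fit(np.asarray(dominant_freqs).reshape(-1, 1))
    centroids = np.sort(km.cluster_centers_.ravel())
    return (centroids[:-1] + centroids[1:]) / 2.0   # k - 1 edges -> k bands

def wpd_subband_cepstra(window, fs, band_edges, level=5, n_ceps=6):
    """Pool WPD terminal sub-band energies into the adaptive bands,
    then take log + DCT, in the spirit of cepstral coefficients."""
    wp = pywt.WaveletPacket(data=window, wavelet='db4', maxlevel=level)
    nodes = wp.get_level(level, order='freq')       # sub-bands, low to high
    energies = np.array([np.sum(n.data ** 2) for n in nodes])
    bandwidth = fs / 2.0 / len(nodes)
    centres = (np.arange(len(nodes)) + 0.5) * bandwidth
    pooled = [energies[(centres >= lo) & (centres < hi)].sum() + 1e-10
              for lo, hi in zip(np.r_[0.0, band_edges],
                                np.r_[band_edges, fs / 2.0])]
    # At most as many coefficients as adaptive bands are available.
    return dct(np.log(pooled), norm='ortho')[:n_ceps]
```

Per the abstract, statistics of these per-window vectors (e.g. mean and standard deviation across windows) would then form the final feature set fed to the k-NN and SVM classifiers.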
Abstract:
Network data packet capture and replay capabilities are basic requirements for forensic analysis of faults and security-related anomalies, as well as for testing and development. Cyber-physical networks, in which data packets are used to monitor and control physical devices, must operate within strict timing constraints in order to match the hardware devices' characteristics. Standard network monitoring tools are unsuitable for such systems because they cannot guarantee to capture all data packets, may introduce their own traffic into the network, and cannot reliably reproduce the original timing of data packets. Here we present a high-speed network forensics tool specifically designed for capturing and replaying data traffic in Supervisory Control and Data Acquisition (SCADA) systems. Unlike general-purpose "packet capture" tools, it does not affect the observed network's data traffic and guarantees that the original packet ordering is preserved. Most importantly, it allows replay of network traffic precisely matching its original timing. The tool was implemented by developing novel user interface and back-end software for a special-purpose network interface card. Experimental results show a clear improvement in data capture and replay capabilities over standard network monitoring methods and general-purpose forensics solutions.
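The tool itself relies on a special-purpose network interface card; a purely software approximation of timing-faithful replay, of the kind the paper argues is insufficient for strict timing guarantees, might look like the following scapy sketch (the pcap path and interface name are placeholders).

```python
# Best-effort, software-only sketch of timing-preserving packet replay.
# A user-space loop like this cannot match hardware-assisted guarantees
# and is shown only to illustrate the idea.
import time
from scapy.all import rdpcap, sendp  # pip install scapy

def replay(pcap_path, iface):
    packets = rdpcap(pcap_path)
    start_wall = time.perf_counter()
    start_cap = float(packets[0].time)      # capture timestamp, in seconds
    for pkt in packets:
        # Sleep until this packet's offset from the first capture timestamp.
        target = float(pkt.time) - start_cap
        delay = target - (time.perf_counter() - start_wall)
        if delay > 0:
            time.sleep(delay)
        sendp(pkt, iface=iface, verbose=False)

# replay("scada_capture.pcap", "eth0")     # hypothetical file and interface
```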
Abstract:
Previous studies have shown that buffering packets in DRAM is a performance bottleneck. In order to understand the impediments in accessing the DRAM, we developed a detailed Petri net model of the IP forwarding application on the IXP2400 that models the different levels of the memory hierarchy. The cell-based interface used to receive and transmit packets in a network processor leads to some small-sized DRAM accesses. Such narrow accesses to the DRAM expose the bank access latency, reducing the bandwidth that can be realized. With real traces, up to 30% of the accesses are smaller than the cell size, resulting in a 7.7% reduction in DRAM bandwidth. To overcome this problem, we propose buffering these small chunks of data in the on-chip scratchpad memory. This scheme also exploits a greater degree of parallelism between different levels of the memory hierarchy. Using real traces from the Internet, we show that the transmit rate can be improved by an average of 21% over the base scheme without the use of additional hardware. Further, the impact of different traffic patterns on the network processor's resources is studied. Under real traffic conditions, we show that the data bus, which connects the off-chip packet buffer to the micro-engines, is the obstacle to achieving higher throughput.
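A toy model of the proposed policy, with an assumed cell size and deliberately simplified memory interfaces: full cells go to DRAM as wide accesses, while the sub-cell tail that would otherwise trigger a narrow DRAM access is held in on-chip scratchpad.

```python
# Toy model of the proposed buffering policy. The cell size is illustrative,
# not the IXP2400's actual value, and the "memories" are plain containers.
CELL_SIZE = 64  # bytes

def buffer_packet(payload: bytes, dram: list, scratchpad: dict, pkt_id: int):
    full_cells, tail = divmod(len(payload), CELL_SIZE)
    for i in range(full_cells):
        # Full cells are written to DRAM as wide, bandwidth-efficient accesses.
        dram.append(payload[i * CELL_SIZE:(i + 1) * CELL_SIZE])
    if tail:
        # The narrow DRAM access is avoided: the sub-cell remainder
        # stays in on-chip scratchpad, keyed by packet, until transmit.
        scratchpad[pkt_id] = payload[full_cells * CELL_SIZE:]
```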
Abstract:
We propose a solution for deep packet inspection based on message-passing bipartite networks, which addresses both speed and memory, the limiting factors in current solutions. We report on a preliminary implementation and propose a parallel architecture.
Abstract:
Deep packet inspection is a technology which enables the examination of the content of information packets being sent over the Internet. The Internet was originally set up using "end-to-end connectivity" as part of its design, allowing nodes of the network to send packets to all other nodes of the network, without requiring intermediate network elements to maintain status information about the transmission. In this way, the Internet was created as a "dumb" network, with "intelligent" devices (such as personal computers) at the end or "last mile" of the network. The dumb network does not interfere with an application's operation, nor is it sensitive to the needs of an application, and as such it treats all information sent over it as (more or less) equal. Yet, deep packet inspection allows the examination of packets at places on the network which are not endpoints. In practice, this permits entities such as Internet service providers (ISPs) or governments to observe the content of the information being sent, and perhaps even manipulate it. Indeed, the existence and implementation of deep packet inspection may profoundly challenge the egalitarian and open character of the Internet. This paper will firstly elaborate on what deep packet inspection is and how it works from a technological perspective, before going on to examine how it is being used in practice by governments and corporations. Legal problems have already been created by the use of deep packet inspection, which involve fundamental rights (especially of Internet users), such as freedom of expression and privacy, as well as more economic concerns, such as competition and copyright. These issues will be considered, and an assessment of the conformity of the use of deep packet inspection with law will be made. There will be a concentration on the use of deep packet inspection in European and North American jurisdictions, where it has already provoked debate, particularly in the context of discussions on net neutrality. This paper will also incorporate a more fundamental assessment of the values that are desirable for the Internet to respect and exhibit (such as openness, equality and neutrality), before concluding with the formulation of a legal and regulatory response to the use of this technology, in accordance with these values.
Abstract:
In the wake of an almost decade-long economic downturn and increasing competition from developing economies, a new agenda in the Australian Government for science, technology, engineering, and mathematics (STEM) education and research has emerged as a national priority. However, to art and design educators, the pervasiveness and apparent exclusivity of STEM can be viewed as another instance of art and design education being relegated to the margins of curriculum (Greene, 1995). In the spirit of interdisciplinarity, there have been some recent calls to expand STEM education to include the arts and design, transforming STEM into STEAM in education (Maeda, 2013). As with STEM, STEAM education emphasises the connections between previously disparate disciplines, meaning that such education has been conceptualised in different ways, such as focusing on the creative design thinking process that is fundamental to engineering and art (Bequette & Bequette, 2012). In this article, we discuss the divergent creative design thinking process and metacognitive skills, and how and why they may enhance learning in STEM and STEAM.
Abstract:
The widespread deployment of commercial-scale cellulosic ethanol currently hinges on developing and evaluating scalable processes whilst broadening feedstock options. This study investigates whole Eucalyptus grandis trees as a potential feedstock and demonstrates dilute acid pre-treatment (with steam explosion) followed by a pre-saccharification simultaneous saccharification and fermentation (PSSF) process as a suitable, scalable strategy for the production of bioethanol. Biomass was pre-treated in dilute H2SO4 at laboratory scale (0.1 kg) and pilot scale (10 kg) to evaluate the effect of the combined severity factor (CSF) on pre-treatment effectiveness. Subsequently, pilot-scale pre-treated residues (15 wt.%) were converted to ethanol in a PSSF process at 2 L and 300 L scales. Good polynomial correlations (n = 2) of CSF with hemicellulose removal and glucan digestibility, with a minimum R² of 0.91, were recorded. The laboratory-scale 72 h glucan digestibility and glucose yield were 68.0% and 51.3%, respectively, from biomass pre-treated at 190 °C/15 min/4.8 wt.% H2SO4. Pilot-scale pre-treatment (180 °C/15 min/2.4 wt.% H2SO4 followed by steam explosion) delivered higher glucan digestibility (71.8%) and glucose yield (63.6%). However, the ethanol yields using PSSF were calculated at 82.5 and 113 kg/ton of dry biomass for the pilot and the laboratory scales, respectively.
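For reference, assuming the combined severity factor takes its standard definition from the dilute-acid pre-treatment literature (an assumption; the abstract does not state the formula used), it combines residence time t (min), temperature T (°C) and pH as:

\[ \mathrm{CSF} = \log_{10}\!\left( t \cdot \exp\!\left( \frac{T - 100}{14.75} \right) \right) - \mathrm{pH} \]

On this definition, higher temperatures, longer residence times and lower pH all raise the severity, which is why a single CSF axis can correlate with both hemicellulose removal and glucan digestibility across the two pre-treatment scales.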
Abstract:
A link failure in the path of a virtual circuit in a packet data network will lead to premature disconnection of the circuit by the end-points. A soft failure will result in degraded throughput over the virtual circuit. If these failures can be detected quickly and reliably, then appropriate rerouting strategies can automatically reroute the virtual circuits that use the failed facility. In this paper, we develop a methodology for analysing and designing failure detection schemes for digital facilities. Based on errored-second data, we develop a Markov model for the error and failure behaviour of a T1 trunk. The performance of a detection scheme is characterized by its false alarm probability and its detection delay. Using the Markov model, we analyse the performance of detection schemes that use physical layer or link layer information. The schemes basically rely upon detecting the occurrence of severely errored seconds (SESs). A failure is declared when a counter that is driven by the occurrence of SESs reaches a certain threshold. For hard failures, the design problem reduces to a proper choice of the threshold at which failure is declared, and of the connection reattempt parameters of the virtual circuit end-point session recovery procedures. For soft failures, the performance of a detection scheme depends, in addition, on how long and how frequent the error bursts are in a given failure mode. We also propose and analyse a novel Level 2 detection scheme that relies only upon anomalies observable at Level 2, i.e. CRC failures and idle-fill flag errors. Our results suggest that Level 2 schemes that perform as well as Level 1 schemes are possible.
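The abstract does not give the exact counting rule; a common leaky-bucket variant (increment on an SES, decrement on a clean second, alarm at a threshold) can be sketched as follows, with the threshold value purely illustrative.

```python
# Sketch of a threshold-based failure detector driven by severely errored
# seconds (SESs). The leaky-bucket rule (+1 on SES, -1 otherwise, floor at 0)
# and the threshold are assumptions; the paper tunes such parameters against
# false-alarm probability and detection delay using a Markov model.
def detect_failure(ses_stream, threshold=8):
    """ses_stream yields one boolean per second: True if that second was
    an SES. Returns the (1-based) second at which failure is declared,
    or None if the stream ends without an alarm."""
    count = 0
    for second, is_ses in enumerate(ses_stream, start=1):
        count = count + 1 if is_ses else max(count - 1, 0)
        if count >= threshold:
            return second   # declare failure
    return None
```

Raising the threshold lowers the false alarm probability at the cost of a longer detection delay, which is exactly the trade-off the paper's Markov model is used to quantify.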
Abstract:
The paper addresses certain issues pertaining to the technology of lime-stabilised steam-cured blocks used for masonry construction. Properties of lime-stabilised steam-cured blocks using expansive soils and tank bed soils have been examined. The influence of parameters such as steam curing period, lime content and fly ash content on the wet strength of the blocks is studied. Steam curing of lime-stabilised blocks at 80 °C for about 20 hours at atmospheric pressure leads to considerably higher strengths when compared with curing under wet cloth at ambient temperatures. The clay-fly ash fraction of the mix controls the optimum lime content yielding maximum strength. The long-term strength behaviour of steam-cured blocks has been monitored. The results indicate a favourable lime-clay ratio for stable long-term strength. A small-scale steam-cured block production system has been designed and implemented to construct a load-bearing masonry structure, thus demonstrating the potential of steam-cured blocks as a material for masonry construction.
Abstract:
Network processors today consist of multiple parallel processors (micro-engines) with support for multiple threads to exploit the packet-level parallelism inherent in network workloads. With such concurrency, packet ordering at the output of the network processor cannot be guaranteed. This paper studies the effect of concurrency in network processors on packet ordering. We use a validated Petri net model of a commercial network processor, the Intel IXP 2400, to determine the extent of packet reordering for the IPv4 forwarding application. Our study indicates that, in addition to the parallel processing in the network processor, the allocation scheme for the transmit buffer also adversely impacts packet ordering. In particular, our results reveal that this packet reordering results in a packet retransmission rate of up to 61%. We explore different transmit buffer allocation schemes, namely contiguous, strided, local, and global, which reduce the packet retransmission rate to 24%. We propose an alternative scheme, packet sort, which guarantees complete packet ordering while achieving a throughput of 2.5 Gbps. Further, packet sort outperforms the in-built packet ordering schemes in the IXP processor by up to 35%.
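The abstract does not detail the packet sort mechanism; a generic way to guarantee output ordering after parallel processing is a sequence-numbered reorder buffer at the transmit stage, sketched below. This illustrates the general idea rather than the paper's specific scheme.

```python
# Sketch of restoring packet order before transmit: packets are tagged with a
# sequence number at ingress, processed out of order by parallel threads, and
# released strictly in order by a reorder buffer.
class ReorderBuffer:
    def __init__(self):
        self.next_seq = 0
        self.pending = {}          # seq -> packet, waiting for its turn

    def on_complete(self, seq, packet):
        """Called by a worker thread when processing of `packet` finishes.
        Returns the (possibly empty) list of packets now safe to transmit."""
        self.pending[seq] = packet
        ready = []
        while self.next_seq in self.pending:
            ready.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return ready
```

The trade-off such a buffer makes explicit is head-of-line blocking: a slow packet holds back all later ones, which is why the choice of ordering scheme affects the achievable throughput.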