936 results for Pulaski (Steam-packet)
Abstract:
My original contribution to knowledge is the creation of a WSN system that further improves the functionality of existing technology, whilst achieving improved power consumption and reliability. This thesis concerns the development of industrially applicable wireless sensor networks that are low-power, reliable and latency-aware. This work aims to improve upon the state of the art in networking protocols for low-rate multi-hop wireless sensor networks. Presented is an application-driven co-design approach to the development of such a system. Starting with the physical layer, hardware was designed to meet industry-specified requirements. Meeting the derived application-level performance specifications then required further investigation of communications protocols. A CSMA/TDMA hybrid MAC protocol was developed, leveraging numerous techniques from the literature together with novel optimisations. It extends the current art in power consumption for radio duty-cycled applications and in reliability for dense wireless sensor networks, whilst respecting latency bounds. Specifically, it provides 100% packet delivery for 11 concurrent senders transmitting towards a single radio duty-cycled sink node, an order-of-magnitude improvement over the comparable art when considering MAC-only mechanisms. A novel latency-aware routing protocol was developed to exploit the developed hardware and MAC protocol. It is based on a new weighted objective function with multiple fail-safe mechanisms to ensure extremely high reliability and robustness. The system was empirically evaluated on two hardware platforms: the application-specific custom 868 MHz node and the de facto community-standard TelosB. Extensive empirical comparative performance analyses were conducted against the relevant art to demonstrate the advances made. The resultant system is capable of exceeding 10-year battery life and exhibits reliability in excess of 99.9%.
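As context for the duty-cycling claim, a 10-year battery-life figure follows from simple average-current arithmetic. The sketch below illustrates the calculation; the battery capacity, sleep and active currents, and duty cycle are illustrative assumptions, not values from the thesis.

```python
# Estimate node battery life from a radio duty cycle.
# All numbers are illustrative assumptions, not values from the thesis.

def battery_life_years(capacity_mah: float,
                       sleep_ua: float,
                       active_ma: float,
                       duty_cycle: float) -> float:
    """Battery life in years for a given radio duty cycle."""
    # Average current in mA: weighted mix of active and sleep states.
    avg_ma = duty_cycle * active_ma + (1.0 - duty_cycle) * (sleep_ua / 1000.0)
    hours = capacity_mah / avg_ma
    return hours / (24 * 365)

# Example: 2600 mAh AA pair, 2 uA sleep, 20 mA radio-on, 0.1% duty cycle.
print(f"{battery_life_years(2600, 2.0, 20.0, 0.001):.1f} years")
```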
Abstract:
Absorption heat transformers are thermodynamic systems capable of recycling industrial waste heat by increasing its temperature. Triple-stage absorption heat transformers (TAHTs) can increase the temperature of this waste heat by up to approximately 145 °C. The principal factors influencing the thermodynamic performance of a TAHT, and general points of operating optima, were identified using a multivariate statistical analysis. Heat exchange network modelling techniques were then used to dissect the design of the TAHT and systematically reassemble it to minimise internal exergy destruction within the unit. This enabled first and second law efficiency improvements of up to 18.8% and 31.5% respectively compared to conventional TAHT designs. The economic feasibility of such a thermodynamically optimised cycle was investigated by applying it to an oil refinery in Ireland, demonstrating that, in general, the capital cost of a TAHT makes it difficult to achieve acceptable rates of return. Decreasing the TAHT's capital cost may be achieved by redesigning its individual pieces of equipment and reducing their size. The potential benefits of using a bubble column absorber were therefore investigated in this thesis. An experimental bubble column was constructed and used to track the collapse of steam bubbles being absorbed into a hotter lithium bromide salt solution. Extremely high mass transfer coefficients of approximately 0.0012 m/s were observed, a significant improvement over previously investigated absorbers. Two separate models were developed: a combined heat and mass transfer model describing the rate of collapse of the bubbles, and a stochastic model describing the hydrodynamic motion of the collapsing vapour bubbles, taking into consideration random fluctuations observed in the experimental data. Both models showed good agreement with the collected data and demonstrated that the difference between the solution's temperature and its boiling temperature is the primary factor influencing the absorber's performance.
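To give a feel for what a mass transfer coefficient of this magnitude implies, the sketch below estimates the absorption rate across a single bubble's interface; the bubble radius and concentration driving force are assumed for illustration only.

```python
import math

# Back-of-envelope absorption rate into a collapsing bubble.
# k_L is taken from the abstract; the bubble radius and concentration
# driving force are assumed values, not experimental data.

k_L = 0.0012          # mass transfer coefficient, m/s (from the abstract)
radius = 0.002        # bubble radius, m (assumed)
delta_c = 15.0        # concentration driving force, mol/m^3 (assumed)

area = 4.0 * math.pi * radius**2      # bubble surface area, m^2
molar_flow = k_L * area * delta_c     # absorption rate, mol/s
print(f"absorption rate per bubble: {molar_flow:.3e} mol/s")
```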
Abstract:
Recent years have witnessed a rapid growth in the demand for streaming video over the Internet, exposing challenges in coping with heterogeneous device capabilities and varying network throughput. Coupling this rise in streaming with the growing number of portable devices (smartphones, tablets, laptops) yields an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error rates, presenting a challenge for the efficient delivery of high-quality video. Additionally, mobile devices can support and demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and device heterogeneity, providing graceful changes in video quality while preserving viewing satisfaction. In this context, well-known scalable media streaming techniques, commonly known as scalable coding, are an attractive solution and the focus of this thesis. In this thesis we investigate the transmission of existing scalable video models over a lossy network and determine how the variation in viewable quality is affected by packet loss. This work focuses on leveraging the benefits of scalable media while reducing the effects of data loss on achievable video quality. The overall approach centres on the strategic packetisation of the underlying scalable video and how best to utilise error resiliency to maximise viewable quality. In particular, we examine the manner in which scalable video is packetised for transmission over lossy networks and propose new techniques that reduce the impact of packet loss on scalable video by selectively choosing how to packetise the data and which data to transmit. We also exploit redundancy techniques, such as error resiliency, to enhance the stream quality by ensuring smooth play-out with fewer changes in achievable video quality. The contributions of this thesis are new segmentation and encapsulation techniques which increase the viewable quality of existing scalable models by fragmenting and re-allocating the video sub-streams based on user requirements, available bandwidth and variations in loss rates. We offer new packetisation techniques which reduce the effects of packet loss on viewable quality by leveraging the increase in the number of frames per group of pictures (GOP) and by providing equality of data in every packet transmitted per GOP. These provide novel mechanisms for packetisation and error resiliency, as well as new applications for existing techniques such as interleaving and Priority Encoded Transmission. We also introduce three new scalable coding models, which offer a balance between transmission cost and consistency of viewable quality.
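The interleaving idea referred to above can be shown with a minimal sketch: frames of a GOP are spread round-robin across packets, so a single lost packet costs scattered individual frames rather than a contiguous run. GOP size and packet count below are illustrative assumptions, not the thesis's parameters.

```python
# Minimal sketch of interleaved packetisation: GOP frames are assigned
# to packets round-robin, so one lost packet drops scattered single
# frames instead of a contiguous run. Sizes are illustrative.

def interleave(frames: list, n_packets: int) -> list:
    packets = [[] for _ in range(n_packets)]
    for i, frame in enumerate(frames):
        packets[i % n_packets].append(frame)
    return packets

gop = list(range(16))                     # frame indices of one 16-frame GOP
packets = interleave(gop, 4)
lost = 2                                  # suppose packet 2 is lost in transit
survivors = sorted(f for i, p in enumerate(packets) if i != lost for f in p)
print(survivors)                          # the loss is spread across the GOP
```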
Infant milk formula manufacture: process and compositional interactions in high dry matter wet-mixes
Abstract:
Infant milk formula (IMF) is fortified milk with a composition based on the nutrient content of human mother's milk, 0 to 6 months postpartum. Extensive medical and clinical research has led to advances in the nutritional quality of infant formula; however, relatively few studies have focused on interactions between nutrients and the manufacturing process. The objective of this research was to investigate the impact of composition and processing parameters on the physical behaviour of high dry matter (DM) IMF systems, with a view to designing more sustainable manufacturing processes. The study showed that commercial IMFs with similar compositions, manufactured by different processes, had markedly different physical properties in the dehydrated or reconstituted state. Commercial products made with hydrolysed protein were more heat stable than products made with intact protein; however, emulsion quality was compromised. Heat-induced denaturation of whey proteins resulted in increased viscosity of wet-mixes, an effect that was dependent on both whey concentration and interactions with lactose and caseins. Expanding on fundamental laboratory studies, a novel high-velocity steam injection process was developed whereby high DM (60%) wet-mixes with lower denaturation/viscosity compared to conventional processes could be achieved; powders produced using this process were of similar quality to those manufactured conventionally. Hydrolysed proteins were also shown to be an effective way of reducing viscosity in heat-treated high DM wet-mixes. In particular, using a whey protein concentrate in which β-lactoglobulin was selectively hydrolysed, i.e., α-lactalbumin remained intact, reduced the viscosity of wet-mixes during processing while still providing good emulsification. The thesis provides new insights into interactions between nutrients and/or processing which influence the physical stability of IMF in both concentrated liquid and powdered form. The outcomes of the work have applications in areas such as increasing the DM content of spray drier feeds in order to save energy, and controlling final powder quality.
Abstract:
Video compression techniques enable adaptive media streaming over heterogeneous links to end-devices. Scalable Video Coding (SVC) and Multiple Description Coding (MDC) represent well-known techniques for video compression with distinct characteristics in terms of bandwidth efficiency and resiliency to packet loss. In this paper, we present Scalable Description Coding (SDC), a technique that balances the tradeoff between bandwidth efficiency and error resiliency without sacrificing user-perceived quality. Additionally, we propose a scheme that combines network coding and SDC to further improve error resiliency. SDC yields upwards of 25% bandwidth savings over MDC. Moreover, our scheme sustains higher quality for longer durations even at high packet loss rates.
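The paper's coding scheme is not detailed in the abstract; as a generic illustration of how network coding adds resiliency to description-based streams, the sketch below XORs two description payloads into a repair packet so that any single loss is recoverable. All names and payloads are hypothetical, not the SDC scheme itself.

```python
# Generic XOR network-coding sketch: from two equal-length description
# payloads, emit a repair payload d1 XOR d2; any one of the three
# packets may be lost and both descriptions can still be recovered.
# This illustrates the principle only, not the paper's SDC scheme.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

d1 = b"description-1 payload___"
d2 = b"description-2 payload___"
repair = xor(d1, d2)

# Suppose d2 is lost in transit: recover it from d1 and the repair packet.
recovered = xor(d1, repair)
assert recovered == d2
print(recovered)
```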
Abstract:
Recent years have witnessed a rapid growth in the demand for streaming video over the Internet and mobile networks, exposing challenges in coping with heterogeneous devices and varying network throughput. Adaptive schemes, such as scalable video coding, are an attractive solution but fare badly in the presence of packet losses. Techniques that use description-based streaming models, such as multiple description coding (MDC), are more suitable for lossy networks and can mitigate the effects of packet loss by increasing the error resilience of the encoded stream, but at an increased transmission byte cost. In this paper, we present our adaptive scalable streaming technique, adaptive layer distribution (ALD). ALD is a novel scalable media delivery technique that optimises the tradeoff between streaming bandwidth and error resiliency. ALD is based on the principle of layer distribution, in which the critical stream data are spread amongst all packets, thus lessening the impact of network losses on quality. Additionally, ALD provides a parameterised mechanism for dynamic adaptation of the resiliency of the scalable video. Subjective testing results illustrate that our techniques and models were able to provide consistently high-quality viewing, with lower transmission cost relative to MDC, irrespective of clip type. This highlights the benefits of selective packetisation in addition to intuitive encoding and transmission.
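A minimal sketch of the layer-distribution principle follows, under assumed layer sizes and packet counts; it illustrates the idea of spreading the critical base layer across every packet, not the ALD algorithm itself.

```python
# Simplified illustration of layer distribution: slice the critical
# base layer into one chunk per packet, so every packet carries a
# share of the base layer and no single loss removes it entirely.
# Layer contents and packet count are assumed for illustration.

def distribute(base: bytes, enh: bytes, n_packets: int) -> list:
    def chunks(data: bytes, n: int) -> list:
        step = -(-len(data) // n)          # ceiling division
        return [data[i * step:(i + 1) * step] for i in range(n)]
    return [b + e for b, e in zip(chunks(base, n_packets),
                                  chunks(enh, n_packets))]

packets = distribute(b"BASELAYER!" * 4, b"enhance..." * 4, n_packets=4)
for i, p in enumerate(packets):
    print(i, p)
```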
Abstract:
In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of the data being transited through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With the above formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatic theory. We show that in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information, similar to positive charges in electrostatics; the destinations are sinks of information, similar to negative charges; and the network is similar to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). In one application of our mathematical model based on vector fields, we offer a scheme for energy-efficient routing. Our routing scheme is based on raising the permittivity coefficient in places of the network where nodes have high residual energy, and lowering it in places where the nodes do not have much energy left. Our simulations show that our method gives a significant increase in network lifetime compared to the shortest path and weighted shortest path schemes. Our initial focus is on the case where there is only one destination in the network; we later extend our approach to the case of multiple destinations. With multiple destinations, we need to partition the network into several areas known as regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The complexity of the optimization problem in this case lies in how to define the regions of attraction of the destinations and how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve the optimization problem for this case. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). We then show that, in the optimal assignment of the network's communication load to the destinations, the value of that potential field should be equal at the locations of all the destinations. Another application of our vector field model is to find the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost function. The performance of our proposed schemes is confirmed by several examples and simulation experiments.
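The electrostatic analogy can be made concrete with a toy experiment: relax a discrete Laplace equation on a small grid with the source pinned at high potential and the destination at low potential, then walk downhill from the source. The grid size, boundary handling, and uniform permittivity below are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

# Toy potential-field routing: relax a discrete Laplace equation with
# the source pinned high and the sink pinned low, then follow the
# steepest descent from source to sink. Uniform permittivity assumed.

N = 12
phi = np.zeros((N, N))
src, dst = (1, 1), (10, 10)

for _ in range(4000):                          # Jacobi relaxation
    padded = np.pad(phi, 1, mode="edge")       # reflective boundary
    phi = 0.25 * (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:])
    phi[src], phi[dst] = 1.0, 0.0              # pinned source and sink

pos, route = src, [src]
while pos != dst:                              # steepest-descent walk
    r, c = pos
    nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < N and 0 <= c + dc < N]
    pos = min(nbrs, key=lambda q: phi[q])
    route.append(pos)
print(route)
```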
In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates, and suggest two methods for determining their values. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce tests of responsiveness for aggregates, which we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets sent at the start of the TCP three-way handshake, using the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to TCP data packets, using the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, meaning packet drops are performed at a rate given by a function of time. We use the analogy between our problem and multiple-access communication to find suitable signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of this orthogonality, performance does not degrade because of cross-interference between simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
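The CDMA analogy can be illustrated as follows: two routers perturb the same aggregate simultaneously using orthogonal ±1 signatures, and each recovers its own aggregate response by correlation despite the overlap. The signatures, response gains, and noise model below are illustrative assumptions, not CAPM itself.

```python
import numpy as np

# Illustration of the CDMA idea behind CAPM: two routers perturb one
# aggregate at the same time with orthogonal +/-1 signatures; each
# recovers its own response by correlation, despite the overlap.
# Signatures, gains, and noise are illustrative assumptions.

rng = np.random.default_rng(0)
T = 64
sig1 = np.tile([+1, -1], T // 2)               # orthogonal drop signatures
sig2 = np.tile([+1, +1, -1, -1], T // 4)
assert sig1 @ sig2 == 0                        # orthogonality

gain1, gain2 = 3.0, 1.5                        # true per-test responsiveness
rate_change = gain1 * sig1 + gain2 * sig2 + rng.normal(0, 0.5, T)

est1 = (rate_change @ sig1) / T                # correlate against each code
est2 = (rate_change @ sig2) / T
print(f"router 1: {est1:.2f} (true {gain1}), router 2: {est2:.2f} (true {gain2})")
```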
Abstract:
There is growing evidence that organo-nitrogen compounds may constitute a significant fraction of the aerosol nitrogen (N) budget. However, very little is known about the abundance and origin of this aerosol fraction. In this study, the concentrations of organic nitrogen (ON) and major inorganic ions in PM2.5 aerosol were measured at the Duke Forest Research Facility near Chapel Hill, NC, during January and June of 2007. A novel on-line instrument was used, based on the Steam Jet Aerosol Collector (SJAC) coupled to an on-line total carbon/total nitrogen analyzer and two on-line ion chromatographs. The concentration of ON was determined by tracking the difference between the concentrations of total nitrogen and of inorganic nitrogen (determined as the sum of N-ammonium and N-nitrate). The time resolution of the instrument was 30 min, with a detection limit for major aerosol components of ∼0.1 µg m-3. Nitrogen in organic compounds contributed ∼33% on average to the total nitrogen concentration in PM2.5, illustrating the importance of this aerosol component. Absolute concentrations of ON, however, were relatively low (<1.0 µg m-3) with an average of 0.16 µg m-3. The absolute and relative contributions of ON to the total aerosol nitrogen budget were practically the same in January and June. In January, the concentration of ON tended to be higher during the night and early morning, while in June it tended to be higher during the late afternoon and evening. Back-trajectories and correlation with wind direction indicate that higher concentrations of ON occur in air masses originating over the continental US, while marine air masses are characterized by lower ON concentrations. The data presented in this study suggest that ON has a variety of sources, which are very difficult to quantify without information on the chemical composition of this important aerosol fraction.
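The differencing method can be made concrete: ammonium and nitrate ion concentrations are converted to their nitrogen mass and subtracted from total nitrogen. The concentrations below are invented for illustration, chosen so the result lands near the reported 0.16 µg m-3 average.

```python
# Organic nitrogen by difference: ON = total N - (N-ammonium + N-nitrate).
# Ion concentrations below are invented for illustration, in ug/m^3.

N, NH4, NO3 = 14.007, 18.039, 62.004   # molar masses, g/mol

total_n = 0.48                          # total nitrogen (assumed)
ammonium = 0.30                         # NH4+ as ion mass (assumed)
nitrate = 0.40                          # NO3- as ion mass (assumed)

n_ammonium = ammonium * N / NH4         # nitrogen fraction of each ion
n_nitrate = nitrate * N / NO3
organic_n = total_n - (n_ammonium + n_nitrate)
print(f"ON = {organic_n:.2f} ug/m^3")   # ~0.16 with these assumed inputs
```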
Abstract:
We propose a novel data-delivery method for delay-sensitive traffic that significantly reduces energy consumption in wireless sensor networks without reducing the number of packets that meet end-to-end real-time deadlines. The proposed method, referred to as SensiQoS, leverages the spatial and temporal correlation between the data generated by events in a sensor network and realizes energy savings through application-specific in-network aggregation of the data. SensiQoS maximizes energy savings by adaptively waiting for packets from upstream nodes to perform in-network processing without missing the real-time deadlines of the data packets. SensiQoS is a distributed packet scheduling scheme in which nodes make localized decisions on when to schedule a packet for transmission to meet its end-to-end real-time deadline, and on which neighbor to forward the packet to in order to save energy. We also present a localized algorithm for nodes to adapt to network traffic to maximize energy savings in the network. Simulation results show that SensiQoS improves energy savings in sensor networks where events are sensed by multiple nodes and spatial and/or temporal correlation exists among the data packets. Energy savings due to SensiQoS increase with the density of the sensor nodes and the size of the sensed events.
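The central scheduling decision, how long a node may hold a packet for aggregation without missing its end-to-end deadline, reduces to a slack computation. A minimal sketch under assumed per-hop latency estimates (not the SensiQoS algorithm itself):

```python
# Minimal sketch of SensiQoS-style adaptive waiting: a node buffers a
# packet for aggregation only as long as the remaining deadline slack
# allows, given an estimate of downstream latency. Numbers are assumed.

def max_wait_ms(deadline_ms: float, elapsed_ms: float,
                hops_to_sink: int, per_hop_ms: float) -> float:
    """Longest time this node may buffer the packet for aggregation."""
    slack = deadline_ms - elapsed_ms - hops_to_sink * per_hop_ms
    return max(0.0, slack)

# Example: 500 ms end-to-end deadline, 120 ms already spent,
# 6 hops remaining at an estimated 40 ms per hop.
print(max_wait_ms(500, 120, 6, 40), "ms available for aggregation")
```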
Abstract:
The performance of the register insertion protocol for mixed voice-data traffic is investigated by simulation. The simulation model incorporates a common insertion buffer for station and ring packets. Bandwidth allocation is achieved by imposing a queue limit at each node. A simple priority scheme is introduced by allowing the queue limit to vary from node to node. This enables voice traffic to be given priority over data. The effect on performance of various operational and design parameters such as ratio of voice to data traffic, queue limit and voice packet size is investigated. Comparisons are made where possible with related work on other protocols proposed for voice-data integration. The main conclusions are: (a) there is a general degradation of performance as the ratio of voice traffic to data traffic increases, (b) substantial improvement in performance can be achieved by restricting the queue length at data nodes and (c) for a given ring utilisation, smaller voice packets result in lower delays for both voice and data traffic.
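The queue-limit mechanism can be sketched as a simple admission check, with voice nodes granted a larger insertion-buffer limit than data nodes; the limits below are illustrative assumptions, not values from the study.

```python
from collections import deque

# Sketch of the queue-limit priority mechanism: each node rejects new
# station packets once its insertion buffer reaches its per-node limit,
# and voice nodes are granted a larger limit than data nodes, giving
# voice traffic effective priority. Limits are illustrative assumptions.

class RingNode:
    def __init__(self, name: str, queue_limit: int):
        self.name = name
        self.queue_limit = queue_limit
        self.buffer = deque()

    def offer(self, packet: str) -> bool:
        """Accept a station packet only if under the queue limit."""
        if len(self.buffer) < self.queue_limit:
            self.buffer.append(packet)
            return True
        return False

voice = RingNode("voice", queue_limit=8)   # higher limit: priority
data = RingNode("data", queue_limit=2)     # restricted, per conclusion (b)

for i in range(4):
    print(voice.name, voice.offer(f"v{i}"), data.name, data.offer(f"d{i}"))
```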
Abstract:
In 1750 the lower Medway Valley, the area between the towns of Maidstone and Rochester, was firmly part of Kent's 'Garden of England'. A century later, this tranquil, agrarian landscape had been transformed into a hive of industry and commerce, through the emergence of papermaking, cement manufacture, brickmaking, brewing, ship and barge building, seed crushing and engineering. The lower Medway Valley became synonymous with the production of Portland cement, stock bricks and the steam engines of Aveling and Porter, yet, by the end of the Second World War, much of this industry was gone. "The Medway Valley: A Kent Landscape Transformed", the first Victoria County History publication in Kent for over 75 years, charts this cyclical story of landscape change. It explores how the quiet, rural landscape of a collection of eight riverside parishes around Rochester was dramatically transformed during industrialization, before returning to its former rural state. This volume traces the impact of industrial development and decline on the valley and its people. It details changing patterns of work and society, the creation of new settlements and the pivotal role of the river in all aspects of village life, reflecting two centuries of change and upheaval.
Abstract:
The extent and gravity of the environmental degradation of the water resources in Dhaka due to untreated industrial waste is not fully recognised in international discourse. Pollution levels affect vast numbers of people, but the poor and the vulnerable are the worst affected. For example, the productivity of rice, the mainstay of poor farmers, in the Dhaka watershed has declined by 40% over a period of ten years. The study found significant correlations between water pollution and diseases such as jaundice, diarrhoea and skin problems. It was reported that the cost of treatment of one episode of skin disease could be as high as 29% of the weekly earnings of some of the poorest households. The dominant approach to dealing with pollution in small and medium enterprises (SMEs) is technocratic. Given the magnitude of the problem, this paper argues that to control industrial pollution by SMEs and to enhance their compliance it is necessary to move from the technocratic approach to one which can also address the wider institutional and attitudinal issues. Underlying this shift is the need to adopt the appropriate methodology. Multi-stakeholder analysis enables an understanding of the actors, their influence, their capacity to participate in or oppose change, and the existing and embedded incentive structures which allow them to pursue interests that are generally detrimental to the environmental good. This enabled core and supporting strategies to be developed around three types of actors in industrial pollution: (i) principal actors, who directly contribute to industrial pollution; (ii) stakeholders who exacerbate the situation; and (iii) potential actors in mitigation. Within a carrot-and-stick framework, the strategies aim to improve environmental governance and transparency, set up a package of incentives for industry and increase public awareness.
Abstract:
Part 1 covers the North Sea fisheries, a voyage on a steam trawler, an outline of the rise of trawling in the North Sea, and the introduction of trawling at a northern fishing station and its influence on the fishery.
Abstract:
Architectures and methods for the rapid design of silicon cores for implementing discrete wavelet transforms over a wide range of specifications are described. These architectures are efficient, modular, scalable, and cover the orthonormal and biorthogonal wavelet transform families. They offer efficient hardware utilization by exploiting a number of core wavelet filter properties and allow the creation of silicon designs that are highly parameterized, including in terms of wavelet type and wordlengths. Control circuitry is embedded within these systems, allowing them to be cascaded for any desired level of decomposition without any interface glue logic. The time to produce chip designs for a specific wavelet application is typically less than a day, and the results are comparable in area and performance to handcrafted designs. They are also portable across a wide range of silicon foundries and suitable for field programmable gate array and programmable logic device implementation. The approach described has also been extended to wavelet packet transforms.
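As a software analogue of the cascaded filter stages such cores implement, the sketch below performs a one-level Haar wavelet transform and cascades it on the approximation output for a second decomposition level; it illustrates the transform itself, not the silicon architecture.

```python
import math

# One-level Haar DWT as a software analogue of a hardware filter
# stage: cascading it on the approximation output gives deeper
# decomposition levels, mirroring the cascadable cores described.

def haar_step(signal: list) -> tuple:
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a1, d1 = haar_step(x)        # level 1
a2, d2 = haar_step(a1)       # level 2: cascade on the approximation
print(a1, d1)
print(a2, d2)
```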
Abstract:
The paper describes the design and analysis of a packet scheduler intended to operate over wireless channels with spatially selective error bursts. A particularly innovative aspect of the design is the optimization of the scheduler algorithm to minimize the worst-case fairness index (WFI) for real-time IP traffic.
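The paper's scheduler is not reproduced in the abstract; for context, the sketch below shows deficit round robin, a standard weighted packet scheduler of the family such designs build on, where the WFI quantifies the worst-case gap between a flow's actual service and its weighted fair share. The weights, quanta and packet sizes are assumptions.

```python
from collections import deque

# Minimal deficit round-robin (DRR) sketch: a standard weighted packet
# scheduler of the family such designs build on. Quanta and packet
# sizes are assumptions; this is not the paper's scheduler.

def drr(queues: dict, quantum: dict, rounds: int) -> list:
    deficit = {flow: 0 for flow in queues}
    order = []
    for _ in range(rounds):
        for flow, q in queues.items():
            deficit[flow] += quantum[flow]         # earn this round's credit
            while q and q[0] <= deficit[flow]:     # send while credit lasts
                size = q.popleft()
                deficit[flow] -= size
                order.append((flow, size))
            if not q:
                deficit[flow] = 0                  # idle flows keep no credit
    return order

queues = {"rt": deque([200, 200, 200]), "bulk": deque([1500, 1500])}
quantum = {"rt": 600, "bulk": 750}                 # real-time flow favoured
print(drr(queues, quantum, rounds=3))
```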