231 results for Ramp rate limits
Abstract:
This paper introduces an energy-efficient Rate Adaptive MAC (RA-MAC) protocol for long-lived Wireless Sensor Networks (WSNs). Previous research shows that the dynamic and lossy nature of wireless communication is one of the major challenges to reliable data delivery in a WSN. RA-MAC achieves high link reliability in such situations by dynamically trading off radio bit rate for signal processing gain. This extra gain reduces the packet loss rate, which results in lower energy expenditure by reducing the number of retransmissions. RA-MAC selects the optimal data rate based on channel conditions with the aim of minimizing energy consumption. We have implemented RA-MAC in TinyOS on an off-the-shelf sensor platform (TinyNode) and evaluated its performance experimentally by comparing RA-MAC with a state-of-the-art WSN MAC protocol (SCP-MAC).
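A minimal sketch of the energy trade-off the abstract describes (the selection rule and all numbers here are illustrative assumptions, not RA-MAC's actual estimator): a lower bit rate costs more transmit time per attempt but buys processing gain and hence a lower loss rate, so the rate minimising expected energy per delivered packet can sit at an intermediate point.

```python
def select_rate(rates, plr, tx_power_w, packet_bits):
    """Pick the bit rate minimising expected energy per *delivered* packet,
    assuming independent retransmissions until success (geometric retries).
    plr maps each candidate rate to its measured packet loss rate."""
    def energy_per_delivery(r):
        e_attempt = tx_power_w * packet_bits / r   # J per transmission attempt
        return e_attempt / (1.0 - plr[r])          # expected J per delivery
    return min(rates, key=energy_per_delivery)

# Invented channel measurements: the fastest rate loses too many packets,
# the slowest spends too long on air, so the middle rate wins.
rates = [25_000, 50_000, 100_000]                  # bit/s
plr = {25_000: 0.02, 50_000: 0.10, 100_000: 0.60}
print(select_rate(rates, plr, tx_power_w=0.06, packet_bits=1024))  # -> 50000
```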
Abstract:
High-rate flooding attacks (aka Distributed Denial of Service or DDoS attacks) continue to constitute a pernicious threat within the Internet domain. In this work we demonstrate how using packet source IP addresses, coupled with a change-point analysis of the rate of arrival of new IP addresses, may be sufficient to detect the onset of a high-rate flooding attack. Importantly, minimizing the number of features to be examined directly addresses the issue of scalability of the detection process to higher network speeds. Using a proof-of-concept implementation, we have shown how pre-onset IP addresses can be efficiently represented using a bit vector and used to modify a “white list” filter in a firewall as part of the mitigation strategy.
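A minimal sketch of the mechanism, assuming a hash-indexed bit vector for pre-onset addresses and a one-sided CUSUM test as the change-point detector (the paper's exact test and parameter values may differ):

```python
import hashlib

class FloodOnsetDetector:
    """Remember pre-onset source IPs in a bit vector and run a one-sided
    CUSUM change-point test on the per-window count of previously unseen
    addresses; a sustained jump in new-IP arrivals raises an alarm."""

    def __init__(self, bits=2**24, drift=5.0, threshold=50.0):
        self.bits = bits
        self.seen = bytearray(bits // 8)  # bit vector of known addresses
        self.drift = drift                # tolerated mean new-IP count/window
        self.threshold = threshold        # CUSUM alarm level
        self.cusum = 0.0

    def _test_and_set(self, ip: str) -> bool:
        """Return True if ip was not seen before; mark it as seen."""
        h = int(hashlib.sha1(ip.encode()).hexdigest(), 16) % self.bits
        byte, bit = divmod(h, 8)
        unseen = ((self.seen[byte] >> bit) & 1) == 0
        self.seen[byte] |= 1 << bit
        return unseen

    def observe_window(self, source_ips) -> bool:
        """Feed one time window of packet source IPs; True signals onset."""
        new_ips = sum(self._test_and_set(ip) for ip in source_ips)
        self.cusum = max(0.0, self.cusum + new_ips - self.drift)
        return self.cusum > self.threshold
```

The bit vector keeps per-packet work to a hash and a bit test, which is what makes the scheme plausible at high line rates; hashing can alias distinct IPs to one bit, a standard space-for-accuracy trade.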
Abstract:
Uninhabited aerial vehicles (UAVs) are a cutting-edge technology that is at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered to be one of the most important issues to be addressed, given its safety critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm that is capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm. The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts for target heading angle estimation. In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit the temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy related concepts. The filter design process is posed as a mini-max optimisation problem based on a joint RER cost criterion.
We prove that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, and this in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operating conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400 m to 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advanced warning ahead of impact that approaches the 12.5 second response time recommended for human pilots. Furthermore, readily available graphics processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations. Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue that is currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple HMM filtering approach and a novel RER-based multiple filter design process. The utility of our multiple HMM filtering approach and RER concepts, however, extends beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
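As one concrete instance of the morphological pre-processing stage, a close-minus-open (CMO) filter is a common choice for dim point-target enhancement; the thesis compares two candidate morphological filters, which need not be this exact one, so treat this as a representative sketch:

```python
import numpy as np
from scipy import ndimage

def cmo_enhance(frame: np.ndarray, win: int = 5) -> np.ndarray:
    """Close-minus-open (CMO) enhancement with a flat win x win window.

    Greyscale closing fills small dark features and opening removes small
    bright ones; their difference responds strongly to target-sized
    structure while suppressing slowly varying sky background, leaving
    the temporal (track-before-detect) stage to the HMM filter bank."""
    closed = ndimage.grey_closing(frame, size=(win, win))
    opened = ndimage.grey_opening(frame, size=(win, win))
    return closed - opened
```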
Abstract:
Persistent use of safety restraints prevents deaths and reduces the severity and number of injuries resulting from motor vehicle crashes. However, safety-restraint use rates in the United States have been below those of other nations with safety-restraint enforcement laws. With a better understanding of the relationship between safety-restraint law enforcement and safety-restraint use, programs can be implemented to decrease the number of deaths and injuries resulting from motor vehicle crashes. Does safety-restraint use increase as enforcement increases? Do motorists increase their safety-restraint use in response to the general presence of law enforcement or to targeted law enforcement efforts? Does a relationship between enforcement and restraint use exist at the countywide level? A logistic regression model was estimated by using county-level safety-restraint use data and traffic citation statistics collected in 13 counties within the state of Florida in 1997. The model results suggest that safety-restraint use is positively correlated with enforcement intensity, is negatively correlated with safety-restraint enforcement coverage (in lane-miles of enforcement coverage), and is greater in urban than rural areas. The quantification of these relationships may assist Florida and other law enforcement agencies in raising safety-restraint use rates by allocating limited funds more efficiently, either by allocating additional time for enforcement activities of the existing force or by increasing enforcement staff. In addition, the research supports the commonsense notion that enforcement activities do result in a behavioral response.
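The model form described above is a standard binary logistic regression; the following sketch uses entirely synthetic data (variable names, coefficient magnitudes, and distributions are all invented) merely to show the structure of such an estimation, with coefficient signs set to match the reported findings:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical stand-in for the county-level data: citation intensity,
# lane-miles of enforcement coverage, and an urban/rural indicator.
rng = np.random.default_rng(0)
n = 500
intensity = rng.gamma(2.0, 2.0, n)   # invented enforcement-intensity measure
coverage = rng.gamma(3.0, 5.0, n)    # invented lane-miles of coverage
urban = rng.integers(0, 2, n)        # 1 = urban, 0 = rural

# Signs follow the abstract: + intensity, - coverage, + urban (magnitudes arbitrary).
logit_p = -0.5 + 0.30 * intensity - 0.02 * coverage + 0.40 * urban
belted = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([intensity, coverage, urban]))
fit = sm.Logit(belted, X).fit(disp=False)
print(fit.params)  # estimates should recover the assumed signs
```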
Abstract:
Nitrous oxide (N2O) is a potent agricultural greenhouse gas (GHG). More than 50% of the global anthropogenic N2O flux is attributable to emissions from soil, primarily due to large fertilizer nitrogen (N) applications to corn and other non-leguminous crops. Quantification of the trade-offs between N2O emissions, fertilizer N rate, and crop yield is an essential requirement for informing management strategies aiming to reduce the agricultural sector's GHG burden without compromising productivity and producer livelihood. There is currently great interest in developing and implementing agricultural GHG reduction offset projects for inclusion within carbon offset markets. Nitrous oxide, with a global warming potential (GWP) of 298, is a major target for these endeavours due to the high payback associated with its emission prevention. In this paper we use robust quantitative relationships between fertilizer N rate and N2O emissions, along with a recently developed approach for determining economically profitable N rates for optimized crop yield, to propose a simple, transparent, and robust N2O emission reduction protocol (NERP) for generating agricultural GHG emission reduction credits. This NERP has the advantage of providing an economic and environmental incentive for producers and other stakeholders, necessary requirements in the implementation of agricultural offset projects.
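The credit arithmetic behind such a protocol can be illustrated with a linear emission-factor form. This is a simplification: the paper argues for rate-dependent N2O-versus-N relationships, and the 1% factor below is the familiar IPCC-style default, an assumption rather than the paper's value.

```python
def n2o_credit_tCO2e(n_rate_baseline, n_rate_optimized, area_ha,
                     emission_factor=0.01, gwp=298):
    """Illustrative NERP-style credit: CO2-equivalent emissions avoided by
    lowering fertilizer N from a baseline rate to an optimized rate.

    n_rate_* are in kg N/ha; emission_factor is the fraction of applied N
    emitted as N2O-N (linear IPCC-style default, an assumption here)."""
    delta_n = (n_rate_baseline - n_rate_optimized) * area_ha  # kg N avoided
    n2o_kg = delta_n * emission_factor * (44.0 / 28.0)  # N2O-N -> N2O mass
    return n2o_kg * gwp / 1000.0                        # tonnes CO2e

# e.g. cutting 160 -> 130 kg N/ha over 100 ha yields about 14 t CO2e:
print(round(n2o_credit_tCO2e(160, 130, 100), 1), "t CO2e")
```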
Abstract:
The deformation behaviour of microcrystalline (mc) and nanocrystalline (nc) Mg-5%Al alloys produced by hot extrusion of ball-milled powders was investigated using instrumented indentation tests. The hardness values of both the mc and nc alloys exhibited an indentation size effect (ISE), with the nc alloys showing a weaker ISE. Highly localized dislocation activity resulted in a small activation volume and hence an enhanced strain rate sensitivity. The relatively higher strain rate sensitivity and the negative Hall-Petch relationship suggest an increasingly important role for grain-boundary-mediated mechanisms as the grain size decreases into the nanometre regime.
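For reference, the strain rate sensitivity m and the apparent activation volume V* extracted from such indentation tests are conventionally defined as follows (standard definitions, not taken from this paper):

```latex
m = \frac{\partial \ln H}{\partial \ln \dot{\varepsilon}}, \qquad
V^{*} = \sqrt{3}\,k_{B}T\,\frac{\partial \ln \dot{\varepsilon}}{\partial \sigma}
      \approx 3\sqrt{3}\,k_{B}T\,\frac{\partial \ln \dot{\varepsilon}}{\partial H},
```

where H is the hardness (with flow stress σ ≈ H/3), ε̇ the indentation strain rate, k_B Boltzmann's constant, and T the absolute temperature. A small activation volume thus corresponds to a large rate sensitivity, consistent with the abstract's argument.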
Abstract:
Purpose: To compare subjective blur limits for cylinder and defocus. ---------- Method: Blur was induced with a deformable, adaptive-optics mirror when either the subjects' own astigmatisms were corrected or when both astigmatisms and higher-order aberrations were corrected. Subjects were cyclopleged and had 5 mm artificial pupils. Black letter targets (0.1, 0.35 and 0.6 logMAR) were presented on white backgrounds. ---------- Results: For ten subjects, blur limits were approximately 50% greater for cylinder than for defocus (in diopters). While axis had considerable effects for individual subjects, the overall effect was not strong, with the 0° (or 180°) axis having about 20% greater limits than oblique axes. In a second experiment with text (equivalent in angle to N10 print at 40 cm distance), cylinder blur limits for 6 subjects were approximately 30% greater than those for defocus; this percentage was slightly smaller than for the three letters. Blur limits for the text were intermediate between those for 0.35 logMAR and 0.6 logMAR letters. Extensive blur-limit measurements for one subject with single letters did not show the expected interactions between target detail orientation and cylinder axis. ---------- Conclusion: Subjective blur limits for cylinder are 30%-50% greater than those for defocus, with the overall influence of cylinder axis being about 20%.
Abstract:
Purpose: We compared subjective blur limits for defocus and the higher-order aberrations of coma, trefoil, and spherical aberration. ---------- Methods: Spherical aberration was presented in both Zernike and Seidel forms. Black letter targets (0.1, 0.35, and 0.6 logMAR) on white backgrounds were blurred using an adaptive optics system for six subjects under cycloplegia with 5 mm artificial pupils. Three blur criteria of just noticeable, just troublesome, and just objectionable were used. ---------- Results: When expressed as wave aberration coefficients, the just noticeable blur limits for coma and trefoil were similar to those for defocus, whereas the just noticeable limits for Zernike spherical aberration and Seidel spherical aberration (the latter given as an “rms equivalent”) were considerably smaller and larger, respectively, than defocus limits. ---------- Conclusions: Blur limits increased more quickly for the higher-order aberrations than for defocus as the criterion changed from just noticeable to just troublesome and then to just objectionable.
Abstract:
In rate-based flow control for the ATM Available Bit Rate service, fairness is an important requirement: each flow should be allocated a fair share of the available bandwidth in the network. Max–min fairness, which is widely adopted in ATM, is appropriate only when the minimum cell rates (MCRs) of the flows are zero or neglected. Generalised max–min (GMM) fairness extends the principle of max–min fairness to accommodate MCRs. In this paper, we discuss the formulation of the GMM fair rate allocation, propose a centralised algorithm, analyse its bottleneck structure, and develop an efficient distributed explicit rate allocation algorithm to achieve GMM fairness in an ATM network. The study addresses certain theoretical and practical issues of GMM fair rate allocation.
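A centralised progressive-filling sketch in the spirit of GMM fairness (an illustration under stated assumptions, not the paper's algorithm): start every flow at its MCR, raise all unfrozen flows' rates at the same pace, and freeze the flows crossing each link as it saturates.

```python
def gmm_allocate(flows, links):
    """flows: {flow_id: {"mcr": float, "path": set_of_link_ids}}
    links: {link_id: capacity}
    Assumes the MCRs are feasible (their sum fits on every link)."""
    rate = {f: spec["mcr"] for f, spec in flows.items()}
    active = set(flows)                      # flows whose rate can still grow
    while active:
        increments = []
        for link, cap in links.items():
            crossing = [f for f in active if link in flows[f]["path"]]
            if crossing:
                used = sum(rate[f] for f in flows if link in flows[f]["path"])
                increments.append(((cap - used) / len(crossing), link))
        if not increments:
            break                            # remaining flows cross no link
        inc, bottleneck = min(increments, key=lambda t: t[0])
        for f in active:
            rate[f] += inc
        # freeze every flow crossing the newly saturated bottleneck link
        active -= {f for f in active if bottleneck in flows[f]["path"]}
    return rate

# Two flows share link "A"; both end up with MCR plus an equal increment:
print(gmm_allocate(
    {1: {"mcr": 1.0, "path": {"A"}}, 2: {"mcr": 4.0, "path": {"A", "B"}}},
    {"A": 10.0, "B": 8.0}))                  # -> {1: 3.5, 2: 6.5}
```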
Abstract:
In this paper, weighted fair rate allocation for the ATM available bit rate (ABR) service is discussed with attention to the minimum cell rate (MCR). Weighted fairness with an MCR guarantee has been discussed recently in the literature. In those studies, each ABR virtual connection (VC) is first allocated its MCR, and the remaining available bandwidth is then shared among the ABR VCs according to their weights. For the weighted fairness defined in this paper, the bandwidth is first allocated according to each VC's weight; if a VC's weighted share is less than its MCR, it is allocated its MCR instead of the weighted share. This weighted fairness with an MCR guarantee is referred to as extended weighted (EXW) fairness. Certain theoretical issues related to EXW, such as its global solution and bottleneck structure, are first discussed in the paper. A distributed explicit rate allocation algorithm is then proposed to achieve EXW fairness in ATM networks. The algorithm is a general-purpose explicit rate algorithm in the sense that it can realise almost all of the fairness principles proposed for ABR so far, with only minor modifications needed.
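On a single link, the EXW rule as stated above can be computed by iteratively pinning at MCR those VCs whose weighted share falls short, then renormalising the weighted shares over the remaining capacity. This is one consistent reading of the rule; the paper's global solution over a whole network may differ in detail.

```python
def exw_shares(capacity, vcs):
    """vcs: {vc_id: {"w": weight, "mcr": minimum cell rate}}
    Each VC gets the larger of its weighted share and its MCR, with the
    weighted shares recomputed over the capacity left after paying out
    the VCs pinned at their MCRs. Assumes the MCRs fit within capacity."""
    pinned = set()
    while True:
        free = capacity - sum(vcs[v]["mcr"] for v in pinned)
        weights = {v: vcs[v]["w"] for v in vcs if v not in pinned}
        total_w = sum(weights.values())
        share = {v: free * w / total_w for v, w in weights.items()}
        below = {v for v, s in share.items() if s < vcs[v]["mcr"]}
        if not below:
            return {**{v: vcs[v]["mcr"] for v in pinned}, **share}
        pinned |= below

print(exw_shares(100.0, {"A": {"w": 3, "mcr": 5.0},
                         "B": {"w": 1, "mcr": 40.0}}))
# -> {'B': 40.0, 'A': 60.0}  (B's weighted share 25 < MCR, so B is lifted)
```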
Abstract:
A high-performance, low-computational-complexity rate-based flow control algorithm which can avoid congestion and achieve fairness is important to the ATM available bit rate service. The explicit rate allocation algorithm proposed by Kalampoukas et al. is designed to achieve max–min fairness in ATM networks. It has several attractive features, such as a fixed computational complexity of O(1) and guaranteed convergence to max–min fairness. In this paper, certain drawbacks of the algorithm, such as the severe overload of an outgoing link during the transient period and the non-conforming use of the current cell rate field in a resource management cell, are identified and analysed; a new algorithm which overcomes these drawbacks is proposed. The proposed algorithm simplifies the rate computation as well. Compared with Kalampoukas's algorithm, it has better performance in terms of congestion avoidance and smoothness of rate allocation.
Abstract:
We alternately measured on-road and in-vehicle ultrafine (<100 nm) particle (UFP) concentrations for 5 passenger vehicles spanning an age range of 18 years. A range of cabin ventilation settings was assessed during 301 trips through a 4 km road tunnel in Sydney, Australia. Outdoor airflow (ventilation) rates under these settings were quantified on open roads using tracer gas techniques. Significant variability in tunnel-trip-average median in-cabin/on-road (I/O) UFP ratios was observed (0.08 to ∼1.0). Based on data spanning all test automobiles and ventilation settings, a positive linear relationship was found between outdoor airflow rate and I/O ratio, with the former accounting for a substantial proportion of variation in the latter (R² = 0.81). UFP concentrations recorded in cabin during tunnel travel were significantly higher than those reported by comparable studies performed on open roadways. A simple mathematical model predicted tunnel-trip-average in-cabin UFP concentrations with good accuracy. Our data indicate that under certain conditions, in-cabin UFP exposures incurred during tunnel travel may contribute significantly to daily exposure. The UFP exposure of automobile occupants appears strongly related to their choice of ventilation setting and vehicle.
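The abstract does not specify the model's form; a standard single-compartment mass balance is one plausible shape for it (an assumption here, with all parameter values invented), and it reproduces the reported behaviour that the I/O ratio rises with outdoor airflow:

```python
import math

def cabin_ufp(c_on_road, q_vent, cabin_vol=3.0, k_dep=3.0, p=1.0,
              c0=0.0, t=0.1):
    """Single-compartment mass balance for in-cabin UFP concentration
    after t hours of tunnel travel at constant on-road concentration:

        dC/dt = (Q/V) * (P*C_out - C) - k_dep * C

    q_vent: outdoor airflow Q (m^3/h); cabin_vol: cabin volume V (m^3);
    k_dep: in-cabin deposition rate (1/h); p: particle penetration."""
    loss = q_vent / cabin_vol + k_dep                   # total loss rate (1/h)
    c_ss = (q_vent / cabin_vol) * p * c_on_road / loss  # steady state
    return c_ss + (c0 - c_ss) * math.exp(-loss * t)

# Steady-state I/O ratio climbs with ventilation rate (values illustrative):
for q in (10, 50, 150):                                 # m^3/h
    print(q, round(cabin_ufp(1.0, q, t=10.0), 2))       # -> 0.53, 0.85, 0.94
```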
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). First, the experimental methods for storing two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. One method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower-magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
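The fringe-counting thermometry described in this abstract rests on a standard interferometric relation (stated here as background, not quoted from the thesis): each full intensity oscillation corresponds to the birefringent optical path difference changing by one wavelength, so the temperature change per oscillation, and after N counted oscillations, is

```latex
\Delta T_{\mathrm{osc}} \;\approx\; \frac{\lambda}{L\,\bigl|\partial(\Delta n)/\partial T\bigr|},
\qquad \Delta T \;\approx\; N\,\Delta T_{\mathrm{osc}},
```

where λ is the probe wavelength, L the crystal length along the beam, and Δn = n_e − n_o the birefringence of the lithium niobate crystal.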