908 results for Static-order-trade-off
Abstract:
It is well known that millimetre waves can pass through clothing. In short-range applications, such as the scanning of people for security purposes, operating at W band can therefore be an advantage: the equipment is smaller than its Ka-band counterpart while offering similar performance.
In this paper a W-band mechanically scanned imager designed for imaging weapons and contraband hidden under clothing is discussed. The imager is based on a modified folded conical-scan technology reported previously. In this design an additional optical element is added to give a Cassegrain configuration in image space. This increases the effective focal length, enables improved sampling of the image and provides more space for the receivers. The imager is constructed from low-cost materials such as polystyrene, polythene and printed-circuit-board materials. The trade-off between image spatial resolution and thermal sensitivity is discussed.
Abstract:
A new model to explain animal spacing, based on a trade-off between foraging efficiency and predation risk, is derived from biological principles. The model is able to explain not only the general tendency for animal groups to form, but some of the attributes of real groups. These include the independence of mean animal spacing from group population, the observed variation of animal spacing with resource availability and also with the probability of predation, and the decline in group stability with group size. The appearance of "neutral zones" within which animals are not motivated to adjust their relative positions is also explained. The model assumes that animals try to minimize a cost potential combining the loss of intake rate due to foraging interference and the risk from exposure to predators. The cost potential describes a hypothetical field giving rise to apparent attractive and repulsive forces between animals. Biologically based functions are given for the decline in interference cost and increase in the cost of predation risk with increasing animal separation. Predation risk is calculated from the probabilities of predator attack and predator detection as they vary with distance. Using example functions for these probabilities and foraging interference, we calculate the minimum cost potential for regular lattice arrangements of animals before generalizing to finite-sized groups and random arrangements of animals, showing optimal geometries in each case and describing how potentials vary with animal spacing. (C) 1999 Academic Press.
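The cost-potential idea in this abstract can be illustrated numerically. The sketch below uses hypothetical exponential forms (not the paper's actual example functions) for the decline of interference cost and the rise of predation-risk cost with separation; the minimum of the combined potential then gives the preferred spacing.

```python
import math

def cost_potential(d, i0=1.0, a=1.0, r0=1.0, b=3.0):
    """Total cost at separation d, with illustrative functional forms:
    foraging interference decays with distance, while the cost of
    predation risk grows with distance, as described qualitatively
    in the abstract. i0, a, r0, b are hypothetical parameters."""
    interference = i0 * math.exp(-d / a)        # declines with separation
    predation = r0 * (1.0 - math.exp(-d / b))   # rises with separation
    return interference + predation

def optimal_spacing(steps=20000, d_max=20.0):
    """Grid search for the separation that minimises the cost potential."""
    best_d = min((i * d_max / steps for i in range(steps + 1)),
                 key=cost_potential)
    return best_d, cost_potential(best_d)
```

With these parameters the minimum lies at d = (3/2) ln 3, roughly 1.65: close enough for interference to be low, far enough that predation exposure has not yet dominated.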
Abstract:
The decision on when to emerge from the safety of a roost and forage for prey is thought to result from the trade-off between peak insect abundance and predation pressure for bats. In this study we show that the velvety free-tailed bat Molossus molossus emerges just after sunset and just before sunrise for very short foraging bouts (on average 82.2 min of foraging per night). Contrary to previous studies, the bats remain inactive in their roost between these activity bouts. Activity was measured over two complete lunar cycles and there was no indication that the phase of the moon influenced emergence time or the number of bats that emerged from the roost. These data suggest that M. molossus is an example of an aerial-hawking bat whose foraging behaviour is adapted to the compromise between the need to exploit the highest prey availability and the need to avoid predation.
Abstract:
In this paper, a new reconfigurable multi-standard architecture for integer-pixel motion estimation is introduced and a standard-cell-based chip design study is presented. The architecture has been designed to cover most of the common block-based video compression standards, including MPEG-2, MPEG-4, H.263, H.264, AVS and WMV-9. It exhibits simple control, high throughput and relatively low hardware cost, and is highly competitive when compared with existing designs for specific video standards. It can also, through the use of control signals, be dynamically reconfigured at run-time to accommodate different system constraints, such as the trade-off between power dissipation and video quality. The computational rates achieved make the circuit suitable for high-end video processing applications. Silicon design studies indicate that circuits based on this approach incur only a relatively small penalty in terms of power dissipation and silicon area when compared with implementations for specific standards.
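The integer-pixel motion estimation that such architectures accelerate can be sketched in software as a full-search block match minimising the sum of absolute differences (SAD). This is an illustrative sketch of the underlying arithmetic, not the paper's hardware design:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def full_search(ref, cur, bx, by, n=4, radius=2):
    """Integer-pixel full-search motion estimation for the n x n block
    of frame `cur` whose top-left corner is (bx, by), searching frame
    `ref` within +/- radius pixels. Returns (dx, dy, best_sad)."""
    h, w = len(ref), len(ref[0])
    block = [row[bx:bx + n] for row in cur[by:by + n]]
    best = (0, 0, float("inf"))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = bx + dx, by + dy
            if 0 <= x and x + n <= w and 0 <= y and y + n <= h:
                candidate = [row[x:x + n] for row in ref[y:y + n]]
                s = sad(block, candidate)
                if s < best[2]:
                    best = (dx, dy, s)
    return best
```

Hardware implementations pipeline and parallelise exactly this SAD accumulation; the multi-standard aspect lies in making block size, search range and cost function configurable at run-time.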
Abstract:
Animals often show behavioural plasticity with respect to predation risk but also show behavioural syndromes in terms of consistency of responses to different stimuli. We examine these features in the freshwater pearl mussel. These bivalves often aggregate presumably to reduce predation risk to each individual. Predation risk, however, will be higher in the presence of predator cues. Here we use dimming light, vibration and touch as novel stimuli to examine the trade-off between motivation to feed and motivation to avoid predation. We present two experiments that each use three sequential novel stimuli to cause the mussels to close their valves and hence cease feeding. We find that mussels within a group showed shorter closure times than solitary mussels, consistent with decreased vulnerability to predation in group-living individuals. Mussels exposed to the odour of a predatory crayfish showed longer closures than control mussels, highlighting the predator assessment abilities of this species. However, individuals showed significant consistency in their closure responses across the trial series, in line with behavioural syndrome theory. Our results show that bivalves trade-off feeding and predator avoidance according to predation risk but the degree to which this is achieved is constrained by behavioural consistency. © 2011 Elsevier B.V.
Abstract:
Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader itself must know that it has been elected. This paper studies the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. The "obvious" lower bounds of Ω(m) messages (m is the number of edges in the network) and Ω(D) time (D is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results showing that even Ω(n) (n is the number of nodes in the network) is not a message lower bound in complete networks make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds had not been established even for deterministic algorithms (except for the limited case of comparison algorithms, where it was additionally required that some nodes may not wake up spontaneously and that D and n were not known).
We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (algorithms that work for all graphs), apply to every D, m and n, and hold even if D, m and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm; an O(D)-time algorithm is known. A slight adaptation of our lower-bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.
An interesting fundamental problem is whether both upper bounds (messages and time) can be attained simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting.) We answer this question partially by presenting a randomized algorithm that matches both complexities in some cases, which already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms whose bounds trade off messages against time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
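The notion of implicit election can be illustrated with a toy sketch (not the paper's algorithm): every node draws a random ID, the IDs are flooded through the network, and a node elects itself only if its own draw is the maximum it has seen. With long random IDs the winner is unique with overwhelming probability, and only the winner learns that it is the leader.

```python
import random

def implicit_leader_election(n, id_bits=64, seed=0):
    """Toy implicit election on n nodes: each node draws a random ID;
    flooding delivers the maximum ID to every node, and a node declares
    itself leader only if its own draw equals that maximum. Only the
    winner ever learns that it is the leader (the implicit requirement).
    With 64-bit IDs a tie is astronomically unlikely."""
    rng = random.Random(seed)
    ids = [rng.getrandbits(id_bits) for _ in range(n)]
    max_id = max(ids)
    return [node for node, node_id in enumerate(ids) if node_id == max_id]
```

Flooding every ID costs O(m) messages per forwarded ID; the paper's contribution is precisely about how far below naive flooding the message complexity can be pushed, and at what cost in time.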
Abstract:
Incorporating ecological processes and animal behaviour into Species Distribution Models (SDMs) is difficult. In species with a central resting or breeding place, there can be conflict between the environmental requirements of the 'central place' and the foraging habitat. We apply a multi-scale SDM to examine habitat trade-offs between the central place (roost sites) and foraging habitat in Myotis nattereri. We validate these derived associations using habitat selection from behavioural observations of radio-tracked bats. A Generalised Linear Model (GLM) of roost occurrence using land-cover variables at mixed spatial scales indicated that roost occurrence was positively associated with woodland on a fine scale and pasture on a broad scale. Habitat selection of radio-tracked bats mirrored the SDM, with bats selecting woodland in the immediate vicinity of individual roosts but avoiding this habitat in foraging areas, whilst pasture was significantly positively selected in foraging areas. Using habitat selection derived from radio-tracking enables a multi-scale SDM to be interpreted in a behavioural context. We suggest that the multi-scale SDM of M. nattereri describes a trade-off between the central place and foraging habitat. Multi-scale methods provide a greater understanding of the ecological processes which determine where species occur and allow integration of behavioural processes into SDMs. The findings have implications when assessing the resource use of a species at a single point in time: doing so could lead to misinterpretation of habitat requirements, as these can change within a short time period depending on specific behaviour, particularly if detectability changes with behaviour. © 2011 Gesellschaft für Ökologie.
Abstract:
Efficacy of inverse planning is becoming increasingly important for advanced radiotherapy techniques. This study's aims were to validate multicriteria optimization (MCO) in RayStation (v2.4, RaySearch Laboratories, Sweden) against standard intensity-modulated radiation therapy (IMRT) optimization in Oncentra (v4.1, Nucletron BV, the Netherlands) and to characterize dose differences due to conversion of navigated MCO plans into deliverable multileaf collimator apertures. Step-and-shoot IMRT plans were created for 10 patients with localized prostate cancer using both standard optimization and MCO. Acceptable standard IMRT plans with minimal average rectal dose were chosen for comparison with deliverable MCO plans. For the MCO plans, the trade-off was managed through a user interface that permits continuous navigation between fluence-based plans. Navigated MCO plans were made deliverable at incremental steps along a trajectory between maximal target homogeneity and maximal rectal sparing. Dosimetric differences between navigated and deliverable MCO plans were also quantified. MCO plans chosen as acceptable under navigated and deliverable conditions resulted in similar rectal sparing compared with standard optimization (33.7 ± 1.8 Gy vs 35.5 ± 4.2 Gy, p = 0.117). The dose differences between navigated and deliverable MCO plans increased as higher priority was placed on rectal avoidance. If the best possible deliverable MCO plan was chosen, a significant reduction in rectal dose was observed in comparison with standard optimization (30.6 ± 1.4 Gy vs 35.5 ± 4.2 Gy, p = 0.047). These improvements were, however, to some extent at the expense of less conformal dose distributions, which resulted in significantly higher doses to the bladder for 2 of the 3 tolerance levels. In conclusion, similar IMRT plans can be created for patients with prostate cancer using MCO compared with standard optimization.
Limitations exist within MCO regarding conversion of navigated plans to deliverable apertures, particularly for plans that emphasize avoidance of critical structures. Minimizing these differences would result in better quality treatments for patients with prostate cancer who were treated with radiotherapy using MCO plans.
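The continuous navigation between fluence-based plans described above amounts, conceptually, to forming convex combinations of pre-computed Pareto-optimal dose distributions. A minimal sketch with illustrative voxel doses (hypothetical numbers, not RayStation's interface or this study's data):

```python
def navigate(dose_distributions, weights):
    """Navigated plan as a convex combination of pre-computed
    Pareto-optimal dose distributions (one dose value per voxel).
    Weights must be non-negative and sum to 1; moving the weights
    is the 'slider' of MCO navigation."""
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    n_voxels = len(dose_distributions[0])
    return [sum(w * plan[v] for w, plan in zip(weights, dose_distributions))
            for v in range(n_voxels)]

# Two anchor plans over two illustrative voxels (target, rectum), in Gy:
# plan 0 favours target coverage, plan 1 favours rectal sparing.
plans = [[70.0, 40.0], [60.0, 20.0]]
navigated = navigate(plans, [0.5, 0.5])
```

This interpolation is exact at the fluence level; the dose differences quantified in the study arise afterwards, when the navigated fluence is converted into deliverable apertures.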
Abstract:
Fibre-Reinforced Plastics (FRPs) have been used in civil aerospace vehicles for decades. The current state of the art in airframe design and manufacture results in approximately half the airframe mass being attributable to FRP materials. The continual increase in the use of FRP materials over metallic alloys is attributable to the material's superior specific strength and stiffness, fatigue performance and corrosion resistance. However, the full potential of these materials has yet to be exploited, as analysis methods that predict physical failure with equal accuracy and robustness are not yet available. The result is a conservative approach to design, but one that can bring benefit via increased inspection intervals and reduced cost over the vehicle life. The challenge is that the methods used in practice are based on empirical tests; the real relationships and drivers are difficult to discern in this complex process, so the trade-off decision is challenging and uncertain. The aim of this feasibility study was to scope a viable process that could help develop rules and relationships based on the fundamental mechanics of composite materials and the economics of production and operation, and thereby enhance understanding of the role and impact of design allowables across the life of a composite structure.
Abstract:
When implementing autonomic management of multiple non-functional concerns, a trade-off must be found between the ability to develop management of the individual concerns independently (following the separation-of-concerns principle) and the detection and resolution of conflicts that may arise when the independently developed management code is combined. Here we discuss strategies to establish this trade-off and introduce a model-checking-based methodology aimed at simplifying the discovery and handling of conflicts arising from the deployment, within the same parallel application, of independently developed management policies. Preliminary results are shown demonstrating the feasibility of the approach.
Abstract:
Monte Carlo calculations of quantum yield in PtSi/p-Si infrared detectors are carried out taking into account the presence of a spatially distributed barrier potential. In the 1–4 μm wavelength range it is found that the spatial inhomogeneity of the barrier has no significant effect on the overall device photoresponse. However, above λ = 4.0 μm, and particularly as the cut-off wavelength (λ ≈ 5.5 μm) is approached, these calculations reveal a difference between the homogeneous and inhomogeneous barrier photoresponse which becomes increasingly significant and exceeds 50% at λ = 5.3 μm. It is, in fact, the inhomogeneous barrier which displays an increased photoyield, a feature confirmed by approximate analytical calculations assuming a symmetric Gaussian spatial distribution of the barrier. Furthermore, the importance of the silicide layer thickness in optimizing device efficiency is underlined as a trade-off between maximizing light absorption in the silicide layer and optimizing the internal yield. The results presented here address important features which determine the photoyield of PtSi/Si Schottky diodes at energies below the Si absorption edge, and just above the Schottky barrier height in particular.
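The qualitative effect of a Gaussian barrier distribution near cut-off can be reproduced with a few lines of Monte Carlo, assuming a modified-Fowler internal yield and illustrative barrier parameters (a 0.22 eV mean barrier with a 0.02 eV spread, roughly consistent with a ~5.5 μm cut-off; these numbers are assumptions, not taken from the paper). Averaging over the distribution raises the yield above the homogeneous-barrier value at photon energies just above the mean barrier, because low-barrier patches contribute disproportionately.

```python
import random

def fowler_yield(photon_ev, barrier_ev):
    """Modified-Fowler internal yield, proportional to
    (hv - phi)^2 / hv above the barrier and zero below it."""
    if photon_ev <= barrier_ev:
        return 0.0
    return (photon_ev - barrier_ev) ** 2 / photon_ev

def mc_inhomogeneous_yield(photon_ev, phi_mean=0.22, phi_sigma=0.02,
                           samples=200_000, seed=1):
    """Monte Carlo average of the Fowler yield over a symmetric
    Gaussian spatial distribution of the barrier height."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        phi = rng.gauss(phi_mean, phi_sigma)
        total += fowler_yield(photon_ev, phi)
    return total / samples
```

At a photon energy of 0.234 eV (about λ = 5.3 μm), the averaged yield exceeds the homogeneous-barrier yield by a large factor in this toy model, mirroring the trend the abstract reports.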
Abstract:
Continuous research endeavors on hard turning (HT), covering both machine tools and cutting tools, have made previously reported daunting limits easily attainable in the modern scenario. This presents an opportunity for a systematic investigation to establish the currently attainable limits of hard turning on a CNC turret lathe. Accordingly, this study aims to contribute to the existing literature by providing the latest experimental results on hard turning of AISI 4340 steel (69 HRC) using a CBN cutting tool. An orthogonal array was developed using a set of judiciously chosen cutting parameters, and longitudinal turning trials were carried out in accordance with a well-designed full-factorial-based Taguchi matrix. The speculation indeed proved correct, as a mirror-finished, optical-quality machined surface (an average surface roughness of 45 nm) was achieved by this conventional cutting method. Furthermore, signal-to-noise (S/N) ratio analysis, analysis of variance (ANOVA) and multiple regression analysis were carried out on the experimental datasets to ascertain the dominance of each machining variable in dictating the machined surface roughness and to optimize the machining parameters. One of the key findings was that when the feed rate during hard turning becomes very low (about 0.02 mm/rev), it can alone be the most significant (99.16%) parameter influencing the machined surface roughness (Ra). However, it has also been shown that a low feed rate results in high tool wear, so the selection of machining parameters for hard turning must be governed by a trade-off between cost and quality considerations.
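The smaller-the-better S/N ratio used in this kind of Taguchi analysis of surface roughness is straightforward to compute. The sketch below uses illustrative Ra values, not the paper's dataset:

```python
import math

def sn_smaller_is_better(values):
    """Taguchi smaller-the-better signal-to-noise ratio in dB:
    S/N = -10 * log10(mean(y^2)). For a minimised response such as
    surface roughness Ra, a higher S/N ratio is better."""
    mean_sq = sum(v * v for v in values) / len(values)
    return -10.0 * math.log10(mean_sq)

# Illustrative replicate Ra measurements (in micrometres) for one
# hypothetical parameter combination near the 45 nm result reported.
sn = sn_smaller_is_better([0.045, 0.050])
```

Ranking the parameter combinations by this ratio, and decomposing its variation with ANOVA, is how contributions such as the 99.16% figure for feed rate are obtained.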
Abstract:
A credal network is a graph-theoretic model that represents imprecision in joint probability distributions. An inference in a credal net aims at computing an interval for the probability of an event of interest. Algorithms for inference in credal networks can be divided into exact and approximate; the selection of an algorithm is based on a trade-off between the time one is willing to spend on a particular calculation and the quality of the computed values. This paper presents an algorithm, called IDS, that combines exact and approximate methods for computing inferences in polytree-shaped credal networks. The algorithm provides an approach to trading time for precision when making inferences in credal nets.
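The interval computed by a credal-network inference can be illustrated on a two-node net A → B in which each local probability is known only up to an interval. Because P(B) is multilinear in the local parameters, its exact bounds are attained at vertices of the credal sets, so enumerating the extreme points suffices. A toy example (not the IDS algorithm, which targets larger polytrees where enumeration is infeasible):

```python
from itertools import product

def bounds_on_b(p_a, p_b_given_a, p_b_given_not_a):
    """Exact interval for P(B) in a two-node credal net A -> B.
    Each argument is a (lower, upper) probability interval; since
    P(B) = P(A)P(B|A) + (1 - P(A))P(B|~A) is multilinear, the exact
    bounds occur at vertices of the intervals, so we enumerate them."""
    values = [pa * pba + (1.0 - pa) * pbna
              for pa, pba, pbna in product(p_a, p_b_given_a, p_b_given_not_a)]
    return min(values), max(values)

# Hypothetical local intervals: P(A), P(B|A), P(B|~A).
lo, hi = bounds_on_b((0.2, 0.4), (0.7, 0.9), (0.1, 0.2))
```

Vertex enumeration is exponential in the number of imprecise parameters, which is exactly why combined exact/approximate schemes such as IDS are needed for realistic networks.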
Abstract:
In multi-terminal high-voltage direct current (HVDC) grids, the widely deployed droop control strategies cause a non-uniform voltage deviation across the grid, determined by the network topology and the droop settings. This voltage deviation results in an inconsistent power flow pattern when the dispatch references are changed, which can be detrimental to the operation and seamless integration of HVDC grids. In this paper, a novel droop-setting design method is proposed to address this problem and achieve more precise power dispatch. The effects of voltage deviations on power-sharing accuracy and transmission loss are analysed. The paper shows that there is a trade-off between minimizing the voltage deviation, ensuring proper power delivery and reducing the total transmission loss in the droop-setting design. The efficacy of the proposed method is confirmed by simulation studies.
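The idealized droop behaviour that such design methods refine can be sketched as follows: if the dc voltage deviation were uniform across the grid (i.e. neglecting the line drops that, per the abstract, make it non-uniform in practice), each converter with droop gain k_i would deviate from its dispatch reference in proportion to k_i. The values below are illustrative only:

```python
def droop_dispatch(gains, p_refs, p_total):
    """Idealized steady-state droop sharing: converter i outputs
    p_refs[i] + gains[i] * dv, where the common voltage deviation dv
    is chosen so the outputs sum to the total demand. Assumes a
    uniform deviation, i.e. line resistances are neglected."""
    # p_total = sum(p_refs) + dv * sum(gains)  =>  solve for dv
    dv = (p_total - sum(p_refs)) / sum(gains)
    outputs = [p + k * dv for p, k in zip(p_refs, gains)]
    return dv, outputs

# Two converters, hypothetical gains and references (per-unit values).
dv, outputs = droop_dispatch([2.0, 1.0], [100.0, 50.0], 180.0)
```

In a real HVDC grid the deviation seen by each converter differs with its electrical distance from the load change, so the shares drift from this ideal ratio; compensating for that in the droop-setting design is the problem the paper addresses.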
Abstract:
Mixed-flow turbines can offer improvements over the radial turbines typically used in automotive turbochargers with regard to transient performance and low-velocity-ratio efficiency. Turbine rotor mass dominates the rotating inertia of the turbocharger, and any reduction of mass at the outer radii of the wheel, including the rotor back-disk, can significantly reduce this inertia and improve the acceleration of the assembly. Off-design, low-velocity-ratio conditions are typified by highly tangential flow at the rotor inlet, and a non-zero inlet blade angle is preferred for such operating conditions. This is achievable in a mixed-flow turbine without increasing bending stresses within the rotor blade, which is beneficial in high-speed, high-inlet-temperature turbine design. A range of mixed-flow turbine rotors was designed with varying cone angle and inlet blade angle, and each was assessed at a number of operating points. These rotors were based on an existing radial-flow turbine, and both the hub and shroud contours and the exducer geometry were maintained. The inertia of each rotor was also considered. The results indicated a trade-off between efficiency and inertia for the rotors, and certain designs may be beneficial for the transient performance of downsized, turbocharged engines.