Abstract:
The Ziegler Reservoir fossil site near Snowmass Village, Colorado, provides a unique opportunity to reconstruct high-altitude paleoenvironmental conditions in the Rocky Mountains during the last interglacial period. We used four different techniques to establish a chronological framework for the site. Radiocarbon dating of lake organics, bone collagen, and shell carbonate, together with in situ cosmogenic ¹⁰Be and ²⁶Al ages on a boulder on the crest of the moraine that impounded the lake, suggests that the sediments that hosted the fossils are between ~140 ka and >45 ka in age. Uranium-series ages of vertebrate remains generally fall within these bounds, but extremely low uranium concentrations and evidence of open-system behavior limit their utility. Optically stimulated luminescence (OSL) ages (n = 18) obtained from fine-grained quartz maintain stratigraphic order, are replicable, and provide reliable ages for the lake sediments. Analysis of the equivalent dose (Dₑ) dispersion of the OSL samples showed that the sediments were fully bleached prior to deposition, and the low scatter suggests that eolian processes were likely the dominant transport mechanism for fine-grained sediments into the lake. The resulting ages show that the fossil-bearing sediments span the latest part of marine isotope stage (MIS) 6, all of MIS 5 and MIS 4, and the earliest part of MIS 3.
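For illustration, the dispersion check described above can be approximated in a few lines of code. This is a minimal sketch using hypothetical Dₑ values and a simple coefficient of variation; published OSL studies typically apply formal statistical models (e.g., a central age model) rather than this shortcut.

```python
import numpy as np

# Hypothetical equivalent-dose (De) estimates in Gy for one OSL sample;
# real analyses use many aliquots and formal age models.
de_values = np.array([112.0, 118.5, 115.2, 110.8, 117.1, 113.9])

mean_de = de_values.mean()
dispersion = de_values.std(ddof=1) / mean_de  # coefficient of variation

# Low scatter is commonly read as evidence that grains were well bleached
# before burial; high scatter suggests partial bleaching or mixing.
print(f"mean De = {mean_de:.1f} Gy, relative dispersion = {dispersion:.1%}")
```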
Abstract:
Molecular communication is set to play an important role in the design of complex biological and chemical systems. An important class of molecular communication systems is based on the timing channel, where information is encoded in the delay of the transmitted molecule, a synchronous approach. At present, a widely used modeling assumption is perfect synchronization between the transmitter and the receiver. Unfortunately, this assumption is unlikely to hold in most practical molecular systems. To remedy this, we introduce a clock into the model, leading to the molecular timing channel with synchronization error. To quantify the behavior of this new system, we derive upper and lower bounds on the variance-constrained capacity, which we view as an intermediate step between the mean-delay and the peak-delay constrained capacities. By numerically evaluating our bounds, we obtain a key practical insight: the drift velocity of the clock links does not need to be significantly larger than the drift velocity of the information link in order to approach the variance-constrained capacity of the perfectly synchronized system.
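A schematic form of the timing channel with synchronization error may help fix ideas. The notation below is ours, not the paper's: X is the controlled release time, Z the random propagation delay of the information-carrying molecule (inverse Gaussian under drift-diffusion over distance ℓ with drift v and diffusion coefficient D), and S the residual offset left after clock synchronization.

```latex
\[
  Y = X + Z + S, \qquad
  Z \sim \mathrm{IG}\!\left(\tfrac{\ell}{v},\, \tfrac{\ell^{2}}{2D}\right),
\]
\[
  \text{mean constraint: } \mathbb{E}[X] \le \mu, \qquad
  \text{variance constraint: } \mathrm{Var}(X) \le \sigma^{2}, \qquad
  \text{peak constraint: } 0 \le X \le A .
\]
```

The variance constraint sits between the other two in the sense that it limits the spread of release times without imposing a hard deadline.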
Abstract:
We present results from SEPPCoN, an on-going Survey of the Ensemble Physical Properties of Cometary Nuclei. In this report we discuss mid-infrared measurements of the thermal emission from 89 nuclei of Jupiter-family comets (JFCs). All data were obtained in 2006 and 2007 using imaging capabilities of the Spitzer Space Telescope. The comets were typically 4-5 AU from the Sun when observed and most showed only a point-source with little or no extended emission from dust. For those comets showing dust, we used image processing to photometrically extract the nuclei. For all 89 comets, we present new effective radii, and for 57 comets we present beaming parameters. Thus our survey provides the largest compilation of radiometrically-derived physical properties of nuclei to date. We have six main conclusions: (a) The average beaming parameter of the JFC population is 1.03 ± 0.11, consistent with unity; coupled with the large distance of the nuclei from the Sun, this indicates that most nuclei have Tempel 1-like thermal inertia. Only two of the 57 nuclei had outlying values (in a statistical sense) of infrared beaming. (b) The known JFC population is not complete even at 3 km radius, and even for comets that approach to ~2 AU from the Sun and so ought to be more discoverable. Several recently-discovered comets in our survey have small perihelia and large (above ~2 km) radii. (c) With our radii, we derive an independent estimate of the JFC nuclear cumulative size distribution (CSD), and we find that it has a power-law slope of around -1.9, with the exact value depending on the bounds in radius. (d) This power-law is close to that derived by others from visible-wavelength observations that assume a fixed geometric albedo, suggesting that there is no strong dependence of geometric albedo on radius. (e) The observed CSD shows a hint of structure with an excess of comets with radii 3-6 km. (f) Our CSD is consistent with the idea that the intrinsic size distribution of the JFC population is not a simple power-law and lacks many sub-kilometer objects.
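The slope estimate in conclusion (c) can be illustrated with a short fit. This is a sketch on made-up radii, not SEPPCoN data; it simply shows how a CSD slope, and its dependence on the chosen radius bounds, is obtained.

```python
import numpy as np

# Illustrative nucleus radii in km (hypothetical, not SEPPCoN data).
radii = np.array([0.8, 1.1, 1.3, 1.7, 2.0, 2.4, 2.9, 3.3, 4.1, 5.0, 6.2])

# Cumulative size distribution: N(>R) versus R, fitted in log-log space.
r_sorted = np.sort(radii)[::-1]
n_cumulative = np.arange(1, len(r_sorted) + 1)

# Restrict the fit to radii above a completeness bound; the fitted slope
# depends on this choice of bounds, as the survey itself notes.
mask = r_sorted >= 2.0
slope, _ = np.polyfit(np.log10(r_sorted[mask]),
                      np.log10(n_cumulative[mask]), 1)
print(f"fitted CSD power-law slope: {slope:.2f}")
```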
Abstract:
Thermal barrier coatings (TBCs) are widely adopted to protect mechanical components in gas turbine engines operating at high temperature. The surface temperature of these components must be kept low enough to retain material properties within acceptable bounds and to extend component life. From this standpoint, air plasma-sprayed (APS) ceria and yttria co-stabilized zirconia (CYSZ) is particularly promising because it provides enhanced thermal insulation capabilities and resistance to hot corrosion. However, essential mechanical properties, such as hardness and Young's modulus, have been less thoroughly investigated. Knowledge of Young's modulus is particularly important because it has a significant effect on strain tolerance and stress level and, hence, on durability. The focus of the present study was to determine the mechanical properties of APS CYSZ coatings. In particular, X-ray diffraction (XRD) is adopted for phase analysis of powders and as-sprayed coatings. In addition, scanning electron microscopy (SEM) and image analysis (IA) are employed to explore coating microstructure and porosity. Finally, the Young's modulus of the coating is determined using nanoindentation and a resonant method. The results obtained are then discussed and a cross-check on their consistency is carried out by resorting to a micromechanical model. © 2010 Blackwell Publishing Ltd.
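For context, nanoindentation analyses of this kind usually extract Young's modulus through the standard Oliver-Pharr relations; the paper's exact procedure may differ, and the symbols below follow the common convention rather than the study itself.

```latex
\[
  E_r = \frac{\sqrt{\pi}}{2\beta}\,\frac{S}{\sqrt{A_c}}, \qquad
  \frac{1}{E_r} = \frac{1-\nu^{2}}{E} + \frac{1-\nu_i^{2}}{E_i},
\]
% S: unloading contact stiffness, A_c: projected contact area,
% beta: indenter geometry factor; (E, nu) refer to the coating and
% (E_i, nu_i) to the indenter.
```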
Abstract:
Hardware impairments in physical transceivers are known to have a deleterious effect on communication systems; however, very few contributions have investigated their impact on relaying. This paper quantifies the impact of transceiver impairments in a two-way amplify-and-forward configuration. More specifically, the effective signal-to-noise-and-distortion ratios at both transmitter nodes are obtained. These are used to deduce exact and asymptotic closed-form expressions for the outage probabilities (OPs), as well as tractable formulations for the symbol error rates (SERs). It is explicitly shown that non-zero lower bounds on the OP and SER exist in the high-power regime; this stands in contrast to the special case of ideal hardware, where the OP and SER go asymptotically to zero.
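The origin of the non-zero OP and SER floors can be seen in a simplified single-link calculation. The sketch below is not the paper's two-way analysis: it assumes a single link with an aggregate impairment level kappa (an EVM-like figure), for which the effective SNDR saturates at 1/kappa² as the transmit power grows.

```python
import numpy as np

kappa = 0.1                    # hypothetical aggregate impairment level
snr_db = np.arange(0, 61, 10)  # nominal SNR in dB
rho = 10 ** (snr_db / 10)

# Effective SNDR saturates: rho / (kappa^2 * rho + 1) -> 1 / kappa^2,
# which is why outage and error rates cannot decay to zero.
sndr_db = 10 * np.log10(rho / (kappa**2 * rho + 1))
ceiling_db = -20 * np.log10(kappa)
for s, g in zip(snr_db, sndr_db):
    print(f"SNR {s:2d} dB -> effective SNDR {g:5.1f} dB (ceiling {ceiling_db:.1f} dB)")
```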
Abstract:
We analyze the performance of dual-hop two-way amplify-and-forward relaying in the presence of in-phase and quadrature-phase imbalance (IQI) at the relay node. In particular, two power allocation schemes, namely fixed power allocation and instantaneous power allocation, are proposed to improve the system reliability and robustness against IQI under a total transmit power constraint. For each proposed scheme, the outage probability is investigated over independent, non-identically distributed Nakagami-m fading channels, and exact closed-form expressions and bounds are derived. Our theoretical analysis indicates that, without IQI compensation, IQI can create fundamental performance limits on two-way relaying. However, these limits can be avoided by performing IQI compensation at the source nodes. Compared with the equal power allocation scheme, our numerical results show that the two proposed power allocation schemes can significantly improve the outage performance, thus reducing the IQI effects, particularly when the total power budget is large.
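As background, a widely used baseband model for receive-side IQ imbalance is reproduced below; conventions for the mismatch parameters vary across papers, so this should be read as a generic form rather than the exact model used in the study.

```latex
\[
  r = \mu\, x + \nu\, x^{*}, \qquad
  \mu = \tfrac{1}{2}\left(1 + g\,e^{-j\phi}\right), \qquad
  \nu = \tfrac{1}{2}\left(1 - g\,e^{j\phi}\right),
\]
% g and phi are the amplitude and phase mismatches; perfect matching
% (g = 1, phi = 0) gives mu = 1 and nu = 0, and the image-rejection
% ratio is IRR = |mu|^2 / |nu|^2.
```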
Abstract:
In this paper, we consider switch-and-stay combining (SSC) in two-way relay systems with two amplify-and-forward relays, one of which is activated to assist the information exchange between the two sources. The system operates under either the analog network coding (ANC) protocol, in which communication is achieved only with the help of the active relay, or the time-division broadcast (TDBC) protocol, in which the direct link between the two sources can be utilized to exploit additional diversity gain. In both cases, we study the outage probability and bit error rate (BER) for Rayleigh fading channels. In particular, we derive closed-form lower bounds for the outage probability and the average BER, which remain tight for different fading conditions. We also present asymptotic analysis for both the outage probability and the average BER at high signal-to-noise ratio. It is shown that SSC can achieve the full diversity order in two-way relay systems for both ANC and TDBC protocols with proper switching thresholds. Copyright © 2014 John Wiley & Sons, Ltd.
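The switching rule itself is simple enough to simulate directly. The following toy Monte Carlo is not the paper's analysis: it models the two relay branches' end-to-end SNRs as independent exponential variates and applies the switch-and-stay rule, in which the combiner switches branches only when the active branch falls below a threshold and uses the new branch without re-examining it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_slots, avg_snr, switch_thr, gamma_out = 100_000, 10.0, 5.0, 2.0

# Toy stand-in for the true two-way AF end-to-end SNRs of the two branches.
snr = rng.exponential(avg_snr, size=(n_slots, 2))

active, outages = 0, 0
for t in range(n_slots):
    if snr[t, active] < switch_thr:  # switch-and-stay: switch once,
        active = 1 - active          # then use the new branch blindly
    if snr[t, active] < gamma_out:
        outages += 1

print(f"simulated outage probability ~ {outages / n_slots:.4f}")
```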
Abstract:
The Computational Fluid Dynamics (CFD) toolbox OpenFOAM is used to assess the applicability of Reynolds-Averaged Navier-Stokes (RANS) solvers to the simulation of Oscillating Wave Surge Converters (OWSC) in significant waves. Simulation of these flap-type devices requires the solution of the equations of motion and the representation of the OWSC's motion in a moving mesh. A new way to simulate the sea floor inside a section of the moving mesh with a moving dissipation zone is presented. To assess the accuracy of the new solver, experiments are conducted in regular and irregular wave traces for a full three-dimensional model. Acceleration results and flow features are presented for both numerical and experimental data. It is found that the new numerical model reproduces experimental results within the bounds of experimental accuracy.
Abstract:
Credal networks are graph-based statistical models whose parameters take values in a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The computational complexity of inferences on such models depends on the irrelevance/independence concept adopted. In this paper, we study inferential complexity under the concepts of epistemic irrelevance and strong independence. We show that inferences under strong independence are NP-hard even in trees whose variables are all binary except for a single ternary one. We prove that under epistemic irrelevance the polynomial-time complexity of inferences in credal trees is not likely to extend to more general models (e.g., singly connected topologies). These results clearly distinguish networks that admit efficient inference from those where inference is most likely hard, and settle several open questions regarding their computational complexity. We show that these results remain valid even if we disallow the use of zero probabilities. We also show that the computation of bounds on the probability of the future state in a hidden Markov model is the same whether we assume epistemic irrelevance or strong independence, and we prove an analogous result for inference in Naive Bayes structures. These inferential equivalences are important for practitioners, as hidden Markov models and Naive Bayes networks are used in real applications of imprecise probability.
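To make the notion of inference concrete: a toy two-node credal network is sketched below, with bounds on P(B=1) computed by enumerating combinations of extreme points of the local credal sets. Under strong independence the optima are attained at such extreme points, so exhaustive enumeration is sound for a net this small; it is exactly this enumeration that blows up in the hard cases discussed above.

```python
from itertools import product

# Toy credal network A -> B with binary variables; each local credal set
# is an interval, represented here by its two extreme points.
p_a1_extremes = [0.3, 0.5]   # P(A=1) in [0.3, 0.5]
p_b1_given_a0 = [0.2, 0.4]   # P(B=1 | A=0) in [0.2, 0.4]
p_b1_given_a1 = [0.7, 0.9]   # P(B=1 | A=1) in [0.7, 0.9]

values = [(1 - pa1) * pb_a0 + pa1 * pb_a1
          for pa1, pb_a0, pb_a1 in product(p_a1_extremes,
                                           p_b1_given_a0,
                                           p_b1_given_a1)]
print(f"P(B=1) lies in [{min(values):.3f}, {max(values):.3f}]")
```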
Abstract:
Credal nets are probabilistic graphical models which extend Bayesian nets to cope with sets of distributions. This feature makes the model particularly suited for the implementation of classifiers and knowledge-based systems. When working with sets of (instead of single) probability distributions, the identification of the optimal option can be based on different criteria, some of which may yield multiple optimal choices. Yet, most of the inference algorithms for credal nets are designed to compute only the bounds of the posterior probabilities. This prevents some of the existing criteria from being used. To overcome this limitation, we present two simple transformations for credal nets which make it possible to compute decisions based on the maximality and E-admissibility criteria without any modification of the inference algorithms. We also prove that these decision problems have the same complexity as standard inference, being NP^PP-hard for general credal nets and NP-hard for polytrees.
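For readers unfamiliar with maximality: an option is maximal when no other option has strictly higher expected utility under every distribution in the credal set. The toy check below enumerates extreme points explicitly (dominance over the extremes extends to the convex hull by linearity); it illustrates the criterion only, not the transformations proposed in the paper.

```python
import numpy as np

extreme_points = np.array([[0.2, 0.8],   # two extreme distributions
                           [0.6, 0.4]])  # over two states
utilities = np.array([[10.0, 0.0],       # option 0
                      [4.0, 5.0],        # option 1
                      [3.0, 4.0]])       # option 2 (dominated by option 1)

expected = extreme_points @ utilities.T  # rows: extremes, cols: options

maximal = [a for a in range(len(utilities))
           if not any((expected[:, b] > expected[:, a]).all()
                      for b in range(len(utilities)) if b != a)]
print("maximal options:", maximal)       # option 2 is pruned
```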
Abstract:
This paper presents a new anytime algorithm for the marginal MAP problem in graphical models of bounded treewidth. We prove asymptotic convergence and give theoretical error bounds for any fixed step. Experiments show that the algorithm compares well to a state-of-the-art systematic search algorithm.
Abstract:
This paper presents new results on the complexity of graph-theoretical models that represent probabilities (Bayesian networks) and that represent interval- and set-valued probabilities (credal networks). We define a new class of networks with bounded width, and introduce a new decision problem for Bayesian networks, the maximin a posteriori. We present new links between Bayesian and credal networks, and present new results both for Bayesian networks (most probable explanation with observations, maximin a posteriori) and for credal networks (bounds on posterior probabilities, most probable explanation with and without observations, maximum a posteriori).
Abstract:
A credal network is a graphical tool for representation and manipulation of uncertainty, where probability values may be imprecise or indeterminate. A credal network associates a directed acyclic graph with a collection of sets of probability measures; in this context, inference is the computation of tight lower and upper bounds for conditional probabilities. In this paper we present new algorithms for inference in credal networks based on multilinear programming techniques. Experiments indicate that these new algorithms have better performance than existing ones, in the sense that they can produce more accurate results in larger networks.
Abstract:
Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who the leader is. This article focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. In particular, the seemingly obvious lower bounds of Ω(m) messages, where m is the number of edges in the network, and Ω(D) time, where D is the network diameter, are nontrivial to show for randomized (Monte Carlo) algorithms. (Recent results, showing that even Ω(n), where n is the number of nodes in the network, is not a lower bound on the messages in complete networks, make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms, except for the restricted case of comparison algorithms, where it was also required that nodes may not wake up spontaneously and that D and n were not known. We establish these fundamental lower bounds in this article for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (namely, algorithms that work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm. An O(D)-time leader election algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.
An interesting fundamental problem is whether both upper bounds (messages and time) can be achieved simultaneously in the randomized setting for all graphs. The answer is known to be negative in the deterministic setting. We answer this question partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
Abstract:
Demand Response (DR) algorithms manipulate the energy consumption schedules of controllable loads so as to satisfy grid objectives. Implementation of DR algorithms using a centralised agent can be problematic for scalability reasons, and there are issues related to the privacy of data and robustness to communication failures. It is therefore desirable to use a scalable decentralised algorithm for the implementation of DR. In this paper, a hierarchical DR scheme is proposed for Peak Minimisation (PM) based on Dantzig-Wolfe Decomposition (DWD). In addition, a Time Weighted Maximisation option is included in the cost function, which improves the Quality of Service for devices seeking to receive their desired energy sooner rather than later. The paper also demonstrates how the DWD algorithm can be implemented more efficiently through the calculation of upper and lower cost bounds after each DWD iteration.
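The bound calculation mentioned at the end is, in column-generation terms, the textbook pair of master and Lagrangian bounds; the notation below is ours and assumes a minimisation form of the peak-minimisation problem.

```latex
% After each DWD iteration, the restricted master problem (RMP) objective
% bounds the optimum from above, and the subproblems' reduced costs give a
% Lagrangian lower bound; the gap certifies distance from optimality.
\[
  z_{\mathrm{RMP}} \;\ge\; z^{*} \;\ge\; z_{\mathrm{RMP}} + \sum_{k=1}^{K} \bar{c}_{k}^{\,*},
\]
% where \bar{c}_k^* \le 0 is the optimal reduced cost of subproblem k;
% the method terminates when every reduced cost is nonnegative.
```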