964 results for Point Data
Abstract:
A microbeam testing geometry is designed to study the variation in fracture toughness across a compositionally graded NiAl coating on a superalloy substrate. A bi-material analytical model of fracture is used to evaluate toughness by deconvoluting load-displacement data generated in a three-point bending test. It is shown that the surface layers of a diffusion bond coat can be much more brittle than the interior despite the fact that elastic modulus and hardness do not display significant variations. Such a gradient in toughness allows stable crack propagation in a test that would normally lead to unstable fracture in a homogeneous, brittle material. As the crack approaches the interface, plasticity due to the presence of Ni3Al leads to gross bending and crack bifurcation.
Abstract:
Dispersing a data object into a set of data shares is an elemental stage in distributed communication and storage systems. In comparison to data replication, data dispersal with redundancy saves space and bandwidth. Moreover, dispersing a data object to distinct communication links or storage sites limits adversarial access to whole data and tolerates loss of a part of data shares. Existing data dispersal schemes have been proposed mostly based on various mathematical transformations on the data which induce high computation overhead. This paper presents a novel data dispersal scheme where each part of a data object is replicated, without encoding, into a subset of data shares according to combinatorial design theory. Particularly, data parts are mapped to points and data shares are mapped to lines of a projective plane. Data parts are then distributed to data shares using the point and line incidence relations in the plane so that certain subsets of data shares collectively possess all data parts. The presented scheme incorporates combinatorial design theory with inseparability transformation to achieve secure data dispersal at reduced computation, communication and storage costs. Rigorous formal analysis and experimental study demonstrate significant cost-benefits of the presented scheme in comparison to existing methods.
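The point-and-line idea can be sketched with the smallest projective plane, the Fano plane PG(2, 2): 7 points, 7 lines, 3 points per line, each point on exactly 3 lines. The sketch below is illustrative only, assuming a direct part-to-point and share-to-line mapping; it omits the paper's inseparability transformation and its actual plane order.

```python
# Fano plane PG(2, 2): 7 points (data parts), 7 lines (data shares).
# Each line contains 3 points; each point lies on exactly 3 lines.
FANO_LINES = [
    {0, 1, 2}, {0, 3, 4}, {0, 5, 6},
    {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5},
]

def disperse(parts):
    # Each share replicates, without encoding, the parts incident to its line.
    return [{i: parts[i] for i in line} for line in FANO_LINES]

def reconstruct(some_shares):
    recovered = {}
    for share in some_shares:
        recovered.update(share)
    return recovered

parts = {i: f"part-{i}" for i in range(7)}
shares = disperse(parts)
# The three lines through point 0 (a pencil) jointly cover all 7 points,
# so those three shares alone suffice to rebuild the whole object.
```

Each part lands in exactly 3 shares, so the loss of any 2 shares is tolerated, while no single share reveals more than 3 of the 7 parts.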
Abstract:
The problem of identification of the stiffness, mass and damping properties of linear structural systems, based on multiple sets of measurement data originating from static and dynamic tests, is considered. A strategy, within the framework of Kalman filter based dynamic state estimation, is proposed to tackle this problem. The static tests consist of measuring the response of the structure to slowly moving loads and to static loads whose magnitudes are varied incrementally; the dynamic tests involve measurement of a few elements of the frequency response function (FRF) matrix. These measurements are taken to be contaminated by additive Gaussian noise. An artificial independent variable τ, which simultaneously parameterizes the point of application of the moving load, the magnitude of the incrementally varied static load and the driving frequency in the FRFs, is introduced. The state vector is taken to consist of the system parameters to be identified. The fact that these parameters are independent of the variable τ is taken to constitute the set of ‘process’ equations. The measurement equations are derived based on the mechanics of the problem, and quantities such as displacements and/or strains are taken to be measured. A recursive algorithm is developed that employs a linearization strategy based on Neumann’s expansion of the structural static and dynamic stiffness matrices, and that provides posterior estimates of the mean and covariance of the unknown system parameters. The satisfactory performance of the proposed approach is illustrated by considering the identification of the dynamic properties of an inhomogeneous beam and the axial rigidities of the members of a truss structure.
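A minimal sketch of the filtering idea: a single spring stiffness is identified from incrementally varied static loads with a scalar extended Kalman filter. The "process" equation says the parameter is constant in τ, and the measurement equation is the mechanics relation y = F/k. All numbers are synthetic, and the plain EKF linearization stands in for the paper's Neumann-expansion scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
k_true = 2000.0                               # N/m, stiffness to be identified
loads = np.linspace(100.0, 1000.0, 50)        # static load magnitudes, indexed by tau
y = loads / k_true + rng.normal(0.0, 1e-4, loads.size)  # noisy displacements (m)

k_hat, P = 1000.0, 1e6     # initial estimate of k and its variance
Q, R = 100.0, 1e-8         # process and measurement noise variances (made up)
for F, meas in zip(loads, y):
    P += Q                                    # 'process' step: k is constant in tau
    H = -F / k_hat**2                         # sensitivity of h(k) = F / k
    S = H * P * H + R                         # innovation variance
    K = P * H / S                             # Kalman gain
    k_hat += K * (meas - F / k_hat)           # measurement update
    P *= 1.0 - K * H
```

After the sweep over τ, `k_hat` and `P` play the role of the posterior mean and covariance of the unknown parameter.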
Multi-GNSS precise point positioning with raw single-frequency and dual-frequency measurement models
Abstract:
The emergence of multiple satellite navigation systems, including BDS, Galileo, modernized GPS, and GLONASS, brings great opportunities and challenges for precise point positioning (PPP). We study the contributions of various GNSS combinations to PPP performance based on undifferenced or raw observations, in which the signal delays and ionospheric delays must be considered. A priori ionospheric knowledge, such as regional or global corrections, strengthens the estimation of ionospheric delay parameters. The undifferenced models are generally more suitable for single-, dual-, or multi-frequency data processing for single or combined GNSS constellations. Another advantage over ionospheric-free PPP models is that undifferenced models avoid noise amplification by linear combinations. Extensive performance evaluations are conducted with multi-GNSS data sets collected from 105 MGEX stations in July 2014. Dual-frequency PPP results from each single constellation show that the convergence time of undifferenced PPP solution is usually shorter than that of ionospheric-free PPP solutions, while the positioning accuracy of undifferenced PPP shows more improvement for the GLONASS system. In addition, the GLONASS undifferenced PPP results demonstrate performance advantages in high latitude areas, while this impact is less obvious in the GPS/GLONASS combined configuration. The results have also indicated that the BDS GEO satellites have negative impacts on the undifferenced PPP performance given the current “poor” orbit and clock knowledge of GEO satellites. More generally, the multi-GNSS undifferenced PPP results have shown improvements in the convergence time by more than 60 % in both the single- and dual-frequency PPP results, while the positioning accuracy after convergence indicates no significant improvements for the dual-frequency PPP solutions, but an improvement of about 25 % on average for the single-frequency PPP solutions.
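The noise amplification that the undifferenced models avoid can be quantified for the classic dual-frequency ionosphere-free combination. Assuming uncorrelated, equal-variance noise on the GPS L1 and L2 code observables, the combination inflates the noise by roughly a factor of 3:

```python
# GPS L1/L2 carrier frequencies (Hz)
f1, f2 = 1575.42e6, 1227.60e6

# Ionosphere-free code combination: P_IF = a1*P1 + a2*P2
a1 = f1**2 / (f1**2 - f2**2)
a2 = -f2**2 / (f1**2 - f2**2)

# With uncorrelated, equal-variance noise on P1 and P2, the IF observable's
# noise is inflated by sqrt(a1**2 + a2**2) relative to a raw observable.
amplification = (a1**2 + a2**2) ** 0.5
```

The coefficients sum to 1 (the geometry is preserved) while the first-order ionospheric delay cancels, at the cost of the ~3x noise inflation that raw-observation PPP models sidestep.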
Abstract:
The critical behavior of the osmotic susceptibility in an aqueous electrolyte mixture 1-propanol (1P) + water (W) + potassium chloride is reported. This mixture exhibits re-entrant phase transitions and has a nearly parabolic critical line with its apex representing a double critical point (DCP). The behavior of the susceptibility exponent is deduced from static light-scattering measurements, on approaching the lower critical solution temperatures (T_L's) along different experimental paths (by varying t) in the one-phase region. The light-scattering data analysis substantiates the existence of a nonmonotonic crossover behavior of the susceptibility exponent in this mixture. For the T_L far away from the DCP, the effective susceptibility exponent γ_eff as a function of t displays a nonmonotonic crossover from its single-limit three-dimensional (3D) Ising value (∼1.24) toward its mean-field value with increase in t. For the T_L closest to the DCP, γ_eff displays a sharp, nonmonotonic crossover from its nearly doubled 3D-Ising value toward its nearly doubled mean-field value with increase in t. The renormalized Ising regime extends over a relatively larger t range for the T_L closest to the DCP, and a trend toward shrinkage of the renormalized Ising regime is observed as T_L shifts away from the DCP. Nevertheless, the crossover to the mean-field limit extends well beyond t > 10^-2 for the T_L's studied. The observed crossover behavior is attributed to the presence of strong ion-induced clustering in this mixture, as revealed by various structure-probing techniques.
As far as the critical behavior in complex or associating mixtures with special critical points (like the DCP) is concerned, our results indicate that the influence of the DCP on the critical behavior must be taken into account not only in the renormalization of the critical exponent but also in the range of the Ising regime, which can shrink with decreasing influence of the DCP and with the extent of structuring in the system. The utility of the field variable t_UL in analyzing re-entrant phase transitions is demonstrated. The effective susceptibility exponent as a function of t_UL displays a nonmonotonic crossover from its asymptotic 3D-Ising value toward a value slightly lower than its nonasymptotic mean-field value of 1. This behavior in the nonasymptotic, high-t_UL region is interpreted in terms of the possibility of a nonmonotonic crossover to the mean-field value from lower values, as foreseen earlier in micellar systems.
Abstract:
Statistical learning algorithms provide a viable framework for geotechnical engineering modeling. This paper describes two statistical learning algorithms applied to site characterization modeling based on standard penetration test (SPT) data. More than 2700 field SPT values (N) have been collected from 766 boreholes spread over an area of 220 sq km in Bangalore. To obtain corrected values (Nc), the N values have been corrected for parameters such as overburden stress, borehole size, sampler type and connecting-rod length. In the three-dimensional site characterization model, the function Nc = Nc(X, Y, Z), where X, Y and Z are the coordinates of a point corresponding to an Nc value, is approximated so that the Nc value at any half-space point in Bangalore can be determined. The first algorithm uses the least-square support vector machine (LSSVM), which is related to a ridge regression type of support vector machine. The second algorithm uses the relevance vector machine (RVM), which combines the strengths of kernel-based methods and Bayesian theory to establish the relationships between a set of input vectors and a desired output. The paper also presents a comparative study between the developed LSSVM and RVM models for site characterization. Copyright (C) 2009 John Wiley & Sons, Ltd.
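An LSSVM with an RBF kernel reduces to kernel ridge regression solved from a single linear system. A hedged sketch on synthetic (X, Y, Z) → Nc data follows; the coordinates, target function, kernel width and regularization are all invented for illustration and are not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (200, 3))           # synthetic (X, Y, Z) coordinates
y = 10.0 + 20.0 * X[:, 0] - 5.0 * X[:, 2] + rng.normal(0.0, 0.5, 200)  # synthetic Nc

def rbf(A, B, gamma=5.0):
    # Gaussian (RBF) kernel matrix between two point sets.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-2                                    # regularization strength (made up)
y_mean = y.mean()
K = rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y - y_mean)

def predict(Xq):
    # Nc estimate at query points Xq of shape (m, 3).
    return rbf(Xq, X) @ alpha + y_mean
```

Once trained, `predict` estimates the Nc value at any half-space point, which is exactly the role the site characterization function plays in the paper.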
Abstract:
The electrical capacitance and resistance of the binary liquid mixture cyclohexane + acetonitrile are measured in the one-phase and two-phase regions at spot frequencies between 5 kHz and 100 kHz. This sample has a very small gravity-affected (∼0.6 mK) region. In the one-phase region the capacitance data show a sharp, ∼0.7% increase above background within 0.5 degrees of Tc, whereas the resistance has a smooth increase of ∼1.5% above background over a (T − Tc) range of 4 degrees. Two-phase values of capacitance and resistance from the coexisting phases are used to determine the critical parameters Tc (critical temperature), Rc (resistance at Tc) and Cc (capacitance at Tc). A precise knowledge of these parameters reduces the uncertainty in the critical exponents for C and R. The one-phase capacitance data fit a (1 − α) exponent in a limited temperature range of 0.2 degrees. The resistance data strongly support a (1 − α) exponent over the entire 5-degree range.
Abstract:
To investigate the nature of the curve of critical exponents (as a function of the distance from a double critical point), we have combined our measurements of the osmotic compressibility with all published data for quasibinary liquid mixtures. This curve has a parabolic shape. An explanation of this result is advanced in terms of the geometry of the coexistence dome, which is contained in a triangular prism.
Explicit and Optimal Exact-Regenerating Codes for the Minimum-Bandwidth Point in Distributed Storage
Abstract:
In the distributed storage setting that we consider, data is stored across n nodes in the network such that the data can be recovered by connecting to any subset of k nodes. Additionally, one can repair a failed node by connecting to any d nodes while downloading β units of data from each. Dimakis et al. show that the repair bandwidth dβ can be considerably reduced if each node stores slightly more than the minimum required, and they characterize the tradeoff between the amount of storage per node and the repair bandwidth. In the exact-regeneration variation, unlike functional regeneration, the replacement for a failed node is required to store data identical to that in the failed node. This greatly reduces the complexity of system maintenance. The main result of this paper is an explicit construction of codes for all values of the system parameters at one of the two most important and extreme points of the tradeoff, the minimum-bandwidth regenerating point, which performs optimal exact regeneration of any failed node. A second result is a non-existence proof showing that, with one possible exception, no other point on the tradeoff can be achieved for exact regeneration.
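The minimum-bandwidth regenerating (MBR) point of the Dimakis et al. tradeoff has a closed form: the per-helper download is β = 2M / (k(2d − k + 1)) for a file of size M, and per-node storage equals the total repair download, α = dβ. A small sketch with illustrative parameters:

```python
from fractions import Fraction

def mbr_point(M, k, d):
    # Minimum-bandwidth regenerating point of the storage/repair tradeoff:
    # beta = 2M / (k(2d - k + 1)), per-node storage alpha = d * beta.
    beta = Fraction(2 * M, k * (2 * d - k + 1))
    return d * beta, beta

# Illustrative parameters: a file of M = 12 units, any k = 3 nodes recover
# it, and a repair contacts d = 4 surviving helpers.
alpha, beta = mbr_point(M=12, k=3, d=4)
# Repair downloads d * beta = alpha units in total, far less than
# the naive approach of re-downloading the whole file of M units.
```

At the MBR point storage per node equals repair traffic, which is what makes it the bandwidth-minimizing extreme of the tradeoff curve.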
Abstract:
Our ability to infer protein quaternary structure automatically from atom and lattice information is inadequate, especially for weak complexes and heteromeric quaternary structures. Several approaches exist, but they have limited performance. Here, we present a new scheme to infer protein quaternary structure from lattice and protein information, with all-around coverage for strong, weak and very weak affinity homomeric and heteromeric complexes. The scheme combines a naive Bayes classifier and point group symmetry under a Boolean framework to detect quaternary structures in the crystal lattice. It consistently produces >= 90% coverage across diverse benchmarking data sets, including a notably superior 95% coverage for recognition of heteromeric complexes, compared with 53% on the same data set by the current state-of-the-art method. A detailed study of a limited number of prediction-failed cases offers interesting insights into the intriguing nature of protein contacts in the lattice. The findings have implications for accurate inference of the quaternary states of proteins, especially weak affinity complexes.
Abstract:
Measured health signals incorporate significant details about any malfunction in a gas turbine. The attenuation of noise and removal of outliers from these health signals while preserving important features is an important problem in gas turbine diagnostics. The measured health signals are a time series of sensor measurements such as the low rotor speed, high rotor speed, fuel flow, and exhaust gas temperature in a gas turbine. In this article, a comparative study is done by varying the window length of acausal and unsymmetrical weighted recursive median filters and numerical results for error minimization are obtained. It is found that optimal filters exist, which can be used for engines where data are available slowly (three-point filter) and rapidly (seven-point filter). These smoothing filters are proposed as preprocessors of measurement delta signals before subjecting them to fault detection and isolation algorithms.
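A three-point weighted recursive median filter of the kind described can be sketched as follows; "recursive" means already-filtered samples feed back into the window. The (1, 2, 1) weights are illustrative defaults, not the optimal weights found in the article.

```python
def weighted_median(values, weights):
    # Weighted median: smallest value whose cumulative weight
    # reaches half of the total weight.
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    acc = 0
    for v, w in pairs:
        acc += w
        if 2 * acc >= total:
            return v

def recursive_median3(signal, weights=(1, 2, 1)):
    # Recursive: out[i-1] is the already-filtered previous sample.
    out = list(signal)
    for i in range(1, len(out) - 1):
        out[i] = weighted_median([out[i - 1], out[i], out[i + 1]], list(weights))
    return out
```

Applied to a measurement-delta series, a single-sample outlier (e.g. a spurious exhaust gas temperature spike) is removed while step changes from a genuine fault are preserved.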
Abstract:
A link failure in the path of a virtual circuit in a packet data network will lead to premature disconnection of the circuit by the end-points. A soft failure will result in degraded throughput over the virtual circuit. If these failures can be detected quickly and reliably, then appropriate rerouting strategies can automatically reroute the virtual circuits that use the failed facility. In this paper, we develop a methodology for analysing and designing failure detection schemes for digital facilities. Based on errored-second data, we develop a Markov model for the error and failure behaviour of a T1 trunk. The performance of a detection scheme is characterized by its false alarm probability and the detection delay. Using the Markov model, we analyse the performance of detection schemes that use physical layer or link layer information. The schemes basically rely upon detecting the occurrence of severely errored seconds (SESs). A failure is declared when a counter, driven by the occurrence of SESs, reaches a certain threshold. For hard failures, the design problem reduces to a proper choice of the threshold at which failure is declared, and of the connection reattempt parameters of the virtual circuit end-point session recovery procedures. For soft failures, the performance of a detection scheme depends, in addition, on how long and how frequent the error bursts are in a given failure mode. We also propose and analyse a novel Level 2 detection scheme that relies only upon anomalies observable at Level 2, i.e. CRC failures and idle-fill flag errors. Our results suggest that Level 2 schemes that perform as well as Level 1 schemes are possible.
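The counter-and-threshold detection scheme can be sketched as a simple loop over per-second SES indicators; the increment, decay and threshold values below are illustrative placeholders, not the designed parameters from the paper's Markov analysis.

```python
def detect_failure(ses_seconds, threshold=5, increment=2, decay=1):
    # Counter driven by SES occurrences: it rises on each SES second,
    # leaks down on clean seconds, and declares failure at the threshold.
    count = 0
    for t, ses in enumerate(ses_seconds):
        count = count + increment if ses else max(0, count - decay)
        if count >= threshold:
            return t          # detection time in seconds
    return None               # no failure declared
```

A sustained SES burst (a hard failure) trips the counter within a few seconds, while isolated SESs leak away without raising a false alarm; the threshold trades detection delay against false alarm probability.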
Abstract:
This paper examines the effect of substitution of water by heavy water in a polymer solution of polystyrene (molecular weight = 13000) and acetone. A critical double point (CDP), at which the upper and the lower partially-miscible regions merge, occurs at nearly the same coordinates as for the system [polystyrene + acetone + water]. The shape of the critical line for [polystyrene + acetone + heavy water] is highly asymmetric. An explanation for the occurrence of the water-induced CDP in [polystyrene + acetone] is advanced in terms of the interplay between contact energy dissimilarity and free-volume disparity of the polymer and the solvent. The question of the possible existence of a one-phase hole in an hourglass phase diagram is addressed in [polystyrene + acetone + water]. Our data exclude such a possibility.
Abstract:
Common water ice (ice Ih) is an unusual solid: the oxygen atoms form a periodic structure but the hydrogen atoms are highly disordered due to there being two inequivalent O-H bond lengths(1). Pauling showed that the presence of these two bond lengths leads to a macroscopic degeneracy of possible ground states(2,3), such that the system has finite entropy as the temperature tends towards zero. The dynamics associated with this degeneracy are experimentally inaccessible, however, as ice melts and the hydrogen dynamics cannot be studied independently of oxygen motion(4). An analogous system(5) in which this degeneracy can be studied is a magnet with the pyrochlore structure, termed 'spin ice', where the spin orientation plays a role similar to that of the hydrogen position in ice Ih. Here we present specific-heat data for one such system, Dy2Ti2O7, from which we infer a total spin entropy of 0.67R ln 2. This is similar to the value, 0.71R ln 2, determined for ice Ih, so confirming the validity of the correspondence. We also find, through application of a magnetic field, behaviour not accessible in water ice: restoration of much of the ground-state entropy and new transitions involving transverse spin degrees of freedom.
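The quoted entropies follow from Pauling's estimate. The residual ground-state entropy per mole is (R/2) ln(3/2), so the fraction of the full R ln 2 spin entropy that should be recoverable on cooling is 1 − ln(3/2)/(2 ln 2), which is the 0.71 R ln 2 figure cited for ice:

```python
import math

# Pauling's residual entropy per mole of spins: (R/2) * ln(3/2).
# The recoverable part of the full R*ln(2) spin entropy is therefore
# R*ln(2) - (R/2)*ln(3/2), i.e. a fraction 1 - ln(3/2) / (2 * ln(2)).
fraction = 1.0 - math.log(1.5) / (2.0 * math.log(2.0))
```

The measured 0.67 R ln 2 for Dy2Ti2O7 falls just below this Pauling value, which is the basis of the correspondence claimed in the abstract.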
Abstract:
We consider a setting in which several operators offer downlink wireless data access services in a certain geographical region. Each operator deploys several base stations or access points, and registers some subscribers. In such a situation, if operators pool their infrastructure, and permit the possibility of subscribers being served by any of the cooperating operators, then there can be overall better user satisfaction, and increased operator revenue. We use coalitional game theory to investigate such resource pooling and cooperation between operators. We use utility functions to model user satisfaction, and show that the resulting coalitional game has the property that if all operators cooperate (i.e., form a grand coalition) then there is an operating point that maximizes the sum utility over the operators while providing the operators revenues such that no subset of operators has an incentive to break away from the coalition. We investigate whether such operating points can result in utility unfairness between the users of the various operators. We also study other revenue-sharing concepts, namely the nucleolus and the Shapley value. Such investigations throw light on criteria for operators to accept or reject subscribers, based on the service level agreements proposed by them. We also investigate the situation in which only certain subsets of operators may be willing to cooperate.
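The Shapley value mentioned above can be computed directly as each player's average marginal contribution over all join orders. A sketch for a toy three-operator game; the revenue function below is a made-up superadditive example, not the paper's utility model.

```python
from itertools import permutations

def shapley(players, v):
    # Average marginal contribution of each player over all join orders.
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            phi[p] += v(frozenset(coalition)) - before
    return {p: phi[p] / len(orders) for p in players}

# Toy superadditive revenue function for operators A, B, C (made up):
# pooling infrastructure never hurts, and the grand coalition earns most.
value = {
    frozenset(): 0, frozenset('A'): 1, frozenset('B'): 1, frozenset('C'): 2,
    frozenset('AB'): 3, frozenset('AC'): 4, frozenset('BC'): 4,
    frozenset('ABC'): 6,
}
phi = shapley('ABC', lambda S: value[S])
```

The shares sum to the grand-coalition revenue (efficiency), so this is one concrete way to divide pooled revenue; the nucleolus studied in the paper would generally give a different division.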