904 results for BARTLETT CORRECTION
Abstract:
Regenerating codes and codes with locality are two coding schemes that have recently been proposed, which, in addition to ensuring data collection and reliability, also enable efficient node repair. When one is attempting to repair a failed node, regenerating codes seek to minimize the amount of data downloaded for node repair, while codes with locality attempt to minimize the number of helper nodes accessed. This paper presents results in two directions. In one, this paper extends the notion of codes with locality so as to permit local recovery of an erased code symbol even in the presence of multiple erasures, by employing local codes having minimum distance greater than 2. An upper bound on the minimum distance of such codes is presented and codes that are optimal with respect to this bound are constructed. The second direction seeks to build codes that combine the advantages of both codes with locality and regenerating codes. These codes, termed here codes with local regeneration, are codes with locality over a vector alphabet, in which the local codes themselves are regenerating codes. We derive an upper bound on the minimum distance of vector-alphabet codes with locality for the case when their constituent local codes have a certain uniform rank accumulation property. This property is possessed by both minimum storage regenerating (MSR) and minimum bandwidth regenerating (MBR) codes. We provide several constructions of codes with local regeneration that achieve this bound, where the local codes are either MSR or MBR codes. Also included in this paper is an upper bound on the minimum distance of a general vector code with locality, as well as a performance comparison of various code constructions of fixed block length and minimum distance.
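As a rough point of reference for the bounds mentioned above, the sketch below evaluates the known Singleton-type bound for scalar codes with (r, δ) locality, d ≤ n − k + 1 − (⌈k/r⌉ − 1)(δ − 1); the parameter values are hypothetical, and the paper's vector-alphabet bounds are more involved than this scalar case.

```python
from math import ceil

def rdelta_distance_bound(n: int, k: int, r: int, delta: int) -> int:
    """Singleton-type upper bound on the minimum distance of an [n, k] scalar
    code with (r, delta) locality, i.e. every symbol lies in a short local code
    that can recover it even in the presence of delta - 1 erasures."""
    return n - k + 1 - (ceil(k / r) - 1) * (delta - 1)

# Hypothetical parameters: length 14, dimension 8, locality 4, local distance 3.
print(rdelta_distance_bound(14, 8, 4, 3))  # -> 5
```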
Abstract:
In this study, we applied the integration methodology developed in the companion paper by Aires (2014) using real satellite observations over the Mississippi Basin. The methodology provides basin-scale estimates of the four water budget components (precipitation P, evapotranspiration E, water storage change ΔS, and runoff R) in a two-step process: the Simple Weighting (SW) integration and a Postprocessing Filtering (PF) that imposes the water budget closure. A comparison with in situ observations of P and E demonstrated that PF improved the estimation of both components. A Closure Correction Model (CCM) was derived from the integrated product (SW+PF); it allows each observation data set to be corrected independently, unlike the SW+PF method, which requires simultaneous estimates of the four components. The CCM makes it possible to standardize the various data sets for each component and to greatly reduce the budget residual (P − E − ΔS − R). As a direct application, the CCM was combined with the water budget equation to reconstruct missing values in any component. The results of a Monte Carlo experiment with synthetic gaps demonstrated the good performance of the method, except for the runoff data, whose variability is of the same order of magnitude as the budget residual. Similarly, we proposed a reconstruction of ΔS between 1990 and 2002, a period for which no Gravity Recovery and Climate Experiment (GRACE) data are available. Unlike most studies dealing with water budget closure at the basin scale, this one uses only satellite observations and in situ runoff measurements. Consequently, the integrated data sets are model independent and can be used for model calibration or validation.
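The closure step described above can be thought of, in its simplest form, as a weighted least-squares projection of the four component estimates onto the hyperplane P − E − ΔS − R = 0. The sketch below implements only that generic projection with made-up numbers; the paper's SW+PF integration and CCM are considerably richer.

```python
import numpy as np

def close_budget(x, sigma):
    """Project estimates x = [P, E, dS, R] onto the closure constraint
    P - E - dS - R = 0, minimizing the sigma-weighted squared correction:
    x' = x - C a^T (a C a^T)^-1 (a x), with C = diag(sigma^2), a = [1,-1,-1,-1]."""
    x = np.asarray(x, dtype=float)
    a = np.array([1.0, -1.0, -1.0, -1.0])              # closure constraint coefficients
    C = np.diag(np.asarray(sigma, dtype=float) ** 2)   # assumed (diagonal) error covariance
    residual = a @ x                                    # budget residual before correction
    gain = C @ a / (a @ C @ a)
    return x - gain * residual

# Hypothetical monthly values in mm, with assumed uncertainties.
x_closed = close_budget([100.0, 60.0, 10.0, 25.0], sigma=[10.0, 15.0, 5.0, 2.0])
print(x_closed, x_closed[0] - x_closed[1] - x_closed[2] - x_closed[3])  # residual ~ 0
```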
Abstract:
We consider conformal field theories in 1 + 1 dimensions with W-algebra symmetries, deformed by a chemical potential μ for the spin-three current. We show that the order μ² correction to the Rényi and entanglement entropies of a single interval in the deformed theory, on the infinite spatial line and at finite temperature, is universal. The correction is completely determined by the operator product expansion of two spin-three currents, and by the expectation values of the stress tensor, its descendants and its composites, evaluated on the n-sheeted Riemann surface branched along the interval. This explains the recently found agreement of the order μ² correction across distinct free-field CFTs and higher spin black hole solutions holographically dual to CFTs with W symmetry.
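For reference, the Rényi and entanglement entropies discussed above are the standard replica-trick quantities (these are textbook definitions, not the paper's μ² result):

```latex
S_n \;=\; \frac{1}{1-n}\,\log \operatorname{Tr}\rho_A^{\,n},
\qquad
S_{\mathrm{EE}} \;=\; \lim_{n\to 1} S_n \;=\; -\operatorname{Tr}\bigl(\rho_A \log \rho_A\bigr),
```

where Tr ρ_A^n is computed as a partition function on the n-sheeted Riemann surface branched along the interval.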
Abstract:
A new class of exact-repair regenerating codes is constructed by stitching together shorter erasure correction codes, where the stitching pattern can be viewed as a block design. The proposed codes have the help-by-transfer property, in which the helper nodes simply transfer part of the stored data directly, without performing any computation. This embedded error correction structure makes the decoding process straightforward, and in some cases the complexity is very low. We show that this construction is able to achieve performance better than space-sharing between the minimum storage regenerating codes and the minimum repair-bandwidth regenerating codes, and it is the first class of codes to achieve this performance. In fact, it is shown that the proposed construction can achieve a nontrivial point on the optimal functional-repair tradeoff, and it is asymptotically optimal at high rate, i.e., it asymptotically approaches the minimum storage and the minimum repair-bandwidth simultaneously.
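For context on the storage/repair-bandwidth tradeoff mentioned above, the sketch below evaluates the standard functional-repair cut-set bound B ≤ Σ_{i=0}^{k−1} min(α, (d−i)β); the parameter values are purely illustrative, and the bound itself is only the backdrop for, not part of, the construction described in the abstract.

```python
def cutset_file_size(k: int, d: int, alpha: float, beta: float) -> float:
    """Functional-repair cut-set bound on the file size B of a regenerating code
    in which each node stores alpha symbols and a failed node downloads beta
    symbols from each of d helpers:  B <= sum_{i=0}^{k-1} min(alpha, (d - i) * beta)."""
    return sum(min(alpha, (d - i) * beta) for i in range(k))

# Illustrative (k, d) = (3, 4) with per-helper download beta = 1:
print(cutset_file_size(3, 4, alpha=2.0, beta=1.0))  # 6: storage-limited (MSR-like) end
print(cutset_file_size(3, 4, alpha=4.0, beta=1.0))  # 9: bandwidth-limited (MBR-like) end
```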
Abstract:
The irradiation of selected regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the ray-line paths through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and the collection optics. We developed a fully 3D image reconstruction algorithm, the algebraic reconstruction technique with refraction correction (ART-rc), that corrects for the refractive-index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any ray-line refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken from various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using the conventional algebraic reconstruction technique (ART) and the refractive-index-corrected ART (ART-rc) algorithms. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and the RI. In a fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as the RI-matched medium is 71.8%, an increase of 6.4% compared with that achieved using the conventional ART algorithm. Smaller-diameter dosimeters are scanned with dry-air scanning by using a wide-angle lens that collects refracted light. The images reconstructed using cone-beam geometry are seen to deteriorate in some planes, as those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array-based scanners, as it is not possible to identify refracted rays in the sinogram space.
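As a minimal illustration of the reconstruction machinery involved (not the ART-rc algorithm itself), the sketch below runs a plain ART/Kaczmarz iteration on a toy ray system; in a refraction-corrected variant, each system-matrix row would be assembled by tracing the refracted ray path through the RI distribution before the iterations start.

```python
import numpy as np

def art_reconstruct(A, b, n_iters=50, relax=0.5):
    """Basic ART (Kaczmarz) solver for A x = b, where row i of A holds the path
    lengths of ray i through the voxels.  A refraction-corrected variant would
    build each row from the bent (Snell's-law) ray path instead of a straight line."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        for a_i, b_i in zip(A, b):
            denom = a_i @ a_i
            if denom > 0.0:
                x += relax * (b_i - a_i @ x) / denom * a_i  # project onto ray equation i
    return x

# Toy system: three rays through two voxels, consistent data.
A = [[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
b = [3.0, 1.0, 2.0]
print(art_reconstruct(A, b))  # approximately [1, 2]
```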
Abstract:
In this paper, a pressure correction algorithm for computing incompressible flows is modified and implemented on an unstructured Chimera grid. The Schwarz method is used to couple the solutions of the different sub-domains. A new interpolation that ensures consistency between primary variables and auxiliary variables is proposed. Other important issues, such as global mass conservation and the order of accuracy of the interpolations, are also discussed. Two numerical simulations are successfully performed: one steady case, the lid-driven cavity, and one unsteady case, the flow around a circular cylinder. The results demonstrate a very good performance of the proposed scheme on unstructured Chimera grids. It prevents the decoupling of the pressure field in the overlapping region and requires only minor modification of an existing unstructured Navier–Stokes (NS) solver. The numerical experiments show the reliability and potential of this method for application to practical problems.
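The domain-coupling idea above can be illustrated, in a much-simplified setting, by the classical alternating Schwarz method on two overlapping 1D subdomains for a Poisson-type equation; the sketch below is only that generic coupling step, not the paper's unstructured Chimera interpolation or pressure-correction algorithm.

```python
import numpy as np

def schwarz_poisson_1d(f, n=41, overlap=8, sweeps=50):
    """Alternating Schwarz for -u'' = f on [0, 1] with u(0) = u(1) = 0,
    split into two overlapping subdomains.  Each sweep solves one subdomain
    exactly, taking its interface value from the other's latest solution
    (the role played by the inter-grid interpolation in a Chimera method)."""
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    u = np.zeros(n)
    mid = n // 2
    left = slice(1, mid + overlap)        # interior nodes of subdomain 1
    right = slice(mid - overlap, n - 1)   # interior nodes of subdomain 2

    def solve(sub):
        idx = np.arange(n)[sub]
        m = len(idx)
        A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
             - np.diag(np.ones(m - 1), -1)) / h**2
        rhs = f(x[idx]).copy()
        rhs[0] += u[idx[0] - 1] / h**2    # Dirichlet data from boundary/overlap
        rhs[-1] += u[idx[-1] + 1] / h**2
        u[idx] = np.linalg.solve(A, rhs)

    for _ in range(sweeps):
        solve(left)
        solve(right)
    return x, u

x, u = schwarz_poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x))
print(np.max(np.abs(u - np.sin(np.pi * x))))  # small discretization error
```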
Abstract:
For simulating multi-scale complex flow fields such as turbulent flows, high-order accurate schemes are preferred. In this paper, a scheme construction with numerical flux residual correction (NFRC) is presented. A difference approximation of any order of accuracy can be obtained with the NFRC. To improve the resolution of shocks, the constructed schemes are modified with group velocity control (GVC) and weighted group velocity control (WGVC). The method of scheme construction is simple, and it is applied to the solution of practical problems.
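For background only, the sketch below shows the conservative flux-difference update that such scheme constructions start from, using a first-order upwind flux for linear advection; the NFRC and the GVC/WGVC modifications described above are not reproduced here.

```python
import numpy as np

def upwind_advection(u0, a=1.0, cfl=0.8, steps=100):
    """Conservative update u_j^{n+1} = u_j^n - (dt/dx) (F_{j+1/2} - F_{j-1/2})
    for u_t + a u_x = 0 with a first-order upwind flux and periodic boundaries.
    Higher-order constructions replace F with a more accurate numerical flux."""
    u = np.asarray(u0, dtype=float).copy()
    dx = 1.0 / len(u)
    dt = cfl * dx / abs(a)
    for _ in range(steps):
        flux = a * u if a > 0 else a * np.roll(u, -1)   # upwind flux F_{j+1/2}
        u -= dt / dx * (flux - np.roll(flux, 1))        # F_{j+1/2} - F_{j-1/2}
    return u

u0 = np.where(np.linspace(0, 1, 100, endpoint=False) < 0.5, 1.0, 0.0)  # step profile
print(upwind_advection(u0)[:5])
```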
Abstract:
In this paper, a beamforming correction for identifying dipole sources by means of phased microphone-array measurements is presented and implemented numerically and experimentally. Conventional beamforming techniques, which are developed for monopole sources, can lead to significant errors when applied to reconstruct dipole sources. A previous correction of the microphone signals is extended to account for both source location and source power for two-dimensional microphone arrays. The new dipole-beamforming algorithm is developed by modifying the basic source definition used for beamforming. This technique improves on the previous signal-correction method and yields a beamformer applicable to sources that are suspected to be dipole in nature. Numerical simulations are performed, which validate the capability of this beamformer to recover ideal dipole sources. The beamforming correction is applied to the identification of realistic aeolian-tone dipoles and shows an improvement in array performance in estimating dipole source powers.
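A minimal sketch of the kind of modification involved is given below: conventional cross-spectral beamforming with a monopole steering vector, optionally weighted by a cos θ dipole directivity. The geometry, source model, and normalization are assumptions for illustration and do not reproduce the paper's correction.

```python
import numpy as np

def beamform_map(csm, mics, grid, k, dipole_axis=None):
    """Conventional frequency-domain beamforming B(g) = w^H CSM w with steering
    weights built from free-field monopole propagation.  If dipole_axis is given,
    each steering entry is additionally weighted by cos(theta), the assumed
    far-field directivity of a dipole oriented along that axis."""
    out = np.zeros(len(grid))
    for i, g in enumerate(grid):
        r_vec = mics - g                         # source-to-microphone vectors
        r = np.linalg.norm(r_vec, axis=1)
        steer = np.exp(-1j * k * r) / r          # monopole propagation model
        if dipole_axis is not None:
            cos_th = (r_vec @ dipole_axis) / (r * np.linalg.norm(dipole_axis))
            steer *= cos_th                      # dipole directivity weighting
        w = steer / (np.abs(steer) ** 2).sum()   # simple normalization (illustrative)
        out[i] = np.real(np.conj(w) @ csm @ w)
    return out

# Tiny illustrative setup: 8 microphones on a line, one synthetic dipole source.
mics = np.column_stack([np.linspace(-0.5, 0.5, 8), np.zeros(8), np.zeros(8)])
src = np.array([0.1, 0.0, 1.0]); k = 2 * np.pi * 2000 / 343.0
r_vec = mics - src; r = np.linalg.norm(r_vec, axis=1)
p = (np.exp(-1j * k * r) / r) * (r_vec @ [0.0, 0.0, 1.0]) / r   # simulated dipole pressures
csm = np.outer(p, np.conj(p))
grid = [np.array([x, 0.0, 1.0]) for x in np.linspace(-0.3, 0.3, 7)]
print(np.round(beamform_map(csm, mics, grid, k, dipole_axis=np.array([0.0, 0.0, 1.0])), 3))
```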
Abstract:
In this paper, an unstructured Chimera mesh method is used to compute incompressible flow around a rotating body. To implement the pressure correction algorithm on unstructured overlapping sub-grids, a novel interpolation scheme for the pressure correction is proposed. This indirect interpolation scheme ensures a tight coupling of the pressure between sub-domains. A moving-mesh finite volume approach is used to treat the rotating sub-domain, and the governing equations are formulated in an inertial reference frame. Since the mesh that surrounds the rotating body undergoes only solid-body rotation and the background mesh remains stationary, no mesh deformation is encountered in the computation. As a benefit of using an inertial frame, no tensorial transformation of the velocity is needed. Three numerical simulations are successfully performed: flow over a fixed circular cylinder, flow over a rotating circular cylinder, and flow over a rotating elliptic cylinder. These numerical examples demonstrate the capability of the current scheme in handling moving boundaries. The numerical results are in good agreement with experimental and computational data in the literature.
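As a toy illustration of transferring data from a rotating sub-grid to a stationary background point (one ingredient of any Chimera coupling, though not the consistency-preserving pressure-correction interpolation proposed above), consider the following sketch:

```python
import numpy as np

def sample_from_rotating_grid(query_xy, donor_xy, donor_vals, theta):
    """Fetch a value for a stationary background point from a donor sub-grid that
    has rotated by angle theta about the origin: rotate the query point back into
    the donor grid's reference orientation, then use simple inverse-distance
    interpolation over the nearest donor nodes (illustrative only)."""
    c, s = np.cos(-theta), np.sin(-theta)
    q = np.array([[c, -s], [s, c]]) @ np.asarray(query_xy, dtype=float)  # back-rotated query
    d = np.linalg.norm(donor_xy - q, axis=1)
    near = np.argsort(d)[:4]                                             # 4 nearest donor nodes
    w = 1.0 / np.maximum(d[near], 1e-12)
    return float(np.sum(w * donor_vals[near]) / np.sum(w))

# Illustrative donor grid storing the field f(x, y) = x (in the donor frame).
gx, gy = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
donor_xy = np.column_stack([gx.ravel(), gy.ravel()])
donor_vals = donor_xy[:, 0]
print(sample_from_rotating_grid([0.3, 0.4], donor_xy, donor_vals, theta=np.pi / 6))
```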
Abstract:
Building on Item Response Theory, we introduce students' optimal behavior in multiple-choice tests. Our simulations indicate that the optimal penalty is relatively high: although correction for guessing discriminates against risk-averse subjects, this effect is small compared with the measurement error that the penalty prevents. This result holds when knowledge is binary or partial, under different normalizations of the score, when risk aversion is related to knowledge, and when there is a pass-fail break point. We also find that the mean degree of difficulty should be close to the mean level of knowledge and that the variance of difficulty should be high.
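A small numerical illustration of the guessing-penalty logic, under the standard three-parameter logistic (3PL) IRT model, is sketched below; the abilities, item parameters, and scoring rule are hypothetical, and the paper's optimal-penalty analysis with risk aversion goes well beyond this.

```python
import numpy as np

def expected_guess_gain(m, penalty):
    """Expected score change from blindly guessing on an m-option item
    scored +1 for a correct answer and -penalty for a wrong one."""
    return (1.0 / m) * 1.0 - ((m - 1.0) / m) * penalty

def p_correct_3pl(theta, a=1.0, b=0.0, c=0.25):
    """Three-parameter logistic IRT model: guessing floor c, discrimination a,
    difficulty b (standard 3PL form, used here only for illustration)."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

m = 4
print(expected_guess_gain(m, penalty=1.0 / (m - 1)))   # 0.0: guessing is 'fair' in expectation
print(expected_guess_gain(m, penalty=0.0))             # 0.25: without a penalty, guessing always pays

# Expected formula score on one item for examinees of differing ability theta.
for theta in (-1.0, 0.0, 1.0):
    p = p_correct_3pl(theta)
    print(theta, p * 1.0 - (1.0 - p) * (1.0 / (m - 1)))
```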