878 results for Topographic correction


Relevance: 20.00%

Abstract:

In the context of SPH-based simulations of impact dynamics, an optimised and automated form of the acceleration correction algorithm (Shaw and Reid, 2009a) is developed so as to remove spurious high-frequency oscillations in computed responses whilst retaining the stabilising characteristics of the artificial viscosity in the presence of shocks and layers with sharp gradients. A rational framework for an insightful characterisation of the earlier acceleration correction method is first set up. This is followed by an optimised version of the method, wherein the strength of the correction term in the momentum balance and energy equations is optimised. For the first time, this leads to an automated procedure for arriving at the artificial viscosity term. In particular, this is achieved by taking a spatially varying, response-dependent support size for the kernel function through which the correction term is computed. The optimum value of the support size is deduced by minimising the (spatially localised) total variation of the high-frequency oscillation in the acceleration term with respect to its (local) mean. The derivation of the method, its advantages over the heuristic method and issues related to its numerical implementation are discussed in detail.
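
To make the selection step concrete, here is a minimal sketch of choosing a kernel support size by minimising the total variation of the high-frequency oscillation about a local mean. The box-kernel smoothing, the moving-average mean and the discrete candidate widths are illustrative assumptions, not the paper's actual SPH kernel or optimisation procedure.

```python
import numpy as np

def oscillation_tv(accel, mean_window=9):
    """Total variation of the high-frequency oscillation of `accel`
    about its local (moving-average) mean."""
    local_mean = np.convolve(accel, np.ones(mean_window) / mean_window, mode="same")
    return np.sum(np.abs(np.diff(accel - local_mean)))

def optimal_support(accel, candidate_widths):
    """Pick the smoothing width whose corrected signal minimises the
    oscillation measure; the box kernel and discrete widths are
    illustrative stand-ins for the SPH kernel support sizes."""
    scores = {}
    for h in candidate_widths:
        smoothed = np.convolve(accel, np.ones(h) / h, mode="same")
        scores[h] = oscillation_tv(smoothed)
    return min(scores, key=scores.get)
```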

Relevance: 20.00%

Abstract:

Motivated by applications to distributed storage, Gopalan et al. recently introduced the interesting notion of information-symbol locality in a linear code. By this it is meant that each message symbol appears in a parity-check equation of small Hamming weight, thereby enabling recovery of the message symbol by examining a small number of other code symbols. This notion is expanded to the case when all code symbols, not just the message symbols, are covered by such "local" parity. In this paper, we extend the results of Gopalan et al. so as to permit recovery of an erased code symbol even in the presence of errors in local parity symbols. We present tight bounds on the minimum distance of such codes and exhibit codes that are optimal with respect to the local error-correction property. As a corollary, we obtain an upper bound on the minimum distance of a concatenated code.
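
As a toy illustration of plain locality (without the paper's added tolerance to errors inside the local group), a single erased symbol in a local even-parity group over GF(2) is recovered from the few other symbols in its group; the group size and contents below are made up for the example.

```python
# Toy single-erasure recovery from a local even-parity group over GF(2).
def recover_erasure(group, erased_index):
    """XOR of the surviving symbols reproduces the erased one, because the
    group satisfies an even-parity check. (Illustration only: the paper's
    codes additionally tolerate errors among the local parity symbols.)"""
    value = 0
    for i, bit in enumerate(group):
        if i != erased_index:
            value ^= bit
    return value

codeword_group = [1, 0, 1, 0]            # even parity: XOR of all bits is 0
assert recover_erasure(codeword_group, 2) == 1
```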

Relevance: 20.00%

Abstract:

Using a Girsanov change of measure, we propose novel variations within a particle-filtering algorithm, applied to the inverse problem of state and parameter estimation in nonlinear dynamical systems of engineering interest, in order to weakly correct for the linearization or integration errors that almost invariably occur whilst numerically propagating the process dynamics, typically governed by nonlinear stochastic differential equations (SDEs). Specifically, the correction for linearization, provided by the likelihood or the Radon-Nikodym derivative, is incorporated within the evolving flow in two steps. Once the likelihood, an exponential martingale, is split into a product of two factors, the correction owing to the first factor is implemented via rejection sampling in the first step. The second factor, which is directly computable, is accounted for via two different schemes: one employing resampling and the other using a gain-weighted innovation term added to the drift field of the process dynamics, thereby overcoming the problem of sample dispersion posed by resampling. The proposed strategies, employed as add-ons to existing particle filters (the bootstrap and auxiliary SIR filters in this work), are found to non-trivially improve the convergence and accuracy of the estimates and also yield reduced mean-square errors vis-a-vis those obtained through the parent filtering schemes.
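
A minimal sketch of the add-on idea: a standard bootstrap-filter step whose importance weights are multiplied by an extra, directly computable correction factor before resampling. The function names and the placement of the correction are assumptions for illustration; the paper's rejection-sampling step and gain-weighted drift variant are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrected_bootstrap_step(particles, propagate, likelihood, rn_factor=None):
    """One bootstrap-filter step with an optional multiplicative weight
    correction `rn_factor`, standing in for the (directly computable part
    of the) Radon-Nikodym derivative discussed in the abstract."""
    particles = propagate(particles)
    weights = likelihood(particles)
    if rn_factor is not None:
        weights = weights * rn_factor(particles)  # Girsanov-type reweighting
    weights = weights / weights.sum()
    resample = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[resample]
```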

Relevance: 20.00%

Abstract:

Neutral and niche theories give contrasting explanations for the maintenance of tropical tree species diversity. Both have some empirical support, but methods to disentangle their effects have not yet been developed. We applied a statistical measure of spatial structure to data from 14 large tropical forest plots to test a prediction of niche theory that is incompatible with neutral theory: that species in heterogeneous environments should separate out in space according to their niche preferences. We chose plots across a range of topographic heterogeneity and tested whether pairwise spatial associations among species were more variable in more heterogeneous sites. We found strong support for this prediction: the variance in the spatial structure of species pairs increased markedly with topographic heterogeneity across sites. We interpret this pattern as evidence of pervasive niche differentiation, which increases in importance with increasing environmental heterogeneity.
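
One way to picture the test: score each species pair's spatial association within a plot, take the variance of those scores, then relate that variance to the plot's topographic heterogeneity. The correlation-of-quadrat-counts score below is a crude stand-in assumed for illustration, not the study's actual spatial-structure statistic.

```python
import numpy as np

def association_variance(quadrat_counts):
    """Variance, across species pairs, of a simple association score
    (correlation of quadrat-level abundances); a crude proxy for the
    spatial-structure statistic applied to the forest plots."""
    n = quadrat_counts.shape[0]
    scores = [np.corrcoef(quadrat_counts[i], quadrat_counts[j])[0, 1]
              for i in range(n) for j in range(i + 1, n)]
    return np.var(scores)
```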

Relevance: 20.00%

Abstract:

General circulation models (GCMs) are routinely used to simulate future climatic conditions. However, rainfall outputs from GCMs are highly uncertain in preserving temporal correlations, frequencies, and intensity distributions, which limits their direct application in downscaling and hydrological modeling studies. To address these limitations, raw outputs of GCMs or regional climate models are often bias corrected using past observations. In this paper, a methodology is presented for using a nested bias-correction approach to predict the frequencies and occurrences of severe droughts and wet conditions across India for a 50-year period (2050-2099) centered at 2075. Specifically, monthly time series of rainfall from 17 GCMs are used to draw conclusions about extreme events. An increasing trend in the frequencies of droughts and wet events is observed. The northern part of India and the coastal regions show the greatest increase in the frequency of wet events. Drought events are expected to increase in the west-central, peninsular, and central-northeast regions of India.
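
A simplified sketch of the nesting idea: correct the monthly statistics first, then rescale so the annual statistics are matched as well. This is an assumed two-level mean/standard-deviation matching for illustration only; the published nested bias correction also adjusts lag-1 autocorrelations at both time scales, which is omitted here.

```python
import numpy as np

def nested_bias_correction(gcm, obs, years):
    """Simplified nested bias correction: match monthly mean/std first,
    then rescale each year so annual totals also match the observed
    statistics. (Sketch only; negative rainfall would be clipped and
    autocorrelation corrected in a full implementation.)"""
    gcm = gcm.reshape(years, 12).astype(float)
    obs = obs.reshape(years, 12).astype(float)
    # Monthly step: standardise each calendar month of the GCM series and
    # map it onto the observed monthly mean and standard deviation.
    z = (gcm - gcm.mean(0)) / gcm.std(0)
    monthly = z * obs.std(0) + obs.mean(0)
    # Annual step: rescale each year so annual totals follow the observed
    # annual mean and standard deviation as well.
    ann_g, ann_o = monthly.sum(1), obs.sum(1)
    za = (ann_g - ann_g.mean()) / ann_g.std()
    target = za * ann_o.std() + ann_o.mean()
    return (monthly * (target / ann_g)[:, None]).ravel()
```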

Relevance: 20.00%

Abstract:

There is a strong relation between sparse signal recovery and error-control coding. It is known that burst errors are block sparse in nature, so here we attempt to solve the burst error correction problem using block-sparse signal recovery methods. We construct partial Fourier based encoding and decoding matrices using results on difference sets. These constructions offer guaranteed and efficient error correction when used in conjunction with reconstruction algorithms that exploit block sparsity.
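
As a small, concrete instance of the ingredient named here: the rows of a DFT matrix can be indexed by a difference set, and {1, 2, 4} mod 7 is a classical (7, 3, 1) planar difference set. How the paper actually sizes and deploys such matrices for burst correction is not reproduced; this only shows building a partial Fourier matrix from a difference set.

```python
import numpy as np

# {1, 2, 4} is a (7, 3, 1) planar difference set mod 7: every nonzero
# residue occurs exactly once as a difference of two of its elements.
D, n = [1, 2, 4], 7

# Partial Fourier encoding matrix: DFT rows indexed by the difference set.
F = np.exp(-2j * np.pi * np.outer(D, np.arange(n)) / n) / np.sqrt(len(D))
```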

Relevance: 20.00%

Abstract:

Propranolol, a beta-adrenergic receptor blocker, is presently considered a potential therapeutic intervention under investigation for its role in the prevention and treatment of osteoporosis. However, no studies have compared the osteoprotective properties of propranolol with well-accepted therapeutic interventions for the treatment of osteoporosis. To address this question, this study was designed to evaluate the bone-protective effects of zoledronic acid, alfacalcidol and propranolol in an animal model of postmenopausal osteoporosis. Five days after ovariectomy, 36 ovariectomized (OVX) rats were divided into 6 equal groups, randomized to zoledronic acid (100 μg/kg, intravenous single dose), alfacalcidol (0.5 μg/kg, oral gavage daily) or propranolol (0.1 mg/kg, subcutaneously 5 days per week) for 12 weeks. Untreated OVX and sham-OVX rats were used as controls. At the end of the study, rats were killed under anesthesia. For bone porosity evaluation, whole fourth lumbar vertebrae (LV4) were removed. LV4 were also used to measure bone mechanical properties. Left femurs were used for bone histology. Propranolol showed a significant decrease in bone porosity in comparison to the OVX control. Moreover, propranolol significantly improved bone mechanical properties and bone quality when compared with the OVX control. The osteoprotective effect of propranolol was comparable with those of zoledronic acid and alfacalcidol. Based on this comparative study, the results strongly suggest that propranolol might be a new therapeutic intervention for the management of postmenopausal osteoporosis in humans.

Relevance: 20.00%

Abstract:

Regenerating codes and codes with locality are two coding schemes that have recently been proposed which, in addition to ensuring data collection and reliability, also enable efficient node repair. When repairing a failed node, regenerating codes seek to minimize the amount of data downloaded for node repair, while codes with locality attempt to minimize the number of helper nodes accessed. This paper presents results in two directions. In the first, it extends the notion of codes with locality so as to permit local recovery of an erased code symbol even in the presence of multiple erasures, by employing local codes having minimum distance greater than 2. An upper bound on the minimum distance of such codes is presented and codes that are optimal with respect to this bound are constructed. The second direction seeks to build codes that combine the advantages of both codes with locality and regenerating codes. These codes, termed here codes with local regeneration, are codes with locality over a vector alphabet in which the local codes themselves are regenerating codes. We derive an upper bound on the minimum distance of vector-alphabet codes with locality for the case when their constituent local codes have a certain uniform rank accumulation property. This property is possessed by both minimum storage regenerating (MSR) and minimum bandwidth regenerating (MBR) codes. We provide several constructions of codes with local regeneration which achieve this bound, where the local codes are either MSR or MBR codes. Also included in this paper is an upper bound on the minimum distance of a general vector code with locality, as well as a performance comparison of various code constructions of fixed block length and minimum distance.
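
For the scalar-alphabet case, the bound in this line of work is usually stated as below, where r is the locality and δ the minimum distance of the local codes; setting δ = 2 recovers the bound of Gopalan et al. This is quoted from memory of the related literature rather than from the paper itself, and the paper's vector-alphabet analogue under uniform rank accumulation is more involved.

```latex
% Minimum-distance bound under (r, \delta) locality; \delta = 2 gives
% the Gopalan et al. bound  d_{\min} \le n - k - \lceil k/r \rceil + 2.
d_{\min} \;\le\; n - k + 1 - \left(\left\lceil \tfrac{k}{r} \right\rceil - 1\right)(\delta - 1)
```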

Relevance: 20.00%

Abstract:

In this study, we applied the integration methodology developed in the companion paper by Aires (2014) using real satellite observations over the Mississippi Basin. The methodology provides basin-scale estimates of the four water budget components (precipitation P, evapotranspiration E, water storage change Delta S, and runoff R) in a two-step process: a Simple Weighting (SW) integration and a Postprocessing Filtering (PF) that imposes water budget closure. A comparison with in situ observations of P and E demonstrated that PF improved the estimation of both components. A Closure Correction Model (CCM) was derived from the integrated product (SW+PF) that allows each observation data set to be corrected independently, unlike the SW+PF method, which requires simultaneous estimates of all four components. The CCM makes it possible to standardize the various data sets for each component and greatly decreases the budget residual (P - E - Delta S - R). As a direct application, the CCM was combined with the water budget equation to reconstruct missing values in any component. Results of a Monte Carlo experiment with synthetic gaps demonstrated the good performance of the method, except for the runoff data, whose variability is of the same order of magnitude as the budget residual. Similarly, we proposed a reconstruction of Delta S between 1990 and 2002, a period for which no Gravity Recovery and Climate Experiment data are available. Unlike most studies dealing with water budget closure at the basin scale, only satellite observations and in situ runoff measurements are used. Consequently, the integrated data sets are model independent and can be used for model calibration or validation.
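
A minimal sketch of what imposing closure can look like: project the four component estimates onto the constraint P - E - Delta S - R = 0 with a variance-weighted least-squares update. The error variances and the single-constraint form are assumptions for illustration; the published PF and CCM are more elaborate than this.

```python
import numpy as np

def close_budget(x, sigma):
    """Project (P, E, dS, R) onto the closure constraint P - E - dS - R = 0
    by a variance-weighted least-squares update (a minimal sketch of the
    postprocessing-filtering idea, not the published PF itself)."""
    G = np.array([1.0, -1.0, -1.0, -1.0])       # closure constraint
    S = np.diag(np.asarray(sigma, float) ** 2)  # assumed error covariance
    residual = G @ x                            # current budget imbalance
    gain = S @ G / (G @ S @ G)
    return x - gain * residual                  # closed: G @ result == 0

x_closed = close_budget(np.array([100.0, 60.0, 5.0, 30.0]), [10, 8, 4, 3])
```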

Relevance: 20.00%

Abstract:

We consider conformal field theories in 1+1 dimensions with W-algebra symmetries, deformed by a chemical potential mu for the spin-three current. We show that the order mu^2 correction to the Renyi and entanglement entropies of a single interval in the deformed theory, on the infinite spatial line and at finite temperature, is universal. The correction is completely determined by the operator product expansion of two spin-three currents, and by the expectation values of the stress tensor, its descendants and its composites, evaluated on the n-sheeted Riemann surface branched along the interval. This explains the recently found agreement of the order mu^2 correction across distinct free-field CFTs and higher spin black hole solutions holographically dual to CFTs with W symmetry.
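
In equation form, the statement amounts to the expansion below; the notation is assumed here for illustration, with S_n^{(0)} the undeformed Renyi entropy and S_n^{(2)} the universal coefficient the abstract describes.

```latex
S_n(\mu) \;=\; S_n^{(0)} \;+\; \mu^2\, S_n^{(2)} \;+\; O(\mu^4),
% with S_n^{(2)} fixed by the OPE of two spin-three currents and by the
% expectation values of the stress tensor, its descendants and composites
% on the n-sheeted branched surface.
```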

Relevance: 20.00%

Abstract:

Large-scale estimates of the area of terrestrial surface waters have greatly improved over time, in particular through the development of multi-satellite methodologies, but the generally coarse spatial resolution (tens of km) of global observations is still inadequate for many ecological applications. The goal of this study is to introduce a new, globally applicable downscaling method and to demonstrate its applicability by deriving fine-resolution results from coarse global inundation estimates. The downscaling procedure predicts the location of surface water cover with an inundation probability map that was generated by bagged decision trees using globally available topographic and hydrographic information from the SRTM-derived HydroSHEDS database and trained on the wetland extent of the GLC2000 global land cover map. We applied the downscaling technique to the Global Inundation Extent from Multi-Satellites (GIEMS) dataset to produce a new high-resolution inundation map at a pixel size of 15 arc-seconds, termed GIEMS-D15. GIEMS-D15 represents three states of land surface inundation extent: mean annual minimum (total area, 6.5 x 10^6 km^2), mean annual maximum (12.1 x 10^6 km^2), and long-term maximum (17.3 x 10^6 km^2); the latter depicts the largest surface water area of any global map to date. While the accuracy of GIEMS-D15 reflects distribution errors introduced by the downscaling process as well as errors from the original satellite estimates, overall accuracy is good yet spatially variable. A comparison against regional wetland cover maps generated from independent observations shows that the results adequately represent large floodplains and wetlands. GIEMS-D15 offers a higher-resolution delineation of inundated areas than previously available for the assessment of global freshwater resources and the study of large floodplain and wetland ecosystems. The technique of applying inundation probabilities also allows for coupling with coarse-scale hydro-climatological model simulations.
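
A sketch of the probability-map step using scikit-learn's bagged decision trees on synthetic stand-in data. The predictor set, labels and model settings are assumptions for illustration, and the final step of thresholding probabilities within each coarse cell until the cell's GIEMS inundation fraction is met is only indicated in a comment.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# Synthetic stand-ins: per-pixel topographic/hydrographic predictors
# (e.g. elevation, slope, flow accumulation) and a wetland/no-wetland label.
X = rng.normal(size=(5000, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=5000) < -1).astype(int)

# Bagged decision trees yield a per-pixel inundation probability; in the
# downscaling, pixels inside each coarse cell would then be flagged in
# decreasing probability order until the cell's coarse fraction is reached.
model = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50).fit(X, y)
p_inundation = model.predict_proba(X)[:, 1]
```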

Relevance: 20.00%

Abstract:

A new class of exact-repair regenerating codes is constructed by stitching together shorter erasure correction codes, where the stitching pattern can be viewed as a block design. The proposed codes have the help-by-transfer property, whereby the helper nodes simply transfer part of the stored data directly, without performing any computation. This embedded error correction structure makes the decoding process straightforward, and in some cases the complexity is very low. We show that this construction is able to achieve performance better than space-sharing between the minimum storage regenerating codes and the minimum repair-bandwidth regenerating codes, and it is the first class of codes to achieve this performance. In fact, it is shown that the proposed construction can achieve a nontrivial point on the optimal functional-repair tradeoff, and it is asymptotically optimal at high rate, i.e., it asymptotically approaches the minimum storage and the minimum repair bandwidth simultaneously.
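
For context, the storage/repair-bandwidth tradeoff referred to here is the cut-set bound of Dimakis et al., whose two extreme points are the MSR and MBR operating points. In the standard notation (B the file size, alpha the per-node storage, beta the per-helper download, d the number of helpers, k the number of nodes contacted for data collection):

```latex
% Cut-set file-size bound for regenerating codes; the MSR and MBR points
% are its two extremes.
B \;\le\; \sum_{i=0}^{k-1} \min\{\alpha,\; (d - i)\,\beta\}
```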

Relevance: 20.00%

Abstract:

The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the ray paths through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique with refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any ray refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken from various views. The study also focuses on the effectiveness of using different refractive-index-matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out reconstructions using the conventional algebraic reconstruction technique (ART) and the refraction-corrected ART (ART-rc) algorithms. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch may not reach the detector in the same plane, depending on the angle of incidence and the RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius with the ART-rc algorithm and water as the RI-matched medium is 71.8%, an increase of 6.4% compared to that achieved using the conventional ART algorithm. Smaller-diameter dosimeters are scanned in dry air using a wide-angle lens that collects refracted light. Images reconstructed using cone-beam geometry deteriorate in some planes, as those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array-based scanners, as it is not possible to identify refracted rays in sinogram space.
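
A minimal sketch of where the correction enters the algorithm: a standard ART sweep consumes per-ray (pixel index, weight) pairs, so refraction correction amounts to supplying pairs traced along bent ray paths (Snell's law at RI mismatches) instead of straight lines. The data layout and relaxation factor are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def art_sweep(x, rays, measurements, relax=0.5):
    """One ART sweep over all rays. Each ray is a (pixel_indices, weights)
    pair; with refraction correction, these pairs come from tracing bent
    ray paths through the RI field rather than straight lines."""
    for (idx, w), m in zip(rays, measurements):
        w = np.asarray(w, float)
        residual = m - np.dot(w, x[idx])        # mismatch along this ray
        x[idx] += relax * residual * w / np.dot(w, w)
    return x
```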