977 results for "Full spatial domain computation"
Abstract:
The pseudo-spectral time-domain (PSTD) method is an alternative time-marching method to classical leapfrog finite difference schemes in the simulation of wave-like propagating phenomena. It is based on the fundamentals of the Fourier transform to compute the spatial derivatives of hyperbolic differential equations. Therefore, it results in an isotropic operator that can be implemented in an efficient way for room acoustics simulations. However, one of the first issues to be solved consists in modeling wall absorption. Unfortunately, there are no references in the technical literature concerning that problem. In this paper, assuming real and constant locally reacting impedances, several proposals to overcome this problem are presented, validated, and compared to analytical solutions in different scenarios.
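The FFT-based spatial derivative that underlies the PSTD method can be illustrated with a minimal sketch (grid size and domain length are illustrative, not taken from the paper):

```python
import numpy as np

# Spectral derivative on a periodic grid: differentiation becomes
# multiplication by i*k in the Fourier domain, giving an isotropic
# operator with spectral accuracy for band-limited fields.
N = 64
L = 2 * np.pi
x = np.arange(N) * L / N
u = np.sin(x)

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # wavenumber vector
du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

err = np.max(np.abs(du - np.cos(x)))             # exact derivative is cos(x)
```

For a band-limited periodic field like this one the derivative is exact to machine precision, which is the accuracy advantage PSTD trades against the periodicity assumptions that make wall absorption nontrivial to model.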
Abstract:
The 2×2 MIMO profiles included in Mobile WiMAX specifications are Alamouti's space-time code (STC) for transmit diversity and spatial multiplexing (SM). The former has full diversity and the latter has full rate, but neither of them has both of these desired features. An alternative 2×2 STC, which is both full rate and full diversity, is the Golden code. It is the best known 2×2 STC, but it has a high decoding complexity. Recently, attention has turned to decoder complexity: this issue was included in the STC design criteria, and different STCs were proposed. In this paper, we first present a full-rate full-diversity 2×2 STC design leading to substantially lower complexity of the optimum detector compared to the Golden code with only a slight performance loss. We provide the general optimized form of this STC and show that this scheme achieves the diversity-multiplexing frontier for square QAM signal constellations. Then, we present a variant of the proposed STC, which provides a further decrease in the detection complexity with a rate reduction of 25%, and show that this provides an interesting trade-off between the Alamouti scheme and SM.
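As a point of reference for the complexity comparison above, the Alamouti code named in the abstract admits a compact sketch; the channel coefficients and QPSK symbols below are illustrative, not from the paper:

```python
import numpy as np

def alamouti_encode(s1, s2):
    # Rows: transmit antennas; columns: time slots.
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    # Linear combining decouples the two symbols, allowing
    # symbol-by-symbol detection -- the low-complexity benchmark.
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1 = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2 = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1, s2

# Noiseless round trip over an illustrative flat-fading channel
h1, h2 = 0.8 - 0.3j, 0.5 + 0.9j
s1, s2 = 1 + 1j, -1 + 1j                   # QPSK symbols
C = alamouti_encode(s1, s2)
r1 = h1 * C[0, 0] + h2 * C[1, 0]           # received, slot 1 (2x1 channel)
r2 = h1 * C[0, 1] + h2 * C[1, 1]           # received, slot 2
est1, est2 = alamouti_decode(r1, r2, h1, h2)
```

The decoupled detection seen here is exactly what full-rate codes such as the Golden code give up, which is why the joint-detection complexity becomes a design criterion.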
Abstract:
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as of yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction, combined with a Runge-Kutta integration scheme for the time evolution. A domain decomposition method is used to match the fluid-solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method have been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
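The Chebyshev part of such a radial discretization can be sketched with the standard Gauss-Lobatto differentiation matrix (this is the textbook construction, e.g. Trefethen's cheb.m, not the authors' code):

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points
    # x_j = cos(pi*j/N), following the classical construction.
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))     # diagonal fixes the row sums to zero
    return D, x

# Differentiation of a polynomial of degree <= N is exact
D, x = cheb(8)
dudx = D @ x**2                     # should equal 2*x
```

The nonperiodic Chebyshev grid suits the radial direction (where the borehole wall is a genuine boundary), while the periodic azimuthal direction is handled by a Fourier expansion instead.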
Abstract:
Several lines of evidence have suggested that T cell activation could be impaired in the tumor environment, a condition referred to as tumor-induced immunosuppression. We have previously shown that tenascin-C, an extracellular matrix protein highly expressed in the tumor stroma, inhibits T lymphocyte activation in vitro, raising the possibility that this molecule might contribute to tumor-induced immunosuppression in vivo. However, the region of the protein mediating this effect has remained elusive. Here we report the identification of the minimal region of tenascin-C that can inhibit T cell activation. Recombinant fragments corresponding to defined regions of the molecule were tested for their ability to inhibit in vitro activation of human peripheral blood T cells induced by anti-CD3 mAbs in combination with fibronectin or IL-2. A recombinant protein encompassing the alternatively spliced fibronectin type III domains of tenascin-C (TnFnIII A-D) vigorously inhibited both early and late lymphocyte activation events including activation-induced TCR/CD8 down-modulation, cytokine production, and DNA synthesis. In agreement with this, full length recombinant tenascin-C containing the alternatively spliced region suppressed T cell activation, whereas tenascin-C lacking this region did not. Using a series of smaller fragments and deletion mutants issued from this region, we have identified the TnFnIII A1A2 domain as the minimal region suppressing T cell activation. Single TnFnIII A1 or A2 domains were no longer inhibitory, while maximal inhibition required the presence of the TnFnIII A3 domain. Altogether, these data demonstrate that the TnFnIII A1A2 domain mediates the ability of tenascin-C to inhibit in vitro T cell activation and provide insights into the immunosuppressive activity of tenascin-C in vivo.
Abstract:
This paper presents a differential synthetic aperture radar (SAR) interferometry (DIFSAR) approach for investigating deformation phenomena on full-resolution DIFSAR interferograms. In particular, our algorithm extends the capability of the small-baseline subset (SBAS) technique that relies on small-baseline DIFSAR interferograms only and is mainly focused on investigating large-scale deformations with spatial resolutions of about 100 × 100 m. The proposed technique is implemented by using two different sets of data generated at low (multilook data) and full (single-look data) spatial resolution, respectively. The former is used to identify and estimate, via the conventional SBAS technique, large spatial scale deformation patterns, topographic errors in the available digital elevation model, and possible atmospheric phase artifacts; the latter allows us to detect, on the full-resolution residual phase components, structures highly coherent over time (buildings, rocks, lava, structures, etc.), as well as their height and displacements. In particular, the estimation of the temporal evolution of these local deformations is easily implemented by applying the singular value decomposition technique. The proposed algorithm has been tested with data acquired by the European Remote Sensing satellites relative to the Campania area (Italy) and validated by using geodetic measurements.
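The SVD step mentioned above, recovering a displacement time series from a network of small-baseline interferograms, reduces to a pseudo-inverse least-squares problem; the dates, pairs, and displacements below are invented for illustration:

```python
import numpy as np

# Hypothetical acquisition dates (days) and true displacement history (mm)
t = np.array([0.0, 35.0, 70.0, 140.0, 175.0])
d_true = np.array([0.0, 2.0, 3.5, 7.0, 9.0])

# Small-baseline interferogram pairs (indices of acquisition dates)
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]

# Each interferogram observes a displacement difference between two dates;
# the unknowns are the per-interval displacement increments.
A = np.zeros((len(pairs), len(t) - 1))
obs = np.zeros(len(pairs))
for row, (i, j) in enumerate(pairs):
    A[row, i:j] = 1.0
    obs[row] = d_true[j] - d_true[i]

# SVD-based pseudo-inverse, as in the SBAS family of methods
inc = np.linalg.pinv(A) @ obs
d_est = np.concatenate(([0.0], np.cumsum(inc)))   # series, first date set to 0
```

When the interferogram network splits into disconnected subsets, `A` becomes rank-deficient and the pseudo-inverse returns the minimum-norm solution, which is precisely why the SVD formulation is used.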
Abstract:
Among the types of remote sensing acquisitions, optical images are certainly one of the most widely relied upon data sources for Earth observation. They provide detailed measurements of the electromagnetic radiation reflected or emitted by each pixel in the scene. Through a process termed supervised land-cover classification, this makes it possible to automatically yet accurately distinguish objects at the surface of our planet. In this respect, when producing a land-cover map of the surveyed area, the availability of training examples representative of each thematic class is crucial for the success of the classification procedure. However, in real applications, due to several constraints on the sample collection process, labeled pixels are usually scarce. When analyzing an image for which those key samples are unavailable, a viable solution consists in resorting to the ground truth data of other previously acquired images. This option is attractive, but several factors such as atmospheric, ground, and acquisition conditions can cause radiometric differences between the images, hindering the transfer of knowledge from one image to another. The goal of this Thesis is to supply remote sensing image analysts with suitable processing techniques to ensure a robust portability of the classification models across different images. The ultimate purpose is to map the land-cover classes over large spatial and temporal extents with minimal ground information. To overcome, or simply quantify, the observed shifts in the statistical distribution of the spectra of the materials, we study four approaches from the field of machine learning. First, we propose a strategy to intelligently sample the image of interest so as to collect labels only for the most useful pixels. This iterative routine is based on a constant evaluation of the pertinence to the new image of the initial training data, which actually belong to a different image.
Second, an approach to reduce the radiometric differences among the images by projecting the respective pixels into a common new data space is presented. We analyze a kernel-based feature extraction framework suited for such problems, showing that, after this relative normalization, the cross-image generalization abilities of a classifier are highly increased. Third, we test a new data-driven measure of distance between probability distributions to assess the distortions caused by differences in the acquisition geometry affecting series of multi-angle images. We also gauge the portability of classification models through the sequences. In both exercises, the efficacy of classic physically- and statistically-based normalization methods is discussed. Finally, we explore a new family of approaches based on sparse representations of the samples to reciprocally convert the data space of two images. The projection function bridging the images allows a synthesis of new pixels with more similar characteristics, ultimately facilitating the land-cover mapping across images.
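One concrete instance of a data-driven distance between sample distributions is the maximum mean discrepancy (MMD); it is a standard choice for quantifying dataset shift, though not necessarily the measure used in this Thesis, and the data below are synthetic:

```python
import numpy as np

def mmd_rbf(X, Y, gamma=0.5):
    # Biased estimate of the squared Maximum Mean Discrepancy between
    # samples X and Y (rows = samples) under an RBF kernel.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 3))        # "source image" spectra
Y_same = rng.normal(0.0, 1.0, size=(200, 3))   # identically distributed
Y_shift = Y_same + 2.0                         # radiometrically shifted copy
```

A shifted sample yields a visibly larger discrepancy than an identically distributed one, which is what makes such measures useful for deciding whether a classifier can be ported across images without normalization.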
Abstract:
We study the dynamics of domain walls in pattern-forming systems that are externally forced by a moving space-periodic modulation close to the 2:1 spatial resonance. The motion of the forcing induces nongradient dynamics, while the wave number mismatch explicitly breaks the chiral symmetry of the domain walls. The combination of both effects yields an imperfect nonequilibrium Ising-Bloch bifurcation, where all kinks (including the Ising-like one) drift. Kink velocities and interactions are studied within the generic amplitude equation. For nonzero mismatch, a transition to traveling bound kink-antikink pairs and chaotic wave trains occurs.
Abstract:
We propose a method to display full complex Fresnel holograms by adding the information displayed on two analogue ferroelectric liquid crystal spatial light modulators. One of them works in real-only configuration and the other in imaginary-only mode. The Fresnel holograms are computed by backpropagating an object at a selected distance with the Fresnel transform. Then, displaying the real and imaginary parts on each panel, the object is reconstructed at that distance from the modulators by simple propagation of light. We present simulation results taking into account the specifications of the modulators as well as optical results. We have also studied the quality of reconstructions using only real, imaginary, amplitude or phase information. Although the real and imaginary reconstructions look acceptable for certain distances, full complex reconstruction is always better and is required when arbitrary distances are used.
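The display scheme above, splitting a backpropagated field into its real and imaginary parts on two panels and letting propagation re-form the object, can be sketched numerically. Here an angular-spectrum propagator stands in for the Fresnel transform, and all optical dimensions are illustrative rather than the modulators' actual specifications:

```python
import numpy as np

def propagate(field, wavelength, dx, z):
    # Angular-spectrum propagation of a sampled complex field over distance z
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

# Illustrative parameters: 633 nm laser, 8 um pixels, 20 cm distance
wl, dx, z, n = 633e-9, 8e-6, 0.2, 128
yy, xx = np.mgrid[:n, :n] - n // 2
obj = np.exp(-(xx**2 + yy**2) / (2 * 12.0**2)).astype(complex)  # smooth object

holo = propagate(obj, wl, dx, -z)          # backpropagate to the SLM plane
real_slm = np.real(holo)                   # panel 1: real-only modulator
imag_slm = 1j * np.imag(holo)              # panel 2: imaginary-only modulator
recon = propagate(real_slm + imag_slm, wl, dx, z)  # added fields propagate
err = np.max(np.abs(recon - obj))
```

Because the two panels' contributions add coherently to the full complex hologram, the reconstruction is exact here; dropping either part degrades it, mirroring the real-only and imaginary-only comparisons in the paper.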
Abstract:
An algorithm for computing correlation filters based on synthetic discriminant functions that can be displayed on current spatial light modulators is presented. The procedure is nondivergent, computationally feasible, and capable of producing multiple solutions, thus overcoming some of the pitfalls of previous methods.
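The classical SDF construction such algorithms build on, a filter constrained to give prescribed correlation outputs on the training images, reduces to a small linear solve; the toy images and response values below are hypothetical:

```python
import numpy as np

def sdf_filter(images, responses):
    # h = X (X^H X)^{-1} c  satisfies  X^H h = c: every training image
    # produces exactly its prescribed central correlation value.
    X = np.stack([im.ravel() for im in images], axis=1).astype(complex)
    c = np.asarray(responses, dtype=complex)
    return X @ np.linalg.solve(X.conj().T @ X, c)

rng = np.random.default_rng(1)
imgs = [rng.normal(size=(8, 8)) for _ in range(3)]  # toy training images
c = np.array([1.0, 1.0, 0.0])                       # 2 true-class, 1 reject
h = sdf_filter(imgs, c)

X = np.stack([im.ravel() for im in imgs], axis=1)
peaks = X.conj().T @ h                              # central correlation values
```

A raw SDF filter like this generally has full gray-scale complex values; the constrained algorithms the abstract refers to additionally force the solution onto the set of values a given spatial light modulator can actually display.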
Abstract:
We discuss the dynamics of the transient pattern formation process corresponding to the splay Fréedericksz transition. The emergence and subsequent evolution of the spatial periodicity is here described in terms of the temporal dependence of the wave numbers corresponding to the maxima of the structure factor. Situations of perpendicular as well as oblique field-induced stripes relative to the initial orientation of the director are both examined with explicit indications of the time scales needed for their appearance and posterior development.
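Tracking the wave number at the maximum of the structure factor, as described above, amounts to locating the power-spectrum peak of the pattern at each instant; a one-dimensional sketch with a synthetic stripe pattern of known wavenumber:

```python
import numpy as np

# Synthetic stripe pattern with known wavenumber
N, L = 256, 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
u = np.cos(7 * x)                           # stripes, wavenumber 7

S = np.abs(np.fft.rfft(u)) ** 2             # structure factor (power spectrum)
k = 2 * np.pi * np.fft.rfftfreq(N, d=L / N)
k_max = k[np.argmax(S[1:]) + 1]             # skip the k = 0 mean mode
```

In a transient-pattern study one would repeat this on each snapshot of the director field to obtain the temporal dependence k_max(t) of the emerging periodicity.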
Abstract:
A major issue in the application of waveform inversion methods to crosshole ground-penetrating radar (GPR) data is the accurate estimation of the source wavelet. Here, we explore the viability and robustness of incorporating this step into a recently published time-domain inversion procedure through an iterative deconvolution approach. Our results indicate that, at least in non-dispersive electrical environments, such an approach provides remarkably accurate and robust estimates of the source wavelet even in the presence of strong heterogeneity of both the dielectric permittivity and electrical conductivity. Our results also indicate that the proposed source wavelet estimation approach is relatively insensitive to ambient noise and to the phase characteristics of the starting wavelet. Finally, there appears to be little to no trade-off between the wavelet estimation and the tomographic imaging procedures.
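A non-iterative, water-level-stabilized variant of source-wavelet deconvolution can illustrate the basic operation (a simplified stand-in for the iterative procedure described, using synthetic data and circular convolution):

```python
import numpy as np

def estimate_wavelet(observed, greens, water=1e-8):
    # Frequency-domain deconvolution with a water-level stabilizer:
    # W = D G* / (|G|^2 + water * max|G|^2)
    D = np.fft.fft(observed)
    G = np.fft.fft(greens)
    denom = np.abs(G) ** 2 + water * np.max(np.abs(G)) ** 2
    return np.real(np.fft.ifft(D * np.conj(G) / denom))

# Synthetic test: a Ricker-like wavelet convolved (circularly) with a
# two-spike Green's function standing in for the medium response
n = 128
t = np.arange(n) - 20.0
w_true = (1 - 0.5 * t**2) * np.exp(-0.25 * t**2)
g = np.zeros(n)
g[0], g[30] = 1.0, 0.5
d = np.real(np.fft.ifft(np.fft.fft(w_true) * np.fft.fft(g)))

w_est = estimate_wavelet(d, g)
err = np.max(np.abs(w_est - w_true))
```

In the published procedure the medium response itself comes from forward modeling with the current tomogram, and wavelet estimation and imaging alternate until both converge; the sketch isolates just the deconvolution step.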
Abstract:
The epithelial sodium channel (ENaC) is responsible for Na+ and fluid absorption across colon, kidney, and airway epithelia. We have previously identified SPLUNC1 as an autocrine inhibitor of ENaC. We have now located the ENaC inhibitory domain of SPLUNC1 to SPLUNC1's N terminus, and a peptide corresponding to this domain, G22-A39, inhibited ENaC activity to a similar degree as full-length SPLUNC1 (∼2.5 fold). However, G22-A39 had no effect on the structurally related acid-sensing ion channels, indicating specificity for ENaC. G22-A39 preferentially bound to the β-ENaC subunit in a glycosylation-dependent manner. ENaC hyperactivity is contributory to cystic fibrosis (CF) lung disease. Addition of G22-A39 to CF human bronchial epithelial cultures (HBECs) resulted in an increase in airway surface liquid height from 4.2±0.6 to 7.9±0.6 μm, comparable to heights seen in normal HBECs, even in the presence of neutrophil elastase. Our data also indicate that the ENaC inhibitory domain of SPLUNC1 may be cleaved away from the main molecule by neutrophil elastase, which suggests that it may still be active during inflammation or neutrophilia. Furthermore, the robust inhibition of ENaC by the G22-A39 peptide suggests that this peptide may be suitable for treating CF lung disease.
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology as being closely associated with indoor radon. This association was indeed observed for the Swiss data but not proved to be the sole determinant for the spatial modeling. The statistical analysis of data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering, and moving windows methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing methods of declustering were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-to-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography, and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset.
A bottom-to-top method complexity approach was adopted and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for extreme value modeling through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for data classification hardening. Among the classification methods, probabilistic neural networks (PNN) proved to be better adapted for the modeling of high-threshold categorization and for automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions. In general, it was concluded that no particular prediction or estimation method is better under all conditions of scale and neighborhood definitions. Simulations should be the basis, while other methods can provide complementary information to accomplish efficient indoor radon decision making.
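Of the exploratory tools listed, the K Nearest Neighbors interpolator is simple enough to sketch directly; the sample coordinates and the smooth toy field below are invented for illustration:

```python
import numpy as np

def knn_interpolate(coords, values, queries, k=3):
    # Predict at each query location as the mean of the k nearest samples
    preds = np.empty(len(queries))
    for m, q in enumerate(queries):
        dist = np.linalg.norm(coords - q, axis=1)
        nearest = np.argsort(dist)[:k]
        preds[m] = values[nearest].mean()
    return preds

rng = np.random.default_rng(2)
coords = rng.uniform(0.0, 10.0, size=(50, 2))   # sample locations (km)
values = coords[:, 0] + coords[:, 1]            # smooth toy concentration field
queries = np.array([[5.0, 5.0], [2.0, 8.0]])
est = knn_interpolate(coords, values, queries, k=3)
```

The choice of k plays the role of the neighborhood parameter discussed in the text: small k reproduces local noise, large k smooths toward the global mean, which is one reason no single setting works across all scales.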