57 results for "Spectral method with domain decomposition"


Relevance:

100.00%

Publisher:

Abstract:

A new spectral method for solving initial boundary value problems for linear and integrable nonlinear partial differential equations in two independent variables is applied to the nonlinear Schrödinger equation and to its linearized version in the domain {x≥l(t), t≥0}. We show that two cases arise: (a) if l″(t)<0, the solution of the linear or nonlinear equations can be obtained by solving the corresponding scalar or matrix Riemann-Hilbert problem, which is defined on a time-dependent contour; (b) if l″(t)>0, the Riemann-Hilbert problem is replaced by the corresponding scalar or matrix problem on a time-independent domain. In both cases, the solution is expressed in spectrally decomposed form.
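
For reference, a standard integrable normalization of the equations treated here is the following (the paper's exact normalization may differ):

```latex
i q_t + q_{xx} \pm 2 \lvert q \rvert^2 q = 0 \quad \text{(nonlinear Schr\"odinger)},
\qquad
i q_t + q_{xx} = 0 \quad \text{(linearized version)},
```

posed for x ≥ l(t), t ≥ 0, with the sign distinguishing the focusing and defocusing cases.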

Relevance:

100.00%

Publisher:

Abstract:

This paper addresses the numerical solution of the rendering equation in realistic image creation. The rendering equation is an integral equation describing light propagation in a scene according to a given illumination model; the chosen illumination model determines the kernel of the equation under consideration. Monte Carlo methods are now widely used for solving the rendering equation in order to create photorealistic images. In this work we consider the Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to a stratified Monte Carlo integration method for solving the rendering equation in parallel. The domain of integration of the rendering equation is a hemisphere. We divide the hemispherical domain into a number of equal sub-domains of orthogonal spherical triangles; this partitioning allows the rendering equation to be solved in parallel. It is known that the Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum to within a desired truncation (systematic) error, which yields a fixed number of iterations. The rendering equation is then solved iteratively using a Monte Carlo approach; at each iteration we evaluate multi-dimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence is obtained for the stratified Monte Carlo method. This domain partitioning allows easy parallel realization and improves the convergence of the Monte Carlo method. The high-performance and Grid computing aspects of the corresponding Monte Carlo scheme are discussed.
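
To fix ideas, here is a minimal stratified Monte Carlo estimator for an integral over the hemisphere. One simplification to note: the paper partitions the hemisphere into equal orthogonal spherical triangles, whereas this sketch stratifies the equivalent (cos θ, φ) rectangle into equal-solid-angle cells.

```python
import numpy as np

# Stratified Monte Carlo estimate of a hemisphere integral. Each of the
# n*n strata below carries equal solid angle 2*pi/n**2, and one uniform
# sample is drawn per stratum, so the estimator is unbiased.

def hemisphere_integral_stratified(f, n=16, rng=None):
    """Estimate the integral of f(theta, phi) over the hemisphere dOmega."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for i in range(n):          # strata in cos(theta) over [0, 1]
        for j in range(n):      # strata in phi over [0, 2*pi]
            u = (i + rng.random()) / n       # sample of cos(theta)
            v = (j + rng.random()) / n
            theta = np.arccos(u)
            phi = 2.0 * np.pi * v
            total += f(theta, phi)
    # dOmega = d(cos theta) d(phi); total hemisphere measure = 2*pi
    return 2.0 * np.pi * total / (n * n)

# Check: the integral of cos(theta) over the hemisphere equals pi.
est = hemisphere_integral_stratified(lambda t, p: np.cos(t), n=32, rng=0)
print(est, np.pi)
```

Because each stratum is sampled independently, the strata map naturally onto parallel tasks, which is the point of the partitioning in the abstract.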

Relevance:

100.00%

Publisher:

Abstract:

Channel estimation is a key issue in MIMO systems. In recent years, many papers on subspace (SS)-based blind channel estimation have been published. In this paper, combining the SS method with a space-time coding scheme, we propose a novel blind channel estimation method for MIMO systems. Simulation results demonstrate the effectiveness of the method.
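
A minimal sketch of the subspace idea on simulated data: the noise subspace of the received covariance matrix is orthogonal to the channel matrix, which is what SS-based blind estimation exploits. The dimensions, BPSK symbols, and noise level below are illustrative assumptions, and the space-time coding structure the paper uses to resolve the remaining ambiguity is omitted.

```python
import numpy as np

# Noise-subspace orthogonality, the core of subspace (SS) blind
# channel estimation, demonstrated on a toy MIMO block.

rng = np.random.default_rng(0)
nr, nt, n_sym = 8, 2, 5000      # rx antennas, tx antennas, symbols
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
S = (rng.integers(0, 2, (nt, n_sym)) * 2 - 1).astype(complex)   # BPSK symbols
noise = 0.1 * (rng.standard_normal((nr, n_sym)) + 1j * rng.standard_normal((nr, n_sym)))
X = H @ S + noise               # received block

R = X @ X.conj().T / n_sym      # sample covariance
w, V = np.linalg.eigh(R)        # eigenvalues in ascending order
Un = V[:, : nr - nt]            # noise subspace (smallest eigenvalues)

# The noise subspace is (nearly) orthogonal to the channel matrix,
# so H can be recovered from this orthogonality up to an ambiguity.
print(np.linalg.norm(Un.conj().T @ H))   # close to 0
```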

Relevance:

100.00%

Publisher:

Abstract:

We use a spectral method to solve numerically two nonlocal, nonlinear, dispersive, integrable wave equations: the Benjamin-Ono and the Intermediate Long Wave equations. The proposed numerical method captures the dynamics of the solutions well; we use it to investigate the behaviour of solitary wave solutions of the equations, with special attention to those properties usually connected with integrability for which there is at present no analytic proof. In particular, we study the resolution of arbitrary initial profiles into sequences of solitary waves for both equations, and the clean interaction of Benjamin-Ono solitary waves. We also verify numerically that the behaviour of the solution of the Intermediate Long Wave equation, as the model parameter tends to the infinite-depth limit, is that predicted by the theory.
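
As an illustration of the kind of solver involved, here is a minimal Fourier pseudospectral scheme for the Benjamin-Ono equation in the standard normalization u_t + u u_x + H u_xx = 0, with classical RK4 time stepping. This is a sketch of the general approach, not the authors' exact method, and the periodic domain only approximates the line soliton.

```python
import numpy as np

# Fourier pseudospectral sketch for the Benjamin-Ono equation
#   u_t + u u_x + H u_xx = 0,
# where H is the Hilbert transform (Fourier symbol -i*sgn(k)), so the
# dispersive term has symbol i*|k|*k in Fourier space.

N, L = 512, 100.0
x = L * np.arange(N) / N
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers

def rhs(uhat):
    u = np.fft.ifft(uhat).real
    nonlin = 0.5j * k * np.fft.fft(u * u)       # F[u u_x] = (i k / 2) F[u^2]
    return -nonlin - 1j * np.abs(k) * k * uhat  # advection + dispersion

# Approximate one-soliton initial profile (exact on the whole line):
c = 1.0
uhat = np.fft.fft(4.0 * c / (1.0 + c**2 * (x - L / 2) ** 2))

dt, nsteps = 1e-3, 2000
for _ in range(nsteps):                         # classical RK4 in Fourier space
    k1 = rhs(uhat)
    k2 = rhs(uhat + 0.5 * dt * k1)
    k3 = rhs(uhat + 0.5 * dt * k2)
    k4 = rhs(uhat + dt * k3)
    uhat = uhat + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

u = np.fft.ifft(uhat).real   # profile should have translated by about c*t
```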

Relevance:

100.00%

Publisher:

Abstract:

We analyze a fully discrete spectral method for the numerical solution of the initial- and periodic boundary-value problem for two nonlinear, nonlocal, dispersive wave equations, the Benjamin–Ono and the Intermediate Long Wave equations. The equations are discretized in space by the standard Fourier–Galerkin spectral method and in time by the explicit leap-frog scheme. For the resulting fully discrete, conditionally stable scheme we prove an L2-error bound of spectral accuracy in space and of second-order accuracy in time.
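
Schematically, writing P_N for the L²-projection onto the span of the first N Fourier modes and Δt for the time step, such a fully discrete scheme for the Benjamin-Ono equation u_t + u u_x + H u_xx = 0 takes the leap-frog form below; this shows the generic construction, and the paper's precise formulation may differ in details such as the treatment of the dispersive term.

```latex
\frac{U^{n+1} - U^{n-1}}{2\,\Delta t}
  + P_N\!\left( U^n U^n_x \right) + H\, U^n_{xx} = 0,
\qquad U^n \approx u(\cdot,\, n\,\Delta t),
```

whose truncation error is O(Δt²) in time, consistent with the second-order temporal accuracy quoted above.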

Relevance:

100.00%

Publisher:

Abstract:

In this paper we consider boundary integral methods applied to boundary value problems for the positive definite Helmholtz-type problem −ΔU + α²U = 0 in a bounded or unbounded domain, with the parameter α real and possibly large. Applications arise in the implementation of space-time boundary integral methods for the heat equation, where α is proportional to 1/√Δt and Δt is the time step. The corresponding layer potentials arising from this problem depend nonlinearly on the parameter α and have kernels which become highly peaked as α → ∞, causing standard discretization schemes to fail. We propose a new collocation method with a robust convergence rate as α → ∞. Numerical experiments on a model problem verify the theoretical results.
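
For orientation, in two dimensions the layer potentials in question are built from the fundamental solution of −Δ + α² (this is the general picture; the paper's precise setting may differ):

```latex
\Phi_\alpha(x, y) = \frac{1}{2\pi}\, K_0\!\left( \alpha \lvert x - y \rvert \right),
\qquad
K_0(z) \sim \sqrt{\frac{\pi}{2 z}}\, e^{-z} \quad (z \to \infty),
```

where K_0 is the modified Bessel function; the exponential decay on the length scale 1/α is what makes the kernels highly peaked for large α.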

Relevance:

100.00%

Publisher:

Abstract:

This paper describes the implementation of a 3D variational (3D-Var) data assimilation scheme for a morphodynamic model applied to Morecambe Bay, UK. A simple decoupled hydrodynamic and sediment transport model is combined with a data assimilation scheme to investigate the ability of such methods to improve the accuracy of the predicted bathymetry. The inverse forecast error covariance matrix is modelled using a Laplacian approximation which is calibrated for the length scale parameter required. Calibration is also performed for the Soulsby-van Rijn sediment transport equations. The data used for assimilation purposes comprises waterlines derived from SAR imagery covering the entire period of the model run, and swath bathymetry data collected by a ship-borne survey for one date towards the end of the model run. A LiDAR survey of the entire bay carried out in November 2005 is used for validation purposes. The comparison of the predictive ability of the model alone with the model-forecast-assimilation system demonstrates that using data assimilation significantly improves the forecast skill. An investigation of the assimilation of the swath bathymetry as well as the waterlines demonstrates that the overall improvement is initially large, but decreases over time as the bathymetry evolves away from that observed by the survey. The result of combining the calibration runs into a pseudo-ensemble provides a higher skill score than for a single optimized model run. A brief comparison of the Optimal Interpolation assimilation method with the 3D-Var method shows that the two schemes give similar results.
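
A compact sketch of the 3D-Var analysis step may help fix ideas. For a linear observation operator, the minimizer of the 3D-Var cost function has a closed form, used below on toy data; the matrices are placeholders, not the Morecambe Bay configuration (where, as noted above, the inverse background error covariance is modelled with a calibrated Laplacian approximation).

```python
import numpy as np

# 3D-Var sketch: minimize
#   J(x) = 0.5*(x-xb)' B^{-1} (x-xb) + 0.5*(y-Hx)' R^{-1} (y-Hx).
# For linear H the analysis is
#   xa = xb + B H' (H B H' + R)^{-1} (y - H xb).

n, m = 50, 10                      # state size (bathymetry nodes), obs count
rng = np.random.default_rng(1)
xb = rng.standard_normal(n)        # background (model forecast) state

# Toy observation operator: each observation samples one state node.
H = np.zeros((m, n))
H[np.arange(m), rng.choice(n, m, replace=False)] = 1.0
y = H @ xb + 0.1 * rng.standard_normal(m)   # observations (e.g. waterlines)

# Toy covariances: correlated background errors, independent obs errors.
B = 0.5 * np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 5.0)
R = 0.01 * np.eye(m)

xa = xb + B @ H.T @ np.linalg.solve(H @ B @ H.T + R, y - H @ xb)
print(np.linalg.norm(y - H @ xa), np.linalg.norm(y - H @ xb))  # obs misfit shrinks
```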

Relevance:

100.00%

Publisher:

Abstract:

Currently there are few observations of the urban wind field at heights other than rooftop level. Remote sensing instruments such as Doppler lidars provide wind speed data at many heights, which would be useful in determining wind loadings of tall buildings and predicting local air quality. Studies comparing remote sensing with traditional anemometers carried out in flat, homogeneous terrain often use scan patterns which take several minutes. In an urban context the flow changes quickly in space and time, so faster scans are required to ensure little change in the flow over the scan period. We compare 3993 h of wind speed data collected using a three-beam Doppler lidar wind profiling method with data from a sonic anemometer (190 m). Both instruments are located in central London, UK, a highly built-up area. Based on wind profile measurements every 2 min, the uncertainty in the hourly mean wind speed due to the sampling frequency is 0.05–0.11 m s⁻¹. The lidar tended to overestimate the wind speed by ≈0.5 m s⁻¹ for wind speeds below 20 m s⁻¹. Accuracy may be improved by increasing the scanning frequency of the lidar. This method is considered suitable for use in urban areas.
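
For illustration, the wind vector can be retrieved from three radial (line-of-sight) velocities by inverting the beam geometry, as sketched below. The specific zenith angle and azimuths are assumptions for the example, not necessarily the scan configuration used in the study.

```python
import numpy as np

# Retrieve (u, v, w) from three Doppler-lidar radial velocities at one
# range gate. Assumed geometry: one vertical beam plus two beams tilted
# 15 degrees from zenith at azimuths 0 and 90 degrees.

def unit_vector(zenith, azimuth):
    """(east, north, up) components of the beam pointing direction."""
    return np.array([np.sin(zenith) * np.sin(azimuth),
                     np.sin(zenith) * np.cos(azimuth),
                     np.cos(zenith)])

zen = np.deg2rad(15.0)
A = np.vstack([unit_vector(0.0, 0.0),                 # vertical beam
               unit_vector(zen, np.deg2rad(0.0)),     # north-tilted beam
               unit_vector(zen, np.deg2rad(90.0))])   # east-tilted beam

# Each radial velocity is the projection of the wind onto its beam,
# so the wind vector solves the 3x3 linear system A @ wind = v_radial.
v_radial = np.array([0.1, 2.5, 1.2])                  # measured (m/s)
u, v, w = np.linalg.solve(A, v_radial)
print(u, v, w)
```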

Relevance:

100.00%

Publisher:

Abstract:

The absorption spectra of phytoplankton in the visible domain hold implicit information on the phytoplankton community structure. Here we use this information to retrieve quantitative information on phytoplankton size structure by developing a novel method to compute the exponent of an assumed power law for their particle-size spectrum. This quantity, in combination with total chlorophyll-a concentration, can be used to estimate the fractional concentration of chlorophyll in any arbitrarily-defined size class of phytoplankton. We further define and derive expressions for two distinct measures of cell size of mixed populations, namely, the average spherical diameter of a bio-optically equivalent homogeneous population of cells of equal size, and the average equivalent spherical diameter of a population of cells that follow a power-law particle-size distribution. The method relies on measurements of two quantities of a phytoplankton sample: the concentration of chlorophyll-a, which is an operational index of phytoplankton biomass, and the total absorption coefficient of phytoplankton at the red peak of the visible spectrum (676 nm). A sensitivity analysis confirms that the relative errors in the estimates of the exponent of the particle-size spectrum are reasonably low. The exponents of phytoplankton size spectra, estimated for a large set of in situ data from a variety of oceanic environments (~2400 samples), are within a reasonable range; and the estimated fractions of chlorophyll in pico-, nano- and micro-phytoplankton are generally consistent with those obtained by an independent, indirect method based on diagnostic pigments determined using high-performance liquid chromatography. The estimates of cell size for in situ samples dominated by different phytoplankton types (diatoms, prymnesiophytes, Prochlorococcus, other cyanobacteria and green algae) yield nominal sizes consistent with the taxonomic classification. To estimate the same quantities from satellite-derived ocean-colour data, we combine our method with algorithms for obtaining inherent optical properties from remote sensing. The spatial distributions of the size-spectrum exponent and the chlorophyll fractions of pico-, nano- and micro-phytoplankton estimated from satellite remote sensing are in agreement with the current understanding of the biogeography of phytoplankton functional types in the global oceans. This study contributes to our understanding of the distribution and time evolution of phytoplankton size structure in the global oceans.
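
A toy calculation, under the assumptions stated in the comments, of how a power-law exponent fixes the biomass fractions in arbitrarily-defined size classes. This is not the paper's algorithm, which derives the exponent from absorption at 676 nm together with chlorophyll-a; it only illustrates the final partitioning step.

```python
import numpy as np

# Assume a power-law particle-size distribution n(D) ~ D**(-xi) and
# per-cell biomass scaling as D**3 (an illustrative assumption), so the
# biomass in [D1, D2] is proportional to the integral of D**(3 - xi).

def biomass_fraction(d1, d2, xi, dmin=0.2, dmax=200.0):
    """Fraction of total biomass in cells with diameter in [d1, d2] um."""
    p = 4.0 - xi                               # exponent after integration
    integral = lambda a, b: (np.log(b / a) if abs(p) < 1e-12
                             else (b**p - a**p) / p)
    return integral(d1, d2) / integral(dmin, dmax)

xi = 4.2   # hypothetical size-spectrum exponent
classes = {"pico": (0.2, 2.0), "nano": (2.0, 20.0), "micro": (20.0, 200.0)}
for name, (d1, d2) in classes.items():
    print(name, round(biomass_fraction(d1, d2, xi), 3))
```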

Relevance:

100.00%

Publisher:

Abstract:

We present a Galerkin method with piecewise polynomial continuous elements for fully nonlinear elliptic equations. A key tool is the discretization proposed in Lakkis and Pryer, 2011, allowing us to work directly on the strong form of a linear PDE. An added benefit to making use of this discretization method is that a recovered (finite element) Hessian is a byproduct of the solution process. We build on the linear method and ultimately construct two different methodologies for the solution of second order fully nonlinear PDEs. Benchmark numerical results illustrate the convergence properties of the scheme for some test problems as well as the Monge–Ampère equation and the Pucci equation.
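
For reference, the two nonlinear benchmark problems named above can be written, in one common form, as

```latex
\det D^2 u = f \quad \text{(Monge--Amp\`ere)},
\qquad
\mathcal{M}^{+}_{\lambda,\Lambda}(D^2 u)
  := \Lambda \sum_{e_i(D^2 u) > 0} e_i(D^2 u)
   + \lambda \sum_{e_i(D^2 u) < 0} e_i(D^2 u) = f \quad \text{(Pucci)},
```

where e_i(D²u) denote the eigenvalues of the Hessian; both equations are fully nonlinear in D²u, which is what the recovered finite element Hessian is used to handle.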

Relevance:

100.00%

Publisher:

Abstract:

Autism Spectrum Disorder (ASD) is diagnosed on the basis of behavioral symptoms, but cognitive abilities may also be useful in characterizing individuals with ASD. One hundred seventy-eight high-functioning male adults, half with ASD and half without, completed tasks assessing IQ, a broad range of cognitive skills, and autistic and comorbid symptomatology. The aims of the study were: first, to determine whether significant differences existed between cases and controls on cognitive tasks, and whether cognitive profiles, derived using a multivariate classification method with data from multiple cognitive tasks, could distinguish between the two groups; second, to establish whether cognitive skill level was correlated with degree of autistic symptom severity; third, to establish whether cognitive skill level was correlated with degree of comorbid psychopathology; and fourth, to compare the cognitive characteristics of individuals with Asperger Syndrome (AS) and high-functioning autism (HFA). After controlling for IQ, ASD and control groups scored significantly differently on tasks of social cognition, motor performance, and executive function (P's < 0.05). To investigate cognitive profiles, 12 variables were entered into a support vector machine (SVM), which achieved good classification accuracy (81%) at a level significantly better than chance (P < 0.0001). After correcting for multiple comparisons, there were no significant associations between cognitive performance and severity of either autistic or comorbid symptomatology. There were no significant differences between the AS and HFA groups on the cognitive tasks. Cognitive classification models could be a useful aid to the diagnostic process when used in conjunction with other data sources, including clinical history.
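
The classification step can be sketched with a standard toolchain such as scikit-learn, as below. The feature matrix here is random placeholder data, so the toy accuracy sits near chance, unlike the study's real cognitive scores, which supported 81% accuracy.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# SVM on 12 cognitive variables to separate ASD cases from controls,
# mirroring the study's setup (178 participants, 12 features).

rng = np.random.default_rng(0)
X = rng.standard_normal((178, 12))          # placeholder cognitive scores
y = rng.integers(0, 2, 178)                 # 0 = control, 1 = ASD (fake labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
print(scores.mean())                        # ~0.5 here; 0.81 in the study
```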

Relevance:

100.00%

Publisher:

Abstract:

With the prospect of exascale computing, computational methods requiring only local data become especially attractive. Consequently, the typical domain decomposition of atmospheric models means horizontally-explicit vertically-implicit (HEVI) time-stepping schemes warrant further attention. In this analysis, Runge-Kutta implicit-explicit schemes from the literature are analysed for their stability and accuracy using a von Neumann stability analysis of two linear systems. Attention is paid to the numerical phase to indicate the behaviour of phase and group velocities. Where the analysis is tractable, analytically derived expressions are considered. For more complicated cases, amplification factors have been numerically generated and the associated amplitudes and phase diagnosed. Analysis of a system describing acoustic waves has necessitated attributing the three resultant eigenvalues to the three physical modes of the system. To do so, a series of algorithms has been devised to track the eigenvalues across the frequency space. The result enables analysis of whether the schemes exactly preserve the non-divergent mode; and whether there is evidence of spurious reversal in the direction of group velocities or asymmetry in the damping for the pair of acoustic modes. Frequency ranges that span next-generation high-resolution weather models to coarse-resolution climate models are considered; and a comparison is made of errors accumulated from multiple stability-constrained shorter time-steps from the HEVI scheme with a single integration from a fully implicit scheme over the same time interval. Two schemes, “Trap2(2,3,2)” and “UJ3(1,3,2)”, both already used in atmospheric models, are identified as offering consistently good stability and representation of phase across all the analyses. Furthermore, according to a simple measure of computational cost, “Trap2(2,3,2)” is the least expensive.
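
A minimal von Neumann-style calculation in the spirit of the analysis above: for the scalar test problem du/dt = i(ω_f + ω_s)u, treat the fast frequency implicitly (trapezoidal) and the slow frequency explicitly (forward Euler), and inspect the one-step amplification factor. This simple theta-scheme is a stand-in for the Runge-Kutta IMEX schemes analysed in the paper (e.g. "Trap2(2,3,2)", "UJ3(1,3,2)"), whose amplification factors are matrix-valued and were generated numerically.

```python
import numpy as np

# One-step amplification factor of an IMEX theta-scheme applied to
# du/dt = i*(w_f + w_s)*u: w_f handled implicitly (theta = 0.5 is
# trapezoidal), w_s handled explicitly (forward Euler).

def amplification(wf_dt, ws_dt, theta=0.5):
    """|A| gives damping/growth, arg(A) the numerical phase change."""
    num = 1.0 + 1j * (ws_dt + (1.0 - theta) * wf_dt)
    den = 1.0 - 1j * theta * wf_dt
    return num / den

wf_dt = np.linspace(0.0, 10.0, 101)   # fast (acoustic-like) frequency * dt
ws_dt = 0.1                           # slow (advective-like) frequency * dt
A = amplification(wf_dt, ws_dt)
print(np.abs(A).max())                        # amplitude; > 1 flags instability
print(np.angle(A[10]), wf_dt[10] + ws_dt)     # numerical vs exact phase change
```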

Relevance:

100.00%

Publisher:

Abstract:

The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between measured results as necessary.
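
A minimal sketch of the benchmark-driven approach: measure each work type separately, then combine and interpolate to predict a deployment scenario. The timings below are invented placeholders, not Cray XE6 measurements, and predict_step_time is a hypothetical helper for a 2D decomposition.

```python
import numpy as np

# Benchmark-driven performance model for a halo-exchange stencil code:
# compute time is interpolated against local subdomain size, and
# communication time against halo message size.

# Benchmarked compute time per step vs. local subdomain size (cells).
cells  = np.array([1e4, 1e5, 1e6, 1e7])
t_comp = np.array([2e-4, 1.8e-3, 2.1e-2, 2.4e-1])    # seconds (placeholder)

# Benchmarked halo-exchange time vs. message size (bytes).
msg_size = np.array([1e3, 1e4, 1e5, 1e6])
t_halo   = np.array([5e-6, 1.2e-5, 9e-5, 8e-4])      # seconds (placeholder)

def predict_step_time(nx, ny, px, py, halo_width=1, word=8):
    """Predicted time per step for an nx*ny grid on a px*py task grid."""
    local = (nx / px) * (ny / py)                     # cells per task
    halo_bytes = 2 * halo_width * ((nx / px) + (ny / py)) * word
    return (np.interp(local, cells, t_comp)
            + np.interp(halo_bytes, msg_size, t_halo))

# Compare two decompositions of the same 4096x4096 grid on 64 tasks:
print(predict_step_time(4096, 4096, 8, 8))    # square subdomains
print(predict_step_time(4096, 4096, 64, 1))   # strip subdomains
```

Comparing decompositions this way is exactly the kind of "what if" question the model is meant to answer without trial-and-error runs.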

Relevance:

100.00%

Publisher:

Abstract:

The study of decaying organisms and death assemblages is referred to as forensic taphonomy, or more simply the study of graves. This field is dominated by entomology, anthropology and archaeology. Forensic taphonomy also includes the study of the ecology and chemistry of the burial environment. Studies in forensic taphonomy often require the use of analogues for human cadavers or their component parts. These might include animal cadavers or skeletal muscle tissue. However, ensuring a sufficient supply of cadavers or analogues may require periodic freezing of test material prior to experimental inhumation in the soil. This study was carried out to ascertain the effect of freezing on skeletal muscle tissue prior to inhumation and decomposition in a soil environment under controlled laboratory conditions. Changes in soil chemistry were also measured. In order to test the impact of freezing, skeletal muscle tissue (Sus scrofa) was frozen (−20 °C) or refrigerated (4 °C). Portions of skeletal muscle tissue (∼1.5 g) were interred in microcosms (72 mm diameter × 120 mm height) containing sieved (2 mm) soil (sand) adjusted to 50% water holding capacity. The experiment had three treatments: a control with no skeletal muscle tissue, microcosms containing frozen skeletal muscle tissue, and microcosms containing refrigerated tissue. The microcosms were destructively harvested at 2, 4, 6, 8, 12, 16, 23, 30 and 37 days after interment of the skeletal muscle tissue, with 6 replicates per treatment at each harvest. Microbial activity (carbon dioxide respiration) was monitored throughout the experiment. At harvest the skeletal muscle tissue was removed and the detritosphere soil was sampled for chemical analysis. Freezing was found to have no significant impact on decomposition or soil chemistry compared to unfrozen (refrigerated) samples. However, the interment of skeletal muscle tissue had a significant impact on the microbial activity (carbon dioxide respiration) and chemistry of the surrounding soil, including pH, electrical conductivity, ammonium, nitrate, phosphate and potassium. This is the first laboratory-controlled study to measure changes in inorganic soil chemistry associated with the decomposition of skeletal muscle tissue in combination with microbial activity.

Relevance:

100.00%

Publisher:

Abstract:

AEA Technology has provided an assessment of the probability of α-mode containment failure for the Sizewell B PWR. After a preliminary review of the methodologies available it was decided to use the probabilistic approach described in the paper, based on an extension of the methodology developed by Theofanous et al. (Nucl. Sci. Eng. 97 (1987) 259–325). The input to the assessment comprises 12 probability distributions; the bases for the quantification of these distributions are discussed. The α-mode assessment performed for the Sizewell B PWR has demonstrated the practicality of the event-tree method with input data represented by probability distributions. The assessment itself has drawn attention to a number of topics, which may be plant and sequence dependent, and has indicated the importance of melt relocation scenarios. The α-mode failure probability following an accident that leads to core melt relocation to the lower head for the Sizewell B PWR has been assessed as a few parts in 10 000, on the basis of current information. This assessment has been the first to consider elevated pressures (6 MPa and 15 MPa) besides atmospheric pressure, but the results suggest only a modest sensitivity to system pressure.
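
As a sketch of how distributional inputs propagate through an event tree, consider the toy Monte Carlo below. The three-stage tree and the beta distributions are invented for illustration; the actual assessment quantified 12 input distributions whose structure is not given in this summary.

```python
import numpy as np

# Toy event-tree Monte Carlo in the spirit of the alpha-mode assessment:
# sample uncertain branch probabilities per trial, then combine them
# along the failure sequence.

rng = np.random.default_rng(0)
n = 200_000

# Conditional probabilities of each stage in the sequence (uncertain
# inputs), sampled from placeholder beta distributions.
p_melt_relocation = rng.beta(2, 8, n)    # melt relocates to the lower head
p_steam_explosion = rng.beta(1, 20, n)   # energetic fuel-coolant interaction
p_vessel_failure  = rng.beta(1, 30, n)   # missile defeats the containment

# alpha-mode failure requires every stage in the sequence to occur.
p_alpha = p_melt_relocation * p_steam_explosion * p_vessel_failure
print(p_alpha.mean())                      # central estimate
print(np.quantile(p_alpha, [0.05, 0.95]))  # uncertainty band
```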