27 results for Inf-convolution


Relevance:

10.00%

Publisher:

Abstract:

Image convolution is conventionally approximated by the LTI discrete model. It is well recognized that the higher the sampling rate, the better the approximation. However, sometimes images or 3D data are only available at a lower sampling rate due to physical constraints of the imaging system. In this paper, we model the under-sampled observation as the result of combining convolution and subsampling. Because the wavelet coefficients of piecewise smooth images tend to be sparse and well modelled by tree-like structures, we propose the L0 reweighted-L2 minimization (L0RL2) algorithm to solve this problem. It promotes model-based sparsity by minimizing the reweighted L2 norm, which approximates the L0 norm, and by enforcing a tree model over the weights. We test the algorithm on three examples: a simple ring, the cameraman image and a 3D microscope dataset, and show that good results can be obtained. © 2010 IEEE.
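
A minimal sketch of the reweighted-L2 idea described above, assuming a simple 1D forward model (convolution followed by subsampling) and plain gradient descent on the weighted least-squares subproblem. The operators, step sizes, and the omission of the tree model over the weights are illustrative simplifications, not the authors' L0RL2 implementation:

```python
import numpy as np

def observe(x, h, sub):
    """Forward model assumed in the abstract: convolution followed by subsampling."""
    return np.convolve(x, h, mode="same")[::sub]

def l0rl2_sketch(y, h, sub, n, n_outer=10, n_inner=50, eps=1e-2, lam=1e-2, step=0.5):
    """Illustrative reweighted-L2 deconvolution:
    min_x 0.5*||y - S(h*x)||^2 + 0.5*lam*sum_i w_i x_i^2, with w_i = 1/(x_i^2 + eps)
    so that the quadratic penalty mimics an L0 count. (No tree model over the weights.)"""
    x = np.zeros(n)
    for _ in range(n_outer):
        w = 1.0 / (x**2 + eps)                         # reweighting step
        for _ in range(n_inner):                       # gradient descent on weighted L2 problem
            r = observe(x, h, sub) - y                 # residual on the coarse grid
            g = np.zeros(n)
            g[::sub] = r                               # adjoint of subsampling: zero-fill
            g = np.convolve(g, h[::-1], mode="same")   # adjoint of convolution
            x -= step * (g + lam * w * x)
    return x

# toy usage: recover a sparse spike train observed through blur + 2x subsampling
rng = np.random.default_rng(0)
x_true = np.zeros(128)
x_true[[20, 64, 100]] = [1.0, -0.8, 0.6]
h = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)
h /= h.sum()
y = observe(x_true, h, 2) + 0.01 * rng.standard_normal(64)
x_hat = l0rl2_sketch(y, h, 2, 128)
```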

Relevance:

10.00%

Publisher:

Abstract:

The method of modeling ion implantation in a multilayer target, using moments of a statistical distribution and numerical integration for dose calculation in each target layer, is applied to the modeling of As+ in poly-Si/SiO2/Si. Good agreement with experiment is obtained. Copyright © 1985 by The Institute of Electrical and Electronics Engineers, Inc.
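
As a rough illustration of the moments-plus-numerical-integration idea (not the paper's model, which uses higher-order moments and layer-dependent ranges), the sketch below builds the implanted profile from two moments via a Gaussian and integrates it numerically over each layer; the layer thicknesses, moments and dose are hypothetical values:

```python
import numpy as np

def gaussian_profile(z, dose, Rp, dRp):
    """Depth profile reconstructed from the first two moments: projected range Rp
    and straggle dRp. (A Pearson fit would also use the higher moments.)"""
    return dose / (np.sqrt(2.0 * np.pi) * dRp) * np.exp(-0.5 * ((z - Rp) / dRp) ** 2)

def dose_per_layer(layers_nm, dose, Rp, dRp, n_pts=2001):
    """Numerically integrate the profile over each layer (trapezoidal rule) to get
    the dose retained in that layer. Depth z is measured from the target surface."""
    doses, z0 = {}, 0.0
    for name, thickness in layers_nm:
        z = np.linspace(z0, z0 + thickness, n_pts)
        f = gaussian_profile(z, dose, Rp, dRp)
        doses[name] = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))
        z0 += thickness
    return doses

# hypothetical numbers for As+ into a poly-Si/SiO2/Si stack:
# thicknesses, Rp and dRp in nm; dose in cm^-2 (not values from the paper)
layers = [("poly-Si", 50.0), ("SiO2", 20.0), ("Si", 500.0)]
print(dose_per_layer(layers, dose=1e15, Rp=60.0, dRp=25.0))
```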

Relevance:

10.00%

Publisher:

Abstract:

Statistical dependencies among wavelet coefficients are commonly represented by graphical models such as hidden Markov trees (HMTs). However, in linear inverse problems such as deconvolution, tomography, and compressed sensing, the presence of a sensing or observation matrix produces a linear mixing of the simple Markovian dependency structure. This leads to reconstruction problems that are non-convex optimizations. Past work has dealt with this issue by resorting to greedy or suboptimal iterative reconstruction methods. In this paper, we propose new modeling approaches based on group-sparsity penalties that lead to convex optimizations which can be solved exactly and efficiently. We show that the methods we develop perform significantly better in deconvolution and compressed sensing applications, while being as computationally efficient as standard coefficient-wise approaches such as the lasso. © 2011 IEEE.
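
A minimal sketch of the group-sparsity idea: a proximal-gradient (ISTA) solver for a group-lasso objective, which stays convex and shrinks whole groups of coefficients at once. The matrix, the non-overlapping groups, and the regularisation weight are illustrative assumptions; the paper's penalties are built on wavelet parent-child structure, which this sketch does not reproduce:

```python
import numpy as np

def group_soft_threshold(x, groups, tau):
    """Proximal operator of tau * sum_g ||x_g||_2: shrinks each group's norm and
    zeroes whole groups at once (the group analogue of soft thresholding)."""
    out = np.zeros_like(x)
    for g in groups:
        nrm = np.linalg.norm(x[g])
        if nrm > tau:
            out[g] = (1.0 - tau / nrm) * x[g]
    return out

def group_lasso_ista(A, y, groups, lam, n_iter=500):
    """ISTA iterations for min_x 0.5*||y - A x||^2 + lam * sum_g ||x_g||_2."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = group_soft_threshold(x - grad / L, groups, lam / L)
    return x

# toy compressed-sensing usage: 128 coefficients in groups of 4, two active groups
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 128)) / np.sqrt(60)
groups = [list(range(i, i + 4)) for i in range(0, 128, 4)]
x_true = np.zeros(128)
x_true[8:12], x_true[40:44] = 1.0, -0.7
y = A @ x_true
x_hat = group_lasso_ista(A, y, groups, lam=0.05)
```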

Relevance:

10.00%

Publisher:

Abstract:

Tantalum-oxide thin films are shown to catalyse single- and multi-walled carbon nanotube growth by chemical vapour deposition. A low film thickness, the nature of the support material (best results with SiO2) and an atmospheric process gas pressure are of key importance for successful nanotube nucleation. Strong material interactions, such as silicide formation, inhibit nanotube growth. In situ X-ray photoelectron spectroscopy indicates that no catalyst reduction to Ta metal or Ta carbide occurs under our nanotube growth conditions and that the catalytically active phase is Ta-oxide. Such a reduction-free oxide catalyst can be technologically advantageous. © 2013 The Royal Society of Chemistry.

Relevance:

10.00%

Publisher:

Abstract:

Our nervous system can efficiently recognize objects in spite of changes in contextual variables such as perspective or lighting conditions. Several lines of research have proposed that this ability for invariant recognition is learned by exploiting the fact that object identities typically vary more slowly in time than contextual variables or noise. Here, we study the question of how this "temporal stability" or "slowness" approach can be implemented within the limits of biologically realistic spike-based learning rules. We first show that slow feature analysis, an algorithm that is based on slowness, can be implemented in linear continuous model neurons by means of a modified Hebbian learning rule. This approach provides a link to the trace rule, which is another implementation of slowness learning. Then, we show analytically that for linear Poisson neurons, slowness learning can be implemented by spike-timing-dependent plasticity (STDP) with a specific learning window. By studying the learning dynamics of STDP, we show that for functional interpretations of STDP, it is not the learning window alone that is relevant but rather the convolution of the learning window with the postsynaptic potential. We then derive STDP learning windows that implement slow feature analysis and the "trace rule." The resulting learning windows are compatible with physiological data both in shape and timescale. Moreover, our analysis shows that the learning window can be split into two functionally different components that are sensitive to reversible and irreversible aspects of the input statistics, respectively. The theory indicates that irreversible input statistics are not in favor of stable weight distributions but may generate oscillatory weight dynamics. Our analysis offers a novel interpretation for the functional role of STDP in physiological neurons.
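
The point that the effective drive of the weight dynamics is the learning window convolved with the postsynaptic potential can be illustrated numerically. The sketch below convolves a generic exponential STDP window with a double-exponential PSP; all shapes, amplitudes, and time constants are textbook-style assumptions, not the learning windows derived in the paper:

```python
import numpy as np

dt = 0.1                                   # time resolution (ms)
s = np.arange(-100.0, 100.0, dt)           # pre-post spike time difference (ms)
t = np.arange(0.0, 100.0, dt)              # time axis of the PSP (ms)

# generic STDP window: potentiation for post-after-pre, depression otherwise
A_plus, A_minus, tau_plus, tau_minus = 1.0, 0.5, 17.0, 34.0
W = np.where(s > 0, A_plus * np.exp(-s / tau_plus),
             -A_minus * np.exp(s / tau_minus))

# double-exponential postsynaptic potential, normalised to unit peak
tau_m, tau_s = 20.0, 2.0
eps = np.exp(-t / tau_m) - np.exp(-t / tau_s)
eps /= eps.max()

# effective window: the convolution (W * eps)(s), evaluated on the same s grid
W_eff = np.convolve(W, eps)[:len(s)] * dt

# quick comparison of the two curves' integrals (sanity check of the net LTP/LTD balance)
print("integral of raw window:      ", W.sum() * dt)
print("integral of effective window:", W_eff.sum() * dt)
```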

Relevance:

10.00%

Publisher:

Abstract:

The double heterogeneity characterising pebble-bed high temperature reactors (HTRs) makes Monte Carlo based calculation tools the most suitable for detailed core analyses. These codes can be successfully used to predict the isotopic evolution during irradiation of the fuel in this kind of core. At the moment, many computational systems based on MCNP are available for performing depletion calculations. All these systems use MCNP to supply problem-dependent fluxes and/or microscopic cross sections to the depletion module, which then calculates the isotopic evolution of the fuel by solving Bateman's equations. In this paper, a comparative analysis of three different MCNP-based depletion codes is performed: Monteburns2.0, MCNPX2.6.0 and BGCore. The Monteburns code can be considered the reference code for HTR calculations, since it has already been verified during the HTR-N and HTR-N1 EU projects. All calculations have been performed on a reference model representing an infinite lattice of thorium-plutonium fuelled pebbles. The evolution of k-inf as a function of burnup has been compared, as well as the inventories of the important actinides. The k-inf comparison among the codes shows good agreement over the entire burnup history, with a maximum difference below 1%. The actinide inventory predictions also agree well; however, a significant discrepancy in the Am and Cm concentrations calculated by MCNPX, compared with those of Monteburns and BGCore, has been observed. This is mainly due to the different Am-241 (n,γ) branching ratios utilized by the codes. An important advantage of BGCore is its significantly lower execution time: while providing reasonably accurate results, BGCore runs the depletion problem about two times faster than Monteburns and two to five times faster than MCNPX. © 2009 Elsevier B.V. All rights reserved.
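
A minimal sketch of the depletion step such codes perform once MCNP has supplied one-group fluxes and cross sections: assemble a burnup matrix and solve the Bateman equations over a time step via a matrix exponential. The three-nuclide chain, cross sections, flux, half-life and step length below are made-up illustrative values, not HTR pebble data:

```python
import numpy as np
from scipy.linalg import expm

barn = 1e-24                                   # cm^2
phi = 3e14                                     # assumed one-group flux (n cm^-2 s^-1)
sig_c = np.array([50.0, 100.0, 30.0]) * barn   # assumed capture cross sections (cm^2)
lam = np.array([0.0, np.log(2.0) / (27.0 * 24 * 3600), 0.0])   # decay constants (s^-1)

# simple chain: nuclide 0 --(n,gamma)--> nuclide 1 --(beta decay)--> nuclide 2
A = np.zeros((3, 3))
A[0, 0] = -(sig_c[0] * phi + lam[0])           # loss of nuclide 0 by capture
A[1, 0] = sig_c[0] * phi                       # production of nuclide 1 from nuclide 0
A[1, 1] = -(sig_c[1] * phi + lam[1])           # loss of nuclide 1 (capture + decay)
A[2, 1] = lam[1]                               # production of nuclide 2 by decay of 1
A[2, 2] = -(sig_c[2] * phi + lam[2])           # loss of nuclide 2 by capture

N0 = np.array([1.0e21, 0.0, 0.0])              # initial number densities (cm^-3)
dt = 30.0 * 24 * 3600                          # one 30-day burnup step (s)
N = expm(A * dt) @ N0                          # Bateman solution N(t) = exp(A t) N0
print(N)
```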

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we report the results of the first stage of an HTGR fuel element depletion benchmark obtained with the BGCore and HELIOS depletion codes. The k-inf results are generally in good agreement. However, significant deviations in the concentrations of several nuclides between the MCNP-based (BGCore) and HELIOS codes were observed.

Relevance:

10.00%

Publisher:

Abstract:

A thorium-based fuel cycle for light water reactors will reduce the plutonium generation rate and enhance the proliferation resistance of the spent fuel. However, priming the thorium cycle with 235U is necessary, and the 235U fraction in the uranium must be limited to below 20% to minimize proliferation concerns. Thus, a once-through thorium-uranium dioxide (ThO2-UO2) fuel cycle with no less than 25% uranium becomes necessary for normal pressurized water reactor (PWR) operating cycle lengths. Spatial separation of the uranium and thorium parts of the fuel can improve the achievable burnup of thorium-uranium fuel designs through more effective breeding of 233U from the 232Th. The focus is on microheterogeneous fuel designs for PWRs, where the spatial separation of the uranium and thorium is on the order of a few millimetres to a few centimetres, including a duplex pellet, axially microheterogeneous fuel, and a checkerboard of uranium and thorium pins. A special effort was made to understand the underlying reactor physics mechanisms responsible for enhancing the achievable burnup when the two fuels are spatially separated. The neutron spectral shift was identified as the primary reason for the enhanced burnup capability. Mutual resonance shielding of uranium and thorium is also a factor, but it is small in magnitude. It is shown that the microheterogeneous fuel can achieve burnups up to 15% higher than the reference all-uranium fuel. However, denaturing the 233U in the thorium portion of the fuel with small amounts of uranium significantly impairs this enhancement. The denaturing is also necessary to meet conventional PWR thermal limits by improving the power share of the thorium region at the beginning of fuel irradiation. It is shown to be potentially feasible for some of the microheterogeneous fuels to meet thermal-hydraulic design requirements while still matching or exceeding the burnup of the all-uranium case. However, the large power imbalance between the uranium and thorium regions creates several design challenges, such as higher fission gas release and cladding temperature gradients. Plutonium generation was estimated to be reduced by a factor of 3 in comparison with all-uranium PWR fuel with the same initial 235U content. In contrast to homogeneously mixed U-Th fuel, microheterogeneous fuel has the potential for economic performance comparable to that of all-UO2 fuel, provided that its incremental manufacturing costs are negligibly small.

Relevance:

10.00%

Publisher:

Abstract:

A method is presented to predict the transient response of a structure at the driving point following an impact or a shock loading. The displacement and the contact force are calculated by solving the discrete convolution between the impulse response and the contact force itself, the latter expressed in terms of a nonlinear Hertzian contact stiffness. Application of random point process theory allows the impulse response function to be calculated from knowledge of the modal density and the geometric characteristics of the structure only. The theory is applied to a wide range of structures, and the results are experimentally verified for a rigid object hitting a beam, a plate, a thin cylinder and a thick cylinder, and for the impact between two cylinders. The modal density of the flexural modes of a thick slender cylinder is derived analytically. Good agreement is found between experimental, simulated and published results, showing the reliability of the method for a wide range of situations, including impacts and pyroshock applications. © 2013 Elsevier Ltd. All rights reserved.
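
A sketch of the time-marching scheme described above, assuming a single damped oscillator as a stand-in for the driving-point impulse response (in the paper this response comes from the modal density via random point process theory). The contact stiffness, impactor mass, and structural parameters are hypothetical:

```python
import numpy as np

dt, n_steps = 1e-6, 4000
t = np.arange(n_steps) * dt

m_imp, v0 = 0.1, 1.0             # impactor mass (kg) and impact velocity (m/s), assumed
k_h = 1e9                        # Hertzian contact stiffness (N/m^1.5), assumed
wn, zeta, m_eff = 2 * np.pi * 500.0, 0.02, 1.0   # stand-in structural parameters
wd = wn * np.sqrt(1 - zeta**2)
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m_eff * wd)   # impulse response, m/(N*s)

F = np.zeros(n_steps)            # contact force history (N)
x_s = np.zeros(n_steps)          # structure displacement at the driving point (m)
x_i, v_i = 0.0, v0               # impactor displacement (m) and velocity (m/s)

for n in range(1, n_steps):
    # structure response: discrete convolution of the impulse response with past force
    x_s[n] = np.dot(h[1:n + 1][::-1], F[:n]) * dt
    # impactor motion under the reaction contact force, explicit integration
    v_i -= F[n - 1] / m_imp * dt
    x_i += v_i * dt
    # Hertzian contact law on the approach (force only while in contact)
    delta = x_i - x_s[n]
    F[n] = k_h * max(delta, 0.0) ** 1.5
```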

Relevance:

10.00%

Publisher:

Abstract:

This work concerns the prediction of the response of an uncertain structure to a load of short duration. Assuming an ensemble of structures with small random variations about a nominal form, a mean impulse response can be found using only the modal density of the structure. The mean impulse response turns out to be the same as the response of an infinite structure: it is calculated by taking into account the direct field only, without reflections. Given the short duration of an impulsive loading, the approach is reasonable until the effect of the reverberant field becomes important. The convolution between the mean impulse response and the shock loading is solved in discrete time to calculate the response at the driving point and at remote points. Experimental and numerical examples are presented to validate the theory for simple structures such as beams, plates, and cylinders.

Relevance:

10.00%

Publisher:

Abstract:

The residual stresses in Pb(Zr0.3Ti0.7)O3 thin films were measured by the sin²Ψ method using normal X-ray incidence. The spacings of different planes (hkl) parallel to the film surface were converted to the spacings of a set of inclined (100) planes. The angles between (100) and (hkl) are equivalent to the tilting angles of (100) from the film surface normal. The residual stresses were extracted from the linear slope of the strain difference between the equivalent inclined direction and the normal direction with respect to sin²Ψ. The results were consistent with those derived from the conventional sin²Ψ method.
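
For reference, the slope-extraction step relies on the standard sin²Ψ relation for an equibiaxial in-plane film stress σ; E and ν are the film's elastic constants. This is the textbook relation, not a result specific to the paper:

```latex
% lattice strain along a direction tilted by psi from the surface normal,
% for an equibiaxial in-plane stress sigma_1 = sigma_2 = sigma:
\varepsilon_{\psi} = \frac{d_{\psi} - d_0}{d_0}
  = \frac{1+\nu}{E}\,\sigma\,\sin^{2}\psi - \frac{2\nu}{E}\,\sigma
\qquad\Rightarrow\qquad
\sigma = \frac{E}{1+\nu}\,
  \frac{\partial \varepsilon_{\psi}}{\partial\left(\sin^{2}\psi\right)}
```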

Relevance:

10.00%

Publisher:

Abstract:

We present a fixed-grid finite element technique for fluid-structure interaction problems involving incompressible viscous flows and thin structures. The flow equations are discretised with isoparametric b-spline basis functions defined on a logically Cartesian grid. In addition, the previously proposed subdivision-stabilisation technique is used to ensure inf-sup stability. The beam equations are discretised with b-splines and the shell equations with subdivision basis functions, both leading to a rotation-free formulation. The interface conditions between the fluid and the structure are enforced with the Nitsche technique. The resulting coupled system of equations is solved with a Dirichlet-Robin partitioning scheme, and the fluid equations are solved with a pressure-correction method. Auxiliary techniques employed for improving numerical robustness include the level-set based implicit representation of the structure interface on the fluid grid, a cut-cell integration algorithm based on marching tetrahedra and the conservative data transfer between the fluid and structure discretisations. A number of verification and validation examples, primarily motivated by animal locomotion in air or water, demonstrate the robustness and efficiency of our approach. © 2013 John Wiley & Sons, Ltd.
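
A schematic sketch of the Dirichlet-Robin partitioned iteration within one time step, with the fluid and structure solvers abstracted behind hypothetical callables (`solve_fluid`, `solve_structure`); the actual solvers in the paper are the b-spline fluid discretisation and the rotation-free beam/shell discretisations, which this sketch does not attempt to reproduce:

```python
import numpy as np

def coupled_step(solve_fluid, solve_structure, u_interface, robin_coeff=1.0,
                 n_subiter=20, tol=1e-6):
    """One time step of a generic Dirichlet-Robin partitioned FSI iteration.
    `solve_fluid(v_interface)` returns the interface traction for a prescribed
    interface velocity (Dirichlet side); `solve_structure(traction, robin_coeff,
    v_interface)` returns the updated interface velocity (Robin side). Both are
    hypothetical placeholders standing in for the actual discretised solvers."""
    v = u_interface.copy()
    for _ in range(n_subiter):
        traction = solve_fluid(v)                          # fluid: Dirichlet data from structure
        v_new = solve_structure(traction, robin_coeff, v)  # structure: Robin data from fluid
        if np.linalg.norm(v_new - v) <= tol * (np.linalg.norm(v) + 1e-12):
            return v_new                                   # interface velocities converged
        v = v_new                                          # fixed-point update (could be relaxed)
    return v
```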