941 results for Algebraic lattices
Abstract:
Pure alpha-Al2O3 exhibits a very high degree of thermodynamic stability among all metal oxides and forms an inert oxide scale on a range of structural alloys at high temperatures. We report that amorphous Al2O3 thin films sputter-deposited on crystalline Si instead show a surprisingly active interface. On annealing, crystallization begins with nuclei of a phase closely resembling gamma-alumina forming almost randomly in the amorphous matrix, and with increasing frequency near the substrate/film interface. This nucleation is marked by the signature appearance of sharp (400) and (440) reflections and the formation of a diffuse diffraction halo, with an outer maximal radius of approximately 0.23 nm, enveloping the direct beam. The microstructure then evolves by a cluster-coalescence growth mechanism suggestive of swift nucleation and sluggish diffusional kinetics, while locally the Al ions redistribute slowly from chemisorbed and tetrahedral sites to sites of higher anion coordination. Chemical-state plots constructed from XPS data, together with simple calculations of the diffraction patterns of hypothetically distorted lattices, suggest that the true origin of the diffuse diffraction halo is probably a complex change in the electronic structure spurred by the amorphous-to-gamma transformation rather than pure structural disorder. Concurrently with crystallization within the film, a substantially thick interfacial reaction zone also builds up at the film/substrate interface, with the excess Al acting as a cationic source. (C) 2015 AIP Publishing LLC.
Abstract:
We consider the basic bidirectional relaying problem, in which two users in a wireless network wish to exchange messages through an intermediate relay node. In the compute-and-forward strategy, the relay computes a function of the two messages using the naturally occurring sum of symbols simultaneously transmitted by user nodes in a Gaussian multiple-access channel (MAC), and the computed function value is forwarded to the user nodes in an ensuing broadcast phase. In this paper, we study the problem under an additional security constraint, which requires that each user's message be kept secure from the relay. We consider two types of security constraints: 1) perfect secrecy, in which the MAC channel output seen by the relay is independent of each user's message and 2) strong secrecy, which is a form of asymptotic independence. We propose a coding scheme based on nested lattices, the main feature of which is that given a pair of nested lattices that satisfy certain goodness properties, we can explicitly specify probability distributions for randomization at the encoders to achieve the desired security criteria. In particular, our coding scheme guarantees perfect or strong secrecy even in the absence of channel noise. The noise in the channel only affects reliability of computation at the relay, and for Gaussian noise, we derive achievable rates for reliable and secure computation. We also present an application of our methods to the multihop line network in which a source needs to transmit messages to a destination through a series of intermediate relays.
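The perfect-secrecy claim above rests on the randomization at the encoders: once each user dithers its message with a uniform random key, the sum observed by the relay is statistically independent of either individual message. A minimal sketch of this idea, assuming a toy modular-arithmetic alphabet in place of the paper's nested lattices (the alphabet size `p` and helper names are illustrative, not from the paper):

```python
# Finite-field analogue of randomized encoding for compute-and-forward:
# each user adds an independent uniform dither to its message modulo p,
# so the modular sum seen by the relay is uniform regardless of messages.
from collections import Counter

p = 7  # toy alphabet size (assumption; the paper uses nested lattices)

def relay_observation(m1, m2, u1, u2):
    """What the relay sees: the modular sum of the dithered symbols."""
    return ((m1 + u1) + (m2 + u2)) % p

def observation_distribution(m1, m2):
    """Distribution of the relay's observation over all uniform dithers."""
    counts = Counter(
        relay_observation(m1, m2, u1, u2) for u1 in range(p) for u2 in range(p)
    )
    return {s: c / p**2 for s, c in counts.items()}

# The observation distribution does not depend on the messages at all,
# which is exactly the perfect-secrecy condition (independence).
d_a = observation_distribution(0, 0)
d_b = observation_distribution(3, 5)
assert d_a == d_b
assert all(abs(prob - 1 / p) < 1e-12 for prob in d_a.values())
```

Note that this independence holds with no channel noise at all, mirroring the abstract's remark that secrecy is guaranteed even in the absence of noise.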
Abstract:
We consider a quantum particle, moving on a lattice with a tight-binding Hamiltonian, which is subjected to measurements to detect its arrival at a particular chosen set of sites. The projective measurements are made at regular time intervals tau, and we consider the evolution of the wave function until the time a detection occurs. We study the probabilities of its first detection at some time and, conversely, the probability of it not being detected (i.e., surviving) up to that time. We propose a general perturbative approach for understanding the dynamics which maps the evolution operator, which consists of unitary transformations followed by projections, to one described by a non-Hermitian Hamiltonian. For some examples of a particle moving on one- and two-dimensional lattices with one or more detection sites, we use this approach to find exact expressions for the survival probability and find excellent agreement with direct numerical results. A mean-field model with hopping between all pairs of sites and detection at one site is solved exactly. For the one- and two-dimensional systems, the survival probability is shown to have a power-law decay with time, where the power depends on the initial position of the particle. Finally, we show an interesting and nontrivial connection between the dynamics of the particle in our model and the evolution of a particle under a non-Hermitian Hamiltonian with a large absorbing potential at some sites.
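The "unitary evolution followed by projection" protocol described above is easy to simulate directly. A minimal numerical sketch, assuming an N-site tight-binding ring with a single detector site (the lattice size, interval tau, and initial site are illustrative choices, not the paper's):

```python
# Stroboscopic detection protocol: evolve unitarily for time tau, then
# project onto the "not detected" subspace; the squared norm of the
# surviving wave function is the survival probability.
import numpy as np

N, tau, n_steps = 8, 0.5, 40
# Tight-binding Hamiltonian on a ring: hopping between nearest neighbours.
H = np.zeros((N, N))
for i in range(N):
    H[i, (i + 1) % N] = H[(i + 1) % N, i] = -1.0
# Unitary evolution operator over one measurement interval tau.
eigvals, eigvecs = np.linalg.eigh(H)
U = eigvecs @ np.diag(np.exp(-1j * eigvals * tau)) @ eigvecs.conj().T
# Projector onto "not detected" (all sites except the detector at site 0).
P = np.eye(N)
P[0, 0] = 0.0

psi = np.zeros(N, complex)
psi[N // 2] = 1.0  # start on the site opposite the detector
survival = []
for _ in range(n_steps):
    psi = P @ (U @ psi)          # evolve, then project out detection
    survival.append(np.linalg.norm(psi) ** 2)

# Survival probability is non-increasing and bounded in [0, 1].
assert all(0.0 <= s <= 1.0 + 1e-12 for s in survival)
assert all(s2 <= s1 + 1e-12 for s1, s2 in zip(survival, survival[1:]))
```

The repeated operator `P @ U` is exactly the non-unitary evolution that the paper's perturbative approach maps onto an effective non-Hermitian Hamiltonian.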
Abstract:
We demonstrate here a powerful, scalable technology for the continuous synthesis of high-quality CdSe quantum dots (QDs) in supercritical hexane. Using a low-cost, highly thermally stable Cd precursor, cadmium deoxycholate, the continuous synthesis is performed in 400 μm ID stainless steel capillaries, resulting in CdSe QDs having sharp full widths at half maximum (23 nm) and high photoluminescence quantum yields (45-55%). Transmission electron microscopy images show a narrow particle size distribution (σ ≤ 5%) with well-defined crystal lattices. Using two different synthesis temperatures (250 °C and 310 °C), it was possible to obtain zinc-blende and wurtzite crystal structures of the CdSe QDs, respectively. This synthetic approach achieves substantial production rates, up to 200 mg of QDs per hour depending on the targeted size, and could easily be scaled to grams per hour.
Abstract:
Response analysis of a linear structure with uncertainties in both structural parameters and external excitation is considered here. When such an analysis is carried out using the spectral stochastic finite element method (SSFEM), the computational cost often tends to be prohibitive due to the rapid growth of the number of spectral bases with the number of random variables and the order of expansion. For instance, if the excitation contains a random frequency, or if it is a general random process, then a good approximation of these excitations using polynomial chaos expansion (PCE) involves a large number of terms, which leads to very high cost. To address this issue of high computational cost, a hybrid method is proposed in this work. In this method, first the random eigenvalue problem is solved using the weak formulation of SSFEM, which involves solving a system of deterministic nonlinear algebraic equations to estimate the PCE coefficients of the random eigenvalues and eigenvectors. Then the response is estimated using a Monte Carlo (MC) simulation, where the modal bases are sampled from the PCE of the random eigenvectors estimated in the previous step, followed by a numerical time integration. It is observed through numerical studies that this proposed method successfully reduces the computational burden compared with either a pure SSFEM or a pure MC simulation, and is more accurate than a perturbation method. The computational gain improves as the problem size in terms of degrees of freedom grows. It also improves as the time span of interest reduces.
Abstract:
A ray-tracing-based path length calculation is investigated for polarized light transport in pixel space. Tomographic imaging using polarized light transport is promising for applications in optical projection tomography of small-animal imaging and of turbid media with low scattering. Polarized light transport through a medium can have complex effects due to interactions such as optical rotation of linearly polarized light, birefringence, diattenuation and interior refraction. Here we investigate the effects of refraction of polarized light in a non-scattering medium. This step is used to obtain the initial absorption estimate, which can then serve as a prior in a Monte Carlo (MC) program that simulates the transport of polarized light through a scattering medium, assisting faster convergence to the final estimate. The reflectances for p-polarized (parallel) and s-polarized (perpendicular) light are different, and hence the intensities that reach the detector differ. The algorithm computes the length of the ray in each pixel along the refracted path, and this is used to build the weight matrix. This weight matrix with corrected ray-path lengths, together with the resultant intensity reaching the detector for each ray, is used in the algebraic reconstruction technique (ART). The proposed method is tested with numerical phantoms for various noise levels. The refraction errors due to regions of different refractive index are discussed, and the difference in intensities with polarization is considered. The improvements in reconstruction using the correction so applied are presented. This is achieved by tracking both the path of the ray and its intensity as it traverses the medium.
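Two standard optics ingredients underpin the tracking described above: Snell's law bends the ray at each refractive-index boundary, and the Fresnel equations give different intensity reflectances for s- and p-polarized light. A short sketch of both (textbook formulas, not code from the paper):

```python
# Snell refraction of the ray angle and Fresnel intensity reflectances,
# which differ for s- (perpendicular) and p- (parallel) polarized light.
import math

def snell(theta_i, n1, n2):
    """Refraction angle (radians) for a ray crossing from n1 into n2."""
    return math.asin(n1 * math.sin(theta_i) / n2)

def fresnel_reflectances(theta_i, n1, n2):
    """Intensity reflectances (R_s, R_p) at a dielectric interface."""
    theta_t = snell(theta_i, n1, n2)
    rs = (n1 * math.cos(theta_i) - n2 * math.cos(theta_t)) / \
         (n1 * math.cos(theta_i) + n2 * math.cos(theta_t))
    rp = (n2 * math.cos(theta_i) - n1 * math.cos(theta_t)) / \
         (n2 * math.cos(theta_i) + n1 * math.cos(theta_t))
    return rs**2, rp**2

# Air -> glass at 45 degrees: s and p intensities at the detector differ.
Rs, Rp = fresnel_reflectances(math.radians(45), 1.0, 1.5)
assert Rs > Rp                      # s reflects more strongly than p
# At Brewster's angle the p-polarized reflectance vanishes.
brewster = math.atan(1.5 / 1.0)
_, Rp_b = fresnel_reflectances(brewster, 1.0, 1.5)
assert Rp_b < 1e-12
```

The per-polarization transmitted intensities (1 − R_s and 1 − R_p) are what make the detector readings polarization-dependent, as the abstract notes.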
Abstract:
The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors depend on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique with refraction correction (ART-rc), that corrects for the refractive-index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken from various views. The study also focuses on the effectiveness of using different refractive-index-matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out reconstructions using the conventional algebraic reconstruction technique (ART) and the refraction-corrected ART-rc algorithm. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and the RI. In a fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm.
It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as the RI-matched medium is 71.8%, an increase of 6.4% compared with that achieved using the conventional ART algorithm. Smaller-diameter dosimeters are scanned in dry air using a wide-angle lens that collects refracted light. The images reconstructed using cone-beam geometry are seen to deteriorate in some planes, as those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array-based scanners, as it is not possible to identify refracted rays in sinogram space.
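Both ART and ART-rc share the same core update: a cyclic Kaczmarz projection onto each ray equation, with the refraction correction entering only through how the ray-path weight matrix is built. A minimal sketch of that shared iteration on a toy system (the matrix and relaxation parameter are illustrative, not from the paper):

```python
# Algebraic reconstruction technique (ART) as a cyclic Kaczmarz iteration:
# each row of W holds one ray's per-pixel path lengths, b the measured
# projection; x is updated by projecting onto each ray equation in turn.
import numpy as np

def art(W, b, n_sweeps=200, relax=1.0):
    """Solve W x ~= b by cyclic row projections (Kaczmarz iteration)."""
    x = np.zeros(W.shape[1])
    for _ in range(n_sweeps):
        for i in range(W.shape[0]):
            wi = W[i]
            x += relax * (b[i] - wi @ x) / (wi @ wi) * wi
    return x

# Toy consistent system: 3 "rays" crossing 2 "pixels".
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
b = W @ x_true
x = art(W, b)
assert np.allclose(x, x_true, atol=1e-8)
```

In ART-rc the rows of `W` would hold the path lengths along the refracted (bent) ray instead of the straight rayline, but the update rule itself is unchanged.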
Abstract:
Standard approaches for ellipse fitting are based on the minimization of algebraic or geometric distance between the given data and a template ellipse. When the data are noisy and come from a partial ellipse, the state-of-the-art methods tend to produce biased ellipses. We rely on the sampling structure of the underlying signal and show that the x- and y-coordinate functions of an ellipse are finite-rate-of-innovation (FRI) signals, and that their parameters are estimable from partial data. We consider both uniform and nonuniform sampling scenarios in the presence of noise and show that the data can be modeled as a sum of random amplitude-modulated complex exponentials. A low-pass filter is used to suppress noise and approximate the data as a sum of weighted complex exponentials. The annihilating filter used in FRI approaches is applied to estimate the sampling interval in closed form. We perform experiments on simulated and real data, and assess both objective and subjective performance in comparison with the state-of-the-art ellipse fitting methods. The proposed method produces ellipses with less bias. Furthermore, the mean-squared error is lower by about 2 to 10 dB. We show applications of ellipse fitting to iris images, starting from partial edge contours, and to free-hand ellipses drawn on a touch-screen tablet.
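The annihilating-filter step mentioned above has a particularly transparent form for a single complex exponential: a length-2 filter whose root encodes the frequency annihilates the samples, so the frequency is recovered in closed form from a ratio of shifted samples. A toy sketch of this idea (the single-component case is an illustrative simplification, not the paper's full ellipse model):

```python
# Annihilating-filter idea for one complex exponential x[n] = a * e^{j w n}:
# the filter h = [1, -e^{jw}] satisfies x[n] - e^{jw} x[n-1] = 0, so the
# root e^{jw} is the least-squares ratio of the shifted sample sequences.
import numpy as np

w_true = 0.7                       # angular frequency to recover
n = np.arange(32)
x = (1.3 * np.exp(1j * 0.4)) * np.exp(1j * w_true * n)  # a * e^{j w n}

root = np.vdot(x[:-1], x[1:]) / np.vdot(x[:-1], x[:-1])
w_est = np.angle(root)
assert abs(w_est - w_true) < 1e-10
```

For the ellipse problem the same machinery applies to a sum of exponentials, with a longer annihilating filter whose roots carry the unknown parameters; noise is handled by the low-pass prefiltering described in the abstract.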
Abstract:
This paper presents a method for the calculation of two-dimensional elastic fields in a solid containing any number of inhomogeneities under arbitrary far-field loadings. The method, called the 'pseudo-dislocations method', is illustrated by the solution of interacting elliptic inhomogeneities. It reduces the interacting-inhomogeneities problem to a set of linear algebraic equations. Numerical results are presented for a variety of elliptic inhomogeneity arrangements, including the special cases of elliptic holes, cracks and circular inhomogeneities. All these complicated problems can be solved with high accuracy and efficiency.
Abstract:
The performance of Reynolds-averaged Navier-Stokes models in the stagnation and wake regions is explored for turbulent flows with relatively large Lagrangian length scales (generally larger than the scale of geometrical features) approaching small cylinders (both square and circular). The effective cylinder (or wire) diameter based Reynolds number is ReW ≤ 2.5 × 10³. The following turbulence models are considered: a mixing-length model; standard Spalart-Allmaras (SA) and streamline curvature (and rotation) corrected SA (SARC); Secundov's νt-92; Secundov et al.'s two-equation νt-L; Wolfshtein's k-l model; the explicit algebraic stress model (EASM) of Abid et al.; the cubic model of Craft et al.; various linear k-ε models, including those with wall-distance-based damping functions; Menter SST; k-ω; and Spalding's LVEL model. The use of differential-equation distance functions (Poisson and Hamilton-Jacobi equation based) for palliative turbulence-modeling purposes is explored. The performance of SA with these distance functions is also considered in the sharp convex-geometry region of an airfoil trailing edge. For the cylinder, with ReW ≈ 2.5 × 10³, the mixing-length and k-l models give strong turbulence production in the wake region. However, in agreement with eddy-viscosity estimates, the LVEL and Secundov νt-92 models show relatively little cylinder influence on turbulence. On the other hand, the two-equation models (as does the one-equation SA) suggest the cylinder gives a strong turbulence deficit in the wake region. Also, for SA, an order-of-magnitude decrease in cylinder diameter, from ReW = 2500 to 250, surprisingly strengthens the cylinder's disruptive influence. Importantly, results for ReW ≪ 250 are virtually identical to those for ReW = 250, i.e. no matter how small the cylinder/wire, its influence does not vanish, as it should.
Similar tests for the Launder-Sharma k-ε, Menter SST and k-ω models show, in accordance with physical reality, the cylinder's influence diminishing, albeit slowly, with size. Results suggest that distance functions palliate the SA model's erroneous trait and improve its predictive performance in wire wake regions. Results also suggest that, along the stagnation line, such functions improve the SA, mixing-length, k-l and LVEL results. For the airfoil, with SA, the larger Poisson distance function increases the wake-region turbulence levels by just under 5%. © 2007 Elsevier Inc. All rights reserved.
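The Poisson-equation distance functions referred to above replace the exact wall distance with one recovered from a boundary-value problem. A minimal 1-D sketch of the standard construction (the channel geometry and grid are illustrative assumptions, not the paper's configuration): solve φ'' = −1 with φ = 0 at both walls, then recover the distance from d = √(|∇φ|² + 2φ) − |∇φ|.

```python
# Poisson wall-distance function in 1-D: for a channel of width L, the
# solution of phi'' = -1 with phi(0) = phi(L) = 0 is phi = x(L - x)/2,
# and d = sqrt(|grad phi|^2 + 2 phi) - |grad phi| recovers the distance.
import numpy as np

L, N = 1.0, 201
x = np.linspace(0.0, L, N)
phi = x * (L - x) / 2.0            # exact solution of the Poisson problem
dphi = np.gradient(phi, x)         # numerical |grad phi|
d = np.sqrt(dphi**2 + 2.0 * phi) - np.abs(dphi)

# In 1-D this reproduces the true wall distance min(x, L - x).
assert np.allclose(d, np.minimum(x, L - x), atol=1e-6)
```

Unlike a nearest-wall search, this distance field is smooth and insensitive to tiny geometric features such as thin wires, which is what makes it a palliative for the SA model's behaviour described in the abstract.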
Abstract:
We present results on the stability of compressible inviscid swirling flows in an annular duct. Such flows are present in aeroengines, for example in the by-pass duct, and there are also similar flows in many aeroacoustic or aeronautical applications. The linearised Euler equations have a ('critical layer') singularity associated with pure convection of the unsteady disturbance by the mean flow, and we focus our attention on this region of the spectrum. By considering the critical layer singularity, we identify the continuous spectrum of the problem and describe how it contributes to the unsteady field. We find a very generic family of instability modes near to the continuous spectrum, whose eigenvalue wavenumbers form an infinite set and accumulate to a point in the complex plane. We study this accumulation process asymptotically, and find conditions on the flow to support such instabilities. It is also found that the continuous spectrum can cause a new type of instability, leading to algebraic growth with an exponent determined by the mean flow, given in the analysis. The exponent of algebraic growth can be arbitrarily large. Numerical demonstrations of the continuous spectrum instability, and also the modal instabilities are presented.
Abstract:
An algebraic unified second-order moment (AUSM) turbulence-chemistry model of char combustion is introduced in this paper to calculate the effect of particle temperature fluctuation on char combustion. The AUSM model is used to simulate gas-particle flows in coal combustion in a pulverized-coal combustor, together with a full two-fluid model for reacting gas-particle flows and coal combustion, including such sub-models as the k-epsilon-k(p) two-phase turbulence model, the EBU-Arrhenius volatile and CO combustion model, and the six-flux radiation model. A new method for calculating the particle mass flow rate is also used in this model to correct the particle outflow rate and the mass flow rate for inside sections; it obeys the principle of mass conservation for the particle phase and also effectively speeds up the iterative convergence of the computation procedure. The simulation results indicate that the AUSM char combustion model is preferable to the old char combustion model, since the latter totally eliminates the influence of particle temperature fluctuation on the char combustion rate.
Abstract:
Using high-resolution electron beam lithography, we have fabricated circular magnetic particles (nanomagnets) of diameter 60 nm and thickness 7 nm from the common magnetic alloy supermalloy. The nanomagnets were arranged on rectangular lattices of different periods. A high-sensitivity magneto-optical method was used to measure the magnetic properties of each lattice. We show experimentally how the magnetic properties of a lattice of nanomagnets can be profoundly changed by the magnetostatic interactions between nanomagnets within the lattice. We find that simply reducing the lattice spacing in one direction from 180 nm down to 80 nm (leaving a gap of only 20 nm between edges) causes the lattice to change from a magnetically disordered state to an ordered state. The change in state is accompanied by a peak in the magnetic susceptibility. We show that this is analogous to the paramagnetic-ferromagnetic phase transition which occurs in conventional magnetic materials, although low-dimensionality and kinetic effects must also be considered.
Abstract:
We report weaknesses in two algebraic constructions of low-density parity-check codes based on expander graphs. The Margulis construction gives a code with near-codewords, which cause problems for the sum-product decoder; the Ramanujan-Margulis construction gives a code with low-weight codewords, which produce an error floor. © 2004 Elsevier B.V.