965 results for Space Geometry. Manipulatives. Distance Calculation


Relevance:

30.00%

Publisher:

Abstract:

Self-compression of femtosecond pulses in noble gases with an input power close to the self-focusing threshold has been investigated experimentally and theoretically. It is demonstrated that either multiphoton ionization (MPI) or space-time focusing and self-steepening effects can induce pulse shortening, but they predominate at different beam intensities during the propagation. The latter effects play a key role in the final pulse self-compression. By choosing an appropriate focusing parameter, the action distance of the space-time focusing and self-steepening effects can be lengthened, which promotes a shock pulse structure with a duration as short as two optical cycles. It is also found that, for the cases considered, in which the input pulse power is close to the self-focusing threshold, both group velocity dispersion (GVD) and multiphoton absorption (MPA) have a negligible influence on pulse characteristics during propagation.
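For context, the self-focusing threshold referred to above is commonly estimated for a Gaussian beam (a textbook value, not a figure quoted in this work) by the critical power

$$ P_{\mathrm{cr}} \simeq \frac{3.77\,\lambda^{2}}{8\pi\, n_{0}\, n_{2}}, $$

where λ is the vacuum wavelength and n_0 and n_2 are the linear and nonlinear refractive indices of the gas.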

Relevance:

30.00%

Publisher:

Abstract:

The concept of a "projection function" in a finite-dimensional real or complex normed linear space H (the function P_M which carries every element into the closest element of a given subspace M) is set forth and examined.
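Written out, the definition in parentheses is

$$ P_{M}(x) \;=\; \operatorname*{arg\,min}_{m \in M} \|x - m\|, \qquad x \in H, $$

with the nearest point assumed to be unique, as the abstract's phrasing implies.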

If dim M = dim H - 1, then P_M is linear. If P_N is linear for all k-dimensional subspaces N, where 1 ≤ k < dim M, then P_M is linear.

The projective bound Q, defined to be the supremum of the operator norm of P_M for all subspaces, is in the range 1 ≤ Q < 2, and these limits are the best possible. For norms with Q = 1, P_M is always linear, and a characterization of those norms is given.

If H also has an inner product (defined independently of the norm), so that a dual norm can be defined, then when P_M is linear its adjoint P_M^H is the projection on (kernel P_M) by the dual norm. The projective bounds of a norm and its dual are equal.

The notion of a pseudo-inverse F^+ of a linear transformation F is extended to non-Euclidean norms. The distance from F to the set of linear transformations G of lower rank (in the sense of the operator norm ‖F - G‖) is c/‖F^+‖, where c = 1 if the range of F fills its space, and 1 ≤ c < Q otherwise. The norms on both domain and range spaces have Q = 1 if and only if (F^+)^+ = F for every F. This condition is also sufficient to prove that we have (F^+)^H = (F^H)^+, where the latter pseudo-inverse is taken using dual norms.

In all results, the real and complex cases are handled in a completely parallel fashion.

Relevance:

30.00%

Publisher:

Abstract:

The rate of electron transport between distant sites was studied. The rate depends crucially on the chemical details of the donor, acceptor, and surrounding medium. These reactions involve electron tunneling through the intervening medium and are, therefore, profoundly influenced by the geometry and energetics of the intervening molecules. The dependence of rate on distance was considered for several rigid donor-acceptor "linkers" of experimental importance. Interpretation of existing experiments and predictions for new experiments were made.
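As a point of reference (the standard nonadiabatic, exponentially decaying tunneling form, not a specific result quoted here), the rate is often written as

$$ k_{ET} \;\propto\; \big|H_{DA}(R)\big|^{2}, \qquad \big|H_{DA}(R)\big|^{2} \sim e^{-\beta R}, $$

where H_DA is the donor-acceptor electronic coupling, R is the transfer distance, and the decay constant β encodes the geometry and energetics of the intervening medium, which is precisely what varies between the rigid linkers considered above.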

The electronic and nuclear motions in molecules are correlated. A Born-Oppenheimer separation is usually employed in quantum chemistry to separate these motions. Long-distance electron transfer rate calculations require the total donor wave function when the electron is very far from its binding nuclei. The Born-Oppenheimer wave functions at large electronic distance are shown to be qualitatively wrong. A model which correctly treats the coupling was proposed, and the distance and energy dependence of the electron transfer rate was determined for such a model.

Relevance:

30.00%

Publisher:

Abstract:

Melting temperature calculation has important applications in the theoretical study of phase diagrams and in computational materials screening. In this thesis, we present two new methods, an improved Widom particle insertion method and a small-cell coexistence method, which we developed in order to determine melting temperatures both accurately and quickly.

We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by sampling cavities efficiently while calculating the integrals that provide the chemical potentials of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first principles, without the reference system required by the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results agree closely with experiment.
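For reference, the quantity at stake is the excess chemical potential given by Widom's insertion formula (the scheme above changes how this average is sampled, via cavities, rather than the estimator itself):

$$ \mu_{\mathrm{ex}} \;=\; -k_{B} T \,\ln \left\langle e^{-\Delta U / k_{B} T} \right\rangle_{N}, $$

where ΔU is the potential-energy change on inserting a test particle into an N-particle configuration and the average runs over configurations and insertion positions.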

We propose the small-cell coexistence method based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computational cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of a large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of tantalum, high-pressure sodium, and the ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in its practical applications. The method serves as a promising approach for large-scale automated material screening in which the melting temperature is a design criterion.

We present in detail two examples of refractory materials. First, we demonstrate how key material properties that provide guidance in the design of refractory materials can be accurately determined via ab initio thermodynamic calculations in conjunction with experimental techniques based on synchrotron X-ray diffraction and thermal analysis under laser-heated aerodynamic levitation. The properties considered include melting point, heat of fusion, heat capacity, thermal expansion coefficients, thermal stability, and sublattice disordering, as illustrated in a motivating example of lanthanum zirconate (La2Zr2O7). The close agreement with experiment in the known but structurally complex compound La2Zr2O7 provides a good indication that the computation methods described can be used within a computational screening framework to identify novel refractory materials. Second, we report an extensive investigation into the melting temperatures of the Hf-C and Hf-Ta-C systems using ab initio calculations. With melting points above 4000 K, hafnium carbide (HfC) and tantalum carbide (TaC) are among the most refractory binary compounds known to date. Their mixture, with a general formula Ta_xHf_{1-x}C_y, is known to have a melting point of 4215 K at the composition Ta4HfC5, which has long been considered the highest melting temperature for any solid. Very few measurements of melting point in tantalum and hafnium carbides have been documented, because of the obvious experimental difficulties at extreme temperatures. This investigation allowed us to identify three major chemical factors that contribute to the high melting temperatures. Based on these three factors, we propose and explore a new class of materials, which, according to our ab initio calculations, may possess even higher melting temperatures than Ta-Hf-C. This example also demonstrates the feasibility of materials screening and discovery via ab initio calculations for the optimization of "higher-level" properties whose determination requires extensive sampling of atomic configuration space.

Relevance:

30.00%

Publisher:

Abstract:

The propagation of cosmic rays through interstellar space has been investigated with the view of determining what particles can traverse astronomical distances without serious loss of energy. The principal mechanism of energy loss for high-energy particles is interaction with radiation. It is found that high-energy (10^13-10^18 eV) electrons drop to one-tenth of their energy within 10^8 light years in the radiation density of the galaxy, and that protons are not significantly affected over this distance. The origin of the cosmic rays is not known, so various hypotheses as to their origin are examined. If the source is near a star, it is found that the interaction of electrons and photons with the stellar radiation field and the interaction of electrons with the stellar magnetic field limit the amount of energy which these particles can carry away from the star. However, the interaction is not strong enough to affect the energy of protons or light nuclei appreciably. The chief uncertainty in the results is due to the possible existence of a general galactic magnetic field. The main conclusion reached is that if there is a general galactic magnetic field, then the primary spectrum has very few photons and only low-energy (< 10^13 eV) electrons, and the higher-energy particles are primarily protons regardless of the source mechanism; if there is no general galactic magnetic field, then the source of cosmic rays accelerates mainly protons and the present rate of production is much less than that in the past.

Relevance:

30.00%

Publisher:

Abstract:

In this paper the saturated diffraction efficiency has been optimized by considering the effect of the absorption of the recording light on a crossed-beam grating with a 90° recording geometry in Fe:LiNbO3 crystals. The dependence of the saturated diffraction efficiency on the doping level for a known oxidation-reduction state, as well as its dependence on the oxidation-reduction state for known doping levels, has been investigated. Two competing effects on the saturated diffraction efficiency are discussed, and the intensity profile of the diffracted beam at the output boundary has also been investigated. The results show that the maximal saturated diffraction efficiency is obtained in crystals with moderate doping levels and a modest oxidation state. An experimental verification is performed and the results are consistent with those of the theoretical calculation.
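For a rough point of reference (Kogelnik's coupled-wave result for a lossless thick transmission grating; an idealization that ignores the absorption and coupling effects analyzed above), the diffraction efficiency is

$$ \eta = \sin^{2}\!\left(\frac{\pi\,\Delta n\, d}{\lambda \cos\theta}\right), $$

where Δn is the refractive-index modulation, d the interaction length, λ the wavelength, and θ the Bragg angle inside the crystal.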

Relevance:

30.00%

Publisher:

Abstract:

In this thesis an extensive study is made of the set P of all paranormal operators in B(H), the set of all bounded endomorphisms on the complex Hilbert space H. T ∈ B(H) is paranormal if, for each z contained in the resolvent set of T, d(z, σ(T)) · ‖(T - zI)^{-1}‖ = 1, where d(z, σ(T)) is the distance from z to σ(T), the spectrum of T. P contains the set N of normal operators and the set of hyponormal operators. However, P is contained in L, the set of all T ∈ B(H) such that the convex hull of the spectrum of T is equal to the closure of the numerical range of T. Thus, N ⊂ P ⊂ L.

If the uniform operator (norm) topology is placed on B(H), then the relative topological properties of N, P, and L can be discussed. In Section IV, it is shown that: 1) N, P, and L are arc-wise connected and closed, 2) N, P, and L are nowhere dense subsets of B(H) when dim H ≥ 2, 3) N = P when dim H < ∞, 4) N is a nowhere dense subset of P when dim H = ∞, 5) P is not a nowhere dense subset of L when dim H < ∞, and 6) it is not known whether P is a nowhere dense subset of L when dim H = ∞.

The spectral properties of paranormal operators are of current interest in the literature. Putnam [22, 23] has shown that certain points on the boundary of the spectrum of a paranormal operator are either normal eigenvalues or normal approximate eigenvalues. Stampfli [26] has shown that a hyponormal operator with countable spectrum is normal. However, in Theorem 3.3, it is shown that a paranormal operator T with countable spectrum can be written as the direct sum, N ⊕ A, of a normal operator N with σ(N) = σ(T) and an operator A with σ(A) a subset of the derived set of σ(T). It is then shown that A need not be normal. If we restrict the countable spectrum of T ∈ P to lie on a C^2-smooth rectifiable Jordan curve G_0, then T must be normal [see Theorem 3.5 and its Corollary]. If T is a scalar paranormal operator with countable spectrum, then in order to conclude that T is normal the condition σ(T) ⊆ G_0 can be relaxed [see Theorem 3.6]. In Theorem 3.7 it is then shown that the above result is not true when T is not assumed to be scalar. It was then conjectured that if T ∈ P with σ(T) ⊆ G_0, then T is normal. The proof of Theorem 3.5 relies heavily on the assumption that T has countable spectrum and cannot be generalized. However, the corollary to Theorem 3.9 states that if T ∈ P with σ(T) ⊆ G_0, then T has a non-trivial lattice of invariant subspaces. After the completion of most of the work on this thesis, Stampfli [30, 31] published a proof that a paranormal operator T with σ(T) ⊆ G_0 is normal. His proof uses some rather deep results concerning numerical ranges, whereas the proof of Theorem 3.5 uses relatively elementary methods.

Relevance:

30.00%

Publisher:

Abstract:

Phase equilibrium calculation is a problem of great importance in engineering processes, for example in separation by distillation, in extraction processes, and in the simulation of tertiary oil recovery, among others. To solve it, however, it is advisable to first study the thermodynamic stability of the system, which consists of determining whether a given mixture exists in one or more phases. This problem can be posed as an optimization problem, known as the minimization of the tangent plane distance function of the molar Gibbs free energy, in which thermodynamic models of a nonconvex and nonlinear nature are used to describe it. This fact has motivated great interest in robust and efficient optimization techniques for solving problems related to phase equilibrium thermodynamics. As has been emphasized in the literature, a complete prediction of phase equilibrium requires not only the determination of the global minimizer of the stability-test objective function but also the determination of all of its stationary points. Thus, the development of methodologies for this challenging task has become a new research area of global optimization applied to equilibrium thermodynamics, with common interests in chemical engineering and petroleum engineering. The focus of the present work is a new methodology for solving the stability-test problem. To this end, the so-called generating set search method is used to perform local searches over a grid of points previously generated by global searches carried out with a population-based metaheuristic, in this case the particle swarm method. To obtain more than one stationary point, polarized merit functions are minimized, whose poles are the points previously found. The proposed methodology was tested in the analysis of fourteen polar mixtures previously considered in the literature. The results showed that the proposed method is robust and efficient, finding not only the global minimizer but also all stationary points previously reported in the literature, and detecting, in two of the ternary mixtures studied, stationary points not obtained by the so-called interval analysis method, a reliable and widely used technique in the literature. Analyzing the stability test simply by using the particle swarm method together with the polarization technique mentioned above to obtain more than one stationary point (without the local search performed by the generating set method over a given grid of points) constitutes another methodology for solving the problem of interest, and is a secondary novelty of this work. This simplified methodology also exhibited great robustness, being able to find all the stationary points sought. However, compared with the more general approach proposed here, this simplification can, in some cases where the merit function has a more complex geometry, require a relatively large amount of machine time and is therefore less efficient.
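The global-search ingredient named above, the particle swarm method, can be sketched as follows. This is a minimal illustrative gbest PSO over a box-constrained composition space, assuming only a generic objective `merit` (for the stability test this would be the tangent plane distance function or a polarized version of it); the generating set search stage and the exact polarization used in the thesis are not reproduced here.

```python
import numpy as np

def pso_minimize(merit, lower, upper, n_particles=30, n_iters=200,
                 w=0.72, c1=1.49, c2=1.49, seed=0):
    """Minimal gbest particle swarm optimization over the box [lower, upper].

    `merit` maps a 1-D numpy array to a scalar; in a stability-test setting it
    would be the (possibly polarized) tangent-plane-distance merit function.
    """
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size

    x = rng.uniform(lower, upper, size=(n_particles, dim))  # positions
    v = np.zeros_like(x)                                     # velocities
    pbest_x = x.copy()                                       # personal bests
    pbest_f = np.array([merit(p) for p in x])
    g = np.argmin(pbest_f)                                   # global best index
    gbest_x, gbest_f = pbest_x[g].copy(), pbest_f[g]

    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest_x - x) + c2 * r2 * (gbest_x - x)
        x = np.clip(x + v, lower, upper)                     # stay inside the box
        f = np.array([merit(p) for p in x])
        improved = f < pbest_f
        pbest_x[improved], pbest_f[improved] = x[improved], f[improved]
        g = np.argmin(pbest_f)
        if pbest_f[g] < gbest_f:
            gbest_x, gbest_f = pbest_x[g].copy(), pbest_f[g]
    return gbest_x, gbest_f

# toy usage with a placeholder merit function over [0, 1]^2
f = lambda z: (1 - z[0]) ** 2 + 100 * (z[1] - z[0] ** 2) ** 2
x_best, f_best = pso_minimize(f, [0.0, 0.0], [1.0, 1.0])
```

In the spirit of the methodology above, one would rerun such a global search on merit functions re-polarized at each stationary point already found, so that subsequent runs are pushed toward stationary points not yet located.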

Relevance:

30.00%

Publisher:

Abstract:

Survey standardization procedures can reduce the variability in trawl catch efficiency, thus producing more precise estimates of biomass. One such procedure, towing with equal amounts of trawl warp on both sides of the net, was experimentally investigated for its importance in determining optimal trawl geometry and for evaluating the effectiveness of the recent National Oceanic and Atmospheric Administration (NOAA) national protocol on accurate measurement of trawl warps. This recent standard for measuring warp length requires that the difference between warp lengths be no more than 4% of the distance between the otter doors measured along the bridles and footrope. Trawl performance data from repetitive towing with warp differentials of 0, 3, 5, 7, 9, 11, and 20 m were analyzed for their effect on three determinants of flatfish catch efficiency: footrope distance off-bottom, bridle length in contact with the bottom, and area swept by the net. Our results indicated that the distortion of the trawl caused by asymmetry in trawl warp length could have a negative influence on flatfish catch efficiency. At a difference of 7 m in warp length, the NOAA 4% threshold value for the 83-112 Eastern survey trawl used in our study, we found no effect on the acoustic-based measures of door spread, wing spread, and headrope height off-bottom. However, the sensitivity of the trawl to 7 m of warp offset could be seen as footrope distances off-bottom increased slightly (particularly in the center region of the net where flatfish escapement is highest), and as the width of the bridle path responsible for flatfish herding, together with the effective net width, was reduced. For this survey trawl, the NOAA threshold value of 4% should be considered a maximum. A more conservative value (less than 4%) would likely reduce the potential bias in estimates of relative abundance caused by large differences in warp length approaching 7 m.
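As a worked check on the figures quoted above, a 7 m warp differential sits exactly at the 4% threshold when the distance between the otter doors measured along the bridles and footrope is about 175 m (our inference; that distance is not stated explicitly here):

$$ 0.04 \times 175\ \mathrm{m} = 7\ \mathrm{m}. $$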

Relevance:

30.00%

Publisher:

Abstract:

The Silent Aircraft airframe has a flying-wing design with a large wing planform and a propulsion system embedded in the rear of the airframe, with an intake on the upper surface of the wing. In the present paper, boundary element calculations are presented to evaluate acoustic shielding at low frequencies. Besides the three-dimensional geometry of the Silent Aircraft airframe, a few two-dimensional problems are considered that provide some physical insight into the shielding calculations. Mean-flow refraction effects due to forward flight motion are accounted for by a simple time transformation that decouples the mean-flow and acoustic-field calculations. It is shown that a significant amount of shielding can be obtained in the shadow region where there is no direct line of sight between the source and the observer. The boundary element solutions are restricted to low frequencies; we have used a simple physically based model to extend the solution to higher frequencies. Based on this model, using a monopole acoustic source, we predict at least an 18 dBA reduction in the overall sound pressure level of forward-propagating fan noise due to shielding.

Relevance:

30.00%

Publisher:

Abstract:

The frequency range of interest for ground vibration from underground urban railways is approximately 20 to 100 Hz. For typical soils, the wavelengths of ground vibration in this frequency range are of the order of the spacing of train axles, the tunnel diameter and the distance from the tunnel to nearby building foundations. For accurate modelling, the interactions between these entities therefore have to be taken into account. This paper describes an analytical three-dimensional model for the dynamics of a deep underground railway tunnel of circular cross-section. The tunnel is conceptualised as an infinitely long, thin cylindrical shell surrounded by soil of infinite radial extent. The soil is modelled by means of the wave equations for an elastic continuum. The coupled problem is solved in the frequency domain by Fourier decomposition into ring modes circumferentially and a Fourier transform into the wavenumber domain longitudinally. Numerical results for the tunnel and soil responses due to a normal point load applied to the tunnel invert are presented. The tunnel model is suitable for use in combination with track models to calculate the ground vibration due to excitation by running trains and to evaluate different track configurations. © 2006 Elsevier Ltd. All rights reserved.
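Schematically, and in assumed notation rather than the paper's, the decomposition described above expresses the response of the shell-soil system as

$$ u(x,\theta,\omega) \;=\; \sum_{n=0}^{\infty} \int_{-\infty}^{\infty} \tilde{u}_{n}(\xi,\omega)\, e^{\mathrm{i} n \theta}\, e^{\mathrm{i} \xi x}\, \mathrm{d}\xi, $$

so that the coupled shell and soil equations reduce to independent problems for each circumferential mode number n and longitudinal wavenumber ξ.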

Relevance:

30.00%

Publisher:

Abstract:

Accurate and efficient computation of the nearest wall distance d (or level set) is important for many areas of computational science/engineering. Differential equation-based distance/level set algorithms, such as those based on the hyperbolic-natured Eikonal equation, have demonstrated valuable computational efficiency. Here, in this context, the Eikonal equation is solved efficiently, as an 'auxiliary' equation to the main flow equations, with two different finite volume approaches (cell vertex and cell-centered). Application of the distance solution is studied for various geometries. Moreover, a procedure using the differential field to obtain the medial axis transform (MAT) for different geometries is presented. The latter provides a skeleton representation of geometric models that has many useful analysis properties. As an alternative to pure geometric methods (e.g. the Voronoi approach), the current d-MAT procedure bypasses many difficulties that are usually encountered by such methods, especially in three-dimensional space. It is also shown that the d-MAT approach provides the potential to sculpt/control the MAT form for specialized solution purposes. Copyright © 2010 by the American Institute of Aeronautics and Astronautics, Inc.
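To make the 'auxiliary' distance equation concrete, the following is a minimal finite-difference fast-sweeping sketch of the Eikonal wall-distance problem, |∇d| = 1 with d = 0 on walls. It is illustrative only and is not the finite volume (cell-vertex or cell-centered) discretisation used in the paper.

```python
import numpy as np

def wall_distance(wall_mask, h=1.0, n_sweeps=8):
    """Nearest-wall distance d on a uniform grid, |grad d| = 1 with d = 0 on
    wall cells, via Gauss-Seidel sweeps in alternating orderings (fast sweeping)."""
    ny, nx = wall_mask.shape
    big = 1e10
    d = np.where(wall_mask, 0.0, big)

    fwd_i, rev_i = list(range(ny)), list(range(ny))[::-1]
    fwd_j, rev_j = list(range(nx)), list(range(nx))[::-1]
    orderings = [(fwd_i, fwd_j), (fwd_i, rev_j), (rev_i, fwd_j), (rev_i, rev_j)]

    for s in range(n_sweeps):
        rows, cols = orderings[s % 4]
        for i in rows:
            for j in cols:
                if wall_mask[i, j]:
                    continue
                # smallest upwind neighbour value in each grid direction
                a = min(d[i - 1, j] if i > 0 else big,
                        d[i + 1, j] if i < ny - 1 else big)
                b = min(d[i, j - 1] if j > 0 else big,
                        d[i, j + 1] if j < nx - 1 else big)
                if abs(a - b) >= h:   # information arrives from one direction only
                    cand = min(a, b) + h
                else:                 # two-directional quadratic update
                    cand = 0.5 * (a + b + np.sqrt(2.0 * h * h - (a - b) ** 2))
                d[i, j] = min(d[i, j], cand)
    return d

# toy usage: distance from the walls of a 64 x 64 box
mask = np.zeros((64, 64), dtype=bool)
mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = True
dist = wall_distance(mask)
```

The medial axis discussed above corresponds to the ridges of d where the distance gradient is discontinuous, which is what a d-based MAT procedure exploits.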

Relevance:

30.00%

Publisher:

Abstract:

Turbulent combustion of a stoichiometric hydrogen-air mixture is simulated using direct numerical simulation methodology, employing complex chemical kinetics. Two flame configurations, a freely propagating flame and a V-flame stabilized behind a hot rod, are simulated. The results are analyzed to study the influence of flame configuration on the turbulence-scalar interaction, which is critical for the scalar-gradient generation processes. The results suggest that this interaction process is not influenced by the flame configuration, and the flame normal is found to align with the most extensive strain in regions of intense heat release. The combustion in the rod-stabilized flame is found to be flamelet-like in an average sense, and the growth of the flame-brush thickness with downstream distance is well represented by Taylor's theory of turbulent diffusion when the flame brushes are non-interacting. The thickness is observed to saturate when the flame brushes interact, which is found to occur in the simulated rod-stabilized flame at a Taylor-microscale Reynolds number of 97. © 2011 American Institute of Physics.
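For reference, the Taylor theory invoked above (a classical result, not a finding of this paper) gives the mean-square dispersion of fluid particles as

$$ \sigma^{2}(t) = 2\,\overline{u'^{2}} \int_{0}^{t} (t-s)\,\rho(s)\,\mathrm{d}s, \qquad \sigma \simeq u' t \;\; (t \ll T_{L}), \qquad \sigma^{2} \simeq 2\,\overline{u'^{2}}\, T_{L}\, t \;\; (t \gg T_{L}), $$

where ρ is the Lagrangian velocity autocorrelation and T_L its integral time scale; identifying the flame-brush thickness with σ, and the flight time with downstream distance divided by the mean velocity, gives the growth law referred to above.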

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a method for vote-based 3D shape recognition and registration, in particular using mean shift on 3D pose votes in the space of direct similarity transforms for the first time. We introduce a new distance between poses in this space, the SRT distance. It is left-invariant, unlike the Euclidean distance, and has a unique, closed-form mean, in contrast to the Riemannian distance, so it is fast to compute. We demonstrate improved performance over the state of the art in both recognition and registration on a real and challenging dataset, by comparing our distance with others in a mean shift framework, as well as with the commonly used Hough voting approach. © 2011 IEEE.
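In schematic terms (our notation; the specific form of the SRT distance is the paper's contribution and is not reproduced here), each mean shift step seeks the pose minimizing a kernel-weighted sum of squared distances to the pose votes p_i:

$$ p^{(k+1)} = \operatorname*{arg\,min}_{q} \sum_{i} w_{i}\, d\big(q, p_{i}\big)^{2}, \qquad w_{i} = K\!\left(\frac{d\big(p^{(k)}, p_{i}\big)^{2}}{h^{2}}\right), $$

and the practical point made above is that with the SRT distance this weighted mean exists in closed form, whereas a Riemannian distance would require an iterative (and not necessarily unique) mean computation.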

Relevance:

30.00%

Publisher:

Abstract:

Transient flows in a confined ventilated space induced by a buoyancy source of time-varying strength and an external wind are examined. The space considered has a cross-sectional area that varies with height. A generalised theoretical model is proposed to investigate the flow dynamics following the activation of an external wind and an internal source of buoyancy. To investigate the effect of geometry, we vary the angle of the wall inclination of a particular geometry in which a point source of constant buoyancy is activated in the absence of wind. Counter-intuitively, the ventilation is worse and lower airflow rates are established for geometries whose cross-sectional area increases with height. We investigate the effect of the source buoyancy strength by comparing two cases: (1) the buoyancy input is constant, and (2) the buoyancy input gradually increases over time so that after a finite time the total buoyancy inputs for (1) and (2) are identical. The rate at which the source heat gains are introduced plays a significant role in the flow behaviour: in case (2), a warmer layer and a more pronounced overshoot are obtained than in case (1). The effect of an assisting or opposing wind on the transient ventilation of an enclosure of constant cross-sectional area with height and constant heat gains is examined. A Froude number Fr is used to define the relative strengths of the buoyancy-induced and wind-induced velocities, and five different transient states and their associated critical Fr are identified. © 2010 Elsevier Ltd.