89 results for Elliptic Variational Inequalities
Abstract:
Parity (P)-odd domains, corresponding to nontrivial topological solutions of the QCD vacuum, might be created during relativistic heavy-ion collisions. These domains are predicted to lead to charge separation of quarks along the orbital momentum of the system created in noncentral collisions. To study this effect, we investigate a three-particle mixed-harmonics azimuthal correlator which is a P-even observable, but directly sensitive to the charge-separation effect. We report measurements of this observable using the STAR detector in Au + Au and Cu + Cu collisions at √s_NN = 200 and 62 GeV. The results are presented as a function of collision centrality, particle separation in rapidity, and particle transverse momentum. A signal consistent with several of the theoretical expectations is detected in all four data sets. We compare our results to the predictions of existing event generators and discuss in detail possible contributions from other effects that are not related to P violation.
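For reference, the three-particle correlator in question is commonly written (in notation that may differ from the paper's own) as

\[
\langle \cos(\phi_\alpha + \phi_\beta - 2\phi_c) \rangle \;\approx\; v_{2,c}\,\langle \cos(\phi_\alpha + \phi_\beta - 2\Psi_{RP}) \rangle ,
\]

where \(\phi_\alpha\) and \(\phi_\beta\) are the azimuthal angles of the two charged particles of interest, \(\phi_c\) is the angle of a third particle used to estimate the reaction-plane angle \(\Psi_{RP}\), and \(v_{2,c}\) is its elliptic flow. The observable is P-even, but charge separation across the reaction plane makes it differ between same-sign and opposite-sign pairs.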
Abstract:
We present the first measurements of identified hadron production, azimuthal anisotropy, and pion interferometry from Au + Au collisions below the nominal injection energy at the BNL Relativistic Heavy-Ion Collider (RHIC) facility. The data were collected using the large acceptance solenoidal tracker at RHIC (STAR) detector at √s_NN = 9.2 GeV from a test run of the collider in the year 2008. Midrapidity results on the multiplicity density dN/dy in rapidity y, average transverse momentum ⟨p_T⟩, particle ratios, elliptic flow, and Hanbury-Brown-Twiss (HBT) radii are consistent with the corresponding results at similar √s_NN from fixed-target experiments. Directed flow measurements are presented for both the midrapidity and forward-rapidity regions. Furthermore, the collision-centrality dependence of the identified-particle dN/dy, ⟨p_T⟩, and particle ratios is discussed. These results also demonstrate that the capabilities of the STAR detector, although optimized for √s_NN = 200 GeV, are suitable for the proposed QCD critical-point search and exploration of the QCD phase diagram at RHIC.
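As a reminder of the standard conventions assumed here (not quoted from the paper), the anisotropic-flow coefficients are defined through the Fourier expansion of the azimuthal particle distribution with respect to the reaction plane,

\[
\frac{dN}{d\phi} \;\propto\; 1 + \sum_{n \ge 1} 2\, v_n \cos\!\big[ n (\phi - \Psi_{RP}) \big],
\]

with \(v_1\) the directed flow and \(v_2\) the elliptic flow quoted in the measurements above.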
Abstract:
Elliptic curve cryptography (ECC) is a public-key cryptosystem with high security, and the optimization and software/hardware implementation of ECC algorithms are a current research focus. Hardware implementations of the elliptic curve cryptographic algorithm are fast and highly secure; however, with the development of new analysis methods such as power analysis and side-channel attacks, low-power design is becoming increasingly important in the hardware implementation of cryptographic algorithms. Focusing on the characteristics of the elliptic curve cryptographic algorithm, this paper mainly discusses low-power design methods for the chip design of the algorithm.
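As a purely illustrative sketch (not the hardware design discussed in the paper), the core operation that such ECC chips implement is scalar multiplication on an elliptic curve over a prime field. The toy modulus, curve coefficients, and base point below are hypothetical values chosen only so the example runs:

# Illustrative sketch only (not the paper's hardware design): scalar multiplication
# on a short Weierstrass curve y^2 = x^3 + a*x + b over GF(p), the core operation
# that ECC hardware implements. Curve parameters and base point are toy values.

P_MOD = 97        # toy prime modulus (hypothetical)
A, B = 2, 3       # toy curve coefficients (hypothetical)

def inv_mod(x, p=P_MOD):
    """Modular inverse via Fermat's little theorem (p prime)."""
    return pow(x, p - 2, p)

def point_add(P, Q):
    """Add two affine points; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                        # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + A) * inv_mod(2 * y1) % P_MOD  # tangent slope
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1) % P_MOD         # chord slope
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def scalar_mult(k, P):
    """Left-to-right double-and-add. Low-power, side-channel-resistant hardware
    typically prefers constant-time ladders (e.g. the Montgomery ladder),
    precisely because of the power-analysis attacks mentioned in the abstract."""
    R = None
    for bit in bin(k)[2:]:
        R = point_add(R, R)            # double
        if bit == '1':
            R = point_add(R, P)        # add
    return R

G = (3, 6)                             # hypothetical base point on the toy curve
print(scalar_mult(3, G))               # -> another point on the toy curve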
Abstract:
Submonolayer thin films of a three-ring bent-core (that is, banana-shaped) compound, m-bis(4-n-octyloxystyryl)benzene (m-OSB), were prepared by the vacuum-deposition method, and their morphologies, structures, and phase behavior were investigated by atomic force microscopy (AFM) and transmission electron microscopy (TEM). The films show island shapes ranging from compact elliptic or circular patterns at low temperatures (below 40 °C) to branched patterns at high temperatures (above 60 °C). This shape evolution is contrary to the prediction based on the traditional diffusion-limited aggregation (DLA) theory. AFM observations revealed that two different mechanisms governed the film growth: the compact islands were formed via a dewetting-like behavior, while the branched islands were formed via diffusion-mediated growth. It is suggested that m-OSB forms a two-dimensional liquid crystal on the low-temperature substrate, which is responsible for the unusual formation of compact islands. All of the monolayer islands are unstable and tend to transform into slender bilayer crystals at room temperature. This phase transition results from the peculiar molecular shape and packing of the bent-core molecules and is interpreted as an escape from macroscopic net polarization by the formation of an antiferroelectric alignment.
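For context on the DLA prediction mentioned above, a minimal lattice diffusion-limited aggregation simulation (illustrative only; lattice size, particle count, and launch radius are arbitrary) reproduces the branched, fractal growth that the theory would predict for purely diffusion-mediated island formation:

# Illustrative sketch only (not from the paper): lattice diffusion-limited
# aggregation (DLA), the growth model whose branched-island prediction the
# abstract contrasts with the compact low-temperature islands of m-OSB.
import random

def dla(n_particles=300, size=101, seed=0):
    """Grow a DLA cluster on a square lattice from a central seed."""
    random.seed(seed)
    occupied = {(size // 2, size // 2)}        # central seed site
    r_launch = 5                               # launch radius, grows with cluster
    for _ in range(n_particles):
        # launch a random walker near (but not on) the existing aggregate
        while True:
            x = size // 2 + random.randint(-r_launch, r_launch)
            y = size // 2 + random.randint(-r_launch, r_launch)
            if (x, y) not in occupied:
                break
        while True:
            # stick when touching the aggregate -> branched, fractal growth
            if any((x + ex, y + ey) in occupied
                   for ex, ey in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                occupied.add((x, y))
                r_launch = max(r_launch,
                               abs(x - size // 2) + 5, abs(y - size // 2) + 5)
                break
            dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            x, y = x + dx, y + dy
            if not (0 <= x < size and 0 <= y < size):
                break                          # walker escaped the lattice; discard
    return occupied

cluster = dla()
print(len(cluster), "sites in the aggregate")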
Abstract:
A novel synthetic approach to biodegradable amphiphilic copolymers based on poly(epsilon-caprolactone) (PCL) and chitosan is presented, and the prepared copolymers were used successfully to prepare nanoparticles. The PCL-graft-chitosan copolymers were synthesized by coupling the hydroxyl end-groups of preformed PCL chains with the amino groups present on 6-O-triphenylmethyl chitosan and then removing the protective 6-O-triphenylmethyl groups in acidic aqueous solution. The PCL content in the copolymers can be controlled in the range of 10-90 wt %. The graft copolymers were thoroughly characterized by H-1 NMR, C-13 NMR, FT-IR, and DSC. The nanoparticles made from the graft copolymers were investigated by H-1 NMR, DLS, AFM, and SEM measurements. It was found that the copolymers could form spherical or elliptic nanoparticles in water. The amount of available primary amines on the surface of the prepared nanoparticles was evaluated by a ninhydrin assay, and it can be controlled by the grafting degree of PCL.
Abstract:
The general forms of the conservation of momentum, temperature, and potential vorticity for the coastal ocean are obtained in the x-z plane for the nonlinear ocean circulation of a Boussinesq fluid, and an elliptic second-order partial differential equation is derived. Solutions of the partial differential equation are obtained under the condition that the fluid moves along the topography. The numerical results show that both upwelling and downwelling exist along the coastline, depending mainly on the large-scale ocean conditions. The numerical results for the upwelling (downwelling), coastal jet, and temperature front zone compare favorably with observations.
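For reference (the specific coefficients depend on the circulation model and are not quoted here), a second-order equation in the x-z plane,

\[
A\,\psi_{xx} + 2B\,\psi_{xz} + C\,\psi_{zz} + \text{(lower-order terms)} = F,
\]

is of elliptic type when the discriminant satisfies \(B^2 - AC < 0\); this is the sense in which the derived equation is elliptic.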
Abstract:
Instead of discussing the existence of a one-dimensional traveling wave front solution that connects two constant steady states, the present work deals with the case connecting a constant steady state and a nonhomogeneous steady state on an infinite band region. The corresponding model is the well-known Fisher equation with variational coefficient and a Dirichlet boundary condition.
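For orientation, the classical constant-coefficient Fisher equation reads

\[
u_t = D\,u_{xx} + r\,u(1-u),
\]

whose traveling wave fronts connect the constant steady states \(u = 0\) and \(u = 1\); the work summarized here instead considers the equation with a non-constant coefficient on an infinite band region, where one of the connected states is nonhomogeneous.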
Abstract:
The conditional nonlinear optimal perturbation (CNOP), which is a nonlinear generalization of the linear singular vector (LSV), has been applied to important problems in the atmospheric and oceanic sciences, including ENSO predictability, targeted observations, and ensemble forecasting. In this study, we investigate the computational cost of obtaining the CNOP by several methods. Differences and similarities in the computational error and cost of obtaining the CNOP are compared among the sequential quadratic programming (SQP) algorithm, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm, and the spectral projected gradient (SPG2) algorithm. A theoretical grassland ecosystem model and the classical Lorenz model are used as examples. Numerical results demonstrate that the computational error is acceptable with all three algorithms. The computational cost of obtaining the CNOP is reduced by using the SQP algorithm. The experimental results also reveal that the L-BFGS algorithm is the most effective of the three optimization algorithms for obtaining the CNOP. The numerical results suggest a new approach and algorithm for obtaining the CNOP for large-scale optimization problems.
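For reference, the CNOP is usually defined (in the standard formulation; notation here is generic and not quoted from this paper) as the initial perturbation \(x_{0\delta}^{*}\) that maximizes the nonlinear evolution of the perturbation under a constraint on its size:

\[
J(x_{0\delta}^{*}) \;=\; \max_{\|x_{0\delta}\| \le \delta} \big\| M_{\tau}(x_{0} + x_{0\delta}) - M_{\tau}(x_{0}) \big\|,
\]

where \(M_{\tau}\) is the nonlinear propagator of the model to time \(\tau\) and \(x_{0}\) is the basic state. SQP, L-BFGS (applied to a penalized or transformed version of the problem), and SPG2 are three ways of solving this constrained maximization.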
Abstract:
Mesoscale eddies play an important role in the ocean circulation. In order to improve the accuracy of mesoscale eddy simulation, a three-dimensional variational (3DVAR) data assimilation system, the Ocean Variational Analysis System (OVALS), is coupled with a POM model to simulate the mesoscale eddies in the Northwest Pacific Ocean. In this system, sea surface height anomaly (SSHA) data from satellite altimeters are assimilated and translated into pseudo temperature and salinity (T-S) profile data. These profile data are then taken as observation data, assimilated again, and used to produce the three-dimensional analysis T-S field. According to the characteristics of mesoscale eddies, the most appropriate assimilation parameters are set up and tested in this system. A ten-year mesoscale eddy simulation and comparison experiment is carried out with two schemes: assimilation and non-assimilation. Comparison of the two schemes against observations shows that the simulation accuracy of the assimilation scheme is much better than that of the non-assimilation scheme, which verifies that the altimetry data assimilation method can dramatically improve the simulation accuracy of mesoscale eddies and indicates that the system could be used for forecasting mesoscale eddies in the future.
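For reference, a standard 3DVAR analysis (the exact formulation used in OVALS may differ) minimizes the cost function

\[
J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathrm T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \tfrac{1}{2}\big(H(\mathbf{x})-\mathbf{y}_o\big)^{\mathrm T}\mathbf{R}^{-1}\big(H(\mathbf{x})-\mathbf{y}_o\big),
\]

where \(\mathbf{x}_b\) is the background state (here the POM forecast), \(\mathbf{y}_o\) are the observations (here the pseudo T-S profiles derived from the altimeter SSHA), \(H\) is the observation operator, and \(\mathbf{B}\) and \(\mathbf{R}\) are the background and observation error covariances.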
Abstract:
Image segmentation is an important problem in image processing and a foundation of computer vision. Because it simplifies the storage and representation of information and thereby enables intelligent interpretation of acquired image content, segmentation is an indispensable step in many applications, such as medical image processing, three-dimensional reconstruction of environments, and automatic target recognition. There are many segmentation methods, such as edge detection, thresholding, region merging, watershed, and Markov random fields. Although each has its own strengths, these methods cannot fully combine low-level image information with high-level information during segmentation and therefore cannot emulate the intelligence of the human visual system. When low-level image information is insufficient, such purely data-driven segmentation models cannot achieve satisfactory results. Although no single segmentation method can satisfy all requirements, partitioning an image into meaningful, expected regions by exploiting as much high-level and low-level information as possible remains the goal that researchers pursue.

There are two key factors in the mathematical modeling and computation of image segmentation. The first is to build an appropriate segmentation model that effectively combines the roles of the segmentation boundary and the segmented regions. The second is to unify, by the most effective means, the geometric features of the boundary and the regions within the segmentation model. Variational active contour segmentation treats the image as a continuous function, which allows the segmentation problem to be studied from the viewpoint of continuous function spaces and provides rigorous mathematical tools such as differential geometry, functional analysis, and differential equations; it can therefore address both issues well. First, the Mumford-Shah (M-S) model provides a complete mathematical framework for variational active contour segmentation and can also be reasonably interpreted from an information-theoretic perspective. Second, the level set method can effectively represent the geometric features of the segmentation boundary and regions. Compared with other approaches, variational active contours have significant advantages both in theory and in practical computation: they directly handle and represent important geometric features such as gradients, tangent vectors, and curvature; they can effectively model many dynamic processes, such as linear and nonlinear diffusion; and they can draw on a wealth of existing numerical methods for analysis and computation.

Based on variational principles and partial differential equation methods, and exploiting the ability of active contour models to combine low-level image information with high-level prior knowledge, this dissertation integrates specific prior knowledge into active contour segmentation models to compensate for insufficient low-level image information, making the models more intelligent. The research focuses on two aspects of variational active contour models: (1) shape constraints on the evolving contour, and (2) constraints on the gradient descent flow of the evolving contour and their realization by filtering. The main work comprises four parts. First, open boundary detection based on the M-S model and spline curves: by introducing suitable boundary conditions for the evolving contour and representing the open curve to be detected with a spline, the detection of general open curves is cast as a well-posed image segmentation problem; the method is called the open diffusion snake model, and general open-curve detection has many applications, such as detecting rivers, roads, skylines, and weld seams. Second, a variance active contour model: in target tracking, the dominant motion of the tracked object is translation; taking this as prior knowledge and combining it with the active contour model, a variance active contour model (HV) is proposed whose contour evolution favors translation and is fast, making it better suited to automatic target tracking than existing active contour models. Third, a general gradient descent flow filter based on the M-S model and the variational method on implicit surfaces: a unified framework and solution method for obtaining general gradient descent flows is given, in which the H0 gradient descent flow and general gradient descent flows are unified within the Mumford-Shah framework, so that obtaining a general flow becomes the minimization of a functional solved with the variational method on implicit surfaces; from a filter-design viewpoint, a general gradient descent flow can be obtained by filtering the H0 flow, and the filter implementation embodies the prior properties embedded in the flow; following this idea, the inner product spaces corresponding to the HV and H1 active contours are combined sequentially to filter the H0 flow, yielding an active contour with both a global translation preference and a locally smooth velocity field, called the HV1 active contour, which unifies the H0, H1, and HV active contours. Fourth, shape-preserving active contour models and their applications: for the detection of certain specific targets, shape-preserving active contour models are proposed that make the segmentation itself yield the target while also giving a quantitative description of the target; based on these models, the detection of targets with elliptic, straight-line, and parallelogram contours is realized, where the elliptic shape constraint is applied to fundus image segmentation, and the straight-line and parallelogram constraints are applied to skyline detection and airport runway tracking in automatic target recognition, respectively.
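As a reference for the framework discussed above (the standard form, not a quotation from the dissertation), the Mumford-Shah functional for an image \(f\) on a domain \(\Omega\) seeks a piecewise-smooth approximation \(u\) and a boundary set \(C\) by minimizing

\[
E_{\mathrm{MS}}(u, C) = \lambda \int_{\Omega} (u - f)^2 \, dx \;+\; \mu \int_{\Omega \setminus C} |\nabla u|^2 \, dx \;+\; \nu \, |C|,
\]

where \(|C|\) is the length of the boundary; active contour models of the kind studied here minimize such functionals by evolving \(C\) (typically represented with a level set) along a gradient descent flow.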
Abstract:
Petroleum and natural gas are important strategic resources. A shortage of reserves will hinder economic development and threaten national security, especially as the country's main oil fields have reached the peak of their exploitation and exploration. It is therefore important to open new exploration areas and increase petroleum and natural gas reserves. Magnetic exploration is a major geophysical exploration method, and recent advances in observation instruments and processing methods have broadened its scope. The magnetic bright spot method is one such application. Vertical migration of hydrocarbons changes the physical and chemical environment above a hydrocarbon reservoir; in this new environment, trivalent iron is reduced to divalent iron, producing a small-scale magnetic anomaly, the magnetic bright spot. The method explores for oil and gas fields through the relation between hydrocarbons and this magnetic anomaly. In connection with an oil field project, this thesis systematically studies the extraction and identification of magnetic bright spots and delineates favorable areas. To test the results, seismic information is superposed on the magnetic bright spots, confirming their reliability, and software for picking and identifying magnetic bright spots is completed.

The magnetic basement is very important for studying the formation and evolution of a basin; in particular, it is a crucial parameter when exploring residual basins in pre-Cenozoic research. Building on previous work, this thesis proposes a new method for inverting the interface of the magnetic layer: stepwise separation of the magnetic field. The idea is to translate the effect of magnetic layer undulation into an equivalent change in magnetization density over a flat layer: the layer is treated as having unit thickness for convenience of calculation, and the varying magnetization density is defined as the equivalent magnetic density. The relation between the magnetic field and the layer undulation is thus converted into a relation between the magnetic field and the equivalent magnetic density, and the layer undulation is obtained by computing the equivalent magnetic density. Compared with the conventional Parker method, model experiments and field examples show that this method is effective. Its merit is that it avoids the overly flat results obtained in strongly undulating areas when a uniform average depth is used, so the result is closer to reality; moreover, because it inverts the equivalent magnetic density first and then converts it to layer undulation, it lays a foundation for inverting laterally and vertically varying magnetization density.
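A one-line statement of the equivalence described above (a thin-layer approximation, phrased in generic notation rather than the thesis's own): if the true layer has magnetization \(M_0\) and local thickness variation \(\Delta h(x, y)\), its anomaly is approximately that of a flat unit-thickness layer carrying the equivalent magnetization density

\[
M_{\mathrm{eq}}(x, y) \;\approx\; M_0 \,\Delta h(x, y),
\]

so inverting the field for \(M_{\mathrm{eq}}\) and dividing by \(M_0\) recovers the layer undulation \(\Delta h\).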
Abstract:
The primary means by which we understand the interior of the Earth and the distribution of mineral resources are surface geological surveys and the inversion and interpretation of geophysical/geochemical data. The purpose of seismic inversion is to extract information about subsurface geometrical structures and the distribution of material properties from seismic waves, which is used for resource prospecting and exploitation and for studying the inner structure of the Earth and its dynamic processes. Although the study of seismic parameter inversion has achieved a great deal since the 1950s, problems persist when it is applied to real data because of nonlinearity and ill-posedness. Most methods used to invert geophysical parameters are based on iterative inversion, which depends strongly on the initial model and the constraint conditions. It is difficult to obtain a credible result when factors such as environmental and instrument noise in seismic wave excitation, propagation, and acquisition are taken into account. Seismic inversion of real data is a typical nonlinear problem, meaning that most objective functions have multiple minima, which makes them hard to solve with commonly used methods such as generalized-linearization and quasi-linearization inversion because of local convergence. Global nonlinear search methods that do not rely heavily on the initial model seem more promising, but the amount of computation required for real-data processing is often unacceptable. To address these problems, this paper develops a class of global nonlinear inversion methods that bring the Quantum Monte Carlo (QMC) method into geophysical inverse problems. QMC has been used as an effective numerical method to study quantum many-body systems governed by the Schrödinger equation, and it can be categorized into zero-temperature and finite-temperature methods. The paper is organized in four parts. In the first, we briefly review the theory of the QMC method, establish its connections with geophysical nonlinear inversion, and give a flow chart of the algorithm. In the second part, we apply four QMC inverse methods to 1D wave-equation impedance inversion and compare their results in terms of convergence rate and accuracy; the feasibility, stability, and noise tolerance of the algorithms are also discussed. Numerical results demonstrate that geophysical nonlinear inversion and other nonlinear optimization problems can be solved by means of the QMC method, and that Green's function Monte Carlo (GFMC) and diffusion Monte Carlo (DMC) are more applicable to real data than Path Integral Monte Carlo (PIMC) and Variational Monte Carlo (VMC). The third part provides parallel versions of the serial QMC algorithms, applies them to 2D acoustic velocity inversion and real seismic data processing, and further discusses their global search ability and noise tolerance; the inverted results show the robustness of these algorithms, making them feasible for 2D inversion and real-data processing, and the parallel inversion algorithms are also applicable to other optimization problems. Finally, some useful conclusions are drawn in the last section. The analysis and comparison of the results indicate that bringing QMC into geophysical inversion is successful. QMC is a nonlinear inversion method that offers stability, efficiency, and noise tolerance; its most appealing property is that it does not rely heavily on the initial model and is well suited to nonlinear, multi-minimum geophysical inverse problems. The method can also be used in other fields involving nonlinear optimization.
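For context on the QMC machinery invoked above (standard form, independent of the thesis's specific mapping to inversion), diffusion and Green's function Monte Carlo sample the imaginary-time Schrödinger equation

\[
-\frac{\partial \Psi(\mathbf{R}, \tau)}{\partial \tau} = \big(\hat{H} - E_T\big)\,\Psi(\mathbf{R}, \tau), \qquad \hat{H} = -\tfrac{1}{2}\nabla^2 + V(\mathbf{R}),
\]

whose long-time solution is projected onto the ground state, \(\Psi(\mathbf{R},\tau\to\infty) \propto \phi_0(\mathbf{R})\, e^{-(E_0 - E_T)\tau}\). In approaches of this kind, the role of the energy landscape is typically played by the misfit between observed and synthetic data, so locating the "ground state" corresponds to locating the global minimum of the objective function.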
Abstract:
This dissertation presents a series of irregular-grid-based numerical techniques for modeling seismic wave propagation in heterogeneous media. The study involves the generation of the irregular numerical mesh corresponding to the irregular-grid scheme, the discretized version of the equations of motion on the unstructured mesh, and irregular-grid absorbing boundary conditions. The resulting numerical technique has been used to generate synthetic data sets on realistic complex geologic models for examining migration schemes. The discretization of the equations of motion and the modeling are based on the grid method, whose key idea is to use the integral equilibrium principle in place of the per-grid operator of finite difference schemes and the variational formulation of the finite element method. The irregular grid for a complex geologic model is generated by the paving method, which allows the grid spacing to vary according to meshing constraints. The grids have high quality at domain boundaries and contain equal numbers of nodes at interfaces, which avoids interpolation of parameters and variables. The irregular-grid absorbing boundary condition is developed by extending the perfectly matched layer (PML) method to rotated local coordinates. The split PML equations of the first-order system are derived using the integral equilibrium principle. The proposed scheme can build a PML boundary of arbitrary geometry in the computational domain, avoiding the special treatment at corners in a standard PML method and saving considerable memory and computational cost. The numerical implementation demonstrates the desired qualities of the irregular-grid-based modeling technique. In particular, (1) smaller memory requirements and computational time are needed, because the grid spacing changes according to the local velocity; (2) arbitrary surface and interface topographies are described accurately, removing the artificial reflections resulting from the staircase approximation of curved or dipping interfaces; and (3) the computational domain is significantly reduced by flexibly building curved artificial boundaries using the irregular-grid absorbing boundary conditions. The proposed irregular-grid approach is applied to reverse time migration as the extrapolation algorithm. It can discretize the smoothed velocity model with irregular grids of variable scale, which helps reduce the computational cost, and it can handle data sets acquired on arbitrary topography without field corrections.
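For reference, the PML idea being extended here is usually expressed (in the frequency domain, in standard form rather than the rotated-coordinate splitting derived in the dissertation) as a complex stretching of the coordinate normal to the absorbing layer,

\[
\frac{\partial}{\partial x} \;\longrightarrow\; \frac{1}{s_x(x)}\,\frac{\partial}{\partial x}, \qquad s_x(x) = 1 + \frac{\sigma(x)}{i\omega},
\]

with damping profile \(\sigma \ge 0\) inside the layer (the sign depends on the Fourier-transform convention). The stretching attenuates outgoing waves without reflection at the layer interface; the dissertation derives the corresponding split, first-order time-domain equations in rotated local coordinates so that the layer can follow arbitrarily shaped boundaries.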
Abstract:
Scholars worldwide have long sought an effective analytic algorithm for the multiple-hole problems commonly encountered in engineering design. Although some results for circular or elliptic holes have been obtained under specific conditions, little has been achieved for arbitrary multiple-hole problems, which are the most significant for engineering design. The author makes further studies of arbitrary multiple-hole problems using the complex variable function method and the Schwarz alternating method. After overcoming a series of technical difficulties, the author obtains an effective analytic algorithm and acquires the stress field and displacement field with high accuracy; the iteration can be carried out as many times as practical accuracy requirements demand. In addition, the algorithm yields the stress and displacement fields even for multiple holes of complex shapes and small hole spacings. Further, preliminary studies are made of the viscoelastic displacement solution for arbitrary double holes. Using the obtained displacement solution for arbitrary multiple holes, this paper studies displacement back-analysis for the excavation of two tunnels and finds the back-analysis method to be accurate. Additionally, the author presents a mathematical proof of the uniqueness of the inversion for ground stresses, elastic modulus, and Poisson's ratio. The author believes that the accurate analytic algorithm provided in this paper offers an effective approach to stress and displacement analysis for arbitrary multiple-hole problems, optimal arrangement of multiple holes, hole shape optimization, and related tasks.
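For reference, the complex variable framework invoked here is the Kolosov-Muskhelishvili representation (standard form, not quoted from the paper), in which the plane stresses and displacements are expressed through two analytic potentials \(\varphi(z)\) and \(\psi(z)\):

\[
\sigma_{xx} + \sigma_{yy} = 4\,\operatorname{Re}\varphi'(z), \qquad
\sigma_{yy} - \sigma_{xx} + 2i\,\sigma_{xy} = 2\big[\bar{z}\,\varphi''(z) + \psi'(z)\big],
\]
\[
2G\,(u + i v) = \kappa\,\varphi(z) - z\,\overline{\varphi'(z)} - \overline{\psi(z)},
\]

with shear modulus \(G\) and \(\kappa = 3 - 4\nu\) (plane strain) or \((3-\nu)/(1+\nu)\) (plane stress). The Schwarz alternating method then corrects the potentials hole by hole, iterating until the boundary conditions on every hole are satisfied to the required accuracy.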