166 results for isospin independence
Abstract:
This thesis introduces isospin effects in nuclear multifragmentation, the significance and status of liquid-gas phase transition studies, and several theoretical models commonly used to describe the nuclear liquid-gas phase transition. Based on the isospin-dependent quantum molecular dynamics (IQMD) model and a phenomenological static model, isospin effects and the liquid-gas phase transition in the multifragmentation of finite nuclei are studied systematically. Using the equation of state of asymmetric nuclear matter together with the IQMD and static models, the isospin effects in the multifragmentation of the finite nuclei 112Sn and 132Sn, and their temperature dependence, are investigated. The contributions of different density regions to the production of intermediate-mass fragments (IMFs) and to the ratio of average free neutrons to average free protons at given temperatures are presented: at low temperature (5 MeV) the low-density region (0.01-0.04 fm^-3) contributes most to IMF production, while with rising temperature (10 MeV, 15 MeV) the contribution of the higher-density region (>0.04 fm^-3) increases. In both the low-density region (0.01-0.04 fm^-3) and the higher-density region (>0.04 fm^-3), the free-neutron to free-proton ratio is closely tied to the isospin of the system: it is larger in systems with larger isospin asymmetry. To search for signals of critical behavior in nuclear multifragmentation, conditional moments, reduced moments, and combined moments were analyzed and critical exponents extracted. Using a phenomenological equation of state of isospin-asymmetric nuclear matter together with the static model to study the critical behavior of the liquid-gas phase transition in hot nuclei, the moment analysis of nuclear fragments indicates that the hot, dense nuclei formed in intermediate- and high-energy heavy-ion collisions exhibit clear critical phenomena during the expansion stage. A signal of this critical behavior was identified: the contour plot of Zmax versus S2 on natural-logarithm coordinates can serve as a signature of the critical point, and the phenomenon is more pronounced for heavier systems. Critical exponents extracted by linear fitting were compared with those from other models and found consistent with the exponents of the 3D percolation, fluid, and Au+C fragmentation systems.
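Since this abstract leans on a moment analysis of the fragment distributions, a minimal illustration may help. The sketch below (Python; the function names and toy events are illustrative, not the thesis' actual analysis, and "conditional moment" is read in the usual sense of excluding the largest fragment) computes a conditional moment, the reduced moment ratio S2 = M2/M1, and a power-law exponent from a log-log linear fit of the charge spectrum:

```python
import numpy as np

def conditional_moment(charges, k):
    """k-th moment of one event's fragment charge distribution,
    excluding the largest fragment (treated as the 'liquid')."""
    z = np.sort(np.asarray(charges))[:-1]
    return float(np.sum(z ** k))

def s2(charges):
    """Reduced moment S2 = M2 / M1."""
    return conditional_moment(charges, 2) / conditional_moment(charges, 1)

def tau_from_charge_spectrum(all_charges):
    """Power-law exponent tau from a log-log fit of n(Z) ~ Z^-tau."""
    z, n = np.unique(np.concatenate(all_charges), return_counts=True)
    slope, _ = np.polyfit(np.log(z), np.log(n), 1)
    return -slope

# toy usage with invented events (one charge list per event)
events = [[1, 1, 2, 3, 6, 40], [1, 2, 2, 4, 5, 38]]
print(s2(events[0]), tau_from_charge_spectrum(events))
```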
Abstract:
This thesis introduces the status of radioactive nuclear beam physics and several isospin-dependent microscopic transport theories for heavy-ion collisions in current use, and systematically describes the equation of state of asymmetric nuclear matter, isospin effects in intermediate-energy heavy-ion collisions, and the properties of neutron stars. Based on Hartree-Fock theory with extended Skyrme interactions, a non-relativistic density-, temperature-, and isospin-dependent equation of state of nuclear matter (IEOS) is obtained in the nuclear-matter approximation, and its isospin effects are studied systematically. The isospin dependence of the nucleon mean field, the incompressibility of nuclear matter, the nucleon effective mass, and the critical temperature of nuclear matter is discussed, and parabolic laws for the saturation density, incompressibility, and binding energy per nucleon at the saturation point are given. The temperature and density dependence of the symmetry energy is also explored: an analytic expression for the symmetry energy at zero temperature is given, a parabolic law for its temperature dependence is proposed, and the symmetry energy is found to decrease with increasing temperature. In addition, based on this isospin-dependent equation of state, a static interpretation of the ALADIN caloric curve is given. Starting from the traditional quantum molecular dynamics (QMD) model, an isospin-dependent QMD model (IQMD) is obtained by properly incorporating the isospin degree of freedom in the mean field, two-body collisions, Pauli blocking, initialization, and fragment construction. With the IQMD model, isospin effects in intermediate-energy heavy-ion collisions are studied systematically, for example: the relaxation of the isospin degree of freedom, isospin effects in preequilibrium nucleon emission, collective flow (directed, rotational, squeeze-out, and radial flow) and its isospin dependence, isospin effects in nuclear multifragmentation and their disappearance, chemical instability in heavy-ion collisions, and the selection of impact parameters in intermediate-energy collisions and its isospin effects. Similarly, an isospin-dependent Boltzmann-Langevin equation (IBLE) is obtained by properly incorporating the isospin degree of freedom into the traditional Boltzmann-Langevin equation, and is used to study the production cross section of 19Na. The IQMD model is also used to explore the "neck" mechanism of multifragmentation and the finite-range effect of the local potential in heavy-ion collisions. Finally, based on the non-relativistic equation of state given above, the properties of neutron stars are studied systematically, such as chemical composition, mass, binding energy, radius, density profile, moment of inertia, and surface redshift. The results show that some commonly used Skyrme parameter sets give results consistent with astronomical observations.
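The "parabolic laws" referred to here are not spelled out in the abstract; as a hedged reconstruction, the standard forms that such analyses use are:

```latex
% Standard parabolic approximations (a reconstruction; the thesis'
% exact parametrization is not given in the abstract):
\begin{equation}
  \frac{E}{A}(\rho,T,\delta) \simeq \frac{E}{A}(\rho,T,0)
    + E_{\mathrm{sym}}(\rho,T)\,\delta^{2},
  \qquad \delta = \frac{\rho_n-\rho_p}{\rho_n+\rho_p},
\end{equation}
\begin{equation}
  E_{\mathrm{sym}}(\rho,T) \simeq E_{\mathrm{sym}}(\rho,0) - a(\rho)\,T^{2},
\end{equation}
```

with a(ρ) > 0 a fitted coefficient, consistent with the reported decrease of the symmetry energy with temperature.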
Abstract:
This thesis introduces the status of radioactive nuclear beam physics and several isospin-dependent microscopic transport theories for heavy-ion collisions in current use. Starting from the traditional Boltzmann-Langevin equation (BLE), an isospin-dependent mean field, nucleon-nucleon collision cross sections, and Pauli blocking were incorporated; neutrons and protons were distinguished in the initial phase-space sampling; and isospin effects were also included in the coalescence model, yielding an isospin-dependent Boltzmann-Langevin equation (IBLE). With the IBLE, the isotope distributions in reactions induced by radioactive nuclei, the production cross section of 19Na, and the radial expansion flow in intermediate-energy heavy-ion collisions were studied systematically, and the synthesis of superheavy nuclei was discussed preliminarily. The reactions of the projectiles 14O, 16O, and 18O at 28.7 MeV/u on the targets 7Be and 9Be were studied with the IBLE, and the production cross sections of the fragments were calculated. It was found that reactions with neutron-rich (neutron-deficient) projectiles or targets yield neutron-rich (neutron-deficient) fragments. The widths and peak positions of the isotope distributions depend closely on the entrance channel: the closer the fragment charge is to the projectile charge, the wider the isotope distribution, the farther its peak deviates from the beta-stability line, and the more pronounced the isospin effect. At 28.7 MeV/u incident energy, the reaction systems 17-20,22Ne + 12C and 20Ne + 9Be were studied; calculation and comparison of the 19Na production cross sections showed that reactions induced by neutron-deficient nuclei have larger 19Na production cross sections. By studying the dependence of the radial expansion flow on incident energy for the systems 40Ca + 58Ni and 40Ca + 58Fe, a strong isospin dependence of the radial flow was found. From the energy dependence and the fitted results, the existence of a specific incident energy at which the radial flow vanishes was confirmed theoretically; this energy differs between reaction systems and varies with the isospin degree of freedom.
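The incident energy at which the fitted radial flow vanishes can be located with a simple linear fit; the sketch below (Python, with placeholder numbers, not the thesis' results) shows the idea:

```python
import numpy as np

# Hedged sketch: locate the incident energy at which the radial
# expansion flow crosses zero, from a linear fit of flow vs. energy.
# The beam energies and flow values are placeholders, not thesis data.
def zero_flow_energy(energies_mev_u, radial_flow):
    slope, intercept = np.polyfit(energies_mev_u, radial_flow, 1)
    return -intercept / slope  # energy where the fitted flow vanishes

e = np.array([30.0, 40.0, 50.0, 60.0])    # MeV/u, illustrative
flow = np.array([-4.0, -1.5, 1.0, 3.6])   # arbitrary units
print(f"zero-flow energy ~ {zero_flow_energy(e, flow):.1f} MeV/u")
```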
Abstract:
To address two basic questions, (1) whether the isospin degree of freedom reaches equilibrium in intermediate-energy reactions, and (2) whether the isospin degree of freedom affects nuclear temperatures measured by different methods, an experiment was designed using 30 and 35 MeV/u 36,40Ar beams on 112,124Sn targets. The results are as follows. For the dissipative projectile-fragmentation products at the forward angle of 5°, the yield ratio of neutron-rich isotopes to stable nuclei decreases with increasing fragment kinetic energy, while that of proton-rich isotopes increases, showing a clear scissors-shaped distribution. With increasing dissipation time, the mean N/Z of the products evolves gradually from that of the projectile toward that of the composite system. This indicates that the isospin degree of freedom does not reach full equilibrium in these reactions. For the deep-inelastic-collision (DIC) products at 20°, the scissors-shaped pattern becomes less pronounced, reflecting the transition of the isospin degree of freedom from non-equilibrium toward equilibrium. Energy-spectrum analysis of light particles at backward angles shows that the isospin of the initial hot nucleus affects the extraction of slope temperatures: because evaporation of the neutron-rich light particle 6He is suppressed in the 40Ar + 112Sn system, its evaporation tends to occur earlier in the decay chain than in 40Ar + 124Sn, so the extracted temperature is higher; similarly, the temperature of the proton-rich light particle 3He is slightly higher in 40Ar + 112Sn. Isotope-yield analysis at intermediate and backward angles, however, shows that the system isospin has almost no effect on the double-isotope-ratio temperature. As a thermodynamic quantity of the hot nucleus, the nuclear temperature should be independent of the measurement method; the differences between methods arise mainly from the influence of isospin on the decay mechanism. As a first attempt, the entropy extraction used at intermediate and high energies was extended to this energy region, and the entropies of the two systems were found to be nearly identical. Within the framework of the quantum statistical model, examination of the relation between nuclear temperature and entropy shows that the freeze-out density of the 40Ar + 112Sn reaction is slightly higher than that of 40Ar + 124Sn.
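The double-isotope-ratio thermometer mentioned here is conventionally the Albergo form; the abstract does not restate it, so as a reminder of the standard formula:

```latex
% Standard Albergo double-isotope-ratio thermometer (the abstract does
% not give its exact formula; this is the usual form such analyses use).
\begin{equation}
  T = \frac{\Delta B}{\ln(a\,R)}, \qquad
  R = \frac{Y(A_1,Z_1)\,/\,Y(A_1{+}1,Z_1)}{Y(A_2,Z_2)\,/\,Y(A_2{+}1,Z_2)},
\end{equation}
```

where ΔB is the corresponding combination of binding energies and a is a statistical factor built from spins and masses; for example, the d,t / 3He,4He thermometer reads T = 14.3 / ln(1.59 R) in MeV.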
Abstract:
This thesis consists of two parts. The first concerns isospin effects in intermediate-energy heavy-ion collisions; the second concerns finite-range effects in giant resonances of hot nuclei. With the production and use of radioactive secondary beams, the study of isospin effects in intermediate-energy heavy-ion collisions has become an important frontier of nuclear physics. Heavy-ion reactions spanning a wide isospin range, from the stability line to the drip lines, allow one to extract knowledge of the nuclear equation of state and of the isospin-dependent in-medium nucleon-nucleon cross sections. We extended quantum molecular dynamics (QMD) into an isospin-dependent version (IQMD) and used it to study isospin effects in intermediate-energy heavy-ion collisions deeply and systematically. In studying the multifragmentation of neutron-rich and neutron-deficient systems, isospin effects of the multifragmentation process were found, for example that the intermediate-mass-fragment multiplicity of neutron-deficient systems is larger than that of neutron-rich systems, which is important for understanding the multifragmentation mechanism. In the search for observables sensitive to the isospin-dependent equation of state (symmetry potential) and to the in-medium nucleon-nucleon cross sections, the preequilibrium neutron-to-proton ratio was found to be sensitive to the isospin-dependent equation of state over a wide energy range (E < 150 MeV/u) but insensitive to the isospin-dependent in-medium cross sections, whereas nuclear stopping in the intermediate-energy region (Fermi energy …
Abstract:
This thesis consists mainly of two parts. The first concerns isospin effects in intermediate-energy heavy-ion collisions and the equation of state of isospin-asymmetric nuclear matter; the second concerns an improved Glauber theory and the structure of halo nuclei. Using the isospin-dependent quantum molecular dynamics model (IQMD), the effects of the isospin-dependent mean field and of the in-medium nucleon-nucleon cross sections on fragmentation and dissipation in intermediate-energy heavy-ion collisions were studied systematically and carefully. It was found, for the first time internationally, that nuclear stopping, the intermediate-mass-fragment multiplicity, and the numbers of emitted protons (neutrons) all depend sensitively on the isospin effects of the in-medium nucleon-nucleon cross sections, while being quite insensitive to the isospin-dependent mean field (symmetry potential). These observables can therefore serve as sensitive probes for extracting the isospin-dependent in-medium nucleon-nucleon cross sections of neutron-deficient systems in the relatively high energy range. Conversely, at relatively low energies the N/Z ratio of preequilibrium nucleon emission and the isospin fractionation strength depend sensitively on the symmetry potential but are insensitive to the isospin-dependent in-medium cross sections, and can be used to extract knowledge of the isospin-dependent mean field. On this basis, the influences of momentum-dependent interactions, nuclear in-medium effects, and the Coulomb interaction on the dynamical mechanisms behind these probes were studied; all three factors were found to have important effects on the isospin effects in intermediate-energy heavy-ion collisions. For example, the in-medium effect of nucleon-nucleon collisions clearly increases the sensitivity of the IMF multiplicity and nucleon emission numbers to the nucleon-nucleon cross sections; the Coulomb interaction reduces isospin fractionation and nuclear stopping but does not affect their respective sensitivities to the mean field and to two-body collisions; and momentum-dependent interactions clearly increase the sensitivity of all of these observables to the isospin effects of the mean field or the two-body cross sections. These results are of significant reference value for establishing the equation of state of isospin-asymmetric nuclear matter. After incorporating quantum corrections, Coulomb corrections, the isospin effects of nucleon-nucleon collisions, and an assumed effective nuclear density distribution, the Glauber theory, originally applicable only to total reaction cross sections of high-energy nucleon-nucleus reactions, was extended to nucleus-nucleus reactions at intermediate and low energies. Quantum corrections were found to be important when applying the extended Glauber theory to intermediate- and low-energy nucleus-nucleus cross sections. With the modified Glauber theory, the total reaction cross sections of some 30 nucleus-nucleus systems near the stability line, from low to fairly high energies, were calculated systematically and agree well with experiment without adjustable parameters. In calculating the total reaction cross sections of halo nuclei on stable targets, it was found that for incident halo nuclei such as 11Be, 14Be, and 11Li, their halo structure must be taken into account to reproduce the experimental cross sections; the density distributions and root-mean-square radii of halo nuclei can then be extracted from the total reaction cross sections, providing a criterion for the existence of a halo.
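The abstract does not reproduce the Glauber formulas; for orientation, the standard optical-limit expression for the total reaction cross section, which the quantum and Coulomb corrections then modify, is:

```latex
% Optical-limit Glauber expression (the standard starting point;
% the thesis' corrected formulas are not reproduced here):
\begin{equation}
  \sigma_R = 2\pi \int_0^{\infty} b\,\bigl[1 - T(b)\bigr]\,db,
  \qquad
  T(b) = \exp\!\Bigl[-\sigma_{NN}\!\int d^{2}s\;
      \bar{\rho}_P(\mathbf{s})\,\bar{\rho}_T(\mathbf{b}-\mathbf{s})\Bigr],
\end{equation}
```

where the thickness functions are the z-integrated densities of projectile and target, ρ̄(s) = ∫ ρ(√(s² + z²)) dz, and σ_NN is the nucleon-nucleon cross section.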
Abstract:
The reaction mechanisms of intermediate-energy heavy-ion collisions and the properties of the highly excited hot nuclei they form are an important field of intermediate-energy heavy-ion physics, and the study of isospin effects in the properties of hot nuclei is one of its focal points. Reaction systems with different N/Z ratios were selected to study these effects. This thesis involves three pairs, six reaction systems in all: 55 MeV/u 40Ar + 58,64Ni, 30 MeV/u 40Ar + 112,124Sn, and 35 MeV/u 36Ar + 112,124Sn, with N/Z ratios of 1.13/1.26, 1.24/1.41, and 1.18/1.35, respectively. The isospin effects in the properties of the highly excited hot nuclei of these three pairs were studied from the perspectives of charged-particle multiplicity, relative state-population nuclear temperatures, and correlation functions. In the 55 MeV/u 40Ar + 58,64Ni reactions, charged-particle multiplicities were measured with the Lanzhou 4π charged-particle detector array, and the dependence of the He and intermediate-mass-fragment (IMF) yields on the system isospin, on the impact parameter (the violence of the collision), and on the excitation energy was studied. For both systems, the He fraction of the charged-particle multiplicity increases with multiplicity, whereas the IMF fraction first increases and then decreases. Although the two systems have the same nuclear charge, the fractions of He and IMFs in the multiplicity show a clear isospin dependence. In the 30 MeV/u 40Ar + 112,124Sn and 35 MeV/u 36Ar + 112,124Sn reactions, small-angle correlated particles were measured with a 13-element telescope array. Relative state-population temperatures of the excited hot nuclei in the 30 MeV/u 40Ar + 112,124Sn reactions were extracted from α-particle correlation functions: 4.18 +0.28/-0.21 MeV for 40Ar + 112Sn and 4.10 +0.22/-0.20 MeV for 40Ar + 124Sn. Examining the relation between the state-population temperature and particle energy, the emission temperature of both systems decreases with increasing particle energy: in the neutron-deficient 40Ar + 112Sn system from 5.13 +0.30/-0.26 MeV at low energy to 3.87 +0.37/-0.29 MeV at high energy, and in the neutron-rich 40Ar + 124Sn system from 5.39 +0.30/-0.26 MeV at low energy to 3.32 MeV at high energy. The isospin dependence of these state-population temperatures is discussed. In the 35 MeV/u 36Ar + 112,124Sn reactions, reduced-velocity correlation functions of IMFs were extracted. The IMF correlation function of the more neutron-rich 36Ar + 124Sn system shows stronger anticorrelation at small reduced velocity, indicating that the mean IMF emission time of the 36Ar + 124Sn system is shorter. With the MENEKA code, the mean IMF emission times of the two systems were extracted: about 150 fm/c in the 36Ar + 112Sn reaction, and somewhat shorter, about 120 fm/c, in the 36Ar + 124Sn reaction. Gating on the total energy/momentum per nucleon of the correlated IMFs, the low-energy IMF correlation functions show almost no difference, while the high-energy ones differ more markedly at small reduced velocity, suggesting that the isospin effect in the IMF correlation functions may originate from early IMF emission. For further information, the IMF emission times under the high-momentum gate were extracted; they are shorter than the mean emission times: about 100 fm/c for high-energy IMFs in the 36Ar + 112Sn reaction, and shorter still, about 50 fm/c, in the 36Ar + 124Sn reaction.
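The reduced velocity used in such IMF-IMF correlation analyses is conventionally defined as follows (the abstract does not restate it):

```latex
% Conventional reduced velocity for fragment-fragment correlations.
\begin{equation}
  v_{\mathrm{red}} = \frac{v_{\mathrm{rel}}}{\sqrt{Z_1 + Z_2}},
  \qquad v_{\mathrm{rel}} = \lvert \mathbf{v}_1 - \mathbf{v}_2 \rvert,
\end{equation}
```

where Z1 and Z2 are the fragment charges; plotting 1 + R(v_red) removes the trivial Coulomb scaling so that different fragment pairs can be combined.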
Abstract:
The module structure of a reconfigurable modular robot is studied, and seven functional modules are designed, including three single-degree-of-freedom joint modules, two link modules, and two auxiliary modules. All modules are functionally independent, and the connecting interface of every module is designed as a cylinder to ease reassembly and increase stiffness. Each kind of module can be designed in different size series, and these modules of different types and sizes constitute a module library. The configurations of 3-DOF serial robots are studied systematically, and the feasibility of the results is verified with fabricated experimental modules.
Abstract:
Since seafloor hydrothermal vents were first discovered in 1979, their great economic and scientific value has drawn wide attention from the scientific community. Hot fluid released from a vent mixes with the surrounding seawater to form a hydrothermal plume that can extend for several kilometers. The plume makes it possible to localize a vent only a few meters across on a seafloor several kilometers deep. Turbulence introduces uncertainty in the relation between the plume and the vent position, and multiple hydrothermal sources in the search region increase this uncertainty; this is one of the difficulties that vent prospecting must overcome. This thesis studies methods of detecting seafloor hydrothermal vents with an autonomous underwater vehicle (AUV). In a broader sense, the problem belongs to robotic chemical plume source localization (also called mobile-robot gas/odor source localization), whose potential applications include pollution and environmental monitoring, chemical plant safety, search and rescue, counter-terrorism, narcotics control, explosive ordnance disposal, and hydrothermal vent prospecting. First, the characteristics of seafloor hydrothermal plumes are studied from the viewpoint of AUV detection; plume models are analyzed, and the plume is simulated dynamically on the basis of these models. Two vent-detection strategies are then studied from the chemical-plume-source-localization perspective: gradient search and occupancy grid mapping (OGM), and the feasibility of both is verified in the simulated plume environment. The gradient-search strategy is implemented with a behavior-based method: the task is decomposed into five behaviors with transition rules between them; following these rules, the AUV switches between behaviors, tracks the direction of the plume concentration gradient, and finally reaches the concentration maximum. By redefining the binary state of each grid cell as whether it contains an active hydrothermal source, OGM can be applied to source localization; the posterior probability map obtained by fusing sensor data reflects the probability that each cell contains a source. A Bayesian-rule-based algorithm is used to fuse the sensor data. Because hydrothermal sources are sparse, the standard Bayesian method tends to overestimate the occupancy probabilities and cannot localize the sources cleanly; therefore an exact algorithm and an approximate algorithm based on the Independence of Posteriors (IP) assumption are also studied, and the advantages and disadvantages of the three algorithms are analyzed. Finally, occupancy grid mapping is applied to staged vent prospecting, using the grid map to support autonomous nested surveys.
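A minimal sketch of the Bayes-rule occupancy update described here may clarify the OGM strategy; the inverse-sensor-model probabilities below are placeholders, not the thesis' calibrated values:

```python
import numpy as np

# Hedged sketch of a per-cell Bayesian occupancy update in log-odds
# form. The sensor model values are illustrative placeholders.
P_HIT_GIVEN_SRC = 0.7    # P(plume detected | cell holds a source)
P_HIT_GIVEN_EMPTY = 0.2  # P(plume detected | cell empty)

def update_cell(log_odds, detected):
    """One Bayesian update of a single grid cell's log-odds."""
    if detected:
        ratio = P_HIT_GIVEN_SRC / P_HIT_GIVEN_EMPTY
    else:
        ratio = (1 - P_HIT_GIVEN_SRC) / (1 - P_HIT_GIVEN_EMPTY)
    return log_odds + np.log(ratio)

def probability(log_odds):
    return 1.0 / (1.0 + np.exp(-log_odds))

cell = 0.0                       # prior odds 1:1
for hit in [True, True, False]:  # a toy measurement sequence
    cell = update_cell(cell, hit)
print(f"P(source) = {probability(cell):.2f}")
```

The overestimation the abstract mentions arises because this standard update treats cells independently; the exact and IP-based algorithms it studies are corrections for that assumption.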
Abstract:
Conventional seismic attribute analysis is not only time consuming but can also yield ambiguous results, so seismic attribute optimization and multi-attribute analysis are needed. In this paper, the Fuyu oil layer in the Daqing oil field is the main object of study, and there are large differences between its seismic attributes and well logs. Under these conditions, Independent Component Analysis (ICA) and the Kohonen neural network are introduced into seismic attribute optimization and multi-attribute analysis. The main contents are as follows: (1) The prevailing method of seismic attribute compression is principal component analysis (PCA). In this work, independent component analysis (ICA), which is superficially related to PCA but much more powerful, is applied to seismic reservoir characterization. The fundamentals, algorithms, and applications of ICA are surveyed, and ICA is compared with PCA. Based on the negentropy measure of independence, the FastICA algorithm is implemented. (2) Two applications of ICA are included: first, ICA is used directly to identify sedimentary characteristics; combined with geological and well data, the ICA results can be used to predict them. Second, ICA treats the attributes as multi-dimensional random vectors: through the ICA transform, a few good new attributes can be obtained from many seismic attributes, and the attributes obtained by ICA optimization are independent. (3) The Kohonen self-organizing neural network is studied. First, the characteristics of the network's structure and algorithm are analyzed in detail, and the traditional algorithm already used in seismic work is implemented; experimental results show that it converges fast and classifies accurately. Second, because the classification is not exact enough, the boundaries are not clear, and the speed is insufficient, the self-organizing feature map algorithm is improved by introducing the frequency-sensitive principle, yielding a frequency-sensitive self-organizing feature map algorithm; experimental results show that it performs better. (4) The Kohonen self-organizing network is used to classify seismic attributes; because the algorithm integrates many kinds of seismic features, confusing conclusions can be avoided. The result can be used in dividing the seismic facies of sand groups, among other tasks. When attributes are extracted from seismic data, some useful information is lost through differencing and differentiation, but multi-attribute analysis can compensate for this loss to a certain degree.
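For concreteness, here is a hedged sketch of the one-unit, negentropy-based FastICA iteration (tanh contrast) of the kind the abstract says was implemented; variable names are illustrative and the input is assumed to be already whitened:

```python
import numpy as np

# One-unit FastICA iteration with the tanh contrast function.
# X: whitened data, shape (n_features, n_samples).
def fastica_one_unit(X, n_iter=200, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wx = w @ X
        g = np.tanh(wx)                       # nonlinearity
        g_prime = 1.0 - np.tanh(wx) ** 2      # its derivative
        w_new = (X * g).mean(axis=1) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < tol:   # converged up to sign
            return w_new
        w = w_new
    return w
```

In practice one would whiten with PCA first and deflate to obtain further components; sklearn.decomposition.FastICA packages the full procedure.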
Abstract:
In exploration geophysics, velocity analysis and migration methods other than reverse time migration are based on ray theory or the one-way wave equation, so multiples are regarded as noise and must be attenuated. Attenuating multiples is very important for structural imaging and amplitude-preserving migration, so effective prediction and attenuation of internal multiples is an interesting research topic in both theory and application. There are two wave-equation-based methods for predicting internal multiples in pre-stack data: the common focus point method and the inverse scattering series method. After comparing the two, we found four problems with the common focus point method: 1. it depends on a velocity model; 2. only the internal multiples related to one layer can be predicted at a time; 3. the computing procedure is complex; 4. it is difficult to apply in complex media. To overcome these problems, we adopt the inverse scattering series method. It too has problems: 1. the computing cost is high; 2. it is difficult to predict internal multiples at far offsets; 3. it cannot predict internal multiples in complex media. Among these, the high computing cost is the biggest barrier in field seismic processing, so I present improved 1D and 1.5D algorithms that reduce computing time. In addition, I propose a new algorithm to solve the problem that exists in subtraction, especially for surface-related multiples. The creative results of this research are as follows: 1. an improved inverse scattering series prediction algorithm for 1D, with very high computing efficiency: about twelve times faster than the old algorithm in theory, and about eighty times faster in practice owing to lower spatial complexity; 2. an improved inverse scattering series prediction algorithm for 1.5D, which moves the multiple-prediction computation from the pseudo-depth wavenumber domain to the T-X domain; the improved algorithm has higher computing efficiency, is feasible for many kinds of geometries, produces less predictive noise, and is independent of the wavelet; 3. a new subtraction algorithm, which does not try to overcome non-orthogonality but instead uses its distribution in the T-X domain to estimate the true wavelet by filtering; the method is highly effective in model tests. The improved 1D and 1.5D inverse scattering series algorithms can predict internal multiples; after filtering and subtraction among seismic traces in a time window, the internal multiples are attenuated to some degree. The proposed 1D and 1.5D algorithms have proved effective on both numerical and field data, and the new subtraction algorithm is effective on complex theoretical models.
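For reference, the standard 1D normal-incidence inverse-scattering-series internal-multiple predictor that such improved algorithms start from (the abstract does not restate it) is:

```latex
% 1D normal-incidence ISS internal-multiple predictor (standard form;
% the dissertation's improved algorithms are not reproduced here).
\begin{equation}
  b_3(k) = \int_{-\infty}^{\infty}\! dz_1\, e^{ikz_1} b_1(z_1)
           \int_{-\infty}^{z_1-\varepsilon}\! dz_2\, e^{-ikz_2} b_1(z_2)
           \int_{z_2+\varepsilon}^{\infty}\! dz_3\, e^{ikz_3} b_1(z_3),
\end{equation}
```

where b1(z) is the input data migrated to pseudo-depth with a constant reference velocity and ε is a small positive parameter enforcing the lower-higher-lower relation among the three sub-events. The nested integrals over pseudo-depth are what make the cost high, and what the improved 1D and 1.5D algorithms reorganize.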
Abstract:
The most prominent tectonic and environmental events during the Cenozoic in Asia are the uplift of the Himalaya-Tibetan plateau, aridification of the Asian interior, and the onset of the Asian monsoons. These caused more humid conditions in southeastern China and the formation of inland deserts in northwestern China. The 22 Ma eolian deposits in northern China provide an excellent terrestrial record of these environmental events. To date, many studies have focused on the geochemical character of the late Mio-Pleistocene eolian deposits; the geochemistry of the Miocene loess and soils, however, remains much less known. In this study, the elemental and Sr-Nd isotopic compositions of the eolian deposits from the Qinan (22.0 to 6.2 Ma) and Xifeng (3.5 Ma to the present) loess-soil sections were analyzed to examine grain-size effects on element concentrations and their implications for dust origin and climate. The main results are as follows: 1. The contents of Si, Na, Zr and Sr are higher in the coarser fractions, while Ti and Nb are most abundant in the 2-8 μm fractions. Al, Fe, Mg, K, Mn, Rb, Cu, Ga, Zn, V, Cr, Ni and LOI correlate clearly with grain size, being more abundant in the fine fraction, while no significant relationship is observed for Y. Based on these features, we suggest that the K2O/Al2O3 ratio can be used to address dust provenance, and that the Vogt ratio, VR = (Al2O3+K2O)/(MgO+CaO+Na2O), can be used as a chemical weathering proxy for the Miocene eolian deposits, because both are relatively independent of grain size. Meanwhile, the SiO2/Al2O3 molar ratio is the best geochemical indicator of original eolian grain size, as suggested in earlier studies. 2. Analyses of the Sr and Nd isotope compositions of last-glacial loess samples (L1), compared with data from the deserts of northern China, suggest that the Taklimakan desert is unlikely to be the main source region of the eolian dust; rather, the data indicate greater contributions from the Tengger, Badain Jaran and Qaidam deserts during the last glacial cycle. Since the geochemical compositions (major and trace elements, REE, and Sr-Nd isotopes) of loess samples spanning the past 22 Ma are broadly similar to those of L1, the data tend to suggest relatively stable dust sources with insignificant changes over the past 22 Ma. 3. Chemical weathering is stronger in the Miocene paleosol samples than in the Plio-Pleistocene ones, indicating warmer and more humid climatic conditions with a stronger summer monsoon in the Miocene. However, the weathering is typical of the Ca-Na removal stage, suggesting a climate ranging from semiarid to subhumid. These results support the notion that a semiarid to semihumid monsoonal regime had formed by the early Miocene, consistent with earlier studies.
Abstract:
The dissertation addresses the problems of signal reconstruction and data restoration in seismic data processing, taking signal representation methods as the main thread and seismic information reconstruction (signal separation and trace interpolation) as the core. On representation over natural bases, I present the fundamentals and algorithms of ICA and its original applications to the separation of natural earthquake signals and of survey seismic signals. On representation over deterministic bases, the dissertation proposes least-squares inversion regularization methods for seismic data reconstruction, sparseness constraints, preconditioned conjugate gradient (PCG) methods, and their applications to seismic deconvolution, Radon transforms, and related problems. The core content is a de-aliased reconstruction algorithm for unevenly sampled seismic data and its application to seismic interpolation. Although the dissertation discusses two cases of signal representation, they can be integrated into one framework, because both deal with the restoration of signals or information: the former reconstructs original signals from mixtures, the latter reconstructs complete data from sparse or irregular data, and both aim to provide pre- and post-processing methods for seismic pre-stack depth migration. ICA can separate original signals from mixtures, or abstract the basic structure of the analyzed data. I survey the fundamentals, algorithms, and applications of ICA and, comparing it with the KL transform, propose the concept of an independent component transform (ICT). Based on the negentropy measure of independence, I implement FastICA and improve it via the covariance matrix. After analyzing the characteristics of seismic signals, I introduce ICA into seismic signal processing, a first in the geophysical community, and implement the separation of noise from seismic signals. Synthetic and real data examples show that ICA is usable for seismic signal processing, with encouraging initial results. ICA is applied to separating earthquake converted waves from multiples in a sedimentary area, with good results, yielding a more reasonable interpretation of subsurface discontinuities; the results show the promise of ICA for geophysical signal processing. Exploiting the relationship between ICA and blind deconvolution, I survey seismic blind deconvolution and discuss the prospects of applying ICA to it, with two possible solutions. The relationship between PCA, ICA, and the wavelet transform is described, and it is proved that the reconstruction of wavelet prototype functions is a Lie group representation. In passing, an over-sampled wavelet transform is proposed to enhance seismic data resolution, validated by numerical examples. The key to pre-stack depth migration is the regularization of pre-stack seismic data, for which seismic interpolation and missing-data reconstruction are necessary procedures. I first review seismic imaging methods to argue the critical role of regularization; reviewing seismic interpolation algorithms, I then observe that de-aliased reconstruction of unevenly sampled data remains a challenge. The fundamentals of seismic reconstruction are discussed first; then sparseness-constrained least-squares inversion and a preconditioned conjugate gradient solver are studied and implemented.
Choosing a constraint term with a Cauchy distribution, I program the PCG algorithm and implement sparse seismic deconvolution and high-resolution Radon transforms by PCG, in preparation for seismic data reconstruction. In seismic interpolation, de-aliased interpolation of evenly sampled data and reconstruction of unevenly sampled data each work well separately, but they could not previously be combined. In this dissertation, a novel Fourier-transform-based method and algorithm are proposed that can reconstruct seismic data that is both unevenly sampled and aliased. I formulate band-limited data reconstruction as a minimum-norm least-squares inversion problem with an adaptive DFT-weighted norm regularization term. The inverse problem is solved by a preconditioned conjugate gradient method, which makes the solution stable and rapidly convergent. Based on the assumption that seismic data consist of a finite number of linear events, and following the sampling theorem, aliased events can be attenuated via LS weights predicted linearly from the low frequencies. Three application issues are discussed: interpolation across even gaps, filling of uneven gaps, and reconstruction of high-frequency traces from low-frequency data constrained by a few high-frequency traces. Both synthetic and real data examples show that the proposed method is valid, efficient, and applicable. The research is valuable for seismic data regularization and crosswell seismic work. To meet the data requirements of 3D shot-profile depth migration, schemes must be adopted to make the data regular and consistent with the velocity dataset. The methods of this dissertation are used to interpolate and extrapolate shot gathers instead of simply embedding zero traces, so the migration aperture is enlarged and the migration result improved. The results show the method's effectiveness and practicability.
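As a rough illustration of minimum-norm least-squares reconstruction solved by conjugate gradients, the sketch below (Python) uses a plain time-domain sampling operator and fixed per-frequency weights in place of the adaptive DFT-weighted norm; it is a simplified stand-in for the dissertation's algorithm, not a reimplementation:

```python
import numpy as np

# Hedged sketch: reconstruct a trace with missing samples as the
# minimizer of |A x - d|^2 + lam * |W x|^2, where A is a sampling
# operator and W applies per-frequency weights. CG is run on the
# normal equations; weights `w` are simple placeholders.
def reconstruct(d_obs, mask, w, lam=0.1, n_iter=50):
    n = d_obs.size
    def normal_op(x):               # (A^T A + lam * F^H diag(w) F) x
        atax = mask * x             # A^T A for a binary sampling mask
        reg = np.fft.ifft(w * np.fft.fft(x)).real
        return atax + lam * reg
    x = np.zeros(n)
    r = mask * d_obs - normal_op(x)  # residual of the normal equations
    p, rs = r.copy(), r @ r
    for _ in range(n_iter):
        ap = normal_op(p)
        alpha = rs / (p @ ap)
        x += alpha * p
        r -= alpha * ap
        rs_new = r @ r
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# toy usage: a sinusoid with 30% of samples dropped
n = 128
t = np.arange(n)
true = np.sin(2 * np.pi * 3 * t / n)
mask = (np.random.default_rng(1).random(n) > 0.3).astype(float)
w = np.ones(n); w[10:n - 10] = 10.0   # penalize high-frequency bands
rec = reconstruct(mask * true, mask, w)
```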
Abstract:
Previous research on family caregiving has revealed that caregiving has both negative and positive effects on caregivers' well-being. Based on Lawton's two-factor model, this study examines how caring for aging parents affects adult daughters' psychological well-being. According to Lawton, an objective stressor such as caregiving arouses two different kinds of subjective appraisal in caregivers, negative and positive, which in turn correlate with the negative and positive dimensions of caregivers' psychological well-being, respectively. The study had two main purposes: a) to verify both the negative and positive paths in the two-factor model and their relative independence; and b) to examine the effects of the relationship quality between caregiver and care recipient on those paths. The results are as follows: 1) Caregiving stressors significantly and positively predict caregivers' negative appraisal, but have no direct effect on their positive appraisal. 2) Caregivers' negative appraisal significantly and positively predicts their negative emotional experience, while their positive appraisal significantly and positively predicts their positive emotional experience. 3) Certain dimensions of relationship quality, including Appreciation and General Appraisal, negatively predict caregivers' negative appraisal and positively predict their positive appraisal. 4) The Appreciation dimension of relationship quality moderates the path from caregiving demands to caregiver burden, and the General Appraisal dimension moderates the path from positive appraisal to life satisfaction. From these results the researcher concludes that a) both the negative and positive paths exist in the caregiving process and are relatively independent of each other, and b) relationship quality does moderate certain paths in the model; its main effect on caregivers' experience is also significant, and more remarkable. The study explains these results in terms of coping resources: relationship quality, like many other factors, can be viewed as a resource that caregivers draw on to cope with the stress of caregiving. With more resources, caregivers tend to appraise more positively and less negatively, and vice versa. However, the resources that affect caregivers' positive appraisal, and the ways those resources work, may differ from those that affect negative appraisal.
Abstract:
Nowadays many companies carry out branding strategies, because a strong brand provides confidence and reduces risk for its consumers. Whether a brand is based on tangible products or on services, it possesses the common attributes of its category as well as its own unique attributes. A brand attribute is defined as a descriptive feature: an intrinsic characteristic, value, or benefit endowed by users of the product or service (Keller, 1993; Romaniuk, 2003). Models of brand multi-attributes are among the most studied areas of consumer psychology (Werbel, 1978), and attribute weight is one of their key concerns. Marketing practitioners also pay close attention to attribute evaluations, which bear on a company's competitiveness and on its promotion and new-product-development strategies (Green & Krieger, 1995). How, then, do brand attributes correlate with weight judgments? What characterizes the attribute-judgment reaction, and especially the weight-judgment process of consumers facing homogeneous brands? Inspired by the lexical hypothesis from personality-trait research in psychology, this study chose search-engine brands as its subject and adopted reaction time, a measure many researchers have introduced into multi-attribute decision making. Research on the independence of affect and cognition and on the primacy of affect suggests that brand attributes can be categorized into informative and affective ones; Park went further and differentiated representative and experiential attributes from functional ones, a classification that reflects the trend toward emotional branding and the brand-consumer relationship. The research comprises three parts: a survey to collect attribute words, experiment one on affective primacy, and experiment two on the correlation between weight judgment and reaction. The results are as follows. In experiment one we found: (1) affect words are not rated significantly differently from cognitive attributes, but affect words are responded to faster; (2) subjects comprehend and respond differently to functional attribute words than to representative and experiential words. In experiment two we found: (1) a significant negative correlation between attribute weight judgment and reaction time; (2) affective attributes elicit faster reactions than cognitive ones; (3) the reaction-time difference between functional and representative or experiential attributes is significant, but there is no difference between representative and experiential attributes. In sum, we conclude: (1) in word comprehension and weight judgment we observed affective primacy, even when the affective stimulus is presented as meaningful words; (2) the negative correlation between weight judgment and reaction time suggests that the more important the attribute, the quicker the reaction; (3) the reaction-time differences among functional, representative, and experiential attributes reflect the trend toward emotional branding.