929 results for independence
Abstract:
The structure of modules for a reconfigurable modular robot is studied, and seven functional modules are designed: three one-degree-of-freedom joint modules, two link modules, and two auxiliary modules. All modules are functionally independent, and the connecting interface of each module is designed as a cylindrical surface to facilitate reassembly and improve stiffness. Each type of module can be produced in a series of different sizes, and these modules of different types and sizes constitute a module library. The configurations of 3-DOF serial robots are studied systematically, and the feasibility of the results is verified with fabricated experimental modules.
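As a hedged illustration of how such a module library supports configuration synthesis, the sketch below enumerates candidate 3-DOF serial arms by interleaving three joint modules with two link modules; the module names and the simple interleaving rule are assumptions for illustration, not the thesis design.

```python
from itertools import product

# Hypothetical module library (placeholder names, not the thesis' module types):
# three 1-DOF joint module types and two link module types.
joint_modules = ["joint_A", "joint_B", "joint_C"]
link_modules = ["link_A", "link_B"]

def serial_3dof_configurations():
    """Enumerate ordered module chains for a 3-DOF serial arm,
    inserting one link module between consecutive joints."""
    configs = []
    for joints in product(joint_modules, repeat=3):
        for links in product(link_modules, repeat=2):
            # chain: joint, link, joint, link, joint
            configs.append([joints[0], links[0], joints[1], links[1], joints[2]])
    return configs

if __name__ == "__main__":
    configs = serial_3dof_configurations()
    print(len(configs), "candidate configurations")  # 3^3 * 2^2 = 108
    print(configs[0])
```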
Abstract:
Since deep-sea hydrothermal vents were first discovered in 1979, their great economic and scientific value has attracted intense attention from the scientific community. Hydrothermal fluid released from a vent mixes with the surrounding seawater to form a hydrothermal plume whose extent can reach several kilometers. The existence of the plume makes it possible to locate a vent only a few meters across on a seabed several kilometers deep. Turbulence, however, introduces uncertainty in the relationship between the plume and the vent location, and the presence of multiple hydrothermal sources in the search area increases this uncertainty; this is one of the difficulties that hydrothermal vent prospecting must overcome. This thesis studies methods for detecting seabed hydrothermal vents with an AUV. In a broader sense the problem belongs to robotic chemical plume source localization (also called gas/odor source localization for mobile robots), whose potential applications include pollution and environmental monitoring, chemical plant safety, search and rescue, counter-terrorism, narcotics control, explosive ordnance disposal, and hydrothermal vent prospecting. First, the characteristics of hydrothermal plumes are studied from the perspective of AUV detection; plume models are analyzed and a dynamic plume simulation is built from them. From the perspective of chemical plume source localization, two vent-prospecting strategies are studied: a gradient-search strategy and an occupancy grid mapping (OGM) strategy, and both are validated in the simulated plume environment. The gradient-search strategy is implemented with a behavior-based approach: the task is decomposed into five behaviors with transition rules between them, and the AUV switches among behaviors according to these rules, following the direction of the plume concentration gradient until it reaches the concentration maximum. By redefining the binary state of each grid cell as whether it contains an active hydrothermal source, OGM can be applied to source localization; the posterior probability map obtained by fusing sensor data reflects the likelihood that each cell contains a source. A Bayesian-rule-based algorithm is used to fuse the sensor data. Because hydrothermal sources are sparse, the standard Bayesian method tends to overestimate the occupancy probability of cells and cannot localize the sources clearly. Therefore an exact algorithm and an approximate algorithm based on the Independence of Posteriors (IP) assumption are also studied, and the advantages and disadvantages of the three algorithms are analyzed. Finally, occupancy grid mapping is applied to staged hydrothermal vent prospecting, with the grid map used to support nested autonomy in the survey.
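The following is a minimal sketch of the standard per-cell Bayesian occupancy-grid update mentioned above, written in log-odds form; the inverse sensor model, grid size and prior are placeholder assumptions, and the exact and IP-based algorithms studied in the thesis are not reproduced here.

```python
import numpy as np

# Hypothetical inverse sensor model: probability of a plume detection (z = 1)
# given that the cell does / does not contain an active vent.
P_DETECT_GIVEN_SOURCE = 0.7
P_DETECT_GIVEN_EMPTY = 0.2

def update_log_odds(log_odds, cell, detected):
    """Standard per-cell Bayesian update in log-odds form.
    With sparse sources this tends to over-estimate occupancy,
    which is the motivation for the exact and IP algorithms."""
    if detected:
        like_ratio = P_DETECT_GIVEN_SOURCE / P_DETECT_GIVEN_EMPTY
    else:
        like_ratio = (1 - P_DETECT_GIVEN_SOURCE) / (1 - P_DETECT_GIVEN_EMPTY)
    log_odds[cell] += np.log(like_ratio)
    return log_odds

# 20 x 20 grid, prior occupancy probability 0.05 (sources are rare).
grid = np.full((20, 20), np.log(0.05 / 0.95))
grid = update_log_odds(grid, (10, 12), detected=True)
posterior = 1.0 / (1.0 + np.exp(-grid))  # back to probabilities
```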
Abstract:
Conventional seismic attribute analysis is not only time consuming but can also yield ambiguous results, so seismic attribute optimization and multi-attribute analysis are needed. In this paper, the Fuyu oil layer in the Daqing oil field is the main study object, and there are large differences between its seismic attributes and well logs. Under these conditions, Independent Component Analysis (ICA) and the Kohonen neural network are introduced for seismic attribute optimization and multi-attribute analysis. The main contents are as follows: (1) At present the main method of seismic attribute compression is principal component analysis (PCA). In this work, independent component analysis (ICA), which is superficially related to PCA but much more powerful, is applied to seismic reservoir characterization. The fundamentals, algorithms and applications of ICA are surveyed, and ICA is compared with PCA. On the basis of the negentropy measure of independence, the FastICA algorithm is implemented. (2) Two ICA applications are included. First, ICA is used directly to identify sedimentary characteristics; combined with geological and well data, the ICA results can be used to predict sedimentary characteristics. Second, ICA treats many attributes as multi-dimensional random vectors; through the ICA transform, a few good new attributes can be obtained from a large set of seismic attributes, and the attributes obtained from ICA optimization are independent. (3) The Kohonen self-organizing neural network is studied. First, the characteristics of the network's structure and algorithm are analyzed in detail, and the traditional algorithm already used in seismology is implemented; experimental results show that the Kohonen self-organizing network converges quickly and classifies accurately. Second, because the classification results are not exact enough, the class boundaries are not sharp and the convergence is not fast enough, the self-organizing feature map algorithm is improved by introducing the frequency-sensitive principle, yielding a frequency-sensitive self-organizing feature map algorithm; experimental results show that it performs better. (4) The Kohonen self-organizing neural network is used to classify seismic attributes; because the algorithm integrates many kinds of seismic features, confusing conclusions can be avoided, and the result can be used, for example, in dividing the seismic facies of sand groups. When attributes are extracted from seismic data, some useful information is lost through differencing and differentiation, but multi-attribute analysis can compensate for this loss to a certain degree.
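A minimal one-unit FastICA sketch using the negentropy-based update with the common tanh contrast function; this is a generic illustration rather than the thesis implementation, and the mixed "attribute" data here are a random placeholder.

```python
import numpy as np

def fastica_one_unit(X_white, n_iter=200, tol=1e-6):
    """Extract one independent component from whitened data X_white
    (rows = variables, columns = samples) by maximizing negentropy
    with the tanh contrast function."""
    n, _ = X_white.shape
    rng = np.random.default_rng(42)
    w = rng.standard_normal(n)
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wx = w @ X_white
        g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
        w_new = (X_white * g).mean(axis=1) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < tol:   # converged (up to sign)
            return w_new
        w = w_new
    return w

# Placeholder data: four non-Gaussian sources mixed into four "attributes".
rng = np.random.default_rng(0)
S = rng.laplace(size=(4, 2000))
X = rng.standard_normal((4, 4)) @ S

# Whitening (zero mean, identity covariance) is required before FastICA.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
X_white = E @ np.diag(d ** -0.5) @ E.T @ X

w = fastica_one_unit(X_white)
component = w @ X_white   # one ICA-optimized "attribute"
```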
Abstract:
In exploration geophysics, velocity analysis and all migration methods except reverse time migration are based on ray theory or the one-way wave equation, so multiples are regarded as noise and must be attenuated. Attenuating multiples is very important for structural imaging and amplitude-preserving migration, so how to predict and attenuate internal multiples effectively is an interesting research topic in both theory and application. There are two wave-equation-based methods for predicting internal multiples in pre-stack data: the common focus point method and the inverse scattering series method. After comparing the two, we found four problems with the common focus point method: 1. it depends on the velocity model; 2. only internal multiples related to one layer can be predicted at a time; 3. the computing procedure is complex; 4. it is difficult to apply in complex media. To overcome these problems, we adopt the inverse scattering series method. However, the inverse scattering series method also has problems: 1. the computing cost is high; 2. it is difficult to predict internal multiples at far offsets; 3. it cannot predict internal multiples in complex media. Among these, the high computing cost is the biggest barrier in field seismic processing, so I present improved 1D and 1.5D algorithms that reduce computing time. In addition, I propose a new algorithm to solve the problem that arises in subtraction, especially for surface-related multiples. The main contributions of this research are as follows: 1. an improved 1D inverse scattering series prediction algorithm is derived; it has very high computing efficiency, being about twelve times faster than the old algorithm in theory and about eighty times faster in practice owing to its lower spatial complexity; 2. an improved 1.5D inverse scattering series prediction algorithm is derived; it moves the computation from the pseudo-depth wavenumber domain to the TX domain for predicting multiples, and it offers higher computing efficiency, applicability to many kinds of geometries, lower predictive noise and independence from the wavelet; 3. a new subtraction algorithm is proposed; rather than trying to overcome nonorthogonality, it uses the distribution of the nonorthogonality in the TX domain to estimate the true wavelet with a filtering method, and it performs very well in model tests. The improved 1D and 1.5D inverse scattering series algorithms can predict internal multiples, and after filtering and subtraction among seismic traces within a time window, internal multiples can be attenuated to some degree. The proposed 1D and 1.5D algorithms are shown to be effective on numerical and field data, and the new subtraction algorithm is effective on complex synthetic models.
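For context on the subtraction step, the sketch below shows a generic least-squares matching-filter subtraction of predicted multiples from a trace; this is a standard adaptive-subtraction illustration, not the filtering-based wavelet-estimation algorithm proposed here, and the synthetic trace is a placeholder.

```python
import numpy as np
from scipy.linalg import toeplitz

def matching_filter_subtract(data, predicted, filt_len=21):
    """Estimate a short filter f minimizing ||data - predicted * f||^2
    (convolution), then subtract the matched prediction from the data."""
    # Convolution matrix of the predicted multiples, shape (n, filt_len).
    A = toeplitz(predicted, np.r_[predicted[0], np.zeros(filt_len - 1)])
    f, *_ = np.linalg.lstsq(A, data, rcond=None)
    return data - A @ f, f

# Synthetic single trace: weak primaries plus an imperfectly predicted multiple.
rng = np.random.default_rng(0)
nt = 500
primaries = rng.standard_normal(nt) * 0.1
true_multiple = np.zeros(nt)
true_multiple[200] = 1.0
predicted = np.zeros(nt)
predicted[198] = 0.8            # prediction slightly mis-timed and mis-scaled
trace = primaries + true_multiple

residual, f = matching_filter_subtract(trace, predicted)
```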
Abstract:
The most prominent tectonic and environmental events during the Cenozoic in Asia are the uplift of the Himalaya-Tibetan plateau, aridification of the Asian interior, and onset of the Asian monsoons. These led to more humid conditions in southeastern China and the formation of inland deserts in northwestern China. The 22 Ma eolian deposits in northern China provide an excellent terrestrial record of these environmental events. To date, many studies have focused on the geochemical character of the late Miocene-Pleistocene eolian deposits; however, the geochemistry of the Miocene loess and soils is still much less well known. In this study, the elemental and Sr-Nd isotopic compositions of the eolian deposits from the Qinan (22.0 to 6.2 Ma) and Xifeng (3.5 Ma to present) loess-soil sections were analyzed to examine grain-size effects on element concentrations and their implications for dust origin and climate. The main results are as follows: 1. The contents of Si, Na, Zr and Sr are higher in the coarser fractions, while Ti and Nb have their highest contents in the 2-8 μm fraction. Al, Fe, Mg, K, Mn, Rb, Cu, Ga, Zn, V, Cr, Ni and LOI show clear relationships with grain size, being more abundant in the fine fraction, whereas no significant relationship is observed for Y. Based on these features, we suggest that the K2O/Al2O3 ratio can be used to address dust provenance and that VR (Vogt ratio = (Al2O3+K2O)/(MgO+CaO+Na2O)) can be used as a chemical weathering proxy for the Miocene eolian deposits, because both are relatively independent of grain size. Meanwhile, the SiO2/Al2O3 molar ratio is the best geochemical indicator of original eolian grain size, as suggested in earlier studies. 2. Analyses of the Sr and Nd isotope compositions of the last glacial loess samples (L1) and comparison with data from the deserts of northern China suggest that the Taklimakan desert is unlikely to be the main source region of the eolian dust; instead, the data suggest greater contributions from the Tengger, Badain Jaran and Qaidam deserts during the last glacial cycle. Since the geochemical compositions (major and trace elements, REE, and Sr-Nd isotopes) of loess samples over the past 22 Ma are broadly similar to those of L1, the data suggest relatively stable dust sources with insignificant changes over the past 22 Ma. 3. Chemical weathering is stronger for the Miocene paleosol samples than for the Plio-Pleistocene ones, indicating warmer and more humid climatic conditions with a stronger summer monsoon in the Miocene. However, the chemical weathering remains typical of the Ca-Na removal stage, suggesting a climate ranging from semiarid to subhumid. These results support the notion that a semi-arid to semi-humid monsoonal regime had formed by the early Miocene, consistent with earlier studies.
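A small sketch of the proxies defined above, computing the Vogt ratio and the SiO2/Al2O3 molar ratio from major-oxide concentrations in wt%; the sample composition is a placeholder, not measured data from the Qinan or Xifeng sections.

```python
# Molecular weights of the oxides (g/mol).
MW = {"SiO2": 60.08, "Al2O3": 101.96, "K2O": 94.20, "Na2O": 61.98,
      "MgO": 40.30, "CaO": 56.08}

def molar(oxides):
    """Convert oxide concentrations in wt% to molar proportions."""
    return {k: v / MW[k] for k, v in oxides.items()}

def vogt_ratio(ox):
    """VR = (Al2O3 + K2O) / (MgO + CaO + Na2O) on a molar basis:
    higher VR indicates stronger chemical weathering."""
    m = molar(ox)
    return (m["Al2O3"] + m["K2O"]) / (m["MgO"] + m["CaO"] + m["Na2O"])

def sio2_al2o3(ox):
    """Molar SiO2/Al2O3 ratio, used as an original grain-size indicator."""
    m = molar(ox)
    return m["SiO2"] / m["Al2O3"]

# Placeholder loess composition in wt%.
sample = {"SiO2": 62.0, "Al2O3": 12.5, "K2O": 2.5, "Na2O": 1.5,
          "MgO": 2.3, "CaO": 6.0}
print(vogt_ratio(sample), sio2_al2o3(sample))
```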
Abstract:
The dissertation addresses the problems of signal reconstruction and data restoration in seismic data processing, taking signal representation methods as the main thread and seismic information reconstruction (signal separation and trace interpolation) as the core. For signal representation on natural bases, I present the fundamentals and algorithms of ICA and its original applications to the separation of natural earthquake signals and of exploration seismic signals. For signal representation on deterministic bases, the dissertation develops least-squares inversion regularization methods for seismic data reconstruction, sparseness constraints, preconditioned conjugate gradient methods, and their applications to seismic deconvolution, the Radon transform, etc. The core content is a de-aliasing reconstruction algorithm for unevenly sampled seismic data and its application to seismic interpolation. Although the dissertation discusses two cases of signal representation, they can be integrated into one framework, because both deal with signal or information restoration: the former reconstructs original signals from mixed signals, and the latter reconstructs complete data from sparse or irregular data. Their common goal is to provide pre- and post-processing methods for seismic pre-stack depth migration. ICA can separate original signals from their mixtures, or extract the basic structure of the analyzed data. I survey the fundamentals, algorithms and applications of ICA and, by comparison with the KL transform, propose the concept of an independent component transformation (ICT). On the basis of the negentropy measure of independence, I implement FastICA and improve it using the covariance matrix. After analyzing the characteristics of seismic signals, I introduce ICA into seismic signal processing for the first time in the geophysical community and implement the separation of noise from seismic signals. Synthetic and real data examples show the usefulness of ICA in seismic signal processing, and initial results are achieved. ICA is also applied to separating earthquake converted waves from multiples in a sedimentary area, with good results, leading to a more reasonable interpretation of subsurface discontinuities. These results show the prospects of applying ICA to geophysical signal processing. Drawing on the relationship between ICA and blind deconvolution, I survey seismic blind deconvolution and discuss the prospects of applying ICA to it, with two possible solutions. The relationship among PCA, ICA and the wavelet transform is established, and it is proved that the reconstruction of wavelet prototype functions is a Lie group representation. In addition, an over-sampled wavelet transform is proposed to enhance seismic data resolution and is validated by numerical examples. The key to pre-stack depth migration is the regularization of pre-stack seismic data, for which seismic interpolation and missing-data reconstruction are necessary steps. I first review seismic imaging methods to argue for the critical role of regularization, and a review of seismic interpolation algorithms shows that de-aliased reconstruction of unevenly sampled data remains a challenge. The fundamentals of seismic reconstruction are discussed first; then sparseness-constrained least-squares inversion with a preconditioned conjugate gradient solver is studied and implemented.
Choosing a Cauchy-distributed constraint term, I programmed the PCG algorithm and implemented sparse seismic deconvolution and high-resolution Radon transformation with PCG, in preparation for seismic data reconstruction. For seismic interpolation, de-aliased interpolation of evenly sampled data and reconstruction of unevenly sampled data each work well on their own, but existing methods cannot combine the two. In this work, a novel Fourier-transform-based method and algorithm are proposed that can reconstruct seismic data that are both unevenly sampled and aliased. I formulate band-limited data reconstruction as a minimum-norm least-squares inversion problem with an adaptive, DFT-weighted norm regularization term. The inverse problem is solved by the preconditioned conjugate gradient method, which makes the solution stable and rapidly convergent. Based on the assumption that seismic data consist of a finite number of linear events, and following the sampling theorem, aliased events can be attenuated via least-squares weights predicted linearly from the low frequencies. Three applications are discussed: interpolation across evenly spaced gaps, filling of uneven gaps, and reconstruction of high-frequency traces from low-frequency data constrained by a few high-frequency traces. Both synthetic and real data examples show that the proposed method is valid, efficient and applicable. The research is valuable for seismic data regularization and cross-well seismics. To meet the data requirements of 3D shot-profile depth migration, schemes must be adopted to make the data regular and consistent with the velocity dataset. The methods of this dissertation are used to interpolate and extrapolate the shot gathers instead of simply embedding zero traces, so the migration aperture is enlarged and the migration result is improved. The results show the effectiveness and practicability of the approach.
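A minimal sketch of the kind of Fourier-domain, weighted minimum-norm reconstruction described above: missing samples of a band-limited signal are recovered by conjugate gradients on the regularized normal equations. The fixed band-pass weights stand in for the adaptive DFT weights of the dissertation, plain (unpreconditioned) CG stands in for the PCG solver, and the 1D sampling mask is a toy example.

```python
import numpy as np

def cg(apply_A, b, n_iter=200, tol=1e-10):
    """Plain conjugate gradients for A x = b, where A is symmetric positive
    definite and available only as the operator apply_A."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 128
t = np.arange(n)
true = np.sin(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 7 * t / n)

rng = np.random.default_rng(1)
mask = (rng.random(n) < 0.6).astype(float)   # irregular sampling: ~60% kept
d = mask * true                              # observed data, zeros where missing

# Frequency-domain weights: a crude stand-in for adaptive DFT weights --
# light penalty inside the assumed signal band, heavy penalty outside it.
freqs = np.fft.fftfreq(n)
w2 = np.where(np.abs(freqs) <= 10.0 / n, 1e-3, 1e3)
lam = 1.0

def apply_A(x):
    # Normal-equation operator (M^T M + lam * F^H W^2 F) x with M = diag(mask).
    return mask * x + lam * np.fft.ifft(w2 * np.fft.fft(x)).real

x_rec = cg(apply_A, mask * d)                # reconstructed full signal
```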
Abstract:
Previous research on family caregiving has revealed that caregiving has both negative and positive effects on caregivers' well-being. Based on Lawton's two-factor model, this study examines how caring for elderly parents affects adult daughters' psychological well-being. According to Lawton, objective stressors such as caregiving arouse two different kinds of subjective appraisal in caregivers, negative appraisal and positive appraisal, which in turn correlate with the negative and positive dimensions of caregivers' psychological well-being, respectively. The study had two main purposes: a) to verify both the negative and positive paths in the two-factor model and their relative independence; and b) to examine the effects of relationship quality between caregiver and care recipient on those paths. The results are as follows: 1) Caregiving stressors significantly and positively predict caregivers' negative appraisal, but have no direct effect on caregivers' positive appraisal. 2) Caregivers' negative appraisal significantly and positively predicts their negative emotional experience, while caregivers' positive appraisal significantly and positively predicts their positive emotional experience. 3) Certain dimensions of relationship quality, including Appreciation and General Appraisal, significantly and negatively predict caregivers' negative appraisal, and significantly and positively predict caregivers' positive appraisal. 4) The Appreciation dimension of relationship quality moderates the path from caregiving demands to caregivers' burden, and the General Appraisal dimension moderates the path from caregivers' positive appraisal to life satisfaction. Based on these results, the researcher concludes that a) both the negative and positive paths exist in the caregiving process and are relatively independent of each other, and b) relationship quality does moderate certain paths in the model; meanwhile, the main effect of relationship quality on caregivers' experience is also significant and even more notable. The study attempts to explain these results in terms of coping resources: relationship quality and many other factors can be viewed as resources that caregivers use to cope with the stress of caregiving. With more resources, caregivers tend to appraise the situation more positively and less negatively, and vice versa. However, the resources that affect caregivers' positive appraisal, and the ways they work, may differ from those that affect caregivers' negative appraisal.
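As a hedged illustration of the moderation analyses reported above, the sketch below tests an interaction term in an ordinary least-squares regression; the variable names and simulated data are placeholders, not the study's measures.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
demands = rng.standard_normal(n)        # caregiving demands (standardized)
appreciation = rng.standard_normal(n)   # relationship quality: Appreciation
# Placeholder outcome: main effects plus a demands x appreciation interaction.
burden = (0.5 * demands - 0.3 * appreciation
          - 0.2 * demands * appreciation
          + rng.standard_normal(n) * 0.5)

# Design matrix with the interaction term; its coefficient is the moderation effect.
X = np.column_stack([np.ones(n), demands, appreciation, demands * appreciation])
beta, *_ = np.linalg.lstsq(X, burden, rcond=None)
print(dict(zip(["intercept", "demands", "appreciation", "interaction"], beta)))
```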
Abstract:
Nowadays many companies have carried out branding strategies, because a strong brand usually provides confidence and reduces risk for its consumers. Whether a brand is based on tangible products or on services, it possesses the common attributes of its category as well as its own unique attributes. A brand attribute is defined as a descriptive feature: an intrinsic characteristic, value or benefit endowed by users of the product or service (Keller, 1993; Romaniuk, 2003). Research on multi-attribute brand models is one of the most studied areas of consumer psychology (Werbel, 1978), and attribute weight is one of its key concerns. Marketing practitioners also pay much attention to attribute evaluations, because such evaluations are relevant to a company's competitiveness and to its promotion and new-product-development strategies (Green & Krieger, 1995). How, then, do brand attributes correlate with weight judgments? What characterizes the attribute judgment response? In particular, what characterizes the attribute weight judgment process of consumers facing highly homogeneous brands? Inspired by the lexical hypothesis from research on personality traits, this study chose search engine brands as its subject and adopted reaction time, a measure that many researchers have introduced into multi-attribute decision making. Research on the independence of affect and cognition and on the primacy of affect suggests that brand attributes can be categorized into informative and affective ones; meanwhile, Park went further and differentiated representative and experiential attributes from functional ones, a classification that reflects the trend toward emotional branding and brand-consumer relationships. The research comprises three parts: a survey to collect attribute words, experiment one on affective primacy, and experiment two on the correlation between weight judgment and reaction. In experiment one we found: (1) affect words are not rated significantly differently from cognitive attribute words, but affect words are responded to faster than cognitive ones; (2) subjects comprehend and respond differently to functional attribute words than to representative and experiential words. In experiment two we found: (1) a significant negative correlation between attribute weight judgments and reaction time; (2) affective attributes elicit faster reactions than cognitive ones; (3) the reaction time difference between functional and representative or experiential attributes is significant, but there is no difference between representative and experiential attributes. In sum, we conclude that: (1) in word comprehension and weight judgment, affective primacy is observed even when the affective stimulus is presented as meaningful words; (2) the negative correlation between weight judgment and reaction time suggests that the more important the attribute, the quicker the reaction; (3) the reaction time differences among functional, representative and experiential attributes reflect the trend toward emotional branding.
Abstract:
As a mood, depression is one of the emotions that people commonly experience, and depressive disorder is one of the most common mental diseases: about 100 million people in the world are affected by depression every year. Depression has therefore long been investigated widely in medicine, psychology and sociology, yet many theoretical problems remain to be solved. Judging from the latest literature, depression theory is tending to become more complex. Most earlier theories focused on the relation between a single factor and depression; because depressed individuals have varied characteristics, the factors that cause depression differ, and each factor explains only part of the variance in depression, these earlier theories are limited. As knowledge about depression has accumulated, the view that depression is caused by multiple factors has become clearer, and there is a tendency to integrate these cooperating factors into a single model when developing depression theory. In the present study, the depression status of 1625 middle school students from junior 1 to senior 3 was measured using a Depression Scale for Middle-School Students developed by the authors. From the perspective of depressive mood, the study explored depressive diathesis, including attributional style, personality, coping style, and self. The relations among depressive diathesis, stress and depression were analyzed, as were the relations between depression and school life adaptation and between depression and family cohesion and adaptation, from an environmental point of view. Finally, the relations among environment, stress and depressive diathesis were examined using covariance structure modelling. Synthesizing the results, the following conclusions were drawn: (1) The development of depression in middle school students shows grade characteristics: the degree of depression increases with grade, probably mainly because the stress that students experience increases and self-acceptance decreases with grade. (2) High depressive diathesis differs from low depressive diathesis. High depressive diathesis is characterized by attributing failure to ability or background, low capacity for status, low sociability, low independence, strong self-blame, and more illusions; low depressive diathesis is characterized by not attributing failure to ability or background, high capacity for status, high sociability, high independence, high self-acceptance, use of problem-solving coping strategies when facing difficulties, less self-blame, and fewer illusions. Individuals with high depressive diathesis show serious depression, and individuals with low depressive diathesis show mild depression. (3) Depressive diathesis has a cumulative effect on depression: the lower the depressive diathesis, the milder the depression; the higher the depressive diathesis, the more serious the depression. (4) Depressive diathesis can predict both present and future depression, and predicts present depression more effectively than future depression. (5) Individuals with different depressive diatheses experience different levels of stress: the higher the depressive diathesis, the higher the stress experienced, and the lower the diathesis, the lower the stress. (6) Life event pressure is correlated with depression, predicts part of the variance in depression, and has a cumulative effect on depression:
the more life events and the higher the life event pressure, the more serious the depression an individual will experience. (7) There is a high correlation between depression and school life adaptation, which can predict depression. (8) There is a high correlation between depression and family cohesion and adaptation, which can also predict depression. (9) Environment has more effect on diathesis than on stress, diathesis has more effect on depression than stress does, and past depression can predict future depression. This study enlarged the domain of depressive diathesis to include attributional style, personality, coping style and self, analyzed as a whole, and enriched the content of diathesis-stress theory; these are its theoretical contributions. Practically, the study provides a framework for mental health education curricula in high school and guidelines for the prevention and treatment of depression.
Abstract:
The actor message-passing model of concurrent computation has inspired new ideas in the areas of knowledge-based systems, programming languages and their semantics, and computer systems architecture. The model itself grew out of computer languages such as Planner, Smalltalk, and Simula, and out of the use of continuations to interpret imperative constructs within λ-calculus. The mathematical content of the model has been developed by Carl Hewitt, Irene Greif, Henry Baker, and Giuseppe Attardi. This thesis extends and unifies their work through the following observations. The ordering laws postulated by Hewitt and Baker can be proved using a notion of global time. The most general ordering laws are in fact equivalent to an axiom of realizability in global time. Independence results suggest that some notion of global time is essential to any model of concurrent computation. Since nondeterministic concurrency is more fundamental than deterministic sequential computation, there may be no need to take fixed points in the underlying domain of a power domain. Power domains built from incomplete domains can solve the problem of providing a fixed point semantics for a class of nondeterministic programming languages in which a fair merge can be written. The event diagrams of Greif's behavioral semantics, augmented by Baker's pending events, form an incomplete domain. Its power domain is the semantic domain in which programs written in actor-based languages are assigned meanings. This denotational semantics is compatible with behavioral semantics. The locality laws postulated by Hewitt and Baker may be proved for the semantics of an actor-based language. Altering the semantics slightly can falsify the locality laws. The locality laws thus constrain what counts as an actor semantics.
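As a toy illustration of actor-style message passing in which every message arrival is stamped by a single counter playing the role of global time (a sketch of the general idea, not Hewitt's or Clinger's formal model):

```python
from collections import deque

# Each actor has a mailbox and a behavior; a global tick counter stands in
# for the "global time" used to totally order arrival events.
class Actor:
    def __init__(self, name, behavior):
        self.name, self.behavior = name, behavior
        self.mailbox = deque()

    def send(self, msg):
        self.mailbox.append(msg)

def run(actors, max_ticks=100):
    """Deliver pending messages one at a time; the tick at which each
    message is handled gives a total order consistent with causality."""
    log, tick = [], 0
    while tick < max_ticks and any(a.mailbox for a in actors):
        for a in actors:
            if a.mailbox:
                msg = a.mailbox.popleft()
                log.append((tick, a.name, msg))
                a.behavior(a, msg)
                tick += 1
    return log

def ping(actor, msg):
    sender, n = msg
    if n > 0 and sender is not None:
        sender.send((actor, n - 1))   # reply, decrementing a counter

a, b = Actor("a", ping), Actor("b", ping)
a.send((b, 3))                        # start a short ping-pong exchange
for event in run([a, b]):
    print(event)                      # arrival events, ordered by the clock
```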
Abstract:
A prototype presentation system base is described. It offers mechanisms, tools, and ready-made parts for building user interfaces. A general user interface model underlies the base, organized around the concept of a presentation: a visible text or graphic for conveying information. The base and model emphasize domain independence and style independence, to apply to the widest possible range of interfaces. The primitive presentation system model treats the interface as a system of processes maintaining a semantic relation between an application data base and a presentation data base, the symbolic screen description containing presentations. A presenter continually updates the presentation data base from the application data base. The user manipulates presentations with a presentation editor. A recognizer translates the user's presentation manipulation into application data base commands. The primitive presentation system can be extended to model more complex systems by attaching additional presentation systems. In order to illustrate the model's generality and descriptive capabilities, extended model structures for several existing user interfaces are discussed. The base provides support for building the application and presentation data bases, linked together into a single, uniform network, including descriptions of classes of objects as well as the objects themselves. The base provides an initial presentation data base network, graphics to continually display it, and editing functions. A variety of tools and mechanisms help create and control presenters and recognizers. To demonstrate the base's utility, three interfaces to an operating system were constructed, embodying different styles: icons, menu, and graphical annotation.
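A schematic sketch of the presenter/recognizer loop described above; the data structures and the "delete" manipulation are hypothetical simplifications of the model, for illustration only.

```python
# Toy version of the primitive presentation system model: an application
# data base, a presentation data base (symbolic screen description),
# a presenter that keeps the latter in sync with the former, and a
# recognizer that maps user manipulations back into application commands.
application_db = {"files": ["report.txt", "data.csv"]}
presentation_db = []   # list of text presentations currently "on screen"

def presenter():
    """Rebuild the presentation data base from the application data base."""
    presentation_db.clear()
    for i, name in enumerate(application_db["files"]):
        presentation_db.append({"id": i, "text": name, "kind": "label"})

def recognizer(manipulation):
    """Translate a presentation manipulation into an application command."""
    if manipulation["action"] == "delete":
        target = presentation_db[manipulation["id"]]["text"]
        application_db["files"].remove(target)   # application-level command

presenter()
recognizer({"action": "delete", "id": 0})   # user deletes the first label
presenter()                                  # presenter refreshes the screen
print(presentation_db)
```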
Abstract:
The work examines the change involving the Church in Tunisia from the period of the Protectorate to the present through the fundamental moments of independence (1956) and the signing of the 'Modus vivendi' (1964). In the first structure of the "modern" Church, a fundamental role was played by the complex figure of the French Cardinal Charles-Allemand Lavigerie who, while giving strong impulse to setting up disinterested charitable social initiatives by the congregations (Pères Blancs, Soeurs Blanches and others), also represented the ideal of the 'evangelizing' (as well as colonial) Church which, despite its declared will to avoid proselytism, almost inevitably tended to slip into it. During the French Protectorate (1881-1956) the ecclesiastic institution concentrated strongly on itself, with little heed for the sensitivity of its host population, and developed its activities as if it were in a European country. From the social standpoint, the Church was mostly involved in teaching, which followed the French model, and in health facilities. Within the Church only the Pères Blancs missionaries were sincerely committed to promoting awareness of the local context and dialogue with the Muslims. The Catholic clergy in the country linked its religious activity closely to the policy of the Protectorate, in the hope of succeeding in returning to the ancient "greatness of the African Church", as the Eucharistic Congress in Carthage in 1930 made quite clear. The Congress itself planted the first seed of the twenty-five-year struggle that led the Tunisian population to independence in 1956 and the founding of the Republic in 1957. The conquest of independence and the 'Modus vivendi' marked a profound change in the situation and led to an inversion of roles: the Catholic community was given the right to exist only on the condition that it should not interfere in Tunisian society. The political project of Bourguiba, who led the Republic from 1957 to 1987, aimed to create a strongly egalitarian society, with a separation between political and religious powers. In particular, with regard to the Church, he appeared as a secularist with no hostility towards the Catholics, who were, however, considered as "cooperators", welcome so long as they were willing to place their skills at the service of the construction of the state. Thus, within the Catholic community there was a tension between the will to stand by the country and the wish to keep a certain distance from it, without becoming an integral part of it. In this process of reflection, the role of the Second Vatican Council was fundamental: it spread the idea of a Church open to the world and to other religions, in particular to Islam, and the teaching of the Council led the congregations present in the country to accept the new condition. This new Church that emerged from the Council saw some important events in the process of "living together", of "cultural mixing" and the search for common ground between different realities. The almost contemporary arrival of Arab bishops raised awareness among the Tunisians of the existence of Christian Arabs and, at the same time, the Catholic community began to consider its faith in a different way. In the last twenty years the situation has continued to change.
Side by side with the priests present for decades or even those born there, some new congregations have begun to operate, albeit in small numbers: they have certainly revitalized the community of the faithful, but they sometimes appear more devoted to service "within" the Church than to services for the population, and are thus characterized by exterior manifestations of their religion. This sort of presence has made it possible for Bourguiba's successor, Ben Ali (president from 1987 to 2011), to practice forms of tolerance even more clearly, but always limited to formal relations; the Tunisians are still far from having a real understanding of the Catholic reality, with certain exceptions connected to relations on a personal and not structured plane, as was the case in the previous period. The arrival of a good number of young people from sub-Saharan Africa, most of all students belonging to the JCAT, and personnel of the BAD has "Africanized" the Church in Tunisia and has brought about an increase in Christians' exterior manifestations; but this is a visibility that is not blatant but discreet, with the implicit risk of the Church continuing to be perceived as a sort of exterior body, alien to the country; nor can we say, lacking proper documentation, how it will be possible to build a bridge between different cultures through the "accompaniment" of Christian wives of Tunisians. Today, the Church is living in a country that has less and less need of it; its presence in schools and in health facilities is extremely reduced, and in other sectors of social commitment, such as care for the disabled, the number of clergy involved is quite small. The 'revolution' of 2011 and the later developments up to the present have brought about another socio-political change, characterized by a climate of greater freedom but with as yet undefinable contours. This change in the political climate will inevitably have consequences for Tunisia's approach to religious and cultural minorities, but it is far too soon to discuss this on the historical and scientific planes.
Abstract:
Korosteleva-Polglase Elena, White, S., 'Political Leadership and Public Support In Belarus: Forward to the Past?', In: 'The EU and Belarus: Between Moscow and Brussels', (London: Kogan Page), pp.51-71, 2001 RAE2008
Abstract:
Marnet, Oliver, 'Behaviour and rationality in corporate governance', Journal of Economic Issues (2005) 39(3) pp.613-632 RAE2008