6 results for weak thought

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

20.00%

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters is the first step in discriminating whether a given seismic event is natural or not. In case a specific event is considered suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test: high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test.

This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information, namely the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale.

After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, without the cross-correlation. We found that the considerable geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference) is substantially reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which we can suppose a real closeness among the hypocenters, belonging to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed, or at least reduced. The introduction of the cross-correlation has not brought evident improvements to our results: the two sets of locations (with and without the application of the cross-correlation technique) are very similar to each other. This suggests that the cross-correlation has not substantially improved the precision of the manual pickings: the pickings reported by the IDC are probably good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the modest contribution of the cross-correlation, it should be remarked that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller), and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area.

At the local scale, in addition to the cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm so developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that it does not demand long processing times, so the user can immediately check the results. During a field survey, this feature makes possible a quasi-real-time check, allowing immediate optimization of the array geometry if so suggested by the results at an early stage.
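A minimal sketch (not the thesis code) of the cross-correlation step described in this abstract: estimating the relative delay between two digitized waveforms, refined below one sample by parabolic interpolation of the correlation peak, which stands in for the signal interpolation used to improve time resolution. Function names, the window, and the sampling rate are illustrative assumptions.

```python
import numpy as np
from scipy.signal import correlate

def relative_delay(trace_a, trace_b, fs):
    """Delay of trace_b relative to trace_a, in seconds.

    trace_a, trace_b : 1-D arrays holding the same seismic phase, e.g. the P
                       wave of two nearby events at one station (global scale)
                       or of one event at two array sensors (local scale).
    fs               : sampling frequency in Hz.
    """
    # Demean so the correlation is not dominated by offsets.
    a = trace_a - trace_a.mean()
    b = trace_b - trace_b.mean()

    cc = correlate(b, a, mode="full")
    lags = np.arange(-len(a) + 1, len(b))
    k = np.argmax(cc)

    # Parabolic (3-point) interpolation around the peak: a simple stand-in
    # for pushing the time resolution below one sample.
    if 0 < k < len(cc) - 1:
        y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
        frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    else:
        frac = 0.0
    return (lags[k] + frac) / fs

# Toy usage: a 10 Hz wavelet delayed by 0.1234 s between two sensors.
fs = 200.0
t = np.arange(0, 2.0, 1.0 / fs)
wavelet = np.exp(-((t - 0.5) ** 2) / 0.002) * np.sin(2 * np.pi * 10 * (t - 0.5))
delayed = np.interp(t - 0.1234, t, wavelet)
print(f"estimated delay: {relative_delay(wavelet, delayed, fs):.4f} s")
```

At the local scale, the same delay estimate between pairs of array sensors is what feeds the back azimuth and apparent velocity solution; the parabolic refinement is one common choice among several interpolation schemes.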

Relevance:

20.00%

Abstract:

Effects of the conflict between reason and passion in Bernard Mandeville's moral, economic and political thought.

My PhD dissertation focuses on Bernard Mandeville (1670-1732), a Dutch philosopher who moved to London in his late twenties. The aspect of Mandeville's thought I take into account in my research is the conflicting relation between reason and the passions, and the consequences that Mandeville's view of this conflict has for the development of his theory of human nature, which, I argue, is what grounds his moral, economic and, above all, political theory. According to Mandeville, reason is fundamentally weak. The passions influence human actions more strongly and are ultimately what motivate them. The role of reason is merely instrumental, restricted to finding appropriate means to reach the desired ends, which are capricious and inconstant, since they all come from unstable passions. Reason cannot take decisions meant to act in the long term, pursuing an object which is not selfish and temporary in nature. There is thus no possibility that men's actions aim solely at achieving a good and just society, without their interests being directly involved. The basically selfish root of every desire leads Mandeville to claim that neither benevolence nor altruism guides human behaviour. Hence he expresses a judgement on the moral character of human beings, always busy with their self-satisfaction, and hardly ever considering what would be good from a wider perspective, including other people's sake.

The anthropological features Mandeville ascribes to men are those which lead him to prefer a political system where governors are not supposed to have particular abilities, either from an intellectual or from a moral point of view, and where peace and order are preserved by the bureaucratic machine, which is meant to work with the least effort on the part of the politicians, so that no great harm can be done even by corrupt or wicked governors. This system is adopted with an eye to remedying human deficiencies: Mandeville takes into primary account, when he thinks of how to build a peaceful and functioning society, that everyone is concerned with his selfish interest, and that the rationality of a single politician, or of a group of them belonging to the same generation, cannot find a good "solution" for governing men that is able to last over the long term and to work in different ages. This implies a refusal of the Hobbesian theory of the pactum subjectionis, which has the character of a rational and definitive choice, and leads Mandeville to turn instead to the order which arises spontaneously, without any plan or rational intervention.

Relevance:

20.00%

Abstract:

Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular, I used three different samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was thought to be optimized to test the average hard X-ray (E above 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was thought to be optimized to compare the properties of low-luminosity sources to those of higher-luminosity ones and was thus also used to test the emission mechanism models; finally, the XMM-Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well-defined sample of objects with which to define the average properties of Seyfert galaxies.

Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (between ~2-100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies, in particular the photon index (~1.8), the high-energy cut-off (~290 keV), and the relative amount of cold reflection (~1.0). Moreover, the unified scheme for active galactic nuclei was positively tested. The distributions of the isotropic indicators used here (photon index, relative amount of reflection, high-energy cut-off and narrow FeK energy centroid) are similar in type I and type II objects, while the absorbing column and the iron line equivalent width differ significantly between the two classes of sources, with type II objects displaying larger absorbing columns. Taking advantage of the XMM-Newton and X-CfA samples, I also deduced from measurements that 30 to 50% of type II Seyfert galaxies are Compton thick. Confirming previous results, the narrow FeK line in Seyfert 2 galaxies is consistent with being produced in the same matter responsible for the observed obscuration. These results support the basic picture of the unified model. Moreover, the presence of an X-ray Baldwin effect in type I sources has been measured using, for the first time, the 20-100 keV luminosity (EW proportional to L(20-100)^(-0.22±0.05)). This finding suggests that the torus covering factor may be a function of source luminosity, thereby suggesting a refinement of the baseline version of the unified model itself.

Using the BeppoSAX sample, a possible correlation between the photon index and the amount of cold reflection has also been recorded in both type I and type II sources. At first glance this confirms thermal Comptonization as the most likely origin of the high-energy emission of active galactic nuclei. This relation, in fact, naturally emerges supposing that the accretion disk penetrates the central corona at different depths depending on the accretion rate (Merloni et al. 2006): the higher-accreting systems host disks extending down to the last stable orbit, while the lower-accreting systems host truncated disks. On the contrary, the study of the well-defined X-CfA sample of Seyfert galaxies has shown that the intrinsic X-ray luminosity of nearby Seyfert galaxies can span values between 10^(38-43) erg s^(-1), i.e. covering a huge range of accretion rates. The less efficient systems have been supposed to host ADAF systems without an accretion disk.
However, the study of the X-CfA sample has also shown the existence of correlations between optical emission lines and X-ray luminosity over the entire range of L_X covered by the sample. These relations are similar to the ones obtained when only high-luminosity objects are considered; thus the emission mechanism must be similar in luminous and weak systems. A possible scenario to reconcile these somewhat opposite indications is to assume that the ADAF and the two-phase mechanism co-exist, with different relative importance moving from low- to high-accretion systems (as suggested by the Gamma vs. R relation). The present data require that no abrupt transition between the two regimes is present.

As mentioned above, the possible presence of an accretion disk has been tested using samples of nearby Seyfert galaxies. Here, to investigate in depth the flow patterns close to super-massive black holes, three case-study objects for which sufficient count statistics are available have been analysed using deep X-ray observations taken with XMM-Newton. The results show that the accretion flow can differ significantly between objects when analysed in appropriate detail. For instance, the accretion disk is well established down to the last stable orbit in a Kerr system for IRAS 13197-1627, where strong light-bending effects have been measured. The accretion disk seems to form, spiraling in, within the inner ~10-30 gravitational radii in NGC 3783, where time-dependent and recursive modulation has been measured both in the continuum emission and in the broad emission line component. Finally, the accretion disk seems to be only weakly detectable in Mrk 509, with its weak broad emission line component. Blueshifted resonant absorption lines have been detected in all three objects. This seems to demonstrate that, around super-massive black holes, there is matter which is not confined in the accretion disk and moves along the line of sight with velocities as large as v~0.01-0.4c (where c is the speed of light). Whether this matter forms winds or blobs is still a matter of debate, together with the assessment of the real statistical significance of the measured absorption lines. Nonetheless, if confirmed, these phenomena are of outstanding interest because they offer new potential probes of the dynamics of the innermost regions of accretion flows, of the formation of ejecta/jets, and of the rate of kinetic energy injected by AGNs into the ISM and IGM. Future high-energy missions (such as the planned Simbol-X and IXO) will likely allow an exciting step forward in our understanding of the flow dynamics around black holes and the formation of the highest-velocity outflows.
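The X-ray Baldwin effect quoted above, EW proportional to L(20-100)^(-0.22±0.05), is a simple power law, so it reduces to a linear fit in log-log space. A minimal sketch follows; the data arrays are made up purely for illustration, and only the fitted slope mirrors the relation reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample: log10 of 20-100 keV luminosities (erg/s) and FeK
# equivalent widths (eV) drawn to follow EW = A * L^(-0.22) with scatter.
log_L = rng.uniform(42.0, 45.0, size=40)
log_EW = 2.0 - 0.22 * (log_L - 43.0) + rng.normal(0.0, 0.1, size=40)

# Least-squares fit: log10(EW) = slope * log10(L) + intercept.
slope, intercept = np.polyfit(log_L, log_EW, deg=1)
print(f"fitted slope: {slope:+.3f}  (text reports -0.22 ± 0.05)")
```

A shallower (less negative) slope would mean the line equivalent width barely depends on luminosity; the measured -0.22 is what motivates a luminosity-dependent torus covering factor.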

Relevance:

20.00%

Abstract:

Weak lensing experiments such as the future ESA-accepted mission Euclid aim to measure cosmological parameters with unprecedented accuracy. It is important to assess the precision that can be obtained in these measurements by applying analysis software to mock images that contain the many sources of noise present in real data. In this Thesis, we present a method for simulating observations that produces realistic images of the sky according to the characteristics of the instrument and of the survey. We then use these images to test the performance of the Euclid mission. In particular, we concentrate on the precision of the photometric redshift measurements, which are key data for cosmic shear tomography. We calculate the fraction of the total observed sample that must be discarded to reach the required level of precision, equal to 0.05(1+z) for a galaxy with measured redshift z, for different ancillary ground-based observations. The results highlight the importance of u-band observations, especially to discriminate between low (z < 0.5) and high (z ~ 3) redshifts, and the need for good observing sites, with seeing FWHM < 1 arcsec. We then construct an optimal filter to detect galaxy clusters in photometric catalogues of galaxies, and we test it on the COSMOS field, obtaining 27 lensing-confirmed detections. Applying this algorithm to mock Euclid data, we verify the possibility of detecting clusters with mass above 10^14.2 solar masses with a low rate of false detections.
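A minimal sketch, on made-up numbers, of the precision criterion used above: a photometric redshift is counted as acceptable when its error is within 0.05(1+z), and the rest of the sample would be discarded. The arrays z_true and z_phot and the toy error model (small scatter plus a tail of catastrophic failures, e.g. low/high redshift confusion without u-band coverage) are illustrative assumptions, not Euclid data.

```python
import numpy as np

rng = np.random.default_rng(1)

z_true = rng.uniform(0.1, 3.0, size=100_000)

# Toy photo-z errors: Gaussian scatter scaling with (1+z), plus a 5% tail
# of catastrophic outliers offset by +/-1.5 in redshift.
scatter = rng.normal(0.0, 0.03, size=z_true.size) * (1 + z_true)
catastrophic = rng.random(z_true.size) < 0.05
scatter[catastrophic] += rng.choice([-1.5, 1.5], size=catastrophic.sum())
z_phot = z_true + scatter

# Requirement: |z_phot - z_true| < 0.05 (1 + z_true).
ok = np.abs(z_phot - z_true) < 0.05 * (1 + z_true)
print(f"fraction to discard: {1 - ok.mean():.1%}")
```

In a real analysis the reference redshift comes from spectroscopy or simulation truth tables, and the discarded fraction is what the thesis tracks as a function of the available ancillary ground-based bands.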

Relevance:

20.00%

Abstract:

The focus of this dissertation is the analysis of music-related philosophical passages from the 5th century B.C. to the 2nd century B.C. It aims to provide a multifaceted view of music as a cultural phenomenon, based primarily on philological and culturological exploration rather than a technical-musicological approach. The texts from the selected period attest that mousikē had an extremely broad conceptualisation, which led to the attribution of different, sometimes completely opposite, values: from an insignificant performative practice to an activity which corresponds to the divine laws and directly affects the human soul. The discussed testimonia provide evidence of music being defined both as an exclusively acoustic phenomenon and as a philosophically significant concept that oversteps the sonic definition. Our sources clearly demonstrate that mousikē was a polysemous term: it was understood as an interdisciplinary form of art (the arts of the Muses), though it was also used to indicate exclusively instrumental music, or a philosophical concept which does not necessarily define sound as its essential quality. The aim of this dissertation is to clarify the arguments behind each of these positions, to analyse whether such different modes of conceptualisation are compatible among themselves, and to see how they fit together in explaining what was understood as music in Antiquity. In this thesis we explore the conceptual framework of mousikē and analyse what made musical thought worthy of the attention of the greatest philosophical minds. We demonstrate that it was not sound or artistic practice that was central to philosophical thought on music, but rather the embedded structural qualities that correspond to the universal proportions of the cosmic world and that are perceptible to listeners through the medium of sound.

Relevance:

20.00%

Abstract:

In the literature on philosophical practices, despite the crucial role that argumentation plays in these activities, no specific argumentative theory has ever been proposed to assist the facilitator in conducting philosophical dialogue and to enhance students' critical thinking skills. The dissertation starts from a cognitive perspective that challenges the classic Cartesian notion of rationality by focusing on the limits and biases of human reasoning. An argumentative model (WRAT, Weak Reasoning Argumentative Theory) is then outlined in order to respond to the needs of philosophical dialogue. After justifying the claim that this learning activity, among other inductive methodologies, is the most suitable for critical thinking education, I inquired into the specific goal of 'arguing' within this context by means of the tools provided by Speech Act Theory: the speaker's intention is to construct new knowledge by questioning her own and others' beliefs. The model proposed has been theorized on this assumption, starting from which the goals and, in turn, the related norms have been pinpointed. In order to include all the epistemic attitudes required to accomplish the complex task of arguing in philosophical dialogue, I needed to bring together two opposed cognitive accounts, Dual Process Theory and the Evolutionary Approach, which, although they provide incompatible descriptions of reasoning, can be integrated to provide a normative account of argumentation. The model, apart from offering a theoretical contribution to argumentation studies, is designed to be applied to the Italian educational system, in particular to classes in technical and professional high schools belonging to the newly created network Inventio. This initiative is one of the outcomes of the research project of the same name, which also includes an original Syllabus, research seminars, a monitoring action, and publications focused on introducing philosophy, in the form of workshop activities, into technical and professional schools.