117 results for TDP, Travelling Deliveryman Problem, Algoritmi di ottimizzazione


Relevance: 30.00%

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the State Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating the very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by a high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information, namely the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests on the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. At first, the algorithm was applied to the differences among the original arrival times of the P phases, so cross-correlation was not used. We found that the considerable geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference) was markedly reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which we can suppose a real closeness among the hypocenters, which belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed or at least reduced. The introduction of cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation did not substantially improve the precision of the manual pickings: the pickings reported by the IDC are probably good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the modest contribution of the cross-correlation, it should be remarked that the events included in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm so developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that it does not require long processing times, so the user can immediately check the results. During a field survey, this feature makes a quasi-real-time check possible, allowing the immediate optimization of the array geometry, if so suggested by the results at an early stage.
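As an illustration of the waveform cross-correlation used at both scales, here is a minimal sketch (in Python/NumPy; the function and the synthetic example are ours, not the thesis code) that estimates the differential arrival time between two digital waveforms, with a parabolic interpolation of the correlation peak for subsample resolution, in the spirit of the interpolation step adopted at the local scale:

```python
import numpy as np

def differential_delay(w1, w2, dt):
    """Estimate the time lag of w2 relative to w1 via cross-correlation.

    w1, w2 : 1-D arrays with the same seismic phase, either for two events
             at the same station (global scale) or for the same event at
             two sensors of the array (local scale).
    dt     : sampling interval in seconds.
    """
    w1 = w1 - w1.mean()                 # remove offsets that bias the peak
    w2 = w2 - w2.mean()
    cc = np.correlate(w2, w1, mode="full")
    k = int(np.argmax(cc))              # integer-lag correlation maximum
    if 0 < k < len(cc) - 1:             # parabolic subsample refinement
        y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
        frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    else:
        frac = 0.0
    return ((k - (len(w1) - 1)) + frac) * dt

# Synthetic check: a 5 Hz wavelet delayed by 13 ms, sampled at 100 Hz.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
sig = np.exp(-((t - 1.0) / 0.05) ** 2) * np.sin(2 * np.pi * 5 * t)
delayed = np.interp(t - 0.013, t, sig)
print(differential_delay(sig, delayed, dt))  # ~0.013 s
```

The recovered lags would feed the double differences at the global scale, or the plane-wave fit for back azimuth and apparent velocity at the local scale.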

Relevance: 30.00%

Abstract:

Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons:
• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes as compared to desktop and laptop systems.
• On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption under acceptable bounds. It is well understood both in research and industry that system configuration and management cannot be controlled efficiently relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity, and consequently about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of a middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.
Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs)
Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on the processors of a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform.
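To make the combinatorial nature of the mapping problem concrete, the following toy sketch (illustrative numbers and names; not the middleware developed in this thesis) enumerates all task-to-processor mappings of a four-task graph and picks the energy-optimal one that meets a deadline. Complete search of this kind is deterministic and optimal but exponential in the number of tasks, which is why realistic instances call for smarter complete methods or for heuristics:

```python
import itertools

# Toy instance: execution requirements (Mcycles), precedence edges,
# processor speeds (Mcycles/s) and active powers (W). All values assumed.
tasks = {"t0": 4.0, "t1": 3.0, "t2": 6.0, "t3": 2.0}
deps = [("t0", "t2"), ("t1", "t2"), ("t2", "t3")]
speed = {"p0": 200.0, "p1": 100.0}
power = {"p0": 0.8, "p1": 0.3}
deadline = 0.12  # seconds

def makespan_energy(mapping):
    """List-schedule tasks in topological order; return (makespan, energy)."""
    finish, proc_free, energy = {}, {p: 0.0 for p in speed}, 0.0
    for t in ("t0", "t1", "t2", "t3"):  # a valid topological order
        p = mapping[t]
        start = max([proc_free[p]] + [finish[u] for u, v in deps if v == t])
        dur = tasks[t] / speed[p]
        finish[t] = proc_free[p] = start + dur
        energy += power[p] * dur
    return max(finish.values()), energy

candidates = (dict(zip(tasks, combo))
              for combo in itertools.product(speed, repeat=len(tasks)))
best = min((m for m in candidates if makespan_energy(m)[0] <= deadline),
           key=lambda m: makespan_energy(m)[1])
print(best, makespan_energy(best))
```

Allocation (the mapping) and scheduling (the start times) interact: changing one task's processor shifts every successor, which is what makes the joint problem NP-hard.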
This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. It is a well-known problem in the literature: this kind of optimization problem is very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve it in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gaps, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.
Energy Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor
Despite the ever increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smart phones, portable media players, and gaming and navigation devices. There is a clear trend towards the increase of LCD size to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel matrix driving circuits and is typically proportional to the panel area; as a result, its contribution is also likely to be considerable in future mobile appliances. To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with the crystal polarization; others are aimed at decreasing the backlight level while compensating for the luminance reduction, limiting the perceived quality degradation by means of pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement a hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS.
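The principle behind the backlight autoregulation can be sketched in a few lines (a hedged model: perceived luminance is assumed proportional to backlight level times pixel transmittance, and backlight power roughly proportional to its level; the thesis implements the boost in the hardware image pipeline, not on the CPU as here):

```python
import numpy as np

def compensate(frame, dimming):
    """Boost pixel values to offset a backlight dimmed to `dimming` (0..1].

    Dividing pixel values by the dimming factor keeps perceived luminance
    unchanged except where the boost saturates at full scale; the
    saturated fraction is a rough proxy for the QoS degradation.
    """
    boosted = frame.astype(np.float32) / dimming
    saturated = np.count_nonzero(boosted > 255) / boosted.size
    return np.clip(boosted, 0, 255).astype(np.uint8), saturated

frame = np.random.randint(0, 200, (480, 640), dtype=np.uint8)  # toy frame
out, sat = compensate(frame, dimming=0.8)  # ~20% backlight power saving
print(f"saturated pixels: {sat:.2%}")      # 0% here, since 199/0.8 < 255
```

Choosing the dimming level per frame (aggressive on dark content, conservative on bright content) is what turns this compensation into an autoregulation scheme.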
The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification.
Thesis Overview
The remainder of the thesis is organized as follows. The first part is focused on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs; the methodology is based on functional simulation and full-system power estimation. Chapter 4 targets the allocation and scheduling of pipelined, stream-oriented applications on top of distributed-memory architectures with messaging support. We tackled the complexity of the problem by means of decomposition and no-good generation, and proved the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers with efficient software implementation on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques present in the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions that have been discussed throughout this dissertation.

Relevance: 30.00%

Abstract:

CHAPTER 1: FLUID-VISCOUS DAMPERS
In this chapter fluid-viscous dampers are introduced. The first section focuses on the technical characteristics of these devices, their mechanical behavior and the latest evolution of the technology with which they are equipped. In the second section we report the definitions and guidelines for the design of these devices included in some international codes. In the third section the results of experimental tests carried out by some authors on the response of these devices to external forces are discussed; for this purpose we report some technical data sheets usually supplied with the devices now available on the international market. In the third section we also show some analytic models proposed by various authors, which are able to describe efficiently the physical behavior of fluid-viscous dampers. In the last section we present some cases of application of these devices to existing structures and to newly built structures; we also show some cases in which these devices have proved useful for purposes other than the reduction of seismic actions on structures.
CHAPTER 2: DESIGN METHODS PROPOSED IN LITERATURE
In this chapter the most widespread design methods proposed in the literature for structures equipped with fluid-viscous dampers are introduced. In the first part the response of single-degree-of-freedom (SDOF) systems under a harmonic external force is studied; in the last part the response under a random external force is discussed. In the first section the equations of motion of an elastic-linear SDOF system equipped with a non-linear fluid-viscous damper undergoing a harmonic force are introduced. This differential problem is analytically quite complex and cannot be solved in closed form; therefore some authors have proposed approximate solution methods. The most widespread methods are based on equivalence principles between the non-linear device and an equivalent linear one. Operating in this way it is possible to define an equivalent damping ratio, the problem becomes linear, and the solution of the equivalent problem is well known. In the following section two linearization techniques proposed in the literature are described: the first is based on the equivalence of the energy dissipated by the two devices, and the second on the equivalence of power consumption. After that, we compare these two techniques by studying the response of an SDOF system undergoing a harmonic force. By introducing the equivalent damping ratio we can write the equation of motion of the non-linear differential problem in an implicit form, dividing, as usual, by the mass of the system; in this way we reduce the number of variables by introducing the natural frequency of the system. The equation of motion written in this form has two important properties: the response is linearly dependent on the amplitude of the external force, and the response depends only on the ratio between the frequency of the external harmonic force and the natural frequency of the system, not on their individual values. In the last section all these considerations are extended to the case of a random external force.
CHAPTER 3: DESIGN METHOD PROPOSED
In this chapter the theoretical basis of the proposed design method is introduced.
The need for a new design method for structures equipped with fluid-viscous dampers arises from the observation that the methods reported in the literature are always iterative, because the response affects some parameters included in the equation of motion (such as the equivalent damping ratio). In the first section the dimensionless parameter ε is introduced. This parameter has been obtained from the definition of the equivalent damping ratio, and the implicit form of the equation of motion is rewritten by introducing ε instead of the equivalent damping ratio. This new implicit equation of motion contains no terms affected by the response, so once ε is known the response can be evaluated directly. In the second section it is discussed how the parameter ε affects some characteristics of the response: drift, velocity and base shear. All the results described up to this point are obtained while retaining the non-linearity of the damper behavior. In order to obtain a linear formulation of the problem, which can be solved with the well-known methods of structural dynamics, as done before for the iterative methods through the equivalent damping ratio, it is shown how the equivalent damping ratio can be evaluated from the value of ε. Operating in this way, once the parameter ε is known, it is quite easy to estimate the equivalent damping ratio and to proceed with a classic linear analysis. In the last section it is shown how the parameter ε can be taken as a reference for assessing the convenience of using non-linear dampers instead of linear ones, on the basis of the type of external force and the characteristics of the system.
CHAPTER 4: MULTI-DEGREE-OF-FREEDOM SYSTEMS
In this chapter the design methods for an elastic-linear multi-degree-of-freedom (MDOF) system equipped with non-linear fluid-viscous dampers are introduced. It has already been shown that, in SDOF systems, the response of the structure can be evaluated through the estimation of the equivalent damping ratio (ξsd), assuming elastic-linear behavior of the structure. We should mention that some adjusting coefficients, to be applied to the equivalent damping ratio in order to account for the actual (non-linear) behavior of the structure, have already been proposed in the literature; such coefficients are usually expressed in terms of ductility, but their treatment is beyond the scope of this thesis and we do not discuss them further. The method usually proposed in the literature is based on energy equivalence: even though this procedure has a solid theoretical basis, it necessarily includes an iterative process, because the expression of the equivalent damping ratio contains a term of the response. This procedure was introduced primarily by Ramirez, Constantinou et al. in 2000; it is reported in the first section and referred to as the "Iterative Method". Following the guidelines for SDOF systems reported in the previous chapters, a procedure for the assessment of the parameter ε in the case of MDOF systems is introduced. Operating in this way, the evaluation of the equivalent damping ratio (ξsd) can be done directly, without iterative processes. This procedure is referred to as the "Direct Method" and is reported in the second section.
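For reference, the energy-equivalence linearization underlying the equivalent damping ratio can be sketched as follows (standard relations for non-linear viscous dampers; the notation here is ours and may differ from that of the thesis). For a damper force $F_D = C_\alpha |\dot{u}|^{\alpha}\,\mathrm{sgn}(\dot{u})$ and harmonic motion $u(t) = X \sin \omega t$, the energy dissipated per cycle is

\[
E_D = \oint F_D \, \mathrm{d}u = \lambda\, C_\alpha\, \omega^{\alpha} X^{1+\alpha},
\qquad
\lambda = 2^{2+\alpha}\, \frac{\Gamma^{2}\!\left(1 + \alpha/2\right)}{\Gamma(2+\alpha)},
\]

and equating $E_D$ with the per-cycle dissipation $\pi c_{eq} \omega X^{2}$ of a linear dashpot gives

\[
c_{eq} = \frac{\lambda}{\pi}\, C_\alpha\, (\omega X)^{\alpha-1},
\qquad
\xi_{eq} = \frac{c_{eq}}{2 m \omega_n},
\]

with $\lambda = \pi$, and hence $c_{eq} = C_1$, for $\alpha = 1$. Note that $c_{eq}$ depends on the response amplitude $X$: this is precisely the circularity that makes the literature methods iterative and that the parameter ε is introduced to remove.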
In the third section the two methods are analyzed by studying four cases of two moment-resisting steel frames subjected to real accelerograms: the response of the system calculated with the two methods is compared with the numerical response obtained from the software SAP2000-NL (a CSI product). In the last section, a procedure is introduced to build spectra of the equivalent damping ratio, as a function of the parameter ε and of the natural period of the system for a fixed value of the exponent α, starting from the elastic response spectra provided by any international code.

Relevance: 30.00%

Abstract:

Obstructive sleep apnoea/hypopnoea syndrome (OSAHS) is the periodic reduction or cessation of airflow during sleep. The syndrome is associated with loud snoring, disrupted sleep and observed apnoeas. Surgery aims to alleviate symptoms of daytime sleepiness, improve quality of life and reduce the signs of sleep apnoea recorded by polysomnography. Surgical intervention for snoring and OSAHS includes several procedures, each designed to increase the patency of the upper airway. Procedures addressing nasal obstruction include septoplasty, turbinectomy, and radiofrequency ablation (RF) of the turbinates. Surgical procedures to reduce soft palate redundancy include uvulopalatopharyngoplasty with or without tonsillectomy, uvulopalatal flap, laser-assisted uvulopalatoplasty, and RF of the soft palate. More significant, however, particularly in cases of severe OSA, is hypopharyngeal or retrolingual obstruction related to an enlarged tongue or, more commonly, to maxillomandibular deficiency. Surgeries in these cases are aimed at reducing the bulk of the tongue base or at providing more space for the tongue in the oropharynx, so as to limit posterior collapse during sleep. These procedures include tongue-base suspension, genioglossal advancement, hyoid suspension, lingualplasty, and maxillomandibular advancement (MMA). We reviewed 269 patients who underwent OSAHS surgery at the ENT Department of Forlì Hospital in the last decade. Surgery was considered a success if the postoperative apnoea/hypopnoea index (AHI) was less than 20/h. Based on the results, we developed surgical decision algorithms with the aim of optimizing the success of these procedures by identifying proper candidates for surgery and the most appropriate surgical techniques. Although not without risks and not as predictable as positive airway pressure therapy, surgery remains an important treatment option for patients with obstructive sleep apnoea, particularly for those who have failed or cannot tolerate positive airway pressure therapy. Successful surgery depends on proper patient selection, proper procedure selection, and the experience of the surgeon. The intended purpose of medical algorithms is to improve and standardize decisions made in the delivery of medical care, to assist in standardizing the selection and application of treatment regimens, and to reduce the potential introduction of errors. Nasal Continuous Positive Airway Pressure (nCPAP) is the recommended therapy for patients with moderate to severe OSAHS. Unfortunately this treatment is not accepted by some patients, appears to be poorly tolerated by a non-negligible number of subjects, and compliance may be critical, especially in the long term, when correctly evaluated with interviews as well as with CPAP smart-card analysis. Among the alternative options in the literature, surgery is a time-honoured solution; however, until now no clear scientific evidence exists that surgery can be considered a really effective option in OSAHS management. We designed a randomized prospective study comparing MMA with a ventilatory device (Autotitrating Positive Airway Pressure, APAP) in order to understand the real effectiveness of surgery in the management of moderate to severe OSAHS. Fifty consecutive, previously fully informed patients suffering from severe OSAHS were enrolled and randomised into a conservative (APAP) or surgical (MMA) arm. The demographic, biometric, PSG and ESS profiles of the two groups were not statistically significantly different.
One year after surgery or continuous APAP treatment, both groups showed a remarkable improvement in mean AHI and ESS, and the degree of improvement was not statistically different. Given the relatively small sample of studied subjects and the relatively short follow-up, MMA proved to be, in our group of adult patients with severe OSAHS, a valuable alternative therapeutic tool, with a success rate not inferior to APAP.
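The success criterion used above reduces to simple arithmetic; a minimal sketch (illustrative numbers, not patient data):

```python
def ahi(apnoeas: int, hypopnoeas: int, sleep_hours: float) -> float:
    """Apnoea/hypopnoea index: respiratory events per hour of sleep."""
    return (apnoeas + hypopnoeas) / sleep_hours

def surgical_success(postoperative_ahi: float) -> bool:
    return postoperative_ahi < 20.0  # threshold adopted in this review

postop = ahi(apnoeas=42, hypopnoeas=31, sleep_hours=6.5)
print(postop, surgical_success(postop))  # ~11.2 events/h -> True
```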

Relevance: 30.00%

Abstract:

Low-pressure/high-temperature (LP/HT) metamorphic belts are characterised by rocks that experienced an abnormal heat flow at shallow crustal levels (T > 600 °C; P < 4 kbar), resulting in anomalous geothermal gradients (60-150 °C/km). The abnormal amount of heat has been related to crustal underplating by mantle-derived basic magmas or to the thermal perturbation linked to the intrusion of large volumes of granitoids into the intermediate crust. In the latter context, in particular, magmatic or aqueous fluids are able to transport relevant amounts of heat by advection, thus favouring regional LP/HT metamorphism. However, the thermal perturbation caused by the heat released by cooling magmas is also responsible for contact metamorphic effects. A first problem is that the time and space relationships between regional LP/HT metamorphism and contact metamorphism are usually unclear. A second problem is related to the high temperature conditions reached at different crustal levels, which in some cases can completely erase the previous metamorphic history. Although this problem is very marked at lower crustal levels, petrologic and geochronologic studies usually concentrate on these attractive portions of the crust. However, only in the intermediate/upper crustal levels of a LP/HT metamorphic belt can the tectono-metamorphic events preceding the temperature peak, usually not preserved in the lower crustal portions, be readily unravelled. The Hercynian Orogen of Western Europe is a well-documented example of a continental collision zone with widespread LP/HT metamorphism, intense crustal anatexis and granite magmatism. Owing to the exposure of a nearly continuous cross-section of the Hercynian continental crust, the Sila massif (northern Calabria) represents a favourable area for understanding the large-scale relationships between granitoids and LP/HT metamorphic rocks, and for discriminating regional LP/HT metamorphic events from contact metamorphic effects. Granulite-facies rocks of the lower crust and greenschist- to amphibolite-facies rocks of the intermediate-upper crust are separated by granitoids emplaced into the intermediate level during the late stages of the Hercynian orogeny. Up to now, advanced petrologic studies have mostly focused on understanding the P-T evolution of the deeper crustal levels and of the magmatic bodies, whereas the metamorphic history of the shallower crustal levels is poorly constrained. The Hercynian upper crust exposed in Sila has been subdivided by previous authors into two different metamorphic complexes: the low- to very low-grade Bocchigliero complex and the greenschist- to amphibolite-facies Mandatoriccio complex. The latter contains mineral assemblages favourable for unravelling the tectono-metamorphic evolution of the Hercynian upper crust. The Mandatoriccio complex consists mainly of metapelites, meta-arenites, acid metavolcanites and metabasites, with rare intercalations of marbles and orthogneisses. Siliciclastic metasediments show a static porphyroblastic growth mainly of biotite, garnet, andalusite, staurolite and muscovite, whereas cordierite and fibrolite are less common. U-Pb ages and the internal features of zircons suggest that the protoliths of the Mandatoriccio complex formed in a sedimentary basin filled by Cambrian to Silurian magmatic products as well as by siliciclastic sediments derived from older igneous and metamorphic rocks. In some localities, the metamorphic rocks are injected by numerous aplite/pegmatite veins.
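An order-of-magnitude check of the gradients quoted at the opening of this abstract (assuming an average crustal density $\rho \approx 2700\ \mathrm{kg\,m^{-3}}$): a lithostatic pressure $P < 4\ \mathrm{kbar}$ corresponds to a depth

\[
z \approx \frac{P}{\rho g} = \frac{4 \times 10^{8}\ \mathrm{Pa}}{2700\ \mathrm{kg\,m^{-3}} \times 9.81\ \mathrm{m\,s^{-2}}} \approx 15\ \mathrm{km},
\]

so $T > 600$ °C at such depths implies a gradient of at least ~40 °C/km, and the quoted 60-150 °C/km range corresponds to peak temperatures reached at roughly 4-10 km; a normal gradient of 25-30 °C/km would reach 600 °C only below about 20 km.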
Small granite bodies are also present and are always associated with spotted schists with large porphyroblasts. They occur along a NW-SE-trending transcurrent cataclastic fault zone, which represents the tectonic contact between the Bocchigliero and the Mandatoriccio complexes. This cataclastic fault zone shows evidence of activity at least from the middle Miocene to Recent times, indicating that brittle deformation post-dated the Hercynian orogeny. P-T pseudosections show that the micaschists and paragneisses of the Mandatoriccio complex followed a clockwise P-T path characterised by four main prograde phases: thickening, peak-pressure conditions, decompression and peak-temperature conditions. During the thickening phase, garnet blastesis started with spessartine-rich syntectonic cores developed within the micaschists and paragneisses. Coevally (340 ± 9.6 Ma), mafic sills and dykes intruded the upper-crustal volcaniclastic sedimentary sequence of the Mandatoriccio complex. After reaching peak-pressure conditions (≈4 kbar), the upper crust experienced a period of deformation quiescence, marked by the static overgrowth of S2 by almandine-rich garnet rims and by porphyroblasts of biotite and staurolite. Probably, this metamorphic phase is related to isotherm relaxation after the thickening episode recorded by the Rb/Sr isotopic system (326 ± 6 Ma isochron age). The post-collisional period was mainly characterised by decompression with increasing temperature. This stage is documented by the andalusite+biotite coronas overgrown on staurolite porphyroblasts and represents a critical point of the metamorphic history, since the metamorphic rocks begin to record a significant thermal perturbation. Peak-temperature conditions (≈620 °C) were reached at the end of this stage. They are well constrained by some reaction textures and mineral assemblages observed almost exclusively within the paragneisses. The later appearance of fibrolitic sillimanite documents a small excursion of the P-T path across the And-Sil boundary due to the heating. Stephanian U-Pb ages of monazite crystals from the paragneiss can be related to this heating phase. Similar monazite U-Pb ages from the micaschist, combined with the lack of fibrolitic sillimanite, suggest that, during the same thermal perturbation, the micaschists recorded temperatures slightly lower than those reached by the paragneisses. The metamorphic history ended with the crystallisation of cordierite, mainly at the expense of andalusite. Consequently, the Ms+Bt+St+And+Sill+Crd mineral assemblage observed in the paragneisses is the result of a polyphase evolution and is characterised by the metastable persistence of staurolite in the stability field of cordierite. Geologic, geochronologic and petrographic data suggest that the thermal peak recorded by the intermediate/upper crust could be strictly connected with the emplacement of large amounts of granitoid magmas in the middle crust. Probably, lithospheric extension in the relatively heated crust favoured the ascent and emplacement of the granitoids and the further exhumation of the metamorphic rocks. After a comparison among the tectono-metamorphic evolutions of the different Hercynian crustal levels exposed in Sila, it is concluded that the intermediate/upper crustal level offers the possibility of reconstructing a more detailed tectono-metamorphic history. The P-T paths proposed for the lower crustal levels probably underestimate the amount of decompression.
Apart from these considerations, the comparative analysis indicates that the P-T paths at the various crustal levels of the Sila cross-section are compatible with a single geologic scenario, characterized by post-collisional extensional tectonics and magma ascent.

Relevance: 30.00%

Abstract:

The labyrinthum Capellae quoted in the title (from an epistle of Prudentius of Troyes) represents the allegory of the studium of the liberal arts and of the search for knowledge in the early Middle Ages. This is a capital problem in early Christianity and, in general, for the whole western world, concerning the relationship between faith and science. I studied the evolution of this subject from its birth to the Carolingian age, focusing on the figures most relevant for western Europe, such as Saint Augustine (De doctrina christiana), Martianus Capella (De Nuptiis Philologiae et Mercurii) and Iohannes Scotus Eriugena (Annotationes in Marcianum). It clearly emerges that there were two opposite positions on this relationship. According to the first, the human being is capable of attaining knowledge of God through his own reason and logical thought processes (by the analysis of nature as a Speculum Dei); according to the second, only faith and grace can give man the possibility of perceiving God, and the Bible is the only book men need to know. From late antiquity to the times of Iohannes Scotus, a few Christian and pagan authors fall into line with the first (Neoplatonic) position: Saint Augustine (in the first part of his life; he later retracted some of his views), Martianus, Calcidius and Macrobius. Other philosophers were not Neoplatonic but believed in the power of the studium: Boethius, Cassiodorus, Isidore of Seville, Hrabanus Maurus and Lupus of Ferrières. In order to get a full picture of this conception, I finally focused the research on Iohannes Scotus Eriugena's Annotationes in Marcianum. I commented on Eriugena's work phrase by phrase, trying to grasp the sense of his words, his references and philosophical influences, and to trace its antecedents and its influence on the later Middle Ages and the school of Chartres. In this scholastic text Eriugena comments on Capella's work and again poses the question of the studium to his students. Iohannes was a magister in the schola Palatina in the time of Charles the Bald; he knew the works of Saint Augustine, as well as those of Boethius, Calcidius, Macrobius, Isidore and Cassiodorus. He translated Pseudo-Dionysius the Areopagite and Maximus the Confessor. He had a Neoplatonic view of Christianity and tried to reconcile the impossibility of knowing God with man's intellectual capability of catching a glimpse of God through the study of nature. From this point of view, Eriugena's commentary on Martianus Capella is no longer a secondary work: it acquires increasing importance for understanding his research and his mysticism, and for truly grasping the inner sense of his chief work, the Periphyseon.

Relevance: 30.00%

Abstract:

«In altri termini mi sfuggiva e ancora oggi mi sfugge gran parte del significato dell’evoluzione del tempo; come se il tempo fosse una materia che osservo dall’esterno. Questa mancanza di evoluzione è fonte di alcune mie sventure ma anche mi appartiene con gioia.» Aldo Rossi, Autobiografia scientifica.
The temporal dimension underpinning the draft of Autobiografia scientifica by Aldo Rossi may be referred to what Lucien Lévy-Bruhl, the well-known French anthropologist, defines as "primitive mentality" and "prelogical" conscience: the book of life has lost its page numbers, even its punctuation. For Lévy-Bruhl, but certainly for Rossi, life, or its summing up, becomes a continuous account of ellipses, gaps and repetitions that may be read from left to right or vice versa, from head to foot or vice versa, without distinction. Rossi's autobiographical writing seems to accept and support the confusion with which memories have been collected, recording them in the order memory gives them in the mental distillation, or simply in the chronological order in which they happened. For Rossi, the confusion reflects the melting of memory elements into a composite image which is the result of a fusion. He is aware that the same sap pervades all the memories he is going to put in order: each of them has a common denominator. Differences have diminished, almost faded; the quick glance prevails over the distinction of each episode. Rossi's writing is beyond the categories dependent on time: past and present, before and now. For Rossi, only repetition – the repetition the text will make possible for an indefinite number of times – gives peculiarity to the event. As Gilles Deleuze knows, "things" may only last as "singleness": the more frequent the repetition, the more singular the memory phenomenon that recurs, because only what is singular magnifies itself and happens endlessly forever. Rossi understands that, by "raising the first time to the nth power forever", repetition becomes glorification. His is an autobiography that, celebrating originality, enhances the memory event in the repetition; in this it greatly differs from biographical reproduction, in which each repetition is but a weaker echo, a duller copy, provided with smaller and smaller power in comparison with the original. Paradoxically, for Deleuze repetition asserts the originality and singularity of what is repeated. Rossi seems to share the thought expressed by Kierkegaard in his essay Repetition: «Hope is a graceful maiden slipping through your fingers; memory a beautiful elderly woman, but of no use at the moment of need; repetition a beloved friend you never tire of, as it is only the new that bores you. The old never bores you, and its presence makes you happy [...] life is but a repetition [...] here is the beauty of life». Rossi knows well that repetition hints at the lasting stability of cosmic time. Kierkegaard goes on: «The world exists, and it exists as a repetition». Rossi devotes himself, on purpose and in all conscience, to collecting, inventorying and «reviewing life», his own life, according to a recovery not from the past but of the past: a search work, the «recherche du temps perdu», as Proust entitled his masterpiece on memory. If you do not want past time to be wasted, you must give it presence. «Memoria e specifico come caratteristiche per riconoscere se stesso e ciò che è estraneo mi sembravano le più chiare condizioni e spiegazioni della realtà.
Non esiste uno specifico senza memoria, e una memoria che non provenga da un momento specifico; e solo questa unione permette la conoscenza della propria individualità e del contrario (self e non-self)». Rossi wants to understand himself, his own character; it is really his own character that needs to be understood, so as to increase its introspective ability and intelligence. «Può sembrare strano che Planck e Dante associno la loro ricerca scientifica e autobiografica con la morte; una morte che è in qualche modo continuazione di energia. In realtà, in ogni artista o tecnico, il principio della continuazione dell’energia si mescola con la ricerca della felicità e della morte». The eschatological incipit of Rossi's autobiography refers to Freud's thought, in the exact circularity of Dante's framework and in the just as exact circularity of the statement of the principle of conservation of energy: it was in fact Freud who connected repetition to death. For Freud, the desire for repetition is an instinct rooted in biology. The primary aim of such an instinct would be to restore a previous condition, so that the repeated history represents a part of the past (even if concealed) and, relieving the removal, reduces anguish and tension. So, Freud asks himself, what is the most remote state to which the instinct, through repetition, wants to go back? It is an inorganic, pre-vital condition of pure entropy, a condition of not-being in which no tension exists; in other words, Death. With the theme of death, Rossi introduces the theme of circularity, which in turn refers to the sense of continuity in transformation or, conversely, of transformation in continuity. «[...] la descrizione e il rilievo delle forme antiche permettevano una continuità altrimenti irripetibile, permettevano anche una trasformazione, una volta che la vita fosse fermata in forme precise». Rossi's attitude seems to hint at the reflection on time and, in a broad sense, at the thought on life and things expressed by T.S. Eliot in Four Quartets: «Time present and time past / Are both perhaps present in time future, / And time future contained in time past. / If all time is eternally present / All time is unredeemable. / What might have been is an abstraction / Remaining a perpetual possibility / Only in a world of speculation. / What might have been and what has been / Point to one end, which is always present. [...]». Aldo Rossi's autobiographical story coincides with the description of "things", and with the description of himself through things, in exact parallel with craft or art. He seems to make all the things made by man coincide with his personal or artistic story, with the consequent and immediate necessity of formulating a new interpretation: the flow of things has never met a total stop; all that exists nowadays is but a repetition or a variant of something that existed some time ago, and so on, without any interruption, back to the early dawn of human life. Nevertheless, Rossi must operate specific subdivisions within the continuous connection in time – of his own time – even if limited by a present beginning and end of his own existence. This artist, as a "historian" of himself and of his own life – as an auto-biographer – enjoys the privilege of being able to decide if and how to make the cut at a certain point rather than at another, without being compelled to justify his choice.
In this sense, his story is a very ductile and flexible matter: a good story-teller can choose any moment to start a certain sequence of events. Yet Rossi is aware that, beyond the mere narration, there is the problem of identifying, in history – in his own personal story – those flakings where a clean cut enables the separation of events of a different nature. In order to do so, he has not only to make an inventory of his own "things", but also to appeal to the authority of the Divina Commedia, begun by Dante when he was 30. «A trent’anni si deve compiere o iniziare qualcosa di definitivo e fare i conti con la propria formazione». For Rossi, the poet exercises his authority not only in the text, but also in his will to set out on a mystical journey and to hand it down through an exact descriptive will. Rossi turns not only to the authority of poetry, but also evokes the authority of science with Max Planck and his Scientific Autobiography, published in Italian translation by Einaudi in 1956. Concerning Planck, Rossi takes up a seemingly secondary element of his account, where the German physicist «[...] risale alle scoperte della fisica moderna ritrovando l’impressione che gli fece l’enunciazione del principio di conservazione dell’energia; [...]». It is again the act of describing that links Rossi to Planck; it is the description of a circularity, that of the conservation of energy, which endorses Rossi's autobiographical discourse in its search for both happiness and death. Rossi seems to agree perfectly with the thought expressed by Planck at the opening of his own autobiography: «The decision to devote myself to science was a direct consequence of a discovery which has never ceased to arouse my enthusiasm since my early youth: the laws of human thought coincide with the ones governing the sequences of the impressions we receive from the world surrounding us, so that mere logic can enable us to penetrate into the latter's mechanism. It is essential that the outer world is something independent of man, something absolute. The search for the laws dealing with this absolute seems to me the highest scientific aim in life». For Rossi the survey of his own life represents a way to change events into experiences, to concentrate the emotions and group them in meaningful plots: «It seems, as one becomes older, / That the past has another pattern, and ceases to be a mere sequence [...]» Eliot wrote in Four Quartets, which are a meditation on time, old age and memory. And he goes on: «We had the experience but missed the meaning, / And approach to the meaning restores the experience / In a different form, beyond any meaning [...]». Rossi restores in his autobiography – but not only there – the most ancient sense of memory, aware that for at least fifteen centuries the Latin word memoria was used to denote the activity of bringing images back to mind: the psychology of memory, which starts with Aristotle (De Anima), used to consider such a faculty totally essential to the mind. Keith Basso writes: «Thought materializes in the form of "images"». Rossi knows well – as Aristotle said – that if you do not have a collection of mental images to remember – an imagination – there is no thought at all. According to this psychological tradition, what today we conventionally call "memory" is but a way of imagining created by time.
Rossi, consciously entering this stream of thought, which passes through the Renaissance ars memoriae and reaches our own day, gives great importance to the word and assumes it as a real place, much more than a recollection, even more than a production and an emotional elaboration of images.

Relevance: 30.00%

Abstract:

This doctoral dissertation addresses the debated topic of the traditions of Republicanism in the Modern Age, taking as its point of view the problem of "mixed" government. The research therefore dwells upon the use of this model in sixteenth-century Italy, also in connection with the historical events of two paradigmatic Republics, Florence and Venice. The work focuses on Donato Giannotti (1492-1573), Gasparo Contarini (1483-1542) and Paolo Paruta (1540-1598) as the main figures through which to reconstruct the debate on the "mixed" constitution: in these authors, the attention paid to the peculiar structure of the Venetian Republic, the only one of significant size and power to survive after 1530, is decisive. The research also takes into account the writings of Traiano Boccalini (1556-1613): although involved in the same topics of debate, he sets his considerations, in some respects, within the framework of a new theme, that of Reason of State.

Relevance: 30.00%

Abstract:

Transport and electrostatic phenomena in nanofiltration membranes
The ability to predict the performance of nanofiltration membranes is very important for the design and management of membrane separation processes. This performance is closely related to the transport phenomena that govern the motion of solutes within the membrane matrix. Knowledge and study of these phenomena are therefore of great importance; the final goal is to develop appropriate transport models that best describe the flux of solutes within the membrane. Alongside the transport models, the characterization of the adjustable parameters of the specific membrane under study is of no lesser importance. A membrane characterization procedure must clarify how the experimental tests are to be carried out and what they are meant to achieve. However, despite the improvements in the modelling of ion transport through membranes obtained by research in recent years, we are still far from having a single model capable of clearly describing the phenomena involved. Moreover, the evident inability of the model to predict the experimental rejection trends in most cases involving multicomponent mixtures, together with the difficulties related to the numerical convergence of the solution algorithms, has strongly limited the development of the process, above all in applicative terms. Last but not least, there is a need to predict and interpret the behaviour of the membrane charge as the operating conditions vary, through the development of a mathematical model capable of correctly describing the charge formation mechanism. In the case of electrolyte solutions, indeed, it has been recognized that the formation of the surface charge is among the factors that most strongly characterize the separation properties of membranes. It plays an important role in the transport processes and influences the selectivity of the membrane in the separation of charged molecules: the membrane charge interacts electrostatically with the ions and affects their separation efficiency through the partitioning of the electrolytes from the external solution into the pores of the material. In essence, the charge of NF membranes is induced by the acid-base characteristics of the electrolyte solutions placed in contact with the membrane itself, as well as by the type and concentration of the ionic species. In this work the main transport and electrostatic phenomena involved in the nanofiltration process have been analysed, focusing in particular on the aspects related to their mathematical modelling. The first part of the thesis presents the general problem of solute transport within nanofiltration membranes, with reference to the equations underlying the DSP&DE model, which represents a rationalization of the existing models developed from the DSPM model, integrating dielectric exclusion phenomena, for the separation of electrolytes in the filtration of aqueous solutions in nanofiltration processes.
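For orientation, the building blocks that the DSP&DE model rationalizes can be sketched in the common DSPM-style notation (our notation, which may differ from the thesis's). Inside the pores, the flux of ion $i$ is described by the extended Nernst-Planck equation, with diffusive, electromigrative and convective contributions hindered by the factors $K_{i,d}$ and $K_{i,c}$:

\[
j_i = -K_{i,d}\, D_{i,\infty}\, \frac{\mathrm{d}c_i}{\mathrm{d}x}
      \;-\; z_i c_i\, K_{i,d}\, D_{i,\infty}\, \frac{F}{RT}\, \frac{\mathrm{d}\psi}{\mathrm{d}x}
      \;+\; K_{i,c}\, c_i\, J_v ,
\]

while the partitioning at the membrane/solution interface combines the steric, Donnan and dielectric-exclusion contributions:

\[
\frac{c_i}{C_i} = \phi_i\, \exp\!\left(-\frac{z_i F}{RT}\, \Delta\psi_D\right) \exp\!\left(-\Delta W_i\right),
\]

where $\phi_i$ is the steric partitioning coefficient, $\Delta\psi_D$ the Donnan potential and $\Delta W_i$ the (dimensionless) solvation-energy barrier due to dielectric exclusion; activity coefficients account for the non-ideality of the solution.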
Once the type of electrolytes present in the feed solution and their concentration are defined, the DSP&DE model is completely specified by three adjustable parameters, strictly related to the properties of the individual membrane: the average pore radius within the matrix, the effective thickness and the membrane charge density; in addition, the value assumed by the dielectric constant of the solvent when confined in pores of reduced size can be considered a further adjustable parameter of the model. The general formulation of the DSP&DE model includes the description of the transport phenomena inside the membrane, described through the Nernst-Planck equation, and the study of the partitioning at the membrane/external solution interface, which takes into account several contributions: steric hindrance, the non-ideality of the solution, the Donnan effect and dielectric exclusion. The chapter closes with the presentation of a recommended procedure for the determination of the adjustable parameters of the transport model. The work continues with a series of applications of the model to experimental data obtained from the characterization of CSM NE70 organic membranes in the case of solutions containing electrolytes. In particular, the model is applied as a tool to obtain useful information for the study of the phenomena involved in the charge formation mechanism; from the elaboration of the experimental rejection data as a function of the flux it is possible to obtain membrane charge values, assumed as an adjustable parameter of the model, which allow a reliable analysis of the qualitative trends obtained for the volumetric membrane charge as the salt concentration in the feed stream, the type of electrolyte studied and the pH of the solution vary. The second part of the thesis concerns the study and modelling of the charge formation mechanism. The starting point of this study is represented by the charge values obtained from the elaboration of the experimental rejection data with the transport model; these values are considered as the reference "experimental" values against which the results obtained are compared. The relevant section presents the theoretical "adsorption-amphoteric" model, developed in order to describe and interpret the different experimental behaviours obtained for the membrane charge as the operating conditions vary. In the model the membrane is schematized as a set of active sites of two kinds: hydrophobic sites and hydrophilic sites, able to support the charges deriving from different chemical and physical mechanisms. The main phenomena taken into account in determining the volumetric membrane charge are: i) the acid/base dissociation of the hydrophilic sites; ii) the site-binding of counter-ions on the dissociated hydrophilic sites; iii) the competitive adsorption of the ions in solution on the hydrophobic functional groups. The structure of the model is completely general and is able to highlight which relevant phenomena come into play in determining the membrane charge; for this reason the model makes it possible to investigate the contribution of each mechanism considered, as a function of the operating conditions.
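The kind of equilibria the model combines can be illustrated for a hydrophilic site $\mathrm{SH}$ and a monovalent counter-ion $\mathrm{M^+}$ (a generic sketch, not necessarily the exact formulation adopted in the thesis):

\[
\mathrm{SH} \rightleftharpoons \mathrm{S^-} + \mathrm{H^+},
\quad K_a = \frac{[\mathrm{S^-}]\, a_{\mathrm{H^+}}}{[\mathrm{SH}]};
\qquad
\mathrm{S^-} + \mathrm{M^+} \rightleftharpoons \mathrm{SM},
\quad K_b = \frac{[\mathrm{SM}]}{[\mathrm{S^-}]\, a_{\mathrm{M^+}}} .
\]

With the site balance $[\mathrm{SH}] + [\mathrm{S^-}] + [\mathrm{SM}] = N_S$, the charge contributed by the hydrophilic sites is

\[
X_{hyd} = -[\mathrm{S^-}] = -\frac{N_S}{1 + a_{\mathrm{H^+}}/K_a + K_b\, a_{\mathrm{M^+}}} ,
\]

which already reproduces a qualitative dependence of the charge on pH and on salt concentration; the competitive adsorption on the hydrophobic sites adds a further, isotherm-type term of either sign.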
The application to the charge values available for Desal 5-DK membranes in the case of solutions containing single electrolytes, in particular NaCl and CaCl2, highlights two fundamental aspects of the model: on the one hand, its ability to describe very different trends of the membrane charge by referring to the same three simple mechanisms; on the other hand, it makes it possible to study the effect of each mechanism on the trend of the total membrane charge and its relative weight. Finally, the predictions obtained with the model in this study are verified through comparison with experimental charge data obtained from the elaboration of the experimental rejection data available for CSM NE70 membranes. This comparison highlighted the good predictive capabilities of the model, especially in the case of non-symmetric electrolytes such as CaCl2 and Na2SO4. In particular, when the divalent ion is the counter-ion with respect to the membrane's own charge, the membrane charge shows a unimodal trend (characterized by an extremum) with the salt concentration in the feed. The work concludes with the extension of the ADS-AMF model to the case of multicomponent solutions: a mixing rule is presented which makes it possible to obtain the charge for multicomponent electrolyte solutions starting from the values available for the single ions composing the mixture.

Relevance: 30.00%

Abstract:

The progress of electron device integration has proceeded for more than 40 years following the well-known Moore's law, which states that the density of transistors on a chip doubles every 24 months. This trend has been made possible by the downsizing of MOSFET dimensions (scaling); however, new issues and new challenges are arising, and the conventional "bulk" architecture is becoming inadequate to face them. In order to overcome the limitations of conventional structures, the research community is preparing different solutions that need to be assessed. Possible solutions currently under scrutiny are:
• devices incorporating materials with properties different from those of silicon for the channel and the source/drain regions;
• new architectures such as Silicon-On-Insulator (SOI) transistors: the body thickness of Ultra-Thin-Body SOI devices is a new design parameter, and it makes it possible to keep short-channel effects under control without adopting high doping levels in the channel.
Among the solutions proposed in order to overcome the difficulties related to scaling, we can highlight heterojunctions at the channel edge, obtained by adopting for the source/drain regions materials with a band gap different from that of the channel material. This solution makes it possible to increase the injection velocity of the particles travelling from the source into the channel, and therefore to increase the performance of the transistor in terms of delivered drain current. The first part of this thesis addresses the use of heterojunctions in SOI transistors: chapter 3 outlines the basics of heterojunction theory and the adoption of this approach in older technologies such as heterojunction bipolar transistors; moreover, the modifications introduced in the Monte Carlo code in order to simulate conduction band discontinuities are described, together with the simulations performed on one-dimensional simplified structures in order to validate them. Chapter 4 presents the results obtained from the Monte Carlo simulations performed on double-gate SOI transistors featuring conduction band offsets between the source and drain regions and the channel. In particular, attention has been focused on the drain current and on internal quantities such as inversion charge, potential energy and carrier velocities. Both graded and abrupt discontinuities have been considered. The scaling of device dimensions and the adoption of innovative architectures have consequences on power dissipation as well. In SOI technologies the channel is thermally insulated from the underlying substrate by a SiO2 buried-oxide layer; this SiO2 layer has a thermal conductivity two orders of magnitude lower than that of silicon, and it impedes the dissipation of the heat generated in the active region. Moreover, the thermal conductivity of thin semiconductor films is much lower than that of bulk silicon, due to phonon confinement and boundary scattering. All these aspects cause severe self-heating effects (SHE), which detrimentally impact the carrier mobility and therefore the saturation drive current of high-performance transistors; as a consequence, thermal device design is becoming a fundamental part of integrated circuit engineering. The second part of this thesis discusses the problem of self-heating in SOI transistors. Chapter 5 describes the causes of heat generation and dissipation in SOI devices, and provides a brief overview of the methods that have been proposed to model these phenomena.
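A first-order feel for the thermal penalty of the buried oxide can be obtained from one-dimensional heat conduction (a hedged estimate with illustrative numbers, not the electro-thermal simulator used in this thesis):

```python
K_SIO2 = 1.4       # W/(m K), thermal conductivity of SiO2
K_SI_BULK = 148.0  # W/(m K), bulk silicon, ~100x higher

def delta_t(power_w, thickness_m, area_m2, k):
    """Temperature rise across a uniform layer: Rth = t/(k*A), dT = P*Rth."""
    return power_w * thickness_m / (k * area_m2)

p = 1e-4              # 0.1 mW dissipated in the active region
area = 1e-6 * 1e-6    # 1 um x 1 um footprint
t_box = 100e-9        # 100 nm buried oxide
print(delta_t(p, t_box, area, K_SIO2))     # ~7 K across the BOX
print(delta_t(p, t_box, area, K_SI_BULK))  # ~0.07 K if it were silicon
```

Heat spreading into the substrate and the reduced conductivity of the thin silicon film change the numbers, which is why full three-dimensional electro-thermal simulation is needed; the two-orders-of-magnitude contrast, however, is the root of the problem.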
In order to understand how this problem impacts the performance of different SOI architectures, three-dimensional electro-thermal simulations have been applied to the analysis of SHE in planar single- and double-gate SOI transistors as well as FinFETs featuring the same isothermal electrical characteristics. In chapter 6 the same simulation approach is extensively employed to study the impact of SHE on the performance of a FinFET representative of the high-performance transistor of the 45 nm technology node. The effects on the ON-current, on the maximum temperature reached inside the device and on the thermal resistance associated with the device itself, as well as the dependence of SHE on the main geometrical parameters, have been analysed. Furthermore, the consequences on self-heating of technological solutions such as raised S/D extension regions or a reduced fin height are explored. Finally, conclusions are drawn in chapter 7.
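Returning to the first part of the abstract, the Monte Carlo code is said to have been modified to handle conduction-band discontinuities. The sketch below shows the textbook semiclassical rule for a particle hitting an abrupt offset (transmit or reflect, conserving total energy), under the assumptions of parabolic bands and equal effective mass on both sides of the junction; it illustrates the general technique, not the thesis's actual implementation, and all numerical values are invented.

```python
import math

M0 = 9.109e-31   # electron rest mass, kg
Q  = 1.602e-19   # elementary charge, C (eV -> J conversion)

def cross_abrupt_offset(vx: float, m_eff: float, delta_ec_ev: float) -> float:
    """Semiclassical rule for a particle at an abrupt conduction-band step.

    vx          : velocity component normal to the junction (m/s)
    m_eff       : effective mass (kg), assumed equal on both sides
    delta_ec_ev : Ec(destination) - Ec(origin) in eV; > 0 means an uphill step

    Returns the new normal velocity: reflected if the normal kinetic energy
    cannot overcome an uphill step, otherwise transmitted with the normal
    kinetic energy shifted by -delta_ec (total energy is conserved).
    """
    e_x = 0.5 * m_eff * vx * vx      # normal kinetic energy, J
    d_ec = delta_ec_ev * Q           # step height, J
    if d_ec > 0 and e_x <= d_ec:
        return -vx                   # classical reflection
    sign = 1.0 if vx >= 0 else -1.0
    return sign * math.sqrt(2.0 * (e_x - d_ec) / m_eff)

# Example: electron (m* = 0.26 m0) falling down a 0.15 eV source/channel offset
m_eff = 0.26 * M0
v_in = 1.0e5                         # m/s, toward the channel (assumed)
v_out = cross_abrupt_offset(v_in, m_eff, -0.15)
print(f"{v_in:.3e} m/s -> {v_out:.3e} m/s (velocity boost from the offset)")
```

The downhill case in the example shows, in miniature, the injection-velocity increase that motivates the use of source/channel band offsets.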

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The assessment of macroseismic intensity through a formal procedure that is transparent, objective and able to yield numerical values through rigorous choices and criteria represents both a step forward and a goal for the treatment and use of macroseismic information. Macroseismic data can indeed have important applications in seismotectonic analyses and in seismic hazard estimation. This thesis has addressed the problem of formalising intensity estimation, improving both theoretical and practical aspects through three fundamental steps developed in the MS-Excel and Matlab environments: i) the collection and archiving of the macroseismic dataset; ii) the association (membership function) between effects and intensity degrees of the macroseismic scale through the principles of fuzzy-set logic; iii) the application of rigorous and objective decision algorithms for the estimation of the final intensity. The whole procedure was applied to seven Italian earthquakes, exploiting various possibilities, including methodological ones such as the construction of membership functions by combining the macroseismic information of several earthquakes: Monte Baldo (1876), Valle d'Illasi (1891), Marsica (1915), Santa Sofia (1918), Mugello (1919), Garfagnana (1920) and Irpinia (1930). The results show good statistical agreement with the intensities of a reference macroseismic catalogue, confirming the validity of the whole methodology. The derived intensities were then used for seismotectonic analyses in the areas of the earthquakes studied. Statistical methods of analysis applied to intensity data points (the geographical distribution of the assigned intensities) have proved in the past a powerful tool for seismotectonic analysis and characterisation, yielding the main parameters (epicentral location, length, width, orientation) of the possible seismogenic source. This thesis has improved some aspects of these analysis methodologies through specific applications developed in Matlab, which also made it possible to estimate the uncertainties associated with the source parameters by means of statistical resampling techniques. A systematic analysis of the earthquakes studied was carried out by combining the various methods for the estimation of the source parameters with the original intensity data points and with those recalculated through the fuzzy decision procedures. The results made it possible to assess the characteristics of the possible sources and to formulate seismotectonic hypotheses, some of which found circumstantial support in geological and structural-geological data. Some events (1915, 1918, 1920) show a strong stability of the computed parameters (epicentral location and geometry of the possible source), with small associated uncertainties. Other events (1891, 1919 and 1930) instead showed greater variability both in the location of the epicentre and in the geometry of the source boxes: for the first event this is probably related to the limited size of the intensity dataset, while for the others it may reflect a multiplicity of seismogenic sources. In some cases the bootstrap analysis also highlighted possible asymmetries in the distributions of some parameters (e.g. the azimuth of the possible structure), which could suggest rupture mechanisms on several distinct faults.
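As an illustration of steps ii) and iii), here is a minimal sketch of a fuzzy assignment of intensity: each observed effect carries a membership value over the degrees of a macroseismic scale, the memberships are aggregated, and a simple maximum-membership decision rule selects the final intensity. Both the effects and the membership values below are invented for illustration; they are not the functions derived in the thesis, which builds its memberships from the combined data of several earthquakes.

```python
# Hypothetical sketch: fuzzy membership functions mapping observed macroseismic
# effects onto intensity degrees, aggregated and resolved with a max-membership
# decision rule. Effects and membership values are illustrative only.

# membership[effect][degree] in [0, 1]: compatibility of an effect with a degree
MEMBERSHIP = {
    "objects_fall_from_shelves": {5: 0.3, 6: 0.8, 7: 0.5},
    "cracks_in_masonry_walls":   {6: 0.4, 7: 0.9, 8: 0.6},
    "partial_collapses":         {7: 0.2, 8: 0.8, 9: 0.7},
}

def estimate_intensity(observed_effects: list) -> int:
    """Aggregate memberships over degrees (here by summation), take the argmax."""
    scores = {}
    for effect in observed_effects:
        for degree, mu in MEMBERSHIP[effect].items():
            scores[degree] = scores.get(degree, 0.0) + mu
    return max(scores, key=scores.get)

observations = ["objects_fall_from_shelves", "cracks_in_masonry_walls"]
print("Estimated intensity degree:", estimate_intensity(observations))
```

The thesis's decision algorithms are more sophisticated than this argmax, but the sketch conveys the key idea: intensity emerges from graded compatibilities rather than from a single crisp rule.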

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The elusive fiction of J. M. Coetzee is not a body of work in which one can read fixed ethical stances. I suggest testing the potentialities of a logic based on frames and double binds in Coetzee's novels. A double bind is a dilemma in communication which consists of two conflicting messages, with the result that one cannot successfully respond to either. Jacques Derrida highlighted the strategic value of a way of thinking based on the double bind (but on frames as well), which makes it possible to escape binary thinking and thus opens an ethical space, where one can make a choice outside a set of fixed rules and take responsibility for it. In Coetzee's fiction the author himself can be considered to be in a double bind, seeing that he is a white South African writer who feels that his "task" cannot be reduced to simply choosing either to represent faithfully the violence and racism of apartheid or to give a voice to the oppressed. Good intentions alone do not ensure protection against entering unwittingly into complicity with the dominant discourse, and this is why it is important to make the frame in which one is always situated clearly visible and explicit. The logic of the double bind also becomes the way in which moral problems are staged in Coetzee's fiction: the opportunity to give a voice to the oppressed through the same language which was co-opted to serve the cause of oppression, a relation with otherness that is never completed, or the representability of evil in literature, of the secret, and of the paradoxical implications of confession and forgiveness.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The olive oil extraction industry is responsible for the production of large quantities of vegetation waters, consisting of the constitutive water of the olive fruit and the water used during the process. This by-product represents an environmental problem in olive-growing areas because of its high content of organic matter, with high BOD5 and COD values. For that reason the disposal of vegetation water is very difficult and requires prior depollution. The organic matter of vegetation water mainly consists of polysaccharides, sugars, proteins, organic acids, oil and polyphenols. These last compounds are chiefly responsible for the pollution problems, due to their antimicrobial activity, but at the same time they are well known for their antioxidant properties. The most concentrated phenolic compounds in the waters, and also in virgin olive oils, are secoiridoids such as oleuropein, demethyloleuropein and ligstroside derivatives (the dialdehydic form of elenolic acid linked to 3,4-DHPEA or p-HPEA, i.e. 3,4-DHPEA-EDA or p-HPEA-EDA) and an isomer of the oleuropein aglycon (3,4-DHPEA-EA). The management of olive oil vegetation water has been extensively investigated and several valorisation methods have been proposed, such as direct use as fertilizer or transformation by physico-chemical or biological treatments. In recent years researchers have focused their interest on the recovery of the phenolic fraction from this waste, looking to exploit it as a source of natural antioxidants. At present only a few contributions have addressed large-scale phenol recovery, and further investigations are required to evaluate the feasibility and costs of the proposed processes. This PhD thesis reports a preliminary description of a new industrial-scale process for the recovery of the phenolic fraction from enzyme-treated olive oil vegetation water by direct membrane filtration (microfiltration/ultrafiltration with a cut-off of 250 kDa, ultrafiltration with a cut-off of 7-10 kDa, and nanofiltration/reverse osmosis), with partial purification both by a purification system based on SPE and by a liquid-liquid extraction (LLE) system, and with a simultaneous reduction of the related pollution problems. The phenolic fractions of all the samples obtained were characterised qualitatively and quantitatively by HPLC analysis. The process performed well both in terms of flows and in terms of phenolic recovery: the final phenolic recovery is about 60% of the initial content of the vegetation waters. The final concentrate showed a phenol content high enough to suggest a possible use as a zootechnical nutritional supplement. The purification of the final concentrate guaranteed a high purity of the phenolic extract, especially in the SPE step using XAD-16 (73% of the total phenolic content of the concentrate). This purity level could permit future use in the food industry as a food additive or, thanks to the strong antioxidant activity, in the pharmaceutical or cosmetic industries. The depollution of the vegetation water also gave good results: the final reverse-osmosis permeate has a low pollutant load in terms of COD and BOD5 values (2% of those of the initial vegetation water), which could allow its reuse in the virgin olive oil mechanical extraction process, saving water and thus reducing the disposal costs of the oil industry.
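To make the cascade logic concrete, here is a minimal mass-balance sketch of phenol recovery across the three membrane stages described above: phenols follow the permeate through the MF/UF and UF stages and are then retained in the NF/RO concentrate, so the overall recovery is the product of the stage yields. The per-stage yields below are invented for illustration, chosen only so that their product lands near the ~60% overall recovery reported in the abstract.

```python
# Hypothetical mass balance over the three-stage membrane cascade.
# Phenols follow the permeate in the first two stages and the concentrate
# in the last one; the stage yields are assumptions, not measured values.

from functools import reduce

STAGE_YIELDS = {
    "MF/UF 250 kDa (phenols to permeate)": 0.85,
    "UF 7-10 kDa (phenols to permeate)":   0.82,
    "NF/RO (phenols to concentrate)":      0.86,
}

feed_phenols_g = 1000.0          # assumed phenol mass in the feed, g
mass = feed_phenols_g
for stage, y in STAGE_YIELDS.items():
    mass *= y
    print(f"{stage}: {mass:7.1f} g remaining in the process stream")

overall = reduce(lambda a, b: a * b, STAGE_YIELDS.values())
print(f"Overall recovery: {overall:.0%} of the feed phenols")
```

The sketch makes one point visible that the prose only implies: in a serial cascade, even modest per-stage losses compound, which is why an overall recovery of about 60% is consistent with individually efficient stages.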

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The aim of this PhD thesis is to study accurately and in depth the figure and literary production of the intellectual Jacopo Aconcio. This minor author of the 16th century has long been considered a sort of "enigmatic character", a profile resulting from the work of those who, over many centuries, left his writings to their fate: a story of constant re-readings and equally incessant oversights. This is why it is necessary to re-read Aconcio's production in its entirety and to devote a monographic study to it. Previous scholars' interpretations are of course considered, but at the same time an effort is made to go beyond them through the analysis of both published and manuscript sources, in the attempt to attain a deeper understanding of the figure of this man, who was a Christian, a military and hydraulic engineer, and a political philosopher. The title of the thesis was chosen to emphasise how, throughout the three years of the doctorate, my research concentrated in equal measure on all the reflections and activities of Jacopo Aconcio. My aim, in fact, was to establish how and to what extent the methodological thinking of this intellectual found application in, and at the same time guided, his theoretical and practical production. I did not mention the author's religious thinking in the title, although it has always been considered the most original and interesting element of his production, because religion, from the Reformation onwards, was primarily a political question, and it was treated as such by almost all the authors involved in the Protestant movement, Aconcio first among them. Even the remarks concerning the private, intimate sphere of faith have therefore been analysed in this light: only by acknowledging the centrality of the "problem of politics" in Aconcio's theories is it possible to interpret them correctly. This approach confirms the theoretical premise of my research, namely the unity and orderliness of the author's thought: in every field of knowledge, Aconcio applies the rules of the methodus resolutiva as a means to achieve knowledge and to elaborate models of peaceful cohabitation in society. Aconcio's continuous references to method can make his writing pedantic and rather complex, but at the same time they allow for a consistent and valid analysis of different disciplines. I have not considered it a limit that most of his reflections appear to our eyes as strongly conditioned by the time in which he lived. To see in him, as some have done, a forerunner of Descartes' methodological discourse or, conversely, to judge his religious theories as not very modern, is to force the thought of an author who was first and foremost a Christian man of his own time. Aconcio repeats this himself several times in his writings: he wants to provide individuals with the tools necessary to reach fully-fledged scientific knowledge in the various fields, and also to enable them to seek truth incessantly in the religious domain, which is the duty of every human being. The will to find rules, instruments and effective solutions characterises the whole of the author's corpus: Aconcio feels he must look for truth in all the arts, aware as he is that anything can become science as long as it is analysed with method. Nevertheless, he remains a man of his own time, a Christian convinced of the existence of God, creator and governor of the world, to whom people must account for their own actions.
To neglect this fact in order to construct a "character", a generic forerunner of, but not a participant in, some philosophical current, is a dangerous and misleading operation. In this study I have highlighted how Aconcio's arguments reveal their full meaning only when read in the context in which they were born, without depriving them of their originality but also without charging them with meanings they do not possess. Through a historical-doctrinal approach, I have tried to analyse the complex web of theories and events which constitute the substratum of Aconcio's reflection, in order to trace the correct relations between texts and contexts. The thesis is therefore organised in six chapters, dedicated respectively to Aconcio's biography, to the methodological question, to the author's engineering activity, to his historical knowledge and to his religious thinking, followed by a final section concerning his fortune throughout the centuries. The above-mentioned complexity is determined by the special historical moment in which the author lived. On the one hand, thanks to the new union between science and technique, the 16th century produced discoveries and inventions which made available a previously unthinkable number of notions and led to a "revolution" in the way of studying and teaching the different subjects; by producing a new kind of intellectual, involved in politics but also aware of scientific and technological issues, this revolution would contribute to the subsequent birth of modern science. On the other hand, the 16th century was ravaged by religious conflicts, which shattered the unity of the Christian world and generated theological-political disputes that would shape the history of European states for many decades. My aim is to show how Aconcio's multifarious activity is the conscious fruit of this historical and religious situation, as well as an attempt to answer the demand for a new kind of engagement on the intellectual's part. Immersed in the discussions around methodus, employed at the most important European courts, involved in the abrupt acceleration of technical-scientific activity, and especially concerned by the radical religious reformation brought on by the Protestant movement, Jacopo Aconcio reflects this complex conjunction in his writings, without lacking in order and consistency, contrary to what many scholars assume. The object of this work, therefore, is to highlight the unity of the author's thought, in which science, technique, faith and politics are woven into a combination which, although it may appear illogical and confused, is actually tidy and methodical, and therefore in agreement with Aconcio's own intentions and with the specific character of European culture in the Renaissance. This reading is confirmed by the Ars muniendorum oppidorum, the only work of Aconcio's which had until now been unavailable. I am persuaded that only a methodical reading of Aconcio's works, neither forgetting nor glorifying any single one, respects the author's will. From De methodo (1558) onwards, all his writings are summae, guides for the reader who wishes to approach the study of the various disciplines. Undoubtedly, Satan's Stratagems (1565) is something more, not only because of its length, but because it deals with the author's main interest: the celebration of doubt and debate as the bases on which to build religious tolerance, which is the best method for peaceful cohabitation in society.
This, however, does not justify the total centrality which the Stratagems have enjoyed for centuries, at the expense of a proper understanding of the author's will to offer examples of methodological rigour in all the sciences. Perhaps it is precisely because of the reforming power of Aconcio's thought that, albeit often forgotten over the centuries, he has never ceased to reappear and continues to draw attention, both as a man and as an author. His ideas never stop stimulating the reader's curiosity, and this may ultimately be the best demonstration of their worth, independently of the historical moment in which they resurface.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

OBJECTIVE: One major problem in counselling couples with a prenatal diagnosis of a correctable fetal anomaly is the ability to exclude associated malformations that may modify the prognosis. Our aim was to assess the precision of fetal sonography in identifying isolated malformations. METHODS: We retrospectively reviewed the prenatal and postnatal records of our centre for cases with a prenatal diagnosis of an isolated fetal anomaly in the period 2002-2007. RESULTS: An antenatal diagnosis of an isolated malformation was made in 284 cases. In one of these cases the anomaly disappeared in utero. Of the remaining 283 cases, the prenatal diagnosis was confirmed after birth in 251 (88.7%). In 8 fetuses (7 with a suspected coarctation of the aorta, 1 with a ventricular septal defect) the prenatal diagnosis was not confirmed. In 24 fetuses (8.5%) additional malformations were detected at postnatal or post-mortem examination. In 16 of these cases the anomalies were mild or would not have changed the prognosis. In 8 cases (2.8%) severe anomalies were present (1 hypoplasia of the corpus callosum with ventriculomegaly, 1 tracheal agenesis, 3 cases with multiple anomalies, 1 Opitz syndrome, 1 CHARGE syndrome, 1 COFS syndrome). Two of these infants died. CONCLUSIONS: The prenatal diagnosis of an isolated fetal anomaly is highly reliable. However, the probability that additional malformations will go undetected, albeit small, remains tangible; in our experience it was 2.8%.
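The percentages above are computed over the 283 cases that remained after the in-utero resolution. A trivial check, using only the figures reported in the abstract:

```python
# Quick verification of the percentages reported above, using only the
# figures given in the abstract (284 diagnoses, 1 resolved in utero).
remaining = 284 - 1
for label, n in [("confirmed after birth", 251),
                 ("additional malformations detected", 24),
                 ("severe additional anomalies", 8)]:
    print(f"{label}: {n}/{remaining} = {100 * n / remaining:.1f}%")
```

The output reproduces the reported 88.7%, 8.5% and 2.8%, confirming the denominator the authors used.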