912 results for Ancient logic


Relevance:

20.00%

Publisher:

Abstract:

«In other words, most of the meaning of the evolution of time escaped me, and still escapes me today, as if time were a material I observe from the outside. This lack of evolution is the source of some of my misfortunes, but it also belongs to me with joy.» Aldo Rossi, Autobiografia scientifica. The temporal dimension underpinning the draft of Aldo Rossi's Autobiografia scientifica may be related to what Lucien Lévy-Bruhl, the well-known French anthropologist, defines as "primitive mentality" and "prelogical" consciousness: the book of life has lost its page numbers, even its punctuation. For Lévy-Bruhl, and certainly for Rossi, life, or its summing up, becomes a continuous account of ellipses, gaps and repetitions that may be read from left to right or vice versa, from head to foot or vice versa, without distinction. Rossi's autobiographical writing seems to accept and support the confusion with which memories have been collected, recording them in the order memory gives them in its mental distillation, or simply in the chronological order in which they happened. For Rossi, this confusion reflects the melting of memory elements into a composite image which is the result of a fusion. He is aware that the same sap pervades all the memories he is going to put in order: each of them shares a common denominator. Differences have diminished, almost faded; the quick glance prevails over the distinction of each episode. Rossi's writing is beyond the categories dependent on time: past and present, before and now. For Rossi, only repetition, the repetition the text makes possible an indefinite number of times, gives peculiarity to the event. As Gilles Deleuze knows, "things" may only last as "singleness": the more frequent the repetition, the more singular the memory phenomenon that recurs, because only what is singular magnifies itself and happens endlessly, forever. 
Rossi understands that, "raising the first time to the nth power", repetition becomes glorification. His may be an autobiography that, celebrating originality, enhances the memory event through repetition; in this it differs greatly from biographical reproduction, in which each repetition is but a weaker echo, a duller copy, endowed with a smaller and smaller power in comparison with the original. Paradoxically, for Deleuze repetition asserts the originality and singularity of what is repeated. Rossi seems to share the thought expressed by Kierkegaard in the essay Repetition: «Hope is a graceful maiden slipping through your fingers; memory an elderly woman, pretty indeed, but never satisfactory when needed; repetition a beloved friend you never tire of, as it is only the new that bores you. The old never bores you, and its presence makes you happy [...] life is but a repetition [...] here is the beauty of life». Rossi knows well that repetition hints at the lasting stability of cosmic time. Kierkegaard goes on: «The world exists, and it exists as a repetition». Rossi devotes himself, deliberately and in all conscience, to collecting, inventorying and «reviewing life», his own life, according to a recovery not from the past but of the past: a work of searching, the «recherche du temps perdu», as Proust entitled his masterpiece on memory. If past time is not to be wasted, it must be given presence. «Memory and the specific, as characteristics for recognizing oneself and what is foreign, seemed to me the clearest conditions and explanations of reality. There is no specific without memory, nor a memory that does not come from a specific moment; and only this union permits the knowledge of one's own individuality and of its contrary (self and non-self).» Rossi wants to understand himself, his own character; it is precisely his own character that demands to be understood, so as to increase its own introspective ability and intelligence. 
«It may seem strange that Planck and Dante associate their scientific and autobiographical research with death; a death that is in some way a continuation of energy. In reality, in every artist or technician, the principle of the continuation of energy mingles with the search for happiness and death.» The eschatological incipit of Rossi's autobiography refers to Freud's thought, in the exact circularity of Dante's framework and in the equally exact circularity of the statement of the principle of the conservation of energy: it was in fact Freud who connected repetition to death. For Freud, the desire for repetition is an instinct rooted in biology. The primary aim of such an instinct would be to restore a previous condition, so that the repeated history represents a part of the past (even if concealed) and, relieving the removal, reduces anguish and tension. So, Freud asks himself, what is the most remote state to which the instinct, through repetition, wants to go back? It is a pre-vital, inorganic condition of pure entropy, a condition of non-being in which no tension exists; in other words, death. With the theme of death, Rossi introduces the theme of circularity, which further on refers to the sense of continuity in transformation or, conversely, of transformation in continuity. «[...] the description and survey of ancient forms permitted a continuity otherwise unrepeatable; they also permitted a transformation, once life had been fixed in precise forms.» Rossi's attitude seems to hint at the reflection on time and, in a broad sense, at the thought on life and things expressed by T.S. Eliot in Four Quartets: «Time present and time past / Are both perhaps present in time future, / And time future contained in time past. / If all time is eternally present / All time is unredeemable. / What might have been is an abstraction / Remaining a perpetual possibility / Only in a world of speculation. 
/ What might have been and what has been / Point to one end, which is always present. [...]». Aldo Rossi's autobiographical story coincides with the description of "things", and the description of himself through things, in exact parallel with craft or art. He seems to make all things made by man coincide with the personal or artistic story, with the consequent and immediate necessity of formulating a new interpretation: the flow of things has never met a total stop; all that exists nowadays is but a repetition or a variant of something that existed some time ago, and so on, without any interruption, back to the earliest dawn of human life. Nevertheless, Rossi must operate specific subdivisions within the continuous connection of time, his own time, even if limited by a present beginning and end of his own existence. This artist, as a "historian" of himself and of his own life, as an auto-biographer, enjoys the privilege of being able to decide if and how to make the cut at one point rather than another, without being compelled to justify his choice. In this sense his story is a very ductile and flexible matter: a good story-teller can choose any moment to start a certain sequence of events. Yet Rossi is aware that, beyond the mere narration, there is the problem of identifying in history, his own personal story, those flakings where a clean cut enables the separation of events of a different nature. In order to do so, he has not only to make an inventory of his own "things", but also to appeal to the authority of the Divina Commedia, begun by Dante when he was 30. «At thirty one must accomplish or begin something definitive, and settle accounts with one's own formation.» For Rossi, the poet exercises his authority not only in the text, but also in his will to set out on a mystical journey and to hand it down through an exact descriptive will. 
Rossi turns not only to the authority of poetry, but also evokes the authority of science with Max Planck and his Scientific Autobiography, published in Italian translation by Einaudi in 1956. Concerning Planck, Rossi resumes a seemingly secondary element of his account, where the German physicist «[...] goes back to the discoveries of modern physics, recovering the impression made on him by the enunciation of the principle of the conservation of energy; [...]». It is again the act of describing that links Rossi to Planck; it is the description of a circularity, that of the conservation of energy, which endorses Rossi's autobiographical speech in its search for both happiness and death. Rossi seems to agree perfectly with the thought of Planck at the opening of his own autobiography: «The decision to devote myself to science was a direct consequence of a discovery which has never ceased to arouse my enthusiasm since my early youth: the laws of human thought coincide with the ones governing the sequences of the impressions we receive from the world surrounding us, so that mere logic can enable us to penetrate into the latter's mechanism. It is essential that the outer world is something independent of man, something absolute. The search for the laws dealing with this absolute seems to me the highest scientific aim in life.» For Rossi the survey of his own life represents a way to change events into experiences, to concentrate the emotions and group them into meaningful plots: «It seems, as one becomes older, / That the past has another pattern, and ceases to be a mere sequence [...]» Eliot wrote in Four Quartets, which are a meditation on time, old age and memory. And he goes on: «We had the experience but missed the meaning, / And approach to the meaning restores the experience / In a different form, beyond any meaning [...]». 
Rossi restores in his autobiography, but not only there, the most ancient sense of memory, aware that for at least fifteen centuries the Latin word memoria was used to denote the activity of bringing images back to mind: the psychology of memory, which begins with Aristotle (De Anima), considered this faculty wholly essential to the mind. Keith Basso writes: «Thought materializes in the form of "images"». Rossi knows well, as Aristotle said, that without a collection of mental images to remember, that is, imagination, there is no thought at all. According to this psychological tradition, what today we conventionally call "memory" is but a way of imagining created by time. Rossi, consciously entering this stream of thought, which passes through the Renaissance ars memoriae to reach us, gives great importance to the word and assumes it as a real place: much more than a recollection, even more than a production and an emotional elaboration of images.

Relevance:

20.00%

Publisher:

Abstract:

Several activities were conducted during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (Photo-Multiplier Tube). The low-power analog acquisition makes it possible to sample multiple channels of the PMT simultaneously at different gain factors, in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been realized to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After the validation of the whole front-end architecture, this feature would probably be integrated into a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the FPGA's configuration memory required the integration of a flash ISP (In-System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory, the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA. 
The PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to a loan from the Rome University and INFN groups, a full readout chain equivalent to that present in NEMO Phase 1 was installed. These tests showed a good behavior of the digital electronics, which was able to receive and execute commands issued by the PC console and to answer back with a reply. The remotely configurable logic behaved well too, and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front-end while inheriting most of the digital logic present in the current DAQ board discussed in this thesis. As concerns the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. The chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant for charge collection and for shielding the analog electronics from digital noise. It integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) beam line at CERN, Geneva (CH). 
The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which made it possible to store about 90 million events in 7 equivalent days of beam live-time. My activities basically concerned the realization of a firmware interface towards and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter I worked on the DAQ software to implement a proper Slow Control interface for the APSEL4D. Several APSEL4D chips with different thinnings were tested during the test beam. Those thinned to 100 and 300 µm presented an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also made it possible to estimate the resolution of the pixel sensor, providing good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking into account the multiple scattering effect.
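The pitch/sqrt(12) figure quoted above is the standard RMS resolution of a binary-readout position detector: for hits uniformly distributed across a pixel, the reconstruction error has variance pitch²/12. A minimal numerical check, using a hypothetical 50 µm pitch (an illustrative value, not the APSEL-4D pitch):

```python
import math

def binary_resolution(pitch_um: float) -> float:
    """RMS position resolution of a binary-readout pixel detector.

    For hits uniformly distributed across a pixel of width `pitch`,
    the variance of the reconstruction error is pitch^2 / 12,
    hence an RMS of pitch / sqrt(12).
    """
    return pitch_um / math.sqrt(12)

# Hypothetical 50 um pitch (illustrative value only):
print(round(binary_resolution(50.0), 2))  # -> 14.43
```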

Relevance:

20.00%

Publisher:

Abstract:

The advent of distributed and heterogeneous systems has laid the foundation for the birth of new architectural paradigms, in which many separate and autonomous entities collaborate and interact with the aim of achieving complex strategic goals that would be impossible to accomplish on their own. A non-exhaustive list of systems targeted by such paradigms includes Business Process Management, Clinical Guidelines and Careflow Protocols, and Service-Oriented and Multi-Agent Systems. It is largely recognized that engineering these systems requires novel modeling techniques. In particular, many authors claim that an open, declarative perspective is needed to complement the closed, procedural nature of the state-of-the-art specification languages. For example, the ConDec language has recently been proposed to target the declarative and open specification of Business Processes, overcoming the over-specification and over-constraining issues of classical procedural approaches. On the one hand, the success of such novel modeling languages strongly depends on their usability by non-IT-savvy users: they must provide an appealing, intuitive graphical front-end. On the other hand, they must be amenable to verification, in order to guarantee the trustworthiness and reliability of the developed model, as well as to ensure that the actual executions of the system effectively comply with it. In this dissertation, we claim that Computational Logic is a suitable framework for dealing with the specification, verification, execution, monitoring and analysis of these systems. We propose to adopt an extended version of the ConDec language for specifying interaction models with a declarative, open flavor. 
We show how all the (extended) ConDec constructs can be automatically translated into CLIMB, a Computational Logic-based language, and illustrate how its corresponding reasoning techniques can be successfully exploited to provide support and verification capabilities along the whole life cycle of the targeted systems.
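To give a concrete flavor of the declarative style, a single ConDec-like constraint such as response(a, b), "every execution of a must eventually be followed by b", can be checked against a completed execution trace in a few lines. This is a hypothetical illustration only, not the CLIMB translation or reasoning machinery described in the dissertation:

```python
def response_holds(trace, a, b):
    """ConDec-style response(a, b): every occurrence of activity `a`
    in the finished trace must eventually be followed by `b`."""
    pending = False
    for activity in trace:
        if activity == a:
            pending = True      # an obligation is opened
        elif activity == b:
            pending = False     # any open obligation is discharged
    return not pending

print(response_holds(["a", "b", "a", "c", "b"], "a", "b"))  # -> True
print(response_holds(["a", "b", "a"], "a", "b"))            # -> False
```

Note that the constraint says nothing about what else may happen in the trace: that openness is precisely what distinguishes the declarative approach from a procedural flow model.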

Relevance:

20.00%

Publisher:

Abstract:

Human reasoning is a fascinating and complex cognitive process that can be applied in different research areas such as philosophy, psychology, law and finance. Unfortunately, developing supporting software (for those different areas) able to cope with such complex reasoning is difficult and requires a suitable abstract logical formalism. In this thesis we aim to develop a program whose job is to evaluate a theory (a set of rules) with respect to a goal, and to provide results such as "the goal is derivable from the KB (of the theory)". In order to achieve this goal we need to analyse different logics and choose the one that best meets our needs. In logic, usually, we try to determine whether a given conclusion is logically implied by a set of assumptions T (the theory). However, when we deal with logic programming we need an efficient algorithm in order to find such implications. In this work we use a logic rather similar to human reasoning. Indeed, human reasoning requires an extension of first-order logic able to reach a conclusion depending on premises that are not definitely true, belonging to an incomplete set of knowledge. Thus, we implemented a defeasible logic framework able to manipulate defeasible rules. Defeasible logic is a non-monotonic logic designed by Nute for efficient defeasible reasoning (see Chapter 2). These kinds of applications are useful in the legal area, especially if they offer an implementation of an argumentation framework that provides a formal modelling of the game. Roughly speaking, let the theory be the set of laws, and a key claim the conclusion that one of the parties wants to prove (and the other wants to defeat); adding dynamic assertion of rules, namely facts put forward by the parties, we can then play an argumentative challenge between two players and decide whether the conclusion is provable or not, depending on the different strategies performed by the players. 
Implementing a game model requires a further meta-interpreter able to evaluate the defeasible logic framework; indeed, according to Gödel's theorem (see page 127), we cannot evaluate the meaning of a language using the tools provided by the language itself, but need a meta-language able to manipulate the object language. Thus, rather than a single meta-interpreter, we propose a Meta-level containing different Meta-evaluators. The first has been explained above; the second is needed to run the game model; and the last will be used to change the game execution and tree-derivation strategies.
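To give a flavor of the reasoning the framework manipulates, the toy evaluator below resolves conflicts between defeasible rules through a superiority relation, in the spirit of Nute's logic. It is a deliberately simplified sketch (all names and data structures are my own illustration), not the thesis' meta-interpreters:

```python
# A rule is (name, antecedents, conclusion); `superior` maps a rule name
# to the names of the rules it overrides. Literals are strings, with
# "~" marking negation.

def defeasibly_derivable(goal, facts, rules, superior):
    """A literal is defeasibly derivable if some applicable rule
    concludes it and every applicable rule for the opposite literal
    is overridden by that supporting rule."""
    neg = goal[1:] if goal.startswith("~") else "~" + goal
    applicable = [r for r in rules if all(a in facts for a in r[1])]
    supporters = [r for r in applicable if r[2] == goal]
    attackers = [r for r in applicable if r[2] == neg]
    return any(
        all(att[0] in superior.get(sup[0], ()) for att in attackers)
        for sup in supporters
    )

# Classic example: birds fly, penguins don't, and the penguin rule wins.
rules = [("r1", ["bird"], "flies"),
         ("r2", ["penguin"], "~flies")]
superior = {"r2": ("r1",)}
print(defeasibly_derivable("flies", {"bird"}, rules, superior))              # -> True
print(defeasibly_derivable("~flies", {"bird", "penguin"}, rules, superior))  # -> True
print(defeasibly_derivable("flies", {"bird", "penguin"}, rules, superior))   # -> False
```

The non-monotonic behaviour is visible in the last two calls: adding the fact `penguin` withdraws a conclusion that was previously derivable.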

Relevance:

20.00%

Publisher:

Abstract:

Ancient pavements are composed of a variety of preparatory or foundation layers constituting the substrate, and of a layer of tesserae, pebbles or marble slabs forming the surface of the floor. In other cases, the surface consists of a beaten and polished mortar layer. The term mosaic is associated with the presence of tesserae or pebbles, while the more general term pavement is used in all cases. As past and modern excavations of ancient pavements have demonstrated, not all pavements display the stratigraphy of the substrate described in the ancient literary sources. In fact, the number and thickness of the preparatory layers, as well as the nature and properties of their constituent materials, often vary among pavements placed in different sites, in different buildings within the same site, or even in the same building. For this reason, an investigation that takes account of the whole structure of the pavement is important when studying the archaeological context of the site where it is placed, when designing materials to be used for its maintenance and restoration, when documenting it and when presenting it to the public. Five case studies, represented by archaeological sites containing floor mosaics and other kinds of pavements dated to the Hellenistic and Roman periods, have been investigated by means of in situ and laboratory analyses. The results indicated that the characteristics of the studied pavements, namely the number and thickness of the preparatory layers and the properties of the mortars constituting them, vary according to the ancient use of the room where the pavements are placed and to the type of surface upon which they were built. The study contributed to the understanding of the function and technology of the pavements' substrate and to the characterization of its constituent materials. 
Furthermore, the research underlined the importance of investigating the whole structure of the pavement, including the foundation surface, in the interpretation of the archaeological context where it is located. A series of practical applications of the results of the research has been suggested: in the design of repair mortars for pavements, in the documentation of ancient pavements in conservation practice, and in their presentation to the public, in situ and in museums.

Relevance:

20.00%

Publisher:

Abstract:

Recently, an ever-increasing degree of automation has been observed in most industrial automation processes. This increase is motivated by the higher demand for systems with great performance in terms of the quality of the products and services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, or buy boxed products such as food or cigarettes. Another indication of their complexity derives from the fact that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; in particular, a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often this is the case in large-scale systems, organized in a modular and distributed manner. 
Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and of the electrical and electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the operator in charge of the machine to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing diagnostic information in real time, in support of the maintenance operations of the machine. The facilities that designers can directly find on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focussing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. 
Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. Traditionally, in the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very "unstructured". No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to enlighten the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. 
Industrial automation has lately been receiving this approach, as testified by some IEC standards (IEC 61131-3, IEC 61499) which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. During the last years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As concerns logic control design, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and these devices are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. 
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to the formal verification of logic control, fault diagnosis and fault tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey of the state of the software engineering paradigms applied to industrial automation is discussed. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach is presented, based on Discrete Event Systems, for the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, conclusive remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
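Since the formal verification above rests on Discrete Event Systems, its core safety check, that no forbidden state of the plant model is reachable, can be sketched as a breadth-first reachability search over a finite automaton. The four-state machine model below is a hypothetical illustration, not one of the component models of the thesis:

```python
from collections import deque

def reachable_states(initial, transitions):
    """Breadth-first reachability over a discrete event system given as
    {state: {event: next_state}}."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        for nxt in transitions.get(state, {}).values():
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical machine cycle with an alarm state and a forbidden
# "blocked" state that no event can ever lead to:
plant = {
    "idle":    {"start": "working"},
    "working": {"done": "idle", "fault": "alarm"},
    "alarm":   {"reset": "idle"},
}
print("blocked" not in reachable_states("idle", plant))  # -> True
```

Model checkers for discrete event systems elaborate exactly this kind of exhaustive state-space exploration, adding event labels, controllability and temporal properties on top of plain reachability.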

Relevance:

20.00%

Publisher:

Abstract:

The main scope of my PhD is the reconstruction of the large-scale bivalve phylogeny on the basis of four mitochondrial genes, with samples taken from all major groups of the class. To my knowledge, it is the first attempt of such breadth in Bivalvia. I decided to focus on both ribosomal and protein-coding DNA sequences (two ribosomal encoding genes, 12s and 16s, and two protein-coding ones, cytochrome c oxidase I and cytochrome b), since both the literature and my preliminary results confirmed the importance of combined gene signals in improving the resolution of the group's evolutionary pathways. Moreover, I wanted to propose a methodological pipeline that proved useful for obtaining robust results in bivalve phylogeny. Best-performing taxon sampling and alignment strategies were tested, and several data partitioning and molecular evolution models were analyzed, thus demonstrating the importance of molding and implementing non-trivial evolutionary models. In line with a more rigorous approach to data analysis, I also proposed a new method to assess taxon sampling, by developing Clarke and Warwick statistics: taxon sampling is a major concern in phylogenetic studies, and incomplete, biased, or improper taxon assemblies can lead to misleading results in reconstructing evolutionary trees. Theoretical methods are already available to optimize taxon choice in phylogenetic analyses, but most involve some knowledge of the genetic relationships of the group of interest, or even a well-established phylogeny itself; these data are not always available in general phylogenetic applications. The method I proposed measures the "phylogenetic representativeness" of a given sample or set of samples, and is based entirely on the pre-existing available taxonomy of the ingroup, which is commonly known to investigators. Moreover, it also accounts for instability and discordance in taxonomies. A Python-based script suite, called PhyRe, has been developed to implement all the analyses.
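The intuition behind Clarke and Warwick-style taxonomic distinctness, which underlies the representativeness measure above, can be sketched as the mean taxonomic path weight over all pairs of sampled taxa. The toy taxonomy and the rank-counting weight below are simplified assumptions of my own, not PhyRe's actual implementation:

```python
from itertools import combinations

def avtd(sample, ranks):
    """Simplified average taxonomic distinctness (Delta+): the mean
    pairwise taxonomic weight over a sample. `ranks` maps each taxon
    to its classification, finest rank first (e.g. genus, family, order)."""
    def weight(a, b):
        # Weight = how many ranks must be climbed before the two taxa
        # fall in the same group (same genus -> 1, same family -> 2, ...).
        for i, (x, y) in enumerate(zip(ranks[a], ranks[b]), start=1):
            if x == y:
                return i
        return len(ranks[a]) + 1
    pairs = list(combinations(sample, 2))
    return sum(weight(a, b) for a, b in pairs) / len(pairs)

# Toy taxonomy: (genus, family, order), illustrative names only.
ranks = {
    "sp1": ("G1", "F1", "O1"),
    "sp2": ("G1", "F1", "O1"),
    "sp3": ("G2", "F1", "O1"),
    "sp4": ("G3", "F2", "O1"),
}
print(round(avtd(["sp1", "sp2", "sp3", "sp4"], ranks), 2))  # -> 2.33
```

A sample whose taxa are spread across many families scores higher than one concentrated in a single genus, which is the sense in which the statistic rewards taxonomically representative sampling.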

Relevância:

20.00% 20.00%

Publicador:

Resumo:

This study has as its objective the genetic characterization of the ancient Mediterranean population of the Great White shark, Carcharodon carcharias, L. 1758. Using historical specimens, for the most part buccal arches but also whole stuffed examples from various national museums, research institutes and private collections, a dataset of 18 samples of Mediterranean provenance has been assembled, in order to increase the information available on this species in the Mediterranean. The importance of the Mediterranean provenance derives from the fact that no genetic characterization of this population exists, which leaves gaps in our knowledge of the species in this sea. The genetic characterization of the individuals initially proceeded by extraction of ancient DNA and analysis of sequence variation in mitochondrial DNA markers. This approach allowed a genetic comparison between ancient and contemporary populations of the same geographical area. In addition, the genetic characterization of the Mediterranean white shark population allowed a genetic comparison with populations from global "hot spots", using sequences published in online databases (NCBI GenBank). By analyzing the variability of the dataset, both in space and in time, I assessed the evolutionary relationships of the Mediterranean population of Great Whites with the global populations (Australia/New Zealand, South Africa, Pacific USA, West Atlantic), and the temporal trend in the variability of the Mediterranean population.
This method, based on the sequencing of two mitochondrial gene portions used as markers, showed that the Mediterranean Great White shark population is genetically more similar to the Australian Pacific and American Pacific populations than to the South African one, and that the South African population is anomalously distant from all other clusters. Interestingly, these results are inconsistent with the results from tagging of this species. In addition, there is evidence of differences between the ancient Mediterranean population and the modern one. This differentiation between the ancient and modern white shark populations may be the result of impacts on this species over the last two centuries.
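The population comparisons described above rest on distances between aligned mtDNA sequences. A minimal sketch of the uncorrected p-distance often used as a first pass, with invented ten-base fragments that are purely illustrative (not the study's data):

```python
def p_distance(seq_a, seq_b):
    """Uncorrected p-distance: fraction of differing sites between
    two aligned sequences of equal length."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    diffs = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return diffs / len(seq_a)

# Invented fragments standing in for Mediterranean, Pacific and
# South African haplotypes (illustrative only).
med = "ACGTACGTAC"
pac = "ACGTACGTAT"
saf = "ACGAACTTAT"

print(p_distance(med, pac))  # 0.1 -> closer to the Pacific sequence
print(p_distance(med, saf))  # 0.3 -> farther from the South African one
```

Real analyses would use model-corrected distances and full haplotype networks, but the clustering logic is the same.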

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The thesis stems from the need to understand how a historical building would behave in case of an earthquake, a purpose strongly linked to the fact that the majority of Italian structures are old buildings located in seismic areas. First, an architectural and chronological study is carried out in order to establish how the building developed over time; then, after reconstructing the skeleton of the analyzed building ("Villa i Bossi" in Gragnone, AR), a virtual model is created so that the main walls and sections can be tested against the magnitude of the seismic events expected in the reference area. This approach is aimed at verifying the reliability of the structure as composed of single units; the latter are treated individually in order to identify all the main critical points where rehabilitation might be needed. Finally, the most vulnerable sections are studied in detail and appropriate strengthening is proposed according to the current state of the art.
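In its simplest equivalent-static form, a wall-by-wall verification of the kind described reduces to comparing seismic demand with shear capacity. The following sketch uses assumed numbers and function names, not values or methods from the thesis:

```python
def base_shear(weight_kn, spectral_accel_g):
    """Equivalent static seismic demand: V = Sa * W, with Sa expressed
    as a fraction of gravity so the result stays in kN."""
    return spectral_accel_g * weight_kn

def wall_is_adequate(shear_capacity_kn, weight_kn, spectral_accel_g):
    """A wall passes the check when its capacity covers the demand."""
    return shear_capacity_kn >= base_shear(weight_kn, spectral_accel_g)

# Assumed example: a 1000 kN wall under 0.2 g demands 200 kN of shear.
print(wall_is_adequate(250.0, 1000.0, 0.2))  # True: 250 kN >= 200 kN
```

A code-compliant check would add behaviour factors, load combinations and out-of-plane mechanisms; the point here is only the demand-versus-capacity structure of each single-unit verification.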

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Tectonic and aesthetic implications of integrated monocoque logics and of stress lines analysis in architecture.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Since historical times, coastal areas throughout the eastern Mediterranean have been exposed to tsunami hazard. For many decades, knowledge about palaeotsunamis was based solely on historical accounts. However, results from timeline analyses reveal characteristics affecting the quality of the dataset (i.e. distribution of data, temporal thinning backward of events, local periodization phenomena) that emphasize its fragmentary character. As an increasing number of geo-scientific studies give convincing examples of well-dated tsunami signatures not reported in catalogues, the incompleteness of the record is a major problem for palaeotsunami research. While the compilation of historical data allows a first approach to identifying areas vulnerable to tsunamis, it must not be regarded as reliable for hazard assessment. Considering the increasing economic significance of coastal regions (e.g. for mass tourism) and the constantly growing coastal population, our knowledge of the local, regional and supraregional tsunami hazard along Mediterranean coasts has to be improved. For setting up a reliable tsunami risk assessment and developing risk mitigation strategies, it is of major importance (i) to identify areas at risk and (ii) to estimate the intensity and frequency of potential events. This approach is most promising when based on palaeotsunami research seeking to detect areas of high palaeotsunami hazard, to calculate recurrence intervals and to document palaeotsunami destructiveness in terms of wave run-up, inundation and long-term coastal change. Within the past few years, geo-scientific studies on palaeotsunami events have provided convincing evidence that, throughout the Mediterranean, ancient harbours were subject to strong tsunami-related disturbance or destruction.
Constructed to protect ships from storm and wave activity, harbours provide especially sheltered and quiescent environments and thus turned out to be valuable geo-archives for tsunamigenic high-energy impacts on coastal areas. Directly exposed to the Hellenic Trench and extensive local fault systems, coastal areas in the Ionian Sea and the Gulf of Corinth carry a considerably high risk of tsunami events. Geo-scientific and geoarchaeological studies carried out in the environs of the ancient harbours of Krane (Cefalonia Island), Lechaion (Corinth, Gulf of Corinth) and Kyllini (western Peloponnese) comprised on-shore and near-shore vibracoring and subsequent sedimentological, geochemical and microfossil analyses of the recovered sediments. Geophysical methods such as electrical resistivity tomography and ground penetrating radar were applied in order to detect subsurface structures and to verify stratigraphical patterns derived from vibracores over long distances. The overall geochronological framework of each study area is based on radiocarbon dating of biogenic material and age determination of diagnostic ceramic fragments. Results presented within this study provide distinct evidence of multiple palaeotsunami landfalls in the investigated areas. Tsunami signatures encountered in the environs of Krane, Lechaion and Kyllini include (i) coarse-grained allochthonous marine sediments intersecting silt-dominated quiescent harbour deposits and/or shallow marine environments, (ii) disturbed microfaunal assemblages and/or (iii) distinct geochemical fingerprints, as well as (iv) geo-archaeological destruction layers and (v) extensive units of beachrock-type calcarenitic tsunamites. For Krane, geochronological data yielded termini ad or post quem (maximum ages) for tsunami event generations dated to 4150 ± 60 cal BC, ~ 3200 ± 110 cal BC, ~ 650 ± 110 cal BC, and ~ 930 ± 40 cal AD.
Results for Lechaion suggest that the harbour was hit by strong tsunami impacts in the 8th-6th century BC, the 1st-2nd century AD and the 6th century AD. At Kyllini, the harbour site was affected by tsunami impact between the late 7th and early 4th century BC and between the 4th and 6th century AD. In the case of Lechaion and Kyllini, the final destruction of the harbour facilities also seems to be related to tsunami impact. Comparing the tsunami signals obtained for each study area with geo-scientific data on palaeotsunami events from other sites indicates that the investigated harbour sites represent excellent geo-archives for supra-regional mega-tsunamis.
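The recurrence-interval estimate mentioned above amounts to simple arithmetic on dated event generations. Taking the four Krane event ages from the text (central values only, ignoring the ± uncertainties and the absence of a year 0 in the historical calendar), a rough calculation looks like:

```python
# Krane tsunami event generations from the text; BC years as negatives.
events = [-4150, -3200, -650, 930]  # ~4150 BC, ~3200 BC, ~650 BC, ~930 AD

# Gaps between successive event generations, in years.
intervals = [later - earlier for earlier, later in zip(events, events[1:])]
mean_recurrence = sum(intervals) / len(intervals)

print(intervals)        # [950, 2550, 1580]
print(mean_recurrence)  # roughly 1700 years between events
```

With only four dated generations, and maximum ages at that, such a mean is indicative at best; the text's point is precisely that longer, better-dated geo-archives are needed for robust recurrence estimates.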

Relevância:

20.00% 20.00%

Publicador:

Resumo:

This thesis was part of a multidisciplinary research project funded by the German Research Foundation (“Bevölkerungsgeschichte des Karpatenbeckens in der Jungsteinzeit und ihr Einfluss auf die Besiedlung Mitteleuropas”, grant no. Al 287/10-1) aimed at elucidating the population history of the Carpathian Basin during the Neolithic. The Carpathian Basin was an important waypoint on the spread of the Neolithic from southeastern to central Europe. On the Great Hungarian Plain (Alföld), the first farming communities appeared around 6000 cal BC. They belonged to the Körös culture, which derived from the Starčevo-Körös-Criş complex in the northern Balkans. Around 5600 cal BC the Alföld-Linearbandkeramik (ALBK), so called due to its stylistic similarities with the Transdanubian and central European LBK, emerged in the northwestern Alföld. Following a short “classical phase”, the ALBK split into several regional subgroups during its later stages, but did not expand beyond the Great Hungarian Plain. Marking the beginning of the late Neolithic period, the Tisza culture first appeared in the southern Alföld around 5000 cal BC and subsequently spread into the central and northern Alföld. Together with the Herpály and Csőszhalom groups it was an integral part of the late Neolithic cultural landscape of the Alföld. Up until now, the Neolithic cultural succession on the Alföld has been almost exclusively studied from an archaeological point of view, while very little is known about the population genetic processes during this time period. The aim of this thesis was to perform ancient DNA (aDNA) analyses on human samples from the Alföld Neolithic and analyse the resulting mitochondrial population data to address the following questions: is there population continuity between the Central European Mesolithic hunter-gatherer metapopulation and the first farming communities on the Alföld? Is there genetic continuity from the early to the late Neolithic? 
Are there genetic as well as cultural differences between the regional groups of the ALBK? Additionally, the relationships between the Alföld and the neighbouring Transdanubian Neolithic as well as other European early farming communities were evaluated to gain insights into the genetic affinities of the Alföld Neolithic in a larger geographic context. 320 individuals were analysed for this study; reproducible mitochondrial haplogroup information (HVS-I and/or SNP data) could be obtained from 242 Neolithic individuals. According to the analyses, population continuity between hunter-gatherers and the Neolithic cultures of the Alföld can be excluded at any stage of the Neolithic. In contrast, there is strong evidence for population continuity from the early to the late Neolithic. All cultural groups on the Alföld were heavily shaped by the genetic substrate introduced into the Carpathian Basin during the early Neolithic by the Körös and Starčevo cultures. Accordingly, genetic differentiation between regional groups of the ALBK is not very pronounced. The Alföld cultures are furthermore genetically highly similar to the Transdanubian Neolithic cultures, probably due to common ancestry. In the wider European context, the Alföld Neolithic cultures are also highly similar to the central European LBK, while they differ markedly from contemporaneous populations of the Iberian Peninsula and the Ukraine. Thus, the Körös culture, the ALBK and the Tisza culture can be regarded as part of a "genetic continuum" that links the Neolithic Carpathian Basin to central Europe and likely has its roots in the Starčevo-Körös-Criş complex of the northern Balkans.
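Population comparisons of the kind described usually start from haplogroup frequency profiles per group. A minimal sketch, with invented haplogroup calls rather than the study's actual data:

```python
from collections import Counter

def haplogroup_frequencies(calls):
    """Relative frequency of each mitochondrial haplogroup in a sample."""
    total = len(calls)
    return {hg: count / total for hg, count in Counter(calls).items()}

# Hypothetical haplogroup calls for two groups (illustrative only,
# not the thesis dataset of 242 typed individuals).
albk_calls  = ["H", "H", "K", "N1a", "T2", "K", "H", "N1a"]
tisza_calls = ["H", "K", "K", "T2", "H", "N1a", "H", "T2"]

print(haplogroup_frequencies(albk_calls))
print(haplogroup_frequencies(tisza_calls))
```

Similar profiles across groups, as sketched here, are the frequency-level signal behind statements like "genetic differentiation between regional groups of the ALBK is not very pronounced"; formal tests (e.g. Fst, exact tests of population differentiation) build on such counts.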
