10 results for Logic, Ancient
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Interaction protocols establish how different computational entities can interact with each other. The interaction can be aimed at the exchange of data, as in 'communication protocols', or oriented towards achieving some result, as in 'application protocols'. Moreover, with the increasing complexity of modern distributed systems, protocols are also used to keep such complexity under control and to ensure that the system as a whole evolves with certain features. However, the extensive use of protocols has raised several issues, from the language for specifying them to the various aspects of verification. Computational Logic provides models, languages and tools that can be effectively adopted to address these issues: its declarative nature can be exploited for a protocol specification language, while its operational counterpart can be used to reason upon such specifications. In this thesis we propose a proof-theoretic framework, called SCIFF, together with its extensions. SCIFF is based on Abductive Logic Programming and provides a formal specification language with a clear declarative semantics (based on abduction). The operational counterpart is given by a proof procedure that allows one to reason upon the specifications and to test the conformance of given interactions w.r.t. a defined protocol. Moreover, by suitably adapting the SCIFF framework, we propose solutions for addressing (1) the verification of protocol properties (g-SCIFF framework), and (2) the a-priori conformance verification of peers w.r.t. a given protocol (AlLoWS framework). We also introduce an agent-based architecture, the SCIFF Agent Platform, in which the same protocol specification can be used to program the interacting peers and to ease their implementation.
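The core idea of expectation-based conformance checking can be sketched in a few lines. The following is an illustrative toy, not SCIFF's actual syntax or proof procedure: a hypothetical protocol rule states that every `request` must be answered by a `reply` within 10 time units, and an interaction trace is conformant if every such positive expectation is fulfilled.

```python
# Toy sketch of expectation-based conformance checking (illustrative only;
# this is NOT SCIFF syntax, and the "reply within 10 ticks" rule is invented).

def expectations(trace):
    """For every request(A, B) at time T, expect reply(B, A) by T + 10."""
    exp = []
    for (event, sender, receiver, t) in trace:
        if event == "request":
            exp.append(("reply", receiver, sender, t, t + 10))
    return exp

def conformant(trace):
    """A trace is conformant if every generated expectation is fulfilled."""
    for (ev, snd, rcv, t0, deadline) in expectations(trace):
        fulfilled = any(e == ev and a == snd and b == rcv and t0 < t <= deadline
                        for (e, a, b, t) in trace)
        if not fulfilled:
            return False  # a positive expectation was violated
    return True

trace_ok = [("request", "alice", "bob", 1), ("reply", "bob", "alice", 5)]
trace_bad = [("request", "alice", "bob", 1)]
print(conformant(trace_ok), conformant(trace_bad))  # True False
```

In SCIFF proper, expectations are abducibles handled by the proof procedure rather than lists built by ad-hoc code; the sketch only conveys the declarative flavor of checking happened events against expected ones.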
Abstract:
Sustainable computer systems require some flexibility to adapt to unpredictable environmental changes. A solution lies in autonomous software agents, which can adapt autonomously to their environments. Though autonomy allows agents to decide which behavior to adopt, a disadvantage is a lack of control and, as a side effect, even untrustworthiness: we want to keep some control over such autonomous agents. How can autonomous agents be controlled while respecting their autonomy? A solution is to regulate agents' behavior by norms. The normative paradigm makes it possible to control autonomous agents while respecting their autonomy, limiting untrustworthiness and increasing system compliance. It can also facilitate the design of the system, for example by regulating the coordination among agents. However, an autonomous agent will follow norms or violate them under some conditions. Under what conditions is a norm binding upon an agent? While autonomy is regarded as the driving force behind the normative paradigm, cognitive agents provide a basis for modeling the bindingness of norms. In order to cope with the complexity of modeling cognitive agents and normative bindingness, we adopt an intentional stance. Since agents are embedded in a dynamic environment, events may not occur at the same instant. Accordingly, our cognitive model is extended to account for some temporal aspects. Special attention is given to the temporal peculiarities of the legal domain such as, among others, the time in force and the time in efficacy of provisions. Some types of normative modifications are also discussed within the framework. It is noteworthy that our temporal account of legal reasoning is integrated with our commonsense temporal account of cognition. As our intention is to build sustainable reasoning systems running in unpredictable environments, we adopt a declarative representation of knowledge.
A declarative representation of norms makes it easier to update their system representation, thus facilitating system maintenance, and improves system transparency, thus easing system governance. Since agents are bounded and embedded in unpredictable environments, and since conflicts may appear among mental states and norms, agent reasoning has to be defeasible, i.e. new pieces of information can invalidate formerly derivable conclusions. In this dissertation, our model is formalized in a non-monotonic logic, namely a temporal modal defeasible logic, in order to account for the interactions between normative systems and software cognitive agents.
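The defeasibility mentioned above can be illustrated with a minimal sketch. The encoding below is hypothetical and far simpler than the dissertation's temporal modal defeasible logic: rules fire by default, but a conflicting rule that is declared superior defeats them, so adding a fact (an exception) retracts a previously derivable conclusion.

```python
# Minimal sketch of defeasible inference (invented encoding, not the
# dissertation's formalism): a default rule r1 is defeated by the
# superior exception rule r2 when both are applicable.

rules = [
    ("r1", {"promised"}, "obliged"),    # norms bind by default...
    ("r2", {"emergency"}, "-obliged"),  # ...unless an exception applies
]
superior = {"r2": {"r1"}}  # r2 beats r1 on conflict

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def derive(facts):
    """A rule's conclusion holds if the rule beats every applicable attacker."""
    applicable = [(n, p, c) for (n, p, c) in rules if p <= facts]
    out = set()
    for name, _, concl in applicable:
        attackers = [n2 for (n2, _, c2) in applicable if c2 == negate(concl)]
        if all(att in superior.get(name, set()) for att in attackers):
            out.add(concl)
    return out

print(derive({"promised"}))               # {'obliged'}
print(derive({"promised", "emergency"}))  # {'-obliged'}
```

The non-monotonicity is visible in the two calls: learning the new fact `"emergency"` invalidates the formerly derivable conclusion `"obliged"`.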
Abstract:
«In altri termini mi sfuggiva e ancora oggi mi sfugge gran parte del significato dell’evoluzione del tempo; come se il tempo fosse una materia che osservo dall’esterno. Questa mancanza di evoluzione è fonte di alcune mie sventure ma anche mi appartiene con gioia.» Aldo Rossi, Autobiografia scientifica. The temporal dimension underpinning the draft of Autobiografia scientifica by Aldo Rossi may be related to what Lucien Lévy-Bruhl, the well-known French anthropologist, defines as “primitive mentality” and “prelogical” conscience: the book of life has lost its page numbers, even its punctuation. For Lévy-Bruhl, and certainly for Rossi, life or its summing up becomes a continuous account of ellipses, gaps and repetitions that may be read from left to right or vice versa, from head to foot or vice versa, without distinction. Rossi’s autobiographical writing seems to accept and support the confusion with which memories have been collected, recording them in the order memory gives them in mental distillation or simply in the chronological order in which they happened. For Rossi, this confusion reflects the melting of memory elements into a composite image which is the result of a fusion. He is aware that the same sap pervades all the memories he is going to put in order: each of them has a common denominator. Differences have diminished, almost faded; the quick glance prevails over the distinction of each episode. Rossi’s writing is beyond the categories dependent on time: past and present, before and now. For Rossi, only repetition – the repetition the text makes possible an indefinite number of times – gives peculiarity to the event. As Gilles Deleuze knows, “things” may only last as “singleness”: the more frequent the repetition, the more singular the memory phenomenon that recurs, because only what is singular magnifies itself and happens endlessly forever.
Rossi understands that, in raising “the first time to the nth power”, repetition becomes glorification. His may be an autobiography that, celebrating originality, enhances the memory event in the repetition; in this it greatly differs from biographical reproduction, in which each repetition is but a weaker echo, a duller copy, endowed with a smaller and smaller power in comparison with the original. Paradoxically, for Deleuze repetition asserts the originality and singularity of what is repeated. Rossi seems to share the thought expressed by Kierkegaard in the essay Repetition: «The hope is a graceful maiden slipping through your fingers; the memory of an elderly woman, indeed pretty, but never satisfactory if necessary; the repetition is a loved friend you are never tired of, as it is only the new that makes you bored. The old never bores you, and its presence makes you happy [...] life is but a repetition [...] here is the beauty of life». Rossi knows well that repetition hints at the lasting stability of cosmic time. Kierkegaard goes on: «The world exists, and it exists as a repetition». Rossi devotes himself, on purpose and in all conscience, to collecting, inventorying and «reviewing life», his own life, according to a recovery not from the past but of the past: a search work, the «recherche du temps perdu», as Proust entitled his masterpiece on memory. If you do not want past time to be wasted, you must give it presence. «Memoria e specifico come caratteristiche per riconoscere se stesso e ciò che è estraneo mi sembravano le più chiare condizioni e spiegazioni della realtà. Non esiste uno specifico senza memoria, e una memoria che non provenga da un momento specifico; e solo questa unione permette la conoscenza della propria individualità e del contrario (self e non-self)». Rossi wants to understand himself, his own character; it is really his own character that requires to be understood, to increase its own introspective ability and intelligence.
«Può sembrare strano che Planck e Dante associno la loro ricerca scientifica e autobiografica con la morte; una morte che è in qualche modo continuazione di energia. In realtà, in ogni artista o tecnico, il principio della continuazione dell’energia si mescola con la ricerca della felicità e della morte». The eschatological incipit of Rossi’s autobiography refers to Freud’s thought in the exact circularity of Dante’s framework and in the equally exact circularity of the statement of the principle of the conservation of energy: in fact, it was Freud who connected repetition to death. For Freud, the desire for repetition is an instinct rooted in biology. The primary aim of such an instinct would be to restore a previous condition, so that the repeated history represents a part of the past (even if concealed) and, relieving the removal, reduces anguish and tension. So, Freud asks himself, what is the most remote state to which the instinct, through repetition, wants to go back? It is a pre-vital, inorganic condition of pure entropy, a not-to-be condition in which no tension exists; in other words, Death. With the theme of death, Rossi introduces the theme of circularity, which in turn refers to the sense of continuity in transformation or, conversely, of transformation in continuity. «[...] la descrizione e il rilievo delle forme antiche permettevano una continuità altrimenti irripetibile, permettevano anche una trasformazione, una volta che la vita fosse fermata in forme precise». Rossi’s attitude seems to hint at the reflection on time and – in a broad sense – at the thought on life and things expressed by T.S. Eliot in Four Quartets: «Time present and time past / Are both perhaps present in time future, / And time future is contained in time past. / If all time is eternally present / All time is unredeemable. / What might have been is an abstraction / Remaining a perpetual possibility / Only in a world of speculation.
/ What might have been and what has been / Point to one end, which is always present. [...]». Aldo Rossi’s autobiographical story coincides with the description of “things”, and with the description of himself through things, in exact parallel with craft or art. He seems to make all things made by man coincide with the personal or artistic story, with the consequent immediate necessity of formulating a new interpretation: the flow of things has never met a total stop; all that exists nowadays is but a repetition or a variant of something existing some time ago, and so on, without any interruption, back to the early dawn of human life. Nevertheless, Rossi must make specific subdivisions inside the continuous connection in time – of his time – even if limited by a present beginning and end of his own existence. This artist, as a “historian” of himself and his own life – as an auto-biographer – enjoys the privilege of being able to decide if and how to make the cut at a certain point rather than at another, without being compelled to justify his choice. In this sense, his story is a very ductile and flexible matter: a good story-teller can choose any moment to start a certain sequence of events. Yet Rossi is aware that, beyond the mere narration, there is the problem of identifying in history – his own personal story – those flakings where a clean cut enables the separation of events of a different nature. In order to do so, he has not only to make an inventory of his own “things”, but also to appeal to the authority of the Divina Commedia, begun by Dante when he was 30. «A trent’anni si deve compiere o iniziare qualcosa di definitivo e fare i conti con la propria formazione». For Rossi, the poet exercises his authority not only in the text, but also in his will to set out on a mystical journey and to hand it down through an exact descriptive will.
Rossi turns not only to the authority of poetry, but also evokes the authority of science with Max Planck and his Scientific Autobiography, published in Italian translation by Einaudi in 1956. Concerning Planck, Rossi takes up a seemingly secondary element in his account, where the German physicist «[...] risale alle scoperte della fisica moderna ritrovando l’impressione che gli fece l’enunciazione del principio di conservazione dell’energia; [...]». It is again the act of describing that links Rossi to Planck; it is the description of a circularity, that of the conservation of energy, which endorses Rossi’s autobiographical speech in search of both happiness and death. Rossi seems to agree perfectly with the thought of Planck at the opening of his own autobiography: «The decision to devote myself to science was a direct consequence of a discovery which never ceased to arouse my enthusiasm since my early youth: the laws of human thought coincide with the ones governing the sequences of the impressions we receive from the world surrounding us, so that mere logic can enable us to penetrate into the latter’s mechanism. It is essential that the outer world is something independent of man, something absolute. The search for the laws dealing with this absolute seems to me the highest scientific aim in life». For Rossi, the survey of his own life represents a way to change events into experiences, to concentrate the emotions and group them in meaningful plots: «It seems, as one becomes older, / That the past has another pattern, and ceases to be a mere sequence [...]» Eliot wrote in Four Quartets, a meditation on time, old age and memory. And he goes on: «We had the experience but missed the meaning, / And approach to the meaning restores the experience / In a different form, beyond any meaning [...]».
In his autobiography – but not only there – Rossi restores the most ancient sense of memory, aware that for at least 15 centuries the Latin word memoria was used to denote the activity of bringing images back to mind: the psychology of memory, which starts with Aristotle (De Anima), used to consider such a faculty totally essential to the mind. Keith Basso writes: «The thought materializes in the form of “images”». Rossi knows well – as Aristotle said – that if you do not have a collection of mental images to remember – imagination – there is no thought at all. According to this psychological tradition, what today we conventionally call “memory” is but a way of imagining created by time. Rossi, consciously entering this stream of thought, which passes through the Renaissance ars memoriae to reach us, gives great importance to the word and assumes it as a real place, much more than a recollection, even more than a production and an emotional elaboration of images.
Abstract:
Several activities were conducted during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (Photo-Multiplier Tube). The low-power analog acquisition makes it possible to sample multiple channels of the PMT simultaneously at different gain factors, in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been realized to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After validation of the whole front-end architecture, this feature would probably be integrated into a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the FPGA's configuration memory required the integration of a flash ISP (In-System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory, the behavior of the LIRA chip has been investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA.
The PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to a loan from the University of Rome and INFN, a full readout chain equivalent to that present in NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which was able to receive and execute commands issued from the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front-end but inheriting most of the digital logic present in the current DAQ board discussed in this thesis. As regards the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. This chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant for charge collection and for shielding the analog electronics from digital noise. The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line facility at CERN, Geneva (CH).
The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which made it possible to store about 90 million events in 7 equivalent days of beam live-time. My activities basically concerned the realization of a firmware interface towards and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter I worked on the DAQ software to implement a proper Slow Control interface for the APSEL4D. Several APSEL4D chips with different thinnings were tested during the test beam. Those thinned to 100 and 300 um presented an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, providing good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution has been extracted from the width of the residual plot, taking into account the multiple scattering effect.
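The pitch/sqrt(12) formula follows from assuming a hit lands uniformly within one pixel: the standard deviation of a uniform distribution of width p is p/sqrt(12). A quick numerical check (the 50 um pitch below is an illustrative assumption, not a value taken from the abstract):

```python
import math

# Intrinsic resolution of a binary (digital) pixel readout: a hit uniformly
# distributed across a pixel of pitch p has standard deviation p / sqrt(12).
# NOTE: the 50 um pitch is a hypothetical example value.
pitch_um = 50.0
resolution_um = pitch_um / math.sqrt(12)
print(f"{resolution_um:.1f} um")  # ~14.4 um
```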
Abstract:
The advent of distributed and heterogeneous systems has laid the foundation for the birth of new architectural paradigms, in which many separate and autonomous entities collaborate and interact with the aim of achieving complex strategic goals that would be impossible to accomplish on their own. A non-exhaustive list of systems targeted by such paradigms includes Business Process Management, Clinical Guidelines and Careflow Protocols, and Service-Oriented and Multi-Agent Systems. It is largely recognized that engineering these systems requires novel modeling techniques. In particular, many authors claim that an open, declarative perspective is needed to complement the closed, procedural nature of state-of-the-art specification languages. For example, the ConDec language has recently been proposed to target the declarative and open specification of Business Processes, overcoming the over-specification and over-constraining issues of classical procedural approaches. On the one hand, the success of such novel modeling languages strongly depends on their usability by non-IT-savvy users: they must provide an appealing, intuitive graphical front-end. On the other hand, they must be amenable to verification, in order to guarantee the trustworthiness and reliability of the developed model, as well as to ensure that the actual executions of the system effectively comply with it. In this dissertation, we claim that Computational Logic is a suitable framework for dealing with the specification, verification, execution, monitoring and analysis of these systems. We propose to adopt an extended version of the ConDec language for specifying interaction models with a declarative, open flavor.
We show how all the (extended) ConDec constructs can be automatically translated into the CLIMB Computational Logic-based language, and illustrate how the corresponding reasoning techniques can be successfully exploited to provide support and verification capabilities along the whole life cycle of the targeted systems.
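The open, declarative flavor of ConDec-style constraints can be illustrated with one of its standard templates. The sketch below checks the "response(a, b)" constraint (every occurrence of activity a must eventually be followed by b) over an execution trace; it is a simplified illustration, not ConDec's graphical notation nor CLIMB's logic-based encoding.

```python
# Illustrative check of a ConDec-style "response(a, b)" constraint:
# every occurrence of a must be eventually followed by an occurrence of b.
# Any activity not mentioned by a constraint remains freely executable,
# which is the "open" flavor of declarative process models.

def response_holds(trace, a, b):
    for i, ev in enumerate(trace):
        if ev == a and b not in trace[i + 1:]:
            return False  # an occurrence of a was never followed by b
    return True

print(response_holds(["order", "pay", "ship"], "pay", "ship"))  # True
print(response_holds(["order", "pay"], "pay", "ship"))          # False
```

Note the contrast with procedural models: the constraint forbids nothing except leaving a `pay` unanswered, so traces with extra or reordered activities remain compliant.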
Abstract:
Ancient pavements are composed of a variety of preparatory or foundation layers constituting the substrate, and of a layer of tesserae, pebbles or marble slabs forming the surface of the floor. In other cases, the surface consists of a beaten and polished mortar layer. The term mosaic is associated with the presence of tesserae or pebbles, while the more general term pavement is used in all cases. As past and modern excavations of ancient pavements have demonstrated, not all pavements display the stratigraphy of the substrate described in the ancient literary sources. In fact, the number and thickness of the preparatory layers, as well as the nature and properties of their constituent materials, often vary among pavements placed in different sites, in different buildings within the same site, or even within the same building. For this reason, an investigation that takes into account the whole structure of the pavement is important when studying the archaeological context of the site where it is placed, when designing materials to be used for its maintenance and restoration, when documenting it, and when presenting it to the public. Five case studies, represented by archaeological sites containing floor mosaics and other kinds of pavement dated to the Hellenistic and Roman periods, have been investigated by means of in situ and laboratory analyses. The results indicated that the characteristics of the studied pavements, namely the number and thickness of the preparatory layers and the properties of the mortars constituting them, vary according to the ancient use of the room where the pavements are placed and to the type of surface upon which they were built. The study contributed to the understanding of the function and technology of the pavements' substrate and to the characterization of its constituent materials.
Furthermore, the research underlined the importance of investigating the whole structure of the pavement, including the foundation surface, in the interpretation of the archaeological context where it is located. A series of practical applications of the results of the research has been suggested: in the design of repair mortars for pavements, in the documentation of ancient pavements in conservation practice, and in the presentation of ancient pavements to the public, both in situ and in museums.
Abstract:
Recently, an ever increasing degree of automation has been observed in most industrial automation processes. This increase is driven by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency and low costs in design, realization and maintenance. This growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda and buy boxed products such as food or cigarettes. Another indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often this is the case in large-scale systems organized in a modular and distributed manner.
Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the operator in charge of the machine to promptly and carefully take the actions needed to establish or restore optimal operating conditions; and managing diagnostic information in real time, as a support for the maintenance operations of the machine. The kind of facilities that designers can directly find on the market, in terms of software component libraries, provides adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs.
Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method for dealing organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have traditionally been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very "unstructured" way. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies.
Industrial automation has lately been receiving this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years a considerable growth has been observed in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and electronic devices are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance.
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to the formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey of the state of the software engineering paradigm applied to industrial automation is presented. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, conclusive remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5; Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
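The discrete-event flavor of this kind of formal verification can be conveyed with a toy example. The automaton and event names below are invented for illustration (they are not the thesis's Generalized Actuator models): a plant is modeled as states and event-labeled transitions, and a safety property ("the jam state is unreachable under the supervised event set") is checked by exhaustive reachability, which is the basic mechanism behind model checking of discrete event systems.

```python
from collections import deque

# Hypothetical plant model as a finite automaton: (state, event) -> state.
# "overload" stands in for an uncontrollable fault event.
transitions = {
    ("idle", "start"): "moving",
    ("moving", "stop"): "idle",
    ("moving", "overload"): "jam",
}
controlled_events = {"start", "stop"}  # events the supervisor allows

def reachable(start, events):
    """Breadth-first reachability over the enabled event set."""
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        for e in events:
            t = transitions.get((s, e))
            if t and t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

print("jam" in reachable("idle", controlled_events))                # False
print("jam" in reachable("idle", controlled_events | {"overload"})) # True
```

Real DES verification adds observability and diagnosability on top of reachability (e.g. diagnoser automata for fault detection), but the state-space exploration above is the common core.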
Abstract:
The main scope of my PhD is the reconstruction of the large-scale bivalve phylogeny on the basis of four mitochondrial genes, with samples taken from all major groups of the class. To my knowledge, it is the first attempt of such a breadth in Bivalvia. I decided to focus on both ribosomal and protein-coding DNA sequences (two ribosomal encoding genes, 12s and 16s, and two protein-coding ones, cytochrome c oxidase I and cytochrome b), since both the literature and my preliminary results confirmed the importance of combined gene signals in improving the evolutionary pathways of the group. Moreover, I wanted to propose a methodological pipeline that proved useful in obtaining robust results in bivalve phylogeny. Best-performing taxon sampling and alignment strategies were tested, and several data partitioning and molecular evolution models were analyzed, demonstrating the importance of molding and implementing non-trivial evolutionary models. In line with a more rigorous approach to data analysis, I also proposed a new method to assess taxon sampling, developed from Clarke and Warwick's statistics: taxon sampling is a major concern in phylogenetic studies, and incomplete, biased, or improper taxon assemblies can lead to misleading results in reconstructing evolutionary trees. Theoretical methods are already available to optimize taxon choice in phylogenetic analyses, but most involve some knowledge of the genetic relationships of the group of interest, or even a well-established phylogeny itself; these data are not always available in general phylogenetic applications. The method I propose measures the "phylogenetic representativeness" of a given sample or set of samples; it is based entirely on the pre-existing available taxonomy of the ingroup, which is commonly known to investigators, and it also accounts for instability and discordance in taxonomies. A Python-based script suite, called PhyRe, has been developed to implement all analyses.
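The Clarke and Warwick statistic underlying this kind of taxonomy-based assessment is average taxonomic distinctness (often written Δ+): the mean taxonomic path weight over all pairs of species in a sample. The sketch below computes it for a toy four-species taxonomy invented for illustration; it is a simplified stand-in for what PhyRe does, not PhyRe's actual code or weighting scheme.

```python
from itertools import combinations

# Toy taxonomy (invented): each species maps to (genus, family, order).
taxonomy = {
    "sp1": ("g1", "f1", "o1"),
    "sp2": ("g1", "f1", "o1"),
    "sp3": ("g2", "f1", "o1"),
    "sp4": ("g3", "f2", "o1"),
}

def pair_weight(a, b):
    """Number of ranks to climb before the two species share a taxon."""
    for w, (ra, rb) in enumerate(zip(taxonomy[a], taxonomy[b]), start=1):
        if ra == rb:
            return w
    return len(taxonomy[a]) + 1  # no shared rank at all

def avg_taxonomic_distinctness(sample):
    """Clarke & Warwick's Delta+: mean pairwise taxonomic weight."""
    pairs = list(combinations(sample, 2))
    return sum(pair_weight(a, b) for a, b in pairs) / len(pairs)

print(avg_taxonomic_distinctness(["sp1", "sp2", "sp3", "sp4"]))  # ≈ 2.33
```

A sample whose Δ+ is close to that of the full ingroup taxonomy spreads evenly across the higher taxa, which is the intuition behind using the statistic to score phylogenetic representativeness.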