24 results for non-trivial data structures
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The main aim of my PhD is the reconstruction of the large-scale bivalve phylogeny on the basis of four mitochondrial genes, with samples taken from all major groups of the class. To my knowledge, it is the first attempt of such breadth in Bivalvia. I decided to focus on both ribosomal and protein-coding DNA sequences (two ribosomal RNA genes, 12S and 16S, and two protein-coding ones, cytochrome c oxidase I and cytochrome b), since both the literature and my preliminary results confirmed the importance of combined gene signals in improving the reconstruction of the group's evolutionary pathways. Moreover, I wanted to propose a methodological pipeline that proved useful for obtaining robust results in bivalve phylogeny. Specifically, best-performing taxon sampling and alignment strategies were tested, and several data partitioning and molecular evolution models were analyzed, thus demonstrating the importance of shaping and implementing non-trivial evolutionary models. In line with a more rigorous approach to data analysis, I also proposed a new method to assess taxon sampling, building on Clarke and Warwick statistics: taxon sampling is a major concern in phylogenetic studies, and incomplete, biased, or improper taxon assemblies can lead to misleading results in reconstructing evolutionary trees. Theoretical methods are already available to optimize taxon choice in phylogenetic analyses, but most involve some knowledge about the genetic relationships of the group of interest, or even a well-established phylogeny itself; these data are not always available in general phylogenetic applications. The method I proposed measures the "phylogenetic representativeness" of a given sample or set of samples, and it is based entirely on the pre-existing, available taxonomy of the ingroup, which is commonly known to investigators. Moreover, it also accounts for instability and discordance in taxonomies. A Python-based script suite, called PhyRe, has been developed to implement all analyses.
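Clarke and Warwick's statistics are built around average taxonomic distinctness, i.e. the mean taxonomic path length between pairs of sampled taxa. The sketch below is a minimal Python illustration of that quantity computed from a ranked taxonomy table; the function and taxon names are hypothetical and it is not PhyRe code.

```python
from itertools import combinations

def taxonomic_distance(tax_a, tax_b):
    """Path weight between two taxa given their ranked classifications.

    tax_a and tax_b are tuples ordered from the most inclusive rank to the
    least inclusive, e.g. (order, family, genus, species); the weight is the
    number of ranks at which the two taxa differ.
    """
    shared = 0
    for rank_a, rank_b in zip(tax_a, tax_b):
        if rank_a != rank_b:
            break
        shared += 1
    return len(tax_a) - shared

def average_taxonomic_distinctness(taxa):
    """Clarke and Warwick's AvTD: mean pairwise taxonomic path weight."""
    pairs = list(combinations(taxa, 2))
    return sum(taxonomic_distance(a, b) for a, b in pairs) / len(pairs)

# Toy ingroup sample, ranks = (order, family, genus, species).
sample = [
    ("Ostreida", "Ostreidae", "Ostrea", "edulis"),
    ("Ostreida", "Ostreidae", "Magallana", "gigas"),
    ("Mytilida", "Mytilidae", "Mytilus", "edulis"),
]
print(average_taxonomic_distinctness(sample))  # (2 + 4 + 4) / 3
```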
Abstract:
This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic Scheduling Problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns time and resource assignment to a set of activities, to be indefinitely repeated, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications, specified as SDF graphs, onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions. The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on problems of practical size.
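As a rough illustration of the modular-arithmetic view of cyclic precedences (a generic sketch, not the thesis's modular precedence constraint or its filtering algorithm), in a periodic schedule with period lambda an activity j must start at least d_i time units after activity i, possibly shifted forward by a whole number of periods; that shift is the iteration distance. The helper names below are hypothetical.

```python
def iteration_distance(start_i, dur_i, start_j, period, min_distance=0):
    """Smallest iteration distance making a modular precedence feasible.

    In a schedule repeated with the given period, activity j of iteration
    k + delta must start no earlier than the end of activity i of
    iteration k, i.e. start_j + delta * period >= start_i + dur_i.
    Returns the smallest integer delta >= min_distance satisfying this,
    or None if the period is not positive.
    """
    if period <= 0:
        return None
    delta = min_distance
    while start_j + delta * period < start_i + dur_i:
        delta += 1
    return delta

# Example: period 10, activity i runs in [7, 12), activity j starts at 2,
# so j must belong to the next iteration (delta = 1).
print(iteration_distance(start_i=7, dur_i=5, start_j=2, period=10))
```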
Abstract:
In this thesis the evolution of methods for the analysis of techno-social systems will be reported, through the various research experiences directly undertaken. The first case presented is a study based on data mining of a word-association dataset named Human Brain Cloud: its validation will be addressed and, also through non-trivial modeling, a better understanding of language properties will be presented. Then, a real complex-system experiment will be introduced: the WideNoise experiment in the context of the EveryAware European project. The project and the course of the experiment will be illustrated and the data analysis will be presented. Then the Experimental Tribe platform for social computation will be introduced. It has been conceived to help researchers in the implementation of web experiments, and it also aims to catalyze the cumulative growth of experimental methodologies and the standardization of the tools cited above. In the last part, three other research experiences which already took place on the Experimental Tribe platform will be discussed in detail, from the design of the experiment to the analysis of the results and, eventually, to the modeling of the systems involved. The experiments are: CityRace, about the measurement of human traffic-facing strategies; laPENSOcosì, aiming to unveil the structure of political opinion; and AirProbe, implemented again in the EveryAware project framework, which consisted in monitoring the shift in air-quality opinion of a community informed about local air pollution. In the end, the evolution of techno-social systems investigation methods shall emerge, together with the opportunities and the threats offered by this new scientific path.
Abstract:
Non-B DNA structures like R-loops and G-quadruplexes play a pivotal role in several vital cellular processes, like the regulation of DNA transcription. Misregulation of these non-canonical DNA structures can often lead to genome instability, DNA damage and, eventually, to the activation of an innate immune response. For such reasons they have been studied as adjuvants in anticancer therapies. Here we studied drugs targeting R-loops (Top1 poisons) and G4s (hydrazone derivatives) in order to observe their effects in terms of DNA damage induction and, subsequently, activation of the innate immune response. We studied how non-cytotoxic doses of camptothecin and LMP-776 impact genome instability, are capable of inducing DNA damage and micronuclei and, eventually, lead to an innate immune gene response via the cGAS/STING pathway. G-quadruplexes are another ubiquitous, non-canonical DNA structure, more abundant in telomeric regions, with a marked relation to the impairment of telomerase and the regulation of DNA replication and transcription. Furthermore, we investigated the properties of newly synthesized molecules belonging to the highly promising class of hydrazone derivatives, in terms of cytotoxicity, ability to stabilize G4 structures, induce DNA damage, and activate interferon-β production. Both Top1 poisons and G4 stabilizers possess several features that can be very useful in clinical applications, in light of their ability to stimulate innate immune response factors and exert a certain cell-killing power; moreover, they offer a broad and diverse range of treatment options to face a variety of patient treatment needs. It is for these very reasons that it is of utmost importance that further studies are conducted on these compounds, in order to synthesize new and increasingly powerful and flexible ones, with fewer side effects, to tailor therapies to the features of specific cancers and patients.
Abstract:
The thesis describes three studies concerning the role of the Economic Preference set investigated in the Global Preference Survey (GPS) in the following cases: 1) the needs of women with breast cancer; 2) pain undertreatment in oncology; 3) the legal status of euthanasia and assisted suicide. The analyses, based on regression techniques, were always conducted on aggregate data and revealed in all cases a possible role of the Economic Preferences studied, also resisting the concomitant effect of the other covariates considered from time to time. Regarding the individual studies, the related conclusions are: 1) Economic Preferences appear to play a role in influencing the needs of women with breast cancer, albeit one of non-trivial interpretation, statistically "resisting" the concomitant effect of the other independent variables considered. However, these results should be considered preliminary and need further confirmation, possibly with prospective studies conducted at the level of the individual; 2) the results show a good degree of internal consistency with regard to pro-social GPS scores, since they are all found to be non-statistically significant and united, albeit only weakly in trend, by a negative correlation with the percentage of patients with undertreated pain. Sharper, at least statistically, is the role of Patience and Willingness to Take Risk, although of more complex empirical interpretation; 3) the results seem to indicate a clear role of Economic Preferences, however difficult to interpret empirically. Less evidence, at least on the inferential level, emerged, however, regarding variables that, based on common sense, should play an even more obvious role than Economic Preferences in orienting attitudes toward euthanasia and assisted suicide, namely Healthcare System, Legal Origin, and Kinship Tightness; striking, in particular, is the inability to prove a role for the dominant religious orientation even with a simple bivariate analysis.
Abstract:
Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and system of interacting agents as fundamental abstractions for designing, developing and managing at runtime typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes a central point of the scientific activity. Currently most agent-oriented methodologies are supported by small teams of academic researchers and, as a result, most of them are in an early stage and still in the context of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating the methodologies. In fact, a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems: however, it is at least clear that all non-agent elements of a multi-agent system are typically considered to be part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions - entities of the environment encapsulating some functions - and topology abstractions - entities of the environment that represent its (either logical or physical) spatial structure. In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for supporting the management of the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which could be used by designers to provide different levels of abstraction over multi-agent systems.
The research in these fields has led to the formulation of a new version of the SODA methodology, where environment abstractions and layering principles are exploited for engineering multi-agent systems.
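Purely as an illustration of how the two environment ingredients mentioned above can be treated as first-class design abstractions (class names and interfaces are hypothetical and are not taken from SODA), one might sketch them as follows:

```python
from abc import ABC, abstractmethod

class EnvironmentAbstraction(ABC):
    """An entity of the environment encapsulating some function."""

    @abstractmethod
    def provide(self, request):
        """Serve a request coming from an agent."""

class TopologyAbstraction(ABC):
    """An entity representing the (logical or physical) spatial structure."""

    @abstractmethod
    def neighbours(self, locus):
        """Return the loci directly reachable from the given locus."""

class SharedTupleSpace(EnvironmentAbstraction):
    """Toy environment abstraction: a shared space agents write to and query."""

    def __init__(self):
        self._tuples = []

    def provide(self, request):
        kind, payload = request
        if kind == "out":                  # an agent deposits a tuple
            self._tuples.append(payload)
            return None
        if kind == "rd":                   # an agent reads by predicate
            return [t for t in self._tuples if payload(t)]
        raise ValueError(f"unknown request kind: {kind}")

class GridTopology(TopologyAbstraction):
    """Toy topology abstraction: a logical grid of loci."""

    def __init__(self, width, height):
        self.width, self.height = width, height

    def neighbours(self, locus):
        x, y = locus
        candidates = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
        return [(cx, cy) for cx, cy in candidates
                if 0 <= cx < self.width and 0 <= cy < self.height]
```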
Abstract:
In this thesis we present our work on some generalisations of ideas, techniques and physical interpretations typical of integrable models to one of the most outstanding advances in present-day theoretical physics: the AdS/CFT correspondences. We have undertaken the problem of testing this conjectured duality from various points of view, but with a clear starting point - integrability - and with a clear, ambitious task in mind: to study the finite-size effects in the energy spectrum of certain string solutions on one side and in the anomalous dimensions of the gauge theory on the other. Of course, the final goal would be the exact comparison between these two faces of the gauge/string duality. In a few words, the original part of this work consists in the application of well-known integrability technologies, in large part borrowed from the study of relativistic (1+1)-dimensional integrable quantum field theories, to the highly non-relativistic and much more complicated case of the theories involved in the recently conjectured AdS5/CFT4 and AdS4/CFT3 correspondences. In detail, exploiting the spin-chain nature of the dilatation operator of N = 4 Super-Yang-Mills theory, we concentrated our attention on one of the most important sectors, namely the SL(2) sector - which is also very interesting for the understanding of QCD - by formulating a new type of nonlinear integral equation (NLIE) based on a previously conjectured asymptotic Bethe Ansatz. The solutions of this Bethe Ansatz are characterised by the length L of the corresponding spin chain and by the number s of its excitations. An NLIE allows one, at least in principle, to make analytical and numerical calculations for arbitrary values of these parameters. The results have been rather exciting. In the important regime of high Lorentz spin, the NLIE clarifies how it reduces to a linear integral equation which governs the subleading order in s, o(s^0). This also holds in the regime L → ∞ with L/ln s finite (the long-operators case). This region of parameters has been particularly investigated in the literature, especially because of an intriguing limit onto the O(6) sigma model defined on the string side. One of the most powerful methods to keep under control the finite-size spectrum of an integrable relativistic theory is the so-called thermodynamic Bethe Ansatz (TBA). We proposed a highly non-trivial generalisation of this technique to the non-relativistic case of AdS5/CFT4 and made the first steps towards determining its full spectrum - of energies for the AdS side, of anomalous dimensions for the CFT one - at any value of the coupling constant and of the size. At the leading order in the size parameter, the calculation of the finite-size corrections is much simpler and does not require the TBA. It consists in deriving, for a non-relativistic case, a method first invented by Lüscher to compute the finite-size effects on the mass spectrum of relativistic theories. We have thus formulated a new version of this approach to adapt it to the case of recently found classical string solutions on AdS4 × CP3, within the new conjecture of an AdS4/CFT3 correspondence. Our results in part confirm the string and algebraic curve calculations, and in part are completely new and could thus be better understood through the rapidly evolving developments of this extremely exciting research field.
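For reference, the relativistic thermodynamic Bethe Ansatz that the thesis generalises to the non-relativistic AdS/CFT setting has, in the standard diagonal-scattering textbook form (this is the relativistic version, not the generalisation derived in the thesis), the schematic structure

```latex
\varepsilon_a(\theta) = m_a R \cosh\theta
  - \sum_b \int \frac{d\theta'}{2\pi}\,
    \phi_{ab}(\theta - \theta')\,
    \ln\!\left(1 + e^{-\varepsilon_b(\theta')}\right),
\qquad
E_0(R) = -\sum_a \int \frac{d\theta}{2\pi}\, m_a \cosh\theta\,
         \ln\!\left(1 + e^{-\varepsilon_a(\theta)}\right),
```

where the pseudo-energies encode the finite-size ground-state energy at circumference R.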
Abstract:
The association between celiac disease (CD) and dental enamel defects (DED) is well known. AIM: This study was designed to investigate the prevalence of DED in CD children and specifically to look for a possible correlation between DED and the period of gluten exposure, the CD clinical form, and the HLA class II haplotype. MATERIALS AND METHODS: This study was designed as a matched case-control study: 374 children were enrolled (187 celiac and 187 non-celiac). Data about age at CD diagnosis, CD clinical form and HLA haplotype were recorded. RESULTS: DED were detected in 87 celiac subjects, while no dental lesions were found in the remaining 100 patients; in the 187 healthy controls enamel lesions were significantly less frequent (5.3% versus 46.5%; p<0.005). We found a correlation between DED and the period of gluten exposure, since among CD patients the mean age at CD diagnosis was significantly (p=0.0004) higher in the group with DED (3.41 ± 1.27) than in the group without DED (1.26 ± 0.7). DED were more frequent in the atypical and silent forms than in the typical one. The presence of HLA DR52-53 and DQ7 antigens significantly increased the risk of DED (p=0.0017). CONCLUSIONS: Our results confirmed a possible correlation between CD clinical form, age at CD diagnosis, HLA antigens and DED. The origin of DED in CD children is multifactorial, and further studies are needed to investigate other determinants.
Abstract:
The research has a twofold objective: 1. to test, through the application of a traditional theoretical method of economic-financial analysis, the optimal level of financial equilibrium between access to external credit and equity capital; 2. to show the usefulness of certain participatory financial instruments for the recapitalisation of the cooperative enterprise. The object of study is the cooperative enterprise that carries out one or more stages of the processing, transformation and first marketing of the agricultural product contributed by its members, compared with joint-stock companies carrying out the same activity. The cooperative and the capitalistic company will therefore be analysed in terms of liquidity generated, profitability produced and degree of indebtedness, through the calculation and analysis of a series of ratios drawn from their respective financial statements. It is worth emphasising that the following discussion will devote some space to the search for value in the cooperative enterprise, understood as an expression of the wealth created by business processes over a given period of time, attempting to define, if it exists, an optimal financial structure, that is, a specific ratio between financial debt and equity that maximises the value of the enterprise. Attention to the financial structure, therefore, will not be directed only at the explicit cost of debt or equity, but will also extend to the implications of financing choices for the governance of the enterprise. Indeed, many studies in business economics, and in particular in business management and corporate finance, have dealt with corporate governance as an element capable of contributing to value creation not only through the selection of investment projects but also through the composition of the financial structure.
Abstract:
In this thesis we will investigate some properties of one-dimensional quantum systems. From a theoretical point of view, quantum models in one dimension are particularly interesting because they are strongly interacting, since particles cannot avoid each other in their motion, and we can never ignore collisions. Yet integrable models often generate new and non-trivial solutions, which could not be found perturbatively. In this dissertation we shall focus on two important aspects of integrable one-dimensional models: their entanglement properties at equilibrium and their dynamical correlators after a quantum quench. The first part of the thesis will therefore be devoted to the study of the entanglement entropy in one-dimensional integrable systems, with a special focus on the XYZ spin-1/2 chain, which, in addition to being integrable, is also an interacting model. We will derive its Rényi entropies in the thermodynamic limit, and their behaviour in different phases and for different values of the mass gap will be analysed. In the second part of the thesis we will instead study the dynamics of correlators after a quantum quench, which represent a powerful tool to measure how perturbations and signals propagate through a quantum chain. The emphasis will be on the Transverse Field Ising Chain and the O(3) non-linear sigma model, which will both be studied by means of a semi-classical approach. Moreover, in the last chapter we will demonstrate a general result about the dynamics of correlation functions of local observables after a quantum quench in integrable systems. In particular, we will show that if there are no long-range interactions in the final Hamiltonian, then the dynamics of the model (non-equal-time correlations) is described by the same statistical ensemble that describes its static properties (equal-time correlations).
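For completeness, the Rényi entropies mentioned above follow the standard textbook definition in terms of the reduced density matrix ρ_A of the subsystem (this is the general definition, not a result of the thesis):

```latex
S_\alpha = \frac{1}{1-\alpha}\,\ln \operatorname{Tr}\rho_A^{\alpha},
\qquad
S_{\mathrm{vN}} = \lim_{\alpha \to 1} S_\alpha
               = -\operatorname{Tr}\!\left(\rho_A \ln \rho_A\right),
```

so the von Neumann entanglement entropy is recovered as the α → 1 limit.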
Abstract:
A permutation is said to avoid a pattern if it does not contain any subsequence which is order-isomorphic to it. Donald Knuth, in the first volume of his celebrated book "The Art of Computer Programming", observed that the permutations that can be computed (or, equivalently, sorted) by some particular data structures can be characterized in terms of pattern avoidance. In more recent years, the topic was reopened several times, though often in terms of sortable permutations rather than computable ones. The idea of sorting permutations by using one of Knuth's devices suggests looking for a deterministic procedure that decides, in linear time, whether there exists a sequence of operations which is able to convert a given permutation into the identity. In this thesis we show that, for the stack and the restricted deques, there exists a unique way to implement such a procedure. Moreover, we use these sorting procedures to create new sorting algorithms, and we prove some unexpected commutation properties between these procedures and the base step of bubblesort. We also show that the permutations that can be sorted by a combination of the base steps of bubblesort and its dual can be expressed, once again, in terms of pattern avoidance. In the final chapter we give an alternative proof of some enumerative results, in particular for the classes of permutations that can be sorted by the two restricted deques. It is well known that the permutations that can be sorted through a restricted deque are counted by the Schröder numbers. In the thesis, we show how the deterministic sorting procedures yield a bijection between sortable permutations and Schröder paths.
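As a concrete illustration of the kind of linear-time deterministic procedure referred to above (the generic textbook single-stack pass, not the specific procedures developed in the thesis), the Python sketch below tries to sort a permutation with one stack and reports whether it succeeds, which happens exactly for the 231-avoiding permutations of Knuth's characterization.

```python
def stack_sort_attempt(perm):
    """Greedy single-stack sorting pass (Knuth's device).

    Elements arrive in the order given by perm (a permutation of 1..n),
    may only be pushed onto one stack, and are popped to the output as
    soon as the top of the stack is the next value the sorted output
    needs. Returns True iff the pass produces 1, 2, ..., n, i.e. iff the
    permutation avoids the pattern 231. Runs in linear time.
    """
    stack, next_out = [], 1
    for value in perm:
        stack.append(value)
        while stack and stack[-1] == next_out:
            stack.pop()
            next_out += 1
    while stack and stack[-1] == next_out:
        stack.pop()
        next_out += 1
    return not stack

print(stack_sort_attempt([3, 1, 2]))  # True: 312 avoids 231
print(stack_sort_attempt([2, 3, 1]))  # False: 231 itself is not stack-sortable
```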
Abstract:
Chiroptical spectroscopies play a fundamental role in pharmaceutical analysis for the stereochemical characterisation of bioactive molecules, due to the close relationship between chirality and optical activity and the increasing evidence of stereoselectivity in the pharmacological and toxicological profiles of chiral drugs. The correlation between chiroptical properties and absolute stereochemistry, however, requires the development of accurate and reliable theoretical models. The present thesis will report the application of theoretical chiroptical spectroscopies in the field of drug analysis, with particular emphasis on the huge influence of conformational flexibility and solvation on chiroptical properties and on the main computational strategies available to describe their effects by means of electronic circular dichroism (ECD) spectroscopy and time-dependent density functional theory (TD-DFT) calculations. The combination of experimental chiroptical spectroscopies with state-of-the-art computational methods proved to be very efficient at predicting the absolute configuration of a wide range of bioactive molecules (fluorinated 2-arylpropionic acids, β-lactam derivatives, difenoconazole, fenoterol, mycoleptones, austdiol). The results obtained for the investigated systems showed that great care must be taken in describing the molecular system in the most accurate fashion, since chiroptical properties are very sensitive to small electronic and conformational perturbations. In the future, the improvement of theoretical models and methods, such as ab initio molecular dynamics, will benefit pharmaceutical analysis in the investigation of non-trivial effects on the chiroptical properties of solvated systems and in the characterisation of the stereochemistry of complex chiral drugs.
Abstract:
The first mechanical automaton concept is found in a Chinese text written in the 3rd century BC, while Computer Vision was born in the late 1960s. Visual perception applied to machines (i.e. Machine Vision) is therefore a young and exciting alliance. When robots came in, the new field of Robotic Vision was born, and these terms began to be erroneously interchanged. In short, we can say that Machine Vision is an engineering domain, which concerns the industrial use of vision. Robotic Vision, instead, is a research field that tries to incorporate robotics aspects into computer vision algorithms. Visual servoing, for example, is one of the problems that cannot be solved by computer vision alone. Accordingly, a large part of this work deals with boosting popular Computer Vision techniques by exploiting robotics: e.g. the use of kinematics to localize a vision sensor mounted as the robot end-effector. The remainder of this work is dedicated to the counterpart, i.e. the use of computer vision to solve real robotic problems like grasping objects or navigating while avoiding obstacles. A brief survey of the mapping data structures most widely used in robotics will be presented, along with SkiMap, a novel sparse data structure created both for robotic mapping and as a general-purpose 3D spatial index. Several approaches to implement object detection and manipulation by exploiting the aforementioned mapping strategies will then be proposed, along with a completely new Machine Teaching facility intended to simplify the training procedure of modern Deep Learning networks.
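To give a rough feel for what a sparse 3D spatial index does (a toy dictionary-keyed voxel map; this is not SkiMap's actual multi-level structure or API), consider the following sketch, where only the cells that have actually been observed consume memory.

```python
from collections import defaultdict

class SparseVoxelMap:
    """Toy sparse voxel map: stores data only for observed cells."""

    def __init__(self, resolution=0.05):
        self.resolution = resolution      # voxel edge length in metres
        self.voxels = defaultdict(int)    # (ix, iy, iz) -> hit count

    def key(self, x, y, z):
        r = self.resolution
        return (int(x // r), int(y // r), int(z // r))

    def integrate_point(self, x, y, z):
        """Accumulate one 3D measurement (e.g. from a depth sensor)."""
        self.voxels[self.key(x, y, z)] += 1

    def occupied(self, x, y, z, min_hits=1):
        return self.voxels.get(self.key(x, y, z), 0) >= min_hits

m = SparseVoxelMap(resolution=0.1)
m.integrate_point(1.02, 0.51, 0.33)
print(m.occupied(1.05, 0.55, 0.34))  # True: falls in the same 10 cm voxel
```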
Abstract:
Analytics is the technology concerned with the manipulation of data to produce information able to change the world we live in every day. Analytics has been widely used within the last decade to cluster people's behaviour and to predict their preferences for items to buy, music to listen to, movies to watch and even electoral preferences. The most advanced companies have succeeded in controlling people's behaviour using analytics. Despite the evidence of the power of analytics, it is rarely applied to the big data collected within supply chain systems (i.e. distribution networks, storage systems and production plants). This PhD thesis explores the fourth research paradigm (i.e. the generation of knowledge from data) applied to supply chain system design and operations management. An ontology defining the entities and the metrics of supply chain systems is used to design data structures for data collection in supply chain systems. The consistency of these data is supported by mathematical demonstrations inspired by factory physics theory. The availability, quantity and quality of the data within these data structures define different decision patterns. Ten decision patterns are identified, and validated in the field, to address ten different classes of design and control problems in the field of supply chain systems research.
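As a purely illustrative sketch of ontology-driven data structures for collection (entity and field names below are hypothetical and are not the thesis's ontology), one might start from entity definitions like these and derive the metrics from the collected records rather than storing them separately.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StorageNode:
    """A storage system of the distribution network."""
    node_id: str
    capacity_units: float

@dataclass
class Shipment:
    """One recorded flow between two nodes: the raw material of the analytics."""
    origin: str
    destination: str
    quantity_units: float
    lead_time_days: float

@dataclass
class SupplyChainLog:
    """Container for collected data; metrics are derived, never duplicated."""
    nodes: List[StorageNode] = field(default_factory=list)
    shipments: List[Shipment] = field(default_factory=list)

    def average_lead_time(self) -> float:
        if not self.shipments:
            return 0.0
        return sum(s.lead_time_days for s in self.shipments) / len(self.shipments)
```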
Abstract:
Slot and van Emde Boas' Invariance Thesis states that a time (respectively, space) cost model is reasonable for a computational model C if there are mutual simulations between Turing machines and C such that the overhead is polynomial in time (respectively, linear in space). The rationale is that, under the Invariance Thesis, complexity classes such as LOGSPACE, P, and PSPACE become robust, i.e. machine independent. In this dissertation, we want to find out whether it is possible to define a reasonable space cost model for the lambda-calculus, the paradigmatic model for functional programming languages. We start by considering an unusual evaluation mechanism for the lambda-calculus, based on Girard's Geometry of Interaction, that was conjectured to be the key ingredient to obtain a reasonable space cost model. Through a fine complexity analysis of this scheme, based on new variants of non-idempotent intersection types, we disprove this conjecture. Then, we change the target of our analysis. We consider a variant of Krivine's abstract machine, a standard evaluation mechanism for the call-by-name lambda-calculus, optimized for space complexity and implemented without any pointers. A fine analysis of the execution of (a refined version of) the encoding of Turing machines into the lambda-calculus allows us to conclude that the space consumed by this machine is indeed a reasonable space cost model. In particular, for the first time we are able to measure sub-linear space complexities as well. Moreover, we transfer this result to the call-by-value case. Finally, we also provide an intersection type system that compositionally characterizes this new reasonable space measure. This is done through a minimal, yet non-trivial, modification of the original de Carvalho type system.
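To make the evaluation mechanism concrete, here is a minimal call-by-name Krivine abstract machine for the pure lambda-calculus, written with de Bruijn indices (a textbook sketch; the thesis's space-optimized, pointer-free variant is considerably more refined than this).

```python
from dataclasses import dataclass

# Pure lambda-terms with de Bruijn indices.
@dataclass
class Var:
    index: int

@dataclass
class Lam:
    body: object

@dataclass
class App:
    fun: object
    arg: object

def krivine(term):
    """Weak head normalisation by the call-by-name Krivine machine.

    A state is (term, environment, stack): the environment is a list of
    closures (term, env) giving meaning to the free de Bruijn indices,
    and the stack holds the closures of arguments not yet consumed.
    """
    env, stack = [], []
    while True:
        if isinstance(term, App):              # push the argument's closure
            stack.append((term.arg, env))
            term = term.fun
        elif isinstance(term, Lam) and stack:  # consume one argument
            env = [stack.pop()] + env
            term = term.body
        elif isinstance(term, Var) and term.index < len(env):
            term, env = env[term.index]        # look the variable up
        else:
            return term, env, stack            # weak head normal form

# (\x. \y. x) applied to the identity reduces to a closure for \y. x.
ident = Lam(Var(0))
const = Lam(Lam(Var(1)))
print(krivine(App(const, ident)))
```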