953 results for palindromic polynomial
Abstract:
Fe(II) oxidation kinetics were studied in seawater and in seawater enriched with exudates excreted by Phaeodactylum tricornutum as an organic ligand model. The exudates produced after 2, 4, and 8 days of culture at 6.21 × 10^7, 2.29 × 10^8, and 4.98 × 10^8 cells L^-1 were selected. The effects of pH (7.2-8.2), temperature (5-35 °C), and salinity (10-36.72) on the Fe(II) oxidation rate were studied. All the data were compared with the results for seawater without exudates (control). The Fe(II) oxidation rate constant decreased as a function of culture time and cell concentration in the culture at the different pH, temperature, and salinity values. All the experimental data obtained in this study were fitted to a polynomial function in order to quantify the fractional contribution of the organic exudates from the diatoms to the Fe(II) oxidation rate in natural seawater. Experimental results showed that the organic exudates excreted by P. tricornutum affect Fe(II) oxidation, increasing the lifetime of Fe(II) in seawater. A kinetic model approach was carried out to account for the speciation of each Fe(II) type together with its contribution to the overall rate.
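The empirical fit mentioned above lends itself to a short illustration. The sketch below (Python, with placeholder data rather than the measured values from the study) fits log10 of the Fe(II) oxidation rate constant as a low-order polynomial in pH, temperature and salinity by ordinary least squares, which is the general shape of the parameterization described.

```python
import numpy as np

# Hypothetical illustration only: the data below are placeholders, not the
# measurements from the study.  Fit log10(k) of the Fe(II) oxidation rate
# constant as a low-order polynomial in pH, temperature (deg C) and salinity.
pH   = np.array([7.2, 7.4, 7.6, 7.8, 8.0, 8.2, 7.2, 7.6, 8.0, 8.2])
T    = np.array([ 25,  25,  25,  25,  25,  25,   5,  15,  25,  35 ])
S    = np.array([36.7] * 6 + [10.0, 20.0, 30.0, 36.7])
logk = np.array([-1.9, -1.5, -1.1, -0.7, -0.3, 0.1, -2.6, -1.4, -0.3, 0.4])

# design matrix for a first-order polynomial with a quadratic pH term
X = np.column_stack([np.ones_like(pH), pH, pH**2, T, S])
coeffs, *_ = np.linalg.lstsq(X, logk, rcond=None)
print(coeffs)          # fitted polynomial coefficients
print(X @ coeffs)      # reconstructed log10(k) values
```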
Abstract:
The main object of this thesis is the analysis and the quantization of spinning particle models which employ extended "one-dimensional supergravity" on the worldline, and their relation to the theory of higher spin fields (HS). In the first part of this work we have described the classical theory of massless spinning particles with an SO(N) extended supergravity multiplet on the worldline, in flat and, more generally, in maximally symmetric backgrounds. These (non)linear sigma models describe, upon quantization, the dynamics of particles with spin N/2. Then we have analyzed carefully the quantization of spinning particles with SO(N) extended supergravity on the worldline, for every N and in every dimension D. The physical sector of the Hilbert space reveals an interesting geometrical structure: the generalized higher spin curvature (HSC). We have shown, in particular, that these models of spinning particles describe a subclass of HS fields whose equations of motion are conformally invariant at the free level; in D = 4 this subclass describes all massless representations of the Poincaré group. In the third part of this work we have considered the one-loop quantization of SO(N) spinning particle models by studying the corresponding partition function on the circle. After the gauge fixing of the supergravity multiplet, the partition function reduces to an integral over the corresponding moduli space, which has been computed by using orthogonal polynomial techniques. Finally, we have extended our canonical analysis, described previously for flat space, to maximally symmetric target spaces (i.e. (A)dS backgrounds). The quantization of these models produces (A)dS HSC as the physical states of the Hilbert space; we have used an iterative procedure and Pochhammer functions to solve the differential Bianchi identity in maximally symmetric spaces. Motivated by the correspondence between SO(N) spinning particle models and HS gauge theory, and by the notorious difficulty one finds in constructing an interacting theory for fields with spin greater than two, we have used these one-dimensional supergravity models to study and extract information on HS. In the last part of this work we have constructed spinning particle models with sp(2) R-symmetry, coupled to hyper-Kähler and quaternionic-Kähler (QK) backgrounds.
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons: • Portable mobile devices have modest sizes and weights, and therefore inadequate resources, low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems. • On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding. This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers. In fact, at this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We choose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms. Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time, and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform.
This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. This is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve them in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristics or, more generally, with incomplete search is that they introduce an optimality gap of unknown size: they provide very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gaps, formulating accurate models which account for a number of "non-idealities" of real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms. Energy-efficient LCD backlight autoregulation on a real-life multimedia application processor. Despite the ever increasing advances in Liquid Crystal Display (LCD) technology, its power consumption is still one of the major limitations to the battery life of mobile appliances such as smart phones, portable media players, gaming and navigation devices. There is a clear trend towards increasing LCD size to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel matrix driving circuits and is typically proportional to the panel area. As a result, its contribution is also likely to be considerable in future mobile appliances. To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques to change the image content and so reduce the power associated with the crystal polarization; others aim at decreasing the backlight level while compensating for the luminance reduction, and hence the perceived quality degradation, using pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement a hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS.
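As a rough illustration of the backlight autoregulation idea described above (dim the backlight and scale the pixels up so that perceived luminance is preserved), the following toy NumPy sketch shows the software equivalent of the compensation step. The dissertation offloads this work to the hardware image processing unit; the function name and numbers here are ours and purely illustrative.

```python
import numpy as np

def backlight_compensate(frame, dim_factor):
    """Toy illustration: dim the backlight by `dim_factor` (< 1) and scale the
    pixel values by 1/dim_factor so that perceived luminance (roughly
    backlight * pixel) is preserved, clipping pixels that would saturate."""
    compensated = np.clip(frame.astype(np.float32) / dim_factor, 0, 255)
    return compensated.astype(np.uint8)

# 8-bit grayscale frame; dimming the backlight to 70% saves roughly 30% of the
# backlight power, while most pixels are compensated without clipping
frame = np.random.default_rng(0).integers(0, 200, size=(480, 640), dtype=np.uint8)
out = backlight_compensate(frame, dim_factor=0.7)
clipped = np.mean(frame / 0.7 > 255)          # fraction of saturating pixels
print(out.dtype, float(clipped))
```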
The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modifications. Thesis overview. The remainder of the thesis is organized as follows. The first part is focused on enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs. The methodology is based on functional simulation and full-system power estimation. Chapter 4 targets allocation and scheduling of pipelined, stream-oriented applications on top of distributed memory architectures with messaging support. We tackle the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers with efficient software implementation on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
Abstract:
The horizontal and vertical system neurons (HS and VS cells) are part of a conserved set of lobula plate giant neurons (LPGNs) in the optic lobes of the adult brain. Structure and physiology of these cells are well known, predominantly from studies in larger Dipteran flies. Our knowledge about the ontogeny of these cells is limited and stems predominantly from laser ablation studies in larvae of the house fly Musca domestica. These studies suggested that the HS and VS cells stem from a single precursor which, at least in Musca, has not yet divided in the second larval instar. A regulatory mutation (In(1)omb[H31]) in the Drosophila gene optomotor-blind (omb) leads to the selective loss of the adult HS and VS cells. This mutation causes a transient reduction in omb expression in what appears to be the entire optic lobe anlage (OLA) late in embryogenesis. Here, I have reinitiated the laser approach with the goal of identifying the presumptive embryonic HS/VS precursor cell in Drosophila. The usefulness of the laser ablation approach, which had not so far been applied to cells lying deep within the Drosophila embryo, was first tested on two well-defined embryonic sensory structures, the olfactory antenno-maxillary complex (AMC) and the light-sensitive Bolwig's organ (BO). In the case of the AMC, the efficiency of the ablation procedure was demonstrated with a behavioral assay. When both AMCs were ablated, the response to an attractive odour (n-butanol) was clearly reduced. Interestingly, the larvae were not completely unresponsive but showed delayed response kinetics, indicating the existence of a second odour system. BO will be a useful test system for the selectivity of laser ablation when used at higher spatial resolution. An omb-Gal4 enhancer trap line was used to visualize the embryonic OLA by GFP fluorescence. This fluorescence made it possible to guide the laser beam to the relevant structure within the embryo. The success of the ablations was monitored in the adult brain via the enhancer trap insertion A122, which selectively visualizes the HS and VS cell bodies. Due to their tight clustering, individual cells could not be identified in the embryonic OLA by conventional fluorescence microscopy. Nonetheless, systematic ablation of subdomains of the OLA made it possible to localize the presumptive HS/VS precursor to a small area within the OLA encompassing around 10 cells. Future studies at higher resolution should be able to identify the precursor as (an) individual cell(s). Most known lethal omb alleles do not complement the HS/VS phenotype of the In(1)omb[H31] allele. This is the expected behaviour of null alleles. Two lethal omb alleles that had been isolated previously by non-complementation of the omb hypomorphic allele bifid have, however, been reported to complement In(1)omb[H31]. This report was based on low-resolution paraffin histology of adult heads. Four mutations from this mutagenesis were characterized here in more detail (l(1)omb[11], l(1)omb[12], l(1)omb[13], and l(1)omb[15]). Using A122 as a marker for the adult HS and VS cells, I could show that only l(1)omb[11] can partly complement the HS/VS cell phenotype of In(1)omb[H31]. In order to identify the molecular lesions in these mutants, the exons and exon/intron junctions were sequenced in PCR-amplified material from heterozygous flies.
Only in two mutants could the molecular cause of the loss of omb function be identified: in l(1)omb[13], a missense mutation causes the exchange of a highly conserved residue within the DNA-binding T-domain; in l(1)omb[15], a nonsense mutation causes a C-terminal truncation. In the other two mutants, regulatory regions or as-yet unidentified alternative exons are apparently affected. To see whether the mutant OMB protein of the missense mutant l(1)omb[13] is affected in DNA binding, electrophoretic mobility shift assays on wild-type and mutant T-domains were performed. They revealed that the mutant is no longer able to bind the consensus palindromic T-box element.
Abstract:
The thesis is framed within the stochastic approach to flow and transport of solutes in natural porous media. The methodology used to characterise the uncertainty associated with the model predictions is completely general and can be reproduced in various contexts. The research includes the following among its main objectives: (a) the development of a Global Sensitivity Analysis of contaminant transport models in the subsoil, to investigate the effects of the uncertainty of the most important parameters; (b) the application of advanced techniques, such as Polynomial Chaos Expansion (PCE), to obtain surrogate models of those traditionally used within Monte Carlo simulations, which carry an often non-negligible computational burden; (c) the analysis and understanding of the key processes underlying solute transport in natural porous media using the aforementioned tools. Within this overall picture, the thesis extends the application of the PCE technique (already developed and applied by the thesis supervisors, by way of a numerical code, to a Continuous Injection contaminant transport model) to a Slug Injection model. The methodology was applied to the latter model, with original contributions deriving from surrogate models with various degrees of approximation and from a Global Sensitivity Analysis aimed at determining Sobol' indices.
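As an illustration of the Global Sensitivity Analysis step mentioned above, the sketch below estimates first-order Sobol' indices with the standard Saltelli pick-and-freeze Monte Carlo estimator rather than the PCE-based computation used in the thesis; the Ishigami test function stands in for the transport model, so this is a generic GSA example, not the thesis workflow.

```python
import numpy as np

rng = np.random.default_rng(0)

def ishigami(x, a=7.0, b=0.1):
    """Classic GSA test function (stands in for the transport model here)."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

d, N = 3, 100_000
A = rng.uniform(-np.pi, np.pi, size=(N, d))
B = rng.uniform(-np.pi, np.pi, size=(N, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]                             # Saltelli "pick-and-freeze" matrix
    Si = np.mean(fB * (ishigami(AB) - fA)) / var   # first-order Sobol' index
    print(f"S_{i+1} ~ {Si:.3f}")                   # expected roughly 0.31, 0.44, 0.00
```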
Abstract:
We deal with five problems arising in the field of logistics: the Asymmetric TSP (ATSP), the TSP with Time Windows (TSPTW), the VRP with Time Windows (VRPTW), the Multi-Trip VRP (MTVRP), and the Two-Echelon Capacitated VRP (2E-CVRP). The ATSP requires finding a least-cost Hamiltonian tour in a digraph. We survey models and classical relaxations, and describe the most effective exact algorithms from the literature. A survey and analysis of the polynomial formulations is also provided. The considered algorithms and formulations are experimentally compared on benchmark instances. The TSPTW requires finding, in a weighted digraph, a least-cost Hamiltonian tour visiting each vertex within a given time window. We propose a new exact method, based on new tour relaxations and dynamic programming. Computational results on benchmark instances show that the proposed algorithm outperforms the state-of-the-art exact methods. In the VRPTW, a fleet of identical capacitated vehicles located at a depot must be optimally routed to supply customers with known demands and time window constraints. Different column generation bounding procedures and an exact algorithm are developed. The new exact method closed four of the five open Solomon instances. The MTVRP is the problem of optimally routing capacitated vehicles located at a depot to supply customers without exceeding maximum driving time constraints. Two set-partitioning-like formulations of the problem are introduced. Lower bounds are derived and embedded into an exact solution method that can solve benchmark instances with up to 120 customers. The 2E-CVRP requires designing the optimal routing plan to deliver goods from a depot to customers by using intermediate depots. The objective is to minimize the sum of routing and handling costs. A new mathematical formulation is introduced. Valid lower bounds and an exact method are derived. Computational results on benchmark instances show that the new exact algorithm outperforms the state-of-the-art exact methods.
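For readers unfamiliar with exact ATSP algorithms, the sketch below implements the textbook Held-Karp dynamic program, exponential in the number of vertices; it is only a baseline illustration of exact solution of the ATSP, not one of the relaxation- or column-generation-based methods developed in the thesis.

```python
from itertools import combinations

def held_karp(cost):
    """Textbook Held-Karp dynamic program for the ATSP, O(n^2 * 2^n) time.
    cost[i][j] is the (possibly asymmetric) arc cost from i to j."""
    n = len(cost)
    # dp[(S, j)] = cheapest cost of a path starting at vertex 0, visiting exactly
    # the vertex set S (bitmask, 0 excluded) and ending at j in S
    dp = {(1 << j, j): cost[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = sum(1 << j for j in subset)
            for j in subset:
                prev = S ^ (1 << j)
                dp[(S, j)] = min(dp[(prev, k)] + cost[k][j]
                                 for k in subset if k != j)
    full = (1 << n) - 2                    # all vertices except the depot 0
    return min(dp[(full, j)] + cost[j][0] for j in range(1, n))

# tiny asymmetric instance
c = [[ 0,  2,  9, 10],
     [ 1,  0,  6,  4],
     [15,  7,  0,  8],
     [ 6,  3, 12,  0]]
print(held_karp(c))   # optimal tour cost, 21 for this instance (0-2-3-1-0)
```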
Abstract:
The thesis applies ICC techniques to the probabilistic polynomial complexity classes in order to obtain an implicit characterization of them. The main contribution lies in the implicit characterization of the class PP (which stands for Probabilistic Polynomial Time), giving a syntactical characterisation of PP and a static complexity analyser able to recognise whether an imperative program computes in Probabilistic Polynomial Time. The thesis is divided in two parts. The first part solves the problem by creating a prototype functional language (a probabilistic variation of the lambda calculus with bounded recursion) that is sound and complete with respect to Probabilistic Polynomial Time. The second part, instead, reverses the problem and develops a feasible way to verify whether a program, written in a prototype imperative programming language, runs in Probabilistic Polynomial Time or not. This thesis can be regarded as one of the first steps toward Implicit Computational Complexity over probabilistic classes. There are still hard open problems to investigate and try to solve. Many theoretical aspects are strongly connected with these topics, and I expect that ICC and probabilistic classes will receive wide attention in the future.
Abstract:
A year of satellite-borne lidar CALIOP data is analyzed and statistics on the occurrence and distribution of bulk properties of cirrus clouds are provided. The relationship between environmental and cloud physical parameters and the shape of the backscatter profile (BSP) is investigated. It is found that the CALIOP BSP is mainly affected by cloud geometrical thickness, while only minor impacts can be attributed to other quantities such as optical depth or temperature. Polynomial functions are provided to fit mean BSPs as functions of geometrical thickness and position within the cloud layer. It is demonstrated that, under realistic hypotheses, the mean BSP is linearly proportional to the IWC profile. The IWC parameterization is included in the RT-RET retrieval algorithm, which is exploited to analyze infrared radiance measurements in the presence of cirrus clouds during the ECOWAR field campaign. Retrieved microphysical and optical properties of the observed cloud are used as input parameters in a forward RT simulation run over the 100-1100 cm^-1 spectral interval and compared with interferometric data to test the ability of the current single-scattering properties database of ice crystals to reproduce realistic optical features. Finally, a global-scale investigation of cirrus clouds is performed by developing a collocation algorithm that exploits satellite data from multiple sensors (AIRS, CALIOP, MODIS). The resulting data set is used to test a new infrared hyperspectral retrieval algorithm. Retrieval products are compared to the data; in particular, the cloud top height (CTH) product is considered for this purpose. Better agreement of the retrieval with the CALIOP CTH than with the MODIS one is found, even if some cases of underestimation and overestimation are observed.
Abstract:
In the present dissertation we consider Feynman integrals in the framework of dimensional regularization. As all such integrals can be expressed in terms of scalar integrals, we focus on this latter kind of integral in its Feynman parametric representation and study its mathematical properties, partially applying graph theory, algebraic geometry and number theory. The three main topics are the graph-theoretic properties of the Symanzik polynomials, the termination of the sector decomposition algorithm of Binoth and Heinrich, and the arithmetic nature of the Laurent coefficients of Feynman integrals.

The integrand of an arbitrary dimensionally regularised scalar Feynman integral can be expressed in terms of the two well-known Symanzik polynomials. We give a detailed review of the graph-theoretic properties of these polynomials. Due to the matrix-tree theorem, the first of these polynomials can be constructed from the determinant of a minor of the generic Laplacian matrix of a graph. By use of a generalization of this theorem, the all-minors matrix-tree theorem, we derive a new relation which furthermore relates the second Symanzik polynomial to the Laplacian matrix of a graph.

Starting from the Feynman parametric representation, the sector decomposition algorithm of Binoth and Heinrich serves for the numerical evaluation of the Laurent coefficients of an arbitrary Feynman integral in the Euclidean momentum region. This widely used algorithm contains an iterated step, consisting of an appropriate decomposition of the domain of integration and the deformation of the resulting pieces. This procedure leads to a disentanglement of the overlapping singularities of the integral. By giving a counter-example we exhibit the problem that this iterative step of the algorithm does not terminate for every possible case. We solve this problem by presenting an appropriate extension of the algorithm, which is guaranteed to terminate. This is achieved by mapping the iterative step to an abstract combinatorial problem, known as Hironaka's polyhedra game. We present a publicly available implementation of the improved algorithm. Furthermore, we explain the relationship of the sector decomposition method with the resolution of singularities of a variety, given by a sequence of blow-ups, in algebraic geometry.

Motivated by the connection between Feynman integrals and topics of algebraic geometry, we consider the set of periods as defined by Kontsevich and Zagier. This special set of numbers contains the set of multiple zeta values and certain values of polylogarithms, which in turn are known to be present in results for Laurent coefficients of certain dimensionally regularized Feynman integrals. By use of the extended sector decomposition algorithm we prove a theorem which implies that the Laurent coefficients of an arbitrary Feynman integral are periods if the masses and kinematical invariants take values in the Euclidean momentum region. The statement is formulated for an even more general class of integrals, allowing for an arbitrary number of polynomials in the integrand.
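The matrix-tree construction of the first Symanzik polynomial mentioned above can be illustrated in a few lines: build the graph Laplacian with edge weights 1/x_e, take one cofactor, and multiply by the product of all Feynman parameters. The SymPy sketch below is a generic illustration under this standard construction; the function name and example graphs are ours, not from the thesis.

```python
import sympy as sp

def first_symanzik(n_vertices, edges):
    """First Symanzik polynomial U of a connected graph via the matrix-tree
    theorem: Laplacian with edge weights 1/x_e, one cofactor, then multiply
    by the product of all Feynman parameters x_e."""
    x = sp.symbols(f"x1:{len(edges) + 1}")          # one parameter per edge
    L = sp.zeros(n_vertices, n_vertices)
    for (u, v), xe in zip(edges, x):
        w = 1 / xe
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    cofactor = L[1:, 1:].det()                      # delete row/column 0
    return sp.expand(sp.simplify(cofactor * sp.prod(x)))

# one-loop bubble: two vertices joined by two edges -> U = x1 + x2
print(first_symanzik(2, [(0, 1), (0, 1)]))
# two-loop sunrise: two vertices joined by three edges -> U = x1*x2 + x1*x3 + x2*x3
print(first_symanzik(2, [(0, 1), (0, 1), (0, 1)]))
```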
Abstract:
Let $\pi:X\rightarrow S$ be a family of Calabi-Yau varieties of dimension three defined over $\Z$. Assume there exists a rank-four submodule $M\subset H^3_{DR}(X/S)$ invariant under the Gauss-Manin connection, such that the Picard-Fuchs operator $P$ on $M$ is a so-called Calabi-Yau operator of order four. Let $k$ be a finite field of characteristic $p$, and let $\pi_0:X_0\rightarrow S_0$ be the reduction of $\pi$ over $k$. For the ordinary fibres $X_{t_0}$ of the family we derive an explicit formula for computing the characteristic polynomial of the Frobenius endomorphism, the Frobenius polynomial, on the corresponding submodule $M_{cris}\subset H^3_{cris}(X_{t_0})$. Now let $f_0(z)$ be the power series solution of the differential equation $Pf=0$ in a neighbourhood of zero. Since a reciprocal root of the Frobenius polynomial at a Teichmüller point $t$ is given by $f_0(z)/f_0(z^p)|_{z=t}$, a crucial step in the computation of the Frobenius polynomial is the construction of a $p$-adic analytic continuation of the quotient $f_0(z)/f_0(z^p)$ to the boundary of the $p$-adic unit disc. If the coefficients of $f_0$ can be expressed in terms of the constant terms of the powers of a Laurent polynomial whose Newton polyhedron contains the origin as its only interior lattice point, we prove certain congruence properties among the coefficients of $f_0$. These are crucial for the construction of the analytic continuation. If the fibre $X_{t_0}$ contains an ordinary double point, we expect in the limit that the Frobenius polynomial factors into two factors of degree one and one factor of degree two. The degree-two factor is uniquely determined by a coefficient $a_p$. Letting $p$ run through the set of all primes, we expect, on the basis of the modularity theorem, that there is a modular form of weight four whose coefficients are given by the coefficients $a_p$. This expectation has been confirmed by our extensive computations. Moreover, we derive further formulas for determining the Frobenius polynomial in which the non-holomorphic solutions of the equation $Pf=0$ in a neighbourhood of zero also play a role.
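The "constant terms of powers of a Laurent polynomial" construction mentioned above can be made concrete with a small SymPy sketch; the Laurent polynomial used here is only an illustrative example (its Newton polygon has the origin as its unique interior lattice point), not the family studied in the thesis.

```python
import sympy as sp

x, y = sp.symbols("x y")

def constant_term_of_power(W, n):
    """Constant term of W**n for a Laurent polynomial W in x and y.
    Clearing denominators with (x*y)**n turns this into the coefficient of
    x**n * y**n of an ordinary polynomial."""
    P = sp.expand((W * x * y) ** n)
    return sp.Poly(P, x, y).coeff_monomial(x**n * y**n)

# example Laurent polynomial whose Newton polygon has the origin as its only
# interior lattice point
W = x + 1/x + y + 1/y
print([constant_term_of_power(W, n) for n in range(1, 7)])
# -> [0, 4, 0, 36, 0, 400]  (squares of central binomial coefficients at even n)
```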
Abstract:
This work presents a comprehensive methodology for the reduction of analytical or numerical stochastic models characterized by uncertain input parameters or boundary conditions. The technique, based on Polynomial Chaos Expansion (PCE) theory, represents a versatile solution for direct or inverse problems related to the propagation of uncertainty. The potential of the methodology is assessed by investigating different application contexts related to groundwater flow and transport scenarios, such as global sensitivity analysis, risk analysis and model calibration. This is achieved by implementing a numerical code, developed in the MATLAB environment, presented here in its main features and tested on literature examples. The procedure has been conceived under flexibility and efficiency criteria in order to ensure its adaptability to different fields of engineering; it has been applied to different case studies related to flow and transport in porous media. Each application is associated with innovative elements such as (i) new analytical formulations describing motion and displacement of non-Newtonian fluids in porous media, (ii) application of global sensitivity analysis to a high-complexity numerical model inspired by a real case of risk of radionuclide migration in the subsurface environment, and (iii) development of a novel sensitivity-based strategy for parameter calibration and experiment design in laboratory-scale tracer transport.
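As a minimal illustration of a non-intrusive PCE surrogate of the kind described (here in Python rather than the MATLAB code of the thesis, for a single standard-normal input and a stand-in model), one can regress the model output on probabilists' Hermite polynomials and read the mean and variance directly off the coefficients:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

rng = np.random.default_rng(1)

def model(x):
    """Stand-in for an expensive groundwater flow/transport model."""
    return np.exp(0.5 * x) + 0.1 * x**3

# Non-intrusive PCE of order p for a single standard-normal input:
# regress the model output on probabilists' Hermite polynomials He_k.
p, N = 6, 2000
xs = rng.standard_normal(N)
Psi = He.hermevander(xs, p)                    # design matrix [He_0 ... He_p]
coef, *_ = np.linalg.lstsq(Psi, model(xs), rcond=None)

# PCE mean and variance follow from the coefficients,
# since E[He_j He_k] = k! * delta_jk for a standard normal input.
mean_pce = coef[0]
var_pce = sum(coef[k]**2 * factorial(k) for k in range(1, p + 1))
print(mean_pce, var_pce)

# the surrogate itself: cheap to evaluate anywhere
surrogate = lambda x: He.hermevander(np.atleast_1d(x), p) @ coef
print(model(0.3), surrogate(0.3)[0])
```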
Abstract:
In distributed systems like clouds or service-oriented frameworks, applications are typically assembled by deploying and connecting a large number of heterogeneous software components, spanning from fine-grained packages to coarse-grained complex services. The complexity of such systems requires a rich set of techniques and tools to support the automation of their deployment process. By relying on a formal model of components, a technique is devised for computing the sequence of actions allowing the deployment of a desired configuration. An efficient algorithm, working in polynomial time, is described and proven to be sound and complete. Finally, a prototype tool implementing the proposed algorithm has been developed. Experimental results support the adoption of this novel approach in real-life scenarios.
Abstract:
This dissertation studies the geometric static problem of under-constrained cable-driven parallel robots (CDPRs) supported by n cables, with n ≤ 6. The task consists of determining the overall robot configuration when a set of n variables is assigned. When variables relating to the platform posture are assigned, an inverse geometric static problem (IGP) must be solved; whereas, when cable lengths are given, a direct geometric static problem (DGP) must be considered. Both problems are challenging, as the robot continues to preserve some degrees of freedom even after n variables are assigned, with the final configuration determined by the applied forces. Hence, kinematics and statics are coupled and must be resolved simultaneously. In this dissertation, a general methodology is presented for modelling the aforementioned scenario with a set of algebraic equations. An elimination procedure is provided, aimed at solving the governing equations analytically and obtaining a least-degree univariate polynomial in the corresponding ideal for any value of n. Although an analytical procedure based on elimination is important from a mathematical point of view, providing an upper bound on the number of solutions in the complex field, it is not practical to compute these solutions as it would be very time-consuming. Thus, for the efficient computation of the solution set, a numerical procedure based on homotopy continuation is implemented. A continuation algorithm is also applied to find a set of robot parameters with the maximum number of real assembly modes for a given DGP. Finally, the end-effector pose depends on the applied load and may change due to external disturbances. An investigation into equilibrium stability is therefore performed.
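The homotopy continuation step mentioned above can be illustrated on a deliberately simple case: tracking the roots of a univariate polynomial from a start system with known roots. The NumPy sketch below, which uses the standard "gamma trick" to keep the paths non-singular, is a generic illustration of the technique, not the solver used for the DGP.

```python
import numpy as np

def track_roots(f_coeffs, n_steps=400, newton_iters=5):
    """Track the roots of the start system x^d - 1 to the roots of the target
    polynomial f along the straight-line homotopy
        H(x, t) = (1 - t) * gamma * (x^d - 1) + t * f(x),
    correcting with Newton's method at each step.  A random complex gamma
    (the "gamma trick") keeps the paths non-singular with probability one."""
    f = np.poly1d(f_coeffs)
    d = f.order
    g = np.poly1d([1.0] + [0.0] * (d - 1) + [-1.0])   # start system x^d - 1
    df, dg = f.deriv(), g.deriv()
    gamma = np.exp(2j * np.pi * np.random.rand())      # random unit complex number
    roots = np.exp(2j * np.pi * np.arange(d) / d)      # d-th roots of unity
    for t in np.linspace(0.0, 1.0, n_steps + 1)[1:]:
        H = lambda x: (1 - t) * gamma * g(x) + t * f(x)
        dH = lambda x: (1 - t) * gamma * dg(x) + t * df(x)
        for _ in range(newton_iters):                  # Newton correction at this t
            roots = roots - H(roots) / dH(roots)
    return roots

if __name__ == "__main__":
    # target: x^3 - 2x + 1, whose roots are 1 and (-1 +- sqrt(5))/2
    print(np.sort_complex(track_roots([1.0, 0.0, -2.0, 1.0])))
```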
Abstract:
In this thesis we provide a characterization of probabilistic computation in itself, from a recursion-theoretical perspective, without reducing it to deterministic computation. More specifically, we show that probabilistic computable functions, i.e., those functions which are computed by Probabilistic Turing Machines (PTMs), can be characterized by a natural generalization of Kleene's partial recursive functions which includes, among the initial functions, one that returns either the identity or the successor with probability 1/2. We then prove the equi-expressivity of the obtained algebra and the class of functions computed by PTMs. In the second part of the thesis we investigate the relations between our recursion-theoretical framework and sub-recursive classes, in the spirit of Implicit Computational Complexity. More precisely, endowing predicative recurrence with a random base function is proved to lead to a characterization of the polynomial-time computable probabilistic functions.
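A toy sketch may help fix ideas about the probabilistic function algebra described above. The names below are ours, and the code only illustrates the random base function (identity or successor with probability 1/2) together with ordinary composition and primitive recursion; the thesis works with the formal algebra, not with Python.

```python
import random

def rand_base(n):
    """The probabilistic initial function: identity or successor with prob. 1/2."""
    return n + random.randint(0, 1)

def compose(f, g):
    """Ordinary composition f(g(x))."""
    return lambda n: f(g(n))

def prim_rec(base, step):
    """Primitive recursion: h(0) = base, h(n+1) = step(n, h(n))."""
    def h(n):
        acc = base
        for i in range(n):
            acc = step(i, acc)
        return acc
    return h

# Example: iterating rand_base n times starting from 0 yields a Binomial(n, 1/2)
# distributed value, so the "computed function" is a distribution over outputs.
binomial_like = prim_rec(0, lambda i, acc: rand_base(acc))

if __name__ == "__main__":
    samples = [binomial_like(10) for _ in range(10_000)]
    print(sum(samples) / len(samples))   # should be close to 5 = 10 * 1/2
```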
Abstract:
This work is focused on the study of saltwater intrusion in coastal aquifers, and in particular on the development of conceptual schemes to evaluate the risk associated with it. Saltwater intrusion depends on different natural and anthropic factors, both exhibiting strongly random behaviour, which should be considered for an optimal management of the territory and water resources. Given the uncertainty of the problem parameters, the risk associated with salinization needs to be cast in a probabilistic framework. On the basis of a widely adopted sharp-interface formulation, key hydrogeological problem parameters are modeled as random variables, and global sensitivity analysis is used to determine their influence on the position of the saltwater interface. The analyses presented in this work rely on an efficient model reduction technique, based on Polynomial Chaos Expansion, able to provide an accurate description of the model without a large computational burden. When the assumptions of classical analytical models are not met, as often happens in applications to real case studies, including the area analyzed in the present work, one can adopt data-driven techniques based on the analysis of the data characterizing the system under study. It follows that a model can be defined on the basis of connections between the system state variables, with only a limited number of assumptions about the "physical" behaviour of the system.