7 results for direct methods
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The coastal phreatic aquifer of Ravenna is heavily salinized up to several kilometres inland. The aquifer body consists of sands resting on a clayey substrate at an average depth of 25 m; the outcropping deposits are sands and clays. The work carried out consists of a characterization of the salinization state using indirect methods (geoelectrical surveys) and direct methods (readings of the physical parameters of the groundwater in wells). The vertical electrical soundings (V.E.S.) show a seasonality due to the different amounts of rainfall, and hence of recharge: areas with high-hydraulic-conductivity surface deposits (sands) have a freshwater lens between 0.1 and 2.25 m thick, below which lies a mixing zone with thicknesses ranging from 1.00 to 12.00 m, whereas where the surface deposits have low hydraulic conductivity (sandy silts and sandy clays) the freshwater lens disappears and the mixing zone is thin. Direct measurements in wells show a water table almost everywhere below sea level in both monitored months, June and December 2010, with a slightly greater depth in December. The lithological reconstruction yields an aquifer composed of 4×10⁹ m³ of sand, so that, assuming an average porosity of 30%, 1.2×10⁹ m³ of water are present. Numerical modelling (Modflow-SEAWAT 2000) indicates that the origin of the saline water found in the aquifer is most easily explained by assuming its presence since the formation of the aquifer itself, as a residue of the regressing marine waters. Another issue analysed is the evaluation of the mini-filter methodology in a study of groundwater salinization. An experimental transect was built, which allowed the freshwater/brackish/saltwater interface to be mapped with a precision not previously attainable.
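The storage figure quoted in the abstract follows from a single porosity relation; as a hedged check, using only the bulk sand volume and average porosity given above:

```latex
V_{\mathrm{water}} = n \, V_{\mathrm{sand}} = 0.30 \times \left(4 \times 10^{9}\ \mathrm{m^3}\right) = 1.2 \times 10^{9}\ \mathrm{m^3}
```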
Abstract:
Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori preprocessing step. Among the learning techniques for dealing with structured data, kernel methods are recognized as having a strong theoretical background and as being effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain, the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures in the dataset are completely dissimilar to one another. In those cases the classifier has too little information to make correct predictions on unseen data; in fact, it tends to produce a discriminating function that behaves like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets with node labels belonging to a large domain. A second drawback of using tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the application of the kernel in scenarios involving large amounts of data. This thesis proposes three contributions to resolving the above issues of kernels for trees. A first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing sparsity with respect to traditional tree kernel functions. Specifically, we propose to encode the input trees by an algorithm able to project the data onto a lower-dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. A second contribution is the proposal of a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. A third contribution is devoted to reducing the computational burden related to the calculation of a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique when kernels such as the subtree and subset tree kernels are employed. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and the storage requirements.
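As a hedged illustration of the counting flavour of such kernels (a toy sketch, not the kernels developed in the thesis), the snippet below implements a plain subtree kernel: trees are nested tuples, every complete subtree is serialized to a canonical string, and the kernel value is the number of pairs of identical subtrees shared by the two inputs.

```python
from collections import Counter

def collect(tree, bag):
    """Return a canonical string for `tree` and append the string of every
    complete subtree to `bag`. A tree is a tuple (label, child_1, ..., child_k)."""
    label, *children = tree
    key = "(" + str(label) + "".join(collect(child, bag) for child in children) + ")"
    bag.append(key)
    return key

def subtree_kernel(t1, t2):
    """Count the pairs of identical complete subtrees shared by t1 and t2."""
    bag1, bag2 = [], []
    collect(t1, bag1)
    collect(t2, bag2)
    c1, c2 = Counter(bag1), Counter(bag2)
    return sum(c1[k] * c2[k] for k in c1 if k in c2)

# Toy usage: the two trees share the subtrees rooted at b, d, e and c.
t1 = ("a", ("b", ("d",), ("e",)), ("c",))
t2 = ("f", ("b", ("d",), ("e",)), ("c",))
print(subtree_kernel(t1, t2))   # -> 4
```

In this toy setting, the sparse regime described above corresponds to most pairs of dataset trees returning 0, and the DAG-based compaction mentioned in the abstract would amount to storing each repeated subtree only once across a forest.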
Abstract:
The goal of this thesis is to develop a computational method, based on machine learning techniques, for predicting the disulfide-bonding states of cysteine residues in proteins, a sub-problem of the larger and still unsolved problem of protein structure prediction. Improving the prediction of the disulfide-bonding states of cysteine residues helps to place a constraint on the three-dimensional (3D) structure of the respective protein, and thus will eventually help in the prediction of the 3D structure of proteins. The results of this work have direct implications for site-directed mutational studies of proteins, protein engineering and the protein folding problem. We have used a combination of an Artificial Neural Network (ANN) and a Hidden Markov Model (HMM), the so-called Hidden Neural Network (HNN), as the machine learning technique for our prediction method. By using different global and local features of proteins (specifically profiles, parity of cysteine residues, average cysteine conservation, correlated mutation, sub-cellular localization and signal peptide) as inputs, and considering Eukaryotes and Prokaryotes separately, we have reached an accuracy of 94% on a per-cysteine basis for both the Eukaryotic and Prokaryotic datasets, and accuracies of 90% and 93% on a per-protein basis for the Eukaryotic and Prokaryotic datasets respectively. These accuracies are the best reached so far by any existing prediction method, so our method outperforms all previously developed approaches and is therefore more reliable. The most interesting part of this work is the difference in prediction performance between Eukaryotes and Prokaryotes at the basic level of input coding, when ‘profile’ information alone was given as input to our prediction method. One of the reasons we identify for this is the difference in the amino-acid composition of the local environment of bonded and free cysteine residues in Eukaryotes and Prokaryotes: Eukaryotic bonded cysteine examples have a ‘symmetric-cysteine-rich’ environment, whereas Prokaryotic bonded examples lack it.
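As a hedged sketch of the kind of local ‘profile’ encoding mentioned above (the window length, the padding choice and the helper names are assumptions for illustration, not the thesis’ actual pipeline), the snippet below builds one fixed-size feature vector per cysteine from a per-position profile, the sort of input a neural-network stage would consume.

```python
import numpy as np

def cysteine_windows(sequence, profile, half_window=7):
    """For every cysteine in `sequence`, return a flattened window of
    profile columns centred on it.

    sequence : str of one-letter amino-acid codes
    profile  : (len(sequence), 20) array, one row of residue frequencies
               per position (e.g. derived from a multiple sequence alignment)
    """
    n, n_aa = profile.shape
    background = np.full(n_aa, 1.0 / n_aa)   # assumed padding beyond sequence ends
    windows = []
    for i, res in enumerate(sequence):
        if res != "C":
            continue
        rows = [profile[j] if 0 <= j < n else background
                for j in range(i - half_window, i + half_window + 1)]
        windows.append(np.concatenate(rows))
    return np.array(windows)

# Toy usage: a random profile for a short sequence containing two cysteines.
seq = "MKCLLVAGCW"
prof = np.random.dirichlet(np.ones(20), size=len(seq))
X = cysteine_windows(seq, prof)
print(X.shape)   # (2, 15 * 20): one feature vector per cysteine
```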
Abstract:
Among the experimental methods commonly used to characterize the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the associated modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the design and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductory chapter recalls some basic notions of structural dynamics, focusing on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study, excited either by a harmonic force or in free vibration. The second chapter is entirely centred on the dynamic identification of a structure subjected to a forced-vibration test. It first describes the construction of the FRF through the classical FFT of the recorded signals. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately from the geometric characteristics of the ellipse given by the direct input-output comparison. The two methods are compared, and attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests in which the force is not known, as in ambient or impact tests. In this analysis the continuous wavelet transform (CWT) is used, which allows a simultaneous investigation of a generic signal x(t) in the time and frequency domains. The CWT is first applied to free oscillations, with excellent results in terms of frequencies, damping values and vibration modes. Its application to ambient vibrations yields accurate modal parameters of the system, although some important observations must be made about the damping. The fourth chapter again addresses the post-processing of data acquired in a vibration test, this time through the discrete wavelet transform (DWT). In the first part, the results obtained with the DWT are compared with those obtained with the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal, since in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. Starting from the modal parameters obtained from ambient vibration tests performed on the Humber Bridge in England by the University of Porto in 2008 and by the University of Sheffield, an FE model of the bridge is built in order to establish which type of model captures the real dynamic behaviour of the bridge most accurately. The sixth chapter draws the conclusions of the presented research. These concern the application of a frequency-domain method to evaluate the modal parameters of a structure and its advantages, the advantages of applying a wavelet-transform-based procedure in the identification process for tests with unknown input, and finally the problem of 3D modelling of systems with many degrees of freedom and with different types of uncertainty.
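As a brief, hedged illustration of the classical FFT-based FRF construction referred to in the second chapter (not the ellipse-based method proposed in the thesis), the sketch below computes an H1 estimate of the FRF from an input force and the measured response using Welch-averaged spectra; the toy single-degree-of-freedom signals and all parameter values are assumptions.

```python
import numpy as np
from scipy.signal import csd, welch

def frf_h1(force, response, fs, nperseg=1024):
    """H1 estimate of the FRF: input-output cross-spectrum divided by the
    input auto-spectrum, averaged over Welch segments."""
    f, s_xy = csd(force, response, fs=fs, nperseg=nperseg)
    _, s_xx = welch(force, fs=fs, nperseg=nperseg)
    return f, s_xy / s_xx

# Toy usage: in a real test both channels would be measured; here the response
# is synthesized by convolving random excitation with a decaying 20 Hz mode.
fs = 1000.0
t = np.arange(0, 60, 1 / fs)
force = np.random.randn(t.size)
response = np.convolve(force, np.exp(-5 * t) * np.sin(2 * np.pi * 20 * t),
                       mode="full")[:t.size] / fs
f, H = frf_h1(force, response, fs)
peak = f[np.argmax(np.abs(H))]
print(f"resonance estimate: {peak:.1f} Hz")   # close to 20 Hz for this toy system
```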
Abstract:
The proper functioning of ion channels is a prerequisite for a normal cell, and disorders involving ion channels, or channelopathies, underlie many human diseases. Long QT syndromes (LQTS), for example, may arise from the malfunctioning of the hERG channel, caused either by the binding of drugs or by mutations in the HERG gene. In the first part of this thesis I present a framework to investigate the mechanism of ion conduction through the hERG channel. The free energy profile governing the elementary steps of ion translocation in the pore was computed by means of umbrella sampling simulations. Compared to previous studies, we detected a different dynamic behavior: according to our data, hERG is more likely to mediate a conduction mechanism of the kind referred to as “single-vacancy-like” by Roux and coworkers (2001), rather than a “knock-on” mechanism. The same protocol was applied to a model of hERG carrying the Gly628Ser mutation, found to be a cause of congenital LQTS. The results provided interesting insights into the reasons for the malfunctioning of the mutant channel. Since they have critical functions in the viral life cycle, viral ion channels, such as the M2 proton channel, are considered attractive targets for antiviral therapy. A deep knowledge of the mechanisms that the virus employs to survive in the host cell is of primary importance for the identification of new antiviral strategies. In the second part of this thesis I shed light on the role that M2 plays in the control of the electrical potential inside the virus, charge equilibration being a condition required to allow proton influx. Ion conduction through M2 was simulated using the metadynamics technique. Based on our results we suggest that both an anion-mediated cation-proton exchange and a direct anion-proton exchange could contribute to explaining the activity of the M2 channel.
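A minimal, hedged sketch of the umbrella-sampling idea mentioned above, on a toy one-dimensional double-well coordinate rather than the actual hERG pore: each window adds a harmonic bias that restrains a Metropolis walker near a target value of the coordinate, and the biased histograms from all windows would then be combined (e.g. with WHAM) into the free energy profile. All functions and parameter values here are illustrative assumptions.

```python
import numpy as np

def potential(z):
    """Toy double-well standing in for the free energy along the pore axis."""
    return (z**2 - 1.0)**2

def umbrella_window(z0, k=50.0, n_steps=20000, step=0.05, beta=1.0, seed=0):
    """Metropolis sampling of potential(z) + (k/2)(z - z0)^2: the harmonic
    bias keeps the walker near the window centre z0."""
    rng = np.random.default_rng(seed)
    z = z0
    e = potential(z) + 0.5 * k * (z - z0)**2
    samples = []
    for _ in range(n_steps):
        z_new = z + rng.normal(scale=step)
        e_new = potential(z_new) + 0.5 * k * (z_new - z0)**2
        if rng.random() < np.exp(-beta * (e_new - e)):
            z, e = z_new, e_new
        samples.append(z)
    return np.array(samples)

# One window per point along the reaction coordinate; the biased histograms
# would be unbiased and stitched together (WHAM/MBAR) to obtain the profile.
centres = np.linspace(-1.5, 1.5, 13)
windows = [umbrella_window(z0, seed=i) for i, z0 in enumerate(centres)]
print([f"{w.mean():+.2f}" for w in windows])   # means track the window centres
```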
Abstract:
This thesis work involves various aspects of crystal engineering. Chapter 1 focuses on crystals containing crown ether complexes. Aspects such as the possibility of preparing these materials by non-solution methods, i.e. by direct reaction of the solid components, their thermal behavior, and isomorphism and interconversion between hydrates are taken into account. In chapter 2 a study is presented aimed at understanding the relationship between the hydrogen-bonding capability and the shape of the building blocks chosen to construct crystals. The focus is on the control exerted by shape on the organization of sandwich cations such as cobalticinium, decamethylcobalticinium and bis(benzene)chromium(I), and on the aggregation of monoanions, all containing carboxylic acid and carboxylate groups, into 0-D, 1-D, 2-D and 3-D networks. Reactions conducted in multi-component molecular assemblies, or co-crystals, have been recognized as a way to control reactivity in the solid state. The [2+2] photodimerization of olefins is a successful demonstration of how templated solid-state synthesis can efficiently produce unique materials with remarkable stereoselectivity and under environmentally friendly conditions. A demonstration of this synthetic strategy is given in chapter 3. The combination of various types of intermolecular linkages, leading either to highly ordered aggregation and crystalline materials or to random aggregation resulting in an amorphous precipitate, may not go to completion. In such rare cases an aggregation process intermediate between crystalline and amorphous materials is observed, resulting in the formation of a gel, i.e. a viscoelastic solid-like or liquid-like material. In chapter 4 the design of new Low Molecular Weight Gelators is presented. Aspects such as the relationships between molecular structure, crystal packing and gelation properties, and the application of this kind of gel as a medium for the crystal growth of organic molecules, such as APIs, are also discussed.
Abstract:
Over the past two decades, a growing number of researchers in robotics have focused on a particular group of machines belonging to the family of parallel manipulators: cable robots. Although these robots share several theoretical elements with the better-known parallel robots, they still present completely (or partly) unsolved issues. In particular, the study of their kinematics, already a difficult subject for conventional parallel manipulators, is further complicated by the non-linear nature of cables, which can exert only tensile forces. The work presented in this thesis therefore focuses on the study of the kinematics of these robots and on the development of numerical techniques able to address some of the related problems. Most of the work concerns the development of an interval-analysis-based procedure for the solution of the direct geometric problem of a generic cable manipulator. This technique, as well as allowing a rapid solution of the problem, also guarantees the results obtained against rounding and elimination errors, and can take into account any uncertainties in the model of the problem. The developed code has been tested with the help of a small manipulator whose realization is described in this dissertation, together with the auxiliary work done during its design and simulation phases.
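As a hedged, self-contained illustration of the guaranteed-enclosure property that makes interval analysis attractive here (a toy stand-in, not the solver developed in the thesis), the sketch below evaluates a squared cable-length constraint ‖p − a‖² over an interval box of candidate platform positions: the returned interval contains the constraint's value for every position in the box (outward rounding is omitted in this toy).

```python
class Interval:
    """Minimal closed-interval arithmetic [lo, hi] (toy: no outward rounding)."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, (lo if hi is None else hi)

    def __add__(self, other):
        other = other if isinstance(other, Interval) else Interval(other)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        other = other if isinstance(other, Interval) else Interval(other)
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        other = other if isinstance(other, Interval) else Interval(other)
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo:.3f}, {self.hi:.3f}]"

def squared_cable_length(box, anchor):
    """Enclosure of ||p - a||^2 for every platform position p in `box`.
    (d * d is a valid, though not tight, enclosure of the square.)"""
    total = Interval(0.0)
    for coord, a in zip(box, anchor):
        d = coord - Interval(a)
        total = total + d * d
    return total

# Toy usage: a 3-D box of candidate platform positions and one cable anchor point.
box = [Interval(0.9, 1.1), Interval(-0.1, 0.1), Interval(0.4, 0.6)]
anchor = (0.0, 0.0, 2.0)
print(squared_cable_length(box, anchor))   # contains ||p - a||^2 for all p in box
```

In a branch-and-prune solver of the direct geometric problem, boxes whose enclosure does not intersect the measured squared cable length could be discarded with certainty, which is the kind of guarantee the abstract refers to.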