25 results for Oxidoreductases Acting on CH-NH Group Donors
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Recently, increasing attention has been devoted to the use of organic dyes as light absorbers for the preparation of photoactive layers in organic solar cells (OPV). Organic dyes show a high ability to harvest sunlight thanks to their high molar extinction coefficients and good photophysical properties. For these reasons they are excellent candidates for enhancing the photoelectric conversion in OPV devices. This thesis describes a new strategy for incorporating porphyrin derivatives into the side chains of thiophene copolymers. The studies carried out have shown that poly(3-bromohexyl)thiophene can be variously functionalized with hydroxytetraphenylporphyrin (TPPOH) to obtain copolymers usable as p-donor materials in the fabrication of OPV devices. The copolymers poly[3-(6-bromohexyl)thiophene-co-3-[5-(4-phenoxy)-10,15,20-triphenylporphyrinyl]hexylthiophene] (P[T6Br-co-T6TPP]), containing different amounts of porphyrin, were synthesized by both non-regiospecific and regiospecific methods, in order to compare their properties and to verify whether a macromolecular structure with a uniform regiochemistry of substitution promotes electric charge transport, thereby improving efficiency. A further comparison was made between these derivatives and similar derivatives P[T6H-co-T6TPP] lacking the bromine atom in the side chain, in order to verify whether the absence of the reactive group improves the thermal and chemical stability of the polymer films, thus favourably affecting the performance of the photovoltaic devices. All the copolymers were characterized by different techniques: NMR, FT-IR and UV-Vis spectroscopy, DSC and TGA thermal analyses, and GPC.
Bulk heterojunction solar cells, prepared using PCBM as the electron-acceptor material and the copolymers as the electron-donor materials, were tested using a Keithley multimeter and a solar simulator.
Abstract:
Matita (which means pencil in Italian) is a new interactive theorem prover under development at the University of Bologna. When compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects. The underlying calculus of the system, namely the Calculus of (Co)Inductive Constructions (CIC for short), is well-known and is used as the basis of another mainstream proof assistant, Coq, with which Matita is to some extent compatible. In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script to store textual commands for the system. In the LCF tradition, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script management technique underlying the popularity of the Proof General generic interface for interactive theorem provers: while editing a script, the user can move the execution point forward to deliver commands to the system, or backward to retract (or “undo”) past commands. Matita has been developed from scratch over the past 8 years by several members of the Helm research group; the author of this thesis is one of those members. Matita is now a full-fledged proof assistant with a library of about 1,000 concepts. Several innovative solutions spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to the topic of user interaction with theorem provers, and to which the author of this thesis was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below. Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae.
Since mathematical notation is ambiguous, parsing formulae typeset the way mathematicians write them on paper is a challenging task; a challenge neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine which permits typing formulae in the familiar mathematical notation. Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics together. With tacticals, scripts can be made shorter, more readable, and more resilient to changes. Unfortunately, they are de facto incompatible with state-of-the-art user interfaces based on script management. Such interfaces indeed do not allow positioning the execution point inside complex tacticals, thus introducing a trade-off between the usefulness of structuring scripts and a tedious big-step execution behavior during script replaying. In Matita we break this trade-off with tinycals: an alternative to a subset of LCF tacticals which can be evaluated in a more fine-grained manner. Extensible yet meaningful notation. Proof assistant users often face the need to create new mathematical notation in order to ease the use of new concepts. The framework used in Matita for dealing with extensible notation both accounts for high-quality bidimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms. Using our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae acting on their rendered forms is possible too. Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners.
Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of web services (called tutors) which can independently try to complete open sub-goals of a proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can interactively or automatically apply them to the current proof. Another innovative aspect of Matita, only marginally touched upon by this thesis, is the embedded content-based search engine Whelp, which is exploited to various ends, from automatic theorem proving to avoiding duplicate work for the user. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.
Abstract:
Self-incompatibility (SI) systems have evolved in many flowering plants to prevent self-fertilization and thus promote outbreeding. Pear and apple, like many of the species belonging to the Rosaceae, exhibit RNase-mediated gametophytic self-incompatibility (GSI), a widespread system carried also by the Solanaceae and Plantaginaceae. For this reason, pear orchards must contain at least two different cultivars that pollinate each other; to guarantee efficient cross-pollination, they should have overlapping flowering periods and must be genetically compatible. This compatibility is determined by the S-locus, containing at least two genes encoding a female (pistil) and a male (pollen) determinant. The female determinant in the Rosaceae, Solanaceae and Plantaginaceae system is a stylar glycoprotein with ribonuclease activity (S-RNase) that acts as a specific cytotoxin in incompatible pollen tubes, degrading cellular RNAs. Since its identification, the S-RNase gene has been intensively studied and the sequences of a large number of alleles are available in online databases. On the contrary, the male determinant has only recently been identified as a pollen-expressed protein containing an F-box motif, called S-locus F-box (abbreviated SLF or SFB). Since F-box proteins are best known for their participation in the SCF (Skp1 - Cullin - F-box) E3 ubiquitin ligase enzymatic complex, which is involved in protein degradation through the 26S proteasome pathway, the male determinant is thought to act by mediating the ubiquitination of the S-RNases, targeting them for degradation in compatible pollen tubes. Attempts to clone SLF/SFB genes in the Pyrinae produced no results until very recently; in apple, the use of genomic libraries allowed the detection of two F-box genes linked to each S haplotype, called SFBB (S-locus F-Box Brothers). In Japanese pear, three SFBB genes linked to each haplotype were cloned from pollen cDNA.
The SFBB genes exhibit S haplotype-specific sequence divergence and pollen-specific expression; their multiplicity is a feature whose interpretation is unclear: it has been hypothesized that all of them participate in the S-specific interaction with the RNase, but it is also possible that only one of them is involved in this function. Moreover, even if the S-locus male and female determinants are the only genes responsible for the specificity of the pollen-pistil recognition, many other factors are supposed to play a role in GSI; these are not linked to the S locus and act in an S-haplotype-independent manner. They can have a function in regulating the expression of the S determinants (group 1 factors), modulating their activity (group 2), or acting downstream, in the accomplishment of the reaction of acceptance or rejection of the pollen tube (group 3). This study aimed to elucidate the molecular mechanism of GSI in European pear (Pyrus communis) as well as in the other Pyrinae; it was divided into two parts, the first focusing on the characterization of male determinants, and the second on factors external to the S locus. The search for S-locus F-box genes was primarily aimed at identifying such genes in European pear, for which sequence data are still not available; moreover, it also allowed the S-locus structure in the Pyrinae to be investigated. The analysis was carried out on a pool of varieties of the three species Pyrus communis (European pear), Pyrus pyrifolia (Japanese pear), and Malus × domestica (apple); varieties carrying S haplotypes whose RNases are highly similar were chosen, in order to check whether or not the same level of similarity is maintained between the male determinants as well. A total of 82 sequences were obtained, 47 of which represent the first S-locus F-box genes sequenced from European pear.
The sequence data strongly support the hypothesis that the S-locus structure is conserved among the three species, and presumably among all the Pyrinae; at least five genes have homologs in the analysed S haplotypes, but the number of F-box genes surrounding the S-RNase could be even greater. The high level of sequence divergence and the similarity between alleles linked to highly conserved RNases suggest a shared ancestral polymorphism also for the F-box genes. The F-box genes identified in European pear were mapped on a segregating population of 91 individuals from the cross 'Abbé Fétel' × 'Max Red Bartlett'. All the genes were placed on linkage group 17, where the S locus has been placed in both pear and apple maps, and proved to be strongly associated with the S-RNase gene. The linkage with the RNase was perfect for some of the F-box genes, while for others very rare single recombination events were identified. The second part of this study focused on the search for other genes involved in the SI response in pear; it aimed, on the one hand, at identifying genes differentially expressed in compatible and incompatible crosses and, on the other, at cloning and characterizing the transglutaminase (TGase) gene, whose role may be crucial in pollen rejection. For the identification of differentially expressed genes, controlled pollinations were carried out in four combinations (self-pollination, incompatible, half-compatible and fully compatible cross-pollination); expression profiles were compared through cDNA-AFLP. Twenty-eight fragments displaying an expression pattern related to compatibility or incompatibility were identified, cloned and sequenced; the sequence analysis allowed a putative annotation to be assigned to some of them. The identified genes are involved in very different cellular processes or in defense mechanisms, suggesting a very complex change in gene expression following the pollen/pistil recognition.
The pool of genes identified with this technique offers a good basis for further study toward a better understanding of how the SI response is carried out. Among the factors involved in the SI response, moreover, an important role may be played by transglutaminase (TGase), an enzyme involved both in post-translational protein modification and in protein cross-linking. The TGase activity detected in pear styles was significantly higher when pollinated in incompatible combinations than in compatible ones, suggesting a role of this enzyme in the abnormal cytoskeletal reorganization observed during the pollen rejection reaction. The aim of this part of the work was thus to identify and clone the pear TGase gene; the PCR amplification of fragments of this gene was achieved using primers designed on the alignment between the Arabidopsis TGase gene sequence and several apple EST fragments; the full-length coding sequence of the pear TGase gene was then cloned from cDNA, providing a valuable tool for further study of the in vitro and in vivo action of this enzyme.
Abstract:
The present work consists of the investigation of the navigation of the Pioneer 10 and 11 probes, which became known as the “Pioneer Anomaly”: the trajectories followed by the spacecraft did not match the ones retrieved with standard navigation software. The mismatch appeared as a linear drift in the Doppler data received from the spacecraft, which has been ascribed to a constant sunward acceleration of about 8.5×10⁻¹⁰ m/s². The study presented hereafter tries to find a convincing explanation for this discrepancy. The research is based on the analysis of Doppler tracking data through the ODP (Orbit Determination Program), developed by NASA/JPL. The method can be summarized as: search for any kind of physics affecting the dynamics of the spacecraft or the propagation of radiometric data which may not have been properly taken into account previously, and check whether or not it might rule out the anomaly. A major effort has been put into building a thermal model of the spacecraft for predicting the force due to anisotropic thermal radiation, since this is a model not natively included in the ODP. Tracking data encompassing more than twenty years of Pioneer 10 interplanetary cruise, plus twelve years of Pioneer 11, have been analyzed in light of the results of the thermal model. Different strategies of orbit determination have been implemented, including single-arc, multi-arc and stochastic filters, and their performance compared. Orbital solutions have been obtained without the need for any acceleration other than the thermal recoil one, indicating it as responsible for the observed linear drift in the Doppler residuals. As further support, we checked that the inclusion of an additional constant acceleration does not improve the quality of the orbital solutions. All the tests performed lead to the conclusion that no anomalous acceleration is acting on the Pioneer spacecraft.
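The order of magnitude of the thermal-recoil hypothesis can be checked with a simple estimate: radiated power P carries momentum flux P/c, so a fraction f of the onboard thermal power emitted anisotropically yields a recoil acceleration a = fP/(mc). The sketch below inverts this relation for the reported anomalous acceleration; the mass and power figures are illustrative assumptions, not values taken from the thesis.

```python
# Back-of-the-envelope check: what fraction of the onboard thermal power,
# emitted anisotropically, would account for the reported anomalous
# acceleration? Mass and power values below are illustrative assumptions.

C = 299_792_458.0          # speed of light, m/s

a_anom = 8.5e-10           # reported anomalous acceleration, m/s^2
mass = 240.0               # assumed spacecraft mass, kg
power = 2500.0             # assumed total thermal power (RTGs + electronics), W

# Recoil force needed to produce a_anom, then the anisotropic power fraction
force_needed = a_anom * mass            # N
fraction = force_needed * C / power     # dimensionless

print(f"recoil force needed: {force_needed:.2e} N")
print(f"anisotropic fraction of thermal power: {fraction:.1%}")
```

With these assumed figures only a few percent of the thermal power needs a net directional bias, which is why a careful thermal model of the spacecraft is decisive for the analysis.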
Abstract:
Running economy (RE), i.e. the oxygen consumption at a given submaximal speed, is an important determinant of endurance running performance. So far, investigators have widely attempted to identify the factors affecting RE in competitive athletes, focusing mainly on the relationships between RE and running biomechanics. However, the current results are inconsistent and a clear mechanical profile of an economic runner has not yet been established. The present work aimed to better understand how running technique influences RE in sub-elite middle-distance runners by investigating the biomechanical parameters acting on RE and the underlying mechanisms. Special emphasis was given to accounting for intra-individual variability in RE at different speeds and to assessing track running rather than treadmill running. In Study One, a factor analysis was used to reduce the 30 considered mechanical parameters to a few global descriptors of the running mechanics. Then, a biomechanical comparison between economic and non-economic runners and a multiple regression analysis (with RE as criterion variable and mechanical indices as independent variables) were performed. It was found that a better RE was associated with higher knee and ankle flexion in the support phase, and that the combination of seven selected mechanical measures explained ∼72% of the variability in RE. In Study Two, a mathematical model predicting RE a priori from the rate of force production, originally developed and used in the field of comparative biology, was adapted and tested in competitive athletes. The model showed a very good fit (R²=0.86). In conclusion, the results of this dissertation suggest that the very complex interrelationships among the mechanical parameters affecting RE may be successfully dealt with through multivariate statistical analyses and the application of theoretical mathematical models.
Thanks to these results, coaches are provided with useful tools to assess the biomechanical profile of their athletes. Thus, individual weaknesses in running technique may be identified and removed, with the ultimate goal of improving RE.
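The multiple-regression step described above (RE as criterion variable, mechanical indices as predictors, with the share of explained variance reported as R²) can be sketched as follows. The data are synthetic and the seven "mechanical indices" are placeholders, not the measures individuated in Study One.

```python
import numpy as np

# Sketch of the regression approach: RE regressed on seven mechanical
# indices via ordinary least squares, with R^2 as the summary statistic.
# All data below are synthetic illustrations.
rng = np.random.default_rng(0)
n = 30                                   # number of runners
X = rng.normal(size=(n, 7))              # 7 standardised mechanical indices
true_beta = np.array([0.5, -0.3, 0.2, 0.1, -0.2, 0.15, 0.05])
re = X @ true_beta + rng.normal(scale=0.3, size=n)  # synthetic RE values

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(A, re, rcond=None)

# Coefficient of determination R^2 = 1 - SS_res / SS_tot
pred = A @ beta_hat
ss_res = np.sum((re - pred) ** 2)
ss_tot = np.sum((re - re.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")
```

The reported figure of ∼72% of RE variability explained corresponds to R² ≈ 0.72 in this notation.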
Abstract:
This work reports on two different projects that were carried out during the three years of the Doctor of Philosophy course. In the first year, a project regarding a Capacitive Pressure Sensor Array for Aerodynamic Applications was developed in the Applied Aerodynamics research team of the Second Faculty of Engineering, University of Bologna, Forlì, Italy, in collaboration with the ARCES laboratories of the same university. Capacitive pressure sensors were designed and fabricated, investigating theoretically and experimentally the sensors' mechanical and electrical behaviours by means of finite element method simulations and wind tunnel tests. During the design phase, the sensor figures of merit were considered and evaluated for specific aerodynamic applications. The aim of this work is the production of low-cost MEMS-alternative devices suitable for a sensor network to be implemented in an air data system. The last two years were dedicated to a project regarding a Wireless Pressure Sensor Network for Nautical Applications. The aim of the developed sensor network is to sense the weak pressure field acting on the sail plan of a full-batten sail by means of instrumented battens, providing a real-time differential pressure map over the entire sail surface. The wireless sensor network and the sensing unit were designed, fabricated and tested in the faculty laboratories. A static non-linear coupled mechanical-electrostatic simulation has been developed to predict the pressure-versus-capacitance static characteristic suitable for the transduction process and to tune the geometry of the transducer to reach the required resolution, sensitivity and time response over the appropriate full-scale pressure input. A time-dependent viscoelastic error model has been inferred and developed by means of experimental data in order to model, predict and reduce the inaccuracy bound due to the viscoelastic phenomena affecting the Mylar® polyester film used for the sensor diaphragm.
The developments of the two above-mentioned subjects are strictly related but presented separately in this work.
Abstract:
Self-organisation is increasingly being regarded as an effective approach to tackle the complexity of modern systems. The self-organisation approach allows the development of systems exhibiting complex dynamics and adapting to environmental perturbations without requiring complete knowledge of the future surrounding conditions. However, the development of self-organising systems (SOS) is driven by different principles with respect to traditional software engineering. For instance, engineers typically design systems by combining smaller elements, where the composition rules depend on the reference paradigm but typically produce predictable results. Conversely, SOS display non-linear dynamics, which can hardly be captured by deterministic models, and, although robust with respect to external perturbations, are quite sensitive to changes in inner working parameters. In this thesis, we describe methodological aspects concerning the early design stage of SOS built relying on the multiagent paradigm: in particular, we refer to the A&A metamodel, where MAS are composed of agents and artefacts, i.e. environmental resources. Then, we describe an architectural pattern that has been extracted from a recurrent solution in designing self-organising systems: this pattern is based on a MAS environment formed by artefacts, modelling non-proactive resources, and environmental agents acting on artefacts so as to enable self-organising mechanisms. In this context, we propose a scientific approach for the early design stage of the engineering of self-organising systems: the process is an iterative one and each cycle is articulated in four stages: modelling, simulation, formal verification, and tuning. During the modelling phase we mainly rely on the existence of a self-organising strategy observed in Nature and, hopefully, encoded as a design pattern.
Simulations of an abstract system model are used to drive design choices until the required quality properties are obtained, thus providing guarantees that the subsequent design steps will lead to a correct implementation. However, system analysis based exclusively on simulation results does not provide sound guarantees for the engineering of complex systems: to this purpose, we envision the application of formal verification techniques, specifically model checking, in order to exactly characterise the system behaviours. During the tuning stage, parameters are tweaked in order to meet the target global dynamics and feasibility constraints. In order to evaluate the methodology, we analysed several systems: in this thesis, we describe only three of them, i.e. the most representative ones for each of the three years of the PhD course. We analyse each case study using the presented method, and describe the exploited formal tools and techniques.
Abstract:
In fluid dynamics research, pressure measurements are of great importance in defining the flow field acting on aerodynamic surfaces. In fact, the experimental approach is fundamental for avoiding the complexity of the mathematical models used for predicting fluid phenomena. It is important to note that when in-situ sensors are used to monitor pressure over large domains with highly unsteady flows, several problems are encountered with the classical techniques, due to transducer cost, intrusiveness, time response and operating range. An interesting approach for satisfying the previously reported sensor requirements is to implement a sensor network capable of acquiring pressure data on an aerodynamic surface using a wireless communication system able to collect the pressure data with the lowest possible level of environmental invasion. In this thesis a wireless sensor network for fluid-field pressure measurement has been designed, built and tested. To develop the system, a capacitive pressure sensor, based on a polymeric membrane, and read-out circuitry, based on a microcontroller, have been designed, built and tested. The wireless communication has been performed using the Zensys Z-WAVE platform, and network and data management have been implemented. Finally, the full embedded system with antenna has been created. As a proof of concept, the monitoring of pressure on the top of the mainsail of a sailboat has been chosen as a working example.
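The capacitive transduction principle behind such a sensor can be sketched to first order: a pressure load deflects the polymeric diaphragm, reducing the gap of a parallel-plate capacitor and increasing its capacitance. Geometry and compliance values below are illustrative assumptions, not the thesis design parameters.

```python
# First-order sketch of capacitive pressure transduction: an ideal
# parallel-plate capacitor whose gap shrinks linearly with applied
# pressure (small-deflection approximation, fringing fields neglected).
# All numerical values are illustrative assumptions.

EPS0 = 8.854e-12      # vacuum permittivity, F/m

def capacitance(area_m2, gap_m):
    """Ideal parallel-plate capacitance with an air gap."""
    return EPS0 * area_m2 / gap_m

area = 25e-6          # assumed 5 mm x 5 mm electrode, m^2
gap0 = 50e-6          # assumed rest gap, m
compliance = 2e-11    # assumed gap change per unit pressure, m/Pa

for p in (0.0, 100.0, 500.0, 1000.0):        # applied pressure, Pa
    gap = gap0 - compliance * p              # diaphragm deflection
    c = capacitance(area, gap)
    print(f"p = {p:6.0f} Pa -> C = {c * 1e12:.3f} pF")
```

In a real design the membrane deflection is non-linear, which is why the thesis resorts to a coupled mechanical-electrostatic simulation to obtain the static characteristic.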
Abstract:
This work deals with some classes of linear second order partial differential operators with non-negative characteristic form and underlying non-Euclidean structures. These structures are determined by families of locally Lipschitz-continuous vector fields in R^N, generating metric spaces of Carnot-Carathéodory type. The Carnot-Carathéodory metric related to a family {X_j}, j=1,...,m, is the control distance obtained by minimizing the time needed to go from one point to another along piecewise trajectories of the vector fields. We are mainly interested in the cases in which a Sobolev-type inequality holds with respect to the X-gradient, and/or the X-control distance is doubling with respect to the Lebesgue measure in R^N. This study is divided into three parts (each corresponding to a chapter), and the subject of each one is a class of operators that includes the class of the subsequent one. In the first chapter, after recalling “X-ellipticity” and related concepts introduced by Kogoj and Lanconelli in [KL00], we show a Maximum Principle for linear second order differential operators for which we only assume a Sobolev-type inequality together with a summability condition on the lower-order terms. Adding some crucial hypotheses on the measure and on the vector fields (doubling property and Poincaré inequality), we are able to obtain some Liouville-type results. This chapter is based on the paper [GL03] by Gutiérrez and Lanconelli. In the second chapter we treat some ultraparabolic equations on Lie groups. In this case R^N is the support of a Lie group, and moreover we require that the vector fields satisfy left invariance. After recalling some results of Cinti [Cin07] about this class of operators and the associated potential theory, we prove a scalar convexity property for mean-value operators of L-subharmonic functions, where L is our differential operator.
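Under the standard definition (a sketch with assumed notation, not taken verbatim from the thesis), the control distance associated with the family X_1, ..., X_m can be written as:

```latex
\[
d(x,y) \;=\; \inf \Bigl\{\, T > 0 \;:\; \exists\, \gamma : [0,T] \to \mathbb{R}^N,\
\gamma(0) = x,\ \gamma(T) = y,\
\dot{\gamma}(t) = \sum_{j=1}^{m} a_j(t)\, X_j\bigl(\gamma(t)\bigr),\
\sum_{j=1}^{m} a_j(t)^2 \le 1 \,\Bigr\}
\]
```

Here the a_j are measurable controls; the constraint on their Euclidean norm is what makes d(x,y) the minimal travel time along admissible trajectories.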
In the third chapter we prove a necessary and sufficient condition of regularity of boundary points for the Dirichlet problem on an open subset of R^N related to a sub-Laplacian. On a Carnot group we give the essential background for this type of operator, and introduce the notion of “quasi-boundedness”. Then we show the strict relationship between this notion, the fundamental solution of the given operator, and the regularity of the boundary points.
Abstract:
Mycotoxins are contaminants of agricultural products both in the field and during storage, and can enter the food chain through contaminated cereals and foods (milk, meat, and eggs) obtained from animals fed mycotoxin-contaminated feeds. Mycotoxins are genotoxic carcinogens that cause health and economic problems. Ochratoxin A (OTA) and fumonisin B1 (FB1) were classified by the International Agency for Research on Cancer in 1993 as “possibly carcinogenic to humans” (class 2B). To control mycotoxin-induced damage, different strategies have been developed to reduce the growth of mycotoxigenic fungi as well as to decontaminate and/or detoxify mycotoxin-contaminated foods and animal feeds. The critical points targeted by these strategies are: prevention of mycotoxin contamination, detoxification of mycotoxins already present in food and feed, inhibition of mycotoxin absorption in the gastrointestinal tract, and reduction of mycotoxin-induced damage when absorption occurs. Decontamination processes, as indicated by FAO, must meet the following requisites to reduce the toxic and economic impact of mycotoxins: they must destroy, inactivate, or remove mycotoxins; they must not produce or leave toxic and/or carcinogenic/mutagenic residues in the final products or in food products obtained from animals fed decontaminated feed; they must be capable of destroying fungal spores and mycelium in order to avoid mycotoxin formation under favorable conditions; they should not adversely affect desirable physical and sensory properties of the feedstuff; and they must be technically and economically feasible. One important approach to the prevention of mycotoxicosis in livestock is the addition to the diet of non-nutritional adsorbents that bind mycotoxins, preventing their absorption in the gastrointestinal tract. Activated carbons, hydrated sodium calcium aluminosilicate (HSCAS), zeolites, bentonites, and certain clays are the most studied adsorbents, and they possess a high affinity for mycotoxins.
In recent years, there has been increasing interest in the hypothesis that the absorption of mycotoxins from consumed food can be inhibited by microorganisms in the gastrointestinal tract. Numerous investigators have shown that some dairy strains of lactic acid bacteria (LAB) and bifidobacteria are able to bind aflatoxins effectively. There is a strong need for prevention of mycotoxin-induced damage once the toxin is ingested. Nutritional approaches, such as supplementation of nutrients, food components, or additives with protective effects against mycotoxin toxicity, are attracting increasing interest. Since mycotoxins are known to produce damage by increasing oxidative stress, the protective properties of antioxidant substances have been extensively investigated. The purpose of the present study was to investigate, in vitro and in vivo, strategies to counteract the mycotoxin threat, particularly in swine husbandry. The Ussing chamber technique was applied in the present study, for the first time, to investigate in vitro the permeability of OTA and FB1 through rat intestinal mucosa. Results showed that OTA and FB1 were not absorbed through the rat small intestinal mucosa. Since in vivo absorption of both mycotoxins normally occurs, it is evident that under these experimental conditions Ussing diffusion chambers were not able to assess the intestinal permeability of OTA and FB1. A large number of LAB strains isolated from feces and different gastrointestinal tract regions of pigs and poultry were screened for their ability to remove OTA, FB1, and deoxynivalenol (DON) from bacterial medium. The results of this in vitro study showed low efficacy of the isolated LAB strains in removing OTA, FB1, and DON from bacterial medium. An in vivo trial in rats was performed to evaluate the effects of in-feed supplementation of a LAB strain, Pediococcus pentosaceus FBB61, to counteract the toxic effects induced by exposure to OTA-contaminated diets. The study allows the conclusion that feed supplementation with P.
pentosaceus FBB61 ameliorates the oxidative status of the liver and lowers OTA-induced oxidative damage in liver and kidney when the diet is contaminated by OTA. This feature of P. pentosaceus FBB61, joined to its bactericidal activity against Gram-positive bacteria and its ability to modulate the gut microflora balance in pigs, encourages additional in vivo experiments in order to better understand the potential role of P. pentosaceus FBB61 as a probiotic for farm animals and humans. In the present study, an in vivo trial on weaned piglets fed FB1 allows the conclusion that feeding 7.32 ppm of FB1 for 6 weeks did not impair growth performance. Deoxynivalenol contamination of feeds was evaluated in an in vivo trial on weaned piglets. The comparison between the growth parameters of piglets fed a DON-contaminated diet and a contaminated diet supplemented with the commercial product did not reach the significance level, but piglet growth performance was numerically improved when the commercial product was added to the DON-contaminated diet. Further studies are needed to improve knowledge of mycotoxin intestinal absorption, the mechanisms of their detoxification in feeds and foods, and nutritional strategies to reduce mycotoxin-induced damage in animals and humans. A multifactorial approach acting on each of the various steps could be a promising strategy to counteract mycotoxin damage.
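In Ussing-chamber permeability work, results are commonly summarised as an apparent permeability coefficient, Papp = (dQ/dt)/(A·C0), i.e. the steady-state flux to the receiver side normalised by exposed tissue area and initial donor concentration. The sketch below uses this standard formula with purely illustrative numbers, not measurements from the thesis.

```python
# Apparent permeability (Papp) as commonly computed from Ussing-chamber
# data: Papp = (dQ/dt) / (A * C0). All numbers are illustrative.

dQ_dt = 1.5e-12       # assumed steady-state flux to receiver side, mol/s
area = 1.0e-4         # assumed exposed tissue area, m^2 (1 cm^2)
c0 = 1.0e-3           # assumed initial donor concentration, mol/m^3

papp = dQ_dt / (area * c0)   # apparent permeability, m/s
print(f"Papp = {papp:.2e} m/s")
```

A compound that is not absorbed, as observed here for OTA and FB1, would show a receiver-side flux, and hence a Papp, indistinguishable from zero.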
Abstract:
The present study carried out an analysis of rural landscape changes. In particular, the study focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A bibliographic review reveals a general lack of studies dealing with the modelling of the rural built environment; hence a theoretical modelling approach for this purpose is needed. Advancements in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization has determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in the number of buildings. It is the specific aim of the present study to propose a methodology for the development of a spatial model that allows the identification of the driving forces that acted on building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of several conceptual steps covering different aspects of the development of a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for the collection of data, the most suitable algorithm to be adopted in relation to the statistical theory and method used, and the calibration and evaluation of the model.
A different combination of factors in various parts of the territory generated conditions that were more or less favourable for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents unsuitable for building allocation. The presence or absence of buildings can thus be adopted as an indicator of these driving conditions, since it expresses the action of driving forces in the land-suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamic and insight into their level of influence on the process. GIS software, by means of spatial analysis tools, makes it possible to associate presence and absence with point features, generating a point process. The presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. Presence points represent the locations of real existing buildings; absence points represent locations where no buildings exist, and are generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for explanatory variable analysis and for identifying the key driving variables behind the site selection process for new building allocation. The model developed by following this methodology is applied to a case study to test the validity of the methodology. The study area chosen for the test is the New District of Imola, characterized by a prevailing agricultural production vocation and by intense transformation dynamics.
The development of the model involved the identification of predictive variables (related to the geomorphological, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The model was calibrated with spatial data covering the periurban and rural parts of the study area within the 1975-2005 time period, by means of a generalised linear model. The output of the model fit is a continuous grid surface whose cells assume values ranging from 0 to 1, expressing the probability of building occurrence across the rural and periurban parts of the study area. The response variable thus assesses the changes in the rural built environment that occurred in this time interval, and it is correlated with the selected explanatory variables by means of a generalised linear model using logistic regression. Comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretive capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends in other study areas, and over different time intervals, depending on the availability of data. The use of suitable data in terms of time, information, and spatial resolution, and the costs related to data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short- to medium-range future scenarios for the distribution of the rural built environment in the study area. Predicting future scenarios requires assuming that the driving forces do not change and that their levels of influence within the model remain close to those assessed for the calibration time interval.
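As an illustration of the kind of presence/absence model described above, the following minimal sketch fits a logistic regression by gradient descent on synthetic data. The predictor names (terrain slope, distance to road) and all values are hypothetical; a real implementation would use GIS-derived covariates and a statistical package rather than this hand-rolled fit.

```python
import math
import random

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def fit_logistic(X, y, lr=0.02, epochs=5000):
    """Fit logistic regression by batch gradient descent; w[0] is the intercept."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        for j in range(len(w)):
            w[j] -= lr * grad[j] / len(X)
    return w

def predict(w, xi):
    """Probability of building presence at a location with covariates xi."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

# Synthetic sampling points: [terrain slope (deg), distance to road (km)].
# Presences cluster on flat, accessible land; absences on steep, remote land.
random.seed(42)
presences = [[random.uniform(0, 5), random.uniform(0, 1)] for _ in range(40)]
absences = [[random.uniform(8, 20), random.uniform(2, 5)] for _ in range(40)]
w = fit_logistic(presences + absences, [1] * 40 + [0] * 40)

p_flat_near = predict(w, [1.0, 0.2])   # flat cell near a road
p_steep_far = predict(w, [15.0, 4.0])  # steep cell far from roads
```

Evaluating `predict` on every cell of a regular grid would yield exactly the kind of continuous 0-1 probability surface the abstract describes.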
Abstract:
Two analytical models are proposed to describe two different mechanisms of lava tube formation. A first model describes the development of a solid crust in the central region of the channel and the formation of a tube when the crust widens until it reaches the levées. The Newtonian assumption is adopted, and the steady-state Navier-Stokes equation in a rectangular conduit is solved. A constant heat flux density assigned at the upper flow surface summarises the combined effects of two thermal processes: radiation and convection into the atmosphere. Advective terms are also included, by introducing velocity into the expression for temperature. Velocity is calculated as an average value over the channel width, so lateral variations of temperature are neglected. As the upper flow surface cools, a solid layer develops, described as a plastic body with a resistance to shear deformation. If the applied shear stress exceeds this resistance, the crust breaks; otherwise, solid fragments present at the flow surface can weld together to form a continuous roof, as happens in the sidewall flow regions. Variations of channel width, ground slope and effusion rate are analyzed as parameters that strongly affect the shear stress values. Crust growth is favored when the channel widens, and tube formation is possible when the ground slope or the effusion rate decreases. The results compare successfully with data obtained from the analysis of pictures of actual flows. The second model describes the formation of a stable, well-defined crust along both channel sides, its growth towards the center, and its welding to form the tube roof. The fluid motion is described as in the first model. The thermal budget takes conduction into the atmosphere into account, and advection is included by considering the velocity as depending on both depth and channel width. The solidified crust has a non-uniform thickness along the channel width.
Stresses acting on the crust are calculated using the equations of an elastic thin plate pinned at its ends. The model allows calculation of the distance at which the crust thickness is able to resist the drag of the underlying fluid and to sustain its own weight, so that the level of the fluid can drop below the tube roof. Viscosity and thermal conductivity have been investigated experimentally with a rotational viscometer. Analysis of samples from Mount Etna (2002) gave the following results: the fluid is Newtonian, and the thermal conductivity is constant over a range of temperatures above the liquidus. At lower temperatures the fluid becomes non-homogeneous, and the experimental techniques used cannot detect these properties, because the measurements are not reproducible.
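A back-of-the-envelope sketch of the crust-failure criterion in the first model (the crust breaks where the flow-applied shear stress exceeds its resistance) can be written as follows. The density and strength values are assumed for illustration only, and the thin-sheet stress formula is a simplification: the actual model obtains the stress from the full steady-state Navier-Stokes solution in a rectangular conduit.

```python
import math

RHO = 2600.0   # assumed basaltic lava density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def applied_shear_stress(depth_m, slope_deg):
    """Shear stress (Pa) exerted by a gravity-driven flow of given depth,
    tau = rho * g * h * sin(alpha) -- a thin-sheet approximation."""
    return RHO * G * depth_m * math.sin(math.radians(slope_deg))

def crust_holds(depth_m, slope_deg, crust_strength_pa):
    """Surface fragments can weld into a continuous roof only while the
    applied stress stays below the crust's resistance to shear deformation."""
    return applied_shear_stress(depth_m, slope_deg) < crust_strength_pa

# A 2 m deep flow with a hypothetical crust strength of 5 kPa:
gentle = crust_holds(2.0, 1.0, 5.0e3)   # ~0.9 kPa applied: roof can form
steep = crust_holds(2.0, 10.0, 5.0e3)   # ~8.9 kPa applied: crust breaks
```

Even this crude criterion reproduces the qualitative conclusion above: tube formation becomes possible when the ground slope decreases.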
Abstract:
Until recently the debate on the ontology of spacetime had only philosophical significance, since, from a physical point of view, General Relativity had been made "immune" to the consequences of the "Hole Argument" simply by reducing the matter to the assertion that solutions of the Einstein equations which are mathematically different but related by an active diffeomorphism are physically equivalent. From a technical point of view, the natural reading of the consequences of the "Hole Argument" has always been to go further and say that the mathematical representation of spacetime in General Relativity inevitably contains a "superfluous structure" brought to light by the gauge freedom of the theory. This apparent split between the philosophical outcome and the physical one has been corrected thanks to a meticulous and complicated formal analysis of the theory in a fundamental recent (2006) work by Luca Lusanna and Massimo Pauri entitled "Explaining Leibniz equivalence as difference of non-inertial appearances: dis-solution of the Hole Argument and physical individuation of point-events". The main result of this article is to have shown how, from a physical point of view, the point-events of Einstein's empty spacetime, in a particular class of models they consider, are literally identifiable with the autonomous degrees of freedom of the gravitational field (the Dirac observables, DO). In the light of philosophical considerations based on realist assumptions about theories and entities, the two authors conclude that spacetime point-events have a degree of "weak objectivity", since, depending on a NIF (non-inertial frame), and unlike the points of homogeneous Newtonian space, they are immersed in a rich and complex non-local holistic structure provided by the "ontic part" of the metric field.
Therefore, given the complex structure of spacetime that General Relativity highlights, and within the declared limits of a methodology based on a Galilean scientific representation, we can certainly assert that spacetime has "elements of reality"; but the inevitably relational elements involved in the physical detection of point-events in the absence of matter (highlighted by the "ontic part" of the metric field, the DO) depend closely on the choice of the global spatiotemporal laboratory in which the dynamics is expressed (the NIF). According to the two authors, a peculiar kind of structuralism takes shape: point structuralism, with features common both to the absolutist and substantivalist tradition and to the relationalist one. The intention of this thesis is to propose a method of approaching the problem that is, at least initially, independent of the previous ones: an approach based on the possibility of describing the gravitational field at three distinct levels. In other words, keeping in mind the results achieved by Lusanna and Pauri and following their underlying philosophical assumptions, we intend to converge partially with their structuralist approach, but starting from what we believe is the "foundational peculiarity" of General Relativity, the characteristic inherent in the elements that constitute its formal structure: its essentially geometric nature as a theory, considered apart from the empirical necessity of measurement theory. Observing General Relativity from this perspective, we find a "triple modality" for describing the gravitational field, based essentially on a geometric interpretation of the spacetime structure.
The gravitational field is now "visible" no longer in terms of its autonomous degrees of freedom (the DO), which in fact have no tensorial and therefore no geometric nature, but is analyzable through three levels: a first one, called the potential level (which the theory identifies with the components of the metric tensor); a second one, the connection level (whose elements determine, in the theory, the forces acting on masses and, as such, offer a level of description analogous to the one Newtonian gravitation provides in terms of components of the gravitational field); and finally a third level, that of the Riemann tensor, which is peculiar to General Relativity alone. Focusing from the beginning on this "third level" seems to present an immediate advantage: it leads directly to a description of spacetime properties in terms of gauge-invariant quantities, which allows one to "short-circuit" the long path that, in the treatises analyzed, leads to identifying the "ontic part" of the metric field. It is then shown how, at this last level, it is possible to establish a "primitive level of objectivity" of spacetime in terms of the effects that matter exercises over extended domains of the spacetime geometric structure; these effects are described by invariants of the Riemann tensor, in particular of its irreducible part: the Weyl tensor. The convergence with Lusanna and Pauri's affirmation of the existence of a holistic, non-local and relational structure on which the quantitatively identified properties of point-events depend (in addition to their intrinsic detection), even if reached through different considerations, is realized, in our opinion, in the crucial role assigned to the degree of spacetime curvature defined by the Weyl tensor, even in the case of empty spacetimes (as in the analysis conducted by Lusanna and Pauri).
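For reference, the decomposition this "third level" relies on is the standard splitting of the Riemann tensor into its trace parts (Ricci tensor and scalar) and its trace-free irreducible part, the Weyl tensor $C_{abcd}$. In four dimensions, with a common sign convention, it reads:

```latex
R_{abcd} = C_{abcd}
  + \frac{1}{2}\bigl(g_{ac}R_{bd} - g_{ad}R_{bc} + g_{bd}R_{ac} - g_{bc}R_{ad}\bigr)
  - \frac{R}{6}\bigl(g_{ac}g_{bd} - g_{ad}g_{bc}\bigr)
```

In vacuum the Einstein equations give $R_{ab} = 0$, so $R_{abcd} = C_{abcd}$: in empty spacetime the entire curvature is Weyl curvature, which is why the Weyl tensor can encode the effects of matter located elsewhere.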
In the end, matter, regarded as the physical counterpart of spacetime curvature whose expression is the Weyl tensor, changes the value of this tensor even in spacetimes without matter. In this way, returning to the approach of Lusanna and Pauri, it affects the evolution of the DO and, consequently, the physical identification of point-events (as our authors claim). In conclusion, we think it is possible to see the holistic, relational, and non-local structure of spacetime also through the "behavior" of the Weyl tensor as a part of the Riemann tensor. This "behavior", which leads to geometric effects of curvature, is characterized from the beginning by the fact that it concerns extended domains of the manifold (although it should be pointed out that the values of the Weyl tensor change from point to point), by virtue of the fact that matter acts at indefinitely large distances. Finally, we think that the characteristic relationality of the spacetime structure should be identified in this "primitive level of organization" of spacetime.
Abstract:
The modern stratigraphy of clastic continental margins is the result of the interaction of several geological processes acting on different time scales, among which sea-level oscillations, sediment supply fluctuations and local tectonics are the main mechanisms. During the past three years my PhD focused on understanding the impact of each of these processes on the deposition of the central and northern Adriatic sedimentary successions, with the aim of reconstructing and quantifying the Late Quaternary eustatic fluctuations. In the last few decades, several authors have tried to quantify past eustatic fluctuations through the analysis of direct sea-level indicators, such as drowned barrier-island deposits or coral reefs, or through indirect methods, such as oxygen isotope ratios (δ18O) or modeling simulations. Sea-level curves obtained from direct sea-level indicators record a composite signal, formed by the contribution of global eustatic change and of regional factors such as tectonic processes or glacial-isostatic rebound effects: the eustatic signal must be obtained by removing the contribution of these other mechanisms. To obtain the most realistic sea-level reconstructions it is important to quantify the tectonic regime of the central Adriatic margin. This result has been achieved by integrating a numerical approach with the analysis of high-resolution seismic profiles. In detail, the subsidence trend obtained from the geohistory analysis and the backstripping of borehole PRAD1.2 (a 71 m continuous borehole drilled in 185 m of water depth, south of the Mid Adriatic Deep - MAD - during the European Project PROMESS 1, Profile Across Mediterranean Sedimentary Systems, Part 1) has been confirmed by the analysis of lowstand paleoshorelines and by the benthic foraminifera associations investigated through the borehole.
This work showed an evolution from an inner-shelf environment during Marine Isotope Stage (MIS) 10 to upper-slope conditions during MIS 2. Once the tectonic regime of the central Adriatic margin had been constrained, it was possible to investigate the impact of sea-level and sediment supply fluctuations on the deposition of the Late Pleistocene-Holocene transgressive deposits. The Adriatic transgressive record (TST - Transgressive Systems Tract) is formed by three correlative sedimentary bodies deposited in less than 14 kyr since the Last Glacial Maximum (LGM): along the central Adriatic shelf and in the adjacent slope basin the TST is formed by marine units, while along the northern Adriatic shelf the TST is represented by coastal deposits in a backstepping configuration. The central Adriatic margin, characterized by a thick transgressive sedimentary succession, is the ideal site to investigate the impact of late Pleistocene climatic and eustatic fluctuations, among which Meltwater Pulses 1A and 1B and the Younger Dryas cold event. The central Adriatic TST is formed by a tripartite deposit bounded by two regional unconformities. In particular, the middle TST unit includes two prograding wedges, deposited in the interval between the two Meltwater Pulse events, as highlighted by several 14C age estimates, and likely recorded the Younger Dryas cold interval.
Modeling simulations, obtained with the two coupled models HydroTrend 3.0 and 2D-Sedflux 1.0C (developed by the Community Surface Dynamics Modeling System - CSDMS) and integrated with the analysis of high-resolution seismic profiles and core samples, indicate that: 1 - the prograding middle TST unit, deposited during the Younger Dryas, was formed as a consequence of an increase in sediment flux, likely connected to a decline in vegetation cover in the catchment area due to the establishment of cold, arid conditions; 2 - the two-stage prograding geometry was the consequence of a sea-level still-stand (or possibly a fall) during the Younger Dryas event. The northern Adriatic margin, characterized by a broad and gentle shelf (350 km wide, dipping at a low angle of 0.02° to the SE), is the ideal site to quantify the timing of each step of the post-LGM sea-level rise. The modern shelf is characterized by sandy deposits of barrier-island systems in a backstepping configuration, showing younger ages at progressively shallower depths, which record the step-wise nature of the last sea-level rise. The age-depth model, obtained from dated samples of basal peat layers, is in good agreement with previously published sea-level curves and highlights the post-glacial eustatic trend. The interval corresponding to the Younger Dryas cold reversal, instead, is more complex: two coeval coastal deposits characterize the northern Adriatic shelf at very different water depths. Several explanations and different models can be attempted to explain this conundrum, but the problem remains unsolved.
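The age-depth relationship mentioned above can be illustrated with a minimal sketch: given dated basal-peat samples (all depths and ages below are hypothetical), piecewise-linear interpolation yields an age for any intermediate depth. A real reconstruction would also require radiocarbon calibration and corrections for compaction and subsidence (e.g. by backstripping).

```python
def age_at_depth(samples, depth):
    """Piecewise-linear interpolation of age (cal yr BP) at a given depth
    (m below present sea level), from (depth, age) pairs."""
    pts = sorted(samples)  # sort by increasing depth
    if not pts[0][0] <= depth <= pts[-1][0]:
        raise ValueError("depth outside the dated interval")
    for (d0, a0), (d1, a1) in zip(pts, pts[1:]):
        if d0 <= depth <= d1:
            return a0 + (depth - d0) / (d1 - d0) * (a1 - a0)

# Hypothetical dated basal-peat samples: (depth in m, age in cal yr BP).
# Younger ages at progressively shallower depths record the step-wise rise.
samples = [(40.0, 9800.0), (25.0, 8900.0), (15.0, 7800.0), (8.0, 6900.0)]
age_20m = age_at_depth(samples, 20.0)
```

Inverting the same table (depth as a function of age) gives the sea-level curve against which the Younger Dryas interval can be compared.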
Abstract:
This work is concerned with the increasing relationship between two distinct multidisciplinary research fields, Semantic Web technologies and scholarly publishing, which in this context converge into one precise research topic: Semantic Publishing. In the spirit of the original aim of Semantic Publishing, i.e. the improvement of scientific communication by means of semantic technologies, this thesis proposes theories, formalisms and applications for opening up semantic publishing to an effective interaction between scholarly documents (e.g., journal articles) and their related semantic and formal descriptions. The main aim of this work is to increase users' comprehension of documents and to allow document enrichment, discovery and linkage to document-related resources and contexts, such as other articles and raw scientific data. To achieve these goals, this thesis investigates and proposes solutions for three of the main issues that semantic publishing promises to address, namely: the need for tools linking document text to a formal representation of its meaning, the lack of complete metadata schemas for describing documents according to the publishing vocabulary, and the absence of effective user interfaces for easily acting on semantic publishing models and theories.
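As a minimal illustration of the first issue (linking document text to a formal representation of its meaning), the sketch below attaches a stand-off annotation to a character span. The sentence, offsets, and cited-article IRI are invented for the example; the `cito:cites` property is taken from the real CiTO (Citation Typing Ontology) vocabulary, and this tiny data structure stands in for what a full RDF-based tool would express.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpanAnnotation:
    """Stand-off annotation: links a text span to a formal statement."""
    start: int          # character offset where the span begins
    end: int            # character offset where the span ends (exclusive)
    property_iri: str   # relation the span expresses
    target_iri: str     # resource the span points to

text = "Our approach builds on the dataset described in [5]."
ann = SpanAnnotation(
    start=48, end=51,
    property_iri="http://purl.org/spar/cito/cites",  # real CiTO property
    target_iri="http://example.org/articles/ref-5",  # hypothetical target
)
cited_span = text[ann.start:ann.end]
```

Keeping the annotation outside the text (stand-off, by offsets) leaves the document itself untouched while still allowing enrichment, discovery, and linkage to the cited resource.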